Error Java SDK Issue??
by Geschwentner, Patrick
Dear Ladies and Gentlemen!
I am currently working with the java-sdk and I encountered a problem.
When I try to retrieve the disk details, I get an error on the following line:
Disk currDisk = ovirtConnection.followLink(diskAttachment.disk());
[inline screenshot of the failing line]
The getResponse looks quite OK (I inspected it: [inline screenshot]).
Error:
wrong number of arguments
The code is quite similar to what you published on GitHub (https://github.com/oVirt/ovirt-engine-sdk-java/blob/master/sdk/src/test/j... ).
Can you confirm the defect?
Best regards
Patrick
3 years, 8 months
OST's basic suite UI sanity tests optimization
by Marcin Sobczyk
Hi,
_TL;DR_ Let's cut the running time of '008_basic_ui_sanity.py' by more
than 3 minutes by sacrificing Firefox and Chrome screenshots in favor of
Chromium.
During the OST hackathon in Brno this year, I saw an opportunity to
optimize the basic UI sanity tests from the basic suite.
The way we currently run them is by setting up a Selenium grid using 3
docker containers, with a dedicated network... that's insanity! (pun
intended).
Let's take a look at the running time of the '008_basic_ui_sanity.py' scenario
(https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-te...):
01:31:50 @ Run test: 008_basic_ui_sanity.py:
01:31:50 nose.config: INFO: Ignoring files matching ['^\\.', '^_',
'^setup\\.py$']
01:31:50 # init:
01:31:50 # init: Success (in 0:00:00)
01:31:50 # start_grid:
01:34:05 # start_grid: Success (in 0:02:15)
01:34:05 # initialize_chrome:
01:34:18 # initialize_chrome: Success (in 0:00:13)
01:34:18 # login:
01:34:27 # login: Success (in 0:00:08)
01:34:27 # left_nav:
01:34:45 # left_nav: Success (in 0:00:18)
01:34:45 # close_driver:
01:34:46 # close_driver: Success (in 0:00:00)
01:34:46 # initialize_firefox:
01:35:02 # initialize_firefox: Success (in 0:00:16)
01:35:02 # login:
01:35:11 # login: Success (in 0:00:08)
01:35:11 # left_nav:
01:35:29 # left_nav: Success (in 0:00:18)
01:35:29 # cleanup:
01:35:36 # cleanup: Success (in 0:00:06)
01:35:36 # Results located at
/dev/shm/ost/deployment-basic-suite-master/008_basic_ui_sanity.py.junit.xml
01:35:36 @ Run test: 008_basic_ui_sanity.py: Success (in 0:03:45)
Starting the Selenium grid takes 2:15 out of the 3:45 total running time!
I've investigated a lot of approaches and came up with something like this:
* install 'chromium-headless' package on engine VM
* download 'chromedriver' and 'selenium hub' jar and deploy them in
'/var/opt/' on engine's VM
* run 'selenium.jar' on engine VM from '008_basic_ui_sanity.py' by
using Lago's ssh
* connect to the Selenium instance running on the engine in
'008_basic_ui_sanity.py'
* take the screenshots (see the sketch below)
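The screenshot step itself boils down to pointing a remote webdriver at
the hub running on the engine VM. A minimal sketch (the hub address is
the one visible in the log below; the page URL and file name are just
placeholders, the real code is in '008_basic_ui_sanity.py'):

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

HUB_URL = 'http://192.168.201.4:4444/wd/hub'  # Selenium hub on the engine VM

# Connect to the remote hub instead of a local browser binary.
driver = webdriver.Remote(
    command_executor=HUB_URL,
    desired_capabilities=DesiredCapabilities.CHROME,
)
try:
    driver.set_window_size(1600, 900)
    driver.get('https://engine/ovirt-engine')  # placeholder URL
    driver.save_screenshot('welcome_page.png')
finally:
    driver.quit()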
This series of patches represents the changes:
https://gerrit.ovirt.org/#/q/topic:selenium-on-engine+(status:open+OR+sta....
This is the new running time (https://jenkins.ovirt.org/view/oVirt
system tests/job/ovirt-system-tests_manual/4195/):
20:13:26 @ Run test: 008_basic_ui_sanity.py:
20:13:26 nose.config: INFO: Ignoring files matching ['^\\.', '^_',
'^setup\\.py$']
20:13:26 # init:
20:13:26 # init: Success (in 0:00:00)
20:13:26 # make_screenshots:
20:13:27 * Retrying (Retry(total=2, connect=None, read=None,
redirect=None, status=None)) after connection broken by
'NewConnectionError('<urllib3.connection.HTTPConnection object at
0x7fdb6004f8d0>: Failed to establish a new connection: [Errno 111]
Connection refused',)': /wd/hub
20:13:27 * Retrying (Retry(total=1, connect=None, read=None,
redirect=None, status=None)) after connection broken by
'NewConnectionError('<urllib3.connection.HTTPConnection object at
0x7fdb6004fa10>: Failed to establish a new connection: [Errno 111]
Connection refused',)': /wd/hub
20:13:27 * Retrying (Retry(total=0, connect=None, read=None,
redirect=None, status=None)) after connection broken by
'NewConnectionError('<urllib3.connection.HTTPConnection object at
0x7fdb6004fb50>: Failed to establish a new connection: [Errno 111]
Connection refused',)': /wd/hub
20:13:28 * Redirecting http://192.168.201.4:4444/wd/hub ->
http://192.168.201.4:4444/wd/hub/static/resource/hub.html
20:14:02 # make_screenshots: Success (in 0:00:35)
20:14:02 # Results located at
/dev/shm/ost/deployment-basic-suite-master/008_basic_ui_sanity.py.junit.xml
20:14:02 @ Run test: 008_basic_ui_sanity.py: Success (in 0:00:35)
(The 'NewConnectionError' retries come from urllib3 while the Selenium
hub is still starting up; I can silence these later.)
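One way to get rid of them would be to wait for the hub explicitly
before creating the driver. A sketch, assuming the grid's standard
'/status' endpoint and using 'requests' (the timeout values are
arbitrary):

import time
import requests

def wait_for_selenium_hub(hub_url, timeout=60):
    # Poll the hub until it answers, instead of letting urllib3
    # retry and log NewConnectionError.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            if requests.get(hub_url + '/status', timeout=5).status_code == 200:
                return
        except requests.ConnectionError:
            pass
        time.sleep(1)
    raise RuntimeError('Selenium hub not up after %s seconds' % timeout)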
And the screenshots are here:
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-te...
_The pros:_
* we cut the running time by more than 3 minutes
_The cons:_
* we don't get Firefox or Chrome screenshots - we get Chromium
screenshots (although, AFAIK, QE has many more Selenium tests, which
cover both Firefox and Chrome)
* we pollute the engine VM with the 'chromium-headless' package and its
dependencies (in total: 'chromium-headless', 'chromium-common',
'flac-libs' and 'minizip'), although we can remove these after the tests
_Some design choices explained:_
Q: Why engine VM?
A: Because the engine VM already has 'X11' libs. We could install
'chromium-headless' (and even other browsers) on our Jenkins executors,
but that would mess them up a lot.
Q: Why Chromium?
A: Because it has a separate 'headless' package.
Q: Why not use the 'chromedriver' RPM instead of the builds from
https://chromedriver.storage.googleapis.com?
A: Because the RPM version pulls in a lot of extra dependencies, even on
the engine VM ('gtk3', 'cairo', etc.). The builds from that URL are the
official Google Chromedriver builds; they contain a single binary, and
they work for us.
_What still needs to be polished with the patches:_
* Currently the 'setup_engine_selenium.sh' script downloads
'selenium.jar' and 'chromedriver.zip' each time (even with these
downloads we get much faster set-up times) - we should bake these
into the engine VM image template.
* The 'selenium_hub_running' function in 'selenium_on_engine.py' is
hackish - the ability to run an ssh command within a context manager
(auto-terminating it on exit) should be part of Lago. It can be
refactored; see the sketch below.
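A rough sketch of what such a helper could look like (note: 'ssh_popen'
is a hypothetical API - nothing like it exists in Lago today, which is
exactly the problem):

from contextlib import contextmanager

@contextmanager
def long_running_ssh(host, command):
    # Hypothetical: start 'command' on 'host' over ssh and keep it
    # running for the duration of the 'with' block.
    process = host.ssh_popen(command)
    try:
        yield process
    finally:
        # Auto-terminate when the block exits, even on test failure.
        process.terminate()
        process.wait()

Usage would then be as simple as:

with long_running_ssh(engine, ['java', '-jar', '/var/opt/selenium.jar']):
    make_screenshots()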
Questions, comments, reviews are welcome.
Regards, Marcin
5 years, 4 months
[ OST Failure Report ] [ oVirt 4.3 (vdsm) ] [ 22-03-2019 ] [ 002_bootstrap.add_master_storage_domain ]
by Dafna Ron
Hi,
We are failing branch 4.3 for test: 002_bootstrap.add_master_storage_domain
It seems that on one of the hosts, vdsm is not starting.
There is nothing in vdsm.log or in supervdsm.log.
CQ identified this patch as the suspected root cause:
https://gerrit.ovirt.org/#/c/98748/ - vdsm: client: Add support for flow id
Milan, Marcin, can you please have a look?
full logs:
http://jenkins.ovirt.org/job/ovirt-4.3_change-queue-tester/326/artifact/b...
The only error I can see is about the host not being up (which makes
sense, as vdsm is not running).
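For reference, the check that raises this error presumably does
something like the following (a sketch reconstructed from the traceback
below; the real code is in 002_bootstrap.py, and the exact search
syntax is an assumption):

def _hosts_in_dc(api, dc_name, random_host=False):
    # With vdsm down on the host, the 'status=up' search comes back
    # empty and the test fails here.
    hosts_service = api.system_service().hosts_service()
    hosts = hosts_service.list(
        search='datacenter={} AND status=up'.format(dc_name))
    if not hosts:
        raise RuntimeError(
            'Could not find hosts that are up in DC %s' % dc_name)
    return hosts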
Stacktrace
File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
testMethod()
File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
142, in wrapped_test
test()
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
60, in wrapper
return func(get_test_prefix(), *args, **kwargs)
File "/home/jenkins/workspace/ovirt-4.3_change-queue-tester/ovirt-system-tests/basic-suite-4.3/test-scenarios/002_bootstrap.py",
line 417, in add_master_storage_domain
add_iscsi_storage_domain(prefix)
File "/home/jenkins/workspace/ovirt-4.3_change-queue-tester/ovirt-system-tests/basic-suite-4.3/test-scenarios/002_bootstrap.py",
line 561, in add_iscsi_storage_domain
host=_random_host_from_dc(api, DC_NAME),
File "/home/jenkins/workspace/ovirt-4.3_change-queue-tester/ovirt-system-tests/basic-suite-4.3/test-scenarios/002_bootstrap.py",
line 122, in _random_host_from_dc
return _hosts_in_dc(api, dc_name, True)
File "/home/jenkins/workspace/ovirt-4.3_change-queue-tester/ovirt-system-tests/basic-suite-4.3/test-scenarios/002_bootstrap.py",
line 119, in _hosts_in_dc
raise RuntimeError('Could not find hosts that are up in DC %s' % dc_name)
'Could not find hosts that are up in DC test-dc\n--------------------
>> begin captured logging << --------------------\nlago.ssh: DEBUG:
start task:937bdea7-a2a3-47ad-9383-36647ea37ddf:Get ssh client for
lago-basic-suite-4-3-engine:\nlago.ssh: DEBUG: end
task:937bdea7-a2a3-47ad-9383-36647ea37ddf:Get ssh client for
lago-basic-suite-4-3-engine:\nlago.ssh: DEBUG: Running c07b5ee2 on
lago-basic-suite-4-3-engine: cat /root/multipath.txt\nlago.ssh: DEBUG:
Command c07b5ee2 on lago-basic-suite-4-3-engine returned with
0\nlago.ssh: DEBUG: Command c07b5ee2 on lago-basic-suite-4-3-engine
output:\n 3600140516f88cafa71243648ea218995\n360014053e28f60001764fed9978ec4b3\n360014059edc777770114a6484891dcf1\n36001405d93d8585a50d43a4ad0bd8d19\n36001405e31361631de14bcf87d43e55a\n\n-----------
5 years, 7 months
Python 3 vdsm RPM packages
by Marcin Sobczyk
Hi,
On yesterday's vdsm weekly call, we were discussing the need to make
Python 3 vdsm RPM packages.
Some facts:
- it doesn't make a lot of sense to spend much time on trying to package
everything - it's completely impossible, e.g., to run vdsm without
having the 'sanlock' module
- our current vdsm.spec file is crap
Two non-exclusive propositions were raised:
- let's try to make a quick-and-dirty patch that completely overwrites
the existing 'vdsm.spec' (effectively making it Python 3-only) for
testing purposes, and maintain it for a while
- in the meantime, let's write a completely new, clean and beautiful
spec file in a package-by-package, incremental manner (also Python
3-only) that would eventually substitute the original one
The quick-and-dirty spec file would be completely unsupported by CI. The
new one would get a proper CI sub-stage in the 'build-artifacts' stage.
The steps that need to be done are:
- prepare autotools/Makefiles to differentiate Python 2/Python 3 RPM builds
- prepare the new spec file (for now including only 'vdsm-common' package)
- split 'build-artifacts' stage into 'build-py27' and 'build-py36'
sub-stages (the latter currently running on fc28 only)
The only package we can start with, when making the new spec file, is
'vdsm-common', as it doesn't depend on anything else (or at least I hope
so...).
There were also propositions about how to change the new spec file in
regard to the old one (like making 'vdsm' package a meta-package). This
is a good time for these propositions to be raised, reviewed and
documented (something like this maybe?
https://docs.google.com/document/d/13EXN1Iwq-OPoc2A5Y3PJBpOiNC10ugx6eCE72...),
so we can align the new spec file as we build it.
I can lay the groundwork by doing the autotools/Makefiles work and the
'build-artifacts' splitting. Gal Zaidman agreed to start working on
the new spec file. Milan mentioned that he has something like the
quick-and-dirty patch; maybe he can share it with us.
Questions, comments are welcome.
Regards, Marcin
5 years, 8 months
[ OST Failure Report ] [ oVirt master (ovirt-engine) ] [ 22-03-2019 ] [004_basic_sanity.hotplug_cpu ]
by Dafna Ron
Hi,
We are failing ovirt-engine master on test 004_basic_sanity.hotplug_cpu.
Looking at the logs, we can see that, for some reason, libvirt reports a
VM as unresponsive, which fails the test.
CQ first failure was for patch:
https://gerrit.ovirt.org/#/c/98553/ - core: Add display="on" for mdevs, use
nodisplay to override
But I do not think this is the cause of failure.
Adding Marcin, Milan and Dan as well, as I think it may be network related.
You can see the libvirt log here:
https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/13516/arti...
you can see the full logs here:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/13516/artif...
Evgheni and I confirmed this is not an infra issue; the problem is the
ssh connection to the internal VM.
Thanks,
Dafna
error:
2019-03-22 15:08:22.658+0000: 22068: warning : qemuDomainObjTaint:7521
: Domain id=3 name='vm0' uuid=a9443d02-e054-40bb-8ea3-ae346e2d02a7 is
tainted: hook-script
2019-03-22 15:08:22.693+0000: 22068: error :
virProcessRunInMountNamespace:1159 : internal error: child reported:
unable to set security context 'system_u:object_r:virt_content_t:s0'
on '/rhev/data-center/mnt/blockSD/91d97292-9ac3-4d77-a152-c7ea3250b065/images/e60dae48-ecc7-4171-8bfe-42bfc2190ffd/40243c76-a384-4497-8a2d-792a5e10d510':
No such file or directory
2019-03-22 15:08:28.168+0000: 22070: error :
qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU
guest agent is not connected
2019-03-22 15:08:58.193+0000: 22070: error :
qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU
guest agent is not connected
2019-03-22 15:13:58.179+0000: 22071: error :
qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU
guest agent is not connected
5 years, 8 months
Failed to restart vdsmd due to kdump.service
by Eyal Shenitzky
Hi,
Has anyone ever encountered a failure to restart vdsmd due to the
following issue?
# journalctl -xe
Mar 28 14:46:46 dhcp-0-221.tlv.redhat.com kdumpctl[8926]: kexec: failed to
load kdump kernel
Mar 28 14:46:46 dhcp-0-221.tlv.redhat.com kdumpctl[8926]: Starting kdump:
[FAILED]
Mar 28 14:46:46 dhcp-0-221.tlv.redhat.com systemd[1]: kdump.service: Main
process exited, code=exited, status=1/FAILURE
Mar 28 14:46:46 dhcp-0-221.tlv.redhat.com systemd[1]: kdump.service: Failed
with result 'exit-code'.
Mar 28 14:46:46 dhcp-0-221.tlv.redhat.com systemd[1]: Failed to start Crash
recovery kernel arming.
-- Subject: Unit kdump.service has failed
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kdump.service has failed.
--
-- The result is failed.
Mar 28 14:46:46 dhcp-0-221.tlv.redhat.com audit[1]: SERVICE_START pid=1
uid=0 auid=4294967295 ses=4294967295 msg='unit=kdump comm="systemd"
exe="/usr/lib/systemd/systemd" >
Mar 28 14:47:15 dhcp-0-221.tlv.redhat.com sudo[9143]: root : problem
with defaults entries ; TTY=pts/0 ; PWD=/root/git/qemu/build ; USER=root ;
Mar 28 14:47:15 dhcp-0-221.tlv.redhat.com audit[9143]: USER_ACCT pid=9143
uid=0 auid=0 ses=1 msg='op=PAM:accounting grantors=pam_unix,pam_localuser
acct="root" exe="/usr/b>
Mar 28 14:47:15 dhcp-0-221.tlv.redhat.com sudo[9143]: root : TTY=pts/0
; PWD=/root/git/qemu/build ; USER=root ; COMMAND=/usr/bin/chcon
--reference=/usr/bin/qemu-xxx /u>
Mar 28 14:47:15 dhcp-0-221.tlv.redhat.com audit[9143]: USER_CMD pid=9143
uid=0 auid=0 ses=1 msg='cwd="/root/git/qemu/build"
cmd=6368636F6E202D2D7265666572656E63653D2F75737>
Mar 28 14:47:15 dhcp-0-221.tlv.redhat.com audit[9143]: CRED_REFR pid=9143
uid=0 auid=0 ses=1 msg='op=PAM:setcred grantors=pam_env,pam_fprintd
acct="root" exe="/usr/bin/sud>
Mar 28 14:47:15 dhcp-0-221.tlv.redhat.com sudo[9143]:
pam_systemd(sudo:session): Cannot create session: Already running in a
session
Mar 28 14:47:15 dhcp-0-221.tlv.redhat.com sudo[9143]:
pam_unix(sudo:session): session opened for user root by root(uid=0)
Mar 28 14:47:15 dhcp-0-221.tlv.redhat.com audit[9143]: USER_START pid=9143
uid=0 auid=0 ses=1 msg='op=PAM:session_open
grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limi>
Mar 28 14:47:15 dhcp-0-221.tlv.redhat.com sudo[9143]:
pam_unix(sudo:session): session closed for user root
Mar 28 14:47:15 dhcp-0-221.tlv.redhat.com audit[9143]: USER_END pid=9143
uid=0 auid=0 ses=1 msg='op=PAM:session_close
grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limit>
Mar 28 14:47:15 dhcp-0-221.tlv.redhat.com audit[9143]: CRED_DISP pid=9143
uid=0 auid=0 ses=1 msg='op=PAM:setcred grantors=pam_env,pam_fprintd
acct="root" exe="/usr/bin/sud>
# systemctl status vdsmd.service
● vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor
preset: enabled)
Active: failed (Result: exit-code) since Thu 2019-03-28 14:46:45 IST;
5min ago
Process: 9059 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh
--post-stop (code=exited, status=0/SUCCESS)
Process: 9048 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh
--pre-start (code=exited, status=1/FAILURE)
Mar 28 14:46:45 dhcp-0-221.tlv.redhat.com systemd[1]: vdsmd.service:
Service hold-off time over, scheduling restart.
Mar 28 14:46:45 dhcp-0-221.tlv.redhat.com systemd[1]: vdsmd.service:
Scheduled restart job, restart counter is at 5.
Mar 28 14:46:45 dhcp-0-221.tlv.redhat.com systemd[1]: Stopped Virtual
Desktop Server Manager.
Mar 28 14:46:45 dhcp-0-221.tlv.redhat.com systemd[1]: vdsmd.service: Start
request repeated too quickly.
Mar 28 14:46:45 dhcp-0-221.tlv.redhat.com systemd[1]: vdsmd.service: Failed
with result 'exit-code'.
Mar 28 14:46:45 dhcp-0-221.tlv.redhat.com systemd[1]: Failed to start
Virtual Desktop Server Manager.
This occurred to me on a Fedora 28 virtual machine that runs on a RHEL 7.6 host.
Thanks
--
Regards,
Eyal Shenitzky
5 years, 8 months
[Ovirt] [CQ weekly status] [29-03-2019]
by Dafna Ron
Hi,
This mail is to provide the current status of CQ and allow people to review
status before and after the weekend.
Please refer to the colour map below for further information on the
meaning of the colours.
*CQ-4.2*: GREEN (#1)
Last CQ job failure on 4.2 was on 25-03-2019, on project
ovirt-ansible-hosted-engine-setup, due to a missing polkit package.
*CQ-4.3*: RED (#1)
Failures in 4.3 and master this week were caused by two issues:
1. We have had random failures on 4 different tests, which we are
working on debugging with Marcin, Martin and several more people. This
has been disruptive this week, but it did not cause any delays, as I was
re-triggering failed projects to ensure they have their packages built
in 'tested'. Currently we suspect an API issue or changes to postgres
are causing the failures.
2. The 'libcgroup-tools' package was missing, which caused failures for
all projects in initialize_engine. I merged a patch to fix the issue and
provide the package.
*CQ-Master:* RED (#1)
We have had the same issues as in 4.3 this week.
Current running jobs for 4.2 [1], 4.3 [2] and master [3] can be found
here:
[1]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.2_change-...
[2]
https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.3_change...
[3]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_chan...
Happy week!
Dafna
-------------------------------------------------------------------------------------------------------------------
COLOUR MAP
Green = job has been passing successfully
** green for more than 3 days may suggest we need a review of our test
coverage
1. 1-3 days GREEN (#1)
2. 4-7 days GREEN (#2)
3. Over 7 days GREEN (#3)
Yellow = intermittent failures for different projects but no lasting or
current regressions
** intermittent failures would indicate a healthy project, as we expect
a number of failures during the week
** I will not report any of the solved failures or regressions.
1. Solved job failures YELLOW (#1)
2. Solved regressions YELLOW (#2)
Red = job has been failing
** Active Failures. The colour will change based on the amount of time the
project/s has been broken. Only active regressions would be reported.
1. 1-3 days RED (#1)
2. 4-7 days RED (#2)
3. Over 7 days RED (#3)
5 years, 8 months
OST: add_master_storage_domain failed, Could not find hosts that are up in DC test-dc
by Yedidyah Bar David
Hi all,
I want to verify [1]. So I ran the manual job, basic suite 4.3 [2].
It failed [3] with $subject.
Right before that, verify_add_hosts did succeed, and took 59 seconds.
I took a brief look at the code of verify_add_hosts, and it checks
that at least one host is UP.
The patch [1] *might* have caused ansible-host-deploy to take longer;
I am still not sure. In any case, ansible-host-deploy finished at
05:48:45 [5], 14 seconds before verify_add_hosts finished at 09:48:59 [4].
Can it be that a host is considered UP (from the POV of
verify_add_hosts), but is still not ready for creating storage?
Also, host-1, at the point when OST was killed and collected logs, was
right after finishing host-deploy, and hadn't started
ansible-host-deploy yet, according to engine.log. But host-0 should
have been enough, I think.
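If the UP-but-not-ready theory holds, one option would be for
verify_add_hosts to poll the same datacenter-scoped query that
add_master_storage_domain later relies on, instead of sampling host
status once. A rough sketch (assuming ovirtsdk4 and OST's existing
assert_true_within_short helper; the search string is an assumption):

from ovirtlago import testlib

def _wait_for_up_host_in_dc(api, dc_name):
    hosts_service = api.system_service().hosts_service()

    def _any_host_up():
        # Same search as the storage test uses, so the two checks
        # can't disagree about what "up" means.
        return len(hosts_service.list(
            search='datacenter={} AND status=up'.format(dc_name))) > 0

    testlib.assert_true_within_short(_any_host_up)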
Thanks,
[1] https://gerrit.ovirt.org/99000
[2] https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-te...
[3] https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-te...
[4] https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-te...
[5] https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-te...
--
Didi
5 years, 9 months