oVirt 4.1 RC1 build planned
by Lev Veyde
FYI oVirt developers,
An oVirt build is planned for Monday, January 16th at 11:00 AM TLV time (10:00 AM CET).
Taking into consideration the time it takes for Jenkins to run a full CI cycle, everything needs to be backported by Sunday, 11:00 PM.
Please make sure to mark patches as Verified and CR +2 so they will be ready for merging on Monday morning.
A list of pending blockers is available here:
https://bugzilla.redhat.com/buglist.cgi?quicksearch=target_milestone%3A4....
Thanks in advance,
Lev Veyde.
test-repo_ovirt_experimental_master fails on jsonrpc errors
by Daniel Belenky
Hi all,
The following job, test-repo_ovirt_experimental_master
<http://jenkins.ovirt.org/view/experimental%20jobs/job/test-repo_ovirt_exp...>,
fails to pass the basic_suite.
The job was triggered by this merge to the vdsm project:
https://gerrit.ovirt.org/#/c/69936/
The error I suspect caused this issue:
2017-01-11 03:32:26,061-05 DEBUG
[org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker]
(ResponseWorker) [] Message received:
{"jsonrpc":"2.0","error":{"code":"192.168.201.2:990178830","message":"Vds
timeout occured"},"id":null}

2017-01-11 03:32:26,067-05 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler7) [57bc898] Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: VDSM command failed: Message
timeout which can be caused by communication issues

2017-01-11 03:32:26,069-05 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(DefaultQuartzScheduler7) [57bc898] ERROR,
GetStoragePoolInfoVDSCommand(
GetStoragePoolInfoVDSCommandParameters:{runAsync='true',
storagePoolId='f92af272-934f-4327-9db0-afe353e6f61c',
ignoreFailoverLimit='true'}), exception: VDSGenericException:
VDSNetworkException: Message timeout which can be caused by
communication issues, log id: 2f12b94a

2017-01-11 03:32:26,069-05 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(DefaultQuartzScheduler7) [57bc898] Exception:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException:
VDSGenericException: VDSNetworkException: Message timeout which can be
caused by communication issues
at org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.proceedProxyReturnValue(BrokerCommandBase.java:188)
[vdsbroker.jar:]
at org.ovirt.engine.core.vdsbroker.irsbroker.GetStoragePoolInfoVDSCommand.executeIrsBrokerCommand(GetStoragePoolInfoVDSCommand.java:32)
[vdsbroker.jar:]
at org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand.lambda$executeVDSCommand$0(IrsBrokerCommand.java:95)
[vdsbroker.jar:]
at org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy.runInControlledConcurrency(IrsProxy.java:262)
[vdsbroker.jar:]
at org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand.executeVDSCommand(IrsBrokerCommand.java:92)
[vdsbroker.jar:]
at org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:73)
[vdsbroker.jar:]
at org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33)
[dal.jar:]
at org.ovirt.engine.core.vdsbroker.vdsbroker.DefaultVdsCommandExecutor.execute(DefaultVdsCommandExecutor.java:14)
[vdsbroker.jar:]
at org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:408)
[vdsbroker.jar:]
at org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy.proceedStoragePoolStats(IrsProxy.java:348)
[vdsbroker.jar:]
at org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy.lambda$updatingTimerElapsed$0(IrsProxy.java:246)
[vdsbroker.jar:]
at org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy.runInControlledConcurrency(IrsProxy.java:262)
[vdsbroker.jar:]
at org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy.updatingTimerElapsed(IrsProxy.java:227)
[vdsbroker.jar:]
at sun.reflect.GeneratedMethodAccessor191.invoke(Unknown Source) [:1.8.0_111]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_111]
at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_111]
at org.ovirt.engine.core.utils.timer.JobWrapper.invokeMethod(JobWrapper.java:77)
[scheduler.jar:]
at org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:51)
[scheduler.jar:]
at org.quartz.core.JobRunShell.run(JobRunShell.java:213) [quartz.jar:]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[rt.jar:1.8.0_111]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [rt.jar:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[rt.jar:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[rt.jar:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_111]
Attached is a zip file with all artifacts from Jenkins.
The error I've mentioned above is found in:
*exported-artifacts/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-engine/_var_log_ovirt-engine/engine.log*
Can someone advise?
Thanks,
--
*Daniel Belenky*
*RHV DevOps*
*Red Hat Israel*
Toward self-configuring CI, or how can we stop writing YAML
by Barak Korren
Hi all,
The premise of the CI standard is simple: you (the developers) place a
simple script in the 'automation' directory, and we (the infra team)
take care of making Jenkins run it when it should.
But we haven't been able to fully deliver on this premise yet. Getting
a project to work with the CI standard also requires writing some YAML
in the 'Jenkins' repo. Even worse, this YAML needs to be maintained
over time as new project branches get created, new platforms get
targeted, etc.
The core reason behind having to write YAML is that there are two
technical details we need to know in order to run the CI jobs, but
they are not specified in a way that allows detecting them
automatically. Those details are:
1. The platforms a certain project needs to be built and tested on.
2. The branches of the project that CI needs to look at, and how they
map to oVirt releases.
We need a way to specify those details so that the CI system can
detect them automatically. Here are my ideas on how to do that:
We already have a way to specify platforms in the 'automation'
directory: scripts and files there can be suffixed with a platform
name, which makes them apply only to that platform.
I suggest we make the platform suffix explicitly required (with a
compatibility fall-back, see below), so that to have 'check_patch' run
on Fedora 25 for x86_64, one would need a
'check_patch.sh.fc25.x86_64' script (or symlink) in the automation
directory.
It would be cumbersome to go and create symlinks in all the projects
at this stage; therefore, I also suggest that we make the architecture
default to 'x86_64' when unspecified, and the OS default to 'el7'.
That way, 'check_patch.sh.fc25' will be equivalent to
'check_patch.sh.fc25.x86_64', and 'check_patch.sh' will be equivalent
to 'check_patch.sh.el7.x86_64'.
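To make the fall-back rules concrete, here is a minimal Python sketch
(a hypothetical helper of mine, not how the CI system is actually
implemented) of resolving which script applies to a given stage, OS
and architecture:

import os

DEFAULT_OS = 'el7'
DEFAULT_ARCH = 'x86_64'

def resolve_script(automation_dir, stage, os_name, arch):
    """Return the script that applies to (stage, os_name, arch).

    Most specific name first, then the fall-backs described above:
    the arch suffix may be dropped when it is x86_64, and the OS
    suffix may be dropped when it is el7 (and the arch is x86_64).
    """
    candidates = ['%s.sh.%s.%s' % (stage, os_name, arch)]
    if arch == DEFAULT_ARCH:
        candidates.append('%s.sh.%s' % (stage, os_name))
        if os_name == DEFAULT_OS:
            candidates.append('%s.sh' % stage)
    for name in candidates:
        path = os.path.join(automation_dir, name)
        if os.path.exists(path):
            return path
    return None

# resolve_script('automation', 'check_patch', 'fc25', 'x86_64') picks
# check_patch.sh.fc25.x86_64 if present, else check_patch.sh.fc25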
When it comes to branches, I think the way to go is to standardize
branch names. That standard should probably be something like
'ovirt-4.0', 'ovirt-4.1', etc. Alternatively, we could use something
like 'ovirt(-.*)?-.4.0' or '<repo_name>-4.0' to accommodate existing
conventions like the engine's.
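For illustration, here is a small sketch of mine of how standardized
(or repo-prefixed) branch names could be mapped to a release; the
pattern below is just an example, not a decided convention:

import re

# Matches 'ovirt-4.1' as well as repo-prefixed variants such as
# 'ovirt-engine-4.1'; branches without a trailing version (e.g.
# 'master') map to no release.
BRANCH_RE = re.compile(r'^[\w-]+?-(\d+\.\d+)$')

def branch_to_release(branch):
    match = BRANCH_RE.match(branch)
    return match.group(1) if match else None

assert branch_to_release('ovirt-4.1') == '4.1'
assert branch_to_release('ovirt-engine-4.0') == '4.0'
assert branch_to_release('master') is None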
Thoughts? Ideas?
( Jira ticket tracking this work: )
( https://ovirt-jira.atlassian.net/browse/OVIRT-1013 )
--
Barak Korren
bkorren@redhat.com
RHCE, RHCi, RHV-DevOps Team
https://ifireball.wordpress.com/
Ovirt system tests fail during ldap tests
by Denis Chaplygin
Hello!
I tried to play with the system tests and discovered that some suites
are always failing on my side, and the failure seems to be related to
the test preparation procedure:
# add_ldap_provider:
* Copy /tmp/dchaplyg/tmpVIWROd to
lago-basic-suite-master-engine:/tmp/dchaplyg/tmpVIWROd:
* Copy /tmp/dchaplyg/tmpVIWROd to
lago-basic-suite-master-engine:/tmp/dchaplyg/tmpVIWROd:
ERROR (in 0:00:00)
I got that error with both basic-suite-4.0 and basic-suite-master.
What could be wrong?
suspend_resume_vm fails on master experimental
by Daniel Belenky
Hi all,
The *test-repo_ovirt_experimental_master* job (link to Jenkins
<http://jenkins.ovirt.org/view/experimental%20jobs/job/test-repo_ovirt_exp...>)
failed on the basic_sanity scenario.
The job was triggered by https://gerrit.ovirt.org/#/c/69845/
From looking at the logs, it seems that the reason is *VDSM*.
In the VDSM log, I see the following error:
2017-01-09 16:47:41,331 ERROR (JsonRpc (StompReactor))
[vds.dispatcher] SSL error receiving from
<yajsonrpc.betterAsyncore.Dispatcher connected ('::1', 34942, 0, 0) at
0x36b95f0>: unexpected eof (betterAsyncore:119)
Also, when looking at the MOM logs, I see the following:
2017-01-09 16:43:39,508 - mom.vdsmInterface - ERROR - Cannot connect
to VDSM! [Errno 111] Connection refused
I've attached the full VDSM logs here in a zip file.
Can anyone please assist?
Thanks,
--
*Daniel Belenky*
*RHV DevOps*
*Red Hat Israel*
New failure in OST - master branch: add secondary storage domains fails
by Nadav Goldin
Hi,
There is a new failure on master in the experimental flow; the failing
test is 'add_secondary_storage_domain'. The engine.log has a few
exceptions:
2017-01-09 10:07:24,943-05 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand]
(org.ovirt.thread.pool-6-thread-2) [e9e4e3b] Command
'PollVDSCommand(HostName = lago-basic-suite-master-host1,
VdsIdVDSCommandParametersBase:{runAsync='true',
hostId='f6ad90f7-1b37-49f0-a958-7151efa0039c'})' execution failed:
VDSGenericException: VDSNetworkException: Timeout during rpc call
2017-01-09 10:07:24,943-05 DEBUG
[org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand]
(org.ovirt.thread.pool-6-thread-2) [e9e4e3b] Exception:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException:
VDSGenericException: VDSNetworkException: Timeout during rpc call
at
org.ovirt.engine.core.vdsbroker.vdsbroker.FutureVDSCommand.get(FutureVDSCommand.java:73)
[vdsbroker.jar:]
...
2017-01-09 10:10:23,323-05 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
(DefaultQuartzScheduler10) [7cad9211] Command
'GetAllVmStatsVDSCommand(HostName = lago-basic-suite-master-host1,
VdsIdVDSCommandParametersBase:{runAsync='true',
hostId='f6ad90f7-1b37-49f0-a958-7151efa0039c'})' execution failed:
VDSGenericException: VDSNetworkException: Heartbeat exceeded
2017-01-09 10:10:23,323-05 DEBUG
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
(DefaultQuartzScheduler10) [7cad9211] Exception:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException:
VDSGenericException: VDSNetworkException: Heartbeat exceeded
at
org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.proceedProxyReturnValue(BrokerCommandBase.java:188)
[vdsbroker.jar:]
...
2017-01-09 10:10:43,704-05 DEBUG
[org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker] (ResponseWorker) []
Illegal unquoted character ((CTRL-CHAR, code 10)): has to be escaped using
backslash to be included in name
at [Source: [B@6a84a0d0; line: 1, column: 889]:
org.codehaus.jackson.JsonParseException: Illegal unquoted character
((CTRL-CHAR, code 10)): has to be escaped using backslash to be included in
name
at [Source: [B@6a84a0d0; line: 1, column: 889]
at
org.codehaus.jackson.JsonParser._constructError(JsonParser.java:1433)
[jackson-core-asl-1.9.13.jar:1.9.13]
at
org.codehaus.jackson.impl.JsonParserMinimalBase._reportError(JsonParserMinimalBase.java:521)
[jackson-core-asl-1.9.13.jar:1.9.13]
at
org.codehaus.jackson.impl.JsonParserMinimalBase._throwUnquotedSpace(JsonParserMinimalBase.java:482)
[jackson-core-asl-1.9.13.jar:1.9.13]
at
org.codehaus.jackson.impl.ReaderBasedParser._parseFieldName2(ReaderBasedParser.java:1042)
[jackson-core-asl-1.9.13.jar:1.9.13]
at
org.codehaus.jackson.impl.ReaderBasedParser._parseFieldName(ReaderBasedParser.java:1008)
[jackson-core-asl-1.9.13.jar:1.9.13]
....
<JsonRpcRequest id: "7711f770-dbef-44be-9f9e-2d8a2bfae937", method:
Host.getAllVmStats, params: {}>
2017-01-09 10:11:33,336-05 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
(DefaultQuartzScheduler7) [7cad9211] Command
'GetAllVmStatsVDSCommand(HostName = lago-basic-suite-master-host1,
VdsIdVDSCommandParametersBase:{runAsync='true', hostId='f6ad90f7-1b37-49f0-a958-7151efa0039c'})'
execution failed: VDSGenericException: VDSNetworkException: Unrecognized
message received
2017-01-09 10:11:33,336-05 DEBUG
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
(DefaultQuartzScheduler7) [7cad9211] Exception:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException:
VDSGenericException: VDSNetworkException: Unrecognized message received
at
org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.proceedProxyReturnValue(BrokerCommandBase.java:188)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand.executeVdsBrokerCommand(GetAllVmStatsVDSCommand.java:23)
[vdsbroker.jar:]
VDSM logs on host1:
2017-01-09 10:11:27,120 ERROR (jsonrpc/4) [storage.StorageDomainCache]
domain 80985016-bdd8-4778-abd9-becc8fedcab4 not found (sdc:157)
Traceback (most recent call last):
File "/usr/share/vdsm/storage/sdc.py", line 155, in _findDomain
dom = findMethod(sdUUID)
File "/usr/share/vdsm/storage/sdc.py", line 185, in _findUnfetchedDomain
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist:
(u'80985016-bdd8-4778-abd9-becc8fedcab4',)
2017-01-09 10:11:27,452 ERROR (jsonrpc/4) [storage.StorageDomainCache]
looking for unfetched domain 80985016-bdd8-4778-abd9-becc8fedcab4 (sdc:151)
2017-01-09 10:11:27,453 ERROR (jsonrpc/4) [storage.StorageDomainCache]
looking for domain 80985016-bdd8-4778-abd9-becc8fedcab4 (sdc:168)
2017-01-09 10:11:27,552 WARN (jsonrpc/4) [storage.LVM] lvm vgs failed: 5
[] [' WARNING: Not using lvmetad because config setting use_lvmetad=0.',
' WARNING: To avoid corruption, rescan devices to make changes visible
(pvscan --cache).'
, ' Volume group "80985016-bdd8-4778-abd9-becc8fedcab4" not found', '
Cannot process volume group 80985016-bdd8-4778-abd9-becc8fedcab4'] (lvm:377)
2017-01-09 10:11:27,559 ERROR (jsonrpc/4) [storage.StorageDomainCache]
domain 80985016-bdd8-4778-abd9-becc8fedcab4 not found (sdc:157)
Traceback (most recent call last):
File "/usr/share/vdsm/storage/sdc.py", line 155, in _findDomain
dom = findMethod(sdUUID)
File "/usr/share/vdsm/storage/sdc.py", line 185, in _findUnfetchedDomain
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist:
(u'80985016-bdd8-4778-abd9-becc8fedcab4',)
2017-01-09 10:11:27,560 ERROR (jsonrpc/4) [storage.TaskManager.Task]
(Task='e2381f1f-eee5-4922-a56d-f6ca40d76eec') Unexpected error (task:870)
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 877, in _run
return fn(*args, **kargs)
File "/usr/lib/python2.7/site-packages/vdsm/logUtils.py", line 50, in
wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/storage/hsm.py", line 1159, in attachStorageDomain
pool.attachSD(sdUUID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line
79, in wrapper
return method(self, *args, **kwargs)
File "/usr/share/vdsm/storage/sp.py", line 924, in attachSD
dom = sdCache.produce(sdUUID)
File "/usr/share/vdsm/storage/sdc.py", line 112, in produce
domain.getRealDomain()
File "/usr/share/vdsm/storage/sdc.py", line 53, in getRealDomain
return self._cache._realProduce(self._sdUUID)
File "/usr/share/vdsm/storage/sdc.py", line 136, in _realProduce
domain = self._findDomain(sdUUID)
File "/usr/share/vdsm/storage/sdc.py", line 155, in _findDomain
dom = findMethod(sdUUID)
File "/usr/share/vdsm/storage/sdc.py", line 185, in _findUnfetchedDomain
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist:
(u'80985016-bdd8-4778-abd9-becc8fedcab4',)
....
2017-01-09 10:19:31,467 ERROR (jsonrpc/6) [storage.TaskManager.Task]
(Task='700015ba-4aed-4eaf-961b-5a4373b2d4d7') Unexpected error (task:870)
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 877, in _run
return fn(*args, **kargs)
File "/usr/lib/python2.7/site-packages/vdsm/logUtils.py", line 50, in
wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/storage/hsm.py", line 2212, in getAllTasksInfo
raise se.SpmStatusError()
SpmStatusError: Not SPM: ()
2017-01-09 10:19:31,471 INFO (jsonrpc/6) [storage.TaskManager.Task]
(Task='700015ba-4aed-4eaf-961b-5a4373b2d4d7') aborting: Task is aborted:
'Not SPM' - code 654 (task:1175)
2017-01-09 10:19:31,471 ERROR (jsonrpc/6) [storage.Dispatcher] {'status':
{'message': 'Not SPM: ()', 'code': 654}} (dispatcher:77)
2017-01-09 10:19:31,472 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call
Host.getAllTasksInfo failed (error 654) in 0.01 seconds (__init__:515)
2017-01-09 10:19:31,479 INFO (jsonrpc/7) [dispatcher] Run and protect:
getAllTasksStatuses(spUUID=None, options=None) (logUtils:49)
2017-01-09 10:19:31,479 ERROR (jsonrpc/7) [storage.TaskManager.Task]
(Task='2841da07-b3b4-4573-ae38-b1500f793221') Unexpected error (task:870)
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 877, in _run
return fn(*args, **kargs)
File "/usr/lib/python2.7/site-packages/vdsm/logUtils.py", line 50, in
wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/storage/hsm.py", line 2172, in getAllTasksStatuses
raise se.SpmStatusError()
SpmStatusError: Not SPM: ()
2017-01-09 10:19:31,480 INFO (jsonrpc/7) [storage.TaskManager.Task]
(Task='2841da07-b3b4-4573-ae38-b1500f793221') aborting: Task is aborted:
'Not SPM' - code 654 (task:1175)
Full engine logs can be found here:
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/4643/art...
VDSM host1 logs:
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/4643/art...
Rest of the logs:
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/4643/art...
Could someone take a look?
Thanks,
Nadav.
Vdsm - 6 years (in 10 minutes)
by Nir Soffer
Hi all,
Please enjoy this visualization of vdsm development since 2011:
https://www.youtube.com/watch?v=Ui1ouZiENU0
If you want to create your own:
dnf install gource ffmpeg    # gource renders the history, ffmpeg encodes it
cd /path/to/gitrepo
# 0.1 seconds per day of history, rendered at 1080p; the raw PPM
# frames are piped into ffmpeg and encoded as H.264 at 30 fps:
gource -s 0.1 --date-format "%a, %d %b %Y" -1920x1080 -o - | ffmpeg -y
-r 30 -f image2pipe -vcodec ppm -i - -vcodec libx264 -preset ultrafast
-pix_fmt yuv420p -crf 1 -threads 4 -bf 0 vdsm-6-years.mp4
Nir
7 years, 9 months
The feature everyone was asking for is finally here...
by Eyal Edri
FYI,
After many requests from multiple developers and testers, the oVirt CI
added a new simple job that lets you run the full-fledged, end-to-end
oVirt system tests with the click of a button.
You can read all the details and the how-to in the new oVirt blog [1].
We wanted to allow running oVirt system tests on EVERY open patch from ANY
oVirt project, without relying on complex build code inside the job.
Luckily, we just added the 'build-on-demand' job, so together with it you
can build any RPMs you'd like and use them to run the manual job.
So the 2 steps you'll need to do are:
1. Write 'ci please build' in a comment on an open oVirt patch (make
sure the feature is enabled for that project first; it's already
available for ovirt-engine, vdsm, dashboard and vdsm-jsonrpc-java).
2. Run the manual OST job for the version you'd like to test with the
URLs you got from #1.
You'll get an email once the job is done, and you can browse the results
and check the logs from the engine and the hosts.
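For those who prefer to script step 2 rather than click through the
Jenkins UI, a minimal sketch using the python-jenkins library could
look like this (the job name and parameter name below are placeholders
of mine, not the real ones; take those from the manual job's page):

import jenkins  # pip install python-jenkins

server = jenkins.Jenkins('http://jenkins.ovirt.org',
                         username='myuser', password='my-api-token')

# 'ovirt-system-tests_manual' and 'CUSTOM_REPOS' are assumed names;
# check the actual manual job for the real job/parameter names.
server.build_job('ovirt-system-tests_manual', parameters={
    'CUSTOM_REPOS': '<URL of the RPMs built in step 1>',
})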
Please feel free to ask questions on infra@ovirt.org as usual.
[1] https://www.ovirt.org/blog/2017/01/ovirt-system-tests-to-the-rescue/
--
Eyal Edri
Associate Manager
RHV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
Manageiq ova upload is failing
by Piotr Kliczewski
All,
I wanted to upload the ManageIQ OVA from [1]. When I attempted it, this is what I saw:
on the engine side:
2017-01-03 23:43:59,318+01 ERROR
[org.ovirt.engine.core.bll.GetVmFromOvaQuery] (default task-1)
[33a1d8b5-8cec-4b00-9a35-ee9f1d9635b2] Exception:
org.ovirt.engine.core.common.errors.EngineException: EngineException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to GetOvaInfoVDS, error
= Error parsing ovf information: no memory size, code = -32603 (Failed
with error unexpected and code 16)
at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:118)
[bll.jar:]
at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.runVdsCommand(VDSBrokerFrontendImpl.java:33)
[bll.jar:]
at org.ovirt.engine.core.bll.QueriesCommandBase.runVdsCommand(QueriesCommandBase.java:242)
[bll.jar:]
at org.ovirt.engine.core.bll.GetVmFromOvaQuery.getVmInfoFromOvaFile(GetVmFromOvaQuery.java:24)
[bll.jar:]
at org.ovirt.engine.core.bll.GetVmFromOvaQuery.executeQueryCommand(GetVmFromOvaQuery.java:20)
[bll.jar:]
at org.ovirt.engine.core.bll.QueriesCommandBase.executeCommand(QueriesCommandBase.java:110)
[bll.jar:]
at org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33)
[dal.jar:]
at org.ovirt.engine.core.bll.executor.DefaultBackendQueryExecutor.execute(DefaultBackendQueryExecutor.java:14)
[bll.jar:]
at org.ovirt.engine.core.bll.Backend.runQueryImpl(Backend.java:579)
[bll.jar:]
at org.ovirt.engine.core.bll.Backend.runQuery(Backend.java:547)
[bll.jar:]
on the vdsm side:
2017-01-03 23:43:58,437 ERROR (jsonrpc/0) [jsonrpc.JsonRpcServer]
Internal server error (__init__:552)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
547, in _handle_request
res = method(**params)
File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line
198, in _dynamicMethod
result = fn(*methodArgs)
File "/usr/share/vdsm/API.py", line 1493, in getExternalVmFromOva
return v2v.get_ova_info(ova_path)
File "/usr/lib/python2.7/site-packages/vdsm/v2v.py", line 226, in get_ova_info
_add_general_ovf_info(vm, root, ns, ova_path)
File "/usr/lib/python2.7/site-packages/vdsm/v2v.py", line 1225, in
_add_general_ovf_info
raise V2VError('Error parsing ovf information: no memory size')
V2VError: Error parsing ovf information: no memory size
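For whoever looks into this: the piece of OVF the failing check
presumably needs is the memory Item of the VirtualHardwareSection
(rasd:ResourceType 4 in the DMTF schema). A rough, self-contained
approximation of such a lookup (my sketch, not the actual vdsm/v2v.py
code):

import xml.etree.ElementTree as ET

RASD = ('{http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/'
        'CIM_ResourceAllocationSettingData}')
MEMORY_TYPE = '4'  # ResourceType 4 == Memory in the DMTF schema

def find_memory_quantity(ovf_xml):
    """Return the memory Item's rasd:VirtualQuantity, or None."""
    root = ET.fromstring(ovf_xml)
    for item in root.iter():
        rtype = item.find(RASD + 'ResourceType')
        if rtype is not None and rtype.text == MEMORY_TYPE:
            qty = item.find(RASD + 'VirtualQuantity')
            return qty.text if qty is not None else None
    return None

If the OVA has no such Item, or encodes memory differently, a lookup
like this returns nothing, which would match the 'no memory size'
error above.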
Is our code not parsing correctly, or did the ManageIQ folks publish an incorrect file?
I am using:
ovirt-engine
Version : 4.1.0
Release : 0.3.beta2.20161221085908.el7.centos
vdsm
Version : 4.18.999
Release : 1218.gitd36143e.el7.centos
Thanks,
Piotr
[1] http://manageiq.org/download/