[ACTION REQUIRED] fixing CI on FC27 and RAWHIDE
by Barak Korren
Hi to all maintainers,
TL;DR: If you use yum-utils on Fedora (e.g. for running yum-builddep), you
need to switch to dnf-utils.
As you may know, we encountered many issues while trying to get CI
to work well for FC27 and Fedora Rawhide.
One of the main reasons for those issues was that we were
using 'yum' instead of 'dnf' to set up the Fedora build and test
environments. In FC27 and Rawhide, some packages simply stopped being
compatible with yum.
It took us a while to resolve this, since we needed to get some patches
merged upstream into the tools we use, but as of yesterday we have switched
to using DNF for setting up FC environments.
While we were waiting for the tools to get patched, our work-around
was to freeze the mirrors our CI system uses, so that newer, incompatible
packages were invisible to CI. Now that a fix is in place, we have started
re-syncing our mirrors to pick up newer packages.
A side effect of this is that dnf-incompatible code in build and test
scripts is now exposed. One common case is scripts that run
'yum-builddep' with 'yum-utils' installed instead of 'dnf-utils'; in
that case, yum-incompatible packages fail to download.
Bottom line: if you use yum-based tools in your build/test scripts,
they need to be changed to dnf-based tools when those scripts run on
Fedora. Sometimes this can be as simple as changing the packages listed
in your *.packages files for Fedora.
See for example this patch which switches from yum-utils to dnf-utils:
https://gerrit.ovirt.org/c/88630/
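In practice the change is often a one-line swap in the Fedora-specific
packages list. A minimal sketch of what such a diff can look like (the file
name below is hypothetical; check your own project's automation directory and
the patch above for the real file names):

    --- a/automation/check-patch.packages.fc27   (hypothetical file name)
    +++ b/automation/check-patch.packages.fc27
    -yum-utils
    +dnf-utils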
--
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
Re: [ovirt-devel] [ovirt-users] Unremovable disks created through the API
by Arik Hadas
On Tue, Mar 6, 2018 at 11:19 PM, Richard W.M. Jones <rjones(a)redhat.com>
wrote:
> On Tue, Mar 06, 2018 at 11:14:40PM +0200, Arik Hadas wrote:
> > On Tue, Mar 6, 2018 at 9:18 PM, Richard W.M. Jones <rjones(a)redhat.com>
> > wrote:
> >
> > >
> > > I've been playing with disk uploads through the API. As a result
> > > I now have lots of disks in the states "Paused by System" and
> > > "Paused by User". They are not attached to any VM, and I'm logged
> > > in as admin@internal, but there seems to be no way to use them.
> > > Even worse I've now run out of space so can't do anything else.
> > >
> > > How can I remove them?
> >
> >
> > > Screenshot: http://oirase.annexia.org/tmp/ovirt.png
> >
> >
> > Hi Richard,
> >
> > Selecting Upload->Cancel at that tab will remove such a disk.
> > Note that it may take a minute or two.
>
> Yes, that works, thanks.
>
(Moving to devel-list)
BTW, I think that the import process should include a preliminary phase in
which ovirt-engine is informed that the import process is starting.
Currently, IIUC, the new process is designed to be:
1. virt-v2v uploads the disks
2. virt-v2v calls an API with the OVF configuration so ovirt-engine will
add the VM and attach the uploaded disks to that VM
IMHO, the process should instead consist of (see the sketch after the list
of gaps below):
1. virt-v2v calls an API with the OVF configuration (probably partial, since
the OS and other things are unknown at that point)
2. virt-v2v uploads the disks
3. virt-v2v provides the up-to-date configuration
Step #1 will enable ovirt-engine:
1. Most importantly, to clean up uploaded disks in case of an error during
the import process. Otherwise, we require the client to clean them up,
which can be challenging (e.g., if the virt-v2v process crashes).
2. To inform the user that the process has started, so they won't be
surprised to suddenly see disks being uploaded. That gives context to
these upload operations.
3. To inform the user about the progress of the import process, much like
we do today when importing VMs from vSphere to RHV.
4. To perform validations on the (partial) VM configuration, e.g.,
verifying that no VM with the same name exists, verifying there is enough
space (optionally mapping different disks to different storage devices), and
so on, before uploading the disks.
The gaps I see:
1. We don't have a command for step #1 yet, but that's something we can
provide relatively quickly. We also need it to support uploading an OVA via
oVirt's webadmin.
2. We have a command for step #3, but it is not exposed via the API.
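To make the proposed ordering concrete, here is a rough Python-style sketch
of the client side. None of these engine calls exist today; the names are
hypothetical placeholders for the commands discussed above:

    # Hypothetical sketch of the proposed import flow from the client's
    # (e.g. virt-v2v's) point of view. The engine-side commands named here
    # do not exist yet; they stand in for step #1 and step #3 above.
    def import_vm(engine, partial_ovf, disk_paths, build_final_ovf):
        # Step 1: announce the import with the partial OVF so the engine
        # can validate it, show progress, and own cleanup on failure.
        import_id = engine.begin_external_import(partial_ovf)
        try:
            # Step 2: upload the disks in the context of that import.
            disk_ids = [engine.upload_disk(import_id, path) for path in disk_paths]
            # Step 3: send the final, complete configuration once the OS
            # and other details are known.
            engine.finish_external_import(import_id, build_final_ovf(disk_ids))
        except Exception:
            # With step 1 in place the engine could also do this on its own
            # if the client crashes; the explicit call is just best effort.
            engine.abort_external_import(import_id)
            raise
        return import_id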
>
> Rich.
>
> --
> Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~
> rjones
> Read my programming and virtualization blog: http://rwmj.wordpress.com
> Fedora Windows cross-compiler. Compile Windows programs, test, and
> build Windows installers. Over 100 libraries supported.
> http://fedoraproject.org/wiki/MinGW
>
planned Jenkins restart
by Evgheni Dereveanchin
Hi everyone,
I'll be performing a planned Jenkins restart within the next hour.
No new jobs will be scheduled during this maintenance period.
I will inform you once it is over.
Regards,
Evgheni Dereveanchin
Fwd: Still Failing: oVirt/ovirt-engine#4888 (master - 1cc377f)
by Sandro Bonazzola
Hi,
not sure who's monitoring Travis failures and maintaining the Travis jobs,
but it looks like we have an error in the test suite.
It's currently failing on:
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running org.ovirt.engine.core.dal.job.ExecutionMessageDirectorTest
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.234 sec
Running org.ovirt.engine.core.dal.dbbroker.DbConnectionUtilTest
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.936
sec <<< FAILURE!
org.ovirt.engine.core.dal.dbbroker.DbConnectionUtilTest Time elapsed:
936 sec <<< FAILURE!
java.lang.AssertionError: Unable to init tests
This looks pretty similar to what happens in Jenkins for fcraw builds,
tracked here: https://bugzilla.redhat.com/show_bug.cgi?id=1550033
---------- Forwarded message ----------
From: Travis CI <builds(a)travis-ci.org>
Date: 2018-03-06 7:53 GMT+01:00
Subject: Still Failing: oVirt/ovirt-engine#4888 (master - 1cc377f)
To: sbonazzo(a)redhat.com
oVirt / ovirt-engine
<https://travis-ci.org/oVirt/ovirt-engine?utm_source=email&utm_medium=noti...>
(master <https://github.com/oVirt/ovirt-engine/tree/master>)
Build #4888 is still failing.
<https://travis-ci.org/oVirt/ovirt-engine/builds/349675802?utm_source=emai...>
Duration: 4 minutes and 38 seconds
Yedidyah Bar David 1cc377f
<https://github.com/oVirt/ovirt-engine/commit/1cc377fa30286733aeb68aa75a5b...>
Changeset:
<https://github.com/oVirt/ovirt-engine/compare/6c9465241694...1cc377fa3028>
packaging: setup: postgres95: Refuse to upgrade if new data dir is not
empty
Change-Id: I9f69ceb2b8ea773d1bb883c3e81734a3c4c8af3c
Bug-Url: https://bugzilla.redhat.com/1546771
Signed-off-by: Yedidyah Bar David <didi(a)redhat.com>
--
SANDRO BONAZZOLA
ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
Packages not landed on tested repo
by Sandro Bonazzola
Hi,
it looks like we have some packages which never passed OST and never landed
in the ovirt-4.2-snapshot repo, and they are currently causing a failure when
building the appliance from the 4.2 snapshot repo:
03:17:20,893 INFO program:Error: Package:
ovirt-engine-4.2.2.3-0.0.master.20180305171049.git0e81394.el7.centos.noarch
(ovirt-4.2-snapshot)
03:17:20,893 INFO program:Requires: ovirt-ansible-roles >= 1.1.2
03:17:20,894 INFO program:Error: Package:
ovirt-engine-4.2.2.3-0.0.master.20180305171049.git0e81394.el7.centos.noarch
(ovirt-4.2-snapshot)
03:17:20,896 INFO program:Requires: ovirt-js-dependencies
03:17:20,897 INFO program:Error: Package:
ovirt-engine-4.2.2.3-0.0.master.20180305171049.git0e81394.el7.centos.noarch
(ovirt-4.2-snapshot)
03:17:20,897 INFO program:Requires: ovirt-engine-wildfly-overlay >= 11.0.1
03:17:20,899 INFO program:Error: Package:
ovirt-engine-api-explorer-0.0.3-0.alpha.1.20180215git9b54c9c.el7.centos.noarch
(ovirt-4.2-snapshot)
03:17:20,900 INFO program:Requires: ovirt-js-dependencies >= 1.2
I'm now trying to get the missing packages to pass through the whole testing
pipeline, but be aware that those packages were already released 3 months
ago, so they are stable and manually tested, and they were passing master OST.
I have no clue why they are missing from the 4.2 testing pipeline.
--
SANDRO BONAZZOLA
ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
[ OST Failure Report ] [ oVirt 4.2 (ovirt-engine) ] [ 05-03-2018 ] [ 006_migrations.migrate_vm ]
by Dafna Ron
Hi,
We have a failure in OST on test 006_migrations.migrate_vm.
From what I can see, the migration succeeded, but because there was a
KeyError: 'cpuUsage', the engine assumed the VM was down and tried to retry
the migration. The retry then failed because the VM already exists.
Link and headline of suspected patches:
Failed change: https://gerrit.ovirt.org/#/c/88432/2 - engine: Events coming
too soon on refresh caps
CQ reported this as the root cause, but I don't think it's related either:
https://gerrit.ovirt.org/#/c/88404/ - core: Auto SD selection on template
version update
Link to job: http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1021/
Link to all logs:
http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1021/artifact/
(Relevant) error snippet from the log:
<error>
The migration starts and finishes, but then fails on the VM state because of KeyError: 'cpuUsage':
2018-03-05 05:21:33,269-0500 INFO (jsonrpc/3) [api.virt] START migrate(params={u'incomingLimit': 2, u'tunneled': u'false', u'dstqemu': u'192.0.3.2', u'autoConverge': u'false', u'src': u'lago-basic-suite-4-2-host-0', u'enableGuestEvents': False, u'dst': u'lago-basic-suite-4-2-host-1:54321', u'vmId': u'a80596a2-f57b-4878-adc3-772363e42783', u'abortOnError': u'true', u'outgoingLimit': 2, u'compressed': u'false', u'method': u'online'}) from=::ffff:192.168.200.2,42452, flow_id=123f5fd3-81a4-4ba9-9034-525674696629, vmId=a80596a2-f57b-4878-adc3-772363e42783 (api:46)
2018-03-05 05:21:33,271-0500 INFO (jsonrpc/3) [api.virt] FINISH migrate return={'status': {'message': 'Migration in progress', 'code': 0}, 'progress': 0} from=::ffff:192.168.200.2,42452, flow_id=123f5fd3-81a4-4ba9-9034-525674696629, vmId=a80596a2-f57b-4878-adc3-772363e42783 (api:52)
2018-03-05 05:21:33,271-0500 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call VM.migrate succeeded in 0.00 seconds (__init__:573)
2018-03-05 05:21:33,473-0500 INFO (monitor/bd2f874) [IOProcessClient] Closing client ioprocess-0 (__init__:583)
2018-03-05 05:21:34,936-0500 INFO (migsrc/a80596a2) [virt.vm] (vmId='a80596a2-f57b-4878-adc3-772363e42783') Creation of destination VM took: 1 seconds (migration:473)
2018-03-05 05:21:34,936-0500 INFO (migsrc/a80596a2) [virt.vm] (vmId='a80596a2-f57b-4878-adc3-772363e42783') starting migration to qemu+tls://lago-basic-suite-4-2-host-1/system with miguri tcp://192.0.3.2 (migration:502)
2018-03-05 05:21:36,356-0500 INFO (jsonrpc/7) [api.host] START getAllVmStats() from=::ffff:192.168.200.2,42452 (api:46)
2018-03-05 05:21:36,359-0500 INFO (jsonrpc/7) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::ffff:192.168.200.2,42452 (api:52)
2018-03-05 05:21:36,361-0500 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:573)
2018-03-05 05:21:38,861-0500 INFO (libvirt/events) [virt.vm] (vmId='a80596a2-f57b-4878-adc3-772363e42783') CPU stopped: onSuspend (vm:6063)
2018-03-05 05:21:40,109-0500 WARN (periodic/0) [virt.periodic.VmDispatcher] could not run <class 'vdsm.virt.periodic.DriveWatermarkMonitor'> on ['a80596a2-f57b-4878-adc3-772363e42783'] (periodic:323)
2018-03-05 05:21:40,109-0500 INFO (migsrc/a80596a2) [virt.vm] (vmId='a80596a2-f57b-4878-adc3-772363e42783') migration took 6 seconds to complete (migration:514)
2018-03-05 05:21:40,110-0500 INFO (migsrc/a80596a2) [virt.vm] (vmId='a80596a2-f57b-4878-adc3-772363e42783') Changed state to Down: Migration succeeded (code=4) (vm:1677)
2018-03-05 05:21:40,116-0500 INFO (migsrc/a80596a2) [virt.vm] (vmId='a80596a2-f57b-4878-adc3-772363e42783') Stopping connection (guestagent:438)
2018-03-05 05:21:41,118-0500 WARN (periodic/3) [virt.periodic.VmDispatcher] could not run <class 'vdsm.virt.periodic.DriveWatermarkMonitor'> on ['a80596a2-f57b-4878-adc3-772363e42783'] (periodic:323)
2018-03-05 05:21:42,063-0500 WARN (periodic/2) [virt.vmstats] Missing stat: 'balloon.current' for vm a80596a2-f57b-4878-adc3-772363e42783 (vmstats:552)
2018-03-05 05:21:42,064-0500 ERROR (periodic/2) [virt.vmstats] VM metrics collection failed (vmstats:264)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vmstats.py", line 197, in send_metrics
    data[prefix + '.cpu.usage'] = stat['cpuUsage']
KeyError: 'cpuUsage'
</error>
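For reference, the failing line in the traceback is the plain dictionary
lookup of 'cpuUsage'. A minimal sketch of the kind of defensive handling
that would avoid aborting metrics collection on a missing stat (illustration
only, not the actual vdsm code or a proposed fix):

    # Illustration only: mirrors the shape of the failing line in
    # vdsm/virt/vmstats.py, but skips stats that are not present yet
    # (e.g. for a VM that has just migrated away) instead of raising.
    def send_cpu_usage(data, prefix, stat):
        cpu_usage = stat.get('cpuUsage')
        if cpu_usage is None:
            # Stat not reported for this VM at this point; skip it rather
            # than letting a KeyError abort the whole collection cycle.
            return
        data[prefix + '.cpu.usage'] = cpu_usage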
otopi on PPC (Was: [ACTION REQUIRED] upgrade to Fedora 27 for otopi on master)
by Yedidyah Bar David
(Changed subject)
On Fri, Mar 2, 2018 at 1:17 AM, Eli Mesika <emesika(a)redhat.com> wrote:
> I have upgraded to fc27
> I am not able to add a host to a PPC cluster : "no otopi module ..."
Where do you get this error?
>
> otopi is installed
On the engine? Host? Both? Generally it's enough to have it on the engine.
>
> rpm -qa |grep otopi
> otopi-java-1.8.0-0.0.master.20180228112959.git6eccc31.fc27.noarch
> otopi-common-1.8.0-0.0.master.20180228112959.git6eccc31.fc27.noarch
> python2-otopi-devtools-1.8.0-0.0.master.20180228112959.git6eccc31.fc27.noarch
> python2-otopi-1.8.0-0.0.master.20180228112959.git6eccc31.fc27.noarch
>
> Any ideas ?
I have no experience with PPC, sorry.
But in theory it should work the same, so let's try to debug "as usual".
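One quick thing to check is whether the otopi installed on that machine
actually defines the stage the setup plugin expects; a minimal sketch,
assuming the import convention shown in the traceback (otopi's constants
module imported as otopicons):

    # Quick sanity check for the installed otopi: the master ovirt-engine
    # setup plugins expect this stage constant to exist. Prints True on a
    # new-enough otopi, False otherwise.
    from otopi import constants as otopicons

    print(hasattr(otopicons.Stages, 'ANSWER_FILE_GENERATED'))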
Thanks and best regards,
>
>
> On Wed, Feb 28, 2018 at 7:01 PM, Greg Sheremeta <gshereme(a)redhat.com> wrote:
>>
>> [changing subject]
>>
>> Looks like we all have to upgrade Fedora devel machines to Fedora 27.
>>
>> Greg
>>
>> On Wed, Feb 28, 2018 at 11:05 AM, Yedidyah Bar David <didi(a)redhat.com>
>> wrote:
>>>
>>> (Adding devel, was asked twice already)
>>>
>>> On Wed, Feb 28, 2018 at 4:52 PM, Eli Mesika <emesika(a)redhat.com> wrote:
>>> > Hi Didi
>>> >
>>> > I am getting :
>>> >
>>> >
>>> > ***L:ERROR Internal error: type object 'Stages' has no attribute
>>> > 'ANSWER_FILE_GENERATED'
>>> > Traceback (most recent call last):
>>> > File "/usr/lib/python2.7/site-packages/otopi/__main__.py", line 88,
>>> > in
>>> > main
>>> > installer.execute()
>>> > File "/usr/lib/python2.7/site-packages/otopi/main.py", line 153, in
>>> > execute
>>> > sys.exc_info()[2],
>>> > File "/usr/lib/python2.7/site-packages/otopi/util.py", line 81, in
>>> > raiseExceptionInformation
>>> > exec('raise info[1], None, info[2]')
>>> > File "/usr/lib/python2.7/site-packages/otopi/main.py", line 147, in
>>> > execute
>>> > self.context.loadPlugins()
>>> > File "/usr/lib/python2.7/site-packages/otopi/context.py", line 859,
>>> > in
>>> > loadPlugins
>>> > self._loadPluginGroups(plugindir, needgroups, loadedgroups)
>>> > File "/usr/lib/python2.7/site-packages/otopi/context.py", line 113,
>>> > in
>>> > _loadPluginGroups
>>> > self._loadPlugins(path, path, groupname)
>>> > File "/usr/lib/python2.7/site-packages/otopi/context.py", line 70, in
>>> > _loadPlugins
>>> > self._loadPlugins(base, d, groupname)
>>> > File "/usr/lib/python2.7/site-packages/otopi/context.py", line 70, in
>>> > _loadPlugins
>>> > self._loadPlugins(base, d, groupname)
>>> > File "/usr/lib/python2.7/site-packages/otopi/context.py", line 101,
>>> > in
>>> > _loadPlugins
>>> > os.path.basename(path),
>>> > File "/usr/lib/python2.7/site-packages/otopi/util.py", line 105, in
>>> > loadModule
>>> > mod_desc
>>> > File
>>> >
>>> > "/home/emesika/ovirt-engine-1481197/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-common/base/core/__init__.py",
>>> > line 24, in <module>
>>> > from . import answerfile
>>> > File
>>> >
>>> > "/home/emesika/ovirt-engine-1481197/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-common/base/core/answerfile.py",
>>> > line 38, in <module>
>>> > class Plugin(plugin.PluginBase):
>>> > File
>>> >
>>> > "/home/emesika/ovirt-engine-1481197/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-common/base/core/answerfile.py",
>>> > line 60, in Plugin
>>> > otopicons.Stages.ANSWER_FILE_GENERATED,
>>> > PluginLoadException: type object 'Stages' has no attribute
>>> > 'ANSWER_FILE_GENERATED'
>>> >
>>>
>>> Please update otopi.
>>>
>>> If you are on fedora, you'll need to upgrade to fc27 first.
>>>
>>> Good luck,
>>> --
>>> Didi
>>> _______________________________________________
>>> Devel mailing list
>>> Devel(a)ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>>
>
--
Didi
FC27 updates broken for ovirt-4.2
by Viktor Mihajlovski
I just tried to update the oVirt packages on my FC27 host, but the update
failed due to https://gerrit.ovirt.org/#/c/87628/ :
vdsm now requires libvirt >= 3.10.0-132, but Fedora 27 has only 3.7.0-4 at
the moment.
It's generic Fedora 27, but since I run on s390, I'm cross-posting to the s390 list.
I guess there's a good reason to require libvirt 3.10. Is there any chance
that we can get libvirt updated for Fedora 27?
--
Regards,
Viktor Mihajlovski