Thanks to Eldan, we have a community ad for oVirt proposed on Stack Overflow.
Now we need some upvotes (the current threshold is 6) before SO starts serving it.
Have a SO account?
Want to help?
Just drop by and upvote.
Have a great weekend,
Please update otopi on your development machines to 1.7.1+.
It should be available from the nightly master snapshot.
A patch to make the engine require it will soon be merged.
Soon after that I hope to get several patches merged; the first one
will break engine-setup if you do not have otopi-1.7.1+.
If you do not upgrade by then, 'engine-setup' will fail silently.
If you run it as 'OTOPI_DEBUG=1 engine-setup', you will get:
_toposortBuildSequence failed: Cyclic dependencies found
('FATAL Internal error (main): Cyclic dependencies found',)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/__main__.py", line 88, in main
  File "/usr/lib/python2.7/site-packages/otopi/main.py", line 156, in execute
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 665,
    self._sequence = self._toposortBuildSequence()
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 624,
    raise RuntimeError(_('Cyclic dependencies found'))
RuntimeError: Cyclic dependencies found
With libvirt 3.2.0 and onwards, it seems we now have the tools to solve
this problem and eventually get rid of the disk polling we do. This change is
expected to have a huge impact on performance, so I'm working on it.
I had plans for a comprehensive refactoring in this area, but a solution
backportable to 4.1.z is appealing, so I started with that first, saving
the refactoring (which I still very much want) for later.
So, a quick summary: libvirt >= 3.2.0 allows setting a threshold on any
node in the backing chain of each drive of a VM, and fires an event
exactly once when that threshold is crossed. The event needs to be
explicitly re-armed after it fires.
This is exactly what we need to get rid of polling in the steady state,
so far so good.
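As a sketch of those one-shot semantics (in libvirt the actual entry points are virDomainSetBlockThreshold and the VIR_DOMAIN_EVENT_ID_BLOCK_THRESHOLD event; the class below is only an illustrative model, not Vdsm or libvirt code):

```python
class DriveThreshold:
    """Models libvirt's one-shot block-threshold semantics:
    once armed, the event fires exactly once when the allocation
    crosses the threshold, then must be explicitly re-armed."""

    def __init__(self):
        self._threshold = None  # None means "not armed"
        self._events = []

    def set_threshold(self, bytes_threshold):
        # Stands in for virDomainSetBlockThreshold(): arming is explicit.
        self._threshold = bytes_threshold

    def on_allocation(self, allocated):
        # Called as the guest writes; fires at most once per arming.
        if self._threshold is not None and allocated >= self._threshold:
            self._events.append(allocated)
            self._threshold = None  # one-shot: disarmed until re-armed
        return list(self._events)
```

For example, with a 1024-byte threshold, writing 512 bytes fires nothing, crossing to 2048 fires one event, and further growth fires nothing until set_threshold() is called again.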
The problem is: we can't use this for some important flows we have,
which involve disks not (yet) attached to a given VM.
Possibly affected flows:
- live storage migration:
we use flags = (libvirt.VIR_DOMAIN_BLOCK_COPY_SHALLOW |
meaning that Vdsm is in charge of handling the volume
we use snapFlags = (libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_REUSE_EXT |
(same meaning as above)
- live merge: should be OK (according to a glance at the source and a
chat with Adam).
So it looks like we will need to bridge this gap.
We can still use the BLOCK_THRESHOLD event for the steady state, and
avoid polling in the vast majority of cases.
By "steady state" I mean that the VM is running, with no
administrative operation (snapshot, live merge, live storage
migration...) in progress.
I think it is fair to assume that VMs are in this state the vast
majority of the time.
For the very important cases in which we cannot depend on events, we can
fall back to polling, but in a smarter way:
instead of polling everything every 2s, let's poll just the drives
involved in the ongoing operations.
Those should be far fewer than the total number of drives, and polled
for a far shorter time than today, so polling should be practical.
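A minimal sketch of that fallback (all names here are hypothetical, not actual Vdsm code): keep a set of the drives with an operation in progress, and have each polling cycle touch only those instead of every drive of every VM:

```python
class SelectivePoller:
    """Polls only drives involved in an ongoing operation
    (live storage migration, snapshot, ...), not all drives."""

    def __init__(self):
        self._active = set()  # drive names with an operation in progress

    def operation_started(self, drive):
        self._active.add(drive)

    def operation_finished(self, drive):
        # Disable polling on completion (or on error).
        self._active.discard(drive)

    def poll_cycle(self, check):
        # Runs every interval; only the active drives are checked,
        # which is far cheaper than polling everything every 2s.
        return {drive: check(drive) for drive in sorted(self._active)}
```

The design point is that the poll set shrinks back to empty as operations complete, so in the steady state the poller does nothing at all.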
Since the event fires only once, we will need to re-arm it only if the
operation is ongoing, and only just before starting it (both conditions
are easy to check).
We can disable the polling on completion, or on error. This per se is
easy, but we will need a careful review of the flows, and perhaps some
safety nets in place.
Anyway, should we fail to disable the polling, we will "just" have some
unneeded polling.
On recovery, we will need to make sure to re-arm all the relevant events,
but we can just plug into the recovery we must do already, so this should
be easy as well.
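Sketching that recovery step (hypothetical names; the real hook would live in Vdsm's existing VM recovery path): iterate the recovered drives and re-arm a threshold for each one that is not part of an ongoing operation, since those stay on the polling fallback:

```python
def rearm_on_recovery(drives, armed, in_operation):
    """On recovery, re-arm the block-threshold event for every drive
    in steady state; drives with an ongoing operation are skipped and
    stay covered by the selective-polling fallback."""
    rearmed = []
    for drive in drives:
        if drive in in_operation:
            continue  # polled instead, until the operation finishes
        if drive not in armed:
            armed.add(drive)  # stands in for virDomainSetBlockThreshold()
            rearmed.append(drive)
    return rearmed
```

Calling this from the existing recovery flow keeps the re-arming idempotent: a drive already armed is left alone, so running recovery twice does no harm.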
So it seems to me this could fly, and we can actually get the
performance benefits of events.
However, due to the fact that we need to review some existing and
delicate flows, I think we should still keep the current polling code
around for the next release.
I believe the best route is:
1. offer the new event-based code for 4.2, keep the polling around.
Default to events for performance
2. remove the polling completely in 4.3
I'm currently working on the patches here:
Even though the basics are in place, I don't think they are ready for
review yet.
Comments welcome, as usual.
Senior SW Eng., Virtualization R&D
IRC: fromani github: @fromanirh
Jenkins ran out of file descriptors, so we had to restart it to bump the limit.
Jobs should be running already, but the UI will probably take
30-40 minutes to become available.
Issues seem to have started around 15:00 UTC yesterday (18:00 IST) so
you may need to re-trigger jobs for patches submitted since then.
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
See the EOL announcement below.
Are we going to stop delivering fc24 builds for 4.1? Note we are not
building fc25 packages for 4.1 yet.
---------- Forwarded message ----------
From: "Jaroslav Reznik" <jreznik(a)redhat.com>
Date: 21 Jul 2017 15:38
Subject: Reminder: Fedora 24 End Of Life on 2017-Aug-08
A: "discussions related to Fedora, Development" <
devel(a)lists.fedoraproject.org>, "devel-announce" <
Fedora 24 support is going to be EOL on Tuesday, August 08th, 2017.
> On this day we are going to close all the Fedora 24 bugs which are
> still open.
> You have the last few weeks to submit your updates to Fedora 24, if
> you have any, before the Fedora 24 release becomes unsupported.
>  https://fedoraproject.org/wiki/Fedora_Program_Management/HouseKeeping/
> devel mailing list -- devel(a)lists.fedoraproject.org
> To unsubscribe send an email to devel-leave(a)lists.fedoraproject.org
In order to free space in our Jenkins master host (jenkins.ovirt.org),
we'll need to remove some of the jobs and their artifacts.
For now, we are planning to remove all 4.0 jobs.
If there is any job that you want to keep, please reply to this email with
the job name.
Associate SW Engineer
EMEA VIRTUALIZATION R&D
Red Hat Israel <https://www.redhat.com/>
dbelenky(a)redhat.com IRC: #rhev-integ, #rhev-dev