I would like to kindly ask everyone who is an Ansible or oVirt user to
test the new Ansible oVirt modules. For everyone who is familiar
with Ansible and oVirt, this describes the steps you need to follow to
set up the oVirt modules library and start using those modules (most of
those modules will be available in Ansible 2.3; some of them are already
available).
If you have any issue setting this up, please contact me, and I will do my
best to help you.
If you have an issue which you think is a bug, please open an issue
here. Please note that Ansible is merging its repositories, so starting
next week it will actually be stored here. If you are missing
anything, please open an issue as well, or just contact me, and I will
fix it. You are also very welcome to send a PR with fixes.
For those who don't have a test environment to test against,
I've created a Vagrant project that will deploy an oVirt instance for you
using Ansible playbooks. You can find out how to use it here.
The repository also contains a few examples, so you don't have to
copy-paste them from the source.
Thanks to all for reading this, and for any feedback.
oVirt's development is continuing apace as the calendar year draws to a
close and we get ready for a new year of development, evangelism, and
making virtual machine management a simple process for everyone.
Here's what happened in November of 2016:
oVirt 4.0.6 Third Release Candidate is now available
oVirt 4.1.0 First Beta Release is now available for testing
In the Community
Testing ovirt-engine changes without a real cluster
Request for oVirt Ansible modules testing feedback
Deep Dives and Technical Discussions
Important Open Source Cloud Products [German]
Red Hat IT runs OpenShift Container Platform on Red Hat Virtualization and
Keynote: Blurring the Lines: The Continuum Between Containers and VMs
Quick Guide: How to Plan Your Red Hat Virtualization 4.0 Deployment
A Decade of KVM [Chinese]
Expansion of iptables Rules for oVirt 4.0 [Russian]
Principal Community Analyst
Open Source and Standards
I am trying to run the Python script '/sbin/gluster-eventsapi' via a vdsm verb, which internally imports some Python modules from /usr/lib/python2.7/site-packages/gluster/cliutils/. But it fails with an import error. The following error is seen in the supervdsm log:
MainProcess|Thread-1::DEBUG::2016-11-28 16:54:35,130::commands::93::root::(execCmd) FAILED: <err> = 'Traceback (most recent call last):\n File "/sbin/gluster-eventsapi", line 25, in <module>\n from gluster.cliutils.cliutils import (Cmd, execute, node_output_ok, node_output_notok,\nImportError: No module named cliutils.cliutils\n'; <rc> = 1
I think the import statement "from gluster.cliutils.cliutils import (Cmd, execute, node_output_ok, node_output_notok)" in the python script resolves to '/usr/share/vdsm/gluster' instead of /usr/lib/python2.7/site-packages/gluster/cliutils.
I see the following in python sys.path while executing a python script from vdsm.
['/usr/libexec/glusterfs', '/usr/share/vdsm', '/usr/lib64/python27.zip', '/usr/lib64/python2.7', '/usr/lib64/python2.7/plat-linux2', '/usr/lib64/python2.7/lib-tk', '/usr/lib64/python2.7/lib-old', '/usr/lib64/python2.7/lib-dynload', '/usr/lib64/python2.7/site-packages', '/usr/lib64/python2.7/site-packages/gtk-2.0', '/usr/lib/python2.7/site-packages']
It looks like '/usr/share/vdsm' takes precedence over '/usr/lib64/python2.7/site-packages'.
Can someone suggest a way to fix this issue?
Note: '/sbin/gluster-eventsapi' works perfectly when run directly from the CLI.
Related vdsm patch: https://gerrit.ovirt.org/#/c/67168/2/vdsm/gluster/events.py
If you use repoman, please be aware that it currently has a hardcoded list of
packages that default to being distroless. This is controlled by the
"to_all_distros" configuration setting, but if you do not set it, then the
defaults are used.
For oVirt experimental we decided that it is better for all packages to have
the distro specified in the spec. This is also in line with most distro
packaging guidelines. But with the current defaults, repoman will consider
those packages to be distroless even when they are not, and specifying an
empty list in the configuration is a silly way to override this.
This change introduces an incompatibility, but you can still use this feature
by specifying the to_all_distros package list explicitly in the config. So it
is just a change of default values at this point:
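As an illustration, keeping the old behaviour would mean listing the packages explicitly. The option name is from repoman, but the package names and the surrounding file layout below are placeholders -- check the repoman documentation for the exact format:

```
# Explicitly list the packages that should still be treated as distroless
# (package names here are placeholders):
to_all_distros = package-a, package-b
```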
If somebody is not OK with the defaults change and needs time to update
their configuration, please let me know. Also, if you are not sure, we can go
another way and ship an intermediate repoman version that does not make the
change but issues a deprecation warning if the defaults are being used, so
everybody can test their setups. However, I think that might not actually be
needed in this case, as it is a configuration matter.
Please feel free to comment on change request.
Senior Software Engineer - RHEV CI - Red Hat
I would like to address a concern that has been raised to us by
multiple developers, and reach an agreement on how (and if) to remedy it.
Let's assume the following situation:
We have a Git repo in Gerrit with top commit C0 in master.
At time t0, developers Alice and Bob push patches P1 and P2 respectively
to master, so that we end up with the following situation in git:
C0 <= P1 (this is Alice's patch)
C0 <= P2 (this is Bob's patch)
At time t1, CI runs for both patches, checking the code as it looks for
each patch. Let's assume CI is successful for both.
At time t2, Alice submits her patch and Gerrit merges it, resulting in
the following situation in master:
C0 <= P1
At time t3, Bob submits his patch. Gerrit, seeing that master has changed,
rebases the patch and merges it; the resulting situation (if the
rebase is successful) is:
C0 <= P1 <= P2
This means that the resulting code was never tested in CI. This, in
turn, causes various failures to show up post-merge despite having
pre-merge CI run successfully.
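The "can this merge fast-forward?" question boils down to an ancestry check on the commit graph. Here is a toy model of the scenario above (illustrative Python, not Gerrit's actual code):

```python
# Toy commit graph for the scenario above: P1 and P2 are both based on C0.
parents = {"C0": None, "P1": "C0", "P2": "C0"}

def is_ancestor(a, b):
    """Return True if commit a is an ancestor of (or equal to) commit b."""
    while b is not None:
        if a == b:
            return True
        b = parents[b]
    return False

# After Alice's P1 merges, master points at P1.
master = "P1"

# Under "Fast Forward Only", P2 may merge only if master is already an
# ancestor of P2. It is not, so the submit would be refused and Bob asked
# to rebase, producing a new patch set that CI can check.
print(is_ancestor(master, "P2"))  # False
```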
This situation is a result of the way our repos are currently
configured. Most repos ATM are configured with the "Rebase If
Necessary" submit type. This means that Gerrit tries to automatically
rebase patches on submit, as in the scenario above.
We could, instead, configure the repos to use the "Fast Forward Only"
submit type. In that case, when Bob submits, Gerrit refuses to
merge and asks Bob to rebase (while offering a convenient button to do
it). When he does, a new patch set gets pushed, and subsequently
checked by CI.
I recommend we switch all projects to use the "Fast Forward Only" submit type.
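For reference, the submit type can be set per project in Gerrit's project.config (on the refs/meta/config branch); the relevant section looks roughly like this (verify against the documentation for our Gerrit version):

```
[submit]
    action = fast forward only
```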
For the upcoming oVirt 4.1, the UX team has been focused hard on webadmin
performance improvements. There have been some reports of UI
sluggishness in both 3.6 and 4.0, usually after the browser had been open
for some time, and usually in scale environments.
After some research, we determined that the primary cause of this
sluggishness was memory leaks.
We embarked on several weeks of hunting down memory leak bugs and squashing
them. Alexander Wels and Vojtech Szocs led this work, and I helped test the
performance of each patch as they created them. As they created patches to
squash leaks, performance kept getting better and better. Today we've
merged the last of our patches [*], and I'm happy to announce that we are
now seeing much better UI performance on 4.1-master and 4.0.6.
Over the course of several hours with the browser window open, users should
see no sluggishness at all.
[*] This last patch switches oVirt from using de-rpc to gwt-rpc in the
frontend. This improves performance, but it also allows us to upgrade to
GWT 2.8. We'd been previously blocked on that.
If you're interested in UI performance testing, continue reading. If not,
you can stop here :)
To verify our performance improvements, we took some simple measurements
using selenium webdriver. The tests were unscientific, but very helpful. We
ran a webdriver flow through oVirt that clicked some buttons and tabs and
refreshed some grids. We did it a few hundred or thousand times. The tests
were run using stubbed hosts (ovirt-vdsmfake) so that only the engine and
UI were under test.
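For anyone curious, the shape of such a measurement loop can be sketched like this (illustrative Python, not the actual test code; the real runs drove webadmin through selenium webdriver):

```python
import time

def measure(flow, loops):
    """Run `flow` repeatedly, returning per-loop response times in ms."""
    samples = []
    for _ in range(loops):
        start = time.monotonic()
        flow()  # e.g. click tabs/buttons and refresh grids via webdriver
        samples.append((time.monotonic() - start) * 1000.0)
    return samples

# A flat series of samples over many loops suggests no leak-induced
# slowdown; steadily growing samples point at a leak.
```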
Below are the important takeaways. The x axis is time, and each point on a
graph is a loop through the same webdriver flow. The (ms) y axes are
response times, and memory is in MB.
In this graph, we compare oVirt 4.1 with and without our most impactful
patch applied. As you can see, with the patch applied, response time stays
flat for 200 loops of my test script over the course of 18 and 43 minutes.
Without the patch applied, response time quickly degraded such that 200
loops of my test script took 1 hr 2 minutes vs. 18 minutes with the patch
applied -- a 66% improvement!
[image: Inline image 1]
In this graph [ignore the spike], we tested oVirt hard for 6 hours 25
minutes (2000 loops). As you can see, the response times stay relatively
flat over 6 hours! This is a great improvement. Do note that the memory is
still growing, albeit much more slowly now. You can see towards the end of
this run, maybe around hour 5, that the deviation starts to go up (the line
thickens). Takeaway: maybe refresh your browser after many hours of having
webadmin open. But, this is a stress test -- I'm betting users won't notice
this slowdown after even 6 hours of regular webadmin use or idling.
Last, here is a graph that shows gwt-rpc performing slightly better than
de-rpc. Memory consumption is about the same -- gwt-rpc is just faster.
Reply with any questions or concerns. Thanks!
Greg Sheremeta, MBA
Red Hat, Inc.
Sr. Software Engineer