Following the work to omit the deathSignal attribute from our cpopen
implementation, we posted https://gerrit.ovirt.org/51407, which is ready for
review.
Currently, the locations that should use it are:
(I wrote above who I expect to check each area and post a patch for it -
we'll discuss it during the next vdsm-sync to follow up on the work)
vdsm/v2v.py - in _start_virt_v2v you return an AsyncProc that should call
kill() on failure
vdsm_hooks/checkimages/before_vm_start.py - in checkImage - the code looks
ok, but check whether it would be better to use the terminating decorator -
I think it would be nicer
vdsm/storage/mount.py - looks ok, but I would prefer to use terminating there
vdsm/storage/hba.py - good handling; use terminating as well
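For reference, a minimal sketch of what such a terminating context manager could look like (illustration only - the real helper is the one proposed in https://gerrit.ovirt.org/51407):

```python
import subprocess
from contextlib import contextmanager

@contextmanager
def terminating(proc):
    """Ensure proc is killed on exit instead of relying on deathSignal."""
    try:
        yield proc
    finally:
        if proc.poll() is None:  # still running - make sure it dies
            proc.kill()
            proc.wait()

# Even if the body raises, the child is reliably reaped:
with terminating(subprocess.Popen(["sleep", "60"])) as p:
    pass  # work with p here
assert p.returncode is not None
```

The same pattern covers the v2v kill-on-fail case: wrap the returned process and let the context manager do the cleanup.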
Please check your usage of the returned process and make sure you're not
depending on deathSignal for it to die properly on crash.
Some places define deathSignal for no reason - the call is synchronous -
please remove it from those places:
If you can't get to it in a reasonable time, add the task to the list
and someone else will pick it up.
Please try to go over this before the sync call.
Recently I looked at how many (indirect) dependencies vdsm is pulling in.
I did the comparison using the Node Next squashfs image.
Node Next uses the CentOS 7 @core group and installs vdsm on top of it.
The conclusion is:

                         without-vdsm   with-vdsm   Ratio
  Size of squashfs [MB]        292072      450384   1.542
  Package count                   297         604   2.034
  Disk usage [MB]              839596     1382576   1.647

(Without vdsm: @core group + lvm + EFI bits)
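As a sanity check, the ratio column can be re-derived from the raw numbers in the table:

```python
# Re-derive the ratio column of the table above.
rows = {
    "Size of squashfs": (292072, 450384),
    "Package count": (297, 604),
    "Disk usage": (839596, 1382576),
}
for name, (without_vdsm, with_vdsm) in rows.items():
    print("%s: %.3f" % (name, with_vdsm / float(without_vdsm)))
# Size of squashfs: 1.542
# Package count: 2.034
# Disk usage: 1.647
```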
So we see that vdsm doubles the number of packages and increases the
disk-space requirements by 64%.
Two takeaways: CentOS @core could be smaller, and we can take another
look at vdsm's direct and indirect dependencies.
FYI, when I performed my simple Vdsm leak test some time ago, I found
a leak in the Python multiprocessing module, where the
multiprocessing.util._afterfork_registry dictionary grows on each
supervdsm call. It's not very dramatic in terms of memory usage,
but the possibly large dictionary may cause other performance issues.
It's better to have this fixed; it looks like a Python bug, which I reported.
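The growth is easy to observe; a hypothetical sketch (the Handle class below is a stand-in for whatever objects supervdsm calls register) showing how register_after_fork entries pile up in that dictionary:

```python
import multiprocessing.util as util

class Handle(object):
    """Stand-in for an object that registers an after-fork callback."""

before = len(util._afterfork_registry)
handles = [Handle() for _ in range(10)]
for h in handles:
    util.register_after_fork(h, lambda obj: None)
after = len(util._afterfork_registry)
print(after - before)  # one registry entry per live object: 10
```

The registry holds its values weakly, so entries disappear once the objects are collected; the leak shows up when the registered objects stay referenced across calls.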
Can we drop FC22 testing in Jenkins now that the FC23 jobs are up and running?
It will reduce Jenkins load. If needed we can keep the FC22 builds, just
dropping the check jobs.
Somewhere during master upgrades, my admin@internal somehow did not get
permissions to create VMs. It is complaining about no permissions to assign a
CPU profile. It's been a while since I tried creating a VM. Can anyone point me
to what is missing and how to fix it?
This is the error in the log:
2016-01-18 15:59:46,843 INFO [org.ovirt.engine.core.bll.AddVmCommand]
(default task-56) [a6fd12b] Lock Acquired to object 'EngineLock:
2016-01-18 15:59:46,893 WARN [org.ovirt.engine.core.bll.AddVmCommand]
(default task-56)  Validation of action 'AddVm' failed for user
$cpuProfileId f9a05b39-9f57-4655-aab2-2846fe6519f6,$cpuProfileName DEV35
2016-01-18 15:59:46,894 INFO [org.ovirt.engine.core.bll.AddVmCommand]
(default task-56)  Lock freed to object 'EngineLock:
The specification of the RESTAPI (a.k.a. the model) and the tools that
process it (a.k.a. the metamodel) have been moved to separate git
repositories:

git clone https://gerrit.ovirt.org/ovirt-engine-api-model
git clone https://gerrit.ovirt.org/ovirt-engine-api-metamodel

They are also mirrored on GitHub:

https://github.com/oVirt/ovirt-engine-api-model
https://github.com/oVirt/ovirt-engine-api-metamodel
Currently I'm manually keeping these repositories in sync with the
ovirt-engine repository, but I will very soon remove all this code from
the ovirt-engine repository:
restapi: Move model to separate repository
This means that once the above patch is merged, which will be very soon,
probably this week, when you need to make changes to the specification of
the RESTAPI you will first need to prepare a patch for
ovirt-engine-api-model, submit it, review it, etc. Once that patch is
merged I will do a new release of the model. Then you will need to
update the root POM of the engine to use the new version of the model,
and adjust the engine to work with the new version of the model.
As you will probably want to make these changes and test them before
publishing anything or asking for any review, I'd suggest the following
procedure:
1. Checkout the model project:
$ git clone git://gerrit.ovirt.org/ovirt-engine-api-model
2. Check the version number in the root POM. It should be a SNAPSHOT
version, unless you explicitly checkout from a tag. For example,
currently it is 4.0.2-SNAPSHOT.
3. Make your changes to the model, and then install it to your local
Maven repository:
$ mvn clean install
4. Add to your $HOME/.m2/settings.xml a profile that is activated
automatically and that changes the value of the "model.version" property
used by the engine:
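For example, something like the following (the profile id "local-model" is arbitrary, and 4.0.2-SNAPSHOT is just the version mentioned above - use whatever version your local model build installed):

```xml
<settings>
  <profiles>
    <profile>
      <id>local-model</id>
      <activation>
        <activeByDefault>true</activeByDefault>
      </activation>
      <properties>
        <!-- The version installed to your local repository in step 3 -->
        <model.version>4.0.2-SNAPSHOT</model.version>
      </properties>
    </profile>
  </profiles>
</settings>
```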
5. Make your changes to the engine and build it as usual; it will use your
modified version of the model.
6. Publish/review the changes to the model.
7. Wait for the new model release.
8. Publish/review the changes to the engine, including the change of the
model version in the root POM. Note that you can publish/review these
changes without waiting for the new model release, but the Jenkins tests
will fail until that release is available.
Note also that changes to the model will need to be properly documented
in order to be accepted. There are some instructions on how to document
the model here:
All these changes affect only the master branch, nothing changed in
these regards in the 3.6 branch.
(Participated: Piotr, Francesco, Adam, Nir, Edy and I)
We discussed the following topics:
- Python3 - the terminating contextmanager is ready
(https://gerrit.ovirt.org/51407) - this allows us to remove deathSignal
usages and safely kill processes on failures.
Francesco already covered one location (https://gerrit.ovirt.org/#/c/52349/);
this week I hope we'll *start* the work to cover the rest of the code as well.
If any gaps or dependencies come up, I asked that they be stated in the
python3 work and we'll discuss them in the next call.
Next will be to remove all deathSignal usages - currently there are
redundant places where it is being used that can be removed today.
Hopefully patches for that will be posted during the week.
The next step will be to use Popen where cpopen is not available
(https://gerrit.ovirt.org/48384), and we'll move forward with migrating more
tests to python3.
- Vdsm communication - we did a little summary of what was explored and
what we want to have. It was only a discussion to see where we're heading.
External broker - in the past we checked Qpid (AMQP) and ActiveMQ (STOMP).
In Qpid we found many bugs and gaps; we thought about using only its
client for point-to-point communication, but then found it too complex.
ActiveMQ is in Java and was found to be too slow and to consume a lot of
memory, so we thought about having only one instance per cluster, or maybe
running it on an external host that doesn't run vdsm, or putting it on the
engine host - for that we would need to change all of the "host lifecycle"
we have today, so we left it as well (Piotr, fill me in if I missed
something).
We have an implementation of a "mini broker" as part of vdsm which performs
what we need; we can run it as a process external to vdsm and forward
messages to clients. There are alternatives such as ZeroMQ that we can
explore.
Bottom line - we want to improve the communication inside the host, between
vdsm and other services such as supervdsm and mom, and maybe later we'll
split the current implementation of vdsm into more services that will run in
parallel (such as vm monitoring, vdsm-storage and so on). For that we can
use dbus, multiprocessing (uds), or some kind of a broker.
This led us to talk about service separation, which we want to design.
- Vdsm contract - Piotr sent a YAML schema plan in
https://gerrit.ovirt.org/#/c/52404 - please ack that you agree with the
concept. Piotr will move on with that and start to migrate all verbs to that
form. For the next version we shall have both types of schema available, and
start to add new verb and event structures in the new YAML format.
- Exceptions - Nir raised that we should define virt-specific exceptions -
Francesco can elaborate on the plans next week.
- Network updates - Edy is checking how to improve network configuration
time. Currently vdsm modifies ifcfg files, and after changing them it takes
time for the updates to take effect.
Interesting call, guys - I encourage more developers to participate, listen,
and influence vdsm 4.0 directions.
See you next week.
Sorry for the short notice of this email.
The state of branch ovirt-3.6 is as follows:
tag      sha      subject                                                       bug      target release
         de54e59  virt: Don't expose GuestAgent.guestInfo directly              1295428  3.6.3
         83c4ac8  supervdsm: failed validateAccess leaves pipes open            1271575  3.6.3
         37bda50  vm: safer handling of conf in restore                         1296936  3.6.3
         57b1814  virt: safer handling of migration parameters                  1296936  3.6.3
         9b29ddd  lun: Serial attr should not passed to libvirt for lun disks.  1291930  3.6.2  <<<<
         7b94531  migration: use context manager for semaphore                  1296936  3.6.3
         508b2fd  virt: Correct epoll unregistration usage in vmchannels        1226911  3.6.3
         68a1b69  virt: Set cloexec flag on channel sockets                     1226911  3.6.3
4.17.17  505026d  spec: require libvirt to fix hotplugging
So, for the next tag the plan is to
1. branch out ovirt-3.6.2 from tag 4.17.17. This branch is expected to be short-lived (basically only for this RC).
2. apply commit 9b29ddd only
3. tag 4.17.18 from branch ovirt-3.6.2
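For illustration, the same steps played out on a throwaway repository (tag names with a "v" prefix are an assumption here; in the real tree only the checkout/cherry-pick/tag commands apply):

```shell
# Demonstrate the branch / cherry-pick / tag plan on a scratch repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo base > spec; git add spec; git commit -qm 'base'
git tag v4.17.17                 # stands in for the released 4.17.17 tag
echo fix >> spec; git commit -qam 'lun: serial attr fix'
fix_sha=$(git rev-parse HEAD)    # stands in for commit 9b29ddd
git checkout -q -b ovirt-3.6.2 v4.17.17   # 1. branch out from the tag
git cherry-pick "$fix_sha" >/dev/null     # 2. apply only that commit
git tag v4.17.18                          # 3. tag the new release
git tag -l
```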
I see we have only two patches in the queue for ovirt-3.6.
Do we need them in this build?
Should another 3.6.2 build be required, backports will be needed ALSO on
branch ovirt-3.6.2.
I will start implementing the plan before lunch if no one objects.
Red Hat Engineering Virtualization R&D