I'm working to brush up and enhance my old hack https://gerrit.ovirt.org/#/c/37827/1
That patch adds a new MOM interface, to talk with VDSM using the RPC interface.
On top of that, I want to make efficient use of the VDSM API (avoiding redundant calls, possibly
issuing only one getAllVmStats call and caching the results, and so forth).
The next step will be to backport the optimizations to the current vdsmInterface,
or maybe even to replace the old vdsmInterface with the new one I'm developing :)
I'd like to use the blessed JSON-RPC interface, but what's the recommended way to
do that? What is (or will be!) the official recommended VDSM external client interface?
I thought about patch https://gerrit.ovirt.org/#/c/39203/
But my _impression_ is that that patch will depend on VDSM's internal reactor, and is thus not
well suited for use in an external process.
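To illustrate the caching idea mentioned above, here is a minimal sketch. The class and method names (other than getAllVmStats itself) are hypothetical, not the actual MOM/VDSM API; the point is just to issue one stats call and reuse the result within a short interval:

```python
import time


class CachingVdsmClient:
    """Wrap a VDSM client so getAllVmStats() is called at most once
    per TTL window (hypothetical sketch, not the real MOM interface)."""

    def __init__(self, client, ttl=5.0, clock=time.monotonic):
        self._client = client      # any object exposing getAllVmStats()
        self._ttl = ttl            # seconds to keep a cached result
        self._clock = clock        # injectable clock, eases testing
        self._cached_at = None
        self._stats = None

    def getAllVmStats(self):
        now = self._clock()
        if self._cached_at is None or now - self._cached_at >= self._ttl:
            # Cache is empty or stale: hit VDSM once and remember when.
            self._stats = self._client.getAllVmStats()
            self._cached_at = now
        return self._stats
```

Multiple MOM collectors could then share one wrapper instance instead of each issuing its own per-VM query.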
Red Hat Engineering Virtualization R&D
The oVirt team is pleased to announce that the 3.5.3 Second Release Candidate is now
available for testing as of May 28th 2015.
The release candidate is available now for Fedora 20,
Red Hat Enterprise Linux 6.6, CentOS 6.6 (or similar) and
Red Hat Enterprise Linux 7.1, CentOS 7.1 (or similar).
This release of oVirt includes numerous bug fixes and one CVE fix for the VENOM Vulnerability.
See the release notes for a list of the new features and bugs fixed.
Please refer to the release notes for installation / upgrade instructions.
A new oVirt Live ISO is already available as well.
Please note that mirrors usually need about one day to synchronize.
Please refer to the release notes for known issues in this release.
I tried to have hosted-engine deploying the engine appliance over oVirt node. I think it will be quite a common scenario.
I tried with an oVirt node build from yesterday.
Unfortunately I'm not able to complete the setup because oVirt node gets the CPU load indefinitely stuck at 100% and so it's almost unresponsive.
The issue seems to be related to the vdsmd daemon, which couldn't really start and so retries indefinitely, using all the available CPU power (it also runs with niceness -20...).
[root@node36 admin]# grep "Unit vdsmd.service entered failed state." /var/log/messages | wc -l
It tried 368 times in a row in a few minutes.
With journalctl I can read:
May 29 10:06:45 node36 systemd: Unit vdsmd.service entered failed state.
May 29 10:06:45 node36 systemd: vdsmd.service holdoff time over, scheduling restart.
May 29 10:06:45 node36 systemd: Stopping Virtual Desktop Server Manager...
May 29 10:06:45 node36 systemd: Starting Virtual Desktop Server Manager...
May 29 10:06:45 node36 vdsmd_init_common.sh: vdsm: Running mkdirs
May 29 10:06:45 node36 vdsmd_init_common.sh: vdsm: Running configure_coredump
May 29 10:06:45 node36 vdsmd_init_common.sh: vdsm: Running configure_vdsm_logs
May 29 10:06:45 node36 vdsmd_init_common.sh: vdsm: Running wait_for_network
May 29 10:06:45 node36 vdsmd_init_common.sh: vdsm: Running run_init_hooks
May 29 10:06:46 node36 vdsmd_init_common.sh: vdsm: Running upgraded_version_check
May 29 10:06:46 node36 vdsmd_init_common.sh: vdsm: Running check_is_configured
May 29 10:06:46 node36 vdsmd_init_common.sh: vdsm: Running validate_configuration
May 29 10:06:47 node36 vdsmd_init_common.sh: vdsm: Running prepare_transient_repository
May 29 10:06:49 node36 vdsmd_init_common.sh: vdsm: Running syslog_available
May 29 10:06:49 node36 vdsmd_init_common.sh: vdsm: Running nwfilter
May 29 10:06:50 node36 vdsmd_init_common.sh: vdsm: Running dummybr
May 29 10:06:51 node36 vdsmd_init_common.sh: vdsm: Running load_needed_modules
May 29 10:06:51 node36 vdsmd_init_common.sh: vdsm: Running tune_system
May 29 10:06:51 node36 vdsmd_init_common.sh: vdsm: Running test_space
May 29 10:06:51 node36 vdsmd_init_common.sh: vdsm: Running test_lo
May 29 10:06:51 node36 systemd: Started Virtual Desktop Server Manager.
May 29 10:06:51 node36 systemd: vdsmd.service: main process exited, code=exited, status=1/FAILURE
May 29 10:06:51 node36 vdsmd_init_common.sh: vdsm: Running run_final_hooks
May 29 10:06:52 node36 systemd: Unit vdsmd.service entered failed state.
May 29 10:06:52 node36 systemd: vdsmd.service holdoff time over, scheduling restart.
May 29 10:06:52 node36 systemd: Stopping Virtual Desktop Server Manager...
May 29 10:06:52 node36 systemd: Starting Virtual Desktop Server Manager...
...and this sequence is repeated many times.
/var/log/vdsm/vdsm.log is empty.
[root@node36 admin]# /usr/share/vdsm/daemonAdapter -0 /dev/null -1 /dev/null -2 /dev/null /usr/share/vdsm/vdsm; echo $?
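For what it's worth, a small pipeline like this (a sketch, assuming the syslog line format shown above) summarizes how many failure events happened per minute, which makes the restart rate easy to see:

```shell
# Count vdsmd failure events per minute from syslog-format lines on
# stdin, e.g.:  journalctl -u vdsmd | sh thisscript.sh
# or:           grep vdsmd /var/log/messages | sh thisscript.sh
grep 'entered failed state' \
  | awk '{ print $1, $2, substr($3, 1, 5) }' \
  | sort | uniq -c
```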
A user was unable to import a VM template from a gluster export storage
because of various reasons, and since I'm afraid this could hit many
users (losing many days waiting for import tasks to...fail) I would like
to share the issues and some ideas with you:
1) Slow sparse files on GlusterFS:
FYI, working with sparse files on GlusterFS mounts will be very slow
until https://bugzilla.redhat.com/show_bug.cgi?id=1220173 is implemented.
Maybe the same applies to other network file-systems as well.
2) Engine Task timeout:
The last times the user tried to import this template (using nightly
builds), the task was apparently deleted by Engine because of timeout
while the qemu-img process continued running (and consuming resources).
I'm not sure, but I believe the tasks are deleted by Engine after 4 or 5
days, no matter whether the qemu-img process is still running (!?).
Thus, slow import tasks will never finish.
Can someone please confirm?
Maybe we can improve something here to support long import tasks.
3) Wrong SPM:
If $SRC is on host-1, $DST on host-2 and the SPM is host-3, the image data
will unnecessarily cross host-3, killing cluster performance.
I guess Engine should automatically choose the appropriate SPM host
depending on the tasks (host-1 or host-2 in this case).
But it seems like oVirt doesn't currently support changing the SPM when
there are running tasks, based on the fact that the user gets a warning
when trying to do it manually.
4) Convert Optimization:
I see oVirt is running:
/usr/bin/qemu-img convert -t none -T none -f raw $SRC -O raw $DST
In this case, $SRC is in raw format and there are no backing chains.
Shouldn't we do a simple 'cp --sparse=always' instead of a 'qemu-img
convert' in this case?
I guess qemu-img should be doing this optimization for us, but maybe
this raw-to-raw conversion use case is just too silly and will not be
considered by the qemu-img maintainers.
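For raw, chain-less images the alternative suggested above would look roughly like this (illustrative only; the file names are placeholders for $SRC and $DST):

```shell
# Create a fully sparse 1 MiB raw "image", then copy it while
# preserving holes, with no format-conversion step in between.
truncate -s 1M src.img
cp --sparse=always src.img dst.img
ls -ls src.img dst.img   # same apparent size, near-zero blocks used
```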
The patch seems to break the DAO tests; please fix:
20:07:48 Failed tests:
20:07:48 testSaveDetails(org.ovirt.engine.core.dao.gluster.GlusterGeoRepDaoTest): expected:<org.ovirt.engine.core.common.businessentities.gluster.GlusterGeoRepSessionDetails@e51ed1a5> but was:<org.ovirt.engine.core.common.businessentities.gluster.GlusterGeoRepSessionDetails@918a47a5>
I reran the build for this patch and it worked, but I'm not sure
if that's because someone fixed it or it's a flaky test.
Can someone from Gluster comment?
Red Hat, Inc.
Sr. Software Engineer, RHEV
-------- Forwarded Message --------
Subject: Reminder: Fedora 20 end of life on 2015-06-23
Date: Wed, 27 May 2015 11:55:04 -0500
From: Dennis Gilmore <dennis(a)ausil.us>
To: devel-announce(a)lists.fedoraproject.org, test-announce(a)lists.fedoraproject.org, announce(a)lists.fedoraproject.org
This is a reminder email about the end of life process for Fedora 20.
Fedora 20 will reach end of life on 2015-06-23, and no further updates
will be pushed out after that time. Additionally, with the recent
release of Fedora 22, no new packages will be added to the Fedora 20
collection.
Please see https://fedoraproject.org/wiki/FedUp for more
information on upgrading from Fedora 20 to a newer release.
Frank Zdarsky and I would like to come to the TLV team offices during
the last week in June (specifically June 28-July 1) to meet with all
interested developers who would like to be involved with NFV and SDN
features in oVirt. Specifically, Frank would like to meet with the
people who know about the integration of Neutron into oVirt, as well as
about high-level features like HA setup, automation, and workload scheduling.
Will the people who can discuss these topics be available during this week?
oVirt Community Liaison