[vdsm][mom][jsonrpc] new VDSM interface for MOM
by Francesco Romani
Hi everyone,
I'm working to brush up and enhance my old hack https://gerrit.ovirt.org/#/c/37827/1
That patch adds a new MOM interface, to talk with VDSM using the RPC interface.
On top of that, I want to make efficient use of the VDSM API (avoiding redundant calls, possibly
issuing only one getAllVmStats call and caching the results, and so forth).
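For reference, this is the difference I mean, seen from the shell (assuming vdsClient is installed
and VDSM is listening with SSL on the local host):
# one call per VM -- this is the redundancy I want to avoid
vdsClient -s 0 getVmStats <vmId>
# one bulk call covering every VM -- issue this once and cache the result
vdsClient -s 0 getAllVmStats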
The next step will be to backport these optimizations to the current vdsmInterface.
Or maybe even replace the current vdsmInterface with the new one I'm developing :)
I'd like to use the blessed JSON-RPC interface, but what's the recommended way to
do that? What is (or will be!) the official recommended VDSM external client interface?
I thought about patch https://gerrit.ovirt.org/#/c/39203/
But my _impression_ is that that patch will depend on VDSM's internal reactor, and thus is not
well suited for use in an external process.
--
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani
[ANN] oVirt 3.5.3 Second Release Candidate is now available for testing
by Sandro Bonazzola
The oVirt team is pleased to announce that the 3.5.3 Second Release Candidate is now
available for testing as of May 28th 2015.
The release candidate is available now for Fedora 20,
Red Hat Enterprise Linux 6.6, CentOS 6.6 (or similar) and
Red Hat Enterprise Linux 7.1, CentOS 7.1 (or similar).
This release of oVirt includes numerous bug fixes and a fix for the VENOM vulnerability (CVE-2015-3456).
See the release notes [1] for a list of the new features and bugs fixed.
Please refer to release notes [1] for Installation / Upgrade instructions.
A new oVirt Live ISO is already available as well[2].
Please note that mirrors [3] usually need about one day to synchronize.
Please refer to the release notes for known issues in this release.
[1] http://www.ovirt.org/OVirt_3.5.3_Release_Notes
[2] http://resources.ovirt.org/pub/ovirt-3.5-pre/iso/
[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
oVirt Node 3.6: CPU load stuck at 100% while vdsmd tries to restart indefinitely
by Simone Tiraboschi
Hi,
I tried to have hosted-engine deploy the engine appliance on oVirt Node. I think it will be quite a common scenario.
I tried with an oVirt node build from yesterday.
Unfortunately I'm not able to complete the setup because oVirt Node's CPU load gets stuck at 100% indefinitely, so the host is almost unresponsive.
The issue seems to be related to the vdsmd daemon, which can't really start and so retries indefinitely, using all the available CPU power (it also runs at niceness -20...).
[root@node36 admin]# grep "Unit vdsmd.service entered failed state." /var/log/messages | wc -l
368
It tried 368 times in a row in a few minutes.
With journalctl I can read:
May 29 10:06:45 node36 systemd[1]: Unit vdsmd.service entered failed state.
May 29 10:06:45 node36 systemd[1]: vdsmd.service holdoff time over, scheduling restart.
May 29 10:06:45 node36 systemd[1]: Stopping Virtual Desktop Server Manager...
May 29 10:06:45 node36 systemd[1]: Starting Virtual Desktop Server Manager...
May 29 10:06:45 node36 vdsmd_init_common.sh[13697]: vdsm: Running mkdirs
May 29 10:06:45 node36 vdsmd_init_common.sh[13697]: vdsm: Running configure_coredump
May 29 10:06:45 node36 vdsmd_init_common.sh[13697]: vdsm: Running configure_vdsm_logs
May 29 10:06:45 node36 vdsmd_init_common.sh[13697]: vdsm: Running wait_for_network
May 29 10:06:45 node36 vdsmd_init_common.sh[13697]: vdsm: Running run_init_hooks
May 29 10:06:46 node36 vdsmd_init_common.sh[13697]: vdsm: Running upgraded_version_check
May 29 10:06:46 node36 vdsmd_init_common.sh[13697]: vdsm: Running check_is_configured
May 29 10:06:46 node36 vdsmd_init_common.sh[13697]: vdsm: Running validate_configuration
May 29 10:06:47 node36 vdsmd_init_common.sh[13697]: vdsm: Running prepare_transient_repository
May 29 10:06:49 node36 vdsmd_init_common.sh[13697]: vdsm: Running syslog_available
May 29 10:06:49 node36 vdsmd_init_common.sh[13697]: vdsm: Running nwfilter
May 29 10:06:50 node36 vdsmd_init_common.sh[13697]: vdsm: Running dummybr
May 29 10:06:51 node36 vdsmd_init_common.sh[13697]: vdsm: Running load_needed_modules
May 29 10:06:51 node36 vdsmd_init_common.sh[13697]: vdsm: Running tune_system
May 29 10:06:51 node36 vdsmd_init_common.sh[13697]: vdsm: Running test_space
May 29 10:06:51 node36 vdsmd_init_common.sh[13697]: vdsm: Running test_lo
May 29 10:06:51 node36 systemd[1]: Started Virtual Desktop Server Manager.
May 29 10:06:51 node36 systemd[1]: vdsmd.service: main process exited, code=exited, status=1/FAILURE
May 29 10:06:51 node36 vdsmd_init_common.sh[13821]: vdsm: Running run_final_hooks
May 29 10:06:52 node36 systemd[1]: Unit vdsmd.service entered failed state.
May 29 10:06:52 node36 systemd[1]: vdsmd.service holdoff time over, scheduling restart.
May 29 10:06:52 node36 systemd[1]: Stopping Virtual Desktop Server Manager...
May 29 10:06:52 node36 systemd[1]: Starting Virtual Desktop Server Manager...
This sequence is repeated many times.
/var/log/vdsm/vdsm.log is empty, while launching the daemon manually fails with exit status 1:
[root@node36 admin]# /usr/share/vdsm/daemonAdapter -0 /dev/null -1 /dev/null -2 /dev/null /usr/share/vdsm/vdsm; echo $?
1
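As a next step (just guessing that the traceback goes to stderr before the logger is set up), I'll retry with stderr pointed at a real file instead of /dev/null:
[root@node36 admin]# /usr/share/vdsm/daemonAdapter -0 /dev/null -1 /dev/null -2 /tmp/vdsm-stderr.log /usr/share/vdsm/vdsm; echo $?
[root@node36 admin]# cat /tmp/vdsm-stderr.log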
ciao,
Simone
Unable to import templates
by Christopher Pereira
Hi,
A user was unable to import a VM template from a Gluster export storage
domain for various reasons, and since I'm afraid this could hit many
users (losing many days waiting for import tasks to... fail), I would like
to share the issues and some ideas with you:
1) Slowness:
FYI, working with sparse files on GlusterFS mounts will be very slow
until https://bugzilla.redhat.com/show_bug.cgi?id=1220173 is implemented.
Maybe the same also applies to other network file systems.
2) Engine Task timeout:
The last few times the user tried to import this template (using nightly
builds), the task was apparently deleted by Engine because of a timeout,
while the qemu-img process continued running (and consuming resources).
I'm not sure, but I believe tasks are deleted by Engine after 4 or 5
days regardless of whether the qemu-img process is still running (!?).
Thus, slow import tasks will never finish.
Can someone please confirm?
Maybe we can improve something here to support long import tasks.
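If the limit being hit is the engine's zombie-task cleanup (I'm not sure it is), it should at
least be tunable with engine-config, something like:
# show the current zombie-task lifetime (in minutes)
engine-config -g AsyncTaskZombieTaskLifeInMinutes
# raise it to cover very slow imports, then restart ovirt-engine
engine-config -s AsyncTaskZombieTaskLifeInMinutes=14400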
3) Wrong SPM:
If $SRC is on host-1, $DST is on host-2 and the SPM is host-3, the image will
unnecessarily cross host-3 and kill the cluster's performance.
I guess Engine should automatically choose the appropriate SPM host
depending on the tasks (host-1 or host-2 in this case).
But it seems oVirt doesn't currently support changing the SPM while
there are running tasks, judging by the fact that the user gets a warning
when trying to do it manually.
4) Convert Optimization:
I see oVirt is running:
/usr/bin/qemu-img convert -t none -T none -f raw $SRC -O raw $DST
In this case, $SRC is in raw format and there are no backing chains.
Shouldn't we do a simple 'cp --sparse=always' instead of a 'qemu-img
convert' in this case?
I guess qemu-img should be doing this optimization for us, but maybe
this raw-to-raw conversion use case is just too silly and will not be
considered by the qemu-img maintainers.
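A quick way to compare the two approaches on a scratch raw image with no backing file
(hypothetical paths, just to check that sparseness survives):
qemu-img convert -t none -T none -f raw -O raw src.raw dst-convert.raw
cp --sparse=always src.raw dst-cp.raw
# compare allocated size vs. apparent size for the three files
du -h src.raw dst-convert.raw dst-cp.raw
du -h --apparent-size src.raw dst-convert.raw dst-cp.raw
# the plain copy is still a valid raw image
qemu-img info dst-cp.raw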
Fwd: Reminder: Fedora 20 end of life on 2015-06-23
by Sandro Bonazzola
FYI
-------- Forwarded Message --------
Subject: Reminder: Fedora 20 end of life on 2015-06-23
Date: Wed, 27 May 2015 11:55:04 -0500
From: Dennis Gilmore <dennis(a)ausil.us>
Reply-To: devel(a)lists.fedoraproject.org
To: devel-announce(a)lists.fedoraproject.org, test-announce(a)lists.fedoraproject.org, announce(a)lists.fedoraproject.org
Greetings.
This is a reminder email about the end of life process for Fedora 20.
Fedora 20 will reach end of life on 2015-06-23, and no further updates
will be pushed out after that time. Additionally, with the recent
release of Fedora 22, no new packages will be added to the Fedora 20
collection.
Please see https://fedoraproject.org/wiki/FedUp for more
information on upgrading from Fedora 20 to a newer release.
Dennis
Meetings on NFV
by Brian Proffitt
Frank Zdarsky and I would like to come to the TLV team offices during
the last week in June (specifically June 28-July 1) to meet with all
interested developers who would like to be involved with NFV and SDN
features in oVirt. Specifically, Frank would like to meet with the
people who know about the integration of Neutron into oVirt, as well as
high-level features like HA setup, automation, and workload scheduling.
Will the people who can discuss these topics be available during this week?
Peace,
Brian
--
oVirt Community Liaison
bkp(a)redhat.com
+1.574.383.9BKP