A recent bug reported as part of the translation effort alerted me to the fact that we have a lot (and I mean a LOT - over 100 per file) of deprecated, unused keys in the various AppErrors files that serve no purpose, just take up space, and waste translators' time when they examine them.
To make a long story short: I've just merged a patch that removes all these useless messages and enforces via unit tests that EVERY key there has a corresponding constant in the EngineMessage or EngineError enums.
Many thanks to my reviewers!
I know this was a tedious patch that couldn't have been too fun to review.
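The enforcement idea can be sketched as a small check along these lines. All class, enum, and key names here are illustrative stand-ins, not the actual oVirt code:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.stream.Collectors;

// Illustrative sketch (not the actual oVirt test): every key found in an
// AppErrors-style properties file must match a constant in the enum.
public class AppErrorsKeyCheck {

    // Stand-in for the real EngineMessage enum.
    enum EngineMessage { ACTION_TYPE_FAILED, VM_NOT_FOUND }

    // Returns the property keys that have no corresponding enum constant.
    static Set<String> orphanKeys(Set<String> propertyKeys) {
        Set<String> enumNames = Arrays.stream(EngineMessage.values())
                .map(Enum::name)
                .collect(Collectors.toSet());
        Set<String> orphans = new HashSet<>(propertyKeys);
        orphans.removeAll(enumNames);
        return orphans;
    }

    public static void main(String[] args) {
        Set<String> keys = new HashSet<>(Arrays.asList(
                "ACTION_TYPE_FAILED", "SOME_DEPRECATED_KEY"));
        // A unit test would fail if this set is non-empty.
        System.out.println(orphanKeys(keys)); // prints [SOME_DEPRECATED_KEY]
    }
}
```

The real test would load the keys from each AppErrors properties file instead of a hard-coded set, but the orphan-detection logic is the same.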
I have been contributing to the engine for three months now. While digging into the code I
started to wonder how to visualize what the engine is actually doing.
To get better insights I added hystrix to the engine. Hystrix is a circuit
breaker library which was developed by Netflix and has one pretty nice
feature: real-time metrics for commands.
In combination with hystrix-dashboard it allows very interesting insights.
You can easily get an overview of the commands involved in operations, their
performance and complexity. Look at  and the attachments in  and  for
screenshots to get an impression.
I want to propose to integrate hystrix permanently, because from my point of view
the results were really useful, and I also had some good experiences with hystrix
in past projects.
A first implementation can be found on gerrit.
# Where is it immediately useful?
During development and QA.
An example: I tested the hystrix integration on the /api/vms and /api/hosts rest
endpoints and immediately saw that the number of command executions grew
linearly with the number of vms and hosts. The bug reports  and  resulted from this finding.
# How to monitor the engine?
It is as easy as starting a hystrix-dashboard with
$ git clone https://github.com/Netflix/Hystrix.git
$ cd Hystrix/hystrix-dashboard
$ ../gradlew jettyRun
and point the dashboard to http://localhost:8080/ovirt-engine/api/hystrix.stream
# Other possible benefits?
* Live metrics at customer site for admins, consultants and support.
* Historical metrics for analysis in addition to the log files.
The metrics information is directly usable in graphite . Therefore it is
possible to collect the JSON stream for a certain time period and analyze it
later like in . To do that one just has to run

$ curl --user admin@internal:engine \
  http://localhost:8080/ovirt-engine/api/hystrix.stream > hystrix.stream

for as long as necessary. The results can be analyzed later.
# Possible architectural benefits?
In addition to the live metrics we might also have use for the real hystrix features:
* Circuit Breaker
* Bulk execution of commands
* De-duplication of commands (caching)
* Synchronous and asynchronous execution support
Our commands do already have a lot of features, so I don't think that there are
some quick wins, but maybe there are interesting opportunities for infra.
In  the Netflix engineers describe their results regarding the overhead of
wrapping every command into a new instance of a hystrix command.
They ran their tests on a standard 4-core Amazon EC2 server with a load of
requests per second.
When using thread pools they measured a mean overhead of less than one
millisecond (so negligible). At the 90th percentile they measured an overhead
of 3 ms, and at the 99th percentile of about 9 ms.
When configuring the hystrix commands to use semaphores instead of thread pools,
they are even faster.
# How to integrate?
A working implementation can be found on gerrit. These patch sets wrap a
hystrix command around every VdcAction, every VdcQuery and every VDSCommand.
This just required four small modifications in the code base.
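The wrapping idea can be illustrated with a plain-Java stand-in (the real patches use `com.netflix.hystrix.HystrixCommand` from hystrix-core; the class and method names below are hypothetical):

```java
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Plain-Java stand-in for the wrapping approach: every action runs
// through one choke point that counts executions per command name.
// This is how effects like the linear growth on /api/vms become visible.
public class MeteredCommand {
    private static final Map<String, LongAdder> EXECUTIONS = new ConcurrentHashMap<>();

    public static <T> T execute(String name, Callable<T> action) throws Exception {
        // A real HystrixCommand would also record latency and publish it
        // on the metrics stream consumed by hystrix-dashboard.
        EXECUTIONS.computeIfAbsent(name, k -> new LongAdder()).increment();
        return action.call();
    }

    public static long executions(String name) {
        LongAdder counter = EXECUTIONS.get(name);
        return counter == null ? 0 : counter.sum();
    }
}
```

With a choke point like this in the action, query, and VDS command dispatchers, only a handful of call sites need to change, which matches the "four small modifications" above.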
In the provided patches the hystrix-metrics-servlet is accessible at
/ovirt-engine/api/hystrix.stream. It is protected by basic auth but accessible
to everyone who can authenticate. We should probably restrict it to admins.
There are some open questions:
1) We do report failed actions with return values. Hystrix expects failing
commands to throw an exception, so on the dashboard almost every command looks
like a success. To overcome this, it would be pretty easy to throw an
exception inside the command and catch it immediately after it leaves the hystrix wrapper.
2) Do we want semaphores or a thread pool? If a thread pool, what size do we want?
3) Three unpackaged dependencies: archaius, hystrix-core, hystrix-contrib
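Point 1 above (making hystrix count failed actions as failures) could look roughly like this; the class and field names are illustrative stand-ins, not the actual engine types:

```java
// Illustrative sketch of point 1: throw inside the wrapped command so
// hystrix records a failure, then catch right outside the command and
// hand the original return value back to the caller.
public class FailureAwareRunner {

    public static class ActionResult {
        public final boolean succeeded;
        public ActionResult(boolean succeeded) { this.succeeded = succeeded; }
    }

    static class ActionFailedException extends RuntimeException {
        final ActionResult result;
        ActionFailedException(ActionResult result) { this.result = result; }
    }

    // This part would live inside the hystrix command's run(): failures
    // surface as exceptions, so the dashboard stops showing them as successes.
    static ActionResult run(ActionResult raw) {
        if (!raw.succeeded) {
            throw new ActionFailedException(raw);
        }
        return raw;
    }

    // This part wraps the command's execute(): unwrap immediately so the
    // rest of the engine still sees a plain return value.
    public static ActionResult execute(ActionResult raw) {
        try {
            return run(raw);
        } catch (ActionFailedException e) {
            return e.result;
        }
    }
}
```

Callers are unaffected either way; only hystrix's success/failure accounting changes.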
I upgraded my workstation from fc20 to fc21 and had trouble running engine-setup on a 3.5 installation.
The following instructions did the trick for me (thanks to Alon Bar Lev):
1) Download jboss-eap-x.y from http://www.jboss.org/products/eap/overview/ (zip)
2) Extract it to /opt (ensure that it has the right permissions)
3) Set the following environment variables:
a) export OVIRT_ENGINE_JAVA_HOME=/usr/lib/jvm/jre
b) export OVIRT_ENGINE_JAVA_HOME_FORCE=1
4) Run engine-setup as follows:
I need to re-post this, as I still require some extra information about the
"Out" part of these strings.
Is it referring to "Outbound"?
On Thu, Oct 29, 2015 at 5:56 AM, Yuko Katabami <ykatabam(a)redhat.com> wrote:
> Hi Eliraz,
> Just in case you overlooked, I had one additional question in my last
> reply, which is:
> I need one additional clarification.
> In "QoS Out xxx", "out" stands for "outgoing"? (as for outgoing traffic?)
> Could you please help me with this as well?
> Kind regards,
> On Wed, Oct 28, 2015 at 7:05 PM, Yuko Katabami <ykatabam(a)redhat.com>
>> Hi Eliraz and all,
>> Thank you very much for replying to my question with thorough information.
>> I need one additional clarification.
>> In "QoS Out xxx", "out" stands for "outgoing"? (as for outgoing traffic?)
>> Kind regards,
>> On Wed, Oct 28, 2015 at 6:32 PM, Eliraz Levi <elevi(a)redhat.com> wrote:
>>> These strings represent different QoS configuration values.
>>> They will be shown in the "out of sync" tooltip in case of
>>> differences between the values configured on the host and those defined
>>> in the Data Center (DC).
>>> A screenshot is attached.
>>> Steps to reproduce:
>>> 1. add a host.
>>> 2. attach a network to a host's interface using setup network.
>>> 3. edit the host's attached network using the pencil to the right,
>>> check the "override qos" checkbox and add values in the fields (say 12, 11, ...).
>>> 4. press ok twice (setup network will take action now). wait until the
>>> action finishes successfully.
>>> 5. open a terminal to the host.
>>> 6. as root,
>>> # tc class change dev <host_interface_name_of_step_2> parent 1389:
>>> classid 1388 hfsc ul m2 2008bit ls m2 808 rt m2 1108bit
>>> 7. refresh host capabilities.
>>> 8. go to setup network and hover over the out of sync icon.
>>> for any more details or questions, feel free to ask me :)
>>> ----- Original Message -----
>>> From: "Moti Asayag" <masayag(a)redhat.com>
>>> To: "Eliraz Levi" <elevi(a)redhat.com>, "Alona Kaplan" <
>>> Sent: Wednesday, 28 October, 2015 9:22:16 AM
>>> Subject: Fwd: [ovirt-devel] [oVirt 3.6 Localization Question #35] "QoS
>>> Out average"
>>> Please reply
>>> ---------- Forwarded message ----------
>>> From: Yuko Katabami <ykatabam(a)redhat.com>
>>> Date: Tue, Oct 27, 2015 at 2:59 AM
>>> Subject: [ovirt-devel] [oVirt 3.6 Localization Question #35] "QoS Out
>>> To: devel(a)ovirt.org
>>> Hello all,
>>> I would like to ask for your help with the following question.
>>> File: ApplicationConstants
>>> Resource IDs:
>>> QoS Out average link share
>>> QoS Out average real time
>>> QoS Out average upper limit
>>> Question: Could anyone please tell us where these strings appear in
>>> the GUI and how they are used?
>>> Kind regards,
>>> Yuko Katabami
I am a Linux admin from India, aged 24, working for a reputed company,
handling their Linux, AIX and HP-UX servers for the last 2 years. I would like
to give some of my time now to contributing to open projects.
Please let me know if I could be of any help in any way. Help from
anyone in your team responding to me is appreciated.
Thanks & Regards
The change that switches the master branch to Java 8 has been merged:
core: Use Java 8 as source and target
Make sure that you use Java 8 in your development and runtime environments.
This change doesn't affect other branches, only master.
The oVirt Project is pleased to announce the availability
of the first oVirt 3.5.6 Release Candidate for testing, as of October 29th.
This release is available now for
Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and
Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar).
This release supports Hypervisor Hosts running
Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar),
Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar) and Fedora 21.
This release includes updated packages for:
- oVirt Engine
- oVirt Engine client
- oVirt Engine SDK
- oVirt Hosted Engine HA
- QEMU KVM and its dependencies
See the release notes  for a list of fixed bugs.
Please refer to the release notes  for Installation / Upgrade instructions.
A new oVirt Live ISO is already available .
Please note that mirrors may usually need one day before being synchronized.
Please refer to the release notes for known issues in this release.
Please add yourself to the test page if you're testing this release.
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
---------- Forwarded message ----------
From: Stephen Gallagher <sgallagh(a)redhat.com>
Date: Wed, Oct 28, 2015 at 7:12 PM
Subject: System V to systemd unit migration
At the FESCo meeting on October 14th, it was decided that the time
has come to finally complete the migration away from System V init
scripts. What does this mean for you as a packager?
When we branch from Rawhide for Fedora 24 (currently scheduled for
February 2nd, 2016), we will be immediately retiring any package in
the Fedora collection that relies on a System V init script instead of
a systemd unit file to start. We will consider reasonable requests to
delay this action on a case-by-case basis for anyone who submits a
ticket to FESCo no later than 23:59 UTC on January 12, 2016.
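For packagers doing the conversion, a minimal systemd unit replacing a typical init script looks roughly like this (the service name, binary path, and PID file are illustrative, not from any real package):

```ini
# /usr/lib/systemd/system/exampled.service -- illustrative name and paths
[Unit]
Description=Example daemon formerly started by a SysV init script
After=network.target

[Service]
Type=forking
ExecStart=/usr/sbin/exampled
PIDFile=/run/exampled.pid

[Install]
WantedBy=multi-user.target
```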
There is no plan to remove System V compatibility from Fedora, due to
the necessity of supporting legacy third-party software. We are also
not making any changes at all to the EPEL project. EPEL packages may
continue to use System V init scripts (in the case of EPEL 5 and EPEL
6, there is no other option).
We will be going through the Wiki packaging documentation over the
next month and updating all related entries to reflect this change.
This will be the first such announcement. The Change Wrangler will be
sending additional reminders from now until February 2nd, so no one
will be surprised by this event.