AppErrors cleanup
by Allon Mureinik
Hi all,
A recent bug [1], reported as part of the translation effort, alerted me to the fact that the various AppErrors files contain a lot (and I mean a LOT, over 100 per file) of deprecated, unused keys that serve no purpose, just take up space, and waste translators' time when they examine them.
To make a long story short: I've just merged a patch that removes all these useless messages and enforces, via unit tests, that EVERY key there has a corresponding constant in the EngineMessage or EngineError enums.
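For anyone curious what such a guard looks like, here is a minimal sketch of that kind of unit test. The bundle name, the enum packages and the direct valueOf lookup are my assumptions for illustration; the actual test in the merged patch may be structured differently:

import java.util.ResourceBundle;

import org.junit.Test;
import static org.junit.Assert.fail;

// Packages of the engine enums are assumed here for the sake of the example.
import org.ovirt.engine.core.common.errors.EngineError;
import org.ovirt.engine.core.common.errors.EngineMessage;

public class AppErrorsKeysTest {

    @Test
    public void everyKeyHasACorrespondingEnumConstant() {
        // Bundle name is an assumption; each AppErrors properties file would get such a check.
        ResourceBundle bundle = ResourceBundle.getBundle("bundles/AppErrors");
        for (String key : bundle.keySet()) {
            if (!isConstantOf(EngineMessage.class, key) && !isConstantOf(EngineError.class, key)) {
                fail("AppErrors key '" + key + "' has no EngineMessage or EngineError constant");
            }
        }
    }

    private static <E extends Enum<E>> boolean isConstantOf(Class<E> enumClass, String key) {
        try {
            Enum.valueOf(enumClass, key);
            return true;
        } catch (IllegalArgumentException e) {
            return false;
        }
    }
}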
Many thanks to my reviewers!
I know this was a tedious patch that couldn't have been much fun to review.
-Allon
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1244766
Proposal: Hystrix for realtime command monitoring
by Roman Mohr
Hi All,
I have been contributing to the engine for three months now. While digging into
the code, I started to wonder how to visualize what the engine is actually doing.
To get better insights I added hystrix [1] to the engine. Hystrix is a circuit
breaker library developed by Netflix, and it has one pretty interesting
feature: real-time metrics for commands.
In combination with hystrix-dashboard [2] it allows very interesting insights.
You can easily get an overview of the commands involved in operations, and of
their performance and complexity. Look at [2] and the attachments in [5] and
[6] for screenshots to get an impression.
I want to propose integrating hystrix permanently because, from my perspective,
the results were really useful, and I also had good experiences with hystrix in
past projects.
A first implementation can be found on gerrit [3].
# Where is it immediately useful?
During development and QA.
An example: I tested the hystrix integration on the /api/vms and /api/hosts
REST endpoints and immediately saw that the number of command executions grew
linearly with the number of VMs and hosts. The bug reports [5] and [6] are the
result.
# How to monitor the engine?
It is as easy as starting a hystrix-dashboard [2] with
$ git clone https://github.com/Netflix/Hystrix.git
$ cd Hystrix/hystrix-dashboard
$ ../gradlew jettyRun
and pointing the dashboard to
https://<customer.engine.ip>/ovirt-engine/hystrix.stream.
# Other possible benefits?
* Live metrics at customer site for admins, consultants and support.
* Historical metrics for analysis in addition to the log files.
The metrics information is directly usable in graphite [7]. It would therefore
be possible to collect the json stream for a certain time period and analyze it
later, as in [4]. To do that, someone just has to run
curl --user admin@internal:engine
http://localhost:8080/ovirt-engine/api/hystrix.stream > hystrix.stream
for as long as necessary and analyze the results afterwards.
# Possible architectural benefits?
In addition to the live metrics we might also have use for the real hystrix
features:
* Circuit Breaker
* Bulk execution of commands
* De-duplication of commands (Caching)
* Synchronous and asynchronous execution support
* ...
Our commands already have a lot of features, so I don't think there are any
quick wins, but maybe there are interesting opportunities for infra.
# Overhead?
In [8] the Netflix employees describe their results regarding the overhead of
wrapping every command into a new instance of a hystrix command.
They ran their tests on a standard 4-core Amazon EC2 server with a load of 60
requests per second.
When using thread pools they measured a mean overhead of less than one
millisecond (so negligible). At the 90th percentile they measured an overhead
of 3 ms, and at the 99th percentile about 9 ms.
When the hystrix commands are configured to use semaphores instead of thread
pools they are even faster.
# How to integrate?
A working implementation can be found on gerrit [3]. These patch sets wrap a
hystrix command around every VdcAction, every VdcQuery and every VDSCommand.
This required only four small modifications in the code base.
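To give an idea of what such a wrapper looks like, here is a rough sketch. The class name, the BackendInternal/runInternalQuery call and the group key choice are only illustrative and not necessarily what the gerrit patches do:

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;
// oVirt engine imports omitted for brevity.

// Illustrative hystrix wrapper around a backend query; the real patches hook in
// at the central VdcAction/VdcQuery/VDSCommand dispatch points instead.
public class MonitoredQueryCommand extends HystrixCommand<VdcQueryReturnValue> {

    private final BackendInternal backend;
    private final VdcQueryType type;
    private final VdcQueryParametersBase params;

    public MonitoredQueryCommand(BackendInternal backend, VdcQueryType type,
            VdcQueryParametersBase params) {
        // The group key is what shows up as the command name on the dashboard.
        super(HystrixCommandGroupKey.Factory.asKey(type.name()));
        this.backend = backend;
        this.type = type;
        this.params = params;
    }

    @Override
    protected VdcQueryReturnValue run() {
        // Hystrix times and counts this call and publishes it on hystrix.stream.
        return backend.runInternalQuery(type, params);
    }
}

Calling new MonitoredQueryCommand(backend, type, params).execute() then behaves like the original query call, but every execution is measured.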
# Security?
In the provided patches the hystrix-metrics-servlet is accessible at
/ovirt-engine/api/hystrix.stream. It is protected by basic auth, but accessible
to everyone who can authenticate. We should probably restrict it to admins.
# Todo?
1) We report failed actions with return values, whereas hystrix expects failing
commands to throw an exception, so on the dashboard almost every command looks
like a success. To overcome this, it would be pretty easy to throw an exception
inside the command and catch it immediately after it leaves the hystrix wrapper
(see the first sketch below this list).
2) Fine-tuning: do we want semaphores or a thread pool? If a thread pool, what
size do we want? (See the second sketch below this list.)
3) Three unpackaged dependencies: archaius, hystrix-core, hystrix-contrib
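For 1), the translation could look roughly like the sketch below. ActionFailedException is a hypothetical helper I made up to carry the return value, and FailureAwareActionCommand is just a placeholder name:

import java.util.concurrent.Callable;

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;
import com.netflix.hystrix.exception.HystrixRuntimeException;
// oVirt engine imports omitted for brevity.

// Hypothetical sketch for todo item 1: make hystrix count failed actions as failures.
public class FailureAwareActionCommand extends HystrixCommand<VdcReturnValueBase> {

    private final Callable<VdcReturnValueBase> action;

    public FailureAwareActionCommand(HystrixCommandGroupKey group,
            Callable<VdcReturnValueBase> action) {
        super(group);
        this.action = action;
    }

    @Override
    protected VdcReturnValueBase run() throws Exception {
        VdcReturnValueBase rv = action.call();
        if (!rv.getSucceeded()) {
            // Let hystrix see the failure...
            throw new ActionFailedException(rv);
        }
        return rv;
    }

    public VdcReturnValueBase executeAndUnwrap() {
        try {
            return execute();
        } catch (HystrixRuntimeException e) {
            // ...and immediately hand the original return value back to the caller.
            if (e.getCause() instanceof ActionFailedException) {
                return ((ActionFailedException) e.getCause()).getReturnValue();
            }
            throw e;
        }
    }

    // Hypothetical unchecked exception carrying the failed return value.
    public static class ActionFailedException extends RuntimeException {
        private final VdcReturnValueBase returnValue;

        public ActionFailedException(VdcReturnValueBase returnValue) {
            this.returnValue = returnValue;
        }

        public VdcReturnValueBase getReturnValue() {
            return returnValue;
        }
    }
}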
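For 2), both isolation strategies are just properties on the command setter. A sketch of the two options follows; the group key and the sizes are placeholders, not a recommendation:

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;
import com.netflix.hystrix.HystrixCommandProperties;
import com.netflix.hystrix.HystrixThreadPoolProperties;

public class IsolationSettings {

    // Option A: semaphore isolation - runs on the caller thread, lowest overhead,
    // but no timeouts and no bulkheading.
    static HystrixCommand.Setter semaphoreIsolated() {
        return HystrixCommand.Setter
                .withGroupKey(HystrixCommandGroupKey.Factory.asKey("Backend"))
                .andCommandPropertiesDefaults(HystrixCommandProperties.Setter()
                        .withExecutionIsolationStrategy(
                                HystrixCommandProperties.ExecutionIsolationStrategy.SEMAPHORE)
                        .withExecutionIsolationSemaphoreMaxConcurrentRequests(100));
    }

    // Option B: thread pool isolation - adds timeouts and bulkheading at the cost
    // of a thread hand-off; the pool size is the number to tune.
    static HystrixCommand.Setter threadPoolIsolated() {
        return HystrixCommand.Setter
                .withGroupKey(HystrixCommandGroupKey.Factory.asKey("Backend"))
                .andThreadPoolPropertiesDefaults(HystrixThreadPoolProperties.Setter()
                        .withCoreSize(20));
    }
}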
# References
[1] https://github.com/Netflix/Hystrix
[2] https://github.com/Netflix/Hystrix/tree/master/hystrix-dashboard
[3] https://gerrit.ovirt.org/#/q/topic:hystrix
[4] http://www.nurkiewicz.com/2015/02/storing-months-of-historical-metrics.html
[5] https://bugzilla.redhat.com/show_bug.cgi?id=1268216
[6] https://bugzilla.redhat.com/show_bug.cgi?id=1268224
[7] http://graphite.wikidot.com
[8] https://github.com/Netflix/Hystrix/wiki/FAQ#what-is-the-processing-overhe...
Running oVirt 3.5 on fc21 with open jdk 1.8 (dev)
by Eli Mesika
Hi
I upgraded my workstation from fc20 to fc21 and had trouble running engine-setup on a 3.5 application.
The following instructions did the trick for me (thanks to Alon Bar Lev):
1) Download jboss-eap-x.y from http://www.jboss.org/products/eap/overview/ (zip)
2) Extract it to /opt (ensure that it has the right permissions)
3) Set the following environment variables:
a) export OVIRT_ENGINE_JAVA_HOME=/usr/lib/jvm/jre
b) export OVIRT_ENGINE_JAVA_HOME_FORCE=1
4) Run engine-setup as follows:
engine-setup --jboss-home=/opt/jboss-eap-x.y
Thanks
Eli Mesika
Re: [ovirt-devel] [oVirt 3.6 Localization Question #35] "QoS Out average"
by Yuko Katabami
Hello again,
I need to re-post this, as I still require some extra information about the
"Out" part of these strings.
Is it referring to "Outbound"?
Kind regards,
Yuko
On Thu, Oct 29, 2015 at 5:56 AM, Yuko Katabami <ykatabam(a)redhat.com> wrote:
> Hi Eliraz,
>
> Just in case you overlooked, I had one additional question in my last
> reply, which is:
>
> I need one additional clarification.
> In "QoS Out xxx", does "out" stand for "outgoing"? (as in outgoing traffic?)
>
> Could you please help me with this as well?
>
> Kind regards,
>
> Yuko
>
> On Wed, Oct 28, 2015 at 7:05 PM, Yuko Katabami <ykatabam(a)redhat.com>
> wrote:
>
>> Hi Eliraz and all,
>>
>> Thank you very much for replying to my question with thorough information.
>> I need one additional clarification.
>> In "QoS Out xxx", does "out" stand for "outgoing"? (as in outgoing traffic?)
>>
>> Kind regards,
>>
>> Yuko
>>
>>
>> On Wed, Oct 28, 2015 at 6:32 PM, Eliraz Levi <elevi(a)redhat.com> wrote:
>>
>>> Hi,
>>> These strings represent different QoS configuration values.
>>> They will be shown in the "out of sync" tooltip in case of differences
>>> between the values configured on the host and those defined in the
>>> Data Center (DC).
>>> A screenshot is attached.
>>> Steps to reproduce:
>>> 1. Add a host.
>>> 2. Attach a network to a host's interface using Setup Networks.
>>> 3. Edit the host's attached network using the pencil icon to the right,
>>> check the "override QoS" checkbox and add values in the fields (say 12, 11,
>>> 11).
>>> 4. Press OK twice (Setup Networks will take action now). Wait until the
>>> action finishes successfully.
>>> 5. Open a terminal to the host.
>>> 6. As root, run:
>>> # tc class change dev <host_interface_name_of_step_2> parent 1389:
>>> classid 1388 hfsc ul m2 2008bit ls m2 808 rt m2 1108bit
>>> 7. Refresh host capabilities.
>>> 8. Go to Setup Networks and hover over the out-of-sync icon.
>>>
>>> For any more details or questions, feel free to ask me :)
>>>
>>> thanks.
>>>
>>> Eliraz.
>>>
>>>
>>>
>>>
>>> ----- Original Message -----
>>>
>>>
>>> From: "Moti Asayag" <masayag(a)redhat.com>
>>> To: "Eliraz Levi" <elevi(a)redhat.com>, "Alona Kaplan" <
>>> alkaplan(a)redhat.com>
>>> Sent: Wednesday, 28 October, 2015 9:22:16 AM
>>> Subject: Fwd: [ovirt-devel] [oVirt 3.6 Localization Question #35] "QoS
>>> Out average"
>>>
>>> Please reply
>>>
>>>
>>> ---------- Forwarded message ----------
>>> From: Yuko Katabami <ykatabam(a)redhat.com>
>>> Date: Tue, Oct 27, 2015 at 2:59 AM
>>> Subject: [ovirt-devel] [oVirt 3.6 Localization Question #35] "QoS Out
>>> average"
>>> To: devel(a)ovirt.org
>>>
>>>
>>> Hello all,
>>>
>>> I would like to ask for your help with the following question.
>>>
>>> File: ApplicationConstants
>>> Resource IDs:
>>> outAverageLinkShareOutOfSyncPopUp
>>> outAverageRealTimeOutOfSyncPopUp
>>> outAverageUpperLimitOutOfSyncPopUp
>>> Strings:
>>> QoS Out average link share
>>> QoS Out average real time
>>> QoS Out average upper limit
>>> Question: Could anyone please tell us where these strings appear in
>>> the GUI and how they are used?
>>>
>>> Kind regards,
>>>
>>> Yuko Katabami
>>>
>>>
>>>
>>>
>>> --
>>> Regards,
>>> Moti
Contributing to oVirt
by Somansh Arora
Hi All,
I am a Linux admin from India, aged 24, and have been handling Linux, AIX and
HP-UX servers for a reputed company for the last 2 years. I would now like to
give some of my time to contributing to open projects, and I liked oVirt.
Please let me know if I could be of any help in any way. A response from
anyone on your team would be appreciated.
--
Thanks & Regards
Somansh Arora
+91- 9582671964
[ATN] Switch to Java 8 in master branch merged
by Juan Hernández
Hello,
The change that switches the master branch to Java 8 has been merged:
core: Use Java 8 as source and target
https://gerrit.ovirt.org/46288
Make sure that you use Java 8 in your development and runtime environments.
This change doesn't affect other branches, only master.
Regards,
Juan Hernandez
--
Business address: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta
3ºD, 28016 Madrid, Spain
Registered in the Madrid Mercantile Registry – C.I.F. B82657941 - Red Hat S.L.
[ANN] oVirt 3.5.6 First Release Candidate is now available for testing
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability
of the First oVirt 3.5.6 Release Candidate for testing, as of October 29th,
2015.
This release is available now for
Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and
Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar).
This release supports Hypervisor Hosts running
Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar),
Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar) and Fedora 21.
This release includes updated packages for:
- oVirt Engine
- oVirt Engine client
- oVirt Engine SDK
- oVirt Hosted Engine HA
- QEMU KVM and its dependencies
- VDSM
See the release notes [1] for a list of fixed bugs.
Please refer to release notes [1] for Installation / Upgrade instructions.
A new oVirt Live ISO is already available [2].
Please note that mirrors [3] usually need about one day to synchronize.
Please refer to the release notes for known issues in this release.
Please add yourself to the test page[4] if you're testing this release.
[1] http://www.ovirt.org/OVirt_3.5.6_Release_Notes
[2] http://resources.ovirt.org/pub/ovirt-3.5-pre/iso/ovirt-live/
[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors
[4] http://www.ovirt.org/Testing/oVirt_3.5.6_Testing
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Fwd: System V to systemd unit migration
by Sandro Bonazzola
FYI
---------- Forwarded message ----------
From: Stephen Gallagher <sgallagh(a)redhat.com>
Date: Wed, Oct 28, 2015 at 7:12 PM
Subject: System V to systemd unit migration
To: devel-announce(a)lists.fedoraproject.org
At the FESCo meeting on October 14th, it was decided that the time
has come to finally complete the migration away from System V init
scripts. What does this mean for you as a packager?
When we branch from Rawhide for Fedora 24 (currently scheduled for
February 2nd, 2016), we will be immediately retiring any package in
the Fedora collection that relies on a System V init script instead of
a systemd unit file to start. We will consider reasonable requests to
delay this action on a case-by-case basis for anyone who submits a
ticket to FESCo no later than 23:59 UTC on January 12, 2016.
There is no plan to remove System V compatibility from Fedora, due to
the necessity of supporting legacy third-party software. We are also
not making any changes at all to the EPEL project. EPEL packages may
continue to use System V init scripts (in the case of EPEL 5 and EPEL
6, there is no other option).
We will be going through the Wiki packaging documentation over the
next month and updating all related entries to reflect this change in
policy.
This will be the first such announcement. The Change Wrangler will be
sending additional reminders from now until February 2nd, so no one
will be surprised by this event.
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com