Re: [ovirt-devel] Failed to run "make rpm" on VDSM latest master
by Saggi Mizrahi
We have ioprocess in EPEL and brew builds for el6/el7, and builds for f19/f20
in bodhi.
There is no such thing as "pushing to stable". In order to be deemed stable,
a package has to sit there for 30 days without negative karma, or get enough (3)
positive karma.
We have notified the mailing list that ioprocess is in need of
karma. In any case, you should get used to fetching ioprocess
builds from updates-testing when working on the master branch, as
we are not going to wait a month before adding capabilities to VDSM
after adding a feature to ioprocess (or any other external dependency).
There is no reason for VDSM's master branch to only use RPMs from stable
repos. Quite the contrary, the reason that updates-testing exists is so
that consumers of the RPM can test the impending release and make sure it
works before it is delivered to the general public.
Furthermore, after using a build from updates-testing with VDSM and seeing
that it works, it would be most appreciated if you gave karma
to accelerate the process of getting a release into the stable repo.
For more information:
https://fedoraproject.org/wiki/QA:Updates_Testing
Links to builds:
https://brewweb.devel.redhat.com/packageinfo?packageID=47301
https://admin.fedoraproject.org/updates/search/ioprocess?_csrf_token=adc8...
>
> On Thu, Jun 26, 2014 at 01:38:45PM -0400, Nir Soffer wrote:
> > ----- Original Message -----
> > > From: "Dan Kenigsberg" <danken(a)redhat.com>
> > > To: "Nir Soffer" <nsoffer(a)redhat.com>
> > > Cc: "ybronhei" <ybronhei(a)redhat.com>, "Oved Ourfali" <ovedo(a)redhat.com>,
> > > "Eli Mesika" <emesika(a)redhat.com>, "Yeela
> > > Kaplan" <ykaplan(a)redhat.com>, "Saggi Mizrahi" <smizrahi(a)redhat.com>
> > > Sent: Thursday, June 26, 2014 8:11:59 PM
> > > Subject: Re: Failed to run "make rpm" on VDSM latest master
> > >
> > > On Thu, Jun 26, 2014 at 11:59:13AM -0400, Nir Soffer wrote:
> > > > ----- Original Message -----
> > > > > From: "ybronhei" <ybronhei(a)redhat.com>
> > > > > To: "Oved Ourfali" <ovedo(a)redhat.com>, "Eli Mesika"
> > > > > <emesika(a)redhat.com>
> > > > > Cc: "Nir Soffer" <nsoffer(a)redhat.com>, "Yeela Kaplan"
> > > > > <ykaplan(a)redhat.com>, "Saggi Mizrahi" <smizrahi(a)redhat.com>
> > > > > Sent: Thursday, June 26, 2014 2:22:16 PM
> > > > > Subject: Re: Failed to run "make rpm" on VDSM latest master
> > > > >
> > > > > On 06/26/2014 07:53 AM, Oved Ourfali wrote:
> > > > > > cc-ing also Yeela and Saggi.
> > > > > >
> > > > > > Oved
> > > > > >
> > > > > > ----- Original Message -----
> > > > > >> From: "Eli Mesika" <emesika(a)redhat.com>
> > > > > >> To: "Yaniv Bronheim" <ybronhei(a)redhat.com>, "Nir Soffer"
> > > > > >> <nsoffer(a)redhat.com>
> > > > > >> Cc: "Oved Ourfalli" <oourfali(a)redhat.com>
> > > > > >> Sent: Wednesday, June 25, 2014 11:34:24 PM
> > > > > >> Subject: Failed to run "make rpm" on VDSM latest master
> > > > > >>
> > > > > >>
> > > > > >> Hi, I had applied Nir patch and tried to create rpms , and this is
> > > > > >> what I
> > > > > >> have got :
> > > > > >>
> > > > > >> error: Failed build dependencies:
> > > > > >> python-ioprocess >= 0.5-1 is needed by
> > > > > >> vdsm-4.15.0-185.git5b501bf.el6.x86_64
> > > > > >>
> > > > > >>
> > > > > >> Then I had switched to the master branch and got the same
> > > > > >>
> > > > > >> How can I install this new dependency ?
> > > > > >>
> > > > > >>
> > > > > >> Thanks
> > > > > >> Eli Mesika
> > > > > >>
> > > > > >>
> > > > > solved?
> > > > > just yum install python-ioprocess
> > > > > i will update vdsm_Developers wiki page
> > > >
> > > > # yum install python-ioprocess
> > > > No package python-ioprocess available.
> > > >
> > > > Looks like broken (again) master to me
> > >
> > > # yum --enablerepo=epel-testing install python-ioprocess
> > >
> > > Fixed this for me. Saggi/Yeela, could you push it to stable, and produce
> > > a
> > > f19 build? That would help plenty of us.
> >
> > This patch should fix the situation for everyone, allowing easy testing.
> > http://gerrit.ovirt.org/29304
> >
> > Nir
>
XML benchmarks
by Francesco Romani
Hi,
Due to the recent discussion (http://gerrit.ovirt.org/#/c/28712/), and as part
of the ongoing focus on scalability and performance (http://gerrit.ovirt.org/#/c/17694/ and many others),
I took the chance to run a very quick and dirty benchmark to see what it really costs
to do XML processing in the sampling threads (thanks to Nir for the kickstart!) and,
in general, how much the XML processing costs.
Please find attached the test script and the example XML
(real one made by VDSM master on my RHEL6.5 box).
On my laptop:
$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 58
Model name: Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz
Stepping: 9
CPU MHz: 1359.375
CPU max MHz: 3600.0000
CPU min MHz: 1200.0000
BogoMIPS: 5786.91
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 4096K
NUMA node0 CPU(s): 0-3
8 GiBs of RAM, running GNOME desktop and the usual development stuff
xmlbench.py linuxvm1.xml MODE 300
MODE is either 'md' (minidom) or 'cet' (cElementTree).
This runs $NUMTHREADS threads fast and loose, without synchronization.
We can actually get this behaviour if a customer just mass-starts VMs;
in general I expect some clustering of the sampling activity, not a nice, evenly interleaved
time sequence.
CPU measurement: just opened a terminal and ran 'htop' in it.
CPU profile: clustered around the sampling interval. Usage negligible most of the time, peaking on sampling as shown below.
300 VMs
minidom: ~38% CPU
cElementTree: ~5% CPU
500 VMs
minidom: ~48% CPU
cElementTree: ~6% CPU
1000 VMs
python thread error :)
File "/usr/lib64/python2.7/threading.py", line 746, in start
_start_new_thread(self.__bootstrap, ())
thread.error: can't start new thread
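The minidom vs cElementTree gap can be reproduced standalone. A minimal sketch (assuming Python 3, where cElementTree's C accelerator was folded into xml.etree.ElementTree, so plain ElementTree plays the 'cet' role; the document is illustrative, not the attached script):

```python
import timeit
import xml.dom.minidom
import xml.etree.ElementTree as ET

# Tiny stand-in for the attached libvirt domain XML.
DOC = ("<domain type='kvm'><name>vm1</name>"
       + "".join("<dev id='%d'/>" % i for i in range(200))
       + "</domain>")

def bench(parse, n=200):
    # Seconds needed to parse DOC n times with the given entry point.
    return timeit.timeit(lambda: parse(DOC), number=n)

md = bench(xml.dom.minidom.parseString)
et = bench(ET.fromstring)
print("minidom: %.4fs  ElementTree: %.4fs" % (md, et))
```

On CPython, ElementTree usually parses this several times faster than minidom, which is consistent with the CPU figures above.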
I think this is another proof (if we needed more of them) that:
* we _really need_ to move away from the one-thread-per-VM model -> http://gerrit.ovirt.org/#/c/29189/ and friends! Let's fire up the discussion!
* we should move to cElementTree anyway in the near future: faster processing, better scaling, nicer API.
It is also a pet peeve of mine; I have some patches floating around, but we still need some preparation work in the virt package.
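As a hedged illustration of the first point (a sketch, not VDSM's actual design): with a fixed-size pool, 1000 VMs no longer means 1000 OS threads, so the thread.error above cannot occur:

```python
from concurrent.futures import ThreadPoolExecutor
import xml.etree.ElementTree as ET

DOC = "<domain type='kvm'><name>vm%d</name></domain>"

def sample(vm_id):
    # One sampling "tick" for a single VM: parse its domain XML.
    return ET.fromstring(DOC % vm_id).findtext("name")

# 1000 VMs serviced by only 8 worker threads.
with ThreadPoolExecutor(max_workers=8) as pool:
    names = list(pool.map(sample, range(1000)))

print(len(names))  # 1000
```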
--
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani
Attachment: xmlbench.py (base64 payload decoded here for readability)

#!/usr/bin/env python

import sys
import threading
import time
#import lxml.etree
import xml.dom.minidom
import xml.etree.cElementTree
import xml.etree.ElementTree


class Worker(threading.Thread):
    def __init__(self, func, xml, delay, numruns):
        super(Worker, self).__init__()
        self.daemon = True
        self.func = func
        self.xml = xml
        self.delay = delay
        self.numruns = numruns

    def mustgo(self):
        if self.numruns is not None:
            self.numruns -= 1
            if self.numruns <= 0:
                return False
        return True

    def run(self):
        print '%s delay=%i starting!' %(self.name, self.delay)
        while self.mustgo():
            time.sleep(self.delay)
            print '%s go' %(self.name)
            self.func(self.xml)
        print '%s done!' %(self.name)


PARSERS = {
    'md': xml.dom.minidom.parseString,
#    'lx': lxml.etree.fromstring,
    'et': xml.etree.ElementTree.fromstring,
    'cet': xml.etree.cElementTree.fromstring
}


def runner(xml, mode, nthreads, delay, numruns):
    workers = []
    for i in range(nthreads):
        w = Worker(PARSERS[mode], xml, delay, numruns)
        w.start()
        workers.append(w)

    if numruns is None:
        while True:
            time.sleep(1.0)
    else:
        for w in workers:
            w.join()


def _usage():
    print "usage: xmlbench xmlpath mode nthreads [delay [numruns]]"
    print "available modes: %s" % ' '.join(PARSERS.keys())

def _main(args):
    if len(args) < 3:
        _usage()
        sys.exit(1)
    else:
        xmlpath = args[0]
        mode = args[1]
        nthreads = int(args[2])
        delay = int(args[3]) if len(args) > 3 else 15
        numruns = args[4] if len(args) > 4 else None
        if mode not in PARSERS:
            _usage()
            sys.exit(2)
        with open(xmlpath, 'rt') as xml:
            runner(xml.read(), mode, nthreads, delay, numruns)

if __name__ == "__main__":
    _main(sys.argv[1:])
Attachment: linuxvm1.xml — the example libvirt domain XML (a KVM guest named "F20_C1", with virtio disk, spice graphics, and vdsm/qemu guest-agent channels), as generated by VDSM master on a RHEL 6.5 box; base64 payload omitted here for readability.
HP ILO2 , fence not working, with SSH port specified, a Bug?
by mad Engineer
Hi, I have an old HP server with ILO2.
On the manager I configured power management and set the SSH port to use for
ILO2.
To check SSH I manually ssh to the ILO, and it works fine,
but the power management test always fails with "*Unable to connect/login to
fencing device*".
The log shows it is using fence_ilo instead of fence_ilo2:
Thread-18::DEBUG::2014-06-30 08:23:14,106::API::1133::vds::(fenceNode)
fenceNode(addr=XXXX,port=,*agent=ilo*
,user=Administrator,passwd=XXXX,action=status,secure=,options=ipport=22
ssl=no)
Thread-18::DEBUG::2014-06-30 08:23:14,741::API::1159::vds::(fenceNode) rc 1
in agent=*fence_ilo*
ipaddr=xxxxxxxxxx
login=Administrator
action=status
passwd=XXXX
ipport=22
ssl=no out err *Unable to connect/login to fencing device*
*Manually testing*
fence_ilo -a xxxxxx -l Administrator -p xxxxx -o status
Status: ON
but with the SSH port specified, i.e. *-u*:
fence_ilo -a xxxxxx -l Administrator -p xxxxx -o status -u 22
*Unable to connect/login to fencing device*
So when we specify the SSH port it fails, and without the SSH port it works.
This is the case with ILO2 as well;
for ilo3 and ilo4 it works, since they do not ask for an SSH port.
Is this a bug?
Thanks,
Project vdsm-jsonrpc-java backup maintainer
by Sandro Bonazzola
Hi,
It looks like only one person has +2 / merge rights on vdsm-jsonrpc-java.
I think every project should have a "backup" maintainer.
Since pkliczewski seems to be the only committer there, I would like to propose him as a maintainer too.
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Tune HA timing,is it possible
by mad Engineer
Hi,
Is it possible to tune the time required to reboot a VM in case of host
failure?
In our test we powered off one of the hosts.
The manager still showed both the guest and the host as up for around 5 minutes,
and after that it took another 4-5 minutes to start the VM.
Is there any tunable parameter that can be modified to bring down the time
required to recognize host failures?
Thanks
[QE][ACTION REQUIRED] postponing beta release due to basic sanity test failure
by Sandro Bonazzola
Hi,
while performing basic sanity test on 3.5.0 beta candidate I've seen the following bugs:
Bug 1113882 - [ ERROR ] Failed to execute stage 'Closing up': [ERROR]::oVirt sdk is disconnected from the server.
Bug 1113891 - storage domains are listed twice
Bug 1113898 - New Virtual Machine dialog drop down list are all empty
Bug 1113898 makes it impossible to create a new VM, so the basic sanity test failed.
We need to postpone the beta release until it is fixed.
repoclosure test failed on F19: http://lists.ovirt.org/pipermail/users/2014-June/025475.html
Since the candidate beta build failed the sanity test, I'm going to rebuild the engine from master and repeat the sanity test on the new build.
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Engine - Changes to various issues around commands and queries construction and internal execution - please read
by Yair Zaslavsky
Hi all,
Thanks to the help of Alon, Oved, Tal, Moti, Arik and others, the following changes were introduced:
1. Internal commands invocation -
When invoking an internal command from a command, please use the following:
instead of Backend.getInstance().runInternalAction..., use
runInternalAction - a new method that was introduced at CommandBase.
This method has two variants - one that accepts a command context, and one that does not:
runInternalAction(VdcActionType, VdcActionParametersBase, CommandContext)
and
runInternalAction(VdcActionType, VdcActionParametersBase)
If no CommandContext is passed, the context of the calling command will be cloned and set at the child command.
If a CommandContext is passed, it is the responsibility of the caller to clone it; however, this gives the caller some degree of
freedom to determine whether various parts of the context will be cloned or not.
Examples:
runInternalAction(VdcActionType.AddPermission, permissionParams) has the same effect as: runInternalAction(VdcActionType.AddPermission, permissionParams, getContext().clone())
runInternalAction(VdcActionType.AddPermission, permissionParams, getContext().clone().withoutCompensationContext()) - will reset the compensation context and let the child command determine the value of the compensation context (at the handleTransactivity method).
The complete list of "context alteration methods" is:
withCompensationContext(CompensationContext), withoutCompensationContext()
withLock(EngineLock), withoutLock()
withExecutionContext(ExecutionContext), withoutExecutionContext() - bear in mind that all of these follow the method-chaining "design pattern" [1] (I would like to thank Moti for the naming suggestions)
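The alteration methods follow the method-chaining pattern: each one mutates the context and returns it, so calls can be strung together after a clone. A language-neutral sketch in Python (names here are illustrative, not the engine's actual Java API):

```python
import copy

class CommandContext:
    def __init__(self, lock=None, compensation=None):
        self.lock = lock
        self.compensation = compensation

    def clone(self):
        # Shallow copy: the child command gets its own context object.
        return copy.copy(self)

    # Each mutator returns self, so calls can be chained.
    def with_lock(self, lock):
        self.lock = lock
        return self

    def without_compensation_context(self):
        self.compensation = None
        return self

ctx = CommandContext(lock="L", compensation="C")
child = ctx.clone().without_compensation_context().with_lock("L2")
print(child.lock, child.compensation)  # L2 None
print(ctx.lock, ctx.compensation)      # L C  (caller's context untouched)
```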
Two methods for running an internal action with a context for tasks were created:
runInternalActionWithTasksContext(VdcActionType, VdcActionParametersBase)
runInternalActionWithTasksContext(VdcActionType, VdcActionParametersBase, EngineLock)
These methods use ExecutionHandler.createDefaultContextForTasks to create the relevant command context to be passed to the child command.
runInternalMultipleActions was introduced to CommandBase in a similar manner, with three versions:
runInternalMultipleActions(VdcActionType, ArrayList<VdcActionParametersBase>)
runInternalMultipleActions(VdcActionType, ArrayList<VdcActionParametersBase>, ExecutionContext)
runInternalMultipleActions(VdcActionType, ArrayList<VdcActionParametersBase>, CommandContext)
2. Queries invocation -
runInternalQuery(VdcQueryType, VdcQueryParametersBase) was introduced to CommandBase.
Basically, it takes the engine context from the current command context and runs the internal query with it.
EngineContext is the context that should hold all the attributes common to our engine flows - currently it holds the engineSessionId; we are working towards moving correlationId into it as well.
3. Commands & Queries coding
Each internal query should have a ctor that takes the parameters, and also the engine context.
As some queries are both internal and non-internal, you may have two ctors - one with parameters only, and one with parameters and an EngineContext.
For example:
public class GetUnregisteredDiskQuery<P extends GetUnregisteredDiskQueryParameters> extends QueriesCommandBase<P> {

    public GetUnregisteredDiskQuery(P parameters) {
        this(parameters, null);
    }

    public GetUnregisteredDiskQuery(P parameters, EngineContext context) {
        super(parameters, context);
    }
}

Notice that the ctor without the context calls the one with the context.
The same happens at commands:

    public RemovePermissionCommand(T parameters) {
        this(parameters, null);
    }

    public RemovePermissionCommand(T parameters, CommandContext commandContext) {
        super(parameters, commandContext);
    }
4. runVdsCommand was introduced to CommandBase as well
runVdsCommand(VDSCommandType, VdsCommandParameters) - currently this just runs the VDS command on VdsBroker; we are working on propagating the engine context via VdsBroker as well.
Please use the above in your code. If you see any issues, or places where it is problematic to use, feel free to contact me.
[1]
http://en.wikipedia.org/wiki/Method_chaining
Node down,but ovirt still shows VM as up
by mad Engineer
Hi,
I am using a Cisco UCS C200 M2 as the host, running CentOS 6.5 and KVM.
Power management is not working properly, hence even with the node down oVirt shows
the VM as still up, with the VM's uptime increasing (on the manager).
If I continue and save the changes, it causes problems:
1. HA is not working (the node status changed to Non Responsive, but the VM status
is still up!)
2. Restarting the host gives wrong information - it shows the host as rebooting, but
actually nothing is happening to the host!
On ovirtmanager edited Power management and chose cisco_ucs with proper
authentication to CIMC,but when clicked on test,it shows
*Test Failed, Failed: You have to enter plug number Please use '-h' for
usage *
What is this plug,i couldn't find anything in CIMC
This can be the reason for failure of HA and restart
Can someone please help me fix this
Thanks
Stats for oVirt Downloads: Jan-May 2014
by Brian Proffitt
All:
For some time, traffic statistics for the sites in the ovirt.org domain (lists, resources, gerrit, and linode01) have been collected and organized using awstats[1] at stats.ovirt.org[2]. From the data provided for resources, I have been able to put together what should be a fairly definitive set of statistics for software downloads within the oVirt Project.
Methodology
The statistics that were analyzed were for five parts of the project:
* Engine
* Live
* Node
* Engine Reports
* Engine dwh
Downloads of the Engine RPM file, it was determined, would be indicative of actual oVirt installs. The allinone RPMs were not analyzed at this time, since installing this RPM is a choice made during the installation process itself. Tracking the Engine Reports and dwh RPMs was done to determine the popularity of these two tools and to ensure their numbers were comparable.
To count downloads, data was gathered each month listing the total downloads of every file - a number derived from the total hits each file had, minus any 206 hits, which indicate incomplete downloads. Key files for Engine, Engine Reports, and Engine dwh were identified, and the data was filtered to include counts for each of these files, in whatever versions were released.
Tracking of Live and Node RPMs was already set up within the awstats reporting, and those numbers were taken directly from awstats each month.
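The counting rule (total hits on a file, minus HTTP 206 partial-content hits) can be sketched like this, over a hypothetical simplified log format (not awstats' actual data layout):

```python
import re

# Hypothetical access-log excerpt: method, path, HTTP status.
LOG = """\
GET /pub/ovirt-engine.rpm 200
GET /pub/ovirt-engine.rpm 206
GET /pub/ovirt-engine.rpm 200
GET /pub/ovirt-live.iso 200
"""

def downloads(log, name):
    # A download = any hit on the file that is not a 206 partial fetch.
    hits = re.findall(r"GET (\S*%s) (\d+)" % re.escape(name), log)
    return sum(1 for _, status in hits if status != "206")

print(downloads(LOG, "ovirt-engine.rpm"))  # 2
```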
If there is any part of this methodology that is in error, feedback is very much appreciated.
Presentation of Data
Data was aggregated using Pivot Tables in Google Docs, and the application of SUMIFS functions in the same spreadsheet document. This document is still being formatted into a more presentable form, but until then, I wanted to deliver some preliminary results to the community for the first five months of the year.
It will be noted that in general, download numbers are on the rise, particularly around the time oVirt 3.4 was released. There are significant gaps in downloads of Engine Reports and Engine dwh. I am still investigating the cause of this lack of data.
Engine
Jan 5815
Feb 4251
Mar 1980
Apr 10509
May 12043
Engine Reports
Jan 0
Feb 1209
Mar 0
Apr 10282
May 9647
Engine dwh
Jan 0
Feb 603
Mar 0
Apr 9974
May 10311
Live
Jan 845
Feb 757
Mar 739
Apr 1981
May 1187
Node
Jan 95
Feb 757
Mar 739
Apr 1981
May 1187
Thanks to Alon Bar-Lev and Michael Scherer for their invaluable assistance with this data gathering and reporting.
BKP
[1] http://awstats.sourceforge.net/
[2] http://stats.ovirt.org/
--
Brian Proffitt
oVirt Community Manager
Project Atomic Community Lead
Open Source and Standards, Red Hat - http://community.redhat.com
Phone: +1 574 383 9BKP
IRC: bkp @ OFTC
multipath configuration
by Yeela Kaplan
Hi,
Currently multipath.conf is rotated each time we reconfigure it.
We'd like to change this behaviour so that the current configuration is commented out
instead, and we stop rotating (in the same manner as the libvirt configuration works today).
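A hedged sketch of the proposed behaviour (illustrative only, not VDSM's actual configurator code): instead of renaming the old file aside, prefix its lines with '#' and append the freshly generated stanza:

```python
def reconfigure(old_text, new_conf):
    """Comment out the previous configuration in place instead of
    rotating the file, then append the new configuration."""
    commented = "\n".join(
        line if line.startswith("#") else "# " + line
        for line in old_text.splitlines()
    )
    return commented + "\n\n" + new_conf

old = "defaults {\n    polling_interval 5\n}"
new = "defaults {\n    polling_interval 10\n}"
print(reconfigure(old, new))
```

This keeps the previous settings visible in the file for reference, which is the main argument for the change.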
Does anybody have any comments for or against?
Thanks in advance,
Yeela