Network Address Change
by Paul.LKW
Hi all,
I have a case where I need to change the oVirt host and engine IP addresses
due to a data center decommission. On the hosted-engine host I checked and
found some files I could change, in ovirt-hosted-engine/hosted-engine.conf:
ca_subject="O=simple.com, CN=1.2.3.4"
gateway=1.2.3.254
And of course I need to change the ovirtmgmt interface IP too. I think
changing the lines above could do the trick, but where can I change the
IPs of the other hosts in the cluster?
I think I will lose all the hosts once the hosted-engine host IP is
changed, since it will be in a different subnet.
Is there any command-line tool that can do this, or can someone with such
experience share?
Best Regards,
Paul.LKW
2 years, 4 months
Vm suddenly paused with error "vm has paused due to unknown storage error"
by Jasper Siero
Hi all,
Since we upgraded our oVirt nodes to CentOS 7, a VM (not a specific one, but never more than one at a time) will sometimes pause suddenly with the error "VM ... has paused due to unknown storage error". It has now happened twice in a month.
The oVirt node uses SAN storage for the VMs running on it. When a specific VM pauses with an error, the other VMs keep running without problems.
The VM runs without problems after unpausing it.
Versions:
CentOS Linux release 7.1.1503
vdsm-4.14.17-0
libvirt-daemon-1.2.8-16
vdsm.log:
VM Channels Listener::DEBUG::2015-10-25 07:43:54,382::vmChannels::95::vds::(_handle_timeouts) Timeout on fileno 78.
libvirtEventLoop::INFO::2015-10-25 07:43:56,177::vm::4602::vm.Vm::(_onIOError) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::abnormal vm stop device virtio-disk0 error eother
libvirtEventLoop::DEBUG::2015-10-25 07:43:56,178::vm::5204::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::event Suspended detail 2 opaque None
libvirtEventLoop::INFO::2015-10-25 07:43:56,178::vm::4602::vm.Vm::(_onIOError) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::abnormal vm stop device virtio-disk0 error eother
...........
libvirtEventLoop::INFO::2015-10-25 07:43:56,180::vm::4602::vm.Vm::(_onIOError) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::abnormal vm stop device virtio-disk0 error eother
specific error part in libvirt vm log:
block I/O error in device 'drive-virtio-disk0': Unknown error 32758 (32758)
...........
block I/O error in device 'drive-virtio-disk0': Unknown error 32758 (32758)
engine.log:
2015-10-25 07:44:48,945 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-40) [a43dcc8] VM diataal-prod-cas1 77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb moved from
Up --> Paused
2015-10-25 07:44:49,003 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-40) [a43dcc8] Correlation ID: null, Call Stack: null, Custom Event
ID: -1, Message: VM diataal-prod-cas1 has paused due to unknown storage error.
Has anyone experienced the same problem or knows a way to solve this?
Kind regards,
Jasper
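A couple of generic, read-only checks on the host can help correlate such a pause with a SAN path event (a hedged suggestion, not a confirmed diagnosis for this case):

  # list domains and their state; an affected VM shows up as "paused"
  virsh -r list --all
  # check whether multipath lost paths to the SAN around the time of the pause
  multipath -ll
  grep -i 'remaining active paths' /var/log/messages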
5 years, 2 months
[Users] Problem Creating "oVirtEngine" Machine
by Richie@HIP
I can't agree with you more. Modifying every box's or virtual machine's
HOSTS file with a FQDN and IP SHOULD work, but in my case it is not.
There are several reasons I've come to believe could be the problem
during my trial-and-error testing and learning.

FIRST - MACHINE IPs.

The machines' "Names" were not appearing in the Microsoft Active
Directory DHCP along with their assigned IPs; in other words, the DHCP
just showed an "Assigned IP", equal to the Linux machine's IP, with an
<empty> (i.e. blank, none, zilch, plain old "no-letters-or-numbers")
"Name" in the "Name" (i.e. the machine's "network name", or the
FQDN-value used by the Windows AD DNS service) column.

If your IP is appearing with an <empty> "Name", there is no "host name"
to associate with the IP, which makes it difficult to define a FQDN; and
that isn't very useful if we're going to use the HOSTS files on all
participating machines in an oVirt installation.

I kept banging my head for three (3) long hours trying to find the
problem.

In Fedora 18, I couldn't find where the "network name" of the machine
could be defined.

I tried putting the "Additional Search Domains" and/or "DHCP Client ID"
in the Fedora 18 Desktop, under "System Settings > Hardware > Network >
Options > IPv4 Settings".

The DHCP went crazy, showing an "Aberrant-MAC-Address" (i.e. a really
long string value where the machine's MAC address should be), and we knew
the MAC address, as we had obtained it using "ifconfig" on the machine
getting its IP from the DHCP. So we reverted these entries, rebooted, and
got an assigned IP with the proper MAC address, but still no "Name".

I kept wandering around the "Settings", seeing which one made sense, but
what the heck, I went for it.

Under "System Settings > System > Details" I found the information about
GNOME and the machine's hardware.

There was a field for "Device Name" that originally had
"localhost.localdomain"; I changed the value to "ovirtmanager", and under
"Graphics" changed "Forced Fallback Mode" to "ON".

I also installed all the Kerberos libraries and clients (i.e.
authconfig-gtk, authhub, authhub-client, krb5-apple-clents,
krb5-auth-dialog, krb5-workstation, pam-kcoda, pam-krb5, root-net.krb5)
and rebooted.

VOILA...!!!

I don't know if it was the change of "Device Name" from
"localhost.localdomain" to "ovirtengine", the Kerberos libraries install,
or both; but finally the MS AD DHCP was showing the assigned IP, the
machine "Name" and the proper MAC address. Regardless, setting the
machine's "network name" under "System Settings > System > Details >
Device Name", with no explanation of what "Device Name" meant or was used
for, was the last place I would have imagined this network setting could
be defined.

NOTE - Somebody has to try the two steps I did together separately, to
see which one is the real problem-solver; for me it is working, and "if
it ain't broke, don't fix it..."

Now that I have the DHCP / IP thing sorted, I have to do the DNS stuff.

To this point, I've addressed the DHCP and "Network Name" of the IP lease
(required for the DNS to work). This still doesn't completely explain
why modifying the HOSTS file (allowing me to set an IP and a non-DNS
FQDN) lets me install the oVirtEngine "as long as I do not use the
default HTTPd service parameters as suggested by the install". By using
the HOSTS file to "define" FQDNs, AND NOT applying the default HTTPd
changes suggested by the installer, I'm able to install the oVirtEngine
(given that I use ports 8700 and 8701) and reach the "oVirtEngine Welcome
Screen", BUT NONE of the "oVirt Portals" work... YET...!!!

More to come during the week.

Richie

José E ("Richie") Piovanetti, MD, MS
M: 787-615-4884 | richiepiovanetti(a)healthcareinfopartners.com
On Aug 2, 2013, at 3:10 AM, Joop <jvdwege(a)xs4all.nl> wrote:
> Hello Ritchie,
>
>> In a conversation via IRC, someone suggested that I activate "dnsmask"
>> to overcome what appears to be a DNS problem. I'll try that other
>> possibility once I get home later today.
>>
>> In the mean time, what do you mean by "fixing the hostname"...? I
>> opened and fixed the HOSTNAMES and changed it from
>> "localhost-localdomain" to "localhost.localdomain", and that made no
>> difference. Albeit, after changing it I didn't restart, remove
>> ovirtEngine (using "engine-cleanup") and reinstall via "engine-setup".
>> Is that what you mean...?
>>
>> In the mean time, the fact that even if I resolve the issue of
>> oVirtEngine I will not be able to connect to the oVirt Nodes unless I
>> have DNS resolution apparently means I should do something about
>> resolving via DNS in my home LAN (i.e. implement some sort of "DNS
>> cache" so I can resolve my home computers via DNS inside my LAN).
>>
>> Any suggestions are MORE THAN WELCOME...!!!
>>
>
> Having set up ovirt more times than I can count right now, I share your
> feeling that it isn't always clear why things are going wrong, but in
> this case I suspect that there is a rather small thing missing.
> In short: if you set up ovirt-engine, either using virtualbox or on real
> hardware, and you give your host a meaningful name AND you add that info
> to your /etc/hosts file, then things SHOULD work; no need for dnsmasq or
> even bind. It would make things easier once you start adding virt hosts
> to your infrastructure, since you will need to duplicate these actions
> on each host (add the engine name/ip to each host, add each host to the
> others, and all hosts to the engine).
>
> Just ask if you need more assistance and I will write down a small howto
> that should work out of the box, else I might have some time to see if I
> can get things going.
>
> Regards,
>
> Joop
>
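For illustration, the kind of /etc/hosts entries Joop describes might look like this (a sketch; names and addresses are made up):

  # /etc/hosts on the engine and on every virt host
  192.168.1.10   ovirtengine.example.lan   ovirtengine
  192.168.1.21   node01.example.lan        node01
  192.168.1.22   node02.example.lan        node02

On Fedora 18 the machine's name can also be set from a shell with "hostnamectl set-hostname ovirtengine.example.lan", which appears to be the same setting as the "Device Name" field mentioned above.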
5 years, 12 months
Re: [ovirt-users] Question about the ovirt-engine-sdk-java
by Michael Pasternak
Hi Salifou,
Actually the java sdk is intentionally hiding transport-level internals so that developers can stay in the java domain. If your headers are static, the easiest way would be to use a reverse proxy in the middle to intercept requests.
Can you tell me why you need this?

On Friday, October 16, 2015 1:14 AM, Salifou Sidi M. Malick <ssidimah(a)redhat.com> wrote:
Hi Michael,
I have a question about the ovirt-engine-sdk-java.
Is there a way to add custom request headers to each RHEVM API call?
Here is an example of a request that I would like to do:
$ curl -v -k \
      -H "ID: user1(a)ad.xyz.com" \
      -H "PASSWORD: Pwssd" \
      -H "TARGET: kobe" \
      https://vm0.smalick.com/api/hosts
I would like to add ID, PASSWORD and TARGET as HTTP request headers.
Thanks,
Salifou
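To make the reverse-proxy suggestion concrete, a minimal sketch of an nginx fragment that injects static headers in front of the engine (host names, ports and header values are placeholders, not a tested configuration):

  server {
      listen 8080;
      server_name sdk-proxy.example.com;
      location / {
          # headers added to every request before it reaches the engine
          proxy_set_header ID       "user1@ad.xyz.com";
          proxy_set_header PASSWORD "Pwssd";
          proxy_set_header TARGET   "kobe";
          proxy_pass https://vm0.smalick.com;
      }
  }

The Java SDK would then be pointed at the proxy instead of the engine URL.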
5 years, 12 months
[Users] oVirt Weekly Sync Meeting Minutes -- 2012-05-23
by Mike Burns
Minutes: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-05-23-14.00.html
Minutes (text): http://ovirt.org/meetings/ovirt/2012/ovirt.2012-05-23-14.00.txt
Log: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-05-23-14.00.log.html
=========================
#ovirt: oVirt Weekly Sync
=========================
Meeting started by mburns at 14:00:23 UTC. The full logs are available
at http://ovirt.org/meetings/ovirt/2012/ovirt.2012-05-23-14.00.log.html
.
Meeting summary
---------------
* agenda and roll call (mburns, 14:00:41)
* Status of next release (mburns, 14:05:17)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=822145 (mburns,
14:05:29)
* AGREED: freeze date and beta release delayed by 1 week to 2012-06-07
(mburns, 14:12:33)
* post freeze, release notes flag needs to be used where required
(mburns, 14:14:21)
* https://bugzilla.redhat.com/show_bug.cgi?id=821867 is a VDSM blocker
for 3.1 (oschreib, 14:17:27)
* ACTION: dougsland to fix upstream vdsm right now, and open a bug on
libvirt augeas (oschreib, 14:21:44)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=822158 (mburns,
14:23:39)
* assignee not available, update to come tomorrow (mburns, 14:24:59)
* ACTION: oschreib to make sure BZ#822158 is handled quickly
(oschreib, 14:25:29)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=824397 (mburns,
14:28:55)
* 824397 expected to be merged prior next week's meeting (mburns,
14:29:45)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=824420 (mburns,
14:30:15)
* tracker for node based on F17 (mburns, 14:30:28)
* blocked by util-linux bug currently (mburns, 14:30:40)
* new build expected from util-linux maintainer in next couple days
(mburns, 14:30:55)
* sub-project status -- engine (mburns, 14:32:49)
* nothing to report outside of blockers discussed above (mburns,
14:34:00)
* sub-project status -- vdsm (mburns, 14:34:09)
* nothing outside of blockers above (mburns, 14:35:36)
* sub-project status -- node (mburns, 14:35:43)
* working on f17 migration, but blocked by util-linux bug (mburns,
14:35:58)
* should be ready for freeze deadline (mburns, 14:36:23)
* Review decision on Java 7 and Fedora jboss rpms in oVirt Engine
(mburns, 14:36:43)
* Java7 basically working (mburns, 14:37:19)
* LINK: http://gerrit.ovirt.org/#change,4416 (oschreib, 14:39:35)
* engine will make ack/nack statement next week (mburns, 14:39:49)
* fedora jboss rpms patch is in review, short tests passed (mburns,
14:40:04)
* engine ack on fedora jboss rpms and java7 needed next week (mburns,
14:44:47)
* Upcoming Workshops (mburns, 14:45:11)
* NetApp workshop set for Jan 22-24 2013 (mburns, 14:47:16)
* already at half capacity for Workshop at LinuxCon Japan (mburns,
14:47:37)
* please continue to promote it (mburns, 14:48:19)
* proposal: board meeting to be held at all major workshops (mburns,
14:48:43)
* LINK: http://www.ovirt.org/wiki/OVirt_Global_Workshops (mburns,
14:49:30)
* Open Discussion (mburns, 14:50:12)
* oVirt/Quantum integration discussion will be held separately
(mburns, 14:50:43)
Meeting ended at 14:52:47 UTC.
Action Items
------------
* dougsland to fix upstream vdsm right now, and open a bug on libvirt
augeas
* oschreib to make sure BZ#822158 is handled quickly
Action Items, by person
-----------------------
* dougsland
* dougsland to fix upstream vdsm right now, and open a bug on libvirt
augeas
* oschreib
* oschreib to make sure BZ#822158 is handled quickly
* **UNASSIGNED**
* (none)
People Present (lines said)
---------------------------
* mburns (98)
* oschreib (55)
* doronf (12)
* lh (11)
* sgordon (8)
* dougsland (8)
* ovirtbot (6)
* ofrenkel (4)
* cestila (2)
* RobertMdroid (2)
* ydary (2)
* rickyh (1)
* yzaslavs (1)
* cctrieloff (1)
* mestery_ (1)
* dustins (1)
Generated by `MeetBot`_ 0.1.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot
5 years, 12 months
[QE][ACTION REQUIRED] oVirt 3.5.1 RC status - postponed
by Sandro Bonazzola
Hi,
We still have blockers for the oVirt 3.5.1 RC release, so we need to postpone it until they are fixed.
The bug tracker [1] shows 1 open blocker:
Bug ID Whiteboard Status Summary
1160846 sla NEW Can't add disk to VM without specifying disk profile when the storage domain has more than one disk profile
In order to stabilize the release a new branch ovirt-engine-3.5.1 will be created from the same git hash used for composing the RC.
- ACTION: Gilad, please provide an ETA on the above blocker; the new proposed RC date will be decided based on that ETA.
Maintainers:
- Please be sure that the 3.5 snapshot allows creating VMs
- Please be sure that no pending patches are going to block the release
- If any patch must block the RC release please raise the issue as soon as possible.
There are still 57 bugs [2] targeted to 3.5.1.
Excluding node and documentation bugs we still have 37 bugs [3] targeted to 3.5.1.
Maintainers / Assignee:
- Please add the bugs to the tracker if you think that 3.5.1 should not be released without them fixed.
- ACTION: Please update the target to 3.5.2 or later for bugs that won't be in 3.5.1:
it will ease gathering the blocking bugs for next releases.
- ACTION: Please fill release notes, the page has been created here [4]
Community:
- If you're testing oVirt 3.5 nightly snapshot, please add yourself to the test page [5]
[1] http://bugzilla.redhat.com/1155170
[2] http://goo.gl/7G0PDV
[3] http://goo.gl/6gUbVr
[4] http://www.ovirt.org/OVirt_3.5.1_Release_Notes
[5] http://www.ovirt.org/Testing/oVirt_3.5.1_Testing
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
5 years, 12 months
[Users] Lifecycle / upgradepath
by Sven Kieske
Hi Community,
Currently there is no single document describing supported
(which means: working) upgrade scenarios.
I think the project has matured enough to have such a supported
upgrade path, which should be considered in the development of new
releases.
As far as I know, it is currently supported to upgrade
from x.y.z to x.y.z+1 and from x.y.z to x.y+1.z,
but not from x.y-1.z to x.y+1.z directly.
Maybe this should be put together in a wiki page at least.
It would also be good to know how long a single "release"
will be supported.
In this context I would define a release as a version
bump from x.y.z to x.y+1.z or to x+1.y.z;
a bump in z would be a bugfix release.
The question is: how long will we get bugfix releases
for a given version?
What are your thoughts?
--
Mit freundlichen Grüßen / Regards
Sven Kieske
Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
6 years
[Users] Nested virtualization with Opteron 2nd generation and oVirt 3.1 possible?
by Gianluca Cecchi
Hello,
I have 2 physical servers with Opteron 2nd gen cpu.
There is CentOS 6.3 installed and some VM already configured on them.
Their /proc/cpuinfo contains
...
model name : Dual-Core AMD Opteron(tm) Processor 8222
...
kvm_amd kernel module is loaded with its default enabled nested option
# systool -m kvm_amd -v
Module = "kvm_amd"
Attributes:
initstate = "live"
refcnt = "15"
srcversion = "43D8067144E7D8B0D53D46E"
Parameters:
nested = "1"
npt = "1"
...
I already configured a Fedora 17 VM as an oVirt 3.1 engine.
I'm trying to configure another VM as an oVirt 3.1 node with
ovirt-node-iso-2.5.5-0.1.fc17.iso.
It seems I'm not able to configure it so that the oVirt install doesn't complain.
After some attempts, I tried this in my vm.xml for the cpu:
<cpu mode='custom' match='exact'>
<model fallback='allow'>athlon</model>
<vendor>AMD</vendor>
<feature policy='require' name='pni'/>
<feature policy='require' name='rdtscp'/>
<feature policy='force' name='svm'/>
<feature policy='require' name='clflush'/>
<feature policy='require' name='syscall'/>
<feature policy='require' name='lm'/>
<feature policy='require' name='cr8legacy'/>
<feature policy='require' name='ht'/>
<feature policy='require' name='lahf_lm'/>
<feature policy='require' name='fxsr_opt'/>
<feature policy='require' name='cx16'/>
<feature policy='require' name='extapic'/>
<feature policy='require' name='mca'/>
<feature policy='require' name='cmp_legacy'/>
</cpu>
Inside node /proc/cpuinfo becomes
processor : 3
vendor_id : AuthenticAMD
cpu family : 6
model : 2
model name : QEMU Virtual CPU version 0.12.1
stepping : 3
microcode : 0x1000065
cpu MHz : 3013.706
cache size : 512 KB
fpu : yes
fpu_exception : yes
cpuid level : 2
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat
pse36 clflush mmx fxsr sse sse2 syscall mmxext fxsr_opt lm nopl pni
cx16 hypervisor lahf_lm cmp_legacy cr8_legacy
bogomips : 6027.41
TLB size : 1024 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
2 questions:
1) Is there any combination in the xml file I can give to my VM so that
oVirt doesn't complain about missing hardware virtualization with this
processor?
2) Suppose 1) is not possible in my case and I still want to test the
interface and try some config operations to see, for example, the
differences with RHEV 3.0: how can I do that?
At the moment this complaint about hw virtualization prevents me from
activating the node.
I get
Installing Host f17ovn01. Step: RHEV_INSTALL.
Host f17ovn01 was successfully approved.
Host f17ovn01 running without virtualization hardware acceleration
Detected new Host f17ovn01. Host state was set to Non Operational.
Host f17ovn01 moved to Non-Operational state.
Host f17ovn01 moved to Non-Operational state as host does not meet the
cluster's minimum CPU level. Missing CPU features : CpuFlags
Can I lower the requirements to be able to operate without hw
virtualization in 3.1?
Thanks in advance,
Gianluca
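For question 1), one commonly suggested approach is to stop whitelisting individual flags and hand the guest the host CPU as-is; a minimal libvirt sketch, replacing the whole <cpu> element shown above (this assumes a libvirt new enough, roughly 0.9.11+, to support the mode; older versions would need the equivalent qemu -cpu host passthrough):

  <cpu mode='host-passthrough'/>

With kvm_amd loaded with nested=1 on the host, the svm flag should then appear in the guest's /proc/cpuinfo, which is what the node installer checks for.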
6 years
Need VM run once api
by Chandrahasa S
Hi Experts,
We are integrating oVirt with our internal cloud.
We installed cloud-init in a VM and then converted the VM to a template.
We deploy the template with the initial-run parameters Hostname, IP
Address, Gateway and DNS.
But when we power the VM on, the initial-run parameters are not pushed
into the VM. It does work when we power on the VM using the "Run Once"
option in the oVirt portal.
I believe we need to power on the VM using a run-once API, but we are not
able to find this API.
Can someone help with this?
I got a reply to this query last time, but unfortunately the mail got deleted.
Thanks & Regards
Chandrahasa S
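In the REST API of that era, "run once" is not a separate endpoint: it maps to the VM's start action with overrides in the action body. A hedged sketch (engine URL, credentials, IDs and the exact cloud-init element names are placeholders; the rsdl at /api?rsdl lists the parameters supported by your version):

  curl -v -k -u 'admin@internal:password' \
       -H 'Content-Type: application/xml' \
       -X POST \
       -d '<action>
             <use_cloud_init>true</use_cloud_init>
             <vm>
               <initialization>
                 <host_name>myvm01.example.com</host_name>
               </initialization>
             </vm>
           </action>' \
       https://engine.example.com/api/vms/VM-UUID/start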
6 years
[Users] importing from kvm into ovirt
by Jonathan Horne
I need to import a KVM virtual machine from a standalone KVM host into my
oVirt cluster. The standalone host is using local storage, and my oVirt
cluster is using iSCSI. Can I please have some advice on the best way to
get this system into oVirt?
Right now I see it as copying the .img file to somewhere... but I have no
idea where to start. I found this directory on one of my oVirt nodes:
/rhev/data-center/mnt/blockSD/fe633237-14b2-4f8b-aedd-bbf753bcafaf/master/vms
But inside are just directories that appear to have uuid-type names, and
I can't tell what belongs to which VM.
Any advice would be greatly appreciated.
Thanks,
jonathan
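The usual tool for this at the time was virt-v2v, which copies a guest from a standalone libvirt/KVM host into an oVirt export storage domain, from which it can then be imported through the web admin. A hedged sketch using the old RHEL 6-era option names (host names and the export path are placeholders):

  # run where both the standalone KVM host and the export NFS domain are reachable
  virt-v2v -ic qemu+ssh://root@kvmhost/system -o rhev -os nfs-server:/export/domain myvm

After it completes, the VM shows up in the export domain and can be imported into the iSCSI data domain.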
6 years
Trying to reset password for ovirt wiki
by noc
Hoping someone can help me out.
For some reason I keep getting the following error when I try to reset
my password:
Reset password
* Error sending mail: Failed to add recipient: jvandewege(a)nieuwland.nl
[SMTP: Invalid response code received from server (code: 554,
response: 5.7.1 <jvandewege(a)nieuwland.nl>: Relay access denied)]
Complete this form to receive an e-mail reminder of your account details.
Since I receive the ML on this address it is definitely a working address.
Tried my home account too and same error but then for my home provider,
Relay denied ??
A puzzled user,
Joop
6 years, 1 month
ovirt-guest-agent issue on rhel5.5
by John Michael Mercado
Hi All,
I need your help. Has anyone encountered the error below and found a
solution? Can you help me fix it?
MainThread::INFO::2015-01-27
10:22:53,247::ovirt-guest-agent::57::root::Starting oVirt guest agent
MainThread::ERROR::2015-01-27
10:22:53,248::ovirt-guest-agent::138::root::Unhandled exception in oVirt
guest agent!
Traceback (most recent call last):
File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 132, in ?
agent.run(daemon, pidfile)
File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 63, in run
self.agent = LinuxVdsAgent(config)
File "/usr/share/ovirt-guest-agent/GuestAgentLinux2.py", line 371, in
__init__
AgentLogicBase.__init__(self, config)
File "/usr/share/ovirt-guest-agent/OVirtAgentLogic.py", line 171, in
__init__
self.vio = VirtIoChannel(config.get("virtio", "device"))
File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 150, in
__init__
self._stream = VirtIoStream(vport_name)
File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 131, in
__init__
self._vport = os.open(vport_name, os.O_RDWR)
OSError: [Errno 2] No such file or directory:
'/dev/virtio-ports/com.redhat.rhevm.vdsm'
Thanks
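The traceback means the guest cannot see the virtio-serial port the agent talks through (/dev/virtio-ports/com.redhat.rhevm.vdsm). When the VM runs under oVirt/RHEV that device is defined automatically; under plain libvirt the guest definition needs a channel along these lines (a sketch; the socket path is an assumption):

  <channel type='unix'>
    <source mode='bind' path='/var/lib/libvirt/qemu/channels/GUEST.com.redhat.rhevm.vdsm'/>
    <target type='virtio' name='com.redhat.rhevm.vdsm'/>
  </channel>

The RHEL 5.5 guest kernel must also have virtio-serial support before the device node appears.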
6 years, 1 month
[Users] oVirt Workshop at LinuxCon Japan 2012
by Leslie Hawthorn
Hello everyone,
As part of our efforts to raise awareness of and educate more developers
about the oVirt project, we will be holding an oVirt workshop at
LinuxCon Japan, taking place on June 8, 2012. You can find full details
of the workshop agenda on the LinuxCon Japan site. [0]
Registration for the workshop is now open and is free of charge for the
first 50 participants. We will also look at adding additional
participant slots to the workshop based on demand.
Attendees who register for LinuxCon Japan via the workshop registration
link [1] will also be eligible for a discount on their LinuxCon Japan
registration.
Please spread the word to folks you think would find the workshop
useful. If they have already registered for LinuxCon Japan, they can
simply edit their existing registration to include the workshop.
[0] -
https://events.linuxfoundation.org/events/linuxcon-japan/ovirt-gluster-wo...
[1] - http://www.regonline.com/Register/Checkin.aspx?EventID=1099949
Cheers,
LH
--
Leslie Hawthorn
Community Action and Impact
Open Source and Standards @ Red Hat
identi.ca/lh
twitter.com/lhawthorn
6 years, 1 month
[Users] Moving iSCSI Master Data
by rni@chef.net
Hi,
it's me again....
I started my oVirt 'project' as a proof of concept,.. but as always
happens, it became production.
Now I have to move the iSCSI master data to the real iSCSI target.
Is there any way to do this, and to get rid of the old master data?
Hans-Joachim
6 years, 1 month
[Users] Can't access RHEV-H aka ovirt-node
by Scotto Alberto
Hi all,
I can't log in to the hypervisor, neither as root nor as admin, neither
from another computer via ssh nor directly on the machine.
I'm sure I remember the passwords. This is not the first time it has
happened: last time I reinstalled the host. Everything worked OK for
about 2 weeks, and then...
What's going on? Is it a known behavior, somehow?
Before rebooting the hypervisor, I would like to try something. RHEV
Manager talks to RHEV-H without any problems. Can I log in with RHEV-M's
keys? How?
Thank you all.
Alberto Scotto
[Blue]
Via Cardinal Massaia, 83
10147 - Torino - ITALY
phone: +39 011 29100
al.scotto(a)reply.it
www.reply.it
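On the manager side, the private key the engine uses to reach hosts normally lives under /etc/pki/ovirt-engine/keys (assuming the standard oVirt 3.x layout; RHEV-M may differ slightly), so a login attempt from the RHEV-M machine would look like:

  ssh -i /etc/pki/ovirt-engine/keys/engine_id_rsa root@rhevh-host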
6 years, 2 months
Unable to make Single Sign on working on Windows 7 Guest
by Felipe Herrera Martinez
In case I'm able to create an installer, what is the name of the application that needs to be present so that oVirt detects that the oVirt guest agent is installed?
I have created an installer that adds the OvirtGuestService files and the product name to be shown, apart from the post-install command lines.
I have tried "ovirt-guest-agent" and "Ovirt guest agent" as names for the application installed on the Windows 7 guest, and even though both are presented in the oVirt VM Applications tab, in neither case does a LogonVDSCommand appear.
Is there another option to make it work now?
Thanks in advance,
Felipe
6 years, 2 months
Re: [ovirt-users] Need VM run once api
by Chandrahasa S
Can anyone help on this?
Thanks & Regards
Chandrahasa S
From: Chandrahasa S/MUM/TCS
To: users(a)ovirt.org
Date: 28-07-2015 15:20
Subject: Need VM run once api
Hi Experts,
We are integrating oVirt with our internal cloud.
We installed cloud-init in a VM and then converted the VM to a template.
We deploy the template with the initial-run parameters Hostname, IP
Address, Gateway and DNS.
But when we power the VM on, the initial-run parameters are not pushed
into the VM. It does work when we power on the VM using the "Run Once"
option in the oVirt portal.
I believe we need to power on the VM using a run-once API, but we are not
able to find this API.
Can someone help with this?
I got a reply to this query last time, but unfortunately the mail got deleted.
Thanks & Regards
Chandrahasa S
6 years, 2 months
Re: [ovirt-users] Problem Windows guests start in pause
by Dafna Ron
Hi Lucas,
Please send mails to the list next time.
can you please do rpm -qa |grep qemu.
also, can you try a different windows image?
Thanks,
Dafna
On 07/14/2014 02:03 PM, lucas castro wrote:
> On the host where I've tried to run the vm, I use CentOS 6.5,
> and I checked: no updates for qemu, libvirt or related packages.
--
Dafna Ron
6 years, 2 months
Feature: Hosted engine VM management
by Roy Golan
Hi all,
Upcoming in 3.6 is enhancement for managing the hosted engine VM.
In short, we want to:
* Allow editing the Hosted engine VM, storage domain, disks, networks etc
* Have a shared configuration for the hosted engine VM
* Have a backup for the hosted engine VM configuration
please review and comment on the wiki below:
http://www.ovirt.org/Hosted_engine_VM_management
Thanks,
Roy
6 years, 2 months
Re: [ovirt-users] Packet loss
by Doron Fediuck
Hi Kyle,
We may have seen something similar in the past but I think there were vlans involved.
Is it the same for you?
Tony / Dan, does it ring a bell?
6 years, 2 months
Changing gateway ping address
by Matteo
Hi all,
I need to change the gateway ping address, the one used by hosted-engine setup.
Is it ok to edit /etc/ovirt-hosted-engine/hosted-engine.conf on each node,
update the gateway param with the new IP address and restart
the agent & broker on each node?
A blind test seems ok, but I need to understand if this is the right procedure.
Thanks,
Matteo
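A minimal sketch of that procedure, with a placeholder address (the conf key comes from the question above; the HA service names match the 3.x defaults, but verify them on your nodes):

# on each node: point the HA agent at the new ping address
sed -i 's/^gateway=.*/gateway=192.0.2.254/' /etc/ovirt-hosted-engine/hosted-engine.conf
# restart the HA services so they pick up the change
systemctl restart ovirt-ha-broker ovirt-ha-agent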
8 years, 1 month
Dedicated NICs for gluster network
by Nicolas Ecarnot
Hello,
[Here : oVirt 3.5.3, 3 x CentOS 7.0 hosts with replica-3 gluster SD on
the hosts].
On the switches, I have created a dedicated VLAN to isolate the glusterFS
traffic, but I'm not using it yet.
I was thinking of creating a dedicated IP for each node's gluster NIC,
and a DNS record by the way ("my_nodes_name_GL"), but I fear that using this
hostname or this IP in the oVirt GUI host network interface tab would lead
oVirt to think this is a different host.
Not being sure this fear is clearly described, let's say:
- On each node, I create a second IP (+ DNS record in the SOA) used by
gluster, plugged on the correct VLAN
- In the oVirt GUI, in the host network settings tab, the interface will be
seen, with its IP, but reverse-DNS-related to a different hostname.
Here, I fear oVirt might check this reverse DNS and declare that this NIC
belongs to another host.
I would also prefer not to use a reverse pointing to the name of the host
management IP, as this is evil and I'm a good guy.
On your side, how do you cope with a dedicated storage network in case
of storage+compute mixed hosts?
--
Nicolas ECARNOT
8 years, 8 months
oVirt-shell command to move a disk
by Nicolas Ecarnot
Hello,
I'm confused because though I'm using ovirt-shell to script many actions
every day, and even after a lot of reading and testing, I cannot
find the correct syntax to move (offline/available) disks between
storage domains.
May you help me please?
(oVirt 3.4.4)
--
Nicolas Ecarnot
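For reference, a hedged sketch of the shape such a command takes in ovirt-shell; the --storagedomain-identifier option name is an assumption, so check `help action` in your 3.4 shell before relying on it:

# inside ovirt-shell: move an available (unattached) disk to another storage domain
# (option name assumed; verify with: help action)
action disk "my_disk_alias" move --storagedomain-identifier "TARGET_SD_UUID"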
8 years, 9 months
One RHEV Virtual Machine does not Automatically Resume following Compellent SAN Controller Failover
by Duckworth, Douglas C
Hello --
Not sure if y'all can help with this issue we've been seeing with RHEV...
On 11/13/2015, during Code Upgrade of Compellent SAN at our Disaster
Recovery Site, we Failed Over to Secondary SAN Controller. Most Virtual
Machines in our DR Cluster Resumed automatically after Pausing except VM
"BADVM" on Host "BADHOST."
In Engine.log you can see that BADVM was sent into "VM_PAUSED_EIO" state
at 10:47:57:
"VM BADVM has paused due to storage I/O problem."
On this Red Hat Enterprise Virtualization Hypervisor 6.6
(20150512.0.el6ev) Host, two other VMs paused but then automatically
resumed without System Administrator intervention...
In our DR Cluster, 22 VMs also resumed automatically...
None of these Guest VMs are engaged in high I/O as these are DR site VMs
not currently doing anything.
We sent this information to Dell. Their response:
"The root cause may reside within your virtualization solution, not the
parent OS (RHEV-Hypervisor disc) or Storage (Dell Compellent.)"
We are doing this Failover again on Sunday November 29th so we would
like to know how to mitigate this issue, given we have to manually
resume paused VMs that don't resume automatically.
Before we initiated SAN Controller Failover, all iSCSI paths to Targets
were present on Host tulhv2p03.
VM logs on Host show in /var/log/libvirt/qemu/badhost.log that Storage
error was reported:
block I/O error in device 'drive-virtio-disk0': Input/output error (5)
block I/O error in device 'drive-virtio-disk0': Input/output error (5)
block I/O error in device 'drive-virtio-disk0': Input/output error (5)
block I/O error in device 'drive-virtio-disk0': Input/output error (5)
All disks used by this Guest VM are provided by single Storage Domain
COM_3TB4_DR with serial "270." In syslog we do see that all paths for
that Storage Domain Failed:
Nov 13 16:47:40 multipathd: 36000d310005caf000000000000000270: remaining
active paths: 0
Though these recovered later:
Nov 13 16:59:17 multipathd: 36000d310005caf000000000000000270: sdbg -
tur checker reports path is up
Nov 13 16:59:17 multipathd: 36000d310005caf000000000000000270: remaining
active paths: 8
Does anyone have an idea of why the VM would fail to automatically
resume if the iSCSI paths used by its Storage Domain recovered?
Thanks
Doug
--
Thanks
Douglas Charles Duckworth
Unix Administrator
Tulane University
Technology Services
1555 Poydras Ave
NOLA -- 70112
E: duckd(a)tulane.edu
O: 504-988-9341
F: 504-988-8505
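For the manual-resume step, a small sketch that can be scripted on the affected host once the paths are back; it assumes virsh has a writable connection for the resume, which on RHEV hosts may require authenticating rather than using the read-only socket:

# list paused domains read-only, then resume each one
for dom in $(virsh -r list --state-paused --name); do
    virsh resume "$dom"
done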
8 years, 11 months
3.6 upgrade issue
by Jon Archer
Hi all,
Wondering if anyone can shed any light on an error I'm seeing while running
engine-setup.
I've just upgraded the packages to the latest 3.6 ones today (from 3.5),
ran engine-setup, answered the questions, confirmed the install, then got
presented with:
[ INFO ] Cleaning async tasks and compensations
[ INFO ] Unlocking existing entities
[ INFO ] Checking the Engine database consistency
[ INFO ] Stage: Transaction setup
[ INFO ] Stopping engine service
[ INFO ] Stopping ovirt-fence-kdump-listener service
[ INFO ] Stopping websocket-proxy service
[ INFO ] Stage: Misc configuration
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ ERROR ] Failed to execute stage 'Misc configuration': function
getdwhhistorytimekeepingbyvarname(unknown) does not exist LINE 2:
select * from GetDwhHistoryTimekeepingByVarName(
^ HINT: No function matches the given name and argument
types. You might need to add explicit type casts.
[ INFO ] Yum Performing yum transaction rollback
[ INFO ] Stage: Clean up
Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20150929144137-7u5rhg.log
[ INFO ] Generating answer file
'/var/lib/ovirt-engine/setup/answers/20150929144215-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
Any ideas where to look to fix things?
Thanks
Jon
9 years
MAC spoofing for specific VMs
by Christopher Young
I'm working on some load-balancing solutions and they appear to require MAC
spoofing. I did some searching and reading and as I understand it, you can
disable the MAC spoofing protection through a few methods.
I was wondering about the best manner to enable this for the VMs that
require it and not across the board (if that is even possible). I'd like
to just allow my load-balancer VMs to do what they need to, but keep the
others untouched as a security mechanism.
If anyone has any advice on the best method to handle this scenario, I
would greatly appreciate it. It seems that this might turn into some type
of feature request, though I'm not sure if this is something that has to be
done at the Linux bridge level, the port level, or the VM level. Any
explanations into that would also help in my education.
Thanks,
Chris
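One approach that matches the per-VM requirement is the vdsm macspoof hook together with an engine custom property; a sketch, assuming a 3.5 cluster level (adjust --cver to yours):

# on every host that may run the load-balancer VMs
yum install -y vdsm-hook-macspoof
# on the engine, expose the per-VM property and restart the engine
engine-config -s "UserDefinedVMProperties=macspoof=^(true|false)$" --cver=3.5
systemctl restart ovirt-engine
# then set macspoof=true in the custom properties of the load-balancer VMs only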
9 years, 2 months
Highly Available in 3.6 and USB support
by jaumotte, styve
Hi everybody,

After testing some features on 3.5, we are planning to finally go to 3.6. Some problems still exist.

A major problem remains with the "Highly Available" option on VMs, which doesn't work. I have a cluster with 4 hosts and a simple VM. When I start a poweroff from the node where this VM is living, the node shuts down but my VM doesn't restart on another node of the cluster. The power management of all the nodes is correctly configured. The HA feature of the hosted-engine is working well (except that it is very slow).

Another problem consists of passing a USB host device to a virtual machine. We've got some special USB keys for activating old applications and we need to attach such a key to a VM. At first, I tried with a standard USB mass storage key to test this approach. I can't start the virtual machine when I add the USB device; I always get the message "The host ... did not satisfy internal filter HostDevice because it does not support host device passthrough". Any idea where I can find a HowTo to help me?

Thanks for your help,

SJ
9 years, 2 months
How to add faster NIC to oVirt cluster / host?
by Christophe TREFOIS
Dear all,
I currently have a data center with two clusters, and each cluster has 1 host.
On one of the hosts, I have now enabled a 10 GbE NIC called p4p1. Currently everything is going over a 1 GbE NIC called em1 (ovirtmgmt).
My question now is, how could I add my 10 GbE NIC to the setup, for instance to use for transferring data to the export domain, or to simply replace the 1 GbE?
I would prefer little downtime (e.g. only a temporary network loss), but a full downtime (shutting down) would be acceptable.
Thank you for any pointers or starting points you could provide,
Kind regards,
—
Christophe
9 years, 4 months
python floppy in RunOnce mode
by Giulio Casella
Hi,
I'm trying to boot a VM with a non-persistent floppy using the Python oVirt
SDK (the "RunOnce" way in the administration portal), but the guest OS can't
see the floppy drive. The ultimate goal is to deploy a floppy with a sysprep
unattend.xml file for Windows 7 pools of VMs.
Here is a snippet of code I use:
-------------------------------------------------
# assuming the oVirt Python SDK v3 and an already-initialized connection:
from ovirtsdk.api import API
from ovirtsdk.xml import params
# the import path for ParseHelper may vary by SDK version:
from ovirtsdk.utils.parsehelper import ParseHelper

# api = API(url='https://engine/api', username='admin@internal', password='...')

myvm = api.vms.get(name="vmname")
content = "This is file content!"
f = params.File(name="foobar.txt", content=content)
fs = params.Files()
fs.add_file(f)
payload = params.Payload()
payload.set_type("floppy")
payload.set_files(fs)
payloads = params.Payloads()
payloads.add_payload(payload)
thevm = params.VM()
thevm.set_payloads(payloads)
action = params.Action(vm=thevm)
myvm.start(action=action)

xml = ParseHelper.toXml(action)
print xml
-------------------------------------------------
As you can see, for debugging purpose, I print my xml action, and I get:
-------------------------------------------------
<action>
<vm>
<payloads>
<payload type="floppy">
<files>
<file>
<name>foobar.txt</name>
<content>This is file content</content>
</file>
</files>
</payload>
</payloads>
</vm>
</action>
-------------------------------------------------
In the admin portal I can see my vm in "RunOnce" state, but no floppy is
present...
In fact, in the vm process command line
(ps -ef | grep qemu-kvm | grep vmname) I can't see a -drive option
referring to the floppy (I only see 2 "-drive" options, referring to the vm
system disk and to a correctly mounted cdrom ISO).
What am I doing wrong?
(The engine is RHEV-M version 3.4.1-0.31.el6ev)
Thanks in advance,
Giulio
9 years, 4 months
Delete disk references without deleting the disk
by Johan Kooijman
Hi all,
I have about 100 old VMs in my cluster. They're powered down, ready for
deletion. What I want to do is delete the VMs including disks without
actually deleting the disk images from the storage array itself. Is that
possible? In the end I want to be able to delete the storage domain (which
then should not hold any data, as far as oVirt is concerned).
Reason for this: it's a ZFS pool with dedup enabled; deleting the images
one by one will kill the array with 100% iowait for some time.
--
Met vriendelijke groeten / With kind regards,
Johan Kooijman
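A hedged sketch against the 3.x REST API: deleting a VM while detaching, rather than deleting, its disks. The detach_only element appears in this API family, but verify it on your version before looping it over 100 VMs:

# delete a VM but keep its disk images on the storage domain
curl -k -u 'admin@internal:PASSWORD' -X DELETE \
  -H 'Content-Type: application/xml' \
  -d '<action><vm><disks><detach_only>true</detach_only></disks></vm></action>' \
  'https://ENGINE_FQDN/api/vms/VM_ID'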
9 years, 4 months
HA cluster
by Budur Nagaraju
Hi,
I'm getting the error below while configuring the hosted engine:
root@he ~]# hosted-engine --deploy
[ INFO ] Stage: Initializing
[ INFO ] Generating a temporary VNC password.
[ INFO ] Stage: Environment setup
Continuing will configure this host for serving as hypervisor and
create a VM where you have to install oVirt Engine afterwards.
Are you sure you want to continue? (Yes, No)[Yes]: yes
Configuration files: []
Log file:
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151126102302-bkozgk.log
Version: otopi-1.3.2 (otopi-1.3.2-1.el6)
It has been detected that this program is executed through an SSH
connection without using screen.
Continuing with the installation may lead to broken installation
if the network connection fails.
It is highly recommended to abort the installation and run it
inside a screen session using command "screen".
Do you want to continue anyway? (Yes, No)[No]: yes
[WARNING] Cannot detect if hardware supports virtualization
[ INFO ] Bridge ovirtmgmt already created
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ ERROR ] The following VMs has been found: 2b8d6d91-d838-44f6-ae3b-c92cda014280
[ ERROR ] Failed to execute stage 'Environment setup': Cannot setup Hosted Engine with other VMs running
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126102310.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[root@he ~]#
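The setup aborts because vdsm still reports a running VM on this host. A sketch of finding and stopping it with vdsClient before re-running the deploy (destroy powers the VM off hard, so make sure it is disposable):

# list the VMs vdsm knows about, then destroy the leftover one by its UUID
vdsClient -s 0 list table
vdsClient -s 0 destroy 2b8d6d91-d838-44f6-ae3b-c92cda014280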
9 years, 4 months
Upgrade path from Fedora 20 oVirt 3.5 to Fedora 22 oVirt 3.6
by David Marzal Canovas
Hi, I would like to be sure of the correct upgrade path to take.
Should I first upgrade Fedora 20 -> Fedora 21 -> Fedora 22
and then upgrade oVirt from 3.5 to 3.6?
Or would it be better to upgrade on Fedora 20: oVirt 3.5 -> oVirt 3.6
and then make the OS upgrades FD20->FD21->FD22
In the release notes
http://www.ovirt.org/OVirt_3.6_Release_Notes#Install_.2F_Upgrade_from_pre...
doesn't say anything about the upgrade path of the OS, but from searching I'm
aware that:
oVirt 3.5 is only compatible with FD20, not with 21 or 22
oVirt 3.6 is only compatible with FD22, not with 21 or 20
Thanks in advance
--
David Marzal Cánovas
Servicio de Mecanización e Informática
Asamblea Regional de Murcia
Paseo Alfonso XIII, nº53
30203 - Cartagena
Tlfno: 968326800
9 years, 4 months
Problem with hosted engine setup - vsdmd does not start
by Willard Dennis
Hi all,

Tried to run the hosted engine setup, got:
[ ERROR ] Failed to execute stage 'Environment setup': [Errno 111] Connection refused

( full run output here: https://gist.githubusercontent.com/wdennis/4390042447354b2cb1c4/raw/c0f9127c676759b19b1d32a212c8c3694fac580e/hosted-engine-setup-output.txt )

Looked at the setup log file referenced in the output, saw this traceback in it that also referenced the Error 111:
https://gist.githubusercontent.com/wdennis/4390042447354b2cb1c4/raw/c0f9127c676759b19b1d32a212c8c3694fac580e/configurevm-py_error.txt

I diagnosed it (Google ftw) and saw that it was likely a vdsmd problem; sure enough, when I did a 'systemctl status vdsmd' I saw that it was in error state -

Process: 28163 ExecStart=/usr/share/vdsm/daemonAdapter -0 /dev/null -1 /dev/null -2 /dev/null /usr/share/vdsm/vdsm (code=exited, status=1/FAILURE)

( full status output here: https://gist.githubusercontent.com/wdennis/4390042447354b2cb1c4/raw/c0f9127c676759b19b1d32a212c8c3694fac580e/systemctl_status_vdsmd.txt )

Does anyone know how I can troubleshoot and fix the VDSM daemon start process?

Thanks,
Will
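A few first steps that usually narrow a vdsmd start failure down; vdsm-tool ships with vdsm, the rest is plain systemd:

# read the real failure reason (systemctl status truncates it)
journalctl -u vdsmd -b --no-pager | tail -n 50
# re-run vdsm's configuration checks (libvirt config, certificates, etc.)
vdsm-tool configure --force
systemctl start vdsmd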
9 years, 4 months
Re: [ovirt-users] Error during hosted-engine-setup for 3.5.1 on F20 (Cannot add the host to cluster ... SSH has failed)
by Bob Doolittle
On 03/09/2015 07:12 AM, Simone Tiraboschi wrote:
>
> ----- Original Message -----
>> From: "Bob Doolittle" <bob(a)doolittle.us.com>
>> To: "Simone Tiraboschi" <stirabos(a)redhat.com>
>> Sent: Monday, March 9, 2015 12:02:49 PM
>> Subject: Re: [ovirt-users] Error during hosted-engine-setup for 3.5.1 on F20 (Cannot add the host to cluster ... SSH
>> has failed)
>>
>> On Mar 9, 2015 5:23 AM, "Simone Tiraboschi" <stirabos(a)redhat.com> wrote:
>>>
>>>
>>> ----- Original Message -----
>>>> From: "Bob Doolittle" <bob(a)doolittle.us.com>
>>>> To: "users-ovirt" <users(a)ovirt.org>
>>>> Sent: Friday, March 6, 2015 9:21:20 PM
>>>> Subject: [ovirt-users] Error during hosted-engine-setup for 3.5.1 on
>> F20 (Cannot add the host to cluster ... SSH has
>>>> failed)
>>>>
>>>> Hi,
>>>>
>>>> I'm following the instructions here:
>> http://www.ovirt.org/Hosted_Engine_Howto
>>>> My self-hosted install failed near the end:
>>>>
>>>> To continue make a selection from the options below:
>>>> (1) Continue setup - engine installation is complete
>>>> (2) Power off and restart the VM
>>>> (3) Abort setup
>>>> (4) Destroy VM and abort setup
>>>>
>>>> (1, 2, 3, 4)[1]: 1
>>>> [ INFO ] Engine replied: DB Up!Welcome to Health Status!
>>>> Enter the name of the cluster to which you want to add the
>> host
>>>> (Default) [Default]:
>>>> [ ERROR ] Cannot automatically add the host to cluster Default: Cannot
>> add
>>>> Host. Connecting to host via SSH has failed, verify that the host is
>>>> reachable (IP address, routable address etc.) You may refer to the
>>>> engine.log file for further details.
>>>> [ ERROR ] Failed to execute stage 'Closing up': Cannot add the host to
>>>> cluster Default
>>>> [ INFO ] Stage: Clean up
>>>> [ INFO ] Generating answer file
>>>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20150306135624.conf'
>>>> [ INFO ] Stage: Pre-termination
>>>> [ INFO ] Stage: Termination
>>>>
>>>> I can ssh into the engine VM both locally and remotely. There is no
>>>> /root/.ssh directory, however. Did I need to set that up somehow?
>>> It's the engine that needs to open an SSH connection to the host calling
>> it by its hostname.
>>> So please be sure that you can SSH to the host from the engine using its
>> hostname and not its IP address.
>>
>> I'm assuming this should be a password-less login (key-based
>> authentication?).
> Yes, it is.
>
>> As what user?
> root
OK, I see a couple of problems.
First off, I didn't have my deploying-host hostname in the hosts map for my engine.
After adding it to /etc/hosts (both hostname and FQDN), when I try to ssh from root@engine to root@host it is prompting me for a password.
On my engine, ~root/.ssh does not contain any keys.
On my host, ~root/.ssh has authorized_keys, and in it there is a key with the comment "ovirt-engine".
It's possible that I inadvertently removed ~root/.ssh on engine while I was preparing the engine (I started to set up my own no-password logins and then thought better and cleaned up, not realizing that some prior setup affecting that directory had occurred). That would explain the second issue.
How/when does the key for root@engine get populated to the host's ~root/.ssh/authenticated_keys during setup?
-Bob
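On the population question: host-deploy appends the engine's public SSH key to the host's authorized_keys when the host is added. The key can be fetched again from the engine's PKI service; the URL below is the 3.5-era form and worth verifying on your engine:

# re-seed the engine's public key on the host
curl -k 'https://engine.fqdn/ovirt-engine/services/pki-resource?resource=engine-certificate&format=OPENSSH-PUBKEY' >> /root/.ssh/authorized_keys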
>
>> -Bob
>>
>>> Until now, hosted-engine hosts were simply identified by their IP address,
>>> but then we had some bug reports on side effects of that.
>>> So now we generate and sign certs using host hostnames and so the engine
>> should be able to correctly resolve them.
>>>> When I log into the Administration portal, the engine VM does not appear
>>>> under the Virtual machine view (it's empty).
>>> It's cause the setup didn't complete.
>>>
>>>> I've attached what I think are the relevant logs.
>>>>
>>>> Also, when my host reboots, the ovirt-ha-broker and ovirt-ha-agent
>> services
>>>> do not come up automatically. I have to use systemctl to start them
>>>> manually.
>>> It's cause the setup didn't complete.
>>>
>>>> This is a fresh Fedora 20 machine installing a fresh copy of Ovirt
>> 3.5.1.
>>>> What's the cleanest approach to restore/complete sanity of my setup
>> please?
>>> First step is to clarify what went wrong in order to avoid it in the
>>> future.
>>> Then, if you want a really sane environment for production use, I'd
>>> suggest redeploying.
>>> So
>>> hosted-engine --vm-poweroff
>>> empty the storage domain share and deploy again
>>>
>>>> Thanks,
>>>> Bob
>>>>
>>>>
>>>> I've linked 3 files to this email:
>>>> server.log (12.4 MB) Dropbox https://db.tt/g5p09AaD
>>>> vdsm.log (3.2 MB) Dropbox https://db.tt/P4572SUm
>>>> ovirt-hosted-engine-setup-20150306123622-tad1fy.log (413 KB) Dropbox
>>>> https://db.tt/XAM9ffhi
>>>> Mozilla Thunderbird makes it easy to share large files over email.
>>>>
>>>>
>>>> _______________________________________________
>>>> Users mailing list
>>>> Users(a)ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>
9 years, 4 months
Re: [ovirt-users] Hosted engine notifications don't work after upgrading ovirt from 3.5 to 3.6
by Stefano Danzi
> the content is:
>
> [email]
> smtp-server=localhost
> smtp-port=25
> destination-emails=root@localhost
> source-email=root@localhost
>
> [notify]
> state_transition=maintenance|start|stop|migrate|up|down
>
and is the default. My conf was lost during upgrade.
If I restart ovirt-ha-broker the broker.conf is replaced with the default
If I don't restart ovirt-ha-broker, the broker.conf is silently replaced
after a while.
Looking here
http://lists.ovirt.org/pipermail/engine-commits/2015-June/022940.html
I understand that broker.conf is stored in another place and overwrite
at runtime.
>
> Il 05/11/2015 18.44, Simone Tiraboschi ha scritto:
>> Can you please paste here the content of
>> /var/lib/ovirt-hosted-engine-ha/broker.conf ?
>> eventually make it anonymous if you prefer
>>
>>
>>
>> On Thu, Nov 5, 2015 at 6:42 PM, Stefano Danzi <s.danzi(a)hawai.it
>> <mailto:s.danzi@hawai.it>> wrote:
>>
>> After upgrading from 3.5 to 3.6 Hosted engine notifications stop
>> to work.
>> I think that broker.conf was lost during upgrade.
>>
>> I found this: https://bugzilla.redhat.com/show_bug.cgi?id=1260757
> But I don't understand how to change the configuration now.
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org <mailto:Users@ovirt.org>
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
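Since 3.6 keeps the master copy of broker.conf on the shared storage domain, local edits get overwritten. If your build ships it, the shared copy can be edited with the hosted-engine tool (a sketch; the option and key names are worth verifying with --help):

# update the notification settings in the shared broker.conf
hosted-engine --set-shared-config smtp-server smtp.example.com --type=broker
hosted-engine --set-shared-config destination-emails admin@example.com --type=broker
hosted-engine --get-shared-config smtp-server --type=broker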
9 years, 4 months
no guest info shown and timezone warning
by Paul Groeneweg | Pazion
We have upgraded to oVirt 3.6, all went without any trouble and integration
with satellite looks very nice!
I also see a guest info tab, but it holds no info (timezone unknown),
and the virtual machines give a timezone mismatch warning.
I do have the ovirt-guest-agent installed and running.
See screenshot for details: http://screencast.com/t/fJ9dnE9u
Kind Regards,
Paul Groeneweg
--
Met vriendelijke groeten,
Paul Groeneweg
Pazion
Webdevelopment - Hosting - Apps
T +31 26 3020038
M +31 614 277 577
E paul(a)pazion.nl
9 years, 5 months
Strange permissions on Hosted Engine HA Agent log files
by Giuseppe Ragusa
Hi all,
I'm installing oVirt (3.6) in self-hosted mode, hyperconverged with GlusterFS (3.7.6).
I'm using the oVirt snapshot generated the night between the 18th and 19th of November, 2015.
The (single, at the moment) host and the Engine are both CentOS 7.1 fully up-to-date.
After successful completion of ovirt-hosted-engine-setup, I found the following "anomalies" (about 3 days after setup completed):
666 1 vdsm kvm - /var/log/ovirt-hosted-engine-ha/agent.log
666 1 vdsm kvm - /var/log/ovirt-hosted-engine-ha/agent.log.2015-11-23
666 1 vdsm kvm - /var/log/ovirt-hosted-engine-ha/broker.log
666 1 vdsm kvm - /var/log/ovirt-hosted-engine-ha/broker.log.2015-11-23
The listing above comes from a custom security checking script that gives:
"octal permissions" "number of links" "owner" "group" - "absolute pathname"
Is the ominous "666" mark actually intended/necessary? ;-)
Do I need to open a bugzilla notification for this?
Many thanks in advance for your attention.
Regards,
Giuseppe
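As a stop-gap until the packaging is fixed, the modes can be tightened by hand; note that log rotation may recreate world-writable files, which is an assumption worth watching for:

chmod 0644 /var/log/ovirt-hosted-engine-ha/agent.log* /var/log/ovirt-hosted-engine-ha/broker.log*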
9 years, 5 months
Importing a Windows Guest in oVirt
by David Lo Bascio
Hi everyone,
I migrated several Linux guest this way:
virt-v2v -ic qemu+ssh://root@<host>/system -o rhev -os <host>:<path>
--network <network> <guest>
Now, I have some Windows guest running on KVM managed by libvirt and I
would like to import them in oVirt through virt-v2v in a similar way.
Can you help me?
Thanks a lot!
David
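The same invocation generally works for Windows guests once the conversion host has the Windows driver bits virt-v2v injects during conversion; a sketch, with package names as on RHEL/CentOS (verify for your distribution):

# driver packages virt-v2v needs for Windows conversions
yum install -y virtio-win libguestfs-winsupport
# then the same command shape as for the Linux guests
virt-v2v -ic qemu+ssh://root@<host>/system -o rhev -os <host>:<path> --network <network> <windows-guest>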
9 years, 5 months
Multiple export domains limit?
by Nicolas Ecarnot
Hello,
How come there is a limitation (of one) on the number of simultaneously
mounted export domains?
Thank you.
--
Nicolas ECARNOT
9 years, 5 months
Unknown libvirterror - where to start?
by Christophe TREFOIS
Hi,

I checked the logs on my hypervisor, which also contains the ovirt-engine (self-hosted), and I see strange unknown libvirterrors that come periodically in the vdsm.log file. The storage is glusterFS running on the hypervisor as well, one NFS export domain and an ISO domain. A NFS domain from another place is in maintenance mode.

I am running oVirt 3.5.3.

Thank you for any pointers as to where to start fixing this issue.

-- log excerpt --

Thread-1947641::DEBUG::2015-11-03 08:47:31,398::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
Thread-8108::DEBUG::2015-11-03 08:47:31,410::libvirtconnection::143::root::(wrapper) Unknown libvirterror: ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata element is not present
Dummy-1895260::DEBUG::2015-11-03 08:47:31,477::storage_mailbox::731::Storage.Misc.excCmd::(_checkForMail) dd if=/rhev/data-center/00000002-0002-0002-0002-0000000003d5/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000 (cwd None)
Dummy-1895260::DEBUG::2015-11-03 08:47:31,501::storage_mailbox::731::Storage.Misc.excCmd::(_checkForMail) SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.00331278 s, 309 MB/s\n'; <rc> = 0
Thread-7913::DEBUG::2015-11-03 08:47:32,298::libvirtconnection::143::root::(wrapper) Unknown libvirterror: ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata element is not present
Thread-5682::DEBUG::2015-11-03 08:47:32,417::libvirtconnection::143::root::(wrapper) Unknown libvirterror: ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata element is not present
Detector thread::DEBUG::2015-11-03 08:47:32,591::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 127.0.0.1:44671
Detector thread::DEBUG::2015-11-03 08:47:32,598::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 127.0.0.1:44671
Detector thread::DEBUG::2015-11-03 08:47:32,599::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 127.0.0.1:44671
Detector thread::DEBUG::2015-11-03 08:47:32,599::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('127.0.0.1', 44671)
Thread-1947642::DEBUG::2015-11-03 08:47:32,602::task::595::Storage.TaskManager.Task::(_updateState) Task=`1d99a166-cb9a-4025-8211-a48e210b5234`::moving from state init -> state preparing
Thread-1947642::INFO::2015-11-03 08:47:32,603::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-1947642::INFO::2015-11-03 08:47:32,603::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {u'de9eb737-691f-4622-9070-891531d599a0': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000373613', 'lastCheck': '2.5', 'valid': True}, u'fe4fd19a-8714-44e0-ae41-663a4b62da7a': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000409446', 'lastCheck': '6.4', 'valid': True}, u'8253a89b-651e-4ff4-865b-57adef05d383': {'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000520671', 'lastCheck': '1.8', 'valid': True}, 'b18eb29e-8bb1-45b9-a60e-a8e07210e066': {'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000424445', 'lastCheck': '6.5', 'valid': True}}
Thread-1947642::DEBUG::2015-11-03 08:47:32,603::task::1191::Storage.TaskManager.Task::(prepare) Task=`1d99a166-cb9a-4025-8211-a48e210b5234`::finished: {u'de9eb737-691f-4622-9070-891531d599a0': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000373613', 'lastCheck': '2.5', 'valid': True}, u'fe4fd19a-8714-44e0-ae41-663a4b62da7a': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000409446', 'lastCheck': '6.4', 'valid': True}, u'8253a89b-651e-4ff4-865b-57adef05d383': {'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000520671', 'lastCheck': '1.8', 'valid': True}, 'b18eb29e-8bb1-45b9-a60e-a8e07210e066': {'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000424445', 'lastCheck': '6.5', 'valid': True}}
Thread-1947642::DEBUG::2015-11-03 08:47:32,603::task::595::Storage.TaskManager.Task::(_updateState) Task=`1d99a166-cb9a-4025-8211-a48e210b5234`::moving from state preparing -> state finished
Thread-1947642::DEBUG::2015-11-03 08:47:32,604::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1947642::DEBUG::2015-11-03 08:47:32,604::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1947642::DEBUG::2015-11-03 08:47:32,604::task::993::Storage.TaskManager.Task::(_decref) Task=`1d99a166-cb9a-4025-8211-a48e210b5234`::ref 0 aborting False
Thread-6348::DEBUG::2015-11-03 08:47:33,261::libvirtconnection::143::root::(wrapper) Unknown libvirterror: ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata element is not present
Thread-7286::DEBUG::2015-11-03 08:47:33,462::libvirtconnection::143::root::(wrapper) Unknown libvirterror: ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata element is not present
Dummy-1895260::DEBUG::2015-11-03 08:47:33,514::storage_mailbox::731::Storage.Misc.excCmd::(_checkForMail) dd if=/rhev/data-center/00000002-0002-0002-0002-0000000003d5/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000 (cwd None)
Dummy-1895260::DEBUG::2015-11-03 08:47:33,540::storage_mailbox::731::Storage.Misc.excCmd::(_checkForMail) SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.00385901 s, 265 MB/s\n'; <rc> = 0
Thread-7627::DEBUG::2015-11-03 08:47:33,938::libvirtconnection::143::root::(wrapper) Unknown libvirterror: ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata element is not present
Thread-7951::DEBUG::2015-11-03 08:47:33,938::libvirtconnection::143::root::(wrapper) Unknown libvirterror: ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata element is not present
Thread-3882::DEBUG::2015-11-03 08:47:33,940::libvirtconnection::143::root::(wrapper) Unknown libvirterror: ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata element is not present
Thread-7967::DEBUG::2015-11-03 08:47:33,940::libvirtconnection::143::root::(wrapper) Unknown libvirterror: ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata element is not present
Thread-7899::DEBUG::2015-11-03 08:47:33,951::libvirtconnection::143::root::(wrapper) Unknown libvirterror: ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata element is not present
VM Channels Listener::DEBUG::2015-11-03 08:47:34,383::vmchannels::96::vds::(_handle_timeouts) Timeout on fileno 133.
VM Channels Listener::DEBUG::2015-11-03 08:47:34,383::vmchannels::96::vds::(_handle_timeouts) Timeout on fileno 135.
VM Channels Listener::DEBUG::2015-11-03 08:47:34,383::vmchannels::96::vds::(_handle_timeouts) Timeout on fileno 136.
VM Channels Listener::DEBUG::2015-11-03 08:47:34,383::vmchannels::96::vds::(_handle_timeouts) Timeout on fileno 146.
VM Channels Listener::DEBUG::2015-11-03 08:47:34,384::vmchannels::96::vds::(_handle_timeouts) Timeout on fileno 160.
VM Channels Listener::DEBUG::2015-11-03 08:47:34,384::vmchannels::96::vds::(_handle_timeouts) Timeout on fileno 161.
VM Channels Listener::DEBUG::2015-11-03 08:47:34,384::vmchannels::96::vds::(_handle_timeouts) Timeout on fileno 167.
VM Channels Listener::DEBUG::2015-11-03 08:47:34,384::vmchannels::96::vds::(_handle_timeouts) Timeout on fileno 171.
VM Channels Listener::DEBUG::2015-11-03 08:47:34,384::vmchannels::96::vds::(_handle_timeouts) Timeout on fileno 172.
VM Channels Listener::DEBUG::2015-11-03 08:47:34,384::vmchannels::96::vds::(_handle_timeouts) Timeout on fileno 159.
VM Channels Listener::DEBUG::2015-11-03 08:47:34,385::vmchannels::96::vds::(_handle_timeouts) Timeout on fileno 189.
VM Channels Listener::DEBUG::2015-11-03 08:47:34,385::vmchannels::96::vds::(_handle_timeouts) Timeout on fileno 190.
VM Channels Listener::DEBUG::2015-11-03 08:47:34,385::vmchannels::96::vds::(_handle_timeouts) Timeout on fileno 195.
VM Channels Listener::DEBUG::2015-11-03 08:47:34,385::vmchannels::96::vds::(_handle_timeouts) Timeout on fileno 197.
VM Channels Listener::DEBUG::2015-11-03 08:47:34,385::vmchannels::96::vds::(_handle_timeouts) Timeout on fileno 198.
VM Channels Listener::DEBUG::2015-11-03 08:47:34,385::vmchannels::96::vds::(_handle_timeouts) Timeout on fileno 213.
VM Channels Listener::DEBUG::2015-11-03 08:47:34,386::vmchannels::96::vds::(_handle_timeouts) Timeout on fileno 215.
VM Channels Listener::DEBUG::2015-11-03 08:47:34,386::vmchannels::96::vds::(_handle_timeouts) Timeout on fileno 98.
VM Channels Listener::DEBUG::2015-11-03 08:47:34,386::vmchannels::96::vds::(_handle_timeouts) Timeout on fileno 125.
JsonRpc (StompReactor)::DEBUG::2015-11-03 08:47:34,412::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-11-03 08:47:34,413::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1947643::DEBUG::2015-11-03 08:47:34,437::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
Thread-7613::DEBUG::2015-11-03 08:47:34,713::libvirtconnection::143::root::(wrapper) Unknown libvirterror: ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata element is not present
Thread-6393::DEBUG::2015-11-03 08:47:34,713::libvirtconnection::143::root::(wrapper) Unknown libvirterror: ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata element is not present
Thread-226941::DEBUG::2015-11-03 08:47:35,511::task::595::Storage.TaskManager.Task::(_updateState) Task=`bdf26401-324c-4220-9034-19c7d816f642`::moving from state init -> state preparing
Thread-4709::DEBUG::2015-11-03 08:47:35,511::task::595::Storage.TaskManager.Task::(_updateState) Task=`0ac2f854-41e3-4427-a0f4-5eaa1842e212`::moving from state init -> state preparing

--

Christophe
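The repeated entries are logged at DEBUG level by vdsm's libvirt wrapper, so one way to keep them out of vdsm.log is to raise the log level; a sketch, assuming the stock /etc/vdsm/logger.conf layout (restarting vdsmd briefly interrupts engine-to-host communication):

# raise vdsm's DEBUG loggers to INFO, then restart vdsm
sed -i 's/DEBUG/INFO/g' /etc/vdsm/logger.conf
systemctl restart vdsmd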
KHdyYXBwZXIpIFVua25vd24gbGlidmlydGVycm9yOiBlY29kZTogODAgZWRvbTogMjAgbGV2ZWw6
IDIgbWVzc2FnZTogbWV0YWRhdGEgbm90IGZvdW5kOiBSZXF1ZXN0ZWQgbWV0YWRhdGEgZWxlbWVu
dCBpcyBub3QgcHJlc2VudDwvZGl2Pg0KPGRpdiBjbGFzcz0iIj5UaHJlYWQtNzI4Njo6REVCVUc6
OjIwMTUtMTEtMDMgMDg6NDc6MzMsNDYyOjpsaWJ2aXJ0Y29ubmVjdGlvbjo6MTQzOjpyb290Ojoo
d3JhcHBlcikgVW5rbm93biBsaWJ2aXJ0ZXJyb3I6IGVjb2RlOiA4MCBlZG9tOiAyMCBsZXZlbDog
MiBtZXNzYWdlOiBtZXRhZGF0YSBub3QgZm91bmQ6IFJlcXVlc3RlZCBtZXRhZGF0YSBlbGVtZW50
IGlzIG5vdCBwcmVzZW50PC9kaXY+DQo8ZGl2IGNsYXNzPSIiPkR1bW15LTE4OTUyNjA6OkRFQlVH
OjoyMDE1LTExLTAzIDA4OjQ3OjMzLDUxNDo6c3RvcmFnZV9tYWlsYm94Ojo3MzE6OlN0b3JhZ2Uu
TWlzYy5leGNDbWQ6OihfY2hlY2tGb3JNYWlsKSBkZCBpZj0vcmhldi9kYXRhLWNlbnRlci8wMDAw
MDAwMi0wMDAyLTAwMDItMDAwMi0wMDAwMDAwMDAzZDUvbWFzdGVyc2QvZG9tX21kL2luYm94IGlm
bGFnPWRpcmVjdCxmdWxsYmxvY2sgY291bnQ9MSBicz0xMDI0MDAwIChjd2QgTm9uZSk8L2Rpdj4N
CjxkaXYgY2xhc3M9IiI+RHVtbXktMTg5NTI2MDo6REVCVUc6OjIwMTUtMTEtMDMgMDg6NDc6MzMs
NTQwOjpzdG9yYWdlX21haWxib3g6OjczMTo6U3RvcmFnZS5NaXNjLmV4Y0NtZDo6KF9jaGVja0Zv
ck1haWwpIFNVQ0NFU1M6ICZsdDtlcnImZ3Q7ID0gJzEmIzQzOzAgcmVjb3JkcyBpblxuMSYjNDM7
MCByZWNvcmRzIG91dFxuMTAyNDAwMCBieXRlcyAoMS4wIE1CKSBjb3BpZWQsIDAuMDAzODU5MDEg
cywgMjY1IE1CL3Ncbic7ICZsdDtyYyZndDsgPSAwPC9kaXY+DQo8ZGl2IGNsYXNzPSIiPlRocmVh
ZC03NjI3OjpERUJVRzo6MjAxNS0xMS0wMyAwODo0NzozMyw5Mzg6OmxpYnZpcnRjb25uZWN0aW9u
OjoxNDM6OnJvb3Q6Oih3cmFwcGVyKSBVbmtub3duIGxpYnZpcnRlcnJvcjogZWNvZGU6IDgwIGVk
b206IDIwIGxldmVsOiAyIG1lc3NhZ2U6IG1ldGFkYXRhIG5vdCBmb3VuZDogUmVxdWVzdGVkIG1l
dGFkYXRhIGVsZW1lbnQgaXMgbm90IHByZXNlbnQ8L2Rpdj4NCjxkaXYgY2xhc3M9IiI+VGhyZWFk
LTc5NTE6OkRFQlVHOjoyMDE1LTExLTAzIDA4OjQ3OjMzLDkzODo6bGlidmlydGNvbm5lY3Rpb246
OjE0Mzo6cm9vdDo6KHdyYXBwZXIpIFVua25vd24gbGlidmlydGVycm9yOiBlY29kZTogODAgZWRv
bTogMjAgbGV2ZWw6IDIgbWVzc2FnZTogbWV0YWRhdGEgbm90IGZvdW5kOiBSZXF1ZXN0ZWQgbWV0
YWRhdGEgZWxlbWVudCBpcyBub3QgcHJlc2VudDwvZGl2Pg0KPGRpdiBjbGFzcz0iIj5UaHJlYWQt
Mzg4Mjo6REVCVUc6OjIwMTUtMTEtMDMgMDg6NDc6MzMsOTQwOjpsaWJ2aXJ0Y29ubmVjdGlvbjo6
MTQzOjpyb290Ojood3JhcHBlcikgVW5rbm93biBsaWJ2aXJ0ZXJyb3I6IGVjb2RlOiA4MCBlZG9t
OiAyMCBsZXZlbDogMiBtZXNzYWdlOiBtZXRhZGF0YSBub3QgZm91bmQ6IFJlcXVlc3RlZCBtZXRh
ZGF0YSBlbGVtZW50IGlzIG5vdCBwcmVzZW50PC9kaXY+DQo8ZGl2IGNsYXNzPSIiPlRocmVhZC03
OTY3OjpERUJVRzo6MjAxNS0xMS0wMyAwODo0NzozMyw5NDA6OmxpYnZpcnRjb25uZWN0aW9uOjox
NDM6OnJvb3Q6Oih3cmFwcGVyKSBVbmtub3duIGxpYnZpcnRlcnJvcjogZWNvZGU6IDgwIGVkb206
IDIwIGxldmVsOiAyIG1lc3NhZ2U6IG1ldGFkYXRhIG5vdCBmb3VuZDogUmVxdWVzdGVkIG1ldGFk
YXRhIGVsZW1lbnQgaXMgbm90IHByZXNlbnQ8L2Rpdj4NCjxkaXYgY2xhc3M9IiI+VGhyZWFkLTc4
OTk6OkRFQlVHOjoyMDE1LTExLTAzIDA4OjQ3OjMzLDk1MTo6bGlidmlydGNvbm5lY3Rpb246OjE0
Mzo6cm9vdDo6KHdyYXBwZXIpIFVua25vd24gbGlidmlydGVycm9yOiBlY29kZTogODAgZWRvbTog
MjAgbGV2ZWw6IDIgbWVzc2FnZTogbWV0YWRhdGEgbm90IGZvdW5kOiBSZXF1ZXN0ZWQgbWV0YWRh
dGEgZWxlbWVudCBpcyBub3QgcHJlc2VudDwvZGl2Pg0KPGRpdiBjbGFzcz0iIj5WTSBDaGFubmVs
cyBMaXN0ZW5lcjo6REVCVUc6OjIwMTUtMTEtMDMgMDg6NDc6MzQsMzgzOjp2bWNoYW5uZWxzOjo5
Njo6dmRzOjooX2hhbmRsZV90aW1lb3V0cykgVGltZW91dCBvbiBmaWxlbm8gMTMzLjwvZGl2Pg0K
PGRpdiBjbGFzcz0iIj5WTSBDaGFubmVscyBMaXN0ZW5lcjo6REVCVUc6OjIwMTUtMTEtMDMgMDg6
NDc6MzQsMzgzOjp2bWNoYW5uZWxzOjo5Njo6dmRzOjooX2hhbmRsZV90aW1lb3V0cykgVGltZW91
dCBvbiBmaWxlbm8gMTM1LjwvZGl2Pg0KPGRpdiBjbGFzcz0iIj5WTSBDaGFubmVscyBMaXN0ZW5l
cjo6REVCVUc6OjIwMTUtMTEtMDMgMDg6NDc6MzQsMzgzOjp2bWNoYW5uZWxzOjo5Njo6dmRzOjoo
X2hhbmRsZV90aW1lb3V0cykgVGltZW91dCBvbiBmaWxlbm8gMTM2LjwvZGl2Pg0KPGRpdiBjbGFz
cz0iIj5WTSBDaGFubmVscyBMaXN0ZW5lcjo6REVCVUc6OjIwMTUtMTEtMDMgMDg6NDc6MzQsMzgz
Ojp2bWNoYW5uZWxzOjo5Njo6dmRzOjooX2hhbmRsZV90aW1lb3V0cykgVGltZW91dCBvbiBmaWxl
bm8gMTQ2LjwvZGl2Pg0KPGRpdiBjbGFzcz0iIj5WTSBDaGFubmVscyBMaXN0ZW5lcjo6REVCVUc6
OjIwMTUtMTEtMDMgMDg6NDc6MzQsMzg0Ojp2bWNoYW5uZWxzOjo5Njo6dmRzOjooX2hhbmRsZV90
aW1lb3V0cykgVGltZW91dCBvbiBmaWxlbm8gMTYwLjwvZGl2Pg0KPGRpdiBjbGFzcz0iIj5WTSBD
aGFubmVscyBMaXN0ZW5lcjo6REVCVUc6OjIwMTUtMTEtMDMgMDg6NDc6MzQsMzg0Ojp2bWNoYW5u
ZWxzOjo5Njo6dmRzOjooX2hhbmRsZV90aW1lb3V0cykgVGltZW91dCBvbiBmaWxlbm8gMTYxLjwv
ZGl2Pg0KPGRpdiBjbGFzcz0iIj5WTSBDaGFubmVscyBMaXN0ZW5lcjo6REVCVUc6OjIwMTUtMTEt
MDMgMDg6NDc6MzQsMzg0Ojp2bWNoYW5uZWxzOjo5Njo6dmRzOjooX2hhbmRsZV90aW1lb3V0cykg
VGltZW91dCBvbiBmaWxlbm8gMTY3LjwvZGl2Pg0KPGRpdiBjbGFzcz0iIj5WTSBDaGFubmVscyBM
aXN0ZW5lcjo6REVCVUc6OjIwMTUtMTEtMDMgMDg6NDc6MzQsMzg0Ojp2bWNoYW5uZWxzOjo5Njo6
dmRzOjooX2hhbmRsZV90aW1lb3V0cykgVGltZW91dCBvbiBmaWxlbm8gMTcxLjwvZGl2Pg0KPGRp
diBjbGFzcz0iIj5WTSBDaGFubmVscyBMaXN0ZW5lcjo6REVCVUc6OjIwMTUtMTEtMDMgMDg6NDc6
MzQsMzg0Ojp2bWNoYW5uZWxzOjo5Njo6dmRzOjooX2hhbmRsZV90aW1lb3V0cykgVGltZW91dCBv
biBmaWxlbm8gMTcyLjwvZGl2Pg0KPGRpdiBjbGFzcz0iIj5WTSBDaGFubmVscyBMaXN0ZW5lcjo6
REVCVUc6OjIwMTUtMTEtMDMgMDg6NDc6MzQsMzg0Ojp2bWNoYW5uZWxzOjo5Njo6dmRzOjooX2hh
bmRsZV90aW1lb3V0cykgVGltZW91dCBvbiBmaWxlbm8gMTU5LjwvZGl2Pg0KPGRpdiBjbGFzcz0i
Ij5WTSBDaGFubmVscyBMaXN0ZW5lcjo6REVCVUc6OjIwMTUtMTEtMDMgMDg6NDc6MzQsMzg1Ojp2
bWNoYW5uZWxzOjo5Njo6dmRzOjooX2hhbmRsZV90aW1lb3V0cykgVGltZW91dCBvbiBmaWxlbm8g
MTg5LjwvZGl2Pg0KPGRpdiBjbGFzcz0iIj5WTSBDaGFubmVscyBMaXN0ZW5lcjo6REVCVUc6OjIw
MTUtMTEtMDMgMDg6NDc6MzQsMzg1Ojp2bWNoYW5uZWxzOjo5Njo6dmRzOjooX2hhbmRsZV90aW1l
b3V0cykgVGltZW91dCBvbiBmaWxlbm8gMTkwLjwvZGl2Pg0KPGRpdiBjbGFzcz0iIj5WTSBDaGFu
bmVscyBMaXN0ZW5lcjo6REVCVUc6OjIwMTUtMTEtMDMgMDg6NDc6MzQsMzg1Ojp2bWNoYW5uZWxz
Ojo5Njo6dmRzOjooX2hhbmRsZV90aW1lb3V0cykgVGltZW91dCBvbiBmaWxlbm8gMTk1LjwvZGl2
Pg0KPGRpdiBjbGFzcz0iIj5WTSBDaGFubmVscyBMaXN0ZW5lcjo6REVCVUc6OjIwMTUtMTEtMDMg
MDg6NDc6MzQsMzg1Ojp2bWNoYW5uZWxzOjo5Njo6dmRzOjooX2hhbmRsZV90aW1lb3V0cykgVGlt
ZW91dCBvbiBmaWxlbm8gMTk3LjwvZGl2Pg0KPGRpdiBjbGFzcz0iIj5WTSBDaGFubmVscyBMaXN0
ZW5lcjo6REVCVUc6OjIwMTUtMTEtMDMgMDg6NDc6MzQsMzg1Ojp2bWNoYW5uZWxzOjo5Njo6dmRz
OjooX2hhbmRsZV90aW1lb3V0cykgVGltZW91dCBvbiBmaWxlbm8gMTk4LjwvZGl2Pg0KPGRpdiBj
bGFzcz0iIj5WTSBDaGFubmVscyBMaXN0ZW5lcjo6REVCVUc6OjIwMTUtMTEtMDMgMDg6NDc6MzQs
Mzg1Ojp2bWNoYW5uZWxzOjo5Njo6dmRzOjooX2hhbmRsZV90aW1lb3V0cykgVGltZW91dCBvbiBm
aWxlbm8gMjEzLjwvZGl2Pg0KPGRpdiBjbGFzcz0iIj5WTSBDaGFubmVscyBMaXN0ZW5lcjo6REVC
VUc6OjIwMTUtMTEtMDMgMDg6NDc6MzQsMzg2Ojp2bWNoYW5uZWxzOjo5Njo6dmRzOjooX2hhbmRs
ZV90aW1lb3V0cykgVGltZW91dCBvbiBmaWxlbm8gMjE1LjwvZGl2Pg0KPGRpdiBjbGFzcz0iIj5W
TSBDaGFubmVscyBMaXN0ZW5lcjo6REVCVUc6OjIwMTUtMTEtMDMgMDg6NDc6MzQsMzg2Ojp2bWNo
YW5uZWxzOjo5Njo6dmRzOjooX2hhbmRsZV90aW1lb3V0cykgVGltZW91dCBvbiBmaWxlbm8gOTgu
PC9kaXY+DQo8ZGl2IGNsYXNzPSIiPlZNIENoYW5uZWxzIExpc3RlbmVyOjpERUJVRzo6MjAxNS0x
MS0wMyAwODo0NzozNCwzODY6OnZtY2hhbm5lbHM6Ojk2Ojp2ZHM6OihfaGFuZGxlX3RpbWVvdXRz
KSBUaW1lb3V0IG9uIGZpbGVubyAxMjUuPC9kaXY+DQo8ZGl2IGNsYXNzPSIiPkpzb25ScGMgKFN0
b21wUmVhY3Rvcik6OkRFQlVHOjoyMDE1LTExLTAzIDA4OjQ3OjM0LDQxMjo6c3RvbXBSZWFjdG9y
Ojo5ODo6QnJva2VyLlN0b21wQWRhcHRlcjo6KGhhbmRsZV9mcmFtZSkgSGFuZGxpbmcgbWVzc2Fn
ZSAmbHQ7U3RvbXBGcmFtZSBjb21tYW5kPSdTRU5EJyZndDs8L2Rpdj4NCjxkaXYgY2xhc3M9IiI+
SnNvblJwY1NlcnZlcjo6REVCVUc6OjIwMTUtMTEtMDMgMDg6NDc6MzQsNDEzOjpfX2luaXRfXzo6
NTMwOjpqc29ucnBjLkpzb25ScGNTZXJ2ZXI6OihzZXJ2ZV9yZXF1ZXN0cykgV2FpdGluZyBmb3Ig
cmVxdWVzdDwvZGl2Pg0KPGRpdiBjbGFzcz0iIj5UaHJlYWQtMTk0NzY0Mzo6REVCVUc6OjIwMTUt
MTEtMDMgMDg6NDc6MzQsNDM3OjpzdG9tcFJlYWN0b3I6OjE2Mzo6eWFqc29ucnBjLlN0b21wU2Vy
dmVyOjooc2VuZCkgU2VuZGluZyByZXNwb25zZTwvZGl2Pg0KPGRpdiBjbGFzcz0iIj5UaHJlYWQt
NzYxMzo6REVCVUc6OjIwMTUtMTEtMDMgMDg6NDc6MzQsNzEzOjpsaWJ2aXJ0Y29ubmVjdGlvbjo6
MTQzOjpyb290Ojood3JhcHBlcikgVW5rbm93biBsaWJ2aXJ0ZXJyb3I6IGVjb2RlOiA4MCBlZG9t
OiAyMCBsZXZlbDogMiBtZXNzYWdlOiBtZXRhZGF0YSBub3QgZm91bmQ6IFJlcXVlc3RlZCBtZXRh
ZGF0YSBlbGVtZW50IGlzIG5vdCBwcmVzZW50PC9kaXY+DQo8ZGl2IGNsYXNzPSIiPlRocmVhZC02
MzkzOjpERUJVRzo6MjAxNS0xMS0wMyAwODo0NzozNCw3MTM6OmxpYnZpcnRjb25uZWN0aW9uOjox
NDM6OnJvb3Q6Oih3cmFwcGVyKSBVbmtub3duIGxpYnZpcnRlcnJvcjogZWNvZGU6IDgwIGVkb206
IDIwIGxldmVsOiAyIG1lc3NhZ2U6IG1ldGFkYXRhIG5vdCBmb3VuZDogUmVxdWVzdGVkIG1ldGFk
YXRhIGVsZW1lbnQgaXMgbm90IHByZXNlbnQ8L2Rpdj4NCjxkaXYgY2xhc3M9IiI+VGhyZWFkLTIy
Njk0MTo6REVCVUc6OjIwMTUtMTEtMDMgMDg6NDc6MzUsNTExOjp0YXNrOjo1OTU6OlN0b3JhZ2Uu
VGFza01hbmFnZXIuVGFzazo6KF91cGRhdGVTdGF0ZSkgVGFzaz1gYmRmMjY0MDEtMzI0Yy00MjIw
LTkwMzQtMTljN2Q4MTZmNjQyYDo6bW92aW5nIGZyb20gc3RhdGUgaW5pdCAtJmd0OyBzdGF0ZSBw
cmVwYXJpbmc8L2Rpdj4NCjxkaXYgY2xhc3M9IiI+VGhyZWFkLTQ3MDk6OkRFQlVHOjoyMDE1LTEx
LTAzIDA4OjQ3OjM1LDUxMTo6dGFzazo6NTk1OjpTdG9yYWdlLlRhc2tNYW5hZ2VyLlRhc2s6Oihf
dXBkYXRlU3RhdGUpIFRhc2s9YDBhYzJmODU0LTQxZTMtNDQyNy1hMGY0LTVlYWExODQyZTIxMmA6
Om1vdmluZyBmcm9tIHN0YXRlIGluaXQgLSZndDsgc3RhdGUgcHJlcGFyaW5nPC9kaXY+DQo8ZGl2
IGFwcGxlLWNvbnRlbnQtZWRpdGVkPSJ0cnVlIiBjbGFzcz0iIj4NCjxkaXYgc3R5bGU9ImxldHRl
ci1zcGFjaW5nOiBub3JtYWw7IG9ycGhhbnM6IGF1dG87IHRleHQtYWxpZ246IHN0YXJ0OyB0ZXh0
LWluZGVudDogMHB4OyB0ZXh0LXRyYW5zZm9ybTogbm9uZTsgd2hpdGUtc3BhY2U6IG5vcm1hbDsg
d2lkb3dzOiBhdXRvOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0cm9rZS13aWR0
aDogMHB4OyB3b3JkLXdyYXA6IGJyZWFrLXdvcmQ7IC13ZWJraXQtbmJzcC1tb2RlOiBzcGFjZTsg
LXdlYmtpdC1saW5lLWJyZWFrOiBhZnRlci13aGl0ZS1zcGFjZTsiIGNsYXNzPSIiPg0KPHAgc3R5
bGU9ImZvbnQtZmFtaWx5OiBBcmlhbCwgc2Fucy1zZXJpZjsgZm9udC1zaXplOiAxMHB0OyBsaW5l
LWhlaWdodDogMTZweDsiIGNsYXNzPSIiPg0KPGZvbnQgY29sb3I9IiMzZDNiM2IiIGNsYXNzPSIi
PjxiIGNsYXNzPSIiPi0tPC9iPjwvZm9udD48L3A+DQo8ZGl2IGNsYXNzPSIiPkNocmlzdG9waGU8
L2Rpdj4NCiZuYnNwOzxzcGFuIGNsYXNzPSJBcHBsZS1jb252ZXJ0ZWQtc3BhY2UiIHN0eWxlPSJj
b2xvcjogcmdiKDAsIDAsIDApOyI+Jm5ic3A7PC9zcGFuPjwvZGl2Pg0KPC9kaXY+DQo8YnIgY2xh
c3M9IiI+DQo8L2Rpdj4NCjwvZGl2Pg0KPC9ib2R5Pg0KPC9odG1sPg0K
--_000_ABC69195C9484895AAFF1900F54BBE43unilu_--
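A note on the repeated "ecode: 80" entries above: that code appears to be libvirt's VIR_ERR_NO_DOMAIN_METADATA, logged when vdsm probes a domain for a metadata element that was never set. The snippet below is a minimal sketch of that probe pattern only; the connection URI, domain name and metadata namespace are illustrative assumptions, not taken from the log.

  # Sketch only: reproduce the benign "metadata not found" probe.
  import libvirt

  conn = libvirt.open('qemu:///system')   # assumed local hypervisor URI
  dom = conn.lookupByName('example-vm')   # placeholder domain name
  try:
      md = dom.metadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT,
                        'http://example.org/vm/metadata/1.0')  # assumed namespace
      print(md)
  except libvirt.libvirtError as e:
      # ecode 80 == VIR_ERR_NO_DOMAIN_METADATA: the element was never set
      if e.get_error_code() == libvirt.VIR_ERR_NO_DOMAIN_METADATA:
          print('metadata element not present (harmless at DEBUG level)')
      else:
          raise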
9 years, 5 months
Greetings and observations from an oVirt noob
by Kenneth Marsh
Hello all,
I do development operations for a part of a software division of a large
multinational. I'm an experienced user of VMWare and Amazon AWS, soon to be
pushed onto Azure, and I've found a common thread among all solutions -
they are all expensive enough that my budget will certainly not be approved
with them. I'm deferred to the IT part of the organisation, which operates
too slowly and inefficiently (in terms of both cost and time) for my
requirements. This is what led me to RHEV, and ultimately to oVirt. This is
a feasibility study for what may ultimately be a RHEV-based data center in
a new office, and if I succeed we will be doing more on a fixed budget by
using more RHEV and less Azure.
I spent the weekend working with oVirt and I'm very impressed. I had no
idea such a comprehensive enterprise-class solution was even available.
Being a complete newcomer, I started without a clue and after a weekend had
set up a nearly-working data centre including an oVirt hypervisor node, all
on old Dell notebooks loaned to me temporarily by our IT group. I started
with RHEV but decided to use oVirt for two reasons - one being to see
what's possible with the latest and greatest, the other because RHEV
required some licensing I've not yet purchased. Long term it'll have to be
RHEV for enterprise support reasons I'm sure many are familiar with.
There are a few things I found, from a newcomer's perspective, very unclear.
- What is oVirt, vs oVirt engine, vs oVirt node, vs oVirt host. Try to
find documentation on any of these and get spammed with references to the
others. I think I've worked out that these are the collective suite of
products, the management centre, the bare-metal hypervisor, and
participating member servers, respectively.
- Which versions of CentOS/Fedora/oVirt Node are compatible at which
oVirt compatibility level? This would normally be addressed in the release
notes. It was also confusing to discover oVirt node 3.2.1 is compatible at
the 3.5 level. The answer to this remains unclear but I'm trying to use
Fedora 22 across the board now with oVirt node 3.2.1 and this seems to be
working, although I haven't gotten a server node into a cluster yet, only
oVirt nodes.
- Storage domains - much doco about them being needed and how to
configure them but nothing about what they are or why they are needed. I
would have expected an oVirt node to be capable of both data and ISO
storage but apparently there needs to be an NFS or iSCSI filesystem there
first? And there's local storage vs shared, another concept much talked
about how to prepare and add it but not explained why one would want to do
that or what it means.
I think with further internet combing and by trial-and-error I'm very
likely to figure it all out. I hope all goes well and implement this stuff
in our new data centre and then I'd be keen to contribute some of my own
tech writing.
Meanwhile, I hope to be active on this mailing list and I thank everyone in
advance for sharing their oVirt experience. For any who are looking at the
doco thanks much for the plethora of stuff out there already and I hope the
above bullet points help you understand where doco most needs more
attention. At least from the perspective of one who has just come across
oVirt.
Kind Regards,
Ken Marsh
Brisbane, Australia
9 years, 5 months
Engine upgrade error
by Frank Rothenstein
Hello,
I tried several times to upgrade my ovirt-engine 3.5.5 to 3.6. Every
time the setup stops at updating the DB schema. The log revealed what
happens: the setup is complaining about a duplicated key - here
is the relevant part of the log. Of course there is only one network on the
interface.
Can anybody help me find a solution?
Thanks, Frank
Running upgrade sql script '/usr/share/ovirt-
engine/dbscripts/upgrade/03_06_1670_unique_vds_id_network_name_vds_inte
rface.sql'...
2542964-
2542965-2015-11-30 08:15:26 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema
plugin.execute:941 execute-output: ['/usr/share/ovirt-
engine/dbscripts/schema.sh', '-s', 'localhost', '-p', '5432', '-u',
'engine', '-d', 'engine', '-l', '/var/log/ovirt-engine/setup/ovirt-
engine-setup-20151130073216-erdh9f.log', '-c', 'apply'] stderr:
2543300-psql:/usr/share/ovirt-
engine/dbscripts/upgrade/03_06_1670_unique_vds_id_network_name_vds_inte
rface.sql:1: ERROR: could not create unique index
"vds_interface_vds_id_network_name_unique"
2543487:DETAIL: Key (vds_id, network_name)=(ba954f0f-6ecb-4ec8-b169-
7e0a1147b4cd, KHnetz) is duplicated.
2543585-CONTEXT: SQL statement "ALTER TABLE vds_interface ADD
CONSTRAINT vds_interface_vds_id_network_name_unique unique (vds_id,
network_name)"
2543723-PL/pgSQL function fn_db_create_constraint(character
varying,character varying,text) line 4 at EXECUTE statement
2543835-FATAL: Cannot execute sql command: --file=/usr/share/ovirt-
engine/dbscripts/upgrade/03_06_1670_unique_vds_id_network_name_vds_inte
rface.sql
2543975-
2543976-2015-11-30 08:15:26 DEBUG otopi.context
context._executeMethod:156 method exception
2544060-Traceback (most recent call last):
2544095- File "/usr/lib/python2.7/site-packages/otopi/context.py",
line 146, in _executeMethod
2544183- method['method']()
2544206- File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-
engine-setup/ovirt-engine/db/schema.py", line 291, in _misc
2544325- oenginecons.EngineDBEnv.PGPASS_FILE
2544365- File "/usr/lib/python2.7/site-packages/otopi/plugin.py", line
946, in execute
2544445- command=args[0],
2544466-RuntimeError: Command '/usr/share/ovirt-
engine/dbscripts/schema.sh' failed to execute
2544552-2015-11-30 08:15:26 ERROR otopi.context
context._executeMethod:165 Failed to execute stage 'Misc
configuration': Command '/usr/share/ovirt-engine/dbscripts/schema.sh'
failed to execute
2544737-2015-11-30 08:15:26 DEBUG otopi.transaction
transaction.abort:134 aborting 'Yum Transaction'
2544830-2015-11-30 08:15:26 INFO
otopi.plugins.otopi.packagers.yumpackager yumpackager.info:95 Yum
Performing yum transaction rollback
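Before re-running the upgrade it can help to list the rows the new unique index trips over, i.e. (vds_id, network_name) pairs that occur more than once in vds_interface. A hedged sketch using psycopg2 follows; the connection parameters are placeholders for your own engine DB credentials.

  # Sketch (placeholder credentials): find duplicated (vds_id, network_name)
  # pairs in vds_interface before retrying engine-setup.
  import psycopg2

  conn = psycopg2.connect(dbname='engine', user='engine',
                          host='localhost', password='CHANGEME')
  cur = conn.cursor()
  cur.execute("""
      SELECT vds_id, network_name, count(*)
        FROM vds_interface
    GROUP BY vds_id, network_name
      HAVING count(*) > 1
  """)
  for vds_id, network_name, n in cur.fetchall():
      print(vds_id, network_name, n)
  conn.close()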
9 years, 5 months
hosted engine vm.conf missing
by Thomas Scofield
I have recently set up an oVirt hosted engine on iSCSI storage and after a
reboot of the system I am unable to start the hosted engine. The agent.log
gives errors indicating there is a missing value in the vm.conf file, but
the vm.conf file does not appear in the location indicated. There is no
error indicated when the agent attempts to reload the vm.conf. Any ideas
on how to get the hosted engine up and running?
MainThread::INFO::2015-11-26
21:31:13,071::hosted_engine::699::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock)
Ensuring lease for lockspace hosted-engine, host id 1 is acquired (file:
/var/run/vdsm/storage/ddf4a26b-61ff-49a4-81db-9f82da35e44b/6ed6d868-aaf3-4b3f-bdf0-a4ad262709ae/1fe5b7fc-eae7-4f07-a2fe-5a082e14c876)
MainThread::INFO::2015-11-26
21:31:13,072::upgrade::836::ovirt_hosted_engine_ha.lib.upgrade.StorageServer::(upgrade)
Host configuration is already up-to-date
MainThread::INFO::2015-11-26
21:31:13,072::hosted_engine::422::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Reloading vm.conf from the shared storage domain
MainThread::ERROR::2015-11-26
21:31:13,100::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Error: ''Configuration value not found:
file=/var/run/ovirt-hosted-engine-ha/vm.conf,
key=memSize'' - trying to restart agent
MainThread::WARNING::2015-11-26
21:31:18,105::agent::208::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Restarting agent, attempt '9'
MainThread::ERROR::2015-11-26
21:31:18,106::agent::210::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Too many errors occurred, giving up. Please review the log and consider
filing a bug.
MainThread::INFO::2015-11-26
21:31:18,106::agent::143::ovirt_hosted_engine_ha.agent.agent.Agent::(run)
Agent shutting down
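For context, vm.conf is a plain key=value file, and the error above is what a lookup produces when the temporary copy under /var/run/ovirt-hosted-engine-ha is missing or empty. The parser below is an illustrative sketch only, not the actual ovirt-hosted-engine-ha code:

  # Illustrative only: a vm.conf-style lookup that fails the same way the
  # agent log shows when 'memSize' is absent from a missing/empty file.
  def load_vm_conf(path):
      conf = {}
      with open(path) as f:
          for line in f:
              line = line.strip()
              if line and not line.startswith('#') and '=' in line:
                  key, value = line.split('=', 1)
                  conf[key] = value
      return conf

  conf = load_vm_conf('/var/run/ovirt-hosted-engine-ha/vm.conf')
  if 'memSize' not in conf:
      raise KeyError('Configuration value not found: key=memSize')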
9 years, 5 months
Windows Networking Issues -
by Matt Wells
Hi all, I have a question about Windows VMs and a networking issue I'm
having.
Here's the setup -
* oVirt - 3.5.1.1-1
* Hypervisors are CentOS 6.7 box with 2 NICs in bond0 'mode=4 miimon=100
lacp_rate=1'
* On bond0 I have a few networks using vlan tagging.
* Networks are 5,10,15,20 - All on an external switch
Network 15 has a Windows 2012 R2 server and a CentOS 6.7 server on it.
The rest of the networks have a few linux.
Every linux box on every network is happy. However any and all Windows
boxes I bring online are incapable of patching or hitting the web. I
pointed the Windows box to the linux box next to it as a proxy (after
installing squid on it) When I do that the Windows box has no issues at
all; it's only when he's attempting to leave on his own.
On my firewall I put in a 'permit any any' on the M$ box IP however all I
see is tcp resets in PCAPs,
I've been playing with for some time but can't seem to find the issue. It
would be one thing if everything on the 15 was bad but the linux box on the
network is fine. Here's the rub, I'm 99.999% sure this used to work.
gggrrr...
Any assistance anyone can offer would be amazingly appreciated.
Thank you for taking the time to read this.
9 years, 5 months
Re: [ovirt-users] [ANN] oVirt 3.6.1 First Release Candidate is now available for testing
by jvdwege
Sandro Bonazzola wrote on 2015-11-30 15:10:
> On Mon, Nov 30, 2015 at 1:56 PM, Joop <jvdwege(a)xs4all.nl> wrote:
>
>> On 30-11-2015 13:06, Sandro Bonazzola wrote:
>>
>> On Sat, Nov 28, 2015 at 1:34 PM, Joop <jvdwege(a)xs4all.nl> wrote:
>> On 25-11-2015 15:28, Sandro Bonazzola wrote:
>>> The oVirt Project is pleased to announce the availability
>>> of the First Release Candidate of oVirt 3.6.1 for testing, as of
>>> November 25th, 2015.
>>>
>>> This release is available now for Fedora 22,
>>> Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and
>>> Red Hat Enterprise Linux >= 7.1, CentOS Linux >= 7.1 (or similar).
>>>
>>> This release supports Hypervisor Hosts running
>>> Red Hat Enterprise Linux >= 7.1, CentOS Linux >= 7.1 (or similar)
>> and
>>> Fedora 22.
>>> Highly experimental support for Debian 8.1 Jessie has been added
>> too.
>>>
>>> This release of oVirt 3.6.1 includes numerous bug fixes.
>>> See the release notes [1] for an initial list of the new features
>> and bugs
>>> fixed.
>>>
>> Tried the 3.6.1 prerelease but the sanlock error 22 is still there
>> and
>> its not possible to activate the imported hosted-engine storage
>> domain.
>> Host F22, hosted-engine CentOS7.1, storage domain(s) NFS.
>>
>> Sanlock error 22 shows up because BZ 1269768 hasn't been fixed yet.
>>
>> But if you don't import the hosted engine storage everything else
>> should still work fine.
> Ah, but that won't work for my use case since I need to import an
> existing data domain and that won't work without a working data
> domain.
> Will see if creating a dummy small data domain will let me import the
> real one.
>
> it should, let me know if it didn't work.
Thanks that worked :-)
> Adding Simone and Roy so they have a better sight on why people are
> keep trying to import the hosted engine domain despite it's not fixed
> yet :-)
Sorry for being pushy, I just wanted to get on with my oVirt stuff and felt a
little frustrated that such a basic feature didn't work (IMHO).
Again apologies, you're all working hard to get things fixed and I
shouldn't complain.
Regards,
Joop
9 years, 5 months
Windows 10
by Koen Vanoppen
Dear all,
Yes, another question :-). This time it's about Windows 10.
I'm running oVirt 3.5.4 and I can't manage to install Windows 10 on it.
It keeps giving me a blue screen (yes, I know, it's still a Windows... ;-) )
on reboot.
Are there any special settings you need to enable when creating the VM?
Which OS do I need to select? Or shall I just wait until the release of
oVirt 3.6 :-) ?
Kind regards,
Koen
9 years, 5 months
Corrupted disks
by Koen Vanoppen
Dear all,
lately we have been experiencing some strange behaviour on our VMs...
Every now and then we have disks that go corrupt. Is there a chance that
oVirt is the issue here or...? It happens (luckily) on our DEV/UAT cluster.
In the last 4 weeks, we have already had 6 VMs that went totally corrupt...
Kind regards,
Koen
9 years, 5 months
Re: [ovirt-users] Engine upgrade error - 03_06_1670_unique_vds_id_network_name_vds_interface.sql
by Yedidyah Bar David
On Mon, Nov 30, 2015 at 3:34 PM, Frank Rothenstein
<f.rothenstein(a)bodden-kliniken.de> wrote:
> Hello,
>
> I tried several times to upgrade my ovirt-engine 3.5.5 to 3.6. Every
> time the setup stops at updating the DB-schema. The log revealed when
> this happpens: The setup ist complaining about a duplicated key - here
> is the part of the log. Of course there is only one network in the
> interface.
>
> Can anybody help me getting a solution?
>
> Thnaks, Frank
>
> Running upgrade sql script '/usr/share/ovirt-
> engine/dbscripts/upgrade/03_06_1670_unique_vds_id_network_name_vds_inte
> rface.sql'...
> 2542964-
> 2542965-2015-11-30 08:15:26 DEBUG
> otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema
> plugin.execute:941 execute-output: ['/usr/share/ovirt-
> engine/dbscripts/schema.sh', '-s', 'localhost', '-p', '5432', '-u',
> 'engine', '-d', 'engine', '-l', '/var/log/ovirt-engine/setup/ovirt-
> engine-setup-20151130073216-erdh9f.log', '-c', 'apply'] stderr:
> 2543300-psql:/usr/share/ovirt-
> engine/dbscripts/upgrade/03_06_1670_unique_vds_id_network_name_vds_inte
> rface.sql:1: ERROR: could not create unique index
> "vds_interface_vds_id_network_name_unique"
> 2543487:DETAIL: Key (vds_id, network_name)=(ba954f0f-6ecb-4ec8-b169-
> 7e0a1147b4cd, KHnetz) is duplicated.
> 2543585-CONTEXT: SQL statement "ALTER TABLE vds_interface ADD
> CONSTRAINT vds_interface_vds_id_network_name_unique unique (vds_id,
> network_name)"
> 2543723-PL/pgSQL function fn_db_create_constraint(character
> varying,character varying,text) line 4 at EXECUTE statement
> 2543835-FATAL: Cannot execute sql command: --file=/usr/share/ovirt-
> engine/dbscripts/upgrade/03_06_1670_unique_vds_id_network_name_vds_inte
> rface.sql
Can you post the output of 'select * from vds_interface' from the engine database?
Adding Eli and changing subject.
> 2543975-
> 2543976-2015-11-30 08:15:26 DEBUG otopi.context
> context._executeMethod:156 method exception
> 2544060-Traceback (most recent call last):
> 2544095- File "/usr/lib/python2.7/site-packages/otopi/context.py",
> line 146, in _executeMethod
> 2544183- method['method']()
> 2544206- File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-
> engine-setup/ovirt-engine/db/schema.py", line 291, in _misc
> 2544325- oenginecons.EngineDBEnv.PGPASS_FILE
> 2544365- File "/usr/lib/python2.7/site-packages/otopi/plugin.py", line
> 946, in execute
> 2544445- command=args[0],
> 2544466-RuntimeError: Command '/usr/share/ovirt-
> engine/dbscripts/schema.sh' failed to execute
> 2544552-2015-11-30 08:15:26 ERROR otopi.context
> context._executeMethod:165 Failed to execute stage 'Misc
> configuration': Command '/usr/share/ovirt-engine/dbscripts/schema.sh'
> failed to execute
> 2544737-2015-11-30 08:15:26 DEBUG otopi.transaction
> transaction.abort:134 aborting 'Yum Transaction'
> 2544830-2015-11-30 08:15:26 INFO
> otopi.plugins.otopi.packagers.yumpackager yumpackager.info:95 Yum
> Performing yum transaction rollback
--
Didi
9 years, 5 months
Re: [ovirt-users] [SOLVED] Re: Trying to make ovirt-hosted-engine-setup create a customized Engine-vm on 3.6 HC HE
by Giuseppe Ragusa
On Wed, Nov 25, 2015, at 12:10, Simone Tiraboschi wrote:
>
>
> On Mon, Nov 23, 2015 at 10:17 PM, Giuseppe Ragusa
> <giuseppe.ragusa(a)hotmail.com> wrote:
>> On Tue, Oct 27, 2015, at 00:10, Giuseppe Ragusa wrote:
>>
> On Mon, Oct 26, 2015, at 09:48, Simone Tiraboschi wrote:
>>
> >
>>
> >
>>
> > On Mon, Oct 26, 2015 at 12:14 AM, Giuseppe Ragusa
> > <giuseppe.ragusa(a)hotmail.com> wrote:
>>
> >> Hi all,
>>
> >> I'm experiencing some difficulties using oVirt 3.6 latest snapshot.
>>
> >>
>>
> >> I'm trying to trick the self-hosted-engine setup to create a custom
> >> engine vm with 3 nics (with fixed MACs/UUIDs).
>>
> >>
>>
> >> The GlusterFS volume (3.7.5 hyperconverged, replica 3, for the
> >> engine vm) and the network bridges (ovirtmgmt and other two
> >> bridges, called nfs and lan, for the engine vm) have been
> >> preconfigured on the initial fully-patched CentOS 7.1 host (plus
> >> other two identical hosts which are awaiting to be added).
>>
> >>
>>
> >> I'm stuck at a point with the engine vm successfully starting but
> >> with only one nic present (connected to the ovirtmgmt bridge).
>>
> >>
>>
> >> I'm trying to obtain the modified engine vm by means of a trick
> >> which used to work in a previous (aborted because of lacking GlusterFS-by-
> >> libgfapi support) oVirt 3.5 test setup (about a year ago, maybe
> >> more): I'm substituting the standard /usr/share/ovirt-hosted-engine-
> >> setup/templates/vm.conf.in with the following:
>>
> >>
>>
> >> vmId=@VM_UUID@
>>
> >> memSize=@MEM_SIZE@
>>
> >> display=@CONSOLE_TYPE@
>>
> >> devices={index:2,iface:ide,address:{ controller:0, target:0,unit:0,
> >> bus:1, type:drive},specParams:{},readonly:true,deviceId:@CDROM_UUI-
> >> D@,path:@CDROM@,device:cdrom,shared:false,type:disk@BOOT_CDROM@}
>>
> >> devices={index:0,iface:virtio,format:raw,poolID:@SP_UUID@,volumeID-
> >> :@VOL_UUID@,imageID:@IMG_UUID@,specParams:{},readonly:false,domain-
> >> ID:@SD_UUID@,optional:false,deviceId:@IMG_UUID@,address:{bus:0x00,
> >> slot:0x06, domain:0x0000, type:pci, function:0x0},device:disk,shar-
> >> ed:exclusive,propagateErrors:off,type:disk@BOOT_DISK@}
>>
> >> devices={device:scsi,model:virtio-scsi,type:controller}
>>
> >> devices={index:4,nicModel:pv,macAddr:02:50:56:3f:c4:b0,linkActive:-
> >> true,network:@BRIDGE@,filter:vdsm-no-mac-
> >> spoofing,specParams:{},deviceId:@NIC_UUID@,address:{bus:0x00,
> >> slot:0x03, domain:0x0000, type:pci,
> >> function:0x0},device:bridge,type:interface@BOOT_PXE@}
>>
> >> devices={index:8,nicModel:pv,macAddr:02:50:56:3f:c4:a0,linkActive:-
> >> true,network:lan,filter:vdsm-no-mac-
> >> spoofing,specParams:{},deviceId:6c467650-1837-47ea-89bc-
> >> 1113f4bfefee,address:{bus:0x00, slot:0x09, domain:0x0000, type:pci,
> >> function:0x0},device:bridge,type:interface@BOOT_PXE@}
>>
> >> devices={index:16,nicModel:pv,macAddr:02:50:56:3f:c4:c0,linkActive-
> >> :true,network:nfs,filter:vdsm-no-mac-
> >> spoofing,specParams:{},deviceId:4d8e0705-8cb4-45b7-b960-
> >> 7f98bb59858d,address:{bus:0x00, slot:0x0c, domain:0x0000, type:pci,
> >> function:0x0},device:bridge,type:interface@BOOT_PXE@}
>>
> >> devices={device:console,specParams:{},type:console,deviceId:@CONSO-
> >> LE_UUID@,alias:console0}
>>
> >> vmName=@NAME@
>>
> >> spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,sreco-
> >> rd,ssmartcard,susbredir
>>
> >> smp=@VCPUS@
>>
> >> cpuType=@CPU_TYPE@
>>
> >> emulatedMachine=@EMULATED_MACHINE@
>>
> >>
>>
> >> but unfortunately the vm gets created like this (output from "ps";
> >> note that I'm attaching a CentOS7.1 Netinstall ISO with an embedded
> >> kickstart: the installation should proceed by HTTP on the lan
> >> network but obviously fails):
>>
> >>
>>
> >> /usr/libexec/qemu-kvm -name HostedEngine -S -machine
>>
> >> pc-i440fx-rhel7.1.0,accel=kvm,usb=off -cpu Westmere -m 4096 -
> >> realtime mlock=off
>>
> >> -smp 2,sockets=2,cores=1,threads=1 -uuid f49da721-8aa6-4422-8b91-
> >> e91a0e38aa4a -s
>>
> >> mbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-
> >> 1.1503.el7.centos.2
>>
> >> .8,serial=2a1855a9-18fb-4d7a-b8b8-6fc898a8e827,uuid=f49da721-8aa6-
> >> 4422-8b91-e91a
>>
> >> 0e38aa4a -no-user-config -nodefaults -chardev
> >> socket,id=charmonitor,path=/var/li
>>
> >> b/libvirt/qemu/HostedEngine.monitor,server,nowait -mon
> >> chardev=charmonitor,id=mo
>>
> >> nitor,mode=control -rtc base=2015-10-25T11:22:22,driftfix=slew -
> >> global kvm-pit.l
>>
> >> ost_tick_policy=discard -no-hpet -no-reboot -boot strict=on -device
> >> piix3-usb-uh
>>
> >> ci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-
> >> pci,id=scsi0,bus=pci.0,addr
>>
> >> =0x4 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5
> >> =-drive file=
>>
> >> /var/tmp/engine.iso,if=none,id=drive-ide0-1-
> >> 0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-
> >> 0,bootindex=1 -drive file=/var/run/vdsm/storage/be4434bf-a5fd-44d7-8011-
> >> d5e4ac9cf523/b3abc1cb-8a78-4b56-a9b0-e5f41fea0fdc/8d075a8d-730a-
> >> 4925-8779-e0ca2b3dbcf4,if=none,id=drive-virtio-
> >> disk0,format=raw,serial=b3abc1cb-8a78-4b56-a9b0-
> >> e5f41fea0fdc,cache=none,werror=stop,rerror=stop,aio=threads -device
> >> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-
> >> disk0,id=virtio-disk0 -netdev
> >> tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 -device virtio-net-
> >> pci,netdev=hostnet0,id=net0,mac=02:50:56:3f:c4:b0,bus=pci.0,addr=0-
> >> x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/chan-
> >> nels/f49da721-8aa6-4422-8b91-
> >> e91a0e38aa4a.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-
> >> serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rh-
> >> evm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qem-
> >> u/channels/f49da721-8aa6-4422-8b91-
> >> e91a0e38aa4a.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-
> >> serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.gues-
> >> t_agent.0 -chardev socket,id=charchannel2,path=/var/lib/libvirt/qe-
> >> mu/channels/f49da721-8aa6-4422-8b91-e91a0e38aa4a.org.ovirt.hosted-
> >> engine-setup.0,server,nowait -device virtserialport,bus=virtio-
> >> serial0.0,nr=3,chardev=charchannel2,id=channel2,name=org.ovirt.hos-
> >> ted-engine-setup.0 -chardev socket,id=charconsole0,path=/var/run/ovirt-vmconsole-
> >> console/f49da721-8aa6-4422-8b91-e91a0e38aa4a.sock,server,nowait -
> >> device virtconsole,chardev=charconsole0,id=console0 -vnc
> >> 0:0,password -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -msg
> >> timestamp=on
>>
> >>
>>
> >> There seem to be no errors in the logs.
>>
> >>
>>
> >> I've tried reading some (limited) Python setup code but I've not
> >> found any obvious reason why the trick should not work anymore.
>>
> >>
>>
> >> I know that 3.6 has different network configuration/management and
> >> this could be the hot point.
>>
> >>
>>
> >> Does anyone have any further suggestion or clue (code/logs to
> >> read)?
>>
> >
>>
> > The VM creation path is now a bit different cause we use just vdscli
> > library instead of vdsClient.
>>
> > Please take a a look at mixins.py
>>
>
>>
> Many thanks for your very valuable hint:
>>
>
>>
> I've restored the original /usr/share/ovirt-hosted-engine-
> setup/templates/vm.conf.in and I've managed to obtain the 3-nics-
> customized vm by modifying /usr/lib/python2.7/site-
> packages/ovirt_hosted_engine_setup/mixins.py like this ("diff -Naur"
> output):
>>
>> Hi Simone,
>>
it seems that I spoke too soon :(
>>
>>
A separate network issue (already reported to the list) prevented me
from successfully closing up the setup in its final phase
(registering the host inside the Engine), so all seemed well while
being stuck there :)
>>
>>
Now that I've solved that (for which I'm informing the list asap in a
separate message) and the setup ended successfully, it seems that the
last step of the HE setup (shutdown the Engine vm to place it under HA
agent control) starts/creates a "different vm" and my virtual hardware
customizations seem to be gone (only one NIC present, connected to
ovirtmgmt).
>>
>>
My wild guess: maybe I need BOTH the mixins.py AND the vm.conf.in
customizations? ;)
>
> Yes, you are right: the final configuration is still generated from
> the template so you need to fix both.
>
>>
>>
It seems (from /etc/ovirt-hosted-engine/hosted-engine.conf) that the
Engine vm definition is now in /var/run/ovirt-hosted-engine-ha/vm.conf
>>
>
>
> the vm configuration is now read from the shared domain converting
> back from the OVF_STORE, the idea is to let you edit it from the
> engine without the need to write it on each host. /var/run/ovirt-hosted-engine-
> ha/vm.conf is just a temporary copy.
>
> As you are probably still not able to import the hosted-engine storage
> domain in the engine due to a well know bug, your system will fallback
> to the initial vm.conf configuration still on the shared storage and
> you can manually fix it: Please follow this procedure substituting
> '192.168.1.115:_Virtual_ext35u36' with the mount point of hosted-
> engine storage domain on your system:
>
> mntpoint=/rhev/data-center/mnt/192.168.1.115:_Virtual_ext35u36
> dir=`mktemp -d` && cd $dir
> sdUUID_line=$(grep sdUUID /etc/ovirt-hosted-engine/hosted-engine.conf)
> sdUUID=${sdUUID_line:7:36}
> conf_volume_UUID_line=$(grep conf_volume_UUID /etc/ovirt-hosted-engine/hosted-engine.conf)
> conf_volume_UUID=${conf_volume_UUID_line:17:36}
> conf_image_UUID_line=$(grep conf_image_UUID /etc/ovirt-hosted-engine/hosted-engine.conf)
> conf_image_UUID=${conf_image_UUID_line:16:36}
> dd if=$mntpoint/$sdUUID/images/$conf_image_UUID/$conf_volume_UUID 2>/dev/null | tar -xvf -
> # directly edit vm.conf as you need
> tar -cO * | dd of=$mntpoint/$sdUUID/images/$conf_image_UUID/$conf_volume_UUID
>
> When your engine will import the hosted-engine storage domain it will
> generate an OVF_STORE with the configuration of the engine VM, you
> will be able to edit some parameters from the engine and the agent
> will read the VM configuration from there.
Hi Simone, I followed your advice and modified the vm.conf inside the
conf_volume tar archive.
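For anyone scripting this, a rough Python equivalent of that unpack/edit/repack step follows; the volume path is a placeholder assembled from sdUUID, conf_image_UUID and conf_volume_UUID as in the quoted procedure, and a file-based (NFS/GlusterFS) storage domain is assumed:

  # Sketch: unpack the hosted-engine configuration volume, edit vm.conf by
  # hand, then pack it back. Paths are placeholders.
  import io
  import os
  import tarfile

  volume = '/rhev/data-center/mnt/HOST:_export/SD_UUID/images/IMG_UUID/VOL_UUID'
  workdir = '/tmp/he-conf'

  with open(volume, 'rb') as f:
      archive = io.BytesIO(f.read())
  with tarfile.open(fileobj=archive) as tar:
      tar.extractall(workdir)          # now edit workdir/vm.conf as needed

  out = io.BytesIO()
  with tarfile.open(fileobj=out, mode='w') as tar:
      for name in os.listdir(workdir):
          tar.add(os.path.join(workdir, name), arcname=name)
  with open(volume, 'wb') as f:
      f.write(out.getvalue())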
Then I put the system (still with only one physical host) in global
maintenance from the host:
hosted-engine --set-maintenance --mode=global
Then i regularly powered off the Engine vm and confirmed it from the
host:
hosted-engine --vm-status
Then restarted it from the host:
hosted-engine --vm-start
Finally, I exited maintenance from the host:
hosted-engine --set-maintenance --mode=none
Waited a bit and verified it:
hosted-engine --vm-status
It all worked as expected!
Many thanks again.
Regards, Giuseppe
>
>>
>>
Many thanks for your assistance (and obviously I just sent a related
wishlist item on the HE setup ;)
>>
>>
Regards,
>>
Giuseppe
>>
>>
>>
> **************************************************************************
>
> --- mixins.py.orig 2015-10-20 16:57:40.000000000 +0200
> +++ mixins.py 2015-10-26 22:22:58.351223922 +0100
> @@ -25,6 +25,7 @@
>  import random
>  import string
>  import time
> +import uuid
>
>
>  from ovirt_hosted_engine_setup import constants as ohostedcons
> @@ -247,6 +248,44 @@
>              ]['@BOOT_PXE@'] == ',bootOrder:1':
>                  nic['bootOrder'] = '1'
>              conf['devices'].append(nic)
> +            nic2 = {
> +                'nicModel': 'pv',
> +                'macAddr': '02:50:56:3f:c4:a0',
> +                'linkActive': 'true',
> +                'network': 'lan',
> +                'filter': 'vdsm-no-mac-spoofing',
> +                'specParams': {},
> +                'deviceId': str(uuid.uuid4()),
> +                'address': {
> +                    'bus': '0x00',
> +                    'slot': '0x09',
> +                    'domain': '0x0000',
> +                    'type': 'pci',
> +                    'function': '0x0'
> +                },
> +                'device': 'bridge',
> +                'type': 'interface',
> +            }
> +            conf['devices'].append(nic2)
> +            nic3 = {
> +                'nicModel': 'pv',
> +                'macAddr': '02:50:56:3f:c4:c0',
> +                'linkActive': 'true',
> +                'network': 'nfs',
> +                'filter': 'vdsm-no-mac-spoofing',
> +                'specParams': {},
> +                'deviceId': str(uuid.uuid4()),
> +                'address': {
> +                    'bus': '0x00',
> +                    'slot': '0x0c',
> +                    'domain': '0x0000',
> +                    'type': 'pci',
> +                    'function': '0x0'
> +                },
> +                'device': 'bridge',
> +                'type': 'interface',
> +            }
> +            conf['devices'].append(nic3)
>
>              cli = self.environment[ohostedcons.VDSMEnv.VDS_CLI]
>              status = cli.create(conf)
>
> **************************************************************************
>
> Obviously this is a horrible ad-hoc hack that I'm not able to generalize/clean-
> up now: doing so would involve (apart from a deeper understanding of
> the whole setup code/workflow) some well-thought-out design decisions
> and, given the effective deprecation of the aforementioned easy-to-
> modify vm.conf.in template substituted by hardwired Python program
> logic, it seems that such a functionality is not very high on the
> development priority list atm ;)
>>
>
>>
> Many thanks again!
>>
>
>>
> Kind regards,
>>
> Giuseppe
>>
>
>>
> >> Many thanks in advance.
>>
> >>
>>
> >> Kind regards,
>>
> >> Giuseppe
>>
> >>
>>
> >> PS: please keep also my address in replying because I'm
> >> experiencing some problems between Hotmail and oVirt-mailing-
> >> list
>>
> >>
>>
> >> _______________________________________________
>>
> >>
>>
> Users mailing list
>>
> >>Users(a)ovirt.org
>>
> >>http://lists.ovirt.org/mailman/listinfo/users
>>
> >>
9 years, 5 months
Issue with kernel 2.6.32-573.3.1.el6.x86_64?
by Michael Kleinpaste
So I patched my vhosts and updated the kernel
to 2.6.32-573.3.1.el6.x86_64. Afterwards the networking became unstable
for my Vyatta firewall VM. Lots of packet loss and out-of-order packets
(based on my tshark capture at the time).
Has anybody else experienced this?
--
*Michael Kleinpaste*
Senior Systems Administrator
SharperLending, LLC.
www.SharperLending.com
Michael.Kleinpaste(a)SharperLending.com
(509) 324-1230 Fax: (509) 324-1234
9 years, 5 months
oVirt 4.0 wishlist: oVirt Engine
by Giuseppe Ragusa
Hi all,
I go on with my wishlist, derived from both solitary mumblings and community talks at the first Italian oVirt Meetup.
I offer to help in coding (work/family schedules permitting) but keep in mind that I'm a sysadmin with mainly C and bash-scripting skills (but hoping to improve my less-than-newbie Python too...)
I've sent separate wishlist messages for oVirt Node and VDSM.
oVirt Engine:
*) add Samba/CTDB/Ganesha capabilities (maybe in the GlusterFS management UI); there are related wishlist items on configuring/managing Samba/CTDB/Ganesha on oVirt Node and on VDSM
*) add the ability to manage containers (maybe initially as an exclusive cluster type but allowing it to coexist with GlusterFS); there are related wishlist items on supporting containers on the oVirt Node and on VDSM
*) add Open vSwitch direct support (not Neutron-mediated); there are related wishlist items on configuring/managing Open vSwitch on oVirt Node and on VDSM
*) add DRBD9 as a supported Storage Domain type, HC/HE too, managed from the Engine UI similarly to GlusterFS; there are related wishlist items on configuring/managing DRBD9 on oVirt Node and on VDSM
*) add support for managing/limiting GlusterFS heal/rebalance bandwidth usage in HC setup [1]; this is actually a GlusterFS wishlist item first and foremost, but I hope our use case could be considered compelling enough to "force their hand" a bit ;)
Regards,
Giuseppe
[1] bandwidth limiting seems to be supported only for geo-replication on GlusterFS side; it is my understanding that on non-HC setups the heal/rebalance traffic could be kept separate from hypervisor/client traffic (if a separate, Gluster-only, network is physically available and Gluster cluster nodes have been peer-probed on those network addresses)
9 years, 5 months
Re: [ovirt-users] Strange permissions on Hosted Engine HA Agent log files
by Giuseppe Ragusa
On Fri, Nov 27, 2015, at 18:15, Simone Tiraboschi wrote:
>
>
> On Wed, Nov 25, 2015 at 11:53 PM, Giuseppe Ragusa <giuseppe.ragusa(a)hotmail.com> wrote:
>> Hi all,
>> I'm installing oVirt (3.6) in self-hosted mode, hyperconverged with GlusterFS (3.7.6).
>>
>> I'm using the oVirt snapshot generated the night between the 18th and 19th of November, 2015.
>>
>> The (single, at the moment) host and the Engine are both CentOS 7.1 fully up-to-date.
>>
>> After ovirt-hosted-engine-setup successful completion, I found the following (about 3 days after setup completed) "anomalies":
>>
>> 666 1 vdsm kvm - /var/log/ovirt-hosted-engine-ha/agent.log
>> 666 1 vdsm kvm - /var/log/ovirt-hosted-engine-ha/agent.log.2015-11-23
>> 666 1 vdsm kvm - /var/log/ovirt-hosted-engine-ha/broker.log
>> 666 1 vdsm kvm - /var/log/ovirt-hosted-engine-ha/broker.log.2015-11-23
>>
>> The listing above comes from a custom security checking script that gives:
>>
>> "octal permissions" "number of links" "owner" "group" - "absolute pathname"
>>
>> Is the ominous "666" mark actually intended/necessary? ;-)
>
> Thanks for the report Giuseppe, I double checked on one of my test systems.
>
> [root@c71het20151028 ~]# ls -l /var/log/ovirt-hosted-engine-ha/*
> -rw-r--r--. 1 root root 10136 Nov 27 18:08 /var/log/ovirt-hosted-engine-ha/agent.log
> -rw-r--r--. 1 root root 1769029 Oct 29 17:20 /var/log/ovirt-hosted-engine-ha/agent.log.2015-10-28
> -rw-rw-rw-. 1 vdsm kvm 97685 Oct 29 18:21 /var/log/ovirt-hosted-engine-ha/agent.log.2015-10-29
> -rw-r--r--. 1 root root 3620102 Nov 25 12:09 /var/log/ovirt-hosted-engine-ha/agent.log.2015-11-24
> -rw-rw-rw-. 1 vdsm kvm 715086 Nov 25 17:01 /var/log/ovirt-hosted-engine-ha/agent.log.2015-11-25
> -rw-r--r--. 1 root root 13904 Nov 27 18:09 /var/log/ovirt-hosted-engine-ha/broker.log
> -rw-r--r--. 1 root root 9711872 Oct 29 17:20 /var/log/ovirt-hosted-engine-ha/broker.log.2015-10-28
> -rw-rw-rw-. 1 vdsm kvm 468475 Oct 29 18:21 /var/log/ovirt-hosted-engine-ha/broker.log.2015-10-29
> -rw-r--r--. 1 root root 14066693 Nov 25 12:09 /var/log/ovirt-hosted-engine-ha/broker.log.2015-11-24
> -rw-rw-rw-. 1 vdsm kvm 4505277 Nov 25 17:01 /var/log/ovirt-hosted-engine-ha/broker.log.2015-11-25
>
>
> So I've something 644 root:root and something at 666 vdsm:kvm depending from the rotation date (???).
> The directory is 700 vdsm:kvm so it's not really an issue but I think it's still worth to open a bug on that.
Many thanks Simone for your confirmation: bug opened
https://bugzilla.redhat.com/show_bug.cgi?id=1286568
Regards,
Giuseppe
>> Do I need to open a bugzilla notification for this?
>> Many thanks in advance for your attention.
>>
>> Regards,
>> Giuseppe
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
9 years, 5 months
ovirt-engine-sdk-python too slow
by John Hunter
Hi guys,
I am using the ovirt-engine-sdk-python to communicate with the ovirt-engine,
I am able to list the VMs, but the processing time is too long - about 4.5
seconds - and this line:
from ovirtsdk.api import API
takes almost 3 seconds.
This seems a little longer than I expected it to be, so I am asking: is
there a quicker way to communicate with the ovirt-engine?
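One way to see where the time goes, and to amortize the import cost, is to time each phase separately and keep a single API object alive for the life of the process instead of reconnecting per request. A sketch follows; the URL and credentials are placeholders.

  # Sketch (placeholder URL/credentials): time import, connect and listing
  # separately; reuse one API object for the whole process.
  import time

  t0 = time.time()
  from ovirtsdk.api import API   # per the report above, this is the ~3s part
  print('import: %.2fs' % (time.time() - t0))

  t0 = time.time()
  api = API(url='https://engine.example.com/ovirt-engine/api',
            username='admin@internal', password='secret', insecure=True)
  print('connect: %.2fs' % (time.time() - t0))

  t0 = time.time()
  for vm in api.vms.list():
      print(vm.get_name())
  print('list: %.2fs' % (time.time() - t0))
  api.disconnect()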
--
Best regards
Junwang Zhao
Department of Computer Science &Technology
Peking University
Beijing, 100871, PRC
9 years, 5 months
Install ovirt locally
by John Hunter
Hi all,
Here is my situation: in my office, we cannot access websites outside of
China, so I have to take my PC home to install the oVirt environment, which
makes me sad.
Can I download all the packages needed by oVirt at home, store them on my
U-disk, then go back to the office and install the environment? I am worried
about the package dependencies.
Has anybody done this, or does anybody have a better solution for this?
Any help would be appreciated :)
--
Best regards
Junwang Zhao
Department of Computer Science &Technology
Peking University
Beijing, 100871, PRC
9 years, 5 months
live snapshot merging
by Koen Vanoppen
Dear Community,
Are there plans for live merging of snapshots?
I know it is supported from libvirt 1.2.10.
Kind regards,
Koen
9 years, 5 months
Networking fails for VMs running on CentOS 6.7, works on CentOS 6.5
by mad Engineer
hello all, I am having a strange network issue with VMs that are running on
CentOS 6.7 oVirt nodes.
I recently added one more oVirt node which is running CentOS 6.7 and
upgraded from CentOS 6.5 to CentOS 6.7 on all other nodes.
All VMs running on nodes with CentOS 6.7 as the host operating system fail to
reach the network gateway, but if I reboot that same host into the CentOS 6.5
kernel everything works fine (without changing any network configuration).
Initially I thought it was a configuration issue, but it is there on all nodes;
if I reboot to the old kernel everything is working.
I am aware of the ghost vlan0 issue in the CentOS 6.6 kernel, but not aware of
any issue in CentOS 6.7. Also, all my servers are up to date.
All physical interfaces are in access mode VLAN connected to nexus 5k
switches.
working kernel- 2.6.32-431.20.3.el6.x86_64
non working kernel- 2.6.32-573.8.1.el6.x86_64
Any idea?
9 years, 5 months
Random VMs stuck in [drm] fb: depth 24,pitch 4096
by kevin parrikar
Running CentOS 6.7 and trying to PXE boot CentOS 6.5 VMs, but some of
them are stuck with this boot message:
[drm] fb: depth 24,pitch 4096
screenshot of vm console:
http://snag.gy/7Bhyc.jpg
I tried restarting the VMs but they are still stuck there.
rpm -qa |grep kvm
qemu-kvm-rhev-tools-0.12.1.2-2.479.el6_7.2.x86_64
qemu-kvm-rhev-0.12.1.2-2.479.el6_7.2.x86_64
rpm -qa |grep seabio
seabios-0.6.1.2-30.el6.x86_64
uname -a
2.6.32-431.el6.x86_64
ever seen this behaviour?
vms have 8gb/4cpu
9 years, 5 months
3.6 vm from template with cloud-init. Not able to confirm OK button
by Gianluca Cecchi
Hello,
I have created a template based on a CentOS 7.1 VM where I installed
cloud-init.
Now I'm trying to create a VM from that template using cloud-init and
changing hostname and network parameters.
It seems that something is wrong in the "Initial Run" section, because when
I press OK nothing happens except a sort of dotted square around
the OK button, and I'm not able to see which setting is not OK so that I can
correct it.....
Can I debug anywhere?
BTW: if I succeed in this preliminary test I would like to create a VM pool
from this same template where, for every started VM in the pool (stateless
is ok for them), a new hostname and a preconfigured static ip is set (I
have not and I can't set up a dhcp server on that network). Is something
configurable in vm pool with cloud init?
Thanks
Gianluca
9 years, 5 months
oVirt 4.0 wishlist: oVirt Self Hosted Engine Setup
by Giuseppe Ragusa
Hi all,
I go on with my wishlist, derived from both solitary mumblings and community talks at the first Italian oVirt Meetup.
I offer to help in coding (work/family schedules permitting) but keep in mind that I'm a sysadmin with mainly C and bash-scripting skills (but hoping to improve my less-than-newbie Python too...)
I've sent separate wishlist messages for oVirt Node, oVirt Engine and VDSM.
oVirt Self Hosted Engine Setup:
*) allow virtual hardware customizations for locally-created Engine vm, specifically: allow to add an arbitrary number of NICs (asking for MAC address and local bridge to connect to) and maybe also an arbitrary number of disks (asking for size) as these seem to be the only/most_useful items missing; maybe the prebuilt appliance image too may be inspected by setup to detect a customized one and connect any further NICs to custom local bridges (which the user should be asked for)
Regards,
Giuseppe
9 years, 5 months
3.6: updating a template in a vm pool causes removal of soundcard and memory balloon device
by Cristian Mammoli
Hi, when I change the template version in a VM pool the soundcard and balloon
device get cleared, even if the new template has sound and balloon enabled.
After that the checkboxes are greyed out and cannot be enabled without
destroying and recreating the pool...
How to reproduce:
Create a template with soundcard and memory balloon device
Create a pool from the above template
VMs and the pool itself have sound and balloon device
Create a new Template as version 2 of the original one
Check that sound and balloon are enabled in the template
Reconfigure the pool to use version 2 of the template
VMs and the pool itself have sound and balloon *unchecked* and cannot be
enabled anymore
9 years, 5 months
vdsm using 100% CPU, rapidly filling logs with _handle_event messages
by Robert Story
I'm running oVirt 3.5.x with a hosted engine. This morning I noticed that 2
of my 5 hosts were showing 99-100% cpu usage. Logging in to them, vdsmd
seemed to be the culprit, and it was filling the log file with these
messages:
VM Channels Listener::DEBUG::2015-11-12 08:09:26,292::vmchannels::59::vds::(_handle_event) Received 00000011. On fd removed by epoll.
VM Channels Listener::INFO::2015-11-12 08:09:26,293::vmchannels::54::vds::(_handle_event) Received 00000011 on fileno 119
VM Channels Listener::DEBUG::2015-11-12 08:09:26,293::vmchannels::59::vds::(_handle_event) Received 00000011. On fd removed by epoll.
VM Channels Listener::INFO::2015-11-12 08:09:26,293::vmchannels::54::vds::(_handle_event) Received 00000011 on fileno 75
VM Channels Listener::DEBUG::2015-11-12 08:09:26,293::vmchannels::59::vds::(_handle_event) Received 00000011. On fd removed by epoll.
VM Channels Listener::INFO::2015-11-12 08:09:26,294::vmchannels::54::vds::(_handle_event) Received 00000011 on fileno 119
VM Channels Listener::DEBUG::2015-11-12 08:09:26,294::vmchannels::59::vds::(_handle_event) Received 00000011. On fd removed by epoll.
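For what it's worth, the 00000011 hex mask in those lines decodes to EPOLLIN|EPOLLHUP, i.e. the peer end of the guest channel has hung up; if the fd is not actually unregistered, epoll keeps reporting it on every poll and the listener spins. A quick sketch of the decoding (nothing vdsm-specific here):

  # Sketch: decode the 00000011 event mask printed by vmchannels.
  import select

  event = 0x11
  assert event & select.EPOLLIN    # 0x001: readable (EOF counts as readable)
  assert event & select.EPOLLHUP   # 0x010: peer hung up
  # A fd that stays registered while continuously reporting EPOLLIN|EPOLLHUP
  # makes the event loop wake up immediately on every poll -> 100% CPU.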
I googled to see how to change the debug level to turn of DEBUG messages
for vdsm, which referred me to libvirtd.conf, but the debug level there was
not set, which should have meant a log level of 3 (warnings and errors), so
I'm not sure why the log was filling up with DEBUG/INFO messages.
I restarted vdsmd, which resulted in those nodes being marked as
'disconnected', but they did eventually recover and loads went back to
normal.
This may or may not be related to the fact that the 3 hosts where this did
not happen can't seem to keep their ha brokers up. I'll be starting a new
thread on that shortly.
Robert
--
Senior Software Engineer @ Parsons
9 years, 5 months
timeouts
by paf1@email.cz
Hello,
can anybody help me with these timeouts?
Volumes are not active yet ( bricks down )
desc. of the gluster setup below ...
/var/log/glusterfs/etc-glusterfs-glusterd.vol.log
[2015-11-26 14:44:47.174221] I [MSGID: 106004]
[glusterd-handler.c:5065:__glusterd_peer_rpc_notify] 0-management: Peer
<1hp1-SAN> (<87fc7db8-aba8-41f2-a1cd-b77e83b17436>), in state <Peer in
Cluster>, has disconnected from glusterd.
[2015-11-26 14:44:47.174354] W
[glusterd-locks.c:681:glusterd_mgmt_v3_unlock]
(-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c)
[0x7fb7039d44dc]
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162)
[0x7fb7039de542]
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x58a)
[0x7fb703a79b4a] ) 0-management: Lock for vol 1HP12-P1 not held
[2015-11-26 14:44:47.174444] W
[glusterd-locks.c:681:glusterd_mgmt_v3_unlock]
(-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c)
[0x7fb7039d44dc]
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162)
[0x7fb7039de542]
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x58a)
[0x7fb703a79b4a] ) 0-management: Lock for vol 1HP12-P3 not held
[2015-11-26 14:44:47.174521] W
[glusterd-locks.c:681:glusterd_mgmt_v3_unlock]
(-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c)
[0x7fb7039d44dc]
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162)
[0x7fb7039de542]
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x58a)
[0x7fb703a79b4a] ) 0-management: Lock for vol 2HP12-P1 not held
[2015-11-26 14:44:47.174662] W
[glusterd-locks.c:681:glusterd_mgmt_v3_unlock]
(-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c)
[0x7fb7039d44dc]
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162)
[0x7fb7039de542]
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x58a)
[0x7fb703a79b4a] ) 0-management: Lock for vol 2HP12-P3 not held
[2015-11-26 14:44:47.174532] W [MSGID: 106118]
[glusterd-handler.c:5087:__glusterd_peer_rpc_notify] 0-management: Lock
not released for 2HP12-P1
[2015-11-26 14:44:47.174675] W [MSGID: 106118]
[glusterd-handler.c:5087:__glusterd_peer_rpc_notify] 0-management: Lock
not released for 2HP12-P3
[2015-11-26 14:44:49.423334] I [MSGID: 106488]
[glusterd-handler.c:1472:__glusterd_handle_cli_get_volume] 0-glusterd:
Received get vol req
The message "I [MSGID: 106488]
[glusterd-handler.c:1472:__glusterd_handle_cli_get_volume] 0-glusterd:
Received get vol req" repeated 4 times between [2015-11-26
14:44:49.423334] and [2015-11-26 14:44:49.429781]
[2015-11-26 14:44:51.148711] I [MSGID: 106163]
[glusterd-handshake.c:1193:__glusterd_mgmt_hndsk_versions_ack]
0-management: using the op-version 30702
[2015-11-26 14:44:52.177266] W [socket.c:869:__socket_keepalive]
0-socket: failed to set TCP_USER_TIMEOUT -1000 on socket 12, Invalid
argument
[2015-11-26 14:44:52.177291] E [socket.c:2965:socket_connect]
0-management: Failed to set keep-alive: Invalid argument
[2015-11-26 14:44:53.180426] W [socket.c:869:__socket_keepalive]
0-socket: failed to set TCP_USER_TIMEOUT -1000 on socket 17, Invalid
argument
[2015-11-26 14:44:53.180447] E [socket.c:2965:socket_connect]
0-management: Failed to set keep-alive: Invalid argument
[2015-11-26 14:44:52.395468] I [MSGID: 106163]
[glusterd-handshake.c:1193:__glusterd_mgmt_hndsk_versions_ack]
0-management: using the op-version 30702
[2015-11-26 14:44:54.851958] I [MSGID: 106488]
[glusterd-handler.c:1472:__glusterd_handle_cli_get_volume] 0-glusterd:
Received get vol req
[2015-11-26 14:44:57.183969] W [socket.c:869:__socket_keepalive]
0-socket: failed to set TCP_USER_TIMEOUT -1000 on socket 19, Invalid
argument
[2015-11-26 14:44:57.183990] E [socket.c:2965:socket_connect]
0-management: Failed to set keep-alive: Invalid argument
After volume creation all works fine ( volumes up ), but then, after
several reboots ( yum updates ) the volumes failed due to timeouts.
Gluster description:
4 nodes with 4 volumes replica 2
oVirt 3.6 - the last
gluster 3.7.6 - the last
vdsm 4.17.999 - from git repo
oVirt - mgmt.nodes 172.16.0.0
oVirt - bricks 16.0.0.0 ( "SAN" - defined as "gluster" net)
Network works fine, no lost packets
# gluster volume status
Staging failed on 2hp1-SAN. Please check log file for details.
Staging failed on 1hp2-SAN. Please check log file for details.
Staging failed on 2hp2-SAN. Please check log file for details.
# gluster volume info
Volume Name: 1HP12-P1
Type: Replicate
Volume ID: 6991e82c-9745-4203-9b0a-df202060f455
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 1hp1-SAN:/STORAGE/p1/G
Brick2: 1hp2-SAN:/STORAGE/p1/G
Options Reconfigured:
performance.readdir-ahead: on
Volume Name: 1HP12-P3
Type: Replicate
Volume ID: 8bbdf0cb-f9b9-4733-8388-90487aa70b30
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 1hp1-SAN:/STORAGE/p3/G
Brick2: 1hp2-SAN:/STORAGE/p3/G
Options Reconfigured:
performance.readdir-ahead: on
Volume Name: 2HP12-P1
Type: Replicate
Volume ID: e2cd5559-f789-4636-b06a-683e43e0d6bb
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 2hp1-SAN:/STORAGE/p1/G
Brick2: 2hp2-SAN:/STORAGE/p1/G
Options Reconfigured:
performance.readdir-ahead: on
Volume Name: 2HP12-P3
Type: Replicate
Volume ID: b5300c68-10b3-4ebe-9f29-805d3a641702
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 2hp1-SAN:/STORAGE/p3/G
Brick2: 2hp2-SAN:/STORAGE/p3/G
Options Reconfigured:
performance.readdir-ahead: on
Regards, and thanks for any hints
Paf1
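A hedged first-pass sketch for the "Staging failed" / stale-lock symptoms above, using standard gluster 3.7 CLI and this cluster's own volume names; reading the TCP_USER_TIMEOUT warnings as a harmless side effect of the default (-1) tcp-user-timeout is an assumption, not a confirmed diagnosis.
# on each node, one at a time
systemctl restart glusterd
gluster peer status                                     # every peer should show 'Peer in Cluster (Connected)'
grep operating-version /var/lib/glusterd/glusterd.info  # must be identical on all 4 nodes
gluster volume status 1HP12-P1                          # once staging succeeds, check brick PIDs
gluster volume start 1HP12-P1 force                     # restart bricks that stayed down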
9 years, 5 months
Bug?
by Koen Vanoppen
Hi All,
One of our users on oVirt, who was always able to log in with his AD account,
now all of a sudden can't log in anymore... I already tried kicking him out
and putting him back in again, but no change... The following error appears
in the log file when he logs in:
2015-11-27 07:01:00,418 ERROR
[org.ovirt.engine.core.bll.aaa.LoginAdminUserCommand]
(ajp--127.0.0.1-8702-1) Error during CanDoActionFailure.: Class: class
org.ovirt.engine.core.extensions.mgr.ExtensionInvokeCommandFailedException
Input:
{Extkey[name=EXTENSION_INVOKE_CONTEXT;type=class
org.ovirt.engine.api.extensions.ExtMap;uuid=EXTENSION_INVOKE_CONTEXT[886d2ebb-312a-49ae-9cc3-e1f849834b7d];]={Extkey[name=EXTENSION_INTERFACE_VERSION_MAX;type=class
java.lang.Integer;uuid=EXTENSION_INTERFACE_VERSION_MAX[f4cff49f-2717-4901-8ee9-df362446e3e7];]=0,
Extkey[name=EXTENSION_LICENSE;type=class
java.lang.String;uuid=EXTENSION_LICENSE[8a61ad65-054c-4e31-9c6d-1ca4d60a4c18];]=ASL
2.0, Extkey[name=EXTENSION_NOTES;type=class
java.lang.String;uuid=EXTENSION_NOTES[2da5ad7e-185a-4584-aaff-97f66978e4ea];]=Display
name: ovirt-engine-extension-aaa-ldap-1.0.2-1.el6,
Extkey[name=EXTENSION_HOME_URL;type=class
java.lang.String;uuid=EXTENSION_HOME_URL[4ad7a2f4-f969-42d4-b399-72d192e18304];]=
http://www.ovirt.org, Extkey[name=EXTENSION_LOCALE;type=class
java.lang.String;uuid=EXTENSION_LOCALE[0780b112-0ce0-404a-b85e-8765d778bb29];]=en_US,
Extkey[name=EXTENSION_NAME;type=class
java.lang.String;uuid=EXTENSION_NAME[651381d3-f54f-4547-bf28-b0b01a103184];]=ovirt-engine-extension-aaa-ldap.authz,
Extkey[name=EXTENSION_INTERFACE_VERSION_MIN;type=class
java.lang.Integer;uuid=EXTENSION_INTERFACE_VERSION_MIN[2b84fc91-305b-497b-a1d7-d961b9d2ce0b];]=0,
Extkey[name=EXTENSION_CONFIGURATION;type=class
java.util.Properties;uuid=EXTENSION_CONFIGURATION[2d48ab72-f0a1-4312-b4ae-5068a226b0fc];]=***,
Extkey[name=EXTENSION_AUTHOR;type=class
java.lang.String;uuid=EXTENSION_AUTHOR[ef242f7a-2dad-4bc5-9aad-e07018b7fbcc];]=The
oVirt Project, Extkey[name=AAA_AUTHZ_QUERY_MAX_FILTER_SIZE;type=class
java.lang.Integer;uuid=AAA_AUTHZ_QUERY_MAX_FILTER_SIZE[2eb1f541-0f65-44a1-a6e3-014e247595f5];]=50,
Extkey[name=EXTENSION_INSTANCE_NAME;type=class
java.lang.String;uuid=EXTENSION_INSTANCE_NAME[65c67ff6-aeca-4bd5-a245-8674327f011b];]=BRU_AIR-authz,
Extkey[name=EXTENSION_BUILD_INTERFACE_VERSION;type=class
java.lang.Integer;uuid=EXTENSION_BUILD_INTERFACE_VERSION[cb479e5a-4b23-46f8-aed3-56a4747a8ab7];]=0,
Extkey[name=EXTENSION_CONFIGURATION_SENSITIVE_KEYS;type=interface
java.util.Collection;uuid=EXTENSION_CONFIGURATION_SENSITIVE_KEYS[a456efa1-73ff-4204-9f9b-ebff01e35263];]=[],
Extkey[name=EXTENSION_GLOBAL_CONTEXT;type=class
org.ovirt.engine.api.extensions.ExtMap;uuid=EXTENSION_GLOBAL_CONTEXT[9799e72f-7af6-4cf1-bf08-297bc8903676];]=*skip*,
Extkey[name=EXTENSION_VERSION;type=class
java.lang.String;uuid=EXTENSION_VERSION[fe35f6a8-8239-4bdb-ab1a-af9f779ce68c];]=1.0.2,
Extkey[name=AAA_AUTHZ_AVAILABLE_NAMESPACES;type=interface
java.util.Collection;uuid=AAA_AUTHZ_AVAILABLE_NAMESPACES[6dffa34c-955f-486a-bd35-0a272b45a711];]=[DC=brussels,DC=airport,
DC=airport], Extkey[name=EXTENSION_MANAGER_TRACE_LOG;type=interface
org.slf4j.Logger;uuid=EXTENSION_MANAGER_TRACE_LOG[863db666-3ea7-4751-9695-918a3197ad83];]=org.slf4j.impl.Slf4jLogger(org.ovirt.engine.core.extensions.mgr.ExtensionsManager.trace.ovirt-engine-extension-aaa-ldap.authz.BRU_AIR-authz),
Extkey[name=EXTENSION_PROVIDES;type=interface
java.util.Collection;uuid=EXTENSION_PROVIDES[8cf373a6-65b5-4594-b828-0e275087de91];]=[org.ovirt.engine.api.extensions.aaa.Authz],
Extkey[name=EXTENSION_CONFIGURATION_FILE;type=class
java.lang.String;uuid=EXTENSION_CONFIGURATION_FILE[4fb0ffd3-983c-4f3f-98ff-9660bd67af6a];]=/etc/ovirt-engine/extensions.d/BRU_AIR-authz.properties},
Extkey[name=AAA_AUTHZ_QUERY_FLAGS;type=class
java.lang.Integer;uuid=AAA_AUTHZ_QUERY_FLAGS[97d226e9-8d87-49a0-9a7f-af689320907b];]=3,
Extkey[name=EXTENSION_INVOKE_COMMAND;type=class
org.ovirt.engine.api.extensions.ExtUUID;uuid=EXTENSION_INVOKE_COMMAND[485778ab-bede-4f1a-b823-77b262a2f28d];]=AAA_AUTHZ_FETCH_PRINCIPAL_RECORD[5a5bf9bb-9336-4376-a823-26efe1ba26df],
Extkey[name=AAA_AUTHN_AUTH_RECORD;type=class
org.ovirt.engine.api.extensions.ExtMap;uuid=AAA_AUTHN_AUTH_RECORD[e9462168-b53b-44ac-9af5-f25e1697173e];]={Extkey[name=AAA_AUTHN_AUTH_RECORD_PRINCIPAL;type=class
java.lang.String;uuid=AAA_AUTHN_AUTH_RECORD_PRINCIPAL[c3498f07-11fe-464c-958c-8bd7490b119a];]=
users(a)company.be}}
Output:
{Extkey[name=EXTENSION_INVOKE_RESULT;type=class
java.lang.Integer;uuid=EXTENSION_INVOKE_RESULT[0909d91d-8bde-40fb-b6c0-099c772ddd4e];]=2,
Extkey[name=EXTENSION_INVOKE_MESSAGE;type=class
java.lang.String;uuid=EXTENSION_INVOKE_MESSAGE[b7b053de-dc73-4bf7-9d26-b8bdb72f5893];]=No
search for principal 'users(a)company.be'}
at
org.ovirt.engine.core.extensions.mgr.ExtensionProxy.invoke(ExtensionProxy.java:91)
[extensions-manager.jar:]
at
org.ovirt.engine.core.extensions.mgr.ExtensionProxy.invoke(ExtensionProxy.java:109)
[extensions-manager.jar:]
at
org.ovirt.engine.core.aaa.AuthzUtils.fetchPrincipalRecordImpl(AuthzUtils.java:47)
[aaa.jar:]
at
org.ovirt.engine.core.aaa.AuthzUtils.fetchPrincipalRecord(AuthzUtils.java:38)
[aaa.jar:]
at
org.ovirt.engine.core.bll.aaa.LoginBaseCommand.isUserCanBeAuthenticated(LoginBaseCommand.java:265)
[bll.jar:]
at
org.ovirt.engine.core.bll.aaa.LoginBaseCommand.canDoAction(LoginBaseCommand.java:122)
[bll.jar:]
at
org.ovirt.engine.core.bll.aaa.LoginAdminUserCommand.canDoAction(LoginAdminUserCommand.java:15)
[bll.jar:]
at
org.ovirt.engine.core.bll.CommandBase.internalCanDoAction(CommandBase.java:768)
[bll.jar:]
at
org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:347)
[bll.jar:]
at org.ovirt.engine.core.bll.Backend.login(Backend.java:575) [bll.jar:]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[rt.jar:1.7.0_65]
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
[rt.jar:1.7.0_65]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.7.0_65]
at java.lang.reflect.Method.invoke(Method.java:606) [rt.jar:1.7.0_65]
at
org.jboss.as.ee.component.ManagedReferenceMethodInterceptorFactory$ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptorFactory.java:72)
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:374)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.delegateInterception(Jsr299BindingsInterceptor.java:114)
[jboss-as-weld-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.doMethodInterception(Jsr299BindingsInterceptor.java:125)
[jboss-as-weld-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:135)
[jboss-as-weld-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:36)
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:374)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.ovirt.engine.core.bll.interceptors.ThreadLocalSessionCleanerInterceptor.injectWebContextToThreadLocal(ThreadLocalSessionCleanerInterceptor.java:13)
[bll.jar:]
at sun.reflect.GeneratedMethodAccessor78.invoke(Unknown Source) [:1.7.0_65]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.7.0_65]
at java.lang.reflect.Method.invoke(Method.java:606) [rt.jar:1.7.0_65]
at
org.jboss.as.ee.component.ManagedReferenceLifecycleMethodInterceptorFactory$ManagedReferenceLifecycleMethodInterceptor.processInvocation(ManagedReferenceLifecycleMethodInterceptorFactory.java:123)
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.invocation.WeavedInterceptor.processInvocation(WeavedInterceptor.java:53)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:36)
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:82)
[jboss-as-weld-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.invocation.InitialInterceptor.processInvocation(InitialInterceptor.java:21)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.ee.component.interceptors.ComponentDispatcherInterceptor.processInvocation(ComponentDispatcherInterceptor.java:53)
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.ejb3.component.singleton.SingletonComponentInstanceAssociationInterceptor.processInvocation(SingletonComponentInstanceAssociationInterceptor.java:53)
[jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:211)
[jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:363)
[jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:194)
[jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41)
[jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:59)
[jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.ee.component.TCCLInterceptor.processInvocation(TCCLInterceptor.java:45)
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at org.jboss.as.ee.component.ViewService$View.invoke(ViewService.java:165)
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.as.ee.component.ViewDescription$1.processInvocation(ViewDescription.java:173)
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at
org.jboss.as.ee.component.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:72)
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at
org.ovirt.engine.core.common.interfaces.BackendLocal$$$view6.login(Unknown
Source) [common.jar:]
at
org.ovirt.engine.ui.frontend.server.gwt.GenericApiGWTServiceImpl.login(GenericApiGWTServiceImpl.java:183)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[rt.jar:1.7.0_65]
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
[rt.jar:1.7.0_65]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.7.0_65]
at java.lang.reflect.Method.invoke(Method.java:606) [rt.jar:1.7.0_65]
at com.google.gwt.rpc.server.RPC.invokeAndStreamResponse(RPC.java:196)
at com.google.gwt.rpc.server.RpcServlet.processCall(RpcServlet.java:172)
at com.google.gwt.rpc.server.RpcServlet.processPost(RpcServlet.java:233)
at
com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:754)
[jboss-servlet-api_3.0_spec-1.0.0.Final.jar:1.0.0.Final]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
[jboss-servlet-api_3.0_spec-1.0.0.Final.jar:1.0.0.Final]
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:329)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
at
org.ovirt.engine.core.utils.servlet.HeaderFilter.doFilter(HeaderFilter.java:94)
[utils.jar:]
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:280)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
at
org.ovirt.engine.ui.frontend.server.gwt.GwtCachingFilter.doFilter(GwtCachingFilter.java:132)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:280)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
at
org.ovirt.engine.core.branding.BrandingFilter.doFilter(BrandingFilter.java:72)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:280)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
at
org.ovirt.engine.core.utils.servlet.LocaleFilter.doFilter(LocaleFilter.java:64)
[utils.jar:]
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:280)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
at
org.ovirt.engine.core.aaa.filters.SessionMgmtFilter.doFilter(SessionMgmtFilter.java:31)
[aaa.jar:]
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:280)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
at
org.ovirt.engine.core.aaa.filters.LoginFilter.doFilter(LoginFilter.java:74)
[aaa.jar:]
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:280)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
at
org.ovirt.engine.core.aaa.filters.NegotiationFilter.doFilter(NegotiationFilter.java:132)
[aaa.jar:]
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:280)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
at
org.ovirt.engine.core.aaa.filters.BasicAuthenticationFilter.doFilter(BasicAuthenticationFilter.java:90)
[aaa.jar:]
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:280)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
at
org.ovirt.engine.core.aaa.filters.SessionValidationFilter.doFilter(SessionValidationFilter.java:73)
[aaa.jar:]
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:280)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:275)
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
at
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:489)
at
org.jboss.as.web.security.SecurityContextAssociationValve.invoke(SecurityContextAssociationValve.java:153)
at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:155)
at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.jboss.web.rewrite.RewriteValve.invoke(RewriteValve.java:466)
at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:368)
at org.apache.coyote.ajp.AjpProcessor.process(AjpProcessor.java:505)
at
org.apache.coyote.ajp.AjpProtocol$AjpConnectionHandler.process(AjpProtocol.java:445)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:930)
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_65]
Mvg,
Koen
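For errors like "No search for principal '...'", a hedged diagnostic sketch: the tool ships with ovirt-engine-extension-aaa-ldap in this era, but the exact option names below are assumptions from its README, so check its --help output first.
ovirt-engine-extensions-tool aaa search \
    --extension-name=BRU_AIR-authz \
    --entity=principal \
    --entity-name='users@company.be'
(The '(a)' in the log above is the list archive's munging of '@'.) If the search finds nothing, the base DN and search filter of the BRU_AIR profile under /etc/ovirt-engine/aaa/ are the first things to review, since the same lookup is what fails at login.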
9 years, 5 months
Re: [ovirt-users] Error when trying to retrieve cluster, hosts via ovirt API
by Karli Sjöberg
On 17 Nov 2015 at 5:30 PM, Jean-Pierre Ribeauville <jpribeauville(a)axway.com> wrote:
> Hi,
> By running the python example got here ( : http://website-humblec.rhcloud.com/ovirt-find-hosts-clusters-vm-running-status-ids-storage-domain-details-ovirt-dc-pythonovirt-sdk-part-3)
> and modified with my connection parameters, I got the following error:
> Unexpected error: [ERROR]::oVirt API connection failure, (77, '')
> How may I get the error codes' meanings?
I don't know the meaning, but I saw that APIURL was wrong; it should be:
APIURL = "https://${ENGINE_ADDRESS}/ovirt-engine/api"
Could you correct that and try again?
/K
> Thanks for help.
> J.P. Ribeauville
> P: +33.(0).1.47.17.20.49
> Puteaux 3 Etage 5 Bureau 4
> jpribeauville(a)axway.com
> http://www.axway.com
> Please consider the environment before printing.
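For reference, a minimal connection sketch with the corrected URL (ovirtsdk 3.x; the engine address and credentials are placeholders):
from ovirtsdk.api import API
api = API(url='https://engine.example.com/ovirt-engine/api',
          username='admin@internal', password='...',
          insecure=True)  # skips CA validation; fine for a quick test only
print([c.get_name() for c in api.clusters.list()])  # clusters, as the linked example walks through
print([h.get_name() for h in api.hosts.list()])     # hosts
api.disconnect()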
9 years, 5 months
Mix local and shared storage on ovirt3.6 rc?
by Liam Curtis
Hello all,
Loving ovirt... Have reinstalled many a time trying to understand, and
thought I had this working, though now that everything is operating properly
it seems this functionality is not possible.
I am running hosted engine over glusterfs and would also like to use some
of the other bricks I have set up on the gluster host, but when I try to
create a new gluster cluster in data center, I get error message:
Failed to connect host <myhost> to Storage Pool Default.
I don't want to use just gluster shared storage. Any way to work around this?
9 years, 5 months
Adding direct lun from API doesn't populate attributes like size, vendor, etc
by Groten, Ryan
Using this python I am able to create a direct FC lun properly (and it works if the lun_id is valid). But in the GUI after the disk is added none of the fields are populated except LUN ID (Size is <1GB, Serial, Vendor, Product ID are all blank).
I see this Bugzilla [1] is very similar (for iSCSI) which says the issue was fixed in 3.5.0, but it seems to still be present in 3.5.1 for Fibre Channel Direct Luns at least.
Here's the python I used to test:
# assumes ovirtsdk 3.x and an existing authenticated connection, e.g.:
#   from ovirtsdk.api import API
#   from ovirtsdk.xml import params
#   api = API(url='https://engine.example.com/ovirt-engine/api',
#             username='admin@internal', password='...', insecure=True)
lun_id = '3600a098038303053453f463045727654'
lu = params.LogicalUnit()
lu.set_id(lun_id)
lus = list()
lus.append(lu)
storage_params = params.Storage()
storage_params.set_id(lun_id)
storage_params.set_logical_unit(lus)
storage_params.set_type('fcp')
disk_params = params.Disk()
disk_params.set_format('raw')
disk_params.set_interface('virtio')
disk_params.set_alias(disk_name)  # disk_name is defined elsewhere in the original script
disk_params.set_active(True)
disk_params.set_lun_storage(storage_params)
disk = api.disks.add(disk_params)
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1096217
Thanks,
Ryan
9 years, 5 months
Archiving huge ovirt_engine_history table
by Eric Wong
Hello oVirt gurus out there:
I notice our oVirt engine postgres db size has been growing quite fast for the past couple of months. I checked the database size and found that our ovirt_engine_history is 73GB in size.
engine=# \connect ovirt_engine_history
You are now connected to database "ovirt_engine_history" as user "postgres".
ovirt_engine_history=# SELECT pg_size_pretty( pg_database_size( current_database() ) ) As human_size
, pg_database_size( current_database() ) As raw_size;
 human_size |  raw_size
------------+-------------
 73 GB      | 78444213368
(1 row)
A brief check of the records shows entries dating back to 2014.
I want to see if there is a safe way to archive and remove some of the older records?
Thanks,
Eric
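A hedged sketch of manual archival for the question above; the table and column names are assumptions based on the 3.x DWH schema (the *_samples_history tables grow fastest), so verify them with \dt and \d first, and take a pg_dump backup before deleting anything.
-- inside the ovirt_engine_history database
BEGIN;
DELETE FROM vm_samples_history             WHERE history_datetime < now() - interval '90 days';
DELETE FROM vm_interface_samples_history   WHERE history_datetime < now() - interval '90 days';
DELETE FROM vm_disk_samples_history        WHERE history_datetime < now() - interval '90 days';
DELETE FROM host_samples_history           WHERE history_datetime < now() - interval '90 days';
DELETE FROM host_interface_samples_history WHERE history_datetime < now() - interval '90 days';
COMMIT;
VACUUM;  -- plain VACUUM frees space for reuse; VACUUM FULL returns it to the OS but locks tables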
9 years, 5 months
VM Network activity on RHEV-M UI
by Marc Seward
I'm generating network activity on a RHEV 3.5 VM using iperf. The VM acts as
an iperf client. On the client, iperf reports that data has been successfully
sent to the iperf server. The iperf server also shows that it's successfully
receiving data from the iperf client. But network is at 0% on the RHEV-M
UI. The client and server are on different private networks.
On the same VM, when I generate network activity by fetching a file from a
public network using wget, the network column correctly shows activity on
the RHEV-M UI for the VM.
Could someone help me understand why I am unable to see network activity on
the RHEV-M UI when iperf is used?
Appreciate your help. TIA.
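For reference, the traffic pattern described above as a sketch (iperf2 syntax; the address is a placeholder):
iperf -s                      # on the receiving machine, on the other private network
iperf -c 192.0.2.10 -t 300    # on the RHEV VM acting as client: 5 minutes of TCP traffic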
9 years, 5 months
Node not talking NFS to Node.
by admin
Hello
Been trying to resolve a problem with my second node, which does not want to
connect: one of my 2 nodes is not connecting, and the problem is that the node
can't talk NFS to the other node. The node used to function fine; after
performing a hardware upgrade and bringing the system back up, it seems to
have come up with this issue. We also performed a yum update, which updated
around 447 packages.
Can someone please help me with this.
Thanks
Sol
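A hedged first-pass checklist, given that the breakage followed a hardware change plus a large yum update; all standard tools, host names are placeholders.
# from the node that cannot connect, against the node exporting NFS
rpcinfo -p nfs-node.example.com    # are nfs/mountd registered with rpcbind?
showmount -e nfs-node.example.com  # is the export visible at all?
# on the exporting node
systemctl status nfs-server rpcbind
iptables -L -n | grep -E '111|2049'   # did the update reset or flush firewall rules?
mount -t nfs nfs-node.example.com:/export /mnt/test   # manual mount to take vdsm out of the picture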
9 years, 5 months
3.6: vNIC profiles aren't selectable - can't add NIC for VMs
by Wee Sritippho
Hi,
I'm trying to add virtual NICs for a new VM. However, there is no vNIC
profile in the drop-down list for 'nic1', even though I already have a
vNIC profile named 'ovirtmgmt' within the 'ovirtmgmt' network. I attached
'ovirt-cant-add-nic-for-vms.mp4' as a visual explanation.
oVirt 3.6 @ CentOS 7
Regards,
Wee
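A hedged way to confirm what the engine actually exposes is the 3.6 REST API (address and credentials are placeholders):
curl -k -u 'admin@internal:password' \
    'https://engine.example.com/ovirt-engine/api/vnicprofiles'
If the profile is listed there, the usual suspects are the 'ovirtmgmt' network not being assigned to the VM's cluster, or the user lacking the VnicProfileUser role on the profile; the drop-down only offers profiles the current user may consume.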
9 years, 5 months
Re: [ovirt-users] Ovirt 3.6 | After upgrade host can not connect to storage domains | returned by VDSM was: 480
by Sahina Bose
The error 480 (Gluster volume replica count is not supported) seems to
indicate that vdsm has not read the updated conf.
Have you made the change on all the nodes (hypervisors)?
[+users]
On 11/26/2015 07:13 AM, Punit Dambiwal wrote:
> Hi Sahina,
>
> Still the same error "480"
>
> Engine Logs :- http://fpaste.org/294595/
>
> -----------
> 2015-11-06 15:01:51,042 ERROR
> [org.ovirt.engine.core.bll.storage.BaseFsStorageHelper]
> (DefaultQuartzScheduler_Worker-78) [40bde6e1] The connection with
> details 'gluster1.3linux.com:/ssd' failed because of error code '480'
> and error message is: 480
> 2015-11-06 15:01:51,103 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (DefaultQuartzScheduler_Worker-78) [40bde6e1] Correlation ID: null,
> Call Stack: null, Custom Event ID: -1, Message: The error message for
> connection gluster.3linux.com:/sata returned by VDSM was: 480
> 2015-11-06 15:01:51,104 ERROR
> [org.ovirt.engine.core.bll.storage.BaseFsStorageHelper]
> (DefaultQuartzScheduler_Worker-78) [40bde6e1] The connection with
> details 'gluster.3linux.com:/sata' failed because of error code '480'
> and error message is: 480
> 2015-11-06 15:01:51,104 INFO
> [org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServersCommand]
> (DefaultQuartzScheduler_Worker-78) [40bde6e1] Host 'compute6' storage
> connection was failed
> ----------
>
> Thanks,
> Punit
>
> On Wed, Nov 25, 2015 at 9:30 PM, Sahina Bose <sabose(a)redhat.com
> <mailto:sabose@redhat.com>> wrote:
>
> Hi Punit,
>
> Based on comment 2 in
> https://bugzilla.redhat.com/show_bug.cgi?id=1238093, you should be
> able to edit /etc/vdsm/vdsm.conf
>
> allowed_replica_counts=2,3
>
> and restart vdsm.
>
> Could you attach error that you continue to face (post above step?)
>
>
>
> On 11/25/2015 08:02 AM, Punit Dambiwal wrote:
>> Hi Sahina,
>>
>> Yes...i have restarted the vdsm and even rebooted the whole
>> machine but not work..
>>
>> On Tue, Nov 24, 2015 at 10:19 PM, Sahina Bose <sabose(a)redhat.com
>> <mailto:sabose@redhat.com>> wrote:
>>
>> Have you restarted vdsm?
>>
>> On 11/24/2015 07:38 AM, Punit Dambiwal wrote:
>>> Hi Sahina,
>>>
>>> Either after make the changes in the vdsm.conf,still not
>>> able to connect to the replica=2 storage..
>>>
>>> Thanks,
>>> Punit
>>>
>>> On Mon, Nov 23, 2015 at 4:15 PM, Punit Dambiwal
>>> <hypunit(a)gmail.com <mailto:hypunit@gmail.com>> wrote:
>>>
>>> Hi Sahina,
>>>
>>> Thanks for the update...would you mind to let me know
>>> the correct syntax to add the line in the vdsm.conf ??
>>>
>>> Thanks,
>>> Punit
>>>
>>> On Mon, Nov 23, 2015 at 3:48 PM, Sahina Bose
>>> <sabose(a)redhat.com <mailto:sabose@redhat.com>> wrote:
>>>
>>> You can change the allowed_replica_count to 2 in
>>> vdsm.conf - though this is not recommended in
>>> production. Supported replica count is 3.
>>>
>>> thanks
>>> sahina
>>>
>>>
>>> On 11/23/2015 07:58 AM, Punit Dambiwal wrote:
>>>> Hi Sahina,
>>>>
>>>> Is there any workaround to solve this issue ?
>>>>
>>>> Thanks,
>>>> Punit
>>>>
>>>> On Wed, Nov 11, 2015 at 9:36 AM, Sahina Bose
>>>> <sabose(a)redhat.com <mailto:sabose@redhat.com>> wrote:
>>>>
>>>> Hi,
>>>>
>>>> Thanks for your email. I will be back on 16th
>>>> Nov and will get back to you then.
>>>>
>>>> thanks
>>>> sahina
>>>>
>>>>
>>>
>>>
>>>
>>
>>
>
>
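For anyone hitting the same 480, a sketch of the edit under discussion; the [gluster] section name is an assumption based on the vdsm 3.6 configuration layout referenced around bz 1238093, so match it against the comments in your own /etc/vdsm/vdsm.conf.
# /etc/vdsm/vdsm.conf, on every hypervisor in the cluster
[gluster]
allowed_replica_counts = 2,3
# then, on each host:
systemctl restart vdsmd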
--------------010405060202050306040708
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: 8bit
<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
The error 480 - Gluster volume replica count is not supported, seems
to indicate that vdsm has not read the updated conf.<br>
Have you made the change in all the nodes (hypervisor)?<br>
<br>
[+users]<br>
<br>
<br>
<br>
<div class="moz-cite-prefix">On 11/26/2015 07:13 AM, Punit Dambiwal
wrote:<br>
</div>
<blockquote
cite="mid:CAGZcrBn+8Vh4kkkrCwPyhx+XaQgtwaj344D2KXAXPF+Lio=NOw@mail.gmail.com"
type="cite">
<div dir="ltr">Hi Sahina,
<div><br>
</div>
<div>Still the same error "480"</div>
<div><br>
</div>
<div>Engine Logs :- <a moz-do-not-send="true"
href="http://fpaste.org/294595/">http://fpaste.org/294595/</a></div>
<div><br>
</div>
<div>-----------</div>
<div>
<div>2015-11-06 15:01:51,042 ERROR
[org.ovirt.engine.core.bll.storage.BaseFsStorageHelper]
(DefaultQuartzScheduler_Worker-78) [40bde6e1] The connection
with details 'gluster1.3linux.com:/ssd' failed because of
error code '480' and error message is: 480</div>
<div>2015-11-06 15:01:51,103 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-78) [40bde6e1] Correlation
ID: null, Call Stack: null, Custom Event ID: -1, Message:
The error message for connection gluster.3linux.com:/sata
returned by VDSM was: 480
2015-11-06 15:01:51,104 ERROR
[org.ovirt.engine.core.bll.storage.BaseFsStorageHelper]
(DefaultQuartzScheduler_Worker-78) [40bde6e1] The connection
with details 'gluster.3linux.com:/sata' failed because of
error code '480' and error message is: 480
2015-11-06 15:01:51,104 INFO
[org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServersCommand]
(DefaultQuartzScheduler_Worker-78) [40bde6e1] Host
'compute6' storage connection was failed
----------
Thanks,
Punit

On Wed, Nov 25, 2015 at 9:30 PM, Sahina Bose <sabose(a)redhat.com> wrote:
> Hi Punit,
>
> Based on comment 2 in https://bugzilla.redhat.com/show_bug.cgi?id=1238093,
> you should be able to edit /etc/vdsm/vdsm.conf
>
> allowed_replica_counts=2,3
>
> and restart vdsm.
>
> Could you attach the error that you continue to face (post the above step)?
>
> On 11/25/2015 08:02 AM, Punit Dambiwal wrote:
>> Hi Sahina,
>>
>> Yes...I have restarted vdsm and even rebooted the whole machine, but it
>> did not work..
>>
>> On Tue, Nov 24, 2015 at 10:19 PM, Sahina Bose <sabose(a)redhat.com> wrote:
>>> Have you restarted vdsm?
>>>
>>> On 11/24/2015 07:38 AM, Punit Dambiwal wrote:
>>>> Hi Sahina,
>>>>
>>>> Even after making the changes in vdsm.conf, I am still not able to
>>>> connect to the replica=2 storage..
>>>>
>>>> Thanks,
>>>> Punit
>>>>
>>>> On Mon, Nov 23, 2015 at 4:15 PM, Punit Dambiwal <hypunit(a)gmail.com> wrote:
>>>>> Hi Sahina,
>>>>>
>>>>> Thanks for the update...would you mind letting me know the correct
>>>>> syntax to add the line to vdsm.conf?
>>>>>
>>>>> Thanks,
>>>>> Punit
>>>>>
>>>>> On Mon, Nov 23, 2015 at 3:48 PM, Sahina Bose <sabose(a)redhat.com> wrote:
>>>>>> You can change the allowed_replica_count to 2 in vdsm.conf - though
>>>>>> this is not recommended in production. The supported replica count is 3.
>>>>>>
>>>>>> thanks
>>>>>> sahina
>>>>>>
>>>>>> On 11/23/2015 07:58 AM, Punit Dambiwal wrote:
>>>>>>> Hi Sahina,
>>>>>>>
>>>>>>> Is there any workaround to solve this issue?
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Punit
>>>>>>>
>>>>>>> On Wed, Nov 11, 2015 at 9:36 AM, Sahina Bose <sabose(a)redhat.com> wrote:
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> Thanks for your email. I will be back on 16th Nov and will get
>>>>>>>> back to you then.
>>>>>>>>
>>>>>>>> thanks
>>>>>>>> sahina
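For anyone else hitting this replica-2 limitation, here is a minimal sketch
of the workaround discussed above, assuming a stock CentOS 7 host with vdsm
4.17; the [gluster] section name is my assumption, so check where your
vdsm.conf keeps its gluster options (and edit in place if the section
already exists):

# append the override to vdsm's config ([gluster] section assumed)
cat >> /etc/vdsm/vdsm.conf <<'EOF'
[gluster]
allowed_replica_counts = 2,3
EOF
systemctl restart vdsmd    # vdsm only reads the file at startup

After the restart, retry attaching the replica-2 storage domain from the
engine.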
9 years, 5 months
Re: [ovirt-users] oVirt 4.0 wishlist: oVirt Self Hosted Engine Setup
by Giuseppe Ragusa
On Wed, Nov 25, 2015, at 12:13, Simone Tiraboschi wrote:
>
>
> On Mon, Nov 23, 2015 at 10:10 PM, Giuseppe Ragusa <giuseppe.ragusa(a)hotmail.com> wrote:
>> Hi all,
I go on with my wishlist, derived from both solitary mumblings and community talks at the first Italian oVirt Meetup.
>>
>> I offer to help in coding (work/family schedules permitting) but keep in mind that I'm a sysadmin with mainly C and bash-scripting skills (but hoping to improve my less-than-newbie Python too...)
>>
>> I've sent separate wishlist messages for oVirt Node, oVirt Engine and VDSM.
>>
>> oVirt Self Hosted Engine Setup:
>>
>> *) allow virtual hardware customizations for locally-created Engine vm, specifically: allow to add an arbitrary number of NICs (asking for MAC address and local bridge to connect to) and maybe also an arbitrary number of disks (asking for size) as these seem to be the only/most_useful items missing; maybe the prebuilt appliance image too may be inspected by setup to detect a customized one and connect any further NICs to custom local bridges (which the user should be asked for)
>
> For 3.6.1 (it should be in 3.6.0 but it's bugged) you will be able to edit some parameters of the engine VM from the engine (then of course you need to reboot to make them effective).
> I'm not sure if it's worth making the setup more complex or if it's better to keep it simple (single nic, single disk) and then let you edit the VM only from the engine as for other VMs.
Thanks Simone for your reply!
You are right: I was bothering you with this setup wishlist item *mainly* because further Engine vm modification was impossible/awkward/difficult before 3.6.1
Nonetheless I have seen many cases in which at least a second NIC would be absolutely needed to complete the Engine installation: it is a well known best practice to keep the management network (maybe conflated with the IPMI network in smaller setups) completely isolated from other services and to allow only limited access to/from it, and that network would be the ovirtmgmt-bridge-connected one (the only one available to the Engine, as of now). Now think of a kickstart-based Engine OS installation/update from a local repository/mirror which would be reachable on a different network only (further access to the User/Administration Web portal could have similar needs, but could be more easily covered by successive Engine vm modifications).
The "additional disks" part was (maybe "artificially") added by me speculatively, but I know of at least one enterprise customer that by policy mandates separate disks for OS and data (mainly on FC LUNs, to be honest, but FC is supported by hosted Engine now, isn't it?)
I absolutely don't know how the setup code is structured (and the recent logical "duplication" between mixins.py and vm.conf.in scares me a bit, actually ;), but I naively hope that changing the two single hardcoded nic/hdd questions into two loops of minimum 1 iteration (with a corresponding generalization of related otopi parameters) should not increase the complexity too much (and could be an excuse to rationalize/unify it further).
Obviously I could stand instantly corrected by anyone who really knows the code, but in exchange I would gain for free some interesting pointers/insights into the setup code/structure ;)
>> Regards,
>> Giuseppe
9 years, 5 months
Import OVA
by Massimo Mad
Hi,
How can I import a virtual appliance (OVA)? What should I write in the
"path" field?
I tried with 10.10.10.73:/export/nfsshare but I get this error:
VDSM ovirtsrv102 command failed: ['Permission denied, please try again.',
'Permission denied, please try again.', 'Permission denied
(publickey,gssapi-keyex,gssapi-with-mic,password).', '/usr/bin/tar:
10.10.10.73\\:/export/nfsshare: Cannot open: Input/output error',
'/usr/bin/tar: Error is not recoverable: exiting now']
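The ssh/tar errors suggest the "path" field is being used as a path readable
on the host itself, not as an NFS spec. A hedged sketch of one way to test
that theory (the mount point and file name below are hypothetical): mount the
export on the host first and point the field at the local copy:

# mount the export where the host can read it (mount point is a placeholder)
mkdir -p /mnt/ova-import
mount -t nfs 10.10.10.73:/export/nfsshare /mnt/ova-import
ls -l /mnt/ova-import     # check the .ova file is actually visible
# then try e.g. /mnt/ova-import/appliance.ova in the "path" field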
I also tried to add memory to a running vm and I got this error:
Failed to hot set memory to VM CoverterV2V-REL6. Underlying error message:
unsupported configuration: unknown device type 'memory'
VDSM ovirtsrv104 command failed: unsupported configuration: unknown device
type 'memory'
Regards
Massimo
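An aside on the second error: "unknown device type 'memory'" is libvirt
rejecting the hot-plugged memory device, and memory device hot-plug only
appeared in libvirt around version 1.2.14, so the libvirt 1.2.8 that ships
on CentOS 7.1 (seen elsewhere in this archive) would be expected to fail
this way; treat that version threshold as my assumption and verify on the
host:

rpm -q libvirt-daemon qemu-kvm   # memory hot-plug needs newer versions than CentOS 7.1 shipped
virsh version                    # shows the library and hypervisor versions actually in use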
9 years, 5 months
[ANN] oVirt 3.6.1 First Release Candidate is now available for testing
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability
of the First Release Candidate of oVirt 3.6.1 for testing, as of November
25th, 2015.
This release is available now for Fedora 22,
Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and
Red Hat Enterprise Linux >= 7.1, CentOS Linux >= 7.1 (or similar).
This release supports Hypervisor Hosts running
Red Hat Enterprise Linux >= 7.1, CentOS Linux >= 7.1 (or similar) and
Fedora 22.
Highly experimental support for Debian 8.1 Jessie has been added too.
This release of oVirt 3.6.1 includes numerous bug fixes.
See the release notes [1] for an initial list of the new features and bugs
fixed.
Please refer to release notes [1] for Installation / Upgrade instructions.
A new oVirt Live ISO is also available[2].
Please note that mirrors[3] may usually need one day before being
synchronized.
Please refer to the release notes for known issues in this release.
[1] http://www.ovirt.org/OVirt_3.6_Release_Notes
[2] http://resources.ovirt.org/pub/ovirt-3.6-pre/iso/
[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors
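For the impatient, a minimal upgrade sketch on an existing engine (the
release rpm URL follows the usual oVirt repository layout, but verify it
against the release notes before running this anywhere important):

yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release36.rpm
yum update "ovirt-engine-setup*"   # pull in the 3.6.1 RC setup packages
engine-setup                       # performs the actual engine upgrade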
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
9 years, 5 months
oVirt Node roadmap
by Fabian Deutsch
Hey,
in the last few months the Node team spent a lot of efforts in
stabilizing Node by closing tons of bugs, rebasing Node onto CentOS
7.2, and in addition adding features like Hosted-Engine - to get Node
in shape for the recently released oVirt 3.6.0.
But we were also seeing how Node is showing its age. It becomes more
and more challenging to solve bugs and implement features in Node's
current architecture.
To address these problems, and let Node adjust better to new changes,
a few months ago we started to look at how we can change Node, to make
it easier to develop, test, and use.
This comparison [1] shows a brief summary of our investigations. We
especially looked at Atomic and containers [2].
At the bottom line both of them provided us an architecture which
would help us to achieve something like 70% of what we need. But
during early trials we quickly ran into issues which we experience in
similar ways with today's Node.
Finally we settled on an approach - the idea has been around right from
the beginning - which aligns very well with existing technologies
which we already use in the oVirt and Fedora/CentOS scope.
The new Node will be using anaconda for installation, LVM for upgrades
and rollbacks, and Cockpit [3] for administration. The future design
is taking care that packages don't need to take special care to work
on Node - which was a major obstacle in the past. Node will rather
behave (mostly) like a regular host - but with the advantages of an
easy & ready to use image, image based delivery and a robust rollback.
The current design principles and links to some additional resources
are in the wiki [4].
Stay tuned, we are just getting started.
On behalf of the Node Team
fabian
--
[1] http://www.ovirt.org/Node/Specs#Comparison:_Possible_Implementations
[2] http://www.projectatomic.io/ ~ http://docker.com/
[3] http://cockpit-project.org/
[4] http://www.ovirt.org/Node/4.0
9 years, 5 months
Re: [ovirt-users] 3.6 master hosted engine setup error
by Gianluca Cecchi
On Tue, Nov 17, 2015 at 6:29 PM, Simone Tiraboschi wrote:
>
>
> On Tue, Nov 17, 2015 at 5:04 PM, Gianluca Cecchi wrote:
>
>>
>>
>> Yes, I know. No problem for me.
>> I'm using master because I want to test Italian layout so that I can
>> change current translation where necessary.
>> But do I have to start from scratch or is there any command to clean the
>> current half-completed setup?
>>
>
> No, unfortunately it is still not available for hosted-engine-setup
>
>
>>
>> Gianluca
>>
>>
>
Hello,
retrying from master, as the problem solution should have been merged.
But it seems the package for the appliance is not available now...
[root@ovc71 ~]# yum install ovirt-engine-appliance
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mi.mirror.garr.it
* extras: mi.mirror.garr.it
* ovirt-master-epel: fr2.rpmfind.net
* ovirt-master-snapshot: ftp.plusline.net
* ovirt-master-snapshot-static: ftp.plusline.net
* updates: mi.mirror.garr.it
No package ovirt-engine-appliance available.
Error: Nothing to do
Is there any alternative, like manually downloading the ova file, or another strategy?
Gianluca
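A hedged workaround sketch while the package is missing from the snapshot
repo: the appliance OVA is normally published somewhere under
resources.ovirt.org, so it can be fetched by hand and handed to the setup
when it asks for an appliance (the URL below is a placeholder; browse the
repository tree for the real file):

# set this to the real OVA location once found (placeholder here)
APPLIANCE_URL="http://resources.ovirt.org/CHANGE_ME/ovirt-engine-appliance.ova"
curl -o /var/tmp/ovirt-engine-appliance.ova "$APPLIANCE_URL"
hosted-engine --deploy
# at the appliance prompt, answer with /var/tmp/ovirt-engine-appliance.ova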
9 years, 5 months
[SOLVED] Re: Trying to make ovirt-hosted-engine-setup create a customized Engine-vm on 3.6 HC HE
by Giuseppe Ragusa
On Mon, Oct 26, 2015, at 09:48, Simone Tiraboschi wrote:
>
>
> On Mon, Oct 26, 2015 at 12:14 AM, Giuseppe Ragusa <giuseppe.ragusa(a)hotmail.com> wrote:
>> Hi all,
>> I'm experiencing some difficulties using oVirt 3.6 latest snapshot.
>>
>> I'm trying to trick the self-hosted-engine setup to create a custom engine vm with 3 nics (with fixed MACs/UUIDs).
>>
>> The GlusterFS volume (3.7.5 hyperconverged, replica 3, for the engine vm) and the network bridges (ovirtmgmt and other two bridges, called nfs and lan, for the engine vm) have been preconfigured on the initial fully-patched CentOS 7.1 host (plus two other identical hosts which are waiting to be added).
>>
>> I'm stuck at a point with the engine vm successfully starting but with only one nic present (connected to the ovirtmgmt bridge).
>>
>> I'm trying to obtain the modified engine vm by means of a trick which used to work in a previous (aborted because of lacking GlusterFS-by-libgfapi support) oVirt 3.5 test setup (about a year ago, maybe more): I'm substituting the standard /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in with the following:
>>
>> vmId=@VM_UUID@
>> memSize=@MEM_SIZE@
>> display=@CONSOLE_TYPE@
>> devices={index:2,iface:ide,address:{ controller:0, target:0,unit:0, bus:1, type:drive},specParams:{},readonly:true,deviceId:@CDROM_UUID@,path:@CDROM@,device:cdrom,shared:false,type:disk@BOOT_CDROM@}
>> devices={index:0,iface:virtio,format:raw,poolID:@SP_UUID@,volumeID:@VOL_UUID@,imageID:@IMG_UUID@,specParams:{},readonly:false,domainID:@SD_UUID@,optional:false,deviceId:@IMG_UUID@,address:{bus:0x00, slot:0x06, domain:0x0000, type:pci, function:0x0},device:disk,shared:exclusive,propagateErrors:off,type:disk@BOOT_DISK@}
>> devices={device:scsi,model:virtio-scsi,type:controller}
>> devices={index:4,nicModel:pv,macAddr:02:50:56:3f:c4:b0,linkActive:true,network:@BRIDGE@,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:@NIC_UUID@,address:{bus:0x00, slot:0x03, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
>> devices={index:8,nicModel:pv,macAddr:02:50:56:3f:c4:a0,linkActive:true,network:lan,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:6c467650-1837-47ea-89bc-1113f4bfefee,address:{bus:0x00, slot:0x09, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
>> devices={index:16,nicModel:pv,macAddr:02:50:56:3f:c4:c0,linkActive:true,network:nfs,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:4d8e0705-8cb4-45b7-b960-7f98bb59858d,address:{bus:0x00, slot:0x0c, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
>> devices={device:console,specParams:{},type:console,deviceId:@CONSOLE_UUID@,alias:console0}
>> vmName=@NAME@
>> spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
>> smp=@VCPUS@
>> cpuType=@CPU_TYPE@
>> emulatedMachine=@EMULATED_MACHINE@
>>
>> but unfortunately the vm gets created like this (output from "ps"; note that I'm attaching a CentOS7.1 Netinstall ISO with an embedded kickstart: the installation should proceed by HTTP on the lan network but obviously fails):
>>
>> /usr/libexec/qemu-kvm -name HostedEngine -S -machine
>> pc-i440fx-rhel7.1.0,accel=kvm,usb=off -cpu Westmere -m 4096 -realtime mlock=off
>> -smp 2,sockets=2,cores=1,threads=1 -uuid f49da721-8aa6-4422-8b91-e91a0e38aa4a -s
>> mbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-1.1503.el7.centos.2
>> .8,serial=2a1855a9-18fb-4d7a-b8b8-6fc898a8e827,uuid=f49da721-8aa6-4422-8b91-e91a
>> 0e38aa4a -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/li
>> b/libvirt/qemu/HostedEngine.monitor,server,nowait -mon chardev=charmonitor,id=mo
>> nitor,mode=control -rtc base=2015-10-25T11:22:22,driftfix=slew -global kvm-pit.l
>> ost_tick_policy=discard -no-hpet -no-reboot -boot strict=on -device piix3-usb-uh
>> ci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr
>> =0x4 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=
>> /var/tmp/engine.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 -drive file=/var/run/vdsm/storage/be4434bf-a5fd-44d7-8011-d5e4ac9cf523/b3abc1cb-8a78-4b56-a9b0-e5f41fea0fdc/8d075a8d-730a-4925-8779-e0ca2b3dbcf4,if=none,id=drive-virtio-disk0,format=raw,serial=b3abc1cb-8a78-4b56-a9b0-e5f41fea0fdc,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0 -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=02:50:56:3f:c4:b0,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/f49da721-8aa6-4422-8b91-e91a0e38aa4a.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/f49da721-8aa6-4422-8b91-e91a0e38aa4a.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev socket,id=charchannel2,path=/var/lib/libvirt/qemu/channels/f49da721-8aa6-4422-8b91-e91a0e38aa4a.org.ovirt.hosted-engine-setup.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=org.ovirt.hosted-engine-setup.0 -chardev socket,id=charconsole0,path=/var/run/ovirt-vmconsole-console/f49da721-8aa6-4422-8b91-e91a0e38aa4a.sock,server,nowait -device virtconsole,chardev=charconsole0,id=console0 -vnc 0:0,password -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -msg timestamp=on
>>
>> There seem to be no errors in the logs.
>>
>> I've tried reading some (limited) Python setup code but I've not found any obvious reason why the trick should not work anymore.
>>
>> I know that 3.6 has different network configuration/management and this could be the hot point.
>>
>> Does anyone have any further suggestion or clue (code/logs to read)?
>
> The VM creation path is now a bit different because we now use just the vdscli library instead of vdsClient.
> Please take a a look at mixins.py
Many thanks for your very valuable hint:
I've restored the original /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in and I've managed to obtain the 3-nics-customized vm by modifying /usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/mixins.py like this ("diff -Naur" output):
************************************************************************************
--- mixins.py.orig 2015-10-20 16:57:40.000000000 +0200
+++ mixins.py 2015-10-26 22:22:58.351223922 +0100
@@ -25,6 +25,7 @@
import random
import string
import time
+import uuid
from ovirt_hosted_engine_setup import constants as ohostedcons
@@ -247,6 +248,44 @@
]['@BOOT_PXE@'] == ',bootOrder:1':
nic['bootOrder'] = '1'
conf['devices'].append(nic)
+ nic2 = {
+ 'nicModel': 'pv',
+ 'macAddr': '02:50:56:3f:c4:a0',
+ 'linkActive': 'true',
+ 'network': 'lan',
+ 'filter': 'vdsm-no-mac-spoofing',
+ 'specParams': {},
+ 'deviceId': str(uuid.uuid4()),
+ 'address': {
+ 'bus': '0x00',
+ 'slot': '0x09',
+ 'domain': '0x0000',
+ 'type': 'pci',
+ 'function': '0x0'
+ },
+ 'device': 'bridge',
+ 'type': 'interface',
+ }
+ conf['devices'].append(nic2)
+ nic3 = {
+ 'nicModel': 'pv',
+ 'macAddr': '02:50:56:3f:c4:c0',
+ 'linkActive': 'true',
+ 'network': 'nfs',
+ 'filter': 'vdsm-no-mac-spoofing',
+ 'specParams': {},
+ 'deviceId': str(uuid.uuid4()),
+ 'address': {
+ 'bus': '0x00',
+ 'slot': '0x0c',
+ 'domain': '0x0000',
+ 'type': 'pci',
+ 'function': '0x0'
+ },
+ 'device': 'bridge',
+ 'type': 'interface',
+ }
+ conf['devices'].append(nic3)
cli = self.environment[ohostedcons.VDSMEnv.VDS_CLI]
status = cli.create(conf)
************************************************************************************
Obviously this is a horrible ad-hoc hack that I'm not able to generalize/clean-up now: doing so would involve (apart from a deeper understanding of the whole setup code/workflow) some well-thought-out design decisions. And, given the effective deprecation of the aforementioned easy-to-modify vm.conf.in template, substituted by hardwired Python program logic, it seems that such functionality is not very high on the development priority list atm ;)
Many thanks again!
Kind regards,
Giuseppe
>> Many thanks in advance.
>>
>> Kind regards,
>> Giuseppe
>>
>> PS: please keep also my address in replying because I'm experiencing some problems between Hotmail and oVirt-mailing-list
>>
9 years, 5 months
Problem with Adding Pre-Configured Domain
by Roger Meier
Hi All,
I don't know if this is a Bug or an error on my side.
At the moment, I have an oVirt 3.6 installation with two Nodes and two
Storage Servers, which are configured as master/slave (a solaris zfs
snapshot is copied from master to slave every 2 hours).
Now I am trying to test some failure use cases, like the master storage
not being available anymore, or one of the virtual machines having to be
restored from the snapshot.
Because the data on the slave is a snapshot copy, all data which is on
the Data Domain NFS Storage is also on the slave NFS Storage.
I tried to add it over the WebUI via the "Import Domain" option (Import
Pre-Configured Domain) with both Domain Functions (Data and Export), but
nothing happens, except some errors in the vdsm.log logfile.
Something like this
Thread-253746::ERROR::2015-11-24
11:44:41,758::hsm::2549::Storage.HSM::(disconnectStorageServer) Could
not disconnect from storageServer
Traceback (most recent call last):
File "/usr/share/vdsm/storage/hsm.py", line 2545, in
disconnectStorageServer
conObj.disconnect()
File "/usr/share/vdsm/storage/storageServer.py", line 425, in disconnect
return self._mountCon.disconnect()
File "/usr/share/vdsm/storage/storageServer.py", line 254, in disconnect
self._mount.umount(True, True)
File "/usr/share/vdsm/storage/mount.py", line 256, in umount
return self._runcmd(cmd, timeout)
File "/usr/share/vdsm/storage/mount.py", line 241, in _runcmd
raise MountError(rc, ";".join((out, err)))
MountError: (32, ';umount:
/rhev/data-center/mnt/192.168.1.13:_oi-srv2-sasData1_oi-srv1-sasData1_nfsshare1:
mountpoint not found\n')
I checked with nfs-check.py if all permissions are ok; the tool says this:
[root@lin-ovirt1 contrib]# python ./nfs-check.py
192.168.1.13:/oi-srv2-sasData1/oi-srv1-sasData1/nfsshare1
Current hostname: lin-ovirt1 - IP addr 192.168.1.14
Trying to /bin/mount -t nfs
192.168.1.13:/oi-srv2-sasData1/oi-srv1-sasData1/nfsshare1...
Executing NFS tests..
Removing vdsmTest file..
Status of tests [OK]
Disconnecting from NFS Server..
Done!
Greetings
Roger Meier
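When an import of a pre-configured domain fails like this, it helps to rule
out the storage side by mounting the export by hand and checking that the
domain metadata is really there; the mount options below are an assumption,
roughly mirroring what vdsm uses:

mkdir -p /tmp/sdcheck
mount -t nfs -o vers=3,soft 192.168.1.13:/oi-srv2-sasData1/oi-srv1-sasData1/nfsshare1 /tmp/sdcheck
ls /tmp/sdcheck                      # expect a storage-domain UUID directory
cat /tmp/sdcheck/*/dom_md/metadata   # the metadata vdsm looks for on import
umount /tmp/sdcheck

Note also that a block-for-block replica keeps the original storage domain
UUID, and (as far as I understand it) the engine refuses to import a domain
whose UUID is already attached to the data center, which could explain a
silent failure here.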
9 years, 5 months
Move cluster between datacenters
by Dael Maselli
Hello,
I have an environment with two data centers with some clusters in each
one. Each cluster has one or more dedicated FC storage domains.
We have some management problems, because adding storage to one cluster
means adding it to all nodes of all clusters in the same data center; I
find this a little overkill and I think it should be managed the way
networks are.
Anyway, to resolve our problems we would like to create a new data center
and move a cluster from the old data center to the new one, obviously with
its own storage domain.
Is there a way to do this without export/import all vms?
Thank you.
Regards,
Dael Maselli.
--
___________________________________________________________________
Dael Maselli --- INFN-LNF Computing Service -- +39.06.9403.2214
___________________________________________________________________
* http://www.lnf.infn.it/~dmaselli/ *
___________________________________________________________________
Democracy is two wolves and a lamb voting on what to have for lunch
___________________________________________________________________
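A possible answer, sketched under assumptions: since oVirt 3.5 a data domain
can usually be moved between data centers without exporting the VMs, by
deactivating and detaching it from the old data center, attaching it to the
new one, and then registering the machines from the domain's "VM Import"
tab (the hosts themselves can be moved by putting them in maintenance and
editing their cluster). A hedged REST sketch of the domain part, with IDs,
hostname and credentials as placeholders; the same steps exist in the web
UI:

# deactivate, then detach from the old data center (all IDs are placeholders)
curl -k -u admin@internal:PASSWORD -X POST -H 'Content-Type: application/xml' \
  -d '<action/>' \
  https://engine.example.com/ovirt-engine/api/datacenters/OLD_DC/storagedomains/SD/deactivate
curl -k -u admin@internal:PASSWORD -X DELETE \
  https://engine.example.com/ovirt-engine/api/datacenters/OLD_DC/storagedomains/SD
# attach the same domain to the new data center
curl -k -u admin@internal:PASSWORD -X POST -H 'Content-Type: application/xml' \
  -d '<storage_domain id="SD"/>' \
  https://engine.example.com/ovirt-engine/api/datacenters/NEW_DC/storagedomains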
9 years, 5 months
oVirt 4.0 wishlist: oVirt Node
by Giuseppe Ragusa
Hi all,
I'm trying to organize my wishes/hopes for oVirt 4.0
These items derive from both solitary mumblings and community talks at the first Italian oVirt Meetup.
I offer to help in coding (work/family schedules permitting) but keep in mind that I'm a sysadmin with mainly C and bash-scripting skills (but hoping to improve my less-than-newbie Python too...)
Since I have related interests/wishes also for Engine and VDSM, I'll send a separate message for each one.
Let's start from the oVirt Node:
*) oVirt Node complete convergence with Atomic Host: start from Project Atomic tools and define an ISO-installable Atomic Host variant [1] to include gluster, qemu, libvirt, vdsm and all the packages/configurations that an oVirt Node would need (remove unneeded parts)
*) add Samba, CTDB and Ganesha to oVirt Node to allow it to be used as a full storage appliance (specifically, I'm thinking of the GlusterFS integration); there are related wishlist items on configuring/managing Samba/CTDB/Ganesha on the Engine and on VDSM
*) add oVirt Node ability to host containers (independent of the above mentioned convergence with Atomic); note that Atomic Host has Docker/Kubernetes, but libvirt already has a LXC driver [2] and the Engine could benefit from some added smartness in managing groups of guests etc. in the vm case too; there are related wishlist items on configuring/managing containers on the Engine and on VDSM
*) add Open vSwitch direct support (not Neutron-mediated); there are related wishlist items on configuring/managing Open vSwitch on the Engine and on VDSM
*) add DRBD9 as a supported Storage Domain type, maybe for HC and HE too; there are related wishlist items on configuring/managing DRBD9 on the Engine and on VDSM
*) add oVirt Node ability to fully perform as a stand-alone hypervisor: I hear that Cockpit is coming, so why not Kimchi too? ;)
Regards,
Giuseppe
[1] product.json, I suppose, but I'm starting to learn Atomic now...
[2] barring a pending deprecation in RHEL7, but I suppose that a community/Centos-Virt-SIG libvirt build could restore it and maybe RedHat too could support it on a special libvirt build for RHEV (just to remove those support costs from the base RHEL OS offering)
9 years, 5 months
[OT] Gmail is marking the list as spam.
by Juan Pablo Lorier
Hi,
Could someone take a look at Gmail's spam policy to see what is not being
done? It's been months that I have had to fish the mails out of the spam
folder, no matter how much I try to mark them as legitimate.
Regards
On 10/22/15 at 12:24 p.m., users-request(a)ovirt.org wrote:
> Today's Topics:
>
> 1. [ANN] oVirt 3.6.0 Third Release Candidate is now available
> for testing (Sandro Bonazzola)
> 2. Re: Testing self hosted engine in 3.6: hostname not resolved
> error (Gianluca Cecchi)
> 3. Re: 3.6 upgrade issue (Yaniv Dary)
> 4. Re: How to change the hosted engine VM RAM size after
> deploying (Simone Tiraboschi)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Thu, 22 Oct 2015 16:08:25 +0200
> From: Sandro Bonazzola <sbonazzo(a)redhat.com>
> To: announce(a)ovirt.org, users <users(a)ovirt.org>, devel
> <devel(a)ovirt.org>
> Subject: [ovirt-users] [ANN] oVirt 3.6.0 Third Release Candidate is
> now available for testing
> Message-ID:
> <CAPQRNTm4GyWo0zo-L=92ScLJvWQFEPKaF5UfmdJ4SroKCCC3pQ(a)mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> The oVirt Project is pleased to announce the availability
> of the Third Release Candidate of oVirt 3.6 for testing, as of October
> 22nd, 2015.
>
> This release is available now for Fedora 22,
> Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and
> Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar).
>
> This release supports Hypervisor Hosts running
> Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar),
> Fedora 21 and Fedora 22.
> Highly experimental support for Debian 8.1 Jessie has been added too.
>
> This release of oVirt 3.6.0 includes numerous bug fixes.
> See the release notes [1] for an initial list of the new features and bugs
> fixed.
>
> Please refer to release notes [1] for Installation / Upgrade instructions.
> New oVirt Node ISO and oVirt Live ISO will be available soon as well[2].
>
> Please note that mirrors[3] may usually need one day before being
> synchronized.
>
> Please refer to the release notes for known issues in this release.
>
> [1] http://www.ovirt.org/OVirt_3.6_Release_Notes
> [2] http://plain.resources.ovirt.org/pub/ovirt-3.6-pre/iso/
> [3] http://www.ovirt.org/Repository_mirrors#Current_mirrors
>
>
9 years, 5 months
Could not associate brick
by Bello Florent
Hi,
I activated the gluster service on the Cluster, and then my engine.log
showed: Could not add brick xxx to volume xxxx, server uuid xxx not found
in cluster.
I found in the mailing list that I have to put all my hosts in
maintenance mode and then activate them again.
Now engine.log shows:
2015-11-09
11:15:53,563 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-64) [] START,
GlusterVolumesListVDSCommand(HostName = ovirt02,
GlusterVolumesListVDSParameters:{runAsync='true',
hostId='0d1284e1-fa18-4309-b196-df9a6a337c44'}), log id:
6ddd5b9d
2015-11-09 11:15:53,711 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-64) [] Could not associate brick
'ovirt01.mafia.kru:/gfs1/engine/brick' of volume
'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no
gluster network found in cluster
'00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,714 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-64) [] Could not associate brick
'ovirt02.mafia.kru:/gfs1/engine/brick' of volume
'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no
gluster network found in cluster
'00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,716 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-64) [] Could not associate brick
'ovirt03.mafia.kru:/gfs1/engine/brick' of volume
'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no
gluster network found in cluster
'00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,719 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-64) [] Could not associate brick
'ovirt01.mafia.kru:/gfs2/engine/brick' of volume
'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no
gluster network found in cluster
'00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,722 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-64) [] Could not associate brick
'ovirt02.mafia.kru:/gfs2/engine/brick' of volume
'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no
gluster network found in cluster
'00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,725 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-64) [] Could not associate brick
'ovirt03.mafia.kru:/gfs2/engine/brick' of volume
'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no
gluster network found in cluster
'00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,732 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-64) [] FINISH,
GlusterVolumesListVDSCommand, return:
{e9a24161-3e72-47ea-b593-57f3302e7c4e=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7eafe244,
e5df896f-b818-4d70-ac86-ad9270f9d5f2=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@cb7d0349},
log id: 6ddd5b9d
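As an aside for anyone hitting the same messages: the "Could not associate
brick ... no gluster network found in cluster" lines are only warnings,
meaning no logical network in that cluster carries the gluster role (assign
the role to one of the cluster's logical networks, via the cluster's network
management dialog if memory serves, to make them go away), while the original
"server uuid not found" error usually means the peer UUIDs known to gluster
do not match what the engine recorded for the hosts. A quick way to compare,
using the standard gluster CLI on any host:

gluster peer status          # UUIDs the gluster cluster knows about
gluster volume info engine   # brick hostnames must resolve to those peers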
Here is my vdsm.log on host 1:
Thread-4247::DEBUG::2015-11-09
11:17:47,621::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
Return 'Host.getVMFullList' in bridge with [{u'status': 'Up',
u'nicModel': u'rtl8139,pv', u'kvmEnable': u'true', u'smp': u'1',
u'emulatedMachine': u'pc', u'afterMigrationStatus': u'', 'pid': '4450',
u'vmId': u'3930e6e3-5b41-45c3-bb7c-2af8563cefab', u'devices':
[{u'alias': u'console0', u'specParams': {}, 'deviceType': u'console',
u'deviceId': u'ab824f92-f636-4c0f-96ad-b4f3d1c352be', u'device':
u'console', u'type': u'console'}, {u'target': 1572864, u'alias':
u'balloon0', u'specParams': {u'model': u'none'}, 'deviceType':
u'balloon', u'device': u'memballoon', u'type': u'balloon'}, {u'device':
u'unix', u'alias': u'channel0', 'deviceType': u'channel', u'type':
u'channel', u'address': {u'bus': u'0', u'controller': u'0', u'type':
u'virtio-serial', u'port': u'1'}}, {u'device': u'unix', u'alias':
u'channel1', 'deviceType': u'channel', u'type': u'channel', u'address':
{u'bus': u'0', u'controller': u'0', u'type': u'virtio-serial', u'port':
u'2'}}, {u'device': u'unix', u'alias': u'channel2', 'deviceType':
u'channel', u'type': u'channel', u'address': {u'bus': u'0',
u'controller': u'0', u'type': u'virtio-serial', u'port': u'3'}},
{u'alias': u'scsi0', 'deviceType': u'controller', u'address': {u'slot':
u'0x04', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci',
u'function': u'0x0'}, u'device': u'scsi', u'model': u'virtio-scsi',
u'type': u'controller'}, {u'device': u'usb', u'alias': u'usb0',
'deviceType': u'controller', u'type': u'controller', u'address':
{u'slot': u'0x01', u'bus': u'0x00', u'domain': u'0x0000', u'type':
u'pci', u'function': u'0x2'}}, {u'device': u'ide', u'alias': u'ide0',
'deviceType': u'controller', u'type': u'controller', u'address':
{u'slot': u'0x01', u'bus': u'0x00', u'domain': u'0x0000', u'type':
u'pci', u'function': u'0x1'}}, {u'device': u'virtio-serial', u'alias':
u'virtio-serial0', 'deviceType': u'controller', u'type': u'controller',
u'address': {u'slot': u'0x05', u'bus': u'0x00', u'domain': u'0x0000',
u'type': u'pci', u'function': u'0x0'}}, {u'device': u'', u'alias':
u'video0', 'deviceType': u'video', u'type': u'video', u'address':
{u'slot': u'0x02', u'bus': u'0x00', u'domain': u'0x0000', u'type':
u'pci', u'function': u'0x0'}}, {u'device': u'vnc', u'specParams':
{u'spiceSecureChannels':
u'smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir',
u'displayIp': '0'}, 'deviceType': u'graphics', u'type': u'graphics',
u'port': u'5900'}, {u'nicModel': u'pv', u'macAddr':
u'00:16:3e:43:96:7b', u'linkActive': True, u'network': u'ovirtmgmt',
u'specParams': {}, u'filter': u'vdsm-no-mac-spoofing', u'alias':
u'net0', 'deviceType': u'interface', u'deviceId':
u'c2913ff3-fea3-4b17-a4b3-83398d920cd3', u'address': {u'slot': u'0x03',
u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'function':
u'0x0'}, u'device': u'bridge', u'type': u'interface', u'name':
u'vnet0'}, {u'index': u'2', u'iface': u'ide', u'name': u'hdc', u'alias':
u'ide0-1-0', u'specParams': {}, u'readonly': 'True', 'deviceType':
u'disk', u'deviceId': u'13f4e285-c161-46f5-9ec3-ba1f92f374d9',
u'address': {u'bus': u'1', u'controller': u'0', u'type': u'drive',
u'target': u'0', u'unit': u'0'}, u'device': u'cdrom', u'shared':
u'false', u'path': '', u'type': u'disk'}, {u'poolID':
u'00000000-0000-0000-0000-000000000000', u'volumeInfo': {'domainID':
u'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3',
'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3.lease',
'imageID': u'56461302-0710-4df0-964d-5e7b1ff07828', 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3'},
u'index': u'0', u'iface': u'virtio', u'apparentsize': '26843545600',
u'specParams': {}, u'imageID': u'56461302-0710-4df0-964d-5e7b1ff07828',
u'readonly': 'False', 'deviceType': u'disk', u'shared': u'exclusive',
u'truesize': '3515854848', u'type': u'disk', u'domainID':
u'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', u'reqsize': u'0', u'format':
u'raw', u'deviceId': u'56461302-0710-4df0-964d-5e7b1ff07828',
u'address': {u'slot': u'0x06', u'bus': u'0x00', u'domain': u'0x0000',
u'type': u'pci', u'function': u'0x0'}, u'device': u'disk', u'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',
u'propagateErrors': u'off', u'optional': u'false', u'name': u'vda',
u'bootOrder': u'1', u'volumeID':
u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', u'alias': u'virtio-disk0',
u'volumeChain': [{'domainID': u'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'volumeID':
u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3.lease',
'imageID': u'56461302-0710-4df0-964d-5e7b1ff07828', 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3'}]}],
u'guestDiskMapping': {}, u'spiceSecureChannels':
u'smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir',
u'vmType': u'kvm', u'displayIp': '0', u'displaySecurePort': '-1',
u'memSize': 1536, u'displayPort': u'5900', u'cpuType': u'Conroe',
'clientIp': u'', u'statusTime': '4299704920', u'vmName':
u'HostedEngine', u'display': 'vnc'}]
Reactor thread::INFO::2015-11-09
11:17:48,004::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57851
Reactor
thread::DEBUG::2015-11-09
11:17:48,012::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:48,013::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:57851
Reactor
thread::DEBUG::2015-11-09
11:17:48,013::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57851)
BindingXMLRPC::INFO::2015-11-09
11:17:48,013::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57851
Thread-4248::INFO::2015-11-09
11:17:48,015::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57851 started
Thread-4248::INFO::2015-11-09
11:17:48,022::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57851 stopped
Thread-303::DEBUG::2015-11-09
11:17:48,143::fileSD::173::Storage.Misc.excCmd::(getReadDelay)
/usr/bin/dd
if=/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_data/0af99439-f140-4636-90f7-f43904735da0/dom_md/metadata
iflag=direct of=/dev/null bs=4096 count=1 (cwd
None)
Thread-303::DEBUG::2015-11-09
11:17:48,154::fileSD::173::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
<err> = '0+1 records in\n0+1 records out\n461 bytes (461 B) copied,
0.000382969 s, 1.2 MB/s\n'; <rc> = 0
mailbox.SPMMonitor::DEBUG::2015-11-09
11:17:48,767::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail)
dd
if=/rhev/data-center/00000001-0001-0001-0001-000000000230/mastersd/dom_md/inbox
iflag=direct,fullblock count=1 bs=1024000 (cwd
None)
mailbox.SPMMonitor::DEBUG::2015-11-09
11:17:48,783::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail)
SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB)
copied, 0.00507258 s, 202 MB/s\n'; <rc> = 0
Reactor
thread::INFO::2015-11-09
11:17:49,939::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57852
Reactor
thread::DEBUG::2015-11-09
11:17:49,947::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:49,947::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:57852
Reactor
thread::DEBUG::2015-11-09
11:17:49,947::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57852)
BindingXMLRPC::INFO::2015-11-09
11:17:49,948::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57852
Thread-4249::INFO::2015-11-09
11:17:49,949::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57852 started
Thread-4249::DEBUG::2015-11-09
11:17:49,950::bindingxmlrpc::1257::vds::(wrapper) client
[127.0.0.1]::call getCapabilities with ()
{}
Thread-4249::DEBUG::2015-11-09
11:17:49,962::netinfo::454::root::(_dhcp_used) DHCPv6 configuration not
specified for ovirtmgmt.
Thread-4249::DEBUG::2015-11-09
11:17:49,963::netinfo::686::root::(_get_gateway) The gateway
10.10.10.254 is duplicated for the device
ovirtmgmt
Thread-4249::DEBUG::2015-11-09
11:17:49,965::netinfo::440::root::(_dhcp_used) There is no VDSM network
configured on enp2s0.
Thread-4249::DEBUG::2015-11-09
11:17:49,965::netinfo::440::root::(_dhcp_used) There is no VDSM network
configured on enp2s0.
Thread-4249::DEBUG::2015-11-09
11:17:49,968::netinfo::440::root::(_dhcp_used) There is no VDSM network
configured on bond0.
Thread-4249::DEBUG::2015-11-09
11:17:49,968::netinfo::440::root::(_dhcp_used) There is no VDSM network
configured on bond0.
Thread-4249::DEBUG::2015-11-09
11:17:49,970::netinfo::686::root::(_get_gateway) The gateway
10.10.10.254 is duplicated for the device
ovirtmgmt
Thread-4249::DEBUG::2015-11-09
11:17:49,971::utils::676::root::(execCmd) /usr/sbin/tc qdisc show (cwd
None)
Thread-4249::DEBUG::2015-11-09
11:17:49,979::utils::694::root::(execCmd) SUCCESS: <err> = ''; <rc> =
0
Thread-4249::DEBUG::2015-11-09
11:17:49,980::utils::676::root::(execCmd) /usr/sbin/tc class show dev
enp2s0 classid 0:1388 (cwd None)
Thread-4249::DEBUG::2015-11-09
11:17:49,989::utils::694::root::(execCmd) SUCCESS: <err> = ''; <rc> =
0
Thread-4249::DEBUG::2015-11-09
11:17:49,993::caps::807::root::(_getKeyPackages) rpm package
('glusterfs-rdma',) not found
Thread-4249::DEBUG::2015-11-09
11:17:49,997::caps::807::root::(_getKeyPackages) rpm package
('gluster-swift-object',) not found
Thread-4249::DEBUG::2015-11-09
11:17:49,997::caps::807::root::(_getKeyPackages) rpm package
('gluster-swift-proxy',) not found
Thread-4249::DEBUG::2015-11-09
11:17:50,001::caps::807::root::(_getKeyPackages) rpm package
('gluster-swift-plugin',) not found
Thread-4249::DEBUG::2015-11-09
11:17:50,002::caps::807::root::(_getKeyPackages) rpm package
('gluster-swift',) not found
Thread-4249::DEBUG::2015-11-09
11:17:50,002::caps::807::root::(_getKeyPackages) rpm package
('gluster-swift-container',) not found
Thread-4249::DEBUG::2015-11-09
11:17:50,003::caps::807::root::(_getKeyPackages) rpm package
('gluster-swift-account',) not found
Thread-4249::DEBUG::2015-11-09
11:17:50,003::caps::807::root::(_getKeyPackages) rpm package
('gluster-swift-doc',) not found
Thread-4249::DEBUG::2015-11-09
11:17:50,005::bindingxmlrpc::1264::vds::(wrapper) return getCapabilities
with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory':
{'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:1954deeb7a38'}],
'FC': []}, 'packages2': {'kernel': {'release': '229.20.1.el7.x86_64',
'buildtime': 1446588607.0, 'version': '3.10.0'}, 'glusterfs-fuse':
{'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'},
'spice-server': {'release': '9.el7_1.3', 'buildtime': 1444691699L,
'version': '0.12.4'}, 'librbd1': {'release': '2.el7', 'buildtime':
1425594433L, 'version': '0.80.7'}, 'vdsm': {'release': '0.el7.centos',
'buildtime': 1446474396L, 'version': '4.17.10.1'}, 'qemu-kvm':
{'release': '29.1.el7', 'buildtime': 1444310806L, 'version': '2.3.0'},
'glusterfs': {'release': '1.el7', 'buildtime': 1444235292L, 'version':
'3.7.5'}, 'libvirt': {'release': '16.el7_1.5', 'buildtime': 1446559281L,
'version': '1.2.8'}, 'qemu-img': {'release': '29.1.el7', 'buildtime':
1444310806L, 'version': '2.3.0'}, 'mom': {'release': '2.el7',
'buildtime': 1442501481L, 'version': '0.5.1'},
'glusterfs-geo-replication': {'release': '1.el7', 'buildtime':
1444235292L, 'version': '3.7.5'}, 'glusterfs-server': {'release':
'1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}, 'glusterfs-cli':
{'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}},
'numaNodeDistance': {'0': [10]}, 'cpuModel': 'Intel(R) Core(TM)2 Quad
CPU Q8400 @ 2.66GHz', 'liveMerge': 'true', 'hooks': {'before_vm_start':
{'50_hostedengine': {'md5': '2a6d96c26a3599812be6cf1a13d9f485'}}},
'vmTypes': ['kvm'], 'selinux': {'mode': '-1'}, 'liveSnapshot': 'true',
'kdumpStatus': 0, 'networks': {'ovirtmgmt': {'iface': 'ovirtmgmt',
'addr': '10.10.10.211', 'cfg': {'IPV6INIT': 'no', 'DEFROUTE': 'yes',
'IPADDR': '10.10.10.211', 'HOTPLUG': 'no', 'GATEWAY': '10.10.10.254',
'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0',
'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'ovirtmgmt', 'TYPE':
'Bridge', 'ONBOOT': 'yes'}, 'bridged': True, 'ipv6addrs':
['fe80::6e62:6dff:feb3:3b72/64'], 'gateway': '10.10.10.254', 'dhcpv4':
False, 'netmask': '255.255.255.0', 'dhcpv6': False, 'stp': 'off',
'ipv4addrs': ['10.10.10.211/24'], 'mtu': '1500', 'ipv6gateway': '::',
'ports': ['vnet0', 'enp2s0']}}, 'bridges': {'ovirtmgmt': {'addr':
'10.10.10.211', 'cfg': {'IPV6INIT': 'no', 'DEFROUTE': 'yes', 'IPADDR':
'10.10.10.211', 'HOTPLUG': 'no', 'GATEWAY': '10.10.10.254', 'DELAY':
'0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO':
'none', 'STP': 'off', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT':
'yes'}, 'ipv6addrs': ['fe80::6e62:6dff:feb3:3b72/64'], 'gateway':
'10.10.10.254', 'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6':
False, 'stp': 'off', 'ipv4addrs': ['10.10.10.211/24'], 'mtu': '1500',
'ipv6gateway': '::', 'ports': ['vnet0', 'enp2s0'], 'opts':
{'multicast_last_member_count': '2', 'hash_elasticity': '4',
'multicast_query_response_interval': '1000', 'group_fwd_mask': '0x0',
'multicast_snooping': '1', 'multicast_startup_query_interval': '3125',
'hello_timer': '172', 'multicast_querier_interval': '25500', 'max_age':
'2000', 'hash_max': '512', 'stp_state': '0', 'topology_change_detected':
'0', 'priority': '32768', 'multicast_membership_interval': '26000',
'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0',
'multicast_startup_query_count': '2', 'nf_call_iptables': '0',
'topology_change': '0', 'hello_time': '200', 'root_id':
'8000.6c626db33b72', 'bridge_id': '8000.6c626db33b72',
'topology_change_timer': '0', 'ageing_time': '30000',
'nf_call_ip6tables': '0', 'gc_timer': '25099', 'nf_call_arptables': '0',
'group_addr': '1:80:c2:0:0:0', 'multicast_last_member_interval': '100',
'default_pvid': '1', 'multicast_query_interval': '12500', 'tcn_timer':
'0', 'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay':
'0'}}}, 'uuid': 'c2cac9d6-9ed7-44f0-8bbc-eff4c71db7ca', 'onlineCpus':
'0,1,2,3', 'nics': {'enp2s0': {'addr': '', 'ipv6gateway': '::',
'ipv6addrs': ['fe80::6e62:6dff:feb3:3b72/64'], 'mtu': '1500', 'dhcpv4':
False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg':
{'BRIDGE': 'ovirtmgmt', 'IPV6INIT': 'no', 'NM_CONTROLLED': 'no',
'HWADDR': '6c:62:6d:b3:3b:72', 'BOOTPROTO': 'none', 'DEVICE': 'enp2s0',
'ONBOOT': 'yes'}, 'hwaddr': '6c:62:6d:b3:3b:72', 'speed': 1000,
'gateway': ''}}, 'software_revision': '0', 'hostdevPassthrough':
'false', 'clusterLevels': ['3.4', '3.5', '3.6'], 'cpuFlags':
'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,sse4_1,xsave,lahf_lm,dtherm,tpr_shadow,vnmi,flexpriority,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_n270',
'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:1954deeb7a38',
'netConfigDirty': 'False', 'supportedENGINEs': ['3.4', '3.5', '3.6'],
'autoNumaBalancing': 0, 'additionalFeatures': ['GLUSTER_SNAPSHOT',
'GLUSTER_GEO_REPLICATION', 'GLUSTER_BRICK_MANAGEMENT'], 'reservedMem':
'321', 'bondings': {'bond0': {'ipv4addrs': [], 'addr': '', 'cfg':
{'BOOTPROTO': 'none'}, 'ipv6addrs': [], 'active_slave': '', 'mtu':
'1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'slaves': [],
'hwaddr': 'ba:5f:22:a3:17:07', 'ipv6gateway': '::', 'gateway': '',
'opts': {}}}, 'software_version': '4.17', 'memSize': '3782', 'cpuSpeed':
'2670.000', 'numaNodes': {'0': {'totalMemory': '3782', 'cpus': [0, 1, 2,
3]}}, 'cpuSockets': '1', 'vlans': {}, 'lastClientIface': 'lo',
'cpuCores': '4', 'kvmEnabled': 'true', 'guestOverhead': '65',
'version_name': 'Snow Man', 'cpuThreads': '4', 'emulatedMachines':
['pc-i440fx-rhel7.1.0', 'rhel6.3.0', 'pc-q35-rhel7.2.0',
'pc-i440fx-rhel7.0.0', 'rhel6.1.0', 'rhel6.6.0', 'rhel6.2.0', 'pc',
'pc-q35-rhel7.0.0', 'pc-q35-rhel7.1.0', 'q35', 'pc-i440fx-rhel7.2.0',
'rhel6.4.0', 'rhel6.0.0', 'rhel6.5.0'], 'rngSources': ['random'],
'operatingSystem': {'release': '1.1503.el7.centos.2.8', 'version': '7',
'name': 'RHEL'}, 'lastClient':
'127.0.0.1'}}
Thread-4249::INFO::2015-11-09
11:17:50,020::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57852
stopped
mailbox.SPMMonitor::DEBUG::2015-11-09
11:17:50,797::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail)
dd
if=/rhev/data-center/00000001-0001-0001-0001-000000000230/mastersd/dom_md/inbox
iflag=direct,fullblock count=1 bs=1024000 (cwd
None)
mailbox.SPMMonitor::DEBUG::2015-11-09
11:17:50,815::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail)
SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB)
copied, 0.00511026 s, 200 MB/s\n'; <rc> = 0
Reactor
thread::INFO::2015-11-09
11:17:52,098::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57853
Reactor
thread::DEBUG::2015-11-09
11:17:52,106::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,107::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:57853
Reactor
thread::DEBUG::2015-11-09
11:17:52,107::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57853)
BindingXMLRPC::INFO::2015-11-09
11:17:52,108::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57853
Thread-4250::INFO::2015-11-09
11:17:52,110::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57853 started
Thread-4250::DEBUG::2015-11-09
11:17:52,111::bindingxmlrpc::1257::vds::(wrapper) client
[127.0.0.1]::call getHardwareInfo with ()
{}
Thread-4250::DEBUG::2015-11-09
11:17:52,112::bindingxmlrpc::1264::vds::(wrapper) return getHardwareInfo
with {'status': {'message': 'Done', 'code': 0}, 'info':
{'systemProductName': 'MS-7529', 'systemSerialNumber': 'To Be Filled By
O.E.M.', 'systemFamily': 'To Be Filled By O.E.M.', 'systemVersion':
'1.0', 'systemUUID': '00000000-0000-0000-0000-6C626DB33B72',
'systemManufacturer': 'MICRO-STAR INTERNATIONAL
CO.,LTD'}}
Thread-4250::INFO::2015-11-09
11:17:52,114::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57853 stopped
Reactor thread::INFO::2015-11-09
11:17:52,116::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57854
Reactor
thread::DEBUG::2015-11-09
11:17:52,124::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,124::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from
127.0.0.1:57854
BindingXMLRPC::INFO::2015-11-09
11:17:52,125::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57854
Reactor thread::DEBUG::2015-11-09
11:17:52,125::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57854)
Thread-4251::INFO::2015-11-09
11:17:52,128::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57854 started
Thread-4251::DEBUG::2015-11-09
11:17:52,129::bindingxmlrpc::325::vds::(wrapper) client
[127.0.0.1]
Thread-4251::DEBUG::2015-11-09
11:17:52,130::task::595::Storage.TaskManager.Task::(_updateState)
Task=`8535d95e-dce6-4474-bd8d-7824f68cf68a`::moving from state init ->
state preparing
Thread-4251::INFO::2015-11-09
11:17:52,130::logUtils::48::dispatcher::(wrapper) Run and protect:
connectStorageServer(domType=7,
spUUID='00000000-0000-0000-0000-000000000000', conList=[{'id':
'2c69bdcf-793b-4fda-b326-b8aa6c33ade0', 'vfs_type': 'glusterfs',
'connection': 'ovirt02.mafia.kru:/engine', 'user': 'kvm'}],
options=None)
Thread-4251::DEBUG::2015-11-09
11:17:52,132::hsm::2417::Storage.HSM::(__prefetchDomains)
glusterDomPath: glusterSD/*
Thread-4251::DEBUG::2015-11-09
11:17:52,146::hsm::2429::Storage.HSM::(__prefetchDomains) Found SD
uuids: (u'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
u'0af99439-f140-4636-90f7-f43904735da0')
Thread-4251::DEBUG::2015-11-09
11:17:52,147::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs:
{b4c488af-9d2f-4b7b-a6f6-74a0bac06c41: storage.glusterSD.findDomain,
0af99439-f140-4636-90f7-f43904735da0:
storage.glusterSD.findDomain}
Thread-4251::INFO::2015-11-09
11:17:52,147::logUtils::51::dispatcher::(wrapper) Run and protect:
connectStorageServer, Return response: {'statuslist': [{'status': 0,
'id':
'2c69bdcf-793b-4fda-b326-b8aa6c33ade0'}]}
Thread-4251::DEBUG::2015-11-09
11:17:52,147::task::1191::Storage.TaskManager.Task::(prepare)
Task=`8535d95e-dce6-4474-bd8d-7824f68cf68a`::finished: {'statuslist':
[{'status': 0, 'id':
'2c69bdcf-793b-4fda-b326-b8aa6c33ade0'}]}
Thread-4251::DEBUG::2015-11-09
11:17:52,147::task::595::Storage.TaskManager.Task::(_updateState)
Task=`8535d95e-dce6-4474-bd8d-7824f68cf68a`::moving from state preparing
-> state finished
Thread-4251::DEBUG::2015-11-09
11:17:52,147::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{}
Thread-4251::DEBUG::2015-11-09
11:17:52,148::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-4251::DEBUG::2015-11-09
11:17:52,148::task::993::Storage.TaskManager.Task::(_decref)
Task=`8535d95e-dce6-4474-bd8d-7824f68cf68a`::ref 0 aborting
False
Thread-4251::INFO::2015-11-09
11:17:52,149::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57854 stopped
Reactor thread::INFO::2015-11-09
11:17:52,150::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57855
Reactor
thread::DEBUG::2015-11-09
11:17:52,158::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,158::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from
127.0.0.1:57855
BindingXMLRPC::INFO::2015-11-09
11:17:52,159::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57855
Reactor thread::DEBUG::2015-11-09
11:17:52,159::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57855)
Thread-4255::INFO::2015-11-09
11:17:52,162::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57855 started
Thread-4255::DEBUG::2015-11-09
11:17:52,162::bindingxmlrpc::325::vds::(wrapper) client
[127.0.0.1]
Thread-4255::DEBUG::2015-11-09
11:17:52,163::task::595::Storage.TaskManager.Task::(_updateState)
Task=`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::moving from state init ->
state preparing
Thread-4255::INFO::2015-11-09
11:17:52,163::logUtils::48::dispatcher::(wrapper) Run and protect:
getStorageDomainStats(sdUUID='b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
options=None)
Thread-4255::DEBUG::2015-11-09
11:17:52,164::resourceManager::198::Storage.ResourceManager.Request::(__init__)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`41531754-8ba9-4fc4-8788-d4d67fa33e5c`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '2848' at
'getStorageDomainStats'
Thread-4255::DEBUG::2015-11-09
11:17:52,164::resourceManager::542::Storage.ResourceManager::(registerResource)
Trying to register resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock type
'shared'
Thread-4255::DEBUG::2015-11-09
11:17:52,164::resourceManager::601::Storage.ResourceManager::(registerResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now
locking as 'shared' (1 active user)
Thread-4255::DEBUG::2015-11-09
11:17:52,164::resourceManager::238::Storage.ResourceManager.Request::(grant)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`41531754-8ba9-4fc4-8788-d4d67fa33e5c`::Granted
request
Thread-4255::DEBUG::2015-11-09
11:17:52,165::task::827::Storage.TaskManager.Task::(resourceAcquired)
Task=`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::_resourcesAcquired:
Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
(shared)
Thread-4255::DEBUG::2015-11-09
11:17:52,165::task::993::Storage.TaskManager.Task::(_decref)
Task=`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::ref 1 aborting
False
Thread-4255::DEBUG::2015-11-09
11:17:52,165::misc::750::Storage.SamplingMethod::(__call__) Trying to
enter sampling method
(storage.sdc.refreshStorage)
Thread-4255::DEBUG::2015-11-09
11:17:52,165::misc::753::Storage.SamplingMethod::(__call__) Got in to
sampling method
Thread-4255::DEBUG::2015-11-09
11:17:52,165::misc::750::Storage.SamplingMethod::(__call__) Trying to
enter sampling method
(storage.iscsi.rescan)
Thread-4255::DEBUG::2015-11-09
11:17:52,165::misc::753::Storage.SamplingMethod::(__call__) Got in to
sampling method
Thread-4255::DEBUG::2015-11-09
11:17:52,166::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan,
this will take up to 30 seconds
Thread-4255::DEBUG::2015-11-09
11:17:52,166::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo
-n /sbin/iscsiadm -m session -R (cwd
None)
Thread-4255::DEBUG::2015-11-09
11:17:52,183::misc::760::Storage.SamplingMethod::(__call__) Returning
last result
Thread-4255::DEBUG::2015-11-09
11:17:52,183::misc::750::Storage.SamplingMethod::(__call__) Trying to
enter sampling method
(storage.hba.rescan)
Thread-4255::DEBUG::2015-11-09
11:17:52,184::misc::753::Storage.SamplingMethod::(__call__) Got in to
sampling method
Thread-4255::DEBUG::2015-11-09
11:17:52,184::hba::56::Storage.HBA::(rescan) Starting
scan
Thread-4255::DEBUG::2015-11-09
11:17:52,295::hba::62::Storage.HBA::(rescan) Scan
finished
Thread-4255::DEBUG::2015-11-09
11:17:52,296::misc::760::Storage.SamplingMethod::(__call__) Returning
last result
Thread-4255::DEBUG::2015-11-09
11:17:52,296::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo
-n /usr/sbin/multipath (cwd None)
Thread-4255::DEBUG::2015-11-09
11:17:52,362::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS:
<err> = ''; <rc> = 0
Thread-4255::DEBUG::2015-11-09
11:17:52,362::utils::676::root::(execCmd) /sbin/udevadm settle
--timeout=5 (cwd None)
Thread-4255::DEBUG::2015-11-09
11:17:52,371::utils::694::root::(execCmd) SUCCESS: <err> = ''; <rc> =
0
Thread-4255::DEBUG::2015-11-09
11:17:52,372::lvm::498::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' got the operation
mutex
Thread-4255::DEBUG::2015-11-09
11:17:52,372::lvm::500::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' released the operation
mutex
Thread-4255::DEBUG::2015-11-09
11:17:52,373::lvm::509::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' got the operation
mutex
Thread-4255::DEBUG::2015-11-09
11:17:52,373::lvm::511::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' released the operation
mutex
Thread-4255::DEBUG::2015-11-09
11:17:52,373::lvm::529::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' got the operation
mutex
Thread-4255::DEBUG::2015-11-09
11:17:52,373::lvm::531::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' released the operation
mutex
Thread-4255::DEBUG::2015-11-09
11:17:52,374::misc::760::Storage.SamplingMethod::(__call__) Returning
last result
Thread-4255::DEBUG::2015-11-09
11:17:52,386::fileSD::157::Storage.StorageDomainManifest::(__init__)
Reading domain in path
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
Thread-4255::DEBUG::2015-11-09
11:17:52,387::persistentDict::192::Storage.PersistentDict::(__init__)
Created a persistent dict with FileMetadataRW
backend
Thread-4255::DEBUG::2015-11-09
11:17:52,395::persistentDict::234::Storage.PersistentDict::(refresh)
read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=hosted_storage',
'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=',
'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=',
'REMOTE_PATH=ovirt02.mafia.kru:/engine', 'ROLE=Regular',
'SDUUID=b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'TYPE=GLUSTERFS',
'VERSION=3',
'_SHA_CKSUM=cb09606ada74ed4155ad158923dd930264780fc8']
Thread-4255::DEBUG::2015-11-09
11:17:52,398::fileSD::647::Storage.StorageDomain::(imageGarbageCollector)
Removing remnants of deleted images []
Thread-4255::INFO::2015-11-09
11:17:52,399::sd::442::Storage.StorageDomain::(_registerResourceNamespaces)
Resource namespace b4c488af-9d2f-4b7b-a6f6-74a0bac06c41_imageNS already
registered
Thread-4255::INFO::2015-11-09
11:17:52,399::sd::450::Storage.StorageDomain::(_registerResourceNamespaces)
Resource namespace b4c488af-9d2f-4b7b-a6f6-74a0bac06c41_volumeNS
already registered
Thread-4255::INFO::2015-11-09
11:17:52,400::logUtils::51::dispatcher::(wrapper) Run and protect:
getStorageDomainStats, Return response: {'stats': {'mdasize': 0,
'mdathreshold': True, 'mdavalid': True, 'diskfree': '210878988288',
'disktotal': '214643507200', 'mdafree':
0}}
Thread-4255::DEBUG::2015-11-09
11:17:52,401::task::1191::Storage.TaskManager.Task::(prepare)
Task=`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::finished: {'stats':
{'mdasize': 0, 'mdathreshold': True, 'mdavalid': True, 'diskfree':
'210878988288', 'disktotal': '214643507200', 'mdafree':
0}}
Thread-4255::DEBUG::2015-11-09
11:17:52,401::task::595::Storage.TaskManager.Task::(_updateState)
Task=`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::moving from state preparing
-> state finished
Thread-4255::DEBUG::2015-11-09
11:17:52,401::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': < ResourceRef
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj:
'None'>}
Thread-4255::DEBUG::2015-11-09
11:17:52,401::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-4255::DEBUG::2015-11-09
11:17:52,402::resourceManager::616::Storage.ResourceManager::(releaseResource)
Trying to release resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'
Thread-4255::DEBUG::2015-11-09
11:17:52,402::resourceManager::635::Storage.ResourceManager::(releaseResource)
Released resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0
active users)
Thread-4255::DEBUG::2015-11-09
11:17:52,402::resourceManager::641::Storage.ResourceManager::(releaseResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free,
finding out if anyone is waiting for it.
Thread-4255::DEBUG::2015-11-09
11:17:52,402::resourceManager::649::Storage.ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', Clearing
records.
Thread-4255::DEBUG::2015-11-09
11:17:52,402::task::993::Storage.TaskManager.Task::(_decref)
Task=`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::ref 0 aborting
False
Thread-4255::INFO::2015-11-09
11:17:52,404::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57855 stopped
Reactor thread::INFO::2015-11-09
11:17:52,405::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57856
Reactor
thread::DEBUG::2015-11-09
11:17:52,413::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,414::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from
127.0.0.1:57856
BindingXMLRPC::INFO::2015-11-09
11:17:52,414::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57856
Reactor thread::DEBUG::2015-11-09
11:17:52,414::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57856)
Thread-4259::INFO::2015-11-09
11:17:52,417::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57856 started
Thread-4259::DEBUG::2015-11-09
11:17:52,418::bindingxmlrpc::325::vds::(wrapper) client
[127.0.0.1]
Thread-4259::DEBUG::2015-11-09
11:17:52,418::task::595::Storage.TaskManager.Task::(_updateState)
Task=`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::moving from state init ->
state preparing
Thread-4259::INFO::2015-11-09
11:17:52,419::logUtils::48::dispatcher::(wrapper) Run and protect:
prepareImage(sdUUID='b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
spUUID='00000000-0000-0000-0000-000000000000',
imgUUID='56461302-0710-4df0-964d-5e7b1ff07828',
leafUUID='8f8ee034-de86-4438-b6eb-9109faa8b3d3')
Thread-4259::DEBUG::2015-11-09
11:17:52,419::resourceManager::198::Storage.ResourceManager.Request::(__init__)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`bec7c8c3-42b9-4acb-88cf-841d9dc28fb0`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '3205' at
'prepareImage'
Thread-4259::DEBUG::2015-11-09
11:17:52,419::resourceManager::542::Storage.ResourceManager::(registerResource)
Trying to register resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock type
'shared'
Thread-4259::DEBUG::2015-11-09
11:17:52,420::resourceManager::601::Storage.ResourceManager::(registerResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now
locking as 'shared' (1 active user)
Thread-4259::DEBUG::2015-11-09
11:17:52,420::resourceManager::238::Storage.ResourceManager.Request::(grant)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`bec7c8c3-42b9-4acb-88cf-841d9dc28fb0`::Granted
request
Thread-4259::DEBUG::2015-11-09
11:17:52,420::task::827::Storage.TaskManager.Task::(resourceAcquired)
Task=`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::_resourcesAcquired:
Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
(shared)
Thread-4259::DEBUG::2015-11-09
11:17:52,420::task::993::Storage.TaskManager.Task::(_decref)
Task=`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::ref 1 aborting
False
Thread-4259::DEBUG::2015-11-09
11:17:52,445::fileSD::536::Storage.StorageDomain::(activateVolumes)
Fixing permissions on
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3
Thread-4259::DEBUG::2015-11-09
11:17:52,446::fileUtils::143::Storage.fileUtils::(createdir) Creating
directory: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
mode: None
Thread-4259::WARNING::2015-11-09
11:17:52,446::fileUtils::152::Storage.fileUtils::(createdir) Dir
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 already
exists
Thread-4259::DEBUG::2015-11-09
11:17:52,446::fileSD::511::Storage.StorageDomain::(createImageLinks)
Creating symlink from
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828
to
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/56461302-0710-4df0-964d-5e7b1ff07828
Thread-4259::DEBUG::2015-11-09
11:17:52,447::fileSD::516::Storage.StorageDomain::(createImageLinks)
img run dir already exists:
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/56461302-0710-4df0-964d-5e7b1ff07828
Thread-4259::DEBUG::2015-11-09
11:17:52,448::fileVolume::535::Storage.Volume::(validateVolumePath)
validate path for
8f8ee034-de86-4438-b6eb-9109faa8b3d3
Thread-4259::INFO::2015-11-09
11:17:52,450::logUtils::51::dispatcher::(wrapper) Run and protect:
prepareImage, Return response: {'info': {'domainID':
'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',
'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3.lease',
'imageID': '56461302-0710-4df0-964d-5e7b1ff07828'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',
'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3.lease',
'imageID':
'56461302-0710-4df0-964d-5e7b1ff07828'}]}
Thread-4259::DEBUG::2015-11-09
11:17:52,450::task::1191::Storage.TaskManager.Task::(prepare)
Task=`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::finished: {'info':
{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',
'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3.lease',
'imageID': '56461302-0710-4df0-964d-5e7b1ff07828'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',
'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3.lease',
'imageID':
'56461302-0710-4df0-964d-5e7b1ff07828'}]}
Thread-4259::DEBUG::2015-11-09
11:17:52,450::task::595::Storage.TaskManager.Task::(_updateState)
Task=`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::moving from state preparing
-> state finished
Thread-4259::DEBUG::2015-11-09
11:17:52,450::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': < ResourceRef
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj:
'None'>}
Thread-4259::DEBUG::2015-11-09
11:17:52,450::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-4259::DEBUG::2015-11-09
11:17:52,451::resourceManager::616::Storage.ResourceManager::(releaseResource)
Trying to release resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'
Thread-4259::DEBUG::2015-11-09
11:17:52,451::resourceManager::635::Storage.ResourceManager::(releaseResource)
Released resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0
active users)
Thread-4259::DEBUG::2015-11-09
11:17:52,451::resourceManager::641::Storage.ResourceManager::(releaseResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free,
finding out if anyone is waiting for it.
Thread-4259::DEBUG::2015-11-09
11:17:52,451::resourceManager::649::Storage.ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', Clearing
records.
Thread-4259::DEBUG::2015-11-09
11:17:52,451::task::993::Storage.TaskManager.Task::(_decref)
Task=`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::ref 0 aborting
False
Thread-4259::INFO::2015-11-09
11:17:52,454::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57856 stopped
Reactor thread::INFO::2015-11-09
11:17:52,454::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57857
Reactor
thread::DEBUG::2015-11-09
11:17:52,463::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,463::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:57857
Reactor
thread::DEBUG::2015-11-09
11:17:52,464::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57857)
BindingXMLRPC::INFO::2015-11-09
11:17:52,464::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57857
Thread-4260::INFO::2015-11-09
11:17:52,466::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57857 started
Thread-4260::DEBUG::2015-11-09
11:17:52,467::bindingxmlrpc::325::vds::(wrapper) client
[127.0.0.1]
Thread-4260::DEBUG::2015-11-09
11:17:52,467::task::595::Storage.TaskManager.Task::(_updateState)
Task=`aed16a50-ede9-4ff5-92ef-356692fd56ae`::moving from state init ->
state preparing
Thread-4260::INFO::2015-11-09
11:17:52,467::logUtils::48::dispatcher::(wrapper) Run and protect:
prepareImage(sdUUID='b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
spUUID='00000000-0000-0000-0000-000000000000',
imgUUID='fd81353f-b654-4493-bcaf-2f417849b830',
leafUUID='8bb29fcb-c109-4f0a-a227-3819b6ecfdd9')
Thread-4260::DEBUG::2015-11-09
11:17:52,468::resourceManager::198::Storage.ResourceManager.Request::(__init__)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`974119cd-1351-46e9-8062-ffb1298c4ac9`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '3205' at
'prepareImage'
Thread-4260::DEBUG::2015-11-09
11:17:52,468::resourceManager::542::Storage.ResourceManager::(registerResource)
Trying to register resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock type
'shared'
Thread-4260::DEBUG::2015-11-09
11:17:52,468::resourceManager::601::Storage.ResourceManager::(registerResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now
locking as 'shared' (1 active user)
Thread-4260::DEBUG::2015-11-09
11:17:52,468::resourceManager::238::Storage.ResourceManager.Request::(grant)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`974119cd-1351-46e9-8062-ffb1298c4ac9`::Granted
request
Thread-4260::DEBUG::2015-11-09
11:17:52,469::task::827::Storage.TaskManager.Task::(resourceAcquired)
Task=`aed16a50-ede9-4ff5-92ef-356692fd56ae`::_resourcesAcquired:
Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
(shared)
Thread-4260::DEBUG::2015-11-09
11:17:52,469::task::993::Storage.TaskManager.Task::(_decref)
Task=`aed16a50-ede9-4ff5-92ef-356692fd56ae`::ref 1 aborting
False
Thread-4260::DEBUG::2015-11-09
11:17:52,485::fileSD::536::Storage.StorageDomain::(activateVolumes)
Fixing permissions on
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9
Thread-4260::DEBUG::2015-11-09
11:17:52,486::fileUtils::143::Storage.fileUtils::(createdir) Creating
directory: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
mode: None
Thread-4260::WARNING::2015-11-09
11:17:52,487::fileUtils::152::Storage.fileUtils::(createdir) Dir
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 already
exists
Thread-4260::DEBUG::2015-11-09
11:17:52,487::fileSD::511::Storage.StorageDomain::(createImageLinks)
Creating symlink from
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830
to
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-4493-bcaf-2f417849b830
Thread-4260::DEBUG::2015-11-09
11:17:52,487::fileSD::516::Storage.StorageDomain::(createImageLinks)
img run dir already exists:
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-4493-bcaf-2f417849b830
Thread-4260::DEBUG::2015-11-09
11:17:52,488::fileVolume::535::Storage.Volume::(validateVolumePath)
validate path for
8bb29fcb-c109-4f0a-a227-3819b6ecfdd9
Thread-4260::INFO::2015-11-09
11:17:52,490::logUtils::51::dispatcher::(wrapper) Run and protect:
prepareImage, Return response: {'info': {'domainID':
'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9',
'volumeID': u'8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9.lease',
'imageID': 'fd81353f-b654-4493-bcaf-2f417849b830'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9',
'volumeID': u'8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9.lease',
'imageID':
'fd81353f-b654-4493-bcaf-2f417849b830'}]}
Thread-4260::DEBUG::2015-11-09
11:17:52,490::task::1191::Storage.TaskManager.Task::(prepare)
Task=`aed16a50-ede9-4ff5-92ef-356692fd56ae`::finished: {'info':
{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9',
'volumeID': u'8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9.lease',
'imageID': 'fd81353f-b654-4493-bcaf-2f417849b830'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9',
'volumeID': u'8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9.lease',
'imageID':
'fd81353f-b654-4493-bcaf-2f417849b830'}]}
Thread-4260::DEBUG::2015-11-09
11:17:52,490::task::595::Storage.TaskManager.Task::(_updateState)
Task=`aed16a50-ede9-4ff5-92ef-356692fd56ae`::moving from state preparing
-> state finished
Thread-4260::DEBUG::2015-11-09
11:17:52,490::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': < ResourceRef
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj:
'None'>}
Thread-4260::DEBUG::2015-11-09
11:17:52,491::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-4260::DEBUG::2015-11-09
11:17:52,491::resourceManager::616::Storage.ResourceManager::(releaseResource)
Trying to release resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'
Thread-4260::DEBUG::2015-11-09
11:17:52,491::resourceManager::635::Storage.ResourceManager::(releaseResource)
Released resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0
active users)
Thread-4260::DEBUG::2015-11-09
11:17:52,491::resourceManager::641::Storage.ResourceManager::(releaseResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free,
finding out if anyone is waiting for it.
Thread-4260::DEBUG::2015-11-09
11:17:52,491::resourceManager::649::Storage.ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', Clearing
records.
Thread-4260::DEBUG::2015-11-09
11:17:52,492::task::993::Storage.TaskManager.Task::(_decref)
Task=`aed16a50-ede9-4ff5-92ef-356692fd56ae`::ref 0 aborting
False
Thread-4260::INFO::2015-11-09
11:17:52,494::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57857 stopped
Reactor thread::INFO::2015-11-09
11:17:52,494::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57858
Reactor
thread::DEBUG::2015-11-09
11:17:52,503::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,504::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from
127.0.0.1:57858
BindingXMLRPC::INFO::2015-11-09
11:17:52,504::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57858
Reactor thread::DEBUG::2015-11-09
11:17:52,504::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57858)
Thread-4261::INFO::2015-11-09
11:17:52,507::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57858 started
Thread-4261::DEBUG::2015-11-09
11:17:52,508::bindingxmlrpc::325::vds::(wrapper) client
[127.0.0.1]
Thread-4261::DEBUG::2015-11-09
11:17:52,508::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d39463fd-486f-4280-903a-51b72862b648`::moving from state init ->
state preparing
Thread-4261::INFO::2015-11-09
11:17:52,509::logUtils::48::dispatcher::(wrapper) Run and protect:
prepareImage(sdUUID='b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
spUUID='00000000-0000-0000-0000-000000000000',
imgUUID='0e1c20d1-94aa-4003-8e12-0dbbf06a6af8',
leafUUID='3fc3362d-ab6d-4e06-bd72-82d5750c7095')
Thread-4261::DEBUG::2015-11-09
11:17:52,509::resourceManager::198::Storage.ResourceManager.Request::(__init__)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`240c2aba-6c2e-44da-890d-c3d605e1933f`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '3205' at
'prepareImage'
Thread-4261::DEBUG::2015-11-09
11:17:52,509::resourceManager::542::Storage.ResourceManager::(registerResource)
Trying to register resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock type
'shared'
Thread-4261::DEBUG::2015-11-09
11:17:52,509::resourceManager::601::Storage.ResourceManager::(registerResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now
locking as 'shared' (1 active user)
Thread-4261::DEBUG::2015-11-09
11:17:52,510::resourceManager::238::Storage.ResourceManager.Request::(grant)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`240c2aba-6c2e-44da-890d-c3d605e1933f`::Granted
request
Thread-4261::DEBUG::2015-11-09
11:17:52,510::task::827::Storage.TaskManager.Task::(resourceAcquired)
Task=`d39463fd-486f-4280-903a-51b72862b648`::_resourcesAcquired:
Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
(shared)
Thread-4261::DEBUG::2015-11-09
11:17:52,510::task::993::Storage.TaskManager.Task::(_decref)
Task=`d39463fd-486f-4280-903a-51b72862b648`::ref 1 aborting
False
Thread-4261::DEBUG::2015-11-09
11:17:52,526::fileSD::536::Storage.StorageDomain::(activateVolumes)
Fixing permissions on
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095
Thread-4261::DEBUG::2015-11-09
11:17:52,528::fileUtils::143::Storage.fileUtils::(createdir) Creating
directory: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
mode: None
Thread-4261::WARNING::2015-11-09
11:17:52,528::fileUtils::152::Storage.fileUtils::(createdir) Dir
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 already
exists
Thread-4261::DEBUG::2015-11-09
11:17:52,528::fileSD::511::Storage.StorageDomain::(createImageLinks)
Creating symlink from
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8
to
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8
Thread-4261::DEBUG::2015-11-09
11:17:52,528::fileSD::516::Storage.StorageDomain::(createImageLinks)
img run dir already exists:
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8
Thread-4261::DEBUG::2015-11-09
11:17:52,530::fileVolume::535::Storage.Volume::(validateVolumePath)
validate path for
3fc3362d-ab6d-4e06-bd72-82d5750c7095
Thread-4261::INFO::2015-11-09
11:17:52,531::logUtils::51::dispatcher::(wrapper) Run and protect:
prepareImage, Return response: {'info': {'domainID':
'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095',
'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7095', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095.lease',
'imageID': '0e1c20d1-94aa-4003-8e12-0dbbf06a6af8'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095',
'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7095', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095.lease',
'imageID':
'0e1c20d1-94aa-4003-8e12-0dbbf06a6af8'}]}
Thread-4261::DEBUG::2015-11-09
11:17:52,531::task::1191::Storage.TaskManager.Task::(prepare)
Task=`d39463fd-486f-4280-903a-51b72862b648`::finished: {'info':
{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095',
'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7095', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095.lease',
'imageID': '0e1c20d1-94aa-4003-8e12-0dbbf06a6af8'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095',
'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7095', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095.lease',
'imageID':
'0e1c20d1-94aa-4003-8e12-0dbbf06a6af8'}]}
Thread-4261::DEBUG::2015-11-09
11:17:52,532::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d39463fd-486f-4280-903a-51b72862b648`::moving from state preparing
-> state finished
Thread-4261::DEBUG::2015-11-09
11:17:52,532::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': < ResourceRef
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj:
'None'>}
Thread-4261::DEBUG::2015-11-09
11:17:52,532::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-4261::DEBUG::2015-11-09
11:17:52,532::resourceManager::616::Storage.ResourceManager::(releaseResource)
Trying to release resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'
Thread-4261::DEBUG::2015-11-09
11:17:52,532::resourceManager::635::Storage.ResourceManager::(releaseResource)
Released resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0
active users)
Thread-4261::DEBUG::2015-11-09
11:17:52,533::resourceManager::641::Storage.ResourceManager::(releaseResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free,
finding out if anyone is waiting for it.
Thread-4261::DEBUG::2015-11-09
11:17:52,533::resourceManager::649::Storage.ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', Clearing
records.
Thread-4261::DEBUG::2015-11-09
11:17:52,533::task::993::Storage.TaskManager.Task::(_decref)
Task=`d39463fd-486f-4280-903a-51b72862b648`::ref 0 aborting
False
Thread-4261::INFO::2015-11-09
11:17:52,535::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57858 stopped
Reactor thread::INFO::2015-11-09
11:17:52,536::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57859
Reactor
thread::DEBUG::2015-11-09
11:17:52,544::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,545::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:57859
Reactor
thread::DEBUG::2015-11-09
11:17:52,545::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57859)
BindingXMLRPC::INFO::2015-11-09
11:17:52,545::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57859
Thread-4262::INFO::2015-11-09
11:17:52,548::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57859 started
Thread-4262::DEBUG::2015-11-09
11:17:52,548::bindingxmlrpc::325::vds::(wrapper) client
[127.0.0.1]
Thread-4262::DEBUG::2015-11-09
11:17:52,549::task::595::Storage.TaskManager.Task::(_updateState)
Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::moving from state init ->
state preparing
Thread-4262::INFO::2015-11-09
11:17:52,549::logUtils::48::dispatcher::(wrapper) Run and protect:
prepareImage(sdUUID='b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
spUUID='00000000-0000-0000-0000-000000000000',
imgUUID='350fb787-049a-4174-8914-f371aabfa72c',
leafUUID='02c5d59d-638c-4672-814d-d734e334e24a')
Thread-4262::DEBUG::2015-11-09
11:17:52,549::resourceManager::198::Storage.ResourceManager.Request::(__init__)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`fd9ea6d0-3a31-4ec6-a74c-8b84b2e51746`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '3205' at
'prepareImage'
Thread-4262::DEBUG::2015-11-09
11:17:52,550::resourceManager::542::Storage.ResourceManager::(registerResource)
Trying to register resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock type
'shared'
Thread-4262::DEBUG::2015-11-09
11:17:52,550::resourceManager::601::Storage.ResourceManager::(registerResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now
locking as 'shared' (1 active user)
Thread-4262::DEBUG::2015-11-09
11:17:52,550::resourceManager::238::Storage.ResourceManager.Request::(grant)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`fd9ea6d0-3a31-4ec6-a74c-8b84b2e51746`::Granted
request
Thread-4262::DEBUG::2015-11-09
11:17:52,550::task::827::Storage.TaskManager.Task::(resourceAcquired)
Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::_resourcesAcquired:
Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
(shared)
Thread-4262::DEBUG::2015-11-09
11:17:52,551::task::993::Storage.TaskManager.Task::(_decref)
Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::ref 1 aborting
False
Thread-4262::DEBUG::2015-11-09
11:17:52,566::fileSD::536::Storage.StorageDomain::(activateVolumes)
Fixing permissions on
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a
Thread-4262::DEBUG::2015-11-09
11:17:52,568::fileUtils::143::Storage.fileUtils::(createdir) Creating
directory: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
mode: None
Thread-4262::WARNING::2015-11-09
11:17:52,568::fileUtils::152::Storage.fileUtils::(createdir) Dir
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 already
exists
Thread-4262::DEBUG::2015-11-09
11:17:52,568::fileSD::511::Storage.StorageDomain::(createImageLinks)
Creating symlink from
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c
to
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-049a-4174-8914-f371aabfa72c
Thread-4262::DEBUG::2015-11-09
11:17:52,568::fileSD::516::Storage.StorageDomain::(createImageLinks)
img run dir already exists:
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-049a-4174-8914-f371aabfa72c
Thread-4262::DEBUG::2015-11-09
11:17:52,570::fileVolume::535::Storage.Volume::(validateVolumePath)
validate path for
02c5d59d-638c-4672-814d-d734e334e24a
Thread-4262::INFO::2015-11-09
11:17:52,572::logUtils::51::dispatcher::(wrapper) Run and protect:
prepareImage, Return response: {'info': {'domainID':
'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a',
'volumeID': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a.lease',
'imageID': '350fb787-049a-4174-8914-f371aabfa72c'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a',
'volumeID': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a.lease',
'imageID':
'350fb787-049a-4174-8914-f371aabfa72c'}]}
Thread-4262::DEBUG::2015-11-09
11:17:52,573::task::1191::Storage.TaskManager.Task::(prepare)
Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::finished: {'info':
{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a',
'volumeID': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a.lease',
'imageID': '350fb787-049a-4174-8914-f371aabfa72c'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a',
'volumeID': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a.lease',
'imageID':
'350fb787-049a-4174-8914-f371aabfa72c'}]}
Thread-4262::DEBUG::2015-11-09
11:17:52,573::task::595::Storage.TaskManager.Task::(_updateState)
Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::moving from state preparing
-> state finished
Thread-4262::DEBUG::2015-11-09
11:17:52,573::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': < ResourceRef
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj:
'None'>}
Thread-4262::DEBUG::2015-11-09
11:17:52,573::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-4262::DEBUG::2015-11-09
11:17:52,573::resourceManager::616::Storage.ResourceManager::(releaseResource)
Trying to release resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'
Thread-4262::DEBUG::2015-11-09
11:17:52,573::resourceManager::635::Storage.ResourceManager::(releaseResource)
Released resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0
active users)
Thread-4262::DEBUG::2015-11-09
11:17:52,574::resourceManager::641::Storage.ResourceManager::(releaseResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free,
finding out if anyone is waiting for it.
Thread-4262::DEBUG::2015-11-09
11:17:52,574::resourceManager::649::Storage.ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', Clearing
records.
Thread-4262::DEBUG::2015-11-09
11:17:52,574::task::993::Storage.TaskManager.Task::(_decref)
Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::ref 0 aborting
False
Thread-4262::INFO::2015-11-09
11:17:52,576::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57859 stopped
Reactor thread::INFO::2015-11-09
11:17:52,610::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57860
Reactor
thread::DEBUG::2015-11-09
11:17:52,619::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,619::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from
127.0.0.1:57860
BindingXMLRPC::INFO::2015-11-09
11:17:52,620::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57860
Reactor thread::DEBUG::2015-11-09
11:17:52,620::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57860)
Thread-4263::INFO::2015-11-09
11:17:52,623::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57860 started
Thread-4263::DEBUG::2015-11-09
11:17:52,623::bindingxmlrpc::325::vds::(wrapper) client
[127.0.0.1]
Thread-4263::DEBUG::2015-11-09
11:17:52,624::task::595::Storage.TaskManager.Task::(_updateState)
Task=`6fd1d011-d931-4eca-b93b-c0fc3a1b4107`::moving from state init ->
state preparing
Thread-4263::INFO::2015-11-09
11:17:52,624::logUtils::48::dispatcher::(wrapper) Run and protect:
repoStats(options=None)
Thread-4263::INFO::2015-11-09
11:17:52,624::logUtils::51::dispatcher::(wrapper) Run and protect:
repoStats, Return response: {'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41':
{'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay':
'0.000392118', 'lastCheck': '6.0', 'valid': True},
u'0af99439-f140-4636-90f7-f43904735da0': {'code': 0, 'actual': True,
'version': 3, 'acquired': True, 'delay': '0.000382969', 'lastCheck':
'4.5', 'valid': True}}
Thread-4263::DEBUG::2015-11-09
11:17:52,624::task::1191::Storage.TaskManager.Task::(prepare)
Task=`6fd1d011-d931-4eca-b93b-c0fc3a1b4107`::finished:
{'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': {'code': 0, 'actual': True,
'version': 3, 'acquired': True, 'delay': '0.000392118', 'lastCheck':
'6.0', 'valid': True}, u'0af99439-f140-4636-90f7-f43904735da0': {'code':
0, 'actual': True, 'version': 3, 'acquired': True, 'delay':
'0.000382969', 'lastCheck': '4.5', 'valid':
True}}
Thread-4263::DEBUG::2015-11-09
11:17:52,625::task::595::Storage.TaskManager.Task::(_updateState)
Task=`6fd1d011-d931-4eca-b93b-c0fc3a1b4107`::moving from state preparing
-> state finished
Thread-4263::DEBUG::2015-11-09
11:17:52,625::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{}
Thread-4263::DEBUG::2015-11-09
11:17:52,625::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-4263::DEBUG::2015-11-09
11:17:52,625::task::993::Storage.TaskManager.Task::(_decref)
Task=`6fd1d011-d931-4eca-b93b-c0fc3a1b4107`::ref 0 aborting
False
Thread-4263::INFO::2015-11-09
11:17:52,627::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57860
stopped
mailbox.SPMMonitor::DEBUG::2015-11-09
11:17:52,829::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail)
dd
if=/rhev/data-center/00000001-0001-0001-0001-000000000230/mastersd/dom_md/inbox
iflag=direct,fullblock count=1 bs=1024000 (cwd
None)
mailbox.SPMMonitor::DEBUG::2015-11-09
11:17:52,845::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail)
SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB)
copied, 0.00494757 s, 207 MB/s\n'; <rc> = 0
--
Florent BELLO
Service Informatique
informatique(a)ville-kourou.fr
0594 22 31 22
Mairie de Kourou
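
For anyone following the trace above: each short-lived connection from
127.0.0.1:578xx is one local XML-RPC call into vdsm (connectStorageServer,
getStorageDomainStats, then prepareImage for each hosted-engine image, then
repoStats), and every one of them returns status 0, so plain storage access
looks healthy here while the gluster warnings come from the engine side. The
repoStats probe can be replayed by hand. The sketch below is a minimal
illustration only, assuming vdsm still accepts plain XML-RPC over HTTP from
localhost on its default port 54321 (as the "xml over http detected" lines
suggest) and the Python 2 stack these hosts shipped with; on an SSL-only
host, "vdsClient -s 0 repoStats" should be the CLI equivalent.

# Minimal sketch, not a supported tool: replay the repoStats call seen in
# the log. Assumptions: vdsm's legacy XML-RPC endpoint on 127.0.0.1:54321,
# and a 'status' entry added by the XML-RPC wrapper alongside the per-domain
# stats.
import xmlrpclib  # Python 2 stdlib, same era as the vdsm in these logs

server = xmlrpclib.ServerProxy('http://127.0.0.1:54321')
stats = server.repoStats()
for sd_uuid, info in stats.items():
    if sd_uuid == 'status':  # skip the wrapper's status entry, keep domains
        continue
    # mirrors the fields shown in the log: valid, delay, lastCheck
    print '%s valid=%s delay=%s lastCheck=%s' % (
        sd_uuid, info.get('valid'), info.get('delay'), info.get('lastCheck'))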
ead-4249::DEBUG::2015-11-09 11:17:49,963::netinfo::686::root::(_get_gateway=
) The gateway 10.10.10.254 is duplicated for the device ovirtmgmt<br />Thre=
ad-4249::DEBUG::2015-11-09 11:17:49,965::netinfo::440::root::(_dhcp_used) T=
here is no VDSM network configured on enp2s0.<br />Thread-4249::DEBUG::2015=
-11-09 11:17:49,965::netinfo::440::root::(_dhcp_used) There is no VDSM netw=
ork configured on enp2s0.<br />Thread-4249::DEBUG::2015-11-09 11:17:49,968:=
:netinfo::440::root::(_dhcp_used) There is no VDSM network configured on bo=
nd0.<br />Thread-4249::DEBUG::2015-11-09 11:17:49,968::netinfo::440::root::=
(_dhcp_used) There is no VDSM network configured on bond0.<br />Thread-4249=
::DEBUG::2015-11-09 11:17:49,970::netinfo::686::root::(_get_gateway) The ga=
teway 10.10.10.254 is duplicated for the device ovirtmgmt<br />Thread-4249:=
:DEBUG::2015-11-09 11:17:49,971::utils::676::root::(execCmd) /usr/sbin/tc q=
disc show (cwd None)<br />Thread-4249::DEBUG::2015-11-09 11:17:49,979::util=
s::694::root::(execCmd) SUCCESS: <err> =3D ''; <rc> =3D 0<br />=
Thread-4249::DEBUG::2015-11-09 11:17:49,980::utils::676::root::(execCmd) /u=
sr/sbin/tc class show dev enp2s0 classid 0:1388 (cwd None)<br />Thread-4249=
::DEBUG::2015-11-09 11:17:49,989::utils::694::root::(execCmd) SUCCESS: <=
err> =3D ''; <rc> =3D 0<br />Thread-4249::DEBUG::2015-11-09 11:17:=
49,993::caps::807::root::(_getKeyPackages) rpm package ('glusterfs-rdma',) =
not found<br />Thread-4249::DEBUG::2015-11-09 11:17:49,997::caps::807::root=
::(_getKeyPackages) rpm package ('gluster-swift-object',) not found<br />Th=
read-4249::DEBUG::2015-11-09 11:17:49,997::caps::807::root::(_getKeyPackage=
s) rpm package ('gluster-swift-proxy',) not found<br />Thread-4249::DEBUG::=
2015-11-09 11:17:50,001::caps::807::root::(_getKeyPackages) rpm package ('g=
luster-swift-plugin',) not found<br />Thread-4249::DEBUG::2015-11-09 11:17:=
50,002::caps::807::root::(_getKeyPackages) rpm package ('gluster-swift',) n=
ot found<br />Thread-4249::DEBUG::2015-11-09 11:17:50,002::caps::807::root:=
:(_getKeyPackages) rpm package ('gluster-swift-container',) not found<br />=
Thread-4249::DEBUG::2015-11-09 11:17:50,003::caps::807::root::(_getKeyPacka=
ges) rpm package ('gluster-swift-account',) not found<br />Thread-4249::DEB=
UG::2015-11-09 11:17:50,003::caps::807::root::(_getKeyPackages) rpm package=
('gluster-swift-doc',) not found<br />Thread-4249::DEBUG::2015-11-09 11:17=
:50,005::bindingxmlrpc::1264::vds::(wrapper) return getCapabilities with {'=
status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory': {'iSCSI':=
[{'InitiatorName': 'iqn.1994-05.com.redhat:1954deeb7a38'}], 'FC': []}, 'pa=
ckages2': {'kernel': {'release': '229.20.1.el7.x86_64', 'buildtime': 144658=
8607.0, 'version': '3.10.0'}, 'glusterfs-fuse': {'release': '1.el7', 'build=
time': 1444235292L, 'version': '3.7.5'}, 'spice-server': {'release': '9.el7=
_1.3', 'buildtime': 1444691699L, 'version': '0.12.4'}, 'librbd1': {'release=
': '2.el7', 'buildtime': 1425594433L, 'version': '0.80.7'}, 'vdsm': {'relea=
se': '0.el7.centos', 'buildtime': 1446474396L, 'version': '4.17.10.1'}, 'qe=
mu-kvm': {'release': '29.1.el7', 'buildtime': 1444310806L, 'version': '2.3=
=2E0'}, 'glusterfs': {'release': '1.el7', 'buildtime': 1444235292L, 'versio=
n': '3.7.5'}, 'libvirt': {'release': '16.el7_1.5', 'buildtime': 1446559281L=
, 'version': '1.2.8'}, 'qemu-img': {'release': '29.1.el7', 'buildtime': 144=
4310806L, 'version': '2.3.0'}, 'mom': {'release': '2.el7', 'buildtime': 144=
2501481L, 'version': '0.5.1'}, 'glusterfs-geo-replication': {'release': '1=
=2Eel7', 'buildtime': 1444235292L, 'version': '3.7.5'}, 'glusterfs-server':=
{'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}, 'glust=
erfs-cli': {'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7=
=2E5'}}, 'numaNodeDistance': {'0': [10]}, 'cpuModel': 'Intel(R) Core(TM)2 Q=
uad CPU Q8400 @ 2.66GHz', 'liveMerge': 'true', 'hoo=
ks': {'before_vm_start': {'50_hostedengine': {'md5': '2a6d96c26a3599812be6c=
f1a13d9f485'}}}, 'vmTypes': ['kvm'], 'selinux': {'mode': '-1'}, 'liveSnapsh=
ot': 'true', 'kdumpStatus': 0, 'networks': {'ovirtmgmt': {'iface': 'ovirtmg=
mt', 'addr': '10.10.10.211', 'cfg': {'IPV6INIT': 'no', 'DEFROUTE': 'yes', '=
IPADDR': '10.10.10.211', 'HOTPLUG': 'no', 'GATEWAY': '10.10.10.254', 'DELAY=
': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'no=
ne', 'STP': 'off', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'=
}, 'bridged': True, 'ipv6addrs': ['fe80::6e62:6dff:feb3:3b72/64'], 'gateway=
': '10.10.10.254', 'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6': F=
alse, 'stp': 'off', 'ipv4addrs': ['10.10.10.211/24'], 'mtu': '1500', 'ipv6g=
ateway': '::', 'ports': ['vnet0', 'enp2s0']}}, 'bridges': {'ovirtmgmt': {'a=
ddr': '10.10.10.211', 'cfg': {'IPV6INIT': 'no', 'DEFROUTE': 'yes', 'IPADDR'=
: '10.10.10.211', 'HOTPLUG': 'no', 'GATEWAY': '10.10.10.254', 'DELAY': '0',=
'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'S=
TP': 'off', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ipv=
6addrs': ['fe80::6e62:6dff:feb3:3b72/64'], 'gateway': '10.10.10.254', 'dhcp=
v4': False, 'netmask': '255.255.255.0', 'dhcpv6': False, 'stp': 'off', 'ipv=
4addrs': ['10.10.10.211/24'], 'mtu': '1500', 'ipv6gateway': '::', 'ports': =
['vnet0', 'enp2s0'], 'opts': {'multicast_last_member_count': '2', 'hash_ela=
sticity': '4', 'multicast_query_response_interval': '1000', 'group_fwd_mask=
': '0x0', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3=
125', 'hello_timer': '172', 'multicast_querier_interval': '25500', 'max_age=
': '2000', 'hash_max': '512', 'stp_state': '0', 'topology_change_detected':=
'0', 'priority': '32768', 'multicast_membership_interval': '26000', 'root_=
path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_sta=
rtup_query_count': '2', 'nf_call_iptables': '0', 'topology_change': '0', 'h=
ello_time': '200', 'root_id': '8000.6c626db33b72', 'bridge_id': '8000.6c626=
db33b72', 'topology_change_timer': '0', 'ageing_time': '30000', 'nf_call_ip=
6tables': '0', 'gc_timer': '25099', 'nf_call_arptables': '0', 'group_addr':=
'1:80:c2:0:0:0', 'multicast_last_member_interval': '100', 'default_pvid': =
'1', 'multicast_query_interval': '12500', 'tcn_timer': '0', 'multicast_rout=
er': '1', 'vlan_filtering': '0', 'forward_delay': '0'}}}, 'uuid': 'c2cac9d6=
-9ed7-44f0-8bbc-eff4c71db7ca', 'onlineCpus': '0,1,2,3', 'nics': {'enp2s0': =
{'addr': '', 'ipv6gateway': '::', 'ipv6addrs': ['fe80::6e62:6dff:feb3:3b72/=
64'], 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4=
addrs': [], 'cfg': {'BRIDGE': 'ovirtmgmt', 'IPV6INIT': 'no', 'NM_CONTROLLED=
': 'no', 'HWADDR': '6c:62:6d:b3:3b:72', 'BOOTPROTO': 'none', 'DEVICE': 'enp=
2s0', 'ONBOOT': 'yes'}, 'hwaddr': '6c:62:6d:b3:3b:72', 'speed': 1000, 'gate=
way': ''}}, 'software_revision': '0', 'hostdevPassthrough': 'false', 'clust=
erLevels': ['3.4', '3.5', '3.6'], 'cpuFlags': 'fpu,vme,de,pse,tsc,msr,pae,m=
ce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,s=
se2,ss,ht,tm,pbe,syscall,nx,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,=
nopl,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,=
sse4_1,xsave,lahf_lm,dtherm,tpr_shadow,vnmi,flexpriority,model_Conroe,model=
_coreduo,model_core2duo,model_Penryn,model_n270', 'ISCSIInitiatorName': 'iq=
n.1994-05.com.redhat:1954deeb7a38', 'netConfigDirty': 'False', 'supportedEN=
GINEs': ['3.4', '3.5', '3.6'], 'autoNumaBalancing': 0, 'additionalFeatures'=
: ['GLUSTER_SNAPSHOT', 'GLUSTER_GEO_REPLICATION', 'GLUSTER_BRICK_MANAGEMENT=
'], 'reservedMem': '321', 'bondings': {'bond0': {'ipv4addrs': [], 'addr': '=
', 'cfg': {'BOOTPROTO': 'none'}, 'ipv6addrs': [], 'active_slave': '', 'mtu'=
: '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'slaves': [], 'h=
waddr': 'ba:5f:22:a3:17:07', 'ipv6gateway': '::', 'gateway': '', 'opts': {}=
}}, 'software_version': '4.17', 'memSize': '3782', 'cpuSpeed': '2670.000', =
'numaNodes': {'0': {'totalMemory': '3782', 'cpus': [0, 1, 2, 3]}}, 'cpuSock=
ets': '1', 'vlans': {}, 'lastClientIface': 'lo', 'cpuCores': '4', 'kvmEnabl=
ed': 'true', 'guestOverhead': '65', 'version_name': 'Snow Man', 'cpuThreads=
': '4', 'emulatedMachines': ['pc-i440fx-rhel7.1.0', 'rhel6.3.0', 'pc-q35-rh=
el7.2.0', 'pc-i440fx-rhel7.0.0', 'rhel6.1.0', 'rhel6.6.0', 'rhel6.2.0', 'pc=
', 'pc-q35-rhel7.0.0', 'pc-q35-rhel7.1.0', 'q35', 'pc-i440fx-rhel7.2.0', 'r=
hel6.4.0', 'rhel6.0.0', 'rhel6.5.0'], 'rngSources': ['random'], 'operatingS=
ystem': {'release': '1.1503.el7.centos.2.8', 'version': '7', 'name': 'RHEL'=
}, 'lastClient': '127.0.0.1'}}<br />Thread-4249::INFO::2015-11-09 11:17:50,=
020::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for =
127.0.0.1:57852 stopped<br />mailbox.SPMMonitor::DEBUG::2015-11-09 11:17:50=
,797::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail) dd if=3D/r=
hev/data-center/00000001-0001-0001-0001-000000000230/mastersd/dom_md/inbox =
iflag=3Ddirect,fullblock count=3D1 bs=3D1024000 (cwd None)<br />mailbox.SPM=
Monitor::DEBUG::2015-11-09 11:17:50,815::storage_mailbox::735::Storage.Misc=
=2EexcCmd::(_checkForMail) SUCCESS: <err> =3D '1+0 records in\n1+0 re=
cords out\n1024000 bytes (1.0 MB) copied, 0.00511026 s, 200 MB/s\n'; <rc=
> =3D 0<br />Reactor thread::INFO::2015-11-09 11:17:52,098::protocoldete=
ctor::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connecti=
on from 127.0.0.1:57853<br />Reactor thread::DEBUG::2015-11-09 11:17:52,106=
::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using require=
d_size=3D11<br />Reactor thread::INFO::2015-11-09 11:17:52,107::protocoldet=
ector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml =
from 127.0.0.1:57853<br />Reactor thread::DEBUG::2015-11-09 11:17:52,107::b=
indingxmlrpc::1297::XmlDetector::(handle_socket) xml over http detected fro=
m ('127.0.0.1', 57853)<br />BindingXMLRPC::INFO::2015-11-09 11:17:52,108::x=
mlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for =
127.0.0.1:57853<br />Thread-4250::INFO::2015-11-09 11:17:52,110::xmlrpc::84=
::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:57853=
started<br />Thread-4250::DEBUG::2015-11-09 11:17:52,111::bindingxmlrpc::1=
257::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {}<br =
/>Thread-4250::DEBUG::2015-11-09 11:17:52,112::bindingxmlrpc::1264::vds::(w=
rapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': =
0}, 'info': {'systemProductName': 'MS-7529', 'systemSerialNumber': 'To Be F=
illed By O.E.M.', 'systemFamily': 'To Be Filled By O.E.M.', 'systemVersion'=
: '1.0', 'systemUUID': '00000000-0000-0000-0000-6C626DB33B72', 'systemManuf=
acturer': 'MICRO-STAR INTERNATIONAL CO.,LTD'}}<br />Thread-4250::INFO::2015=
-11-09 11:17:52,114::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Requ=
est handler for 127.0.0.1:57853 stopped<br />Reactor thread::INFO::2015-11-=
09 11:17:52,116::protocoldetector::72::ProtocolDetector.AcceptorImpl::(hand=
le_accept) Accepting connection from 127.0.0.1:57854<br />Reactor thread::D=
EBUG::2015-11-09 11:17:52,124::protocoldetector::82::ProtocolDetector.Detec=
tor::(__init__) Using required_size=3D11<br />Reactor thread::INFO::2015-11=
-09 11:17:52,124::protocoldetector::118::ProtocolDetector.Detector::(handle=
_read) Detected protocol xml from 127.0.0.1:57854<br />BindingXMLRPC::INFO:=
:2015-11-09 11:17:52,125::xmlrpc::73::vds.XMLRPCServer::(handle_request) St=
arting request handler for 127.0.0.1:57854<br />Reactor thread::DEBUG::2015=
-11-09 11:17:52,125::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml =
over http detected from ('127.0.0.1', 57854)<br />Thread-4251::INFO::2015-1=
1-09 11:17:52,128::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Reques=
t handler for 127.0.0.1:57854 started<br />Thread-4251::DEBUG::2015-11-09 1=
1:17:52,129::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]<br />Thr=
ead-4251::DEBUG::2015-11-09 11:17:52,130::task::595::Storage.TaskManager.Ta=
sk::(_updateState) Task=3D`8535d95e-dce6-4474-bd8d-7824f68cf68a`::moving fr=
om state init -> state preparing<br />Thread-4251::INFO::2015-11-09 11:1=
7:52,130::logUtils::48::dispatcher::(wrapper) Run and protect: connectStora=
geServer(domType=3D7, spUUID=3D'00000000-0000-0000-0000-000000000000', conL=
ist=3D[{'id': '2c69bdcf-793b-4fda-b326-b8aa6c33ade0', 'vfs_type': 'glusterf=
s', 'connection': 'ovirt02.mafia.kru:/engine', 'user': 'kvm'}], options=3DN=
one)<br />Thread-4251::DEBUG::2015-11-09 11:17:52,132::hsm::2417::Storage=
=2EHSM::(__prefetchDomains) glusterDomPath: glusterSD/*<br />Thread-4251::D=
EBUG::2015-11-09 11:17:52,146::hsm::2429::Storage.HSM::(__prefetchDomains) =
Found SD uuids: (u'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', u'0af99439-f140-4=
636-90f7-f43904735da0')<br />Thread-4251::DEBUG::2015-11-09 11:17:52,147::h=
sm::2489::Storage.HSM::(connectStorageServer) knownSDs: {b4c488af-9d2f-4b7b=
-a6f6-74a0bac06c41: storage.glusterSD.findDomain, 0af99439-f140-4636-90f7-f=
43904735da0: storage.glusterSD.findDomain}<br />Thread-4251::INFO::2015-11-=
09 11:17:52,147::logUtils::51::dispatcher::(wrapper) Run and protect: conne=
ctStorageServer, Return response: {'statuslist': [{'status': 0, 'id': '2c69=
bdcf-793b-4fda-b326-b8aa6c33ade0'}]}<br />Thread-4251::DEBUG::2015-11-09 11=
:17:52,147::task::1191::Storage.TaskManager.Task::(prepare) Task=3D`8535d95=
e-dce6-4474-bd8d-7824f68cf68a`::finished: {'statuslist': [{'status': 0, 'id=
': '2c69bdcf-793b-4fda-b326-b8aa6c33ade0'}]}<br />Thread-4251::DEBUG::2015-=
11-09 11:17:52,147::task::595::Storage.TaskManager.Task::(_updateState) Tas=
k=3D`8535d95e-dce6-4474-bd8d-7824f68cf68a`::moving from state preparing -&g=
t; state finished<br />Thread-4251::DEBUG::2015-11-09 11:17:52,147::resourc=
eManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll=
requests {} resources {}<br />Thread-4251::DEBUG::2015-11-09 11:17:52,148:=
:resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.can=
celAll requests {}<br />Thread-4251::DEBUG::2015-11-09 11:17:52,148::task::=
993::Storage.TaskManager.Task::(_decref) Task=3D`8535d95e-dce6-4474-bd8d-78=
24f68cf68a`::ref 0 aborting False<br />Thread-4251::INFO::2015-11-09 11:17:=
52,149::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler f=
or 127.0.0.1:57854 stopped<br />Reactor thread::INFO::2015-11-09 11:17:52,1=
50::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Ac=
cepting connection from 127.0.0.1:57855<br />Reactor thread::DEBUG::2015-11=
-09 11:17:52,158::protocoldetector::82::ProtocolDetector.Detector::(__init_=
_) Using required_size=3D11<br />Reactor thread::INFO::2015-11-09 11:17:52,=
158::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detect=
ed protocol xml from 127.0.0.1:57855<br />BindingXMLRPC::INFO::2015-11-09 1=
1:17:52,159::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting reques=
t handler for 127.0.0.1:57855<br />Reactor thread::DEBUG::2015-11-09 11:17:=
52,159::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over http det=
ected from ('127.0.0.1', 57855)<br />Thread-4255::INFO::2015-11-09 11:17:52=
,162::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for=
127.0.0.1:57855 started<br />Thread-4255::DEBUG::2015-11-09 11:17:52,162::=
bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]<br />Thread-4255::DEB=
UG::2015-11-09 11:17:52,163::task::595::Storage.TaskManager.Task::(_updateS=
tate) Task=3D`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::moving from state init=
-> state preparing<br />Thread-4255::INFO::2015-11-09 11:17:52,163::log=
Utils::48::dispatcher::(wrapper) Run and protect: getStorageDomainStats(sdU=
UID=3D'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', options=3DNone)<br />Thread-4=
255::DEBUG::2015-11-09 11:17:52,164::resourceManager::198::Storage.Resource=
Manager.Request::(__init__) ResName=3D`Storage.b4c488af-9d2f-4b7b-a6f6-74a0=
bac06c41`ReqID=3D`41531754-8ba9-4fc4-8788-d4d67fa33e5c`::Request was made i=
n '/usr/share/vdsm/storage/hsm.py' line '2848' at 'getStorageDomainStats'<b=
r />Thread-4255::DEBUG::2015-11-09 11:17:52,164::resourceManager::542::Stor=
age.ResourceManager::(registerResource) Trying to register resource 'Storag=
e.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock type 'shared'<br />Thread-=
4255::DEBUG::2015-11-09 11:17:52,164::resourceManager::601::Storage.Resourc=
eManager::(registerResource) Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0=
bac06c41' is free. Now locking as 'shared' (1 active user)<br />Thread-4255=
::DEBUG::2015-11-09 11:17:52,164::resourceManager::238::Storage.ResourceMan=
ager.Request::(grant) ResName=3D`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c=
41`ReqID=3D`41531754-8ba9-4fc4-8788-d4d67fa33e5c`::Granted request<br />Thr=
ead-4255::DEBUG::2015-11-09 11:17:52,165::task::827::Storage.TaskManager.Ta=
sk::(resourceAcquired) Task=3D`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::_reso=
urcesAcquired: Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 (shared)<br />T=
hread-4255::DEBUG::2015-11-09 11:17:52,165::task::993::Storage.TaskManager=
=2ETask::(_decref) Task=3D`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::ref 1 abo=
rting False<br />Thread-4255::DEBUG::2015-11-09 11:17:52,165::misc::750::St=
orage.SamplingMethod::(__call__) Trying to enter sampling method (storage=
=2Esdc.refreshStorage)<br />Thread-4255::DEBUG::2015-11-09 11:17:52,165::mi=
sc::753::Storage.SamplingMethod::(__call__) Got in to sampling method<br />=
Thread-4255::DEBUG::2015-11-09 11:17:52,165::misc::750::Storage.SamplingMet=
hod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)<br /=
>Thread-4255::DEBUG::2015-11-09 11:17:52,165::misc::753::Storage.SamplingMe=
thod::(__call__) Got in to sampling method<br />Thread-4255::DEBUG::2015-11=
-09 11:17:52,166::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan,=
this will take up to 30 seconds<br />Thread-4255::DEBUG::2015-11-09 11:17:=
52,166::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin=
/iscsiadm -m session -R (cwd None)<br />Thread-4255::DEBUG::2015-11-09 11:1=
7:52,183::misc::760::Storage.SamplingMethod::(__call__) Returning last resu=
lt<br />Thread-4255::DEBUG::2015-11-09 11:17:52,183::misc::750::Storage.Sam=
plingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan=
)<br />Thread-4255::DEBUG::2015-11-09 11:17:52,184::misc::753::Storage.Samp=
lingMethod::(__call__) Got in to sampling method<br />Thread-4255::DEBUG::2=
015-11-09 11:17:52,184::hba::56::Storage.HBA::(rescan) Starting scan<br />T=
hread-4255::DEBUG::2015-11-09 11:17:52,295::hba::62::Storage.HBA::(rescan) =
Scan finished<br />Thread-4255::DEBUG::2015-11-09 11:17:52,296::misc::760::=
Storage.SamplingMethod::(__call__) Returning last result<br />Thread-4255::=
DEBUG::2015-11-09 11:17:52,296::multipath::77::Storage.Misc.excCmd::(rescan=
) /usr/bin/sudo -n /usr/sbin/multipath (cwd None)<br />Thread-4255::DEBUG::=
2015-11-09 11:17:52,362::multipath::77::Storage.Misc.excCmd::(rescan) SUCCE=
SS: <err> =3D ''; <rc> =3D 0<br />Thread-4255::DEBUG::2015-11-0=
9 11:17:52,362::utils::676::root::(execCmd) /sbin/udevadm settle --timeout=
=3D5 (cwd None)<br />Thread-4255::DEBUG::2015-11-09 11:17:52,371::utils::69=
4::root::(execCmd) SUCCESS: <err> =3D ''; <rc> =3D 0<br />Threa=
d-4255::DEBUG::2015-11-09 11:17:52,372::lvm::498::Storage.OperationMutex::(=
_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation m=
utex<br />Thread-4255::DEBUG::2015-11-09 11:17:52,372::lvm::500::Storage.Op=
erationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' rele=
ased the operation mutex<br />Thread-4255::DEBUG::2015-11-09 11:17:52,373::=
lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invali=
date operation' got the operation mutex<br />Thread-4255::DEBUG::2015-11-09=
11:17:52,373::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operat=
ion 'lvm invalidate operation' released the operation mutex<br />Thread-425=
5::DEBUG::2015-11-09 11:17:52,373::lvm::529::Storage.OperationMutex::(_inva=
lidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex<=
br />Thread-4255::DEBUG::2015-11-09 11:17:52,373::lvm::531::Storage.Operati=
onMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released =
the operation mutex<br />Thread-4255::DEBUG::2015-11-09 11:17:52,374::misc:=
:760::Storage.SamplingMethod::(__call__) Returning last result<br />Thread-=
4255::DEBUG::2015-11-09 11:17:52,386::fileSD::157::Storage.StorageDomainMan=
ifest::(__init__) Reading domain in path /rhev/data-center/mnt/glusterSD/ov=
irt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41<br />Thread-42=
55::DEBUG::2015-11-09 11:17:52,387::persistentDict::192::Storage.Persistent=
Dict::(__init__) Created a persistent dict with FileMetadataRW backend<br /=
>Thread-4255::DEBUG::2015-11-09 11:17:52,395::persistentDict::234::Storage=
=2EPersistentDict::(refresh) read lines (FileMetadataRW)=3D['CLASS=3DData',=
'DESCRIPTION=3Dhosted_storage', 'IOOPTIMEOUTSEC=3D10', 'LEASERETRIES=3D3',=
'LEASETIMESEC=3D60', 'LOCKPOLICY=3D', 'LOCKRENEWALINTERVALSEC=3D5', 'POOL_=
UUID=3D', 'REMOTE_PATH=3Dovirt02.mafia.kru:/engine', 'ROLE=3DRegular', 'SDU=
UID=3Db4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'TYPE=3DGLUSTERFS', 'VERSION=
=3D3', '_SHA_CKSUM=3Dcb09606ada74ed4155ad158923dd930264780fc8']<br />Thread=
-4255::DEBUG::2015-11-09 11:17:52,398::fileSD::647::Storage.StorageDomain::=
(imageGarbageCollector) Removing remnants of deleted images []<br />Thread-=
4255::INFO::2015-11-09 11:17:52,399::sd::442::Storage.StorageDomain::(_regi=
sterResourceNamespaces) Resource namespace b4c488af-9d2f-4b7b-a6f6-74a0bac0=
6c41_imageNS already registered<br />Thread-4255::INFO::2015-11-09 11:17:52=
,399::sd::450::Storage.StorageDomain::(_registerResourceNamespaces) Resourc=
e namespace b4c488af-9d2f-4b7b-a6f6-74a0bac06c41_volumeNS already registere=
d<br />Thread-4255::INFO::2015-11-09 11:17:52,400::logUtils::51::dispatcher=
::(wrapper) Run and protect: getStorageDomainStats, Return response: {'stat=
s': {'mdasize': 0, 'mdathreshold': True, 'mdavalid': True, 'diskfree': '210=
878988288', 'disktotal': '214643507200', 'mdafree': 0}}<br />Thread-4255::D=
EBUG::2015-11-09 11:17:52,401::task::1191::Storage.TaskManager.Task::(prepa=
re) Task=3D`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::finished: {'stats': {'md=
asize': 0, 'mdathreshold': True, 'mdavalid': True, 'diskfree': '21087898828=
8', 'disktotal': '214643507200', 'mdafree': 0}}<br />Thread-4255::DEBUG::20=
15-11-09 11:17:52,401::task::595::Storage.TaskManager.Task::(_updateState) =
Task=3D`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::moving from state preparing =
-> state finished<br />Thread-4255::DEBUG::2015-11-09 11:17:52,401::reso=
urceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.release=
All requests {} resources {'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': =
< ResourceRef 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', isValid: '=
True' obj: 'None'>}<br />Thread-4255::DEBUG::2015-11-09 11:17:52,401::re=
sourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancel=
All requests {}<br />Thread-4255::DEBUG::2015-11-09 11:17:52,402::resourceM=
anager::616::Storage.ResourceManager::(releaseResource) Trying to release r=
esource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'<br />Thread-4255::DE=
BUG::2015-11-09 11:17:52,402::resourceManager::635::Storage.ResourceManager=
::(releaseResource) Released resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0=
bac06c41' (0 active users)<br />Thread-4255::DEBUG::2015-11-09 11:17:52,402=
::resourceManager::641::Storage.ResourceManager::(releaseResource) Resource=
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free, finding out if any=
one is waiting for it.<br />Thread-4255::DEBUG::2015-11-09 11:17:52,402::re=
sourceManager::649::Storage.ResourceManager::(releaseResource) No one is wa=
iting for resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', Clearing=
records.<br />Thread-4255::DEBUG::2015-11-09 11:17:52,402::task::993::Stor=
age.TaskManager.Task::(_decref) Task=3D`04d7087f-f948-4221-8c0f-3e07e7d9bed=
8`::ref 0 aborting False<br />Thread-4255::INFO::2015-11-09 11:17:52,404::x=
mlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0=
=2E0.1:57855 stopped<br />Reactor thread::INFO::2015-11-09 11:17:52,405::pr=
otocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Acceptin=
g connection from 127.0.0.1:57856<br />Reactor thread::DEBUG::2015-11-09 11=
:17:52,413::protocoldetector::82::ProtocolDetector.Detector::(__init__) Usi=
ng required_size=3D11<br />Reactor thread::INFO::2015-11-09 11:17:52,414::p=
rotocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected pro=
tocol xml from 127.0.0.1:57856<br />BindingXMLRPC::INFO::2015-11-09 11:17:5=
2,414::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request hand=
ler for 127.0.0.1:57856<br />Reactor thread::DEBUG::2015-11-09 11:17:52,414=
::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over http detected =
from ('127.0.0.1', 57856)<br />Thread-4259::INFO::2015-11-09 11:17:52,417::=
xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127=
=2E0.0.1:57856 started<br />Thread-4259::DEBUG::2015-11-09 11:17:52,418::bi=
ndingxmlrpc::325::vds::(wrapper) client [127.0.0.1]<br />Thread-4259::DEBUG=
::2015-11-09 11:17:52,418::task::595::Storage.TaskManager.Task::(_updateSta=
te) Task=3D`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::moving from state init -=
> state preparing<br />Thread-4259::INFO::2015-11-09 11:17:52,419::logUt=
ils::48::dispatcher::(wrapper) Run and protect: prepareImage(sdUUID=3D'b4c4=
88af-9d2f-4b7b-a6f6-74a0bac06c41', spUUID=3D'00000000-0000-0000-0000-000000=
000000', imgUUID=3D'56461302-0710-4df0-964d-5e7b1ff07828', leafUUID=3D'8f8e=
e034-de86-4438-b6eb-9109faa8b3d3')<br />Thread-4259::DEBUG::2015-11-09 11:1=
7:52,419::resourceManager::198::Storage.ResourceManager.Request::(__init__)=
ResName=3D`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=3D`bec7c8c3-=
42b9-4acb-88cf-841d9dc28fb0`::Request was made in '/usr/share/vdsm/storage/=
hsm.py' line '3205' at 'prepareImage'<br />Thread-4259::DEBUG::2015-11-09 1=
1:17:52,419::resourceManager::542::Storage.ResourceManager::(registerResour=
ce) Trying to register resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c=
41' for lock type 'shared'<br />Thread-4259::DEBUG::2015-11-09 11:17:52,420=
::resourceManager::601::Storage.ResourceManager::(registerResource) Resourc=
e 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now locking as 's=
hared' (1 active user)<br />Thread-4259::DEBUG::2015-11-09 11:17:52,420::re=
sourceManager::238::Storage.ResourceManager.Request::(grant) ResName=3D`Sto=
rage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=3D`bec7c8c3-42b9-4acb-88cf-=
841d9dc28fb0`::Granted request<br />Thread-4259::DEBUG::2015-11-09 11:17:52=
,420::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=3D`9dbe0=
1b2-e3e0-466b-90e1-b9803dfce88b`::_resourcesAcquired: Storage.b4c488af-9d2f=
-4b7b-a6f6-74a0bac06c41 (shared)<br />Thread-4259::DEBUG::2015-11-09 11:17:=
52,420::task::993::Storage.TaskManager.Task::(_decref) Task=3D`9dbe01b2-e3e=
0-466b-90e1-b9803dfce88b`::ref 1 aborting False<br />Thread-4259::DEBUG::20=
15-11-09 11:17:52,445::fileSD::536::Storage.StorageDomain::(activateVolumes=
) Fixing permissions on /rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_=
engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-=
5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3<br />Thread-4259::DEBUG::=
2015-11-09 11:17:52,446::fileUtils::143::Storage.fileUtils::(createdir) Cre=
ating directory: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41=
mode: None<br />Thread-4259::WARNING::2015-11-09 11:17:52,446::fileUtils::=
152::Storage.fileUtils::(createdir) Dir /var/run/vdsm/storage/b4c488af-9d2f=
-4b7b-a6f6-74a0bac06c41 already exists<br />Thread-4259::DEBUG::2015-11-09 =
11:17:52,446::fileSD::511::Storage.StorageDomain::(createImageLinks) Creati=
ng symlink from /rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b=
4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff0=
7828 to /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/56461302=
-0710-4df0-964d-5e7b1ff07828<br />Thread-4259::DEBUG::2015-11-09 11:17:52,4=
47::fileSD::516::Storage.StorageDomain::(createImageLinks) img run dir alre=
ady exists: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/5646=
1302-0710-4df0-964d-5e7b1ff07828<br />Thread-4259::DEBUG::2015-11-09 11:17:=
52,448::fileVolume::535::Storage.Volume::(validateVolumePath) validate path=
for 8f8ee034-de86-4438-b6eb-9109faa8b3d3<br />Thread-4259::INFO::2015-11-0=
9 11:17:52,450::logUtils::51::dispatcher::(wrapper) Run and protect: prepar=
eImage, Return response: {'info': {'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a=
0bac06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-cente=
r/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06=
c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-910=
9faa8b3d3', 'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath=
': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2=
f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee0=
34-de86-4438-b6eb-9109faa8b3d3.lease', 'imageID': '56461302-0710-4df0-964d-=
5e7b1ff07828'}, 'path': u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a=
0bac06c41/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109=
faa8b3d3', 'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac=
06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mn=
t/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/=
images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa=
8b3d3', 'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath': u=
'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b=
7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-d=
e86-4438-b6eb-9109faa8b3d3.lease', 'imageID': '56461302-0710-4df0-964d-5e7b=
1ff07828'}]}<br />Thread-4259::DEBUG::2015-11-09 11:17:52,450::task::1191::=
Storage.TaskManager.Task::(prepare) Task=3D`9dbe01b2-e3e0-466b-90e1-b9803df=
ce88b`::finished: {'info': {'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c=
41', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/g=
lusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/ima=
ges/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3=
d3', 'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath': u'/r=
hev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-=
a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86=
-4438-b6eb-9109faa8b3d3.lease', 'imageID': '56461302-0710-4df0-964d-5e7b1ff=
07828'}, 'path': u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c=
41/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d=
3', 'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',=
'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glust=
erSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/=
56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',=
'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath': u'/rhev/=
data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6=
-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-443=
8-b6eb-9109faa8b3d3.lease', 'imageID': '56461302-0710-4df0-964d-5e7b1ff0782=
8'}]}<br />Thread-4259::DEBUG::2015-11-09 11:17:52,450::task::595::Storage=
=2ETaskManager.Task::(_updateState) Task=3D`9dbe01b2-e3e0-466b-90e1-b9803df=
ce88b`::moving from state preparing -> state finished<br />Thread-4259::=
DEBUG::2015-11-09 11:17:52,450::resourceManager::940::Storage.ResourceManag=
er.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.b4c=
488af-9d2f-4b7b-a6f6-74a0bac06c41': < ResourceRef 'Storage.b4c488af-9d2f=
-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj: 'None'>}<br />Thread-4259=
::DEBUG::2015-11-09 11:17:52,450::resourceManager::977::Storage.ResourceMan=
ager.Owner::(cancelAll) Owner.cancelAll requests {}<br />Thread-4259::DEBUG=
::2015-11-09 11:17:52,451::resourceManager::616::Storage.ResourceManager::(=
releaseResource) Trying to release resource 'Storage.b4c488af-9d2f-4b7b-a6f=
6-74a0bac06c41'<br />Thread-4259::DEBUG::2015-11-09 11:17:52,451::resourceM=
anager::635::Storage.ResourceManager::(releaseResource) Released resource '=
Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0 active users)<br />Thread-=
4259::DEBUG::2015-11-09 11:17:52,451::resourceManager::641::Storage.Resourc=
eManager::(releaseResource) Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0b=
ac06c41' is free, finding out if anyone is waiting for it.<br />Thread-4259=
::DEBUG::2015-11-09 11:17:52,451::resourceManager::649::Storage.ResourceMan=
ager::(releaseResource) No one is waiting for resource 'Storage.b4c488af-9d=
2f-4b7b-a6f6-74a0bac06c41', Clearing records.<br />Thread-4259::DEBUG::2015=
-11-09 11:17:52,451::task::993::Storage.TaskManager.Task::(_decref) Task=3D=
`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::ref 0 aborting False<br />Thread-42=
59::INFO::2015-11-09 11:17:52,454::xmlrpc::92::vds.XMLRPCServer::(_process_=
requests) Request handler for 127.0.0.1:57856 stopped<br />Reactor thread::=
INFO::2015-11-09 11:17:52,454::protocoldetector::72::ProtocolDetector.Accep=
torImpl::(handle_accept) Accepting connection from 127.0.0.1:57857<br />Rea=
ctor thread::DEBUG::2015-11-09 11:17:52,463::protocoldetector::82::Protocol=
Detector.Detector::(__init__) Using required_size=3D11<br />Reactor thread:=
:INFO::2015-11-09 11:17:52,463::protocoldetector::118::ProtocolDetector.Det=
ector::(handle_read) Detected protocol xml from 127.0.0.1:57857<br />Reacto=
r thread::DEBUG::2015-11-09 11:17:52,464::bindingxmlrpc::1297::XmlDetector:=
:(handle_socket) xml over http detected from ('127.0.0.1', 57857)<br />Bind=
ingXMLRPC::INFO::2015-11-09 11:17:52,464::xmlrpc::73::vds.XMLRPCServer::(ha=
ndle_request) Starting request handler for 127.0.0.1:57857<br />Thread-4260=
::INFO::2015-11-09 11:17:52,466::xmlrpc::84::vds.XMLRPCServer::(_process_re=
quests) Request handler for 127.0.0.1:57857 started<br />Thread-4260::DEBUG=
::2015-11-09 11:17:52,467::bindingxmlrpc::325::vds::(wrapper) client [127=
=2E0.0.1]<br />Thread-4260::DEBUG::2015-11-09 11:17:52,467::task::595::Stor=
age.TaskManager.Task::(_updateState) Task=3D`aed16a50-ede9-4ff5-92ef-356692=
fd56ae`::moving from state init -> state preparing<br />Thread-4260::INF=
O::2015-11-09 11:17:52,467::logUtils::48::dispatcher::(wrapper) Run and pro=
tect: prepareImage(sdUUID=3D'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', spUUID=
=3D'00000000-0000-0000-0000-000000000000', imgUUID=3D'fd81353f-b654-4493-bc=
af-2f417849b830', leafUUID=3D'8bb29fcb-c109-4f0a-a227-3819b6ecfdd9')<br />T=
hread-4260::DEBUG::2015-11-09 11:17:52,468::resourceManager::198::Storage=
=2EResourceManager.Request::(__init__) ResName=3D`Storage.b4c488af-9d2f-4b7=
b-a6f6-74a0bac06c41`ReqID=3D`974119cd-1351-46e9-8062-ffb1298c4ac9`::Request=
was made in '/usr/share/vdsm/storage/hsm.py' line '3205' at 'prepareImage'=
<br />Thread-4260::DEBUG::2015-11-09 11:17:52,468::resourceManager::542::St=
orage.ResourceManager::(registerResource) Trying to register resource 'Stor=
age.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock type 'shared'<br />Threa=
d-4260::DEBUG::2015-11-09 11:17:52,468::resourceManager::601::Storage.Resou=
rceManager::(registerResource) Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74=
a0bac06c41' is free. Now locking as 'shared' (1 active user)<br />Thread-42=
60::DEBUG::2015-11-09 11:17:52,468::resourceManager::238::Storage.ResourceM=
anager.Request::(grant) ResName=3D`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac0=
6c41`ReqID=3D`974119cd-1351-46e9-8062-ffb1298c4ac9`::Granted request<br />T=
hread-4260::DEBUG::2015-11-09 11:17:52,469::task::827::Storage.TaskManager=
=2ETask::(resourceAcquired) Task=3D`aed16a50-ede9-4ff5-92ef-356692fd56ae`::=
_resourcesAcquired: Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 (shared)<b=
r />Thread-4260::DEBUG::2015-11-09 11:17:52,469::task::993::Storage.TaskMan=
ager.Task::(_decref) Task=3D`aed16a50-ede9-4ff5-92ef-356692fd56ae`::ref 1 a=
borting False<br />Thread-4260::DEBUG::2015-11-09 11:17:52,485::fileSD::536=
::Storage.StorageDomain::(activateVolumes) Fixing permissions on /rhev/data=
-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a=
0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a2=
27-3819b6ecfdd9<br />Thread-4260::DEBUG::2015-11-09 11:17:52,486::fileUtils=
::143::Storage.fileUtils::(createdir) Creating directory: /var/run/vdsm/sto=
rage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 mode: None<br />Thread-4260::WARN=
ING::2015-11-09 11:17:52,487::fileUtils::152::Storage.fileUtils::(createdir=
) Dir /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 already ex=
ists<br />Thread-4260::DEBUG::2015-11-09 11:17:52,487::fileSD::511::Storage=
=2EStorageDomain::(createImageLinks) Creating symlink from /rhev/data-cente=
r/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06=
c41/images/fd81353f-b654-4493-bcaf-2f417849b830 to /var/run/vdsm/storage/b4=
c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-4493-bcaf-2f417849b830<br =
/>Thread-4260::DEBUG::2015-11-09 11:17:52,487::fileSD::516::Storage.Storage=
Domain::(createImageLinks) img run dir already exists: /var/run/vdsm/storag=
e/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-4493-bcaf-2f417849b830=
<br />Thread-4260::DEBUG::2015-11-09 11:17:52,488::fileVolume::535::Storage=
=2EVolume::(validateVolumePath) validate path for 8bb29fcb-c109-4f0a-a227-3=
819b6ecfdd9<br />Thread-4260::INFO::2015-11-09 11:17:52,490::logUtils::51::=
dispatcher::(wrapper) Run and protect: prepareImage, Return response: {'inf=
o': {'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',=
'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia=
=2Ekru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-44=
93-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'volumeID': u'8=
bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath': u'/rhev/data-center/mnt/=
glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/im=
ages/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecf=
dd9.lease', 'imageID': 'fd81353f-b654-4493-bcaf-2f417849b830'}, 'path': u'/=
var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-449=
3-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'imgVolumesInfo'=
: [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path', =
'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia=
=2Ekru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-44=
93-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'volumeID': u'8=
bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath': u'/rhev/data-center/mnt/=
glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/im=
ages/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecf=
dd9.lease', 'imageID': 'fd81353f-b654-4493-bcaf-2f417849b830'}]}<br />Threa=
d-4260::DEBUG::2015-11-09 11:17:52,490::task::1191::Storage.TaskManager.Tas=
k::(prepare) Task=3D`aed16a50-ede9-4ff5-92ef-356692fd56ae`::finished: {'inf=
o': {'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',=
'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia=
=2Ekru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-44=
93-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'volumeID': u'8=
bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath': u'/rhev/data-center/mnt/=
glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/im=
ages/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecf=
dd9.lease', 'imageID': 'fd81353f-b654-4493-bcaf-2f417849b830'}, 'path': u'/=
var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-449=
3-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'imgVolumesInfo'=
: [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path', =
'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia=
=2Ekru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-44=
93-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'volumeID': u'8=
bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath': u'/rhev/data-center/mnt/=
glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/im=
ages/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecf=
dd9.lease', 'imageID': 'fd81353f-b654-4493-bcaf-2f417849b830'}]}<br />Threa=
d-4260::DEBUG::2015-11-09 11:17:52,490::task::595::Storage.TaskManager.Task=
::(_updateState) Task=3D`aed16a50-ede9-4ff5-92ef-356692fd56ae`::moving from=
state preparing -> state finished<br />Thread-4260::DEBUG::2015-11-09 1=
1:17:52,490::resourceManager::940::Storage.ResourceManager.Owner::(releaseA=
ll) Owner.releaseAll requests {} resources {'Storage.b4c488af-9d2f-4b7b-a6f=
6-74a0bac06c41': < ResourceRef 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac0=
6c41', isValid: 'True' obj: 'None'>}<br />Thread-4260::DEBUG::2015-11-09=
11:17:52,491::resourceManager::977::Storage.ResourceManager.Owner::(cancel=
All) Owner.cancelAll requests {}<br />Thread-4260::DEBUG::2015-11-09 11:17:=
52,491::resourceManager::616::Storage.ResourceManager::(releaseResource) Tr=
ying to release resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'<br =
/>Thread-4260::DEBUG::2015-11-09 11:17:52,491::resourceManager::635::Storag=
e.ResourceManager::(releaseResource) Released resource 'Storage.b4c488af-9d=
2f-4b7b-a6f6-74a0bac06c41' (0 active users)<br />Thread-4260::DEBUG::2015-1=
1-09 11:17:52,491::resourceManager::641::Storage.ResourceManager::(releaseR=
esource) Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free, f=
inding out if anyone is waiting for it.<br />Thread-4260::DEBUG::2015-11-09=
11:17:52,491::resourceManager::649::Storage.ResourceManager::(releaseResou=
rce) No one is waiting for resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0ba=
c06c41', Clearing records.<br />Thread-4260::DEBUG::2015-11-09 11:17:52,492=
::task::993::Storage.TaskManager.Task::(_decref) Task=3D`aed16a50-ede9-4ff5=
-92ef-356692fd56ae`::ref 0 aborting False<br />Thread-4260::INFO::2015-11-0=
9 11:17:52,494::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request h=
andler for 127.0.0.1:57857 stopped<br />Reactor thread::INFO::2015-11-09 11=
:17:52,494::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_ac=
cept) Accepting connection from 127.0.0.1:57858<br />Reactor thread::DEBUG:=
:2015-11-09 11:17:52,503::protocoldetector::82::ProtocolDetector.Detector::=
(__init__) Using required_size=3D11<br />Reactor thread::INFO::2015-11-09 1=
1:17:52,504::protocoldetector::118::ProtocolDetector.Detector::(handle_read=
) Detected protocol xml from 127.0.0.1:57858<br />BindingXMLRPC::INFO::2015=
-11-09 11:17:52,504::xmlrpc::73::vds.XMLRPCServer::(handle_request) Startin=
g request handler for 127.0.0.1:57858<br />Reactor thread::DEBUG::2015-11-0=
9 11:17:52,504::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over =
http detected from ('127.0.0.1', 57858)<br />Thread-4261::INFO::2015-11-09 =
11:17:52,507::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request han=
dler for 127.0.0.1:57858 started<br />Thread-4261::DEBUG::2015-11-09 11:17:=
52,508::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]<br />Thread-4=
261::DEBUG::2015-11-09 11:17:52,508::task::595::Storage.TaskManager.Task::(=
_updateState) Task=3D`d39463fd-486f-4280-903a-51b72862b648`::moving from st=
ate init -> state preparing<br />Thread-4261::INFO::2015-11-09 11:17:52,=
509::logUtils::48::dispatcher::(wrapper) Run and protect: prepareImage(sdUU=
ID=3D'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', spUUID=3D'00000000-0000-0000-0=
000-000000000000', imgUUID=3D'0e1c20d1-94aa-4003-8e12-0dbbf06a6af8', leafUU=
ID=3D'3fc3362d-ab6d-4e06-bd72-82d5750c7095')<br />Thread-4261::DEBUG::2015-=
11-09 11:17:52,509::resourceManager::198::Storage.ResourceManager.Request::=
(__init__) ResName=3D`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=3D=
`240c2aba-6c2e-44da-890d-c3d605e1933f`::Request was made in '/usr/share/vds=
m/storage/hsm.py' line '3205' at 'prepareImage'<br />Thread-4261::DEBUG::20=
15-11-09 11:17:52,509::resourceManager::542::Storage.ResourceManager::(regi=
sterResource) Trying to register resource 'Storage.b4c488af-9d2f-4b7b-a6f6-=
74a0bac06c41' for lock type 'shared'<br />Thread-4261::DEBUG::2015-11-09 11=
:17:52,509::resourceManager::601::Storage.ResourceManager::(registerResourc=
e) Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now loc=
king as 'shared' (1 active user)<br />Thread-4261::DEBUG::2015-11-09 11:17:=
52,510::resourceManager::238::Storage.ResourceManager.Request::(grant) ResN=
ame=3D`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=3D`240c2aba-6c2e-=
44da-890d-c3d605e1933f`::Granted request<br />Thread-4261::DEBUG::2015-11-0=
9 11:17:52,510::task::827::Storage.TaskManager.Task::(resourceAcquired) Tas=
k=3D`d39463fd-486f-4280-903a-51b72862b648`::_resourcesAcquired: Storage.b4c=
488af-9d2f-4b7b-a6f6-74a0bac06c41 (shared)<br />Thread-4261::DEBUG::2015-11=
-09 11:17:52,510::task::993::Storage.TaskManager.Task::(_decref) Task=3D`d3=
9463fd-486f-4280-903a-51b72862b648`::ref 1 aborting False<br />Thread-4261:=
:DEBUG::2015-11-09 11:17:52,526::fileSD::536::Storage.StorageDomain::(activ=
ateVolumes) Fixing permissions on /rhev/data-center/mnt/glusterSD/ovirt02=
=2Emafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-9=
4aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095<br />Thread=
-4261::DEBUG::2015-11-09 11:17:52,528::fileUtils::143::Storage.fileUtils::(=
createdir) Creating directory: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f=
6-74a0bac06c41 mode: None<br />Thread-4261::WARNING::2015-11-09 11:17:52,52=
8::fileUtils::152::Storage.fileUtils::(createdir) Dir /var/run/vdsm/storage=
/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 already exists<br />Thread-4261::DEBU=
G::2015-11-09 11:17:52,528::fileSD::511::Storage.StorageDomain::(createImag=
eLinks) Creating symlink from /rhev/data-center/mnt/glusterSD/ovirt02.mafia=
=2Ekru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-40=
03-8e12-0dbbf06a6af8 to /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0b=
ac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8<br />Thread-4261::DEBUG::2015-=
11-09 11:17:52,528::fileSD::516::Storage.StorageDomain::(createImageLinks) =
img run dir already exists: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-7=
4a0bac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8<br />Thread-4261::DEBUG::2=
015-11-09 11:17:52,530::fileVolume::535::Storage.Volume::(validateVolumePat=
h) validate path for 3fc3362d-ab6d-4e06-bd72-82d5750c7095<br />Thread-4261:=
:INFO::2015-11-09 11:17:52,531::logUtils::51::dispatcher::(wrapper) Run and=
protect: prepareImage, Return response: {'info': {'domainID': 'b4c488af-9d=
2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'=
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7=
b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab=
6d-4e06-bd72-82d5750c7095', 'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7=
095', 'leasePath': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_eng=
ine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0db=
bf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095.lease', 'imageID': '0e1c20d1=
-94aa-4003-8e12-0dbbf06a6af8'}, 'path': u'/var/run/vdsm/storage/b4c488af-9d=
2f-4b7b-a6f6-74a0bac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6=
d-4e06-bd72-82d5750c7095', 'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4=
b7b-a6f6-74a0bac06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhe=
v/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6=
f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4=
e06-bd72-82d5750c7095', 'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7095'=
, 'leasePath': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/=
b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06=
a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095.lease', 'imageID': '0e1c20d1-94a=
a-4003-8e12-0dbbf06a6af8'}]}<br />Thread-4261::DEBUG::2015-11-09 11:17:52,5=
31::task::1191::Storage.TaskManager.Task::(prepare) Task=3D`d39463fd-486f-4=
280-903a-51b72862b648`::finished: {'info': {'domainID': 'b4c488af-9d2f-4b7b=
-a6f6-74a0bac06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/d=
ata-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-=
74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06=
-bd72-82d5750c7095', 'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7095', '=
leasePath': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c=
488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6a=
f8/3fc3362d-ab6d-4e06-bd72-82d5750c7095.lease', 'imageID': '0e1c20d1-94aa-4=
003-8e12-0dbbf06a6af8'}, 'path': u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b=
-a6f6-74a0bac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-=
bd72-82d5750c7095', 'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095', 'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7095', 'leasePath': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095.lease', 'imageID': '0e1c20d1-94aa-4003-8e12-0dbbf06a6af8'}]}
Thread-4261::DEBUG::2015-11-09 11:17:52,532::task::595::Storage.TaskManager.Task::(_updateState) Task=`d39463fd-486f-4280-903a-51b72862b648`::moving from state preparing -> state finished
Thread-4261::DEBUG::2015-11-09 11:17:52,532::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': < ResourceRef 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj: 'None'>}
Thread-4261::DEBUG::2015-11-09 11:17:52,532::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-4261::DEBUG::2015-11-09 11:17:52,532::resourceManager::616::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'
Thread-4261::DEBUG::2015-11-09 11:17:52,532::resourceManager::635::Storage.ResourceManager::(releaseResource) Released resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0 active users)
Thread-4261::DEBUG::2015-11-09 11:17:52,533::resourceManager::641::Storage.ResourceManager::(releaseResource) Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free, finding out if anyone is waiting for it.
Thread-4261::DEBUG::2015-11-09 11:17:52,533::resourceManager::649::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', Clearing records.
Thread-4261::DEBUG::2015-11-09 11:17:52,533::task::993::Storage.TaskManager.Task::(_decref) Task=`d39463fd-486f-4280-903a-51b72862b648`::ref 0 aborting False
Thread-4261::INFO::2015-11-09 11:17:52,535::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:57858 stopped
Reactor thread::INFO::2015-11-09 11:17:52,536::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:57859
Reactor thread::DEBUG::2015-11-09 11:17:52,544::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-11-09 11:17:52,545::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:57859
Reactor thread::DEBUG::2015-11-09 11:17:52,545::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 57859)
BindingXMLRPC::INFO::2015-11-09 11:17:52,545::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:57859
Thread-4262::INFO::2015-11-09 11:17:52,548::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:57859 started
Thread-4262::DEBUG::2015-11-09 11:17:52,548::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-4262::DEBUG::2015-11-09 11:17:52,549::task::595::Storage.TaskManager.Task::(_updateState) Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::moving from state init -> state preparing
Thread-4262::INFO::2015-11-09 11:17:52,549::logUtils::48::dispatcher::(wrapper) Run and protect: prepareImage(sdUUID='b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='350fb787-049a-4174-8914-f371aabfa72c', leafUUID='02c5d59d-638c-4672-814d-d734e334e24a')
Thread-4262::DEBUG::2015-11-09 11:17:52,549::resourceManager::198::Storage.ResourceManager.Request::(__init__) ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`fd9ea6d0-3a31-4ec6-a74c-8b84b2e51746`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3205' at 'prepareImage'
Thread-4262::DEBUG::2015-11-09 11:17:52,550::resourceManager::542::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock type 'shared'
Thread-4262::DEBUG::2015-11-09 11:17:52,550::resourceManager::601::Storage.ResourceManager::(registerResource) Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now locking as 'shared' (1 active user)
Thread-4262::DEBUG::2015-11-09 11:17:52,550::resourceManager::238::Storage.ResourceManager.Request::(grant) ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`fd9ea6d0-3a31-4ec6-a74c-8b84b2e51746`::Granted request
Thread-4262::DEBUG::2015-11-09 11:17:52,550::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::_resourcesAcquired: Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 (shared)
Thread-4262::DEBUG::2015-11-09 11:17:52,551::task::993::Storage.TaskManager.Task::(_decref) Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::ref 1 aborting False
Thread-4262::DEBUG::2015-11-09 11:17:52,566::fileSD::536::Storage.StorageDomain::(activateVolumes) Fixing permissions on /rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a
Thread-4262::DEBUG::2015-11-09 11:17:52,568::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 mode: None
Thread-4262::WARNING::2015-11-09 11:17:52,568::fileUtils::152::Storage.fileUtils::(createdir) Dir /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 already exists
Thread-4262::DEBUG::2015-11-09 11:17:52,568::fileSD::511::Storage.StorageDomain::(createImageLinks) Creating symlink from /rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c to /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-049a-4174-8914-f371aabfa72c
Thread-4262::DEBUG::2015-11-09 11:17:52,568::fileSD::516::Storage.StorageDomain::(createImageLinks) img run dir already exists: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-049a-4174-8914-f371aabfa72c
Thread-4262::DEBUG::2015-11-09 11:17:52,570::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for 02c5d59d-638c-4672-814d-d734e334e24a
Thread-4262::INFO::2015-11-09 11:17:52,572::logUtils::51::dispatcher::(wrapper) Run and protect: prepareImage, Return response: {'info': {'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a', 'volumeID': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a.lease', 'imageID': '350fb787-049a-4174-8914-f371aabfa72c'}, 'path': u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a', 'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a', 'volumeID': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a.lease', 'imageID': '350fb787-049a-4174-8914-f371aabfa72c'}]}
Thread-4262::DEBUG::2015-11-09 11:17:52,573::task::1191::Storage.TaskManager.Task::(prepare) Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::finished: {'info': {'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a', 'volumeID': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a.lease', 'imageID': '350fb787-049a-4174-8914-f371aabfa72c'}, 'path': u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a', 'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a', 'volumeID': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a.lease', 'imageID': '350fb787-049a-4174-8914-f371aabfa72c'}]}
Thread-4262::DEBUG::2015-11-09 11:17:52,573::task::595::Storage.TaskManager.Task::(_updateState) Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::moving from state preparing -> state finished
Thread-4262::DEBUG::2015-11-09 11:17:52,573::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': < ResourceRef 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj: 'None'>}
Thread-4262::DEBUG::2015-11-09 11:17:52,573::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-4262::DEBUG::2015-11-09 11:17:52,573::resourceManager::616::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'
Thread-4262::DEBUG::2015-11-09 11:17:52,573::resourceManager::635::Storage.ResourceManager::(releaseResource) Released resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0 active users)
Thread-4262::DEBUG::2015-11-09 11:17:52,574::resourceManager::641::Storage.ResourceManager::(releaseResource) Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free, finding out if anyone is waiting for it.
Thread-4262::DEBUG::2015-11-09 11:17:52,574::resourceManager::649::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', Clearing records.
Thread-4262::DEBUG::2015-11-09 11:17:52,574::task::993::Storage.TaskManager.Task::(_decref) Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::ref 0 aborting False
Thread-4262::INFO::2015-11-09 11:17:52,576::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:57859 stopped
Reactor thread::INFO::2015-11-09 11:17:52,610::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:57860
Reactor thread::DEBUG::2015-11-09 11:17:52,619::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-11-09 11:17:52,619::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:57860
BindingXMLRPC::INFO::2015-11-09 11:17:52,620::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:57860
Reactor thread::DEBUG::2015-11-09 11:17:52,620::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 57860)
Thread-4263::INFO::2015-11-09 11:17:52,623::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:57860 started
Thread-4263::DEBUG::2015-11-09 11:17:52,623::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-4263::DEBUG::2015-11-09 11:17:52,624::task::595::Storage.TaskManager.Task::(_updateState) Task=`6fd1d011-d931-4eca-b93b-c0fc3a1b4107`::moving from state init -> state preparing
Thread-4263::INFO::2015-11-09 11:17:52,624::logUtils::48::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-4263::INFO::2015-11-09 11:17:52,624::logUtils::51::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': {'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000392118', 'lastCheck': '6.0', 'valid': True}, u'0af99439-f140-4636-90f7-f43904735da0': {'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000382969', 'lastCheck': '4.5', 'valid': True}}
Thread-4263::DEBUG::2015-11-09 11:17:52,624::task::1191::Storage.TaskManager.Task::(prepare) Task=`6fd1d011-d931-4eca-b93b-c0fc3a1b4107`::finished: {'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': {'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000392118', 'lastCheck': '6.0', 'valid': True}, u'0af99439-f140-4636-90f7-f43904735da0': {'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000382969', 'lastCheck': '4.5', 'valid': True}}
Thread-4263::DEBUG::2015-11-09 11:17:52,625::task::595::Storage.TaskManager.Task::(_updateState) Task=`6fd1d011-d931-4eca-b93b-c0fc3a1b4107`::moving from state preparing -> state finished
Thread-4263::DEBUG::2015-11-09 11:17:52,625::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-4263::DEBUG::2015-11-09 11:17:52,625::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-4263::DEBUG::2015-11-09 11:17:52,625::task::993::Storage.TaskManager.Task::(_decref) Task=`6fd1d011-d931-4eca-b93b-c0fc3a1b4107`::ref 0 aborting False
Thread-4263::INFO::2015-11-09 11:17:52,627::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:57860 stopped
mailbox.SPMMonitor::DEBUG::2015-11-09 11:17:52,829::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail) dd if=/rhev/data-center/00000001-0001-0001-0001-000000000230/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000 (cwd None)
mailbox.SPMMonitor::DEBUG::2015-11-09 11:17:52,845::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail) SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.00494757 s, 207 MB/s\n'; <rc> = 0
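A quick way to follow one request through a vdsm.log like the one above is to grep for its task UUID, since every line in a task's lifecycle carries it. A sketch, assuming the default vdsm log location:

# follow a single task's lifecycle (UUID taken from the lines above)
grep 'd39463fd-486f-4280-903a-51b72862b648' /var/log/vdsm/vdsm.log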
--
Florent BELLO
Service Informatique
informatique@ville-kourou.fr
0594 22 31 22
Mairie de Kourou
9 years, 5 months
ovirt-hosted-engine-setup and single machine install
by Johan Vermeulen
Hello All,
after configuring a first setup based on the quick start guide, I'm now looking
at the hosted-engine setup.
My question is: after I do the hosted-engine setup, how do I set up VMs on the
same machine that hosts the now-virtualized engine?
Greetings, J.
9 years, 5 months
Re: [ovirt-users] Ovirt 3.6 | After upgrade host can not connect to storage domains | returned by VDSM was: 480
by Punit Dambiwal
Hi Sahina,
Even after making the changes in vdsm.conf, I am still not able to connect to
the replica=2 storage.
Thanks,
Punit
On Mon, Nov 23, 2015 at 4:15 PM, Punit Dambiwal <hypunit(a)gmail.com> wrote:
> Hi Sahina,
>
> Thanks for the update...would you mind to let me know the correct syntax
> to add the line in the vdsm.conf ??
>
> Thanks,
> Punit
>
> On Mon, Nov 23, 2015 at 3:48 PM, Sahina Bose <sabose(a)redhat.com> wrote:
>
>> You can change the allowed_replica_count to 2 in vdsm.conf - though this
>> is not recommended in production. Supported replica count is 3.
>>
>> thanks
>> sahina
>>
>>
>> On 11/23/2015 07:58 AM, Punit Dambiwal wrote:
>>
>> Hi Sahina,
>>
>> Is there any workaround to solve this issue ?
>>
>> Thanks,
>> Punit
>>
>> On Wed, Nov 11, 2015 at 9:36 AM, Sahina Bose <sabose(a)redhat.com> wrote:
>>
>>> Hi,
>>>
>>> Thanks for your email. I will be back on 16th Nov and will get back to
>>> you then.
>>>
>>> thanks
>>> sahina
>>>
>>
>>
>>
>
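For anyone hitting the same wall: the setting Sahina mentions lives in the [gluster] section of /etc/vdsm/vdsm.conf on each host. A minimal sketch, assuming the option is spelled allowed_replica_counts in your vdsm build (verify against the defaults your vdsm version ships before relying on it, and remember that replica 2 is unsupported in production):

# /etc/vdsm/vdsm.conf (on every host in the cluster)
[gluster]
# permit replica 2 volumes alongside the supported replica 3
allowed_replica_counts = 1,2,3

Afterwards restart vdsmd (systemctl restart vdsmd) so the new value is picked up.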
9 years, 5 months
What is a data center with local storage?
by Christophe TREFOIS
Hi,
When creating a new data center in oVirt 3.5, there is the option to choose "local" or "shared" storage types.
Is there any resource out there that explains the difference between the two? The official doc does not really help there.
My current understanding is as follows:
In shared mode, I can create data domains that are shared between hosts in the same data center, e.g. NFS, iSCSI, etc.
In local mode, I can only create data domains locally, but I can "import" an existing iSCSI or Export domain to move VMs (with downtime) between data centers.
1. Is this correct or am I missing something here?
2. What would be the reason to go for a “local” storage type cluster?
Thank you very much for helping out a newcomer :)
Kind regards,
—
Christophe
9 years, 5 months
import ova/ovf
by alireza sadeh seighalan
Hi everyone,
How can I import an OVF file from a server into my oVirt setup? Thanks in advance.
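On oVirt 3.x, one common route is to push the OVF into an export storage domain with engine-image-uploader and then import the VM from that domain in the webadmin UI. A sketch, assuming an already-attached export domain named myexport (a hypothetical name; adjust to your setup):

# upload the OVF archive to the export domain, then import it from the
# export domain's "VM Import" tab in the web UI
engine-image-uploader -e myexport upload myvm.ovf

For OVAs produced by other hypervisors, virt-v2v may be the better fit.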
9 years, 5 months
Failed to create live snapshot
by mots
Hello,
I'm getting the following error when I try to create a snapshot of one VM. Snapshots of all other VMs work as expected. I'm using oVirt 3.5 on CentOS 7.
>Failed to create live snapshot 'fsbu3' for VM 'Odoo'. VM restart is recommended. Note that using the created snapshot might cause data inconsistency.
I think this is the relevant part of vdsm.log; what strikes me as odd is the line:
>Thread-1192052::ERROR::2015-11-23 17:18:20,532::vm::4355::vm.Vm::(snapshot) vmId=`581cebb3-7729-4c29-b98c-f9e04aa2fdd0`::The base volume doesn't exist: {'device': 'disk', 'domainID': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc', 'volumeID': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', 'imageID': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d'}
The part "The base volume doesn't exist" seems interesting.
Also interesting is that it does create a snapshot, though I don't know if that snapshot is missing data.
Thread-1192048::DEBUG::2015-11-23 17:18:20,421::taskManager::103::Storage.TaskManager::(getTaskStatus) Entry. taskID: 21a1c403-f306-40b1-bad8-377d0265ebca
Thread-1192048::DEBUG::2015-11-23 17:18:20,421::taskManager::106::Storage.TaskManager::(getTaskStatus) Return. Response: {'code': 0, 'message': '1 jobs completed successfully', 'taskState': 'finished', 'taskResult': 'success', 'taskID': '21a1c403-f306-40b1-bad8-377d0265ebca'}
Thread-1192048::DEBUG::2015-11-23 17:18:20,422::taskManager::123::Storage.TaskManager::(getAllTasksStatuses) Return: {'21a1c403-f306-40b1-bad8-377d0265ebca': {'code': 0, 'message': '1 jobs completed successfully', 'taskState': 'finished', 'taskResult': 'success', 'taskID': '21a1c403-f306-40b1-bad8-377d0265ebca'}}
Thread-1192048::INFO::2015-11-23 17:18:20,422::logUtils::47::dispatcher::(wrapper) Run and protect: getAllTasksStatuses, Return response: {'allTasksStatus': {'21a1c403-f306-40b1-bad8-377d0265ebca': {'code': 0, 'message': '1 jobs completed successfully', 'taskState': 'finished', 'taskResult': 'success', 'taskID': '21a1c403-f306-40b1-bad8-377d0265ebca'}}}
Thread-1192048::DEBUG::2015-11-23 17:18:20,422::task::1191::Storage.TaskManager.Task::(prepare) Task=`ce3d857c-45d3-4acc-95a5-79484e457fc6`::finished: {'allTasksStatus': {'21a1c403-f306-40b1-bad8-377d0265ebca': {'code': 0, 'message': '1 jobs completed successfully', 'taskState': 'finished', 'taskResult': 'success', 'taskID': '21a1c403-f306-40b1-bad8-377d0265ebca'}}}
Thread-1192048::DEBUG::2015-11-23 17:18:20,422::task::595::Storage.TaskManager.Task::(_updateState) Task=`ce3d857c-45d3-4acc-95a5-79484e457fc6`::moving from state preparing -> state finished
Thread-1192048::DEBUG::2015-11-23 17:18:20,422::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1192048::DEBUG::2015-11-23 17:18:20,422::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1192048::DEBUG::2015-11-23 17:18:20,422::task::993::Storage.TaskManager.Task::(_decref) Task=`ce3d857c-45d3-4acc-95a5-79484e457fc6`::ref 0 aborting False
Thread-1192048::DEBUG::2015-11-23 17:18:20,422::__init__::500::jsonrpc.JsonRpcServer::(_serveRequest) Return 'Host.getAllTasksStatuses' in bridge with {'21a1c403-f306-40b1-bad8-377d0265ebca': {'code': 0, 'message': '1 jobs completed successfully', 'taskState': 'finished', 'taskResult': 'success', 'taskID': '21a1c403-f306-40b1-bad8-377d0265ebca'}}
Thread-1192048::DEBUG::2015-11-23 17:18:20,423::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-11-23 17:18:20,423::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-11-23 17:18:20,424::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1192049::DEBUG::2015-11-23 17:18:20,426::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-11-23 17:18:20,438::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-11-23 17:18:20,439::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1192050::DEBUG::2015-11-23 17:18:20,441::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-11-23 17:18:20,442::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-11-23 17:18:20,443::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1192051::DEBUG::2015-11-23 17:18:20,445::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-11-23 17:18:20,529::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
Thread-1192052::DEBUG::2015-11-23 17:18:20,530::__init__::469::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'VM.snapshot' in bridge with {'vmID': '581cebb3-7729-4c29-b98c-f9e04aa2fdd0', 'snapDrives': [{'baseVolumeID': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', 'domainID': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc', 'volumeID': '16f92498-c142-4330-bc0f-c96f210c379d', 'imageID': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d'}]}
JsonRpcServer::DEBUG::2015-11-23 17:18:20,530::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1192052::ERROR::2015-11-23 17:18:20,532::vm::4355::vm.Vm::(snapshot) vmId=`581cebb3-7729-4c29-b98c-f9e04aa2fdd0`::The base volume doesn't exist: {'device': 'disk', 'domainID': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc', 'volumeID': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', 'imageID': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d'}
Thread-1192052::DEBUG::2015-11-23 17:18:20,532::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-11-23 17:18:20,588::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-11-23 17:18:20,590::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1192053::DEBUG::2015-11-23 17:18:20,590::__init__::469::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'Volume.getInfo' in bridge with {'imageID': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d', 'storagepoolID': '00000002-0002-0002-0002-000000000354', 'volumeID': '16f92498-c142-4330-bc0f-c96f210c379d', 'storagedomainID': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc'}
Thread-1192053::DEBUG::2015-11-23 17:18:20,592::task::595::Storage.TaskManager.Task::(_updateState) Task=`8f9455be-2f57-4b95-9d24-8db1522cbaed`::moving from state init -> state preparing
Thread-1192053::INFO::2015-11-23 17:18:20,592::logUtils::44::dispatcher::(wrapper) Run and protect: getVolumeInfo(sdUUID='b4e7425a-53c7-40d4-befc-ea36ed7891fc', spUUID='00000002-0002-0002-0002-000000000354', imgUUID='dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d', volUUID='16f92498-c142-4330-bc0f-c96f210c379d', options=None)
Thread-1192053::DEBUG::2015-11-23 17:18:20,593::resourceManager::198::Storage.ResourceManager.Request::(__init__) ResName=`Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc`ReqID=`e6aed3a3-c95a-4106-9a16-ad21e5db3ae7`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3124' at 'getVolumeInfo'
Thread-1192053::DEBUG::2015-11-23 17:18:20,593::resourceManager::542::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc' for lock type 'shared'
Thread-1192053::DEBUG::2015-11-23 17:18:20,594::resourceManager::601::Storage.ResourceManager::(registerResource) Resource 'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc' is free. Now locking as 'shared' (1 active user)
Thread-1192053::DEBUG::2015-11-23 17:18:20,594::resourceManager::238::Storage.ResourceManager.Request::(grant) ResName=`Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc`ReqID=`e6aed3a3-c95a-4106-9a16-ad21e5db3ae7`::Granted request
Thread-1192053::DEBUG::2015-11-23 17:18:20,594::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`8f9455be-2f57-4b95-9d24-8db1522cbaed`::_resourcesAcquired: Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc (shared)
Thread-1192053::DEBUG::2015-11-23 17:18:20,594::task::993::Storage.TaskManager.Task::(_decref) Task=`8f9455be-2f57-4b95-9d24-8db1522cbaed`::ref 1 aborting False
Thread-1192053::DEBUG::2015-11-23 17:18:20,594::lvm::419::Storage.OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex
Thread-1192053::DEBUG::2015-11-23 17:18:20,595::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm lvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1p_storage_store1|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags b4e7425a-53c7-40d4-befc-ea36ed7891fc (cwd None)
Thread-1192053::DEBUG::2015-11-23 17:18:20,731::lvm::288::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0
Thread-1192053::DEBUG::2015-11-23 17:18:20,734::lvm::454::Storage.LVM::(_reloadlvs) lvs reloaded
Thread-1192053::DEBUG::2015-11-23 17:18:20,734::lvm::454::Storage.OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex
Thread-1192053::INFO::2015-11-23 17:18:20,734::volume::847::Storage.Volume::(getInfo) Info request: sdUUID=b4e7425a-53c7-40d4-befc-ea36ed7891fc imgUUID=dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d volUUID = 16f92498-c142-4330-bc0f-c96f210c379d
Thread-1192053::DEBUG::2015-11-23 17:18:20,734::blockVolume::594::Storage.Misc.excCmd::(getMetadata) /bin/dd iflag=direct skip=40 bs=512 if=/dev/b4e7425a-53c7-40d4-befc-ea36ed7891fc/metadata count=1 (cwd None)
Thread-1192053::DEBUG::2015-11-23 17:18:20,745::blockVolume::594::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.000196717 s, 2.6 MB/s\n'; <rc> = 0
Thread-1192053::DEBUG::2015-11-23 17:18:20,745::misc::262::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.000196717 s, 2.6 MB/s'], size: 512
Thread-1192053::INFO::2015-11-23 17:18:20,746::volume::875::Storage.Volume::(getInfo) b4e7425a-53c7-40d4-befc-ea36ed7891fc/dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d/16f92498-c142-4330-bc0f-c96f210c379d info is {'status': 'OK', 'domain': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc', 'voltype': 'LEAF', 'description': '', 'parent': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', 'format': 'COW', 'image': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d', 'ctime': '1448295499', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1073741824', 'children': [], 'pool': '', 'capacity': '21474836480', 'uuid': '16f92498-c142-4330-bc0f-c96f210c379d', 'truesize': '1073741824', 'type': 'SPARSE'}
Thread-1192053::INFO::2015-11-23 17:18:20,746::logUtils::47::dispatcher::(wrapper) Run and protect: getVolumeInfo, Return response: {'info': {'status': 'OK', 'domain': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc', 'voltype': 'LEAF', 'description': '', 'parent': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', 'format': 'COW', 'image': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d', 'ctime': '1448295499', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1073741824', 'children': [], 'pool': '', 'capacity': '21474836480', 'uuid': '16f92498-c142-4330-bc0f-c96f210c379d', 'truesize': '1073741824', 'type': 'SPARSE'}}
Thread-1192053::DEBUG::2015-11-23 17:18:20,746::task::1191::Storage.TaskManager.Task::(prepare) Task=`8f9455be-2f57-4b95-9d24-8db1522cbaed`::finished: {'info': {'status': 'OK', 'domain': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc', 'voltype': 'LEAF', 'description': '', 'parent': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', 'format': 'COW', 'image': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d', 'ctime': '1448295499', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1073741824', 'children': [], 'pool': '', 'capacity': '21474836480', 'uuid': '16f92498-c142-4330-bc0f-c96f210c379d', 'truesize': '1073741824', 'type': 'SPARSE'}}
Thread-1192053::DEBUG::2015-11-23 17:18:20,746::task::595::Storage.TaskManager.Task::(_updateState) Task=`8f9455be-2f57-4b95-9d24-8db1522cbaed`::moving from state preparing -> state finished
Thread-1192053::DEBUG::2015-11-23 17:18:20,746::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc': < ResourceRef 'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc', isValid: 'True' obj: 'None'>}
Thread-1192053::DEBUG::2015-11-23 17:18:20,746::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1192053::DEBUG::2015-11-23 17:18:20,747::resourceManager::616::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc'
Thread-1192053::DEBUG::2015-11-23 17:18:20,747::resourceManager::635::Storage.ResourceManager::(releaseResource) Released resource 'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc' (0 active users)
Thread-1192053::DEBUG::2015-11-23 17:18:20,747::resourceManager::641::Storage.ResourceManager::(releaseResource) Resource 'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc' is free, finding out if anyone is waiting for it.
Thread-1192053::DEBUG::2015-11-23 17:18:20,747::resourceManager::649::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc', Clearing records.
Thread-1192053::DEBUG::2015-11-23 17:18:20,747::task::993::Storage.TaskManager.Task::(_decref) Task=`8f9455be-2f57-4b95-9d24-8db1522cbaed`::ref 0 aborting False
Thread-1192053::DEBUG::2015-11-23 17:18:20,748::__init__::500::jsonrpc.JsonRpcServer::(_serveRequest) Return 'Volume.getInfo' in bridge with {'status': 'OK', 'domain': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc', 'voltype': 'LEAF', 'description': '', 'parent': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', 'format': 'COW', 'image': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d', 'ctime': '1448295499', 'disktype': '2', 'legality': 'LEGAL', 'allocType': 'SPARSE', 'mtime': '0', 'apparentsize': '1073741824', 'children': [], 'pool': '', 'capacity': '21474836480', 'uuid': '16f92498-c142-4330-bc0f-c96f210c379d', 'truesize': '1073741824', 'type': 'SPARSE'}
Thread-1192053::DEBUG::2015-11-23 17:18:20,748::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-11-23 17:18:20,863::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command=3D'SEND'>
JsonRpcServer::DEBUG::2015-11-23 17:18:20,864::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1192054::DEBUG::2015-11-23 17:18:20,864::__init__::469::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'Task.clear' in bridge with {'taskID': '21a1c403-f306-40b1-bad8-377d0265ebca'}
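Given the "base volume doesn't exist" error, one thing worth checking is whether vdsm can still see the parent volume 9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7 at all. A sketch of such a check with vdsClient on the host, reusing the UUIDs from the log above (argument order: sdUUID spUUID imgUUID volUUID):

# ask vdsm for the parent volume's info directly
vdsClient -s 0 getVolumeInfo b4e7425a-53c7-40d4-befc-ea36ed7891fc \
    00000002-0002-0002-0002-000000000354 \
    dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d \
    9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7

If that call fails for the parent while succeeding for the new leaf volume, the snapshot chain metadata on the storage domain is the next place to look.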
9 years, 5 months
[Need help] Want to write my graduate thesis related to ovirt
by John Hunter
Hi guys,
I am a college student majoring in CS who is going to graduate next year, and I
want to work on a graduate thesis related to oVirt.
For now I need a project I can work on for the next 6 months, so I am looking
for an ideas list I can reference to write a proposal.
I have contributed to #dri-devel during my Google Summer of Code project, so I
know how open-source groups work, though workflows differ from project to project.
I would appreciate it if anyone could help, and I hope someone could be my
mentor.
BR,
Zhao Junwang
--
Best regards
Junwang Zhao
Department of Computer Science &Technology
Peking University
Beijing, 100871, PRC
9 years, 5 months
Re: [ovirt-users] Manually preconfigured network interfaces/bridges on oVirt 3.6 HC HE
by Giuseppe Ragusa
On Mon, Nov 9, 2015, at 08:16, Sandro Bonazzola wrote:
> On Sun, Nov 8, 2015 at 9:57 PM, Giuseppe Ragusa <giuseppe.ragusa(a)hotmail.com> wrote:
>> On Tue, Nov 3, 2015, at 23:17, Giuseppe Ragusa wrote:
> On Tue, Nov 3, 2015, at 15:27, Simone Tiraboschi wrote:
> > On Mon, Nov 2, 2015 at 11:55 PM, Giuseppe Ragusa <giuseppe.ragusa(a)hotmail.com> wrote:
> >> On Mon, Nov 2, 2015, at 09:52, Simone Tiraboschi wrote:
> >>> On Mon, Nov 2, 2015 at 1:48 AM, Giuseppe Ragusa <giuseppe.ragusa(a)hotmail.com> wrote:
> >>>> Hi all,
> >>>> I'm stuck with the following error during the final phase of ovirt-hosted-engine-setup:
> >>>>
> >>>> The host hosted_engine_1 is in non-operational state.
> >>>> Please try to activate it via the engine webadmin UI.
> >>>>
> >>>> If I login on the engine administration web UI I find the corresponding message (inside NonOperational first host hosted_engine_1 Events tab):
> >>>>
> >>>> Host hosted_engine_1 does not comply with the cluster Default networks, the following networks are missing on host: 'ovirtmgmt'
> >>>>
> >>>> I'm installing with an oVirt snapshot from October the 27th on a fully-patched CentOS 7.1 host with a GlusterFS volume (3.7.5 hyperconverged, replica 3, for the engine-vm) pre-created and network interfaces/bridges (ovirtmgmt and other two bridges, called nfs and lan, on underlying 802.3ad bonds or plain interfaces) manually pre-configured in /etc/sysconfig/network-interfaces/ifcfg-* (using "classic" network service; NetworkManager disabled).
> >>>>
> >>>
> >>> If you manually created the network bridges, the match between them and the logical network should happen on name bases.
> >>
> >> Hi Simone,
> >> many thanks fpr your help (again) :)
> >>
> >> As you may note from the above comment, the name should actually match (it's exactly ovirtmgmt) but it doesn't get recognized.
> >>
> >>
> >>> If it doesn't for any reasons (please report if you find any evidence), you can manually bind logical network and network interfaces editing the host properties from the web-ui. At that point the host should become active in a few seconds.
> >>
> >>
> >> Well, the most immediate evidence are the error messages already reported (given that the bridge is actually present, with the right name and actually working).
> >> Apart from that, I find the following past logs (I don't know whether they are relevant or not):
> >>
> >> From /var/log/vdsm/connectivity.log:
> >
> >
> > Can you please add also host-deploy logs?
>
> Please find a gzipped tar archive of the whole directory /var/log/ovirt-engine/host-deploy/ at:
>
> https://onedrive.live.com/redir?resid=74BDE216CAA3E26F!110&authkey=!AIQUc...
>>
>> Since I suppose that there's nothing relevant on those logs, I'm planning to specify "net_persistence = ifcfg" in /etc/vdsm/vdsm.conf and restart VDSM on the host, then making the (still blocked) setup re-check.
>>
>>
Is there anything I should pay attention to before proceeding? (in particular while restarting VDSM)
>
>
> ^^ Dan?
I went on, and unfortunately setting "net_persistence = ifcfg" in /etc/vdsm/vdsm.conf and restarting VDSM on the host did not solve it (same error as before).
While trying (always without success) all the other steps suggested by Simone (binding the logical network and synchronizing networks from the host), I found an interesting-looking libvirt network definition (with autostart, too) for vdsm-ovirtmgmt, and this recalled some memories from past mailing list messages (that I still cannot find...) ;)
Long story short: aborting setup, cleaning everything up and creating a libvirt network for each pre-provisioned bridge worked! ("net_persistence = ifcfg" has been kept for other, client-specific, reasons, so I don't know whether it's needed too.)
Here it is, in BASH form:
for my_bridge in ovirtmgmt bridge1 bridge2; do
cat <<- EOM > /root/my-${my_bridge}.xml
<network>
<name>vdsm-${my_bridge}</name>
<forward mode='bridge'/>
<bridge name='${my_bridge}'/>
</network>
EOM
virsh -c qemu:///system net-define /root/my-${my_bridge}.xml
virsh -c qemu:///system net-autostart vdsm-${my_bridge}
rm -f /root/my-${my_bridge}.xml
done
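To confirm the definitions took, the three vdsm-* networks should now show up as active and autostarted (a quick verification, with libvirtd running):

virsh -c qemu:///system net-list --all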
I was able to connect to libvirtd (which must be running for the above to work) with the "virsh" commands above by removing the VDSM-added config fragment, allowing TCP connections and denying TLS-only connections in /etc/libvirt/libvirtd.conf, and finally by removing /etc/sasl2/libvirt.conf. All these modifications must be reverted after configuring the networks and stopping libvirtd, before relaunching setup.
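For completeness, the temporary libvirtd.conf changes described above amount to something like this (a sketch only; it disables authentication on TCP, which is exactly why it must be reverted afterwards):

# temporary /etc/libvirt/libvirtd.conf overrides, reverted afterwards
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"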
Many thanks again for suggestions, hints etc.
Regards,
Giuseppe
>>
I will report back here on the results.
>>
>>
Regards,
>>
Giuseppe
>>
>>
> Many thanks again for your kind assistance.
>>
>
>>
> Regards,
>>
> Giuseppe
>>
>
>>
> >> 2015-11-01 21:37:21,029:DEBUG:recent_client:True
>>
> >> 2015-11-01 21:37:51,088:DEBUG:recent_client:False
>>
> >> 2015-11-01 21:38:21,146:DEBUG:dropped vnet0:(operstate:up speed:0 duplex:full) d
>>
> >> ropped vnet2:(operstate:up speed:0 duplex:full) dropped vnet1:(operstate:up spee
>>
> >> d:0 duplex:full)
>>
> >> 2015-11-01 21:38:36,174:DEBUG:recent_client:True
>>
> >> 2015-11-01 21:39:06,233:DEBUG:recent_client:False
>>
> >> 2015-11-01 21:48:22,383:DEBUG:recent_client:True, lan:(operstate:up speed:0 dupl
>>
> >> ex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up sp
>>
> >> eed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdum
>>
> >> my;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 dup
>>
> >> lex:unknown), lo:(operstate:up speed:0 duplex:unknown), enp7s0f0:(operstate:up s
>>
> >> peed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), enp6s0f1:
>>
> >> (operstate:up speed:1000 duplex:full), nfs:(operstate:up speed:0 duplex:unknown)
>>
> >> , bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up speed:1000
>>
> >> duplex:full), enp0s20f0:(operstate:up speed:1000 duplex:full), enp0s20f3:(opers
>>
> >> tate:up speed:1000 duplex:full), enp0s20f2:(operstate:up speed:1000 duplex:full)
>>
> >> 2015-11-01 21:48:52,450:DEBUG:recent_client:False
>>
> >> 2015-11-01 22:55:21,668:DEBUG:recent_client:True, lan:(operstate:up speed:0 dupl
>>
> >> ex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up sp
>>
> >> eed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdum
>>
> >> my;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 dup
>>
> >> lex:unknown), lo:(operstate:up speed:0 duplex:unknown), enp7s0f0:(operstate:up s
>>
> >> peed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), enp6s0f1:
>>
> >> (operstate:up speed:1000 duplex:full), nfs:(operstate:up speed:0 duplex:unknown), bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up speed:1000 duplex:full), enp0s20f0:(operstate:up speed:1000 duplex:full), enp0s20f3:(operstate:up speed:1000 duplex:full), enp0s20f2:(operstate:up speed:1000 duplex:full)
>>
> >> 2015-11-01 22:56:00,952:DEBUG:recent_client:False, lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up speed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), enp7s0f0:(operstate:up speed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), enp6s0f1:(operstate:up speed:1000 duplex:full), nfs:(operstate:up speed:0 duplex:unknown), bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up speed:1000 duplex:full), enp0s20f0:(operstate:up speed:1000 duplex:full), enp0s20f3:(operstate:up speed:1000 duplex:full), enp0s20f2:(operstate:up speed:1000 duplex:full)
>>
> >> 2015-11-01 22:58:16,215:DEBUG:new vnet0:(operstate:up speed:0 duplex:full) new vnet2:(operstate:up speed:0 duplex:full) new vnet1:(operstate:up speed:0 duplex:full)
>>
> >> 2015-11-02 00:04:54,019:DEBUG:dropped vnet0:(operstate:up speed:0 duplex:full) dropped vnet2:(operstate:up speed:0 duplex:full) dropped vnet1:(operstate:up speed:0 duplex:full)
>>
> >> 2015-11-02 00:05:39,102:DEBUG:new vnet0:(operstate:up speed:0 duplex:full) new vnet2:(operstate:up speed:0 duplex:full) new vnet1:(operstate:up speed:0 duplex:full)
>>
> >> 2015-11-02 01:16:47,194:DEBUG:recent_client:True
>>
> >> 2015-11-02 01:17:32,693:DEBUG:recent_client:True, vnet0:(operstate:up speed:0 duplex:full), lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up speed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), enp7s0f0:(operstate:up speed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), enp6s0f1:(operstate:up speed:1000 duplex:full), vnet2:(operstate:up speed:0 duplex:full), nfs:(operstate:up speed:0 duplex:unknown), vnet1:(operstate:up speed:0 duplex:full), bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up speed:1000 duplex:full), enp0s20f0:(operstate:up speed:1000 duplex:full), enp0s20f3:(operstate:up speed:1000 duplex:full), enp0s20f2:(operstate:up speed:1000 duplex:full)
>>
> >> 2015-11-02 01:18:02,749:DEBUG:recent_client:False
>>
> >> 2015-11-02 01:20:18,001:DEBUG:recent_client:True
>>
> >>
>>
> >> From /var/log/vdsm/vdsm.log:
>>
> >>
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:16,991::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on bond0.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:16,992::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on bond0.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:16,994::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp6s0f0.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:16,995::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp6s0f0.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:16,997::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp6s0f1.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:16,997::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp6s0f1.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:16,999::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f0.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:16,999::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f0.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,001::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp7s0f0.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,001::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp7s0f0.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,003::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp7s0f1.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,003::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp7s0f1.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,005::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f1.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,006::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f1.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,007::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f2.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,008::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f2.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,009::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f3.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,010::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f3.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,014::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on ovirtmgmt.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,015::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on ovirtmgmt.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,019::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on bond1.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,019::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on bond1.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,024::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on nfs.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,024::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on nfs.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,028::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on bond2.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,028::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on bond2.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,033::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on lan.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,033::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on lan.
>>
> >>
>>
> >> And further down, always in /var/log/vdsm/vdsm.log:
>>
> >>
>>
> >> Thread-17::DEBUG::2015-11-02 01:17:18,747::__init__::533::jsonrpc.JsonRpcServer:
>>
> >> :(_serveRequest) Return 'Host.getCapabilities' in bridge with {'HBAInventory': {
>>
> >> 'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:5ed1a874ff5'}], 'FC': []}, '
>>
> >> packages2': {'kernel': {'release': '229.14.1.el7.x86_64', 'buildtime': 144232235
>>
> >> 1.0, 'version': '3.10.0'}, 'glusterfs-rdma': {'release': '1.el7', 'buildtime': 1
>>
> >> 444235292L, 'version': '3.7.5'}, 'glusterfs-fuse': {'release': '1.el7', 'buildti
>>
> >> me': 1444235292L, 'version': '3.7.5'}, 'spice-server': {'release': '9.el7_1.3',
>>
> >> 'buildtime': 1444691699L, 'version': '0.12.4'}, 'librbd1': {'release': '2.el7',
>>
> >> 'buildtime': 1425594433L, 'version': '0.80.7'}, 'vdsm': {'release': '2.gitdbbc5a
>>
> >> 4.el7', 'buildtime': 1445459370L, 'version': '4.17.10'}, 'qemu-kvm': {'release':
>>
> >> '23.el7_1.9.1', 'buildtime': 1443185645L, 'version': '2.1.2'}, 'glusterfs': {'r
>>
> >> elease': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}, 'libvirt': {'release': '16.el7_1.4', 'buildtime': 1442325910L, 'version': '1.2.8'}, 'qemu-img': {'release': '23.el7_1.9.1', 'buildtime': 1443185645L, 'version': '2.1.2'}, 'mom': {'release': '2.el7', 'buildtime': 1442501481L, 'version': '0.5.1'}, 'glusterfs-geo-replication': {'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}, 'glusterfs-server': {'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}, 'glusterfs-cli': {'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}}, 'numaNodeDistance': {'0': [10]}, 'cpuModel': 'Intel(R) Atom(TM) CPU C2750 @ 2.40GHz', 'liveMerge': 'true', 'hooks': {'before_vm_start': {'50_hostedengine': {'md5': '2a6d96c26a3599812be6cf1a13d9f485'}}}, 'vmTypes': ['kvm'], 'selinux': {'mode': '0'}, 'liveSnapshot': 'true', 'kdumpStatus': 0, 'networks': {}, 'bridges': {'ovirtmgmt': {'addr': '172.25.10.21', 'cfg': {'AGEING': '0', 'DEFROUTE': 'no', 'IPADDR': '172.25.10.21', 'IPV4_FAILURE_FATAL': 'yes', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'static', 'STP': 'off', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::225:90ff:fef1:cb37/64'], 'gateway': '', 'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6': False, 'stp': 'off', 'ipv4addrs': ['172.25.10.21/24'[http://172.25.10.21/24%27][http://172.25.10.21/24%27%5Bhttp://172.25.10.2..., 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['bond0', 'vnet0'], 'opts': {'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '1000', 'group_fwd_mask': '0x0', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3125', 'hello_timer': '83', 'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max': '512', 'stp_state': '0', 'topology_change_detected': '0', 'priority': '32768', 'multicast_membership_interval': '26000', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'nf_call_iptables': '0', 'topology_change': '0', 'hello_time': '200', 'root_id': '8000.002590f1cb37', 'bridge_id': '8000.002590f1cb37', 'topology_change_timer': '0', 'ageing_time': '0', 'nf_call_ip6tables': '0', 'gc_timer': '83', 'nf_call_arptables': '0', 'group_addr': '1:80:c2:0:0:0', 'multicast_last_member_interval': '100', 'default_pvid': '1', 'multicast_query_interval': '12500', 'tcn_timer': '0', 'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': '0'}}, 'lan': {'addr': '192.168.164.218', 'cfg': {'AGEING': '0', 'IPADDR': '192.168.164.218', 'GATEWAY': '192.168.164.254', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'static', 'STP': 'off', 'DEVICE': 'lan', 'IPV4_FAILURE_FATAL': 'yes', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::a236:9fff:fe38:88cd/64'], 'gateway': '192.168.164.254', 'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6': False, 'stp': 'off', 'ipv4addrs': ['192.168.164.218/24', '192.168.164.216/24'[http://192.168.164.216/24%27][http://192.168.164.216/24%27%5Bhttp://192.1..., 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['vnet1', 'enp6s0f0'], 'opts': {'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '1000', 'group_fwd_mask': '0x0', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3125', 'hello_timer': '82', 'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max': '512', 'stp_state': '0', 'topology_change_detected': '0', 'priority': '32768', 
'multicast_membership_interval': '26000', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'nf_call_iptables': '0', 'topology_change': '0', 'hello_time': '200', 'root_id': '8000.a0369f3888cd', 'bridge_id': '8000.a0369f3888cd', 'topology_change_timer': '0', 'ageing_time': '0', 'nf_call_ip6tables': '0', 'gc_timer': '82', 'nf_call_arptables': '0', 'group_addr': '1:80:c2:0:0:0', 'multicast_last_member_interval': '100', 'default_pvid': '1', 'multicast_query_interval': '12500', 'tcn_timer': '0', 'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': '0'}}, 'nfs': {'addr': '172.25.15.21', 'cfg': {'AGEING': '0', 'DEFROUTE': 'no', 'IPADDR': '172.25.15.21', 'IPV4_FAILURE_FATAL': 'yes', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'static', 'STP': 'off', 'DEVICE': 'nfs', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::225:90ff:fef1:cb35/64'], 'gateway': '', 'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6': False, 'stp': 'off', 'ipv4addrs': ['172.25.15.21/24', '172.25.15.203/24'[http://172.25.15.203/24%27][http://172.25.15.203/24%27%5Bhttp://172.25.15..., 'mtu': '9000', 'ipv6gateway': '::', 'ports': ['bond1', 'vnet2'], 'opts': {'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '1000', 'group_fwd_mask': '0x0', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3125', 'hello_timer': '183', 'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max': '512', 'stp_state': '0', 'topology_change_detected': '0', 'priority': '32768', 'multicast_membership_interval': '26000', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'nf_call_iptables': '0', 'topology_change': '0', 'hello_time': '200', 'root_id': '8000.002590f1cb35', 'bridge_id': '8000.002590f1cb35', 'topology_change_timer': '0', 'ageing_time': '0', 'nf_call_ip6tables': '0', 'gc_timer': '83', 'nf_call_arptables': '0', 'group_addr': '1:80:c2:0:0:0', 'multicast_last_member_interval': '100', 'default_pvid': '1', 'multicast_query_interval': '12500', 'tcn_timer': '0', 'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': '0'}}}, 'uuid': '2a1855a9-18fb-4d7a-b8b8-6fc898a8e827', 'onlineCpus': '0,1,2,3,4,5,6,7', 'nics': {'enp0s20f1': {'permhwaddr': '00:25:90:f1:cb:35', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '9000', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'ETHTOOL_OPTS': '-K ${DEVICE} tso off ufo off gso off gro off lro off', 'DEVICE': 'enp0s20f1', 'BOOTPROTO': 'none', 'MASTER': 'bond1', 'HWADDR': '00:25:90:F1:CB:35', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:25:90:f1:cb:35', 'speed': 1000, 'gateway': ''}, 'enp7s0f0': {'permhwaddr': 'a0:36:9f:38:88:cf', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '9000', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'ETHTOOL_OPTS': '-K ${DEVICE} tso off ufo off gso off gro off lro off', 'DEVICE': 'enp7s0f0', 'BOOTPROTO': 'none', 'MASTER': 'bond1', 'HWADDR': 'A0:36:9F:38:88:CF', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:25:90:f1:cb:35', 'speed': 1000, 'gateway': ''}, 'enp6s0f0': {'addr': '', 'ipv6gateway': '::', 'ipv6addrs': ['fe80::a236:9fff:fe38:88cd/64'], 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'BRIDGE': 'lan', 'NM_CONTROLLED': 'no', 'ETHTOOL_OPTS': 
'-K ${DEVICE} tso off ufo off gso off gro off lro off', 'DEVICE': 'enp6s0f0', 'BOOTPROTO': 'none', 'HWADDR': 'A0:36:9F:38:88:CD', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': 'a0:36:9f:38:88:cd', 'speed': 100, 'gateway': ''}, 'enp6s0f1': {'permhwaddr': 'a0:36:9f:38:88:cc', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'DEVICE': 'enp6s0f1', 'BOOTPROTO': 'none', 'MASTER': 'bond0', 'HWADDR': 'A0:36:9F:38:88:CC', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:25:90:f1:cb:37', 'speed': 1000, 'gateway': ''}, 'enp7s0f1': {'permhwaddr': 'a0:36:9f:38:88:ce', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '9000', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'DEVICE': 'enp7s0f1', 'BOOTPROTO': 'none', 'MASTER': 'bond2', 'HWADDR': 'A0:36:9F:38:88:CE', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:25:90:f1:cb:34', 'speed': 1000, 'gateway': ''}, 'enp0s20f0': {'permhwaddr': '00:25:90:f1:cb:34', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '9000', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'DEVICE': 'enp0s20f0', 'BOOTPROTO': 'none', 'MASTER': 'bond2', 'HWADDR': '00:25:90:F1:CB:34', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:25:90:f1:cb:34', 'speed': 1000, 'gateway': ''}, 'enp0s20f3': {'permhwaddr': '00:25:90:f1:cb:37', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'DEVICE': 'enp0s20f3', 'BOOTPROTO': 'none', 'MASTER': 'bond0', 'HWADDR': '00:25:90:F1:CB:37', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:25:90:f1:cb:37', 'speed': 1000, 'gateway': ''}, 'enp0s20f2': {'permhwaddr': '00:25:90:f1:cb:36', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '9000', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'DEVICE': 'enp0s20f2', 'BOOTPROTO': 'none', 'MASTER': 'bond2', 'HWADDR': '00:25:90:F1:CB:36', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:25:90:f1:cb:34', 'speed': 1000, 'gateway'
>>
> >> : ''}}, 'software_revision': '2', 'hostdevPassthrough': 'false', 'clusterLevels': ['3.4', '3.5', '3.6'], 'cpuFlags': 'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,xtopology,nonstop_tsc,aperfmperf,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,sse4_1,sse4_2,movbe,popcnt,tsc_deadline_timer,aes,rdrand,lahf_lm,3dnowprefetch,ida,arat,epb,dtherm,tpr_shadow,vnmi,flexpriority,ept,vpid,tsc_adjust,smep,erms,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270', 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:5ed1a874ff5', 'netConfigDirty': 'False', 'supportedENGINEs': ['3.4', '3.5', '3.6'], 'autoNumaBalancing': 0, 'additionalFeatures': ['GLUSTER_SNAPSHOT', 'GLUSTER_GEO_REPLICATION', 'GLUSTER_BRICK_MANAGEMENT'], 'reservedMem': '321', 'bondings': {'bond0': {'ipv4addrs': [], 'addr': '', 'cfg': {'BRIDGE': 'ovirtmgmt', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'BONDING_OPTS': 'mode=balance-rr miimon=100', 'DEVICE': 'bond0', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::225:90ff:fef1:cb37/64'], 'active_slave': '', 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'slaves': ['enp0s20f3', 'enp6s0f1'], 'hwaddr': '00:25:90:f1:cb:37', 'ipv6gateway': '::', 'gateway': '', 'opts': {'miimon': '100'}}, 'bond1': {'ipv4addrs': [], 'addr': '', 'cfg': {'BRIDGE': 'nfs', 'MTU': '9000', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'BONDING_OPTS': 'mode=802.3ad xmit_hash_policy=layer2+3 miimon=100', 'DEVICE': 'bond1', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::225:90ff:fef1:cb35/64'], 'active_slave': '', 'mtu': '9000', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'slaves': ['enp0s20f1', 'enp7s0f0'], 'hwaddr': '00:25:90:f1:cb:35', 'ipv6gateway': '::', 'gateway': '', 'opts': {'miimon': '100', 'mode': '4', 'xmit_hash_policy': '2'}}, 'bond2': {'ipv4addrs': ['172.25.5.21/24'[http://172.25.5.21/24%27][http://172.25.5.21/24%27%5Bhttp://172.25.5.21/2..., 'addr': '172.25.5.21', 'cfg': {'DEFROUTE': 'no', 'IPADDR': '172.25.5.21', 'MTU': '9000', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'static', 'BONDING_OPTS': 'mode=802.3ad xmit_hash_policy=layer2+3 miimon=100', 'DEVICE': 'bond2', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::225:90ff:fef1:cb34/64'], 'active_slave': '', 'mtu': '9000', 'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6': False, 'slaves': ['enp0s20f0', 'enp0s20f2', 'enp7s0f1'], 'hwaddr': '00:25:90:f1:cb:34', 'ipv6gateway': '::', 'gateway': '', 'opts': {'miimon': '100', 'mode': '4', 'xmit_hash_policy': '2'}}}, 'software_version': '4.17', 'memSize': '16021', 'cpuSpeed': '2401.000', 'numaNodes': {'0': {'totalMemory': '16021', 'cpus': [0, 1, 2, 3, 4, 5, 6, 7]}}, 'cpuSockets': '1', 'vlans': {}, 'lastClientIface': 'ovirtmgmt', 'cpuCores': '8', 'kvmEnabled': 'true', 'guestOverhead': '65', 'version_name': 'Snow Man', 'cpuThreads': '8', 'emulatedMachines': ['pc-i440fx-rhel7.1.0', 'rhel6.3.0', 'pc-q35-rhel7.0.0', 'rhel6.1.0', 'rhel6.6.0', 'rhel6.2.0', 'pc', 'pc-q35-rhel7.1.0', 'q35', 'rhel6.4.0', 'rhel6.0.0', 'rhel6.5.0', 'pc-i440fx-rhel7.0.0'], 'rngSources': ['random'], 'operatingSystem': {'release': '1.1503.el7.centos.2.8', 'version': '7', 'name': 'RHEL'}}
> >>
> >> Navigating the Admin web UI offered by the engine and editing the hosted_engine_1 host (with the "Edit" button or the corresponding context menu entry), I do not find any way to associate the logical oVirt ovirtmgmt network with the already present ovirtmgmt Linux bridge.
> >>
> >> Furthermore, the "Network Interfaces" tab of the aforementioned host shows only plain interfaces and bonds (all marked with a down-pointing red arrow, even though they are actually up and running), but not the already defined Linux bridges. Inside this tab I find two buttons. The first, "Setup Host Networks", would let me drag-and-drop the ovirtmgmt logical network onto an already present bond (the right one: bond0), but I avoid it, since I fear it would try to create the bridge from scratch, while the bridge actually exists already and carries the host address that currently allows engine-host communication. The second, "Sync All Networks", actively scares me with a threatening "Are you sure you want to synchronize all host's networks?", which I deny, since the tab's view is already wrong and it is absolutely not clear in which direction the synchronization would go.
> >>
> >> So, it seems to me that either I need to perform further pre-configuration steps for the ovirtmgmt bridge on the host (beyond the usual ifcfg-* setup) or there is a bug in the setup/admin portal (a UI/usability bug, maybe) :)
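
(A minimal sketch of the pre-created bridge setup being discussed, assuming CentOS 7 ifcfg files; the bond0/ovirtmgmt names follow the capabilities dump above, the address is purely illustrative:)

    # /etc/sysconfig/network-scripts/ifcfg-bond0 (bond enslaved to the bridge)
    DEVICE=bond0
    BONDING_OPTS="mode=balance-rr miimon=100"
    BRIDGE=ovirtmgmt
    NM_CONTROLLED=no
    BOOTPROTO=none
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt (the bridge itself)
    DEVICE=ovirtmgmt
    TYPE=Bridge
    DELAY=0
    NM_CONTROLLED=no
    BOOTPROTO=static
    IPADDR=192.0.2.21        # illustrative host address
    NETMASK=255.255.255.0
    ONBOOT=yes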
> >>
> >> Many thanks again for your help.
> >>
> >> Kind regards,
> >> Giuseppe
> >>
> >>> When the host becomes active you'll be able to continue with hosted-engine-setup.
> >>>
> >>>> I seem to recall that a preconfigured network setup on oVirt 3.6 would need something predefined on the libvirt side too (apart from the usual ifcfg-* files), but I cannot find the relevant mailing list message anymore, nor any other specific documentation.
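
(A hedged illustration of what "predefined on the libvirt side" can look like: VDSM normally defines its own libvirt networks, conventionally named vdsm-<network>; the exact name below is an assumption, and inspecting rather than hand-defining is the safer move:)

    # read-only connection, no SASL credentials needed
    virsh -r net-list --all
    # a VDSM-managed network definition usually resembles:
    #   <network>
    #     <name>vdsm-ovirtmgmt</name>
    #     <forward mode='bridge'/>
    #     <bridge name='ovirtmgmt'/>
    #   </network>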
> >>>>
> >>>> Does anyone have any further suggestion or clue (code/docs to read)?
> >>>>
> >>>> Many thanks in advance.
> >>>>
> >>>> Kind regards,
> >>>> Giuseppe
> >>>>
> >>>> PS: please keep my address in the replies too, because I'm experiencing some problems between Hotmail and the oVirt mailing list.
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
9 years, 5 months
Re: [ovirt-users] oVirt 4.0 wishlist: VDSM
by Giuseppe Ragusa
On Sat, Nov 21, 2015, at 13:59, Dan Kenigsberg wrote:
> On Fri, Nov 20, 2015 at 01:54:35PM +0100, Giuseppe Ragusa wrote:
> > Hi all,
> > I go on with my wishlist, derived from both solitary mumblings and community talks at the first Italian oVirt Meetup.
> >
> > I offer to help in coding (work/family schedules permitting) but keep in mind that I'm a sysadmin with mainly C and bash-scripting skills (but hoping to improve my less-than-newbie Python too...)
> >
> > I've sent separate wishlist messages for oVirt Node and Engine.
> >
> > VDSM:
> >
> > *) allow VDSM to configure/manage Samba, CTDB and Ganesha (specifically, I'm thinking of the GlusterFS integration); there are related wishlist items on configuring/managing Samba/CTDB/Ganesha on the Engine and on oVirt Node
>
> I'd appreciate a more detailed feature definition. Vdsm (and oVirt) try
> to configure only things that are needed for their own usage. What do you
> want to control? When? You're welcome to draft a feature page prior to
> coding the fix ;-)
I was thinking of adding CIFS/NFSv4 functionality to a hyperconverged cluster (GlusterFS/oVirt) which would have separate volumes for virtual machine storage (one volume for the Engine and one for the other VMs, with no CIFS/NFSv4 capabilities offered) and for data shares (directly accessible by clients on the LAN, and obviously from local VMs too).
Think of it as a 3-node HA NetApp+VMware killer ;-)
The UI idea (but that would be the Engine part, I understand) was along the lines of single-check enabling of CIFS and/or NFSv4 sharing for a GlusterFS data volume, then optionally adding any further specific options (hosts allowed, users/groups for read/write access, network recycle_bin, etc.); global Samba (domain/workgroup membership etc.) and CTDB (IPs/interfaces) configuration parameters would be needed too.
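
(To make the idea concrete, a minimal sketch of the per-volume pieces such a feature might generate, assuming Samba's vfs_glusterfs module and a CTDB public-address pool; the "data" volume name, addresses and interface are illustrative:)

    # smb.conf fragment (with clustering = yes in [global] when CTDB is used)
    [data]
        vfs objects = glusterfs
        glusterfs:volume = data
        glusterfs:volfile_server = localhost
        path = /
        read only = no
        kernel share modes = no

    # /etc/ctdb/public_addresses (floating IPs spread across the nodes)
    192.0.2.211/24 eth0
    192.0.2.212/24 eth0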
I have no experience with a clustered/HA NFS-Ganesha configuration on GlusterFS, but (from superficially skimming the docs) it seems that it was not possible at all before 2.2 and that it now needs a full Pacemaker/Corosync setup too (contrary to the IBM-GPFS-backed case), so that could be a problem.
This VDSM wishlist item was driven by the idea that all actions performed by the Engine through the hosts/nodes (and so the future GlusterFS/Samba/CTDB ones too) are somehow "mediated" by VDSM and its API; if this is not the case, then I withdraw my suggestion here and will try to pursue it only on the Engine/Node side ;)
Many thanks for your attention.
Regards,
Giuseppe
> > *) add Open vSwitch direct support (not Neutron-mediated); there are related wishlist items on configuring/managing Open vSwitch on oVirt Node and on the Engine
>
> That's on our immediate roadmap. Soon, vdsm-hook-ovs will be ready for
> testing.
>
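
(For readers unfamiliar with what a direct Open vSwitch integration would automate, a hedged sketch of the manual equivalent; bridge and uplink names are illustrative:)

    ovs-vsctl add-br ovsbr0          # create an OVS bridge
    ovs-vsctl add-port ovsbr0 bond0  # attach an uplink to it
    ovs-vsctl show                   # inspect the resulting topology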
> >
> > *) add DRBD9 as a supported Storage Domain type; there are related wishlist items on configuring/managing DRBD9 on the Engine and on oVirt Node
> >
> > *) allow VDSM to configure/manage containers (maybe extend it by use of the LXC libvirt driver, similarly to the experimental work that has been put up to allow Xen vm management); there are related wishlist items on configuring/managing containers on the Engine and on oVirt Node
> >
> > *) add a VDSM_remote mode (for lack of a better name, but mainly inspired by pacemaker_remote) to be used inside a guest by the above mentioned container support (giving to the Engine the required visibility on the managed containers, but excluding the "virtual node" from power management and other unsuitable actions)
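
(Regarding the container item above: the libvirt LXC driver the author mentions can already be exercised directly with virsh; a minimal sketch, assuming the LXC driver packages are installed on the host:)

    # connect to the LXC driver instead of QEMU/KVM and list containers
    virsh -c lxc:/// list --all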
> >
> > Regards,
> > Giuseppe
> >
9 years, 5 months
FOSDEM16 Virt & IaaS Devroom CFP Extension, Speaker Mentoring, CoC
by Mikey Ariel
The CFP for the Virtualization & IaaS devroom at FOSDEM 2016 is in full
swing, and we'd like to share a few updates with you:
-------------------------
Speaker Mentoring Program
-------------------------
As a part of the rising efforts to grow our communities and encourage a
diverse and inclusive conference ecosystem, we're happy to announce that
we'll be offering mentoring for newcomer speakers. Our mentors can help
you with tasks such as reviewing your abstract, reviewing your
presentation outline or slides, or practicing your talk with you.
You may apply to the mentoring program as a newcomer speaker if you:
* Never presented before or
* Presented only lightning talks or
* Presented full-length talks at small meetups (<50 ppl)
Submission guidelines:
* Mentored presentations will have 25-minute slots, where 20 minutes
will include the presentation and 5 minutes will be reserved for questions.
* The number of newcomer session slots is limited, so we will probably
not be able to accept all applications.
* You must submit your talk and abstract to apply for the mentoring
program, our mentors are volunteering their time and will happily
provide feedback but won't write your presentation for you! If you are
experiencing problems with Pentabarf, the proposal submission interface,
or have other questions, you can email iaas-virt-devroom at
lists.fosdem.org and we will try to help you.
How to apply:
* Follow the same procedure to submit an abstract in Pentabarf as
standard sessions. Instructions can be found in our original CFP
announcement:
http://community.redhat.com/blog/2015/10/call-for-proposals-fosdem16-virt...
* In addition to agreeing to video recording and confirming that you can
attend FOSDEM in case your session is accepted, please write "speaker
mentoring program application" in the "Submission notes" field, and list
any prior speaking experience or other relevant information for your
application.
Call for mentors!
Interested in mentoring newcomer speakers? We'd love to have your help!
Please email iaas-virt-devroom at lists.fosdem.org with a short speaker
bio and any specific fields of expertise (for example, KVM, OpenStack,
storage, etc) so that we can match you with a newcomer speaker from a
similar field. Estimated time investment can be as low as 5-10 hours
in total, usually distributed weekly or bi-weekly.
Never mentored a newcomer speaker but interested to try? Our mentoring
program coordinator will be happy to answer your questions and give you
tips on how to optimize the mentoring process. Email us and we'll be
happy to answer your questions!
-------------------------
CFP Deadline Extension
-------------------------
To help accommodate the newcomer speaker proposals, we have decided to
extend the deadline for submitting proposals by one week.
The new deadline is **TUESDAY, DECEMBER 8 @ midnight CET**.
-------------------------
Code of Conduct
-------------------------
Following the release of the updated code of conduct for FOSDEM[1], we'd
like to remind all speakers and attendees that all of the presentations
and discussions in our devroom are held under the guidelines set in the
CoC and we expect attendees, speakers, and volunteers to follow the CoC
at all times.
If you submit a proposal and it is accepted, you will be required to
confirm that you accept the FOSDEM CoC. If you have any questions about
the CoC or wish to have one of the devroom organizers review your
presentation slides or any other content for CoC compliance, please
email iaas-virt-devroom at lists.fosdem.org and we will do our best to
help you out.
[1] https://www.fosdem.org/2016/practical/conduct/
--
Mikey Ariel
Community Lead, oVirt
www.ovirt.org
"To be is to do" (Socrates)
"To do is to be" (Jean-Paul Sartre)
"Do be do be do" (Frank Sinatra)
Mobile: +420-702-131-141
IRC: mariel / thatdocslady
Twitter: @ThatDocsLady
9 years, 5 months
Cannot setup Networks. The address of the network 'NFS' cannot be modified
by Ihor Piddubnyak
Trying to change the IP of a VLAN interface attached to a hypervisor, I'm
getting this for host vhi2:
Cannot setup Networks. The address of the network 'NFS' cannot be
modified without reinstalling the host, since this address was used to
create the host's certification.
Host reinstall does not help. Any clue how to do it?
--
Ihor Piddubnyak <ip(a)surftown.com>
surftown a/s
9 years, 5 months
[3.6] User can't create a VM. No permission for EDIT_ADMIN_VM_PROPERTIES
by Maksim Naumov
Hello
I'm facing a problem: a user can't create a VM. The user has
PowerUserRole on a Cluster. He tried to create a VM from a base template and
had no success.
Here are some lines from the log. I have no idea why it asks
for the EDIT_ADMIN_VM_PROPERTIES permission for the user:
2015-11-20 16:42:10,888 DEBUG [org.ovirt.engine.core.bll.AddVmCommand]
(default task-160) [2f0eb905] Checking whether user
'acc9ced5-a764-4d60-84d7-db4b4a498a18' or one of the groups he is member
of, have the following permissions: ID:
a303bbca-af20-4de5-9eff-01c52d3bf615 Type: VdsGroupsAction group CREATE_VM
with role type USER, ID: 00000000-0000-0000-0000-000000000000 Type:
VmTemplateAction group CREATE_VM with role type USER, ID:
a303bbca-af20-4de5-9eff-01c52d3bf615 Type: VdsGroupsAction group
EDIT_ADMIN_VM_PROPERTIES with role type ADMIN
2015-11-20 16:42:10,890 DEBUG [org.ovirt.engine.core.bll.AddVmCommand]
(default task-160) [2f0eb905] Found permission
'129c57bb-df56-4529-93d9-52db0265263f' for user when running 'AddVm', on
'Cluster' with id 'a303bbca-af20-4de5-9eff-01c52d3bf615'
2015-11-20 16:42:10,893 DEBUG [org.ovirt.engine.core.bll.AddVmCommand]
(default task-160) [2f0eb905] Found permission
'00000004-0004-0004-0004-000000000355' for user when running 'AddVm', on
'Template' with id '00000000-0000-0000-0000-000000000000'
2015-11-20 16:42:10,894 DEBUG [org.ovirt.engine.core.bll.AddVmCommand]
(default task-160) [2f0eb905] No permission found for user when running
action 'AddVm', on object 'Cluster' for action group
'EDIT_ADMIN_VM_PROPERTIES' with id 'a303bbca-af20-4de5-9eff-01c52d3bf615'.
2015-11-20 16:42:10,894 WARN [org.ovirt.engine.core.bll.AddVmCommand]
(default task-160) [2f0eb905] CanDoAction of action 'AddVm' failed for user
vincent.engel@hitmeister.de(a)hitmeister.de. Reasons:
VAR__ACTION__ADD,VAR__TYPE__VM,USER_NOT_AUTHORIZED_TO_PERFORM_ACTION
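
(One way to see which permissions the engine actually finds for this user is to query the engine database; a hedged sketch assuming the standard engine PostgreSQL schema, where table and column names may vary between versions; the UUID is the user ID from the log above:)

    # run on the engine host; shows every permission held by the user
    su - postgres -c "psql engine -c \"select role_id, object_type_id, object_id
      from permissions
      where ad_element_id = 'acc9ced5-a764-4d60-84d7-db4b4a498a18';\""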
--
Maksim Naumov
Hitmeister GmbH
Software developer
Habsburgerring 2
50674 Köln
E: maksim.naumov(a)hitmeister.de
www.hitmeister.de
HRB 59046, Amtsgericht Köln
Managing director: Dr. Gerald Schönbucher
9 years, 5 months
Re: [ovirt-users] Allowing a user to manage all machines in a pool
by Nicolás
Any hints to this, please?

-------- Original message --------
From: Nicolás <nicolas(a)devels.es>
Date: 20/11/2015 18:39 (GMT+00:00)
To: users(a)ovirt.org
Subject: [ovirt-users] Allowing a user to manage all machines in a pool

Hi,

We're running oVirt 3.5.3.1-1, and we're currently deploying some Pools
for students and teachers, so each has access to one machine in the
pool. Thus, each of them is granted the UserRole in the pool. Now the
teacher is asking us to allow him access to all students' VMs via the
Web GUI to evaluate their work.

Is there a permission to accomplish that? In the worst case I will
detach the VMs from the pool and grant the teacher the UserRole on each
of them, but I'd like to know if there's a "cleaner" way.

Thanks.

Regards,

Nicolás
9 years, 5 months