Hi All:
I just had a case where I needed to change the oVirt host and engine IP
addresses due to a data center decommission. I checked on the hosted-engine
host and there are some files I could change;
in ovirt-hosted-engine/hosted-engine.conf
ca_subject="O=simple.com, CN=1.2.3.4"
gateway=1.2.3.254
and of course I need to change the ovirtmgmt interface IP too. I think
just changing the above lines could do the trick, but where could I change
the other hosts' IPs in the cluster?
I think I will lose all the hosts once the hosted-engine host IP is
changed, as it is in a different subnet.
Is there any command line tool that could do that, or does someone have
such experience to share?
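For the file edit itself, a sketch of how the gateway line could be swapped in place (the new address here is an illustrative placeholder, not a value from this thread; on a real deployment the file lives under /etc/ovirt-hosted-engine/):

```shell
# Work on a local copy containing the two lines quoted above.
CONF=hosted-engine.conf
printf 'ca_subject="O=simple.com, CN=1.2.3.4"\ngateway=1.2.3.254\n' > "$CONF"

# Swap in the new gateway address (10.0.0.254 is an illustrative value).
sed -i 's/^gateway=.*/gateway=10.0.0.254/' "$CONF"

cat "$CONF"
```

This only covers the config file; the ovirtmgmt interface itself still has to be re-addressed separately, as noted above.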
Best Regards,
Paul.LKW
Vm suddenly paused with error "vm has paused due to unknown storage error"
by Jasper Siero 18 Feb '20
Hi all,
Since we upgraded our oVirt nodes to CentOS 7, a VM (not a specific one, but never more than one) will sometimes pause suddenly with the error "VM ... has paused due to unknown storage error". It has now happened two times in a month.
The oVirt node uses SAN storage for the VMs running on it. When a specific VM pauses with an error, the other VMs keep running without problems.
The VM runs without problems after unpausing it.
Versions:
CentOS Linux release 7.1.1503
vdsm-4.14.17-0
libvirt-daemon-1.2.8-16
vdsm.log:
VM Channels Listener::DEBUG::2015-10-25 07:43:54,382::vmChannels::95::vds::(_handle_timeouts) Timeout on fileno 78.
libvirtEventLoop::INFO::2015-10-25 07:43:56,177::vm::4602::vm.Vm::(_onIOError) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::abnormal vm stop device virtio-disk0 error eother
libvirtEventLoop::DEBUG::2015-10-25 07:43:56,178::vm::5204::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::event Suspended detail 2 opaque None
libvirtEventLoop::INFO::2015-10-25 07:43:56,178::vm::4602::vm.Vm::(_onIOError) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::abnormal vm stop device virtio-disk0 error eother
...........
libvirtEventLoop::INFO::2015-10-25 07:43:56,180::vm::4602::vm.Vm::(_onIOError) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::abnormal vm stop device virtio-disk0 error eother
specific error part in libvirt vm log:
block I/O error in device 'drive-virtio-disk0': Unknown error 32758 (32758)
...........
block I/O error in device 'drive-virtio-disk0': Unknown error 32758 (32758)
engine.log:
2015-10-25 07:44:48,945 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-40) [a43dcc8] VM diataal-prod-cas1 77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb moved from
Up --> Paused
2015-10-25 07:44:49,003 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-40) [a43dcc8] Correlation ID: null, Call Stack: null, Custom Event
ID: -1, Message: VM diataal-prod-cas1 has paused due to unknown storage error.
Has anyone experienced the same problem or knows a way to solve this?
Kind regards,
Jasper
I can't agree with you more. Modifying every box's or Virtual Machine's
HOSTS file with an FQDN and IP SHOULD work, but in my case it is not.
There are several reasons I've come to believe could be the problem
during my trial-and-error testing and learning.
FIRST - MACHINE IPs.
The machines' "Names" were not appearing in the Microsoft Active
Directory DHCP along with their assigned IPs; in other words, the DHCP
just showed an "Assigned IP", equal to the Linux machine's IP, with an
<empty> (i.e. blank, none, zilch, plain old "no-letters-or-numbers")
"Name" in the "Name" (i.e. the machine's "network name", or FQDN value
used by the Windows AD DNS service) column.
If your IP is appearing with an <empty> "name", there is no "host
name" to associate with the IP, and it becomes difficult to define an FQDN;
which isn't that useful if we're going to use the HOSTS files in all
participating machines in an oVirt installation.
I kept banging my head for three (3) long hours trying to find the
problem.
In Fedora 18, I couldn't find where the "network name" of the machine
could be defined.
I tried putting the "Additional Search Domains" and/or "DHCP Client ID"
in the Fedora 18 Desktop - under "System Settings > Hardware > Network >
Options > IPv4 Setting".
The DHCP went crazy, showing an "Aberrant MAC Address" (i.e. a really
long string value where the machine's MAC address should be), and we knew
the MAC address, as we obtained it using "ifconfig" on the machine getting
its IP from the DHCP. So we reverted these entries, rebooted, and got an
assigned IP with the proper MAC address, but still no "Name".
I kept wandering around the "Settings", seeing which one made sense,
but what the heck, I went for it.
Under "System Settings > System > Details" I found the information about
GNOME and the machine's hardware.
There was a field for "Device Name" that originally had
"localhost.localdomain"; I changed the value to "ovirtmanager", and
under "Graphics" changed "Forced Fallback Mode" to "ON".
I also installed all the Kerberos libraries and clients (i.e. authconfig-gtk,
authhub, authhub-client, krb5-apple-clents, krb5-auth-dialog,
krb5-workstation, pam-kcoda, pam-krb5, root-net.krb5) and rebooted.
VOILA…!!!
I don't know if it was the change of "Device Name" from
"localhost.localdomain" to "ovirtengine", the Kerberos libraries
install, or both. But finally the MS AD DHCP was showing the
assigned IP, the machine "Name" and the proper MAC address. Regardless,
setting the machine's "Network Name" under "System Settings > System >
Details > Device Name", with no explanation of what "Device Name" meant
or was used for, was the last place I would have imagined this network
setting could be defined.
NOTE - Somebody has to try the two steps I did together, separately, to
see which one is the real problem-solver; for me it is working, and "if
it ain't broke, don't fix it…"
Now that I have the DHCP / IP thing sorted, I have to do the DNS stuff.
To this point, I've addressed the DHCP and "Network Name" of the
IP lease (required for the DNS to work). This still doesn't completely
explain why modifying the HOSTS file (allowing me to set an IP and
non-DNS FQDN) allows me to install the oVirtEngine "as long as I do not
use the default HTTPd service parameters as suggested by the install". By
using the HOSTS file to "define" FQDNs, AND NOT using the default HTTPd
suggested changes, I'm able to install the oVirtEngine (given that I use
ports 8700 and 8701) and access the "oVirtEngine Welcome Screen", BUT
NONE of the "oVirt Portals" work… YET…!!!
More to come during the week
Richie
José E ("Richie") Piovanetti, MD, MS
M: 787-615-4884 | richiepiovanetti(a)healthcareinfopartners.com
On Aug 2, 2013, at 3:10 AM, Joop <jvdwege(a)xs4all.nl> wrote:
> Hello Ritchie,
>
>> In a conversation via IRC, someone suggested that I activate "dnsmasq"
to overcome what appears to be a DNS problem. I'll try that
other possibility once I get home later today.
>>
>> In the meantime, what do you mean by "fixing the hostname"…? I
opened and fixed the HOSTNAMES and changed it from
"localhost-localdomain" to "localhost.localdomain", and that made no
difference. Albeit, after changing it I didn't restart; I removed ovirtEngine
(using "engine-cleanup") and reinstalled via "engine-setup". Is that
what you mean…?
>>
>>
>>
>> In the meantime, the fact that even if I resolve the issue of
oVirtEngine I will not be able to connect to the oVirt Nodes unless I
have DNS resolution apparently means I should do something about
resolving via DNS in my home LAN (i.e. implement some sort of "DNS cache"
so I can resolve my home computers via DNS inside my LAN).
>>
>> Any suggestions are MORE THAN WELCOME…!!!
>>
>
> Having set up ovirt more times than I can count, I share your
feeling that it isn't always clear why things are going wrong, but in
this case I suspect that there is a rather small thing missing.
> In short: if you set up ovirt-engine, either using VirtualBox or on real
hardware, and you give your host a meaningful name AND you also add that
info to your /etc/hosts file, then things SHOULD work; no need for
dnsmasq or even bind. It would make things easier once you start adding
virt hosts to your infrastructure, since you will need to duplicate these
actions on each host (add the engine name/ip to each host, and add each host
to the others and all hosts to the engine).
>
> Just ask if you need more assistance and I will write down a small
howto that should work out of the box, else I might have some time to see
if I can get things going.
>
> Regards,
>
> Joop
>
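Joop's suggestion boils down to identical /etc/hosts entries on the engine and on every virt host; the names and addresses below are illustrative only, not taken from this thread:

```
192.168.1.10   engine.example.lan   engine
192.168.1.21   host1.example.lan    host1
192.168.1.22   host2.example.lan    host2
```

The engine's line goes into each host's file, and every host's line goes into the engine's file (and the other hosts'), which is exactly the duplication Joop describes.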
Hi Salifou,
Actually the java sdk is intentionally hiding transport level internals
so developers can stay in the java domain. If your headers are static, the
easiest way would be using a reverse proxy in the middle to intercept
requests.
Can you tell me why you need this?

On Friday, October 16, 2015 1:14 AM, Salifou Sidi M. Malick
<ssidimah(a)redhat.com> wrote:

Hi Micheal,
I have a question about the ovirt-engine-sdk-java.
Is there a way to add custom request headers to each RHEVM API call?
Here is an example of a request that I would like to do:
$ curl -v -k \
          -H "ID: user1(a)ad.xyz.com" \
          -H "PASSWORD: Pwssd" \
          -H "TARGET: kobe" \
          https://vm0.smalick.com/api/hosts
I would like to add ID, PASSWORD and TARGET as HTTP request headers.
Thanks,
Salifou
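Outside the Java SDK, the same static headers can be attached with any plain HTTP client. A minimal Python sketch using only the standard library (host and header values are taken from the curl example above; no request is actually sent here):

```python
import urllib.request

# Build the request carrying the three custom headers from the curl example.
req = urllib.request.Request(
    "https://vm0.smalick.com/api/hosts",
    headers={"ID": "user1@ad.xyz.com", "PASSWORD": "Pwssd", "TARGET": "kobe"},
)

# urllib stores header names capitalized; nothing is sent over the network here.
print(req.get_header("Id"))       # user1@ad.xyz.com
print(req.get_header("Target"))   # kobe
```

Actually sending it is just `urllib.request.urlopen(req)`; a reverse proxy that injects the same headers achieves this without touching client code, which matches the advice above.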
Minutes: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-05-23-14.00.html
Minutes (text): http://ovirt.org/meetings/ovirt/2012/ovirt.2012-05-23-14.00.txt
Log: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-05-23-14.00.log.html
=========================
#ovirt: oVirt Weekly Sync
=========================
Meeting started by mburns at 14:00:23 UTC. The full logs are available
at http://ovirt.org/meetings/ovirt/2012/ovirt.2012-05-23-14.00.log.html
.
Meeting summary
---------------
* agenda and roll call (mburns, 14:00:41)
* Status of next release (mburns, 14:05:17)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=822145 (mburns,
14:05:29)
* AGREED: freeze date and beta release delayed by 1 week to 2012-06-07
(mburns, 14:12:33)
* post freeze, release notes flag needs to be used where required
(mburns, 14:14:21)
* https://bugzilla.redhat.com/show_bug.cgi?id=821867 is a VDSM blocker
for 3.1 (oschreib, 14:17:27)
* ACTION: dougsland to fix upstream vdsm right now, and open a bug on
libvirt augeas (oschreib, 14:21:44)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=822158 (mburns,
14:23:39)
* assignee not available, update to come tomorrow (mburns, 14:24:59)
* ACTION: oschreib to make sure BZ#822158 is handled quickly
(oschreib, 14:25:29)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=824397 (mburns,
14:28:55)
* 824397 expected to be merged prior next week's meeting (mburns,
14:29:45)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=824420 (mburns,
14:30:15)
* tracker for node based on F17 (mburns, 14:30:28)
* blocked by util-linux bug currently (mburns, 14:30:40)
* new build expected from util-linux maintainer in next couple days
(mburns, 14:30:55)
* sub-project status -- engine (mburns, 14:32:49)
* nothing to report outside of blockers discussed above (mburns,
14:34:00)
* sub-project status -- vdsm (mburns, 14:34:09)
* nothing outside of blockers above (mburns, 14:35:36)
* sub-project status -- node (mburns, 14:35:43)
* working on f17 migration, but blocked by util-linux bug (mburns,
14:35:58)
* should be ready for freeze deadline (mburns, 14:36:23)
* Review decision on Java 7 and Fedora jboss rpms in oVirt Engine
(mburns, 14:36:43)
* Java7 basically working (mburns, 14:37:19)
* LINK: http://gerrit.ovirt.org/#change,4416 (oschreib, 14:39:35)
* engine will make ack/nack statement next week (mburns, 14:39:49)
* fedora jboss rpms patch is in review, short tests passed (mburns,
14:40:04)
* engine ack on fedora jboss rpms and java7 needed next week (mburns,
14:44:47)
* Upcoming Workshops (mburns, 14:45:11)
* NetApp workshop set for Jan 22-24 2013 (mburns, 14:47:16)
* already at half capacity for Workshop at LinuxCon Japan (mburns,
14:47:37)
* please continue to promote it (mburns, 14:48:19)
* proposal: board meeting to be held at all major workshops (mburns,
14:48:43)
* LINK: http://www.ovirt.org/wiki/OVirt_Global_Workshops (mburns,
14:49:30)
* Open Discussion (mburns, 14:50:12)
* oVirt/Quantum integration discussion will be held separately
(mburns, 14:50:43)
Meeting ended at 14:52:47 UTC.
Action Items
------------
* dougsland to fix upstream vdsm right now, and open a bug on libvirt
augeas
* oschreib to make sure BZ#822158 is handled quickly
Action Items, by person
-----------------------
* dougsland
* dougsland to fix upstream vdsm right now, and open a bug on libvirt
augeas
* oschreib
* oschreib to make sure BZ#822158 is handled quickly
* **UNASSIGNED**
* (none)
People Present (lines said)
---------------------------
* mburns (98)
* oschreib (55)
* doronf (12)
* lh (11)
* sgordon (8)
* dougsland (8)
* ovirtbot (6)
* ofrenkel (4)
* cestila (2)
* RobertMdroid (2)
* ydary (2)
* rickyh (1)
* yzaslavs (1)
* cctrieloff (1)
* mestery_ (1)
* dustins (1)
Generated by `MeetBot`_ 0.1.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot
Hi,
We still have blockers for the oVirt 3.5.1 RC release, so we need to postpone it until they're fixed.
The bug tracker [1] shows 1 open blocker:
Bug ID Whiteboard Status Summary
1160846 sla NEW Can't add disk to VM without specifying disk profile when the storage domain has more than one disk profile
In order to stabilize the release a new branch ovirt-engine-3.5.1 will be created from the same git hash used for composing the RC.
- ACTION: Gilad, please provide an ETA on the above blocker; the new proposed RC date will be decided based on the given ETA.
Maintainers:
- Please be sure that the 3.5 snapshot allows creating VMs
- Please be sure that no pending patches are going to block the release
- If any patch must block the RC release please raise the issue as soon as possible.
There are still 57 bugs [2] targeted to 3.5.1.
Excluding node and documentation bugs we still have 37 bugs [3] targeted to 3.5.1.
Maintainers / Assignee:
- Please add the bugs to the tracker if you think that 3.5.1 should not be released without them fixed.
- ACTION: Please update the target to 3.5.2 or later for bugs that won't be in 3.5.1:
it will ease gathering the blocking bugs for next releases.
- ACTION: Please fill in the release notes; the page has been created here [4]
Community:
- If you're testing oVirt 3.5 nightly snapshot, please add yourself to the test page [5]
[1] http://bugzilla.redhat.com/1155170
[2] http://goo.gl/7G0PDV
[3] http://goo.gl/6gUbVr
[4] http://www.ovirt.org/OVirt_3.5.1_Release_Notes
[5] http://www.ovirt.org/Testing/oVirt_3.5.1_Testing
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Hi Community,
Currently, there is no single document describing supported
(which means: working) upgrade scenarios.
I think the project has matured enough to have such a supported
upgrade path, which should be considered in the development of new
releases.
As far as I know, it is currently supported to upgrade
from x.y.z to x.y.z+1 and from x.y.z to x.y+1.z,
but not from x.y-1.z to x.y+1.z directly.
Maybe this should be put together in a wiki page at least.
Also it would be cool to know how long a single "release"
will be supported.
In this context I would define a release as a version
bump from x.y.z to x.y+1.z or to x+1.y.z;
a bump in z would be a bugfix release.
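The version rules described above can be sketched as a small predicate (my reading of the thread, not an official support statement; whether z must match on a minor bump is left open here):

```python
# Supported: x.y.z -> x.y.z+1 (bugfix) and x.y.z -> x.y+1.z (minor bump).
# Not supported: skipping a minor version, e.g. x.y-1.z -> x.y+1.z.
def is_supported_upgrade(frm, to):
    fx, fy, fz = frm
    tx, ty, tz = to
    if (fx, fy) == (tx, ty) and tz == fz + 1:
        return True  # bugfix release within the same x.y
    if fx == tx and ty == fy + 1:
        return True  # next minor release
    return False

print(is_supported_upgrade((3, 4, 2), (3, 4, 3)))  # True: bugfix
print(is_supported_upgrade((3, 4, 2), (3, 5, 0)))  # True: next minor
print(is_supported_upgrade((3, 3, 0), (3, 5, 0)))  # False: skips 3.4
```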
The question is, how long will we get bugfix releases
for a given version?
What are your thoughts?
--
Mit freundlichen Grüßen / Regards
Sven Kieske
Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
[Users] Nested virtualization with Opteron 2nd generation and oVirt 3.1 possible?
by Gianluca Cecchi 11 May '19
Hello,
I have 2 physical servers with Opteron 2nd gen cpu.
There is CentOS 6.3 installed and some VM already configured on them.
Their /proc/cpuinfo contains
...
model name : Dual-Core AMD Opteron(tm) Processor 8222
...
The kvm_amd kernel module is loaded with its nested option enabled by default
# systool -m kvm_amd -v
Module = "kvm_amd"
Attributes:
initstate = "live"
refcnt = "15"
srcversion = "43D8067144E7D8B0D53D46E"
Parameters:
nested = "1"
npt = "1"
...
I already configured a Fedora 17 VM as the oVirt 3.1 engine.
I'm trying to configure another VM as an oVirt 3.1 node with
ovirt-node-iso-2.5.5-0.1.fc17.iso.
It seems I'm not able to configure it so that the oVirt install doesn't complain.
After some attempts, I tried this in my vm.xml for the cpu:
<cpu mode='custom' match='exact'>
<model fallback='allow'>athlon</model>
<vendor>AMD</vendor>
<feature policy='require' name='pni'/>
<feature policy='require' name='rdtscp'/>
<feature policy='force' name='svm'/>
<feature policy='require' name='clflush'/>
<feature policy='require' name='syscall'/>
<feature policy='require' name='lm'/>
<feature policy='require' name='cr8legacy'/>
<feature policy='require' name='ht'/>
<feature policy='require' name='lahf_lm'/>
<feature policy='require' name='fxsr_opt'/>
<feature policy='require' name='cx16'/>
<feature policy='require' name='extapic'/>
<feature policy='require' name='mca'/>
<feature policy='require' name='cmp_legacy'/>
</cpu>
Inside node /proc/cpuinfo becomes
processor : 3
vendor_id : AuthenticAMD
cpu family : 6
model : 2
model name : QEMU Virtual CPU version 0.12.1
stepping : 3
microcode : 0x1000065
cpu MHz : 3013.706
cache size : 512 KB
fpu : yes
fpu_exception : yes
cpuid level : 2
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat
pse36 clflush mmx fxsr sse sse2 syscall mmxext fxsr_opt lm nopl pni
cx16 hypervisor lahf_lm cmp_legacy cr8_legacy
bogomips : 6027.41
TLB size : 1024 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
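A quick sanity check of the guest's flags line above (captured here as a literal string; on a live guest you would grep /proc/cpuinfo instead):

```shell
# Flags reported inside the guest, as shown above: note there is no "svm".
flags="fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall mmxext fxsr_opt lm nopl pni cx16 hypervisor lahf_lm cmp_legacy cr8_legacy"

if echo "$flags" | grep -qw svm; then
    echo "svm present: hardware virtualization visible to the guest"
else
    echo "svm missing: oVirt will refuse to activate the node"
fi
```

This is exactly the check that matters: despite `policy='force'` on svm in the vm.xml above, the athlon model apparently does not carry the flag into the guest.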
2 questions:
1) Is there any combination in xml file to give to my VM so that oVirt
doesn't complain about missing hardware virtualization with this
processor?
2) Suppose 1) is not possible in my case and I still want to test the
interface and try some config operations, to see for example the
differences with RHEV 3.0: what can I do?
At the moment this complaint about hw virtualization prevents me from
activating the node.
I get
Installing Host f17ovn01. Step: RHEV_INSTALL.
Host f17ovn01 was successfully approved.
Host f17ovn01 running without virtualization hardware acceleration
Detected new Host f17ovn01. Host state was set to Non Operational.
Host f17ovn01 moved to Non-Operational state.
Host f17ovn01 moved to Non-Operational state as host does not meet the
cluster's minimum CPU level. Missing CPU features : CpuFlags
Can I lower the requirements to be able to operate without hw
virtualization in 3.1?
Thanks in advance,
Gianluca
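For question 2), one workaround often mentioned for test-only setups (the option name is an assumption to verify against your VDSM version) is enabling VDSM's fake KVM support on the node and restarting vdsmd:

```ini
# /etc/vdsm/vdsm.conf -- test-only sketch; do not use in production.
# Makes VDSM report KVM capability even without hardware virtualization,
# so the engine stops flagging the host as missing CpuFlags.
[vars]
fake_kvm_support = true
```

VMs on such a host run with pure emulation and will be very slow, which is acceptable only for trying out the management interface.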
Hi Experts,
We are integrating ovirt with our internal cloud.
Here we installed cloudinit in vm and then converted vm to template. We
deploy template with initial run parameter Hostname, IP Address, Gateway
and DNS.
But when we power it ON, the initial run parameters are not getting
pushed inside the VM. It does work when we power on the VM using the
Run Once option on the oVirt portal.
I believe we need to power ON the VM using a Run Once API, but we are
not able to find this API.
Can someone help with this?
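A dry-run sketch of what such a call could look like against the REST API's start action (the URL layout, credentials, and element names are assumptions; check /api?rsdl on your engine before relying on them):

```shell
# Prints the request instead of sending it, so the payload can be reviewed.
# Element names (use_cloud_init, initialization, host_name) are assumptions
# to verify against your engine's API schema.
ENGINE=https://engine.example.com            # hypothetical engine URL
VM_ID=64a84b40-6050-4a96-a59d-d557a317c38c   # illustrative VM UUID
BODY='<action><use_cloud_init>true</use_cloud_init><vm><initialization><host_name>web01</host_name></initialization></vm></action>'
echo "curl -k -u admin@internal:PASSWORD -H 'Content-Type: application/xml' -d '$BODY' $ENGINE/api/vms/$VM_ID/start"
```

Settings sent this way are one-shot, matching the behaviour of the portal's Run Once dialog.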
I got reply on this query last time but unfortunately mail got deleted.
Thanks & Regards
Chandrahasa S
=====-----=====-----=====
Notice: The information contained in this e-mail
message and/or attachments to it may contain
confidential or privileged information. If you are
not the intended recipient, any dissemination, use,
review, distribution, printing or copying of the
information contained in this e-mail message
and/or attachments to it are strictly prohibited. If
you have received this communication in error,
please notify us by reply e-mail or telephone and
immediately and permanently delete the message
and any attachments. Thank you
I need to import a KVM virtual machine from a standalone KVM host into my
oVirt cluster. The standalone host is using local storage, and my oVirt
cluster is using iSCSI. Can I please have some advice on what's the best
way to get this system into oVirt?
Right now I see it as copying the .img file to somewhere… but I have no
idea where to start. I found this directory on one of my oVirt nodes:
/rhev/data-center/mnt/blockSD/fe633237-14b2-4f8b-aedd-bbf753bcafaf/master/vms
But inside are just directories that appear to have UUID-type names, and I
can't tell what belongs to which VM.
Any advice would be greatly appreciated.
Thanks,
jonathan
________________________________
This is a PRIVATE message. If you are not the intended recipient, please
delete without copying and kindly advise us by e-mail of the mistake in
delivery. NOTE: Regardless of content, this e-mail shall not operate to
bind SKOPOS to any order or other contract unless pursuant to explicit
written agreement or government initiative expressly permitting the use of
e-mail for such purpose.
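One possible route (a sketch with hypothetical paths; depending on versions, an NFS export domain or virt-v2v may be the cleaner path) is converting the image to raw, then writing it onto a blank oVirt disk:

```shell
# Dry run: prints the commands for review instead of executing them.
# All paths are hypothetical -- adjust to your environment.
SRC_IMG=/var/lib/libvirt/images/standalone-vm.img
EXPORT_MNT=/mnt/ovirt-export        # e.g. an NFS export domain mount
echo "qemu-img convert -p -O raw $SRC_IMG $EXPORT_MNT/standalone-vm.raw"
echo "# then create a VM with a blank preallocated disk of the same size"
echo "# in oVirt and dd the raw image onto that disk's logical volume"
```

The UUID-named directories under master/vms hold VM metadata (OVF files) rather than the disk images themselves, which is why matching them to VMs by eye is hard.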
Hoping someone can help me out.
For some reason I keep getting the following error when I try to reset
my password:
Reset password
* Error sending mail: Failed to add recipient: jvandewege(a)nieuwland.nl
[SMTP: Invalid response code received from server (code: 554,
response: 5.7.1 <jvandewege(a)nieuwland.nl>: Relay access denied)]
Complete this form to receive an e-mail reminder of your account details.
Since I receive the ML on this address it is definitely a working address.
Tried my home account too and same error but then for my home provider,
Relay denied ??
A puzzled user,
Joop
Hi All,
I need your help. Anyone who encounter the below error and have the
solution? Can you help me how to fix this?
MainThread::INFO::2015-01-27
10:22:53,247::ovirt-guest-agent::57::root::Starting oVirt guest agent
MainThread::ERROR::2015-01-27
10:22:53,248::ovirt-guest-agent::138::root::Unhandled exception in oVirt
guest agent!
Traceback (most recent call last):
File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 132, in ?
agent.run(daemon, pidfile)
File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 63, in run
self.agent = LinuxVdsAgent(config)
File "/usr/share/ovirt-guest-agent/GuestAgentLinux2.py", line 371, in
__init__
AgentLogicBase.__init__(self, config)
File "/usr/share/ovirt-guest-agent/OVirtAgentLogic.py", line 171, in
__init__
self.vio = VirtIoChannel(config.get("virtio", "device"))
File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 150, in
__init__
self._stream = VirtIoStream(vport_name)
File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 131, in
__init__
self._vport = os.open(vport_name, os.O_RDWR)
OSError: [Errno 2] No such file or directory:
'/dev/virtio-ports/com.redhat.rhevm.vdsm'
Thanks
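The traceback shows the agent failing because the guest has no virtio-serial port named com.redhat.rhevm.vdsm. On a libvirt/KVM guest that port comes from a channel definition like the sketch below (under oVirt the engine normally adds it automatically, so its absence suggests the VM was started outside oVirt or from an incomplete definition):

```xml
<!-- Sketch of the virtio-serial channel the oVirt guest agent expects.
     The host-side source path is illustrative; libvirt generates it. -->
<channel type='unix'>
  <source mode='bind' path='/var/lib/libvirt/qemu/channels/GUEST.com.redhat.rhevm.vdsm'/>
  <target type='virtio' name='com.redhat.rhevm.vdsm'/>
</channel>
```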
Hello everyone,
As part of our efforts to raise awareness of and educate more developers
about the oVirt project, we will be holding an oVirt workshop at
LinuxCon Japan, taking place on June 8, 2012. You can find full details
of the workshop agenda on the LinuxCon Japan site. [0]
Registration for the workshop is now open and is free of charge for the
first 50 participants. We will also look at adding additional
participant slots to the workshop based on demand.
Attendees who register for LinuxCon Japan via the workshop registration
link [1] will also be eligible for a discount on their LinuxCon Japan
registration.
Please spread the word to folks you think would find the workshop
useful. If they have already registered for LinuxCon Japan, they can
simply edit their existing registration to include the workshop.
[0] -
https://events.linuxfoundation.org/events/linuxcon-japan/ovirt-gluster-work…
[1] - http://www.regonline.com/Register/Checkin.aspx?EventID=1099949
Cheers,
LH
--
Leslie Hawthorn
Community Action and Impact
Open Source and Standards @ Red Hat
identi.ca/lh
twitter.com/lhawthorn
Hi,
it's me again....
I started my oVirt 'project' as a proof of concept, but as always happens, it became production.
Now I have to move the iSCSI Master Data domain to the real iSCSI target.
Is there any way to do this, and to get rid of the old Master Data?
Thank you for your help
Hans-Joachim
Hi all,
I can't login to the hypervisor, neither as root nor as admin, neither from
another computer via ssh nor directly on the machine.
I'm sure I remember the passwords. This is not the first time it happens:
last time I reinstalled the host. Everything worked ok for about 2 weeks,
and then...
What's going on? Is it a known behavior, somehow?
Before rebooting the hypervisor, I would like to try something. RHEV
Manager talks to RHEV-H without any problems. Can I login with RHEV-M's
keys? How?
Thank you all.
Alberto Scotto
[Blue]
Via Cardinal Massaia, 83
10147 - Torino - ITALY
phone: +39 011 29100
al.scotto(a)reply.it
www.reply.it
________________________________
--
The information transmitted is intended for the person or entity to which
it is addressed and may contain confidential and/or privileged material.
Any review, retransmission, dissemination or other use of, or taking of
any action in reliance upon, this information by persons or entities other
than the intended recipient is prohibited. If you received this in error,
please contact the sender and delete the material from any computer.
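One thing worth trying before a reboot (the key path is an assumption; it varies between RHEV-M and oVirt engine versions): the manager keeps an SSH key it uses for host deployment, and the hypervisor may still accept it. A dry-run sketch:

```shell
# Prints the command instead of running it, so the key path can be
# verified first on your manager machine.
ENGINE_KEY=/etc/pki/ovirt-engine/keys/engine_id_rsa   # assumed path
HYPERVISOR=rhevh-host.example.com                     # hypothetical host
echo "ssh -i $ENGINE_KEY root@$HYPERVISOR"
```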
11 Mar '19
In case I am able to create an installer, what is the name of the application that needs to be there in order for oVirt to detect that the oVirt Guest Agent is installed?
I have created an installer adding the OvirtGuestService files and the Product Name to be shown, apart from the command-line post-install steps.
I have tried with "ovirt-guest-agent" and "Ovirt guest agent" as names for the application installed on the Windows 7 guest, and even though both are presented on the oVirt VM Applications tab, in any case the LogonVDSCommand appears.
Is there another option to make it work now?
Thanks in advance,
Felipe
Can any one help on this.
Thanks & Regards
Chandrahasa S
From: Chandrahasa S/MUM/TCS
To: users(a)ovirt.org
Date: 28-07-2015 15:20
Subject: Need VM run once api
Hi Experts,
We are integrating ovirt with our internal cloud.
Here we installed cloudinit in vm and then converted vm to template. We
deploy template with initial run parameter Hostname, IP Address, Gateway
and DNS.
but when we power ON initial, run parameter is not getting pushed to
inside the vm. But its working when we power on VM using run once option
on Ovirt portal.
I believe we need to power ON vm using run once API, but we are not able
get this API.
Can some one help on this.
I got reply on this query last time but unfortunately mail got deleted.
Thanks & Regards
Chandrahasa S
Hi Lucas,
Please send mails to the list next time.
can you please do rpm -qa |grep qemu.
also, can you try a different windows image?
Thanks,
Dafna
On 07/14/2014 02:03 PM, lucas castro wrote:
> On the host there I've tried to run the vm, I use a centOS 6.5
> and checked, no update for qemu, libvirt or related package.
--
Dafna Ron
Hi all,
Upcoming in 3.6 is enhancement for managing the hosted engine VM.
In short, we want to:
* Allow editing the Hosted engine VM, storage domain, disks, networks etc
* Have a shared configuration for the hosted engine VM
* Have a backup for the hosted engine VM configuration
please review and comment on the wiki below:
http://www.ovirt.org/Hosted_engine_VM_management
Thanks,
Roy
Hi Kyle,
We may have seen something similar in the past but I think there were vlans involved.
Is it the same for you?
Tony / Dan, does it ring a bell?
Hi,
I am having an issue with getting SSO to work when a standard user(UserRole)
logs in to the UserPortal.
The user has permission to use only this VM, so after login the console is
automatically opened for that VM.
Problem is that it doesn't login on the VM system with the provided
credentials. Manual login at the console works without any issues.
HBAC-rule check on IPA shows access is granted. Client has SELINUX in
permissive mode and a disabled firewalld.
On the client side I do see some PAM related errors in the logs (see details
below). Extensive Google search on error 17 "Failure setting user
credentials" didn't show helpful information :-(
AFAIK this is a pretty standard set-up, all working with RH-family
products. I would expect others to encounter this issue as well.
If someone knows any solution or has some directions to fix this it would be
greatly appreciated.
Thanks,
Paul
------------------------------------------------------
System setup: I have 3 systems
The connection between the Engine and IPA is working fine. (I can log in
with IPA users etc.) Connection is made according to this document:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html-single/Administration_Guide/index.html#sect-Configuring_an_External_LDAP_Provider
Configuration of the client is done according to this document:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Virtual_Machine_Management_Guide/chap-Additional_Configuration.html#sect-Configuring_Single_Sign-On_for_Virtual_Machines
--- Hosted Engine:
[root@engine ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@engine ~]# uname -a
Linux engine.DOMAIN.COM 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16
17:03:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[root@engine ~]# rpm -qa | grep ovirt
ovirt-vmconsole-1.0.0-1.el7.centos.noarch
ovirt-engine-restapi-3.6.2.6-1.el7.centos.noarch
ovirt-setup-lib-1.0.1-1.el7.centos.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-3.6.3.4-1.el7.centos.noarch
ovirt-engine-setup-3.6.3.4-1.el7.centos.noarch
ovirt-image-uploader-3.6.0-1.el7.centos.noarch
ovirt-engine-extension-aaa-jdbc-1.0.5-1.el7.noarch
ovirt-host-deploy-1.4.1-1.el7.centos.noarch
ovirt-engine-extension-aaa-ldap-setup-1.1.2-1.el7.centos.noarch
ovirt-engine-wildfly-overlay-8.0.4-1.el7.noarch
ovirt-engine-wildfly-8.2.1-1.el7.x86_64
ovirt-vmconsole-proxy-1.0.0-1.el7.centos.noarch
ovirt-engine-tools-3.6.2.6-1.el7.centos.noarch
ovirt-engine-dbscripts-3.6.2.6-1.el7.centos.noarch
ovirt-engine-backend-3.6.2.6-1.el7.centos.noarch
ovirt-engine-3.6.2.6-1.el7.centos.noarch
ovirt-engine-extension-aaa-ldap-1.1.2-1.el7.centos.noarch
ovirt-engine-setup-base-3.6.3.4-1.el7.centos.noarch
ovirt-engine-setup-plugin-ovirt-engine-3.6.3.4-1.el7.centos.noarch
ovirt-engine-setup-plugin-websocket-proxy-3.6.3.4-1.el7.centos.noarch
ovirt-engine-vmconsole-proxy-helper-3.6.3.4-1.el7.centos.noarch
ovirt-engine-cli-3.6.2.0-1.el7.centos.noarch
ovirt-host-deploy-java-1.4.1-1.el7.centos.noarch
ovirt-engine-userportal-3.6.2.6-1.el7.centos.noarch
ovirt-engine-webadmin-portal-3.6.2.6-1.el7.centos.noarch
ovirt-guest-agent-common-1.0.11-1.el7.noarch
ovirt-release36-003-1.noarch
ovirt-iso-uploader-3.6.0-1.el7.centos.noarch
ovirt-engine-lib-3.6.3.4-1.el7.centos.noarch
ovirt-engine-sdk-python-3.6.3.0-1.el7.centos.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-3.6.3.4-1.el7.centos.noarch
ovirt-engine-websocket-proxy-3.6.3.4-1.el7.centos.noarch
ovirt-log-collector-3.6.1-1.el7.centos.noarch
ovirt-engine-extensions-api-impl-3.6.3.4-1.el7.centos.noarch
--- FreeIPA:
[root@ipa01 ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@ipa01 ~]# uname -a
Linux ipa01.DOMAIN.COM 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16 17:03:50
UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[root@ipa01 ~]# rpm -qa | grep ipa
ipa-python-4.2.0-15.el7_2.6.x86_64
ipa-client-4.2.0-15.el7_2.6.x86_64
python-libipa_hbac-1.13.0-40.el7_2.1.x86_64
python-iniparse-0.4-9.el7.noarch
libipa_hbac-1.13.0-40.el7_2.1.x86_64
sssd-ipa-1.13.0-40.el7_2.1.x86_64
ipa-admintools-4.2.0-15.el7_2.6.x86_64
ipa-server-4.2.0-15.el7_2.6.x86_64
ipa-server-dns-4.2.0-15.el7_2.6.x86_64
--- Client:
[root@test06 ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@test06 ~]# uname -a
Linux test06.DOMAIN.COM 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16
17:03:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[root@test06 ~]# rpm -qa | grep ipa
python-libipa_hbac-1.13.0-40.el7_2.1.x86_64
python-iniparse-0.4-9.el7.noarch
sssd-ipa-1.13.0-40.el7_2.1.x86_64
ipa-client-4.2.0-15.0.1.el7.centos.6.x86_64
libipa_hbac-1.13.0-40.el7_2.1.x86_64
ipa-python-4.2.0-15.0.1.el7.centos.6.x86_64
device-mapper-multipath-0.4.9-85.el7.x86_64
device-mapper-multipath-libs-0.4.9-85.el7.x86_64
[root@test06 ~]# rpm -qa | grep guest-agent
qemu-guest-agent-2.3.0-4.el7.x86_64
ovirt-guest-agent-pam-module-1.0.11-1.el7.x86_64
ovirt-guest-agent-gdm-plugin-1.0.11-1.el7.noarch
ovirt-guest-agent-common-1.0.11-1.el7.noarch
---------------------------------------------------
Relevant logs:
--- Engine:
//var/log/ovirt-engine/engine
2016-03-17 15:22:10,516 INFO
[org.ovirt.engine.core.bll.aaa.LoginUserCommand] (default task-22) []
Running command: LoginUserCommand internal: false.
2016-03-17 15:22:10,568 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-22) [] Correlation ID: null, Call Stack: null, Custom Event
ID: -1, Message: User test6@DOMAIN logged in.
2016-03-17 15:22:13,795 WARN
[org.ovirt.engine.core.dal.job.ExecutionMessageDirector] (default task-6)
[7400ae46] The message key 'VmLogon' is missing from
'bundles/ExecutionMessages'
2016-03-17 15:22:13,839 INFO [org.ovirt.engine.core.bll.VmLogonCommand]
(default task-6) [7400ae46] Running command: VmLogonCommand internal: false.
Entities affected : ID: 64a84b40-6050-4a96-a59d-d557a317c38c Type: VMAction
group CONNECT_TO_VM with role type USER
2016-03-17 15:22:13,842 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.VmLogonVDSCommand] (default
task-6) [7400ae46] START, VmLogonVDSCommand(HostName = host01,
VmLogonVDSCommandParameters:{runAsync='true',
hostId='225157c0-224b-4aa6-9210-db4de7c7fc30',
vmId='64a84b40-6050-4a96-a59d-d557a317c38c', domain='DOMAIN-authz',
password='***', userName='test6@DOMAIN'}), log id: 2015a1e0
2016-03-17 15:22:14,848 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.VmLogonVDSCommand] (default
task-6) [7400ae46] FINISH, VmLogonVDSCommand, log id: 2015a1e0
2016-03-17 15:22:15,317 INFO [org.ovirt.engine.core.bll.SetVmTicketCommand]
(default task-18) [10dad788] Running command: SetVmTicketCommand internal:
true. Entities affected : ID: 64a84b40-6050-4a96-a59d-d557a317c38c Type:
VMAction group CONNECT_TO_VM with role type USER
2016-03-17 15:22:15,322 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand] (default
task-18) [10dad788] START, SetVmTicketVDSCommand(HostName = host01,
SetVmTicketVDSCommandParameters:{runAsync='true',
hostId='225157c0-224b-4aa6-9210-db4de7c7fc30',
vmId='64a84b40-6050-4a96-a59d-d557a317c38c', protocol='SPICE',
ticket='rd8avqvdBnRl', validTime='120', userName='test6',
userId='10b2da3e-6401-4a09-a330-c0780bc0faef',
disconnectAction='LOCK_SCREEN'}), log id: 72efb73b
2016-03-17 15:22:16,340 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand] (default
task-18) [10dad788] FINISH, SetVmTicketVDSCommand, log id: 72efb73b
2016-03-17 15:22:16,377 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-18) [10dad788] Correlation ID: 10dad788, Call Stack: null,
Custom Event ID: -1, Message: User test6@DOMAIN initiated console session
for VM test06
2016-03-17 15:22:19,418 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-53) [] Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: User test6@DOMAIN-authz is connected to
VM test06.
--- Client:
/var/log/ovirt-guest-agent/ovirt-guest-agent.log
MainThread::INFO::2016-03-17
15:20:58,145::ovirt-guest-agent::57::root::Starting oVirt guest agent
CredServer::INFO::2016-03-17 15:20:58,214::CredServer::257::root::CredServer
is running...
Dummy-1::INFO::2016-03-17 15:20:58,216::OVirtAgentLogic::294::root::Received
an external command: lock-screen...
Dummy-1::INFO::2016-03-17 15:22:13,104::OVirtAgentLogic::294::root::Received
an external command: login...
Dummy-1::INFO::2016-03-17 15:22:13,104::CredServer::207::root::The following
users are allowed to connect: [0]
Dummy-1::INFO::2016-03-17 15:22:13,104::CredServer::273::root::Opening
credentials channel...
Dummy-1::INFO::2016-03-17 15:22:13,105::CredServer::132::root::Emitting user
authenticated signal (651416).
CredChannel::INFO::2016-03-17 15:22:13,188::CredServer::225::root::Incomming
connection from user: 0 process: 2570
CredChannel::INFO::2016-03-17 15:22:13,188::CredServer::232::root::Sending
user's credential (token: 651416)
Dummy-1::INFO::2016-03-17 15:22:13,189::CredServer::277::root::Credentials
channel was closed.
/var/log/secure
Mar 17 15:21:07 test06 gdm-launch-environment]:
pam_unix(gdm-launch-environment:session): session opened for user gdm by
(uid=0)
Mar 17 15:21:10 test06 polkitd[749]: Registered Authentication Agent for
unix-session:c1 (system bus name :1.34 [gnome-shell --mode=gdm], object path
/org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Mar 17 15:22:13 test06 gdm-ovirtcred]: pam_sss(gdm-ovirtcred:auth):
authentication failure; logname= uid=0 euid=0 tty= ruser= rhost= user=test6
Mar 17 15:22:13 test06 gdm-ovirtcred]: pam_sss(gdm-ovirtcred:auth): received
for user test6: 17 (Failure setting user credentials)
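The "17" that pam_sss reports here is a Linux-PAM return code, not an SSSD-specific number. As a minimal sketch (the `PAM_ERRORS` dict is illustrative, with values as defined in Linux-PAM's `<security/_pam_types.h>`), code 17 decodes to PAM_CRED_ERR, whose standard message is exactly "Failure setting user credentials":

```python
# Illustrative subset of Linux-PAM return codes; numeric values
# match the definitions in <security/_pam_types.h>.
PAM_ERRORS = {
    0: "PAM_SUCCESS",
    7: "PAM_AUTH_ERR",    # "Authentication failure"
    15: "PAM_CRED_UNAVAIL",
    17: "PAM_CRED_ERR",   # "Failure setting user credentials"
}

# Code 17 from the gdm-ovirtcred line above:
print(PAM_ERRORS[17])  # PAM_CRED_ERR
```

So the PAM layer is only relaying a credential-setting failure from the backend; the underlying cause has to be found further down, in the SSSD/Kerberos logs.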
/var/log/sssd/krb5_child.log (debug-level 10)
(Thu Mar 17 15:22:13 2016) [[sssd[krb5_child[2575]]]] [get_and_save_tgt]
(0x0020): 1234: [-1765328360][Preauthentication failed]
(Thu Mar 17 15:22:13 2016) [[sssd[krb5_child[2575]]]] [map_krb5_error]
(0x0020): 1303: [-1765328360][Preauthentication failed]
(Thu Mar 17 15:22:13 2016) [[sssd[krb5_child[2575]]]] [k5c_send_data]
(0x0200): Received error code 1432158215
(Thu Mar 17 15:22:13 2016) [[sssd[krb5_child[2575]]]] [pack_response_packet]
(0x2000): response packet size: [4]
(Thu Mar 17 15:22:13 2016) [[sssd[krb5_child[2575]]]] [k5c_send_data]
(0x4000): Response sent.
(Thu Mar 17 15:22:13 2016) [[sssd[krb5_child[2575]]]] [main] (0x0400):
krb5_child completed successfully
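The raw number -1765328360 in those krb5_child lines can be mapped back to a Kerberos protocol error. MIT Kerberos encodes protocol errors as com_err codes offset from the krb5 error-table base (-1765328384), so subtracting the base recovers the RFC 4120 KRB-ERROR number; a small sketch (the helper name is mine):

```python
# MIT Kerberos reports protocol errors as com_err codes; the krb5
# error table base is -1765328384 (ERROR_TABLE_BASE_krb5), so
# subtracting it yields the RFC 4120 error number.
KRB5_ERROR_TABLE_BASE = -1765328384

def krb5_protocol_error(com_err_code: int) -> int:
    """Map a com_err code from krb5_child.log to its RFC 4120 number."""
    return com_err_code - KRB5_ERROR_TABLE_BASE

# -1765328360 as logged above:
print(krb5_protocol_error(-1765328360))  # 24 == KDC_ERR_PREAUTH_FAILED
```

Error 24 (KDC_ERR_PREAUTH_FAILED) means the KDC rejected the pre-authentication data, i.e. it treated the password handed over by the SSO channel as wrong, which matches the "Preauthentication failed" text in the same lines.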
/var/log/sssd/sssd_DOMAIN.COM.log (debug-level 10)
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [be_pam_handler] (0x0100):
Got request with the following data
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
command: PAM_AUTHENTICATE
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
domain: DOMAIN.COM
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
user: test6
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
service: gdm-ovirtcred
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
tty:
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
ruser:
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
rhost:
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
authtok type: 1
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
newauthtok type: 0
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
priv: 1
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
cli_pid: 2570
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
logon name: not set
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [krb5_auth_queue_send]
(0x1000): Wait queue of user [test6] is empty, running request
[0x7fe30df03cc0] immediately.
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [krb5_setup] (0x4000): No
mapping for: test6
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [ldb] (0x4000): Added
timed event "ltdb_callback": 0x7fe30df07120
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [ldb] (0x4000): Added
timed event "ltdb_timeout": 0x7fe30df16590
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [ldb] (0x4000): Running
timer event 0x7fe30df07120 "ltdb_callback"
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [ldb] (0x4000): Destroying
timer event 0x7fe30df16590 "ltdb_timeout"
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [ldb] (0x4000): Ending
timer event 0x7fe30df07120 "ltdb_callback"
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [fo_resolve_service_send]
(0x0100): Trying to resolve service 'IPA'
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [get_server_status]
(0x1000): Status of server 'ipa01.DOMAIN.COM' is 'working'
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [get_port_status]
(0x1000): Port status of port 389 for server 'ipa01.DOMAIN.COM' is 'working'
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[fo_resolve_service_activate_timeout] (0x2000): Resolve timeout set to 6
seconds
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [resolve_srv_send]
(0x0200): The status of SRV lookup is resolved
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [get_server_status]
(0x1000): Status of server 'ipa01.DOMAIN.COM' is 'working'
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[be_resolve_server_process] (0x1000): Saving the first resolved server
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[be_resolve_server_process] (0x0200): Found address for server
ipa01.DOMAIN.COM: [10.0.1.21] TTL 1200
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [ipa_resolve_callback]
(0x0400): Constructed uri 'ldap://ipa01.DOMAIN.COM'
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sss_krb5_realm_has_proxy]
(0x0040): profile_get_values failed.
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [child_handler_setup]
(0x2000): Setting up signal handler up for pid [2575]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [child_handler_setup]
(0x2000): Signal handler set up for pid [2575]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [write_pipe_handler]
(0x0400): All data has been sent!
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [child_sig_handler]
(0x1000): Waiting for child [2575].
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [child_sig_handler]
(0x0100): child [2575] finished successfully.
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [read_pipe_handler]
(0x0400): EOF received, client finished
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [check_wait_queue]
(0x1000): Wait queue for user [test6] is empty.
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [krb5_auth_queue_done]
(0x1000): krb5_auth_queue request [0x7fe30df03cc0] done.
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_id_op_connect_step]
(0x4000): reusing cached connection
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_print_server]
(0x2000): Searching 10.0.1.21
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[sdap_get_generic_ext_step] (0x0400): calling ldap_search_ext with
[(&(cn=ipaConfig)(objectClass=ipaGuiConfig))][cn=etc,dc=DOMAIN,dc=com].
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[sdap_get_generic_ext_step] (0x1000): Requesting attrs:
[ipaMigrationEnabled]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[sdap_get_generic_ext_step] (0x1000): Requesting attrs:
[ipaSELinuxUserMapDefault]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[sdap_get_generic_ext_step] (0x1000): Requesting attrs:
[ipaSELinuxUserMapOrder]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[sdap_get_generic_ext_step] (0x2000): ldap_search_ext called, msgid = 122
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_op_add] (0x2000):
New operation 122 timeout 60
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_process_result]
(0x2000): Trace: sh[0x7fe30deef090], connected[1], ops[0x7fe30df094a0],
ldap[0x7fe30def2920]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_process_message]
(0x4000): Message type: [LDAP_RES_SEARCH_ENTRY]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_parse_entry]
(0x1000): OriginalDN: [cn=ipaConfig,cn=etc,dc=DOMAIN,dc=com].
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_parse_range]
(0x2000): No sub-attributes for [ipaMigrationEnabled]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_parse_range]
(0x2000): No sub-attributes for [ipaSELinuxUserMapDefault]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_parse_range]
(0x2000): No sub-attributes for [ipaSELinuxUserMapOrder]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_process_result]
(0x2000): Trace: sh[0x7fe30deef090], connected[1], ops[0x7fe30df094a0],
ldap[0x7fe30def2920]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_process_message]
(0x4000): Message type: [LDAP_RES_SEARCH_RESULT]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[sdap_get_generic_op_finished] (0x0400): Search result: Success(0), no
errmsg set
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_op_destructor]
(0x2000): Operation 122 finished
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_id_op_destroy]
(0x4000): releasing operation connection
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[ipa_get_migration_flag_done] (0x0100): Password migration is not enabled.
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [be_pam_handler_callback]
(0x0100): Backend returned: (0, 17, <NULL>) [Success (Failure setting user
credentials)]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [be_pam_handler_callback]
(0x0100): Sending result [17][DOMAIN.COM]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [be_pam_handler_callback]
(0x0100): Sent result [17][DOMAIN.COM]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_process_result]
(0x2000): Trace: sh[0x7fe30deef090], connected[1], ops[(nil)],
ldap[0x7fe30def2920]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_process_result]
(0x2000): Trace: ldap_result found nothing!
[get_server_status] (0x1000): Status of server 'ipa01.DOMAIN.COM' is =
'working'<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) =
[sssd[be[DOMAIN.COM]]] [get_port_status] (0x1000): Port status of port =
389 for server 'ipa01.DOMAIN.COM' is 'working'<o:p></o:p></p><p =
class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] =
[fo_resolve_service_activate_timeout] (0x2000): Resolve timeout set to 6 =
seconds<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) =
[sssd[be[DOMAIN.COM]]] [resolve_srv_send] (0x0200): The status of SRV =
lookup is resolved<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 =
15:22:13 2016) [sssd[be[DOMAIN.COM]]] [get_server_status] (0x1000): =
Status of server 'ipa01.DOMAIN.COM' is 'working'<o:p></o:p></p><p =
class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] =
[be_resolve_server_process] (0x1000): Saving the first resolved =
server<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) =
[sssd[be[DOMAIN.COM]]] [be_resolve_server_process] (0x0200): Found =
address for server ipa01.DOMAIN.COM: [10.0.1.21] TTL =
1200<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) =
[sssd[be[DOMAIN.COM]]] [ipa_resolve_callback] (0x0400): Constructed uri =
'ldap://ipa01.DOMAIN.COM'<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 =
15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sss_krb5_realm_has_proxy] =
(0x0040): profile_get_values failed.<o:p></o:p></p><p =
class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] =
[child_handler_setup] (0x2000): Setting up signal handler up for pid =
[2575]<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) =
[sssd[be[DOMAIN.COM]]] [child_handler_setup] (0x2000): Signal handler =
set up for pid [2575]<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 =
15:22:13 2016) [sssd[be[DOMAIN.COM]]] [write_pipe_handler] (0x0400): All =
data has been sent!<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 =
15:22:13 2016) [sssd[be[DOMAIN.COM]]] [child_sig_handler] (0x1000): =
Waiting for child [2575].<o:p></o:p></p><p =
class=3DMsoNormal><o:p> </o:p></p><p class=3DMsoNormal>(Thu Mar 17 =
15:22:13 2016) [sssd[be[DOMAIN.COM]]] [child_sig_handler] (0x0100): =
child [2575] finished successfully.<o:p></o:p></p><p =
class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] =
[read_pipe_handler] (0x0400): EOF received, client =
finished<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) =
[sssd[be[DOMAIN.COM]]] [check_wait_queue] (0x1000): Wait queue for user =
[test6] is empty.<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 =
15:22:13 2016) [sssd[be[DOMAIN.COM]]] [krb5_auth_queue_done] (0x1000): =
krb5_auth_queue request [0x7fe30df03cc0] done.<o:p></o:p></p><p =
class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] =
[sdap_id_op_connect_step] (0x4000): reusing cached =
connection<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) =
[sssd[be[DOMAIN.COM]]] [sdap_print_server] (0x2000): Searching =
10.0.1.21<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) =
[sssd[be[DOMAIN.COM]]] [sdap_get_generic_ext_step] (0x0400): calling =
ldap_search_ext with =
[(&(cn=3DipaConfig)(objectClass=3DipaGuiConfig))][cn=3Detc,dc=3DDOMAI=
N,dc=3Dcom].<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 =
2016) [sssd[be[DOMAIN.COM]]] [sdap_get_generic_ext_step] (0x1000): =
Requesting attrs: [ipaMigrationEnabled]<o:p></o:p></p><p =
class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] =
[sdap_get_generic_ext_step] (0x1000): Requesting attrs: =
[ipaSELinuxUserMapDefault]<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar =
17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_get_generic_ext_step] =
(0x1000): Requesting attrs: [ipaSELinuxUserMapOrder]<o:p></o:p></p><p =
class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] =
[sdap_get_generic_ext_step] (0x2000): ldap_search_ext called, msgid =3D =
122<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) =
[sssd[be[DOMAIN.COM]]] [sdap_op_add] (0x2000): New operation 122 timeout =
60<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) =
[sssd[be[DOMAIN.COM]]] [sdap_process_result] (0x2000): Trace: =
sh[0x7fe30deef090], connected[1], ops[0x7fe30df094a0], =
ldap[0x7fe30def2920]<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 =
15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_process_message] (0x4000): =
Message type: [LDAP_RES_SEARCH_ENTRY]<o:p></o:p></p><p =
class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] =
[sdap_parse_entry] (0x1000): OriginalDN: =
[cn=3DipaConfig,cn=3Detc,dc=3DDOMAIN,dc=3Dcom].<o:p></o:p></p><p =
class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] =
[sdap_parse_range] (0x2000): No sub-attributes for =
[ipaMigrationEnabled]<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 =
15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_parse_range] (0x2000): No =
sub-attributes for [ipaSELinuxUserMapDefault]<o:p></o:p></p><p =
class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] =
[sdap_parse_range] (0x2000): No sub-attributes for =
[ipaSELinuxUserMapOrder]<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 =
15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_process_result] (0x2000): =
Trace: sh[0x7fe30deef090], connected[1], ops[0x7fe30df094a0], =
ldap[0x7fe30def2920]<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 =
15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_process_message] (0x4000): =
Message type: [LDAP_RES_SEARCH_RESULT]<o:p></o:p></p><p =
class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] =
[sdap_get_generic_op_finished] (0x0400): Search result: Success(0), no =
errmsg set<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) =
[sssd[be[DOMAIN.COM]]] [sdap_op_destructor] (0x2000): Operation 122 =
finished <o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) =
[sssd[be[DOMAIN.COM]]] [sdap_id_op_destroy] (0x4000): releasing =
operation connection <o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 =
15:22:13 2016) [sssd[be[DOMAIN.COM]]] [ipa_get_migration_flag_done] =
(0x0100): Password migration is not enabled. <o:p></o:p></p><p =
class=3DMsoNormal><b><span style=3D'color:red'>(Thu Mar 17 15:22:13 =
2016) [sssd[be[DOMAIN.COM]]] [be_pam_handler_callback] (0x0100): Backend =
returned: (0, 17, <NULL>) [Success (Failure setting user =
credentials)] <o:p></o:p></span></b></p><p class=3DMsoNormal>(Thu Mar 17 =
15:22:13 2016) [sssd[be[DOMAIN.COM]]] [be_pam_handler_callback] =
(0x0100): Sending result [17][DOMAIN.COM] <o:p></o:p></p><p =
class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] =
[be_pam_handler_callback] (0x0100): Sent result [17][DOMAIN.COM] =
<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) =
[sssd[be[DOMAIN.COM]]] [sdap_process_result] (0x2000): Trace: =
sh[0x7fe30deef090], connected[1], ops[(nil)], ldap[0x7fe30def2920] =
<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) =
[sssd[be[DOMAIN.COM]]] [sdap_process_result] (0x2000): Trace: =
ldap_result found nothing!<o:p></o:p></p><p =
class=3DMsoNormal><o:p> </o:p></p><p class=3DMsoNormal> =
<o:p></o:p></p><p class=3DMsoNormal><o:p> </o:p></p><p =
class=3DMsoNormal><o:p> </o:p></p><p =
class=3DMsoNormal><o:p> </o:p></p><p =
class=3DMsoNormal><o:p> </o:p></p><p =
class=3DMsoNormal><o:p> </o:p></p><p =
class=3DMsoNormal><o:p> </o:p></p><p =
class=3DMsoNormal><o:p> </o:p></p></div></body></html>
------=_NextPart_000_0050_01D18069.35C995E0--
Hi all,
I need to change the gateway ping address, the one used by hosted engine setup.
Is ok to edit /etc/ovirt-hosted-engine/hosted-engine.conf on each node,
update the gateway param with the new ip address and restart
the agent&broker on each node?
With a blind test seems ok, but need to understand if is the right procedure.
Thanks,
Matteo
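The blind test Matteo describes can be sketched like this. This is a sketch under the assumption that the stock config uses a `gateway=<ip>` line; the example works on a throwaway copy first so the edit can be sanity-checked before touching the real /etc/ovirt-hosted-engine/hosted-engine.conf on each node (192.0.2.1 is a placeholder address):

```shell
# Dry-run the gateway edit on a copy of the config.
NEW_GW=192.0.2.1                                    # placeholder new address
conf_copy=$(mktemp)
printf 'gateway=10.0.0.254\nha=True\n' > "$conf_copy"   # stand-in contents

# Rewrite the gateway parameter in place.
sed -i "s/^gateway=.*/gateway=${NEW_GW}/" "$conf_copy"
grep '^gateway=' "$conf_copy"

# On the real nodes, after editing the actual file, restart the HA daemons
# so the agent picks up the new ping target:
#   systemctl restart ovirt-ha-broker ovirt-ha-agent
```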
Hi,

First I created a bonding interface:

# add nic --parent-host-name server01 --name bond0 --network-name VLAN602 --bonding-slaves-host_nic host_nic.name=eno1 --bonding-slaves-host_nic host_nic.name=eno2

This works great but no IP is set on VLAN602.

Then I'm trying to add an ip address to a network with the following command:

# update hostnic --parent-host-name server01 --network-name VLAN602 --boot_protocol static --ip-address 10.10.10.10 --ip-netmask 255.255.255.0

==================== ERROR ====================
wrong number of arguments, try 'help update' for help.
===============================================

Looking at this document https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6-Beta/html/RHEVM_Shell_Guide/nic.html I need to use "nic" instead of "hostnic" but then I don't have the options to say this is a --parent-host-name. Only VM related command options.

So I think the documentation is behind.

Can somebody help me with what the command is to add a IP to a VLAN/Network for a host?

--
Kind regards,

Jurriën Bloemen

This message (including any attachments) may contain information that is privileged or confidential. If you are not the intended recipient, please notify the sender and delete this email immediately from your systems and destroy all copies of it. You may not, directly or indirectly, use, disclose, distribute, print or copy this email or any part of it if you are not the intended recipient
Content-Type: text/html; charset="utf-8"
Content-ID: <DED479EC8EDE1E4F9CD5EE636812330C(a)chellomedia.com>
Content-Transfer-Encoding: base64
PGh0bWw+DQo8aGVhZD4NCjxtZXRhIGh0dHAtZXF1aXY9IkNvbnRlbnQtVHlwZSIgY29udGVudD0i
dGV4dC9odG1sOyBjaGFyc2V0PXV0Zi04Ij4NCjwvaGVhZD4NCjxib2R5IHRleHQ9IiMwMDAwMDAi
IGJnY29sb3I9IiNGRkZGRkYiPg0KPHR0PkhpLDxicj4NCjxicj4NCkZpcnN0IEkgY3JlYXRlZCBh
IGJvbmRpbmcgaW50ZXJmYWNlOjxicj4NCjxicj4NCiMgYWRkIG5pYyAtLXBhcmVudC1ob3N0LW5h
bWUgc2VydmVyMDEgLS1uYW1lIGJvbmQwIC0tbmV0d29yay1uYW1lIFZMQU42MDIgLS1ib25kaW5n
LXNsYXZlcy1ob3N0X25pYyBob3N0X25pYy5uYW1lPWVubzEgLS1ib25kaW5nLXNsYXZlcy1ob3N0
X25pYyBob3N0X25pYy5uYW1lPWVubzI8YnI+DQo8YnI+DQpUaGlzIHdvcmtzIGdyZWF0IGJ1dCBu
byBJUCBpcyBzZXQgb24gVkxBTjYwMi48YnI+DQo8YnI+DQpUaGVuIEknbSB0cnlpbmcgdG8gYWRk
IGFuIGlwIGFkZHJlc3MgdG8gYSBuZXR3b3JrIHdpdGggdGhlIGZvbGxvd2luZyBjb21tYW5kOjxi
cj4NCjxicj4NCiMgdXBkYXRlIGhvc3RuaWMgLS1wYXJlbnQtaG9zdC1uYW1lIHNlcnZlcjAxIC0t
bmV0d29yay1uYW1lIFZMQU42MDIgLS1ib290X3Byb3RvY29sIHN0YXRpYyAtLWlwLWFkZHJlc3Mg
MTAuMTAuMTAuMTAgLS1pcC1uZXRtYXNrIDI1NS4yNTUuMjU1LjA8YnI+DQo8YnI+DQo9PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PSBFUlJPUiA9PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT08YnI+DQombmJz
cDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsm
bmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJz
cDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsm
bmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJz
cDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsm
bmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJz
cDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsm
bmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJz
cDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsg
d3JvbmcgbnVtYmVyIG9mIGFyZ3VtZW50cywgdHJ5ICdoZWxwIHVwZGF0ZScgZm9yIGhlbHAuPGJy
Pg0KPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT08
YnI+DQo8YnI+DQpMb29raW5nIGF0IHRoaXMgZG9jdW1lbnQgPGEgY2xhc3M9Im1vei10eHQtbGlu
ay1mcmVldGV4dCIgaHJlZj0iaHR0cHM6Ly9hY2Nlc3MucmVkaGF0LmNvbS9kb2N1bWVudGF0aW9u
L2VuLVVTL1JlZF9IYXRfRW50ZXJwcmlzZV9WaXJ0dWFsaXphdGlvbi8zLjYtQmV0YS9odG1sL1JI
RVZNX1NoZWxsX0d1aWRlL25pYy5odG1sIj4NCmh0dHBzOi8vYWNjZXNzLnJlZGhhdC5jb20vZG9j
dW1lbnRhdGlvbi9lbi1VUy9SZWRfSGF0X0VudGVycHJpc2VfVmlydHVhbGl6YXRpb24vMy42LUJl
dGEvaHRtbC9SSEVWTV9TaGVsbF9HdWlkZS9uaWMuaHRtbDwvYT4gSSBuZWVkIHRvIHVzZSAmcXVv
dDtuaWMmcXVvdDsgaW5zdGVhZCBvZiAmcXVvdDtob3N0bmljJnF1b3Q7IGJ1dCB0aGVuIEkgZG9u
J3QgaGF2ZSB0aGUgb3B0aW9ucyB0byBzYXkgdGhpcyBpcyBhIC0tcGFyZW50LWhvc3QtbmFtZS4g
T25seSBWTSByZWxhdGVkIGNvbW1hbmQNCiBvcHRpb25zLjxicj4NCjxicj4NClNvIEkgdGhpbmsg
dGhlIGRvY3VtZW50YXRpb24gaXMgYmVoaW5kLiA8YnI+DQo8YnI+DQpDYW4gc29tZWJvZHkgaGVs
cCBtZSB3aXRoIHdoYXQgdGhlIGNvbW1hbmQgaXMgdG8gYWRkIGEgSVAgdG8gYSBWTEFOL05ldHdv
cmsgZm9yIGEgaG9zdD88YnI+DQo8YnI+DQo8YnI+DQo8L3R0Pg0KPGRpdiBjbGFzcz0ibW96LXNp
Z25hdHVyZSI+LS0gPGJyPg0KPHRpdGxlPjwvdGl0bGU+DQo8ZGl2IHN0eWxlPSJjb2xvcjogcmdi
KDAsIDAsIDApOyI+DQo8cCBjbGFzcz0iTXNvTm9ybWFsIiBzdHlsZT0iZm9udC1zaXplOiAxNHB4
OyBmb250LWZhbWlseToNCiAgICAgICAgICBDYWxpYnJpLCBzYW5zLXNlcmlmOyBtYXJnaW46IDBj
bSAwY20gMC4wMDAxcHQ7Ij4NCjxiPjxmb250IGZhY2U9IkFyaWFsLHNhbnMtc2VyaWYiIGNvbG9y
PSIjMmM4Y2I2Ij48c3BhbiBzdHlsZT0iZm9udC1zaXplOiAxMHB0OyI+Szwvc3Bhbj48c3BhbiBz
dHlsZT0iZm9udC1zaXplOg0KICAgICAgICAgICAgICAgIDEzcHg7Ij5pPC9zcGFuPjxzcGFuIHN0
eWxlPSJmb250LXNpemU6IDEwcHQ7Ij5uZCByZWdhcmRzLDwvc3Bhbj48L2ZvbnQ+PC9iPjwvcD4N
CjxwIGNsYXNzPSJNc29Ob3JtYWwiIHN0eWxlPSJmb250LXNpemU6IDExcHQ7IGZvbnQtZmFtaWx5
Og0KICAgICAgICAgIENhbGlicmksIHNhbnMtc2VyaWY7IG1hcmdpbjogMGNtIDBjbSAwLjAwMDFw
dDsiPg0KPGI+PHNwYW4gc3R5bGU9ImZvbnQtc2l6ZTogMTBwdDsgZm9udC1mYW1pbHk6IEFyaWFs
LCBzYW5zLXNlcmlmOw0KICAgICAgICAgICAgICBjb2xvcjogcmdiKDQ0LCAxNDAsIDE4Mik7Ij4m
bmJzcDs8L3NwYW4+PC9iPjwvcD4NCjxwIGNsYXNzPSJNc29Ob3JtYWwiIHN0eWxlPSJmb250LXNp
emU6IDE0cHg7IGZvbnQtZmFtaWx5Og0KICAgICAgICAgIENhbGlicmksIHNhbnMtc2VyaWY7IG1h
cmdpbjogMGNtIDBjbSAwLjAwMDFwdDsiPg0KPGIgc3R5bGU9ImZvbnQtc2l6ZTogMTFwdDsiPjxz
cGFuIHN0eWxlPSJmb250LXNpemU6IDEwcHQ7DQogICAgICAgICAgICAgIGZvbnQtZmFtaWx5OiBB
cmlhbCwgc2Fucy1zZXJpZjsgY29sb3I6IHJnYig0NCwgMTQwLCAxODIpOyI+SnVycmnDq24gQmxv
ZW1lbjwvc3Bhbj48L2I+PGIgc3R5bGU9ImZvbnQtc2l6ZTogMTFwdDsiPjxzcGFuIHN0eWxlPSJm
b250LXNpemU6IDEwcHQ7IGZvbnQtZmFtaWx5OiBBcmlhbCwgc2Fucy1zZXJpZjsNCiAgICAgICAg
ICAgICAgY29sb3I6IGdyYXk7Ij48YnI+DQo8L3NwYW4+PC9iPjxmb250IGZhY2U9IkFyaWFsLHNh
bnMtc2VyaWYiIGNvbG9yPSIjODA4MDgwIj48c3BhbiBzdHlsZT0iZm9udC1zaXplOiAxMHB0OyI+
PC9zcGFuPjwvZm9udD48L3A+DQo8YnI+DQo8L2Rpdj4NCjwvZGl2Pg0KVGhpcyBtZXNzYWdlIChp
bmNsdWRpbmcgYW55IGF0dGFjaG1lbnRzKSBtYXkgY29udGFpbiBpbmZvcm1hdGlvbiB0aGF0IGlz
IHByaXZpbGVnZWQgb3IgY29uZmlkZW50aWFsLiBJZiB5b3UgYXJlIG5vdCB0aGUgaW50ZW5kZWQg
cmVjaXBpZW50LCBwbGVhc2Ugbm90aWZ5IHRoZSBzZW5kZXIgYW5kIGRlbGV0ZSB0aGlzIGVtYWls
IGltbWVkaWF0ZWx5IGZyb20geW91ciBzeXN0ZW1zIGFuZCBkZXN0cm95IGFsbCBjb3BpZXMgb2Yg
aXQuIFlvdSBtYXkgbm90LA0KIGRpcmVjdGx5IG9yIGluZGlyZWN0bHksIHVzZSwgZGlzY2xvc2Us
IGRpc3RyaWJ1dGUsIHByaW50IG9yIGNvcHkgdGhpcyBlbWFpbCBvciBhbnkgcGFydCBvZiBpdCBp
ZiB5b3UgYXJlIG5vdCB0aGUgaW50ZW5kZWQgcmVjaXBpZW50DQo8L2JvZHk+DQo8L2h0bWw+DQo=
--_000_5697777B2050209dmcamcnetworkscom_--
Hello,
[Here : oVirt 3.5.3, 3 x CentOS 7.0 hosts with replica-3 gluster SD on
the hosts].
On the switchs, I have created a dedicated VLAN to isolate the glusterFS
traffic, but I'm not using it yet.
I was thinking of creating a dedicated IP for each node's gluster NIC,
and a matching DNS record ("my_nodes_name_GL"), but I fear that using
this hostname or this IP in the oVirt GUI host network interface tab
would lead oVirt to think this is a different host.
Not being sure this fear is clearly described, let's say:
- On each node, I create a second ip+(dns record in the soa) used by
gluster, plugged on the correct VLAN
- in oVirt gui, in the host network setting tab, the interface will be
seen, with its ip, but reverse-dns-related to a different hostname.
Here, I fear oVirt might check this reverse DNS and declare this NIC
belongs to another host.
I would also prefer not use a reverse pointing to the name of the host
management ip, as this is evil and I'm a good guy.
On your side, how do you cope with a dedicated storage network in case
of storage+compute mixed hosts?
--
Nicolas ECARNOT
Hello,
I'm confused because, though I'm using ovirt-shell to script many actions
every day, and even after a great deal of reading and testing, I cannot
find the correct syntax to move (offline/available) disks between
storage domains.
May you help me please?
(oVirt 3.4.4)
--
Nicolas Ecarnot
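For what it's worth, the same move is also exposed in the REST API as an action on the disk, which can be easier to script than hunting for the ovirt-shell syntax. This is a hedged sketch, not a verified recipe: the engine URL, credentials, DISK_ID and TARGET_SD_ID are placeholders, and the exact action endpoint should be checked against the 3.4 REST API documentation before use:

```shell
# Move a floating disk to another storage domain via the REST API.
# All identifiers below are placeholders for your own environment.
curl -k -u 'admin@internal:PASSWORD' \
  -H 'Content-Type: application/xml' \
  -d '<action><storage_domain id="TARGET_SD_ID"/></action>' \
  'https://engine.example.com/api/disks/DISK_ID/move'
```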
Hi all,
Since we upgraded to the latest ovirt node running 7.2, we're seeing that
nodes become unavailable after a while. It's running fine, with a couple of
VMs on it, until it becomes non-responsive. At that moment it doesn't
even respond to ICMP. It'll come back by itself after a while, but oVirt
fences the machine before that time and restarts VM's elsewhere.
Engine tells me this message:
VDSM host09 command failed: Message timeout which can be caused by
communication issues
Is anyone else experiencing these issues with ixgbe drivers? I'm running on
Intel X540-AT2 cards.
--
Met vriendelijke groeten / With kind regards,
Johan Kooijman
Dear all,
For a couple of weeks I have had a problem where randomly 1 VM (not always the same) becomes completely unresponsive.
We find this out because our Icinga server complains that host is down.
Upon inspection, we find we can’t open a console to the VM, nor can we login.
In oVirt engine, the VM looks like “up”. The only weird thing is that RAM usage shows 0% and CPU usage shows 100% or 75% depending on number of cores.
The only way to recover is to force shutdown the VM via 2-times shutdown from the engine.
Could you please help me to start debugging this?
I can provide any logs, but I’m not sure which ones, because I couldn’t see anything with ERROR in the vdsm logs on the host.
The host is running
OS Version: RHEL - 7 - 1.1503.el7.centos.2.8
Kernel Version: 3.10.0 - 229.14.1.el7.x86_64
KVM Version: 2.1.2 - 23.el7_1.8.1
LIBVIRT Version: libvirt-1.2.8-16.el7_1.4
VDSM Version: vdsm-4.16.26-0.el7.centos
SPICE Version: 0.12.4 - 9.el7_1.3
GlusterFS Version: glusterfs-3.7.5-1.el7
We use a locally exported gluster as storage domain (eg, storage is on the same machine exposed via gluster). No replica.
We run around 50 VMs on that host.
Thank you for your help in this,
—
Christophe
One RHEV Virtual Machine does not Automatically Resume following Compellent SAN Controller Failover
by Duckworth, Douglas C 30 May '16
Hello --
Not sure if y'all can help with this issue we've been seeing with RHEV...
On 11/13/2015, during Code Upgrade of Compellent SAN at our Disaster
Recovery Site, we Failed Over to Secondary SAN Controller. Most Virtual
Machines in our DR Cluster Resumed automatically after Pausing except VM
"BADVM" on Host "BADHOST."
In Engine.log you can see that BADVM was sent into "VM_PAUSED_EIO" state
at 10:47:57:
"VM BADVM has paused due to storage I/O problem."
On this Red Hat Enterprise Virtualization Hypervisor 6.6
(20150512.0.el6ev) Host, two other VMs paused but then automatically
resumed without System Administrator intervention...
In our DR Cluster, 22 VMs also resumed automatically...
None of these Guest VMs are engaged in high I/O as these are DR site VMs
not currently doing anything.
We sent this information to Dell. Their response:
"The root cause may reside within your virtualization solution, not the
parent OS (RHEV-Hypervisor disc) or Storage (Dell Compellent.)"
We are doing this Failover again on Sunday November 29th so we would
like to know how to mitigate this issue, given we have to manually
resume paused VMs that don't resume automatically.
Before we initiated SAN Controller Failover, all iSCSI paths to Targets
were present on Host tulhv2p03.
VM logs on Host show in /var/log/libvirt/qemu/badhost.log that Storage
error was reported:
block I/O error in device 'drive-virtio-disk0': Input/output error (5)
block I/O error in device 'drive-virtio-disk0': Input/output error (5)
block I/O error in device 'drive-virtio-disk0': Input/output error (5)
block I/O error in device 'drive-virtio-disk0': Input/output error (5)
All disks used by this Guest VM are provided by single Storage Domain
COM_3TB4_DR with serial "270." In syslog we do see that all paths for
that Storage Domain Failed:
Nov 13 16:47:40 multipathd: 36000d310005caf000000000000000270: remaining
active paths: 0
Though these recovered later:
Nov 13 16:59:17 multipathd: 36000d310005caf000000000000000270: sdbg -
tur checker reports path is up
Nov 13 16:59:17 multipathd: 36000d310005caf000000000000000270: remaining
active paths: 8
Does anyone have an idea of why the VM would fail to automatically
resume if the iSCSI paths used by its Storage Domain recovered?
Thanks
Doug
--
Thanks
Douglas Charles Duckworth
Unix Administrator
Tulane University
Technology Services
1555 Poydras Ave
NOLA -- 70112
E: duckd(a)tulane.edu
O: 504-988-9341
F: 504-988-8505
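When chasing an event like this across several hosts, it helps to pull just the path-count transitions for the affected WWID out of syslog. A small sketch (it assumes the single-line multipathd message format shown in Doug's excerpt; in the mail the lines happen to be wrapped):

```shell
# Filter multipathd "remaining active paths" transitions from a syslog
# stream on stdin; prints "timestamp wwid count" per event.
path_counts() {
  sed -n 's/^\(.*\) multipathd: \([^:]*\): remaining active paths: \([0-9]*\)$/\1 \2 \3/p'
}

# Example: path_counts < /var/log/messages
```

Running the quoted lines through it yields one event at 16:47:40 with 0 paths and one at 16:59:17 with 8 paths, which makes the outage window easy to correlate with engine.log timestamps.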
My oVirt DataCenter
Cluster CPU Type: Intel SandyBridge Family
Emulated Machine: pc-i440fx-rhel7.2.0
Hi,
I created a snapshot of a running VM prior to an OS upgrade. The OS
upgrade has now been successful and I would like to remove the snapshot.
I've selected the snapshot in the UI and clicked Delete to start the task.
After a few minutes, the task has failed. When I click delete again on
the same snapshot, the failed message is returned after a few seconds.
From browsing through the engine log (attached) it seems the snapshot
was correctly merged in the first try but something went wrong in the
finalizing phase. On retries, the log indicates the snapshot/disk image
no longer exists and the removal of the snapshot fails for this reason.
Is there any way to clean up this snapshot?
I can see the snapshot in the "Disk snapshot" tab of the storage. It has
a status of "illegal". Is it OK to (try to) remove this snapshot? Will
this impact the running VM and/or disk image?
Regards,
Rik
--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440 - B-3001 Leuven-Heverlee
+32(0)16/32.11.07
----------------------------------------------------------------
<<Any errors in spelling, tact or fact are transmission errors>>
Hi all,
Wonder if anyone can shed any light on an error i'm seeing while running
engine-setup.
I've just upgraded the packages to the latest 3.6 ones today (from 3.5),
run engine-setup, answered the questions and confirmed the install, then
get presented with:
[ INFO ] Cleaning async tasks and compensations
[ INFO ] Unlocking existing entities
[ INFO ] Checking the Engine database consistency
[ INFO ] Stage: Transaction setup
[ INFO ] Stopping engine service
[ INFO ] Stopping ovirt-fence-kdump-listener service
[ INFO ] Stopping websocket-proxy service
[ INFO ] Stage: Misc configuration
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ ERROR ] Failed to execute stage 'Misc configuration': function
getdwhhistorytimekeepingbyvarname(unknown) does not exist LINE 2:
select * from GetDwhHistoryTimekeepingByVarName(
^ HINT: No function matches the given name and argument
types. You might need to add explicit type casts.
[ INFO ] Yum Performing yum transaction rollback
[ INFO ] Stage: Clean up
Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20150929144137-7u5rhg.log
[ INFO ] Generating answer file
'/var/lib/ovirt-engine/setup/answers/20150929144215-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
Any ideas, where to look to fix things?
Thanks
Jon
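A first diagnostic step for an error like Jon's is to confirm the function engine-setup complains about really is missing from the database. This is a hedged sketch: the database name `engine` and running psql as the postgres superuser are the defaults of a standard setup and may differ on your install.

```shell
# List any definition of the function the upgrade failed on; an empty
# result confirms it is absent from the engine database.
su - postgres -c "psql engine -c '\\df getdwhhistorytimekeepingbyvarname'"
```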
Hi all,
Since I upgraded the engine to 3.6, I noticed that the webadmin takes a
lot of resources whatever the browser. It can become very slow even for
small actions, like changing tabs or editing a VM. The browser activity
becomes intensive (100% of CPU) and the processor very hot, with
increased fan activity. I suspect JavaScript is responsible for this
behaviour. Is there a way to reduce the resources allocated to the webadmin?
(This is not a weakness of my laptop which is an i7 cpu with 16GB of RAM)
--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
Delete Failed to update OVF disks, OVF data isn't updated on those OVF stores (Data Center Default, Storage Domain hostedengine_nfs).
by Paul Groeneweg | Pazion 20 Apr '16
After the 3.6 updates (which didn't go without a hitch)
I get the following errors in my event log:
Failed to update OVF disks 18c50ea6-4654-4525-b241-09e15acf5e99, OVF data
isn't updated on those OVF stores (Data Center Default, Storage Domain
hostedengine_nfs).
VDSM command failed: Could not acquire resource. Probably resource factory
threw an exception.: ()
http://screencast.com/t/S8cfXMsdGM
When I check on file there is some data, but not updated:
http://screencast.com/t/hbXQFlou
When I check in the web interface I see 2 OVF files listed. What are these
for, can I delete them? http://screencast.com/t/ymnzsNHj7e
Hopefully someone knows what to do about these warnings/errors and
whether I can delete the OVF files.
Best Regards,
Paul Groeneweg
Hello,
I'd like to ask community, what is the best way to use oVirt in the
following hardware configuration:
Three servers connected 1GB network. Each server - 32 threads, 256GB
RAM, 4TB RAID.
Please note that a local storage and an 1GB network is a typical
hardware configuration for almost any dedicated hosting.
Unfortunately, oVirt doesn't support multi-node local storage clusters.
And Gluster/Ceph don't work well over a 1G network. It looks like the
only way to use oVirt in a three-node cluster is to share the local
storage over NFS. At least it makes it possible to migrate VMs and move
disks among hardware nodes.
Does somebody have such setup?
Thanks
I'm setting up an oVirt cluster using GlusterFS and noticing less than
stellar performance.
Maybe my setup could use some adjustments?
3 hardware nodes running centos7.2, glusterfs 3.7.6.1, ovirt 3.6.2.6-1.
Each node has 8 spindles configured in 1 array which is split using LVM
with one logical volume for system and one for gluster.
They each have 4 NICs,
NIC1 = ovirtmgmt
NIC2 = gluster
NIC3 = VM traffic
I tried with default glusterfs settings and also with:
performance.cache-size: 1GB
performance.readdir-ahead: on
performance.write-behind-window-size: 4MB
[root@ovirt3 test scripts]# gluster volume info gv1
Volume Name: gv1
Type: Replicate
Volume ID: 71afc35b-09d7-4384-ab22-57d032a0f1a2
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1
Options Reconfigured:
performance.cache-size: 1GB
performance.readdir-ahead: on
performance.write-behind-window-size: 4MB
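For anyone reproducing this setup, the three options listed under "Options Reconfigured" are applied per volume from any node in the pool; the values below are the ones tried here, not general recommendations:

```shell
# Apply the tuning options quoted above to volume gv1.
gluster volume set gv1 performance.cache-size 1GB
gluster volume set gv1 performance.readdir-ahead on
gluster volume set gv1 performance.write-behind-window-size 4MB
gluster volume info gv1   # they should appear under "Options Reconfigured"
```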
Using simple dd test on VM in ovirt:
dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
1073741824 bytes (1.1 GB) copied, 65.9337 s, 16.3 MB/s
Another VM not in ovirt using nfs:
dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
1073741824 bytes (1.1 GB) copied, 27.0079 s, 39.8 MB/s
Is that expected or is there a better way to set it up to get better
performance?
Thanks.
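As a sanity check on the numbers above, dd's MB/s figure is simply bytes divided by elapsed seconds, in decimal megabytes, so the roughly 2.4x gap between the gluster-backed and NFS-backed runs is real and not a units artifact:

```shell
# Re-derive dd's reported rates (1 MB = 10**6 bytes in dd's output).
awk 'BEGIN{printf "gluster VM: %.1f MB/s\n", 1073741824/65.9337/1000000}'
awk 'BEGIN{printf "nfs VM:     %.1f MB/s\n", 1073741824/27.0079/1000000}'
```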
Hello,
we tried the following test - with unwanted results
input:
5 node gluster
A = replica 3 with arbiter 1 ( node1+node2+arbiter on node 5 )
B = replica 3 with arbiter 1 ( node3+node4+arbiter on node 5 )
C = distributed replica 3 arbiter 1 ( node1+node2, node3+node4, each
arbiter on node 5)
node 5 has only arbiter replica ( 4x )
TEST:
1) directly reboot one node - OK (it does not matter which: data node or arbiter node)
2) directly reboot two nodes - OK (as long as they are not from the same replica)
3) directly reboot three nodes - this is the main problem and question...
- we rebooted all three nodes backing replica "B" (unlikely, but who knows...)
- all VMs with data on this replica were paused (no data access) - OK
- all VMs running on the replica "B" hosts were lost (started manually later; their data is on other replicas) - acceptable
BUT
- all oVirt storage domains went down! The master domain is on replica "A", which lost only one member of three!
We did not expect all domains to go down, especially the master with 2 live members.
Results:
- the whole cluster was unreachable until all domains were up, which depends on all nodes being up
- all paused VMs started back - OK
- the rest of the VMs were rebooted and are running - OK
Questions:
1) Why did all domains go down if the master domain (on replica "A") still had two running members (2 of 3)?
2) How can that collapse be fixed without waiting for all nodes to come up (in the worst case, e.g. a node with a hardware error)?
3) Which oVirt cluster policy can prevent that situation (if any)?
regs.
Pavel
3
7
I am trying to import a domain that I used as an export domain on a previous
install. The previous install was no older than v3.5 and was built with
the all-in-one plugin. Before destroying that system I attached a portable
drive and made an export domain to export my VMs and templates.
The new system is up to date and was built as a hosted engine. When I try
to import the domain I get the following error:
"Error while executing action: Cannot add Storage. Storage format V3 is not
supported on the selected host version."
I just need to recover the VMs.
I connect the USB hard drive to the host and make an export directory just
like I did on the old host.
# ls -ld /mnt/export_ovirt
drwxr-xr-x. 5 vdsm kvm 4096 Mar 6 11:27 /mnt/export_ovirt
I have tried an NFS mount:
# cat /etc/exports.d/ovirt.exports
/home/engineha 127.0.0.1/32(rw,anonuid=36,anongid=36,all_squash)
/mnt/backup-vm/ 10.3.1.0/24(rw,anonuid=36,anongid=36,all_squash)
127.0.0.1/32(rw,anonuid=36,anongid=36,all_squash)
# cat
/mnt/export_ovirt/images/4be3f6ac-7946-4e7b-9ca2-11731c8ba236/dom_md/metadata
CLASS=Backup
DESCRIPTION=eport_storage
IOOPTIMEOUTSEC=10
LEASERETRIES=3
LEASETIMESEC=60
LOCKPOLICY=
LOCKRENEWALINTERVALSEC=5
POOL_UUID=053926e4-e63d-450e-8aa7-6f1235b944c6
REMOTE_PATH=/mnt/export_ovirt/images
ROLE=Regular
SDUUID=4be3f6ac-7946-4e7b-9ca2-11731c8ba236
TYPE=LOCALFS
VERSION=3
_SHA_CKSUM=2e6e203168bd84f3dc97c953b520ea8f78119bf0
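If you end up hand-editing this file (some users work around the format check by changing fields such as VERSION), vdsm will reject the domain unless _SHA_CKSUM matches. A sketch of recomputing it follows, under the assumption that the checksum is the SHA-1 of the remaining lines concatenated without newlines; verify that assumption first by recomputing against the untouched file and comparing with the stored value:

```shell
# Path taken from the listing above.
MD=/mnt/export_ovirt/images/4be3f6ac-7946-4e7b-9ca2-11731c8ba236/dom_md/metadata
# ASSUMPTION: checksum = SHA-1 over every line except _SHA_CKSUM itself,
# in file order, with newlines stripped. Compare the result against the
# current _SHA_CKSUM value before trusting this on an edited file.
grep -v '^_SHA_CKSUM' "$MD" | tr -d '\n' | sha1sum | awk '{print $1}'
```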
# ls -l
/mnt/export_ovirt/images/4be3f6ac-7946-4e7b-9ca2-11731c8ba236/master/vms/4873de49-9090-40b1-a21d-665633109aa2/4873de49-9090-40b1-a21d-665633109aa2.ovf
-rw-r--r--. 1 vdsm kvm 9021 Mar 6 11:50
/mnt/export_ovirt/images/4be3f6ac-7946-4e7b-9ca2-11731c8ba236/master/vms/4873de49-9090-40b1-a21d-665633109aa2/4873de49-9090-40b1-a21d-665633109aa2.ovf
Thanks,
Alex
3
6
I attached an image disk to a VM but set the wrong disk profile, so I powered off the VM and then tried to change it in the GUI.
The operation in the GUI appears to succeed, but nothing actually changes.
In the log I get:
2016-03-25 10:12:10,467 INFO [org.ovirt.engine.core.bll.UpdateVmDiskCommand] (default task-26) [2f3b7d9] Lock Acquired to object 'EngineLock:{exclusiveLocks='null', sharedLocks='[a32e1043-a5a5-4e4c-8436-f7b7a4ff644c=<VM, ACTION_TYPE_FAILED_VM_IS_LOCKED>]'}'
2016-03-25 10:12:10,608 INFO [org.ovirt.engine.core.bll.UpdateVmDiskCommand] (default task-26) [2f3b7d9] Running command: UpdateVmDiskCommand internal: false. Entities affected : ID: 55d2be6b-7a78-4712-82be-b725b7812db8 Type: DiskAction group EDIT_DISK_PROPERTIES with role type USER
2016-03-25 10:12:10,794 INFO [org.ovirt.engine.core.bll.UpdateVmDiskCommand] (default task-26) [2f3b7d9] Lock freed to object 'EngineLock:{exclusiveLocks='null', sharedLocks='[a32e1043-a5a5-4e4c-8436-f7b7a4ff644c=<VM, ACTION_TYPE_FAILED_VM_IS_LOCKED>]'}'
2016-03-25 10:12:10,808 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-26) [2f3b7d9] Correlation ID: 2f3b7d9, Call Stack: null, Custom Event ID: -1, Message: VM test test_Disk3 disk was updated by FA4@apachesso.
It says "with role type USER", but I'm logged in as a super admin.
The setup is completely new, on a dedicated CentOS 7.2 host, running 3.6.3.4-1.el7.centos.
2
3
I’m running 3.6.2 rc1 with hosted engine on an FCP storage domain.
As of yesterday, I can’t run some VMs, and I’ve experienced corruption on others (I now have a Windows VM that blue-screens on boot).
Here’s the log from my engine.
2016-01-04 16:55:39,446 INFO [org.ovirt.engine.core.bll.RunVmCommand] (default task-16) [1f1deb62] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3a17534b-e86d-4563-8ca2-2a27c34b4a87=<VM, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2016-01-04 16:55:39,479 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-16) [1f1deb62] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{runAsync='true', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87'}), log id: 299a5052
2016-01-04 16:55:39,479 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-16) [1f1deb62] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 299a5052
2016-01-04 16:55:39,517 INFO [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] Running command: RunVmCommand internal: false. Entities affected : ID: 3a17534b-e86d-4563-8ca2-2a27c34b4a87 Type: VMAction group RUN_VM with role type USER
2016-01-04 16:55:39,579 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] START, UpdateVmDynamicDataVDSCommand( UpdateVmDynamicDataVDSCommandParameters:{runAsync='true', hostId='null', vmId='00000000-0000-0000-0000-000000000000', vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@dadddaa9'}), log id: 6574710a
2016-01-04 16:55:39,582 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] FINISH, UpdateVmDynamicDataVDSCommand, log id: 6574710a
2016-01-04 16:55:39,585 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] START, CreateVmVDSCommand( CreateVmVDSCommandParameters:{runAsync='true', hostId='2fe6c27b-9346-4678-8cd3-c9d367ec447f', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87', vm='VM [adm1]'}), log id: 55e0849d
2016-01-04 16:55:39,586 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] START, CreateVDSCommand(HostName = ov-101, CreateVmVDSCommandParameters:{runAsync='true', hostId='2fe6c27b-9346-4678-8cd3-c9d367ec447f', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87', vm='VM [adm1]'}), log id: 1d5c1c04
2016-01-04 16:55:39,589 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.VmInfoBuilderBase] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] Bootable disk '9e43c66a-5bf1-44d6-94f4-52178d15c1e6' set to index '0'
2016-01-04 16:55:39,600 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand pitReinjection=false,memGuaranteedSize=4054,smpThreadsPerCore=1,cpuType=SandyBridge,vmId=3a17534b-e86d-4563-8ca2-2a27c34b4a87,acpiEnable=true,numaTune={nodeset=0,1, mode=interleave},tabletEnable=true,spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard,vmType=kvm,keyboardLayout=en-us,smp=1,smpCoresPerSocket=1,emulatedMachine=pc-i440fx-rhel7.2.0,smartcardEnable=false,guestNumaNodes=[{memory=4054, cpus=0, nodeIndex=0}],transparentHugePages=true,vmName=adm1,maxVCpus=16,kvmEnable=true,devices=[{address={bus=0x00, domain=0x0000, function=0x0, slot=0x02, type=pci}, type=video, specParams={heads=1, vram=32768}, device=cirrus, deviceId=645e99e3-a9fa-4894-baf5-97b539236782}, {type=graphics, specParams={}, device=vnc, deviceId=12845c03-16a3-4bf0-a015-a15201a77673}, {iface=ide, shared=false, path=, address={bus=1, controller=0, unit=0, type=drive, target=0}, readonly=true, index=2, type=disk, specParams={path=}, device=cdrom, deviceId=ab048396-5dd8-4594-aa8a-9fe835a04cd1}, {shared=false, address={bus=0, controller=0, unit=0, type=drive, target=0}, imageID=9e43c66a-5bf1-44d6-94f4-52178d15c1e6, format=raw, index=0, optional=false, type=disk, deviceId=9e43c66a-5bf1-44d6-94f4-52178d15c1e6, domainID=1fb79d91-b245-4447-91e0-e57671152a8c, propagateErrors=off, iface=ide, readonly=false, bootOrder=1, poolID=00000001-0001-0001-0001-000000000154, volumeID=c736baca-de76-4593-b3dc-28bb8807e7a3, specParams={}, device=disk}, {shared=false, address={bus=0, controller=0, unit=1, type=drive, target=0}, imageID=a016b350-87ef-4c3b-b150-024907fed9c0, format=raw, optional=false, type=disk, deviceId=a016b350-87ef-4c3b-b150-024907fed9c0, domainID=1fb79d91-b245-4447-91e0-e57671152a8c, propagateErrors=off, iface=ide, readonly=false, 
poolID=00000001-0001-0001-0001-000000000154, volumeID=20fc4399-0b02-4da1-8aee-68df1629ca94, specParams={}, device=disk}, {filter=vdsm-no-mac-spoofing, nicModel=rtl8139, address={bus=0x00, domain=0x0000, function=0x0, slot=0x03, type=pci}, type=interface, specParams={inbound={}, outbound={}}, device=bridge, linkActive=true, deviceId=8e00d4cc-6a60-4598-82ee-645d742708de, macAddr=FA:0D:49:9E:A2:E6, network=server-vlan10}, {address={bus=0x00, domain=0x0000, function=0x0, slot=0x04, type=pci}, type=controller, specParams={}, device=virtio-serial, deviceId=8ac5777e-375f-4ec6-a6fd-856c7cd7363b}],custom={device_8617fb20-b870-45ea-8232-a70dd8b4551c=VmDevice:{id='VmDeviceId:{deviceId='8617fb20-b870-45ea-8232-a70dd8b4551c', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false'}, device_8617fb20-b870-45ea-8232-a70dd8b4551cdevice_f691fc09-31c8-43bf-bd82-c5acac8a1a76device_30bd748e-6ea8-434f-8587-d8ff8db5555e=VmDevice:{id='VmDeviceId:{deviceId='30bd748e-6ea8-434f-8587-d8ff8db5555e', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false'}, device_8617fb20-b870-45ea-8232-a70dd8b4551cdevice_f691fc09-31c8-43bf-bd82-c5acac8a1a76=VmDevice:{id='VmDeviceId:{deviceId='f691fc09-31c8-43bf-bd82-c5acac8a1a76', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', 
deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false'}},display=vnc,timeOffset=0,spiceSslCipherSuite=DEFAULT,nice=0,maxMemSize=4194304,maxMemSlots=16,bootMenuEnable=false,memSize=4054
2016-01-04 16:55:39,627 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] FINISH, CreateVDSCommand, log id: 1d5c1c04
2016-01-04 16:55:39,631 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id: 55e0849d
2016-01-04 16:55:39,631 INFO [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] Lock freed to object 'EngineLock:{exclusiveLocks='[3a17534b-e86d-4563-8ca2-2a27c34b4a87=<VM, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2016-01-04 16:55:39,634 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] Correlation ID: 1f1deb62, Job ID: 5212e078-cf01-4312-ae00-7e0a136217a5, Call Stack: null, Custom Event ID: -1, Message: VM adm1 was started by jforeman@us.dignitastech.com@Dignitas AD (Host: ov-101).
2016-01-04 16:55:40,724 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-10) [] START, DestroyVDSCommand(HostName = ov-101, DestroyVmVDSCommandParameters:{runAsync='true', hostId='2fe6c27b-9346-4678-8cd3-c9d367ec447f', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87', force='false', secondsToWait='0', gracefully='false', reason=''}), log id: 7935781d
2016-01-04 16:55:41,730 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-10) [] FINISH, DestroyVDSCommand, log id: 7935781d
2016-01-04 16:55:41,747 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-10) [] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM adm1 is down with error. Exit message: Unable to get volume size for domain 1fb79d91-b245-4447-91e0-e57671152a8c volume c736baca-de76-4593-b3dc-28bb8807e7a3.
2016-01-04 16:55:41,747 INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer] (ForkJoinPool-1-worker-10) [] Running on vds during rerun failed vm: '2fe6c27b-9346-4678-8cd3-c9d367ec447f'
2016-01-04 16:55:41,747 INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer] (ForkJoinPool-1-worker-10) [] VM '3a17534b-e86d-4563-8ca2-2a27c34b4a87(adm1) is running in db and not running in VDS 'ov-101'
2016-01-04 16:55:41,747 INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer] (ForkJoinPool-1-worker-10) [] add VM 'adm1' to HA rerun treatment
2016-01-04 16:55:41,752 ERROR [org.ovirt.engine.core.vdsbroker.VmsMonitoring] (ForkJoinPool-1-worker-10) [] Rerun VM '3a17534b-e86d-4563-8ca2-2a27c34b4a87'. Called from VDS 'ov-101'
2016-01-04 16:55:41,756 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-30) [] Correlation ID: 1f1deb62, Job ID: 5212e078-cf01-4312-ae00-7e0a136217a5, Call Stack: null, Custom Event ID: -1, Message: Failed to run VM adm1 on Host ov-101.
2016-01-04 16:55:41,760 INFO [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-30) [] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3a17534b-e86d-4563-8ca2-2a27c34b4a87=<VM, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2016-01-04 16:55:41,770 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (org.ovirt.thread.pool-8-thread-30) [] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{runAsync='true', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87'}), log id: 2577cd3a
2016-01-04 16:55:41,770 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (org.ovirt.thread.pool-8-thread-30) [] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 2577cd3a
2016-01-04 16:55:41,798 INFO [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-30) [] Running command: RunVmCommand internal: false. Entities affected : ID: 3a17534b-e86d-4563-8ca2-2a27c34b4a87 Type: VMAction group RUN_VM with role type USER
2016-01-04 16:55:41,850 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (org.ovirt.thread.pool-8-thread-30) [] START, UpdateVmDynamicDataVDSCommand( UpdateVmDynamicDataVDSCommandParameters:{runAsync='true', hostId='null', vmId='00000000-0000-0000-0000-000000000000', vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@dbe0ef0a'}), log id: 351fb749
2016-01-04 16:55:41,852 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (org.ovirt.thread.pool-8-thread-30) [] FINISH, UpdateVmDynamicDataVDSCommand, log id: 351fb749
2016-01-04 16:55:41,854 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (org.ovirt.thread.pool-8-thread-30) [] START, CreateVmVDSCommand( CreateVmVDSCommandParameters:{runAsync='true', hostId='65555052-9601-4e4f-88f5-a0f14dcc29eb', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87', vm='VM [adm1]'}), log id: 3163c7c3
2016-01-04 16:55:41,857 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (org.ovirt.thread.pool-8-thread-30) [] START, CreateVDSCommand(HostName = ov-102, CreateVmVDSCommandParameters:{runAsync='true', hostId='65555052-9601-4e4f-88f5-a0f14dcc29eb', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87', vm='VM [adm1]'}), log id: 569ec368
2016-01-04 16:55:41,860 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.VmInfoBuilderBase] (org.ovirt.thread.pool-8-thread-30) [] Bootable disk '9e43c66a-5bf1-44d6-94f4-52178d15c1e6' set to index '0'
2016-01-04 16:55:41,869 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (org.ovirt.thread.pool-8-thread-30) [] org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand pitReinjection=false,memGuaranteedSize=4054,smpThreadsPerCore=1,cpuType=SandyBridge,vmId=3a17534b-e86d-4563-8ca2-2a27c34b4a87,acpiEnable=true,numaTune={nodeset=0,1, mode=interleave},tabletEnable=true,spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard,vmType=kvm,keyboardLayout=en-us,smp=1,smpCoresPerSocket=1,emulatedMachine=pc-i440fx-rhel7.2.0,smartcardEnable=false,guestNumaNodes=[{memory=4054, cpus=0, nodeIndex=0}],transparentHugePages=true,vmName=adm1,maxVCpus=16,kvmEnable=true,devices=[{address={bus=0x00, domain=0x0000, function=0x0, slot=0x02, type=pci}, type=video, specParams={heads=1, vram=32768}, device=cirrus, deviceId=645e99e3-a9fa-4894-baf5-97b539236782}, {type=graphics, specParams={}, device=vnc, deviceId=12845c03-16a3-4bf0-a015-a15201a77673}, {iface=ide, shared=false, path=, address={bus=1, controller=0, unit=0, type=drive, target=0}, readonly=true, index=2, type=disk, specParams={path=}, device=cdrom, deviceId=ab048396-5dd8-4594-aa8a-9fe835a04cd1}, {shared=false, address={bus=0, controller=0, unit=0, type=drive, target=0}, imageID=9e43c66a-5bf1-44d6-94f4-52178d15c1e6, format=raw, index=0, optional=false, type=disk, deviceId=9e43c66a-5bf1-44d6-94f4-52178d15c1e6, domainID=1fb79d91-b245-4447-91e0-e57671152a8c, propagateErrors=off, iface=ide, readonly=false, bootOrder=1, poolID=00000001-0001-0001-0001-000000000154, volumeID=c736baca-de76-4593-b3dc-28bb8807e7a3, specParams={}, device=disk}, {shared=false, address={bus=0, controller=0, unit=1, type=drive, target=0}, imageID=a016b350-87ef-4c3b-b150-024907fed9c0, format=raw, optional=false, type=disk, deviceId=a016b350-87ef-4c3b-b150-024907fed9c0, domainID=1fb79d91-b245-4447-91e0-e57671152a8c, propagateErrors=off, iface=ide, readonly=false, poolID=00000001-0001-0001-0001-000000000154, 
volumeID=20fc4399-0b02-4da1-8aee-68df1629ca94, specParams={}, device=disk}, {filter=vdsm-no-mac-spoofing, nicModel=rtl8139, address={bus=0x00, domain=0x0000, function=0x0, slot=0x03, type=pci}, type=interface, specParams={inbound={}, outbound={}}, device=bridge, linkActive=true, deviceId=8e00d4cc-6a60-4598-82ee-645d742708de, macAddr=FA:0D:49:9E:A2:E6, network=server-vlan10}, {address={bus=0x00, domain=0x0000, function=0x0, slot=0x04, type=pci}, type=controller, specParams={}, device=virtio-serial, deviceId=8ac5777e-375f-4ec6-a6fd-856c7cd7363b}],custom={device_8617fb20-b870-45ea-8232-a70dd8b4551c=VmDevice:{id='VmDeviceId:{deviceId='8617fb20-b870-45ea-8232-a70dd8b4551c', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false'}, device_8617fb20-b870-45ea-8232-a70dd8b4551cdevice_f691fc09-31c8-43bf-bd82-c5acac8a1a76device_30bd748e-6ea8-434f-8587-d8ff8db5555e=VmDevice:{id='VmDeviceId:{deviceId='30bd748e-6ea8-434f-8587-d8ff8db5555e', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false'}, device_8617fb20-b870-45ea-8232-a70dd8b4551cdevice_f691fc09-31c8-43bf-bd82-c5acac8a1a76=VmDevice:{id='VmDeviceId:{deviceId='f691fc09-31c8-43bf-bd82-c5acac8a1a76', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', 
snapshotId='null', logicalName='null', usingScsiReservation='false'}},display=vnc,timeOffset=0,spiceSslCipherSuite=DEFAULT,nice=0,maxMemSize=4194304,maxMemSlots=16,bootMenuEnable=false,memSize=4054
2016-01-04 16:55:41,987 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (org.ovirt.thread.pool-8-thread-30) [] FINISH, CreateVDSCommand, log id: 569ec368
2016-01-04 16:55:41,991 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (org.ovirt.thread.pool-8-thread-30) [] FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id: 3163c7c3
2016-01-04 16:55:41,992 INFO [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-30) [] Lock freed to object 'EngineLock:{exclusiveLocks='[3a17534b-e86d-4563-8ca2-2a27c34b4a87=<VM, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2016-01-04 16:55:41,994 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-30) [] Correlation ID: 1f1deb62, Job ID: 5212e078-cf01-4312-ae00-7e0a136217a5, Call Stack: null, Custom Event ID: -1, Message: VM adm1 was started by jforeman@us.dignitastech.com@Dignitas AD (Host: ov-102).
2016-01-04 16:55:43,069 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-3) [] START, DestroyVDSCommand(HostName = ov-102, DestroyVmVDSCommandParameters:{runAsync='true', hostId='65555052-9601-4e4f-88f5-a0f14dcc29eb', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87', force='false', secondsToWait='0', gracefully='false', reason=''}), log id: 43dd93c5
2016-01-04 16:55:44,075 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-3) [] FINISH, DestroyVDSCommand, log id: 43dd93c5
2016-01-04 16:55:44,091 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-3) [] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM adm1 is down with error. Exit message: Unable to get volume size for domain 1fb79d91-b245-4447-91e0-e57671152a8c volume c736baca-de76-4593-b3dc-28bb8807e7a3.
2016-01-04 16:55:44,091 INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer] (ForkJoinPool-1-worker-3) [] Running on vds during rerun failed vm: '65555052-9601-4e4f-88f5-a0f14dcc29eb'
2016-01-04 16:55:44,092 INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer] (ForkJoinPool-1-worker-3) [] VM '3a17534b-e86d-4563-8ca2-2a27c34b4a87(adm1) is running in db and not running in VDS 'ov-102'
2016-01-04 16:55:44,092 INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer] (ForkJoinPool-1-worker-3) [] add VM 'adm1' to HA rerun treatment
2016-01-04 16:55:44,096 ERROR [org.ovirt.engine.core.vdsbroker.VmsMonitoring] (ForkJoinPool-1-worker-3) [] Rerun VM '3a17534b-e86d-4563-8ca2-2a27c34b4a87'. Called from VDS 'ov-102'
2016-01-04 16:55:44,128 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-35) [] Correlation ID: 1f1deb62, Job ID: 5212e078-cf01-4312-ae00-7e0a136217a5, Call Stack: null, Custom Event ID: -1, Message: Failed to run VM adm1 on Host ov-102.
2016-01-04 16:55:44,132 INFO [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-35) [] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3a17534b-e86d-4563-8ca2-2a27c34b4a87=<VM, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2016-01-04 16:55:44,141 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (org.ovirt.thread.pool-8-thread-35) [] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{runAsync='true', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87'}), log id: 545236ca
2016-01-04 16:55:44,141 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (org.ovirt.thread.pool-8-thread-35) [] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 545236ca
2016-01-04 16:55:44,162 WARN [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-35) [] CanDoAction of action 'RunVm' failed for user jforeman@us.dignitastech.com@Dignitas AD. Reasons: VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,SCHEDULING_NO_HOSTS
2016-01-04 16:55:44,162 INFO [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-35) [] Lock freed to object 'EngineLock:{exclusiveLocks='[3a17534b-e86d-4563-8ca2-2a27c34b4a87=<VM, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2016-01-04 16:55:44,170 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-35) [] Correlation ID: 1f1deb62, Job ID: 5212e078-cf01-4312-ae00-7e0a136217a5, Call Stack: null, Custom Event ID: -1, Message: Failed to run VM adm1 (User: jforeman@us.dignitastech.com@Dignitas AD).
2016-01-04 16:55:44,173 INFO [org.ovirt.engine.core.bll.ProcessDownVmCommand] (org.ovirt.thread.pool-8-thread-46) [48c1f0bd] Running command: ProcessDownVmCommand internal: true.
2
2
--------------020404060709070704020700
Content-Type: text/plain; charset="gbk"; format=flowed
Content-Transfer-Encoding: 8bit
Hi guys,
I created a VM in oVirt and found that it has port = -1. How can I connect
to it using something like remote-viewer?
--------------
console.vv
[virt-viewer]
type=spice
host=XXX.XXX.XXX.XXX
port=-1
password=J4xu1swd59A5
# Password is valid for 120 seconds.
delete-this-file=1
fullscreen=0
title=test:%d
toggle-fullscreen=shift+f11
...
...
...
--------------
I usually use the following command to connect to my VM when it has a
positive port value:
remote-viewer spice://XXX.XXX.XXX.XXX:590X
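A port of -1 normally means the graphics server is listening only on its TLS port, so the plain spice://host:port form has nothing to connect to. Two possible alternatives, sketched below (the VM name is a placeholder):

```shell
# Open the downloaded ticket directly: remote-viewer reads host, tls-port
# and the CA certificate from console.vv itself.
remote-viewer console.vv

# Or, on the host running the VM, ask libvirt for the display URI
# (it prints a spice:// URI including the tls-port when one is set):
virsh domdisplay myvm
```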
Regards,
Kenn
3
5
Hi All,
When creating a new host, I lose the IP configuration on the host while the interfaces are being created.
Please advise.
Using:
oVirt engine 3.6.3.4-1 on CentOS 7
oVirt engine SDK python 3.6.3.0
->Rein.
2
2
Hello,
I am installing my first VM with CentOS, setting up VNC and SPICE, with a
boot option and the CentOS ISO in the DVD drive. I downloaded virt-viewer 3.1
for my Windows 10 machine. When I start the VNC console, which opens console.vv and
then launches virt-viewer, I see only a BIOS version message and no
boot medium information. If I reboot, the screen doesn't change, and I
also get no option from the activated boot menu. Can anyone help?
3
4
Hello,
We recently had our storage array hang. We were able to get the disk array
back online; however, several of our VMs attempted to migrate from one host
to another. Now that the storage is online, several of the VMs still
indicate they are in a state of migration, and I cannot manage them from
within the oVirt engine web administration GUI.
Most of the VMs are actually running properly, and I can access them either
via SSH or RDP (depending on the OS).
I have attempted to clear the migration status with the following:
- service ovirt-engine restart (on the ovirt-engine vm)
- reboot ovirt-engine vm
- PGPASSWORD=############### ./unlock_entity.sh -t vm vm-name
- restart the vm from within the vm
Additionally, I went on the hypervisor of one of the VMs and ran:
vdsClient -s 0 destroy vm_guid
Unfortunately, that removed all traces of the VM from that hypervisor, and it is
not on any of the other three hypervisors, although ovirt-engine still shows it
and reports it as migrating.
I'd like to clear the migrating status of the affected VMs and figure out
how to recover the missing one.
Any help would be appreciated. I am not the person who originally setup the
ovirt installation, so I am not sure where to go to pull logs.
Thank you for your assistance.
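If unlock_entity.sh does not clear it, one last-resort approach seen in the field is to set the VM's status directly in the engine database. The sketch below is not a supported procedure: the status codes are taken from the 3.x sources (0 = Down, 5 = MigratingFrom, 6 = MigratingTo) and should be verified against your engine version, and the database must be backed up first.

```shell
# On the engine VM: back up first.
engine-backup --mode=backup --file=engine.bak --log=engine-backup.log

# List VMs the engine believes are migrating (status codes per the 3.x
# sources -- verify for your version before acting on them):
su - postgres -c "psql engine -c \"SELECT vm_guid, status FROM vm_dynamic WHERE status IN (5,6);\""

# Force one stuck VM to Down; the vm_guid is a placeholder.
su - postgres -c "psql engine -c \"UPDATE vm_dynamic SET status = 0, migrating_to_vds = NULL WHERE vm_guid = '<vm_guid>';\""
```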
***
*Mark Steele*
CIO / VP Technical Operations | TelVue Corporation
TelVue - We Share Your Vision
800.885.8886 x128 | msteele(a)telvue.com | http://www.telvue.com
twitter: http://twitter.com/telvue | facebook:
https://www.facebook.com/telvue
3
3
I'm running an up-to-date oVirt 3.6.3.4 on a brand-new CentOS 7.2.
The host is new too and dedicated to oVirt.
When I try to launch a VM, I get:
Thread-9407::ERROR::2016-03-24 09:16:18,301::vm::759::virt.vm::(_startUnderlyingVm) vmId=`a32e1043-a5a5-4e4c-8436-f7b7a4ff644c`::The vm start process failed
Traceback (most recent call last):
File "/usr/share/vdsm/virt/vm.py", line 703, in _startUnderlyingVm
self._run()
File "/usr/share/vdsm/virt/vm.py", line 1941, in _run
self._connection.createXML(domxml, flags),
File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 124, in wrapper
ret = f(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1313, in wrapper
return func(inst, *args, **kwargs)
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3611, in createXML
if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error: process exited while connecting to monitor: ((null):23672): Spice-Warning **: reds.c:3311:reds_init_ssl: Could not use private key file
2016-03-24T08:16:18.005359Z qemu-kvm: failed to initialize spice server
/var/log/libvirt/qemu/test.log says
2016-03-24 08:55:48.214+0000: starting up libvirt version: 1.2.17, package: 13.el7_2.3 (CentOS BuildSystem <http://bugs.centos.org>, 2016-02-16-17:06:00, worker1.bsys.centos.org) qemu version: 2.3.0 (qemu-kvm-ev-2.3.0-31.el7_2.7.1)
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name test -S -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off -cpu Haswell-noTSX -m size=2097152k,slots=16,maxmem=4294967296k -realtime mlock=off -smp 2,maxcpus=16,sockets=16,cores=1,threads=1 -numa node,nodeid=0,cpus=0-1,mem=2048 -uuid a32e1043-a5a5-4e4c-8436-f7b7a4ff644c -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-2.1511.el7.centos.2.10,serial=30373237-3132-5A43-3235-343233333937,uuid=a32e1043-a5a5-4e4c-8436-f7b7a4ff644c -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-test/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2016-03-24T08:55:46,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot menu=on,strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/00000001-0001-0001-0001-00000000022a/85d19e93-ee08-41bb-94c9-56adf17287b4/images/da6f49dd-8662-418b-a859-3523b4360c0e/930bbe74-7470-4b22-b096-fdb03276262d,if=none,id=drive-scsi0-0-0-0,format=raw,serial=da6f49dd-8662-418b-a859-3523b4360c0e,cache=none,werror=stop,rerror=stop,aio=native,iops=300 -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 -netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=28 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:51,bus=pci.0,addr=0x3,bootindex=2 -chardev socket,id=charserial0,path=/var/run/ovirt-vmconsole-console/a32e1043-a5a5-4e4c-8436-f7b7a4ff644c.sock,server,nowait -device isa-serial,chardev=charserial0,id=serial0 -chardev 
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/a32e1043-a5a5-4e4c-8436-f7b7a4ff644c.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/a32e1043-a5a5-4e4c-8436-f7b7a4ff644c.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice port=5900,tls-port=5901,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,seamless-migration=on -device qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vgamem_mb=16,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -msg timestamp=on
((null):29166): Spice-Warning **: reds.c:3311:reds_init_ssl: Could not use private key file
2016-03-24T08:55:48.329252Z qemu-kvm: failed to initialize spice server
2016-03-24 08:55:48.479+0000: shutting down
And indeed, when I strace libvirt:
open("/etc/pki/vdsm/libvirt-spice/server-key.pem", O_RDONLY) = -1 EACCES (Permission denied)
Running chmod a+r /etc/pki/vdsm/libvirt-spice/server-key.pem worked around the problem, but it's obviously not a proper solution.
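For what it's worth, the usual root cause here is that qemu runs as the qemu user while the key is only readable by its owner, so restoring group readability (e.g. owner vdsm, group kvm, mode 0440) avoids making the key world-readable. A minimal sketch of the underlying Unix read-permission check; the uid/gid values used below (qemu uid 107, kvm gid 36) are typical-host assumptions, not taken from this report:

```python
import stat

# Sketch of the classic Unix read-permission check. The uid/gid values in
# the example (qemu uid 107, kvm gid 36) are typical-host assumptions.
def can_read(file_uid, file_gid, mode, proc_uid, proc_gids):
    """True if a process with proc_uid/proc_gids may read the file."""
    if proc_uid == 0:                    # root bypasses permission bits
        return True
    if proc_uid == file_uid:             # owner class wins
        return bool(mode & stat.S_IRUSR)
    if file_gid in proc_gids:            # then group class
        return bool(mode & stat.S_IRGRP)
    return bool(mode & stat.S_IROTH)     # finally "other"

# A key owned by vdsm:kvm with mode 0440 is readable by qemu through the
# kvm group, without being world-readable:
# can_read(file_uid=36, file_gid=36, mode=0o440, proc_uid=107, proc_gids={107, 36})
```

So rather than chmod a+r, something like chown vdsm:kvm server-key.pem followed by chmod 0440 server-key.pem should let qemu read the key while keeping it private.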
I know this has been mentioned several times, but I'm unable to find a
solution.
On my Mac, I've tried Firefox, Chrome, Safari
On Linux, I've tried Iceweasel and Firefox
On Windows, I've tried Edge
All of these report that the browser is not optimal, and pages either never
render or, in the case of Firefox, render after about 2-3 minutes per click.
Apart from those occasional bursts of speed, the browsers are completely
inoperable.
I'm at the end of the "oVirt Installation" in the oVirt Quick Start Guide (
https://www.ovirt.org/documentation/quickstart/quickstart-guide/) There's
a lot of mention around Spice, but I think that's something further down
the road after I can interact with the Engine portal.
Any help would be appreciated in figuring out what secret handshake is
needed to get a usable browser.
oVirt Engine:
Version 3.6.3.4-1.el7.centos
CentOS 7.2
Thanks in advance.
Is it planned to allow the snapshots that are created during a live storage
migration to be automatically deleted once the migration has completed? It
is easy to forget about them and end up with large snapshots.
-Alastair
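Until such automatic cleanup exists, the leftover snapshots can at least be found by their description. A minimal sketch, assuming the engine labels them with the text "Auto-generated for Live Storage Migration" (the exact marker may differ by version, so check what your engine actually writes):

```python
# Sketch: find snapshots left behind by live storage migration by their
# description, so they can be reviewed and removed. The marker text is an
# assumption -- check what your engine version actually writes.
LSM_MARKER = "auto-generated for live storage migration"

def leftover_lsm_snapshots(snapshots):
    """snapshots: iterable of dicts with 'id' and 'description' keys."""
    return [s["id"] for s in snapshots
            if LSM_MARKER in s.get("description", "").lower()]
```

Feeding this the snapshot list fetched from /api/vms/{id}/snapshots would give the candidates to delete.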
Spice lags using a proxy / Spice foldersharing in oVirt / spice driver for win10
by Marco Bormann 01 Apr '16
Good evening everyone,
I am Marco from Germany, and I am currently setting up an oVirt environment
in our data center. The environment consists of one dedicated engine and six
oVirt hosts.
I have three questions that I couldn't get answered very well during my web
research over the last few days, so it would be very helpful if someone here
could support me.
We are using the cluster for normal VM usage and for VDI desktop
virtualization with the Spice protocol.
1. Question
We are using a Spice proxy because we have some VMs reachable externally via
the engine and Spice. Since we started using the proxy, some users have been
seeing disconnects of the USB redirection, and sometimes the mouse and
keyboard stop working in the session, so we have to restart the console. Is
anyone else using a proxy who has seen the same behaviour and found a
workaround or fix?
2. Question
We want to use the Spice folder-sharing function, but it is always greyed
out and unusable for us. I can't find anything on the web about this
function in Remote Viewer. What is necessary to use it? (Linux and Windows
clients)
3. Question
The last question is about future use of the cluster for VMs running
Windows 10 (and other Microsoft operating systems of this generation, like
Server 2012). Is there a way to get the Spice video drivers working with
these operating systems? The official statement I keep reading is that no
official driver is available at the moment.
thanks a lot and best wishes from Cologne in Germany
Marco
Hi Everyone,
This doesn't seem to be a bug in oVirt so much as in the
libvirt/virt-v2v tool it uses for importing, but I figured someone here
might have run into this issue before. I'm trying to import some VMs
from my VMWare cluster and it's failing with a "file not found" when
trying to download the disk images. I have files in the datastore like:
systest-55-000001-delta.vmdk 4G
systest-55-000001.vmdk 1K
systest-55-Snapshot1.vmsn 4G
systest-55-flat.vmdk 40G
systest-55.vmdk 1K
... bunch more small .vm?? files
For some reason virt-v2v is trying to download a file called
systest-55-000001-flat.vmdk which doesn't exist. I'm assuming this has
something to do with the snapshots stored in the folder... Does anyone
know a way to deal with that? Can I just delete the snapshots or is
that going to delete data stored on the VM since the last snapshot? I'm
using virt-v2v 1.28.1 if that makes a difference.
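One way to see why a -flat (or -delta) file is being requested is to read the small text descriptor files: a snapshot descriptor like systest-55-000001.vmdk names its data extent on an "RW ..." line and its parent via parentFileNameHint. A minimal sketch of pulling those references out, based on the common VMDK text-descriptor layout:

```python
import re

# Sketch: pull the extent file names and the parent hint out of a VMDK
# text descriptor, to see which on-disk files a given .vmdk references.
# The regexes follow the common text-descriptor layout.
EXTENT_RE = re.compile(r'^(?:RW|RDONLY|NOACCESS)\s+\d+\s+\w+\s+"([^"]+)"', re.M)
PARENT_RE = re.compile(r'^parentFileNameHint="([^"]+)"', re.M)

def vmdk_refs(descriptor_text):
    parent = PARENT_RE.search(descriptor_text)
    return {
        "extents": EXTENT_RE.findall(descriptor_text),
        "parent": parent.group(1) if parent else None,
    }
```

As for the snapshots: deleting ("consolidating") a snapshot in vSphere merges the delta into the base disk rather than discarding data written since the snapshot, so consolidating before the import is the usual way to get a single flat disk that virt-v2v can fetch. Still, take a backup first.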
On 03/31/2016 07:00 AM, Vishal Panchal wrote:
> I send the following request:
>
> {"name":"vishaltestdisk","storage_domains":{"storage_domain":[{"name":"s1eu"}]},"provisioned_size":"512000000","size":"512000000","interface":"virtio","format":"cow","sparse":true,"bootable":true}
>
That request is syntactically correct. But as explained by Ondra
Machacek there is a bug that makes this fail:
assign DiskProfileUser role to Everyone group to newly added
storagedomain's profile
https://bugzilla.redhat.com/1209505
That can affect you if you are trying to perform this operation with a
user that doesn't have the permission. Are you? Or are you using the
administrator?
Ondra, do you know if there is a workaround for this? Can the permission
be added manually? How?
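For anyone reproducing this, the request body itself can be built like the one quoted above. A minimal sketch; field names mirror the JSON in this thread (oVirt 3.6 REST API), and the sizes/names are just the values from the report:

```python
# Sketch: build the JSON body for POST /api/disks, mirroring the request
# quoted above. Field names follow the oVirt 3.6 REST API; the name, domain
# and size in the test are just the values from this thread.
def disk_add_body(name, storage_domain, size_bytes, fmt="cow", iface="virtio"):
    return {
        "name": name,
        "storage_domains": {"storage_domain": [{"name": storage_domain}]},
        "provisioned_size": str(size_bytes),
        "size": str(size_bytes),
        "interface": iface,
        "format": fmt,
        "sparse": True,
        "bootable": True,
    }
```

If the body is correct, a failure like the one above points at permissions on the disk profile rather than at the request.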
>
>
> On Wed, Mar 30, 2016 at 8:04 PM, Juan Hernández <jhernand(a)redhat.com
> <mailto:jhernand@redhat.com>> wrote:
>
> On 03/30/2016 01:31 PM, Vishal Panchal wrote:
> > Hello,
> >
> > I got following error during add new disk using API but on other side
> > from admin panel I can create new disk.
> >
> > *Error :*
> > Cannot add Virtual Machine Disk. The user doesn't have permissions to
> > attach Disk Profile to the Disk.*
> > *
>
> What version of the engine? How are you creating the disk? Can you share
> the request that you are sending to the server?
>
--
Business address: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta
3ºD, 28016 Madrid, Spain
Registered in the Madrid Mercantile Registry – C.I.F. B82657941 - Red Hat S.L.
Hi,
is there a way to "shrink" a thin disk image to reclaim deleted space like
using virt-sparsify?
Or can it be done by making a snapshot and clone the VM to a new one?
Thanks a lot
Christian
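virt-sparsify can do this, but only offline: the VM must be shut down (or you work on a clone), since the tool refuses images that are in use. A minimal sketch of assembling the invocation; the paths and output format are placeholders, and this assumes a file-based storage domain where the image path is directly accessible:

```python
# Sketch: assemble a virt-sparsify invocation. virt-sparsify must run on an
# image that is not in use, so shut the VM down (or work on a clone) first.
# The output format and paths here are placeholders.
def sparsify_cmd(src, dst, out_format="raw"):
    return ["virt-sparsify", "--convert", out_format, src, dst]
```

Cloning via a snapshot does not by itself reclaim the deleted space; it is the sparsify/convert step that drops the unused blocks. On block-based (iSCSI/FC) domains, where the image is an LV, the procedure is more involved.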
Hello,
I hit a bug, and just want to share a solution.
VMs with a payload (Initial Run) do not start with libvirt >= 1.3.2. The VDSM log says: "libvirtError: unsupported configuration: Disks 'hdc' and 'hdd' have identical serial".
Yes, both cdrom devices have the same serial. Empty serial:
                <disk device="cdrom" snapshot="no" type="file">
                        <source file="/var/run/vdsm/payload/2eaf9c8e-2123-4b48-9b62-f96168ac7a36.41f22cb1676858ad4e22e8440519032d.img" startupPolicy="optional"/>
                        <target bus="ide" dev="hdd"/>
                        <readonly/>
                        <serial/>
                </disk>
                <disk device="cdrom" snapshot="no" type="file">
                        <source file="" startupPolicy="optional"/>
                        <target bus="ide" dev="hdc"/>
                        <readonly/>
                        <serial/>
                </disk>
I don't know where the issue is. Either libvirt should work with empty serials, or VDSM should generate a serial at least for the payload device.
Related bug - https://bugzilla.redhat.com/show_bug.cgi?id=1245013
A quick fix is to install a VDSM hook at /usr/libexec/vdsm/hooks/before_vm_start:
---------- cut here ----------
#!/usr/bin/python

import hooking
import uuid

domxml = hooking.read_domxml()

for disk in domxml.getElementsByTagName('disk'):
    if disk.getAttribute('device') == 'cdrom':
        for source in disk.getElementsByTagName('source'):
            if source.getAttribute('file').find('/payload/') > 0:
                for serial in disk.getElementsByTagName('serial'):
                    if not serial.hasChildNodes():
                        serial.appendChild(domxml.createTextNode(str(uuid.uuid4())))
                        hooking.write_domxml(domxml)
---------- cut here ----------
--_000_D15FABB6263B477486650932D7C176DEacroniscom_
Content-Type: text/html; charset="utf-8"
Content-ID: <7E43200465BE214A855971DE0A701E42(a)acronis.com>
Content-Transfer-Encoding: base64
PGh0bWw+DQo8aGVhZD4NCjxtZXRhIGh0dHAtZXF1aXY9IkNvbnRlbnQtVHlwZSIgY29udGVudD0i
dGV4dC9odG1sOyBjaGFyc2V0PXV0Zi04Ij4NCjwvaGVhZD4NCjxib2R5IHN0eWxlPSJ3b3JkLXdy
YXA6IGJyZWFrLXdvcmQ7IC13ZWJraXQtbmJzcC1tb2RlOiBzcGFjZTsgLXdlYmtpdC1saW5lLWJy
ZWFrOiBhZnRlci13aGl0ZS1zcGFjZTsiPg0KPGRpdiBzdHlsZT0iY29sb3I6IHJnYigwLCAwLCAw
KTsgZm9udC1mYW1pbHk6IENhbGlicmksIHNhbnMtc2VyaWY7IGZvbnQtc2l6ZTogMTRweDsiPg0K
SGVsbG8sPC9kaXY+DQo8ZGl2IHN0eWxlPSJjb2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWls
eTogQ2FsaWJyaSwgc2Fucy1zZXJpZjsgZm9udC1zaXplOiAxNHB4OyI+DQo8YnI+DQo8L2Rpdj4N
CjxkaXYgc3R5bGU9ImNvbG9yOiByZ2IoMCwgMCwgMCk7IGZvbnQtZmFtaWx5OiBDYWxpYnJpLCBz
YW5zLXNlcmlmOyBmb250LXNpemU6IDE0cHg7Ij4NCkkgaGl0IGEgYnVnLCBhbmQganVzdCB3YW50
IHRvIHNoYXJlIGEgc29sdXRpb24uPC9kaXY+DQo8ZGl2IHN0eWxlPSJjb2xvcjogcmdiKDAsIDAs
IDApOyBmb250LWZhbWlseTogQ2FsaWJyaSwgc2Fucy1zZXJpZjsgZm9udC1zaXplOiAxNHB4OyI+
DQo8YnI+DQo8L2Rpdj4NCjxkaXYgc3R5bGU9ImNvbG9yOiByZ2IoMCwgMCwgMCk7IGZvbnQtZmFt
aWx5OiBDYWxpYnJpLCBzYW5zLXNlcmlmOyBmb250LXNpemU6IDE0cHg7Ij4NClZNIHdpdGggYSBw
YXlsb2FkIChJbml0aWFsIHJ1bikgZG8gbm90IHN0YXJ0IHdpdGggbGlidmlydCAmZ3Q7PSZuYnNw
OzEuMy4yLiBWRFNNIGxvZyBzYXlzOiAmcXVvdDtsaWJ2aXJ0RXJyb3I6IHVuc3VwcG9ydGVkIGNv
bmZpZ3VyYXRpb246IERpc2tzICdoZGMnIGFuZCAnaGRkJyBoYXZlIGlkZW50aWNhbCBzZXJpYWwm
cXVvdDsuPC9kaXY+DQo8ZGl2IHN0eWxlPSJjb2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWls
eTogQ2FsaWJyaSwgc2Fucy1zZXJpZjsgZm9udC1zaXplOiAxNHB4OyI+DQo8YnI+DQo8L2Rpdj4N
CjxkaXYgc3R5bGU9ImNvbG9yOiByZ2IoMCwgMCwgMCk7IGZvbnQtZmFtaWx5OiBDYWxpYnJpLCBz
YW5zLXNlcmlmOyBmb250LXNpemU6IDE0cHg7Ij4NClllcywgYm90aCBjZHJvbSBkZXZpY2VzIGhh
dmUgdGhlIHNhbWUgc2VyaWFsLiBFbXB0eSBzZXJpYWw6PC9kaXY+DQo8ZGl2IHN0eWxlPSJjb2xv
cjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogQ2FsaWJyaSwgc2Fucy1zZXJpZjsgZm9udC1z
aXplOiAxNHB4OyI+DQo8YnI+DQo8L2Rpdj4NCjxkaXY+DQo8ZGl2Pjxmb250IGZhY2U9IkNhbGli
cmksc2Fucy1zZXJpZiI+Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsg
Jm5ic3A7ICZuYnNwOyAmbHQ7ZGlzayBkZXZpY2U9JnF1b3Q7Y2Ryb20mcXVvdDsgc25hcHNob3Q9
JnF1b3Q7bm8mcXVvdDsgdHlwZT0mcXVvdDtmaWxlJnF1b3Q7Jmd0OzwvZm9udD48L2Rpdj4NCjxk
aXY+PGZvbnQgZmFjZT0iQ2FsaWJyaSxzYW5zLXNlcmlmIj4mbmJzcDsgJm5ic3A7ICZuYnNwOyAm
bmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZu
YnNwOyAmbHQ7c291cmNlIGZpbGU9JnF1b3Q7L3Zhci9ydW4vdmRzbS9wYXlsb2FkLzJlYWY5Yzhl
LTIxMjMtNGI0OC05YjYyLWY5NjE2OGFjN2EzNi40MWYyMmNiMTY3Njg1OGFkNGUyMmU4NDQwNTE5
MDMyZC5pbWcmcXVvdDsgc3RhcnR1cFBvbGljeT0mcXVvdDtvcHRpb25hbCZxdW90Oy8mZ3Q7PC9m
b250PjwvZGl2Pg0KPGRpdj48Zm9udCBmYWNlPSJDYWxpYnJpLHNhbnMtc2VyaWYiPiZuYnNwOyAm
bmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZu
YnNwOyAmbmJzcDsgJm5ic3A7ICZsdDt0YXJnZXQgYnVzPSZxdW90O2lkZSZxdW90OyBkZXY9JnF1
b3Q7aGRkJnF1b3Q7LyZndDs8L2ZvbnQ+PC9kaXY+DQo8ZGl2Pjxmb250IGZhY2U9IkNhbGlicmks
c2Fucy1zZXJpZiI+Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5i
c3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJmx0O3JlYWRvbmx5LyZndDs8
L2ZvbnQ+PC9kaXY+DQo8ZGl2Pjxmb250IGZhY2U9IkNhbGlicmksc2Fucy1zZXJpZiI+Jm5ic3A7
ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsg
Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJmx0O3NlcmlhbC8mZ3Q7PC9mb250PjwvZGl2Pg0KPGRpdj48
Zm9udCBmYWNlPSJDYWxpYnJpLHNhbnMtc2VyaWYiPiZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNw
OyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJmx0Oy9kaXNrJmd0OzwvZm9udD48L2Rpdj4N
CjxkaXY+PGZvbnQgZmFjZT0iQ2FsaWJyaSxzYW5zLXNlcmlmIj4mbmJzcDsgJm5ic3A7ICZuYnNw
OyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZsdDtkaXNrIGRldmljZT0mcXVv
dDtjZHJvbSZxdW90OyBzbmFwc2hvdD0mcXVvdDtubyZxdW90OyB0eXBlPSZxdW90O2ZpbGUmcXVv
dDsmZ3Q7PC9mb250PjwvZGl2Pg0KPGRpdj48Zm9udCBmYWNlPSJDYWxpYnJpLHNhbnMtc2VyaWYi
PiZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsg
Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZsdDtzb3VyY2UgZmlsZT0mcXVvdDsmcXVvdDsg
c3RhcnR1cFBvbGljeT0mcXVvdDtvcHRpb25hbCZxdW90Oy8mZ3Q7PC9mb250PjwvZGl2Pg0KPGRp
dj48Zm9udCBmYWNlPSJDYWxpYnJpLHNhbnMtc2VyaWYiPiZuYnNwOyAmbmJzcDsgJm5ic3A7ICZu
YnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5i
c3A7ICZsdDt0YXJnZXQgYnVzPSZxdW90O2lkZSZxdW90OyBkZXY9JnF1b3Q7aGRjJnF1b3Q7LyZn
dDs8L2ZvbnQ+PC9kaXY+DQo8ZGl2Pjxmb250IGZhY2U9IkNhbGlicmksc2Fucy1zZXJpZiI+Jm5i
c3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJz
cDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJmx0O3JlYWRvbmx5LyZndDs8L2ZvbnQ+PC9kaXY+DQo8
ZGl2Pjxmb250IGZhY2U9IkNhbGlicmksc2Fucy1zZXJpZiI+Jm5ic3A7ICZuYnNwOyAmbmJzcDsg
Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAm
bmJzcDsgJmx0O3NlcmlhbC8mZ3Q7PC9mb250PjwvZGl2Pg0KPGRpdj48Zm9udCBmYWNlPSJDYWxp
YnJpLHNhbnMtc2VyaWYiPiZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7
ICZuYnNwOyAmbmJzcDsgJmx0Oy9kaXNrJmd0OzwvZm9udD48L2Rpdj4NCjxkaXYgc3R5bGU9ImNv
bG9yOiByZ2IoMCwgMCwgMCk7IGZvbnQtZmFtaWx5OiBDYWxpYnJpLCBzYW5zLXNlcmlmOyBmb250
LXNpemU6IDE0cHg7Ij4NCjxicj4NCjwvZGl2Pg0KPC9kaXY+DQo8ZGl2IHN0eWxlPSJjb2xvcjog
cmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogQ2FsaWJyaSwgc2Fucy1zZXJpZjsgZm9udC1zaXpl
OiAxNHB4OyI+DQpJIGRvbid0IGtub3cgd2hlcmUgaXMgdGhlIGlzc3VlLiBFaXRoZXIgbGlidmly
dCBzaG91bGQgd29yayB3aXRoIGVtcHR5IHNlcmlhbHMsIG9yIFZEU00gc2hvdWxkIGdlbmVyYXRl
IHNlcmlhbCBhdCBsZWFzdCBmb3IgcGF5bG9hZCBkZXZpY2UuPC9kaXY+DQo8ZGl2IHN0eWxlPSJj
b2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogQ2FsaWJyaSwgc2Fucy1zZXJpZjsgZm9u
dC1zaXplOiAxNHB4OyI+DQo8YnI+DQo8L2Rpdj4NCjxkaXYgc3R5bGU9ImNvbG9yOiByZ2IoMCwg
MCwgMCk7IGZvbnQtZmFtaWx5OiBDYWxpYnJpLCBzYW5zLXNlcmlmOyBmb250LXNpemU6IDE0cHg7
Ij4NClJlbGF0ZWQgYnVnIC0gJm5ic3A7PGEgaHJlZj0iaHR0cHM6Ly9idWd6aWxsYS5yZWRoYXQu
Y29tL3Nob3dfYnVnLmNnaT9pZD0xMjQ1MDEzIj5odHRwczovL2J1Z3ppbGxhLnJlZGhhdC5jb20v
c2hvd19idWcuY2dpP2lkPTEyNDUwMTM8L2E+PC9kaXY+DQo8ZGl2IHN0eWxlPSJjb2xvcjogcmdi
KDAsIDAsIDApOyBmb250LWZhbWlseTogQ2FsaWJyaSwgc2Fucy1zZXJpZjsgZm9udC1zaXplOiAx
NHB4OyI+DQo8YnI+DQo8L2Rpdj4NCjxkaXYgc3R5bGU9ImNvbG9yOiByZ2IoMCwgMCwgMCk7IGZv
bnQtZmFtaWx5OiBDYWxpYnJpLCBzYW5zLXNlcmlmOyBmb250LXNpemU6IDE0cHg7Ij4NClF1aWNr
IGZpeCBpcyB0byBpbnN0YWxsIGEgVkRTTSBob29rIHRvJm5ic3A7L3Vzci9saWJleGVjL3Zkc20v
aG9va3MvYmVmb3JlX3ZtX3N0YXJ0OjwvZGl2Pg0KPGRpdiBzdHlsZT0iY29sb3I6IHJnYigwLCAw
LCAwKTsgZm9udC1mYW1pbHk6IENhbGlicmksIHNhbnMtc2VyaWY7IGZvbnQtc2l6ZTogMTRweDsi
Pg0KLS0tLS0tLS0tLSBjdXQgaGVyZSAtLS0tLS0tLS0tPC9kaXY+DQo8ZGl2Pg0KPGRpdj4NCjxk
aXY+PGZvbnQgZmFjZT0iQ2FsaWJyaSxzYW5zLXNlcmlmIj4jIS91c3IvYmluL3B5dGhvbjwvZm9u
dD48L2Rpdj4NCjxkaXY+PGZvbnQgZmFjZT0iQ2FsaWJyaSxzYW5zLXNlcmlmIj48YnI+DQo8L2Zv
bnQ+PC9kaXY+DQo8ZGl2Pjxmb250IGZhY2U9IkNhbGlicmksc2Fucy1zZXJpZiI+aW1wb3J0IGhv
b2tpbmc8L2ZvbnQ+PC9kaXY+DQo8ZGl2Pjxmb250IGZhY2U9IkNhbGlicmksc2Fucy1zZXJpZiI+
aW1wb3J0IHV1aWQ8L2ZvbnQ+PC9kaXY+DQo8ZGl2Pjxmb250IGZhY2U9IkNhbGlicmksc2Fucy1z
ZXJpZiI+PGJyPg0KPC9mb250PjwvZGl2Pg0KPGRpdj48Zm9udCBmYWNlPSJDYWxpYnJpLHNhbnMt
c2VyaWYiPmRvbXhtbCA9IGhvb2tpbmcucmVhZF9kb214bWwoKTwvZm9udD48L2Rpdj4NCjxkaXY+
PGZvbnQgZmFjZT0iQ2FsaWJyaSxzYW5zLXNlcmlmIj48YnI+DQo8L2ZvbnQ+PC9kaXY+DQo8ZGl2
Pjxmb250IGZhY2U9IkNhbGlicmksc2Fucy1zZXJpZiI+Zm9yIGRpc2sgaW4gZG9teG1sLmdldEVs
[decoded from a base64 HTML attachment; the readable part is the tail of a vdsm hook script:]

…ementsByTagName('disk'):
    if disk.getAttribute('device') == 'cdrom':
        for source in disk.getElementsByTagName('source'):
            if source.getAttribute('file').find('/payload/') > 0:
                for serial in disk.getElementsByTagName('serial'):
                    if not serial.hasChildNodes():
                        serial.appendChild(domxml.createTextNode(str(uuid.uuid4())))
                        hooking.write_domxml(domxml)
---------- cut here ----------
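For context, the decoded fragment above appends a generated serial to cdrom payload disks. A self-contained sketch of that logic, with the vdsm-specific `hooking.read_domxml()`/`write_domxml()` calls factored out (the `hooking` module only exists inside vdsm's hook environment), could look like this:

```python
import uuid
from xml.dom import minidom


def add_payload_serials(domxml):
    """Give every cdrom disk backed by a /payload/ image a serial
    (so the guest can identify the payload device).  Returns True
    if the domxml was modified."""
    changed = False
    for disk in domxml.getElementsByTagName('disk'):
        if disk.getAttribute('device') != 'cdrom':
            continue
        for source in disk.getElementsByTagName('source'):
            if source.getAttribute('file').find('/payload/') > 0:
                for serial in disk.getElementsByTagName('serial'):
                    if not serial.hasChildNodes():
                        serial.appendChild(
                            domxml.createTextNode(str(uuid.uuid4())))
                        changed = True
    return changed
```

In a real before_vm_start hook you would obtain the DOM with `domxml = hooking.read_domxml()` and, if the function returns True, hand it back with `hooking.write_domxml(domxml)`.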
Hi Zhao,
This is not a bug. If you reboot the host without letting the oVirt engine
know, the engine will assume something is wrong with the host and, IIRC, will
trigger some mechanism to fence it (SSH soft fencing, power management, etc.).
In these circumstances, if the VMs are not HA VMs, they will not automatically
restart on the other healthy hosts.
You can edit the VMs as highly available to make them reboot automatically on
another host.
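If you prefer to flip the flag over the REST API instead of the UI, the update body is small. A sketch of building it (element names are from my recollection of the 3.6 REST schema; verify against your engine's /api before relying on them):

```python
from xml.etree import ElementTree as ET


def ha_update_body(enabled=True, priority=50):
    """Build the body for PUT /api/vms/{vm_id} that flips the HA
    flag.  Element names assume the oVirt 3.6 REST schema."""
    vm = ET.Element('vm')
    ha = ET.SubElement(vm, 'high_availability')
    ET.SubElement(ha, 'enabled').text = str(bool(enabled)).lower()
    ET.SubElement(ha, 'priority').text = str(priority)
    return ET.tostring(vm, encoding='unicode')
```

You would PUT this body, with `Content-Type: application/xml`, to `/api/vms/{vm_id}`.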
2016-03-31 11:55 GMT+08:00 赵亮1 <zhao.liang1(a)puxinasset.com>:
> Hi plysan,
>
> I had reboot the host manually from the host itself(i did it many times),
> and i had setup the power management succssefully(i used the DELL IDRAC
> ---idrac8), the problem that i meet is a bug? or this is new feature?
>
>
> ------------------ Original ------------------
> *From: * "plysan"<plysab(a)gmail.com>;
> *Date: * Thu, Mar 31, 2016 11:10 AM
> *To: * "Phillip Bailey"<phbailey(a)redhat.com>;
> *Cc: * "zhao.liang1(a)puxinasset.com"<zhao.liang1(a)puxinasset.com>; "users"<
> users(a)ovirt.org>;
> *Subject: * Re: [ovirt-users] i meet some problems of ovirt3.6.2
>
> Hi Zhao,
>
> Did you reboot the host manually from the host itself (outside of ovirt
> ui)? Have you setup the host with power management in ovirt ui?
>
> Cheers
>
> 2016-03-31 9:13 GMT+08:00 Phillip Bailey <phbailey(a)redhat.com>:
>
>> Hi Zhao,
>>
>> Have you configured the migration policy? It's possible that your VM is
>> currently set to not allow migration. See the "Resilience Policy Settings
>> Explained" section of this document:
>> http://www.ovirt.org/documentation/admin-guide/administration-guide/.
>>
>> -Phillip Bailey
>>
>> On Wed, Mar 30, 2016 at 4:19 AM, zhao.liang1(a)puxinasset.com <
>> zhao.liang1(a)puxinasset.com> wrote:
>>
>>> hi,all , i'm a chinese user of ovirt3.6.2 ,here are my problems
>>>
>>> i have 3 hosts, they all work fine , there are 10 vms run on it. i want
>>> test if vm will still work when i stop a host( the vm is running on this
>>> host),but when i reboot the host , the vm is down , but, you know it should
>>> be running on other host, i dont know what happened.
>>>
>>> ------------------------------
>>> zhao.liang1(a)puxinasset.com
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users(a)ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>>
>>
>
[ANN] oVirt 3.6.5 First Release Candidate is now available for testing
by Sandro Bonazzola 31 Mar '16
The oVirt Project is pleased to announce the availability of the First
Release Candidate of oVirt 3.6.5 for testing, as of March 31st, 2016
This release is available now for:
* Fedora 22
* Red Hat Enterprise Linux 6.7
* CentOS Linux 6.7 (or similar)
* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later
This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later
* Fedora 22
This release is also available with experimental support for:
* Debian 8.3 Jessie
This release candidate includes the following updated packages:
- ovirt-engine
- ovirt-engine-reports
- ovirt-engine-sdk-python
- ovirt-engine-sdk-java
- ovirt-hosted-engine-ha
- ovirt-hosted-engine-setup
See the release notes [1] for installation / upgrade instructions and a
list of new features and bugs fixed.
Notes:
* A new oVirt Live ISO will be available soon [2].
* Mirrors[3] might need up to one day to synchronize.
Additional Resources:
* Read more about the oVirt 3.6.5 release highlights:
http://www.ovirt.org/release/3.6.5/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/3.6.5/
[2] http://resources.ovirt.org/pub/ovirt-3.6-pre/iso/
[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Hey,
just a quick heads up that our oVirt Node Next jenkins builds
are currently broken.
Some of the symptoms are that the installation is failing and
that no new updates are available.
We are working on a solution.
- fabian
[decoded from a base64 text/plain MIME part:]

Dear all,

So am running 3.6.3.4-1 on CentOS 7.2, and did a engine-setup.

I checked now, and see that ovirt-vmconsole-proxy was in fact installed (before, during or after, I don't know).

[root@bio2-engine-server ~]# yum install ovirt-vmconsole-proxy
Loaded plugins: fastestmirror, versionlock
Loading mirror speeds from cached hostfile
 * base: centos.mirror.root.lu
 * epel: mirror.imt-systems.com
 * extras: centos.mirror.root.lu
 * ovirt-3.5: ftp.nluug.nl
 * ovirt-3.5-epel: mirror.imt-systems.com
 * ovirt-3.6: ftp.nluug.nl
 * ovirt-3.6-epel: mirror.imt-systems.com
 * updates: centos.mirror.root.lu
Package ovirt-vmconsole-proxy-1.0.0-1.el7.centos.noarch already installed and latest version
Nothing to do

But I don't see any process running on port 2222.

I couldn't find in the docs, how to start the listener.

Could anybody help me out?

Thank you,

Christophe

Dr Christophe Trefois, Dipl.-Ing.
Technical Specialist / Post-Doc
UNIVERSITÉ DU LUXEMBOURG
LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE
Campus Belval | House of Biomedicine
6, avenue du Swing
L-4367 Belvaux
http://www.uni.lu/lcsb
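A quick way to double-check whether anything at all is listening on the proxy port (2222 here) is a plain TCP connect, independent of ss/netstat output:

```python
import socket


def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `port_open('localhost', 2222)` returning False would confirm that the console proxy listener really is not running.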
I'm reading the documentation here :
http://www.ovirt.org/documentation/admin-guide/serial-console-setup/
After a few strace, I found the ssh configuration used for the custom ssh that listen on port 2222:
/usr/share/ovirt-vmconsole/ovirt-vmconsole-proxy/ovirt-vmconsole-proxy-sshd/sshd_config
And I have a big problem with it: it says "GSSAPIAuthentication no", but public key authentication is not allowed in my data center; we use Kerberos everywhere.
So I wonder: can I edit this file? How is it managed by oVirt?
I can always use Puppet to modify just this line; that would be fine for me.
Point 4 in "Automatic Setup" is not very helpful:
" • once the setup succesfully run, and once ovirt-engine is running, you can log in and register a SSH key. (TODO: add picture)"
What does it mean?
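Before hand-editing (or puppetizing) the file, it can help to audit which authentication methods the shipped sshd_config actually enables. A minimal sketch; it is simplified in that it ignores Match blocks and just keeps the last value seen for each keyword:

```python
def auth_settings(config_text):
    """Map auth-related sshd_config keywords to their values.
    Simplified: ignores Match blocks, keeps the last occurrence."""
    wanted = {'gssapiauthentication', 'pubkeyauthentication',
              'passwordauthentication'}
    settings = {}
    for line in config_text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        parts = line.split(None, 1)
        if len(parts) == 2 and parts[0].lower() in wanted:
            settings[parts[0].lower()] = parts[1].strip()
    return settings
```

Running it over `/usr/share/ovirt-vmconsole/ovirt-vmconsole-proxy/ovirt-vmconsole-proxy-sshd/sshd_config` shows at a glance what you would be overriding.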
[decoded from a base64 text/plain MIME part:]

hi,all , i'm a chinese user of ovirt3.6.2 ,here are my problems

i have 3 hosts, they all work fine , there are 10 vms run on it. i want test if vm will still work when i stop a host( the vm is running on this host),but when i reboot the host , the vm is down , but, you know it should be running on other host, i dont know what happened.

zhao.liang1(a)puxinasset.com
30 Mar '16
Anyone have an idea how to "hack" or "patch" oVirt so that, instead
of rejecting a new network with a duplicate VLAN ID, it would just warn, or
even just ignore it?
On 03/18/2016 01:54 PM, bugzilla(a)redhat.com wrote:
> https://bugzilla.redhat.com/show_bug.cgi?id=1319323
>
> Bug ID: 1319323
> Summary: VLAN ID check for duplicates
> Product: ovirt-engine
> Version: 3.3
> Component: RFEs
> Severity: medium
> Assignee: sherold(a)redhat.com
> Reporter: bill.james(a)j2.com
> QA Contact: gklein(a)redhat.com
> CC: bugs(a)ovirt.org
> oVirt Team: Network
> Flags: testing_ack?
> Flags: planning_ack?
>
>
>
> Description of problem:
> Adding new network with vlan tag, ovirt doesn't allow duplicate VLAN IDs.
> But it should be allowed, because if you are using multiple interfaces you can
> have the same vlan ID as long as they aren't assigned to the same interface on
> the hardware node.
>
>
> Version-Release number of selected component (if applicable):
> ovirt-engine-3.6.3.4-1.el7.centos.noarch
>
>
> How reproducible:
> 100%
>
> Steps to Reproduce:
> 1. Just add network with same vlan id as an already added interface.
>
> 2.
> 3.
>
> Actual results:
> See email thread labeled "Re: [ovirt-users] multiple NICs VLAN ID conflict".
> GUI says vlan already used.
>
> Expected results:
> Duplicate VLAN ID should be checked when you are assign network to the hardware
> node, not when creating the interface.
>
>
> Additional info:
> Trying to work around this with vdsm hooks in before_network_setup,
> after_get_caps and after_get_stats is very difficult to get it to work right.
> (see email thread)
>
Yaniv Kaul <mailto:ykaul@redhat.com> 2016-03-20 03:12:24 EDT
Perhaps only WARN.
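The distinction the report draws, that a VLAN ID only clashes when two networks land on the same host NIC, can be stated as a small check. This is purely an illustration of the requested semantics, not ovirt-engine's actual validation code:

```python
from collections import Counter


def conflicting_vlans(assignments):
    """assignments: iterable of (nic, vlan_id) pairs as they would be
    applied on one host.  Returns the pairs that occur more than once
    on the *same* NIC -- the only real conflicts; the same VLAN ID on
    different NICs is a valid configuration."""
    counts = Counter(assignments)
    return sorted(pair for pair, n in counts.items() if n > 1)
```

Under this rule, `('em1', 101)` plus `('em2', 101)` is fine, while two networks tagged 101 on `em1` would be rejected.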
Hi,
Have you got some free time and do you want to get involved in oVirt
project?
Do you like the idea of having fresh disk images of recent distribution in
oVirt Glance repository?
You can help us by testing the existing online images, ensuring they work with
cloud-init, or by creating one yourself and reporting your success to devel(a)ovirt.org.
We'll be happy to upload the images once these are ready.
Do you like Debian and do you have some programming or packaging skills?
Help us getting Vdsm running on it! We work on inclusion of Vdsm into
Debian.
We need to test and fix Vdsm Debian packages to make them working before
the next Debian freeze.
You can find all current Debian work (Vdsm and related packages) in Debian
git repositories [10].
You can follow the progress and participate on oVirt development mailing
list [11].
Here are some bugs you can try to help with:
1159784 ovirt-engine Documentation NEW [RFE] Document when and
where new features are available ...
1074301 ovirt-engine-cli RFEs NEW [RFE] ovirt-shell has no man page
772931 ovirt-engine-reports RFEs NEW [RFE] Reports should include
the name of the oVirt engine
1120585 ovirt-image-uploader Documentation NEW update image
uploader documentation
1120586 ovirt-iso-uploader Documentation NEW update iso uploader
documentation
1120588 ovirt-log-collector Documentation NEW update log collector
documentation
1237132 ovirt-engine Setup.Engine NEW [TEXT] New package listing of
engine-setup when upgrading...
1115059 ovirt-engine General ASSIGNED Incomplete error message when
adding VNIC profile to runn...
Are you great at packaging software? Do you prefer a distribution which is
currently unsupported by oVirt?
Do you want to have packages included in your preferred distribution? Help
getting oVirt ported there!
Fedora: http://lists.ovirt.org/pipermail/devel/2015-September/011426.html
CentOS: https://wiki.centos.org/SpecialInterestGroup/Virtualization
Gentoo: https://wiki.gentoo.org/wiki/OVirt (GSoC:
https://wiki.gentoo.org/wiki/Google_Summer_of_Code/2016/Ideas )
Debian:
http://www.ovirt.org/develop/release-management/features/debian-support-for…
Archlinux: http://www.ovirt.org/develop/developer-guide/arch-linux/
OpenSUSE: https://build.opensuse.org/project/show/Virtualization:oVirt
Do you love DevOps? Do you count stable builds in Jenkins CI while trying to
fall asleep?
Then the oVirt infra team is looking for you! Join the infra team and dive in
to use the newest and coolest DevOps tools today!
Here are some of our open tasks you can help with:
https://ovirt-jira.atlassian.net/secure/RapidBoard.jspa?rapidView=6
You can also help us by sharing how you use oVirt in your DevOps
environment (please use [DevOps] in the subject).
You can check out more docs on the DevOps side of oVirt in [12][13]
You don't have programming skills or enough time for DevOps, but you still want
to contribute?
Here are some bugs you can take care of, also without writing a line of
code:
https://bugzilla.redhat.com/buglist.cgi?quicksearch=classification%3Aovirt%…
Do you prefer to test things? We have some test cases[5] you can try using
nightly snapshots[6].
Do you want to contribute test cases? Most of the features[7] included in
oVirt are missing a test case, you're welcome to contribute one!
Do you want to contribute artworks? oVirt Live backgrounds and covers,
release banners, stickers, .... Take a look at Fedora Artworks[9] as an
example of what you can do
Is this the first time you try to contribute to oVirt project?
You can start from here [1][2]!
You don't know gerrit very well? You can find some more docs here [3].
Any other question about development? Feel free to ask on devel(a)ovirt.org
or on irc channel[4].
You don't really have time / skills for any development / documentation /
testing related task?
Spread the word[8]!
Let us know you're getting involved, present yourself and tell us what
you're going to do, you'll be welcome!
[1] http://www.ovirt.org/develop/
[2] http://www.ovirt.org/develop/dev-process/working-with-gerrit/
[3] https://gerrit-review.googlesource.com/Documentation
[4] http://www.ovirt.org/community/
[5] http://www.ovirt.org/develop/infra/testing/
[6] http://www.ovirt.org/develop/dev-process/install-nightly-snapshot/
[7] http://www.ovirt.org/develop/release-management/features/
[8]
http://www.zdnet.com/article/how-much-longer-can-red-hats-ovirt-remain-cove…
[9] https://fedoraproject.org/wiki/Artwork#Resources
[10] http://git.debian.org
[11] http://lists.ovirt.org/mailman/listinfo/devel
[12] http://ovirt-infra-docs.readthedocs.org/en/latest/
[13] http://www.ovirt.org/develop/infra/infrastructure-documentation/
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Hello,
I got the following error when adding a new disk through the API; from the
admin panel, on the other hand, I can create a new disk just fine.
*Error :*
Cannot add Virtual Machine Disk. The user doesn't have permissions to
attach Disk Profile to the Disk.
Regards,
*Vishal Panchal*
Software Developer
*+918140283911*
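That error usually means the API user lacks the DiskProfileUser role on the target disk profile; an engine admin can grant it through the permissions sub-collection of the profile. A sketch of building the request body (element names are my best recollection of the 3.6 REST schema, so double-check against /api):

```python
from xml.etree import ElementTree as ET


def grant_disk_profile_body(user_id, role='DiskProfileUser'):
    """Build a body for POST /api/diskprofiles/{profile_id}/permissions
    granting `role` on that disk profile to the given user."""
    perm = ET.Element('permission')
    ET.SubElement(ET.SubElement(perm, 'role'), 'name').text = role
    ET.SubElement(perm, 'user', id=user_id)
    return ET.tostring(perm, encoding='unicode')
```

After the grant, the same API user should be able to attach the disk profile when creating the disk.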
Hello,
we are using the oVirt API with the JSON data format.
We have created a VM from a template, and I want to set the IP, MAC address,
user, and password. We are using cloud-init for that, but it doesn't set any
of the options.
Regards,
*Arpit Makhiyaviya*
Software Engineer
+91-79-40038284
+91-971-437-6669
<http://www.sculptsoft.com>
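For what it's worth, in the 3.6 API the cloud-init settings travel in an `initialization` element of the start action's `vm`. A sketch of the JSON payload, with field names from my memory of the 3.6 schema (verify against the generated API docs); note that, as far as I know, the MAC address is set on the VM's NIC itself, not through cloud-init:

```python
import json


def start_with_cloud_init(host_name, user, password,
                          ip, netmask, gateway, nic_name='eth0'):
    """JSON body for POST /api/vms/{vm_id}/start using cloud-init."""
    body = {
        "use_cloud_init": "true",
        "vm": {
            "initialization": {
                "host_name": host_name,
                "user_name": user,
                "root_password": password,
                "nic_configurations": {
                    "nic_configuration": [{
                        "name": nic_name,
                        "boot_protocol": "static",
                        "on_boot": "true",
                        "ip": {"address": ip,
                               "netmask": netmask,
                               "gateway": gateway},
                    }]
                },
            }
        },
    }
    return json.dumps(body)
```

Send it with `Content-Type: application/json` and an `Accept: application/json` header, the same way you send your other JSON requests.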
Hello,
one question from a beginner. The case: I rent a server with one public
IP address. Is it possible to set up oVirt as a hosted engine remotely via
SSH and VNC with only one public IP address, or do I need a second public
IP address for the engine? Perhaps it is possible with a hosts entry or
something like that?
Thx
Hello Joop,
and sorry for my late reply. Indeed the Red Hat docs are very good, if
you find the right one. I have worked with oVirt for a few weeks and I can
make it all run, but for my personal needs I want to understand the system
and what is done where, and why. That's why I asked a lot of questions about
the deeper sense... There are also lots of howtos on the net which no longer
work, or which cover different cases or systems.
So, as I first started with all-in-one: someone said that this will no
longer be supported in 4.x. OK, so I am now testing the hosted engine, but
it's a bit tricky to set up if you are new, and some changes came with new
releases. I have actually done more than 10 installations and learned
something each time I played with the setup options. I searched the docs
too, but some things I didn't find. My problem is still the handling of NFS
export paths and activating them in oVirt later.
One of my experiences: one path for data, one path for ISO, and everything
as NFS exports, which works well. But in oVirt I can't activate the data
path, because the wizard says it is already in use.
So on the next installation I created the same two paths plus one for the
engine, which I used during installation for the engine. If I later add the
data NFS path in oVirt, I suddenly see both data NFS domains, engine and
data, and that's what I don't understand.
For my understanding: the storage is attached through the cluster and is
used by the hosts, right? Or do I have to add each storage at the system
level, to be used in each cluster by the hosts? Or does it depend on the
storage concept, e.g. storage on the host versus separate storage on the
network?
Greetings
Am 2016-03-22 20:49, schrieb Joop:
> On 22-3-2016 17:39, Taste-Of-IT wrote:
>> Sorry to write again, but it is still unclear to me how and where to
>> add the NFS storage.
>>
>> - As far as I understand, the engine runs in its own VM, which is
>> necessary for HA with two or more hosts.
>> - The NFS storage can, e.g., be external on its own server and is
>> connected during the engine-VM setup.
>>
>> - The host itself also needs a storage path during the installation
>> process. Which one: local, or external NFS as well? What is hosted
>> there, the engine VM, right?
>>
>> - So if I only have one host, with NFS storage on the host itself, I
>> have to create two NFS paths: one for the host with the engine VM and
>> one for the engine VM's NFS storage, right?
>> - And what is the storage on the engine for, all VMs?
>>
> Don't know if you just want to play around with it for personal use, or
> if it's going to be used in a production environment. It doesn't matter
> whether it's just one host or more than one.
> If the former, then try to read up on oVirt, or use the Red Hat RHEV
> docs on the Red Hat site. They are quite good at explaining the
> concepts behind oVirt.
> If it's the latter, then <advert-mode>I have been using/installing
> oVirt for quite some time, since the 3.0 days, and offer support on it
> for those in need</advert-mode>.
>
> I have no problem trying to help off or on list, and maybe it's just
> seeing how it all fits together that will make you understand how
> things work. Hosted-engine is a 'warped' concept, but it really works,
> although there are certain pitfalls to be aware of.
>
> Regards,
>
> Joop
Hi guys,
I was wondering if there is a version of the Ovirt Engine Appliance for
RHEL6.7?
The documentation in Git has builds only for RHEL/CentOS 7 or higher.
Thanks,
John
The oVirt Project is pleased to announce today the general availability of
oVirt 3.6.4.
This latest community release includes numerous bug fixes for
- ovirt-engine
- vdsm
- ovirt-hosted-engine-setup
- ovirt-hosted-engine-ha
oVirt is an open-source, openly-governed enterprise virtualization
management application, developed by a global community. You can use the
oVirt management interface (oVirt Engine) to manage hardware nodes, storage
and network resources, and to deploy and monitor virtual machines running
in your data center.
If you are familiar with VMware products, oVirt is conceptually similar to
vSphere. oVirt serves as the bedrock for Red Hat's Enterprise
Virtualization product, and it is the "upstream" project where new features
are developed prior to their inclusion in Red Hat's supported product
offering.
Additional Resources:
* Read more about the oVirt 3.6.4 release highlights:
http://www.ovirt.org/release/3.6.4/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Re: [ovirt-users] oVirt 3.6 AAA LDAP cannot not log in when end of UPN is different from domain base
by Karli Sjöberg 28 Mar '16
On 28 March 2016 at 7:39 PM, Ondra Machacek <omachace@redhat.com> wrote:
>
> On 03/27/2016 11:40 AM, Karli Sjöberg wrote:
> >
> >> On 26 Mar 2016, at 21:32, Ondra Machacek <omachace@redhat.com> wrote:
> >>
> >> On 03/26/2016 02:09 PM, Karli Sjöberg wrote:
> >>>
> >>>> On 26 Mar 2016, at 13:49, Karli Sjöberg <Karli.Sjoberg@slu.se> wrote:
> >>>>
> >>>>> On 26 Mar 2016, at 11:35, Ondra Machacek <omachace@redhat.com> wrote:
> >>>>>
> >>>>> For me it's working completely fine:
> >>>>>
> >>>>> ...
> >>>>> config.mapUser.type = regex
> >>>>> config.mapUser.regex.pattern = ^(?<user>[^@]*)$
> >>>>> config.mapUser.regex.replacement = ${user}@DOMAINX.com
> >>>>> config.mapUser.regex.mustMatch = false
> >>>>> ...
> >>>>>
> >>>>> $ ovirt-engine-extensions-tool aaa login-user
> >>>>> --password=pass:password --user-name=user@DOMAINY --profile=ad
> >>>>>
> >>>>> INFO    API: -->Mapping.InvokeCommands.MAP_USER profile='ad'
> >>>>> user='user@DOMAINY'
> >>>>> INFO    API: <--Mapping.InvokeCommands.MAP_USER profile='ad'
> >>>>> user='user@DOMAINY'
> >>>>>
> >>>>> $ ovirt-engine-extensions-tool aaa login-user
> >>>>> --password=pass:password --user-name=user --profile=ad
> >>>>>
> >>>>> INFO    API: -->Mapping.InvokeCommands.MAP_USER profile='ad' user='user'
> >>>>> INFO    API: <--Mapping.InvokeCommands.MAP_USER profile='ad'
> >>>>> user='user@DOMAINX.com'
> >>>>>
> >>>>> As you can see, it's correctly mapped.
> >>>>>
> >>>>> Please check once again that the regex is correct; if it still
> >>>>> won't work, please send log output again.
> >>>>
> >>>> /etc/ovirt-engine/extensions.d/mapping-suffix.properties:
> >>>> ovirt.engine.extension.name = mapping-suffix
> >>>> ovirt.engine.extension.bindings.method = jbossmodule
> >>>> ovirt.engine.extension.binding.jbossmodule.module =
> >>>> org.ovirt.engine-extensions.aaa.misc
> >>>> ovirt.engine.extension.binding.jbossmodule.class =
> >>>> org.ovirt.engineextensions.aaa.misc.mapping.MappingExtension
> >>>> ovirt.engine.extension.provides =
> >>>> org.ovirt.engine.api.extensions.aaa.Mapping
> >>>> config.mapUser.type = regex
> >>>> config.mapUser.regex.pattern = ^(?<user>[^@]*)$
> >>>> config.mapUser.regex.replacement = ${user}@foo.bar
> >>>> config.mapUser.regex.mustMatch = false
> >>>>
> >>>> # ovirt-engine-extensions-tool --log-level=FINEST aaa login-user
> >>>> --profile=baz.foo.bar-new --user-name=user@baz.foo.bar
> >>>> # grep Mapping.InvokeCommands.MAP_USER login.log
> >>>> 2016-03-26 13:27:40 INFO    API: -->Mapping.InvokeCommands.MAP_USER
> >>>> user='user@baz.foo.bar'
> >>>> 2016-03-26 13:27:40 INFO    API: <--Mapping.InvokeCommands.MAP_USER
> >>>> user='user@baz.foo.bar'
> >>>>
> >>>> And here is the log:
> >>>> https://dropoff.slu.se/index.php/s/SK9T8vOUO7yB3PM/download
> >>>>
> >>>> /K
> >>>
> >>> Eureka! I changed 'vars.user' in 'baz.foo.bar-new.properties' from one
> >>> with the suffix '@baz.foo.bar' to mine that has an '@foo.bar' ending,
> >>> and now it works, for some reason. Very strange, but anyway... How do
> >>> I go about changing from UPN to samAccountName, if I'd want that
> >>> instead?
> >>
> >> Well, we support only UPN, because sAMAccountName supports only 15
> >> characters in the username.
> >
> > OK, thank you. From here comes the really daunting part, which is to go
> > through all the VMs, check their permissions, add the same user(s) from
> > the new provider and delete the old. Probably going to start a new
> > thread about doing that with Python, but I'll cross that bridge when I
> > get to it; this was only a virtual test environment for going from 3.4
> > to 3.6.
>
> Not sure I understand, why would you do that? This is what the migration
> tool does for you as well, so why do you need to do it again?

Ah, I must have misread the instructions. So if it turns out to be necessary, I know who to blame :P Thanks for pointing that out!

/K

>
> >
> > /K
> >
> >>>
> >>> /K
> >>>
> >>>>>
> >>>>> On 03/26/2016 10:07 AM, Karli Sjöberg wrote:
> >>>>>> What the heck, my message disappears! Trying again.
> >>>>>>
> >>>>>> OK, so it's mapping now, but the only thing working is:
> >>>>>> config.mapUser.regex.pattern = user@baz.foo.bar
> >>>>>> config.mapUser.regex.replacement = user@foo.bar
> >>>>>>
> >>>>>> And that isn't very useful. Please advise!
> >>>>>>
> >>>>>> /K
> >>>>>>
> >>>>>> On 03/25/2016 12:26 AM, Karli Sjöberg wrote:
> >>>>>>>
> >>>>>>> On 25 March 2016 at 12:10 AM, Karli Sjöberg
> >>>>>>> <karli.sjoberg@slu.se> wrote:
> >>>>>>>>
> >>>>>>>> On 24 March 2016 at 11:26 PM, Ondra Machacek
> >>>>>>>> <omachace@redhat.com> wrote:
> >>>>>>>>>
> >>>>>>>>> On 03/24/2016 11:14 PM, Karli Sjöberg wrote:
> >>>>>>>>>>
> >>>>>>>>>> On 24 March 2016 at 7:26 PM, Ondra Machacek
> >>>>>>>>>> <omachace@redhat.com> wrote:
> >>>>>>>>>> >
> >>>>>>>>>> > On 03/24/2016 06:16 PM, Karli Sjöberg wrote:
> >>>>>>>>>> > > Hi!
> >>>>>>>>>> > >
> >>>>>>>>>> > > Starting a new thread instead of hijacking someone else's.
> >>>>>>>>>> > >
> >>>>>>>>>> > > Managed to migrate from the old 'engine-manage-domains'
> >>>>>>>>>> > > auth to aaa-ldap using:
> >>>>>>>>>> > >
> >>>>>>>>>> > > # ovirt-engine-kerbldap-migration-tool --domain baz.foo.bar
> >>>>>>>>>> > > --cacert /tmp/ca.crt --apply
> >>>>>>>>>> > >
> >>>>>>>>>> > > All OK, no errors, but cannot log in:
> >>>>>>>>>> > >
> >>>>>>>>>> > > # ovirt-engine-extensions-tool aaa login-user
> >>>>>>>>>> > > --profile=baz.foo.bar-new --user-name=user:
> >>>>>>>>>> >
> >>>>>>>>>> > If you want to log in with a user with a different UPN
> >>>>>>>>>> > suffix, then just append that suffix:
> >>>>>>>>>> >
> >>>>>>>>>> > $ ovirt-engine-extensions-tool aaa login-user
> >>>>>>>>>> > --profile=baz.foo.bar-new --user-name=user@foo.bar
> >>>>>>>>>>
> >>>>>>>>>> OK, some progress, that works!
> >>>>>>>>>>
> >>>>>>>>>> > If you have more suffixes and want to have one as the
> >>>>>>>>>> > default, you can use the following approach:
> >>>>>>>>>> >
> >>>>>>>>>> > 1) install ovirt-engine-extension-aaa-misc
> >>>>>>>>>> >
> >>>>>>>>>> > 2) create a new mapping extension like this:
> >>>>>>>>>> > /etc/ovirt-engine/extensions.d/mapping-suffix.properties
> >>>>>>>>>> >
> >>>>>>>>>> > ovirt.engine.extension.name = mapping-suffix
> >>>>>>>>>> > ovirt.engine.extension.bindings.method = jbossmodule
> >>>>>>>>>> > ovirt.engine.extension.binding.jbossmodule.module =
> >>>>>>>>>> > org.ovirt.engine-extensions.aaa.misc
> >>>>>>>>>> > ovirt.engine.extension.binding.jbossmodule.class =
> >>>>>>>>>> > org.ovirt.engineextensions.aaa.misc.mapping.MappingExtension
> >>>>>>>>>> > ovirt.engine.extension.provides =
> >>>>>>>>>> > org.ovirt.engine.api.extensions.aaa.Mapping
> >>>>>>>>>> > config.mapUser.type = regex
> >>>>>>>>>> > config.mapUser.pattern = ^(?<user>[^@]*)$
> >>>>>>>>>>
> >>>>>>>>>> Is that really supposed to say '<user>', or should it be
> >>>>>>>>>> changed to a real user name? Either way, it doesn't work; I
> >>>>>>>>>> tried it all.
> >>>>>>>>>
> >>>>>>>>> '?<user>' is just a named group in that regex, so you can later
> >>>>>>>>> use it in the 'config.mapUser.replacement' option. It should
> >>>>>>>>> take everything until the first '@'.
> >>>>>>>>>
> >>>>>>>>>> > config.mapUser.replacement = ${user}@foo.bar
> >>>>>>>>>> > config.mapUser.mustMatch = false
> >>>>>>>>>> >
> >>>>>>>>>> > 3) select a mapping plugin in the authn configuration:
> >>>>>>>>>> >
> >>>>>>>>>> > ovirt.engine.aaa.authn.mapping.plugin = mapping-suffix
> >>>>>>>>>> >
> >>>>>>>>>> > With the above configuration in use, your user 'user' will
> >>>>>>>>>> > be mapped to 'user@foo.bar', and users
> >>>>>>>>>> > 'user@anotherdomain.foo.bar' will remain
> >>>>>>>>>> > 'user@anotherdomain.foo.bar'.
> >>>>>>>>>>
> >>>>>>>>>> This however does not; it doesn't replace the suffix as it's
> >>>>>>>>>> supposed to. I tried many different forms of
> >>>>>>>>>> 'mapUser.pattern', but it simply won't change it, even if I
> >>>>>>>>>> type in '= ^user@baz.foo.bar$'; the error is the same :(
> >>>>>>>>>
> >>>>>>>>> Hmm, hard to say what's wrong, try to run:
> >>>>>>>>> $ ovirt-engine-extensions-tool --log-level=FINEST aaa login-user
> >>>>>>>>> --profile=baz.foo.bar-new --user-name=user
> >>>>>>>>>
> >>>>>>>>> and search for the mapping part in the log.
> >>>>>>>>
> >>>>>>>> Wow, what a mouthful :) Can you make anything out of it?
> >>>>>>>>
> >>>>>>>> https://dropoff.slu.se/index.php/s/EMe2NPmOfsWCNTv/download
> >>>>>>>>
> >>>>>>>> /K
> >>>>>>>
> >>>>>>> Just noticed after logging in to webadmin as "user@foo.bar"
> >>>>>>> (which worked, btw, so good there) that the "User Name" in the
> >>>>>>> Users main tab looks really odd:
> >>>>>>> user@foo.bar@baz.foo.bar-new-authz
> >>>>>>
> >>>>>> Sorry, you are right, it doesn't work. I sent you an incorrect
> >>>>>> configuration; the correct one is:
> >>>>>>
> >>>>>> /etc/ovirt-engine/extensions.d/mapping-suffix.properties
> >>>>>>
> >>>>>> ...
> >>>>>> config.mapUser.regex.pattern = ^(?<user>[^@]*)$
> >>>>>> config.mapUser.regex.replacement = ${user}@foo.bar
> >>>>>> config.mapUser.regex.mustMatch = false
> >>>>>> ...
> >>>>>>
> >>>>>> Notice there was a missing 'regex' after 'mapUser'.
> >>>>>>
> >>>>>>>
> >>>>>>> /K
> >>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> /K
> >>>>>>>>>>
> >>>>>>>>>> > >
> >>>>>>>>>> > > API: <--Authn.InvokeCommands.AUTHENTICATE_CREDENTIALS
> >>>>>>>>>> > > result=SUCCESS
> >>>>>>>>>> > >
> >>>>>>>>>> > > but:
> >>>>>>>>>> > >
> >>>>>>>>>> > > API: -->Authz.InvokeCommands.FETCH_PRINCIPAL_RECORD
> >>>>>>>>>> > > principal='user@baz.foo.bar'
> >>>>>>>>>> > > SEVERE  Cannot resolve principal 'user@baz.foo.bar'
> >>>>>>>>>> > >
> >>>>>>>>>> > > So it fails.
> >>>>>>>>>> > >
> >>>>>>>>>> > > # ldapsearch -x -H ldap://baz.foo.bar -D user@foo.bar -W -b
> >>>>>>>>>> > > DC=baz,DC=foo,DC=bar -s sub "(samAccountName=user)"
> >>>>>>>>>> > > userPrincipalName | grep 'userPrincipalName:'
> >>>>>>>>>> > >
> >>>>>>>>>> > > userPrincipalName: user@foo.bar
> >>>>>>>>>> > >
> >>>>>>>>>> > > How do you configure AAA with base 'DC=baz,DC=foo,DC=bar'
> >>>>>>>>>> > > when userPrincipalName ends only in '@foo.bar'?
> >>>>>>>>>> > >
> >>>>>>>>>> > > /K
> >>>>>>>>>> > >
> >>>>>>>>>> > > _______________________________________________
> >>>>>>>>>> > > Users mailing list
> >>>>>>>>>> > > Users@ovirt.org
> >>>>>>>>>> > > http://lists.ovirt.org/mailman/listinfo/users
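As an aside, the suffix mapping this thread converges on can be exercised outside the engine. A minimal sketch in shell using `sed` (the aaa-misc extension uses Java regex, where the named group is written `(?<user>...)`; the anchored pattern and fall-through behaviour are the same):

```shell
# Mirror of config.mapUser.regex.*: names without an '@' get the default
# suffix appended; names that already carry a suffix do not match the
# anchored pattern and pass through unchanged (mustMatch = false).
map_user() {
    printf '%s\n' "$1" | sed -E 's/^([^@]*)$/\1@foo.bar/'
}

map_user "user"                        # -> user@foo.bar
map_user "user@anotherdomain.foo.bar"  # -> user@anotherdomain.foo.bar
```

The anchors matter: without `^...$`, the empty-prefix match would rewrite already-suffixed logins too, which is exactly the double-suffix symptom ("user@foo.bar@baz.foo.bar-new-authz") seen earlier in the thread.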
dCB3b3JrcyE8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZn
dDs8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmbmJz
cDsmbmJzcDsgJmd0Ozxicj4NCiZndDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsm
Z3Q7Jmd0OyZuYnNwOyZuYnNwOyAmZ3Q7IElmIHlvdSBoYXZlIG1vcmUgc3VmZml4ZXMgYW5kIHdh
bnQgdG8gaGF2ZSBzb21lIGFzPGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7
IGRlZmF1bHQgeW91PGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IGNhbiB1
c2U8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmbmJz
cDsmbmJzcDsgJmd0OyBmb2xsb3dpbmcgYXBwcm9hY2g6PGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsm
Z3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jm5ic3A7Jm5ic3A7ICZndDs8YnI+DQomZ3Q7ICZn
dDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmbmJzcDsmbmJzcDsgJmd0OyAx
KSBpbnN0YWxsIG92aXJ0LWVuZ2luZS1leHRlbnNpb24tYWFhLW1pc2M8YnI+DQomZ3Q7ICZndDsm
Z3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmbmJzcDsmbmJzcDsgJmd0Ozxicj4N
CiZndDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZuYnNwOyZuYnNw
OyAmZ3Q7IDIpIGNyZWF0ZSBuZXcgbWFwcGluZyBleHRlbnNpb24gbGlrZSB0aGlzOjxicj4NCiZn
dDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZuYnNwOyZuYnNwOyAm
Z3Q7IC9ldGMvb3ZpcnQtZW5naW5lL2V4dGVuc2lvbnMuZC9tYXBwaW5nLXN1ZmZpeC5wcm9wZXJ0
aWVzPGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jm5i
c3A7Jm5ic3A7ICZndDs8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7
Jmd0OyZndDsmbmJzcDsmbmJzcDsgJmd0OyBvdmlydC5lbmdpbmUuZXh0ZW5zaW9uLm5hbWUgPSBt
YXBwaW5nLXN1ZmZpeDxicj4NCiZndDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsm
Z3Q7Jmd0OyZuYnNwOyZuYnNwOyAmZ3Q7IG92aXJ0LmVuZ2luZS5leHRlbnNpb24uYmluZGluZ3Mu
bWV0aG9kID0gamJvc3Ntb2R1bGU8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZn
dDsmZ3Q7Jmd0OyZndDsmbmJzcDsmbmJzcDsgJmd0OyBvdmlydC5lbmdpbmUuZXh0ZW5zaW9uLmJp
bmRpbmcuamJvc3Ntb2R1bGUubW9kdWxlID08YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7
Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmbmJzcDsmbmJzcDsgJmd0OyBvcmcub3ZpcnQuZW5naW5lLWV4
dGVuc2lvbnMuYWFhLm1pc2M8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsm
Z3Q7Jmd0OyZndDsmbmJzcDsmbmJzcDsgJmd0OyBvdmlydC5lbmdpbmUuZXh0ZW5zaW9uLmJpbmRp
bmcuamJvc3Ntb2R1bGUuY2xhc3MgPTxicj4NCiZndDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7
Jmd0OyZndDsmZ3Q7Jmd0OyZuYnNwOyZuYnNwOyAmZ3Q7IG9yZy5vdmlydC5lbmdpbmVleHRlbnNp
b25zLmFhYS5taXNjLm1hcHBpbmcuTWFwcGluZ0V4dGVuc2lvbjxicj4NCiZndDsgJmd0OyZndDsm
Z3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZuYnNwOyZuYnNwOyAmZ3Q7IG92aXJ0LmVu
Z2luZS5leHRlbnNpb24ucHJvdmlkZXMgPTxicj4NCiZndDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsm
Z3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZuYnNwOyZuYnNwOyAmZ3Q7IG9yZy5vdmlydC5lbmdpbmUuYXBp
LmV4dGVuc2lvbnMuYWFhLk1hcHBpbmc8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0
OyZndDsmZ3Q7Jmd0OyZndDsmbmJzcDsmbmJzcDsgJmd0OyBjb25maWcubWFwVXNlci50eXBlID0g
cmVnZXg8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsm
bmJzcDsmbmJzcDsgJmd0OyBjb25maWcubWFwVXNlci5wYXR0ZXJuID0gXig/Jmx0O3VzZXImZ3Q7
W15AXSopJDxicj4NCiZndDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0
Ozxicj4NCiZndDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBJcyB0
aGF0IHN1cHBvc2VkIHRvIHJlYWxseSBzYXkgJyZsdDt1c2VyJmd0Oycgb3Igc2hvdWxkIGl0IGJl
PGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IGNoYW5nZWQgdG8gYTxicj4N
CiZndDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyByZWFsIHVzZXIg
bmFtZT8gRWl0aGVyIHdheSwgaXQgZG9lc24ndCB3b3JrLCBJIHRyaWVkIGl0IGFsbC48YnI+DQom
Z3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0Ozxicj4NCiZndDsgJmd0OyZn
dDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7ICc/Jmx0O3VzZXImZ3Q7JyBpcyBqdXN0IGEg
bmFtZWQgZ3JvdXAgaW4gdGhhdCByZWdleCBzbyB5b3UgY2FuIGxhdGVyIHVzZTxicj4NCiZndDsg
Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBpdCBpbjxicj4NCiZndDsgJmd0OyZndDsmZ3Q7
Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7ICdjb25maWcubWFwVXNlci5yZXBsYWNlbWVudCcmbmJz
cDsgb3B0aW9uLiBJdCBzaG91bGQgdGFrZTxicj4NCiZndDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsm
Z3Q7Jmd0OyBldmVyeXRoaW5nIHVudGlsPGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZn
dDsmZ3Q7Jmd0OyZndDsgZmlyc3QgJ0AnLjxicj4NCiZndDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsm
Z3Q7Jmd0OyZndDsmZ3Q7PGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0
OyZndDsmZ3Q7PGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsm
Z3Q7Jm5ic3A7Jm5ic3A7ICZndDsgY29uZmlnLm1hcFVzZXIucmVwbGFjZW1lbnQgPSAke3VzZXJ9
QGZvby5iYXI8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZn
dDsmbmJzcDsmbmJzcDsgJmd0OyBjb25maWcubWFwVXNlci5tdXN0TWF0Y2ggPSBmYWxzZTxicj4N
CiZndDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZuYnNwOyZuYnNw
OyAmZ3Q7PGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7
Jm5ic3A7Jm5ic3A7ICZndDsgMykgc2VsZWN0IGEgbWFwcGluZyBwbHVnaW4gaW4gYXV0aG4gY29u
ZmlndXJhdGlvbjo8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0
OyZndDsmbmJzcDsmbmJzcDsgJmd0Ozxicj4NCiZndDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7
Jmd0OyZndDsmZ3Q7Jmd0OyZuYnNwOyZuYnNwOyAmZ3Q7IG92aXJ0LmVuZ2luZS5hYWEuYXV0aG4u
bWFwcGluZy5wbHVnaW4gPSBtYXBwaW5nLXN1ZmZpeDxicj4NCiZndDsgJmd0OyZndDsmZ3Q7Jmd0
OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZuYnNwOyZuYnNwOyAmZ3Q7PGJyPg0KJmd0OyAmZ3Q7
Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jm5ic3A7Jm5ic3A7ICZndDsgV2l0
aCBhYm92ZSBjb25maWd1cmF0aW9uIGluIHVzZSwgeW91ciB1c2VyICd1c2VyJyB3aXRsbCBiZTxi
cj4NCiZndDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBtYXBwZWQgdG88YnI+DQomZ3Q7
ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmbmJzcDsmbmJzcDsgJmd0
OyB1c2VyICd1c2VyQGZvby5iYXIgJmx0O21haWx0bzp1c2VyQGZvby5iYXImZ3Q7Jzxicj4NCiZn
dDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZuYnNwOyZuYnNwOyAm
Z3Q7IGFuZCB1c2VycyAndXNlckBhbm90aGVyZG9tYWluLmZvby5iYXI8YnI+DQomZ3Q7ICZndDsm
Z3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsgJmx0O21haWx0bzp1c2VyQGFub3RoZXJkb21haW4uZm9v
LmJhciZndDsnIHdpbGwgcmVtYWluPGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsm
Z3Q7Jmd0OyZndDsmZ3Q7Jm5ic3A7Jm5ic3A7ICZndDsgJ3VzZXJAYW5vdGhlcmRvbWFpbi5mb28u
YmFyPGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7ICZsdDttYWlsdG86dXNl
ckBhbm90aGVyZG9tYWluLmZvby5iYXImZ3Q7Jy48YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsm
Z3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDs8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0
OyZndDsmZ3Q7Jmd0OyZndDsgVGhpcyBob3dldmVyIGRvZXMgbm90LCBpdCBkb2Vzbid0IHJlcGxh
Y2UgdGhlIHN1ZmZpeCBhcyBpdCdzPGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsm
Z3Q7IHN1cHBvc2VkPGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZn
dDsmZ3Q7IHRvLiBJIHRyaWVkIHdpdGggbWFueSBkaWZmZXJlbnQgdHlwZXMgb2YgdGhlPGJyPg0K
Jmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7ICdtYXBVc2VyLnBhdHRlcm4nIGJ1dCBp
dDxicj4NCiZndDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBzaW1w
bHkgd29uJ3QgY2hhbmdlIGl0LCBldmVuIGlmIEkgdHlwZSBpbiAnPTxicj4NCiZndDsgJmd0OyZn
dDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBedXNlckBiYXouZm9vLmJhciAmbHQ7bWFpbHRvOnVzZXJA
YmF6LmZvby5iYXImZ3Q7JCcsIHRoZTxicj4NCiZndDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7
Jmd0OyZndDsmZ3Q7Jmd0OyBlcnJvciBpcyB0aGUgc2FtZTooPGJyPg0KJmd0OyAmZ3Q7Jmd0OyZn
dDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDs8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7
Jmd0OyZndDsmZ3Q7Jmd0OyBIbW0sIGhhcmQgdG8gc2F5IHdoYXQncyB3cm9uZywgdHJ5IHRvIHJ1
bjo8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyAkIG92aXJ0
LWVuZ2luZS1leHRlbnNpb25zLXRvb2wgLS1sb2ctbGV2ZWw9RklORVNUIGFhYSBsb2dpbi11c2Vy
PGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsgLS1wcm9maWxl
PWJhei5mb28uYmFyLW5ldyAtLXVzZXItbmFtZT11c2VyPGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsm
Z3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDs8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0
OyZndDsmZ3Q7Jmd0OyBhbmQgc2VhcmNoIGZvciBhIG1hcHBpbmcgcGFydCBpbiBsb2cuPGJyPg0K
Jmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0Ozxicj4NCiZndDsgJmd0OyZndDsm
Z3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsgV293IHdoYXQgYSBtb3V0aGZ1bGw6KSBDYW4geW91IG1h
a2UgYW55dGhpbmcgb3V0IG9mIGl0Pzxicj4NCiZndDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7
Jmd0OyZndDs8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IGh0dHBz
Oi8vZHJvcG9mZi5zbHUuc2UvaW5kZXgucGhwL3MvRU1lMk5QbU9mc1dDTlR2L2Rvd25sb2FkPGJy
Pg0KJmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0Ozxicj4NCiZndDsgJmd0OyZn
dDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsgL0s8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsm
Z3Q7Jmd0OyZndDs8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsgSnVzdCBu
b3RpY2VkIGFmdGVyIGxvZ2dpbmcgaW4gdG8gd2ViYWRtaW4gYXMgJnF1b3Q7dXNlckBmb28uYmFy
PGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7ICZsdDttYWlsdG86dXNlckBm
b28uYmFyJmd0OyZxdW90OyAod2hpY2g8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0
OyZndDsgd29ya2VkIGJ0dywgc28gZ29vZCB0aGVyZSkgdGhhdCB0aGUgJnF1b3Q7VXNlciBOYW1l
JnF1b3Q7IGluIFVzZXJzIG1haW4gdGFiIGxvb2tzPGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7
Jmd0OyZndDsmZ3Q7IHJlYWxseSBvZGQ6PGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZn
dDsmZ3Q7IHVzZXJAZm9vLmJhciAmbHQ7bWFpbHRvOnVzZXJAZm9vLmJhciZndDtAYmF6LmZvby5i
YXItbmV3LWF1dGh6PGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDs8YnI+DQomZ3Q7
ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBTb3JyeSB5b3UgYXJlIHJpZ2h0LCBpdCBkb24ndCB3
b3JrLiBJJ3ZlIHNlbnQgeW91IGluY29ycmVjdDxicj4NCiZndDsgJmd0OyZndDsmZ3Q7Jmd0OyZn
dDsmZ3Q7IGNvZmlndXJhdGlvbiwmbmJzcDsgdGhlIGNvcnJlY3Qgb25lIGlzOjxicj4NCiZndDsg
Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7PGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZn
dDsgL2V0Yy9vdmlydC1lbmdpbmUvZXh0ZW5zaW9ucy5kL21hcHBpbmctc3VmZml4LnByb3BlcnRp
ZXM8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0Ozxicj4NCiZndDsgJmd0OyZndDsm
Z3Q7Jmd0OyZndDsmZ3Q7IC4uLjxicj4NCiZndDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IGNv
bmZpZy5tYXBVc2VyLnJlZ2V4LnBhdHRlcm4gPSBeKD8mbHQ7dXNlciZndDtbXkBdKikkPGJyPg0K
Jmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsgY29uZmlnLm1hcFVzZXIucmVnZXgucmVwbGFj
ZW1lbnQgPSAke3VzZXJ9QGZvby5iYXI8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0
OyBjb25maWcubWFwVXNlci5yZWdleC5tdXN0TWF0Y2ggPSBmYWxzZTxicj4NCiZndDsgJmd0OyZn
dDsmZ3Q7Jmd0OyZndDsmZ3Q7IC4uLjxicj4NCiZndDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7
PGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsgTm90aWNlIHRoZXJlIHdhcyBtaXNz
aW5nICdyZWdleCcsIGFmdGVyICdtYXBVc2VyJy48YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsm
Z3Q7Jmd0Ozxicj4NCiZndDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0Ozxicj4NCiZndDsg
Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyAvSzxicj4NCiZndDsgJmd0OyZndDsmZ3Q7Jmd0
OyZndDsmZ3Q7Jmd0Ozxicj4NCiZndDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDs8
YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0Ozxicj4NCiZndDsg
Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0Ozxicj4NCiZndDsgJmd0OyZn
dDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyAvSzxicj4NCiZndDsgJmd0OyZndDsm
Z3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0Ozxicj4NCiZndDsgJmd0OyZndDsmZ3Q7Jmd0
OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZuYnNwOyZuYnNwOyAmZ3Q7PGJyPg0KJmd0OyAmZ3Q7
Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jm5ic3A7Jm5ic3A7ICZndDsgJmd0
Ozxicj4NCiZndDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZuYnNw
OyZuYnNwOyAmZ3Q7ICZndDsgQVBJOiAmbHQ7LS1BdXRobi5JbnZva2VDb21tYW5kcy5BVVRIRU5U
SUNBVEVfQ1JFREVOVElBTFM8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsg
cmVzdWx0PVNVQ0NFU1M8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7
Jmd0OyZndDsmbmJzcDsmbmJzcDsgJmd0OyAmZ3Q7PGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7
Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jm5ic3A7Jm5ic3A7ICZndDsgJmd0Ozxicj4NCiZndDsg
Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZuYnNwOyZuYnNwOyAmZ3Q7
ICZndDsgYnV0Ojxicj4NCiZndDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7
Jmd0OyZuYnNwOyZuYnNwOyAmZ3Q7ICZndDs8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7
Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmbmJzcDsmbmJzcDsgJmd0OyAmZ3Q7IEFQSTogLS0mZ3Q7QXV0
aHouSW52b2tlQ29tbWFuZHMuRkVUQ0hfUFJJTkNJUEFMX1JFQ09SRDxicj4NCiZndDsgJmd0OyZn
dDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZuYnNwOyZuYnNwOyAmZ3Q7ICZndDsg
cHJpbmNpcGFsPSd1c2VyQGJhei5mb28uYmFyPGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0
OyZndDsmZ3Q7ICZsdDttYWlsdG86cHJpbmNpcGFsPSd1c2VyQGJhei5mb28uYmFyJmd0Oyc8YnI+
DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmbmJzcDsmbmJz
cDsgJmd0OyAmZ3Q7IFNFVkVSRSZuYnNwOyBDYW5ub3QgcmVzb2x2ZSBwcmluY2lwYWwgJ3VzZXJA
YmF6LmZvby5iYXI8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsgJmx0O21h
aWx0bzp1c2VyQGJhei5mb28uYmFyJmd0Oyc8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7
Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmbmJzcDsmbmJzcDsgJmd0OyAmZ3Q7PGJyPg0KJmd0OyAmZ3Q7
Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jm5ic3A7Jm5ic3A7ICZndDsgJmd0
Ozxicj4NCiZndDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZuYnNw
OyZuYnNwOyAmZ3Q7ICZndDsgU28gaXQgZmFpbHMuPGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7
Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jm5ic3A7Jm5ic3A7ICZndDsgJmd0Ozxicj4NCiZndDsg
Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZuYnNwOyZuYnNwOyAmZ3Q7
ICZndDs8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsm
bmJzcDsmbmJzcDsgJmd0OyAmZ3Q7ICMgbGRhcHNlYXJjaCAteCAtSCBsZGFwOi8vYmF6LmZvby5i
YXIgLUQgdXNlckBmb28uYmFyPGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7
ICZsdDttYWlsdG86dXNlckBmb28uYmFyJmd0OyAtVyAtYjxicj4NCiZndDsgJmd0OyZndDsmZ3Q7
Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZuYnNwOyZuYnNwOyAmZ3Q7ICZndDsgREM9YmF6
LERDPWZvbyxEQz1iYXIgLXMgc3ViICZxdW90OyhzYW1BY2NvdW50TmFtZT11c2VyKSZxdW90Ozxi
cj4NCiZndDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyB1c2VyUHJpbmNpcGFsTmFtZSB8
PGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jm5ic3A7
Jm5ic3A7ICZndDsgJmd0OyBncmVwICd1c2VyUHJpbmNpcGFsTmFtZTonPGJyPg0KJmd0OyAmZ3Q7
Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jm5ic3A7Jm5ic3A7ICZndDsgJmd0
Ozxicj4NCiZndDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZuYnNw
OyZuYnNwOyAmZ3Q7ICZndDsgdXNlclByaW5jaXBhbE5hbWU6IHVzZXJAZm9vLmJhciAmbHQ7bWFp
bHRvOnVzZXJAZm9vLmJhciZndDs8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZn
dDsmZ3Q7Jmd0OyZndDsmbmJzcDsmbmJzcDsgJmd0OyAmZ3Q7PGJyPg0KJmd0OyAmZ3Q7Jmd0OyZn
dDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jm5ic3A7Jm5ic3A7ICZndDsgJmd0Ozxicj4N
CiZndDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZuYnNwOyZuYnNw
OyAmZ3Q7ICZndDsgfEhvdyBkbyB5b3UgY29uZmlndXJlIEFBQSB3aXRoIGJhc2U8YnI+DQomZ3Q7
ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsgJ0RDPWJheixEQz1mb28sREM9YmFyJyB3aGVu
PGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jm5ic3A7
Jm5ic3A7ICZndDsgJmd0OyB1c2VyUHJpbmNpcGFsTmFtZSBlbmRzIG9ubHkgb24gJ0Bmb28uYmFy
Jz88YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmbmJz
cDsmbmJzcDsgJmd0OyAmZ3Q7PGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7
Jmd0OyZndDsmZ3Q7Jm5ic3A7Jm5ic3A7ICZndDsgJmd0OyAvSzxicj4NCiZndDsgJmd0OyZndDsm
Z3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZuYnNwOyZuYnNwOyAmZ3Q7ICZndDsgfDxi
cj4NCiZndDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZuYnNwOyZu
YnNwOyAmZ3Q7ICZndDs8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7
Jmd0OyZndDsmbmJzcDsmbmJzcDsgJmd0OyAmZ3Q7PGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7
Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jm5ic3A7Jm5ic3A7ICZndDsgJmd0Ozxicj4NCiZndDsg
Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZuYnNwOyZuYnNwOyAmZ3Q7
ICZndDs8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsm
bmJzcDsmbmJzcDsgJmd0OyAmZ3Q7IF9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fPGJyPg0KJmd0OyAmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZn
dDsmZ3Q7Jm5ic3A7Jm5ic3A7ICZndDsgJmd0OyBVc2VycyBtYWlsaW5nIGxpc3Q8YnI+DQomZ3Q7
ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmbmJzcDsmbmJzcDsgJmd0
OyAmZ3Q7IFVzZXJzQG92aXJ0Lm9yZyAmbHQ7bWFpbHRvOlVzZXJzQG92aXJ0Lm9yZyZndDs8YnI+
DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmbmJzcDsmbmJz
cDsgJmd0OyAmZ3Q7IGh0dHA6Ly9saXN0cy5vdmlydC5vcmcvbWFpbG1hbi9saXN0aW5mby91c2Vy
czxicj4NCiZndDsgJmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZuYnNw
OyZuYnNwOyAmZ3Q7ICZndDs8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsm
Z3Q7Jmd0OyZndDs8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDs8YnI+DQom
Z3Q7ICZndDsmZ3Q7Jmd0OyZndDs8YnI+DQomZ3Q7ICZndDsmZ3Q7Jmd0Ozxicj4NCiZndDsgJmd0
Ozxicj4NCjwvcD4NCjwvYm9keT4NCjwvaHRtbD4NCg==
--_000_5d0ea4970f114d75b59b0194a4e78348exch24sluse_--
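The suffix-mapping regex discussed in the thread — pattern '^(?<user>[^@]*)$' with replacement '${user}@foo.bar' and mustMatch = false — can be sanity-checked outside oVirt. The sketch below uses Python's re module; note the extension itself takes Java regex syntax, where the named group is written (?<user>...) rather than Python's (?P<user>...):

```python
import re

# Java regex from the mapping extension: '^(?<user>[^@]*)$' with
# replacement '${user}@foo.bar'. Python spells the named group
# (?P<user>...) and references it in the replacement as \g<user>.
pattern = re.compile(r"^(?P<user>[^@]*)$")

def map_user(name: str) -> str:
    """Append the default UPN suffix when the name carries none.

    A name that already contains '@' does not match the anchored
    pattern, so re.sub leaves it untouched -- which is what
    mustMatch = false asks for."""
    return pattern.sub(r"\g<user>@foo.bar", name)

print(map_user("user"))                        # -> user@foo.bar
print(map_user("user@anotherdomain.foo.bar"))  # -> user@anotherdomain.foo.bar
```

The bare name picks up the default suffix, while names that already have a UPN suffix pass through unchanged — matching the behaviour Ondra describes for 'user' versus 'user@anotherdomain.foo.bar'.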
Re: [ovirt-users] oVirt 3.6 AAA LDAP cannot not log in when end of UPN is different from domain base
by Karli Sjöberg 28 Mar '16
On 25 March 2016 at 12:10 AM, Karli Sjöberg <karli.sjoberg@slu.se> wrote:
>
> On 24 March 2016 at 11:26 PM, Ondra Machacek <omachace@redhat.com> wrote:
> >
> > On 03/24/2016 11:14 PM, Karli Sjöberg wrote:
> > >
> > > On 24 March 2016 at 7:26 PM, Ondra Machacek <omachace@redhat.com> wrote:
> > > >
> > > > On 03/24/2016 06:16 PM, Karli Sjöberg wrote:
> > > > > Hi!
> > > > >
> > > > > Starting a new thread instead of hijacking someone else's.
> > > > >
> > > > > Managed to migrate from the old 'engine-manage-domains' auth to
> > > > > aaa-ldap using:
> > > > >
> > > > > # ovirt-engine-kerbldap-migration-tool --domain baz.foo.bar --cacert
> > > > > /tmp/ca.crt --apply
> > > > >
> > > > > All OK, no errors, but cannot log in:
> > > > >
> > > > > # ovirt-engine-extensions-tool aaa login-user --profile=baz.foo.bar-new
> > > > > --user-name=user
> > > >
> > > > If you want to log in with a user with a different UPN suffix, then just
> > > > append that suffix:
> > > >
> > > > $ ovirt-engine-extensions-tool aaa login-user --profile=baz.foo.bar-new
> > > > --user-name=user@foo.bar
> > >
> > > OK, some progress, that works!
> > >
> > > > If you have more suffixes and want to have one as the default, you can
> > > > use the following approach:
> > > >
> > > > 1) install ovirt-engine-extension-aaa-misc
> > > >
> > > > 2) create a new mapping extension like this:
> > > > /etc/ovirt-engine/extensions.d/mapping-suffix.properties
> > > >
> > > > ovirt.engine.extension.name = mapping-suffix
> > > > ovirt.engine.extension.bindings.method = jbossmodule
> > > > ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine-extensions.aaa.misc
> > > > ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engineextensions.aaa.misc.mapping.MappingExtension
> > > > ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Mapping
> > > > config.mapUser.type = regex
> > > > config.mapUser.pattern = ^(?<user>[^@]*)$
> > >
> > > Is that supposed to really say '<user>', or should it be changed to a
> > > real user name? Either way, it doesn't work, I tried it all.
> >
> > '?<user>' is just a named group in that regex, so you can later use it in
> > the 'config.mapUser.replacement' option. It should take everything up to
> > the first '@'.
> >
> > > > config.mapUser.replacement = ${user}@foo.bar
> > > > config.mapUser.mustMatch = false
> > > >
> > > > 3) select a mapping plugin in the authn configuration:
> > > >
> > > > ovirt.engine.aaa.authn.mapping.plugin = mapping-suffix
> > > >
> > > > With the above configuration in use, your user 'user' will be mapped to
> > > > user 'user@foo.bar', and users 'user@anotherdomain.foo.bar' will remain
> > > > 'user@anotherdomain.foo.bar'.
> > >
> > > This however does not; it doesn't replace the suffix as it's supposed
> > > to. I tried with many different variants of 'mapUser.pattern', but it
> > > simply won't change it; even if I type in '= ^user@baz.foo.bar$', the
> > > error is the same :(
> >
> > Hmm, hard to say what's wrong, try to run:
> >
> > $ ovirt-engine-extensions-tool --log-level=FINEST aaa login-user
> > --profile=baz.foo.bar-new --user-name=user
> >
> > and search for the mapping part in the log.
>
> Wow, what a mouthful :) Can you make anything out of it?
>
> https://dropoff.slu.se/index.php/s/EMe2NPmOfsWCNTv/download
>
> /K

Just noticed after logging in to webadmin as "user@foo.bar" (which worked
btw, so good there) that the "User Name" in the Users main tab looks really odd:
user@foo.bar@baz.foo.bar-new-authz

/K

> > > /K
> > >
> > > > > API: <--Authn.InvokeCommands.AUTHENTICATE_CREDENTIALS result=SUCCESS
> > > > >
> > > > > but:
> > > > >
> > > > > API: -->Authz.InvokeCommands.FETCH_PRINCIPAL_RECORD principal='user@baz.foo.bar'
> > > > > SEVERE  Cannot resolve principal 'user@baz.foo.bar'
> > > > >
> > > > > So it fails.
> > > > >
> > > > > # ldapsearch -x -H ldap://baz.foo.bar -D user@foo.bar -W -b
> > > > > DC=baz,DC=foo,DC=bar -s sub "(samAccountName=user)" userPrincipalName |
> > > > > grep 'userPrincipalName:'
> > > > >
> > > > > userPrincipalName: user@foo.bar
> > > > >
> > > > > How do you configure AAA with base 'DC=baz,DC=foo,DC=bar' when
> > > > > userPrincipalName ends only in '@foo.bar'?
> > > > >
> > > > > /K
> > > > >
> > > > > _______________________________________________
> > > > > Users mailing list
> > > > > Users@ovirt.org
> > > > > http://lists.ovirt.org/mailman/listinfo/users
dDsgfDxicj4NCiZndDsgJmd0OyAmZ3Q7Jm5ic3A7ICZndDsgJmd0Ozxicj4NCiZndDsgJmd0OyAm
Z3Q7Jm5ic3A7ICZndDsgJmd0Ozxicj4NCiZndDsgJmd0OyAmZ3Q7Jm5ic3A7ICZndDsgJmd0Ozxi
cj4NCiZndDsgJmd0OyAmZ3Q7Jm5ic3A7ICZndDsgJmd0Ozxicj4NCiZndDsgJmd0OyAmZ3Q7Jm5i
c3A7ICZndDsgJmd0OyBfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fXzxicj4NCiZndDsgJmd0OyAmZ3Q7Jm5ic3A7ICZndDsgJmd0OyBVc2VycyBtYWlsaW5nIGxp
c3Q8YnI+DQomZ3Q7ICZndDsgJmd0OyZuYnNwOyAmZ3Q7ICZndDsgVXNlcnNAb3ZpcnQub3JnPGJy
Pg0KJmd0OyAmZ3Q7ICZndDsmbmJzcDsgJmd0OyAmZ3Q7IGh0dHA6Ly9saXN0cy5vdmlydC5vcmcv
bWFpbG1hbi9saXN0aW5mby91c2Vyczxicj4NCiZndDsgJmd0OyAmZ3Q7Jm5ic3A7ICZndDsgJmd0
Ozxicj4NCiZndDsgJmd0OyAmZ3Q7PGJyPg0KPC9wPg0KPC9ib2R5Pg0KPC9odG1sPg0K
--_000_0d278e8e72e34bb696eaf54f5b6d9948exch24sluse_--
Hi,
On oVirt 3.6.3.4-1, when selecting the 'Event notification' sub-tab on a
user in the 'Users' tab, at the bottom this message is shown:
Note: To receive email notifications, ensure that the mail server is
configured and the ovirt-event-notifier service is running.
I cannot find this service installed anywhere, though. Which package
should contain this service?
# systemctl status ovirt-event-notifier
ovirt-event-notifier.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)
# locate ovirt-event-notifier
#
# yum provides *ovirt-event-notifier
Loading mirror speeds from cached hostfile
[...]
No matches found
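One possibility (a guess, not from this thread): the note in webadmin may simply use a different name than the actual systemd unit. A small helper that filters any unit list for notifier-like names; the name ovirt-engine-notifier in the comment is only an example of what might turn up:

```shell
# find_notifier_units: filter a list of systemd unit names for anything
# containing "notifier". The service behind the webadmin note may be named
# differently (e.g. ovirt-engine-notifier -- a guess, not confirmed here).
find_notifier_units() {
    grep -i 'notifier'
}

# On a live system one would feed it real unit names, e.g.:
#   systemctl list-unit-files | find_notifier_units
```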
Thanks.
I am seeking some assistance installing the oVirt Self Hosted Engine, but I
keep getting an error at the end: "System not stable or unable to read
/dev/mapper", etc. Here is the log file from my last try. I am testing this
for use at work as a test environment, where I have a single server to run
this on, which is the reason I chose the Self Hosted Engine. I guess the
instructions are not clear either, i.e. when I try a disk install with the
CentOS OVA I get an error, and finally when I choose the cdrom it works fine
until the end and then fails to initialize /dev/mapper, etc.
Any assistance will be highly appreciated.
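Not part of the original message, but regarding the "not a gzip file" error further down: that message suggests the setup tool expects the appliance OVA to be a gzip-compressed archive (an inference from the error, not documented here; the desktop OVA used may be a plain tar or a different format). A quick sanity check before retrying, with a hypothetical helper:

```shell
# check_ova: report whether a file looks like gzip-compressed data, which
# the "not a gzip file" error from hosted-engine --deploy suggests is
# required for the OVF/OVA appliance archive (an inference, not confirmed).
check_ova() {
    file -b "$1" | grep -qi 'gzip'
}

# Example (path from the transcript below):
#   check_ova /root/Downloads/CentOS-7.0-amd64-gui.ova && echo "gzip archive" || echo "not gzip"
```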
Last failed login: Wed Mar 23 09:56:50 EDT 2016 from dul-av1.acs.net on ssh:notty
There were 9 failed login attempts since the last successful login.
Last login: Tue Mar 22 17:59:14 2016 from atllp096.acs.net
[root@dul-ovrtst01 ~]#
[root@dul-ovrtst01 ~]# screen
[root@dul-ovrtst01 ~]# cd /
[root@dul-ovrtst01 /]# screen
[root@dul-ovrtst01 /]# hosted-engine --deploy
[ INFO ] Stage: Initializing
[ INFO ] Generating a temporary VNC password.
[ INFO ] Stage: Environment setup
Continuing will configure this host for serving as hypervisor and create a VM where you have to install the engine afterwards.
Are you sure you want to continue? (Yes, No)[Yes]: Yes
Configuration files: []
Log file: /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160323110803-m7svm5.log
Version: otopi-1.4.1 (otopi-1.4.1-1.el7.centos)
[ INFO ] Hardware supports virtualization
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ INFO ] Stage: Environment customization
--== STORAGE CONFIGURATION ==--
During customization use CTRL-D to abort.
Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs3, nfs4)[nfs3]: fc
The following luns have been found on the requested target:
[1] 3600508b10018433953524235374f0007 203GiB COMPAQ MSA1000 VOLUME
status: used, paths: 1 active
Please select the destination LUN (1) [1]: 1
The selected device is already used.
To create a vg on this device, you must use Force.
WARNING: This will destroy existing data on the device.
(Force, Abort)[Abort]? Force
[ INFO ] Installing on first host
--== SYSTEM CONFIGURATION ==--
--== NETWORK CONFIGURATION ==--
Please indicate a nic to set ovirtmgmt bridge on: (enp3s0, enp5s0) [enp3s0]:
iptables was detected on your computer, do you wish setup to configure it? (Yes, No)[Yes]: No
Please indicate a pingable gateway IP address [192.168.200.1]:
--== VM CONFIGURATION ==--
Booting from cdrom on RHEL7 is ISO image based only, as cdrom passthrough is disabled (BZ760885)
Please specify the device to boot the VM from (choose disk for the oVirt engine appliance)
(cdrom, disk, pxe) [disk]:
[ INFO ] Detecting available oVirt engine appliances
[ INFO ] No engine appliance image is available on your system.
Using an oVirt engine appliance could greatly speed-up ovirt hosted-engine deploy.
You could get oVirt engine appliance installing ovirt-engine-appliance rpm.
Please specify path to OVF archive you would like to use [None]: /root/Downloads/CentOS-7.0-amd64-gui.ova
[ ERROR ] Failed to execute stage 'Environment customization': not a gzip file
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20160323111200.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not reliable, please check the issue, fix and redeploy
Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160323110803-m7svm5.log
[root@dul-ovrtst01 /]# vi /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160323110803-m7svm5.log
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:479 STAGE boot
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:479 STAGE
closeup
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:479 STAGE
reboot
Traceback (most recent call last):
constants.PackEnv.DNF_DISABLED_PLUGINS
2016-03-23 11:08:03 DEBUG otopi.context context.runSequence:431 STAGE setup
Active: inactive (dead)
● vdsmd.service - Virtual Desktop Server Manager
Active: active (running) since Tue 2016-03-22 18:00:09 EDT; 17h ago
Main PID: 3327 (vdsm)
CGroup: /system.slice/vdsmd.service
└─3327 /usr/bin/python /usr/share/vdsm/vdsm
Mar 22 18:00:10 dul-ovrtst01 python[3327]: DIGEST-MD5 client step 1
Mar 22 18:00:10 dul-ovrtst01 python[3327]: DIGEST-MD5 ask_user_info()
Mar 22 18:00:10 dul-ovrtst01 python[3327]: DIGEST-MD5 client step 1
Mar 22 18:00:10 dul-ovrtst01 python[3327]: DIGEST-MD5 ask_user_info()
Mar 22 18:00:10 dul-ovrtst01 python[3327]: DIGEST-MD5 make_client_response()
Mar 22 18:00:10 dul-ovrtst01 python[3327]: DIGEST-MD5 client step 2
Mar 22 18:00:10 dul-ovrtst01 python[3327]: DIGEST-MD5 parse_server_challenge()
Mar 22 18:00:10 dul-ovrtst01 python[3327]: DIGEST-MD5 ask_user_info()
Mar 22 18:00:10 dul-ovrtst01 python[3327]: DIGEST-MD5 make_client_response()
Mar 22 18:00:10 dul-ovrtst01 python[3327]: DIGEST-MD5 client step 3
2f:41:91:a1:8e:ae:1c:b9:3f:2d:05:4d:83:2e:b0:
28:91:fe:fa:55:d5:a6:3d:77:b7:a1:20:66:4e:ff:
6c:20:b3:5f:ff:b8:f3:38:7d
Exponent: 65537 (0x10001)
LoadState=loaded
maxauthtries 6
maxsessions 10
ignoreuserknownhosts no
rhostsrsaauthentication no
hostbasedauthentication no
hostbasedusesnamefrompacketonly no
rsaauthentication yes
pubkeyauthentication yes
kerberosauthentication no
kerberosorlocalpasswd yes
kerberosticketcleanup yes
gssapiauthentication yes
gssapicleanupcredentials no
gssapikeyexchange no
gssapistrictacceptorcheck yes
gssapistorecredentialsonrekey no
gssapikexalgorithms gss-gex-sha1-,gss-group1-sha1-,gss-group14-sha1-
passwordauthentication yes
kbdinteractiveauthentication no
challengeresponseauthentication no
printmotd yes
printlastlog yes
x11forwarding yes
x11uselocalhost yes
permittty yes
strictmodes yes
tcpkeepalive yes
permitemptypasswords no
permituserenvironment no
uselogin no
compression delayed
usedns yes
allowtcpforwarding yes
loglevel INFO
PING 192.168.200.1 (192.168.200.1) 56(84) bytes of data.
method['method']()
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV PACKAGER/dnfRollback=bool:'True'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV PACKAGER/dnfpackagerEnabled=bool:'True'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV PACKAGER/keepAliveInterval=int:'30'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV PACKAGER/yumDisabledPlugins=list:'[]'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV PACKAGER/yumEnabledPlugins=list:'[]'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV PACKAGER/yumExpireCache=bool:'True'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV PACKAGER/yumRollback=bool:'True'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV PACKAGER/yumpackagerEnabled=bool:'False'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV SYSTEM/clockMaxGap=int:'5'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV SYSTEM/clockSet=bool:'False'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV SYSTEM/commandPath=str:'/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV SYSTEM/reboot=bool:'False'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV SYSTEM/rebootAllow=bool:'True'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV SYSTEM/rebootDeferTime=int:'10'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:514 ENVIRONMENT DUMP - END
2016-03-23 11:12:01 DEBUG otopi.context context._executeMethod:142 Stage pre-terminate METHOD otopi.plugins.otopi.dialog.cli.Plugin._pre_terminate
2016-03-23 11:12:01 DEBUG otopi.context context._executeMethod:148 condition False
2016-03-23 11:12:01 INFO otopi.context context.runSequence:427 Stage: Termination
2016-03-23 11:12:01 DEBUG otopi.context context.runSequence:431 STAGE terminate
2016-03-23 11:12:01 DEBUG otopi.context context._executeMethod:142 Stage terminate METHOD otopi.plugins.ovirt_hosted_engine_setup.core.misc.Plugin._terminate
2016-03-23 11:12:01 ERROR otopi.plugins.ovirt_hosted_engine_setup.core.misc misc._terminate:170 Hosted Engine deployment failed: this system is not reliable, please check the issue, fix and redeploy
2016-03-23 11:12:01 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:219 DIALOG:SEND Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160323110803-m7svm5.log
2016-03-23 11:12:01 DEBUG otopi.context context._executeMethod:142 Stage terminate METHOD otopi.plugins.otopi.dialog.human.Plugin._terminate
2016-03-23 11:12:01 DEBUG otopi.context context._executeMethod:142 Stage terminate METHOD otopi.plugins.otopi.dialog.machine.Plugin._terminate
2016-03-23 11:12:01 DEBUG otopi.context context._executeMethod:148 condition False
2016-03-23 11:12:01 DEBUG otopi.context context._executeMethod:142 Stage terminate METHOD otopi.plugins.otopi.core.log.Plugin._terminate
[root@dul-ovrtst01 /]# hosted-engine --deploy
[ INFO ] Generating a temporary VNC password.
[ INFO ] Stage: Environment setup
Continuing will configure this host for serving as hypervisor and
crea
Are you sure you want to continue? (Yes, No)[Yes]:
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTE
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTE
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTE
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTE
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTE
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTE
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTE
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTE
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTE
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTE
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTE
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTE
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTE
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTE
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTE
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTE
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTE
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTE
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVESETUP
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/c
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/c
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/c
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/r
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/r
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/r
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT
2016-03-23 11:12:01 DEBUG otopi.context context._executeMethod:142 Stage
pre-ter
2016-03-23 11:12:01 DEBUG otopi.context context._executeMethod:148
condition Fal
2016-03-23 11:12:01 INFO otopi.context context.runSequence:427 Stage:
Terminatio
2016-03-23 11:12:01 DEBUG otopi.context context.runSequence:431 STAGE
terminate
2016-03-23 11:12:01 DEBUG otopi.context context._executeMethod:142 Stage
termina
2016-03-23 11:12:01 ERROR otopi.plugins.ovirt_hosted_engine_setup.core.misc
misc
ck the issue, fix and redeploy
2016-03-23 11:12:01 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:21
ovirt-hosted-engine-setup-20160323110803-m7svm5.log
2016-03-23 11:12:01 DEBUG otopi.context context._executeMethod:142 Stage
termina
2016-03-23 11:12:01 DEBUG otopi.context context._executeMethod:142 Stage
termina
2016-03-23 11:12:01 DEBUG otopi.context context._executeMethod:148
condition Fal
2016-03-23 11:12:01 DEBUG otopi.context context._executeMethod:142 Stage
termina
topi.context context.dumpEnvironment:510
~
[root@dul-ovrtst01 /]# hosted-engine --deploy
[ INFO ] Generating a temporary VNC password.
[ INFO ] Stage: Environment setup
Continuing will configure this host for serving as hypervisor and
crea
Are you sure you want to continue? (Yes, No)[Yes]: Yes
Configuration files: []
Log file:
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup
Version: otopi-1.4.1 (otopi-1.4.1-1.el7.centos)
[ INFO ] Hardware supports virtualization
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ INFO ] Stage: Environment customization
--== STORAGE CONFIGURATION ==--
During customization use CTRL-D to abort.
Please specify the storage you would like to use (glusterfs,
iscsi, fc
The following luns have been found on the requested target:
[1] 3600508b10018433953524235374f0007 203GiB
COMPAQ
status: used, paths: 1 active
Please select the destination LUN (1) [1]: 1
The selected device is already used.
To create a vg on this device, you must use Force.
WARNING: This will destroy existing data on the device.
(Force, Abort)[Abort]? Force
[ INFO ] Installing on first host
--== SYSTEM CONFIGURATION ==--
--== NETWORK CONFIGURATION ==--
Please indicate a nic to set ovirtmgmt bridge on: (enp3s0,
enp5s0) [en
iptables was detected on your computer, do you wish setup to
configure
Please indicate a pingable gateway IP address [192.168.200.1]:
--== VM CONFIGURATION ==--
Booting from cdrom on RHEL7 is ISO image based only, as cdrom
passthro
Please specify the device to boot the VM from (choose disk for
the oVi
(cdrom, disk, pxe) [disk]:
[ INFO ] Detecting available oVirt engine appliances
[ INFO ] No engine appliance image is available on your system.
Using an oVirt engine appliance could greatly speed-up ovirt
hosted-en
You could get oVirt engine appliance installing
ovirt-engine-appliance
Please specify path to OVF archive you would like to use [None]:
/root
[ INFO ] Checking OVF archive content (could take a few minutes depending
on ar
[ ERROR ] Failed to execute stage 'Environment customization': CRC check failed
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/ans
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not reliable,
please c
Log file is located at
/var/log/ovirt-hosted-engine-setup/ovirt-hosted
[root@dul-ovrtst01 /]# hosted-engine --deploy
[ INFO ] Stage: Initializing
[ INFO ] Generating a temporary VNC password.
[ INFO ] Stage: Environment setup
Continuing will configure this host for serving as hypervisor and
crea
Are you sure you want to continue? (Yes, No)[Yes]: Yes
Configuration files: []
Log file:
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup
Version: otopi-1.4.1 (otopi-1.4.1-1.el7.centos)
[ INFO ] Hardware supports virtualization
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ INFO ] Stage: Environment customization
--== STORAGE CONFIGURATION ==--
During customization use CTRL-D to abort.
Please specify the storage you would like to use (glusterfs,
iscsi, fc
Please specify the full shared storage connection path to use
(example
[ ERROR ] Error while mounting specified storage path: mount.nfs: remote
share n
[WARNING] Cannot unmount /tmp/tmpu9YLEJ
[ ERROR ] Cannot access storage connection /hpnas01: mount.nfs: remote
share not
Please specify the full shared storage connection path to use
(example
[ ERROR ] Error while mounting specified storage path: mount.nfs: Failed to
reso
[WARNING] Cannot unmount /tmp/tmpP8Mmjj
[ ERROR ] Cannot access storage connection dul-ovrtst.acs.net:hpnas01:
mount.nfs
Please specify the full shared storage connection path to use
(example
e
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/ans
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not reliable,
please c
Log file is located at
/var/log/ovirt-hosted-engine-setup/ovirt-hosted
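The two NFS failures in the attempt above fit a pattern: the first path was given without a server ("/hpnas01"), and the second used a host:export that could not be mounted (the truncated error reads "Failed to reso[lve]"). A tiny syntax check, just to illustrate the host:/export shape mount.nfs expects; the helper name is made up:

```shell
# nfs_path_ok: succeed only when a storage path has the host:/export shape
# that mount.nfs expects; a bare local path like "/hpnas01" (as entered in
# the transcript) is rejected, as is host:export without the slash.
nfs_path_ok() {
    case "$1" in
        ?*:/*) return 0 ;;   # something before ':' and a '/' right after it
        *)     return 1 ;;
    esac
}

# e.g. nfs_path_ok dul-ovrtst.acs.net:/hpnas01   -> valid shape
#      nfs_path_ok /hpnas01                      -> rejected
```

This checks only the syntax; the server name must also resolve and export the share.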
[root@dul-ovrtst01 /]# hosted-engine --deploy
[ INFO ] Stage: Initializing
[ INFO ] Generating a temporary VNC password.
[ INFO ] Stage: Environment setup
Continuing will configure this host for serving as hypervisor and
crea
Are you sure you want to continue? (Yes, No)[Yes]: Yes
Configuration files: []
Log file:
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup
Version: otopi-1.4.1 (otopi-1.4.1-1.el7.centos)
[ INFO ] Hardware supports virtualization
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ INFO ] Stage: Environment customization
--== STORAGE CONFIGURATION ==--
During customization use CTRL-D to abort.
Please specify the storage you would like to use (glusterfs,
iscsi, fc
The following luns have been found on the requested target:
[1] 3600508b10018433953524235374f0007 203GiB
COMPAQ
status: used, paths: 1 active
[2] 3600143800006b070a577938b0f0b000e 1396GiB
COMPAQ
status: used, paths: 1 active
[3] 3600508b10010443953555538485a001a 1862GiB HP
status: used, paths: 1 active
Please select the destination LUN (1, 2, 3) [1]: 2
The selected device is already used.
To create a vg on this device, you must use Force.
WARNING: This will destroy existing data on the device.
(Force, Abort)[Abort]? Force
[ INFO ] Installing on first host
--== SYSTEM CONFIGURATION ==--
--== NETWORK CONFIGURATION ==--
Please indicate a nic to set ovirtmgmt bridge on: (enp3s0,
enp5s0) [en
iptables was detected on your computer, do you wish setup to
configure
Please indicate a pingable gateway IP address [192.168.200.1]:
--== VM CONFIGURATION ==--
Booting from cdrom on RHEL7 is ISO image based only, as cdrom
passthro
Please specify the device to boot the VM from (choose disk for
the oVi
(cdrom, disk, pxe) [disk]:
[ INFO ] Detecting available oVirt engine appliances
[ INFO ] No engine appliance image is available on your system.
Using an oVirt engine appliance could greatly speed-up ovirt
hosted-en
You could get oVirt engine appliance installing
ovirt-engine-appliance
Please specify path to OVF archive you would like to use [None]:
/Down
[ ERROR ] The specified file does not exists
[ ERROR ] The specified OVF archive is not a valid OVF archive.
Please specify path to OVF archive you would like to use [None]:
/Down
[ ERROR ] The specified file does not exists
[ ERROR ] The specified OVF archive is not a valid OVF archive.
Please specify path to OVF archive you would like to use [None]:
/root
[ ERROR ] Failed to execute stage 'Environment customization': not a gzip file
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/ans
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not reliable,
please c
Log file is located at
/var/log/ovirt-hosted-engine-setup/ovirt-hosted
[root@dul-ovrtst01 /]# vi
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/log
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/log
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/log
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/log
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT
2016-03-23 11:08:03 DEBUG otopi.context context._executeMethod:142 Stage
boot ME
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV
DIALOG/d
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT
2016-03-23 11:08:03 DEBUG otopi.context context._executeMethod:142 Stage
boot ME
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV
DIALOG/b
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT
2016-03-23 11:08:03 DEBUG otopi.context context._executeMethod:142 Stage
boot ME
2016-03-23 11:08:03 DEBUG otopi.context context._executeMethod:148
condition Fal
2016-03-23 11:08:03 DEBUG otopi.context context._executeMethod:142 Stage
boot ME
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:477 SEQUENCE
DUMP -
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:479 STAGE boot
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:479 STAGE init
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:479 STAGE setup
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD
otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:479 STAGE
customiza
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:479 STAGE cleanup
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:479 STAGE closeup
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:479 STAGE cleanup
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:479 STAGE pre-termi
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:479 STAGE terminate
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:479 STAGE reboot
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:484 METHOD otop
2016-03-23 11:08:03 DEBUG otopi.context context.dumpSequence:486 SEQUENCE DUMP -
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:500 ENVIRONMENT
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV BASE/abo
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV BASE/deb
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV BASE/err
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV BASE/exc
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV BASE/exe
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV BASE/exi
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV BASE/log
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV BASE/plu
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV BASE/sup
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV CORE/fai
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV CORE/log
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV CORE/ran
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV DIALOG/b
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV DIALOG/d
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV INFO/PAC
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV OVESETUP
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV PACKAGER
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:514 ENVIRONMENT
2016-03-23 11:08:03 DEBUG otopi.context context._executeMethod:142 Stage boot ME
2016-03-23 11:08:03 DEBUG otopi.plugins.otopi.packagers.dnfpackager dnfpackager.
Traceback (most recent call last):
  File "/usr/share/otopi/plugins/otopi/packagers/dnfpackager.py", line 165, in _
    constants.PackEnv.DNF_DISABLED_PLUGINS
  File "/usr/share/otopi/plugins/otopi/packagers/dnfpackager.py", line 75, in _g
    from otopi import minidnf
  File "/usr/lib/python2.7/site-packages/otopi/minidnf.py", line 31, in <module>
    import dnf
ImportError: No module named dnf
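[Editor's note: the traceback above is otopi's DNF packager probing whether the `dnf` Python module is importable under the Python 2.7 interpreter it runs in; when the import fails it falls back to another packager (yum), so by itself this DEBUG traceback is a detection path rather than a fatal error. A minimal sketch of that kind of availability probe is below; the helper name `module_available` is illustrative, not otopi's actual API.]

```python
import importlib


def module_available(name):
    """Return True if `name` can be imported in this interpreter.

    This mirrors the probe otopi's dnfpackager effectively performs:
    attempt the import, and treat ImportError as "packager unavailable"
    rather than as a hard failure.
    """
    try:
        importlib.import_module(name)
        return True
    except ImportError:
        return False


# On the host in this log, module_available("dnf") is False for the
# Python 2.7 interpreter running otopi, so the DNF backend is skipped.
```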
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:500 ENVIRONMENT
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV PACKAGER
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:514 ENVIRONMENT
2016-03-23 11:08:03 DEBUG otopi.context context._executeMethod:142 Stage boot ME
2016-03-23 11:08:03 DEBUG otopi.plugins.otopi.system.info info._init:51 SYSTEM I
2016-03-23 11:08:03 DEBUG otopi.plugins.otopi.system.info info._init:52 executab
2016-03-23 11:08:03 DEBUG otopi.plugins.otopi.system.info info._init:53 python /
2016-03-23 11:08:03 DEBUG otopi.plugins.otopi.system.info info._init:54 platform
2016-03-23 11:08:03 DEBUG otopi.plugins.otopi.system.info info._init:55 distribu
2016-03-23 11:08:03 DEBUG otopi.plugins.otopi.system.info info._init:56 host 'du
2016-03-23 11:08:03 DEBUG otopi.plugins.otopi.system.info info._init:62 uid 0 eu
2016-03-23 11:08:03 DEBUG otopi.plugins.otopi.system.info info._init:64 SYSTEM I
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:500 ENVIRONMENT
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV PACKAGER
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:514 ENVIRONMENT
2016-03-23 11:08:03 INFO otopi.context context.runSequence:427 Stage: Initializi
2016-03-23 11:08:03 DEBUG otopi.context context.runSequence:431 STAGE init
2016-03-23 11:08:03 DEBUG otopi.context context._executeMethod:142 Stage init ME
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:500 ENVIRONMENT
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV CORE/con
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:514 ENVIRONMENT
2016-03-23 11:08:03 DEBUG otopi.context context._executeMethod:148 condition Fal
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV SYSTEM/c
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV CORE/int
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV CORE/mai
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV CORE/mod
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV DIALOG/c
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV NETWORK/
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV SYSTEM/r
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV OVEHOSTE
2016-03-23 11:08:03 DEBUG otopi.plugins.ovirt_hosted_engine_setup.core.offlinepa
2016-03-23 11:08:03 DEBUG otopi.context context.dumpEnvironment:510 ENV CORE/log
2016-03-23 11:08:03 INFO otopi.plugins.ovirt_hosted_engine_setup.vm.runvm mixins
2016-03-23 11:08:03 INFO otopi.context context.runSequence:427 Stage: Environmen
2016-03-23 11:08:03 DEBUG otopi.context context.runSequence:431 STAGE setup
2016-03-23 11:08:03 DEBUG otopi.context context._executeMethod:142 Stage setup M
2016-03-23 11:08:03 DEBUG otopi.plugins.otopi.dialog.human human.queryString:156
2016-03-23 11:08:03 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:21
2016-03-23 11:08:07 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:21
2016-03-23 11:08:07 DEBUG otopi.plugins.ovirt_hosted_engine_setup.core.misc misc
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:500 ENVIRONMENT
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:510 ENV OVEHOSTE
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:514 ENVIRONMENT
2016-03-23 11:08:07 DEBUG otopi.context context._executeMethod:142 Stage setup M
2016-03-23 11:08:07 DEBUG otopi.context context._executeMethod:148 condition Fal
2016-03-23 11:08:07 DEBUG otopi.ovirt_host_deploy.hardware hardware.getVendor:52
2016-03-23 11:08:07 DEBUG otopi.ovirt_host_deploy.hardware hardware._prdmsr:125
2016-03-23 11:08:07 DEBUG otopi.ovirt_host_deploy.hardware hardware._vmx_enabled
2016-03-23 11:08:07 DEBUG otopi.ovirt_host_deploy.hardware hardware._cpuid:88 cp
2016-03-23 11:08:07 DEBUG otopi.ovirt_host_deploy.hardware hardware._cpu_has_vmx
2016-03-23 11:08:07 DEBUG otopi.ovirt_host_deploy.hardware hardware._isVirtualiz
2016-03-23 11:08:07 DEBUG otopi.ovirt_host_deploy.hardware hardware.detect:201 H
2016-03-23 11:08:07 INFO otopi.plugins.ovirt_hosted_engine_setup.vdsmd.cpu cpu._
2016-03-23 11:08:07 DEBUG otopi.context context._executeMethod:142 Stage setup M
2016-03-23 11:08:07 INFO otopi.context context.runSequence:427 Stage:
Environmen
2016-03-23 11:08:07 DEBUG otopi.context context.runSequence:431 STAGE
internal_p
2016-03-23 11:08:07 DEBUG otopi.context context._executeMethod:142 Stage
interna
2016-03-23 11:08:07 DEBUG otopi.context context._executeMethod:148
condition Fal
2016-03-23 11:08:07 DEBUG otopi.context context._executeMethod:148
condition Fal
2016-03-23 11:08:07 DEBUG otopi.context context._executeMethod:142 Stage
interna
2016-03-23 11:08:07 INFO otopi.context context.runSequence:427 Stage:
Programs d
2016-03-23 11:08:07 DEBUG otopi.context context.runSequence:431 STAGE
programs
2016-03-23 11:08:07 DEBUG otopi.context context._executeMethod:142 Stage
program
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT
2016-03-23 11:08:07 DEBUG otopi.context context._executeMethod:142 Stage
program
2016-03-23 11:08:07 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw
2016-03-23 11:08:07 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:93
LANG=en_US.UTF-8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
2016-03-23 11:08:07 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:94
2016-03-23 11:08:07 DEBUG otopi.plugins.otopi.services.systemd
systemd._programs
2016-03-23 11:08:07 DEBUG otopi.context context._executeMethod:142 Stage
program
2016-03-23 11:08:07 DEBUG otopi.plugins.otopi.services.rhel
plugin.executeRaw:87
2016-03-23 11:08:07 DEBUG otopi.plugins.otopi.services.rhel
plugin.execute:936 e
LANG=en_US.UTF-8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
2016-03-23 11:08:07 DEBUG otopi.plugins.otopi.services.rhel
plugin.execute:941 e
2016-03-23 11:08:07 DEBUG otopi.context context._executeMethod:142 Stage
program
2016-03-23 11:08:07 DEBUG otopi.context context._executeMethod:142 Stage
program
2016-03-23 11:08:07 DEBUG otopi.plugins.otopi.services.systemd
systemd.status:10
2016-03-23 11:08:07 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw
2016-03-23 11:08:07 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:93
● ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring
Agen
Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service;
disabled; ven
Active: inactive (dead)
2016-03-23 11:08:07 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:94
2016-03-23 11:08:07 DEBUG otopi.plugins.otopi.services.systemd
systemd.status:10
● ovirt-ha-broker.service - oVirt Hosted Engine High Availability
Communications
Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-broker.service;
disabled; ve
Active: inactive (dead)
2016-03-23 11:08:07 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw
2016-03-23 11:08:07 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:93
● NetworkManager.service - Network Manager
Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service;
disabled; ven
Active: inactive (dead)
2016-03-23 11:08:07 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:94
2016-03-23 11:08:07 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.network.bridge
2016-03-23 11:08:07 INFO otopi.context context.runSequence:427 Stage:
Environmen
2016-03-23 11:08:07 DEBUG otopi.context context.runSequence:431 STAGE
late_setup
2016-03-23 11:08:07 DEBUG otopi.context context._executeMethod:142 Stage
late_se
2016-03-23 11:08:07 DEBUG otopi.context context._executeMethod:142 Stage
late_se
2016-03-23 11:08:07 DEBUG otopi.plugins.otopi.services.systemd
systemd.status:10
2016-03-23 11:08:07 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw
2016-03-23 11:08:07 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:93
● vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor
preset
Active: active (running) since Tue 2016-03-22 18:00:09 EDT; 17h ago
Process: 3257 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh
--pre-start
Main PID: 3327 (vdsm)
CGroup: /system.slice/vdsmd.service
└─3327 /usr/bin/python /usr/share/vdsm/vdsm
Mar 22 18:00:10 dul-ovrtst01 python[3327]: DIGEST-MD5 client step 1
Mar 22 18:00:10 dul-ovrtst01 python[3327]: DIGEST-MD5 ask_user_info()
Mar 22 18:00:10 dul-ovrtst01 python[3327]: DIGEST-MD5 client step 1
Mar 22 18:00:10 dul-ovrtst01 python[3327]: DIGEST-MD5 ask_user_info()
Mar 22 18:00:10 dul-ovrtst01 python[3327]: DIGEST-MD5 make_client_response()
Mar 22 18:00:10 dul-ovrtst01 python[3327]: DIGEST-MD5 client step 2
Mar 22 18:00:10 dul-ovrtst01 python[3327]: DIGEST-MD5 make_client_response()
Mar 22 18:00:10 dul-ovrtst01 python[3327]: DIGEST-MD5 client step 3
2016-03-23 11:08:07 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:94
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTE
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT
2016-03-23 11:08:07 DEBUG otopi.context context._executeMethod:142 Stage
late_se
Certificate:
Data:
Signature Algorithm: sha1WithRSAEncryption
Issuer: C=EN, L=Test, O=Test, CN=TestCA
Validity
Not Before: Mar 22 22:00:33 2016 GMT
Not After : Mar 22 22:00:33 2019 GMT
Subject: C=EN, L=Test, O=Test, CN=Test
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (1024 bit)
Modulus:
00:d7:d0:9d:89:c9:df:79:91:8c:c8:ec:19:2c:d1:
d0:48:ed:d4:55:77:cd:a1:b7:26:95:1c:4c:0a:ab:
fe:0c:7c:e7:ea:ec:46:d2:bf:30:9f:7e:c2:0c:52:
37:b3:e3:28:c9:19:35:6d:42:8a:f3:a9:cb:38:e0:
4a:5b:21:0b:2b:f8:ba:b2:dd:38:a1:29:e9:5e:a0:
6c:4a:31:b1:a1:a3:47:2c:84:1b:79:6f:27:b8:2d:
Signature Algorithm: sha1WithRSAEncryption
05:9d:95:4d:58:2d:5e:d6:83:8a:0e:ec:4b:51:6a:86:50:a6:
eb:f9:3f:e0:26:fa:96:b9:af:c9:de:2f:a0:10:79:d3:0b:fe:
0c:e6:da:65:77:0d:b9:5e:99:95:6a:03:d4:b3:8d:4c:d4:df:
0d:87:43:ce:6c:30:2d:88:a2:92:ad:22:a2:13:0c:0b:43:a2:
2016-03-23 11:08:07 DEBUG otopi.plugins.otopi.services.systemd
systemd.exists:88
2016-03-23 11:08:07 DEBUG otopi.plugins.otopi.network.firewalld
firewalld._get_f
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/
2016-03-23 11:08:07 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT
2016-03-23 11:08:08 DEBUG otopi.context context._executeMethod:142 Stage
customi
2016-03-23 11:08:08 DEBUG otopi.context context._executeMethod:142 Stage
customi
2016-03-23 11:08:08 DEBUG otopi.context context._executeMethod:148
condition Fal
2016-03-23 11:08:08 DEBUG otopi.context context._executeMethod:142 Stage
customi
2016-03-23 11:08:08 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:21
2016-03-23 11:08:08 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:21
2016-03-23 11:08:08 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:21
2016-03-23 11:08:08 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.storage.storag
2016-03-23 11:08:08 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156
2016-03-23 11:08:17 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTE
2016-03-23 11:08:17 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT
2016-03-23 11:08:17 DEBUG otopi.context context._executeMethod:148
condition Fal
2016-03-23 11:08:41 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:21
2016-03-23 11:08:41 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:21
2016-03-23 11:08:45 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:21
2016-03-23 11:09:07 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.storage.blockd
2016-03-23 11:09:07 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156
2016-03-23 11:09:07 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:21
2016-03-23 11:09:07 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:21
2016-03-23 11:09:07 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:21
2016-03-23 11:09:07 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:21
2016-03-23 11:09:34 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:21
2016-03-23 11:09:34 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT
2016-03-23 11:09:34 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTE
2016-03-23 11:09:34 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTE
2016-03-23 11:09:34 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTE
2016-03-23 11:09:34 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTE
2016-03-23 11:09:34 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT
2016-03-23 11:09:34 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.storage.storag
2016-03-23 11:09:34 INFO
otopi.plugins.ovirt_hosted_engine_setup.storage.storage
2016-03-23 11:09:34 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT
2016-03-23 11:09:34 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTE
2016-03-23 11:09:34 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTE
2016-03-23 11:09:34 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT
2016-03-23 11:09:34 DEBUG otopi.context context._executeMethod:148
condition Fal
2016-03-23 11:09:34 DEBUG otopi.context context._executeMethod:142 Stage
customi
2016-03-23 11:09:34 DEBUG otopi.context context._executeMethod:142 Stage
customi
2016-03-23 11:09:34 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:21
2016-03-23 11:09:34 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:21
2016-03-23 11:09:34 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:21
2016-03-23 11:09:34 DEBUG otopi.context context._executeMethod:148
condition Fal
2016-03-23 11:09:34 DEBUG otopi.context context._executeMethod:142 Stage
customi
2016-03-23 11:09:34 DEBUG otopi.plugins.otopi.services.systemd
systemd.exists:88
2016-03-23 11:09:34 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw
2016-03-23 11:09:34 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:93
LoadState=loaded
2016-03-23 11:09:34 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:94
2016-03-23 11:09:34 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.system.sshd pl
2016-03-23 11:09:34 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.system.sshd pl
usepam yes
serverkeybits 1024
x11displayoffset 10
maxauthtries 6
maxsessions 10
clientaliveinterval 0
clientalivecountmax 3
permitrootlogin yes
ignorerhosts yes
ignoreuserknownhosts no
rhostsrsaauthentication no
hostbasedauthentication no
hostbasedusesnamefrompacketonly no
rsaauthentication yes
pubkeyauthentication yes
kerberosauthentication no
kerberosorlocalpasswd yes
kerberosticketcleanup yes
gssapiauthentication yes
gssapicleanupcredentials no
gssapikeyexchange no
gssapistrictacceptorcheck yes
gssapistorecredentialsonrekey no
gssapikexalgorithms gss-gex-sha1-,gss-group1-sha1-,gss-group14-sha1-
passwordauthentication yes
kbdinteractiveauthentication no
challengeresponseauthentication no
printmotd yes
printlastlog yes
x11forwarding yes
x11uselocalhost yes
permittty yes
strictmodes yes
tcpkeepalive yes
permitemptypasswords no
permituserenvironment no
uselogin no
compression delayed
gatewayports no
showpatchlevel no
usedns yes
allowtcpforwarding yes
kerberosusekuserok yes
gssapienablek5users no
pidfile /var/run/sshd.pid
xauthlocation /usr/bin/xauth
banner none
authorizedkeysfile .ssh/authorized_keys
acceptenv LC_TIME
acceptenv LC_COLLATE
acceptenv LC_PAPER
acceptenv LC_NAME
acceptenv LC_ADDRESS
xauthlocation /usr/bin/xauth
banner none
authorizedkeysfile .ssh/authorized_keys
acceptenv LC_TIME
acceptenv LC_COLLATE
acceptenv LC_PAPER
acceptenv LC_NAME
acceptenv LC_ADDRESS
acceptenv LC_IDENTIFICATION
acceptenv LC_ALL
acceptenv LANGUAGE
acceptenv XMODIFIERS
permittunnel no
ipqos lowdelay throughput
rekeylimit 0 0
2016-03-23 11:09:34 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.system.sshd pl
2016-03-23 11:09:34 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT
2016-03-23 11:09:34 DEBUG otopi.context context._executeMethod:142 Stage
customi
2016-03-23 11:09:34 DEBUG otopi.context context._executeMethod:142 Stage
customi
2016-03-23 11:09:34 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:21
2016-03-23 11:09:34 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:21
2016-03-23 11:09:34 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:21
2016-03-23 11:09:34 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.network.bridge
2016-03-23 11:09:34 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.network.bridge
2016-03-23 11:09:34 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.network.bridge
2016-03-23 11:09:34 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.network.bridge
2016-03-23 11:09:34 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156
2016-03-23 11:09:40 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT
2016-03-23 11:09:40 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTE
2016-03-23 11:09:40 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT
2016-03-23 11:12:00 DEBUG otopi.context context._executeMethod:156 method
except
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/otopi/context.py", line 146, in
_execut
method['method']()
File
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-eng
valid = self._check_ovf(ova_path)
File
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-eng
tar = tarfile.open(path, 'r:gz')
File "/usr/lib64/python2.7/tarfile.py", line 1678, in open
return func(name, filemode, fileobj, **kwargs)
File "/usr/lib64/python2.7/tarfile.py", line 1729, in gzopen
raise ReadError("not a gzip file")
ReadError: not a gzip file
2016-03-23 11:12:00 ERROR otopi.context context._executeMethod:165 Failed
to exe
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/err
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/exc
k object at 0x282be60>)]'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT
2016-03-23 11:12:00 INFO otopi.context context.runSequence:427 Stage: Clean
up
2016-03-23 11:12:00 DEBUG otopi.context context.runSequence:431 STAGE
cleanup
2016-03-23 11:12:00 DEBUG otopi.context context._executeMethod:142 Stage
cleanup
2016-03-23 11:12:00 DEBUG otopi.context context._executeMethod:142 Stage
cleanup
2016-03-23 11:12:00 DEBUG otopi.context context._executeMethod:142 Stage
cleanup
File "/usr/lib64/python2.7/tarfile.py", line 1729, in gzopen
raise ReadError("not a gzip file")
ReadError: not a gzip file
2016-03-23 11:12:00 ERROR otopi.context context._executeMethod:165 Failed
to exe
cute stage 'Environment customization': not a gzip file
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT
DUMP - BEGIN
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/err
or=bool:'True'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/exc
eptionInfo=list:'[(<class 'tarfile.ReadError'>,
ReadError('not a gzip file',), <
traceback object at 0x282be60>)]'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT
DUMP - END
2016-03-23 11:12:00 INFO otopi.context context.runSequence:427 Stage: Clean
up
2016-03-23 11:12:00 DEBUG otopi.context context.runSequence:431 STAGE
cleanup
2016-03-23 11:12:00 DEBUG otopi.context context._executeMethod:142 Stage
cleanup
METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.remote_answerfile.Plugin._c
leanup
2016-03-23 11:12:00 DEBUG otopi.context context._executeMethod:142 Stage
cleanup
METHOD
otopi.plugins.ovirt_hosted_engine_setup.engine.add_host.Plugin._cleanup
2016-03-23 11:12:00 DEBUG otopi.context context._executeMethod:142 Stage
cleanup
METHOD
otopi.plugins.ovirt_hosted_engine_setup.pki.vdsmpki.Plugin._cleanup
File "/usr/lib64/python2.7/tarfile.py", line 1729, in gzopen
raise ReadError("not a gzip file")
ReadError: not a gzip file
2016-03-23 11:12:00 ERROR otopi.context context._executeMethod:165 Failed
to exe
cute stage 'Environment customization': not a gzip file
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT
DUMP - BEGIN
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/err
or=bool:'True'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/exc
eptionInfo=list:'[(<class 'tarfile.ReadError'>,
ReadError('not a gzip file',), <
traceback object at 0x282be60>)]'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT
DUMP - END
2016-03-23 11:12:00 INFO otopi.context context.runSequence:427 Stage: Clean
up
2016-03-23 11:12:00 DEBUG otopi.context context.runSequence:431 STAGE
cleanup
2016-03-23 11:12:00 DEBUG otopi.context context._executeMethod:142 Stage
cleanup
METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.remote_answerfile.Plugin._c
leanup
2016-03-23 11:12:00 DEBUG otopi.context context._executeMethod:142 Stage
cleanup
METHOD
otopi.plugins.ovirt_hosted_engine_setup.engine.add_host.Plugin._cleanup
2016-03-23 11:12:00 DEBUG otopi.context context._executeMethod:142 Stage
cleanup
METHOD
otopi.plugins.ovirt_hosted_engine_setup.pki.vdsmpki.Plugin._cleanup
2016-03-23 11:09:45 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 11:09:45 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/firewalldAvailable=bool:'False'
2016-03-23 11:09:45 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 11:09:45 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.firewall.Plugin._configuration
2016-03-23 11:09:45 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 11:09:45 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/firewalldServices=list:'[{'directory': 'base', 'name':
'hosted-console'}]'
2016-03-23 11:09:45 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 11:09:45 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.gateway.Plugin._customization
2016-03-23 11:09:45 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query OVEHOSTED_GATEWAY
2016-03-23 11:09:45 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Please indicate a
pingable gateway IP address [192.168.200.1]:
2016-03-23 11:09:47 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.network.gateway
plugin.executeRaw:828 execute: ('/bin/ping', '-c', '1', '192.168.200.1'),
executable='None', cwd='None', env=None
2016-03-23 11:09:47 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.network.gateway
plugin.executeRaw:878 execute-result: ('/bin/ping', '-c', '1',
'192.168.200.1'), rc=0
2016-03-23 11:09:47 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.network.gateway plugin.execute:936
execute-output: ('/bin/ping', '-c', '1', '192.168.200.1') stdout:
PING 192.168.200.1 (192.168.200.1) 56(84) bytes of data.
64 bytes from 192.168.200.1: icmp_seq=1 ttl=64 time=37.0 ms
--- 192.168.200.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 37.077/37.077/37.077/0.000 ms
2016-03-23 11:09:47 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.network.gateway plugin.execute:941
execute-output: ('/bin/ping', '-c', '1', '192.168.200.1') stderr:
2016-03-23 11:09:47 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 11:09:47 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/gateway=str:'192.168.200.1'
2016-03-23 11:09:47 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 11:09:47 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.titles.Plugin._network_end
2016-03-23 11:09:47 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.titles.Plugin._vm_start
2016-03-23 11:09:47 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND
2016-03-23 11:09:47 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND --== VM CONFIGURATION
==--
2016-03-23 11:09:47 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND
2016-03-23 11:09:47 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.configurevm.Plugin._customization
2016-03-23 11:09:47 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query OVEHOSTED_VMENV_BOOT
2016-03-23 11:09:47 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Booting from cdrom on
RHEL7 is ISO image based only, as cdrom passthrough is disabled (BZ760885)
2016-03-23 11:09:47 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Please specify the
device to boot the VM from (choose disk for the oVirt engine appliance)
2016-03-23 11:09:47 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND (cdrom, disk, pxe)
[disk]:
2016-03-23 11:09:54 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 11:09:54 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/vmBoot=str:'disk'
2016-03-23 11:09:54 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 11:09:54 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_disk.Plugin._customization
2016-03-23 11:09:54 INFO
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_disk
boot_disk._detect_appliances:213 Detecting available oVirt engine appliances
2016-03-23 11:09:54 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_disk
boot_disk._detect_appliances:243 available appliances: []
2016-03-23 11:09:54 INFO
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_disk
boot_disk._detect_appliances:246 No engine appliance image is available on
your system.
2016-03-23 11:09:54 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Using an oVirt engine
appliance could greatly speed-up ovirt hosted-engine deploy.
2016-03-23 11:09:54 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND You could get oVirt
engine appliance installing ovirt-engine-appliance rpm.
2016-03-23 11:09:54 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query OVEHOSTED_VMENV_OVF
2016-03-23 11:09:54 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Please specify path to
OVF archive you would like to use [None]:
2016-03-23 11:12:00 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:RECEIVE
/root/Downloads/CentOS-7.0-amd64-gui.ova
2016-03-23 11:12:00 DEBUG otopi.context context._executeMethod:156 method
exception
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/otopi/context.py", line 146, in
_executeMethod
method['method']()
File
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/vm/boot_disk.py",
line 538, in _customization
valid = self._check_ovf(ova_path)
File
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/vm/boot_disk.py",
line 384, in _check_ovf
tar = tarfile.open(path, 'r:gz')
File "/usr/lib64/python2.7/tarfile.py", line 1678, in open
return func(name, filemode, fileobj, **kwargs)
File "/usr/lib64/python2.7/tarfile.py", line 1729, in gzopen
raise ReadError("not a gzip file")
ReadError: not a gzip file
2016-03-23 11:12:00 ERROR otopi.context context._executeMethod:165 Failed
to execute stage 'Environment customization': not a gzip file
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/error=bool:'True'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/exceptionInfo=list:'[(<class 'tarfile.ReadError'>, ReadError('not a
gzip file',), <traceback object at 0x282be60>)]'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 11:12:00 INFO otopi.context context.runSequence:427 Stage: Clean
up
2016-03-23 11:12:00 DEBUG otopi.context context.runSequence:431 STAGE
cleanup
2016-03-23 11:12:00 DEBUG otopi.context context._executeMethod:142 Stage
cleanup METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.remote_answerfile.Plugin._cleanup
2016-03-23 11:12:00 DEBUG otopi.context context._executeMethod:142 Stage
cleanup METHOD
otopi.plugins.ovirt_hosted_engine_setup.engine.add_host.Plugin._cleanup
2016-03-23 11:12:00 DEBUG otopi.context context._executeMethod:142 Stage
cleanup METHOD
otopi.plugins.ovirt_hosted_engine_setup.pki.vdsmpki.Plugin._cleanup
2016-03-23 11:09:40 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 11:09:40 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.firewall_manager.Plugin._customization
2016-03-23 11:09:40 DEBUG otopi.plugins.otopi.services.systemd
systemd.exists:88 check if service iptables exists
2016-03-23 11:09:40 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:828 execute: ('/bin/systemctl', 'show', '-p',
'LoadState', 'iptables.service'), executable='None', cwd='None', env=None
2016-03-23 11:09:40 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:878 execute-result: ('/bin/systemctl', 'show', '-p',
'LoadState', 'iptables.service'), rc=0
2016-03-23 11:09:40 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:936 execute-output: ('/bin/systemctl', 'show', '-p',
'LoadState', 'iptables.service') stdout:
LoadState=loaded
2016-03-23 11:09:40 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:941 execute-output: ('/bin/systemctl', 'show', '-p',
'LoadState', 'iptables.service') stderr:
2016-03-23 11:09:40 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query OHOSTED_NETWORK_FIREWALL_MANAGER
2016-03-23 11:09:40 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND iptables was detected on
your computer, do you wish setup to configure it? (Yes, No)[Yes]:
2016-03-23 11:09:45 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:RECEIVE No
2016-03-23 11:09:45 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 11:09:45 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/firewalldAvailable=bool:'False'
2016-03-23 11:09:45 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 11:09:45 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.firewall.Plugin._configuration
2016-03-23 11:09:45 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 11:09:45 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/firewalldServices=list:'[{'directory': 'base', 'name':
'hosted-console'}]'
2016-03-23 11:09:45 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 11:09:45 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.gateway.Plugin._customization
2016-03-23 11:09:45 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query OVEHOSTED_GATEWAY
2016-03-23 11:09:45 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Please indicate a
pingable gateway IP address [192.168.200.1]:
2016-03-23 11:09:47 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.network.gateway
plugin.executeRaw:828 execute: ('/bin/ping', '-c', '1', '192.168.200.1'),
executable='None', cwd='None', env=None
2016-03-23 11:09:47 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.network.gateway
plugin.executeRaw:878 execute-result: ('/bin/ping', '-c', '1',
'192.168.200.1'), rc=0
2016-03-23 11:09:47 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.network.gateway plugin.execute:936
execute-output: ('/bin/ping', '-c', '1', '192.168.200.1') stdout:
PING 192.168.200.1 (192.168.200.1) 56(84) bytes of data.
64 bytes from 192.168.200.1: icmp_seq=1 ttl=64 time=37.0 ms
--- 192.168.200.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 37.077/37.077/37.077/0.000 ms
2016-03-23 11:09:47 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.network.gateway plugin.execute:941
execute-output: ('/bin/ping', '-c', '1', '192.168.200.1') stderr:
2016-03-23 11:09:47 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 11:09:47 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/gateway=str:'192.168.200.1'
2016-03-23 11:09:47 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 11:09:47 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.titles.Plugin._network_end
2016-03-23 11:09:47 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.titles.Plugin._vm_start
2016-03-23 11:09:47 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND
2016-03-23 11:09:47 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND --== VM CONFIGURATION
==--
2016-03-23 11:09:47 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND
2016-03-23 11:09:47 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.configurevm.Plugin._customization
2016-03-23 11:09:47 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query OVEHOSTED_VMENV_BOOT
2016-03-23 11:09:47 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Booting from cdrom on
RHEL7 is ISO image based only, as cdrom passthrough is disabled (BZ760885)
2016-03-23 11:09:47 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Please specify the
device to boot the VM from (choose disk for the oVirt engine appliance)
2016-03-23 11:09:47 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND (cdrom, disk, pxe)
[disk]:
2016-03-23 11:09:54 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 11:09:54 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/vmBoot=str:'disk'
2016-03-23 11:09:54 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 11:09:54 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_disk.Plugin._customization
2016-03-23 11:09:54 INFO
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_disk
boot_disk._detect_appliances:213 Detecting available oVirt engine appliances
2016-03-23 11:09:54 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_disk
boot_disk._detect_appliances:243 available appliances: []
2016-03-23 11:09:54 INFO
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_disk
boot_disk._detect_appliances:246 No engine appliance image is available on
your system.
2016-03-23 11:09:54 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Using an oVirt engine
appliance could greatly speed-up ovirt hosted-engine deploy.
2016-03-23 11:09:54 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND You could get oVirt
engine appliance installing ovirt-engine-appliance rpm.
2016-03-23 11:09:54 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query OVEHOSTED_VMENV_OVF
2016-03-23 11:09:54 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Please specify path to
OVF archive you would like to use [None]:
2016-03-23 11:12:00 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:RECEIVE
/root/Downloads/CentOS-7.0-amd64-gui.ova
2016-03-23 11:12:00 DEBUG otopi.context context._executeMethod:156 method
exception
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/otopi/context.py", line 146, in
_executeMethod
method['method']()
File
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/vm/boot_disk.py",
line 538, in _customization
valid = self._check_ovf(ova_path)
File
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/vm/boot_disk.py",
line 384, in _check_ovf
tar = tarfile.open(path, 'r:gz')
File "/usr/lib64/python2.7/tarfile.py", line 1678, in open
return func(name, filemode, fileobj, **kwargs)
File "/usr/lib64/python2.7/tarfile.py", line 1729, in gzopen
raise ReadError("not a gzip file")
ReadError: not a gzip file
2016-03-23 11:12:00 ERROR otopi.context context._executeMethod:165 Failed
to execute stage 'Environment customization': not a gzip file
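
[Editor's aside: the traceback shows the setup opening the OVF archive with `tarfile.open(path, 'r:gz')`, so the file must be a gzip-compressed tar. A plain `.ova` that is an uncompressed tar (common for VirtualBox/VMware exports) fails with exactly this "not a gzip file" error. A minimal sketch to check an archive before feeding it to the setup; the function name is illustrative, not part of the setup code:

```python
import tarfile

def is_gzip_tar(path):
    """Return True if `path` is a gzip-compressed tar archive,
    i.e. something tarfile.open(path, 'r:gz') can read."""
    try:
        # Same mode string the setup's _check_ovf uses.
        with tarfile.open(path, "r:gz") as tar:
            tar.next()  # read the first member header to force a real parse
        return True
    except tarfile.ReadError:
        # Raised for uncompressed tars and non-tar files alike.
        return False
```

Running this against the downloaded `.ova` would tell you whether the archive needs to be recompressed (e.g. via `gzip`) before retrying the deploy.]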
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/error=bool:'True'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/exceptionInfo=list:'[(<class 'tarfile.ReadError'>, ReadError('not a
gzip file',), <traceback object at 0x282be60>)]'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 11:12:00 INFO otopi.context context.runSequence:427 Stage: Clean
up
2016-03-23 11:12:00 DEBUG otopi.context context.runSequence:431 STAGE
cleanup
2016-03-23 11:12:00 DEBUG otopi.context context._executeMethod:142 Stage
cleanup METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.remote_answerfile.Plugin._cleanup
2016-03-23 11:12:00 DEBUG otopi.context context._executeMethod:142 Stage
cleanup METHOD
otopi.plugins.ovirt_hosted_engine_setup.engine.add_host.Plugin._cleanup
2016-03-23 11:12:00 DEBUG otopi.context context._executeMethod:142 Stage
cleanup METHOD
otopi.plugins.ovirt_hosted_engine_setup.pki.vdsmpki.Plugin._cleanup
2016-03-23 11:12:00 DEBUG otopi.context context._executeMethod:142 Stage
cleanup METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._cleanup
2016-03-23 11:12:00 DEBUG otopi.context context._executeMethod:142 Stage
cleanup METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_disk.Plugin._cleanup
2016-03-23 11:12:00 DEBUG otopi.context context._executeMethod:142 Stage
cleanup METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.cloud_init.Plugin._cleanup
2016-03-23 11:12:00 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 11:12:00 DEBUG otopi.context context._executeMethod:142 Stage
cleanup METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.answerfile.Plugin._save_answers_at_cleanup
2016-03-23 11:12:00 INFO
otopi.plugins.ovirt_hosted_engine_setup.core.answerfile
answerfile._save_answers:74 Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20160323111200.conf'
2016-03-23 11:12:00 INFO otopi.context context.runSequence:427 Stage:
Pre-termination
2016-03-23 11:12:00 DEBUG otopi.context context.runSequence:431 STAGE
pre-terminate
2016-03-23 11:12:00 DEBUG otopi.context context._executeMethod:142 Stage
pre-terminate METHOD otopi.plugins.otopi.core.misc.Plugin._preTerminate
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/aborted=bool:'False'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/debug=int:'0'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/error=bool:'True'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/exceptionInfo=list:'[(<class 'tarfile.ReadError'>, ReadError('not a
gzip file',), <traceback object at 0x282be60>)]'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/executionDirectory=str:'/'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/exitCode=list:'[{'priority': 90001, 'code': 0}]'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/log=bool:'True'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/pluginGroups=str:'otopi:ovirt-hosted-engine-setup'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/pluginPath=str:'/usr/share/otopi/plugins:/usr/share/ovirt-hosted-engine-setup/scripts/../plugins'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/suppressEnvironmentKeys=list:'[]'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/chkconfig=str:'/sbin/chkconfig'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/chown=str:'/bin/chown'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/chronyc=str:'/bin/chronyc'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/date=str:'/bin/date'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/dig=str:'/bin/dig'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/file=str:'/bin/file'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/firewall-cmd=str:'/bin/firewall-cmd'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/genisoimage=str:'/bin/genisoimage'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/gluster=str:'/sbin/gluster'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/hwclock=str:'/sbin/hwclock'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/initctl=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/ip=str:'/sbin/ip'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/iscsiadm=str:'/sbin/iscsiadm'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/losetup=str:'/sbin/losetup'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/lsof=str:'/sbin/lsof'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/mkfs=str:'/sbin/mkfs'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/mount=str:'/bin/mount'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/ntpq=str:'/sbin/ntpq'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/openssl=str:'/bin/openssl'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/ping=str:'/bin/ping'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/qemu-img=str:'/bin/qemu-img'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/rc=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/rc-update=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/reboot=str:'/sbin/reboot'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/remote-viewer=str:'/bin/remote-viewer'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/service=str:'/sbin/service'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/sshd=str:'/sbin/sshd'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/sudo=str:'/bin/sudo'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/systemctl=str:'/bin/systemctl'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/truncate=str:'/bin/truncate'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/umount=str:'/bin/umount'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/vdsm-tool=str:'/bin/vdsm-tool'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/configFileAppend=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/configFileName=str:'/etc/otopi.conf'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/failOnPrioOverride=bool:'False'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/internalPackageTransaction=Transaction:'transaction'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logDir=str:'/var/log/ovirt-hosted-engine-setup'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logFileHandle=file:'<open file
'/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160323110803-m7svm5.log',
mode 'a' at 0x25d25d0>'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logFileName=str:'/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160323110803-m7svm5.log'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logFileNamePrefix=str:'ovirt-hosted-engine-setup'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logFilter=_MyLoggerFilter:'filter'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logFilterKeys=list:'['OVEHOSTED_FIRST_HOST/rootPassword',
'OVEHOSTED_ENGINE/adminPassword', 'OVEHOSTED_VM/cloudinitRootPwd',
'OVEHOSTED_VDSM/passwd']'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logRemoveAtExit=bool:'False'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/mainTransaction=Transaction:'transaction'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/modifiedFiles=list:'[]'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/randomizeEvents=bool:'False'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
DIALOG/boundary=str:'--=451b80dc-996f-432e-9e4f-2b29ef6d1141=--'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
DIALOG/cliVersion=int:'1'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
DIALOG/customization=bool:'False'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
DIALOG/dialect=str:'human'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
INFO/PACKAGE_NAME=str:'otopi'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
INFO/PACKAGE_VERSION=str:'1.4.1'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/firewalldAvailable=bool:'False'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/firewalldDisableServices=list:'[]'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/firewalldEnable=bool:'False'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/iptablesEnable=bool:'False'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/iptablesRules=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/sshEnable=bool:'False'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/sshKey=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/sshUser=str:''
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/additionalHostEnabled=bool:'False'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/additionalHostReDeployment=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/checkRequirements=bool:'True'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/confirmSettings=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/deployProceed=bool:'True'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/etcAnswerFile=str:'/etc/ovirt-hosted-engine/answers.conf'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/isAdditionalHost=bool:'False'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/nodeSetup=bool:'False'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/screenProceed=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/tempDir=str:'/var/tmp'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/userAnswerFile=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_ENGINE/adminPassword=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_ENGINE/appHostName=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_ENGINE/clusterName=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_ENGINE/engineSetupTimeout=int:'600'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_ENGINE/forceCreateVG=bool:'True'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_ENGINE/insecureSSL=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_ENGINE/promptNonOperational=bool:'True'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_ENGINE/temporaryCertificate=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_FIRST_HOST/fetchAnswer=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_FIRST_HOST/fqdn=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_FIRST_HOST/rootPassword=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_FIRST_HOST/sshdPort=int:'22'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/bridgeIf=str:'enp3s0'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/bridgeName=str:'ovirtmgmt'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/firewallManager=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/firewalldServices=list:'[{'directory': 'base', 'name':
'hosted-console'}]'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/firewalldSubst=dict:'{}'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/fqdn=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/fqdnReverseValidation=bool:'False'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/gateway=str:'192.168.200.1'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/promptRequiredNetworks=bool:'True'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/sshdPort=int:'22'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NOTIF/destEmail=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NOTIF/smtpPort=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NOTIF/smtpServer=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NOTIF/sourceEmail=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_SANLOCK/lockspaceName=str:'hosted-engine'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_SANLOCK/serviceName=str:'sanlock'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/GUID=str:'3600508b10018433953524235374f0007'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/LunID=str:'3600508b10018433953524235374f0007'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/blockDeviceSizeGB=int:'203'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/brokerConfContent=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/confImageSizeGB=int:'1'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/confImageUUID=str:'d6d4f89a-bab0-40a3-b88d-4041c9720f3a'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/confVolUUID=str:'57714f10-01b5-40d9-8f57-245d5f7d8970'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/connectionUUID=str:'1d758c74-2288-4e93-b7bb-22873dd46d00'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/domainType=str:'fc'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/fakeMasterSdConnUUID=str:'5a4d6f5d-f5cd-4654-8b8c-03132dfbf626'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/fakeMasterSdUUID=str:'b5227523-d51a-4638-9ece-b3223e8cb5ce'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/glusterBrick=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/glusterProvisionedShareName=str:'hosted_engine_glusterfs'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/glusterProvisioningEnabled=bool:'False'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/hostID=int:'1'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/iSCSIPortal=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/iSCSIPortalIPAddress=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/iSCSIPortalPassword=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/iSCSIPortalPort=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/iSCSIPortalUser=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/iSCSITargetName=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/imgDesc=str:'Hosted Engine Image'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/imgSizeGB=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/imgUUID=str:'bd7e4b83-d1de-4630-81cf-8fd127e92b3f'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/lockspaceImageUUID=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/lockspaceVolumeUUID=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/metadataImageUUID=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/metadataVolumeUUID=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/sdUUID=str:'3f95d342-7d01-4fb6-b0f9-034582de4d2a'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/spUUID=str:'205377ee-d9eb-4538-8e16-2bd69ac002d4'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/storageAnswerFileContent=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/storageDatacenterName=str:'hosted_datacenter'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/storageDomainConnection=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/storageDomainName=str:'hosted_storage'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/storageHEConfContent=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/storageType=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/vgUUID=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/vmConfContent=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/volUUID=str:'03278886-10fe-470f-bbfa-859fcf659474'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/caSubject=str:'/C=EN/L=Test/O=Test/CN=TestCA'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/consoleType=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/cpu=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/engineCpu=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/glusterMinimumVersion=str:'3.7.2'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/kvmGid=int:'36'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/passwd=str:'**FILTERED**'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/passwdValiditySecs=str:'10800'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/pkiSubject=str:'/C=EN/L=Test/O=Test/CN=Test'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/serviceName=str:'vdsmd'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/spicePkiSubject=unicode:'C=EN, L=Test, O=Test, CN=Test'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/useSSL=bool:'True'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/vdscli=instance:'<ServerProxy for 0.0.0.0:54321/RPC2>'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/vdsmUid=int:'36'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/applianceMem=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/applianceVCpus=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/automateVMShutdown=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/cdromUUID=str:'67be7b12-d27b-4f86-a8da-58cfcbf0c6e9'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/cloudInitISO=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/cloudinitExecuteEngineSetup=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/cloudinitInstanceDomainName=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/cloudinitInstanceHostName=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/cloudinitRootPwd=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/cloudinitVMDNS=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/cloudinitVMETCHOSTS=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/cloudinitVMStaticCIDR=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/consoleUUID=str:'c9a45713-bbda-4b7e-bc05-4353fcc85fa7'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/emulatedMachine=str:'pc'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/nicUUID=str:'5e7f5404-ea84-43ba-ba36-f77e883cc851'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/ovfArchive=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/subst=dict:'{}'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/vmBoot=str:'disk'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/vmCDRom=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/vmMACAddr=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/vmMemSizeMB=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/vmUUID=str:'8d31eb2f-2b91-41be-85a4-66406f583218'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/vmVCpus=NoneType:'None'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVESETUP_CORE/offlinePackager=bool:'True'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/dnfDisabledPlugins=list:'[]'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/dnfExpireCache=bool:'True'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/dnfRollback=bool:'True'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/dnfpackagerEnabled=bool:'True'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/keepAliveInterval=int:'30'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/yumDisabledPlugins=list:'[]'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/yumEnabledPlugins=list:'[]'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/yumExpireCache=bool:'True'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/yumRollback=bool:'True'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/yumpackagerEnabled=bool:'False'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/clockMaxGap=int:'5'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/clockSet=bool:'False'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/commandPath=str:'/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/reboot=bool:'False'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/rebootAllow=bool:'True'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/rebootDeferTime=int:'10'
2016-03-23 11:12:00 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 11:12:01 DEBUG otopi.context context._executeMethod:142 Stage
pre-terminate METHOD otopi.plugins.otopi.dialog.cli.Plugin._pre_terminate
2016-03-23 11:12:01 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 11:12:01 INFO otopi.context context.runSequence:427 Stage:
Termination
2016-03-23 11:12:01 DEBUG otopi.context context.runSequence:431 STAGE
terminate
2016-03-23 11:12:01 DEBUG otopi.context context._executeMethod:142 Stage
terminate METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.misc.Plugin._terminate
2016-03-23 11:12:01 ERROR otopi.plugins.ovirt_hosted_engine_setup.core.misc
misc._terminate:170 Hosted Engine deployment failed: this system is not
reliable, please check the issue, fix and redeploy
2016-03-23 11:12:01 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Log file is located at
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160323110803-m7svm5.log
2016-03-23 11:12:01 DEBUG otopi.context context._executeMethod:142 Stage
terminate METHOD otopi.plugins.otopi.dialog.human.Plugin._terminate
2016-03-23 11:12:01 DEBUG otopi.context context._executeMethod:142 Stage
terminate METHOD otopi.plugins.otopi.dialog.machine.Plugin._terminate
2016-03-23 11:12:01 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 11:12:01 DEBUG otopi.context context._executeMethod:142 Stage
terminate METHOD otopi.plugins.otopi.core.log.Plugin._terminate
[root@dul-ovrtst01 /]# ls -lrt
total 36
drwxr-xr-x. 2 root root 6 Aug 12 2015 srv
drwxr-xr-x. 2 root root 6 Aug 12 2015 media
lrwxrwxrwx. 1 root root 7 Mar 21 15:49 bin -> usr/bin
lrwxrwxrwx. 1 root root 8 Mar 21 15:49 sbin -> usr/sbin
lrwxrwxrwx. 1 root root 9 Mar 21 15:49 lib64 -> usr/lib64
lrwxrwxrwx. 1 root root 7 Mar 21 15:49 lib -> usr/lib
drwxr-xr-x. 14 root root 4096 Mar 22 11:00 usr
drwxr-xr-x. 4 root root 24 Mar 22 11:00 opt
drwxr-xr-x. 3 root root 18 Mar 22 12:11 mnt
drwxr-xr-x. 2 root root 6 Mar 22 14:00 hpnas01
drwxr-xr-x. 2 root root 6 Mar 22 14:01 hpnas02
drwxr-xr-x. 4 root root 29 Mar 22 14:18 home
dr-xr-x---. 15 root root 4096 Mar 22 15:17 root
dr-xr-xr-x. 5 root root 4096 Mar 22 15:36 boot
drwxr-xr-x. 3 root root 24 Mar 22 17:46 rhev
dr-xr-xr-x. 234 root root 0 Mar 22 17:54 proc
drwxr-xr-x. 22 root root 4096 Mar 22 17:55 var
dr-xr-xr-x. 13 root root 0 Mar 22 17:55 sys
drwxrwxrwt. 19 root root 4096 Mar 23 12:18 tmp
drwxr-xr-x. 148 root root 12288 Mar 23 12:20 etc
drwxr-xr-x. 21 root root 3680 Mar 23 12:21 dev
drwxr-xr-x. 43 root root 1280 Mar 23 12:30 run
[root@dul-ovrtst01 /]# yum upgrade all
Loaded plugins: fastestmirror, langpacks, versionlock
Loading mirror speeds from cached hostfile
* base: chicago.gaminghost.co
* elrepo: reflector.westga.edu
* extras: ftpmirror.your.org
* ovirt-3.5: resources.ovirt.org
* ovirt-3.5-epel: mirror.cogentco.com
* rpmforge: mirror.teklinks.com
* updates: centos.vwtonline.net
No Match for argument: all
No package all available.
No packages marked for update
[root@dul-ovrtst01 /]# su -c yum update
su: user update does not exist
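(Editor's note: both failed invocations above are argument-parsing mistakes rather than repository problems. `yum upgrade` takes package names, so `all` is treated as a package and not found; plain `yum upgrade` or `yum update` upgrades everything. And `su -c` expects a single quoted command string; left unquoted, the next word (`update`) is parsed as the username to switch to. A small illustration of the quoting rule, using `sh -c` since it parses its argument the same way:)

```shell
# sh -c, like su -c, takes exactly one command-string argument.
# Unquoted trailing words become positional parameters, not part of
# the command -- which is why `su -c yum update` tried to switch to
# a user literally named "update".
sh -c 'echo yum update'   # the whole quoted string is one argument
```

The same pattern fixed for su would be `su -c 'yum update'`.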
[root@dul-ovrtst01 /]# yum update
Loaded plugins: fastestmirror, langpacks, versionlock
Loading mirror speeds from cached hostfile
* base: chicago.gaminghost.co
* elrepo: reflector.westga.edu
* extras: ftpmirror.your.org
* ovirt-3.5: resources.ovirt.org
* ovirt-3.5-epel: mirror.cogentco.com
* rpmforge: mirror.teklinks.com
* updates: centos.vwtonline.net
Resolving Dependencies
--> Running transaction check
---> Package bind-libs.x86_64 32:9.9.4-29.el7_2.2 will be updated
---> Package bind-libs.x86_64 32:9.9.4-29.el7_2.3 will be an update
---> Package bind-libs-lite.x86_64 32:9.9.4-29.el7_2.2 will be updated
---> Package bind-libs-lite.x86_64 32:9.9.4-29.el7_2.3 will be an update
---> Package bind-license.noarch 32:9.9.4-29.el7_2.2 will be updated
---> Package bind-license.noarch 32:9.9.4-29.el7_2.3 will be an update
---> Package bind-utils.x86_64 32:9.9.4-29.el7_2.2 will be updated
---> Package bind-utils.x86_64 32:9.9.4-29.el7_2.3 will be an update
---> Package epel-release.noarch 0:6-8 will be updated
---> Package epel-release.noarch 0:7-5 will be an update
---> Package firefox.x86_64 0:38.6.1-1.el7.centos will be updated
---> Package firefox.x86_64 0:38.7.0-1.el7.centos will be an update
---> Package libcacard.x86_64 10:1.5.3-105.el7_2.3 will be obsoleted
---> Package libcacard-ev.x86_64 10:2.3.0-31.el7_2.7.1 will be obsoleting
---> Package libsmbclient.x86_64 0:4.2.3-11.el7_2 will be updated
---> Package libsmbclient.x86_64 0:4.2.3-12.el7_2 will be an update
---> Package libssh2.x86_64 0:1.4.3-10.el7 will be updated
---> Package libssh2.x86_64 0:1.4.3-10.el7_2.1 will be an update
---> Package libwbclient.x86_64 0:4.2.3-11.el7_2 will be updated
---> Package libwbclient.x86_64 0:4.2.3-12.el7_2 will be an update
---> Package nss-util.x86_64 0:3.19.1-4.el7_1 will be updated
---> Package nss-util.x86_64 0:3.19.1-9.el7_2 will be an update
---> Package openssh.x86_64 0:6.6.1p1-23.el7_2 will be updated
---> Package openssh.x86_64 0:6.6.1p1-25.el7_2 will be an update
---> Package openssh-clients.x86_64 0:6.6.1p1-23.el7_2 will be updated
---> Package openssh-clients.x86_64 0:6.6.1p1-25.el7_2 will be an update
---> Package openssh-server.x86_64 0:6.6.1p1-23.el7_2 will be updated
---> Package openssh-server.x86_64 0:6.6.1p1-25.el7_2 will be an update
---> Package openssl.x86_64 1:1.0.1e-51.el7_2.2 will be updated
---> Package openssl.x86_64 1:1.0.1e-51.el7_2.4 will be an update
---> Package openssl-libs.x86_64 1:1.0.1e-51.el7_2.2 will be updated
---> Package openssl-libs.x86_64 1:1.0.1e-51.el7_2.4 will be an update
---> Package openssl098e.x86_64 0:0.9.8e-29.el7.centos.2 will be updated
---> Package openssl098e.x86_64 0:0.9.8e-29.el7.centos.3 will be an update
---> Package rpmforge-release.x86_64 0:0.5.2-2.el6.rf will be updated
---> Package rpmforge-release.x86_64 0:0.5.3-1.el6.rf will be an update
---> Package samba.x86_64 0:4.2.3-11.el7_2 will be updated
---> Package samba.x86_64 0:4.2.3-12.el7_2 will be an update
---> Package samba-client.x86_64 0:4.2.3-11.el7_2 will be updated
---> Package samba-client.x86_64 0:4.2.3-12.el7_2 will be an update
---> Package samba-client-libs.x86_64 0:4.2.3-11.el7_2 will be updated
---> Package samba-client-libs.x86_64 0:4.2.3-12.el7_2 will be an update
---> Package samba-common.noarch 0:4.2.3-11.el7_2 will be updated
---> Package samba-common.noarch 0:4.2.3-12.el7_2 will be an update
---> Package samba-common-libs.x86_64 0:4.2.3-11.el7_2 will be updated
---> Package samba-common-libs.x86_64 0:4.2.3-12.el7_2 will be an update
---> Package samba-common-tools.x86_64 0:4.2.3-11.el7_2 will be updated
---> Package samba-common-tools.x86_64 0:4.2.3-12.el7_2 will be an update
---> Package samba-libs.x86_64 0:4.2.3-11.el7_2 will be updated
---> Package samba-libs.x86_64 0:4.2.3-12.el7_2 will be an update
---> Package tzdata.noarch 0:2016a-1.el7 will be updated
---> Package tzdata.noarch 0:2016b-1.el7 will be an update
---> Package tzdata-java.noarch 0:2016a-1.el7 will be updated
---> Package tzdata-java.noarch 0:2016b-1.el7 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================
 Package             Arch     Version                  Repository        Size
================================================================================
Installing:
 libcacard-ev        x86_64   10:2.3.0-31.el7_2.7.1    centos-qemu-ev   231 k
     replacing  libcacard.x86_64 10:1.5.3-105.el7_2.3
Updating:
 bind-libs           x86_64   32:9.9.4-29.el7_2.3      updates          1.0 M
 bind-libs-lite      x86_64   32:9.9.4-29.el7_2.3      updates          724 k
 bind-license        noarch   32:9.9.4-29.el7_2.3      updates           82 k
 bind-utils          x86_64   32:9.9.4-29.el7_2.3      updates          200 k
 epel-release        noarch   7-5                      extras            14 k
 firefox             x86_64   38.7.0-1.el7.centos      updates           72 M
 libsmbclient        x86_64   4.2.3-12.el7_2           updates          118 k
 libssh2             x86_64   1.4.3-10.el7_2.1         updates          134 k
 libwbclient         x86_64   4.2.3-12.el7_2           updates           95 k
 nss-util            x86_64   3.19.1-9.el7_2           updates           71 k
 openssh             x86_64   6.6.1p1-25.el7_2         updates          435 k
 openssh-clients     x86_64   6.6.1p1-25.el7_2         updates          639 k
 openssh-server      x86_64   6.6.1p1-25.el7_2         updates          436 k
 openssl             x86_64   1:1.0.1e-51.el7_2.4      updates          711 k
 openssl-libs        x86_64   1:1.0.1e-51.el7_2.4      updates          951 k
 openssl098e         x86_64   0.9.8e-29.el7.centos.3   updates          793 k
 rpmforge-release    x86_64   0.5.3-1.el6.rf           rpmforge          12 k
 samba               x86_64   4.2.3-12.el7_2           updates          602 k
 samba-client        x86_64   4.2.3-12.el7_2           updates          496 k
 samba-client-libs   x86_64   4.2.3-12.el7_2           updates          4.3 M
 samba-common        noarch   4.2.3-12.el7_2           updates          269 k
 samba-common-libs   x86_64   4.2.3-12.el7_2           updates          156 k
 samba-common-tools  x86_64   4.2.3-12.el7_2           updates          443 k
 samba-libs          x86_64   4.2.3-12.el7_2           updates          259 k
 tzdata              noarch   2016b-1.el7              updates          433 k
 tzdata-java         noarch   2016b-1.el7              updates          178 k
Transaction Summary
================================================================================
Install 1 Package
Upgrade 26 Packages
Total download size: 85 M
Is this ok [y/d/N]: y
Downloading packages:
No Presto metadata available for rpmforge
Not downloading deltainfo for extras, MD is 25 k and rpms are 14 k
updates/7/x86_64/prestodelta                                    | 277 kB  00:00:00
Delta RPMs reduced 998 k of updates to 139 k (86% saved)
(1/27): libssh2-1.4.3-10.el7_1.4.3-10.el7_2.1.x86_64.drpm       |  20 kB  00:00:00
(2/27): bind-license-9.9.4-29.el7_2.3.noarch.rpm                |  82 kB  00:00:00
(3/27): nss-util-3.19.1-4.el7_1_3.19.1-9.el7_2.x86_64.drpm      |  21 kB  00:00:00
(4/27): bind-utils-9.9.4-29.el7_2.3.x86_64.rpm                  | 200 kB  00:00:00
(5/27): openssl098e-0.9.8e-29.el7.centos.2_0.9.8e-29.el7.centos.3.x86_64.drpm |  99 kB  00:00:00
(6/27): epel-release-7-5.noarch.rpm                             |  14 kB  00:00:00
(7/27): bind-libs-lite-9.9.4-29.el7_2.3.x86_64.rpm              | 724 kB  00:00:00
(8/27): libsmbclient-4.2.3-12.el7_2.x86_64.rpm                  | 118 kB  00:00:00
(9/27): libcacard-ev-2.3.0-31.el7_2.7.1.x86_64.rpm              | 231 kB  00:00:00
(10/27): libwbclient-4.2.3-12.el7_2.x86_64.rpm                  |  95 kB  00:00:00
(11/27): openssh-6.6.1p1-25.el7_2.x86_64.rpm                    | 435 kB  00:00:00
(12/27): openssh-server-6.6.1p1-25.el7_2.x86_64.rpm             | 436 kB  00:00:00
(13/27): openssh-clients-6.6.1p1-25.el7_2.x86_64.rpm            | 639 kB  00:00:01
(14/27): rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm             |  12 kB  00:00:00
(15/27): samba-4.2.3-12.el7_2.x86_64.rpm                        | 602 kB  00:00:01
(16/27): openssl-1.0.1e-51.el7_2.4.x86_64.rpm                   | 711 kB  00:00:02
(17/27): openssl-libs-1.0.1e-51.el7_2.4.x86_64.rpm              | 951 kB  00:00:02
(18/27): samba-common-4.2.3-12.el7_2.noarch.rpm                 | 269 kB  00:00:00
(19/27): samba-common-libs-4.2.3-12.el7_2.x86_64.rpm            | 156 kB  00:00:00
(20/27): samba-client-4.2.3-12.el7_2.x86_64.rpm                 | 496 kB  00:00:02
(21/27): samba-libs-4.2.3-12.el7_2.x86_64.rpm                   | 259 kB  00:00:01
(22/27): samba-common-tools-4.2.3-12.el7_2.x86_64.rpm           | 443 kB  00:00:02
(23/27): tzdata-java-2016b-1.el7.noarch.rpm                     | 178 kB  00:00:00
(24/27): bind-libs-9.9.4-29.el7_2.3.x86_64.rpm                  | 1.0 MB  00:00:09
(25/27): tzdata-2016b-1.el7.noarch.rpm                          | 433 kB  00:00:01
(26/27): samba-client-libs-4.2.3-12.el7_2.x86_64.rpm            | 4.3 MB  00:00:13
(27/27): firefox-38.7.0-1.el7.centos.x86_64.rpm                 |  72 MB  00:00:39
--------------------------------------------------------------------------------
Total                                                  2.1 MB/s |  84 MB  00:00:40
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : 1:openssl-libs-1.0.1e-51.el7_2.4.x86_64                    1/54
  Updating   : openssh-6.6.1p1-25.el7_2.x86_64                            2/54
  Updating   : samba-libs-4.2.3-12.el7_2.x86_64                           3/54
  Updating   : samba-common-tools-4.2.3-12.el7_2.x86_64                   4/54
  Updating   : samba-common-4.2.3-12.el7_2.noarch                         5/54
  Updating   : libwbclient-4.2.3-12.el7_2.x86_64                          6/54
  Updating   : samba-client-libs-4.2.3-12.el7_2.x86_64                    7/54
  Updating   : samba-common-libs-4.2.3-12.el7_2.x86_64                    8/54
  Updating   : nss-util-3.19.1-9.el7_2.x86_64                             9/54
  Updating   : 32:bind-license-9.9.4-29.el7_2.3.noarch                   10/54
  Updating   : 32:bind-libs-9.9.4-29.el7_2.3.x86_64                      11/54
  Updating   : libsmbclient-4.2.3-12.el7_2.x86_64                        12/54
  Updating   : samba-client-4.2.3-12.el7_2.x86_64                        13/54
  Updating   : 32:bind-utils-9.9.4-29.el7_2.3.x86_64                     14/54
  Updating   : 32:bind-libs-lite-9.9.4-29.el7_2.3.x86_64                 15/54
  Installing : 10:libcacard-ev-2.3.0-31.el7_2.7.1.x86_64                 16/54
  Updating   : firefox-38.7.0-1.el7.centos.x86_64                        17/54
  Updating   : samba-4.2.3-12.el7_2.x86_64                               18/54
  Updating   : openssh-server-6.6.1p1-25.el7_2.x86_64                    19/54
  Updating   : openssh-clients-6.6.1p1-25.el7_2.x86_64                   20/54
  Updating   : libssh2-1.4.3-10.el7_2.1.x86_64                           21/54
  Updating   : 1:openssl-1.0.1e-51.el7_2.4.x86_64                        22/54
  Updating   : epel-release-7-5.noarch                                   23/54
  Updating   : tzdata-java-2016b-1.el7.noarch                            24/54
  Updating   : rpmforge-release-0.5.3-1.el6.rf.x86_64                    25/54
  Updating   : tzdata-2016b-1.el7.noarch                                 26/54
  Updating   : openssl098e-0.9.8e-29.el7.centos.3.x86_64                 27/54
  Cleanup    : samba-4.2.3-11.el7_2.x86_64                               28/54
  Cleanup    : samba-client-4.2.3-11.el7_2.x86_64                        29/54
  Cleanup    : 32:bind-utils-9.9.4-29.el7_2.2.x86_64                     30/54
  Cleanup    : libsmbclient-4.2.3-11.el7_2.x86_64                        31/54
  Cleanup    : 32:bind-libs-9.9.4-29.el7_2.2.x86_64                      32/54
  Cleanup    : openssh-clients-6.6.1p1-23.el7_2.x86_64                   33/54
  Cleanup    : 32:bind-libs-lite-9.9.4-29.el7_2.2.x86_64                 34/54
  Cleanup    : openssh-server-6.6.1p1-23.el7_2.x86_64                    35/54
  Cleanup    : samba-common-libs-4.2.3-11.el7_2.x86_64                   36/54
  Cleanup    : samba-common-tools-4.2.3-11.el7_2.x86_64                  37/54
  Cleanup    : samba-libs-4.2.3-11.el7_2.x86_64                          38/54
  Cleanup    : samba-common-4.2.3-11.el7_2.noarch                        39/54
  Cleanup    : samba-client-libs-4.2.3-11.el7_2.x86_64                   40/54
  Cleanup    : libwbclient-4.2.3-11.el7_2.x86_64                         41/54
  Cleanup    : firefox-38.6.1-1.el7.centos.x86_64                        42/54
  Cleanup    : openssh-6.6.1p1-23.el7_2.x86_64                           43/54
  Cleanup    : 1:openssl-1.0.1e-51.el7_2.2.x86_64                        44/54
  Erasing    : 10:libcacard-1.5.3-105.el7_2.3.x86_64                     45/54
  Cleanup    : libssh2-1.4.3-10.el7.x86_64                               46/54
  Cleanup    : 32:bind-license-9.9.4-29.el7_2.2.noarch                   47/54
  Cleanup    : epel-release-6-8.noarch                                   48/54
  Cleanup    : tzdata-java-2016a-1.el7.noarch                            49/54
  Cleanup    : rpmforge-release-0.5.2-2.el6.rf.x86_64                    50/54
  Cleanup    : tzdata-2016a-1.el7.noarch                                 51/54
  Cleanup    : 1:openssl-libs-1.0.1e-51.el7_2.2.x86_64                   52/54
  Cleanup    : nss-util-3.19.1-4.el7_1.x86_64                            53/54
  Cleanup    : openssl098e-0.9.8e-29.el7.centos.2.x86_64                 54/54
  Verifying  : openssl098e-0.9.8e-29.el7.centos.3.x86_64                  1/54
  Verifying  : libssh2-1.4.3-10.el7_2.1.x86_64                            2/54
  Verifying  : openssh-server-6.6.1p1-25.el7_2.x86_64                     3/54
  Verifying  : 10:libcacard-ev-2.3.0-31.el7_2.7.1.x86_64                  4/54
  Verifying  : 1:openssl-1.0.1e-51.el7_2.4.x86_64                         5/54
  Verifying  : firefox-38.7.0-1.el7.centos.x86_64                         6/54
  Verifying  : tzdata-2016b-1.el7.noarch                                  7/54
  Verifying  : 32:bind-license-9.9.4-29.el7_2.3.noarch                    8/54
  Verifying  : samba-4.2.3-12.el7_2.x86_64                                9/54
  Verifying  : 32:bind-utils-9.9.4-29.el7_2.3.x86_64                     10/54
  Verifying  : samba-common-tools-4.2.3-12.el7_2.x86_64                  11/54
  Verifying  : samba-client-4.2.3-12.el7_2.x86_64                        12/54
  Verifying  : libsmbclient-4.2.3-12.el7_2.x86_64                        13/54
  Verifying  : samba-common-4.2.3-12.el7_2.noarch                        14/54
  Verifying  : openssh-6.6.1p1-25.el7_2.x86_64                           15/54
  Verifying  : libwbclient-4.2.3-12.el7_2.x86_64                         16/54
  Verifying  : 32:bind-libs-9.9.4-29.el7_2.3.x86_64                      17/54
  Verifying  : nss-util-3.19.1-9.el7_2.x86_64                            18/54
  Verifying  : 32:bind-libs-lite-9.9.4-29.el7_2.3.x86_64                 19/54
  Verifying  : 1:openssl-libs-1.0.1e-51.el7_2.4.x86_64                   20/54
  Verifying  : samba-common-libs-4.2.3-12.el7_2.x86_64                   21/54
  Verifying  : rpmforge-release-0.5.3-1.el6.rf.x86_64                    22/54
  Verifying  : samba-client-libs-4.2.3-12.el7_2.x86_64                   23/54
  Verifying  : openssh-clients-6.6.1p1-25.el7_2.x86_64                   24/54
  Verifying  : tzdata-java-2016b-1.el7.noarch                            25/54
  Verifying  : epel-release-7-5.noarch                                   26/54
  Verifying  : samba-libs-4.2.3-12.el7_2.x86_64                          27/54
  Verifying  : samba-4.2.3-11.el7_2.x86_64                               28/54
  Verifying  : samba-client-4.2.3-11.el7_2.x86_64                        29/54
  Verifying  : epel-release-6-8.noarch                                   30/54
  Verifying  : samba-common-4.2.3-11.el7_2.noarch                        31/54
  Verifying  : rpmforge-release-0.5.2-2.el6.rf.x86_64                    32/54
  Verifying  : 32:bind-libs-9.9.4-29.el7_2.2.x86_64                      33/54
  Verifying  : 32:bind-libs-lite-9.9.4-29.el7_2.2.x86_64                 34/54
  Verifying  : libssh2-1.4.3-10.el7.x86_64                               35/54
  Verifying  : 1:openssl-1.0.1e-51.el7_2.2.x86_64                        36/54
  Verifying  : samba-client-libs-4.2.3-11.el7_2.x86_64                   37/54
  Verifying  : tzdata-2016a-1.el7.noarch                                 38/54
  Verifying  : openssh-clients-6.6.1p1-23.el7_2.x86_64                   39/54
  Verifying  : openssl098e-0.9.8e-29.el7.centos.2.x86_64                 40/54
  Verifying  : openssh-server-6.6.1p1-23.el7_2.x86_64                    41/54
  Verifying  : libsmbclient-4.2.3-11.el7_2.x86_64                        42/54
  Verifying  : 1:openssl-libs-1.0.1e-51.el7_2.2.x86_64                   43/54
  Verifying  : 32:bind-utils-9.9.4-29.el7_2.2.x86_64                     44/54
  Verifying  : openssh-6.6.1p1-23.el7_2.x86_64                           45/54
  Verifying  : firefox-38.6.1-1.el7.centos.x86_64                        46/54
  Verifying  : samba-common-tools-4.2.3-11.el7_2.x86_64                  47/54
  Verifying  : 32:bind-license-9.9.4-29.el7_2.2.noarch                   48/54
  Verifying  : 10:libcacard-1.5.3-105.el7_2.3.x86_64                     49/54
  Verifying  : libwbclient-4.2.3-11.el7_2.x86_64                         50/54
  Verifying  : samba-libs-4.2.3-11.el7_2.x86_64                          51/54
  Verifying  : tzdata-java-2016a-1.el7.noarch                            52/54
  Verifying  : samba-common-libs-4.2.3-11.el7_2.x86_64                   53/54
  Verifying  : nss-util-3.19.1-4.el7_1.x86_64                            54/54
Installed:
libcacard-ev.x86_64 10:2.3.0-31.el7_2.7.1
Updated:
  bind-libs.x86_64 32:9.9.4-29.el7_2.3
  bind-libs-lite.x86_64 32:9.9.4-29.el7_2.3
  bind-license.noarch 32:9.9.4-29.el7_2.3
  bind-utils.x86_64 32:9.9.4-29.el7_2.3
  epel-release.noarch 0:7-5
  firefox.x86_64 0:38.7.0-1.el7.centos
  libsmbclient.x86_64 0:4.2.3-12.el7_2
  libssh2.x86_64 0:1.4.3-10.el7_2.1
  libwbclient.x86_64 0:4.2.3-12.el7_2
  nss-util.x86_64 0:3.19.1-9.el7_2
  openssh.x86_64 0:6.6.1p1-25.el7_2
  openssh-clients.x86_64 0:6.6.1p1-25.el7_2
  openssh-server.x86_64 0:6.6.1p1-25.el7_2
  openssl.x86_64 1:1.0.1e-51.el7_2.4
  openssl-libs.x86_64 1:1.0.1e-51.el7_2.4
  openssl098e.x86_64 0:0.9.8e-29.el7.centos.3
  rpmforge-release.x86_64 0:0.5.3-1.el6.rf
  samba.x86_64 0:4.2.3-12.el7_2
  samba-client.x86_64 0:4.2.3-12.el7_2
  samba-client-libs.x86_64 0:4.2.3-12.el7_2
  samba-common.noarch 0:4.2.3-12.el7_2
  samba-common-libs.x86_64 0:4.2.3-12.el7_2
  samba-common-tools.x86_64 0:4.2.3-12.el7_2
  samba-libs.x86_64 0:4.2.3-12.el7_2
  tzdata.noarch 0:2016b-1.el7
  tzdata-java.noarch 0:2016b-1.el7
Replaced:
libcacard.x86_64 10:1.5.3-105.el7_2.3
Complete!
[root@dul-ovrtst01 /]# hosted-engine --deploy
[ INFO ] Stage: Initializing
[ INFO ] Generating a temporary VNC password.
[ INFO ] Stage: Environment setup
Continuing will configure this host for serving as hypervisor and
create a VM where you have to install the engine afterwards.
Are you sure you want to continue? (Yes, No)[Yes]: Yes
Configuration files: []
Log file:
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160323130341-plx8ao.log
Version: otopi-1.4.1 (otopi-1.4.1-1.el7.centos)
[ INFO ] Hardware supports virtualization
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ INFO ] Stage: Environment customization
--== STORAGE CONFIGURATION ==--
During customization use CTRL-D to abort.
Please specify the storage you would like to use (glusterfs,
iscsi, fc, nfs3, nfs4)[nfs3]: fc
The following luns have been found on the requested target:
          [1]  3600508b10018433953524235374f0007   203GiB  COMPAQ  MSA1000 VOLUME
               status: used, paths: 1 active
          [2]  3600143800006b070a577938b0f0b000e  1396GiB  COMPAQ  MSA1000 VOLUME
               status: used, paths: 1 active
          [3]  3600508b10010443953555538485a001a  1862GiB  HP      LOGICAL VOLUME
               status: used, paths: 1 active
Please select the destination LUN (1, 2, 3) [1]:
The selected device is already used.
To create a vg on this device, you must use Force.
WARNING: This will destroy existing data on the device.
(Force, Abort)[Abort]? Force
[ INFO ] Installing on first host
--== SYSTEM CONFIGURATION ==--
--== NETWORK CONFIGURATION ==--
Please indicate a nic to set ovirtmgmt bridge on: (enp3s0,
enp5s0) [enp3s0]:
iptables was detected on your computer, do you wish setup to
configure it? (Yes, No)[Yes]: Yes
Please indicate a pingable gateway IP address [192.168.200.1]:
--== VM CONFIGURATION ==--
Booting from cdrom on RHEL7 is ISO image based only, as cdrom
passthrough is disabled (BZ760885)
Please specify the device to boot the VM from (choose disk for
the oVirt engine appliance)
(cdrom, disk, pxe) [disk]:
[ INFO ] Detecting available oVirt engine appliances
[ INFO ] No engine appliance image is available on your system.
Using an oVirt engine appliance could greatly speed-up ovirt
hosted-engine deploy.
You could get oVirt engine appliance installing
ovirt-engine-appliance rpm.
Please specify path to OVF archive you would like to use [None]:
/boot/Downloads/ovirt-engine-appliance-3.6-20160321.1.el7.centos.noarch.rpm
[ ERROR ] The specified file does not exists
[ ERROR ] The specified OVF archive is not a valid OVF archive.
Please specify path to OVF archive you would like to use [None]:
/root/Downloads/ovirt-engine-appliance-3.6-20160321.1.el7.centos.noarch.rpm
[ ERROR ] Failed to execute stage 'Environment customization': not a gzip
file
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20160323130907.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not reliable,
please check the issue, fix and redeploy
Log file is located at
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160323130341-plx8ao.log
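The two OVF errors above come from feeding the appliance rpm itself to the OVF-archive prompt; setup expects the gzipped-tar .ova that ships *inside* ovirt-engine-appliance. A minimal sketch of getting and sanity-checking the right file before re-running deploy — the /usr/share location and the `is_ova` helper are assumptions for illustration, not part of the session:

```shell
# Install the downloaded rpm so the .ova lands on disk (path assumed;
# confirm with `rpm -ql ovirt-engine-appliance`):
#   yum install -y /root/Downloads/ovirt-engine-appliance-3.6-20160321.1.el7.centos.noarch.rpm
#   rpm -ql ovirt-engine-appliance | grep '\.ova$'

# is_ova (hypothetical helper): the "not a gzip file" error is in effect
# a gzip magic-byte check, which can be reproduced locally:
is_ova() {
  [ "$(head -c 2 "$1" | od -An -tx1 | tr -d ' ')" = "1f8b" ]
}
```

A path that passes `is_ova` should get past the "not a gzip file" failure shown above.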
[root@dul-ovrtst01 /]# vdsm-tool restore-nets
[root@dul-ovrtst01 /]# cd /etc/sysconfig/network-scripts
[root@dul-ovrtst01 network-scripts]# ls -lrt
total 236
-rwxr-xr-x. 1 root root 1876 Apr 2 2015 ifup-TeamPort
-rwxr-xr-x. 1 root root 1755 Apr 2 2015 ifup-Team
-rwxr-xr-x. 1 root root 1556 Apr 2 2015 ifdown-TeamPort
-rwxr-xr-x. 1 root root 1599 Apr 2 2015 ifdown-Team
-rw-r--r--. 1 root root 26134 Sep 16 2015 network-functions-ipv6
-rw-r--r--. 1 root root 15322 Sep 16 2015 network-functions
-rwxr-xr-x. 1 root root 4623 Sep 16 2015 init.ipv6-global
-rwxr-xr-x. 1 root root 1740 Sep 16 2015 ifup-wireless
-rwxr-xr-x. 1 root root 2682 Sep 16 2015 ifup-tunnel
-rwxr-xr-x. 1 root root 3263 Sep 16 2015 ifup-sit
-rwxr-xr-x. 1 root root 1925 Sep 16 2015 ifup-routes
-rwxr-xr-x. 1 root root 4154 Sep 16 2015 ifup-ppp
-rwxr-xr-x. 1 root root 2609 Sep 16 2015 ifup-post
-rwxr-xr-x. 1 root root 1043 Sep 16 2015 ifup-plusb
-rwxr-xr-x. 1 root root 642 Sep 16 2015 ifup-plip
-rwxr-xr-x. 1 root root 10430 Sep 16 2015 ifup-ipv6
-rwxr-xr-x. 1 root root 12039 Sep 16 2015 ifup-ippp
-rwxr-xr-x. 1 root root 11721 Sep 16 2015 ifup-eth
-rwxr-xr-x. 1 root root 859 Sep 16 2015 ifup-bnep
-rwxr-xr-x. 1 root root 12631 Sep 16 2015 ifup-aliases
-rwxr-xr-x. 1 root root 1462 Sep 16 2015 ifdown-tunnel
-rwxr-xr-x. 1 root root 1444 Sep 16 2015 ifdown-sit
-rwxr-xr-x. 1 root root 837 Sep 16 2015 ifdown-routes
-rwxr-xr-x. 1 root root 1068 Sep 16 2015 ifdown-ppp
-rwxr-xr-x. 1 root root 1642 Sep 16 2015 ifdown-post
-rwxr-xr-x. 1 root root 4201 Sep 16 2015 ifdown-ipv6
-rwxr-xr-x. 1 root root 781 Sep 16 2015 ifdown-ippp
-rwxr-xr-x. 1 root root 5817 Sep 16 2015 ifdown-eth
-rwxr-xr-x. 1 root root 627 Sep 16 2015 ifdown-bnep
-rw-r--r--. 1 root root 254 Sep 16 2015 ifcfg-lo
-rwxr-xr-x. 1 root root 10145 Nov 30 20:58 ifup-ib
-rwxr-xr-x. 1 root root 6196 Nov 30 20:58 ifdown-ib
lrwxrwxrwx. 1 root root 24 Mar 21 15:53 ifdown ->
../../../usr/sbin/ifdown
lrwxrwxrwx. 1 root root 11 Mar 21 15:53 ifdown-isdn -> ifdown-ippp
lrwxrwxrwx. 1 root root 22 Mar 21 15:53 ifup -> ../../../usr/sbin/ifup
lrwxrwxrwx. 1 root root 9 Mar 21 15:53 ifup-isdn -> ifup-ippp
-rw-r--r--. 1 root root 347 Mar 21 16:54 ifcfg-enp3s0
-rw-r--r--. 1 root root 286 Mar 21 17:20 ifcfg-enp5s0
[root@dul-ovrtst01 network-scripts]# vi ifcfg-enp3s0
TYPE=Ethernet
BOOTPROTO=static
IPADDR=192.168.200.64
NETMASK=255.255.252.0
NM_CONTROLLED=no
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
IPV6_AUTOCONF=no
IPV6_DEFROUTE=no
IPV6_FAILURE_FATAL=no
NAME="eth0"
UUID=87f63b68-a333-4a35-aea7-fbf2e8d32e91
DEVICE=enp3s0
ONBOOT=yes
PEERDNS=yes
PEERROUTES=yes
IPV6_PEERDNS=no
IPV6_PEERROUTES=no
ZONE=trusted
"ifcfg-enp3s0" 20L, 349C written
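With NM_CONTROLLED=no, editing ifcfg-enp3s0 changes nothing until the legacy network service re-reads the file (`ifdown enp3s0 && ifup enp3s0`, or `systemctl restart network`). A small sketch for double-checking a value before bouncing the interface — `get_ifcfg` is a hypothetical helper, not a standard tool:

```shell
# get_ifcfg: read one KEY out of an ifcfg file, stripping quotes,
# e.g. to confirm the new IPADDR before restarting networking.
get_ifcfg() {
  sed -n "s/^$2=//p" "$1" | tr -d '"'
}

# Usage against the file edited above:
#   get_ifcfg /etc/sysconfig/network-scripts/ifcfg-enp3s0 IPADDR
```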
[root@dul-ovrtst01 network-scripts]# exit
exit
[root@dul-ovrtst01 ~]# cd /
[root@dul-ovrtst01 /]# screen
[root@dul-ovrtst01 /]#
[root@dul-ovrtst01 /]#
[root@dul-ovrtst01 /]#
[root@dul-ovrtst01 /]#
[root@dul-ovrtst01 /]# systemctl netowkr status
Unknown operation 'netowkr'.
[root@dul-ovrtst01 /]# systemctl network status
Unknown operation 'network'.
[root@dul-ovrtst01 /]# systemctl status nettwork
● nettwork.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)
[root@dul-ovrtst01 /]# systemctl status network
● network.service - LSB: Bring up/down networking
Loaded: loaded (/etc/rc.d/init.d/network)
Active: active (running) since Tue 2016-03-22 17:55:44 EDT; 19h ago
Docs: man:systemd-sysv-generator(8)
CGroup: /system.slice/network.service
└─1133 /sbin/dhclient -H dul-ovrtst01 -1 -q -lf
/var/lib/dhclient/dhclient-6508e3b9-49eb-408e-8625-6f24505484d2-enp5s0.lease
-pf /var/run/dhclient-enp5s0.pid enp5s0
Mar 22 17:55:28 localhost.localdomain network[747]: Bringing up loopback
interface: [ OK ]
Mar 22 17:55:31 dul-ovrtst01 network[747]: Bringing up interface enp3s0: [
OK ]
Mar 22 17:55:31 dul-ovrtst01 network[747]: Bringing up interface enp5s0:
Mar 22 17:55:34 dul-ovrtst01 dhclient[1075]: DHCPREQUEST on enp5s0 to
255.255.255.255 port 67 (xid=0x6548a261)
Mar 22 17:55:41 dul-ovrtst01 dhclient[1075]: DHCPREQUEST on enp5s0 to
255.255.255.255 port 67 (xid=0x6548a261)
Mar 22 17:55:41 dul-ovrtst01 dhclient[1075]: DHCPACK from 192.168.200.7
(xid=0x6548a261)
Mar 22 17:55:44 dul-ovrtst01 dhclient[1075]: bound to 192.168.201.65 --
renewal in 235886 seconds.
Mar 22 17:55:44 dul-ovrtst01 network[747]: Determining IP information for
enp5s0... done.
Mar 22 17:55:44 dul-ovrtst01 network[747]: [ OK ]
Mar 22 17:55:44 dul-ovrtst01 systemd[1]: Started LSB: Bring up/down
networking.
[root@dul-ovrtst01 /]# systemctl status NetworkManager
● NetworkManager.service - Network Manager
Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service;
disabled; vendor preset: enabled)
Active: inactive (dead)
[root@dul-ovrtst01 /]# yum install ovirt-hosted-engine-setup
Loaded plugins: fastestmirror, langpacks, versionlock
epel/x86_64/metalink | 13 kB 00:00:00
Loading mirror speeds from cached hostfile
* base: chicago.gaminghost.co
* elrepo: reflector.westga.edu
* epel: mirror.cogentco.com
* extras: ftpmirror.your.org
* ovirt-3.5: resources.ovirt.org
* ovirt-3.5-epel: mirror.cogentco.com
* rpmforge: mirror.teklinks.com
* updates: centos.vwtonline.net
Package ovirt-hosted-engine-setup-1.3.3.4-1.el7.noarch already installed
and latest version
Nothing to do
[root@dul-ovrtst01 /]# screen hosted-engine --deploy
[ INFO ] Stage: Initializing
[ INFO ] Generating a temporary VNC password.
[ INFO ] Stage: Environment setup
Continuing will configure this host for serving as hypervisor and
create a VM where you have to install the engine afterwards.
Are you sure you want to continue? (Yes, No)[Yes]: Yes
Configuration files: []
Log file:
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160323134714-il6wlo.log
Version: otopi-1.4.1 (otopi-1.4.1-1.el7.centos)
[ INFO ] Hardware supports virtualization
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ INFO ] Stage: Environment customization
--== STORAGE CONFIGURATION ==--
During customization use CTRL-D to abort.
Please specify the storage you would like to use (glusterfs,
iscsi, fc, nfs3, nfs4)[nfs3]: fc
The following luns have been found on the requested target:
[1] 3600508b10018433953524235374f0007 203GiB
COMPAQ MSA1000 VOLUME
status: used, paths: 1 active
[2] 3600143800006b070a577938b0f0b000e 1396GiB
COMPAQ MSA1000 VOLUME
status: used, paths: 1 active
[3] 3600508b10010443953555538485a001a 1862GiB HP
LOGICAL VOLUME
status: used, paths: 1 active
Please select the destination LUN (1, 2, 3) [1]: 1
The selected device is already used.
To create a vg on this device, you must use Force.
WARNING: This will destroy existing data on the device.
(Force, Abort)[Abort]? Force
[ INFO ] Installing on first host
--== SYSTEM CONFIGURATION ==--
--== NETWORK CONFIGURATION ==--
Please indicate a nic to set ovirtmgmt bridge on: (enp3s0,
enp5s0) [enp3s0]:
iptables was detected on your computer, do you wish setup to
configure it? (Yes, No)[Yes]: Yes
Please indicate a pingable gateway IP address [192.168.200.1]:
--== VM CONFIGURATION ==--
Booting from cdrom on RHEL7 is ISO image based only, as cdrom
passthrough is disabled (BZ760885)
Please specify the device to boot the VM from (choose disk for
the oVirt engine appliance)
(cdrom, disk, pxe) [disk]: cdrom
The following CPU types are supported by this host:
- model_Conroe: Intel Conroe Family
Please specify the CPU type to be used by the VM [model_Conroe]:
Please specify path to installation media you would like to use
[None]: /root/Downloads/CentOS-7-x86_64-Everything-1511.iso
[ ERROR ] The specified installation media is not valid or not readable.
Please ensure that /root/Downloads/CentOS-7-x86_64-Everything-1511.iso is
valid and could be read by qemu user or kvm group or specify another
installation media.
Please specify path to installation media you would like to use
[/root/Downloads/CentOS-7-x86_64-Everything-1511.iso]:
[ ERROR ] The specified installation media is not valid or not readable.
Please ensure that /root/Downloads/CentOS-7-x86_64-Everything-1511.iso is
valid and could be read by qemu user or kvm group or specify another
installation media.
Please specify path to installation media you would like to use
[/root/Downloads/CentOS-7-x86_64-Everything-1511.iso]:
/root/Downloads/oVirt-Engine-Appliance-CentOS-x86_64-7-20160321.ova
[ ERROR ] The specified installation media is not valid or not readable.
Please ensure that
/root/Downloads/oVirt-Engine-Appliance-CentOS-x86_64-7-20160321.ova is
valid and could be read by qemu user or kvm group or specify another
installation media.
Please specify path to installation media you would like to use
[/root/Downloads/oVirt-Engine-Appliance-CentOS-x86_64-7-20160321.ova]:
[ ERROR ] The specified installation media is not valid or not readable.
Please ensure that
/root/Downloads/oVirt-Engine-Appliance-CentOS-x86_64-7-20160321.ova is
valid and could be read by qemu user or kvm group or specify another
installation media.
Please specify path to installation media you would like to use
[/root/Downloads/oVirt-Engine-Appliance-CentOS-x86_64-7-20160321.ova]:
[ ERROR ] The specified installation media is not valid or not readable.
Please ensure that
/root/Downloads/oVirt-Engine-Appliance-CentOS-x86_64-7-20160321.ova is
valid and could be read by qemu user or kvm group or specify another
installation media.
Please specify path to installation media you would like to use
[/root/Downloads/oVirt-Engine-Appliance-CentOS-x86_64-7-20160321.ova]:
/root/Downloads
[ ERROR ] The specified installation media is not valid or not readable.
Please ensure that /root/Downloads is valid and could be read by qemu user
or kvm group or specify another installation media.
Please specify path to installation media you would like to use
[/root/Downloads]:
[ ERROR ] The specified installation media is not valid or not readable.
Please ensure that /root/Downloads is valid and could be read by qemu user
or kvm group or specify another installation media.
Please specify path to installation media you would like to use
[/root/Downloads]: /dev/cdrom
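The repeated "not valid or not readable" failures above are usually plain POSIX permissions: the qemu user (or kvm group) must be able to read the ISO and traverse every directory above it, and /root is typically mode 0550. A sketch of the check under that assumption — `check_media` is a hypothetical helper, and note SELinux labels can also block access even when this passes:

```shell
# check_media: the file must be readable and its directory traversable
# for the probing user (setup checks as the qemu user / kvm group).
check_media() {
  test -r "$1" && test -x "$(dirname "$1")"
}

# Run the check as the qemu user against the path from the session:
#   sudo -u qemu sh -c 'test -r /root/Downloads/CentOS-7-x86_64-Everything-1511.iso' \
#     && echo readable || echo blocked
# Moving the ISO out of /root sidesteps the restrictive home directory:
#   mkdir -p /var/tmp/isos && mv /root/Downloads/*.iso /var/tmp/isos/ \
#     && chmod 644 /var/tmp/isos/*.iso
```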
Please specify the number of virtual CPUs for the VM [Defaults to
minimum requirement: 2]: 4
Please specify the disk size of the VM in GB [Defaults to minimum
requirement: 25]: 40
You may specify a unicast MAC address for the VM or accept a
randomly generated default [00:16:3e:14:57:04]:
Please specify the memory size of the VM in MB [Defaults to
minimum requirement: 4096]:
Please specify the console type you would like to use to connect
to the VM (vnc, spice) [vnc]:
--== HOSTED ENGINE CONFIGURATION ==--
Enter the name which will be used to identify this host inside
the Administrator Portal [hosted_engine_1]: dul-ovrtst02
Enter 'admin@internal' user password that will be used for
accessing the Administrator Portal:
Confirm 'admin@internal' user password:
Please provide the FQDN for the engine you would like to use.
This needs to match the FQDN that you will use for the engine
installation within the VM.
Note: This will be the FQDN of the VM you are now going to create,
it should not point to the base host or to any other existing
machine.
Engine FQDN: []: dul-ovrteng01.acs.net
[ ERROR ] Host name is not valid: dul-ovrteng01.acs.net did not resolve
into an IP address
Please provide the FQDN for the engine you would like to use.
This needs to match the FQDN that you will use for the engine
installation within the VM.
Note: This will be the FQDN of the VM you are now going to create,
it should not point to the base host or to any other existing
machine.
Engine FQDN: []: dul-ovrtst02.acs.net
Please provide the name of the SMTP server through which we will
send notifications [localhost]: 192.168.200.36
Please provide the TCP port number of the SMTP server [25]:
Please provide the email address from which notifications will be
sent [root@localhost]:
Please provide a comma-separated list of email addresses which
will get notifications [root@localhost]: internal.support@analysts.com
[ INFO ] Stage: Setup validation
--== CONFIGURATION PREVIEW ==--
Bridge interface : enp3s0
Engine FQDN : dul-ovrtst02.acs.net
Bridge name : ovirtmgmt
Host address : dul-ovrtst01
SSH daemon port : 22
Firewall manager : iptables
Gateway address : 192.168.200.1
Host name for web application : dul-ovrtst02
Host ID : 1
LUN ID :
3600508b10018433953524235374f0007
Image size GB : 40
GlusterFS Share Name : hosted_engine_glusterfs
GlusterFS Brick Provisioning : False
Console type : vnc
Memory size MB : 4096
MAC address : 00:16:3e:14:57:04
Boot type : cdrom
Number of CPUs : 4
ISO image (cdrom boot/cloud-init) : /dev/cdrom
CPU Type : model_Conroe
Please confirm installation settings (Yes, No)[Yes]: Yes
[ INFO ] Stage: Transaction setup
[ INFO ] Stage: Misc configuration
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Configuring libvirt
[ INFO ] Configuring VDSM
[ INFO ] Starting vdsmd
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Configuring the management bridge
[ INFO ] Creating Volume Group
[ ERROR ] Error creating Volume Group: Failed to initialize physical
device: ("['/dev/mapper/3600508b10018433953524235374f0007']",)
[ ERROR ] Failed to execute stage 'Misc configuration': Failed to
initialize physical device:
("['/dev/mapper/3600508b10018433953524235374f0007']",)
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20160323140702.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not reliable,
please check the issue, fix and redeploy
Log file is located at
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160323134714-il6wlo.log
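The final failure ("Failed to initialize physical device" on the multipath LUN) is typically pvcreate refusing a device that still carries old partition-table or filesystem signatures; the earlier Force answer only satisfied setup's own check, not LVM's. A sketch of inspecting and clearing them before redeploying — destructive, and `has_signature` is a hypothetical helper, not part of hosted-engine-setup:

```shell
# has_signature: non-empty `wipefs -n` output means the device still
# carries a filesystem/RAID/partition-table signature that pvcreate
# may refuse to overwrite.
has_signature() {
  [ -n "$(wipefs -n "$1" 2>/dev/null)" ]
}

# Against the LUN from the error above (DESTROYS any remaining data):
#   has_signature /dev/mapper/3600508b10018433953524235374f0007 \
#     && wipefs -a /dev/mapper/3600508b10018433953524235374f0007
#   dd if=/dev/zero of=/dev/mapper/3600508b10018433953524235374f0007 bs=1M count=10
```

After the signatures are gone, re-running `hosted-engine --deploy` should get past the Volume Group creation step.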
[root@dul-ovrtst01 /]# ls -lrt
total 36
drwxr-xr-x. 2 root root 6 Aug 12 2015 srv
drwxr-xr-x. 2 root root 6 Aug 12 2015 media
lrwxrwxrwx. 1 root root 7 Mar 21 15:49 bin -> usr/bin
lrwxrwxrwx. 1 root root 8 Mar 21 15:49 sbin -> usr/sbin
lrwxrwxrwx. 1 root root 9 Mar 21 15:49 lib64 -> usr/lib64
lrwxrwxrwx. 1 root root 7 Mar 21 15:49 lib -> usr/lib
drwxr-xr-x. 14 root root 4096 Mar 22 11:00 usr
drwxr-xr-x. 4 root root 24 Mar 22 11:00 opt
drwxr-xr-x. 3 root root 18 Mar 22 12:11 mnt
drwxr-xr-x. 2 root root 6 Mar 22 14:00 hpnas01
drwxr-xr-x. 2 root root 6 Mar 22 14:01 hpnas02
drwxr-xr-x. 4 root root 29 Mar 22 14:18 home
dr-xr-x---. 15 root root 4096 Mar 22 15:17 root
dr-xr-xr-x. 5 root root 4096 Mar 22 15:36 boot
drwxr-xr-x. 3 root root 24 Mar 22 17:46 rhev
dr-xr-xr-x. 230 root root 0 Mar 22 17:54 proc
drwxr-xr-x. 22 root root 4096 Mar 22 17:55 var
dr-xr-xr-x. 13 root root 0 Mar 22 17:55 sys
drwxr-xr-x. 148 root root 12288 Mar 23 12:46 etc
drwxr-xr-x. 43 root root 1280 Mar 23 14:06 run
drwxrwxrwt. 19 root root 4096 Mar 23 14:06 tmp
drwxr-xr-x. 21 root root 3660 Mar 23 14:07 dev
[root@dul-ovrtst01 /]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 50G 7.9G 43G 16% /
devtmpfs 19G 0 19G 0% /dev
tmpfs 19G 84K 19G 1% /dev/shm
tmpfs 19G 9.0M 19G 1% /run
tmpfs 19G 0 19G 0% /sys/fs/cgroup
/dev/sda1 497M 189M 308M 39% /boot
/dev/mapper/centos-home 205G 53M 205G 1% /home
tmpfs 3.8G 16K 3.8G 1% /run/user/42
tmpfs 3.8G 0 3.8G 0% /run/user/0
[root@dul-ovrtst01 /]# fdisk -l
Disk /dev/sdb: 2000.3 GB, 2000297156608 bytes, 3906830384 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sda: 293.6 GB, 293564211200 bytes, 573367600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000c6c95
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 573366271 286170112 8e Linux LVM
Disk /dev/sdc: 1500.0 GB, 1499997157376 bytes, 2929681948 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
fdisk: cannot open /dev/sdd: Input/output error
[root@dul-ovrtst01 /]# vi
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160323134714-il6wlo.log
2016-03-23 13:47:14 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:14 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/log=bool:'True'
2016-03-23 13:47:14 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logFileHandle=file:'<open file
'/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160323134714-il6wlo.log',
mode 'a' at 0x34555d0>'
2016-03-23 13:47:14 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logFileName=str:'/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160323134714-il6wlo.log'
2016-03-23 13:47:14 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logFilter=_MyLoggerFilter:'filter'
2016-03-23 13:47:14 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logFilterKeys=list:'[]'
2016-03-23 13:47:14 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logRemoveAtExit=bool:'False'
2016-03-23 13:47:14 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:14 DEBUG otopi.context context._executeMethod:142 Stage
boot METHOD otopi.plugins.otopi.dialog.misc.Plugin._init
2016-03-23 13:47:14 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:14 DEBUG otopi.context context.dumpEnvironment:510 ENV
DIALOG/dialect=str:'human'
2016-03-23 13:47:14 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:14 DEBUG otopi.context context._executeMethod:142 Stage
boot METHOD otopi.plugins.otopi.dialog.human.Plugin._init
2016-03-23 13:47:14 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:14 DEBUG otopi.context context.dumpEnvironment:510 ENV
DIALOG/boundary=str:'--=451b80dc-996f-432e-9e4f-2b29ef6d1141=--'
2016-03-23 13:47:14 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:14 DEBUG otopi.context context._executeMethod:142 Stage
boot METHOD otopi.plugins.otopi.dialog.machine.Plugin._init
2016-03-23 13:47:14 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 13:47:14 DEBUG otopi.context context._executeMethod:142 Stage
boot METHOD otopi.plugins.otopi.core.misc.Plugin._init
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:477 SEQUENCE
DUMP - BEGIN
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:479 STAGE boot
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.misc.Plugin._preinit (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.core.log.Plugin._init (otopi.core.log.init)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.dialog.misc.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.dialog.human.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.dialog.machine.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.core.misc.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.packagers.dnfpackager.Plugin._boot (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.system.info.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.packagers.yumpackager.Plugin._boot (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:479 STAGE init
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.core.config.Plugin._init (otopi.core.config.init)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.packagers.dnfpackager.Plugin._init
(otopi.packagers.detection)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.packagers.yumpackager.Plugin._init
(otopi.packagers.detection)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.system.command.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.core.transaction.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.dialog.cli.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.network.firewalld.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.network.iptables.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.network.ssh.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.system.clock.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.system.reboot.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.answerfile.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.misc.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.offlinepackager.Plugin._init
(None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.preview.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.remote_answerfile.Plugin._init
(None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.engine.add_host.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.engine.fqdn.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.engine.health.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.ha.ha_notifications.Plugin._init
(None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.bridge.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.firewall_manager.Plugin._init
(None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.gateway.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.pki.vdsmpki.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.sanlock.lockspace.Plugin._init
(None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.blockd.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.glusterfs.Plugin._init
(None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.heconf.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.system.packages.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.system.sshd.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.system.vdsmenv.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vdsmd.cpu.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vdsmd.vdsmconf.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_disk.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.cloud_init.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.configurevm.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.cpu.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.image.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.mac.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.machine.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.memory.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.runvm.Plugin._init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:479 STAGE setup
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.misc.Plugin._setup (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.packagers.dnfpackager.Plugin._setup_existence (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.packagers.yumpackager.Plugin._setup_existence (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.core.config.Plugin._post_init (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.core.log.Plugin._setup (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.core.misc.Plugin._setup (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.packagers.dnfpackager.Plugin._setup (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.packagers.yumpackager.Plugin._setup (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.shell.Plugin._setup (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vdsmd.cpu.Plugin._setup (None)
2016-03-23 13:47:14 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.network.firewalld.Plugin._setup (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.network.hostname.Plugin._setup (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.services.openrc.Plugin._setup (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.services.rhel.Plugin._setup (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.services.systemd.Plugin._setup (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.system.clock.Plugin._setup (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.system.reboot.Plugin._setup (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.engine.add_host.Plugin._setup (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.engine.fqdn.Plugin._setup (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.bridge.Plugin._setup (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.gateway.Plugin._setup (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.pki.vdsmpki.Plugin._setup (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.blockd.Plugin._setup (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.nfs.Plugin._setup (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._setup (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.system.sshd.Plugin._setup (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.system.vdsmenv.Plugin._setup (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom.Plugin._setup (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_disk.Plugin._setup (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.cloud_init.Plugin._setup (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.runvm.Plugin._setup (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:479 STAGE
internal_packages
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.core.transaction.Plugin._pre_prepare (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.network.hostname.Plugin._internal_packages (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.packagers.dnfpackager.Plugin._internal_packages_end
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.packagers.yumpackager.Plugin._internal_packages_end
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.core.transaction.Plugin._pre_end (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:479 STAGE
programs
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.system.command.Plugin._programs
(otopi.system.command.detection)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.services.systemd.Plugin._programs (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.services.rhel.Plugin._programs (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.services.openrc.Plugin._programs (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.ha.ha_services.Plugin._programs
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.bridge.Plugin._check_NM
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:479 STAGE
late_setup
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vdsmd.vdsmconf.Plugin._late_setup
(ohosted.vdsm.conf.loaded)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.system.vdsmenv.Plugin._late_setup
(ohosted.vdsm.libvirt.configured)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.pki.vdsmpki.Plugin._late_setup
(ohosted.vdsm.pki.available)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.system.packages.Plugin._late_setup
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.configurevm.Plugin._late_setup
(ohosted.vdsm.late_setup_ready)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:479 STAGE
customization
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.network.firewalld.Plugin._customization (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.core.config.Plugin._customize1 (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.dialog.cli.Plugin._customize
(otopi.dialog.cli.customization)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.titles.Plugin._storage_start
(ohosted.dialog.titles.storage.start)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._early_customization
(ohosted.storage.configuration.early)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.glusterfs.Plugin._customization
(ohosted.storage.gluster.provisioned)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.glusterfs.Plugin._brick_customization
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.nfs.Plugin._customization
(ohosted.storage.nfs.configuration.available)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.blockd.Plugin._customization
(ohosted.storage.blockd.configuration.available)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._late_customization
(ohosted.storage.configuration.late)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.nfs.Plugin._late_customization
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.titles.Plugin._storage_end
(ohosted.dialog.titles.storage.end)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.titles.Plugin._system_start
(ohosted.dialog.titles.system.start)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.remote_answerfile.Plugin._customization
(ohosted.core.require.answerfile)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.system.sshd.Plugin._customization
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.titles.Plugin._system_end
(ohosted.dialog.titles.system.end)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.titles.Plugin._network_start
(ohosted.dialog.titles.network.start)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.bridge.Plugin._customization
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.bridge.Plugin._get_existing_bridge_interface
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.firewall_manager.Plugin._customization
(ohosted.network.firewallmanager.available)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.firewall.Plugin._configuration
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.gateway.Plugin._customization
(ohosted.networking.gateway.configuration.available)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.titles.Plugin._network_end
(ohosted.dialog.titles.network.end)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.titles.Plugin._vm_start
(ohosted.dialog.titles.vm.start)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.configurevm.Plugin._customization
(ohosted.boot.configuration.available)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_disk.Plugin._customization
(ohosted.configuration.ovf)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.cloud_init.Plugin._customization
(ohosted.boot.configuration.cloud_init_options)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.cloud_init.Plugin._customize_vm_networking
(ohosted.boot.configuration.cloud_init_vm_networking)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vdsmd.cpu.Plugin._customization
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom.Plugin._customization
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.cpu.Plugin._customization (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.image.Plugin._disk_customization
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.mac.Plugin._customization (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.memory.Plugin._customization
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.runvm.Plugin._customization
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.titles.Plugin._vm_end
(ohosted.dialog.titles.vm.end)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.titles.Plugin._engine_start
(ohosted.dialog.titles.engine.start)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.engine.add_host.Plugin._customization
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.engine.fqdn.Plugin._customization
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.titles.Plugin._engine_end
(ohosted.dialog.titles.engine.end)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.ha.ha_notifications.Plugin._customization
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.core.config.Plugin._customize2 (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.firewall_manager.Plugin._process_templates
(ohosted.network.firewallmanager.templates.available)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:479 STAGE
validation
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.core.misc.Plugin._validation (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.network.firewalld.Plugin._validation
(otopi.network.firewalld.validation)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.network.hostname.Plugin._validation (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.network.iptables.Plugin._validate
(otopi.network.iptables.validation)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.network.ssh.Plugin._validation (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.engine.add_host.Plugin._validation
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.bridge.Plugin._get_hostname_additional_hosts
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.bridge.Plugin._get_hostname_from_bridge_if
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.iptables.Plugin._validate
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.pki.vdsmpki.Plugin._validation
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.sanlock.lockspace.Plugin._validation
(ohosted.lockspace.valid)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.glusterfs.Plugin._validate
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._validation
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.preview.Plugin._validation
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:479 STAGE
transaction-prepare
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.core.transaction.Plugin._main_prepare (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:479 STAGE
early_misc
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.network.firewalld.Plugin._early_misc (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.iptables.Plugin._early_misc
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:479 STAGE
packages
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.network.iptables.Plugin._packages (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.packagers.dnfpackager.Plugin._packages (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.packagers.yumpackager.Plugin._packages (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:479 STAGE misc
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.system.command.Plugin._misc
(otopi.system.command.redetection)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.network.firewalld.Plugin._misc (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.network.iptables.Plugin._store_iptables (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.network.ssh.Plugin._append_key (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.system.clock.Plugin._set_clock (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.ha.ha_notifications.Plugin._misc
(ohosted.notifications.broker.conf.available)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.libvirt.configureqemu.Plugin._misc
(ohosted.libvirt.configured)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.system.sshd.Plugin._misc
(ohosted.sshd.started)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vdsmd.vdsmconf.Plugin._misc
(ohosted.vdsm.configured)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.system.vdsmenv.Plugin._misc
(ohosted.vdsm.started)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.bridge.Plugin._misc
(ohosted.network.bridge.available)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.blockd.Plugin._misc (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.glusterfs.Plugin._misc
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._misc
(ohosted.storage.available)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.heconf.Plugin._misc_create_volume
(ohosted.conf.volume.available)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.sanlock.lockspace.Plugin._misc
(ohosted.sanlock.initialized)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.cloud_init.Plugin._misc (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.image.Plugin._misc
(ohosted.vm.image.available)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_disk.Plugin._misc
(ohosted.vm.ovf.imported)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._destroy_pool
(ohosted.storage.pool.destroyed)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.configurevm.Plugin._misc
(ohosted.vm.state.configured)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.conf.Plugin._misc
(ohosted.save.config)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:479 STAGE
cleanup
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.core.transaction.Plugin._main_end (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:479 STAGE
closeup
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.network.firewalld.Plugin._closeup (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.network.iptables.Plugin._closeup (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.answerfile.Plugin._closeup
(ohosted.notifications.answerfile.available)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.bridge.Plugin._closeup
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.firewall_manager.Plugin._closeup
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.glusterfs.Plugin._closeup
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.system.sshd.Plugin._closeup (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.runvm.Plugin._boot_from_install_media
(ohosted.vm.state.running)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.engine.os_install.Plugin._closeup
(ohosted.vm.state.os.installed)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.runvm.Plugin._boot_from_hd
(ohosted.vm.state.os.installed.running)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.engine.health.Plugin._closeup
(ohosted.engine.alive)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.engine.add_host.Plugin._closeup
(ohosted.engine.host.added)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.system.vdsmenv.Plugin._closeup
(ohosted.engine.vdscli.reconnected)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._closeup_reprepare_images
(ohosted.storage.imagesreprepared)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.heconf.Plugin._closeup_create_tar
(ohosted.notifications.confimage.available)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.ha.ha_services.Plugin._closeup
(ohosted.engine.ha.start)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.misc.Plugin._persist_files_start
(ohosted.node.files.persist.start)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.pki.vdsmpki.Plugin._persist_files_start
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.misc.Plugin._persist_files_end
(ohosted.node.files.persist.end)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.system.reboot.Plugin._closeup (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:479 STAGE
cleanup
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.remote_answerfile.Plugin._cleanup
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.engine.add_host.Plugin._cleanup
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.pki.vdsmpki.Plugin._cleanup (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._cleanup
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_disk.Plugin._cleanup (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.cloud_init.Plugin._cleanup (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.answerfile.Plugin._save_answers_at_cleanup
(None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:479 STAGE
pre-terminate
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.core.misc.Plugin._preTerminate (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.dialog.cli.Plugin._pre_terminate
(otopi.dialog.cli.termination)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:479 STAGE
terminate
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.misc.Plugin._terminate (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.dialog.human.Plugin._terminate (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.dialog.machine.Plugin._terminate (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.core.log.Plugin._terminate (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:479 STAGE
reboot
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:484 METHOD
otopi.plugins.otopi.system.reboot.Plugin._reboot (None)
2016-03-23 13:47:15 DEBUG otopi.context context.dumpSequence:486 SEQUENCE
DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/aborted=bool:'False'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/debug=int:'0'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/error=bool:'False'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/exceptionInfo=list:'[]'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/executionDirectory=str:'/'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/exitCode=list:'[{'priority': 90001, 'code': 0}]'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/log=bool:'True'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/pluginGroups=str:'otopi:ovirt-hosted-engine-setup'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/pluginPath=str:'/usr/share/otopi/plugins:/usr/share/ovirt-hosted-engine-setup/scripts/../plugins'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/suppressEnvironmentKeys=list:'[]'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/failOnPrioOverride=bool:'False'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logDir=str:'/var/log/ovirt-hosted-engine-setup'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logFileHandle=file:'<open file
'/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160323134714-il6wlo.log',
mode 'a' at 0x34555d0>'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logFileName=str:'/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160323134714-il6wlo.log'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logFileNamePrefix=str:'ovirt-hosted-engine-setup'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logFilter=_MyLoggerFilter:'filter'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logFilterKeys=list:'[]'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logRemoveAtExit=bool:'False'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/randomizeEvents=bool:'False'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
DIALOG/boundary=str:'--=451b80dc-996f-432e-9e4f-2b29ef6d1141=--'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
DIALOG/dialect=str:'human'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
INFO/PACKAGE_NAME=str:'otopi'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
INFO/PACKAGE_VERSION=str:'1.4.1'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVESETUP_CORE/offlinePackager=bool:'True'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/yumpackagerEnabled=bool:'False'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
INFO/PACKAGE_NAME=str:'otopi'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
INFO/PACKAGE_VERSION=str:'1.4.1'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
boot METHOD otopi.plugins.otopi.packagers.dnfpackager.Plugin._boot
2016-03-23 13:47:15 DEBUG otopi.plugins.otopi.packagers.dnfpackager
dnfpackager._boot:178 Cannot initialize minidnf
Traceback (most recent call last):
File "/usr/share/otopi/plugins/otopi/packagers/dnfpackager.py", line 165,
in _boot
constants.PackEnv.DNF_DISABLED_PLUGINS
File "/usr/share/otopi/plugins/otopi/packagers/dnfpackager.py", line 75,
in _getMiniDNF
from otopi import minidnf
File "/usr/lib/python2.7/site-packages/otopi/minidnf.py", line 31, in
<module>
import dnf
ImportError: No module named dnf
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/dnfDisabledPlugins=list:'[]'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/dnfExpireCache=bool:'True'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/dnfRollback=bool:'True'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/dnfpackagerEnabled=bool:'True'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/keepAliveInterval=int:'30'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
boot METHOD otopi.plugins.otopi.system.info.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.plugins.otopi.system.info info._init:51
SYSTEM INFORMATION - BEGIN
2016-03-23 13:47:15 DEBUG otopi.plugins.otopi.system.info info._init:52
executable /bin/python
2016-03-23 13:47:15 DEBUG otopi.plugins.otopi.system.info info._init:53
python /bin/python
2016-03-23 13:47:15 DEBUG otopi.plugins.otopi.system.info info._init:54
platform linux2
2016-03-23 13:47:15 DEBUG otopi.plugins.otopi.system.info info._init:55
distribution ('CentOS Linux', '7.2.1511', 'Core')
2016-03-23 13:47:15 DEBUG otopi.plugins.otopi.system.info info._init:56
host 'dul-ovrtst01'
2016-03-23 13:47:15 DEBUG otopi.plugins.otopi.system.info info._init:62 uid
0 euid 0 gid 0 egid 0
2016-03-23 13:47:15 DEBUG otopi.plugins.otopi.system.info info._init:64
SYSTEM INFORMATION - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
boot METHOD otopi.plugins.otopi.packagers.yumpackager.Plugin._boot
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/yumDisabledPlugins=list:'[]'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/yumEnabledPlugins=list:'[]'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/yumExpireCache=bool:'True'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/yumRollback=bool:'True'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 INFO otopi.context context.runSequence:427 Stage:
Initializing
2016-03-23 13:47:15 DEBUG otopi.context context.runSequence:431 STAGE init
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD otopi.plugins.otopi.core.config.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/configFileName=str:'/etc/otopi.conf'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD otopi.plugins.otopi.packagers.dnfpackager.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD otopi.plugins.otopi.packagers.yumpackager.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD otopi.plugins.otopi.system.command.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/commandPath=str:'/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD otopi.plugins.otopi.core.transaction.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/internalPackageTransaction=Transaction:'transaction'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/mainTransaction=Transaction:'transaction'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/modifiedFiles=list:'[]'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD otopi.plugins.otopi.dialog.cli.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
DIALOG/cliVersion=int:'1'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
DIALOG/customization=bool:'False'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD otopi.plugins.otopi.network.firewalld.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/firewalldAvailable=bool:'False'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/firewalldDisableServices=list:'[]'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/firewalldEnable=bool:'False'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD otopi.plugins.otopi.network.iptables.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/iptablesEnable=bool:'False'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD otopi.plugins.otopi.network.ssh.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/sshEnable=bool:'False'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/sshUser=str:''
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD otopi.plugins.otopi.system.clock.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/clockMaxGap=int:'5'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/clockSet=bool:'False'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD otopi.plugins.otopi.system.reboot.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/reboot=bool:'False'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/rebootAllow=bool:'True'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/rebootDeferTime=int:'10'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.answerfile.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/etcAnswerFile=str:'/etc/ovirt-hosted-engine/answers.conf'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD otopi.plugins.ovirt_hosted_engine_setup.core.misc.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/nodeSetup=bool:'False'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.offlinepackager.Plugin._init
2016-03-23 13:47:15 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.core.offlinepackager
offlinepackager._init:60 Registering offline packager
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.preview.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.remote_answerfile.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logFilterKeys=list:'['OVEHOSTED_FIRST_HOST/rootPassword']'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_FIRST_HOST/sshdPort=int:'22'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD
otopi.plugins.ovirt_hosted_engine_setup.engine.add_host.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logFilterKeys=list:'['OVEHOSTED_FIRST_HOST/rootPassword',
'OVEHOSTED_ENGINE/adminPassword']'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_ENGINE/promptNonOperational=bool:'True'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/promptRequiredNetworks=bool:'True'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD otopi.plugins.ovirt_hosted_engine_setup.engine.fqdn.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/fqdnReverseValidation=bool:'False'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD
otopi.plugins.ovirt_hosted_engine_setup.engine.health.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_ENGINE/engineSetupTimeout=int:'600'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD
otopi.plugins.ovirt_hosted_engine_setup.ha.ha_notifications.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.bridge.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/bridgeName=str:'ovirtmgmt'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.firewall_manager.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/firewalldServices=list:'[]'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/firewalldSubst=dict:'{}'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.gateway.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD otopi.plugins.ovirt_hosted_engine_setup.pki.vdsmpki.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/caSubject=str:'/C=EN/L=Test/O=Test/CN=TestCA'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/pkiSubject=str:'/C=EN/L=Test/O=Test/CN=Test'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD
otopi.plugins.ovirt_hosted_engine_setup.sanlock.lockspace.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_SANLOCK/lockspaceName=str:'hosted-engine'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_SANLOCK/serviceName=str:'sanlock'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.blockd.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.glusterfs.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/glusterProvisionedShareName=str:'hosted_engine_glusterfs'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/glusterProvisioningEnabled=bool:'False'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.heconf.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/confImageSizeGB=int:'1'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/confImageUUID=str:'8bf559b0-4cc3-4c92-ada1-f0680afb3487'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/confVolUUID=str:'3517ae74-6202-45ea-ac0c-0d69ec3c2489'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/additionalHostEnabled=bool:'False'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/connectionUUID=str:'1b903a29-fffe-439e-b787-911c988c879a'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/fakeMasterSdConnUUID=str:'2df85503-9df3-4a60-987d-03e93d9abf5f'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/fakeMasterSdUUID=str:'a8908c2b-a8f3-407d-beb5-2b278da62fe1'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/sdUUID=str:'54a95285-412a-4749-b2c4-374980f08472'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/spUUID=str:'e8eaa6c0-ffb4-44df-a6d5-c096df88965c'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/storageDatacenterName=str:'hosted_datacenter'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/storageDomainName=str:'hosted_storage'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD
otopi.plugins.ovirt_hosted_engine_setup.system.packages.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/glusterMinimumVersion=str:'3.7.2'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD otopi.plugins.ovirt_hosted_engine_setup.system.sshd.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD
otopi.plugins.ovirt_hosted_engine_setup.system.vdsmenv.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/kvmGid=int:'36'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/serviceName=str:'vdsmd'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/vdsmUid=int:'36'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD otopi.plugins.ovirt_hosted_engine_setup.vdsmd.cpu.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD
otopi.plugins.ovirt_hosted_engine_setup.vdsmd.vdsmconf.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/useSSL=bool:'True'
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_disk.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/tempDir=str:'/var/tmp'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.cloud_init.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logFilterKeys=list:'['OVEHOSTED_FIRST_HOST/rootPassword',
'OVEHOSTED_ENGINE/adminPassword', 'OVEHOSTED_VM/cloudinitRootPwd']'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.configurevm.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/cdromUUID=str:'9463ba78-d3e6-48b8-b124-a2e2b7066204'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/consoleUUID=str:'9a9b4420-ffb1-4da3-850f-5b6049a20ff5'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/nicUUID=str:'48da7132-0fa3-42d9-8ccc-5a7e2004a965'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/subst=dict:'{}'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/vmUUID=str:'d0e46126-9faa-4a03-b6df-3a0c2ea1e27a'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD otopi.plugins.ovirt_hosted_engine_setup.vm.cpu.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD otopi.plugins.ovirt_hosted_engine_setup.vm.image.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/imgDesc=str:'Hosted Engine Image'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/imgUUID=str:'ca1dc26a-596e-4d99-ac4e-9809d7f2d5e4'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/volUUID=str:'7c466957-ef51-4749-aa3d-0224e13aade3'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD otopi.plugins.ovirt_hosted_engine_setup.vm.mac.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD otopi.plugins.ovirt_hosted_engine_setup.vm.machine.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD otopi.plugins.ovirt_hosted_engine_setup.vm.memory.Plugin._init
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
init METHOD otopi.plugins.ovirt_hosted_engine_setup.vm.runvm.Plugin._init
2016-03-23 13:47:15 INFO otopi.plugins.ovirt_hosted_engine_setup.vm.runvm
mixins._generateTempVncPassword:51 Generating a temporary VNC password.
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logFilterKeys=list:'['OVEHOSTED_FIRST_HOST/rootPassword',
'OVEHOSTED_ENGINE/adminPassword', 'OVEHOSTED_VM/cloudinitRootPwd',
'OVEHOSTED_VDSM/passwd']'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/passwd=str:'**FILTERED**'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/passwdValiditySecs=str:'10800'
2016-03-23 13:47:15 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:15 INFO otopi.context context.runSequence:427 Stage:
Environment setup
2016-03-23 13:47:15 DEBUG otopi.context context.runSequence:431 STAGE setup
2016-03-23 13:47:15 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD otopi.plugins.ovirt_hosted_engine_setup.core.misc.Plugin._setup
2016-03-23 13:47:15 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query DEPLOY_PROCEED
2016-03-23 13:47:15 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Continuing will
configure this host for serving as hypervisor and create a VM where you
have to install the engine afterwards.
2016-03-23 13:47:15 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Are you sure you want to
continue? (Yes, No)[Yes]:
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:RECEIVE Yes
2016-03-23 13:47:19 DEBUG otopi.plugins.ovirt_hosted_engine_setup.core.misc
misc._setup:113 Disabling persisting file configuration
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/checkRequirements=bool:'True'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/deployProceed=bool:'True'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD
otopi.plugins.otopi.packagers.dnfpackager.Plugin._setup_existence
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD
otopi.plugins.otopi.packagers.yumpackager.Plugin._setup_existence
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD otopi.plugins.otopi.core.log.Plugin._setup
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Log file:
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160323134714-il6wlo.log
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD otopi.plugins.otopi.core.misc.Plugin._setup
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Version: otopi-1.4.1
(otopi-1.4.1-1.el7.centos)
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD otopi.plugins.otopi.packagers.dnfpackager.Plugin._setup
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD otopi.plugins.otopi.packagers.yumpackager.Plugin._setup
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.shell.Plugin._setup
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD otopi.plugins.ovirt_hosted_engine_setup.vdsmd.cpu.Plugin._setup
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: processor : 0
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: vendor_id : GenuineIntel
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: cpu family : 6
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: model : 15
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: model name : Intel(R) Xeon(R) CPU
5160 @ 3.00GHz
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: stepping : 6
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: microcode : 0xd2
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: cpu MHz : 3000.203
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: cache size : 4096 KB
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: physical id : 0
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: siblings : 2
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: core id : 0
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: cpu cores : 2
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: apicid : 0
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: initial apicid : 0
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: cpuid level : 10
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: wp : yes
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: bogomips : 6000.40
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: clflush size : 64
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: cache_alignment : 64
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: address sizes : 36 bits physical, 48 bits
virtual
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: power management:
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo:
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: processor : 1
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: vendor_id : GenuineIntel
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: cpu family : 6
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: model : 15
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: model name : Intel(R) Xeon(R) CPU
5160 @ 3.00GHz
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: stepping : 6
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: microcode : 0xd2
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: cpu MHz : 3000.203
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: cache size : 4096 KB
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: physical id : 3
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: siblings : 2
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: core id : 0
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: cpu cores : 2
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: apicid : 6
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: initial apicid : 6
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: fpu : yes
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: fpu_exception : yes
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: cpuid level : 10
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: wp : yes
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: bogomips : 6000.06
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: clflush size : 64
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: cache_alignment : 64
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: address sizes : 36 bits physical, 48 bits
virtual
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: power management:
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo:
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: processor : 2
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: vendor_id : GenuineIntel
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: cpu family : 6
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: model : 15
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: model name : Intel(R) Xeon(R) CPU
5160 @ 3.00GHz
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: stepping : 6
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: microcode : 0xd2
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: cpu MHz : 3000.203
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: cache size : 4096 KB
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: physical id : 0
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: siblings : 2
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: core id : 1
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: cpu cores : 2
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: apicid : 1
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: initial apicid : 1
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: fpu : yes
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: fpu_exception : yes
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: cpuid level : 10
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: wp : yes
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: bogomips : 6000.40
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: clflush size : 64
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: cache_alignment : 64
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: address sizes : 36 bits physical, 48 bits
virtual
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: power management:
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo:
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: processor : 3
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: vendor_id : GenuineIntel
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: cpu family : 6
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: model : 15
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: model name : Intel(R) Xeon(R) CPU
5160 @ 3.00GHz
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: stepping : 6
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: microcode : 0xd2
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: cpu MHz : 3000.203
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: cache size : 4096 KB
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: physical id : 3
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: siblings : 2
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: core id : 1
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: cpu cores : 2
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: apicid : 7
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: initial apicid : 7
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: fpu : yes
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: fpu_exception : yes
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: cpuid level : 10
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: wp : yes
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: bogomips : 6000.06
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: clflush size : 64
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: cache_alignment : 64
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: address sizes : 36 bits physical, 48 bits
virtual
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo: power management:
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.getVendor:52 cpuinfo:
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware._prdmsr:125 prdmsr: 5
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware._vmx_enabled_by_bios:140 vmx bios: True
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware._cpuid:88 cpuid: (1782, 133120, 320445, 3219913727)
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware._cpu_has_vmx_support:95 vmx support: True
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware._isVirtualizationEnabled:189 virtualization support GenuineIntel
(cpu: True, bios: True)
2016-03-23 13:47:19 DEBUG otopi.ovirt_host_deploy.hardware
hardware.detect:201 Hardware supports virtualization
2016-03-23 13:47:19 INFO otopi.plugins.ovirt_hosted_engine_setup.vdsmd.cpu
cpu._setup:108 Hardware supports virtualization
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD otopi.plugins.otopi.network.firewalld.Plugin._setup
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD otopi.plugins.otopi.network.hostname.Plugin._setup
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD otopi.plugins.otopi.services.openrc.Plugin._setup
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD otopi.plugins.otopi.services.rhel.Plugin._setup
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD otopi.plugins.otopi.services.systemd.Plugin._setup
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD otopi.plugins.otopi.system.clock.Plugin._setup
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD otopi.plugins.otopi.system.reboot.Plugin._setup
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD
otopi.plugins.ovirt_hosted_engine_setup.engine.add_host.Plugin._setup
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD
otopi.plugins.ovirt_hosted_engine_setup.engine.fqdn.Plugin._setup
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.bridge.Plugin._setup
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.gateway.Plugin._setup
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD
otopi.plugins.ovirt_hosted_engine_setup.pki.vdsmpki.Plugin._setup
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.blockd.Plugin._setup
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.nfs.Plugin._setup
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._setup
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD
otopi.plugins.ovirt_hosted_engine_setup.system.sshd.Plugin._setup
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD
otopi.plugins.ovirt_hosted_engine_setup.system.vdsmenv.Plugin._setup
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom.Plugin._setup
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_disk.Plugin._setup
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.cloud_init.Plugin._setup
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
setup METHOD otopi.plugins.ovirt_hosted_engine_setup.vm.runvm.Plugin._setup
2016-03-23 13:47:19 INFO otopi.context context.runSequence:427 Stage:
Environment packages setup
2016-03-23 13:47:19 DEBUG otopi.context context.runSequence:431 STAGE
internal_packages
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
internal_packages METHOD
otopi.plugins.otopi.core.transaction.Plugin._pre_prepare
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
internal_packages METHOD
otopi.plugins.otopi.network.hostname.Plugin._internal_packages
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
internal_packages METHOD
otopi.plugins.otopi.packagers.dnfpackager.Plugin._internal_packages_end
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
internal_packages METHOD
otopi.plugins.otopi.packagers.yumpackager.Plugin._internal_packages_end
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
internal_packages METHOD
otopi.plugins.otopi.core.transaction.Plugin._pre_end
2016-03-23 13:47:19 INFO otopi.context context.runSequence:427 Stage:
Programs detection
2016-03-23 13:47:19 DEBUG otopi.context context.runSequence:431 STAGE
programs
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
programs METHOD otopi.plugins.otopi.system.command.Plugin._programs
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/chkconfig=str:'/sbin/chkconfig'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/chown=str:'/bin/chown'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/chronyc=str:'/bin/chronyc'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/date=str:'/bin/date'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/dig=str:'/bin/dig'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/file=str:'/bin/file'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/firewall-cmd=str:'/bin/firewall-cmd'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/genisoimage=str:'/bin/genisoimage'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/gluster=str:'/sbin/gluster'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/hwclock=str:'/sbin/hwclock'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/ip=str:'/sbin/ip'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/iscsiadm=str:'/sbin/iscsiadm'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/losetup=str:'/sbin/losetup'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/lsof=str:'/sbin/lsof'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/mkfs=str:'/sbin/mkfs'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/mount=str:'/bin/mount'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/ntpq=str:'/sbin/ntpq'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/openssl=str:'/bin/openssl'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/ping=str:'/bin/ping'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/qemu-img=str:'/bin/qemu-img'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/reboot=str:'/sbin/reboot'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/remote-viewer=str:'/bin/remote-viewer'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/service=str:'/sbin/service'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/sshd=str:'/sbin/sshd'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/sudo=str:'/bin/sudo'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/systemctl=str:'/bin/systemctl'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/truncate=str:'/bin/truncate'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/umount=str:'/bin/umount'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/vdsm-tool=str:'/bin/vdsm-tool'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
programs METHOD otopi.plugins.otopi.services.systemd.Plugin._programs
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:828 execute: ('/bin/systemctl', 'show-environment'),
executable='None', cwd='None', env=None
LANG=en_US.UTF-8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:941 execute-output: ('/bin/systemctl', 'show-environment')
stderr:
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.systemd
systemd._programs:64 registering systemd provider
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
programs METHOD otopi.plugins.otopi.services.rhel.Plugin._programs
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.rhel
plugin.executeRaw:828 execute: ('/bin/systemctl', 'show-environment'),
executable='None', cwd='None', env=None
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.rhel
plugin.executeRaw:878 execute-result: ('/bin/systemctl',
'show-environment'), rc=0
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.rhel
plugin.execute:936 execute-output: ('/bin/systemctl', 'show-environment')
stdout:
LANG=en_US.UTF-8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.rhel
plugin.execute:941 execute-output: ('/bin/systemctl', 'show-environment')
stderr:
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
programs METHOD otopi.plugins.otopi.services.openrc.Plugin._programs
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
programs METHOD
otopi.plugins.ovirt_hosted_engine_setup.ha.ha_services.Plugin._programs
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.systemd
systemd.status:105 check service ovirt-ha-agent status
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:828 execute: ('/bin/systemctl', 'status',
'ovirt-ha-agent.service'), executable='None', cwd='None', env=None
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:878 execute-result: ('/bin/systemctl', 'status',
'ovirt-ha-agent.service'), rc=3
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:936 execute-output: ('/bin/systemctl', 'status',
'ovirt-ha-agent.service') stdout:
● ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring
Agent
Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service;
disabled; vendor preset: disabled)
Active: inactive (dead)
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:941 execute-output: ('/bin/systemctl', 'status',
'ovirt-ha-agent.service') stderr:
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.systemd
systemd.status:105 check service ovirt-ha-broker status
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:828 execute: ('/bin/systemctl', 'status',
'ovirt-ha-broker.service'), executable='None', cwd='None', env=None
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:878 execute-result: ('/bin/systemctl', 'status',
'ovirt-ha-broker.service'), rc=3
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:936 execute-output: ('/bin/systemctl', 'status',
'ovirt-ha-broker.service') stdout:
● ovirt-ha-broker.service - oVirt Hosted Engine High Availability
Communications Broker
Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-broker.service;
disabled; vendor preset: disabled)
Active: inactive (dead)
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:941 execute-output: ('/bin/systemctl', 'status',
'ovirt-ha-broker.service') stderr:
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
programs METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.bridge.Plugin._check_NM
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.systemd
systemd.status:105 check service NetworkManager status
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:828 execute: ('/bin/systemctl', 'status',
'NetworkManager.service'), executable='None', cwd='None', env=None
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:878 execute-result: ('/bin/systemctl', 'status',
'NetworkManager.service'), rc=3
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:936 execute-output: ('/bin/systemctl', 'status',
'NetworkManager.service') stdout:
● NetworkManager.service - Network Manager
Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service;
disabled; vendor preset: enabled)
Active: inactive (dead)
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:941 execute-output: ('/bin/systemctl', 'status',
'NetworkManager.service') stderr:
2016-03-23 13:47:19 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.network.bridge bridge._check_NM:92
NetworkManager: False
2016-03-23 13:47:19 INFO otopi.context context.runSequence:427 Stage:
Environment setup
2016-03-23 13:47:19 DEBUG otopi.context context.runSequence:431 STAGE
late_setup
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
late_setup METHOD
otopi.plugins.ovirt_hosted_engine_setup.vdsmd.vdsmconf.Plugin._late_setup
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
late_setup METHOD
otopi.plugins.ovirt_hosted_engine_setup.system.vdsmenv.Plugin._late_setup
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.systemd
systemd.status:105 check service vdsmd status
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:828 execute: ('/bin/systemctl', 'status',
'vdsmd.service'), executable='None', cwd='None', env=None
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:878 execute-result: ('/bin/systemctl', 'status',
'vdsmd.service'), rc=0
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:936 execute-output: ('/bin/systemctl', 'status',
'vdsmd.service') stdout:
● vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor
preset: enabled)
Active: active (running) since Tue 2016-03-22 18:00:09 EDT; 19h ago
Main PID: 3327 (vdsm)
CGroup: /system.slice/vdsmd.service
└─3327 /usr/bin/python /usr/share/vdsm/vdsm
Mar 22 18:00:10 dul-ovrtst01 python[3327]: DIGEST-MD5 client step 1
Mar 22 18:00:10 dul-ovrtst01 python[3327]: DIGEST-MD5 ask_user_info()
Mar 22 18:00:10 dul-ovrtst01 python[3327]: DIGEST-MD5 client step 1
Mar 22 18:00:10 dul-ovrtst01 python[3327]: DIGEST-MD5 ask_user_info()
Mar 22 18:00:10 dul-ovrtst01 python[3327]: DIGEST-MD5 make_client_response()
Mar 22 18:00:10 dul-ovrtst01 python[3327]: DIGEST-MD5 client step 2
Mar 22 18:00:10 dul-ovrtst01 python[3327]: DIGEST-MD5
parse_server_challenge()
Mar 22 18:00:10 dul-ovrtst01 python[3327]: DIGEST-MD5 ask_user_info()
Mar 22 18:00:10 dul-ovrtst01 python[3327]: DIGEST-MD5 make_client_response()
Mar 22 18:00:10 dul-ovrtst01 python[3327]: DIGEST-MD5 client step 3
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:941 execute-output: ('/bin/systemctl', 'status',
'vdsmd.service') stderr:
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/vdscli=instance:'<ServerProxy for 0.0.0.0:54321/RPC2>'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
late_setup METHOD
otopi.plugins.ovirt_hosted_engine_setup.pki.vdsmpki.Plugin._late_setup
2016-03-23 13:47:19 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.pki.vdsmpki plugin.executeRaw:878
execute-result: ('/bin/openssl', 'x509', '-noout', '-text', '-in',
'/etc/pki/vdsm/libvirt-spice/server-cert.pem'), rc=0
Validity
Not Before: Mar 22 22:00:33 2016 GMT
Not After : Mar 22 22:00:33 2019 GMT
Subject: C=EN, L=Test, O=Test, CN=Test
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (1024 bit)
Modulus:
00:d7:d0:9d:89:c9:df:79:91:8c:c8:ec:19:2c:d1:
d0:48:ed:d4:55:77:cd:a1:b7:26:95:1c:4c:0a:ab:
fe:0c:7c:e7:ea:ec:46:d2:bf:30:9f:7e:c2:0c:52:
2f:41:91:a1:8e:ae:1c:b9:3f:2d:05:4d:83:2e:b0:
28:91:fe:fa:55:d5:a6:3d:77:b7:a1:20:66:4e:ff:
0c:e6:da:65:77:0d:b9:5e:99:95:6a:03:d4:b3:8d:4c:d4:df:
0d:87:43:ce:6c:30:2d:88:a2:92:ad:22:a2:13:0c:0b:43:a2:
fe:d1:c8:17:00:20:c6:dd:59:3e:8c:88:82:13:ca:dc:19:6d:
70:97
2016-03-23 13:47:19 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.pki.vdsmpki plugin.execute:941
execute-output: ('/bin/openssl', 'x509', '-noout', '-text', '-in',
'/etc/pki/vdsm/libvirt-spice/server-cert.pem') stderr:
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/spicePkiSubject=unicode:'C=EN, L=Test, O=Test, CN=Test'
2016-03-23 13:47:19 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
late_setup METHOD
otopi.plugins.ovirt_hosted_engine_setup.system.packages.Plugin._late_setup
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
late_setup METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.configurevm.Plugin._late_setup
2016-03-23 13:47:19 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.configurevm
configurevm._late_setup:98 []
2016-03-23 13:47:19 INFO otopi.context context.runSequence:427 Stage:
Environment customization
2016-03-23 13:47:19 DEBUG otopi.context context.runSequence:431 STAGE
customization
2016-03-23 13:47:19 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.otopi.network.firewalld.Plugin._customization
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.systemd
systemd.exists:88 check if service firewalld exists
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:828 execute: ('/bin/systemctl', 'show', '-p',
'LoadState', 'firewalld.service'), executable='None', cwd='None', env=None
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:878 execute-result: ('/bin/systemctl', 'show', '-p',
'LoadState', 'firewalld.service'), rc=0
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:936 execute-output: ('/bin/systemctl', 'show', '-p',
'LoadState', 'firewalld.service') stdout:
LoadState=loaded
2016-03-23 13:47:19 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:941 execute-output: ('/bin/systemctl', 'show', '-p',
'LoadState', 'firewalld.service') stderr:
2016-03-23 13:47:20 DEBUG otopi.plugins.otopi.network.firewalld
firewalld._get_firewalld_cmd_version:120 firewalld version: 0.3.9
2016-03-23 13:47:20 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:20 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/firewalldAvailable=bool:'True'
2016-03-23 13:47:20 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:20 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD otopi.plugins.otopi.core.config.Plugin._customize1
2016-03-23 13:47:20 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD otopi.plugins.otopi.dialog.cli.Plugin._customize
2016-03-23 13:47:20 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 13:47:20 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.titles.Plugin._storage_start
2016-03-23 13:47:20 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND
2016-03-23 13:47:20 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND --== STORAGE
CONFIGURATION ==--
2016-03-23 13:47:20 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND
2016-03-23 13:47:20 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._early_customization
2016-03-23 13:47:20 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND During customization use
CTRL-D to abort.
2016-03-23 13:47:20 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.storage.storage
storage._check_existing_pools:1010 _check_existing_pools
2016-03-23 13:47:20 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.storage.storage
storage._check_existing_pools:1011 getConnectedStoragePoolsList
2016-03-23 13:47:20 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.storage.storage
storage._check_existing_pools:1013 {'status': {'message': 'OK', 'code': 0},
'poollist': []}
2016-03-23 13:47:20 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query OVEHOSTED_STORAGE_DOMAIN_TYPE
2016-03-23 13:47:20 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Please specify the
storage you would like to use (glusterfs, iscsi, fc, nfs3, nfs4)[nfs3]:
2016-03-23 13:47:30 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:RECEIVE fc
2016-03-23 13:47:30 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:47:30 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/domainType=str:'fc'
2016-03-23 13:47:30 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:47:30 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.glusterfs.Plugin._customization
2016-03-23 13:47:30 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.glusterfs.Plugin._brick_customization
2016-03-23 13:47:30 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 13:47:30 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.nfs.Plugin._customization
2016-03-23 13:47:30 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 13:47:30 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.blockd.Plugin._customization
2016-03-23 13:47:52 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND The following luns have
been found on the requested target:
2016-03-23 13:47:52 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND [1]
3600508b10018433953524235374f0007 203GiB COMPAQ MSA1000 VOLUME
2016-03-23 13:47:52 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND status: used,
paths: 1 active
2016-03-23 13:47:52 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND
2016-03-23 13:47:52 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND [2]
3600143800006b070a577938b0f0b000e 1396GiB COMPAQ MSA1000 VOLUME
2016-03-23 13:47:52 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND status: used,
paths: 1 active
2016-03-23 13:47:52 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND
2016-03-23 13:47:52 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND [3]
3600508b10010443953555538485a001a 1862GiB HP LOGICAL VOLUME
2016-03-23 13:47:52 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND status: used,
paths: 1 active
2016-03-23 13:47:52 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND
2016-03-23 13:47:52 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query OVEHOSTED_STORAGE_BLOCKD_LUN
2016-03-23 13:47:52 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Please select the
destination LUN (1, 2, 3) [1]:
2016-03-23 13:48:21 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:RECEIVE 1
2016-03-23 13:48:42 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.storage.blockd
blockd._validate_domain:430 Available space on None is 208374Mb
2016-03-23 13:48:42 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query OVEHOSTED_FORCE_CREATEVG
2016-03-23 13:48:42 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND The selected device is
already used.
2016-03-23 13:48:42 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND To create a vg on this
device, you must use Force.
2016-03-23 13:48:42 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND WARNING: This will
destroy existing data on the device.
2016-03-23 13:48:42 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND (Force, Abort)[Abort]?
2016-03-23 13:49:29 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:RECEIVE Force
2016-03-23 13:49:29 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:49:29 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_ENGINE/forceCreateVG=bool:'True'
2016-03-23 13:49:29 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/GUID=str:'3600508b10018433953524235374f0007'
2016-03-23 13:49:29 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/LunID=str:'3600508b10018433953524235374f0007'
2016-03-23 13:49:29 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/blockDeviceSizeGB=int:'203'
2016-03-23 13:49:30 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:49:30 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/isAdditionalHost=bool:'False'
2016-03-23 13:49:30 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.nfs.Plugin._late_customization
2016-03-23 13:49:30 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 13:49:30 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.titles.Plugin._storage_end
2016-03-23 13:49:30 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.titles.Plugin._system_start
2016-03-23 13:49:30 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND
2016-03-23 13:49:30 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND --== SYSTEM
CONFIGURATION ==--
2016-03-23 13:49:30 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND
2016-03-23 13:49:30 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.remote_answerfile.Plugin._customization
2016-03-23 13:49:30 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 13:49:30 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.system.sshd.Plugin._customization
2016-03-23 13:49:30 DEBUG otopi.plugins.otopi.services.systemd
systemd.exists:88 check if service sshd exists
2016-03-23 13:49:30 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:828 execute: ('/bin/systemctl', 'show', '-p',
'LoadState', 'sshd.service'), executable='None', cwd='None', env=None
2016-03-23 13:49:30 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:878 execute-result: ('/bin/systemctl', 'show', '-p',
'LoadState', 'sshd.service'), rc=0
2016-03-23 13:49:30 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:936 execute-output: ('/bin/systemctl', 'show', '-p',
'LoadState', 'sshd.service') stdout:
LoadState=loaded
2016-03-23 13:49:30 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:941 execute-output: ('/bin/systemctl', 'show', '-p',
'LoadState', 'sshd.service') stderr:
2016-03-23 13:49:30 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.system.sshd plugin.executeRaw:828
execute: ('/sbin/sshd', '-T'), executable='None', cwd='None', env=None
2016-03-23 13:49:30 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.system.sshd plugin.executeRaw:878
execute-result: ('/sbin/sshd', '-T'), rc=0
2016-03-23 13:49:30 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.system.sshd plugin.execute:936
execute-output: ('/sbin/sshd', '-T') stdout:
port 22
protocol 2
addressfamily any
listenaddress 0.0.0.0:22
listenaddress [::]:22
usepam yes
serverkeybits 1024
logingracetime 120
keyregenerationinterval 3600
x11displayoffset 10
maxauthtries 6
maxsessions 10
clientaliveinterval 0
clientalivecountmax 3
permitrootlogin yes
ignorerhosts yes
ignoreuserknownhosts no
rhostsrsaauthentication no
hostbasedauthentication no
hostbasedusesnamefrompacketonly no
rsaauthentication yes
pubkeyauthentication yes
kerberosauthentication no
kerberosorlocalpasswd yes
kerberosticketcleanup yes
gssapiauthentication yes
gssapicleanupcredentials no
gssapikeyexchange no
gssapistrictacceptorcheck yes
gssapistorecredentialsonrekey no
gssapikexalgorithms gss-gex-sha1-,gss-group1-sha1-,gss-group14-sha1-
passwordauthentication yes
kbdinteractiveauthentication no
challengeresponseauthentication no
printmotd yes
printlastlog yes
x11forwarding yes
x11uselocalhost yes
permittty yes
strictmodes yes
tcpkeepalive yes
permitemptypasswords no
permituserenvironment no
uselogin no
compression delayed
gatewayports no
showpatchlevel no
usedns yes
allowtcpforwarding yes
allowagentforwarding yes
useprivilegeseparation sandbox
kerberosusekuserok yes
gssapienablek5users no
pidfile /var/run/sshd.pid
xauthlocation /usr/bin/xauth
banner none
versionaddendum none
loglevel INFO
syslogfacility AUTHPRIV
authorizedkeysfile .ssh/authorized_keys
hostkey /etc/ssh/ssh_host_rsa_key
hostkey /etc/ssh/ssh_host_ecdsa_key
hostkey /etc/ssh/ssh_host_ed25519_key
acceptenv LANG
acceptenv LC_CTYPE
acceptenv LC_NUMERIC
acceptenv LC_TIME
acceptenv LC_COLLATE
acceptenv LC_MONETARY
acceptenv LC_MESSAGES
acceptenv LC_PAPER
acceptenv LC_NAME
acceptenv LC_ADDRESS
acceptenv LC_TELEPHONE
acceptenv LC_MEASUREMENT
acceptenv LC_IDENTIFICATION
acceptenv LC_ALL
acceptenv LANGUAGE
acceptenv XMODIFIERS
subsystem sftp /usr/libexec/openssh/sftp-server
maxstartups 10:30:100
permittunnel no
ipqos lowdelay throughput
rekeylimit 0 0
permitopen any
2016-03-23 13:49:30 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.system.sshd plugin.execute:941
execute-output: ('/sbin/sshd', '-T') stderr:
2016-03-23 13:49:30 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:49:30 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/sshdPort=int:'22'
2016-03-23 13:49:30 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:49:30 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.titles.Plugin._system_end
2016-03-23 13:49:30 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.titles.Plugin._network_start
2016-03-23 13:49:30 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND
2016-03-23 13:49:30 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND --== NETWORK
CONFIGURATION ==--
2016-03-23 13:49:30 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND
2016-03-23 13:49:30 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.bridge.Plugin._customization
2016-03-23 13:49:30 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.network.bridge
bridge._customization:136 Detected bond device bond0 without slaves
2016-03-23 13:49:30 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.network.bridge
bridge._customization:143 Nics valid: enp3s0,enp5s0
2016-03-23 13:49:30 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query ovehosted_bridge_if
2016-03-23 13:49:30 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Please indicate a nic to
set ovirtmgmt bridge on: (enp3s0, enp5s0) [enp3s0]:
2016-03-23 13:49:38 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:49:38 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/bridgeIf=str:'enp3s0'
2016-03-23 13:49:38 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:49:38 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.bridge.Plugin._get_existing_bridge_interface
2016-03-23 13:49:38 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 13:49:38 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.firewall_manager.Plugin._customization
2016-03-23 13:49:38 DEBUG otopi.plugins.otopi.services.systemd
systemd.exists:88 check if service iptables exists
2016-03-23 13:49:38 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:828 execute: ('/bin/systemctl', 'show', '-p',
'LoadState', 'iptables.service'), executable='None', cwd='None', env=None
2016-03-23 13:49:38 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:878 execute-result: ('/bin/systemctl', 'show', '-p',
'LoadState', 'iptables.service'), rc=0
2016-03-23 13:49:38 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:936 execute-output: ('/bin/systemctl', 'show', '-p',
'LoadState', 'iptables.service') stdout:
LoadState=loaded
2016-03-23 13:49:38 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:941 execute-output: ('/bin/systemctl', 'show', '-p',
'LoadState', 'iptables.service') stderr:
2016-03-23 13:49:38 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query OHOSTED_NETWORK_FIREWALL_MANAGER
2016-03-23 13:49:38 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND iptables was detected on
your computer, do you wish setup to configure it? (Yes, No)[Yes]:
2016-03-23 13:49:46 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/firewalldAvailable=bool:'False'
2016-03-23 13:49:46 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/iptablesEnable=bool:'True'
2016-03-23 13:49:46 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/firewallManager=str:'iptables'
2016-03-23 13:49:46 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:49:46 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.firewall.Plugin._configuration
2016-03-23 13:49:46 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:49:46 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/firewalldServices=list:'[{'directory': 'base', 'name':
'hosted-console'}]'
2016-03-23 13:49:46 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:49:46 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.gateway.Plugin._customization
2016-03-23 13:49:46 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query OVEHOSTED_GATEWAY
2016-03-23 13:49:46 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Please indicate a
pingable gateway IP address [192.168.200.1]:
2016-03-23 13:49:47 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.network.gateway
plugin.executeRaw:828 execute: ('/bin/ping', '-c', '1', '192.168.200.1'),
executable='None', cwd='None', env=None
2016-03-23 13:49:47 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.network.gateway
plugin.executeRaw:878 execute-result: ('/bin/ping', '-c', '1',
'192.168.200.1'), rc=0
2016-03-23 13:49:47 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.network.gateway plugin.execute:936
execute-output: ('/bin/ping', '-c', '1', '192.168.200.1') stdout:
PING 192.168.200.1 (192.168.200.1) 56(84) bytes of data.
64 bytes from 192.168.200.1: icmp_seq=1 ttl=64 time=43.3 ms
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 43.386/43.386/43.386/0.000 ms
2016-03-23 13:49:47 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:49:47 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/gateway=str:'192.168.200.1'
2016-03-23 13:49:47 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:49:48 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.titles.Plugin._network_end
2016-03-23 13:49:48 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.titles.Plugin._vm_start
2016-03-23 13:49:48 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND
2016-03-23 13:49:48 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND --== VM CONFIGURATION
==--
2016-03-23 13:49:48 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query OVEHOSTED_VMENV_BOOT
2016-03-23 13:49:48 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Booting from cdrom on
RHEL7 is ISO image based only, as cdrom passthrough is disabled (BZ760885)
2016-03-23 13:49:48 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Please specify the
device to boot the VM from (choose disk for the oVirt engine appliance)
2016-03-23 13:49:48 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND (cdrom, disk, pxe)
[disk]:
2016-03-23 13:50:52 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:RECEIVE cdrom
2016-03-23 13:50:52 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:50:52 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/vmBoot=str:'cdrom'
2016-03-23 13:50:52 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 13:50:52 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.vdsmd.cpu.Plugin._customization
2016-03-23 13:50:52 DEBUG otopi.plugins.ovirt_hosted_engine_setup.vdsmd.cpu
cpu._customization:135 Compatible CPU models are: ['model_Conroe',
'model_coreduo', 'model_core2duo', 'model_n270']
2016-03-23 13:50:52 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND The following CPU types
are supported by this host:
2016-03-23 13:50:52 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND - model_Conroe: Intel
Conroe Family
2016-03-23 13:50:52 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query ovehosted_vmenv_cpu_type
2016-03-23 13:50:52 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Please specify the CPU
type to be used by the VM [model_Conroe]:
2016-03-23 13:51:01 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:51:01 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/cpu=str:'model_Conroe'
2016-03-23 13:51:01 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom.Plugin._customization
2016-03-23 13:51:01 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query OVEHOSTED_VMENV_CDROM
2016-03-23 13:51:01 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Please specify path to
installation media you would like to use [None]:
2016-03-23 13:51:59 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:RECEIVE
/root/Downloads/CentOS-7-x86_64-Everything-1511.iso
2016-03-23 13:51:59 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.executeRaw:878
execute-result: ('/bin/sudo', '-u', 'qemu', 'test', '-r',
'/root/Downloads/CentOS-7-x86_64-Everything-1511.iso'), rc=1
2016-03-23 13:51:59 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.executeRaw:828
execute: ('/bin/file', '-b', '-i',
'/root/Downloads/CentOS-7-x86_64-Everything-1511.iso'), executable='None',
cwd='None', env=None
2016-03-23 13:51:59 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.executeRaw:878
execute-result: ('/bin/file', '-b', '-i',
'/root/Downloads/CentOS-7-x86_64-Everything-1511.iso'), rc=0
2016-03-23 13:51:59 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.execute:936
execute-output: ('/bin/file', '-b', '-i',
'/root/Downloads/CentOS-7-x86_64-Everything-1511.iso') stdout:
application/x-iso9660-image; charset=binary
2016-03-23 13:51:59 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.execute:941
execute-output: ('/bin/file', '-b', '-i',
'/root/Downloads/CentOS-7-x86_64-Everything-1511.iso') stderr:
2016-03-23 13:51:59 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query OVEHOSTED_VMENV_CDROM
2016-03-23 13:51:59 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Please specify path to
installation media you would like to use
[/root/Downloads/CentOS-7-x86_64-Everything-1511.iso]:
2016-03-23 13:52:14 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.executeRaw:878
execute-result: ('/bin/sudo', '-u', 'qemu', 'test', '-r',
'/root/Downloads/CentOS-7-x86_64-Everything-1511.iso'), rc=1
2016-03-23 13:52:14 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.execute:936
execute-output: ('/bin/sudo', '-u', 'qemu', 'test', '-r',
'/root/Downloads/CentOS-7-x86_64-Everything-1511.iso') stdout:
2016-03-23 13:52:14 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.executeRaw:828
execute: ('/bin/file', '-b', '-i',
'/root/Downloads/CentOS-7-x86_64-Everything-1511.iso'), executable='None',
cwd='None', env=None
2016-03-23 13:52:14 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.executeRaw:878
execute-result: ('/bin/file', '-b', '-i',
'/root/Downloads/CentOS-7-x86_64-Everything-1511.iso'), rc=0
2016-03-23 13:52:14 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.execute:936
execute-output: ('/bin/file', '-b', '-i',
'/root/Downloads/CentOS-7-x86_64-Everything-1511.iso') stdout:
application/x-iso9660-image; charset=binary
2016-03-23 13:52:14 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.execute:941
execute-output: ('/bin/file', '-b', '-i',
'/root/Downloads/CentOS-7-x86_64-Everything-1511.iso') stderr:
2016-03-23 13:52:14 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query OVEHOSTED_VMENV_CDROM
2016-03-23 13:52:14 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Please specify path to
installation media you would like to use
[/root/Downloads/CentOS-7-x86_64-Everything-1511.iso]:
2016-03-23 13:53:02 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:RECEIVE
/root/Downloads/oVirt-Engine-Appliance-CentOS-x86_64-7-20160321.ova
2016-03-23 13:53:02 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.executeRaw:878
execute-result: ('/bin/sudo', '-u', 'qemu', 'test', '-r',
'/root/Downloads/oVirt-Engine-Appliance-CentOS-x86_64-7-20160321.ova'), rc=1
2016-03-23 13:53:02 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.execute:936
execute-output: ('/bin/sudo', '-u', 'qemu', 'test', '-r',
'/root/Downloads/oVirt-Engine-Appliance-CentOS-x86_64-7-20160321.ova')
stdout:
2016-03-23 13:53:02 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom
boot_cdrom._check_iso_readable:94 read test failed
2016-03-23 13:53:02 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.executeRaw:878
execute-result: ('/bin/file', '-b', '-i',
'/root/Downloads/oVirt-Engine-Appliance-CentOS-x86_64-7-20160321.ova'), rc=0
2016-03-23 13:53:02 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.execute:936
execute-output: ('/bin/file', '-b', '-i',
'/root/Downloads/oVirt-Engine-Appliance-CentOS-x86_64-7-20160321.ova')
stdout:
application/x-gzip; charset=binary
2016-03-23 13:53:02 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.execute:941
execute-output: ('/bin/file', '-b', '-i',
'/root/Downloads/oVirt-Engine-Appliance-CentOS-x86_64-7-20160321.ova')
stderr:
2016-03-23 13:53:02 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query OVEHOSTED_VMENV_CDROM
2016-03-23 13:54:19 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.execute:941
execute-output: ('/bin/sudo', '-u', 'qemu', 'test', '-r',
'/root/Downloads/oVirt-Engine-Appliance-CentOS-x86_64-7-20160321.ova')
stderr:
2016-03-23 13:54:19 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom
boot_cdrom._check_iso_readable:94 read test failed
2016-03-23 13:54:19 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.executeRaw:878
execute-result: ('/bin/file', '-b', '-i',
'/root/Downloads/oVirt-Engine-Appliance-CentOS-x86_64-7-20160321.ova'), rc=0
2016-03-23 13:54:19 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.execute:936
execute-output: ('/bin/file', '-b', '-i',
'/root/Downloads/oVirt-Engine-Appliance-CentOS-x86_64-7-20160321.ova')
stdout:
application/x-gzip; charset=binary
2016-03-23 13:54:19 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.execute:941
execute-output: ('/bin/file', '-b', '-i',
'/root/Downloads/oVirt-Engine-Appliance-CentOS-x86_64-7-20160321.ova')
stderr:
2016-03-23 13:54:19 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query OVEHOSTED_VMENV_CDROM
2016-03-23 13:54:21 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.executeRaw:878
execute-result: ('/bin/sudo', '-u', 'qemu', 'test', '-r',
'/root/Downloads/oVirt-Engine-Appliance-CentOS-x86_64-7-20160321.ova'), rc=1
2016-03-23 13:54:21 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.execute:936
execute-output: ('/bin/sudo', '-u', 'qemu', 'test', '-r',
'/root/Downloads/oVirt-Engine-Appliance-CentOS-x86_64-7-20160321.ova')
stdout:
2016-03-23 13:54:21 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.execute:941
execute-output: ('/bin/sudo', '-u', 'qemu', 'test', '-r',
'/root/Downloads/oVirt-Engine-Appliance-CentOS-x86_64-7-20160321.ova')
stderr:
2016-03-23 13:54:21 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom
boot_cdrom._check_iso_readable:94 read test failed
2016-03-23 13:54:21 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.executeRaw:878
execute-result: ('/bin/file', '-b', '-i',
'/root/Downloads/oVirt-Engine-Appliance-CentOS-x86_64-7-20160321.ova'), rc=0
2016-03-23 13:54:21 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.execute:936
execute-output: ('/bin/file', '-b', '-i',
'/root/Downloads/oVirt-Engine-Appliance-CentOS-x86_64-7-20160321.ova')
stdout:
application/x-gzip; charset=binary
2016-03-23 13:54:21 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.execute:941
execute-output: ('/bin/file', '-b', '-i',
'/root/Downloads/oVirt-Engine-Appliance-CentOS-x86_64-7-20160321.ova')
stderr:
2016-03-23 13:54:21 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query OVEHOSTED_VMENV_CDROM
2016-03-23 13:54:31 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:RECEIVE /root/Downloads
2016-03-23 13:54:31 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.executeRaw:828
execute: ('/bin/sudo', '-u', 'qemu', 'test', '-r', '/root/Downloads'),
executable='None', cwd='None', env=None
2016-03-23 13:54:31 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.executeRaw:878
execute-result: ('/bin/sudo', '-u', 'qemu', 'test', '-r',
'/root/Downloads'), rc=1
2016-03-23 13:54:31 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.execute:936
execute-output: ('/bin/sudo', '-u', 'qemu', 'test', '-r',
'/root/Downloads') stdout:
2016-03-23 13:54:31 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.execute:941
execute-output: ('/bin/sudo', '-u', 'qemu', 'test', '-r',
'/root/Downloads') stderr:
2016-03-23 13:54:31 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom
boot_cdrom._check_iso_readable:94 read test failed
2016-03-23 13:54:31 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.executeRaw:828
execute: ('/bin/file', '-b', '-i', '/root/Downloads'), executable='None',
cwd='None', env=None
2016-03-23 13:54:31 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.executeRaw:878
execute-result: ('/bin/file', '-b', '-i', '/root/Downloads'), rc=0
2016-03-23 13:54:31 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.execute:936
execute-output: ('/bin/file', '-b', '-i', '/root/Downloads') stdout:
inode/directory; charset=binary
2016-03-23 13:54:31 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.execute:941
execute-output: ('/bin/file', '-b', '-i', '/root/Downloads') stderr:
2016-03-23 13:54:31 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query OVEHOSTED_VMENV_CDROM
2016-03-23 13:54:31 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Please specify path to
installation media you would like to use [/root/Downloads]:
2016-03-23 13:54:46 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.executeRaw:828
execute: ('/bin/sudo', '-u', 'qemu', 'test', '-r', '/root/Downloads'),
executable='None', cwd='None', env=None
2016-03-23 13:54:46 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.executeRaw:878
execute-result: ('/bin/sudo', '-u', 'qemu', 'test', '-r',
'/root/Downloads'), rc=1
2016-03-23 13:54:46 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.execute:936
execute-output: ('/bin/sudo', '-u', 'qemu', 'test', '-r',
'/root/Downloads') stdout:
2016-03-23 13:54:46 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.execute:941
execute-output: ('/bin/sudo', '-u', 'qemu', 'test', '-r',
'/root/Downloads') stderr:
2016-03-23 13:54:46 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom
boot_cdrom._check_iso_readable:94 read test failed
2016-03-23 13:54:46 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.executeRaw:828
execute: ('/bin/file', '-b', '-i', '/root/Downloads'), executable='None',
cwd='None', env=None
2016-03-23 13:54:46 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.executeRaw:878
execute-result: ('/bin/file', '-b', '-i', '/root/Downloads'), rc=0
2016-03-23 13:54:46 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.execute:936
execute-output: ('/bin/file', '-b', '-i', '/root/Downloads') stdout:
inode/directory; charset=binary
2016-03-23 13:54:46 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_cdrom plugin.execute:941
execute-output: ('/bin/file', '-b', '-i', '/root/Downloads') stderr:
2016-03-23 13:54:46 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query OVEHOSTED_VMENV_CDROM
2016-03-23 13:54:46 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Please specify path to
installation media you would like to use [/root/Downloads]:
2016-03-23 13:55:31 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:RECEIVE /dev/cdrom
2016-03-23 13:55:31 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:55:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/vmCDRom=str:'/dev/cdrom'
2016-03-23 13:55:31 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
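A note on the repeated "read test failed" entries above: setup probes each candidate path with `sudo -u qemu test -r <path>`, and every probe against `/root/Downloads/...` returned rc=1. On EL7 `/root` is typically mode 0550 owned by root, so the qemu user cannot traverse into it no matter what mode the ISO itself has, which is why only `/dev/cdrom` was finally accepted. A minimal, self-contained demonstration of that mechanism (it uses a temp directory as a stand-in for /root, since reproducing the real probe needs root and the qemu account):

```shell
#!/bin/sh
# A 0700 parent directory blocks read access for other users even when
# the file inside is 0644 -- the same situation the qemu probe hit.
d=$(mktemp -d)            # stand-in for /root/Downloads
chmod 700 "$d"
touch "$d/image.iso"
chmod 644 "$d/image.iso"
stat -c '%a %n' "$d" "$d/image.iso"
rm -rf "$d"

# The real probe (needs root, qemu user must exist) would be:
#   sudo -u qemu test -r /root/Downloads/CentOS-7-x86_64-Everything-1511.iso; echo rc=$?
# Moving the ISO to a world-traversable location (path is an example)
# usually clears it:
#   mkdir -p /var/tmp/iso && mv /root/Downloads/*.iso /var/tmp/iso/
#   chmod 644 /var/tmp/iso/*.iso
```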
2016-03-23 13:55:31 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.cpu.Plugin._customization
2016-03-23 13:55:31 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query ovehosted_vmenv_cpu
2016-03-23 13:55:31 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Please specify the
number of virtual CPUs for the VM [Defaults to minimum requirement: 2]:
2016-03-23 13:55:48 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:RECEIVE 4
2016-03-23 13:55:48 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:55:48 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/vmVCpus=str:'4'
2016-03-23 13:55:48 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:55:48 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.image.Plugin._disk_customization
2016-03-23 13:55:48 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query ovehosted_vmenv_mem
2016-03-23 13:55:48 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Please specify the disk
size of the VM in GB [Defaults to minimum requirement: 25]:
2016-03-23 13:58:25 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:RECEIVE 40
2016-03-23 13:58:25 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:58:25 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/imgSizeGB=str:'40'
2016-03-23 13:58:25 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:58:25 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.mac.Plugin._customization
2016-03-23 13:58:25 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query ovehosted_vmenv_mac
2016-03-23 13:58:25 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND You may specify a
unicast MAC address for the VM or accept a randomly generated default
[00:16:3e:14:57:04]:
2016-03-23 13:58:27 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:58:27 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/vmMACAddr=str:'00:16:3e:14:57:04'
2016-03-23 13:58:27 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:58:27 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.memory.Plugin._customization
2016-03-23 13:58:28 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query ovehosted_vmenv_mem
2016-03-23 13:58:28 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Please specify the
memory size of the VM in MB [Defaults to minimum requirement: 4096]:
2016-03-23 13:58:32 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:58:32 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/vmMemSizeMB=int:'4096'
2016-03-23 13:58:32 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:58:32 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.runvm.Plugin._customization
2016-03-23 13:58:32 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query OVEHOSTED_VM_CONSOLE_TYPE
2016-03-23 13:58:32 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Please specify the
console type you would like to use to connect to the VM (vnc, spice) [vnc]:
2016-03-23 13:58:41 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:58:41 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/consoleType=str:'vnc'
2016-03-23 13:58:41 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:58:41 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.titles.Plugin._vm_end
2016-03-23 13:58:41 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.titles.Plugin._engine_start
2016-03-23 13:58:41 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND
2016-03-23 13:58:41 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND --== HOSTED ENGINE
CONFIGURATION ==--
2016-03-23 13:58:41 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND
2016-03-23 13:58:41 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.engine.add_host.Plugin._customization
2016-03-23 13:58:41 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query APP_HOST_NAME
2016-03-23 13:58:41 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Enter the name which
will be used to identify this host inside the Administrator Portal
[hosted_engine_1]:
2016-03-23 13:59:20 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:RECEIVE dul-ovrtst02
2016-03-23 13:59:20 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query ENGINE_ADMIN_PASSWORD
2016-03-23 13:59:20 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Enter 'admin@internal'
user password that will be used for accessing the Administrator Portal:
2016-03-23 13:59:33 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query ENGINE_ADMIN_PASSWORD
2016-03-23 13:59:33 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Confirm 'admin@internal'
user password:
2016-03-23 13:59:39 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 13:59:39 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_ENGINE/adminPassword=str:'**FILTERED**'
2016-03-23 13:59:39 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_ENGINE/appHostName=str:'dul-ovrtst02'
2016-03-23 13:59:39 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 13:59:39 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.engine.fqdn.Plugin._customization
2016-03-23 13:59:39 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query OVEHOSTED_NETWORK_FQDN
2016-03-23 13:59:39 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Please provide the FQDN
for the engine you would like to use.
2016-03-23 13:59:39 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND This needs to match the
FQDN that you will use for the engine installation within the VM.
2016-03-23 13:59:39 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Note: This will be the
FQDN of the VM you are now going to create,
2016-03-23 13:59:39 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND it should not point to
the base host or to any other existing machine.
2016-03-23 13:59:39 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Engine FQDN: []:
2016-03-23 14:01:46 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:RECEIVE dul-ovrteng01.acs.net
2016-03-23 14:01:46 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.engine.fqdn
hostname.test_hostname:411 test_hostname exception
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/ovirt_setup_lib/hostname.py", line
407, in test_hostname
not_local_text,
File "/usr/lib/python2.7/site-packages/ovirt_setup_lib/hostname.py", line
252, in _validateFQDNresolvability
fqdn=fqdn,
RuntimeError: dul-ovrteng01.acs.net did not resolve into an IP address
2016-03-23 14:01:46 ERROR
otopi.plugins.ovirt_hosted_engine_setup.engine.fqdn dialog.queryEnvKey:115
Host name is not valid: dul-ovrteng01.acs.net did not resolve into an IP
address
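The RuntimeError raised from `hostname.py` above is a plain forward-lookup check: the FQDN entered must resolve to an IP address on the host running setup. You can verify this yourself before re-answering the prompt; a sketch, using the name from this log as a placeholder:

```shell
#!/bin/sh
# Check whether the intended engine FQDN resolves, the same way the
# resolver (NSS: /etc/hosts then DNS) will see it during setup.
FQDN=dul-ovrteng01.acs.net   # substitute your own engine FQDN
if getent hosts "$FQDN" >/dev/null; then
    echo "$FQDN resolves to: $(getent hosts "$FQDN" | awk '{print $1}')"
else
    echo "$FQDN does not resolve: add a DNS record or an /etc/hosts entry"
fi
```

If DNS is not yet set up, an `/etc/hosts` entry on the host is enough to pass this check, but note the prompt's warning: the engine FQDN must point at the engine VM's future address, not at the base host itself.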
2016-03-23 14:01:46 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query OVEHOSTED_NETWORK_FQDN
2016-03-23 14:01:46 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Please provide the FQDN
for the engine you would like to use.
2016-03-23 14:01:46 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND This needs to match the
FQDN that you will use for the engine installation within the VM.
2016-03-23 14:01:46 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Note: This will be the
FQDN of the VM you are now going to create,
2016-03-23 14:01:46 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND it should not point to
the base host or to any other existing machine.
2016-03-23 14:01:46 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Engine FQDN: []:
2016-03-23 14:02:32 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:RECEIVE dul-ovrtst02.acs.net
2016-03-23 14:02:32 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.engine.fqdn
hostname._validateFQDNresolvability:245 dul-ovrtst02.acs.net resolves to:
set(['192.168.200.129'])
2016-03-23 14:02:32 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.engine.fqdn plugin.executeRaw:828
execute: ('/sbin/ip', 'addr'), executable='None', cwd='None', env=None
2016-03-23 14:02:32 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.engine.fqdn plugin.executeRaw:878
execute-result: ('/sbin/ip', 'addr'), rc=0
2016-03-23 14:02:32 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.engine.fqdn plugin.execute:936
execute-output: ('/sbin/ip', 'addr') stdout:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
qlen 1000
link/ether 00:1b:78:bc:8e:8e brd ff:ff:ff:ff:ff:ff
inet 192.168.200.64/22 brd 192.168.203.255 scope global enp3s0
valid_lft forever preferred_lft forever
inet6 fe80::21b:78ff:febc:8e8e/64 scope link
valid_lft forever preferred_lft forever
3: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
qlen 1000
link/ether 00:1b:78:bc:8e:8c brd ff:ff:ff:ff:ff:ff
inet 192.168.201.65/22 brd 192.168.203.255 scope global dynamic enp5s0
valid_lft 445992sec preferred_lft 445992sec
inet6 fe80::21b:78ff:febc:8e8c/64 scope link
valid_lft forever preferred_lft forever
4: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
link/ether 62:15:60:3c:dc:70 brd ff:ff:ff:ff:ff:ff
5: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
DOWN
link/ether 52:54:00:24:72:95 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
6: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master
virbr0 state DOWN qlen 500
link/ether 52:54:00:24:72:95 brd ff:ff:ff:ff:ff:ff
7: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
link/ether aa:7d:0c:de:12:4c brd ff:ff:ff:ff:ff:ff
2016-03-23 14:02:32 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.engine.fqdn plugin.execute:941
execute-output: ('/sbin/ip', 'addr') stderr:
2016-03-23 14:02:32 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.engine.fqdn
hostname._getLocalAddresses:207 addresses: [u'127.0.0.1', u'192.168.122.1',
u'192.168.201.65', u'192.168.200.64']
2016-03-23 14:02:32 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 14:02:32 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/fqdn=str:'dul-ovrtst02.acs.net'
2016-03-23 14:02:32 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 14:02:32 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.titles.Plugin._engine_end
2016-03-23 14:02:32 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.ha.ha_notifications.Plugin._customization
2016-03-23 14:02:32 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query DIALOGOVEHOSTED_NOTIF/smtpServer
2016-03-23 14:02:32 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Please provide the name
of the SMTP server through which we will send notifications [localhost]:
2016-03-23 14:02:52 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:RECEIVE 192.168.200.36
2016-03-23 14:02:52 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query DIALOGOVEHOSTED_NOTIF/smtpPort
2016-03-23 14:02:52 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Please provide the TCP
port number of the SMTP server [25]:
2016-03-23 14:02:56 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query DIALOGOVEHOSTED_NOTIF/sourceEmail
2016-03-23 14:02:56 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Please provide the email
address from which notifications will be sent [root@localhost]:
2016-03-23 14:03:07 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query DIALOGOVEHOSTED_NOTIF/destEmail
2016-03-23 14:03:07 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Please provide a
comma-separated list of email addresses which will get notifications
[root@localhost]:
2016-03-23 14:05:31 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:RECEIVE internal.support(a)analysts.com
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NOTIF/destEmail=str:'internal.support(a)analysts.com'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NOTIF/smtpPort=str:'25'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NOTIF/smtpServer=str:'192.168.200.36'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NOTIF/sourceEmail=str:'root@localhost'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 14:05:31 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD otopi.plugins.otopi.core.config.Plugin._customize2
2016-03-23 14:05:31 DEBUG otopi.context context._executeMethod:142 Stage
customization METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.firewall_manager.Plugin._process_templates
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/iptablesRules=str:'# Generated by ovirt-hosted-engine-setup
installer
#filtering rules
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type any -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 5900 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 5900 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 5901 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 5901 -j ACCEPT
#drop all rule
-A INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT
'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK_FIREWALLD_SERVICE/hosted-console=str:'<?xml version="1.0"
encoding="utf-8"?>
<service>
<short>hosted-console</short>
<description>oVirt Hosted Engine console service</description>
<port protocol="tcp" port="5900"/>
<port protocol="udp" port="5900"/>
<port protocol="tcp" port="5901"/>
<port protocol="udp" port="5901"/>
</service>
'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 14:05:31 INFO otopi.context context.runSequence:427 Stage: Setup
validation
2016-03-23 14:05:31 DEBUG otopi.context context.runSequence:431 STAGE
validation
2016-03-23 14:05:31 DEBUG otopi.context context._executeMethod:142 Stage
validation METHOD otopi.plugins.otopi.core.misc.Plugin._validation
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/aborted=bool:'False'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/debug=int:'0'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/error=bool:'False'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/exceptionInfo=list:'[]'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/executionDirectory=str:'/'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/exitCode=list:'[{'priority': 90001, 'code': 0}]'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/log=bool:'True'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/pluginGroups=str:'otopi:ovirt-hosted-engine-setup'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/pluginPath=str:'/usr/share/otopi/plugins:/usr/share/ovirt-hosted-engine-setup/scripts/../plugins'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/suppressEnvironmentKeys=list:'[]'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/chkconfig=str:'/sbin/chkconfig'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/chown=str:'/bin/chown'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/chronyc=str:'/bin/chronyc'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/date=str:'/bin/date'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/dig=str:'/bin/dig'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/file=str:'/bin/file'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/firewall-cmd=str:'/bin/firewall-cmd'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/genisoimage=str:'/bin/genisoimage'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/gluster=str:'/sbin/gluster'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/hwclock=str:'/sbin/hwclock'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/initctl=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/ip=str:'/sbin/ip'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/iscsiadm=str:'/sbin/iscsiadm'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/losetup=str:'/sbin/losetup'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/lsof=str:'/sbin/lsof'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/mkfs=str:'/sbin/mkfs'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/mount=str:'/bin/mount'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/ntpq=str:'/sbin/ntpq'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/openssl=str:'/bin/openssl'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/ping=str:'/bin/ping'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/qemu-img=str:'/bin/qemu-img'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/rc=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/rc-update=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/reboot=str:'/sbin/reboot'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/remote-viewer=str:'/bin/remote-viewer'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/service=str:'/sbin/service'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/sshd=str:'/sbin/sshd'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/sudo=str:'/bin/sudo'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/systemctl=str:'/bin/systemctl'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/truncate=str:'/bin/truncate'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/umount=str:'/bin/umount'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/vdsm-tool=str:'/bin/vdsm-tool'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/configFileAppend=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/configFileName=str:'/etc/otopi.conf'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/failOnPrioOverride=bool:'False'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/internalPackageTransaction=Transaction:'transaction'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logDir=str:'/var/log/ovirt-hosted-engine-setup'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logFileHandle=file:'<open file
'/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160323134714-il6wlo.log',
mode 'a' at 0x34555d0>'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logFileName=str:'/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160323134714-il6wlo.log'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logFileNamePrefix=str:'ovirt-hosted-engine-setup'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logFilter=_MyLoggerFilter:'filter'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logFilterKeys=list:'['OVEHOSTED_FIRST_HOST/rootPassword',
'OVEHOSTED_ENGINE/adminPassword', 'OVEHOSTED_VM/cloudinitRootPwd',
'OVEHOSTED_VDSM/passwd']'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logRemoveAtExit=bool:'False'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/mainTransaction=Transaction:'transaction'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/modifiedFiles=list:'[]'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/randomizeEvents=bool:'False'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
DIALOG/boundary=str:'--=451b80dc-996f-432e-9e4f-2b29ef6d1141=--'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
DIALOG/cliVersion=int:'1'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
DIALOG/customization=bool:'False'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
DIALOG/dialect=str:'human'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
INFO/PACKAGE_NAME=str:'otopi'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
INFO/PACKAGE_VERSION=str:'1.4.1'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/firewalldAvailable=bool:'False'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/firewalldDisableServices=list:'[]'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/firewalldEnable=bool:'False'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/iptablesEnable=bool:'True'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/iptablesRules=str:'# Generated by ovirt-hosted-engine-setup
installer
#filtering rules
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type any -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 5900 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 5900 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 5901 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 5901 -j ACCEPT
#drop all rule
-A INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT
'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/sshEnable=bool:'False'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/sshKey=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/sshUser=str:''
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK_FIREWALLD_SERVICE/hosted-console=str:'<?xml version="1.0"
encoding="utf-8"?>
<service>
<short>hosted-console</short>
<description>oVirt Hosted Engine console service</description>
<port protocol="tcp" port="5900"/>
<port protocol="udp" port="5900"/>
<port protocol="tcp" port="5901"/>
<port protocol="udp" port="5901"/>
</service>
'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/additionalHostEnabled=bool:'False'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/additionalHostReDeployment=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/checkRequirements=bool:'True'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/confirmSettings=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/deployProceed=bool:'True'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/etcAnswerFile=str:'/etc/ovirt-hosted-engine/answers.conf'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/isAdditionalHost=bool:'False'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/nodeSetup=bool:'False'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/screenProceed=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/tempDir=str:'/var/tmp'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/userAnswerFile=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_ENGINE/adminPassword=str:'**FILTERED**'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_ENGINE/appHostName=str:'dul-ovrtst02'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_ENGINE/clusterName=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_ENGINE/engineSetupTimeout=int:'600'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_ENGINE/forceCreateVG=bool:'True'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_ENGINE/insecureSSL=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_ENGINE/promptNonOperational=bool:'True'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_ENGINE/temporaryCertificate=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_FIRST_HOST/fetchAnswer=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_FIRST_HOST/fqdn=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_FIRST_HOST/rootPassword=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_FIRST_HOST/sshdPort=int:'22'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/bridgeIf=str:'enp3s0'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/bridgeName=str:'ovirtmgmt'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/firewallManager=str:'iptables'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/firewalldServices=list:'[{'directory': 'base', 'name':
'hosted-console'}]'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/firewalldSubst=dict:'{}'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/fqdn=str:'dul-ovrtst02.acs.net'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/fqdnReverseValidation=bool:'False'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/gateway=str:'192.168.200.1'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/promptRequiredNetworks=bool:'True'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/sshdPort=int:'22'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NOTIF/destEmail=str:'internal.support(a)analysts.com'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NOTIF/smtpPort=str:'25'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NOTIF/smtpServer=str:'192.168.200.36'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NOTIF/sourceEmail=str:'root@localhost'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_SANLOCK/lockspaceName=str:'hosted-engine'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_SANLOCK/serviceName=str:'sanlock'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/GUID=str:'3600508b10018433953524235374f0007'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/LunID=str:'3600508b10018433953524235374f0007'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/blockDeviceSizeGB=int:'203'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/brokerConfContent=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/confImageSizeGB=int:'1'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/confImageUUID=str:'8bf559b0-4cc3-4c92-ada1-f0680afb3487'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/confVolUUID=str:'3517ae74-6202-45ea-ac0c-0d69ec3c2489'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/connectionUUID=str:'1b903a29-fffe-439e-b787-911c988c879a'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/domainType=str:'fc'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/fakeMasterSdConnUUID=str:'2df85503-9df3-4a60-987d-03e93d9abf5f'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/fakeMasterSdUUID=str:'a8908c2b-a8f3-407d-beb5-2b278da62fe1'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/glusterBrick=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/glusterProvisionedShareName=str:'hosted_engine_glusterfs'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/glusterProvisioningEnabled=bool:'False'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/hostID=int:'1'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/iSCSIPortal=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/iSCSIPortalIPAddress=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/iSCSIPortalPassword=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/iSCSIPortalPort=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/iSCSIPortalUser=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/iSCSITargetName=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/imgDesc=str:'Hosted Engine Image'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/imgSizeGB=str:'40'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/imgUUID=str:'ca1dc26a-596e-4d99-ac4e-9809d7f2d5e4'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/lockspaceImageUUID=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/lockspaceVolumeUUID=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/metadataImageUUID=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/metadataVolumeUUID=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/sdUUID=str:'54a95285-412a-4749-b2c4-374980f08472'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/spUUID=str:'e8eaa6c0-ffb4-44df-a6d5-c096df88965c'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/storageAnswerFileContent=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/storageDatacenterName=str:'hosted_datacenter'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/storageDomainConnection=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/storageDomainName=str:'hosted_storage'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/storageHEConfContent=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/storageType=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/vgUUID=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/vmConfContent=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/volUUID=str:'7c466957-ef51-4749-aa3d-0224e13aade3'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/caSubject=str:'/C=EN/L=Test/O=Test/CN=TestCA'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/consoleType=str:'vnc'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/cpu=str:'model_Conroe'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/engineCpu=str:'Intel Conroe Family'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/glusterMinimumVersion=str:'3.7.2'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/kvmGid=int:'36'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/passwd=str:'**FILTERED**'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/passwdValiditySecs=str:'10800'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/pkiSubject=str:'/C=EN/L=Test/O=Test/CN=Test'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/serviceName=str:'vdsmd'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/spicePkiSubject=unicode:'C=EN, L=Test, O=Test, CN=Test'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/useSSL=bool:'True'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/vdscli=instance:'<ServerProxy for 0.0.0.0:54321/RPC2>'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/vdsmUid=int:'36'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/applianceMem=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/applianceVCpus=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/automateVMShutdown=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/cdromUUID=str:'9463ba78-d3e6-48b8-b124-a2e2b7066204'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/cloudInitISO=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/cloudinitExecuteEngineSetup=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/cloudinitInstanceDomainName=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/cloudinitInstanceHostName=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/cloudinitRootPwd=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/cloudinitVMDNS=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/consoleUUID=str:'9a9b4420-ffb1-4da3-850f-5b6049a20ff5'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/emulatedMachine=str:'pc'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/nicUUID=str:'48da7132-0fa3-42d9-8ccc-5a7e2004a965'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/ovfArchive=NoneType:'None'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/subst=dict:'{}'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/vmBoot=str:'cdrom'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/vmCDRom=str:'/dev/cdrom'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/vmMACAddr=str:'00:16:3e:14:57:04'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/vmMemSizeMB=int:'4096'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/vmUUID=str:'d0e46126-9faa-4a03-b6df-3a0c2ea1e27a'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/vmVCpus=str:'4'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVESETUP_CORE/offlinePackager=bool:'True'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/dnfDisabledPlugins=list:'[]'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/dnfExpireCache=bool:'True'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/dnfRollback=bool:'True'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/dnfpackagerEnabled=bool:'True'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/keepAliveInterval=int:'30'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/yumDisabledPlugins=list:'[]'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/yumEnabledPlugins=list:'[]'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/yumExpireCache=bool:'True'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/yumRollback=bool:'True'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/yumpackagerEnabled=bool:'False'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/clockMaxGap=int:'5'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/clockSet=bool:'False'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/commandPath=str:'/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/reboot=bool:'False'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/rebootAllow=bool:'True'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/rebootDeferTime=int:'10'
2016-03-23 14:05:31 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 14:05:31 DEBUG otopi.context context._executeMethod:142 Stage
validation METHOD otopi.plugins.otopi.network.firewalld.Plugin._validation
2016-03-23 14:05:31 DEBUG otopi.context context._executeMethod:142 Stage
validation METHOD otopi.plugins.otopi.network.hostname.Plugin._validation
2016-03-23 14:05:31 DEBUG otopi.plugins.otopi.network.hostname
hostname._validation:76 my name: dul-ovrtst01
2016-03-23 14:05:31 DEBUG otopi.plugins.otopi.network.hostname
plugin.executeRaw:828 execute: ('/sbin/ip', 'addr', 'show'),
executable='None', cwd='None', env=None
2016-03-23 14:05:31 DEBUG otopi.plugins.otopi.network.hostname
plugin.executeRaw:878 execute-result: ('/sbin/ip', 'addr', 'show'), rc=0
2016-03-23 14:05:31 DEBUG otopi.plugins.otopi.network.hostname
plugin.execute:936 execute-output: ('/sbin/ip', 'addr', 'show') stdout:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
qlen 1000
link/ether 00:1b:78:bc:8e:8e brd ff:ff:ff:ff:ff:ff
inet 192.168.200.64/22 brd 192.168.203.255 scope global enp3s0
valid_lft forever preferred_lft forever
inet6 fe80::21b:78ff:febc:8e8e/64 scope link
valid_lft forever preferred_lft forever
3: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
qlen 1000
link/ether 00:1b:78:bc:8e:8c brd ff:ff:ff:ff:ff:ff
inet 192.168.201.65/22 brd 192.168.203.255 scope global dynamic enp5s0
valid_lft 445813sec preferred_lft 445813sec
inet6 fe80::21b:78ff:febc:8e8c/64 scope link
valid_lft forever preferred_lft forever
4: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
link/ether 62:15:60:3c:dc:70 brd ff:ff:ff:ff:ff:ff
5: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
DOWN
link/ether 52:54:00:24:72:95 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
6: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master
virbr0 state DOWN qlen 500
link/ether 52:54:00:24:72:95 brd ff:ff:ff:ff:ff:ff
7: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
link/ether aa:7d:0c:de:12:4c brd ff:ff:ff:ff:ff:ff
2016-03-23 14:05:31 DEBUG otopi.plugins.otopi.network.hostname
plugin.execute:941 execute-output: ('/sbin/ip', 'addr', 'show') stderr:
2016-03-23 14:05:31 DEBUG otopi.plugins.otopi.network.hostname
hostname._validation:113 my addresses: ['192.168.200.64', '192.168.200.64',
'192.168.200.64']
2016-03-23 14:05:31 DEBUG otopi.plugins.otopi.network.hostname
hostname._validation:114 local addresses: [u'192.168.200.64',
u'fe80::21b:78ff:febc:8e8e', u'192.168.201.65',
u'fe80::21b:78ff:febc:8e8c', u'192.168.122.1']
2016-03-23 14:05:32 DEBUG otopi.context context._executeMethod:142 Stage
validation METHOD otopi.plugins.otopi.network.iptables.Plugin._validate
2016-03-23 14:05:32 DEBUG otopi.context context._executeMethod:142 Stage
validation METHOD otopi.plugins.otopi.network.ssh.Plugin._validation
2016-03-23 14:05:32 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 14:05:32 DEBUG otopi.context context._executeMethod:142 Stage
validation METHOD
otopi.plugins.ovirt_hosted_engine_setup.engine.add_host.Plugin._validation
2016-03-23 14:05:32 DEBUG otopi.context context._executeMethod:142 Stage
validation METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.bridge.Plugin._get_hostname_additional_hosts
2016-03-23 14:05:32 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 14:05:32 DEBUG otopi.context context._executeMethod:142 Stage
validation METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.bridge.Plugin._get_hostname_from_bridge_if
2016-03-23 14:05:32 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.network.bridge
bridge._get_hostname_from_bridge_if:265 Network info: {'netmask':
'255.255.252.0', 'ipaddr': '192.168.200.64', 'gateway': ''}
2016-03-23 14:05:32 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.network.bridge
bridge._get_hostname_from_bridge_if:302 hostname: 'dul-ovrtst01',
aliaslist: '['dul-ovrtst01.acs.net']', ipaddrlist: '['192.168.200.64']'
2016-03-23 14:05:32 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 14:05:32 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/host_name=str:'dul-ovrtst01'
2016-03-23 14:05:32 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 14:05:32 DEBUG otopi.context context._executeMethod:142 Stage
validation METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.iptables.Plugin._validate
2016-03-23 14:05:32 DEBUG otopi.context context._executeMethod:142 Stage
validation METHOD
otopi.plugins.ovirt_hosted_engine_setup.pki.vdsmpki.Plugin._validation
2016-03-23 14:05:32 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.pki.vdsmpki plugin.executeRaw:878
execute-result: ('/bin/openssl', 'x509', '-noout', '-text', '-in',
'/etc/pki/vdsm/libvirt-spice/server-cert.pem'), rc=0
2016-03-23 14:05:32 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.pki.vdsmpki plugin.execute:936
execute-output: ('/bin/openssl', 'x509', '-noout', '-text', '-in',
'/etc/pki/vdsm/libvirt-spice/server-cert.pem') stdout:
Certificate:
Data:
Version: 1 (0x0)
Serial Number: 1 (0x1)
Signature Algorithm: sha1WithRSAEncryption
Issuer: C=EN, L=Test, O=Test, CN=TestCA
Validity
Not Before: Mar 22 22:00:33 2016 GMT
Not After : Mar 22 22:00:33 2019 GMT
Subject: C=EN, L=Test, O=Test, CN=Test
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (1024 bit)
Modulus:
00:d7:d0:9d:89:c9:df:79:91:8c:c8:ec:19:2c:d1:
d0:48:ed:d4:55:77:cd:a1:b7:26:95:1c:4c:0a:ab:
fe:0c:7c:e7:ea:ec:46:d2:bf:30:9f:7e:c2:0c:52:
2f:41:91:a1:8e:ae:1c:b9:3f:2d:05:4d:83:2e:b0:
28:91:fe:fa:55:d5:a6:3d:77:b7:a1:20:66:4e:ff:
37:b3:e3:28:c9:19:35:6d:42:8a:f3:a9:cb:38:e0:
4a:5b:21:0b:2b:f8:ba:b2:dd:38:a1:29:e9:5e:a0:
6c:4a:31:b1:a1:a3:47:2c:84:1b:79:6f:27:b8:2d:
6c:20:b3:5f:ff:b8:f3:38:7d
Exponent: 65537 (0x10001)
Signature Algorithm: sha1WithRSAEncryption
19:fb:80:34:12:44:86:78:29:75:63:9d:77:d8:85:3c:42:ba:
fc:6b:e3:63:d7:dd:d7:92:7d:44:5a:4f:3d:2f:28:e6:c1:13:
05:9d:95:4d:58:2d:5e:d6:83:8a:0e:ec:4b:51:6a:86:50:a6:
eb:f9:3f:e0:26:fa:96:b9:af:c9:de:2f:a0:10:79:d3:0b:fe:
0c:e6:da:65:77:0d:b9:5e:99:95:6a:03:d4:b3:8d:4c:d4:df:
0d:87:43:ce:6c:30:2d:88:a2:92:ad:22:a2:13:0c:0b:43:a2:
fe:d1:c8:17:00:20:c6:dd:59:3e:8c:88:82:13:ca:dc:19:6d:
70:97
2016-03-23 14:05:32 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.pki.vdsmpki plugin.execute:941
execute-output: ('/bin/openssl', 'x509', '-noout', '-text', '-in',
'/etc/pki/vdsm/libvirt-spice/server-cert.pem') stderr:
2016-03-23 14:05:32 DEBUG otopi.context context._executeMethod:142 Stage
validation METHOD
otopi.plugins.ovirt_hosted_engine_setup.sanlock.lockspace.Plugin._validation
2016-03-23 14:05:32 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 14:05:32 DEBUG otopi.context context._executeMethod:142 Stage
validation METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.glusterfs.Plugin._validate
2016-03-23 14:05:32 DEBUG otopi.context context._executeMethod:142 Stage
validation METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._validation
2016-03-23 14:05:32 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 14:05:32 DEBUG otopi.context context._executeMethod:142 Stage
validation METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.preview.Plugin._validation
2016-03-23 14:05:32 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND
2016-03-23 14:05:32 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND --== CONFIGURATION
PREVIEW ==--
2016-03-23 14:05:32 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND
2016-03-23 14:05:32 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Bridge interface
: enp3s0
2016-03-23 14:05:32 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Engine FQDN
: dul-ovrtst02.acs.net
2016-03-23 14:05:32 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Bridge name
: ovirtmgmt
2016-03-23 14:05:32 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Host address
: dul-ovrtst01
2016-03-23 14:05:32 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND SSH daemon port
: 22
2016-03-23 14:05:32 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Firewall manager
: iptables
2016-03-23 14:05:32 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Gateway address
: 192.168.200.1
2016-03-23 14:05:32 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Host name for web
application : dul-ovrtst02
2016-03-23 14:05:32 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Host ID
: 1
2016-03-23 14:05:32 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND LUN ID
: 3600508b10018433953524235374f0007
2016-03-23 14:05:32 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Image size GB
: 40
2016-03-23 14:05:32 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND GlusterFS Share Name
: hosted_engine_glusterfs
2016-03-23 14:05:32 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND GlusterFS Brick
Provisioning : False
2016-03-23 14:05:32 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Console type
: vnc
2016-03-23 14:05:32 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Memory size MB
: 4096
2016-03-23 14:05:32 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND MAC address
: 00:16:3e:14:57:04
2016-03-23 14:05:32 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Boot type
: cdrom
2016-03-23 14:05:32 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Number of CPUs
: 4
2016-03-23 14:05:32 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND ISO image (cdrom
boot/cloud-init) : /dev/cdrom
2016-03-23 14:05:32 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND CPU Type
: model_Conroe
2016-03-23 14:05:32 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query SETTINGS_PROCEED
2016-03-23 14:05:32 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND
2016-03-23 14:05:32 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Please confirm
installation settings (Yes, No)[Yes]:
2016-03-23 14:06:18 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:RECEIVE Yes
2016-03-23 14:06:18 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 14:06:18 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/confirmSettings=bool:'True'
2016-03-23 14:06:18 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 14:06:18 INFO otopi.context context.runSequence:427 Stage:
Transaction setup
2016-03-23 14:06:18 DEBUG otopi.context context.runSequence:431 STAGE
transaction-prepare
2016-03-23 14:06:18 DEBUG otopi.context context._executeMethod:142 Stage
transaction-prepare METHOD
otopi.plugins.otopi.core.transaction.Plugin._main_prepare
2016-03-23 14:06:18 DEBUG otopi.transaction transaction._prepare:76
preparing 'File transaction for
'/etc/ovirt-hosted-engine/firewalld/hosted-console.xml''
2016-03-23 14:06:18 DEBUG otopi.filetransaction filetransaction.prepare:198
file '/etc/ovirt-hosted-engine/firewalld/hosted-console.xml' missing
2016-03-23 14:06:18 DEBUG otopi.transaction transaction._prepare:76
preparing 'File transaction for '/etc/ovirt-hosted-engine/iptables.example''
2016-03-23 14:06:18 DEBUG otopi.filetransaction filetransaction.prepare:198
file '/etc/ovirt-hosted-engine/iptables.example' missing
2016-03-23 14:06:18 INFO otopi.context context.runSequence:427 Stage: Misc
configuration
2016-03-23 14:06:18 DEBUG otopi.context context.runSequence:431 STAGE
early_misc
2016-03-23 14:06:18 DEBUG otopi.context context._executeMethod:142 Stage
early_misc METHOD otopi.plugins.otopi.network.firewalld.Plugin._early_misc
2016-03-23 14:06:18 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 14:06:18 DEBUG otopi.context context._executeMethod:142 Stage
early_misc METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.iptables.Plugin._early_misc
2016-03-23 14:06:18 DEBUG otopi.plugins.otopi.services.systemd
systemd.exists:88 check if service firewalld exists
2016-03-23 14:06:18 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:828 execute: ('/bin/systemctl', 'show', '-p',
'LoadState', 'firewalld.service'), executable='None', cwd='None', env=None
2016-03-23 14:06:18 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:878 execute-result: ('/bin/systemctl', 'show', '-p',
'LoadState', 'firewalld.service'), rc=0
2016-03-23 14:06:18 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:936 execute-output: ('/bin/systemctl', 'show', '-p',
'LoadState', 'firewalld.service') stdout:
LoadState=loaded
2016-03-23 14:06:18 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:941 execute-output: ('/bin/systemctl', 'show', '-p',
'LoadState', 'firewalld.service') stderr:
2016-03-23 14:06:18 DEBUG otopi.plugins.otopi.services.systemd
systemd.startup:114 set service firewalld startup to False
2016-03-23 14:06:18 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:828 execute: ('/bin/systemctl', 'show', '-p', 'Id',
'firewalld.service'), executable='None', cwd='None', env=None
2016-03-23 14:06:18 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:878 execute-result: ('/bin/systemctl', 'show', '-p',
'Id', 'firewalld.service'), rc=0
2016-03-23 14:06:18 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:936 execute-output: ('/bin/systemctl', 'show', '-p', 'Id',
'firewalld.service') stdout:
Id=firewalld.service
2016-03-23 14:06:18 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:941 execute-output: ('/bin/systemctl', 'show', '-p', 'Id',
'firewalld.service') stderr:
2016-03-23 14:06:18 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:828 execute: ('/bin/systemctl', 'disable',
u'firewalld.service'), executable='None', cwd='None', env=None
2016-03-23 14:06:18 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:878 execute-result: ('/bin/systemctl', 'disable',
u'firewalld.service'), rc=0
2016-03-23 14:06:18 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:936 execute-output: ('/bin/systemctl', 'disable',
u'firewalld.service') stdout:
2016-03-23 14:06:18 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:941 execute-output: ('/bin/systemctl', 'disable',
u'firewalld.service') stderr:
2016-03-23 14:06:18 DEBUG otopi.plugins.otopi.services.systemd
systemd.state:145 stopping service firewalld
2016-03-23 14:06:18 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:828 execute: ('/bin/systemctl', 'stop',
'firewalld.service'), executable='None', cwd='None', env=None
2016-03-23 14:06:18 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:878 execute-result: ('/bin/systemctl', 'stop',
'firewalld.service'), rc=0
2016-03-23 14:06:18 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:936 execute-output: ('/bin/systemctl', 'stop',
'firewalld.service') stdout:
2016-03-23 14:06:18 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:941 execute-output: ('/bin/systemctl', 'stop',
'firewalld.service') stderr:
2016-03-23 14:06:18 INFO otopi.context context.runSequence:427 Stage:
Package installation
2016-03-23 14:06:18 DEBUG otopi.context context.runSequence:431 STAGE
packages
2016-03-23 14:06:18 DEBUG otopi.context context._executeMethod:142 Stage
packages METHOD otopi.plugins.otopi.network.iptables.Plugin._packages
2016-03-23 14:06:18 DEBUG otopi.context context._executeMethod:142 Stage
packages METHOD otopi.plugins.otopi.packagers.dnfpackager.Plugin._packages
2016-03-23 14:06:18 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 14:06:18 DEBUG otopi.context context._executeMethod:142 Stage
packages METHOD otopi.plugins.otopi.packagers.yumpackager.Plugin._packages
2016-03-23 14:06:18 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 14:06:18 INFO otopi.context context.runSequence:427 Stage: Misc
configuration
2016-03-23 14:06:18 DEBUG otopi.context context.runSequence:431 STAGE misc
2016-03-23 14:06:18 DEBUG otopi.context context._executeMethod:142 Stage
misc METHOD otopi.plugins.otopi.system.command.Plugin._misc
2016-03-23 14:06:18 DEBUG otopi.context context._executeMethod:142 Stage
misc METHOD otopi.plugins.otopi.network.firewalld.Plugin._misc
2016-03-23 14:06:18 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 14:06:18 DEBUG otopi.context context._executeMethod:142 Stage
misc METHOD otopi.plugins.otopi.network.iptables.Plugin._store_iptables
2016-03-23 14:06:18 DEBUG otopi.transaction transaction._prepare:76
preparing 'File transaction for '/etc/sysconfig/iptables''
2016-03-23 14:06:18 DEBUG otopi.filetransaction filetransaction.prepare:200
file '/etc/sysconfig/iptables' exists
2016-03-23 14:06:18 DEBUG otopi.filetransaction filetransaction.prepare:234
backup '/etc/sysconfig/iptables'->'/etc/sysconfig/iptables.20160323140618'
2016-03-23 14:06:18 DEBUG otopi.context context._executeMethod:142 Stage
misc METHOD otopi.plugins.otopi.network.ssh.Plugin._append_key
2016-03-23 14:06:18 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 14:06:18 DEBUG otopi.context context._executeMethod:142 Stage
misc METHOD otopi.plugins.otopi.system.clock.Plugin._set_clock
2016-03-23 14:06:18 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 14:06:18 DEBUG otopi.context context._executeMethod:142 Stage
misc METHOD
otopi.plugins.ovirt_hosted_engine_setup.ha.ha_notifications.Plugin._misc
2016-03-23 14:06:18 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 14:06:18 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/brokerConfContent=str:'[email]
smtp-server = 192.168.200.36
smtp-port = 25
source-email = root@localhost
destination-emails = internal.support@analysts.com
[notify]
state_transition = maintenance|start|stop|migrate|up|down
'
2016-03-23 14:06:18 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 14:06:18 DEBUG otopi.context context._executeMethod:142 Stage
misc METHOD
otopi.plugins.ovirt_hosted_engine_setup.libvirt.configureqemu.Plugin._misc
2016-03-23 14:06:18 INFO
otopi.plugins.ovirt_hosted_engine_setup.libvirt.configureqemu
configureqemu._misc:71 Configuring libvirt
2016-03-23 14:06:18 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.libvirt.configureqemu
configureqemu._misc:85 Changing lock_manager from sanlock to sanlock
2016-03-23 14:06:18 DEBUG otopi.transaction transaction._prepare:76
preparing 'File transaction for '/etc/libvirt/qemu.conf''
2016-03-23 14:06:18 DEBUG otopi.filetransaction filetransaction.prepare:200
file '/etc/libvirt/qemu.conf' exists
2016-03-23 14:06:18 DEBUG otopi.filetransaction filetransaction.prepare:204
file '/etc/libvirt/qemu.conf' already has content
2016-03-23 14:06:18 DEBUG otopi.transaction transaction.commit:162
committing 'File transaction for '/etc/libvirt/qemu.conf''
2016-03-23 14:06:18 DEBUG otopi.plugins.otopi.services.systemd
systemd.state:145 stopping service libvirtd
2016-03-23 14:06:18 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:828 execute: ('/bin/systemctl', 'stop',
'libvirtd.service'), executable='None', cwd='None', env=None
2016-03-23 14:06:29 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:878 execute-result: ('/bin/systemctl', 'stop',
'libvirtd.service'), rc=0
2016-03-23 14:06:29 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:936 execute-output: ('/bin/systemctl', 'stop',
'libvirtd.service') stdout:
2016-03-23 14:06:29 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:941 execute-output: ('/bin/systemctl', 'stop',
'libvirtd.service') stderr:
2016-03-23 14:06:29 DEBUG otopi.plugins.otopi.services.systemd
systemd.state:145 starting service libvirtd
2016-03-23 14:06:29 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:828 execute: ('/bin/systemctl', 'start',
'libvirtd.service'), executable='None', cwd='None', env=None
2016-03-23 14:06:30 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:878 execute-result: ('/bin/systemctl', 'start',
'libvirtd.service'), rc=0
2016-03-23 14:06:30 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:936 execute-output: ('/bin/systemctl', 'start',
'libvirtd.service') stdout:
2016-03-23 14:06:30 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:941 execute-output: ('/bin/systemctl', 'start',
'libvirtd.service') stderr:
2016-03-23 14:06:30 DEBUG otopi.context context._executeMethod:142 Stage
misc METHOD otopi.plugins.ovirt_hosted_engine_setup.system.sshd.Plugin._misc
2016-03-23 14:06:30 DEBUG otopi.plugins.otopi.services.systemd
systemd.status:105 check service sshd status
2016-03-23 14:06:30 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:828 execute: ('/bin/systemctl', 'status',
'sshd.service'), executable='None', cwd='None', env=None
2016-03-23 14:06:30 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:878 execute-result: ('/bin/systemctl', 'status',
'sshd.service'), rc=0
2016-03-23 14:06:30 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:936 execute-output: ('/bin/systemctl', 'status',
'sshd.service') stdout:
● sshd.service - OpenSSH server daemon
Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor
preset: enabled)
Active: active (running) since Wed 2016-03-23 12:46:00 EDT; 1h 20min ago
Docs: man:sshd(8)
man:sshd_config(5)
Main PID: 28797 (sshd)
CGroup: /system.slice/sshd.service
└─28797 /usr/sbin/sshd -D
Mar 23 12:46:00 dul-ovrtst01 systemd[1]: Started OpenSSH server daemon.
Mar 23 12:46:00 dul-ovrtst01 systemd[1]: Starting OpenSSH server daemon...
Mar 23 12:46:00 dul-ovrtst01 sshd[28797]: Server listening on 0.0.0.0 port
22.
Mar 23 12:46:00 dul-ovrtst01 sshd[28797]: Server listening on :: port 22.
Mar 23 13:55:46 dul-ovrtst01 sshd[30161]: Received disconnect from
192.168.200.35: 11: PPA says bye [preauth]
Mar 23 13:55:47 dul-ovrtst01 sshd[30163]: pam_unix(sshd:auth):
authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=
dul-av1.acs.net user=root
Mar 23 13:55:47 dul-ovrtst01 sshd[30163]: pam_succeed_if(sshd:auth):
requirement "uid >= 1000" not met by user "root"
Mar 23 13:55:49 dul-ovrtst01 sshd[30163]: Failed password for root from
192.168.200.35 port 53817 ssh2
Mar 23 13:55:49 dul-ovrtst01 sshd[30163]: Received disconnect from
192.168.200.35: 11: PPA says bye [preauth]
2016-03-23 14:06:30 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:941 execute-output: ('/bin/systemctl', 'status',
'sshd.service') stderr:
2016-03-23 14:06:30 DEBUG otopi.context context._executeMethod:142 Stage
misc METHOD
otopi.plugins.ovirt_hosted_engine_setup.vdsmd.vdsmconf.Plugin._misc
2016-03-23 14:06:30 INFO
otopi.plugins.ovirt_hosted_engine_setup.vdsmd.vdsmconf vdsmconf._misc:85
Configuring VDSM
2016-03-23 14:06:30 DEBUG otopi.transaction transaction._prepare:76
preparing 'File transaction for '/etc/vdsm/vdsm.conf''
2016-03-23 14:06:30 DEBUG otopi.filetransaction filetransaction.prepare:200
file '/etc/vdsm/vdsm.conf' exists
2016-03-23 14:06:30 DEBUG otopi.filetransaction filetransaction.prepare:234
backup '/etc/vdsm/vdsm.conf'->'/etc/vdsm/vdsm.conf.20160323140630'
2016-03-23 14:06:30 DEBUG otopi.transaction transaction.commit:162
committing 'File transaction for '/etc/vdsm/vdsm.conf''
2016-03-23 14:06:30 DEBUG otopi.filetransaction filetransaction.commit:331
Executing restorecon for /etc/vdsm/vdsm.conf
2016-03-23 14:06:30 DEBUG otopi.filetransaction filetransaction.commit:345
restorecon result rc=0, stdout=, stderr=
2016-03-23 14:06:30 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 14:06:30 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/modifiedFiles=list:'['/etc/vdsm/vdsm.conf']'
2016-03-23 14:06:30 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 14:06:30 DEBUG otopi.context context._executeMethod:142 Stage
misc METHOD
otopi.plugins.ovirt_hosted_engine_setup.system.vdsmenv.Plugin._misc
2016-03-23 14:06:30 INFO
otopi.plugins.ovirt_hosted_engine_setup.system.vdsmenv vdsmenv._misc:162
Starting vdsmd
2016-03-23 14:06:30 DEBUG otopi.plugins.otopi.services.systemd
systemd.startup:114 set service vdsmd startup to True
2016-03-23 14:06:30 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:828 execute: ('/bin/systemctl', 'show', '-p', 'Id',
'vdsmd.service'), executable='None', cwd='None', env=None
2016-03-23 14:06:30 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:878 execute-result: ('/bin/systemctl', 'show', '-p',
'Id', 'vdsmd.service'), rc=0
2016-03-23 14:06:30 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:936 execute-output: ('/bin/systemctl', 'show', '-p', 'Id',
'vdsmd.service') stdout:
Id=vdsmd.service
2016-03-23 14:06:30 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:941 execute-output: ('/bin/systemctl', 'show', '-p', 'Id',
'vdsmd.service') stderr:
2016-03-23 14:06:30 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:828 execute: ('/bin/systemctl', 'enable',
u'vdsmd.service'), executable='None', cwd='None', env=None
2016-03-23 14:06:30 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:878 execute-result: ('/bin/systemctl', 'enable',
u'vdsmd.service'), rc=0
2016-03-23 14:06:30 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:941 execute-output: ('/bin/systemctl', 'enable',
u'vdsmd.service') stderr:
2016-03-23 14:06:30 DEBUG otopi.plugins.otopi.services.systemd
systemd.state:145 stopping service vdsmd
2016-03-23 14:06:30 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:828 execute: ('/bin/systemctl', 'stop', 'vdsmd.service'),
executable='None', cwd='None', env=None
2016-03-23 14:06:30 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:878 execute-result: ('/bin/systemctl', 'stop',
'vdsmd.service'), rc=0
2016-03-23 14:06:30 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:936 execute-output: ('/bin/systemctl', 'stop',
'vdsmd.service') stdout:
2016-03-23 14:06:30 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:941 execute-output: ('/bin/systemctl', 'stop',
'vdsmd.service') stderr:
2016-03-23 14:06:30 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.system.vdsmenv
plugin.executeRaw:828 execute: ('/bin/vdsm-tool', 'configure', '--force'),
executable='None', cwd='None', env=None
2016-03-23 14:06:33 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.system.vdsmenv
plugin.executeRaw:878 execute-result: ('/bin/vdsm-tool', 'configure',
'--force'), rc=0
2016-03-23 14:06:33 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.system.vdsmenv plugin.execute:936
execute-output: ('/bin/vdsm-tool', 'configure', '--force') stdout:
Checking configuration status...
Current revision of multipath.conf detected, preserving
libvirt is already configured for vdsm
SUCCESS: ssl configured to true. No conflicts
Done configuring modules to VDSM.
2016-03-23 14:06:33 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.system.vdsmenv plugin.execute:941
execute-output: ('/bin/vdsm-tool', 'configure', '--force') stderr:
2016-03-23 14:06:33 DEBUG otopi.plugins.otopi.services.systemd
systemd.state:145 starting service vdsmd
2016-03-23 14:06:33 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:828 execute: ('/bin/systemctl', 'start',
'vdsmd.service'), executable='None', cwd='None', env=None
2016-03-23 14:06:35 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:878 execute-result: ('/bin/systemctl', 'start',
'vdsmd.service'), rc=0
2016-03-23 14:06:35 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:936 execute-output: ('/bin/systemctl', 'start',
'vdsmd.service') stdout:
2016-03-23 14:06:35 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:941 execute-output: ('/bin/systemctl', 'start',
'vdsmd.service') stderr:
2016-03-23 14:06:35 INFO
otopi.plugins.ovirt_hosted_engine_setup.system.vdsmenv vdsmenv._connect:70
Waiting for VDSM hardware info
2016-03-23 14:06:36 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.system.vdsmenv vdsmenv._connect:63
{'status': {'message': 'Recovering from crash or Initializing', 'code': 99}}
2016-03-23 14:06:36 INFO
otopi.plugins.ovirt_hosted_engine_setup.system.vdsmenv vdsmenv._connect:67
Waiting for VDSM hardware info
2016-03-23 14:06:37 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.system.vdsmenv vdsmenv._connect:63
{'status': {'message': 'Recovering from crash or Initializing', 'code': 99}}
2016-03-23 14:06:38 INFO
otopi.plugins.ovirt_hosted_engine_setup.system.vdsmenv vdsmenv._connect:67
Waiting for VDSM hardware info
[snip: the same DEBUG "{'status': {'message': 'Recovering from crash or Initializing', 'code': 99}}" / INFO "Waiting for VDSM hardware info" pair repeats roughly once per second from 14:06:39 through 14:06:57]
2016-03-23 14:06:58 DEBUG otopi.context context._executeMethod:142 Stage
misc METHOD
otopi.plugins.ovirt_hosted_engine_setup.network.bridge.Plugin._misc
2016-03-23 14:06:58 INFO
otopi.plugins.ovirt_hosted_engine_setup.network.bridge bridge._misc:348
Configuring the management bridge
2016-03-23 14:06:58 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.network.bridge bridge._misc:361
bonds: {}
2016-03-23 14:06:58 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.network.bridge bridge._misc:362
options: {'connectivityCheck': False}
2016-03-23 14:07:02 DEBUG otopi.context context._executeMethod:142 Stage
misc METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.blockd.Plugin._misc
2016-03-23 14:07:02 INFO
otopi.plugins.ovirt_hosted_engine_setup.storage.blockd blockd._misc:634
Creating Volume Group
2016-03-23 14:07:02 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.storage.blockd blockd._misc:636
{'status': {'message': 'Failed to initialize physical device:
("[\'/dev/mapper/3600508b10018433953524235374f0007\']",)', 'code': 601}}
2016-03-23 14:07:02 ERROR
otopi.plugins.ovirt_hosted_engine_setup.storage.blockd blockd._misc:642
Error creating Volume Group: Failed to initialize physical device:
("['/dev/mapper/3600508b10018433953524235374f0007']",)
2016-03-23 14:07:02 DEBUG otopi.context context._executeMethod:156 method
exception
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/otopi/context.py", line 146, in
_executeMethod
method['method']()
File
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/blockd.py",
line 666, in _misc
raise RuntimeError(dom['status']['message'])
RuntimeError: Failed to initialize physical device:
("['/dev/mapper/3600508b10018433953524235374f0007']",)
2016-03-23 14:07:02 ERROR otopi.context context._executeMethod:165 Failed
to execute stage 'Misc configuration': Failed to initialize physical
device: ("['/dev/mapper/3600508b10018433953524235374f0007']",)
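For anyone hitting the same "Failed to initialize physical device" error on an FC LUN: in my experience this usually means the LUN still carries old partition, filesystem, or LVM signatures that pvcreate refuses to overwrite, so clearing them before re-running setup tends to resolve it. A minimal dry-run sketch follows; the device path is taken from the error above, and `wipefs`/`dd`/`pvcreate` are standard util-linux/LVM2 commands, but verify the target device yourself before running anything, since these commands destroy data when actually applied:

```shell
# Device path copied from the pvcreate failure in the log above --
# double-check it matches your multipath device before doing anything.
DEV=/dev/mapper/3600508b10018433953524235374f0007

C1="wipefs --all $DEV"                        # clear fs/LVM/partition-table signatures
C2="dd if=/dev/zero of=$DEV bs=1M count=100"  # belt and braces: zero the first 100 MiB
C3="pvcreate $DEV"                            # confirm the LUN now initializes cleanly

for c in "$C1" "$C2" "$C3"; do
    echo "WOULD RUN: $c"   # dry run only; drop the echo to actually apply
done
```

After clearing the LUN, re-running `hosted-engine --deploy` should get past the "Creating Volume Group" step, assuming the multipath device itself is healthy.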
2016-03-23 14:07:02 DEBUG otopi.transaction transaction.abort:134 aborting
'File transaction for
'/etc/ovirt-hosted-engine/firewalld/hosted-console.xml''
2016-03-23 14:07:02 DEBUG otopi.transaction transaction.abort:134 aborting
'File transaction for '/etc/ovirt-hosted-engine/iptables.example''
2016-03-23 14:07:02 DEBUG otopi.transaction transaction.abort:134 aborting
'File transaction for '/etc/sysconfig/iptables''
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/error=bool:'True'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 14:07:02 INFO otopi.context context.runSequence:427 Stage: Clean
up
2016-03-23 14:07:02 DEBUG otopi.context context.runSequence:431 STAGE
cleanup
2016-03-23 14:07:02 DEBUG otopi.context context._executeMethod:142 Stage
cleanup METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.remote_answerfile.Plugin._cleanup
2016-03-23 14:07:02 DEBUG otopi.context context._executeMethod:142 Stage
cleanup METHOD
otopi.plugins.ovirt_hosted_engine_setup.engine.add_host.Plugin._cleanup
2016-03-23 14:07:02 DEBUG otopi.context context._executeMethod:142 Stage
cleanup METHOD
otopi.plugins.ovirt_hosted_engine_setup.pki.vdsmpki.Plugin._cleanup
2016-03-23 14:07:02 DEBUG otopi.context context._executeMethod:142 Stage
cleanup METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._cleanup
2016-03-23 14:07:02 DEBUG otopi.context context._executeMethod:142 Stage
cleanup METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.boot_disk.Plugin._cleanup
2016-03-23 14:07:02 DEBUG otopi.context context._executeMethod:142 Stage
cleanup METHOD
otopi.plugins.ovirt_hosted_engine_setup.vm.cloud_init.Plugin._cleanup
2016-03-23 14:07:02 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 14:07:02 DEBUG otopi.context context._executeMethod:142 Stage
cleanup METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.answerfile.Plugin._save_answers_at_cleanup
2016-03-23 14:07:02 INFO
otopi.plugins.ovirt_hosted_engine_setup.core.answerfile
answerfile._save_answers:74 Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20160323140702.conf'
2016-03-23 14:07:02 INFO otopi.context context.runSequence:427 Stage:
Pre-termination
2016-03-23 14:07:02 DEBUG otopi.context context.runSequence:431 STAGE
pre-terminate
2016-03-23 14:07:02 DEBUG otopi.context context._executeMethod:142 Stage
pre-terminate METHOD otopi.plugins.otopi.core.misc.Plugin._preTerminate
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:500
ENVIRONMENT DUMP - BEGIN
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/aborted=bool:'False'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/debug=int:'0'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/error=bool:'True'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/executionDirectory=str:'/'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/exitCode=list:'[{'priority': 90001, 'code': 0}]'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/log=bool:'True'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/pluginGroups=str:'otopi:ovirt-hosted-engine-setup'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/pluginPath=str:'/usr/share/otopi/plugins:/usr/share/ovirt-hosted-engine-setup/scripts/../plugins'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
BASE/suppressEnvironmentKeys=list:'[]'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/chkconfig=str:'/sbin/chkconfig'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/chown=str:'/bin/chown'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/chronyc=str:'/bin/chronyc'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/date=str:'/bin/date'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/dig=str:'/bin/dig'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/file=str:'/bin/file'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/firewall-cmd=str:'/bin/firewall-cmd'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/genisoimage=str:'/bin/genisoimage'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/gluster=str:'/sbin/gluster'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/hwclock=str:'/sbin/hwclock'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/initctl=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/ip=str:'/sbin/ip'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/iscsiadm=str:'/sbin/iscsiadm'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/losetup=str:'/sbin/losetup'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/lsof=str:'/sbin/lsof'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/mkfs=str:'/sbin/mkfs'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/mount=str:'/bin/mount'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/ntpq=str:'/sbin/ntpq'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/openssl=str:'/bin/openssl'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/ping=str:'/bin/ping'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/qemu-img=str:'/bin/qemu-img'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/rc=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/rc-update=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/reboot=str:'/sbin/reboot'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/remote-viewer=str:'/bin/remote-viewer'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/service=str:'/sbin/service'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/sshd=str:'/sbin/sshd'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/sudo=str:'/bin/sudo'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/systemctl=str:'/bin/systemctl'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/truncate=str:'/bin/truncate'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/umount=str:'/bin/umount'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
COMMAND/vdsm-tool=str:'/bin/vdsm-tool'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/configFileAppend=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/configFileName=str:'/etc/otopi.conf'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/failOnPrioOverride=bool:'False'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/internalPackageTransaction=Transaction:'transaction'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logDir=str:'/var/log/ovirt-hosted-engine-setup'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logFileHandle=file:'<open file
'/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160323134714-il6wlo.log',
mode 'a' at 0x34555d0>'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logFileName=str:'/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160323134714-il6wlo.log'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logFileNamePrefix=str:'ovirt-hosted-engine-setup'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logFilter=_MyLoggerFilter:'filter'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logFilterKeys=list:'['OVEHOSTED_FIRST_HOST/rootPassword',
'OVEHOSTED_ENGINE/adminPassword', 'OVEHOSTED_VM/cloudinitRootPwd',
'OVEHOSTED_VDSM/passwd']'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/logRemoveAtExit=bool:'False'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/mainTransaction=Transaction:'transaction'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/modifiedFiles=list:'['/etc/vdsm/vdsm.conf']'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
CORE/randomizeEvents=bool:'False'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
DIALOG/boundary=str:'--=451b80dc-996f-432e-9e4f-2b29ef6d1141=--'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
DIALOG/cliVersion=int:'1'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
DIALOG/customization=bool:'False'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
DIALOG/dialect=str:'human'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
INFO/PACKAGE_NAME=str:'otopi'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
INFO/PACKAGE_VERSION=str:'1.4.1'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/firewalldAvailable=bool:'False'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/firewalldDisableServices=list:'[]'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/firewalldEnable=bool:'False'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/iptablesEnable=bool:'True'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/iptablesRules=str:'# Generated by ovirt-hosted-engine-setup
installer
#filtering rules
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type any -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 5900 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 5900 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 5901 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 5901 -j ACCEPT
#drop all rule
-A INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT
'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/sshEnable=bool:'False'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/sshKey=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK/sshUser=str:''
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
NETWORK_FIREWALLD_SERVICE/hosted-console=str:'<?xml version="1.0"
encoding="utf-8"?>
<service>
<short>hosted-console</short>
<description>oVirt Hosted Engine console service</description>
<port protocol="tcp" port="5900"/>
<port protocol="udp" port="5900"/>
<port protocol="tcp" port="5901"/>
<port protocol="udp" port="5901"/>
</service>
'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/additionalHostEnabled=bool:'False'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/additionalHostReDeployment=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/checkRequirements=bool:'True'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/confirmSettings=bool:'True'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/deployProceed=bool:'True'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/etcAnswerFile=str:'/etc/ovirt-hosted-engine/answers.conf'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/isAdditionalHost=bool:'False'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/nodeSetup=bool:'False'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/screenProceed=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/tempDir=str:'/var/tmp'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_CORE/userAnswerFile=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_ENGINE/adminPassword=str:'**FILTERED**'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_ENGINE/appHostName=str:'dul-ovrtst02'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_ENGINE/clusterName=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_ENGINE/engineSetupTimeout=int:'600'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_ENGINE/forceCreateVG=bool:'True'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_ENGINE/insecureSSL=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_ENGINE/promptNonOperational=bool:'True'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_ENGINE/temporaryCertificate=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_FIRST_HOST/fetchAnswer=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_FIRST_HOST/fqdn=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_FIRST_HOST/rootPassword=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_FIRST_HOST/sshdPort=int:'22'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/bridgeIf=str:'enp3s0'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/bridgeName=str:'ovirtmgmt'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/firewallManager=str:'iptables'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/firewalldServices=list:'[{'directory': 'base', 'name':
'hosted-console'}]'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/firewalldSubst=dict:'{}'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/fqdn=str:'dul-ovrtst02.acs.net'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/fqdnReverseValidation=bool:'False'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/gateway=str:'192.168.200.1'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/host_name=str:'dul-ovrtst01'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/promptRequiredNetworks=bool:'True'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NETWORK/sshdPort=int:'22'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NOTIF/destEmail=str:'internal.support@analysts.com'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NOTIF/smtpPort=str:'25'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NOTIF/smtpServer=str:'192.168.200.36'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_NOTIF/sourceEmail=str:'root@localhost'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_SANLOCK/lockspaceName=str:'hosted-engine'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_SANLOCK/serviceName=str:'sanlock'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/GUID=str:'3600508b10018433953524235374f0007'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/LunID=str:'3600508b10018433953524235374f0007'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/blockDeviceSizeGB=int:'203'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/brokerConfContent=str:'[email]
smtp-server = 192.168.200.36
smtp-port = 25
source-email = root@localhost
destination-emails = internal.support@analysts.com
[notify]
state_transition = maintenance|start|stop|migrate|up|down
'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/confImageSizeGB=int:'1'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/confImageUUID=str:'8bf559b0-4cc3-4c92-ada1-f0680afb3487'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/confVolUUID=str:'3517ae74-6202-45ea-ac0c-0d69ec3c2489'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/connectionUUID=str:'1b903a29-fffe-439e-b787-911c988c879a'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/domainType=str:'fc'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/fakeMasterSdConnUUID=str:'2df85503-9df3-4a60-987d-03e93d9abf5f'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/fakeMasterSdUUID=str:'a8908c2b-a8f3-407d-beb5-2b278da62fe1'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/glusterBrick=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/glusterProvisionedShareName=str:'hosted_engine_glusterfs'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/glusterProvisioningEnabled=bool:'False'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/hostID=int:'1'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/iSCSIPortal=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/iSCSIPortalIPAddress=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/iSCSIPortalPassword=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/iSCSIPortalPort=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/iSCSIPortalUser=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/iSCSITargetName=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/imgDesc=str:'Hosted Engine Image'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/imgSizeGB=str:'40'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/imgUUID=str:'ca1dc26a-596e-4d99-ac4e-9809d7f2d5e4'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/lockspaceImageUUID=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/lockspaceVolumeUUID=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/metadataImageUUID=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/metadataVolumeUUID=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/sdUUID=str:'54a95285-412a-4749-b2c4-374980f08472'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/spUUID=str:'e8eaa6c0-ffb4-44df-a6d5-c096df88965c'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/storageAnswerFileContent=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/storageDatacenterName=str:'hosted_datacenter'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/storageDomainConnection=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/storageDomainName=str:'hosted_storage'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/storageHEConfContent=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/storageType=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/vgUUID=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/vmConfContent=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_STORAGE/volUUID=str:'7c466957-ef51-4749-aa3d-0224e13aade3'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/caSubject=str:'/C=EN/L=Test/O=Test/CN=TestCA'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/consoleType=str:'vnc'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/cpu=str:'model_Conroe'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/engineCpu=str:'Intel Conroe Family'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/glusterMinimumVersion=str:'3.7.2'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/kvmGid=int:'36'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/passwd=str:'**FILTERED**'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/passwdValiditySecs=str:'10800'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/pkiSubject=str:'/C=EN/L=Test/O=Test/CN=Test'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/serviceName=str:'vdsmd'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/spicePkiSubject=unicode:'C=EN, L=Test, O=Test, CN=Test'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/useSSL=bool:'True'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/vdscli=instance:'<ServerProxy for 0.0.0.0:54321/RPC2>'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VDSM/vdsmUid=int:'36'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/applianceMem=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/applianceVCpus=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/automateVMShutdown=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/cdromUUID=str:'9463ba78-d3e6-48b8-b124-a2e2b7066204'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/cloudInitISO=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/cloudinitExecuteEngineSetup=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/cloudinitInstanceDomainName=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/cloudinitInstanceHostName=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/cloudinitRootPwd=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/cloudinitVMDNS=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/cloudinitVMETCHOSTS=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/cloudinitVMStaticCIDR=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/consoleUUID=str:'9a9b4420-ffb1-4da3-850f-5b6049a20ff5'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/emulatedMachine=str:'pc'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/nicUUID=str:'48da7132-0fa3-42d9-8ccc-5a7e2004a965'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/ovfArchive=NoneType:'None'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/subst=dict:'{}'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/vmBoot=str:'cdrom'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/vmCDRom=str:'/dev/cdrom'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/vmMACAddr=str:'00:16:3e:14:57:04'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/vmMemSizeMB=int:'4096'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/vmUUID=str:'d0e46126-9faa-4a03-b6df-3a0c2ea1e27a'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVEHOSTED_VM/vmVCpus=str:'4'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
OVESETUP_CORE/offlinePackager=bool:'True'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/dnfDisabledPlugins=list:'[]'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/dnfExpireCache=bool:'True'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/dnfRollback=bool:'True'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/dnfpackagerEnabled=bool:'True'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/keepAliveInterval=int:'30'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/yumDisabledPlugins=list:'[]'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/yumEnabledPlugins=list:'[]'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/yumExpireCache=bool:'True'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/yumRollback=bool:'True'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
PACKAGER/yumpackagerEnabled=bool:'False'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/clockMaxGap=int:'5'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/clockSet=bool:'False'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/commandPath=str:'/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/reboot=bool:'False'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/rebootAllow=bool:'True'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:510 ENV
SYSTEM/rebootDeferTime=int:'10'
2016-03-23 14:07:02 DEBUG otopi.context context.dumpEnvironment:514
ENVIRONMENT DUMP - END
2016-03-23 14:07:02 DEBUG otopi.context context._executeMethod:142 Stage
pre-terminate METHOD otopi.plugins.otopi.dialog.cli.Plugin._pre_terminate
2016-03-23 14:07:02 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 14:07:02 INFO otopi.context context.runSequence:427 Stage:
Termination
2016-03-23 14:07:02 DEBUG otopi.context context.runSequence:431 STAGE
terminate
2016-03-23 14:07:02 DEBUG otopi.context context._executeMethod:142 Stage
terminate METHOD
otopi.plugins.ovirt_hosted_engine_setup.core.misc.Plugin._terminate
2016-03-23 14:07:02 ERROR otopi.plugins.ovirt_hosted_engine_setup.core.misc
misc._terminate:170 Hosted Engine deployment failed: this system is not
reliable, please check the issue, fix and redeploy
2016-03-23 14:07:02 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Log file is located at
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160323134714-il6wlo.log
2016-03-23 14:07:02 DEBUG otopi.context context._executeMethod:142 Stage
terminate METHOD otopi.plugins.otopi.dialog.human.Plugin._terminate
2016-03-23 14:07:02 DEBUG otopi.context context._executeMethod:142 Stage
terminate METHOD otopi.plugins.otopi.dialog.machine.Plugin._terminate
2016-03-23 14:07:02 DEBUG otopi.context context._executeMethod:148
condition False
2016-03-23 14:07:02 DEBUG otopi.context context._executeMethod:142 Stage
terminate METHOD otopi.plugins.otopi.core.log.Plugin._terminate
28 Mar '16
Hi,
Here is one of my system setups:
Data Centers
dc01
Storage
storagedomain01
storagedomain02
storagedomain03
Clusters
cl01
When I create a VM from a template in the web GUI, under Resource
Allocation I can choose which target storage domain to provision to.
The question is: how can I set this target parameter when I'm using the
Python SDK for VM creation?
regards
enax
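With the v3 Python SDK (`ovirtsdk`, matching the 3.x versions seen elsewhere on this list), the target storage domain is usually selected per disk while cloning the template's disks. A minimal sketch, assuming a v3 engine; the URL, credentials, template name, and VM name below are placeholders, not values from this thread, and the exact parameter set should be checked against your SDK version:

```python
# Sketch for oVirt Python SDK v3 (ovirtsdk). All names/credentials here
# are placeholders; this needs a live engine to actually run.
from ovirtsdk.api import API
from ovirtsdk.xml import params

api = API(url='https://engine.example.com/api',
          username='admin@internal', password='secret', insecure=True)

template = api.templates.get('mytemplate')
target_sd = api.storagedomains.get('storagedomain02')

# Point each template disk at the chosen storage domain, then clone.
disks = []
for disk in template.disks.list():
    disk.set_storage_domains(
        params.StorageDomains(storage_domain=[target_sd]))
    disks.append(disk)

vm = params.VM(name='vm01',
               cluster=api.clusters.get('cl01'),
               template=template,
               disks=params.Disks(disk=disks, clone=True))
api.vms.add(vm)
```

Because this talks to a live engine, it can only be validated against a real setup.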
On 03/27/2016 10:24 PM, paf1(a)email.cz wrote:
> What's the recommended shard size for databases (especially Oracle)?
> I'm afraid that 512M is too large.
> I found that some people use about 16MB, but that generates a lot of
> files to heal if a volume splits (e.g. for a 500GB DB in the worst case).
Would the database be running within the guest VM?
Did you run into any specific issue with 512M shard size?
What we have noticed is that with smaller shard sizes, like 4MB, the
number of entries in the .shard directory is too high, which hurts
performance when it comes to healing the entries in that directory. A
256M/512M shard size is a good balance between the number of entries
created and the data size to heal.
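The trade-off is easy to quantify: the shard count, and thus the worst-case number of .shard entries to heal, scales inversely with shard size. A quick back-of-the-envelope check for the 500GB database mentioned above (treating GB/MB as binary units):

```python
def shard_count(image_bytes, shard_bytes):
    """Number of shard files created for an image (ceiling division)."""
    return -(-image_bytes // shard_bytes)

GiB = 1024 ** 3
MiB = 1024 ** 2

image = 500 * GiB
for size_mb in (4, 16, 256, 512):
    print('%4d MB shards -> %6d files' % (size_mb, shard_count(image, size_mb * MiB)))
```

So a 500GB image is 128,000 files at 4MB shards and 32,000 at 16MB, versus 1,000 at 512MB.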
>
> Pa.
>
> On 27.3.2016 16:57, Sahina Bose wrote:
>> Stripe is not supported.
>>
>> What you need to do instead is turn on sharding for the volume.
>>
>> So:
>>
>> gluster volume create 12HP12-S2R3A1P2 replica 3 arbiter 1
>> 1hp1:/STORAGES/P2/GFS 1hp2:/STORAGES/P2/GFS
>> kvmarbiter:/STORAGES/P2-1/GFS force
>>
>> gluster volume set 12HP12-S2R3A1P2 features.shard on
>> gluster volume set 12HP12-S2R3A1P2 features.shard-block-size 512MB
>>
>> If you want to utilize the additional nodes as well, you can change
>> this to a distributed replicate volume - instead of the volume
>> creation in step above , use below
>>
>> gluster volume create 12HP12-S2R3A1P2 replica 3 arbiter 1
>> 1hp1:/STORAGES/P2/GFS 1hp2:/STORAGES/P2/GFS
>> kvmarbiter:/STORAGES/P2-1/GFS 2hp1:/STORAGES/P2/GFS
>> 2hp2:/STORAGES/P2/GFS kvmarbiter:/STORAGES/P2-2/GFS force
>>
>>
>> On 03/24/2016 07:49 PM, paf1(a)email.cz wrote:
>>> Hello,
>>> I tried to create a stripe 2 replica 3 arbiter 1 gluster volume for testing.
>>> Creating such a volume from the command line was successful, but storage
>>> domain creation looks to be unsupported, with the oVirt message "Error
>>> while executing action AddGlusterFsStorageDomain: Storage Domain
>>> target is unsupported".
>>> Can you tell me if this is a bug or really unsupported?
>>>
>>> example:
>>> gluster volume create 12HP12-S2R3A1P2 stripe 2 replica 3 arbiter 1
>>> 1hp1:/STORAGES/P2/GFS 1hp2:/STORAGES/P2/GFS
>>> kvmarbiter:/STORAGES/P2-1/GFS 2hp1:/STORAGES/P2/GFS
>>> 2hp2:/STORAGES/P2/GFS kvmarbiter:/STORAGES/P2-2/GFS force
>>>
>>>
>>> RHEL 7-2.1511
>>> vdsm - vdsm-4.17.23-1.el7
>>> gluster - glusterfs-3.7.9-1.el7
>>> ovirt - 3.5.6.2-1
>>>
>>> regs.Pavel
>>>
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users(a)ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
Hello,
can anybody explain why engine.log is filled with the following messages,
even though this is a fresh installation?
Especially the START / FINISH rows repeating in a cycle, over and over;
it takes a lot of space. Is it really needed?
RHEL 7-2.1511
vdsm - vdsm-4.17.23-1.el7
gluster - glusterfs-3.7.9-1.el7
ovirt - 3.5.6.2-1
regs.Pavel
2016-03-24 13:39:10,758 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand]
(DefaultQuartzScheduler_Worker-89) START,
GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = 2hp1, HostId =
45f76a0f-9616-420a-be1d-afbed2954562), log id: 3d6b27fd
2016-03-24 13:39:13,243 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand]
(DefaultQuartzScheduler_Worker-89) FINISH,
GetGlusterVolumeAdvancedDetailsVDSCommand, return:
org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@11d55805,
log id: 3d6b27fd
2016-03-24 13:39:13,278 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand]
(DefaultQuartzScheduler_Worker-89) START,
GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = 2hp1, HostId =
45f76a0f-9616-420a-be1d-afbed2954562), log id: 3ae0e479
2016-03-24 13:39:13,349 INFO
[org.ovirt.engine.core.bll.lock.InMemoryLockManager]
(DefaultQuartzScheduler_Worker-43) Failed to acquire lock and wait lock
EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-00000000022e
value: GLUSTER
, sharedLocks= ]
2016-03-24 13:39:13,444 INFO
[org.ovirt.engine.core.bll.lock.InMemoryLockManager]
(DefaultQuartzScheduler_Worker-43) Failed to acquire lock and wait lock
EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-00000000022e
value: GLUSTER
, sharedLocks= ]
2016-03-24 13:39:13,801 INFO
[org.ovirt.engine.core.bll.lock.InMemoryLockManager]
(DefaultQuartzScheduler_Worker-43) Failed to acquire lock and wait lock
EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-00000000022e
value: GLUSTER
, sharedLocks= ]
2016-03-24 13:39:14,646 INFO
[org.ovirt.engine.core.bll.lock.InMemoryLockManager]
(DefaultQuartzScheduler_Worker-43) Failed to acquire lock and wait lock
EngineLock [exclusiveLocks= key: 00000001-0001-0001-0001-00000000022e
value: GLUSTER
, sharedLocks= ]
2016-03-24 13:39:15,630 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand]
(DefaultQuartzScheduler_Worker-89) FINISH,
GetGlusterVolumeAdvancedDetailsVDSCommand, return:
org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@cab5100f,
log id: 3ae0e479
2016-03-24 13:39:15,656 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-43) START,
GlusterVolumesListVDSCommand(HostName = 1hp2, HostId =
184ebfaa-51a9-43e4-a57b-9d4f03e85b47), log id: 5756f325
2016-03-24 13:39:16,105 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-43) FINISH, GlusterVolumesListVDSCommand,
return:
{e4121610-6128-4ecc-86d3-1429ab3b8356=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@edd4741e,
d3d260cd-455f-42d6-9580-d88ae6df0519=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@84ea7412},
log id: 5756f325
2016-03-24 13:39:21,161 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-81) START,
GlusterVolumesListVDSCommand(HostName = 1hp2, HostId =
184ebfaa-51a9-43e4-a57b-9d4f03e85b47), log id: 6e2f6c69
2016-03-24 13:39:21,667 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-81) FINISH, GlusterVolumesListVDSCommand,
return:
{e4121610-6128-4ecc-86d3-1429ab3b8356=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@2ad03bfa,
d3d260cd-455f-42d6-9580-d88ae6df0519=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@6a99fa09},
log id: 6e2f6c69
Hello,
I tried to create a stripe 2 replica 3 arbiter 1 gluster volume for testing.
Creating such a volume from the command line was successful, but storage
domain creation looks to be unsupported, with the oVirt message "Error
while executing action AddGlusterFsStorageDomain: Storage Domain target is
unsupported".
Can you tell me if this is a bug or really unsupported?
example:
gluster volume create 12HP12-S2R3A1P2 stripe 2 replica 3 arbiter 1
1hp1:/STORAGES/P2/GFS 1hp2:/STORAGES/P2/GFS
kvmarbiter:/STORAGES/P2-1/GFS 2hp1:/STORAGES/P2/GFS
2hp2:/STORAGES/P2/GFS kvmarbiter:/STORAGES/P2-2/GFS force
RHEL 7-2.1511
vdsm - vdsm-4.17.23-1.el7
gluster - glusterfs-3.7.9-1.el7
ovirt - 3.5.6.2-1
regs.Pavel
Hi all,
I'm attempting to use the REST API to get a proxy ticket as BZ1181030
[0] has added support for, but I am unable to find _any_ documentation
on how to use this new API call.
All I've found is the git commit message in gerrit [1]. Not quite
understanding how to interpret this, I've attempted to POST the
following:
<action/>
As well as:
<action> <proxy_ticket> <value>ticket_content</value> </proxy_ticket> </action>
Both of these, and indeed any other data I can think of, result
in an NPE being logged in engine.log. I've uploaded my engine.log [2],
which is produced when I POST "<action/>" to the endpoint.
Is anyone able to tell me how to use this endpoint? I'm assuming I
get the NPE because I'm not POSTing the correct data. The overall
goal is to embed a NoVNC console in our own web UI;
this is the last piece of the puzzle for me to implement it.
Thanks,
Ollie
[0] https://bugzilla.redhat.com/show_bug.cgi?id=1181030
[1] https://gerrit.ovirt.org/#/c/42412/
[2] http://ix.io/uVP
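For what it's worth, the action payload the poster is guessing at can be built programmatically rather than hand-typed. A small sketch; the element names here simply mirror the poster's second attempt and the gerrit commit, they are not confirmed by any documentation, and the endpoint path in the comment is hypothetical:

```python
# Hypothetical payload builder: element names follow the poster's guess,
# not documented API. Only the XML serialization is demonstrated here.
import xml.etree.ElementTree as ET

def build_proxy_ticket_action(ticket_value):
    """Serialize <action><proxy_ticket><value>...</value></proxy_ticket></action>."""
    action = ET.Element('action')
    pt = ET.SubElement(action, 'proxy_ticket')
    value = ET.SubElement(pt, 'value')
    value.text = ticket_value
    return ET.tostring(action, encoding='unicode')

body = build_proxy_ticket_action('ticket_content')
print(body)
# POSTing it would look something like (endpoint path is a placeholder):
#   requests.post('https://engine/api/...', data=body,
#                 headers={'Content-Type': 'application/xml'},
#                 auth=('admin@internal', 'password'))
```

Building the body this way at least rules out malformed XML as the cause of the NPE.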
Re: [ovirt-users] oVirt 3.6 AAA LDAP cannot not log in when end of UPN is different from domain base
by Karli Sjöberg 24 Mar '16
On 24 March 2016 11:26 PM, Ondra Machacek <omachace(a)redhat.com> wrote:
>
> On 03/24/2016 11:14 PM, Karli Sjöberg wrote:
> >
> > On 24 March 2016 7:26 PM, Ondra Machacek <omachace(a)redhat.com> wrote:
> >  >
> >  > On 03/24/2016 06:16 PM, Karli Sjöberg wrote:
> >  > > Hi!
> >  > >
> >  > >
> >  > > Starting new thread instead of jacking someone else's.
> >  > >
> >  > >
> >  > > Managed to migrate from old 'engine-manage-domains' auth to
> > aaa-ldap using:
> >  > >
> >  > > #| ovirt-engine-kerbldap-migration-tool --domain baz.foo.bar --cacert
> >  > > /tmp/ca.crt --apply
> >  > > |
> >  > >
> >  > >
> >  > > All OK, no errors, but cannot log in:
> >  > >
> >  > > # ovirt-engine-extensions-tool aaa login-user --profile=baz.foo.bar-new
> >  > > --user-name=user:
> >  >
> >  > If you want to login with user with different upn suffix, then just
> >  > append that suffix
> >  >
> >  > $ ovirt-engine-extensions-tool aaa login-user --profile=baz.foo.bar-new
> >  > --user-name=user@foo.bar
> >
> > OK, some progress, that works!
> >
> >  >
> >  > If you have more suffixes and want to have some as default you can use
> >  > following approach:
> >  >
> >  > 1) install ovirt-engine-extension-aaa-misc
> >  >
> >  > 2) create new mapping extension like this:
> >  > /etc/ovirt-engine/extensions.d/mapping-suffix.properties
> >  >
> >  > ovirt.engine.extension.name = mapping-suffix
> >  > ovirt.engine.extension.bindings.method = jbossmodule
> >  > ovirt.engine.extension.binding.jbossmodule.module =
> >  > org.ovirt.engine-extensions.aaa.misc
> >  > ovirt.engine.extension.binding.jbossmodule.class =
> >  > org.ovirt.engineextensions.aaa.misc.mapping.MappingExtension
> >  > ovirt.engine.extension.provides =
> >  > org.ovirt.engine.api.extensions.aaa.Mapping
> >  > config.mapUser.type = regex
> >  > config.mapUser.pattern = ^(?<user>[^@]*)$
> >
> > Is that supposed to really say '<user>' or should it be changed to a
> > real user name? Either way, it doesn't work, I tried it all.
>
> '?<user>' is just a named group in that regex so you can later use it in
> 'config.mapUser.replacement' option. It should take everything until
> first '@'.
>
> >
> >  > config.mapUser.replacement = ${user}@foo.bar
> >  > config.mapUser.mustMatch = false
> >  >
> >  > 3) select a mapping plugin in authn configuration:
> >  >
> >  > ovirt.engine.aaa.authn.mapping.plugin = mapping-suffix
> >  >
> >  > With above configuration in use, your user 'user' will be mapped to
> >  > user 'user@foo.bar'
> >  > and users 'user@anotherdomain.foo.bar' will remain
> >  > 'user@anotherdomain.foo.bar'.
> >
> > This however does not, it doesn't replace the suffix as it's supposed
> > to. I tried with many different types of the 'mapUser.pattern' but it
> > simply won't change it, even if I type in '= ^user@baz.foo.bar$', the
> > error is the same:(
>
> Hmm, hard to say what's wrong, try to run:
> $ ovirt-engine-extensions-tool --log-level=FINEST aaa login-user
> --profile=baz.foo.bar-new --user-name=user
>
> and search for a mapping part in log.

Wow, what a mouthful:) Can you make anything out of it?

https://dropoff.slu.se/index.php/s/EMe2NPmOfsWCNTv/download

/K

>
> >
> > /K
> >
> >  >
> >  > >
> >  > > API: <--Authn.InvokeCommands.AUTHENTICATE_CREDENTIALS result=SUCCESS
> >  > >
> >  > >
> >  > > but:
> >  > >
> >  > > API: -->Authz.InvokeCommands.FETCH_PRINCIPAL_RECORD
> >  > > principal='user@baz.foo.bar'
> >  > > SEVERE  Cannot resolve principal 'user@baz.foo.bar'
> >  > >
> >  > >
> >  > > So it fails.
> >  > >
> >  > >
> >  > > # ldapsearch -x -H ldap://baz.foo.bar -D user@foo.bar -W -b
> >  > > DC=baz,DC=foo,DC=bar -s sub "(samAccountName=user)" userPrincipalName |
> >  > > grep 'userPrincipalName:'
> >  > >
> >  > > userPrincipalName: user@foo.bar
> >  > >
> >  > >
> >  > > How do you configure AAA with base 'DC=baz,DC=foo,DC=bar' when
> >  > > userPrincipalName ends only on '@foo.bar'?
> >  > >
> >  > > /K
> >  > >
> >  > >
> >  > > _______________________________________________
> >  > > Users mailing list
> >  > > Users(a)ovirt.org
> >  > > http://lists.ovirt.org/mailman/listinfo/users
> >  > >
> >
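The suffix mapping this thread revolves around (the aaa-misc regex mapper appending a default UPN suffix to bare user names) can be sanity-checked outside the engine. A small Python sketch of the same substitution semantics; note that Python spells the named group `(?P<user>...)` where the Java-based extension uses `(?<user>...)`:

```python
import re

# Same semantics as the mapping-suffix.properties config in this thread:
#   pattern      ^(?<user>[^@]*)$     (Java regex syntax)
#   replacement  ${user}@foo.bar
PATTERN = re.compile(r'^(?P<user>[^@]*)$')

def map_user(name, default_suffix='foo.bar'):
    """Append the default UPN suffix to names that carry no suffix."""
    m = PATTERN.match(name)
    if m:  # bare name (no '@'): add the default suffix
        return '%s@%s' % (m.group('user'), default_suffix)
    return name  # already has a suffix (mustMatch = false): unchanged

print(map_user('user'))                        # -> user@foo.bar
print(map_user('user@anotherdomain.foo.bar'))  # -> unchanged
```

This only models the regex step; it says nothing about why the engine-side mapping fails in the log the poster shared.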
aW4gbG9nLjwvcD4NCjxwIGRpcj0ibHRyIj5Xb3cgd2hhdCBhIG1vdXRoZnVsbDopIENhbiB5b3Ug
bWFrZSBhbnl0aGluZyBvdXQgb2YgaXQ/PC9wPg0KPHAgZGlyPSJsdHIiPmh0dHBzOi8vZHJvcG9m
Zi5zbHUuc2UvaW5kZXgucGhwL3MvRU1lMk5QbU9mc1dDTlR2L2Rvd25sb2FkPC9wPg0KPHAgZGly
PSJsdHIiPi9LPC9wPg0KPHAgZGlyPSJsdHIiPiZndDs8YnI+DQomZ3Q7ICZndDs8YnI+DQomZ3Q7
ICZndDsgL0s8YnI+DQomZ3Q7ICZndDs8YnI+DQomZ3Q7ICZndDsmbmJzcDsgJmd0Ozxicj4NCiZn
dDsgJmd0OyZuYnNwOyAmZ3Q7ICZndDs8YnI+DQomZ3Q7ICZndDsmbmJzcDsgJmd0OyAmZ3Q7IEFQ
STogJmx0Oy0tQXV0aG4uSW52b2tlQ29tbWFuZHMuQVVUSEVOVElDQVRFX0NSRURFTlRJQUxTIHJl
c3VsdD1TVUNDRVNTPGJyPg0KJmd0OyAmZ3Q7Jm5ic3A7ICZndDsgJmd0Ozxicj4NCiZndDsgJmd0
OyZuYnNwOyAmZ3Q7ICZndDs8YnI+DQomZ3Q7ICZndDsmbmJzcDsgJmd0OyAmZ3Q7IGJ1dDo8YnI+
DQomZ3Q7ICZndDsmbmJzcDsgJmd0OyAmZ3Q7PGJyPg0KJmd0OyAmZ3Q7Jm5ic3A7ICZndDsgJmd0
OyBBUEk6IC0tJmd0O0F1dGh6Lkludm9rZUNvbW1hbmRzLkZFVENIX1BSSU5DSVBBTF9SRUNPUkQ8
YnI+DQomZ3Q7ICZndDsmbmJzcDsgJmd0OyAmZ3Q7IHByaW5jaXBhbD0ndXNlckBiYXouZm9vLmJh
cic8YnI+DQomZ3Q7ICZndDsmbmJzcDsgJmd0OyAmZ3Q7IFNFVkVSRSZuYnNwOyBDYW5ub3QgcmVz
b2x2ZSBwcmluY2lwYWwgJ3VzZXJAYmF6LmZvby5iYXInPGJyPg0KJmd0OyAmZ3Q7Jm5ic3A7ICZn
dDsgJmd0Ozxicj4NCiZndDsgJmd0OyZuYnNwOyAmZ3Q7ICZndDs8YnI+DQomZ3Q7ICZndDsmbmJz
cDsgJmd0OyAmZ3Q7IFNvIGl0IGZhaWxzLjxicj4NCiZndDsgJmd0OyZuYnNwOyAmZ3Q7ICZndDs8
YnI+DQomZ3Q7ICZndDsmbmJzcDsgJmd0OyAmZ3Q7PGJyPg0KJmd0OyAmZ3Q7Jm5ic3A7ICZndDsg
Jmd0OyAjIGxkYXBzZWFyY2ggLXggLUggbGRhcDovL2Jhei5mb28uYmFyIC1EIHVzZXJAZm9vLmJh
ciAtVyAtYjxicj4NCiZndDsgJmd0OyZuYnNwOyAmZ3Q7ICZndDsgREM9YmF6LERDPWZvbyxEQz1i
YXIgLXMgc3ViICZxdW90OyhzYW1BY2NvdW50TmFtZT11c2VyKSZxdW90OyB1c2VyUHJpbmNpcGFs
TmFtZSB8PGJyPg0KJmd0OyAmZ3Q7Jm5ic3A7ICZndDsgJmd0OyBncmVwICd1c2VyUHJpbmNpcGFs
TmFtZTonPGJyPg0KJmd0OyAmZ3Q7Jm5ic3A7ICZndDsgJmd0Ozxicj4NCiZndDsgJmd0OyZuYnNw
OyAmZ3Q7ICZndDsgdXNlclByaW5jaXBhbE5hbWU6IHVzZXJAZm9vLmJhcjxicj4NCiZndDsgJmd0
OyZuYnNwOyAmZ3Q7ICZndDs8YnI+DQomZ3Q7ICZndDsmbmJzcDsgJmd0OyAmZ3Q7PGJyPg0KJmd0
OyAmZ3Q7Jm5ic3A7ICZndDsgJmd0OyB8SG93IGRvIHlvdSBjb25maWd1cmUgQUFBIHdpdGggYmFz
ZSAnREM9YmF6LERDPWZvbyxEQz1iYXInIHdoZW48YnI+DQomZ3Q7ICZndDsmbmJzcDsgJmd0OyAm
Z3Q7IHVzZXJQcmluY2lwYWxOYW1lIGVuZHMgb25seSBvbiAnQGZvby5iYXInPzxicj4NCiZndDsg
Jmd0OyZuYnNwOyAmZ3Q7ICZndDs8YnI+DQomZ3Q7ICZndDsmbmJzcDsgJmd0OyAmZ3Q7IC9LPGJy
Pg0KJmd0OyAmZ3Q7Jm5ic3A7ICZndDsgJmd0OyB8PGJyPg0KJmd0OyAmZ3Q7Jm5ic3A7ICZndDsg
Jmd0Ozxicj4NCiZndDsgJmd0OyZuYnNwOyAmZ3Q7ICZndDs8YnI+DQomZ3Q7ICZndDsmbmJzcDsg
Jmd0OyAmZ3Q7PGJyPg0KJmd0OyAmZ3Q7Jm5ic3A7ICZndDsgJmd0Ozxicj4NCiZndDsgJmd0OyZu
YnNwOyAmZ3Q7ICZndDsgX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX188YnI+DQomZ3Q7ICZndDsmbmJzcDsgJmd0OyAmZ3Q7IFVzZXJzIG1haWxpbmcgbGlzdDxi
cj4NCiZndDsgJmd0OyZuYnNwOyAmZ3Q7ICZndDsgVXNlcnNAb3ZpcnQub3JnPGJyPg0KJmd0OyAm
Z3Q7Jm5ic3A7ICZndDsgJmd0OyBodHRwOi8vbGlzdHMub3ZpcnQub3JnL21haWxtYW4vbGlzdGlu
Zm8vdXNlcnM8YnI+DQomZ3Q7ICZndDsmbmJzcDsgJmd0OyAmZ3Q7PGJyPg0KJmd0OyAmZ3Q7PGJy
Pg0KPC9wPg0KPC9ib2R5Pg0KPC9odG1sPg0K
--_000_b78ced6864174864b04ccacdb481eac8exch24sluse_--
Re: [ovirt-users] oVirt 3.6 AAA LDAP cannot log in when end of UPN is different from domain base
by Karli Sjöberg 24 Mar '16
On 24 March 2016 at 7:26 PM, Ondra Machacek <omachace@redhat.com> wrote:
>
> On 03/24/2016 06:16 PM, Karli Sjöberg wrote:
> > Hi!
> >
> > Starting new thread instead of jacking someone else's.
> >
> > Managed to migrate from old 'engine-manage-domains' auth to aaa-ldap using:
> >
> > #| ovirt-engine-kerbldap-migration-tool --domain baz.foo.bar --cacert
> > /tmp/ca.crt --apply
> > |
> >
> > All OK, no errors, but cannot log in:
> >
> > # ovirt-engine-extensions-tool aaa login-user --profile=baz.foo.bar-new
> > --user-name=user:
>
> If you want to log in with a user with a different UPN suffix, then just
> append that suffix:
>
> $ ovirt-engine-extensions-tool aaa login-user --profile=baz.foo.bar-new
> --user-name=user@foo.bar

OK, some progress, that works!

>
> If you have more suffixes and want to have some as default you can use
> the following approach:
>
> 1) install ovirt-engine-extension-aaa-misc
>
> 2) create a new mapping extension like this:
> /etc/ovirt-engine/extensions.d/mapping-suffix.properties
>
> ovirt.engine.extension.name = mapping-suffix
> ovirt.engine.extension.bindings.method = jbossmodule
> ovirt.engine.extension.binding.jbossmodule.module =
> org.ovirt.engine-extensions.aaa.misc
> ovirt.engine.extension.binding.jbossmodule.class =
> org.ovirt.engineextensions.aaa.misc.mapping.MappingExtension
> ovirt.engine.extension.provides =
> org.ovirt.engine.api.extensions.aaa.Mapping
> config.mapUser.type = regex
> config.mapUser.pattern = ^(?<user>[^@]*)$

Is that supposed to really say '<user>' or should it be changed to a real user name? Either way, it doesn't work, I tried it all.

> config.mapUser.replacement = ${user}@foo.bar
> config.mapUser.mustMatch = false
>
> 3) select a mapping plugin in the authn configuration:
>
> ovirt.engine.aaa.authn.mapping.plugin = mapping-suffix
>
> With the above configuration in use, your user 'user' will be mapped to
> user 'user@foo.bar',
> and users 'user@anotherdomain.foo.bar' will remain
> 'user@anotherdomain.foo.bar'.

This however does not work; it doesn't replace the suffix as it's supposed to. I tried many different variants of 'mapUser.pattern' but it simply won't change it; even if I type in '= ^user@baz.foo.bar$', the error is the same :(

/K
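The suffix-mapping behaviour described in this exchange can be sketched in Python. This is only an illustration, not the actual Java-based aaa-misc extension: Python spells the named group `(?P<user>...)` where the extension's config uses `(?<user>...)`, and `config.mapUser.mustMatch = false` is modelled by passing non-matching names through unchanged.

```python
import re

# Python analogue of:
#   config.mapUser.pattern     = ^(?<user>[^@]*)$    (Java named-group syntax)
#   config.mapUser.replacement = ${user}@foo.bar
#   config.mapUser.mustMatch   = false
pattern = re.compile(r"^(?P<user>[^@]*)$")

def map_user(name: str) -> str:
    """Append the default UPN suffix to bare (suffix-less) user names."""
    m = pattern.match(name)
    if m is None:
        # mustMatch = false: names that don't match pass through unchanged
        return name
    return f"{m.group('user')}@foo.bar"

print(map_user("user"))                        # user@foo.bar
print(map_user("user@anotherdomain.foo.bar"))  # user@anotherdomain.foo.bar (unchanged)
```

The pattern `^[^@]*$` can only match names containing no '@' at all, which is why an already-qualified name like 'user@anotherdomain.foo.bar' is left alone.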
oVirt 3.6 AAA LDAP cannot log in when end of UPN is different from domain base
by Karli Sjöberg 24 Mar '16
Hi!
Starting new thread instead of jacking someone else's.
Managed to migrate from old 'engine-manage-domains' auth to aaa-ldap using:
# ovirt-engine-kerbldap-migration-tool --domain baz.foo.bar --cacert /tmp/ca.crt --apply
All OK, no errors, but cannot log in:
# ovirt-engine-extensions-tool aaa login-user --profile=baz.foo.bar-new --user-name=user:
API: <--Authn.InvokeCommands.AUTHENTICATE_CREDENTIALS result=SUCCESS
but:
API: -->Authz.InvokeCommands.FETCH_PRINCIPAL_RECORD principal='user@baz.foo.bar'
SEVERE Cannot resolve principal 'user@baz.foo.bar'
So it fails.
# ldapsearch -x -H ldap://baz.foo.bar -D user@foo.bar -W -b DC=baz,DC=foo,DC=bar -s sub "(samAccountName=user)" userPrincipalName | grep 'userPrincipalName:'
userPrincipalName: user@foo.bar
How do you configure AAA with base 'DC=baz,DC=foo,DC=bar' when userPrincipalName ends only on '@foo.bar'?
/K
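The failure above comes down to a suffix mismatch: the engine resolves the principal under the profile's domain ('user@baz.foo.bar'), while the directory stores the userPrincipalName with the parent suffix ('user@foo.bar'). A small Python sketch of that mismatch, where `base_to_domain` is a hypothetical helper used purely for illustration:

```python
def base_to_domain(search_base: str) -> str:
    """'DC=baz,DC=foo,DC=bar' -> 'baz.foo.bar' (illustrative helper)."""
    return ".".join(part.split("=", 1)[1] for part in search_base.split(","))

search_base = "DC=baz,DC=foo,DC=bar"
expected_principal = f"user@{base_to_domain(search_base)}"  # what the engine looks up
actual_upn = "user@foo.bar"                                 # what ldapsearch returns

print(expected_principal)                # user@baz.foo.bar
print(expected_principal == actual_upn)  # False -> "Cannot resolve principal"
```

Since the two strings never match, the Authz FETCH_PRINCIPAL_RECORD step cannot resolve the user, even though authentication itself succeeded.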
On 24 March 2016 at 13:49, Ondra Machacek <omachace@redhat.com> wrote:
>
> Hi,
>
> if you remove a user, then the permissions of that user to VMs will also
> be removed.
> And yes, you will have to add all those permissions back to users from
> the new profile.
>
> But you can try the migration tool [1] to migrate all users to the new AAA profile.
> If you have any problem with it, you can ask.

Ehm, how do you install it? (el6)

/K

>
> Ondra
>
> [1]
> https://github.com/machacekondra/ovirt-engine-kerbldap-migration/blob/master/README.md
>
> On 03/24/2016 01:06 PM, Will Dennis wrote:
> > In the RHEV Admin Guide that Martin mentioned, it says:
> >
> > "Log in to the Administration Portal, and remove all users and groups related to the old profile. Users defined in the removed domain will no longer be able to authenticate with the Red Hat Enterprise Virtualization Manager. The entries for the affected users will remain defined in the Red Hat Enterprise Virtualization Manager until they are explicitly removed from the Administration Portal."
> >
> > I have some VMs running under some AD domain users; if I remove the users from the system as above, will I need to remove them from the VM permissions, or is that cleaned up as well? And I guess I'll need to manually re-add the perms back after the new directory config is in place? Please advise.
> >
> > Thanks,
> > Will
> >
> > On Mar 21, 2016, at 4:29 AM, Martin Perina <mperina@redhat.com> wrote:
> >
> > On Mon, Mar 21, 2016 at 8:20 AM, Yedidyah Bar David <didi@redhat.com> wrote:
> > On Mon, Mar 21, 2016 at 4:47 AM, Will Dennis <wdennis@nec-labs.com> wrote:
> >> Hi all,
> >>
> >> I have enabled Active Directory authentication for the users in oVirt (via the engine-manage-domains command using --provider=ad) and, although it works, it takes about ~50 seconds to process a login. I have other OSS software that utilizes AD auth, and there is no such lag when processing logins, so I'm guessing it's a problem with the oVirt implementation… Any way to debug why the auth process is taking so long?
> >
> > This is an old, unmaintained component. You should use the new aaa-ldap one.
> > Search the list archives for "aaa-ldap" and/or read the README file in the
> > sources [1]. Best,
> >
> > [1] https://gerrit.ovirt.org/gitweb?p=ovirt-engine-extension-aaa-ldap.git;a=blob;f=README
> >
> > You could also take a look at the RHEV 3.6 Administration Guide, chapter 13 Users and Roles [2],
> > where you can find detailed steps for common configurations.
> >
> > Martin Perina
> >
> > [2] https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Administration_Guide/chap-Users_and_Roles.html
> >
> >> Will
> >> _______________________________________________
> >> Users mailing list
> >> Users@ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/users
On 24 March 2016 at 3:06 PM, Ondra Machacek <omachace@redhat.com> wrote:
>
> On 03/24/2016 03:02 PM, Karli Sjöberg wrote:
> >
> > On 24 March 2016 at 13:49, Ondra Machacek <omachace@redhat.com> wrote:
> >  >
> >  > Hi,
> >  >
> >  > if you remove a user, then the permissions of that user to VMs will also be
> >  > removed.
> >  > And yes, you will have to add all those permissions back to users from
> >  > the new profile.
> >  >
> >  > But you can try the migration tool [1] to migrate all users to the new AAA
> > profile.
> >  > If you have any problem with it, you can ask.
> >
> > Ehm, how do you install it? (el6)
>
> yum install -y
> https://github.com/machacekondra/ovirt-engine-kerbldap-migration/releases/download/ovirt-engine-kerbldap-migration-1.0.4/ovirt-engine-kerbldap-migration-1.0.4-1.el6ev.noarch.rpm

Awesome, thanks!

/K
--_000_e68eb5719d7a4784b4b46753e2e0cbf1exch24sluse_
Content-Type: text/html; charset="utf-8"
Content-ID: <1D4E2E0ECA1F4D43850876A6527A6444(a)ad.slu.se>
Content-Transfer-Encoding: base64
PGh0bWw+DQo8aGVhZD4NCjxtZXRhIGh0dHAtZXF1aXY9IkNvbnRlbnQtVHlwZSIgY29udGVudD0i
dGV4dC9odG1sOyBjaGFyc2V0PXV0Zi04Ij4NCjwvaGVhZD4NCjxib2R5Pg0KPHAgZGlyPSJsdHIi
Pjxicj4NCkRlbiAyNCBtYXJzIDIwMTYgMzowNiBlbSBza3JldiBPbmRyYSBNYWNoYWNlayAmbHQ7
b21hY2hhY2VAcmVkaGF0LmNvbSZndDs6PGJyPg0KJmd0Ozxicj4NCiZndDsgT24gMDMvMjQvMjAx
NiAwMzowMiBQTSwgS2FybGkgU2rDtmJlcmcgd3JvdGU6PGJyPg0KJmd0OyAmZ3Q7PGJyPg0KJmd0
OyAmZ3Q7IERlbiAyNCBtYXJzIDIwMTYgMTM6NDkgc2tyZXYgT25kcmEgTWFjaGFjZWsgJmx0O29t
YWNoYWNlQHJlZGhhdC5jb20mZ3Q7Ojxicj4NCiZndDsgJmd0OyZuYnNwOyAmZ3Q7PGJyPg0KJmd0
OyAmZ3Q7Jm5ic3A7ICZndDsgSGksPGJyPg0KJmd0OyAmZ3Q7Jm5ic3A7ICZndDs8YnI+DQomZ3Q7
ICZndDsmbmJzcDsgJmd0OyBpZiB5b3UgcmVtb3ZlIHVzZXIsIHRoZW4gYWxzbyBwZXJtaXNzaW9u
cyBvZiB0aGF0IHVzZXIgdG8gdm1zIHdpbGwgYmU8YnI+DQomZ3Q7ICZndDsmbmJzcDsgJmd0OyBy
ZW1vdmVkLjxicj4NCiZndDsgJmd0OyZuYnNwOyAmZ3Q7IEFuZCB5ZXMsIHlvdSB3aWxsIGhhdmUg
dG8gYWRkIGFsbCB0aG9zZSBwZXJtaXNzaW9ucyBiYWNrIHRvIHVzZXJzIGZyb208YnI+DQomZ3Q7
ICZndDsmbmJzcDsgJmd0OyBuZXcgcHJvZmlsZS48YnI+DQomZ3Q7ICZndDsmbmJzcDsgJmd0Ozxi
cj4NCiZndDsgJmd0OyZuYnNwOyAmZ3Q7IEJ1dCwgeW91IGNhbiB0cnkgbWlncmF0aW9uIHRvb2xb
MV0sIHRvIG1pZ3JhdGUgYWxsIHVzZXJzIHRvIG5ldyBBQUE8YnI+DQomZ3Q7ICZndDsgcHJvZmls
ZS48YnI+DQomZ3Q7ICZndDsmbmJzcDsgJmd0OyBJZiB5b3UgaGF2ZSBhbnkgcHJvYmxlbSB3aXRo
IGl0LCB5b3UgY2FuIGFzay48YnI+DQomZ3Q7ICZndDs8YnI+DQomZ3Q7ICZndDsgRWhtLCBob3cg
ZG8geW91IGluc3RhbGwgaXQ/IChlbDYpPGJyPg0KJmd0Ozxicj4NCiZndDsgeXVtIGluc3RhbGwg
LXkgPGJyPg0KJmd0OyBodHRwczovL2dpdGh1Yi5jb20vbWFjaGFjZWtvbmRyYS9vdmlydC1lbmdp
bmUta2VyYmxkYXAtbWlncmF0aW9uL3JlbGVhc2VzL2Rvd25sb2FkL292aXJ0LWVuZ2luZS1rZXJi
bGRhcC1taWdyYXRpb24tMS4wLjQvb3ZpcnQtZW5naW5lLWtlcmJsZGFwLW1pZ3JhdGlvbi0xLjAu
NC0xLmVsNmV2Lm5vYXJjaC5ycG08L3A+DQo8cCBkaXI9Imx0ciI+QXdlc29tZSwgdGhhbmtzITwv
cD4NCjxwIGRpcj0ibHRyIj4vSzwvcD4NCjxwIGRpcj0ibHRyIj4mZ3Q7PGJyPg0KJmd0OyAmZ3Q7
PGJyPg0KJmd0OyAmZ3Q7IC9LPGJyPg0KJmd0OyAmZ3Q7PGJyPg0KJmd0OyAmZ3Q7Jm5ic3A7ICZn
dDs8YnI+DQomZ3Q7ICZndDsmbmJzcDsgJmd0OyBPbmRyYTxicj4NCiZndDsgJmd0OyZuYnNwOyAm
Z3Q7PGJyPg0KJmd0OyAmZ3Q7Jm5ic3A7ICZndDsgWzFdPGJyPg0KJmd0OyAmZ3Q7Jm5ic3A7ICZn
dDs8YnI+DQomZ3Q7ICZndDsgaHR0cHM6Ly9naXRodWIuY29tL21hY2hhY2Vrb25kcmEvb3ZpcnQt
ZW5naW5lLWtlcmJsZGFwLW1pZ3JhdGlvbi9ibG9iL21hc3Rlci9SRUFETUUubWQ8YnI+DQomZ3Q7
ICZndDsmbmJzcDsgJmd0Ozxicj4NCiZndDsgJmd0OyZuYnNwOyAmZ3Q7IE9uIDAzLzI0LzIwMTYg
MDE6MDYgUE0sIFdpbGwgRGVubmlzIHdyb3RlOjxicj4NCiZndDsgJmd0OyZuYnNwOyAmZ3Q7ICZn
dDsgSW4gdGhlIFJIRVYgQWRtaW4gR3VpZGUgdGhhdCBNYXJ0aW4gbWVudGlvbmVkLCBpdCBzYXlz
Ojxicj4NCiZndDsgJmd0OyZuYnNwOyAmZ3Q7ICZndDs8YnI+DQomZ3Q7ICZndDsmbmJzcDsgJmd0
OyAmZ3Q7ICZxdW90O0xvZyBpbiB0byB0aGUgQWRtaW5pc3RyYXRpb24gUG9ydGFsLCBhbmQgcmVt
b3ZlIGFsbCB1c2VycyBhbmQ8YnI+DQomZ3Q7ICZndDsgZ3JvdXBzIHJlbGF0ZWQgdG8gdGhlIG9s
ZCBwcm9maWxlLiBVc2VycyBkZWZpbmVkIGluIHRoZSByZW1vdmVkIGRvbWFpbjxicj4NCiZndDsg
Jmd0OyB3aWxsIG5vIGxvbmdlciBiZSBhYmxlIHRvIGF1dGhlbnRpY2F0ZSB3aXRoIHRoZSBSZWQg
SGF0IEVudGVycHJpc2U8YnI+DQomZ3Q7ICZndDsgVmlydHVhbGl6YXRpb24gTWFuYWdlci4gVGhl
IGVudHJpZXMgZm9yIHRoZSBhZmZlY3RlZCB1c2VycyB3aWxsIHJlbWFpbjxicj4NCiZndDsgJmd0
OyBkZWZpbmVkIGluIHRoZSBSZWQgSGF0IEVudGVycHJpc2UgVmlydHVhbGl6YXRpb24gTWFuYWdl
ciB1bnRpbCB0aGV5IGFyZTxicj4NCiZndDsgJmd0OyBleHBsaWNpdGx5IHJlbW92ZWQgZnJvbSB0
aGUgQWRtaW5pc3RyYXRpb24gUG9ydGFsLuKAnTxicj4NCiZndDsgJmd0OyZuYnNwOyAmZ3Q7ICZn
dDs8YnI+DQomZ3Q7ICZndDsmbmJzcDsgJmd0OyAmZ3Q7IEkgaGF2ZSBzb21lIFZNcyBydW5uaW5n
IHVuZGVyIHNvbWUgQUQgZG9tYWluIHVzZXJzOyBpZiBJIHJlbW92ZSB0aGU8YnI+DQomZ3Q7ICZn
dDsgdXNlcnMgZnJvbSB0aGUgc3lzdGVtIGFzIGFib3ZlLCB3aWxsIEkgbmVlZCB0byByZW1vdmUg
dGhlbSBmcm9tIHRoZSBWTTxicj4NCiZndDsgJmd0OyBwZXJtaXNzaW9ucywgb3IgaXMgdGhhdCBj
bGVhbmVkIHVwIGFzIHdlbGw/IEFuZCBJIGd1ZXNzIEnigJlsbCBuZWVkIHRvPGJyPg0KJmd0OyAm
Z3Q7IG1hbnVhbGx5IHJlLWFkZCB0aGUgcGVybXMgYmFjayBhZnRlciB0aGUgbmV3IGRpcmVjdG9y
eSBjb25maWcgaXMgaW48YnI+DQomZ3Q7ICZndDsgcGxhY2U/IFBsZWFzZSBhZHZpc2UuPGJyPg0K
Jmd0OyAmZ3Q7Jm5ic3A7ICZndDsgJmd0Ozxicj4NCiZndDsgJmd0OyZuYnNwOyAmZ3Q7ICZndDsg
VGhhbmtzLDxicj4NCiZndDsgJmd0OyZuYnNwOyAmZ3Q7ICZndDsgV2lsbDxicj4NCiZndDsgJmd0
OyZuYnNwOyAmZ3Q7ICZndDs8YnI+DQomZ3Q7ICZndDsmbmJzcDsgJmd0OyAmZ3Q7IE9uIE1hciAy
MSwgMjAxNiwgYXQgNDoyOSBBTSwgTWFydGluIFBlcmluYTxicj4NCiZndDsgJmd0OyAmbHQ7bXBl
cmluYUByZWRoYXQuY29tJmx0O21haWx0bzptcGVyaW5hQHJlZGhhdC5jb20mZ3Q7Jmd0OyB3cm90
ZTo8YnI+DQomZ3Q7ICZndDsmbmJzcDsgJmd0OyAmZ3Q7PGJyPg0KJmd0OyAmZ3Q7Jm5ic3A7ICZn
dDsgJmd0Ozxicj4NCiZndDsgJmd0OyZuYnNwOyAmZ3Q7ICZndDs8YnI+DQomZ3Q7ICZndDsmbmJz
cDsgJmd0OyAmZ3Q7IE9uIE1vbiwgTWFyIDIxLCAyMDE2IGF0IDg6MjAgQU0sIFllZGlkeWFoIEJh
ciBEYXZpZDxicj4NCiZndDsgJmd0OyAmbHQ7ZGlkaUByZWRoYXQuY29tJmx0O21haWx0bzpkaWRp
QHJlZGhhdC5jb20mZ3Q7Jmd0OyB3cm90ZTo8YnI+DQomZ3Q7ICZndDsmbmJzcDsgJmd0OyAmZ3Q7
IE9uIE1vbiwgTWFyIDIxLCAyMDE2IGF0IDQ6NDcgQU0sIFdpbGwgRGVubmlzPGJyPg0KJmd0OyAm
Z3Q7ICZsdDt3ZGVubmlzQG5lYy1sYWJzLmNvbSZsdDttYWlsdG86d2Rlbm5pc0BuZWMtbGFicy5j
b20mZ3Q7Jmd0OyB3cm90ZTo8YnI+DQomZ3Q7ICZndDsmbmJzcDsgJmd0OyAmZ3Q7Jmd0OyBIaSBh
bGwsPGJyPg0KJmd0OyAmZ3Q7Jm5ic3A7ICZndDsgJmd0OyZndDs8YnI+DQomZ3Q7ICZndDsmbmJz
cDsgJmd0OyAmZ3Q7Jmd0OyBJIGhhdmUgZW5hYmxlZCBBY3RpdmUgRGlyZWN0b3J5IGF1dGhlbnRp
Y2F0aW9uIGZvciB0aGUgdXNlcnMgaW48YnI+DQomZ3Q7ICZndDsgb1ZpcnQgKHZpYSBlbmdpbmUt
bWFuYWdlLWRvbWFpbnMgY29tbWFuZCB1c2luZyAtLXByb3ZpZGVyPWFkKSBhbmQsPGJyPg0KJmd0
OyAmZ3Q7IGFsdGhvdWdoIGl0IHdvcmtzLCBpdCB0YWtlcyBhYm91dCB+NTAgc2Vj4oCZcyB0byBw
cm9jZXNzIGEgbG9naW4uIEkgaGF2ZTxicj4NCiZndDsgJmd0OyBvdGhlciBPU1Mgc29mdHdhcmUg
dGhhdCB1dGlsaXplcyBBRCBhdXRoLCBhbmQgdGhlcmUgaXMgbm8gc3VjaCBsYWcgd2hlbjxicj4N
CiZndDsgJmd0OyBwcm9jZXNzaW5nIGxvZ2lucywgc28gSeKAmW0gZ3Vlc3NpbmcgaXTigJlzIGEg
cHJvYmxlbSB3aXRoIHRoZSBvVmlydDxicj4NCiZndDsgJmd0OyBpbXBsZW1lbnRhdGlvbuKApiBB
bnkgd2F5IHRvIGRlYnVnIHdoeSB0aGUgYXV0aCBwcm9jZXNzIGlzIHRha2luZyBzbyBsb25nPzxi
cj4NCiZndDsgJmd0OyZuYnNwOyAmZ3Q7ICZndDs8YnI+DQomZ3Q7ICZndDsmbmJzcDsgJmd0OyAm
Z3Q7IFRoaXMgaXMgYW4gb2xkLCB1bm1haW50YWluZWQgY29tcG9uZW50LiBZb3Ugc2hvdWxkIHVz
ZSB0aGUgbmV3PGJyPg0KJmd0OyAmZ3Q7IGFhYS1sZGFwIG9uZS48YnI+DQomZ3Q7ICZndDsmbmJz
cDsgJmd0OyAmZ3Q7IFNlYXJjaCB0aGUgbGlzdCBhcmNoaXZlcyBmb3IgJnF1b3Q7YWFhLWxkYXAm
cXVvdDsgYW5kL29yIHJlYWQgdGhlIFJFQURNRSBmaWxlPGJyPg0KJmd0OyAmZ3Q7IGluIHRoZTxi
cj4NCiZndDsgJmd0OyZuYnNwOyAmZ3Q7ICZndDsgc291cmNlcyBbMV0uIEJlc3QsPGJyPg0KJmd0
OyAmZ3Q7Jm5ic3A7ICZndDsgJmd0Ozxicj4NCiZndDsgJmd0OyZuYnNwOyAmZ3Q7ICZndDsgWzFd
PGJyPg0KJmd0OyAmZ3Q7IGh0dHBzOi8vZ2Vycml0Lm92aXJ0Lm9yZy9naXR3ZWI/cD1vdmlydC1l
bmdpbmUtZXh0ZW5zaW9uLWFhYS1sZGFwLmdpdDthPWJsb2I7Zj1SRUFETUU8YnI+DQomZ3Q7ICZn
dDsmbmJzcDsgJmd0OyAmZ3Q7PGJyPg0KJmd0OyAmZ3Q7Jm5ic3A7ICZndDsgJmd0OyDigItZb3Ug
Y291bGQgYWxzbyB0YWtlIGEgbG9vayBhdCBSSEVWIDMuNiBBZG1pbmlzdHJhdGlvbiBHdWlkZSw8
YnI+DQomZ3Q7ICZndDsgY2hhcHRlciAxMyBVc2VycyBhbmQgUm9sZXMgWzJdPGJyPg0KJmd0OyAm
Z3Q7Jm5ic3A7ICZndDsgJmd0OyB3aGVyZSB5b3UgY2FuIGZpbmQgZGV0YWlsZWQgc3RlcHMgZm9y
IGNvbW1vbiBjb25maWd1cmF0aW9ucy48YnI+DQomZ3Q7ICZndDsmbmJzcDsgJmd0OyAmZ3Q7PGJy
Pg0KJmd0OyAmZ3Q7Jm5ic3A7ICZndDsgJmd0OyBNYXJ0aW4gUGVyaW5hPGJyPg0KJmd0OyAmZ3Q7
Jm5ic3A7ICZndDsgJmd0Ozxicj4NCiZndDsgJmd0OyZuYnNwOyAmZ3Q7ICZndDsgWzJdPGJyPg0K
Jmd0OyAmZ3Q7IGh0dHBzOi8vYWNjZXNzLnJlZGhhdC5jb20vZG9jdW1lbnRhdGlvbi9lbi1VUy9S
ZWRfSGF0X0VudGVycHJpc2VfVmlydHVhbGl6YXRpb24vMy42L2h0bWwvQWRtaW5pc3RyYXRpb25f
R3VpZGUvY2hhcC1Vc2Vyc19hbmRfUm9sZXMuaHRtbDxicj4NCiZndDsgJmd0OyZuYnNwOyAmZ3Q7
ICZndDsg4oCLPGJyPg0KJmd0OyAmZ3Q7Jm5ic3A7ICZndDsgJmd0Ozxicj4NCiZndDsgJmd0OyZu
YnNwOyAmZ3Q7ICZndDs8YnI+DQomZ3Q7ICZndDsmbmJzcDsgJmd0OyAmZ3Q7Jmd0Ozxicj4NCiZn
dDsgJmd0OyZuYnNwOyAmZ3Q7ICZndDsmZ3Q7IFdpbGw8YnI+DQomZ3Q7ICZndDsmbmJzcDsgJmd0
OyAmZ3Q7Jmd0OyBfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
Xzxicj4NCiZndDsgJmd0OyZuYnNwOyAmZ3Q7ICZndDsmZ3Q7IFVzZXJzIG1haWxpbmcgbGlzdDxi
cj4NCiZndDsgJmd0OyZuYnNwOyAmZ3Q7ICZndDsmZ3Q7IFVzZXJzQG92aXJ0Lm9yZyZsdDttYWls
dG86VXNlcnNAb3ZpcnQub3JnJmd0Ozxicj4NCiZndDsgJmd0OyZuYnNwOyAmZ3Q7ICZndDsmZ3Q7
IGh0dHA6Ly9saXN0cy5vdmlydC5vcmcvbWFpbG1hbi9saXN0aW5mby91c2Vyczxicj4NCiZndDsg
Jmd0OyZuYnNwOyAmZ3Q7ICZndDs8YnI+DQomZ3Q7ICZndDsmbmJzcDsgJmd0OyAmZ3Q7PGJyPg0K
Jmd0OyAmZ3Q7Jm5ic3A7ICZndDsgJmd0Ozxicj4NCiZndDsgJmd0OyZuYnNwOyAmZ3Q7ICZndDsg
LS08YnI+DQomZ3Q7ICZndDsmbmJzcDsgJmd0OyAmZ3Q7IERpZGk8YnI+DQomZ3Q7ICZndDsmbmJz
cDsgJmd0OyAmZ3Q7IF9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fPGJyPg0KJmd0OyAmZ3Q7Jm5ic3A7ICZndDsgJmd0OyBVc2VycyBtYWlsaW5nIGxpc3Q8YnI+
DQomZ3Q7ICZndDsmbmJzcDsgJmd0OyAmZ3Q7IFVzZXJzQG92aXJ0Lm9yZyZsdDttYWlsdG86VXNl
cnNAb3ZpcnQub3JnJmd0Ozxicj4NCiZndDsgJmd0OyZuYnNwOyAmZ3Q7ICZndDsgaHR0cDovL2xp
c3RzLm92aXJ0Lm9yZy9tYWlsbWFuL2xpc3RpbmZvL3VzZXJzPGJyPg0KJmd0OyAmZ3Q7Jm5ic3A7
ICZndDsgJmd0Ozxicj4NCiZndDsgJmd0OyZuYnNwOyAmZ3Q7ICZndDs8YnI+DQomZ3Q7ICZndDsm
bmJzcDsgJmd0OyAmZ3Q7IF9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fPGJyPg0KJmd0OyAmZ3Q7Jm5ic3A7ICZndDsgJmd0OyBVc2VycyBtYWlsaW5nIGxpc3Q8
YnI+DQomZ3Q7ICZndDsmbmJzcDsgJmd0OyAmZ3Q7IFVzZXJzQG92aXJ0Lm9yZzxicj4NCiZndDsg
Jmd0OyZuYnNwOyAmZ3Q7ICZndDsgaHR0cDovL2xpc3RzLm92aXJ0Lm9yZy9tYWlsbWFuL2xpc3Rp
bmZvL3VzZXJzPGJyPg0KJmd0OyAmZ3Q7Jm5ic3A7ICZndDsgJmd0Ozxicj4NCiZndDsgJmd0OyZu
YnNwOyAmZ3Q7IF9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
PGJyPg0KJmd0OyAmZ3Q7Jm5ic3A7ICZndDsgVXNlcnMgbWFpbGluZyBsaXN0PGJyPg0KJmd0OyAm
Z3Q7Jm5ic3A7ICZndDsgVXNlcnNAb3ZpcnQub3JnPGJyPg0KJmd0OyAmZ3Q7Jm5ic3A7ICZndDsg
aHR0cDovL2xpc3RzLm92aXJ0Lm9yZy9tYWlsbWFuL2xpc3RpbmZvL3VzZXJzPGJyPg0KJmd0OyAm
Z3Q7PGJyPg0KPC9wPg0KPC9ib2R5Pg0KPC9odG1sPg0K
--_000_e68eb5719d7a4784b4b46753e2e0cbf1exch24sluse_--
Good morning,
I have a question: when I take a snapshot, a new LV is created; however,
when I delete the snapshot, the LV is not removed. Is that expected?
[root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# ls
27a8bca3-f984-4f67-9dd2-9e2fc5a5f366 7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
3fba372c-4c39-4843-be9e-b358b196331d b47f58e0-d576-49be-b8aa-f30581a0373a
5097df27-c676-4ee7-af89-ecdaed2c77be c598bb22-a386-4908-bfa1-7c44bd764c96
5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
[root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# ls -l
total 0
lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:28
27a8bca3-f984-4f67-9dd2-9e2fc5a5f366 ->
/dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:31
3fba372c-4c39-4843-be9e-b358b196331d ->
/dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/3fba372c-4c39-4843-be9e-b358b196331d
lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 08:44
5097df27-c676-4ee7-af89-ecdaed2c77be ->
/dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5097df27-c676-4ee7-af89-ecdaed2c77be
lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:23
5aaf9ce9-d7ad-4607-aab9-2e239ebaed51 ->
/dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:12
7d9b6ed0-1125-4215-ab76-37bcda3f6c2d ->
/dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
lrwxrwxrwx. 1 vdsm kvm 78 Nov 27 22:30
b47f58e0-d576-49be-b8aa-f30581a0373a ->
/dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/b47f58e0-d576-49be-b8aa-f30581a0373a
lrwxrwxrwx. 1 vdsm kvm 78 Mar 11 22:01
c598bb22-a386-4908-bfa1-7c44bd764c96 ->
/dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/c598bb22-a386-4908-bfa1-7c44bd764c96
Snapshot disks:
[root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
image: 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
file format: qcow2
virtual size: 112G (120259084288 bytes)
disk size: 0
cluster_size: 65536
backing file: ../93633835-d709-4ebb-9317-903e62064c43/b47f58e0-d576-49be-b8aa-f30581a0373a
backing file format: raw
Format specific information:
compat: 0.10
refcount bits: 16
[root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
3fba372c-4c39-4843-be9e-b358b196331d
image: 3fba372c-4c39-4843-be9e-b358b196331d
file format: qcow2
virtual size: 112G (120259084288 bytes)
disk size: 0
cluster_size: 65536
backing file: ../93633835-d709-4ebb-9317-903e62064c43/b47f58e0-d576-49be-b8aa-f30581a0373a
backing file format: raw
Format specific information:
compat: 0.10
refcount bits: 16
[root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
image: 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
file format: qcow2
virtual size: 112G (120259084288 bytes)
disk size: 0
cluster_size: 65536
backing file: ../93633835-d709-4ebb-9317-903e62064c43/b47f58e0-d576-49be-b8aa-f30581a0373a
backing file format: raw
Format specific information:
compat: 0.10
refcount bits: 16
Base disk:
[root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
b47f58e0-d576-49be-b8aa-f30581a0373a
image: b47f58e0-d576-49be-b8aa-f30581a0373a
file format: raw
virtual size: 112G (120259084288 bytes)
disk size: 0
Thanks.
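The backing-file chains shown by `qemu-img info` above come straight from the qcow2 header: each snapshot volume records the name of the volume beneath it, which is why the base volume must stay until the chain is merged. A minimal illustrative sketch (not oVirt or VDSM code; the demo file name and backing path below are made up):

```python
import struct

def qcow2_backing_file(path):
    """Return the backing file name recorded in a qcow2 header, or None.

    qcow2 header layout (big-endian): magic 'QFI\\xfb', u32 version,
    u64 backing_file_offset, u32 backing_file_size, ...
    """
    with open(path, "rb") as f:
        magic, version, offset, size = struct.unpack(">4sIQI", f.read(20))
        if magic != b"QFI\xfb":
            raise ValueError("not a qcow2 image")
        if offset == 0:
            return None          # base image: no backing file
        f.seek(offset)
        return f.read(size).decode()

# Build a tiny fake header to demo (hypothetical backing path):
name = b"../93633835-d709-4ebb-9317-903e62064c43/b47f58e0"
hdr = struct.pack(">4sIQI", b"QFI\xfb", 3, 72, len(name)).ljust(72, b"\x00")
with open("/tmp/demo.qcow2", "wb") as f:
    f.write(hdr + name)
print(qcow2_backing_file("/tmp/demo.qcow2"))
# -> ../93633835-d709-4ebb-9317-903e62064c43/b47f58e0
```

This is the same reference that `qemu-img info` prints as "backing file"; all three snapshot volumes in the listing above point at the single raw base volume.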
Hi all,
I have enabled Active Directory authentication for the users in oVirt (via engine-manage-domains command using --provider=ad) and, although it works, it takes about ~50 sec’s to process a login. I have other OSS software that utilizes AD auth, and there is no such lag when processing logins, so I’m guessing it’s a problem with the oVirt implementation… Any way to debug why the auth process is taking so long?
Will
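One way to start isolating a slow AD login like this is to check whether DNS resolution and connectivity to the domain controller are themselves slow, before suspecting the engine. A rough sketch (the DC hostname is a hypothetical placeholder, not from this thread):

```python
import socket
import time

def time_connect(host, port, timeout=5.0):
    """Time a TCP connect (including DNS lookup) to an LDAP endpoint.
    A multi-second value here points at network/DNS rather than oVirt."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return time.perf_counter() - start

# Hypothetical domain controller; 389 = LDAP, 3268 = AD global catalog.
# elapsed = time_connect("dc1.example.com", 3268)
# print("connect took %.2fs" % elapsed)
```

If the connect itself is fast, the next suspects are the LDAP bind and the group-membership searches the auth backend performs.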
Hi folks,
how can I delete the last storage domain definition from the oVirt
database if the last volume has already been deleted directly from the
bricks' command line (rm -rf <path to that volume>)?
The last record for this storage still exists in the oVirt DB and blocks
creating new storage (oVirt offers "delete datacenter", but that is not
the right way for me now).
regs. Pavel
24 Mar '16
Hello Paul,

Any chance you could rename the HE storage domain to hosted_storage and
retry the auto-import of it?
You might find the auto-import works as described in
https://bugzilla.redhat.com/show_bug.cgi?id=1269768 .
Thanks in advance.

Best regards,
Nikolai
____________________
Nikolai Sednev
Senior Quality Engineer at Compute team
Red Hat Israel
34 Jerusalem Road,
Ra'anana, Israel 43501
Tel: +972 9 7692043
Mobile: +972 52 7342734
Email: nsednev(a)redhat.com
IRC: nsednev
----- Original Message -----
From: users-request(a)ovirt.org
To: users(a)ovirt.org
Sent: Thursday, March 24, 2016 12:29:37 PM
Subject: Users Digest, Vol 54, Issue 113
Today's Topics:

1. Re: Delete Failed to update OVF disks, OVF data isn't updated on those
   OVF stores (Data Center Default, Storage Domain hostedengine_nfs).
   (Maor Lipchuk)
2. libvirt failed to read spice key (Fabrice Bacchella)
3. Re: VM get stuck randomly (Christophe TREFOIS)
4. delete storage definition (paf1(a)email.cz)
5. Re: [ANN] oVirt 3.6.4 Second Release Candidate is now available for
   testing (Ondřej Svoboda)

----------------------------------------------------------------------
Message: 1
Date: Thu, 24 Mar 2016 11:01:32 +0200
From: Maor Lipchuk <mlipchuk(a)redhat.com>
To: "Paul Groeneweg | Pazion" <paul(a)pazion.nl>
Cc: users(a)ovirt.org
Subject: Re: [ovirt-users] Delete Failed to update OVF disks, OVF data
isn't updated on those OVF stores (Data Center Default, Storage Domain
hostedengine_nfs).
Message-ID: <CAJ1JNOd0RxCfEYDv8jKdd2HZ+StGmyFySNcwyjfOqp1-GAxBrw(a)mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

On Thu, Mar 24, 2016 at 12:12 AM, Paul Groeneweg | Pazion <paul(a)pazion.nl> wrote:
>
> After the 3.6 updates (which didn't go without a hitch)
>
> I get the following errors in my event log:
>
> Failed to update OVF disks 18c50ea6-4654-4525-b241-09e15acf5e99, OVF data
> isn't updated on those OVF stores (Data Center Default, Storage Domain
> hostedengine_nfs).
>
> VDSM command failed: Could not acquire resource. Probably resource factory
> threw an exception.: ()
>
> http://screencast.com/t/S8cfXMsdGM
>
> When I check on file there is some data, but not updated:
> http://screencast.com/t/hbXQFlou
>
> When I check in the web interface I see 2 OVF files listed. What are these
> for, can I delete them? http://screencast.com/t/ymnzsNHj7e
>
> Hopefully someone knows what to do about these warnings/errors and whether
> I can delete the OVF files.
>
> Best Regards,
> Paul Groeneweg
>
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>

Hi Paul,

The OVF_STORE disks are disks which preserve all the VMs' and Templates'
OVF data and are mostly used for disaster recovery scenarios.
Those disks can not be deleted.
Regarding the audit log which you got, can you try to detach and attach the
Storage once again and let me know if you still get this event log.

Regards,
Maor
------------------------------=20
Message: 2=20
Date: Thu, 24 Mar 2016 10:02:46 +0100=20
From: Fabrice Bacchella <fabrice.bacchella(a)orange.fr>=20
To: oVirt Userlist <Users(a)ovirt.org>=20
Subject: [ovirt-users] libvirt failed to read spice key=20
Message-ID: <734DEEBF-8845-4D8F-BDCC-139E50D9875D(a)orange.fr>=20
Content-Type: text/plain; charset=3Dus-ascii=20
I' m running on a brand new Centos 7.2 an up to date ovirt 3.6.3.4.=20
The host is new too and dedicated to ovirt.=20
When I try to launch a vm, I get :=20
Thread-9407::ERROR::2016-03-24 09:16:18,301::vm::759::virt.vm::(_startUnder=
lyingVm) vmId=3D`a32e1043-a5a5-4e4c-8436-f7b7a4ff644c`::The vm start proces=
s failed=20
Traceback (most recent call last):=20
File "/usr/share/vdsm/virt/vm.py", line 703, in _startUnderlyingVm=20
self._run()=20
File "/usr/share/vdsm/virt/vm.py", line 1941, in _run=20
self._connection.createXML(domxml, flags),=20
File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 124=
, in wrapper=20
ret =3D f(*args, **kwargs)=20
File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1313, in wrappe=
r=20
return func(inst, *args, **kwargs)=20
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3611, in createX=
ML=20
if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=3Dsel=
f)=20
libvirtError: internal error: process exited while connecting to monitor: (=
(null):23672): Spice-Warning **: reds.c:3311:reds_init_ssl: Could not use p=
rivate key file=20
2016-03-24T08:16:18.005359Z qemu-kvm: failed to initialize spice server=20
/var/log/libvirt/qemu/test.log says=20
2016-03-24 08:55:48.214+0000: starting up libvirt version: 1.2.17, package:=
13.el7_2.3 (CentOS BuildSystem <http://bugs.centos.org>, 2016-02-16-17:06:=
00, worker1.bsys.centos.org) qemu version: 2.3.0 (qemu-kvm-ev-2.3.0-31.el7=
_2.7.1)=20
LC_ALL=3DC PATH=3D/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AU=
DIO_DRV=3Dspice /usr/libexec/qemu-kvm -name test -S -machine pc-i440fx-rhel=
7.2.0,accel=3Dkvm,usb=3Doff -cpu Haswell-noTSX -m size=3D2097152k,slots=3D1=
6,maxmem=3D4294967296k -realtime mlock=3Doff -smp 2,maxcpus=3D16,sockets=3D=
16,cores=3D1,threads=3D1 -numa node,nodeid=3D0,cpus=3D0-1,mem=3D2048 -uuid =
a32e1043-a5a5-4e4c-8436-f7b7a4ff644c -smbios type=3D1,manufacturer=3DoVirt,=
product=3DoVirt Node,version=3D7-2.1511.el7.centos.2.10,serial=3D30373237-3=
132-5A43-3235-343233333937,uuid=3Da32e1043-a5a5-4e4c-8436-f7b7a4ff644c -no-=
user-config -nodefaults -chardev socket,id=3Dcharmonitor,path=3D/var/lib/li=
bvirt/qemu/domain-test/monitor.sock,server,nowait -mon chardev=3Dcharmonito=
r,id=3Dmonitor,mode=3Dcontrol -rtc base=3D2016-03-24T08:55:46,driftfix=3Dsl=
ew -global kvm-pit.lost_tick_policy=3Ddiscard -no-hpet -no-shutdown -boot m=
enu=3Don,strict=3Don -device piix3-usb-uhci,id=3Dusb,bus=3Dpci.0,addr=3D0x1=
.0x2 -device virtio-scsi-pci,id=3Dscsi0,bus=3Dpci.0,addr=3D0x4 -device virt=
io-serial-pci,id=20
=3Dvirtio-serial0,max_ports=3D16,bus=3Dpci.0,addr=3D0x5 -drive if=3Dnone,id=
=3Ddrive-ide0-1-0,readonly=3Don,format=3Draw,serial=3D -device ide-cd,bus=
=3Dide.1,unit=3D0,drive=3Ddrive-ide0-1-0,id=3Dide0-1-0 -drive file=3D/rhev/=
data-center/00000001-0001-0001-0001-00000000022a/85d19e93-ee08-41bb-94c9-56=
adf17287b4/images/da6f49dd-8662-418b-a859-3523b4360c0e/930bbe74-7470-4b22-b=
096-fdb03276262d,if=3Dnone,id=3Ddrive-scsi0-0-0-0,format=3Draw,serial=3Dda6=
f49dd-8662-418b-a859-3523b4360c0e,cache=3Dnone,werror=3Dstop,rerror=3Dstop,=
aio=3Dnative,iops=3D300 -device scsi-hd,bus=3Dscsi0.0,channel=3D0,scsi-id=
=3D0,lun=3D0,drive=3Ddrive-scsi0-0-0-0,id=3Dscsi0-0-0-0,bootindex=3D1 -netd=
ev tap,fd=3D27,id=3Dhostnet0,vhost=3Don,vhostfd=3D28 -device virtio-net-pci=
,netdev=3Dhostnet0,id=3Dnet0,mac=3D00:1a:4a:16:01:51,bus=3Dpci.0,addr=3D0x3=
,bootindex=3D2 -chardev socket,id=3Dcharserial0,path=3D/var/run/ovirt-vmcon=
sole-console/a32e1043-a5a5-4e4c-8436-f7b7a4ff644c.sock,server,nowait -devic=
e isa-serial,chardev=3Dcharserial0,id=3Dserial0 -chardev socket,id=3Dcharch=
annel0,path=3D/var/lib/libvirt/qemu=20
/channels/a32e1043-a5a5-4e4c-8436-f7b7a4ff644c.com.redhat.rhevm.vdsm,server=
,nowait -device virtserialport,bus=3Dvirtio-serial0.0,nr=3D1,chardev=3Dchar=
channel0,id=3Dchannel0,name=3Dcom.redhat.rhevm.vdsm -chardev socket,id=3Dch=
archannel1,path=3D/var/lib/libvirt/qemu/channels/a32e1043-a5a5-4e4c-8436-f7=
b7a4ff644c.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=
=3Dvirtio-serial0.0,nr=3D2,chardev=3Dcharchannel1,id=3Dchannel1,name=3Dorg.=
qemu.guest_agent.0 -chardev spicevmc,id=3Dcharchannel2,name=3Dvdagent -devi=
ce virtserialport,bus=3Dvirtio-serial0.0,nr=3D3,chardev=3Dcharchannel2,id=
=3Dchannel2,name=3Dcom.redhat.spice.0 -spice port=3D5900,tls-port=3D5901,ad=
dr=3D0,x509-dir=3D/etc/pki/vdsm/libvirt-spice,seamless-migration=3Don -devi=
ce qxl-vga,id=3Dvideo0,ram_size=3D67108864,vram_size=3D8388608,vgamem_mb=3D=
16,bus=3Dpci.0,addr=3D0x2 -device virtio-balloon-pci,id=3Dballoon0,bus=3Dpc=
i.0,addr=3D0x6 -msg timestamp=3Don=20
((null):29166): Spice-Warning **: reds.c:3311:reds_init_ssl: Could not use =
private key file=20
2016-03-24T08:55:48.329252Z qemu-kvm: failed to initialize spice server=20
2016-03-24 08:55:48.479+0000: shutting down=20
and indeed, when I try to strace libvirt :=20
open("/etc/pki/vdsm/libvirt-spice/server-key.pem", O_RDONLY) =3D -1 EACCES =
(Permission denied)=20
chmod a+r /etc/pki/vdsm/libvirt-spice/server-key.pem solved the problem, bu=
t it's obviously not a solution.=20
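Rather than `chmod a+r`, a narrower fix is usually to make the key readable by the group the qemu process runs as (e.g. mode 0440) while keeping it closed to everyone else. A small self-contained sketch of that permission check, using a throwaway file in place of server-key.pem (the helper name and mode choice are illustrative assumptions, not the documented oVirt fix):

```python
import os
import stat
import tempfile

def group_can_read(path):
    """True if the file's group-read bit is set, i.e. a process running
    under the file's group (such as qemu) can open it read-only."""
    return bool(os.stat(path).st_mode & stat.S_IRGRP)

fd, key = tempfile.mkstemp()   # stand-in for server-key.pem
os.close(fd)
os.chmod(key, 0o400)           # owner-only: qemu gets EACCES, as in the strace above
print(group_can_read(key))     # False
os.chmod(key, 0o440)           # owner + group read: narrower than a+r
print(group_can_read(key))     # True
os.unlink(key)
```

With 0440 plus the right group ownership, the world-readable bit that `a+r` adds is never needed.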
------------------------------

Message: 3
Date: Thu, 24 Mar 2016 09:45:34 +0000
From: Christophe TREFOIS <christophe.trefois(a)uni.lu>
To: Nir Soffer <nsoffer(a)redhat.com>
Cc: users <users(a)ovirt.org>
Subject: Re: [ovirt-users] VM get stuck randomly
Message-ID: <2EBB29CB9A8F494FB5253F6AF2E6A1981D6B8F66(a)hoshi.uni.lux>
Content-Type: text/plain; charset="utf-8"

Hi,

We finally upgraded to 3.6.3 across the whole data center and will now see
if this issue reappears.
The upgrade went quite smoothly, first from 3.5.4 to 3.5.6 and then to 3.6.3.

Thank you,

--
Christophe

> -----Original Message-----
> From: Nir Soffer [mailto:nsoffer@redhat.com]
> Sent: dimanche 13 mars 2016 12:51
> To: Christophe TREFOIS <christophe.trefois(a)uni.lu>
> Cc: users <users(a)ovirt.org>
> Subject: Re: [ovirt-users] VM get stuck randomly
>
> On Sun, Mar 13, 2016 at 9:46 AM, Christophe TREFOIS
> <christophe.trefois(a)uni.lu> wrote:
> > Dear all,
> >
> > I have a problem since a couple of weeks, where randomly 1 VM (not always
> > the same) becomes completely unresponsive.
> > We find this out because our Icinga server complains that the host is down.
> >
> > Upon inspection, we find we can't open a console to the VM, nor can we
> > login.
> >
> > In oVirt engine, the VM looks like "up". The only weird thing is that RAM
> > usage shows 0% and CPU usage shows 100% or 75% depending on number of
> > cores.
> > The only way to recover is to force shutdown the VM via 2-times shutdown
> > from the engine.
> >
> > Could you please help me to start debugging this?
> > I can provide any logs, but I'm not sure which ones, because I couldn't see
> > anything with ERROR in the vdsm logs on the host.
>
> I would inspect this vm on the host when it happens.
>
> What is vdsm cpu usage? what is the qemu process (for this vm) cpu usage?
>
> strace output of this qemu process (all threads) or a core dump can help
> qemu developers to understand this issue.
>
> >
> > The host is running
> >
> > OS Version: RHEL - 7 - 1.1503.el7.centos.2.8
> > Kernel Version: 3.10.0 - 229.14.1.el7.x86_64
> > KVM Version: 2.1.2 - 23.el7_1.8.1
> > LIBVIRT Version: libvirt-1.2.8-16.el7_1.4
> > VDSM Version: vdsm-4.16.26-0.el7.centos
> > SPICE Version: 0.12.4 - 9.el7_1.3
> > GlusterFS Version: glusterfs-3.7.5-1.el7
>
> You are running old versions, missing a lot of fixes. Nothing specific to your
> problem, but this lowers the chance of getting a working system.
>
> It would be nice if you can upgrade to ovirt-3.6 and report if it made any
> change.
> Or at least latest ovirt-3.5.
>
> > We use a locally exported gluster as storage domain (eg, storage is on the
> > same machine exposed via gluster). No replica.
> > We run around 50 VMs on that host.
>
> Why use gluster for this? Do you plan to add more gluster servers in the
> future?
>
> Nir
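Nir's question about VDSM and qemu CPU usage can be answered without extra tooling by sampling /proc. A minimal generic-Linux sketch (not VDSM code; for a stuck VM you would pass the qemu-kvm pid and diff two samples over an interval to get a rate):

```python
import os
import time

def cpu_ticks(pid):
    """Return user+system clock ticks consumed by a process, read from
    /proc/<pid>/stat (fields 14 and 15 per proc(5): utime and stime)."""
    with open("/proc/%d/stat" % pid) as f:
        rest = f.read().rsplit(")", 1)[1].split()  # drop "pid (comm)" prefix
    return int(rest[11]) + int(rest[12])           # utime + stime

# Sample the current process twice as a demo; counters only ever grow.
t0 = cpu_ticks(os.getpid())
time.sleep(0.1)
t1 = cpu_ticks(os.getpid())
print(t1 - t0 >= 0)   # -> True
```

Dividing the tick delta by the sampling interval times `os.sysconf("SC_CLK_TCK")` gives the CPU fraction the engine reports as 100%/75% in the symptom above.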
------------------------------

Message: 4
Date: Thu, 24 Mar 2016 11:20:07 +0100
From: "paf1(a)email.cz" <paf1(a)email.cz>
To: users <users(a)ovirt.org>
Subject: [ovirt-users] delete storage definition
Message-ID: <56F3BF57.7070009(a)email.cz>
Content-Type: text/plain; charset="utf-8"; Format="flowed"

Hi folks,
how can I delete the last storage domain definition from the oVirt
database if the last volume has already been deleted directly from the
bricks' command line (rm -rf <path to that volume>)?
The last record for this storage still exists in the oVirt DB and blocks
creating new storage (oVirt offers "delete datacenter", but that is not
the right way for me now).
regs. Pavel
------------------------------

Message: 5
Date: Thu, 24 Mar 2016 11:24:26 +0100
From: Ondřej Svoboda <ondrej(a)svobodasoft.cz>
To: Sandro Bonazzola <sbonazzo(a)redhat.com>
Cc: users <users(a)ovirt.org>
Subject: Re: [ovirt-users] [ANN] oVirt 3.6.4 Second Release Candidate
is now available for testing
Message-ID: <56F3C05A.2000208(a)svobodasoft.cz>
Content-Type: text/plain; charset="utf-8"; Format="flowed"

Thank you very much, Sandro!

centos-ovirt36 now works for me.

On 24.3.2016 09:00, Sandro Bonazzola wrote:
>
> On Wed, Mar 23, 2016 at 4:22 PM, Ondřej Svoboda <ondrej(a)svobodasoft.cz> wrote:
>
>     Hi Sandro,
>
>     I just ran yum -y install
>     http://plain.resources.ovirt.org/pub/yum-repo/ovirt-release36.rpm
>     on my fresh EL7 (not CentOS) system.
>
>     In /etc/yum.repos.d/ovirt-3.6-dependencies.repo, there was a
>     broken centos-ovirt36 source:
>
>     http://mirror.centos.org/centos/7Server/virt/x86_64/ovirt-3.6/repodata/repomd.xml:
>     [Errno 14] HTTP Error 404 - Not Found
>
> Fixed pointing to
> http://mirror.centos.org/centos/7/virt/x86_64/ovirt-3.6/repodata/repomd.xml
>
> Released a new ovirt-release36 package including the fix.
>
>     I had to disable this repo before I was even able to update my system.
>
>     [centos-ovirt36]
>     name=CentOS-$releasever - oVirt 3.6
>     baseurl=http://mirror.centos.org/centos/$releasever/virt/$basearch/ovirt-3.6/
>     gpgcheck=0
>     enabled=1
>     skip_if_unavailable = 1
>     keepcache = 0
>
>     Then I managed to install ovirt-engine all right, so I think the
>     above repo should simply not be enabled in ovirt-release36.rpm.
>
>     Thanks for your reply.
>     Ondra
>
>     On 22.3.2016 18:28, Sandro Bonazzola wrote:
>>     The oVirt Project is pleased to announce the availability of the
>>     Second Release Candidate of oVirt 3.6.4 for testing, as of March
>>     22nd, 2016
>>
>>     This release is available now for:
>>     * Fedora 22
>>     * Red Hat Enterprise Linux 6.7
>>     * CentOS Linux 6.7 (or similar)
>>     * Red Hat Enterprise Linux 7.2 or later
>>     * CentOS Linux (or similar) 7.2 or later
>>
>>     This release supports Hypervisor Hosts running:
>>     * Red Hat Enterprise Linux 7.2 or later
>>     * CentOS Linux (or similar) 7.2 or later
>>     * Fedora 22
>>
>>     This release is also available with experimental support for:
>>     * Debian 8.3 Jessie
>>
>>     This release candidate includes the following updated packages:
>>
>>     * ovirt-engine
>>     * ovirt-hosted-engine-ha
>>
>>     See the release notes [1] for installation / upgrade instructions
>>     and a list of new features and bugs fixed.
>>
>>     Notes:
>>     * A new oVirt Live ISO will be available soon [2].
>>     * Mirrors [3] might need up to one day to synchronize.
>>
>>     Additional Resources:
>>     * Read more about the oVirt 3.6.3 release
>>     highlights: http://www.ovirt.org/release/3.6.4/
>>     * Get more oVirt Project updates on Twitter:
>>     https://twitter.com/ovirt
>>     * Check out the latest project news on the oVirt blog:
>>     http://www.ovirt.org/blog/
>>
>>     [1] http://www.ovirt.org/release/3.6.4/
>>     [2] http://resources.ovirt.org/pub/ovirt-3.6-pre/iso/
>>     [3] http://www.ovirt.org/Repository_mirrors#Current_mirrors
>>
>>
>>     --
>>     Sandro Bonazzola
>>     Better technology. Faster innovation. Powered by community
>>     collaboration.
>>     See how it works at redhat.com
>>
>>
>>     _______________________________________________
>>     Users mailing list
>>     Users(a)ovirt.org
>>     http://lists.ovirt.org/mailman/listinfo/users
>
>
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
------------------------------
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
End of Users Digest, Vol 54, Issue 113
**************************************
Hello Paul,
Any chance you may rename the HE Storage Domain to hosted_storage and retry the auto-import of it?
You might find the auto-import works as described in https://bugzilla.redhat.com/show_bug.cgi?id=1269768 .

Thanks in advance.

Best regards,
Nikolai
____________________
Nikolai Sednev
Senior Quality Engineer at Compute team
Red Hat Israel
34 Jerusalem Road,
Ra'anana, Israel 43501

Tel: +972 9 7692043
Mobile: +972 52 7342734
Email: nsednev(a)redhat.com
IRC: nsednev

----- Original Message -----
From: users-request(a)ovirt.org
To: users(a)ovirt.org
Sent: Thursday, March 24, 2016 12:29:37 PM
Subject: Users Digest, Vol 54, Issue 113

Today's Topics:

   1. Re: Delete Failed to update OVF disks, OVF data isn't updated on
      those OVF stores (Data Center Default, Storage Domain
      hostedengine_nfs). (Maor Lipchuk)
   2. libvirt failed to read spice key (Fabrice Bacchella)
   3. Re: VM get stuck randomly (Christophe TREFOIS)
   4. delete storage definition (paf1(a)email.cz)
   5. Re: [ANN] oVirt 3.6.4 Second Release Candidate is now
      available for testing (Ondřej Svoboda)

----------------------------------------------------------------------

Message: 1
Date: Thu, 24 Mar 2016 11:01:32 +0200
From: Maor Lipchuk <mlipchuk(a)redhat.com>
To: "Paul Groeneweg | Pazion" <paul(a)pazion.nl>
Cc: users(a)ovirt.org
Subject: Re: [ovirt-users] Delete Failed to update OVF disks, OVF data
        isn't updated on those OVF stores (Data Center Default, Storage
        Domain hostedengine_nfs).
Message-ID: <CAJ1JNOd0RxCfEYDv8jKdd2HZ+StGmyFySNcwyjfOqp1-GAxBrw(a)mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

On Thu, Mar 24, 2016 at 12:12 AM, Paul Groeneweg | Pazion <paul(a)pazion.nl> wrote:
>
> After the 3.6 updates (which didn't go without a hitch), I get the
> following errors in my event log:
>
> Failed to update OVF disks 18c50ea6-4654-4525-b241-09e15acf5e99, OVF data
> isn't updated on those OVF stores (Data Center Default, Storage Domain
> hostedengine_nfs).
>
> VDSM command failed: Could not acquire resource. Probably resource factory
> threw an exception.: ()
>
> http://screencast.com/t/S8cfXMsdGM
>
> When I check on file there is some data, but not updated:
> http://screencast.com/t/hbXQFlou
>
> When I check in the web interface I see 2 OVF files listed. What are these
> for, can I delete them? http://screencast.com/t/ymnzsNHj7e
>
> Hopefully someone knows what to do about these warnings/errors and whether
> I can delete the OVF files.
>
> Best Regards,
> Paul Groeneweg

Hi Paul,

The OVF_STORE disks are disks which preserve all the VMs' and Templates'
OVF data and are mostly used for disaster recovery scenarios.
Those disks can not be deleted.
Regarding the audit log which you got, can you try to detach and attach
the Storage once again and let me know if you still get this event log.

Regards,
Maor
------------------------------

Message: 2
Date: Thu, 24 Mar 2016 10:02:46 +0100
From: Fabrice Bacchella <fabrice.bacchella(a)orange.fr>
To: oVirt Userlist <Users(a)ovirt.org>
Subject: [ovirt-users] libvirt failed to read spice key
Message-ID: <734DEEBF-8845-4D8F-BDCC-139E50D9875D(a)orange.fr>
Content-Type: text/plain; charset=us-ascii

I'm running an up-to-date oVirt 3.6.3.4 on a brand new CentOS 7.2.

The host is new too and dedicated to oVirt.

When I try to launch a VM, I get:

Thread-9407::ERROR::2016-03-24 09:16:18,301::vm::759::virt.vm::(_startUnderlyingVm) vmId=`a32e1043-a5a5-4e4c-8436-f7b7a4ff644c`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 703, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 1941, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 124, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1313, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3611, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error: process exited while connecting to monitor: ((null):23672): Spice-Warning **: reds.c:3311:reds_init_ssl: Could not use private key file
2016-03-24T08:16:18.005359Z qemu-kvm: failed to initialize spice server

/var/log/libvirt/qemu/test.log says:

2016-03-24 08:55:48.214+0000: starting up libvirt version: 1.2.17, package: 13.el7_2.3 (CentOS BuildSystem <http://bugs.centos.org>, 2016-02-16-17:06:00, worker1.bsys.centos.org) qemu version: 2.3.0 (qemu-kvm-ev-2.3.0-31.el7_2.7.1)
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name test -S -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off -cpu Haswell-noTSX -m size=2097152k,slots=16,maxmem=4294967296k -realtime mlock=off -smp 2,maxcpus=16,sockets=16,cores=1,threads=1 -numa node,nodeid=0,cpus=0-1,mem=2048 -uuid a32e1043-a5a5-4e4c-8436-f7b7a4ff644c -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-2.1511.el7.centos.2.10,serial=30373237-3132-5A43-3235-343233333937,uuid=a32e1043-a5a5-4e4c-8436-f7b7a4ff644c -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-test/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2016-03-24T08:55:46,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot menu=on,strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/00000001-0001-0001-0001-00000000022a/85d19e93-ee08-41bb-94c9-56adf17287b4/images/da6f49dd-8662-418b-a859-3523b4360c0e/930bbe74-7470-4b22-b096-fdb03276262d,if=none,id=drive-scsi0-0-0-0,format=raw,serial=da6f49dd-8662-418b-a859-3523b4360c0e,cache=none,werror=stop,rerror=stop,aio=native,iops=300 -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 -netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=28 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:51,bus=pci.0,addr=0x3,bootindex=2 -chardev socket,id=charserial0,path=/var/run/ovirt-vmconsole-console/a32e1043-a5a5-4e4c-8436-f7b7a4ff644c.sock,server,nowait -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/a32e1043-a5a5-4e4c-8436-f7b7a4ff644c.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/a32e1043-a5a5-4e4c-8436-f7b7a4ff644c.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice port=5900,tls-port=5901,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,seamless-migration=on -device qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vgamem_mb=16,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -msg timestamp=on
((null):29166): Spice-Warning **: reds.c:3311:reds_init_ssl: Could not use private key file
2016-03-24T08:55:48.329252Z qemu-kvm: failed to initialize spice server
2016-03-24 08:55:48.479+0000: shutting down

And indeed, when I strace libvirt:

open("/etc/pki/vdsm/libvirt-spice/server-key.pem", O_RDONLY) = -1 EACCES (Permission denied)

chmod a+r /etc/pki/vdsm/libvirt-spice/server-key.pem solved the problem, but it's obviously not a solution.
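As the report notes, chmod a+r exposes the private key to every local user. A narrower workaround is to keep the key unreadable to "other" and grant read access only to the group qemu runs as. A sketch on a scratch copy, since the real file is /etc/pki/vdsm/libvirt-spice/server-key.pem and changing it needs root; the group name "qemu" in the comments is an assumption about the EL7 qemu-kvm service user:

```shell
# Scratch stand-in for the key file; starts owner-only (0600) like the real one.
install -m 600 /dev/null server-key.pem

# On a real host: chgrp qemu /etc/pki/vdsm/libvirt-spice/server-key.pem
# Here we use our own primary group so the sketch runs anywhere.
chgrp "$(id -gn)" server-key.pem

# Owner and group may read; "other" still gets nothing, unlike chmod a+r.
chmod 440 server-key.pem
stat -c '%a %G %n' server-key.pem
```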
------------------------------

Message: 3
Date: Thu, 24 Mar 2016 09:45:34 +0000
From: Christophe TREFOIS <christophe.trefois(a)uni.lu>
To: Nir Soffer <nsoffer(a)redhat.com>
Cc: users <users(a)ovirt.org>
Subject: Re: [ovirt-users] VM get stuck randomly
Message-ID: <2EBB29CB9A8F494FB5253F6AF2E6A1981D6B8F66(a)hoshi.uni.lux>
Content-Type: text/plain; charset="utf-8"

Hi,

We finally upgraded to 3.6.3 across the whole data center and will now see
if this issue reappears.

The upgrade went quite smooth, first from 3.5.4 to 3.5.6 and then to 3.6.3.

Thank you,

--
Christophe

> -----Original Message-----
> From: Nir Soffer [mailto:nsoffer@redhat.com]
> Sent: dimanche 13 mars 2016 12:51
> To: Christophe TREFOIS <christophe.trefois(a)uni.lu>
> Cc: users <users(a)ovirt.org>
> Subject: Re: [ovirt-users] VM get stuck randomly
>
> On Sun, Mar 13, 2016 at 9:46 AM, Christophe TREFOIS
> <christophe.trefois(a)uni.lu> wrote:
> > Dear all,
> >
> > I have a problem since a couple of weeks, where randomly 1 VM (not always
> the same) becomes completely unresponsive.
> > We find this out because our Icinga server complains that the host is down.
> >
> > Upon inspection, we find we can't open a console to the VM, nor can we
> login.
> >
> > In oVirt engine, the VM looks like "up". The only weird thing is that RAM
> usage shows 0% and CPU usage shows 100% or 75% depending on number of
> cores.
> > The only way to recover is to force shutdown the VM via 2-times shutdown
> from the engine.
> >
> > Could you please help me to start debugging this?
> > I can provide any logs, but I'm not sure which ones, because I couldn't see
> anything with ERROR in the vdsm logs on the host.
>
> I would inspect this vm on the host when it happens.
>
> What is vdsm cpu usage? what is the qemu process (for this vm) cpu usage?
>
> strace output of this qemu process (all threads) or a core dump can help
> qemu developers to understand this issue.
>
> > The host is running
> >
> > OS Version:        RHEL - 7 - 1.1503.el7.centos.2.8
> > Kernel Version:    3.10.0 - 229.14.1.el7.x86_64
> > KVM Version:       2.1.2 - 23.el7_1.8.1
> > LIBVIRT Version:   libvirt-1.2.8-16.el7_1.4
> > VDSM Version:      vdsm-4.16.26-0.el7.centos
> > SPICE Version:     0.12.4 - 9.el7_1.3
> > GlusterFS Version: glusterfs-3.7.5-1.el7
>
> You are running old versions, missing lot of fixes. Nothing specific to your
> problem but this lowers the chance to get a working system.
>
> It would be nice if you can upgrade to ovirt-3.6 and report if it made any
> change. Or at least latest ovirt-3.5.
>
> > We use a locally exported gluster as storage domain (eg, storage is on the
> same machine exposed via gluster). No replica.
> > We run around 50 VMs on that host.
>
> Why use gluster for this? Do you plan to add more gluster servers in the
> future?
>
> Nir
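The inspection Nir suggests (per-thread CPU usage, a syscall sample, a core dump) can be captured in a few commands. A sketch, demonstrated here against a throwaway stand-in process; on the real host you would substitute the stuck VM's qemu-kvm pid, and the pgrep pattern and VM name in the comments are placeholders:

```shell
# Stand-in for the stuck VM's qemu process. On the host, something like:
#   pid=$(pgrep -f 'qemu-kvm.*-name myvm')   # VM name is a placeholder
sleep 30 & pid=$!

# Per-thread CPU usage of the process (one line per thread); a single thread
# pinned at 100% is a useful data point for qemu developers.
ps -L -o pid,tid,pcpu,stat,comm -p "$pid"

# On the real host, also sample syscalls and grab a core dump:
#   timeout 10 strace -f -p "$pid" -o qemu-strace.log   # all threads, 10 s
#   gcore -o qemu-core "$pid"

kill "$pid"
```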
------------------------------

Message: 4
Date: Thu, 24 Mar 2016 11:20:07 +0100
From: "paf1(a)email.cz" <paf1(a)email.cz>
To: users <users(a)ovirt.org>
Subject: [ovirt-users] delete storage definition
Message-ID: <56F3BF57.7070009(a)email.cz>
Content-Type: text/plain; charset="utf-8"; Format="flowed"

Hi folks,
how can I delete the last storage definition from the oVirt database if
the last volume has been deleted from the bricks' command line
(rm -rf <path to that volume>) directly?
The last record for this storage still exists in the oVirt DB and blocks
the "create new storage" operation (oVirt offers "delete datacenter",
but this is not the right way for me, now).
regs. Pavel
------------------------------

Message: 5
Date: Thu, 24 Mar 2016 11:24:26 +0100
From: Ondřej Svoboda <ondrej(a)svobodasoft.cz>
To: Sandro Bonazzola <sbonazzo(a)redhat.com>
Cc: users <users(a)ovirt.org>
Subject: Re: [ovirt-users] [ANN] oVirt 3.6.4 Second Release Candidate
        is now available for testing
Message-ID: <56F3C05A.2000208(a)svobodasoft.cz>
Content-Type: text/plain; charset="utf-8"; Format="flowed"

Thank you very much, Sandro!

centos-ovirt36 now works for me.
[ANN] oVirt 3.6.4 Second Release Candidate is now available for testing
by Sandro Bonazzola 24 Mar '16
The oVirt Project is pleased to announce the availability of the Second
Release Candidate of oVirt 3.6.4 for testing, as of March 22nd, 2016
This release is available now for:
* Fedora 22
* Red Hat Enterprise Linux 6.7
* CentOS Linux 6.7 (or similar)
* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later
This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later
* Fedora 22
This release is also available with experimental support for:
* Debian 8.3 Jessie
This release candidate includes the following updated packages:
- ovirt-engine
- ovirt-hosted-engine-ha
See the release notes [1] for installation / upgrade instructions and a
list of new features and bugs fixed.
Notes:
* A new oVirt Live ISO will be available soon [2].
* Mirrors[3] might need up to one day to synchronize.
Additional Resources:
* Read more about the oVirt 3.6.4 release highlights:
http://www.ovirt.org/release/3.6.4/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/3.6.4/
[2] http://resources.ovirt.org/pub/ovirt-3.6-pre/iso/
[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Greetings all, and Happy Friday;
I'm running an oVirt 3.6 management engine that currently oversees 4
compute nodes, attached to iSCSI storage over InfiniBand. The compute
nodes have 8 hot-swap hard drive slots, with only 2 slots occupied in a
hardware RAID level 1 array for the hypervisor to live on.
Would it be possible for me to set up each compute node with a second
array in the remaining 6 hot-swap slots, and utilize that new space for
a nice fault tolerant GlusterFS storage array ?
1. Would the system allow me to add the Gluster storage to the existing
datacenter/cluster that is currently using iSCSI ?
2. Would I be able to configure all aspects of the Gluster
environment (except the hardware RAID array) through the GUI?
3. What hardware RAID level would be optimal for this configuration ?
(RHEV documentation says RAID 6 is "Mandatory")
4. Will the system support live migration between storage domains ?
I'm reading through the Gluster documentation now to get a better
understanding of its inner workings. Is there a good source for Gluster
on oVirt that I can reference as well?
Hi,
I'm installing the oVirt hosted engine using Fibre Channel storage. During
the deployment I hit this error:
[ ERROR ] The VDSM host was found in a failed state. Please check
engine and bootstrap installation logs.
[ ERROR ] Unable to add hosted_engine_1 to the manager
Tried to reinstall the host via web GUI, but got this error:
Host hosted_engine_1 installation failed. Host is not reachable.
How do I fix this?
P.S. The log files were about 10 MB, so I zipped them all.
Thank you,
Wee
---
This email has been checked for viruses by Avast antivirus software.
https://www.avast.com/antivirus
I tried to add a new host on a RHEL7, but it fails.

In the ovirt-host-deploy-20160322171347-XXX-6ba9d4a3.log file, I found:

warning: /var/cache/yum/x86_64/7/ovirt-3.6-glusterfs-epel/packages/glusterfs-libs-3.7.9-1.el7.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID d5dc52dc: NOKEY
Retrieving key from https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key
2016-03-22 17:13:41 ERROR otopi.plugins.otopi.packagers.yumpackager yumpackager.error:100 Yum GPG key retrieval failed: [Errno 14] HTTPS Error 404 - Not Found
2016-03-22 17:13:41 DEBUG otopi.context context._executeMethod:156 method exception
Traceback (most recent call last):
  File "/tmp/ovirt-6ocubrsLfP/pythonlib/otopi/context.py", line 146, in _executeMethod
    method['method']()
  File "/tmp/ovirt-6ocubrsLfP/otopi-plugins/otopi/packagers/yumpackager.py", line 274, in _packages
    self._miniyum.processTransaction()
  File "/tmp/ovirt-6ocubrsLfP/pythonlib/otopi/miniyum.py", line 1054, in processTransaction
    rpmDisplay=self._RPMCallback(sink=self._sink)
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 6500, in processTransaction
    self._checkSignatures(pkgs,callback)
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 6543, in _checkSignatures
    self.getKeyForPackage(po, self._askForGPGKeyImport)
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 6194, in getKeyForPackage
    keys = self._retrievePublicKey(keyurl, repo)
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 6091, in _retrievePublicKey
    exception2msg(e))
YumBaseError: GPG key retrieval failed: [Errno 14] HTTPS Error 404 - Not Found

In /etc/yum.repos.d/ovirt-3.6-dependencies.repo, I found:

[ovirt-3.6-glusterfs-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/$basearch/
enabled=1
skip_if_unavailable=1
gpgcheck=1
gpgkey=https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key

This file is up to date, I think:

$ rpm -qf /etc/yum.repos.d/ovirt-3.6-dependencies.repo
ovirt-release36-005-1.noarch
$ yum update
...
No packages marked for update

If I try to download the key:

$ curl -ORLv https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key
...
< HTTP/1.1 404 Not Found

I think the explanation is here:
https://download.gluster.org/pub/gluster/glusterfs/LATEST/NEW_PUBLIC_KEY.README

Anything I can do?
I don't even use glusterfs; I would be happy to disable it if I knew how.
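Since glusterfs isn't used here, the simplest way out is to disable the repo so host-deploy stops trying to fetch the dead key URL. A sketch against a local copy of the stanza quoted above; on a real host the stanza lives in /etc/yum.repos.d/ovirt-3.6-dependencies.repo and the edit needs root:

```shell
# Rebuild the stanza locally from the report above.
cat > glusterfs-epel.repo <<'EOF'
[ovirt-3.6-glusterfs-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/$basearch/
enabled=1
skip_if_unavailable=1
gpgcheck=1
gpgkey=https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key
EOF

# Turn the repo off; yum then ignores it, key URL and all.
sed -i 's/^enabled=1/enabled=0/' glusterfs-epel.repo
# (Equivalent on a real host: yum-config-manager --disable ovirt-3.6-glusterfs-epel)
grep '^enabled' glusterfs-epel.repo
```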
=
class=3D"">https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.k=
ey</a></div></div><div class=3D""><br class=3D""></div><div =
class=3D"">This file is up to date I think :</div><div class=3D""><div =
style=3D"margin: 0px; font-size: 11px; font-family: Menlo;" class=3D"">$ =
rpm -qf /etc/yum.repos.d/ovirt-3.6-dependencies.repo</div><div =
style=3D"margin: 0px; font-size: 11px; font-family: Menlo;" =
class=3D"">ovirt-release36-005-1.noarch</div></div><div style=3D"margin: =
0px; font-size: 11px; font-family: Menlo;" class=3D""><br =
class=3D""></div><div style=3D"margin: 0px; font-size: 11px; =
font-family: Menlo;" class=3D""><div style=3D"margin: 0px;" class=3D"">$ =
yum update</div><div style=3D"margin: 0px;" class=3D"">...</div><div =
style=3D"margin: 0px;" class=3D""><div style=3D"margin: 0px;" =
class=3D"">No packages marked for update</div><div class=3D""><br =
class=3D""></div></div></div><div class=3D""><br class=3D""></div><div =
class=3D"">If I try to download it :</div><div class=3D""><div =
style=3D"margin: 0px; font-size: 11px; font-family: Menlo;" class=3D"">$ =
curl -ORLv <a =
href=3D"https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key"=
=
class=3D"">https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.k=
ey</a></div></div><div style=3D"margin: 0px; font-size: 11px; =
font-family: Menlo;" class=3D"">...</div><div style=3D"margin: 0px;" =
class=3D""><div style=3D"font-family: Menlo; font-size: 11px; margin: =
0px;" class=3D"">< HTTP/1.1 404 Not Found</div><div =
style=3D"font-family: Menlo; font-size: 11px;" class=3D""><br =
class=3D""></div><div style=3D"font-family: Menlo; font-size: 11px;" =
class=3D"">I think the explanation are here :</div><div class=3D""><font =
face=3D"Menlo" class=3D""><span style=3D"font-size: 11px;" class=3D""><a =
href=3D"https://download.gluster.org/pub/gluster/glusterfs/LATEST/NEW_PUBL=
IC_KEY.README" =
class=3D"">https://download.gluster.org/pub/gluster/glusterfs/LATEST/NEW_P=
UBLIC_KEY.README</a></span></font></div><div class=3D""><font =
face=3D"Menlo" class=3D""><span style=3D"font-size: 11px;" class=3D""><br =
class=3D""></span></font></div><div class=3D""><font face=3D"Menlo" =
class=3D""><span style=3D"font-size: 11px;" class=3D"">Any thing I can =
do ?</span></font></div><div class=3D""><font face=3D"Menlo" =
class=3D""><span style=3D"font-size: 11px;" class=3D""><br =
class=3D""></span></font></div><div class=3D""><font face=3D"Menlo" =
class=3D""><span style=3D"font-size: 11px;" class=3D"">I don't even use =
glusterfs, I will be happy to disable it if I knew =
how.</span></font></div><div class=3D""><font face=3D"Menlo" =
class=3D""><span style=3D"font-size: 11px;" class=3D""><br =
class=3D""></span></font></div><div class=3D""><font face=3D"Menlo" =
class=3D""><span style=3D"font-size: 11px;" class=3D""><br =
class=3D""></span></font></div><div style=3D"font-family: Menlo; =
font-size: 11px;" class=3D""><br class=3D""></div></div><div =
class=3D""><div class=3D""><br class=3D""></div><div class=3D""><br =
class=3D""></div></div></div></body></html>=
--Apple-Mail=_14AE05D8-7DD7-4F82-A644-49C6A05FC3E6--
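For the "disable it" part: the stanza quoted above can simply be switched off by setting enabled=0 (or with `yum-config-manager --disable ovirt-3.6-glusterfs-epel` from yum-utils). A minimal sketch, shown against a scratch copy of the stanza so it can be tried safely; on a real host the file to edit (as root) is /etc/yum.repos.d/ovirt-3.6-dependencies.repo:

```shell
# Work on a scratch copy of the repo stanza quoted above; on a real host
# edit /etc/yum.repos.d/ovirt-3.6-dependencies.repo instead.
repo=$(mktemp)
cat > "$repo" <<'EOF'
[ovirt-3.6-glusterfs-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/$basearch/
enabled=1
skip_if_unavailable=1
gpgcheck=1
gpgkey=https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key
EOF
# Flip enabled=1 to enabled=0 inside the glusterfs-epel section only
sed -i '/^\[ovirt-3.6-glusterfs-epel\]/,/^\[/ s/^enabled=1/enabled=0/' "$repo"
grep '^enabled=' "$repo"
```

With enabled=0, yum skips the repo entirely and never tries to fetch its GPG key.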
How do I make it so that whenever I add or reinstall a hardware node,
oVirt creates a firewall rule for NFS (port 2049)?
I have to either add it manually after oVirt removes it, or just tell
oVirt not to touch firewall rules.
Our ISO domain is not hosted by the ovirt-engine, FYI.
ovirt-engine-3.6.3.4-1.el7.centos.noarch
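One approach (untested here, and the option name should be verified with `engine-config --list` on your version) is to let the engine itself append the NFS rule whenever it rewrites a host's firewall, via its custom-iptables option. The sketch below only prints the commands rather than running them, since engine-config must be run on the engine host as root:

```shell
# Assemble the rule and the engine-config invocation; printed rather
# than executed (engine-config must run on the engine as root, and the
# option name should be checked with `engine-config --list` first).
NFS_RULE='-A INPUT -p tcp --dport 2049 -j ACCEPT'
cat <<EOF
engine-config -s "IPTablesConfigSiteCustom=$NFS_RULE"
systemctl restart ovirt-engine
# then reinstall the host so the regenerated firewall config is pushed
EOF
```

After that, every host (re)install should include the extra rule instead of dropping it.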
Hello Joop,
thanks for the help. I tried again with a fresh installation, but I have
to enter a first storage domain for the NFS path I created. So the
question is: which path do I have to add in the engine? Create a new one
on the host and add it in the engine?
thx
On 2016-03-20 14:34, Joop wrote:
> On 20-3-2016 11:49, Taste-Of-IT wrote:
>> Hello Joop,
>>
>> Actually, right after the installation I have no storage domain, so
>> what do I have to do in detail? My installation paths are
>> /var/ovirt/exports/data and /var/ovirt/exports/iso. I added the data
>> path during installation of the host. At engine installation I said
>> no to a storage domain. As far as I can see I have no storage domain
>> in the cluster, neither the default nor those I added during the
>> installation of the host.
>>
>> So what do I have to do in detail?
> What I did was the following (probably missing quite a few steps)
> - create (minimum) two folders, one for hosted-engine storage, one for
> data storage, iso and export optional
> - create exports in /etc/exports
> - start nfs server
> - make sure permissions are 36:36 on the exports
> - check if the mounts work (problem areas are: iptables/selinux)
>
> - install ovirt repo
> - install hosted-engine
> - run 'hosted-engine --deploy'
> - I installed from centos-7 iso but appliance should work too, no
> experience with it
> - enter the correct path when asked for the hosted storage, nfs and
> fqdn:/path-to-nfs-he-share
> - follow the instructions, not much needs changing from the defaults
> - NetworkManager still seems to be a problem; use a static IP, don't
> try to bridge wifi :-)
> - If the engine is started by the ha-agent/broker it can take a while
> before the webui is available
> - If everything goes OK engine will claim and insert the hosted storage
> domain in the database and make it visible in the webui
> - DON'T use it for VMs
> - create a new data domain on fqdn:/nfs/ovirt/data (or whatever your
> path is)
> - wait until it's up
> - start creating VMs
>
> My case is a laptop running F22 with hosted-engine on it, using an NFS
> server provided by my F22 host; the engine itself is CentOS 7.
> So all storage is on my host, but if I want I can move it somewhere
> else, or add storage from somewhere else, because I'm not using a
> local storage domain. The only cost is the overhead of the network
> filesystem, be it NFS/GlusterFS/iSCSI.
>
> There are some guides on how to set up oVirt with hosted-engine, but
> to be honest they're hard to find on the current site, and searching
> on old.ovirt.org doesn't work :-(
>
> Let me know how far you get and maybe I can help you further if you get
> stuck.
>
> Joop
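The storage-prep steps above can be sketched roughly as follows. Paths and export options are examples, not the only valid choice, and the demo defaults to a scratch directory so it can be run without touching /etc; on a real host you would use real paths, run as root, export via /etc/exports, and finish with `exportfs -ra`:

```shell
# Scratch-safe sketch of the NFS layout described above; override the
# two variables (and run as root) to do it for real.
EXPORT_ROOT=${EXPORT_ROOT:-/tmp/ovirt-nfs-demo}
EXPORTS_FILE=${EXPORTS_FILE:-$EXPORT_ROOT/exports}   # really /etc/exports
mkdir -p "$EXPORT_ROOT/hosted-engine" "$EXPORT_ROOT/data"
# vdsm:kvm is uid/gid 36:36 on oVirt hosts; chown needs root, so the
# demo ignores a failure here
chown -R 36:36 "$EXPORT_ROOT/hosted-engine" "$EXPORT_ROOT/data" 2>/dev/null || true
printf '%s\n' \
  "$EXPORT_ROOT/hosted-engine *(rw,anonuid=36,anongid=36,all_squash)" \
  "$EXPORT_ROOT/data *(rw,anonuid=36,anongid=36,all_squash)" > "$EXPORTS_FILE"
cat "$EXPORTS_FILE"
```

The hosted-engine share and the data share stay separate, matching the "minimum two folders" advice above.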
Hi, I have set up a Windows 2012 R2 vm in OVIRT. I have tried using
e1000 and virtio drivers for network connection. I am able to establish
a connection, however, cannot access internet. Despite the fact the
connection shows Internet, received and sent bytes seem very low.

Has anyone seen this issue and can help?

Thank you.

Bill Michelon
Global IT & Logistics Manager
Accertify, Inc.
Hello,

I'm trying to add a host to a 3.5 cluster, and it's failing because the
gluster folks apparently created a new key and renamed the file:

warning: /var/cache/yum/x86_64/7/ovirt-3.5-glusterfs-epel/packages/glusterfs-libs-3.7.9-1.el7.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID d5dc52dc: NOKEY
Retrieving key from https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key

See
https://download.gluster.org/pub/gluster/glusterfs/LATEST/NEW_PUBLIC_KEY.README
and
https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub

Is there a workaround?

Robert
--
Senior Software Engineer @ Parsons
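One workaround that suggests itself from the README above is to point the repo's gpgkey at the renamed file (rsa.pub) instead of pub.key, or to import the new key directly with `rpm --import https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub`. A scratch-copy sketch of the one-line repo edit (the real file is /etc/yum.repos.d/ovirt-3.5-dependencies.repo, edited as root):

```shell
# Demonstrate the one-line gpgkey fix on a scratch copy of the repo file.
repo=$(mktemp)
echo 'gpgkey=https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key' > "$repo"
sed -i 's|/LATEST/pub.key|/LATEST/rsa.pub|' "$repo"
cat "$repo"
```

Whether the new key is the one the packages are actually signed with is worth double-checking against the README before trusting it.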
Hello list,
I was wondering if anyone could tell me whether a Dell MD1400
direct-attached storage unit is suitable for shared storage between
oVirt hosts? I really don't want to buy something that isn't going to work :)
Regards,
Brett
I'd like to pre-seed answers to several of the hosted-engine setup
questions, but I can't seem to find any instructions on how to pass
answers into hosted-engine. I tried the same tactics as with
engine-setup, but those netted no results.
Pat
--
Pat Riehecky
Scientific Linux developer
Fermi National Accelerator Laboratory
www.fnal.gov
www.scientificlinux.org
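hosted-engine setup is otopi-based like engine-setup, but it takes its answer file through `--config-append=FILE`; it also writes one out under /etc/ovirt-hosted-engine/ at the end of a run, which is the easiest way to get the exact key names. A sketch that writes a sample file and prints the invocation; the two keys shown are illustrative examples, so verify them against a file generated by a real deploy:

```shell
# Write a small answer file and show the invocation. The key names below
# are examples only -- run a deploy once and copy the real ones from the
# answers file it saves under /etc/ovirt-hosted-engine/.
ans=/tmp/he-answers.conf
cat > "$ans" <<'EOF'
[environment:default]
OVEHOSTED_NETWORK/fqdn=str:engine.example.com
OVEHOSTED_STORAGE/storageDomainConnection=str:nfs.example.com:/nfs/ovirt/he
EOF
echo "hosted-engine --deploy --config-append=$ans"
```

Any question answered in the file is skipped interactively; the rest are still prompted for.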
Hello everyone,
Please excuse me if this question has been asked before; I've searched
the mailing list archives and can't find anyone with the same problem
as mine.
I have a new oVirt installation that I'm trying to configure, and I
cannot get the ISO domain I configured during engine-setup to show in
the UI at all.
Installation layout:
# Engine
CentOS 7 - oVirt Engine 3.5 (VM on different, non-oVirt cluster)
# Nodes
de-fra-p-hv05 - oVirt Node 3.5
de-fra-p-hv06 - oVirt Node 3.5
I installed and configured the engine, but the node that runs the
engine is a VM, and will not be running VMs itself. I intend to use it
only as a management node (in the VMWare world this would be analogous
to the vCenter server). I don't want to run hosted engine because that
requires me to install Fedora/CentOS on the blades. I would much
rather have the engine live on a different cluster and just run oVirt
node on the blades.
I am able to install the nodes successfully, and configure them from
within the engine UI.
I am using an iSCSI SAN for VM storage. As per the Quick Setup guide,
I have configured my iSCSI network on the nodes and created an iSCSI
storage domain for VMs.
Where I am utterly stuck is the creation of the ISO domain:
http://www.ovirt.org/documentation/quickstart/quickstart-guide/#Attach_an_I…
Since the server running oVirt engine is not participating in the
cluster, it doesn't appear in my data center. This might be my
problem, but I'm not sure.
Here's a summary of the data center I have configured thus far:
System
|- Data Centers
|- Default
|- Storage
|- sanQA
|- Networks
|- ovirtmgmt
|- hpmsa
|- vmnet128
|- Clusters
|- QA
|- Hosts
|- de-fra-p-hv05
|- de-fra-p-hv06
I have edited the ACL on the server running oVirt engine to allow rw
access on /var/lib/exports/iso to nodes de-fra-p-hv05 and
de-fra-p-hv06.
$ cat /etc/exports.d/ovirt-engine-iso-domain.exports
/var/lib/exports/iso de-fra-p-ove01.domain.xyz(rw)
/var/lib/exports/iso 172.20.102.215(rw)
/var/lib/exports/iso 172.20.102.216(rw)
When I try to import an ISO domain in the Data Center -> Storage tab,
I get the following error:
"Error while executing action: Cannot add Storage Connection. Storage
connection already exists."
Attached is the engine log from attempting to import the ISO domain.
I don't have any ISO domain visible in the Storage tab of my data
center. Since I cannot see the ISO domain in the UI, I cannot create
new virtual machines on my cluster.
Please help me understand what I've done wrong.
I've also attached a screenshot of the management interface. As you
can see, I've configured an iSCSI domain. I see no button in the UI to
"Attach ISO"
http://www.ovirt.org/documentation/quickstart/quickstart-guide/#Attach_an_I…
Thank you,
Hal
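"Storage connection already exists" usually means the engine database already holds a connection record for that NFS path (engine-setup can register the ISO domain's connection even when the domain was never attached). The existing connections can be inspected, and a stale one deleted by id, through the REST API; a sketch with placeholder hostname and credentials, printed rather than executed:

```shell
# Placeholders: engine.example.com and PASSWORD. Once the duplicate
# connection's id is known, a DELETE to the same collection removes it.
cmd='curl -k -u admin@internal:PASSWORD https://engine.example.com/ovirt-engine/api/storageconnections'
echo "$cmd"
```

After the duplicate connection is gone, importing the ISO domain from the Storage tab should succeed.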
Hello,
I'm trying to test migrating a VM from one host to another and it's
failing, but with no apparent error message. I took a look at the
server logs (which I've copied below) but there doesn't seem to be
anything in here that would help track down the problem. Should I be
looking somewhere besides engine.log on the hosted engine? This is the
latest version of oVirt (3.6.3.4-1.el7.centos) that I just upgraded this
morning. The VM has an ISO attached over NFS storage and the disk image
is on fiber channel.
2016-03-18 14:31:18,913 INFO
[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-25)
[549545f2] Lock Acquired to object
'EngineLock:{exclusiveLocks='[2be4938e-f4a3-4322-bae3-8a9628b81835=<VM,
ACTION_TYPE_FAILED_VM_IS_BEING_MIGRATED$VmName cdTest02>]',
sharedLocks='null'}'
2016-03-18 14:31:19,131 INFO
[org.ovirt.engine.core.bll.MigrateVmToServerCommand]
(org.ovirt.thread.pool-8-thread-45) [549545f2] Running command:
MigrateVmToServerCommand internal: false. Entities affected : ID:
2be4938e-f4a3-4322-bae3-8a9628b81835 Type: VMAction group MIGRATE_VM
with role type USER
2016-03-18 14:31:19,186 INFO
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
(org.ovirt.thread.pool-8-thread-45) [549545f2] START, MigrateVDSCommand(
MigrateVDSCommandParameters:{runAsync='true',
hostId='cbfd733b-8ced-487d-8754-a2217ce1210f',
vmId='2be4938e-f4a3-4322-bae3-8a9628b81835',
srcHost='ovirt-01.virt.roblib.upei.ca',
dstVdsId='1200a78f-6d05-4e5e-9ef7-6798cf741310',
dstHost='ovirt-02.virt.roblib.upei.ca:54321', migrationMethod='ONLINE',
tunnelMigration='false', migrationDowntime='0', autoConverge='false',
migrateCompressed='false', consoleAddress='null'}), log id: 8f303ba
2016-03-18 14:31:19,188 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(org.ovirt.thread.pool-8-thread-45) [549545f2] START,
MigrateBrokerVDSCommand(HostName = oVirt-01,
MigrateVDSCommandParameters:{runAsync='true',
hostId='cbfd733b-8ced-487d-8754-a2217ce1210f',
vmId='2be4938e-f4a3-4322-bae3-8a9628b81835',
srcHost='ovirt-01.virt.roblib.upei.ca',
dstVdsId='1200a78f-6d05-4e5e-9ef7-6798cf741310',
dstHost='ovirt-02.virt.roblib.upei.ca:54321', migrationMethod='ONLINE',
tunnelMigration='false', migrationDowntime='0', autoConverge='false',
migrateCompressed='false', consoleAddress='null'}), log id: 792d7e9e
2016-03-18 14:31:19,455 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(org.ovirt.thread.pool-8-thread-45) [549545f2] FINISH,
MigrateBrokerVDSCommand, log id: 792d7e9e
2016-03-18 14:31:19,470 INFO
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
(org.ovirt.thread.pool-8-thread-45) [549545f2] FINISH,
MigrateVDSCommand, return: MigratingFrom, log id: 8f303ba
2016-03-18 14:31:19,498 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-45) [549545f2] Correlation ID: 549545f2,
Job ID: 3f97f737-d32a-4ffa-b213-122dcd3ca048, Call Stack: null, Custom
Event ID: -1, Message: Migration started (VM: cdTest02, Source:
oVirt-01, Destination: oVirt-02, User: admin@internal).
2016-03-18 14:31:28,055 INFO
[org.ovirt.engine.core.vdsbroker.VmAnalyzer]
(DefaultQuartzScheduler_Worker-39) [] VM
'2be4938e-f4a3-4322-bae3-8a9628b81835'(cdTest02) moved from
'MigratingFrom' --> 'Up'
2016-03-18 14:31:28,055 INFO
[org.ovirt.engine.core.vdsbroker.VmAnalyzer]
(DefaultQuartzScheduler_Worker-39) [] Adding VM
'2be4938e-f4a3-4322-bae3-8a9628b81835' to re-run list
2016-03-18 14:31:28,114 ERROR
[org.ovirt.engine.core.vdsbroker.VmsMonitoring]
(DefaultQuartzScheduler_Worker-39) [] Rerun VM
'2be4938e-f4a3-4322-bae3-8a9628b81835'. Called from VDS 'oVirt-01'
2016-03-18 14:31:28,202 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(org.ovirt.thread.pool-8-thread-46) [] START,
MigrateStatusVDSCommand(HostName = oVirt-01,
MigrateStatusVDSCommandParameters:{runAsync='true',
hostId='cbfd733b-8ced-487d-8754-a2217ce1210f',
vmId='2be4938e-f4a3-4322-bae3-8a9628b81835'}), log id: 618baaa6
2016-03-18 14:31:29,209 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(org.ovirt.thread.pool-8-thread-46) [] FINISH, MigrateStatusVDSCommand,
log id: 618baaa6
2016-03-18 14:31:29,250 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-46) [] Correlation ID: 549545f2, Job ID:
3f97f737-d32a-4ffa-b213-122dcd3ca048, Call Stack: null, Custom Event ID:
-1, Message: Migration failed (VM: cdTest02, Source: oVirt-01,
Destination: oVirt-02).
2016-03-18 14:31:29,262 INFO
[org.ovirt.engine.core.bll.MigrateVmToServerCommand]
(org.ovirt.thread.pool-8-thread-46) [] Lock freed to object
'EngineLock:{exclusiveLocks='[2be4938e-f4a3-4322-bae3-8a9628b81835=<VM,
ACTION_TYPE_FAILED_VM_IS_BEING_MIGRATED$VmName cdTest02>]',
sharedLocks='null'}'
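The engine log above only records the rerun, not the cause; the actual reason usually lands in vdsm.log on the source and destination hosts (by default /var/log/vdsm/vdsm.log), and sometimes in libvirt's logs. A small grep sketch; it falls back to a built-in sample line so the pattern can be tried anywhere:

```shell
# Pull migration errors out of vdsm.log; default path is the usual one
# on an oVirt host, with a sample-line fallback for trying it elsewhere.
LOG=${VDSM_LOG:-/var/log/vdsm/vdsm.log}
if [ ! -r "$LOG" ]; then
  LOG=$(mktemp)
  echo "Thread-1::ERROR::vm::migration destination error: Fatal error during migration" > "$LOG"
fi
grep -iE 'migration.*(error|fail|abort)' "$LOG" | tail -n 20
```

Checking both hosts matters: a destination-side failure (e.g. a storage domain or ISO path not reachable there) shows up only in the destination's vdsm.log.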
Hi,

I'm trying to track down why my Foreman CentOS 7 VM continuously
crashes with out-of-memory errors on my ovirt-3.6 hosts, and what seems
to happen is the following:

1. The Foreman VM only uses 1GB at the beginning, and MOM over time
reduces the memory size from 8GB RAM to the defined minimum of 2GB.

mom.log:
2016-03-18 12:25:42,669 - mom.Controllers.Balloon - INFO - Ballooning guest:qvie-foreman from 2210584 to 2100054

2. When the VM needs RAM very quickly, for example if you do a
'cp /dev/zero /run/zero', the ovirt guest agent crashes with an
out-of-memory error.

ovirt-guest-agent.log:
Traceback (most recent call last):
  File "/usr/share/ovirt-guest-agent/OVirtAgentLogic.py", line 239, in doWork
    self.sendUserInfo()
  File "/usr/share/ovirt-guest-agent/OVirtAgentLogic.py", line 355, in sendUserInfo
    cur_user = self.dr.getActiveUser()
  File "/usr/share/ovirt-guest-agent/GuestAgentLinux2.py", line 325, in getActiveUser
    users = os.popen('/usr/bin/users').read().split()
OSError: [Errno 12] Cannot allocate memory

3. Resulting in a MOM warning that it can't get any memory stats from
the VM, leaving the memory size at its minimum:

mom.log:
mom.Collectors.GuestMemory - WARNING - getVmMemoryStats() error: The ovirt-guest-agent is not active

Any chance to prevent that from happening? Or is this a bug in MOM /
ovirt-guest-agent that should be fixed?

Greetings
Andreas
Hello,
I want to install the self-hosted engine with NFS storage on the host,
with the engine using this storage. I tried it, but I don't know where
to add the NFS storage for data. I first set up the NFS server, which
is working, then added the NFS export dir during host setup, then
installed the engine and didn't add storage there. If I now try to add
this NFS export as a new data storage on the System tab under Storage,
I get the message that the storage is in use by another domain. What is
the right way to use local NFS storage for the engine?
thx
Hi,
This is oVirt 3.6.3.4, and I have a user with the following permissions.
Role | Object | Inherited permission
UserTemplateBasedVm | Small (VM Template) | Everyone
UserTemplateBasedVm | Medium (VM Template) | Everyone
UserTemplateBasedVm | Large (VM Template) | Everyone
UserTemplateBasedVm | XLarge (VM Template) | Everyone
UserTemplateBasedVm | ubuntu-14 (VM Template) | Everyone
UserTemplateBasedVm | centos-7 (VM Template) | Everyone
CpuProfileOperator | Default (CpuProfile) | Everyone
VnicProfileUser | LAN1 (Vnic Profile) | Everyone
VnicProfileUser | LAN2 (Vnic Profile) | Everyone
UserRole | instance (VM Pool) |
UserRole | instance-6 (VM) |
This user can see the Extended tab when logged in to the User panel.
However, AFAIK only PowerUserRole grants access to that tab. Which
permission(s) allow the user to see the tab?
Thanks
James
Hi:
    In 3.5, I can add a nic for the hosted engine vm by following these steps:

- Created the network in the UI
- hosted-engine --set-maintenance --mode=global
- edited /etc/ovirt-hosted-engine/vm.conf; duplicated the existing
  network line, changing the macAddr, network, deviceId, and slot
  (changed on all hosted-engine nodes)
- hosted-engine --vm-shutdown
- hosted-engine --vm-start
- hosted-engine --set-maintenance --mode=none

But in 3.6 I cannot find the conf file. How can I add a nic for the
hosted engine vm?
LW7Hz3ctyiLJIzLFkIpPyoCSTgdBz6e9MP4D6/j/AJ/A/in0/D/P5UfT8P8AP5VgUKfwH1/H/P4H
8Q/gPr+P+fwP4p9Pw/z+VH0/D/P5UrAKfwH1/H/P4H8Q/gPr+P8An8D+KfT8P8/lR9Pw/wA/lRYB
T+A+v4/5/A/iH8B9fx/z+B/FPp+H+fyqS3tWuEkZSAsS72J7DIH9RTsAw/gPr+P+fwP47vgfxzN4
QvsNulsZT++izyP9pfRh+uKwTwf8/wCf8/kf5/z/AJ/+s4tp3QHu3wQ/5It4Q/7All/6ISij4If8
kW8If9gSy/8ARCUV7JzmN8N/+QHf/wDYb1X/ANOFxW/muf8AhucaJf8A/Yb1X/04XFb+a55SV9yh
c0ZozRmlzeYwozRmjNHMu4BmijNGaOZdwDNGaM0Zo5vMAozRmjNHMu4BmijNGaOZdwDNGaM0Zo5v
MAr8W/iF/wAHMfif9mD4zftHeJfFnw88da74ItPENv4T+HPhXUpdF006NqujJZW/ie1vLyzkuZci
fUrSaJ8XUcgO1HjG/b+0ma/ArRfC3wr0n/go34Mv38Y+J9D/AGVrLxd4v1L4c/G6+0rRDa2/ji5+
wXmoC41rWEuYdVtYzpk8Vnql1A11JcBoop7lbIXCVFpiPv8A/wCCQX/BTDWP2sfHPxy8PeM7vx7r
N/pnxn8TaH4Slm+HWp6fY6RoVrHDJa2d3dpYx29rcRr5oMV9Il3udFcbnjU8v8dv+CoHxM8Sf8FB
fAuhfD8aT8NPhr4b8Oav4i1a3+MsV/4At/ivFHasZ/7Pnu9HnntV0falzPv8iWRJXbymto/tB8r/
AOCBPxb8NyeOf2ptC0P9pXQNY8aeLvjP40bwnoOrXmh3sPiZjHazQ+JRZ2kdteXm9YZGcWlxDaPF
HL5ccRHmL5v47+H91+1D/wAFSfj14I/ad+L/AMcvHmn/AALm8OXPhTSvA3wsPiDwokup6Xd3M7XO
hjSdXtFaEXjQ2016HuTGD+/maLcj0uI/SP8A4J2/tkfEL9vD4PXnjPxD8L7H4Z+GdWutvhPUIfEN
xqUnijTQzD+1UguNPs5ra1nARrYzIJZUYyNFGnltL9JWVnFp1pHBCgjiiXaqjsK+Cf8Agkn+1n46
+OH7cH7VvgPxD498eeOfB/wy/wCEQbwzJ408IweGtctTf6fdT3guLZNOsJRmVFCebAv7uNGXIfe/
33moe40fOn/BS3/gofYf8E2vgNc+OdV8IX/ie0+y3pt2TxDoui2pvYbdp4LN3v7uGaSScJLsSygu
5iIJSIWIRH8j/Z1/4Lr+Df2nP2srf4V+GvA1/eXeoWunz2l3Z/ELwTqUjmeS9FyxgtdalMsdtBaC
d1tGubgIzl7eJfIa4P8Agu98Q9L+H/7PvhaS18d6h4L8f3mv2aaH9n+KaeCo5rCO/sptWkmhk1vS
I9QjFnG0AjFx5qPeR+W9v5j3CfBH/BHP436Xqv7X/g3TvFPxV1BfCt9r/i1LbTrf42pYR3GvzeJV
uNMkm08eNNQm1GOcJeRCOO0ZJ3vo2kfVEl+31Sta4H2x8XP+DhrwD8IfiVo3hfU/Ad/Z6pNr9xoW
t2t98Tfh/BdaE8FpeSyCWNNfkEUiz2ywst01sgMhXzTN5VvN738OP+CpXwg8SfB7RfG/jHxb4Q+E
mi+Kbq7t/D8ni3xz4aMfiFLUxpcT2dxYaldW00ccsnlOFm3o6MHRQVLfnz+0B+z34/0/9p343XGh
N8Xdf+HfwV8U29xpes6t491nUIfCGfCemXN3IL28+IGivDtj1G8Z5GgfbFdyr9pMbGGL6a+AF9q8
v/BIT4TeEfDPi3x9qfxj/aO8A6VPHq1/4t1PWNa0qfUNNsYtV8RRTXU8j21vp0dz9sEaS28DXH2e
3iaK4vYQ7bQjvv2VP+CyPwj/AGl/+Fbxv44+EWjah8SPC2mapDpkPxI0q71TS9buvKEmgz2TNFcG
4DXESwtHG5leK5SSO2ZIBc/XOa/Nr9nn9snwF8Bf2pPH1np3i7wj4P8Aht4t+JHhy18AeFNLjt7P
WPiDb32h+HPDVpc6fZ3SQg+HLWeGWVLvT/MF0LRmhkSC0eO//SXNS2NBmijNYmp/EPSdI1JrSe4Z
JY/vkRsVQ+hIHuKlyS3Y7DfHGoRWNnGbrTLjUbTlpPLbCxgd2Hf8RgV57e+OpTZPBZ2dlp2/KvJb
x7ZHX+7nqBxziu71zx9os2gXRFzFdB42TyVJDSZGMYIyOvWvKFBOBgk/TP8An/P481aVnoy4iE4J
PU9fr/nFB+X8P8/0FaGjeGL/AF8t9ktZJVXq3AUe2Txn2/ybfhbwiviC+ubSa5FleQj5I5FzvYH5
ge/H9frWKi2UYh+X8P8AP9BQfl/D/P8AQVPqenTaPqEttOu2WBtrD19/of8A2aoenvj9f8/+zUgE
Py/h/n+gozgcdv8AP9BV7w3pbaxr1pbAFhJIN3+6OSfyB/OvTfEnw/tdaaB4Ft7SSJhuxArCRRxt
I+nSrjBtXE2eTNE0aKxVgrfdJH3sen5Cmn5R9P8AP9B+deo+Mvhy/iOOFobwo9uu1I3QCID2CgY7
evSvOtc0K68P3xtruPy5Mbhg5DAnGQe4/wAPeicHFgnc6LQfhNPqekG5uLgWbEEojJngdCxyMDjt
XOtpP2PXEtLiSIASqjyI4ZMZAyCOOldn4I8YReJ9LfRtTk8otGI4nVthkXAG3PqP1rF8XfDafw2W
khlSa1Vd25yEbvkAd8e3rVygrJxEnrqUvHHhkeFfEL26MGicCWLrlVJGAfyrIikaMgqxUkY4OMg4
BFK8rzOGdmZuBljk9v8AP4GmjjHb/I/z+BrJvW6KBecfh/Shecfh/SgcY7f5H+fwNA4x2/yP8/ga
QHu3wQ/5It4Q/wCwJZf+iEoo+CH/ACRbwh/2BLL/ANEJRXtHOYnw5ONEv/8AsN6rx6/8TC5re3Y9
Dz+ft9awPhy4XRb8f9RvVf8A04XNb3mrjr71ySer0NELux6Hn8/b60bseh5/P2+tHmAd+lHmD1qb
+QBux6Hn8/b60bseh5/P2+tJ5q46+9L5gHfpRfyAN2PQ8/n7fWjdj0PP5+31o8wetJ5q46+9F/IB
d2PQ8/n7fWjdj0PP5+31o8wDv0o8wetF/IA3Y9Dz+ft9aN2PQ8/n7fWk81cdfel8wDv0ov5AG7Ho
efz9vrRux6Hn8/b60eYPWk81cdfei/kAu7Hoefz9vrRux6Hn8/b60eYB36UeYPWi/kAbseh5/P2+
tG7Hoefz9vrSeauOvvX5sa3/AMFXPjZrHhzS/iXott8LNM+Gus/HG3+E9l4ev9Dv7zXXsxqIsJ7+
W+W+hihmaSOZ0g+yOEUx7nc5y6dp1Y0VvKyXzlGC/wDJpxXzvstHJWpuo9lf8Iyk/wDyWLfytu0j
9KRye3HemPF50ik4Kod31Pb8v51+Wn7NH/BSH4v+Mfjt4b+F/hFPBGlt45+JPxI0SbVfEn9ueI20
6LRJIWtpI0uNT3ndvYPAksUKgjylhA2t1f7Of/BWL4yfte2/wF8HeGtP+GXhHx/8QrTxRqPinWtU
0u+1TRrKDRL17ArY2SXdvLJJcTmJ8SXX7qPfnzDg01G8YSt8UVJbbWbf3KL/AEV3YhSTUpPaMnF+
qk4r5tp2/wCCr/pIMH0oGD6V+JfiT/gp94l1T4p/s6ftG+PNB0PVNf8ABfgj4sz3WmaBHLptnqH9
mzxW8ap50lw8PmCBCxLSbSzEA4C19P8AxR/4Kd/Gr9l2wFp45i+F3inVfF/wc1r4leGp9B0K/wBN
t9GvtMtY7iSxvY5b64a5gcTx7Z0e2OY2GzLgoTSjTVR/yuXyTqfnGlKXTtu1cpNzrzobNTUPVv2d
/S0qkY767rS9v0VGD6UDB9K/NG7/AOCyHxS+A1pNqvxI0LwD4k0vVPgE3xl0y18NWN7ptxY3UbwI
+nTST3FwJ4mNwpE6xxFQGzG2N1epfsu/tkfHO8/bd8A/Cz4p33wl1ez8bfCqT4gTS+E/Dt/pz6Xd
rc2sP2VZp9QuVuIR5z/vPLiLYU7V5Faexl7T2T39774+1uvl7Gp5ab6q6VROmqi2aT+T9m0//KkP
PXbRn24MH0oGD6V8H/t9ftU/En4jfFj40fBL4fyeB/DWi+BfhLceKPE2r+ItIutWu9UN6lxHDZ2U
EN1arCFjgnZ7h3mAZ418rhifKP2CP2wvirrX7Onwt+Efwpf4e+HLj4cfADQPGup6r4u0i71ePV5b
mAx29lDFb3lp9mVfs0hed3lz5igRfKS3PTnCVOdV6KNvPT99zPy5fYy01b6La+8qbUox6v8AB/u+
VfP2kfJX1e9v1HGD6UDB9K/Mfw5/wWL+MH7Sr2918NdH+G/hLT5/gGPi+Y/E+lX+rTR3yXU8Mlhm
G6tQYG8nCzYBXh9sgbYNjQ/+Cr/xf/aStEl+G9j8NvBcPhr4H6f8WfEcvibSL3XHvbu+heWHT7OK
C8s9kKLBPvuHeQ5eMCMYYm6y9kpSqact77dPa3fn/Bqeb5dtVeKa9py8mvNy2/7eVNpf+VYeWvkz
9HLm4W1tpJWBKxKWIUZPA9K8R1bUn1nVJ7uThp3L4/u+g/l+XvXwX8Rf23/iZ+3RpXwQ8Pxa6vge
98b+E9B1u+GkXuraZYHUNT0nWdSkml/s++tdQe3ji0iSCCBb6NHlumaXzvIRa8IvvjJ4w+PvwK/Z
jsb3XLTUPEml/GHwS9lqOqLLPHEup+HItUWOZy5luBavevFukl86VIozLK0jvMd6mX1ud0pqzjOM
Gv8AFPkT7b3a8t2noJSXs/aR1Xs5VPlGLl/kn1TezSufrPZWUuoXccECNJNKdqKOpP8An+XvXp3w
++H/APwipa5uJA93Kmzav3YhwcZ7ngc1+b3gr/gpr8QfC1/c+Bbyz+H138WG+Nd18J7LxCmm3Y0C
2tYbJb9tUbTvtRneQw7oxALxR5jBvOCgqZPGP/Ba34v+CvHNl8OIPDHw68Q/ELS/i/bfDrU78297
pei6zZ3ely39tcQr59xJZTDEccis92F2uwUl1ReOhS5rOOt1FrzUlTaf/lSG+uvk7FeapX9o7W5r
+XLz38v+Xc+vTWyav+mHivVho2n25Ebss11HCfLbaVy3/wBbBHvWZ4s+HS6nf/2hYTm01FWDg/wy
MOn0P+cV+aWr/wDBbD432nxI1nRLT4d6b4o1P4Y63onhnxXoXhvwD4i1ka/eSRQPqt5ZarC32XTo
IPPYww3Uc00iQlmKF1Ucj+zR+1d8S/2P/hj8YvHugSeCbvwJdftQar4f1fR73TLmXV7lNR1O3tvt
FvdrcJDCYmlQ+U9vLvCsfMTIA2pQVWSSd1JaNdbypxj8pe0TT6bNLXlmrP2cXJ9Hr/4DUk/nH2bT
XW903pf9VLG3bxvZT2ms6eVvrNmj89MABuowQcjgj2Neew6Lczap9h8ord7inlk7TuHb/Pt+P50/
tV/tK/Er9sf4a+A/HuozeBtC+HVj+01oXhfTdEi0i6m8QXP9naqLQ3dxfG6WGLfNHOwtxZkqjR5l
Jya9guf+ClnirWf+Cg3w98Eaf4h+Hnjnwt8QfFGu+EzqGg+Bdf0+Pw9cWkMs1sF1me4bTtVkRYil
xHbCJkfIHAOOeKVWnGaesnZLq1y0pJ/P2sV2Ttdpuy2qL2cnFvRK7fazmmvlyPzf2U7Xf22PDl18
MdUt76a4Voj8mYoi4cnqhyRjI757VF4h+LF9q0ZitR9ij3Eh0P7xh2BPbt0r88/2Kf2ifjDoP/BN
PxL49+IHx1+HjWFv401nR7C78UeFNa1fUI7pfEDwBFMWqvNeq0Syx21hDCrgtColYRnf7b/wS9/a
F8YftqeHvina+J7fSrbW/hj4xm8NC6t9A1Lw+mrQ+RFcRTPp+obrqzk2TANHKW5GQcGopSVSClSe
8VO3ZNRf/t8fW/k7TL3W1L+Zx+a5v/kX6W81f6Q0z4h6vpZOy8eZW/hm/eD9efTv3qjrviK78SXY
mupfMdV2qAAoUew/Kvizw5+1z8afiR+x/wDFv9oLQ5PhdY+CvANx4mtrDwveaNfXGqvHpSXEMdxN
fpdpEzSXcAZrdLdQsMnE5cZab9kr9tz4mfEH9pjwP4K8d2/gW4074i/CyD4iWEug6fdWc2kzGWCO
W0laa4mW4TE4ZZFWIjBBQ/eopRlVUeV3TSktejjKa/8AJYSf4b6Dn7ibfRtf+AyjF/dKaX47an2A
Rkcfh/n8q9as4LLxp4IhtmljcvAuTvy8TgdfUEH9K8lB6f1/z/nmhTtOQSD3/T/P4GlCXLugaJr6
yfTr6W3cqzwuUJQ5Bxjof89R6VCO3+fT/P5UKMY7f5H+fwNA4x2/yP8AP4GoGA7f59P8/lQO3+fT
/P5UDjHb/I/z+BoHGO3+R/n8DQB7t8EP+SLeEP8AsCWX/ohKKPgh/wAkW8If9gSy/wDRCUV7RzmB
8PB/xKL7/sOap/6cbn/P+HWtz+D/AID/AF/z/wDXrn/h65Ok6iM4xreq9v8AqIXH+f8AORukn16f
r/n/AD7cklqzREhPJ+rfy/z/AIjpQOoHuv8AL/P+A60zcc5yf8/5/wD1dkyQOv6dP8/59psA/wDg
/wCA/wBf8/8A16Unk/Vv5f5/xHSoyT69P1/z/n2Xcc5yf8/5/wD1dmA8dQPdf5f5/wAB1pP4P+A/
1/z/APXpmSB1/Tp/n/PsEn16fr/n/PsgJCeT9W/l/n/EdKB1A91/l/n/AAHWmbjnOT/n/P8A+rsm
SB1/Tp/n/PsWAf8Awf8AAf6/5/8Ar0pPJ+rfy/z/AIjpUZJ9en6/5/z7LuOc5P8An/P/AOrswHjq
B7r/AC/z/gOtJ/B/wH+v+f8A69MyQOv6dP8AP+fYJPr0/X/P+fZASE8n6t/L/P8AiOlA6ge6/wAv
8/4DrTNxznJ/z/n/APV2TJA6/p0/z/n2LAP/AIP+A/1/z/8AXr5D/aR/4I4/Dv4w+K9J1/wtear4
D1i2+I2nfEbU0i1HUb7SNSvLWdJpmTTGvEsre5n2BWuo4hINz53b3Vvrkk+vT9f8/wCfZdxznJ/z
/n/9XaoNwqRqx0lFpp+jUl8rpNrZ21TBu8XB7O6fzTX5Nq+6voeK/D3/AIJ0fBv4WfEvSPF+g+Dv
sHiLQtZ1nXrG7/tW+l8i91jadSl2PMUbzti/KylY8fIqcmvKvjn/AMEjfDWueCfAmjfCl/DPw2/4
QO81XUNP1G6i8Q3mq2cmoyGS5FtfafrmnXUUcryStJE8ssT5QBAIxX19kgdf06f5/wA+wSfXp+v+
f8+062SvtovJdl5eQRsr2W7u/Nu+vrq9d9WfMH7MP/BIv4Q/s7fCn4eaBqGkL421X4eaVrWlQanq
pZYruPWHMmqK9mrfZmhnYkCORH2IAobqW6Lwl/wS4+CHg/RNe06Lwrq+pW/iPw03gy5bWvFWr6xN
a6LIrB9PtJbu6lksrdg3Mdq0QOxDj5EK+/bjnOT/AJ/z/wDq7Jkgdf06f5/z7XUnKbbl1un2s73V
tre9LTbV92FP3NYvW979bq2t976LXfRdkeJeMP8Agnx8MNd0tJbPwnop1jTfh/cfDfSZNY+16pp9
vo8oXFpcWhuEW5i3RxltzCVgpUSjcTXh37L/APwRR0H4S/EHxDr3j3xT/wAJ1Dq3gWb4b2ekWZ1q
1sNP0Wcg3EAk1HV9SvBuCoiLFcxRQqGKRq7sx+3iT69P1/z/AJ9l3HOcn/P+f/1dhTldtvdNPrfm
5r/fzyv35n3YR91RUNOW1raWty21XbkjbtyrsfP3ir/gll8DvGOmeG7K58J6rZ2/hXwtF4Hsxpfi
nV9La40NFULpt21tdRte2wCj93dGUHLZHzsTJ4n/AOCX/wAE/FXg7wxojeF9Y0qy8IeGR4P099D8
V6xo12+jfKBp9zc2d1FNd2w25Edw8q5ZyAS7E++ZIHX9On+f8+wSfXp+v+f8+xKc5X5ne7u/N63f
q+aWv9592OLcfh06fLT/AORj9y7I8ng/YQ+Elh4nutXtPBdhp93deC/+FdstlcT2ttHoCszCxjgj
kWKNAXbDoquAcBwABXz3+1L/AMEY9K+LdnoOl/DvXPDXwu0XQfBKeAUmj07XbrWH0pcgWslzb63a
Q3duqbdsGoW12m7eWDrIyH7d3HOcn/P+f/1duT8T/FGHSpngs1+1TpkFycRIfbHJxgeg4qJy6yf/
AAfi+/45/wDgUu7HBuPw+X4ctvS3JFekUtkj5d/aF/4JlfDHxB8HvAHw8tTqVjH8PfD1r4btr+e0
0zWDeadAq+XBd2epWl1YXBDoJVke2MsT7vKeMSSK8fh/9gr4VeH/AAF4L8PR+EdGitPAmtW/iXTm
0+yg0UPq0MZjF/LDp6W9u0rAsWURCM54QAKB7PqF/Lqd9LczNvlmbcxxjJ/yP84qHOPfH6/5/qaw
lXqOTnfVy5vne9/W+vrruJU4qKjbRLl+VrWffTTXppseQ6z+wd8Kdc0bxNZT+F3RfFviZfGOoXEG
q3tveR6wqoq31vcxzLNaShY1UG2ePC7x0dgYdD/4J/8Awk8O2fh6KDwtLJL4Y8TnxnZ3l3rF9dX0
usFHj+23NzLM013Jsdk/0h5BtCjGFQD2TOPfH6/5/qaM498fr/n+prKE5Rsou1rfha33WVu1lbZF
VEppxnqnffzvf77u/e77nknxA/YX+GXxN+JGpeK9U0nX7TV9djgh1k6J4q1XQ4ddSEMIxew2VzFD
dEISmZkc7Pk5UBa6XwV/wT4/Z00b4nQ+K4vC2swanF4pk8bJYX/inWbvRIdck3FtRXTZbp7EXALs
UkEAMZCldu1cdtnHvj9f8/1NGce+P1/z/U1dKtOm04O1tvLbb7l9y7BJc179d/PRrX5Nr0bXVmPr
P/BLT4DeL/Hb+KpvCmpTXNx4oh8b/ZbfxZq8WjDW4nSRdSTT47pbNLktHlpFhDPlw24Owat4X/4J
zfs9/Cv4yaT4m0/w/f2Gv+G9dvPE2lRSeKdYl03R9QvBKLqe2smuTZ25l8+UskcSqSwJXKrjstA8
U3vhqbdbSkKTlo25R/qP6/4VW1bU5dY1Ga6mCCWZtzBRgA/5A/WrVayio/Z28vh27fDH/wABj2QS
Tlfm1vv577/+BS/8CfdnM6r/AMEpfgF4utvEMR8Na0NN8U6l/bd3p9h4z1u102DUPtcd59utLSG8
WCyufPiV/OtkikBLjcRI4a9pH/BLb4GeHbGKMeENQnaLxrZfEQ3GoeJ9VvbqfxBZxLFb38s89y8k
jhVXKyMySHJdXJLVpRTPbtmN3jI7qxU/54H61Ne6rdahGqT3M86R/dWSQsF6+v8AnrRTrunZw0ta
1tLWs1btZxjbtZW2QO7vd73v800/vTafdN9zivi//wAE3PgpHF4ru4vDer2afEC4vJda0mx8VavZ
6Hey3ls1veXP9mRXS2aTyxM6tMkKyFnZ928ljk+Ev2UvAPgXx/4b8U6XoP2XXfCHhkeD9Juvt1y/
2TSg0b/Z9jSFH+aJDvcNJ8v3uefV9L8Nz+IbcNa7p5Itwkj/AIlGMqR7HkexrNljaKR0ZWVlJBBG
CDzwf896l1JaNaWVl6JNW+6Ul6NrZsXKnv5/i0397Sb80n0Q0cdfx/z+BqbTtOm1W8jt4EMk0nRR
3/zg1CR17/5P+fxqxpmoy6PqMVzCcSQsHXPQ9f5/1rJeZRHc2kljctDMjRyxnaysMEHio15x+H9K
9GvNH074l2MV8kjW9yFKMVwWUjnaw7+x9Kx7r4XfY0WA3scl9cMfsyAbUcKMnOeh6Y+netHTfQVz
kV5x+H9KF5x+H9K6iD4S6m0iiR7SIcZPmFtvvjHtV+/+EQh03/Rrl5b1fm2soVHHoPToOtJU5dgu
jv8A4If8kW8If9gSy/8ARCUUfBD/AJIt4Q/7All/6ISivXMDmfALY0zUeTxreq568f8AEwuP/rf5
xW2Gxjk8ex46c9fp/nFYPgRsabqPTjW9VPT/AKiFxW1uxjHb1rnktTREgbGOTx7Hjpz1+n+cUBsY
5PHseOnPX6f5xUe7GMdvWjdjGO3rSsBIGxjk8ex46c9fp/nFAbGOTx7Hjpz1+n+cVHuxjHb1o3Yx
jt60WAkDYxyePY8dOev0/wA4oDYxyePY8dOev0/zio92MY7etG7GMdvWiwEgbGOTx7Hjpz1+n+cU
BsY5PHseOnPX6f5xUe7GMdvWjdjGO3rRYCQNjHJ49jx056/T/OKA2Mcnj2PHTnr9P84qPdjGO3rR
uxjHb1osBIGxjk8ex46c9fp/nFAbGOTx7Hjpz1+n+cVHuxjHb1o3Yxjt60WAkDYxyePY8dOev0/z
igNjHJ49jx056/T/ADio92MY7etG7GMdvWiwEgbGOTx7Hjpz1+n+cUqckAE5/H29/p/nFRbsYx29
aR5PLUsMgIN3vSsAtpdpdxb0Y7QxGcHnBx6+38vwlBC/xHjB6H2/w/l+HLfDTxB/a2ny27LtltWL
57OrEn8/6YrpcgAUoPmV0DJAQv8AEeMHofb/AA/l+ACF/iPGD0Pt/h/L8I8gAUZAAp2AkBC/xHjB
6H2/w/l+ACF/iPGD0Pt/h/L8I8gAUZAAosBICF/iPGD0Pt/h/L8AEL/EeMHofb/D+X4R5AApYlDu
o7ZH4dKLAc38SfFP9kacLOBz9oul+YqeY046e5IxXneMDgdP8/5+v529fvZL/W7qaVtzmVhn0AOA
Ppx/ntTHH4f5/p/ntxTlzMtIU8ds4/z/AJ+v5h47Zx/n/P1/NBx+H+f6f57A4/D/AD/T/PaBinjt
nH+f8/X8w8ds4/z/AJ+v5oOPw/z/AE/z2Bx+H+f6f57ACnjtnH+f8/X8w8ds4/z/AJ+v5oOPw/z/
AE/z2Bx+H+f6f57ACnjtnH+f8/X8w8ds4/z/AJ+v5oOPw/z/AE/z2Bx+H+f6f57ACnjtnH+f8/X8
w8ds4/z/AJ+v5oOPw/z/AE/z2Bx+H+f6f57AGh4a12Tw5rEdygLIPlkUfxoeo/qPw/HrfH/hSPXr
Uarp+JJGTzHA485MZ3D/AGh/T1FcEOPw/wA/0/z2674YeJ5YL5dNlcG3kDGPdzsYc4+h549R+FaU
2n7rE+5yP0/D/P5UfT8P8/lWz408NSaBq8hEbC1lbdE/UYP8OfUcj8qxh2/z6f5/Koas7MZb0fWr
jQ7xZrZ8FSCVPKv1HI/H9al13xNd+Ir2OedxviGI9g2hec8e/Tn39qzx2/z6f5/Kgdv8+n+fyou7
WA9D+GfiS91mCeO6cSx2wXbKeHJJ6H14H16V1BbapGc8eh9Mf5+v4Vy/wtsJbTw9LI4AS5kDoO+A
MZ+hNdKCOOg+vauylflVyHuXvgh/yRbwh/2BLL/0QlFHwQ/5It4Q/wCwJZf+iEortMjlfAv/ACDd
R/7Deq/+nC4rarF8C/8AIO1H0/tvVf8A0vuK2vT9a5pPXY0S0Cij0/Wj0/WlzeQ7BRR6frR6frRz
eQWCij0/Wj0/Wjm8gsFFHp+tHp+tHN5BYKKPT9aPT9aObyCwUUen60en60c3kFgoo9P1o9P1o5vI
LBX5AfH3xdqvx0/4KGav4i8S/DzxZZ237UPhy9/Z4+HuhXcj+FNfHguCK4vfFHiu7t722mEckEzx
tZ28xt5p4GybbdtZ/wBf/T9a+P8A4of8EnNO8WfG34nfFCb4y/Fo+IPiToz+H9Ut7ux8L6rY2+ig
yMNGtor7RpzBYHed8KN+/YCSYyyfPTUtGJo+SrT9oLxx+x7/AMEdPEvhr4nW2u/D/Xv2e4tG8L+I
dQ0fQNXmtPEulJdw2mm3WiXtprGkySGRXsmndb6N4zHdRS2yCVEPzT/wUW+MPxY/Zx+FHxS8M3fj
L43fDf4oeFfBWneO7JZfEXiOF206bxHYaQXjuU8eazbB2e4lXy5bUnajkGM7GP3f4I/4JPWM/wDw
T48Tfsz/ABD+KPjrxv8AD3U9SRNFv447az1vw/osF1bXVnpondJo5/Jktcea0SgxyeWkcSJGqbX7
XH/BBXwp+1JrGqjTPGN/ouh+MtA0/wAM+ItR1u98Q+LvFdzpdvrNvqz2tlqOpa1LbWkby2sG0NYz
eWxlbLCQoIo1I7A0zkv+Dkzx/Hpv7P8A4e8L+P8Axf4S8MfAXxBqdi/jPS7HXorXx/4tt0voUddH
guLSeFo7CaSxvZgoaSdEaPfaqpa4pf8ABPL9tTXf2y/iR40K/tEWHibwh4v8GyaZ4C+Gni3xt4V0
7xzr+oy2ou5tSmn8MQJc6PHFEHgWJBcXcW24uHWFoo4q+jv2+/8AglD4U/bR8T+B/E+hX2kfC3x9
4W+IOieOdQ8Y6R4XsbrXdaXTIpoobZ55RgsokjMbzrcRp5CAwyL8tZPj3/gk5e/Fj9rf4RfErxz8
aPF3xQ0X4YWviOxuPDHjLw3oU9rrFvrGmmwmh3WFnZKsZQ5kWeK5EgUKPKBcvqpKwW1Pw18M/tve
F/FHhDx34o1LxZpM+oeGvjkfidoxsNd1PVvH3iUWDWGmeH7ePVtW0uaNNL0+G+up0l1OOWe4MRhF
rbFpZD+1n7En7Sfxn1H/AIJ8+OfG9hrFh+1X8U9O8ZX9jZ+FYbm28JXWgpFew2raJeXFzp2nlLu1
hEl1I9zp9q8hl2rEEMLt5T8Tf+Darwt42+F/xZ0ew8b6bp2u/Ez4h6j4ssdVuPDl1dW3g/Sby8sb
2TS9P09dRjtIbgy6baI9+sYkkt1aExhdmz6/8S/8Eyvgh4u+CHxO+HGoeCTc+DPjH4pm8Z+L9O/t
i/T+2NWmuLe4kufNWcSw5ltYG8uF0jGzAUAsCOaBRZ638K/EGueLPhh4c1XxP4e/4RLxJqel213q
2hfb47/+xbuSJWmtPtEYCTeVIWj8xAFfZuHBFfz/AH/BdL4z69r37U37SHwr1X/hPLP4J+EdU8PS
eJdb0q/8ZeKb7Sf7Ts4tThlNhPrMXh6K3OoIlssU4g2LcRG2ikeFjH/QD8LPhjofwT+GHhzwb4Ys
jpvhvwjpdrouk2fnSTfZbO2iWGGPfIzO22NFG52ZjjJJPNfDv7Vv/BBT4bftU/Fj9pHxv8QL46zL
8bx4abTJbDRLWDW/A39kW8cMv2O/mE/F2Io/N2xR/u1KHfwypTS1YNHxL+wR+0z8T7r4VftYav48
PjLwX42+CXgfS/EEUGn+KfElxdw/abC41aa0ltfFM2pW8NxttIrczfYmKCS4MbSK6SHhND/adufD
Gla9+0jpnx68E6d4z8U+CYL7WtFtfjh4Xh17V4YLc3MFncaWvg4QTapECbdGcGZCTCsojOK+5f2b
f+CX3iv9lS/+POu+B/H/AMPPB/ir4y22jwac3hv4ajTtC8HPYQzwGa3006hIsryLMZMPIEE3zssq
sYyaV/wSKkg+P2p+K734w/ES502X4XD4Z6fcQanfQ+LrYf2hHqJ1SXXDdu8t0bsSsEWCOARyRw+U
YkKSc3PC7f8AXQdmeAft8/tTXvw+/wCCfvwM+M3iPwLH8Q77WfD51Tw14t1LxrNo3inw1q+q6HqF
60J/sawsVksltoxC5gntmm2Rlow8azV8W/sWePtG+EXxG/ZgvdNvNE8S3ut/EfS4dH0mTxTIf+Fc
WutyxtfWem29h4uuZ0gB3Iwv9MVpvNYXbFiIG/Uz9r7/AIJa+Ov29Pgh4X8EfE744+fbeF9HuS13
oHgy306XWtfeKa1g1S6Ek86+QlpNIj2luIBJJPLIJYwYo4cnwL/wR91Tw94k+HTax408L674b+GW
t6ZrWi6LLb+M5bbTHsWH2c2kV14ruLaB44wUjZreREDYMbplGcakVGwNO5+Ov7Uf7Yfxv8bftL+O
5PFmir4G8SfEq70J/FXhyTVtZ8JWN2E0zTrSTTdTja7ijTT54LtPMa6ljZAWKXEKs6S+4fDv9sXx
nov/AAR6/ay8L6Jb6laar4L/ALA0LVrq9+INz4j0bTNF1Weay/s/QbaeKeGO0ihX7OjLcSGWK5ad
byTyLYN+gUH/AAQP8K+N/j/D8SviV431nx74k1nVtX1jxZtN/psOqPcSWr6bBYGO+abToNO+yQiM
CWWRxtUyCNI40y7/AP4N/tIs/gv+0f4E8O+O7Lwx4f8Ajq/hyLS7W08P3FxH4StNFuTLDETcX8kt
48sYRXkaSIb9zqoVhGtOrBpL0FZni+u/Bjxx8CvjxoU/ha9+Ngvvg7ptx4C8M6zbadqqyz6HEfJi
tbhrX4dTwXsAESSIsk93EkmZYpGY+a1L/gsT4lnP/BQzQfhX448a32ufDq5+DtjeahpXiPxBDp2l
a5qMWtTqL24t113w/ZvdN5Qb5JSMoClttRXg9e+Nn/BuRoHxo0L4ytP8TfsXiD4qeNr7xda3/wDw
gmjXP9iR3N5Hcm08+WNtSbaEdN0F9bRt5mTDgyrL9A/taf8ABPjxr+1F8VNU8RJ8bfEHhvSIk0iD
SPCFvZTy+GLyC2mmkvrfW7L7Wq6rFfLO0LohtR5SRI/nbSWjnjdNMdmfFv8AwTh/aj8R6x/wUa8d
acfi7u8I6T8E9Q1zzdZ8Wy+IfDmj30ep26/2hcp/wk+sRjyoiS/+n27eVuGyEP50nyZqPjnxx+03
4X/aE+AfhPxF428c+Ifjh8U9b1LwnZeGNC0XT/Afim5stQj1DUbttUvZ5Lk5tYYZo4Le42xBbbdJ
P9oNfrX+yr/wTj+KH7NPxv8ACOuD4/3158PfCmlT6DH8PbLRb1NFOnmEC2hia+1O9ljeG4AlWdmk
l8si2V0tlSJfOvEP/BDrWfi18MfFvw/8f/FrRNU8B+P/AIh3/wATNfi0LwGlhrP9pXSv+6sr27vL
xLOBXMRysDTsgkiM2yZwWqkb3CzOh8Eftr6tqf8AwTPT9ovwx8Xr1fA8Ntqus3tx8VPBmm32tTpb
zm0isbaPSr3SrSN3uLaZYlkaV5pLuJd6cJX5/wD/AASE+IXjj9qP9qvxR4c8X6d4J+GXxB/4WR4h
+JXh3VNd8Iat9oj8RMtvHrmjW00Ws2Ey+VA9u0unsLgSQ+YbgYjj3frZ8LP2I4P+EH+GVj8Vrrwv
8Q9X+DlzFceE59J0Cbw3pVk8NpHbQXEmlR3c1nJdRbXaKURokHmjyIoCCW5f4Tf8EsfB3hnwb8UN
C8aXX/Cb2PxA+LV/8WtPkhin0a98MX07wvALW6t5/PjngaHi5hkiZllZdoVmDSqkUmh2Z8Jf8FIP
il4/+N3/AAUw8M/CvS9a+M194d+Gt83xShh1Pw4IWe/tbqW1s30ldN8NX94LO1nkkT7RfQXEN0EY
BsGGefkf20v2qPHP7Qv/AAT4/bF8L+M9e+Idxc/Cy28HQSWWvCyjtrp9S1e0uFmWE+HNGv4ZIkg2
gTJscTMwUjy5D+lHxb/4Jl+Cvj7+3tD8Z/HVj4X8a6RbeAF8FQ+E9e8NQalbRTrqL3i6isszMocK
7RBRDnEjHzMMVPifj7/ghNp3ir4f/tPeG9G8X+F/BOkftDXPh57HT/DvghbHT/CNvo9yZo41t0u9
tw8yhfMdTCDK0kgQB/LDjUhp5W/MTTPyyk03wh4c/aJ+L3hDX9J+HOn/ABB1vwR4u8JwaBH8N7rx
NpE/jCbxPqkVppvhSC2dJdNZoxBHaXM6uYWEimNgVFfoB8XdXh/aV/4I6fs+fCXTku5vh1qvw+8P
eIvij43trSaXSfCnhnRYIJtSjgvY1kjfWBc2RgSzjiuJFMVx5sUShZK9Z+L/APwb4/B/4x/G291q
9utSsvBmpeErjQrvQ44xe6nd6nNqN1fnWn1e9a4ujdCa53BuHby1jkkktnltZPdvit/wTKuP2vP2
TPhV8MvGnxK1gR/DqO1i1OHwb4f0rw9oviP7GyGx+16XcQXtubeJYkJtUxavICxgCrDHDcqsZWtu
JRaPnD9ibxp48tP2qdL/AGiNR+GOt6P4D/bJ8N6Q2sWmnTt4muvB2sWSyxaTPO9tFGYdLvNLZJHm
aKQw3EgWZrZACfCvip8ePFOk+FPjV8VtW1n4qah4F+E3ji+8JeLNd0WPxBaaZb6ql5DBJHb2K/Ee
3mWAy3cGwQ26xqsoAVAjBf0Z+EX/AASW1r4O/tyaZ8bLX46eNNbm1PTpNL8d6Hq+h6Qlv40hihZN
OZ3sLazSOa1kYsLiSKaZowsIeOLcrY3xE/4IWfCjx38N/jFo2m674p1S7+LnjS78a6raeIvE2t3P
hiK9urqC4ljfSNK1DTIZ0UwjymlczIwjZpZBEiBxik7yGRf8EGPFPizVfhx+0hoPizxf4i8ZS+Av
j34m8KaZc6vqt9qLWdjZxWKRW8L3txcTpApLlUeaQjexLuxZj92g4Oa8H/YB/Yk/4Yk8D+Pre78T
nxb4k+KHj3WPiH4hv4dO/s2x+36jIheK0tjLM8MCRxRKFknmcsHYvhgi+8EdeT/jWjkCRofBD/ki
3hD/ALAll/6ISij4If8AJFvCH/YEsv8A0QlFdJkcp4GbGn6iME/8TrVT/wCVC4/z/kVtB84+VuRn
8P8AP+eRWF4JP/Ev1IcYGt6qTxz/AMf9x/n8a2Ogwe2SeOf8/wCP1rlk3dmiJQ+cfK3Iz+H+f88i
gPnHytyM/h/n/PIqLoMHtknjn/P+P1o6DB7ZJ45/z/j9am8gJQ+cfK3Iz+H+f88igPnHytyM/h/n
/PIqLoMHtknjn/P+P1o6DB7ZJ45/z/j9aLyAlD5x8rcjP4f5/wA8igPnHytyM/h/n/PIqLoMHtkn
jn/P+P1o6DB7ZJ45/wA/4/Wi8gJQ+cfK3Iz+H+f88igPnHytyM/h/n/PIqLoMHtknjn/AD/j9aOg
we2SeOf8/wCP1ovICUPnHytyM/h/n/PIoD5x8rcjP4f5/wA8iougwe2SeOf8/wCP1o6DB7ZJ45/z
/j9aLyAlD5x8rcjP4f5/zyKA+cfK3Iz+H+f88iougwe2SeOf8/4/WjoMHtknjn/P+P1ovICUPnHy
tyM/h/n/ADyKA+cfK3Iz+H+f88iougwe2SeOf8/4/WjoMHtknjn/AD/j9aLyAlD5x8rcjP4f5/zy
K+Yf2iv+Cj2h/BL4Z/He91bSCt/8MNQ1HQ9FsvtUn/FWXdr4Mi8VtH5iwsLTNs1ym596/wCi7gS0
iRV9M9Bg9sk8c/5/x+tfi5+1r+wb8PfFPg3x9q2mfCvQfg78NvGHjy+bQvEninRV8HeFrK0ufhtf
6TaT6jp8qxzWP2TxJCGhnvrKNo5tUR7N3a+k8yo3ejC59r337dfg7XfhfdeKfBU//CX2WneNtB8F
XbKk9jEs2qX+mW6XEMkkWLiAW+q211HLCGhuI2Qxy7XDj2r4e/tSabrHxm8HeANLSz1m317QvEup
yava36ypZTaLqWlWE1mUVSGfzdSdX+dTE9oyFSWOz82fiR4a+HH7Xml/Em60Tw9on7SHiG/+Nel6
vpVzonhG1vtLhsbK38JT6jYW+qS3E9jHv0i0WGV7q6s1vLhJ4ViwAh6L9jD47fs7fFz9rL4K2Wve
Mfgv4n1DS9a+Ll9okGo6tpt9LaazefEDTJ9GmtldmKXdxAJJLVkxJIm9oiwzWNKFndDbP1qMmOoP
b/P+f8K5L4lfFa4+Hd/Y20HhHxd4ml1O1vZLY6PbwSxtdQQefHZyPJLGsElyizCKacx2u+Ly5J4p
JrdJvMP2f/hJ/wAIh+29+0F4p/4VJ/wh/wDwmB8Of8Vz/wAJT/aH/Cw/s1g8X/IP3H+zvsWfI+6v
n79/O3NX/wBrD4taZ8DPE/gbxTqes32gWWmXUqatqWoa2ui+ErPSpXghuZNWubhHtlkEslr9kRAt
5Lc7Io3jtH1GRdbsRU+An7b2qfHf4sa74UHwP+LfhT/hFNU/sXX9V1m68NtY6NdnTrfUY45Ba6tP
PJvt7u12tBFKoa4UMV2yFPNfiZ/wVA+IngLxP4v0SD9kr403utaL4NbxXotg+teGvtWvGN7lLiOO
ODUpmkjidbJXNmLu4U38Qe2QyWv2rM/Yx/b4+BWtftM/H/T7P41/CW7v/GvxQ0//AIR22h8X6e82
v7/Cnhq0T7IglzPuuYpYB5YbMsboPmQgeLeCPgD8Btc+Neo/Fj9oXwX+zx4X0DTbrxJpN9rPivR/
D9rZ+Jde1jX5r6PTridkW3uNT0XTtOhtZ7iJ7pZLq+1KJbhpbK6aSlfqI+5fD37V0l3/AGFYa98O
fHnhDxb4m1RrLTPDGoy6Td6pc2kfkG51Umxvri3isLdZ18yWWZG3+XEqPNc2kVxyPwV/4KQ+D/ix
rt7pt1ZX+lSaZ4y1nwPqGoRZu9I0nU7TWbrTbGzu7sKqw3d+lss8cJBEZubWJ3El7p4vPhv44/A3
xn8O/wDggB4MT4oaxrum3PhTS/BOiWFo2vX/AIdaPRdSfw9Z6ppfiC0ivNKtJ/LabUrMLczRCO0E
XmXMU5uLg8F/wTp+Nv8AZ37c/wCzx4K8KfEDzPDY1S+0p/D2k+P/AO07EWEHhnVnhgNlH4819Fgj
eG3Kj7DGiNFFieMhY5Wk7Bc9v+O3/BcnwL8Ifivp3hy78E31le3Ou3GjatZ3vxH8CR3egyQ211JI
sscWuyiJ1mgEJW5a3QGQjzDL5cE3rngH/goT4U8a/CrSfGE3hf4iW+keILm6g0x9F0M+No7xLcos
kwn8OPqVvGnmO8QEsqOXhlAXCE18dfGj9mj4w/E2KbxHoVzpnhf4b6n8R/iJ4a8UajH4zt9MsLrR
LXxL4q1iSbXUm8N3kdtaJdW5g8y4ubq3MVxLb+RENTu2rqPDnw9sf2tv2QvhVYSTfDrXdd1P4yeN
PFWnaPJoa+PfDWuW513xDZ3V1taexWbTbZdSW4F6zRgslqqRtPcW8EnPOlFalJs+nfgr+31onxQs
dDtdX8EfF7wt4k1W5FhPZXXw18TfYbefzjD5gvZNOjiFq5/eJNP5JEUiNNHAwkjTE8Zf8FQPBfhD
4geD9Kk8KfF6XTvFlzc6d9qb4YeKbe5t7tLaS6iRLeTTFNwjw293u8lmljKRHyXiM81v81/sb/B3
w5qnxH+D/wAXv7O/Z48L+GbLxt4j03S9d+G/wri0aDV5oJdU8PWUE+qpqc/lwah532iEeQ0LzRwQ
ed5stsLjsx+yvD8Q/it8GNQ8e6n8XoNY8T/HbxraTpN8QPEWmCHTY7Txg+ntaQpeJHaJ9lt7Pynt
kj3wNtDGOZw8ckbjuz3z4j/t1waB8P8ATde8N/D34ia9baxrdt4ZjuNW0ebwrbaZqN3c2lnZfbI9
TWC+FrNc30KG4tLS7CBZSUJj2Gl4D/4KQeE/iB+0bqHw/t/DHxRtvI03SLu2v7r4eeI7bzJr+6v7
cpPHLp6fZII/siMLqdlhfzZAGH2aUj5Q/aP/AGZdR/aP/YY+FY121voNN0j4xtbx6r460VtV8XR6
JfePEsLCzjTXbaaeJJbK4tJ3kuCWaOwt4JIZ4riTy+n/AGQv2f8Axx8Kf+Cguqa3d6GIIrPwT4UN
34S0jxLq1pF4et7278QWqPHbS6pNprwWxheU6Yga3s47grYyyy2e/UlyRsF2epftD/8ABTy6+A/x
P8X6TfWnwT0DRPDXiS08L2l940+JtxoN5rN1NYaVePJDaRaTdfuIv7Yt1kl8wrEoaSXy4wWG34I/
4KdaFpngHxdr3xI0n+xLHwnqTW8mq+B4NX8d6DPY/wBjaZq41A31ppqCGA2+qRkNNGikROyu6qxX
5q/aA/Z68f8A7Qvxe8W+Ofh/4X8U+LvAHxB8WnXbTUdM8Q6hDbXSafpXhqDTL23sE8TaHAyNeWep
SR3uZXcWdnLFuhkgmK/sTeF7j9pb4TahpWieIPFMuuXfxR0XxNqGr+GfFEPiXSNNu7Xwh4c/tH7T
qmprq0F4lrezt9lt2e4lN3aWhieKO0lurWuSPKF3c+vPgf8At9aJ8VfDnhBdY8D/ABf8I+KvEltZ
C+0m8+GviU22i3s6p5lvJfPpyW+yKR2Q3DMkWF3kquSPPNF/4K96B4z0PTV0j4ceN4PFPiLTfDfi
Hw94d1rUNGsbrxJo+s3j28d7bypeywRbBHIAl5JbB7iaxtdyzXkIblv+CeVt4k8P+CvDnjDTviZ8
Q/iL8OPEHj/xpazLYRaFeae7zeJtShs790tNLW5ltbiRmnlntrpUglkhfyhZGZrX5C+BeuvqfgS0
8TeE7Dwv4C8G/G3xb4dh1vQPCvwfvtX8O+IIH8Cfbbu1EenFbnUbWDV9OvLd7C0uFFuz6kL9bmO5
aNUoRuwufp34c/bY0fxT8AvC3jq28J+MftPji9l0/wAP+G9ti2q6zMjXGDBIt0bJ4ZYLeW5juBde
RJblJVkKuM4Xxc/b9sPh/wDsv+IviBBoC6bqvhfxBpvhnWNE8X6rFoaaBd3l/ZWpa+vIluoYoI4r
+K5M0XnRmIgqxySPJf2fPDvif46fsCaT4i1bW/iv438ZeFfHGsv4avNAtrHR9e0hLDVL3RoUhh1x
h8v2ON/Nj1eW6uiZ5vMkklCbZr79gL4ieLf2cvGWof8ACW+OU+LfjPXRrFrPfeP7rw1cadbSR2Fm
0V1NoEUVm95HY2haNjaXVvDcsBtuYvMaYpQi3Nzdrbet42t5W5r+i16OqlkouPXf1978LKNut27q
zTXb/stf8FFf+Gif2jbf4f8AmfBTWPtXhvUPEH2/4ffE7/hLvsX2S60+38m6j+wW3k+b9v3RtubP
2eQY4zW18Zf+CoXwV+DvxP8AC/ha7+J3wukvdV8SXPh7Xll8ZWFvL4U8mwv7h5rqMuSmLi0jtSsn
l7ZLpAW3AI3kPw/8G6n8Pf2uPh94N8RaHfRavq1y2pSWujftN+M/E99o9pDb3U8Wo3mlXcMEcmnP
c2sdo0kzeU0tzFGQ5fYaXjv4AfGn4R/E/wCDel2njv4W2djq3xs8Wa3oEEvg2/1GXTf7RsPFuoob
qcanbi5/0e5kQxxxQbJHT95KsJ85csbk3Z698eP+CgT+G5vB8vwq0bwv8W9I8V6JrGvf2rp+u31z
bCDT77TLGRbZNJ03U5rtzPqYDCOICMQSliNrY434Pf8ABXjx5qmq2F5b/CvQW8OT+I9O8Oanei/8
U2gtpLrXLbRJBG+o+GbK3lnt7u6w9v8AaUk/czDGY3x478cvCfjT9lrxr8LPD+u6Z/wnniDxBpvx
Nv8AUn8IXninQ/3eoeLdG1eOaB9EstQ1G1x5kKNGf3Y3PG1zICvneJ/DrTPHXwp8W+GrCbRvG0Hh
nxR8SNCa8h1jUviNeWenyXPxBsdVt54E1jRLawSfZILaea4nElyyRyo0cslxFeXGCtdCbZ+lvxo/
4KD+KfA/xB0rw7f/AAQ+MGof8JLrc+g6FcafeeGVtNanitrq83R+Zq6SIjW1nPKpuEjOFCkK5CH6
N/Zs1298VfCy11rU/DGueD7/AFF5Gl0nV5bOW8tFR2RfMNpPPAdwXeNkrfK6g7Wyo/MD4/fswaN8
QbDQvij4V8UfF3T/AIZeC9c0c2urXvxS8S3S6tBc61psGp+IIJZr9jZ6ba6K+rRC9DKJ4L67uBst
4be5uPv7/gn/APEj4V+OfBXjXTvhB4o1zxr4Y8I+Jv7HuNbv/Gt34vttQu20zT7tzZX1zeXTm3RL
uKMoroqTx3GEBJdnBfaQM9/MmM8H5evt/n/PUUeZyeDx1/z/AJ/UVGAR6kH0/wA/5/KjPzDOOOcH
p1/z+da3YjW+CH/JFvCH/YEsv/RCUUfBD/ki3hD/ALAll/6ISiuszOO8FHbZan1/5DeqdP8Ar/n/
AM/542N/1/Af59v0/DE8HSYs9SGOmt6p3/6f7itdps9j+dccn7zNESb/AK/gP8+36fgb/r+A/wA+
36fhGZs9QT689aGmz2P51NwJN/1/Af59v0/A3/X8B/n2/T8IzNnqCfXnrQ02ex/Oi4Em/wCv4D/P
t+n4G/6/gP8APt+n4RmbPUE+vPWhps9j+dFwJN/1/Af59v0/A3/X8B/n2/T8IzNnqCfXnrQ02ex/
Oi4Em/6/gP8APt+n4G/6/gP8+36fhGZs9QT689aGmz2P50XAk3/X8B/n2/T8Df8AX8B/n2/T8IzN
nqCfXnrQ02ex/Oi4Em/6/gP8+36fgb/r+A/z7fp+EZmz1BPrz1oabPY/nRcCTf8AX8B/n2/T8APg
/T2/z7fp+EZmz1BPrz1r8ffGX7X2r/FP/gll+1rq9p43+E2q+Jdc/tWz8XXPgnT9S+IEOuzvoN1a
T3q/Y2tn0WwkjsYbC0nvbXyIYtGe4kn1IXqXRqOoH6aa7afYtau4sEbJWA47Z4/mPyFVlldduGZd
h3Lg/dPqP0/IV8PeJPEvhf4Tfs0683jGT4qy6Z4w8a6je6hZ+PNS8NWmgaldI8Vne2mraloKSaPp
eltqJaS6gdorm7uo7+NoL17iW1utX4K/tMeCvEH7TeiCP4seFvix4lvbYW/9ufDVoNQkuYJJCp03
WdNs2unh063u7rz7PUWOLZGeC4uY3kml1bldPUtM/QXwZ4zGqqtrcnF0oG1u0o/x6fX+XRb/AK/g
P8+36fh+Wvjj4teIPh18TotT1ST43WXxZPxKtdGtbeLT9dm8GX+h3OspbwIpRG0ZQ2myo/nFluVn
U7mDAoe1+I//AAUr8N+N/wBtv4Q6RD8Z/Cmg2Oi/EKfwvf8AhmDxPb2t7qjLp2qQyzajB5ofy/ti
2kMEMiAGT94d7SwiPpw0XUUf7zt9yi//AG5L1utldupTcHUvb3E3625vz5W4vZq1nfRforv+v4D/
AD7fp+Bv+v4D/Pt+n4fJnxl8C22jftS+GrbwH4s+J+q/Fe+1+013WrZ/Gep3Wg6HoDXBFz9s0xrj
+zoIZbZJ7a2UW/nSTDzE3NDPPGvxl8C22jftTeG7bwH4s+J+q/Fa+1+013WbZ/Gep3Wg6HoDXBFz
9s0xrj+zoIpbZJ7a2UW/nSTDzU3NDPOl0kpcl3a75fR6a/4Fd3l05ZaaaxUXLzW1sr/LX8XZWW75
o9z6y3/X8B/n2/T8Df8AX8B/n2/T8Pxq/aO/aC+In7PXx+0jR/iR+074UT4j/Cfwcmu65rMvxM8N
+HpLG4u7GH7Zp0Wnr4KuLq5juZFlmhsUN2WSxsbiZIp3sA36U/sNWHj+0+Fl9qPjr4h2PxOtPEd1
aaz4Y1e2vLG+C6ZPpVgzRfabKwsLe5jF99veGdLdfMt5YGbBJRJatqK57Zv+v4D/AD7fp+FXVtJt
9btfJuIy69VI4ZT7Ht2/T8JhNgdD+dAm68HH1/z7/nU3T3AyE+H2lIATDIwJ6mU/4/5/CiDwBpkF
wz+U75PCOxKr+H+Na4lx6/nQJcD7oqeWI7iR2sMEexIokVf4QoAH4f5/w5vxH8PjqGoefaNFCJcm
RWzgH1GOufSulM2R0/Wgy5PShqLVmFzzvxJ4Vm8NNGZJI5Y5chWUY5HJyPx/z3y+n+f8+/8Anr1f
xPv0la0gH303SMPQHAH8j+Vcn09v8/5/L6VzzSTsikL0/wA/59/89Tp/n/Pv/nqnT2/z/n8vpR09
v8/5/L6VFhi9P8/59/8APU6f5/z7/wCeqdPb/P8An8vpR09v8/5/L6UWAXp/n/Pv/nqdP8/59/8A
PWbTNNl1a+S3h2+ZJnG44Ax/+r9PpU+u6DJ4fuEjlkicuu75CePr+X+cU7aXApdP8/59/wDPU6f5
/wA+/wDnqnT/AD/n0/zxXReB/C4v5Rd3C5gjP7tf+ejDv9B/npRGN3YDT8B+FfsSLfXC/vnGYkPV
Bj7x9yM/T8a6gvj149v8+n+ezDNk5wfwOP8AP/6qTzR6H/vr/P8AnFdcUoqyMyQvj149v8+n+eyM
2AeoI9v8+n+ezPNHof8Avr/P+cUeaPQ/99f5/wA4p3A3fgh/yRbwh/2BLL/0QlFHwQ/5It4Q/wCw
JZf+iEoruMzivBwH2XU+eut6p+H+nz1rYB/z0rI8H/8AHrqf/Yb1T/0vuK1q4Z/EzRC4B/z0owD/
AJ6UlFSAuAf89KMA/wCelJRQAuAf89KMA/56UlFAC4B/z0owD/npSUUALgH/AD0owD/npSUUALgH
/PSjAP8AnpSUUALgH/PSjAP+elJRQAuAf89K+efFP7C+p+LfDGo+GdU+KHizxN4Z8eWsum/ERvEb
Pd6l4lsdhjS0shbyW+m6PG8Ly29w9npyyzxPuEkV0q3Y+ha+TdO/4LEfDTxL8TvBelaFoPxZ1zw3
418L6h4mtNctPhf4qk86K3l0xYDb240svdQTR6gzm5iJii8uJWObmKqV+gFPxh/wTt8Rn9mDWvAl
18Vr4o3jLw/rnhzUdO8PWtmfCWmaO+jfZbG2s9z2SOf7I8xmigitftF5NItkifuDH8Ov2Sdd8Hft
G6L8QNa+Lnjbx/8A2P4b1Tw9DYeIdO0iLyPt11plw00Umn2dpjH9nKrLIsu7cpVo9rCT6Bg+L6/F
j4R+CfF/gPTP+E08J+O1sL1bmO4+wXEWk3sIeLUIorhU8zZ5kEkkMjRSCHzmQSTRx203zn4R/aq8
efEj433HgnTvg/feFb3w5b6fqfiZPGviSys7m20++uLqGGexTS/7SiunzYX26Kaa1wUiAYiQsmU+
a9ykd6n7OHhST4v/APCc3MOualr6t5lsuo+INQvtO02UxiLzrWwmna0tZfLLJ5sESPtklG7Er7u0
s/hfonxB+IPhXUtW0/7Zd+Eb+TVtKl86SP7Fcm2mtjLhGAf91cSrtcMvz5xkKRMR+P8AX/P9aZ4x
+Mem/APwLa6tPpeseI9Y8QalHo2j6LpCQtf6xdsrusEPnyRQqdkcsrPLLGipE7MwC1nCck4q+236
JfPZLqU76vyd/Szvfyte99LXuZ+mf8E/fhvofxY1fxrp8nxH0zXfEGtDxDqYsviT4ktrC/vgI18y
ayjvhayLsiij8tojH5caR7dihaNM/wCCfvw30P4sav410+T4j6ZrviDWh4h1MWXxJ8SW1hf3wEa+
ZNZR3wtZF2RRR+W0Rj8uNI9uxQtS6T+2F/wkvw0vtX0f4bfEbV/E+kaz/wAI/qfgy3j01da0u8ES
zlZpHvFsFT7O8cwlF2Y2WWMKzO4Q1F/bj09vg1rXic+BPHi6/wCHfEFt4W1Hwa39mDXLbUrmW2SC
Av8AbPsDb0vLaYOt2U8uUZYMGQdkHNW5Hsla3Z8trW3i/cs1p8PkZt81+b7T1v397e/X4t9d/M+c
/iv/AMEifiv8Y/iPo/ijVPj9YW2o6Pr1x4hjisbfx1b2ouJrW7tnWKIeMttpGEvJdosxAVCiIEQN
JDJ9Zfsn/CDxT8C/hDD4Z8W+LLDxjd2V1M1pf28GqpItu5D+XNJqep6ldTyCRpSHe5wEaONUVYxn
d+DvxG1j4m+Gp7/W/APi34d3UVy0C6d4huNMnuZkCqRMp0+8u4thLFQGkD5RsqBtJ89+NH7cuk/B
nxVrlm3gzxz4i0XwaIH8WeINJjsP7P8ACqSosoe4S4uormYLC6ysLOC4IU4wX+Spd+ZU3u9v+H2X
z22FdWcux7fgH/PSjAP+eleY/tIftZeFf2XrTw22vrqV7d+K9Xs9IsLLTYkmuSbm7t7T7S6s6Bba
KW6gEkhPymWNQGd0Ruf+LX7dGg/CTxVr1tJ4V8aa54e8FPHH4v8AE+lwWkml+EWeKOf/AEpZLhLq
TbBNFM5tYJ/LjkDPt5wlFuzS3bXzVr/mtdru25TTUea2lr/pp39Fqe3YB/z0owD/AJ6V4f8AGX9u
jSPg54o1u1Pg3xx4k0TwcsEnivxDpMdh/Z3haOVFlDzrcXUVzOFgdZWFnBcEIcYL/JV/4n/tW6n4
J+LVz4P8PfCT4k/Ea90/TLTVb268P3OhW9rZx3MlxHEjHUNStHZybaUnYjADHOTilBOSTXV2+7v2
Xm9BXWq8k/vPYcA/56UYB/z0rzL4m/tXeGvhP8bPAfgDU7HxXJr3xDu2tNNmt9Cun0uFltrq4PnX
5QWqPstJB5IlaY7kYR7CXHPfFr9ujQfhJ4q162k8K+NNc8PeCnjj8X+J9LgtJNL8Is8Uc/8ApSyX
CXUm2CaKZzawT+XHIGfbzhqLdrdf00/PS/fTcck47ron52e2m+tttz0rx14YfUl+22+WljTDoerK
M4I9x6f4VxhHXH+ev/1v0rO+OX7Y7fAbV9Tm1D4Y/EnU/Bfh+GO51rxlYppY0fSYCoklmZJr2O9n
jhjYPI1tbTDAZV3ujIvT+M7FLPWi8SqIbhBKm3oc9f1/n71jUpvl5+j/AK/rv0Gn0Mkjrj/PX/63
6UEdcf56/wD1v0pf8+v+f8+tH+fX/P8An1rEoQjrj/PX/wCt+lDcA+3/ANf/AOt+lL/n1/z/AJ9a
QjII/wA/5/z3oAtWV5NoOpiVAolhJGGGR3H9f88U3Ur+TVb+S4kCh5TkgZwPT8On+cVJq6pLJHcx
nKXILEf3GGAV/wAPqKpgYx7f/W/w/UU32Au+H9J/trVYrfdsVuWPcKOTj34FekQQR20CRxqERAFU
D+EV594NnFt4ktSejEp+a4/z9RXoVa0tiZC4B/z0owD/AJ6UlFaEi4B/z0owD/npSUUAdB8EP+SL
eEP+wJZf+iEoo+CH/JFvCH/YEsv/AEQlFegZnEeEHxbamMf8xvVP/S+etXzM9qyfCJH2bVOn/Ib1
T/0vuK1cr6g/j1rgn8TNUL5me1HmZ7UmV9Qfx60ZX1B/HrUgL5me1HmZ7UmV9Qfx60ZX1B/HrQAv
mZ7UeZntSZX1B/HrRlfUH8etAC+ZntR5me1JlfUH8etGV9Qfx60AL5me1HmZ7UmV9Qfx60ZX1B/H
rQAvmZ7UeZntSZX1B/HrRlfUH8etAC+ZntR5me1JlfUH8etGV9Qfx60AL5me1fjDo37NHjj9jb/h
niT4qaPoXgCw0b4YX3hy61PXv2rfF3h7TI9RH9gEWhvIrc2+nTstvcMmm2UklvMlvOwcrYRE/s7l
fUH8etGV9Qfx61UZWFY/MrwBb6B8a/2cP2W9A8DeDfCf7Qep/s+67F8Kte1PS57PUPDWoJN8ObmK
+YakVkC6K817YR3LPEXZ7coLaeZYIZPmz4HaR4a03WfhH8crX4cfB+z1T4izeAbN9Bsv2V9f0DRv
Dkl1qsSvdadrk0xso7pRqjEXqlkuDY2YRfu7v3IyvqD+PWqmu6Wms6ZLASAx+ZG/usOn+fehz0sC
R8l/Az4Xf8Ip+2P8dfEp+Fn/AAif/CWf2Bnxn/wkv27/AITz7PYvF/x4bj9g+x5MPRfP37+cZruP
jz4E8QeIrH4f+KvDWi3HijUvhf4vHiGTQ7We2gutXgk068sJY4HuXjgEyi7MqiWSNW8orvQkEdtN
C1rK6OpV4zhgexH/AOr9a63wDptzprTmeIpFMiOhJHP+RXPCUlJOLs1b8PXuaqVk0uqa+TTT/Bnz
dG/xm+EPwz+Knj/wl8KNT1Lx98V/GFvfWPhee/0l7nwzZJp9np5u7zN9Da3EiLZPN9nhvP3hlij8
5MvLHZ8LeCGtf2UtU0jxD+zj8QPiH/buvfaPFGgeNbzwtqGr+KZXVZG1Nk+3vprqsscCLA00AiSI
CKNUiiRvqzK+o/OjK+o/OuhzvHl8or5RUUl6e6r9W9d1G0ucnZt7a/PVJ+qTsvL1lf5d/Zl/ZR8T
+Hfgv4k0rSNQ8Tfs6aRrPiybXfD/AIa8MpodzP4X09raKNrFo5ra+sIhLcpcXbR2oZUe4wsp+fPl
Hello,
I want to install the Self-Hosted Engine with the Data and ISO domains on an
NFSv4 share on the local server/hypervisor. NFS appears to be configured
correctly, since I can mount the share both from a client and from the host
itself. But at the point in the installation setup where I have to enter the
path hostname:/export, I get this error:
Error while mounting specified storage path: mount.nfs: mount system call
failed
In the setup log I can see:
ovirt_hosted_engine_setup.storage.nfs plugin.execute:941
Any ideas?
thx
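In case it helps narrow things down, here is how I would try the mount by hand with NFSv4 forced, roughly as the setup does (the hostname, export path, and mount point below are placeholders for my own values):

```shell
# Check what the server actually exports (placeholder hostname).
showmount -e hostname

# Try the mount manually, forcing NFS version 4
# (placeholder export path and mount point).
mkdir -p /mnt/nfstest
mount -t nfs -o vers=4 hostname:/export /mnt/nfstest

# If the mount system call fails here too, the kernel log
# usually shows the underlying reason.
dmesg | tail

# Clean up.
umount /mnt/nfstest
```

When I run this manually the mount succeeds, which is why the setup failure is confusing me.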