Network Address Change
by Paul.LKW
Hi All:
I just had a case where I needed to change the oVirt host and engine IP addresses
due to a data center decommission. I checked on the hosted-engine host and
there are some files I could change;
in ovirt-hosted-engine/hosted-engine.conf
ca_subject="O=simple.com, CN=1.2.3.4"
gateway=1.2.3.254
and of course I need to change the ovirtmgmt interface IP too. I think
changing the above lines could do the trick, but where can I change
the IPs of the other hosts in the cluster?
I think I will lose all the hosts once the hosted-engine host IP is
changed, as it is in a different subnet.
Is there any command line tool that could do that, or could someone with
such experience share it?
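For reference, a minimal sketch of the files I expect to touch on the hosted-engine host (the new addresses below are made-up examples, and this alone may not be the whole procedure):

    # /etc/ovirt-hosted-engine/hosted-engine.conf
    ca_subject="O=simple.com, CN=10.9.8.4"
    gateway=10.9.8.254

    # /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt (CentOS/RHEL host)
    IPADDR=10.9.8.4
    NETMASK=255.255.255.0
    GATEWAY=10.9.8.254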
Best Regards,
Paul.LKW
2 years, 4 months
Vm suddenly paused with error "vm has paused due to unknown storage error"
by Jasper Siero
Hi all,
Since we upgraded our oVirt nodes to CentOS 7, a VM (not a specific one, but never more than one at a time) will sometimes pause suddenly with the error "VM ... has paused due to unknown storage error". It has now happened twice in a month.
The oVirt node uses SAN storage for the VMs running on it. When a specific VM pauses with an error, the other VMs keep running without problems.
The VM runs without problems after unpausing it.
Versions:
CentOS Linux release 7.1.1503
vdsm-4.14.17-0
libvirt-daemon-1.2.8-16
vdsm.log:
VM Channels Listener::DEBUG::2015-10-25 07:43:54,382::vmChannels::95::vds::(_handle_timeouts) Timeout on fileno 78.
libvirtEventLoop::INFO::2015-10-25 07:43:56,177::vm::4602::vm.Vm::(_onIOError) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::abnormal vm stop device virtio-disk0 error eother
libvirtEventLoop::DEBUG::2015-10-25 07:43:56,178::vm::5204::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::event Suspended detail 2 opaque None
libvirtEventLoop::INFO::2015-10-25 07:43:56,178::vm::4602::vm.Vm::(_onIOError) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::abnormal vm stop device virtio-disk0 error eother
...........
libvirtEventLoop::INFO::2015-10-25 07:43:56,180::vm::4602::vm.Vm::(_onIOError) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::abnormal vm stop device virtio-disk0 error eother
specific error part in libvirt vm log:
block I/O error in device 'drive-virtio-disk0': Unknown error 32758 (32758)
...........
block I/O error in device 'drive-virtio-disk0': Unknown error 32758 (32758)
engine.log:
2015-10-25 07:44:48,945 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-40) [a43dcc8] VM diataal-prod-cas1 77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb moved from
Up --> Paused
2015-10-25 07:44:49,003 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-40) [a43dcc8] Correlation ID: null, Call Stack: null, Custom Event
ID: -1, Message: VM diataal-prod-cas1 has paused due to unknown storage error.
Has anyone experienced the same problem, or does anyone know a way to solve this?
Kind regards,
Jasper
5 years, 2 months
[Users] Problem Creating "oVirtEngine" Machine
by Richie@HIP
I can't agree with you more. Modifying every box's or virtual machine's
HOSTS file with an FQDN and IP SHOULD work, but in my case it is not.
There are several reasons I've come to believe could be the problem
during my trial-and-error testing and learning.
FIRST - MACHINE IPs.
The machines' "Names" were not appearing in the Microsoft Active
Directory DHCP along with their assigned IPs; in other words, the DHCP
just showed an "Assigned IP", equal to the Linux machine's IP, with an
<empty> (i.e. blank, none, zilch, plain old "no-letters-or-numbers")
"Name" in the "Name" column (i.e. the machine's "network name", or the
FQDN-value used by the Windows AD DNS service).
If your IP is appearing with an <empty> "name", there is no "host
name" to associate with the IP, and it makes it difficult to define an FQDN;
which isn't that useful if we're going to use the HOSTS files on all
participating machines in an oVirt installation.
I kept banging my head for three (3) long hours trying to find the problem.
In Fedora 18, I couldn't find where the "network name" of the machine
could be defined.
I tried putting the "Additional Search Domains" and/or "DHCP Client ID"
in Fedora 18's desktop, under "System Settings > Hardware > Network >
Options > IPv4 Settings".
The DHCP went crazy, showing an "Aberrant MAC Address" (i.e. a really
long string value where the machine's MAC address should be), and we knew
the MAC address as we had obtained it using "ifconfig" on the machine getting
its IP from the DHCP. So we reverted these entries, rebooted, and got
an assigned IP with the proper MAC address, but still no "Name".
I kept wandering around the "Settings", seeing which one made sense,
and, what the heck, I went for it.
Under "System Settings > System > Details" I found the information about
GNOME and the machine's hardware.
There was a field for "Device Name" that originally had
"localhost.localdomain"; I changed the value to "ovirtmanager", and
under "Graphics" changed "Forced Fallback Mode" to "ON".
I also installed all the Kerberos libraries and clients (i.e. authconfig-gtk,
authhub, authhub-client, krb5-apple-clents, krb5-auth-dialog,
krb5-workstation, pam-kcoda, pam-krb5, root-net.krb5) and rebooted.
VOILA…!!!
I don't know if it was the change of "Device Name" from
"localhost.localdomain" to "ovirtengine", or the Kerberos libraries
install, or both. But finally the MS AD DHCP was showing the
assigned IP, the machine "Name" and the proper MAC address. Regardless,
setting the machine's "network name" under "System Settings > System >
Details > Device Name", with no explanation of what "Device Name" meant
or was used for, was the last place I would have imagined this network
setting could be defined.
NOTE - Somebody has to try the two steps I did together separately, to
see which one is the real problem-solver; for me it is working, and "if
it ain't broke, don't fix it…"
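For what it's worth, I believe the GNOME "Device Name" field maps to the system hostname, so the same change can be made from a terminal on Fedora 18 (a sketch, using the name from above):

    # set the hostname that the "Device Name" field reflects
    hostnamectl set-hostname ovirtengine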
Now that I have the DHCP / IP thing sorted, I have to do the DNS stuff.
To this point, I've addressed the DHCP and the "Network Name" of the
IP lease (required for the DNS to work). This still doesn't completely
explain why modifying the HOSTS file (allowing me to set an IP and a
non-DNS FQDN) allows me to install the oVirtEngine "as long as I do not
use the default HTTPd service parameters as suggested by the install". By
using the HOSTS file to "define" FQDNs, AND NOT using the default HTTPd
suggested changes, I'm able to install the oVirtEngine (given that I use
ports 8700 and 8701) and access the "oVirtEngine Welcome Screen", BUT
NONE of the "oVirt Portals" work… YET…!!!
More to come during the week
Richie
José E ("Richie") Piovanetti, MD, MS
M: 787-615-4884 | richiepiovanetti(a)healthcareinfopartners.com
On Aug 2, 2013, at 3:10 AM, Joop <jvdwege(a)xs4all.nl> wrote:
> Hello Ritchie,
>
>> In a conversation via IRC, someone suggested that I activate "dnsmasq" to overcome what appears to be a DNS problem. I'll try that other possibility once I get home later today.
>>
>> In the meantime, what do you mean by "fixing the hostname"…? I opened and fixed the HOSTNAMES and changed it from "localhost-localdomain" to "localhost.localdomain", and that made no difference. Albeit, after changing it I didn't restart; I removed ovirtEngine (using "engine-cleanup") and reinstalled via "engine-setup". Is that what you mean…?
>>
>> In the meantime, the fact that even if I resolve the issue of oVirtEngine I will not be able to connect to the oVirt nodes unless I have DNS resolution apparently means I should do something about resolving via DNS in my home LAN (i.e. implement some sort of "DNS cache" so I can resolve my home computers via DNS inside my LAN).
>>
>> Any suggestions are MORE THAN WELCOME…!!!
>>
>
> Having set up ovirt more times than I can count, right now I share your feeling that it isn't always clear why things are going wrong, but in this case I suspect that there is a rather small thing missing.
> In short, if you set up ovirt-engine, either using virtualbox or on real hardware, and you give your host a meaningful name AND you also add that info to your /etc/hosts file, then things SHOULD work; no need for dnsmasq or even bind. It would make things easier once you start adding virt hosts to your infrastructure, since you will need to duplicate these actions on each host (add the engine name/IP to each host, add each host to the others, and all hosts to the engine).
>
> Just ask if you need more assistance and I will write down a small howto that should work out of the box; otherwise I might have some time to see if I can get things going.
>
> Regards,
>
> Joop
>
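A sketch of the /etc/hosts layout Joop describes (names and addresses are made-up examples; the same entries go on the engine and on every host):

    192.168.1.10   engine.example.com   engine
    192.168.1.21   host1.example.com    host1
    192.168.1.22   host2.example.com    host2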
5 years, 11 months
Re: [ovirt-users] Question about the ovirt-engine-sdk-java
by Michael Pasternak
Hi Salifou,
Actually the Java SDK is intentionally hiding transport-level internals
so that developers can stay in the Java domain. If your headers are static,
the easiest way would be to use a reverse proxy in the middle to intercept requests.
Can you tell me why you need this?
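If the headers really are static, a minimal nginx sketch of what I mean (server name, TLS setup and header values are placeholders, not a tested config):

    server {
        listen 443 ssl;
        server_name engine-proxy.example.com;
        # ssl_certificate / ssl_certificate_key omitted in this sketch

        location / {
            # inject the static headers before forwarding to the engine
            proxy_set_header ID       "user1@ad.xyz.com";
            proxy_set_header PASSWORD "Pwssd";
            proxy_set_header TARGET   "kobe";
            proxy_pass https://vm0.smalick.com;
        }
    }

The SDK would then be pointed at the proxy instead of at the engine directly.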
On Friday, October 16, 2015 1:14 AM, Salifou Sidi M. Malick <ssidimah@redhat.com> wrote:
Hi Michael,
I have a question about the ovirt-engine-sdk-java.
Is there a way to add custom request headers to each RHEVM API call?
Here is an example of a request that I would like to do:
$ curl -v -k \
          -H "ID: user1@ad.xyz.com" \
          -H "PASSWORD: Pwssd" \
          -H "TARGET: kobe" \
          https://vm0.smalick.com/api/hosts
I would like to add ID, PASSWORD and TARGET as HTTP request headers.
Thanks,
Salifou
5 years, 11 months
[Users] oVirt Weekly Sync Meeting Minutes -- 2012-05-23
by Mike Burns
Minutes: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-05-23-14.00.html
Minutes (text): http://ovirt.org/meetings/ovirt/2012/ovirt.2012-05-23-14.00.txt
Log: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-05-23-14.00.log.html
=========================
#ovirt: oVirt Weekly Sync
=========================
Meeting started by mburns at 14:00:23 UTC. The full logs are available
at http://ovirt.org/meetings/ovirt/2012/ovirt.2012-05-23-14.00.log.html
.
Meeting summary
---------------
* agenda and roll call (mburns, 14:00:41)
* Status of next release (mburns, 14:05:17)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=822145 (mburns,
14:05:29)
* AGREED: freeze date and beta release delayed by 1 week to 2012-06-07
(mburns, 14:12:33)
* post freeze, release notes flag needs to be used where required
(mburns, 14:14:21)
* https://bugzilla.redhat.com/show_bug.cgi?id=821867 is a VDSM blocker
for 3.1 (oschreib, 14:17:27)
* ACTION: dougsland to fix upstream vdsm right now, and open a bug on
libvirt augeas (oschreib, 14:21:44)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=822158 (mburns,
14:23:39)
* assignee not available, update to come tomorrow (mburns, 14:24:59)
* ACTION: oschreib to make sure BZ#822158 is handled quickly
(oschreib, 14:25:29)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=824397 (mburns,
14:28:55)
* 824397 expected to be merged prior next week's meeting (mburns,
14:29:45)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=824420 (mburns,
14:30:15)
* tracker for node based on F17 (mburns, 14:30:28)
* blocked by util-linux bug currently (mburns, 14:30:40)
* new build expected from util-linux maintainer in next couple days
(mburns, 14:30:55)
* sub-project status -- engine (mburns, 14:32:49)
* nothing to report outside of blockers discussed above (mburns,
14:34:00)
* sub-project status -- vdsm (mburns, 14:34:09)
* nothing outside of blockers above (mburns, 14:35:36)
* sub-project status -- node (mburns, 14:35:43)
* working on f17 migration, but blocked by util-linux bug (mburns,
14:35:58)
* should be ready for freeze deadline (mburns, 14:36:23)
* Review decision on Java 7 and Fedora jboss rpms in oVirt Engine
(mburns, 14:36:43)
* Java7 basically working (mburns, 14:37:19)
* LINK: http://gerrit.ovirt.org/#change,4416 (oschreib, 14:39:35)
* engine will make ack/nack statement next week (mburns, 14:39:49)
* fedora jboss rpms patch is in review, short tests passed (mburns,
14:40:04)
* engine ack on fedora jboss rpms and java7 needed next week (mburns,
14:44:47)
* Upcoming Workshops (mburns, 14:45:11)
* NetApp workshop set for Jan 22-24 2013 (mburns, 14:47:16)
* already at half capacity for Workshop at LinuxCon Japan (mburns,
14:47:37)
* please continue to promote it (mburns, 14:48:19)
* proposal: board meeting to be held at all major workshops (mburns,
14:48:43)
* LINK: http://www.ovirt.org/wiki/OVirt_Global_Workshops (mburns,
14:49:30)
* Open Discussion (mburns, 14:50:12)
* oVirt/Quantum integration discussion will be held separately
(mburns, 14:50:43)
Meeting ended at 14:52:47 UTC.
Action Items
------------
* dougsland to fix upstream vdsm right now, and open a bug on libvirt
augeas
* oschreib to make sure BZ#822158 is handled quickly
Action Items, by person
-----------------------
* dougsland
* dougsland to fix upstream vdsm right now, and open a bug on libvirt
augeas
* oschreib
* oschreib to make sure BZ#822158 is handled quickly
* **UNASSIGNED**
* (none)
People Present (lines said)
---------------------------
* mburns (98)
* oschreib (55)
* doronf (12)
* lh (11)
* sgordon (8)
* dougsland (8)
* ovirtbot (6)
* ofrenkel (4)
* cestila (2)
* RobertMdroid (2)
* ydary (2)
* rickyh (1)
* yzaslavs (1)
* cctrieloff (1)
* mestery_ (1)
* dustins (1)
Generated by `MeetBot`_ 0.1.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot
5 years, 11 months
[QE][ACTION REQUIRED] oVirt 3.5.1 RC status - postponed
by Sandro Bonazzola
Hi,
We still have blockers for the oVirt 3.5.1 RC release, so we need to postpone it until they're fixed.
The bug tracker [1] shows 1 open blocker:
Bug ID Whiteboard Status Summary
1160846 sla NEW Can't add disk to VM without specifying disk profile when the storage domain has more than one disk profile
In order to stabilize the release a new branch ovirt-engine-3.5.1 will be created from the same git hash used for composing the RC.
- ACTION: Gilad, please provide an ETA on the above blocker; the new proposed RC date will be decided based on the given ETA.
Maintainers:
- Please be sure that the 3.5 snapshot allows creating VMs
- Please be sure that no pending patches are going to block the release
- If any patch must block the RC release please raise the issue as soon as possible.
There are still 57 bugs [2] targeted to 3.5.1.
Excluding node and documentation bugs we still have 37 bugs [3] targeted to 3.5.1.
Maintainers / Assignee:
- Please add the bugs to the tracker if you think that 3.5.1 should not be released without them fixed.
- ACTION: Please update the target to 3.5.2 or later for bugs that won't be in 3.5.1:
it will ease gathering the blocking bugs for next releases.
- ACTION: Please fill release notes, the page has been created here [4]
Community:
- If you're testing oVirt 3.5 nightly snapshot, please add yourself to the test page [5]
[1] http://bugzilla.redhat.com/1155170
[2] http://goo.gl/7G0PDV
[3] http://goo.gl/6gUbVr
[4] http://www.ovirt.org/OVirt_3.5.1_Release_Notes
[5] http://www.ovirt.org/Testing/oVirt_3.5.1_Testing
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
5 years, 11 months
[Users] Lifecycle / upgradepath
by Sven Kieske
Hi Community,
Currently, there is no single document describing supported
(which means: working) upgrade scenarios.
I think the project has matured enough to have such a supported
upgrade path, which should be considered in the development of new
releases.
As far as I know, it is currently supported to upgrade
from x.y.z to x.y.z+1 and from x.y.z to x.y+1.z,
but not from x.y-1.z to x.y+1.z directly.
Maybe this should be put together in a wiki page at least.
Also, it would be cool to know how long a single "release"
will be supported.
In this context I would define a release as a version
bump from x.y.z to x.y+1.z or to x+1.y.z;
a bump in z would be a bugfix release.
The question is: how long will we get bugfix releases
for a given version?
What are your thoughts?
--
Mit freundlichen Grüßen / Regards
Sven Kieske
Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
5 years, 12 months
[Users] Nested virtualization with Opteron 2nd generation and oVirt 3.1 possible?
by Gianluca Cecchi
Hello,
I have 2 physical servers with Opteron 2nd gen cpu.
There is CentOS 6.3 installed and some VM already configured on them.
Their /proc/cpuinfo contains
...
model name : Dual-Core AMD Opteron(tm) Processor 8222
...
kvm_amd kernel module is loaded with its default enabled nested option
# systool -m kvm_amd -v
Module = "kvm_amd"
Attributes:
initstate = "live"
refcnt = "15"
srcversion = "43D8067144E7D8B0D53D46E"
Parameters:
nested = "1"
npt = "1"
...
I already configured a Fedora 17 VM as an oVirt 3.1 engine.
I'm trying to configure another VM as an oVirt 3.1 node with
ovirt-node-iso-2.5.5-0.1.fc17.iso.
It seems I'm not able to configure it so that the oVirt install doesn't complain.
After some attempts, I tried this in my vm.xml for the cpu:
<cpu mode='custom' match='exact'>
<model fallback='allow'>athlon</model>
<vendor>AMD</vendor>
<feature policy='require' name='pni'/>
<feature policy='require' name='rdtscp'/>
<feature policy='force' name='svm'/>
<feature policy='require' name='clflush'/>
<feature policy='require' name='syscall'/>
<feature policy='require' name='lm'/>
<feature policy='require' name='cr8legacy'/>
<feature policy='require' name='ht'/>
<feature policy='require' name='lahf_lm'/>
<feature policy='require' name='fxsr_opt'/>
<feature policy='require' name='cx16'/>
<feature policy='require' name='extapic'/>
<feature policy='require' name='mca'/>
<feature policy='require' name='cmp_legacy'/>
</cpu>
Inside node /proc/cpuinfo becomes
processor : 3
vendor_id : AuthenticAMD
cpu family : 6
model : 2
model name : QEMU Virtual CPU version 0.12.1
stepping : 3
microcode : 0x1000065
cpu MHz : 3013.706
cache size : 512 KB
fpu : yes
fpu_exception : yes
cpuid level : 2
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat
pse36 clflush mmx fxsr sse sse2 syscall mmxext fxsr_opt lm nopl pni
cx16 hypervisor lahf_lm cmp_legacy cr8_legacy
bogomips : 6027.41
TLB size : 1024 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
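Note that the flags line above does not list svm, even though the vm.xml forces it; a quick check from inside the guest (a sketch):

    # empty output means the AMD-V flag is not exposed to the guest
    grep -o svm /proc/cpuinfo | sort -u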
2 questions:
1) Is there any combination in the xml file to give to my VM so that oVirt
doesn't complain about missing hardware virtualization with this
processor?
2) Suppose 1) is not possible in my case and I still want to test the
interface and try some config operations to see, for example, the
differences with RHEV 3.0: how can I do that?
At the moment this complaint about hw virtualization prevents me from
activating the node.
I get
Installing Host f17ovn01. Step: RHEV_INSTALL.
Host f17ovn01 was successfully approved.
Host f17ovn01 running without virtualization hardware acceleration
Detected new Host f17ovn01. Host state was set to Non Operational.
Host f17ovn01 moved to Non-Operational state.
Host f17ovn01 moved to Non-Operational state as host does not meet the
cluster's minimum CPU level. Missing CPU features : CpuFlags
Can I lower the requirements to be able to operate without hw
virtualization in 3.1?
Thanks in advance,
Gianluca
5 years, 12 months
Need VM run once api
by Chandrahasa S
Hi Experts,
We are integrating oVirt with our internal cloud.
Here we installed cloud-init in the VM and then converted the VM to a template. We
deploy the template with the initial run parameters Hostname, IP Address, Gateway
and DNS.
But when we power it on, the initial run parameters are not getting pushed
inside the VM. But it works when we power on the VM using the "Run Once" option
on the oVirt portal.
I believe we need to power on the VM using a run-once API, but we are not able
to find this API.
Can someone help with this?
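For reference, what I understand by a run-once style start is the REST API "start" action with overrides; a sketch (the /api/vms/<id>/start action itself is documented, but the initialization element names vary between API versions, so treat the body below as an assumption to verify against your engine's RSDL):

    curl -k -u 'admin@internal:password' \
         -H 'Content-Type: application/xml' \
         -X POST \
         -d '<action>
               <vm>
                 <initialization>
                   <host_name>myvm01.example.com</host_name>
                   <dns_servers>10.0.0.2</dns_servers>
                 </initialization>
               </vm>
             </action>' \
         'https://engine.example.com/api/vms/<VM_ID>/start'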
I got a reply on this query last time but unfortunately the mail got deleted.
Thanks & Regards
Chandrahasa S
=====-----=====-----=====
Notice: The information contained in this e-mail
message and/or attachments to it may contain
confidential or privileged information. If you are
not the intended recipient, any dissemination, use,
review, distribution, printing or copying of the
information contained in this e-mail message
and/or attachments to it are strictly prohibited. If
you have received this communication in error,
please notify us by reply e-mail or telephone and
immediately and permanently delete the message
and any attachments. Thank you
5 years, 12 months
[Users] importing from kvm into ovirt
by Jonathan Horne
I need to import a KVM virtual machine from a standalone KVM host into my oVirt
cluster. The standalone host is using local storage, and my oVirt cluster is using
iSCSI. Can I please have some advice on the best way to get this system into ovirt?
Right now I see it as copying the .img file to somewhere… but I have no idea where to start. I found this directory on one of my oVirt nodes:
/rhev/data-center/mnt/blockSD/fe633237-14b2-4f8b-aedd-bbf753bcafaf/master/vms
But inside are just directories that appear to have UUID-type names, and
I can't tell what belongs to which VM.
Any advice would be greatly appreciated.
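One path that looks promising (a sketch, assuming the RHEL 6-era virt-v2v and an NFS export storage domain; host and guest names are placeholders):

    # on the standalone KVM host: convert the (shut down) libvirt guest
    # straight into an oVirt/RHEV export storage domain over NFS
    virt-v2v -ic qemu:///system -o rhev \
             -os nfs.example.com:/exports/ovirt-export \
             --network ovirtmgmt myguest

    # then import the VM from the export domain in the oVirt GUI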
Thanks,
jonathan
________________________________
This is a PRIVATE message. If you are not the intended recipient, please delete without copying and kindly advise us by e-mail of the mistake in delivery. NOTE: Regardless of content, this e-mail shall not operate to bind SKOPOS to any order or other contract unless pursuant to explicit written agreement or government initiative expressly permitting the use of e-mail for such purpose.
6 years
Trying to reset password for ovirt wiki
by noc
Hoping someone can help me out.
For some reason I keep getting the following error when I try to reset
my password:
Reset password
* Error sending mail: Failed to add recipient: jvandewege(a)nieuwland.nl
[SMTP: Invalid response code received from server (code: 554,
response: 5.7.1 <jvandewege(a)nieuwland.nl>: Relay access denied)]
Complete this form to receive an e-mail reminder of your account details.
Since I receive the ML at this address, it is definitely a working address.
I tried my home account too and got the same error, but then for my home provider:
Relay access denied??
A puzzled user,
Joop
6 years, 1 month
ovirt-guest-agent issue on rhel5.5
by John Michael Mercado
Hi All,
I need your help. Has anyone encountered the error below and found the
solution? Can you help me fix this?
MainThread::INFO::2015-01-27
10:22:53,247::ovirt-guest-agent::57::root::Starting oVirt guest agent
MainThread::ERROR::2015-01-27
10:22:53,248::ovirt-guest-agent::138::root::Unhandled exception in oVirt
guest agent!
Traceback (most recent call last):
File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 132, in ?
agent.run(daemon, pidfile)
File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 63, in run
self.agent = LinuxVdsAgent(config)
File "/usr/share/ovirt-guest-agent/GuestAgentLinux2.py", line 371, in
__init__
AgentLogicBase.__init__(self, config)
File "/usr/share/ovirt-guest-agent/OVirtAgentLogic.py", line 171, in
__init__
self.vio = VirtIoChannel(config.get("virtio", "device"))
File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 150, in
__init__
self._stream = VirtIoStream(vport_name)
File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 131, in
__init__
self._vport = os.open(vport_name, os.O_RDWR)
OSError: [Errno 2] No such file or directory:
'/dev/virtio-ports/com.redhat.rhevm.vdsm'
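The missing device path suggests the guest was started without a virtio-serial channel named com.redhat.rhevm.vdsm; for comparison, a sketch of the libvirt channel device that normally provides it (the socket path is an example):

    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/com.redhat.rhevm.vdsm'/>
      <target type='virtio' name='com.redhat.rhevm.vdsm'/>
    </channel>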
Thanks
6 years, 1 month
[Users] oVirt Workshop at LinuxCon Japan 2012
by Leslie Hawthorn
Hello everyone,
As part of our efforts to raise awareness of and educate more developers
about the oVirt project, we will be holding an oVirt workshop at
LinuxCon Japan, taking place on June 8, 2012. You can find full details
of the workshop agenda on the LinuxCon Japan site. [0]
Registration for the workshop is now open and is free of charge for the
first 50 participants. We will also look at adding additional
participant slots to the workshop based on demand.
Attendees who register for LinuxCon Japan via the workshop registration
link [1] will also be eligible for a discount on their LinuxCon Japan
registration.
Please spread the word to folks you think would find the workshop
useful. If they have already registered for LinuxCon Japan, they can
simply edit their existing registration to include the workshop.
[0] -
https://events.linuxfoundation.org/events/linuxcon-japan/ovirt-gluster-wo...
[1] - http://www.regonline.com/Register/Checkin.aspx?EventID=1099949
Cheers,
LH
--
Leslie Hawthorn
Community Action and Impact
Open Source and Standards @ Red Hat
identi.ca/lh
twitter.com/lhawthorn
6 years, 1 month
[Users] Moving iSCSI Master Data
by rni@chef.net
Hi,
it's me again....
I started my oVirt 'project' as a proof of concept... but it happened as always: it became production.
Now I have to move the iSCSI master data to the real iSCSI target.
Is there any way to do this, and to get rid of the old master data?
Thank you for your help
Hans-Joachim
6 years, 1 month
[Users] Can't access RHEV-H aka ovirt-node
by Scotto Alberto
Hi all,
I can't log in to the hypervisor, neither as root nor as admin, neither from another computer via ssh nor directly on the machine.
I'm sure I remember the passwords. This is not the first time it has happened: last time I reinstalled the host. Everything worked ok for about 2 weeks, and then...
What's going on? Is it a known behavior, somehow?
Before rebooting the hypervisor, I would like to try something. RHEV Manager talks to RHEV-H without any problems. Can I log in with RHEV-M's keys? How?
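If logging in with RHEV-M's key is possible, I'd expect it to look roughly like this (a sketch; the key path is the usual oVirt engine location, which I'm assuming also holds for RHEV-M):

    # from the RHEV-M machine
    ssh -i /etc/pki/ovirt-engine/keys/engine_id_rsa root@rhevh-host.example.com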
Thank you all.
Alberto Scotto
[Blue]
Via Cardinal Massaia, 83
10147 - Torino - ITALY
phone: +39 011 29100
al.scotto(a)reply.it
www.reply.it
________________________________
--
The information transmitted is intended for the person or entity to which it is addressed and may contain confidential and/or privileged material. Any review, retransmission, dissemination or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is prohibited. If you received this in error, please contact the sender and delete the material from any computer.
6 years, 1 month
Unable to make Single Sign on working on Windows 7 Guest
by Felipe Herrera Martinez
In case I'm able to create an installer, what is the name of the application that needs to be there in order for oVirt to detect that the oVirt Guest Agent is installed?
I have created an installer adding the OvirtGuestService files and the Product Name to be shown, apart from the command-line post-install steps.
I have tried with "ovirt-guest-agent" and "Ovirt guest agent" as names for the application installed on the Windows 7 guest, and even though both are presented in the oVirt VM Applications tab,
in neither case does the LogonVDSCommand appear.
Is there another option to make it work now?
Thanks in advance,
Felipe
6 years, 1 month
Re: [ovirt-users] Need VM run once api
by Chandrahasa S
Can anyone help with this?
Thanks & Regards
Chandrahasa S
From: Chandrahasa S/MUM/TCS
To: users(a)ovirt.org
Date: 28-07-2015 15:20
Subject: Need VM run once api
Hi Experts,
We are integrating oVirt with our internal cloud.
Here we installed cloud-init in the VM and then converted the VM to a template. We
deploy the template with the initial run parameters Hostname, IP Address, Gateway
and DNS.
But when we power it on, the initial run parameters are not getting pushed
inside the VM. But it works when we power on the VM using the "Run Once" option
on the oVirt portal.
I believe we need to power on the VM using a run-once API, but we are not able
to find this API.
Can someone help with this?
I got a reply on this query last time but unfortunately the mail got deleted.
Thanks & Regards
Chandrahasa S
=====-----=====-----=====
Notice: The information contained in this e-mail
message and/or attachments to it may contain
confidential or privileged information. If you are
not the intended recipient, any dissemination, use,
review, distribution, printing or copying of the
information contained in this e-mail message
and/or attachments to it are strictly prohibited. If
you have received this communication in error,
please notify us by reply e-mail or telephone and
immediately and permanently delete the message
and any attachments. Thank you
6 years, 1 month
Re: [ovirt-users] Problem Windows guests start in pause
by Dafna Ron
Hi Lucas,
Please send mails to the list next time.
can you please do rpm -qa |grep qemu.
also, can you try a different windows image?
Thanks,
Dafna
On 07/14/2014 02:03 PM, lucas castro wrote:
> On the host there I've tried to run the vm, I use a centOS 6.5
> and checked, no update for qemu, libvirt or related package.
--
Dafna Ron
6 years, 1 month
Feature: Hosted engine VM management
by Roy Golan
Hi all,
Upcoming in 3.6 is an enhancement for managing the hosted engine VM.
In short, we want to:
* Allow editing the Hosted engine VM, storage domain, disks, networks etc
* Have a shared configuration for the hosted engine VM
* Have a backup for the hosted engine VM configuration
please review and comment on the wiki below:
http://www.ovirt.org/Hosted_engine_VM_management
Thanks,
Roy
6 years, 2 months
Re: [ovirt-users] Packet loss
by Doron Fediuck
Hi Kyle,
We may have seen something similar in the past but I think there were vlans involved.
Is it the same for you?
Tony / Dan, does it ring a bell?
6 years, 2 months
Changing gateway ping address
by Matteo
Hi all,
I need to change the gateway ping address, the one set by hosted-engine setup.
Is it OK to edit /etc/ovirt-hosted-engine/hosted-engine.conf on each node,
update the gateway param with the new IP address, and restart
the agent and broker on each node?
A blind test seems fine, but I need to know whether this is the right procedure.
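Concretely, the blind test I ran on each node looked like this (assuming the stock config path and the systemd unit names ovirt-ha-agent / ovirt-ha-broker — worth double-checking on your nodes):
-------------------------------------------------
# 192.0.2.254 stands in for the new gateway ping address.
sed -i 's/^gateway=.*/gateway=192.0.2.254/' /etc/ovirt-hosted-engine/hosted-engine.conf

# Restart the HA broker and agent so they re-read the config.
systemctl restart ovirt-ha-broker ovirt-ha-agent
-------------------------------------------------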
Thanks,
Matteo
8 years, 1 month
Dedicated NICs for gluster network
by Nicolas Ecarnot
Hello,
[Here : oVirt 3.5.3, 3 x CentOS 7.0 hosts with replica-3 gluster SD on
the hosts].
On the switchs, I have created a dedicated VLAN to isolate the glusterFS
traffic, but I'm not using it yet.
I was thinking of creating a dedicated IP for each node's gluster NIC,
plus a DNS record ("my_nodes_name_GL"), but I fear that using this
hostname or IP in the oVirt GUI's host network interface tab would lead
oVirt to think this is a different host.
In case that fear isn't clearly described, let's say:
- On each node, I create a second IP (and a DNS record in the SOA) used by
gluster, plugged into the correct VLAN
- In the oVirt GUI, in the host network settings tab, the interface will be
seen with its IP, but its reverse DNS points to a different hostname.
Here, I fear oVirt might check this reverse DNS and declare that this NIC
belongs to another host.
I would also prefer not to use a reverse record pointing to the name of the
host's management IP, as this is evil and I'm a good guy.
On your side, how do you cope with a dedicated storage network in case
of storage+compute mixed hosts?
--
Nicolas ECARNOT
8 years, 8 months
oVirt-shell command to move a disk
by Nicolas Ecarnot
Hello,
I'm confused: though I use ovirt-shell to script many actions
every day, and even after a great deal of reading and testing, I
cannot find the correct syntax to move (offline/available) disks between
storage domains.
Could you help me, please?
(oVirt 3.4.4)
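The closest I have found so far is the move action on disks in the REST API (which ovirt-shell wraps); a sketch of what I mean, with placeholder IDs — though I would still prefer a native ovirt-shell one-liner, and the action is worth confirming against /api?rsdl on 3.4:
-------------------------------------------------
ENGINE=https://engine.example.com

# Move a (non-attached/available) disk to another storage domain.
# DISK_UUID and DEST_SD_UUID are placeholders.
curl -k -u 'admin@internal:secret' \
  -H 'Content-Type: application/xml' \
  -X POST "$ENGINE/api/disks/DISK_UUID/move" -d '
<action>
  <storage_domain id="DEST_SD_UUID"/>
</action>'
-------------------------------------------------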
--
Nicolas Ecarnot
8 years, 8 months
One RHEV Virtual Machine does not Automatically Resume following Compellent SAN Controller Failover
by Duckworth, Douglas C
Hello --
Not sure if y'all can help with this issue we've been seeing with RHEV...
On 11/13/2015, during Code Upgrade of Compellent SAN at our Disaster
Recovery Site, we Failed Over to Secondary SAN Controller. Most Virtual
Machines in our DR Cluster Resumed automatically after Pausing except VM
"BADVM" on Host "BADHOST."
In Engine.log you can see that BADVM was sent into "VM_PAUSED_EIO" state
at 10:47:57:
"VM BADVM has paused due to storage I/O problem."
On this Red Hat Enterprise Virtualization Hypervisor 6.6
(20150512.0.el6ev) Host, two other VMs paused but then automatically
resumed without System Administrator intervention...
In our DR Cluster, 22 VMs also resumed automatically...
None of these Guest VMs are engaged in high I/O as these are DR site VMs
not currently doing anything.
We sent this information to Dell. Their response:
"The root cause may reside within your virtualization solution, not the
parent OS (RHEV-Hypervisor disc) or Storage (Dell Compellent.)"
We are doing this Failover again on Sunday November 29th so we would
like to know how to mitigate this issue, given we have to manually
resume paused VMs that don't resume automatically.
Before we initiated SAN Controller Failover, all iSCSI paths to Targets
were present on Host tulhv2p03.
VM logs on Host show in /var/log/libvirt/qemu/badhost.log that Storage
error was reported:
block I/O error in device 'drive-virtio-disk0': Input/output error (5)
block I/O error in device 'drive-virtio-disk0': Input/output error (5)
block I/O error in device 'drive-virtio-disk0': Input/output error (5)
block I/O error in device 'drive-virtio-disk0': Input/output error (5)
All disks used by this Guest VM are provided by single Storage Domain
COM_3TB4_DR with serial "270." In syslog we do see that all paths for
that Storage Domain Failed:
Nov 13 16:47:40 multipathd: 36000d310005caf000000000000000270: remaining
active paths: 0
Though these recovered later:
Nov 13 16:59:17 multipathd: 36000d310005caf000000000000000270: sdbg -
tur checker reports path is up
Nov 13 16:59:17 multipathd: 36000d310005caf000000000000000270: remaining
active paths: 8
Does anyone have an idea of why the VM would fail to automatically
resume if the iSCSI paths used by its Storage Domain recovered?
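For the manual recovery itself, what we do today is along these lines on the affected host (an outline only — on RHEV-H virsh is read-only without a libvirt SASL account, so 'virsh -r' can list but not resume):
-------------------------------------------------
# Resume every guest left in the paused state on this host
# once the iSCSI paths are back.
for vm in $(virsh list --state-paused --name); do
    echo "Resuming $vm"
    virsh resume "$vm"
done
-------------------------------------------------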
Thanks
Doug
--
Thanks
Douglas Charles Duckworth
Unix Administrator
Tulane University
Technology Services
1555 Poydras Ave
NOLA -- 70112
E: duckd(a)tulane.edu
O: 504-988-9341
F: 504-988-8505
8 years, 11 months
3.6 upgrade issue
by Jon Archer
Hi all,
Wondering if anyone can shed any light on an error I'm seeing while running
engine-setup.
I've just upgraded the packages to the latest 3.6 ones today (from 3.5),
ran engine-setup, answered the questions, confirmed the install, and then got
presented with:
[ INFO ] Cleaning async tasks and compensations
[ INFO ] Unlocking existing entities
[ INFO ] Checking the Engine database consistency
[ INFO ] Stage: Transaction setup
[ INFO ] Stopping engine service
[ INFO ] Stopping ovirt-fence-kdump-listener service
[ INFO ] Stopping websocket-proxy service
[ INFO ] Stage: Misc configuration
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ ERROR ] Failed to execute stage 'Misc configuration': function
getdwhhistorytimekeepingbyvarname(unknown) does not exist LINE 2:
select * from GetDwhHistoryTimekeepingByVarName(
^ HINT: No function matches the given name and argument
types. You might need to add explicit type casts.
[ INFO ] Yum Performing yum transaction rollback
[ INFO ] Stage: Clean up
Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20150929144137-7u5rhg.log
[ INFO ] Generating answer file
'/var/lib/ovirt-engine/setup/answers/20150929144215-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
Any ideas, where to look to fix things?
Thanks
Jon
9 years
virt-viewer shows no bootoption
by Taste-Of-IT
Hello,
I want to install my first VM with CentOS, set up with VNC and SPICE, a
boot option, and the CentOS ISO in the DVD drive. I downloaded virt-viewer 3.1
for my Windows 10 machine. If I start the VNC console, which opens console.vv and
then virt-viewer, I only see a message with the BIOS version and no
boot medium information. If I reboot, the screen doesn't change, and I
also get no option from the activated boot menu. Can anyone help?
9 years, 1 month
MAC spoofing for specific VMs
by Christopher Young
I'm working on some load-balancing solutions and they appear to require MAC
spoofing. I did some searching and reading and as I understand it, you can
disable the MAC spoofing protection through a few methods.
I was wondering about the best manner to enable this for the VMs that
require it and not across the board (if that is even possible). I'd like
to just allow my load-balancer VMs to do what they need to, but keep the
others untouched as a security mechanism.
If anyone has any advice on the best method to handle this scenario, I
would greatly appreciate it. It seems that this might turn into some type
of feature request, though I'm not sure if this is something that has to be
done at the Linux bridge level, the port level, or the VM level. Any
explanations into that would also help in my education.
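From what I have read so far, the per-VM route seems to be the macspoof VDSM hook plus a custom VM property, so only tagged VMs lose the filter — package and property names below are as I found them in the hook docs, so please correct me if this is wrong:
-------------------------------------------------
# On each host: install the hook that drops the MAC-spoofing filter
# for VMs carrying the custom property.
yum install -y vdsm-hook-macspoof

# On the engine: expose a per-VM "macspoof" custom property
# (adjust --cver to your cluster compatibility level).
engine-config -s "UserDefinedVMProperties=macspoof=^(true|false)$" --cver=3.5
systemctl restart ovirt-engine

# Then set macspoof=true under Edit VM -> Custom Properties for the
# load-balancer VMs only; all other VMs keep the protection.
-------------------------------------------------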
Thanks,
Chris
9 years, 1 month
Highly Available in 3.6 and USB support
by jaumotte, styve
Hi everybody,

After testing some features on 3.5, we are planning to finally go to 3.6. Some problems still exist.

A major problem remains with the "Highly Available" option on VMs, which doesn't work. I had a cluster with 4 hosts and a simple VM. When I power off the node where this VM is living, the node shuts down but my VM doesn't restart on another node of the cluster. The power management of all the nodes is correctly configured. The HA feature of the hosted engine is working well (except that it is very slow).

Another problem concerns passing a USB host device to a virtual machine. We have some special USB keys for activating an old application and we need to attach such a key to a VM. At first, I tried with a standard USB mass-storage key to test this approach. I can't start the virtual machine when I add the USB device; I always get the message "The host ... did not satisfy internal filter HostDevice because it does not support host device passthrough". Any idea where I can find a HowTo to help me?
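From searching, I suspect the filter message means the host is not exposing an IOMMU, which 3.6 requires for host device passthrough; the checks I am planning to try (Intel host assumed — amd_iommu=on for AMD, GRUB2 paths assumed) are:
-------------------------------------------------
# Check whether the kernel sees an IOMMU (empty output would explain
# the scheduler's HostDevice filter refusing the host).
dmesg | grep -i -e DMAR -e IOMMU

# If it is not enabled, add intel_iommu=on to the kernel command line
# and reboot the host.
sed -i 's/^GRUB_CMDLINE_LINUX="/&intel_iommu=on /' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot
-------------------------------------------------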
Thanks for your help,
SJ
9 years, 2 months
Re: [ovirt-users] R: Re: Network instability after upgrade 3.6.0 -> 3.6.1
by Jon Archer
Hi Stefano,
It's definitely not the switch; it seems to be the latest kernel package
(kernel-3.10.0-327.3.1.el7.x86_64) which stops bonding from working
correctly. Reverting to the previous kernel brings the network up
in 802.3ad mode (4).
I know, from reading the release notes of 7.2, that there were some
changes to the bonding bits in the kernel, so I'm guessing maybe some
defaults have changed.
I'll keep digging and post back as soon as I have something.
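For anyone comparing kernels, I have been reading the negotiated state straight from the bonding proc file (bond0 stands in for your bond name):
-------------------------------------------------
# Show the active bonding mode and per-slave 802.3ad details.
cat /proc/net/bonding/bond0

# Just the negotiated mode, for a quick before/after kernel comparison.
grep "Bonding Mode" /proc/net/bonding/bond0
-------------------------------------------------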
Jon
On 29/12/15 19:55, Stefano Danzi wrote:
> Hi! I didn't solve yet. I'm still using mode 2 on bond interface.
> What's your switch model and firmware version?
>
> -------- Messaggio originale --------
> Da: Jon Archer <jon(a)rosslug.org.uk>
> Data: 29/12/2015 19:26 (GMT+01:00)
> A: users(a)ovirt.org
> Oggetto: Re: [ovirt-users] Network instability after upgrade 3.6.0 ->
> 3.6.1
>
> Stefano,
>
> I am currently experiencing the same issue. 2x nic lacp config at
> switch, mode 4 bond at server with no connectivity. Interestingly I am
> able to ping the switch itself.
>
> I haven't had time to investigate thoroughly but my first thought is
> an update somewhere.
>
> Did you ever resolve and get back to mode=4?
>
> Jon
>
> On 17 December 2015 17:51:50 GMT+00:00, Stefano Danzi
> <s.danzi(a)hawai.it> wrote:
>
> I partially solved the problem.
>
> My host machine has 2 network interfaces in a bond. The bond was
> configured with mode=4 (802.3ad) and the switch was configured the same way.
> If I remove one network cable the network becomes stable. With both
> cables attached the network is unstable.
>
> I removed the link aggregation configuration from the switch and changed the
> bond to mode=2 (balance-xor). Now the network is stable.
> The strange thing is that the previous configuration worked fine for a
> year... until the last upgrade.
>
> Now ha-agent doesn't reboot the hosted engine anymore, but I receive two
> emails from the broker every 2/5 minutes.
> First a mail with "ovirt-hosted-engine state transition
> StartState-ReinitializeFSM" and then "ovirt-hosted-engine state
> transition ReinitializeFSM-EngineStarting"
>
>
> On 17/12/2015 10:51, Stefano Danzi wrote:
>
> Hello, I have one testing host (only one host) with a self-
> hosted engine and 2 VMs (one Linux and one Windows). After
> upgrading oVirt from 3.6.0 to 3.6.1 the network connection works
> intermittently. Every 10 minutes the HA agent restarts the hosted
> engine VM because it appears down. But the machine is UP; only the
> network stops working for some minutes. I activated global
> maintenance mode to prevent engine reboots. If I ssh to the
> hosted engine, sometimes the connection works and sometimes not.
> Using a VNC connection to the engine I see that sometimes the VM
> reaches the external network and sometimes not. If I run tcpdump on
> the physical ethernet interface I don't see any packets when the
> network in the VM doesn't work. The same thing happens for the other
> two VMs. Before the upgrade I never had network problems.
9 years, 3 months
ovirt 3.6 and gluster arbiter volumes?
by Arik Mitschang
Hi ovirt-users,
I have been working on a new install of ovirt 3.6 hosted-engine and ran
into difficulty adding a gluster data storage domain to host my VMs. I
have 4 servers for gluster (separate from vm hosts) and would like to
have the quorum enforcement of replica 3 without sacrificing space. I
created a gluster volume using
replica 3 arbiter 1
That looks like this:
Volume Name: arbtest
Type: Distributed-Replicate
Volume ID: 01b36368-1f37-435c-9f48-0442e0c34160
Status: Stopped
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: t2-gluster01b:/gluster/00/arbtest
Brick2: t2-gluster02b:/gluster/00/arbtest
Brick3: t2-gluster03b:/gluster/00/arbtest.arb
Brick4: t2-gluster03b:/gluster/00/arbtest
Brick5: t2-gluster04b:/gluster/00/arbtest
Brick6: t2-gluster01b:/gluster/00/arbtest.arb
Options Reconfigured:
nfs.disable: true
network.ping-timeout: 10
storage.owner-uid: 36
storage.owner-gid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
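For reference, the create command I used for that layout was along these lines (gluster 3.7 syntax; brick paths from the info output above, and the order decides which brick in each triplet is the arbiter):
-------------------------------------------------
# Two replica-3 subvolumes, each with its third brick acting as a
# metadata-only arbiter (gluster >= 3.7).
gluster volume create arbtest replica 3 arbiter 1 \
    t2-gluster01b:/gluster/00/arbtest \
    t2-gluster02b:/gluster/00/arbtest \
    t2-gluster03b:/gluster/00/arbtest.arb \
    t2-gluster03b:/gluster/00/arbtest \
    t2-gluster04b:/gluster/00/arbtest \
    t2-gluster01b:/gluster/00/arbtest.arb
-------------------------------------------------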
But when adding it to oVirt as a storage domain I get the following error:
"Error while executing action AddGlusterFsStorageDomain: Error creating
a storage domain's metadata"
I guess my question really is, are volumes like this supposed to be
supported in ovirt 3.6? If so, then this looks like some bug to me and I
will file a report.
Here are versions of main components I am using:
ovirt-engine-3.6.0.3-1.el7.centos.noarch
glusterfs-3.7.6-1.el7.x86_64
vdsm-4.17.10.1-0.el7.centos.noarch
Thanks,
-Arik
9 years, 3 months
Unable to upgrade ovirt-engine 3.5.5 to 3.6.1 on EL6
by Frank Wall
Hi,
I've just tried to upgrade my ovirt-engine 3.5.5 which is still running on EL6,
but it failed due to a dependency error regarding slf4j:
# engine-setup
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf', '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf']
Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20151231233407-lmovl5.log
Version: otopi-1.4.0 (otopi-1.4.0-1.el6)
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ INFO ] Stage: Environment customization
--== PRODUCT OPTIONS ==--
--== PACKAGES ==--
[ INFO ] Checking for product updates...
[ ERROR ] Yum: [u'ovirt-engine-3.6.1.3-1.el6.noarch requires slf4j >= 1.7.0', u'vdsm-jsonrpc-java-1.1.5-1.el6.noarch requires slf4j >= 1.6.1']
[ INFO ] Yum: Performing yum transaction rollback
[ ERROR ] Failed to execute stage 'Environment customization': [u'ovirt-engine-3.6.1.3-1.el6.noarch requires slf4j >= 1.7.0', u'vdsm-jsonrpc-java-1.1.5-1.el6.noarch requires slf4j >= 1.6.1']
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20151231233407-lmovl5.log
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20151231233424-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
I've followed the upgrade guide [1], and yes, I'm aware that AiO is no longer
supported on EL6 [2], but this is just my Hosted-Engine VM, *not* an AiO host.
So I thought it would still work.
I haven't attached any further logs, because this error is really obvious I
guess. These are the currently installed oVirt packages:
otopi-1.4.0-1.el6.noarch
otopi-java-1.4.0-1.el6.noarch
ovirt-engine-3.5.5-1.el6.noarch
ovirt-engine-backend-3.5.5-1.el6.noarch
ovirt-engine-cli-3.5.0.5-1.el6.noarch
ovirt-engine-dbscripts-3.5.5-1.el6.noarch
ovirt-engine-extension-aaa-jdbc-1.0.4-1.el6.noarch
ovirt-engine-extensions-api-impl-3.5.5-1.el6.noarch
ovirt-engine-jboss-as-7.1.1-1.el6.x86_64
ovirt-engine-lib-3.6.1.3-1.el6.noarch
ovirt-engine-restapi-3.5.5-1.el6.noarch
ovirt-engine-sdk-python-3.5.2.1-1.el6.noarch
ovirt-engine-setup-3.6.1.3-1.el6.noarch
ovirt-engine-setup-base-3.6.1.3-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-3.6.1.3-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-3.6.1.3-1.el6.noarch
ovirt-engine-setup-plugin-websocket-proxy-3.6.1.3-1.el6.noarch
ovirt-engine-tools-3.5.5-1.el6.noarch
ovirt-engine-userportal-3.5.5-1.el6.noarch
ovirt-engine-webadmin-portal-3.5.5-1.el6.noarch
ovirt-engine-websocket-proxy-3.5.5-1.el6.noarch
ovirt-host-deploy-1.3.1-1.el6.noarch
ovirt-host-deploy-java-1.3.1-1.el6.noarch
ovirt-image-uploader-3.5.1-1.el6.noarch
ovirt-iso-uploader-3.5.2-1.el6.noarch
vdsm-jsonrpc-java-1.0.15-1.el6.noarch
Any ideas?
[1] http://www.ovirt.org/OVirt_3.6.1_Release_Notes#oVirt_Hosted_Engine
[2] http://www.ovirt.org/OVirt_3.6.1_Release_Notes#Known_issues
Regards
- Frank
9 years, 3 months
Regarding ovirt
by shailendra saxena
Hi,
Does the oVirt REST API provide any URL that returns all storage devices
attached to a particular host, and the partition list of every device? Is
creating and deleting partitions also supported?
Thanks
Shailendra
9 years, 3 months
Release Date for Ovirt-node 3.6?
by Timothy Burger
Is there any word on a release date for a final ovirt-node 3.6?
9 years, 3 months
How to add faster NIC to oVirt cluser / host?
by Christophe TREFOIS
Dear all,
I currently have a data center with two clusters, and each cluster has 1 host.
On one of the hosts, I have now enabled a 10 GbE NIC called p4p1. Currently everything goes over a 1 GbE NIC called em1 (ovirtmgmt).
My question is how I could add the 10 GbE NIC to the setup, for instance to use for transfers to the export domain, or to simply replace the 1 GbE NIC.
I would prefer little downtime (e.g. only a temporary network loss), but full downtime (shutting down) would be acceptable.
Thank you for any pointers or starting points you could provide,
Kind regards,
—
Christophe
9 years, 3 months
upgrade 3.6 to 3.6.1
by Sebastien Philippot
Hello,

We have oVirt 3.6 on CentOS 7.1 on the engine and hosts.
We would like to upgrade to 3.6.1.
I'm looking for the best procedure.
For the moment, we have 2 hosts in a cluster "prod" with only one default network.

* Must we upgrade the OS to CentOS 7.2 first?
1/ upgrade the engine to CentOS 7.2 first? => yum update, reboot and engine-setup?
2/ put the first host into maintenance, wait for migration of its VMs to the second host, and upgrade through the web interface or with "yum update and reboot" to CentOS 7.2?
3/ bring the first host back up and put the second into maintenance, etc.?
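For the engine side, my understanding of the documented minor-upgrade flow is the following (backup step and package glob as I read them in the release notes — please confirm):
-------------------------------------------------
# On the engine: back up first, refresh only the setup packages, then
# let engine-setup perform the actual 3.6.0 -> 3.6.1 upgrade.
engine-backup --mode=backup --file=engine-backup.tar --log=engine-backup.log
yum update "ovirt-engine-setup*"
engine-setup

# Hosts: one at a time -- Maintenance in the webadmin (VMs migrate off),
# then either the UI's upgrade action or a plain 'yum update && reboot'.
-------------------------------------------------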
Thanks,

Sébastien

--
------------------------------------------------------
Sébastien PHILIPPOT - Plate-forme GenOuest
IRISA-INRIA, Campus de Beaulieu
35042 Rennes cedex, France
Tél: +33 (0) 2 99 84 71 58
9 years, 4 months
Documents for java-sdk ovirt and restapi
by shailendra saxena
hi,
I am using the oVirt Java SDK and REST API. Could you tell me where I can find the
documentation for them?
--
Thanx & regards,
Shailendra Kr. Saxena
IIIT Allahabad
9 years, 4 months
Storage was removed after detach from DC
by Nhã Phạm
Hi all,

My name is Nha and I have a really serious problem with oVirt 3.5.6.
I set up oVirt 3.5.6 running on a single server with a DC named Local; storage was local too. Then I wanted to move VMs from the local server to another one, so I added a backup hard drive to move all the VMs onto it.
Everything was fine. Then I detached the hard drive, and all the VMs on the backup storage were deleted.

Steps executed:
1. Shut down the VM.
2. Select the Disks tab and select a disk.
3. Click Move and start the move to the backup storage.
4. Put the backup storage into Maintenance.
5. Click Detach on the backup storage.

Please help, because the data on the VMs is really important.

Thanks.
9 years, 4 months
doubts about oVirt
by Fauéz Passos
Good morning,

I have oVirt 3.5 installed together with iSCSI storage. For reasons beyond my control I needed to change the virtual IP of the storage. Now my host is displaying the following messages:

The error message for connection 192.168.200.14 iqn.2003-10.com.lefthandnetworks: mg-climate-space: 469: vms (LUN 36000eb3b1aeb895a00000000000001d5) returned by VDSM was: Failed to setup iSCSI subsystem

and

Failed to connect to Host OP10 Storage Servers

But everything is working properly; only this message is displayed every five minutes, leaving the log very confusing... Is there any place where I can change the virtual IP of the data network?

Note: old virtual IP: 192.168.200.14
new virtual IP: 192.168.200.20
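One thing I am considering is editing the engine's stored connection through the REST API, since the engine seems to keep the old address in its storage connection records — a sketch with placeholder IDs (not yet tested, and the domain may need to be in maintenance before the engine accepts the change):
-------------------------------------------------
ENGINE=https://engine.example.com

# List the iSCSI connections to find the one still holding 192.168.200.14.
curl -k -u 'admin@internal:secret' "$ENGINE/api/storageconnections"

# Update the stale connection to the new virtual IP.
curl -k -u 'admin@internal:secret' \
  -H 'Content-Type: application/xml' \
  -X PUT "$ENGINE/api/storageconnections/CONNECTION_UUID" -d '
<storage_connection>
  <address>192.168.200.20</address>
</storage_connection>'
-------------------------------------------------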
Thank you

--
Fauéz A. O. Passos
Systems Analyst - Embrace/INPE
+55 12 3208-7878
fauez.passos(a)inpe.br
www.centralit.com.br
9 years, 4 months
python floppy in RunOnce mode
by Giulio Casella
Hi,
I'm trying to boot a VM with a non-persistent floppy using the Python oVirt
SDK (the "Run Once" way in the administration portal), but the guest OS can't see the
floppy drive. The ultimate goal is to deploy a floppy with a sysprep
unattend.xml file for Windows 7 VM pools.
Here is a snippet of code I use:
-------------------------------------------------
myvm = api.vms.get(name="vmname")
content="This is file content!"
f=params.File(name="foobar.txt",content=content)
fs=params.Files()
fs.add_file(f)
payload=params.Payload()
payload.set_type("floppy")
payload.set_files(fs)
payloads=params.Payloads()
payloads.add_payload(payload)
thevm=params.VM()
thevm.set_payloads(payloads)
action=params.Action(vm=thevm)
myvm.start(action=action)
xml = ParseHelper.toXml(action)
print xml
-------------------------------------------------
As you can see, for debugging purposes, I print my XML action, and I get:
-------------------------------------------------
<action>
<vm>
<payloads>
<payload type="floppy">
<files>
<file>
<name>foobar.txt</name>
<content>This is file content</content>
</file>
</files>
</payload>
</payloads>
</vm>
</action>
-------------------------------------------------
In the admin portal I can see my VM in the "Run Once" state, but no floppy is
present...
In fact, in the VM process command line
(ps -ef | grep qemu-kvm | grep vmname) I can't see a -drive option
referring to the floppy (I only see 2 "-drive" options, referring to the VM
system disk and to a correctly mounted CD-ROM ISO).
What am I doing wrong?
(The engine is RHEV-M version 3.4.1-0.31.el6ev)
Thanks in advance,
Giulio
9 years, 4 months
Delete disk references without deleting the disk
by Johan Kooijman
Hi all,
I have about 100 old VMs in my cluster. They're powered down, ready for
deletion. What I want to do is delete the VMs, including disks, without
actually deleting the disk images from the storage array itself. Is that
possible? In the end I want to be able to delete the storage domain (which
then should not hold any data, as far as oVirt is concerned).
Reason for this: it's a ZFS pool with dedup enabled, deleting the images
one by one will kill the array with 100% iowa for some time.
--
Met vriendelijke groeten / With kind regards,
Johan Kooijman
9 years, 4 months
HE (3.6) on gluster storage, chicken and egg status
by Fil Di Noto
I've managed to get hosted-engine running but could use some direction
for what comes next.
Summary:
4 Hosts, CentOS 7.2, oVirt 3.6 (from ovirt.org repo)
All hosts are in default/default datacenter/cluster
All 4 hosts are running glusterd
HE is installed on glusterfs volume, replica 3, on hosts 1,2, and 3. (
started with replica 4, but removed 4th brick to satisfy HE --deploy)
Status:
All hosts are active
Datacenter is not initialized
Issues:
Attach hosted_storage to Default datacenter fails
New volume dialog doesn't allow me to select a datacenter (list is empty)
Questions:
1. Why doesn't replica 4 work as a glusterfs volume, is this just
because of the installer or is there a more fundamental reason?
2. I assume the reason I can't create new volumes is because I don't
have a data storage domain configured yet. I want all of my data
storage to be glusterfs. How do I escape this chicken/egg puzzle?
3. What question should I be asking that I am not?
Thanks
9 years, 4 months
HA cluster
by Budur Nagaraju
HI
Getting below error while configuring Hosted engine,
root@he ~]# hosted-engine --deploy
[ INFO ] Stage: Initializing
[ INFO ] Generating a temporary VNC password.
[ INFO ] Stage: Environment setup
Continuing will configure this host for serving as hypervisor and
create a VM where you have to install oVirt Engine afterwards.
Are you sure you want to continue? (Yes, No)[Yes]: yes
Configuration files: []
Log file:
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151126102302-bkozgk.log
Version: otopi-1.3.2 (otopi-1.3.2-1.el6)
It has been detected that this program is executed through an SSH
connection without using screen.
Continuing with the installation may lead to broken installation
if the network connection fails.
It is highly recommended to abort the installation and run it
inside a screen session using command "screen".
Do you want to continue anyway? (Yes, No)[No]: yes
[WARNING] Cannot detect if hardware supports virtualization
[ INFO ] Bridge ovirtmgmt already created
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ ERROR ] The following VMs has been found:
          2b8d6d91-d838-44f6-ae3b-c92cda014280
[ ERROR ] Failed to execute stage 'Environment setup': Cannot setup Hosted Engine with other VMs running
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126102310.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[root@he ~]#
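The only way I found to check which VM is blocking the deploy is through vdsClient (the vdsm-cli tooling of 3.5 assumed) — is destroying it the right move?
-------------------------------------------------
# List the VMs vdsm knows about on this host (the UUID from the error
# should show up here).
vdsClient -s 0 list table

# If that VM is disposable, destroy it so hosted-engine --deploy can run.
# (Irreversible -- double-check the UUID first.)
vdsClient -s 0 destroy 2b8d6d91-d838-44f6-ae3b-c92cda014280
-------------------------------------------------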
9 years, 4 months
Upgrade path from Fedora 20 oVirt 3.5 to Fedora 22 oVirt 3.6
by David Marzal Canovas
Hi, I would like to be sure of the correct upgrade path to take.
Should I first upgrade Fedora 20 -> Fedora 21 -> Fedora 22
and then upgrade oVirt from 3.5 to 3.6?
Or would it be better to upgrade on Fedora 20: oVirt 3.5 -> oVirt 3.6,
and then do the OS upgrades FD20 -> FD21 -> FD22?
The release notes
http://www.ovirt.org/OVirt_3.6_Release_Notes#Install_.2F_Upgrade_from_pre...
[1]
don't say anything about the upgrade path of the OS, but from searching I'm
aware that:
oVirt 3.5 is only compatible with FD20, not with 21 or 22
oVirt 3.6 is only compatible with FD22, not with 20 or 21
Thanks in advance
--
David Marzal Cánovas
Servicio de Mecanización e Informática
Asamblea Regional de Murcia
Paseo Alfonso XIII, nº53
30203 - Cartagena
Tlfno: 968326800
Links:
------
[1]
http://www.ovirt.org/OVirt_3.6_Release_Notes#Install_.2F_Upgrade_from_pre...
9 years, 4 months
New oVirt Node releases?
by Kevin Hung
Hello,
I have noticed that there have not been any official releases of oVirt
Node since 3.5.2 [1]. I have tried looking through the Users and the
Announce mailing lists for any information regarding the lack of
releases, but I was not able to find anything from the past few months.
The official documentation for oVirt seems to indicate that oVirt Node
is still supported.
Are there just build problems preventing oVirt Node from being released?
Or are we supposed to use either the latest Jenkins build or our own
build [2] if we wish to deploy oVirt Node?
[1] http://resources.ovirt.org/pub/ovirt-3.5/iso/ovirt-node/el7-3.5.2/
[2] http://www.ovirt.org/Node_Building
9 years, 4 months
Which browser should I use?
by gregor
Hi,
after a painful upgrade from 3.5 to 3.6 (AIO) I'm trying to figure out the deeper meaning of the message displayed on the web interface:
"This Browser version isn't optimal for displaying the application graphics (refer to Documentation for details)"
Can anybody please point me to where in the documentation this is explained?
I tried the latest Firefox and Google Chrome from Arch Linux but still get this message. (I hope it doesn't mean I should use Internet Explorer, now known as Edge.)
cheers
gregor
9 years, 4 months
Refresh Host Devices
by ovirt@timmi.org
Hi list,
how can I refresh the host devices (USB) in the frontend?
"Refresh Capabilities" does not work for me.
I want to attach a USB device to a VM but I can't see the device in the
frontend.
# lsusb does show the device.
Best regards
Christoph
9 years, 4 months
Migration Failure With FibreChannel+NFS
by Charles Tassell
Hi Everyone,
I've been playing around with oVirt 3.6.1 to see if we can use it to
replace VMWare, and I'm running into a problem with live migrations.
They fail and I can't seem to find an error message that describes why
(the error logging is VERY verbose, so maybe I'm just missing the
important part.)
I've setup two hosts that use a fibre channel SAN for the VM
datastore and an NFS share for the ISO datastore. I have a VM which is
just booting off of a SystemRescue ISO file with a 2GB disk. It seems
to run fine, but when I try to migrate it to the other host I get the
following in the engine.log of the hosted engine:
2015-12-31 09:28:20,433 INFO
[org.ovirt.engine.core.bll.MigrateVmCommand] (default task-31)
[61255087] Lock Acquired to object
'EngineLock:{exclusiveLocks='[2be4938e-f4a3-4322-bae3-8a9628b81835=<VM,
ACTION_TYPE_FAILED_VM_IS_BEING_MIGRATED$VmName cdTest02>]',
sharedLocks='null'}'
2015-12-31 09:28:20,526 INFO
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default
task-31) [61255087] Candidate host 'oVirt-01'
('cbfd733b-8ced-487d-8754-a2217ce1210f') was filtered out by
'VAR__FILTERTYPE__INTERNAL' filter 'Migration' (correlation id: null)
2015-12-31 09:28:20,646 INFO
[org.ovirt.engine.core.bll.MigrateVmCommand]
(org.ovirt.thread.pool-8-thread-30) [61255087] Running command:
MigrateVmCommand internal: false. Entities affected : ID:
2be4938e-f4a3-4322-bae3-8a9628b81835 Type: VMAction group MIGRATE_VM
with role type USER
2015-12-31 09:28:20,701 INFO
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
(org.ovirt.thread.pool-8-thread-30) [61255087] START, MigrateVDSCommand(
MigrateVDSCommandParameters:{runAsync='true',
hostId='cbfd733b-8ced-487d-8754-a2217ce1210f',
vmId='2be4938e-f4a3-4322-bae3-8a9628b81835',
srcHost='ovirt-01.virt.roblib.upei.ca',
dstVdsId='1200a78f-6d05-4e5e-9ef7-6798cf741310',
dstHost='ovirt-02.virt.roblib.upei.ca:54321', migrationMethod='ONLINE',
tunnelMigration='false', migrationDowntime='0', autoConverge='false',
migrateCompressed='false', consoleAddress='null'}), log id: f2548d4
2015-12-31 09:28:20,703 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(org.ovirt.thread.pool-8-thread-30) [61255087] START,
MigrateBrokerVDSCommand(HostName = oVirt-01,
MigrateVDSCommandParameters:{runAsync='true',
hostId='cbfd733b-8ced-487d-8754-a2217ce1210f',
vmId='2be4938e-f4a3-4322-bae3-8a9628b81835',
srcHost='ovirt-01.virt.roblib.upei.ca',
dstVdsId='1200a78f-6d05-4e5e-9ef7-6798cf741310',
dstHost='ovirt-02.virt.roblib.upei.ca:54321', migrationMethod='ONLINE',
tunnelMigration='false', migrationDowntime='0', autoConverge='false',
migrateCompressed='false', consoleAddress='null'}), log id: 5ec26536
2015-12-31 09:28:21,435 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(org.ovirt.thread.pool-8-thread-30) [61255087] FINISH,
MigrateBrokerVDSCommand, log id: 5ec26536
2015-12-31 09:28:21,449 INFO
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
(org.ovirt.thread.pool-8-thread-30) [61255087] FINISH,
MigrateVDSCommand, return: MigratingFrom, log id: f2548d4
2015-12-31 09:28:21,504 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-30) [61255087] Correlation ID: 61255087,
Job ID: ff37dfc9-f543-4e7b-983c-62cb0056959c, Call Stack: null, Custom
Event ID: -1, Message: Migration started (VM: cdTest02, Source:
oVirt-01, Destination: oVirt-02, User: admin@internal).
2015-12-31 09:28:22,984 WARN
[org.ovirt.engine.core.vdsbroker.VmsMonitoring]
(DefaultQuartzScheduler_Worker-3) [] skipping VM
'2be4938e-f4a3-4322-bae3-8a9628b81835' from this monitoring cycle - the
VM data has changed since fetching the data
2015-12-31 09:28:22,992 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
(DefaultQuartzScheduler_Worker-3) [] START, FullListVDSCommand(HostName
= , FullListVDSCommandParameters:{runAsync='true',
hostId='cbfd733b-8ced-487d-8754-a2217ce1210f',
vds='Host[,cbfd733b-8ced-487d-8754-a2217ce1210f]',
vmIds='[2beb0a49-6f2a-460a-b253-d3fcc7b68d31]'}), log id: dfa1177
2015-12-31 09:28:23,628 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
(DefaultQuartzScheduler_Worker-3) [] FINISH, FullListVDSCommand, return:
[{status=Up, nicModel=rtl8139,pv, emulatedMachine=pc,
guestDiskMapping={96768549-c104-4e9c-a={name=/dev/vda},
QEMU_DVD-ROM={name=/dev/sr0}},
vmId=2beb0a49-6f2a-460a-b253-d3fcc7b68d31, pid=9358,
devices=[Ljava.lang.Object;@16b85d46, smp=2, vmType=kvm, displayIp=0,
display=vnc, displaySecurePort=-1, memSize=4096, displayPort=5900,
cpuType=Westmere,
spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir,
statusTime=4299531100, vmName=HostedEngine, clientIp=,
pauseCode=NOERR}], log id: dfa1177
2015-12-31 09:28:23,636 INFO
[org.ovirt.engine.core.bll.storage.GetExistingStorageDomainListQuery]
(org.ovirt.thread.pool-8-thread-32) [699a8657] START,
GetExistingStorageDomainListQuery(GetExistingStorageDomainListParameters:{refresh='true',
filtered='false'}), log id: 23a50a1d
2015-12-31 09:28:23,637 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainsListVDSCommand]
(org.ovirt.thread.pool-8-thread-32) [699a8657] START,
HSMGetStorageDomainsListVDSCommand(HostName = oVirt-01,
HSMGetStorageDomainsListVDSCommandParameters:{runAsync='true',
hostId='cbfd733b-8ced-487d-8754-a2217ce1210f',
storagePoolId='00000000-0000-0000-0000-000000000000',
storageType='null', storageDomainType='Data', path='null'}), log id:
3b3d52a7
2015-12-31 09:28:25,137 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainsListVDSCommand]
(org.ovirt.thread.pool-8-thread-32) [699a8657] FINISH,
HSMGetStorageDomainsListVDSCommand, return:
[383a1e7d-8b86-4a50-8bc3-e4374c57a3e5,
8dfcc83b-64d7-4ff7-9800-02ba3430ea7b], log id: 3b3d52a7
2015-12-31 09:28:25,155 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainInfoVDSCommand]
(org.ovirt.thread.pool-8-thread-32) [699a8657] START,
HSMGetStorageDomainInfoVDSCommand(HostName = oVirt-01,
HSMGetStorageDomainInfoVDSCommandParameters:{runAsync='true',
hostId='cbfd733b-8ced-487d-8754-a2217ce1210f',
storageDomainId='8dfcc83b-64d7-4ff7-9800-02ba3430ea7b'}), log id: 173f9a1
2015-12-31 09:28:26,407 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainInfoVDSCommand]
(org.ovirt.thread.pool-8-thread-32) [699a8657] FINISH,
HSMGetStorageDomainInfoVDSCommand, return:
<StorageDomainStatic:{name='oVirt_FC01',
id='8dfcc83b-64d7-4ff7-9800-02ba3430ea7b'}, null>, log id: 173f9a1
2015-12-31 09:28:26,407 INFO
[org.ovirt.engine.core.bll.storage.GetExistingStorageDomainListQuery]
(org.ovirt.thread.pool-8-thread-32) [699a8657] FINISH,
GetExistingStorageDomainListQuery, log id: 23a50a1d
2015-12-31 09:28:26,407 WARN
[org.ovirt.engine.core.bll.ImportHostedEngineStorageDomainCommand]
(org.ovirt.thread.pool-8-thread-32) [] CanDoAction of action
'ImportHostedEngineStorageDomain' failed for user SYSTEM. Reasons:
ACTION_TYPE_FAILED_STORAGE_DOMAIN_NOT_EXIST
2015-12-31 09:28:38,649 INFO
[org.ovirt.engine.core.vdsbroker.VmAnalyzer]
(DefaultQuartzScheduler_Worker-52) [] VM
'2be4938e-f4a3-4322-bae3-8a9628b81835'(cdTest02) moved from
'MigratingFrom' --> 'Up'
2015-12-31 09:28:38,650 INFO
[org.ovirt.engine.core.vdsbroker.VmAnalyzer]
(DefaultQuartzScheduler_Worker-52) [] Adding VM
'2be4938e-f4a3-4322-bae3-8a9628b81835' to re-run list
2015-12-31 09:28:38,662 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
(DefaultQuartzScheduler_Worker-52) [] START, FullListVDSCommand(HostName
= , FullListVDSCommandParameters:{runAsync='true',
hostId='cbfd733b-8ced-487d-8754-a2217ce1210f',
vds='Host[,cbfd733b-8ced-487d-8754-a2217ce1210f]',
vmIds='[2beb0a49-6f2a-460a-b253-d3fcc7b68d31]'}), log id: 6ca982ad
2015-12-31 09:28:39,672 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
(DefaultQuartzScheduler_Worker-52) [] FINISH, FullListVDSCommand,
return: [{status=Up, nicModel=rtl8139,pv, emulatedMachine=pc,
guestDiskMapping={96768549-c104-4e9c-a={name=/dev/vda},
QEMU_DVD-ROM={name=/dev/sr0}},
vmId=2beb0a49-6f2a-460a-b253-d3fcc7b68d31, pid=9358,
devices=[Ljava.lang.Object;@7cf9a247, smp=2, vmType=kvm, displayIp=0,
display=vnc, displaySecurePort=-1, memSize=4096, displayPort=5900,
cpuType=Westmere,
spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir,
statusTime=4299546770, vmName=HostedEngine, clientIp=,
pauseCode=NOERR}], log id: 6ca982ad
2015-12-31 09:28:39,681 INFO
[org.ovirt.engine.core.bll.storage.GetExistingStorageDomainListQuery]
(org.ovirt.thread.pool-8-thread-37) [718efaaf] START,
GetExistingStorageDomainListQuery(GetExistingStorageDomainListParameters:{refresh='true',
filtered='false'}), log id: 2751bc0b
2015-12-31 09:28:39,682 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainsListVDSCommand]
(org.ovirt.thread.pool-8-thread-37) [718efaaf] START,
HSMGetStorageDomainsListVDSCommand(HostName = oVirt-01,
HSMGetStorageDomainsListVDSCommandParameters:{runAsync='true',
hostId='cbfd733b-8ced-487d-8754-a2217ce1210f',
storagePoolId='00000000-0000-0000-0000-000000000000',
storageType='null', storageDomainType='Data', path='null'}), log id:
6ffe19fd
2015-12-31 09:28:39,730 ERROR
[org.ovirt.engine.core.vdsbroker.VmsMonitoring]
(DefaultQuartzScheduler_Worker-52) [] Rerun VM
'2be4938e-f4a3-4322-bae3-8a9628b81835'. Called from VDS 'oVirt-01'
2015-12-31 09:28:39,738 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(org.ovirt.thread.pool-8-thread-36) [] START,
MigrateStatusVDSCommand(HostName = oVirt-01,
MigrateStatusVDSCommandParameters:{runAsync='true',
hostId='cbfd733b-8ced-487d-8754-a2217ce1210f',
vmId='2be4938e-f4a3-4322-bae3-8a9628b81835'}), log id: 3cd2d338
2015-12-31 09:28:40,323 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(org.ovirt.thread.pool-8-thread-36) [] FINISH, MigrateStatusVDSCommand,
log id: 3cd2d338
2015-12-31 09:28:40,362 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-36) [] Correlation ID: 61255087, Job ID:
ff37dfc9-f543-4e7b-983c-62cb0056959c, Call Stack: null, Custom Event ID:
-1, Message: Failed to migrate VM cdTest02 to Host oVirt-02 . Trying to
migrate to another Host.
2015-12-31 09:28:40,477 WARN
[org.ovirt.engine.core.bll.MigrateVmCommand]
(org.ovirt.thread.pool-8-thread-36) [] CanDoAction of action 'MigrateVm'
failed for user admin@internal. Reasons:
VAR__ACTION__MIGRATE,VAR__TYPE__VM,VAR__ACTION__MIGRATE,VAR__TYPE__VM,VAR__ACTION__MIGRATE,VAR__TYPE__VM,SCHEDULING_NO_HOSTS
2015-12-31 09:28:40,478 INFO
[org.ovirt.engine.core.bll.MigrateVmCommand]
(org.ovirt.thread.pool-8-thread-36) [] Lock freed to object
'EngineLock:{exclusiveLocks='[2be4938e-f4a3-4322-bae3-8a9628b81835=<VM,
ACTION_TYPE_FAILED_VM_IS_BEING_MIGRATED$VmName cdTest02>]',
sharedLocks='null'}'
2015-12-31 09:28:40,512 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-36) [] Correlation ID: 61255087, Job ID:
ff37dfc9-f543-4e7b-983c-62cb0056959c, Call Stack: null, Custom Event ID:
-1, Message: No available host was found to migrate VM cdTest02 to.
2015-12-31 09:28:40,520 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-36) [] Correlation ID: 61255087, Job ID:
ff37dfc9-f543-4e7b-983c-62cb0056959c, Call Stack: null, Custom Event ID:
-1, Message: Migration failed (VM: cdTest02, Source: oVirt-01).
2015-12-31 09:28:42,059 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainsListVDSCommand]
(org.ovirt.thread.pool-8-thread-37) [718efaaf] FINISH,
HSMGetStorageDomainsListVDSCommand, return:
[383a1e7d-8b86-4a50-8bc3-e4374c57a3e5,
8dfcc83b-64d7-4ff7-9800-02ba3430ea7b], log id: 6ffe19fd
2015-12-31 09:28:42,088 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainInfoVDSCommand]
(org.ovirt.thread.pool-8-thread-37) [718efaaf] START,
HSMGetStorageDomainInfoVDSCommand(HostName = oVirt-01,
HSMGetStorageDomainInfoVDSCommandParameters:{runAsync='true',
hostId='cbfd733b-8ced-487d-8754-a2217ce1210f',
storageDomainId='8dfcc83b-64d7-4ff7-9800-02ba3430ea7b'}), log id: 4aca610e
2015-12-31 09:28:43,340 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainInfoVDSCommand]
(org.ovirt.thread.pool-8-thread-37) [718efaaf] FINISH,
HSMGetStorageDomainInfoVDSCommand, return:
<StorageDomainStatic:{name='oVirt_FC01',
id='8dfcc83b-64d7-4ff7-9800-02ba3430ea7b'}, null>, log id: 4aca610e
2015-12-31 09:28:43,341 INFO
[org.ovirt.engine.core.bll.storage.GetExistingStorageDomainListQuery]
(org.ovirt.thread.pool-8-thread-37) [718efaaf] FINISH,
GetExistingStorageDomainListQuery, log id: 2751bc0b
2015-12-31 09:28:43,341 WARN
[org.ovirt.engine.core.bll.ImportHostedEngineStorageDomainCommand]
(org.ovirt.thread.pool-8-thread-37) [] CanDoAction of action
'ImportHostedEngineStorageDomain' failed for user SYSTEM. Reasons:
ACTION_TYPE_FAILED_STORAGE_DOMAIN_NOT_EXIST
2015-12-31 09:28:55,757 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
(DefaultQuartzScheduler_Worker-8) [] START, FullListVDSCommand(HostName
= , FullListVDSCommandParameters:{runAsync='true',
hostId='cbfd733b-8ced-487d-8754-a2217ce1210f',
vds='Host[,cbfd733b-8ced-487d-8754-a2217ce1210f]',
vmIds='[2beb0a49-6f2a-460a-b253-d3fcc7b68d31]'}), log id: 6a41d8c6
2015-12-31 09:28:56,767 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
(DefaultQuartzScheduler_Worker-8) [] FINISH, FullListVDSCommand, return:
[{status=Up, nicModel=rtl8139,pv, emulatedMachine=pc,
guestDiskMapping={96768549-c104-4e9c-a={name=/dev/vda},
QEMU_DVD-ROM={name=/dev/sr0}},
vmId=2beb0a49-6f2a-460a-b253-d3fcc7b68d31, pid=9358,
devices=[Ljava.lang.Object;@74a71c9b, smp=2, vmType=kvm, displayIp=0,
display=vnc, displaySecurePort=-1, memSize=4096, displayPort=5900,
cpuType=Westmere,
spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir,
statusTime=4299563870, vmName=HostedEngine, clientIp=,
pauseCode=NOERR}], log id: 6a41d8c6
2015-12-31 09:28:56,776 INFO
[org.ovirt.engine.core.bll.storage.GetExistingStorageDomainListQuery]
(org.ovirt.thread.pool-8-thread-42) [59b077df] START,
GetExistingStorageDomainListQuery(GetExistingStorageDomainListParameters:{refresh='true',
filtered='false'}), log id: c77c6e6
2015-12-31 09:28:56,777 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainsListVDSCommand]
(org.ovirt.thread.pool-8-thread-42) [59b077df] START,
HSMGetStorageDomainsListVDSCommand(HostName = oVirt-01,
HSMGetStorageDomainsListVDSCommandParameters:{runAsync='true',
hostId='cbfd733b-8ced-487d-8754-a2217ce1210f',
storagePoolId='00000000-0000-0000-0000-000000000000',
storageType='null', storageDomainType='Data', path='null'}), log id:
3faa3525
2015-12-31 09:28:57,435 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainsListVDSCommand]
(org.ovirt.thread.pool-8-thread-42) [59b077df] FINISH,
HSMGetStorageDomainsListVDSCommand, return:
[383a1e7d-8b86-4a50-8bc3-e4374c57a3e5,
8dfcc83b-64d7-4ff7-9800-02ba3430ea7b], log id: 3faa3525
2015-12-31 09:28:57,456 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainInfoVDSCommand]
(org.ovirt.thread.pool-8-thread-42) [59b077df] START,
HSMGetStorageDomainInfoVDSCommand(HostName = oVirt-01,
HSMGetStorageDomainInfoVDSCommandParameters:{runAsync='true',
hostId='cbfd733b-8ced-487d-8754-a2217ce1210f',
storageDomainId='8dfcc83b-64d7-4ff7-9800-02ba3430ea7b'}), log id: 3e139bdf
2015-12-31 09:28:58,666 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainInfoVDSCommand]
(org.ovirt.thread.pool-8-thread-42) [59b077df] FINISH,
HSMGetStorageDomainInfoVDSCommand, return:
<StorageDomainStatic:{name='oVirt_FC01',
id='8dfcc83b-64d7-4ff7-9800-02ba3430ea7b'}, null>, log id: 3e139bdf
2015-12-31 09:28:58,666 INFO
[org.ovirt.engine.core.bll.storage.GetExistingStorageDomainListQuery]
(org.ovirt.thread.pool-8-thread-42) [59b077df] FINISH,
GetExistingStorageDomainListQuery, log id: c77c6e6
2015-12-31 09:28:58,666 WARN
[org.ovirt.engine.core.bll.ImportHostedEngineStorageDomainCommand]
(org.ovirt.thread.pool-8-thread-42) [] CanDoAction of action
'ImportHostedEngineStorageDomain' failed for user SYSTEM. Reasons:
ACTION_TYPE_FAILED_STORAGE_DOMAIN_NOT_EXIST
2015-12-31 09:29:12,776 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
(DefaultQuartzScheduler_Worker-60) [] START, FullListVDSCommand(HostName
= , FullListVDSCommandParameters:{runAsync='true',
hostId='cbfd733b-8ced-487d-8754-a2217ce1210f',
vds='Host[,cbfd733b-8ced-487d-8754-a2217ce1210f]',
vmIds='[2beb0a49-6f2a-460a-b253-d3fcc7b68d31]'}), log id: 18733300
2015-12-31 09:29:12,823 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
(DefaultQuartzScheduler_Worker-60) [] FINISH, FullListVDSCommand,
return: [{status=Up, nicModel=rtl8139,pv, emulatedMachine=pc,
guestDiskMapping={96768549-c104-4e9c-a={name=/dev/vda},
QEMU_DVD-ROM={name=/dev/sr0}},
vmId=2beb0a49-6f2a-460a-b253-d3fcc7b68d31, pid=9358,
devices=[Ljava.lang.Object;@27f48830, smp=2, vmType=kvm, displayIp=0,
display=vnc, displaySecurePort=-1, memSize=4096, displayPort=5900,
cpuType=Westmere,
spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir,
statusTime=4299580890, vmName=HostedEngine, clientIp=,
pauseCode=NOERR}], log id: 18733300
2015-12-31 09:29:12,832 INFO
[org.ovirt.engine.core.bll.storage.GetExistingStorageDomainListQuery]
(org.ovirt.thread.pool-8-thread-50) [4ddfdad5] START,
GetExistingStorageDomainListQuery(GetExistingStorageDomainListParameters:{refresh='true',
filtered='false'}), log id: 7bbc9a0a
2015-12-31 09:29:12,833 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainsListVDSCommand]
(org.ovirt.thread.pool-8-thread-50) [4ddfdad5] START,
HSMGetStorageDomainsListVDSCommand(HostName = oVirt-01,
HSMGetStorageDomainsListVDSCommandParameters:{runAsync='true',
hostId='cbfd733b-8ced-487d-8754-a2217ce1210f',
storagePoolId='00000000-0000-0000-0000-000000000000',
storageType='null', storageDomainType='Data', path='null'}), log id:
4d600567
2015-12-31 09:29:13,438 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainsListVDSCommand]
(org.ovirt.thread.pool-8-thread-50) [4ddfdad5] FINISH,
HSMGetStorageDomainsListVDSCommand, return:
[383a1e7d-8b86-4a50-8bc3-e4374c57a3e5,
8dfcc83b-64d7-4ff7-9800-02ba3430ea7b], log id: 4d600567
2015-12-31 09:29:13,482 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainInfoVDSCommand]
(org.ovirt.thread.pool-8-thread-50) [4ddfdad5] START,
HSMGetStorageDomainInfoVDSCommand(HostName = oVirt-01,
HSMGetStorageDomainInfoVDSCommandParameters:{runAsync='true',
hostId='cbfd733b-8ced-487d-8754-a2217ce1210f',
storageDomainId='8dfcc83b-64d7-4ff7-9800-02ba3430ea7b'}), log id: 6ff4b44f
2015-12-31 09:29:14,883 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainInfoVDSCommand]
(org.ovirt.thread.pool-8-thread-50) [4ddfdad5] FINISH,
HSMGetStorageDomainInfoVDSCommand, return:
<StorageDomainStatic:{name='oVirt_FC01',
id='8dfcc83b-64d7-4ff7-9800-02ba3430ea7b'}, null>, log id: 6ff4b44f
2015-12-31 09:29:14,883 INFO
[org.ovirt.engine.core.bll.storage.GetExistingStorageDomainListQuery]
(org.ovirt.thread.pool-8-thread-50) [4ddfdad5] FINISH,
GetExistingStorageDomainListQuery, log id: 7bbc9a0a
2015-12-31 09:29:14,883 WARN
[org.ovirt.engine.core.bll.ImportHostedEngineStorageDomainCommand]
(org.ovirt.thread.pool-8-thread-50) [] CanDoAction of action
'ImportHostedEngineStorageDomain' failed for user SYSTEM. Reasons:
ACTION_TYPE_FAILED_STORAGE_DOMAIN_NOT_EXIST
Any ideas?
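For anyone digging into the same failure: the engine log above only records
the scheduling outcome ("Rerun VM", "No available host"); the underlying
libvirt/qemu error usually only appears on the hosts themselves. A hedged
sketch, assuming default log locations on EL7 (check both source and
destination around the time of the attempt):

grep -iE 'migrat|error' /var/log/vdsm/vdsm.log | tail -n 50
grep -i migrat /var/log/libvirt/qemu/cdTest02.log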
9 years, 4 months
Configuring another interface for trunked (tagged) VM traffic
by Will Dennis
Hi all,
Taking the next step in configuring my newly established oVirt cluster: setting up a trunk (VLAN-tagged) connection to each cluster host (there are 3) for VM traffic. What I’m looking at is akin to setting up vSwitches on VMware, except I have never done this on a VMware cluster, just on individual hosts…
Anyhow, I have the following NICs available on my three hosts (conveniently, they are the exact same hardware platform):
ovirt-node-01 | success | rc=0 >>
3: enp4s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
4: enp4s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
5: enp12s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UP mode DEFAULT qlen 1000
6: enp12s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
ovirt-node-02 | success | rc=0 >>
3: enp4s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
4: enp4s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
5: enp12s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UP mode DEFAULT qlen 1000
6: enp12s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
ovirt-node-03 | success | rc=0 >>
3: enp4s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
4: enp4s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
5: enp12s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UP mode DEFAULT qlen 1000
6: enp12s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
As you may see, I am using the ‘enp12s0f0’ interface on each host for the ‘ovirtmgmt’ bridge. This network carries the admin traffic as well as Gluster distributed filesystem traffic, but I now want to establish a separate link to each host for VM traffic. The ‘ovirtmgmt’ bridge is NOT trunked/tagged, only a single VLAN is used. For the VM traffic, I’d like to use the ‘enp4s0f0’ interface on each host, and tie them into a logical network named “vm-traffic” (or the like) and make that a trunked/tagged interface.
Are there any existing succinct instructions on how to do this? I have been reading thru the oVirt Admin Manual’s “Logical Networks” section (http://www.ovirt.org/OVirt_Administration_Guide#Logical_Network_Tasks) but it hasn’t “clicked” in my mind yet...
Thanks,
Will
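For reference, the short version of the Admin Guide section is: create a
VLAN-tagged logical network in the Data Center, mark it as a VM network, then
attach it to enp4s0f0 on each host via Hosts -> Network Interfaces -> Setup
Host Networks. A hedged sketch of the first step via the 3.6 REST API (the
network name, data center name, and VLAN id 100 below are placeholders, not
values from this thread):

curl -k -u 'admin@internal:password' -H 'Content-Type: application/xml' \
  -d '<network><name>vm-traffic</name>
        <data_center><name>Default</name></data_center>
        <vlan id="100"/>
        <usages><usage>vm</usage></usages>
      </network>' \
  https://engine.example.com/ovirt-engine/api/networks

Since the network is tagged, the switch ports facing enp4s0f0 must be
configured as trunks carrying the same VLAN ids.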
9 years, 4 months
SPM
by Fernando Fuentes
Team,
I noticed that my SPM moved to another host, which was odd because I have
a set SPM.
Somehow when that happened, two of my hosts went down and all my VMs went
into a paused state.
The oddity behind all this is that my primary storage host, which has always
been my SPM, was online without any issues.
What could have caused that? And is there a way to prevent the SPM from
migrating unless there is an issue?
--
Fernando Fuentes
ffuentes(a)txweather.org
http://www.txweather.org
9 years, 4 months
oVirt hosted engine agent and broker duplicate logs to syslog
by Aleksey Chudov
Hi,
After upgrading from 3.6.0 to 3.6.1, the agent and broker duplicate their logs
to syslog. That is, the same messages are logged twice: to the files in the
/var/log/ovirt-hosted-engine-ha/ directory and to the /var/log/messages file.
The agent and broker configuration files remained the same across 3.5, 3.6.0
and 3.6.1, and there was no such log duplication in 3.5 or 3.6.0.
Is it a bug or expected behavior?
OS is CentOS 7.2
# rpm -qa 'ovirt*'
ovirt-vmconsole-1.0.0-1.el7.centos.noarch
ovirt-vmconsole-host-1.0.0-1.el7.centos.noarch
ovirt-host-deploy-1.4.1-1.el7.centos.noarch
ovirt-hosted-engine-ha-1.3.3.5-1.el7.centos.noarch
ovirt-engine-sdk-python-3.6.0.3-1.el7.centos.noarch
ovirt-release36-002-2.noarch
ovirt-setup-lib-1.0.0-1.el7.centos.noarch
ovirt-hosted-engine-setup-1.3.1.3-1.el7.centos.noarch
# cat /etc/ovirt-hosted-engine-ha/agent-log.conf
[loggers]
keys=root
[handlers]
keys=syslog,logfile
[formatters]
keys=long,sysform
[logger_root]
level=INFO
handlers=syslog,logfile
propagate=0
[handler_syslog]
level=ERROR
class=handlers.SysLogHandler
formatter=sysform
args=('/dev/log', handlers.SysLogHandler.LOG_USER)
[handler_logfile]
class=logging.handlers.TimedRotatingFileHandler
args=('/var/log/ovirt-hosted-engine-ha/agent.log', 'd', 1, 7)
level=DEBUG
formatter=long
[formatter_long]
format=%(threadName)s::%(levelname)s::%(asctime)s::%(module)s::%(lineno)d::%(name)s::(%(funcName)s)
%(message)s
[formatter_sysform]
format=ovirt-ha-agent %(name)s %(levelname)s %(message)s
datefmt=
Aleksey
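A hedged workaround sketch, assuming 3.6.1 still honors agent-log.conf as
shown above: drop the syslog handler so messages go to the log files only.
Note the stock config already caps the syslog handler at ERROR, so if INFO
lines are reaching /var/log/messages they may be arriving via the services'
stdout/journald instead, in which case this edit won't help:

sed -i.bak -e 's/^keys=syslog,logfile/keys=logfile/' \
           -e 's/^handlers=syslog,logfile/handlers=logfile/' \
    /etc/ovirt-hosted-engine-ha/agent-log.conf
systemctl restart ovirt-ha-agent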
9 years, 4 months
R: Re: R: Re: Network instability after upgrade 3.6.0 -> 3.6.1
by Stefano Danzi
Hi Dan,
I can't change the switch settings until next week.
I will post a message after further tests.

-------- Original message --------
From: Dan Kenigsberg <danken(a)redhat.com>
Date: 31/12/2015 09:44 (GMT+01:00)
To: Stefano Danzi <s.danzi(a)hawai.it>
Cc: Jon Archer <jon(a)rosslug.org.uk>, mburman(a)redhat.com, users(a)ovirt.org
Subject: Re: [ovirt-users] R: Re: Network instability after upgrade 3.6.0 ->
 3.6.1

I do not see anything suspicious here.

Which kernel version worked well for you?

Would it be possible to boot the machine with it, and retest bond mode
4, so that we can whole-heartedly place the blame on the kernel?
9 years, 4 months
Can I reduce the Java heap size of engine-backup???
by John Florian
I'm trying to run the engine-backup script via a Bacula job using the
RunScript option so that engine-backup dumps its output someplace
where Bacula will collect it once engine-backup finishes. However, the
job is failing, and with enough digging I eventually learned the script
was writing the following to /tmp/hs_err_pid5789.log:
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 2555904 bytes for
committing reserved memory.
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (os_linux.cpp:2627), pid=5789, tid=140709998221056
#
# JRE version: (8.0_65-b17) (build )
# Java VM: OpenJDK 64-Bit Server VM (25.65-b01 mixed mode linux-amd64
compressed oops)
# Failed to write core dump. Core dumps have been disabled. To enable
core dumping, try "ulimit -c unlimited" before starting Java again
#
So is there any good way to reduce the Java heap size? I mean, I know
what -Xmx does, but where might I try setting it, ideally so that it
affects engine-backup only? Any idea of a good setting for a very
small environment with a dozen VMs?
--
John Florian
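A hedged sketch, since engine-backup itself doesn't document a heap knob as
far as I know: HotSpot/OpenJDK JVMs honor the JAVA_TOOL_OPTIONS environment
variable, so setting it in the Bacula RunScript should reach whatever Java
process engine-backup spawns, without touching other services. The value
below is a starting point to tune, not a recommendation:

JAVA_TOOL_OPTIONS="-Xmx256m" engine-backup --mode=backup \
    --file=/tmp/engine.backup --log=/tmp/engine-backup.log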
9 years, 4 months
oVirt on Dell SC1435
by Michael Cooper
Hey Guys,
First-time poster here. I am having an issue with installing oVirt
on a Dell SC1435.
I ran lscpu to make sure virtualization was enabled in the BIOS; the
following was the result:
[root@council ~]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 30
Model name: Intel(R) Core(TM) i7 CPU K 875 @ 2.93GHz
Stepping: 5
CPU MHz: 1197.000
BogoMIPS: 5862.18
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 8192K
NUMA node0 CPU(s): 0-7
So I then tried to run the hosted-engine --deploy and this is what happens:
[root@starfleet tmpengineiso]# screen
[ INFO ] Stage: Initializing
[ INFO ] Generating a temporary VNC password.
[ INFO ] Stage: Environment setup
Continuing will configure this host for serving as hypervisor and
create a VM where you have to install oVirt Engine afterwards.
Are you sure you want to continue? (Yes, No)[Yes]: YEs
Configuration files: []
Log file:
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151230042443-p6o0qo.log
Version: otopi-1.4.0 (otopi-1.4.0-1.el7.centos)
[ ERROR ] Failed to execute stage 'Environment setup': Hardware does not
support virtualization
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20151230042447.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
I have attached my logfile and a few screenshots. Let me know please.
Thanks,
--
Michael A Cooper
Linux Certified
Zerto Certified
http://www.coopfire.com
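A hedged checklist for this error: note that the lscpu output above was taken
on host "council" while the failing deploy ran on "starfleet", and the SC1435
is (I believe) an AMD Opteron box, so a GenuineIntel/VT-x readout there would
be surprising. Verify the extensions on the machine that actually runs the
deploy:

grep -cE 'vmx|svm' /proc/cpuinfo   # must be > 0 (vmx = Intel VT-x, svm = AMD-V)
lsmod | grep kvm                   # kvm plus kvm_intel or kvm_amd should be loaded
virt-host-validate                 # shipped with libvirt on EL7; also checks /dev/kvm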
9 years, 4 months
host xxx did not satisfy internal filter Memory because its swap value was illegal.
by pc
[Sorry, this is my first time using the mailing list; reposting with the content converted from HTML to plain text.]
### Description ###
1. problem
1) migrating vm {name: xyz001, mem(min, max) = (2G,4G)} from ovirt host n33 to n34 failed.
2) shut down vm {name: test001, mem(min, max) = (1G,1G)} on n34, updated test001's config: Host->Start Running On: Specific(n34), then started test001; however, it is running on n33.
2. err message
Error while executing action: migrate
[engine gui]
xyz001:
Cannot migrate VM. There is no host that satisfies current scheduling constraints. See below for details:
The host n33.ovirt did not satisfy internal filter Memory because has availabe 1863 MB memory. Insufficient free memory to run the VM.
The host n34.ovirt did not satisfy internal filter Memory because its swap value was illegal.
[engine.log]
INFO [org.ovirt.engine.core.bll.MigrateVmCommand] (default task-23) [5916aa3b] Lock Acquired to object 'EngineLock:{exclusiveLocks='[73351885-9a92-4317-baaf-e4f2bed1171a=<VM, ACTION_TYPE_FAILED_VM_IS_BEING_MIGRATED$VmName test11>]', sharedLocks='null'}'
INFO [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-23) [5916aa3b] Candidate host 'n34' ('2ae3a219-ae9a-4347-b1e2-0e100360231e') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'Memory' (correlation id: null)
INFO [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-23) [5916aa3b] Candidate host 'n33' ('688aec34-5630-478e-ae5e-9d57990804e5') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'Memory' (correlation id: null)
WARN [org.ovirt.engine.core.bll.MigrateVmCommand] (default task-23) [5916aa3b] CanDoAction of action 'MigrateVm' failed for user admin@internal. Reasons: VAR__ACTION__MIGRATE,VAR__TYPE__VM,SCHEDULING_ALL_HOSTS_FILTERED_OUT,VAR__FILTERTYPE__INTERNAL,$hostName n33,$filterName Memory,$availableMem 1863,VAR__DETAIL__NOT_ENOUGH_MEMORY,SCHEDULING_HOST_FILTERED_REASON_WITH_DETAIL,VAR__FILTERTYPE__INTERNAL,$hostName n34,$filterName Memory,VAR__DETAIL__SWAP_VALUE_ILLEGAL,SCHEDULING_HOST_FILTERED_REASON_WITH_DETAIL
INFO [org.ovirt.engine.core.bll.MigrateVmCommand] (default task-23) [5916aa3b] Lock freed to object 'EngineLock:{exclusiveLocks='[73351885-9a92-4317-baaf-e4f2bed1171a=<VM, ACTION_TYPE_FAILED_VM_IS_BEING_MIGRATED$VmName test11>]', sharedLocks='null'}'
3. DC
Compatibility Version: 3.5
4. Cluster
Memory Optimization: For Server Load - Allow scheduling of 150% of physical memory
Memory Balloon: Enable Memory Balloon Optimization
Enable KSM: Share memory pages across all available memory (best KSM effectivness)
5. HOST
name: n33, n34
mem: 32G
6. VM
[n33] 11 vms
(min, max) = (2G,4G) = 8
(min, max) = (2G,8G) = 1
(min, max) = (2G,2G) = 2
total: 22G/44G
[n34] 7 vms
(min, max) = (0.5G,1G) = 1
(min, max) = (1G,2G) = 1
(min, max) = (2G,2G) = 1
(min, max) = (2G,4G) = 3
(min, max) = (8G,8G) = 1
total: 17.5G/25G
--------------------------------------------
(min, max) = (2G,4G) stands for:
Memory Size: 4G
Physical Memory Guaranteed: 2G
Memory Balloon Device Enabled: checked
--------------------------------------------
7. rpm version
[root@n33 ~]# rpm -qa |grep vdsm
vdsm-yajsonrpc-4.16.27-0.el6.noarch
vdsm-jsonrpc-4.16.27-0.el6.noarch
vdsm-cli-4.16.27-0.el6.noarch
vdsm-python-zombiereaper-4.16.27-0.el6.noarch
vdsm-xmlrpc-4.16.27-0.el6.noarch
vdsm-python-4.16.27-0.el6.noarch
vdsm-4.16.27-0.el6.x86_64
[root@engine ~]# rpm -qa |grep ovirt
ovirt-release36-001-2.noarch
ovirt-engine-setup-base-3.6.0.3-1.el6.noarch
ovirt-engine-setup-3.6.0.3-1.el6.noarch
ovirt-image-uploader-3.6.0-1.el6.noarch
ovirt-engine-wildfly-8.2.0-1.el6.x86_64
ovirt-engine-setup-plugin-vmconsole-proxy-helper-3.6.0.3-1.el6.noarch
ovirt-host-deploy-1.4.0-1.el6.noarch
ovirt-engine-backend-3.6.0.3-1.el6.noarch
ovirt-engine-webadmin-portal-3.6.0.3-1.el6.noarch
ovirt-engine-jboss-as-7.1.1-1.el6.x86_64
ovirt-engine-lib-3.6.0.3-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-3.6.0.3-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-3.6.0.3-1.el6.noarch
ovirt-engine-setup-plugin-websocket-proxy-3.6.0.3-1.el6.noarch
ovirt-engine-sdk-python-3.6.0.3-1.el6.noarch
ovirt-iso-uploader-3.6.0-1.el6.noarch
ovirt-vmconsole-proxy-1.0.0-1.el6.noarch
ovirt-engine-extensions-api-impl-3.6.0.3-1.el6.noarch
ovirt-engine-websocket-proxy-3.6.0.3-1.el6.noarch
ovirt-engine-vmconsole-proxy-helper-3.6.0.3-1.el6.noarch
ebay-cors-filter-1.0.1-0.1.ovirt.el6.noarch
ovirt-host-deploy-java-1.4.0-1.el6.noarch
ovirt-engine-tools-3.6.0.3-1.el6.noarch
ovirt-engine-restapi-3.6.0.3-1.el6.noarch
ovirt-engine-3.6.0.3-1.el6.noarch
ovirt-engine-extension-aaa-jdbc-1.0.1-1.el6.noarch
ovirt-engine-cli-3.6.0.1-1.el6.noarch
ovirt-vmconsole-1.0.0-1.el6.noarch
ovirt-engine-wildfly-overlay-001-2.el6.noarch
ovirt-engine-dbscripts-3.6.0.3-1.el6.noarch
ovirt-engine-userportal-3.6.0.3-1.el6.noarch
ovirt-guest-tools-iso-3.6.0-0.2_master.fc22.noarch
### DB ###
[root@engine ~]# su postgres
bash-4.1$ cd ~
bash-4.1$ psql engine
engine=# select vds_id, physical_mem_mb, mem_commited, vm_active, vm_count, reserved_mem, guest_overhead, transparent_hugepages_state, pending_vmem_size from vds_dynamic;
vds_id | physical_mem_mb | mem_commited | vm_active | vm_count | reserved_mem | guest_overhead | transparent_hugepages_state | pending_vmem_size
--------------------------------------+-----------------+--------------+-----------+----------+--------------+----------------+-----------------------------+-------------------
688aec34-5630-478e-ae5e-9d57990804e5 | 32057 | 45836 | 11 | 11 | 321 | 65 | 2 | 0
2ae3a219-ae9a-4347-b1e2-0e100360231e | 32057 | 26120 | 7 | 7 | 321 | 65 | 2 | 0
(2 rows)
### memory ###
[n33]
# free -m
total used free shared buffers cached
Mem: 32057 31770 287 0 41 6347
-/+ buffers/cache: 25381 6676
Swap: 29999 10025 19974
Physical Memory: 32057 MB total, 25646 MB used, 6411 MB free
Swap Size: 29999 MB total, 10025 MB used, 19974 MB free
Max free Memory for scheduling new VMs: 1928.5 MB
[n34]
# free -m
total used free shared buffers cached
Mem: 32057 31713 344 0 78 13074
-/+ buffers/cache: 18560 13497
Swap: 29999 5098 24901
Physical Memory: 32057 MB total, 18593 MB used, 13464 MB free
Swap Size: 29999 MB total, 5098 MB used, 24901 MB free
Max free Memory for scheduling new VMs: 21644.5 MB
### code ###
##from: https://github.com/oVirt/ovirt-engine
v3.6.0
##from: D:\code\java\ovirt-engine\backend\manager\modules\dal\src\main\resources\bundles\AppErrors.properties
VAR__DETAIL__SWAP_VALUE_ILLEGAL=$detailMessage its swap value was illegal
##from: D:\code\java\ovirt-engine\backend\manager\modules\bll\src\main\java\org\ovirt\engine\core\bll\scheduling\policyunits\MemoryPolicyUnit.java
#-----------code--------------1#
private boolean isVMSwapValueLegal(VDS host) {
if (!Config.<Boolean> getValue(ConfigValues.EnableSwapCheck)) {
return true;
}
(omitted..)
return ((swap_total - swap_free - mem_available) * 100 / physical_mem_mb) <= Config.<Integer> getValue(ConfigValues.BlockMigrationOnSwapUsagePercentage)
(omitted..)
}
#-----------code--------------1#
If EnableSwapCheck = False, the method returns True immediately, so we could simply disable this option? Any suggestions?
[root@engine ~]# engine-config --get BlockMigrationOnSwapUsagePercentage
BlockMigrationOnSwapUsagePercentage: 0 version: general
so,,
Config.<Integer> getValue(ConfigValues.BlockMigrationOnSwapUsagePercentage) = 0
so,,
(swap_total - swap_free - mem_available) * 100 / physical_mem_mb <= 0
so,,
swap_total - swap_free - mem_available <= 0
right?
so,, if (swap_total - swap_free) <= mem_available then return True else return False
#-----------code--------------2#
for (VDS vds : hosts) {
if (!isVMSwapValueLegal(vds)) {
log.debug("Host '{}' swap value is illegal", vds.getName());
messages.addMessage(vds.getId(), EngineMessage.VAR__DETAIL__SWAP_VALUE_ILLEGAL.toString());
continue;
}
if (!memoryChecker.evaluate(vds, vm)) {
int hostAavailableMem = SlaValidator.getInstance().getHostAvailableMemoryLimit(vds);
log.debug("Host '{}' has {} MB available. Insufficient memory to run the VM",
vds.getName(),
hostAavailableMem);
messages.addMessage(vds.getId(), String.format("$availableMem %1$d", hostAavailableMem));
messages.addMessage(vds.getId(), EngineMessage.VAR__DETAIL__NOT_ENOUGH_MEMORY.toString());
continue;
}
(omitted..)
}
#-----------code--------------2#
if !isVMSwapValueLegal, the host is filtered out with the VAR__DETAIL__SWAP_VALUE_ILLEGAL message and the loop continues (no exception is thrown), right?
so,, when we migrate vm from n33 to n34, the swap status on n34 actually is:
(swap_total - swap_free) > mem_available
swap_used > mem_available? confused...
so,, the logic is:
1) check n33: swap[passed], then memory[failed], then goto (for..continue..loop)
2) check n34: swap[failed], then goto (for..continue..loop)
If I have misunderstood anything, please let me know.
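To make the check concrete, here is a hedged sketch that replays the engine's
formula on a host. One assumption to flag loudly: the engine's mem_available
is the memAvailable figure reported by vdsm (which accounts for committed VM
memory), not the kernel's free memory, so it is read from vdsm rather than
from free:

# run on the host being evaluated (e.g. n34); values in MB
mem_available=$(vdsClient -s 0 getVdsStats | awk '/memAvailable/ {print $3}')
read -r swap_total swap_free < <(free -m | awk '/^Swap:/ {print $2, $4}')
phys_mb=$(free -m | awk '/^Mem:/ {print $2}')
score=$(( (swap_total - swap_free - mem_available) * 100 / phys_mb ))
echo "score=${score}; illegal if score > BlockMigrationOnSwapUsagePercentage"

With BlockMigrationOnSwapUsagePercentage=0 this reduces to the condition
derived above: the host passes only while swap_used <= mem_available.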
### conclusion ###
1) n33 does not have enough memory. [yes, I know that.]
2) n34's swap value is illegal [why, and how to solve it?]
3) what I tried:
--change config: BlockMigrationOnSwapUsagePercentage
[root@engine ~]# engine-config --set BlockMigrationOnSwapUsagePercentage=75 -cver general
[root@engine ~]# engine-config --get BlockMigrationOnSwapUsagePercentage
BlockMigrationOnSwapUsagePercentage: 75 version: general
Result: failed.
--disable EnableSwapCheck
How? The option is not listed by 'engine-config --list'; should I update the table field directly in the db? (See the sketch after this list.)
--disable the swap partition on the host (swapoff)?
Should I do this operation?
--update ovirt-engine?
No useful information found in the latest release notes; should I do this operation?
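A hedged sketch for the EnableSwapCheck question above: options missing from
'engine-config --list' generally live in the engine database's vdc_options
table. Whether flipping this particular option is supported is an assumption
on my part, so take an engine DB backup first:

su - postgres -c "psql engine -c \"UPDATE vdc_options SET option_value='false' WHERE option_name='EnableSwapCheck';\""
service ovirt-engine restart   # the engine reads vdc_options at startup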
### help ###
Any help would be appreciated.
ZYXW. Reference
http://www.ovirt.org/Sla/FreeMemoryCalculation
http://lists.ovirt.org/pipermail/users/2012-November/010858.html
http://lists.ovirt.org/pipermail/users/2013-March/013201.html
http://comments.gmane.org/gmane.comp.emulators.ovirt.user/19288
http://jim.rippon.me.uk/2013/07/ovirt-testing-english-instructions-for.html
9 years, 4 months
How to manually restore ovirt-engine from sql dump file?
by Arman Khalatyan
Hello,
Due to a HW error we lost the ovirt-engine DB.
We only have recent dump files from
/var/lib/ovirt-engine/backups/
Is it possible to restore the database using the dump files?
BTW, the VMs are still running without ovirt-engine.
Thanks,
Arman.
***********************************************************
Dr. Arman Khalatyan eScience -SuperComputing
Leibniz-Institut für Astrophysik Potsdam (AIP)
An der Sternwarte 16, 14482 Potsdam, Germany
***********************************************************
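A hedged sketch of the restore, assuming the files under
/var/lib/ovirt-engine/backups/ are plain-SQL pg_dump output from the engine's
nightly job (the <timestamp> below is a placeholder; check the actual file
names, and if they are pg_dump custom-format archives use pg_restore instead
of psql). Since the VMs keep running under vdsm/libvirt, they should not be
interrupted while the engine is down:

service ovirt-engine stop
su - postgres -c "dropdb engine && createdb -O engine engine"
su - postgres -c "psql engine -f /var/lib/ovirt-engine/backups/engine-<timestamp>.sql"
service ovirt-engine start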
9 years, 4 months
virt-viewer error
by 靳占冰
The virt-viewer-x64-3.0.msi version cannot connect to a virtual machine.
Error: Cannot find the specified file
When I select the Console button to connect to AD virtual machines, the
following error is reported: cannot find Application.
1. My client system is Windows 7.
2. I use the Firefox browser.
[Attachments: fhbgcgej.png (screenshot of the error) and an inline image of the Console button.]
JASQBAKBQCAQCCQhgCQQCAQCgUAgCQEkgUAgEAgEAkkIIAkEmmgNDd08K/N1aLtu3pzsolAU
RVGh/pCiW1Efrg/0B27evElRlOW6pf9m/2SXCwQCgSZZiUGSNX/jYxvzrewf7OsRiLct3iJr
/sbHHsuqxRZyf8VTbdZjnCQ2n1XLFrw2K/HiR+SlBzDJyAQ/NyHiCkeXKyIvLZVHRr1NwTbo
RRF5KRYA/l+Jl3Q0YcS2MrUqYiRq6+hc/P0aWUNweHgyi3Fj6MY+y7451XOIcoIoJ+48f+dO
005dj25D2wZXnyvBjRCvE5JO6MPRzvoRXmaSObulJTqtxkqC0zPaCTXCs5iUoU+NwUUAU+Jn
fYL7FVwKDhw4ILzeJFk86U8z0RiZSNnYFG9URbLmb6T7s4R7QYESOxewrnOkO4qihE7hZE9Y
UVhGySMJKHFIGrtC1WbxMQaHmI0bsb2wfyS0zajrMttn/0oKkriWTcrG7IQZvXCaYQo2JtdH
4VFG5KUHZCTaeGkphmPsHwkVdrRh5NEbKZtoTgoGg1ardWBgYKw22N0z+Nha2YNPV5vt18dq
m8nq+tD1LYotRDnxz1X//KPmH2W0Ztwnu48oJ/6x4h9nVs703vAmuB3idaKFasFNHEgGkkZ8
1ktvbTQXJ6nTaqwkhqQRn1BxNz5WSryQSRZgbMobbSuj3DopG/vLS3JFwlMJwv4yiW3E/Rze
N0qkL0anBAuQxD6lwjKFIGljfj6NfFihWAhFcRa8ExWq+EvYv9Be8tE7tVlZ+fmjgySaOTdm
ZXEQlpW1USrZFE38ls2dO+xXDfR3RF5aKpOVou8dpLyU+wLCfnESf83jXgu2lnzBIhEygkGS
YKdURF4qk8kEu4i6U8Gmmb8i8tJSuVzOlFgml48QkpgwcguxcovLKbUFriokPyLOBYwgwpgG
BgZUKpVarfb7/TfHaIQs3DnwxFrZN75dnr3f0Ns3lPgHO77/RMvdRGvajEtzZlyYM+PLe2ac
uGfGkXtmfHrXjEP3/5P7bFnim/rA8gFRTjxZ92RNsAYtkQVld1fdTZQTMypn+G74EtwOgiTC
SXBODpKkznr2skBfZJg3uLOY/gu7BDF/8T6YzBdkydNK3KBG1uokIEnqhGJXoz/OvMFdZOi/
sI0zf6GMb+IXgcQCInnWS8SE3S9aEvOkE35vSuRyii2X2KlkNOIVQ1oiSCJleByjFEYy5vQ6
paV4keJK0ENaa61WYcsXN++RnAuShGHN35iVlRVr4/zOlOvo+UM5EutwKzGAk0x3HCcsMUsr
OiLhhSV6lJKAJKuVu0oJCI5BFOzCtnHjRhFT4dtjy8FxsjV/48b82vysfCtKwtUmB0mC0TZ2
H2wGnz2KUWaSsIVcf8++ha/Du2AIPoud34KtJSLJVWPslFsiTOmLt8Stjn+bjshLS+WkXCaP
oMwxOVJIihkKQTmjHm+Mj/CKfEBGjjDCjAYHB9VqtUqlUiqVSqVyrDhJro7MfeLLbz5x5gev
XhkcTGKDg6GA9QdPDN1LDMwhQvcQ9q8Tmq8TTf8PUTf/Hu+pE1QyZbu3+t451XMaI43oz6Hh
offN76eUp6Cht2QzSaOBJImznh0cxy7zzMWfOYG5c1l0BZK4OiWgGKcV/vbIWp0UJEmcUPRq
7AaZtCme5JIsEv49LJGLQCKKU0hRAdgxpXgnnfjspxK6nHLfPbmQxo1G9GJICx9u4+cVuRy2
ZGEkYi6usgQk2WSlWz6z7kjPhVohO+Dds2hT+BJBR8ttWrwOvoTdcvJ5oDhhiV1a/hFJXlii
RCkZSKLHD7k/HntMgkzYaT9osDFKEPCyYvi3Md9am5VVS1nzs1goS0ii8OH75TWjZCHpgOh8
4Y1Ys98q+Ockc8nAaYM5gdisCSm9tUQU48Ij3qn4ghRnp/ilH/uWVCqPkDIZSUXkMnkkkhwk
iXcXixdFV5QYkBQdOrnPjiDCFEVRVH9/v06n02g0iJM6Ojo6OjoCgcDQUBK5H7E6u/u3vHn1
7kc/n/PY31f/4vKNgeSoa8Dn1T7/WPgewvMNwvh1ov2rRNNDdwdOnxxOslREOfF84/Psn8PU
sK3PpunWGK8ZTddMQ8OJbo2GpAMEz8lAksRZLzpbRZ0B83H8CiTMP/G/2sZVtI6Ua8CjaHVS
kCRxQgk7/uiMJVmkxC8CiSjGWS8uAJ5UiXfS8cAmwcspVnD6Jb6CZDTG8OqK9sFuQlyYWDEf
I0jCuzRx8x7VuYBWEkAGtstoG2dfsivgEwwF28GOS/p0HlVYEilt9BRP9CglB0mUNX9jVr74
gAVr0l8G6e+F0jFAn+ZNF2IJi06ATQ1I4k7lmLmXhCEJvRL8mWBxpAomWpggJMXcKXofTylx
VwI65Z4kJInDKF4Y4zouXCBOicXsrkY6ymY0GvV6PctJSqVSoVAgTkoqnzQ4OITWHxgYCoRu
ZP7HpX9aUJj2Lyf/eUnJXz/T3ehPOjU12NPdsnKZ9RuE9i5CPj818PeSEeS3iHLiucbnhinh
1PG6cN0x1zHPDU+i2xnlxG3Jsz4RSOIG9UVfTJNIH2GSPK247EFC+WDuU7GHjKOdUIlAUowi
jQckiQsZpQCkKKcTP9KJX06jQVKMaIzs3I/2FRQHrXGFpGijYMIuDdNYnAsc7ojhJnoGiFdY
hk74p+oYQVKcsCRS2piQFCVKSUISzVvseKJgZItedSN7mcvKijp3nR5KFE9OsuZnbWTG9UYx
J4mXYRuL4TYWGaJlfSmps1qUjacoUlYqk/G+kPG3lmjJcGrDvoFJZn3F1/TYO42g0XTstGYv
S7JSZorEiIbbeGHkxzPWdRw/XuwSLfERPs6WSmXpE9DAwIBGo0GpI7lc3t7e3tra2tLS0tzc
3NTU1NLS4vV6E+eSnp7+A/nKg4Wq7e/VP/Jk4V0PHPzGoiP/7/wjC58ui3SOcDL4YFfnpZXf
VixOC5UdH9kW7q2+N/VC6hn/GXyhslt5z8V7iHJC061JdoNJsBGraGd9jOE2cVdRmyX8yih9
dYorqdMKz6tGbagJtDppSBKdUFyXyyVDJCApWpESvwgkGA/JQkYtgHikSfqk4xUnicup1Jqx
ozGCq6sEJDEfZ6tFsjBRY54kJPGG1pjGjbf8GKNdSZ0L2Nxh7t0YcINvnL9NiSk0kmgymuG2
OGGJWVphtinGOL4wSklDkqicUgOaWPovegSs+Rv5WS0h5CUJSZjY2Vooc7ZRcAuAxPPv/JbN
Td+LNiVT9CLCpIHxk47r47m/RSvFF5dKlkAgfHuS18c4O43wcsv8jTP/j2xOEhdGuggsjcW5
jrOHxZ8tK/4It6Io+Z9wkYeHhweiq7+/P9lfutU3eR759sF/SN2bkvrhjHvzUh/89KnVZXpz
V1IbEajX5/NVn09qHhKuPFseUU4su7zsjP/MjaEbFEVdjVx9puEZopz4g+4PgzcHk93gKCBJ
dNYLBwuysiS+erITKrKy8OmsglGG5FJKotOKXSKTiXglmVYnDUmiEwrnLmavIsYSFQktKZWY
uB3rIhBXUc/66DHh0V3Uk07yaOJcTulvbuI1o0cjXjGkxR+kk5ER/NYH9NVbXJhYMecXKSFx
XZp4NIn3vmDWcZLnAjcwJbEj0Ygdu3GrxHRpYYmlBsJEg3LJdMdxwhK7tKIgCi4s0aMEN5ME
gSZInZ19ZypN+z5q/uuB1vIqa6Szb3LLMzA8sFmxmSgnZp+f/d367z7f+Pw3qr5BlBM/aPxB
qD+U1KZGONaWhJL7yjmNlFwKYjprSkViShVm4jV9Tsf4RwKQBALdvuob6jvoOLjw0kL0i7Z7
q+/90Pph52DnZJeLleiL4+0icULrtteU4pIpVZiJ1y0PSUlcWACSQCAQCAQCgSQEkAQCgUAg
EAgkIYAkEAgEAoFAIAkBJIFAIBAIBAJJCCAJBAKBQCAQSEIASSAQCAQCgUASAkgCgUAgEAgE
khBAEggEAoFAIJCEAJJAIBAIBAKBJASQBAKBQCAQCCQhgCQQCAQCgUAgCQEkgUAgEAgEAkmI
yMnJcYR6wWAwGAwGg8G4iZycnM5uagyt1drGdoMTsGXwxMQZanBiDHGGOE8nQ5whzpNogCTw
xMUZanBiDHGGOE8nQ5whzpNogCTwxMUZanBiDHGGOE8nQ5whzpNogCTwxMUZanBiDHGGOE8n
Q5whzpNogCTwxMUZanBiDHGGOE8nQ5whzpNogCTwxMUZanBiDHGGOE8nQ5whzpNogCTwxMUZ
anBiDHGGOE8nQ5whzpNogCRwEnE26hrqL/6HvD5b0fCeouEvjHM6Gt9rkv2+vX5/pOvmyLYM
HkNDnCHO08kQZ4jzJBogCZxEnG3W1qD9k+HrZ6m+KurGJaq/mRpUUINKalB5s6fUq39dJ//Y
4w2OYMvgMTTEGeI8nQxxhjhPogGSwEnE2WZt85MH+yMn+iPH+sMnB7ov3rzRMTxgGh509EX+
3m39adj6lqrpXa/Xn+yWwWNoiDPEeToZ4gxxnkTf4pAkz1uZmpr6WmVnN9V56o3U1FXvyyc/
pre6Y0GSpdVPHuh27+3x7O32/LXH/1lPoPCav7AvfHrwenOXe3/EvtXU9qZWczWpGnztFDXy
GpTnrUQNABwjzqfeSKXjXPlaKqaVee2SIY32Fm9T8RZOd0e9IrHR/cDU2c2PORtV/pp06OCC
lmCcu6nObtP7XASZQOHXk8QN15BocZZspaOKoVStTe3KmhaQlPpGgYO+pmTLKHd44sI3LR0T
klr81rz+yLH+8NH+yLH+rjNDvQ1DN9oG+1oGexsGe+uHrlW6Tf+l1zYlVYO/KqAc/pHVoOn9
lampmZWkY/LjNnUcDZK2FFAOf+VrqanfzTaZzZRZlrciNXXFLlOQv3L7B6tSn1m1InVVdi0V
FG+f21S8hdPdMdqz2Uyb9FS+lpqaujLvipkym6mCLampK/Ka+WvKslelpq7a3QQXtITjjE78
lXl1JBfAd1v515NEtw/XkOhxlmylI49hlFpLopCTUFnTAZJWrFi1pYDyYdeUL9gvbuh7G/p6
8dob7ML3mRWYr3r0VT4+L98GjglJTV7TX3p9+695//O6N++675Pr/k97g4d7g/m9wfzewKe9
gU+chvfHAJKkqqP9g1X8hdiXku9GSXvclk4EkkgPRWc4tlSSHnzlytdSU595tzJ7ReqKbBPb
Q7PBR+cR4iGphZWvpaaufO0N1Nl/0Z1gVUZdOJUdpz1zkV+1W0aZSSrYTXXK855OTd1SSAV5
a1b+JjV1RbbJ3Sp9QZv0I51ycT71Rmrqqndrmah2U0EPZTZTKIC/KqAcTVxK6YvXmKaYyDVE
ormKmvQ0dbz2zLRSYQ8rjKGw/41ba5IXfHneSi7sKL06OZU1HSDpV9l5K35V6Shhrimlb6Su
zGt0UOaCN1LRt2H0/WxLpcOB6DV1xS6Tm379RqGHoagSymymPxWFl28Lx6hB0tLoNe3uDx3q
j5QMdJ7s7zzZ3/l5P/2ipD9S3B857NTv1msak6pBTKuyZfSVbmWOiamO1H8todd85l0T+3X8
ip/q7Da9uyI1dUul2cydeOBEIUmetxJllRyCNVftllG17zJnB/Px3xQzuRC0qRKphf7K11JT
U1fk1Too0kyZZXlPi88syaqUrt8p7Xjt+Y0v6F75jQIzyzqVr6Wmrsg2oV6c6X5M769MTd1S
SdZLXdAAkkRxbv9glXRw5DxIQuFFkFRgpnxxryG1ec9IdAT8Jm2mfJMdkAmLMx+SmFZaLOph
8RgWit9NqNYkrxKpWypJB1WbvSo1dVX2lcmprGkBSQVU0a9X7dqNXVOa8lYwF6otBdzXC7OD
6y1ID1dnjdjXC+5Tt+u1KRYkmeu9xq3dzj91O/7c7dza7dzG/Lut27mty/n2NdfbDs3bek1D
sjXINvRsGVX3l1WpqW8Umik227Ei2+Twm95/hvvS4HMgKjK9vzI19VeV5ts+VR4nznxI4pRZ
SdJxpv3Fa6mpT+fJzJS7Ne+Z1NQtBZQP6+mZhF/qlgLqSo7EQnr7WyoRs7ZLn1lRqlJi4eQH
M7k4i4bbGpnQMRdrBlLrRd3PryrNV6Jc0Cb7SKdanEcISdJtjLuGRGmuvCY96dGYyDhLQBK6
2Ap62DD/Oix8N06tSYcdZVXRVzguqzoJlTVNIMlR8sbT312FdbGr3m+lyALmwo1/aWPCzQwW
cB/JlnGXNrP59ppdkWANkuarHt3/7vW92xf4oC+wT+QPbvjfdaj/Xa+pS7oGsapBkMT0K/jw
EOUmK7fQZwRKpQIkJRZnyTlJyLzQ8REqNTV1SyVzpjA1IoIkn+T2HVQndlkUn1miqoy6cMo6
8eG2bBnlk+etXJnXzmTv+JBU+Vpq6tO7TCwk4Rc0gKQY7TlJSKI6Y15DojRXXpOexo473IZa
qWy3qIf1C2PIfzdOrUmHvYmGJNITA5ImorKmCyTR349XZcuo45n0KVH3l1UxIMkdxsC2Ne8Z
ZrQV9QdT/wI9fo4FSabLbu1ve+y/67a90W17U+Q3ehyv25W/1Sf56zZh1TDVwfS+q3bLKNSp
oDlkDibn5ANISjDO0nOShEZnxG7mulOwJZWeRCznskpowgE6rSQWCrYvlzqz5FJVKblwsiOZ
dJwlJg7T+f+6MPVFJuqbV+1i2jPe8nfzFnIXNIAk6YnbT3NjsnTHLBdCEp2BXpmKD7fFuoZI
NtfuWKfMdHIi1+fdMqrw16IeFoMkFkmxd+PXWrSrRAxImrDKmjaQhOqG7mKfpjOqb/xGPPwv
CUlhqrOUnQK2Krd+Og88j6YGSdMlt3rLNUfmNcfvrjneuOZ4ne/fXXNk2ju26DWXR1CDvInb
XHXQo87uMNX+Fy6/imYscVMIYeJ27DgnBEmm91cyY21h7lOIVtng8yZuSywUbV/qzJKsSsmF
kx7M5OIs/euqyn/lZefeKGCmGAsbeavUBW1qB2Fy4txN0dNT+E2LF0AGSVeykBSl4fGuIRLN
9XaHJHErFfawfiyGcol349SaZP8rDUmTUFm3OCR1U51himQS+D4HE/EwPWnL7KAcaMoFtlqn
n2Kvv9zsemY58u18YYpRg1ZTjUv1csSYETKsCxkyMKM/10WM68n2n+nUl0ZWg3jV4NXBnmao
vpgfV9MLUb1P+xkDo61BPxdJB38eEm43yY+knws1G3yHg9uU5ELh9qXOLMmqlFw4ZR2/PWN2
sIMCBW+kbqn0dWOXKbyRS17QJvtIp2Kcu6lO1FwFTUsUQLOZIknue2/8a4hUc41xykwbx2jP
kq2U18PiMZR6N06tSYY9THEzJrGuYYIr69aHJPAExpk0yVwdGd3Wl7rJn3STPxX5J9dsL5Ht
63Xq2mS3DB5DQ5ynepz9FAwQT0ScwRDnURsgCZxEnEljtVPxQsTwvbBe2p3G71lbX9Cpa5Ld
MngMDXGGOE8nQ5whzpNogCRwEnG2map86udu2KO63/GcS7FKr5Ylu2XwGBriDHGeToY4Q5wn
0QBJ4CTibNBUXz33vfbq59ovRnH1c5fPfF+thOG2yTTEGeI8nQxxhjhPogGSwBMXZ6jBiTHE
GeI8nQxxhjhPogGSwBMXZ6jBiTHEGeI8nQxxhjhPogGSwBMXZ6jBiTHEGeI8nQxxhjhPogGS
wBMXZ6jBiTHEGeI8nQxxhjhPogGSwBMXZ6jBiTHEGeI8nQxxhjhPogGSwBMXZ6jBiTHEGeI8
nQxxhjhPoomcnBwqYV26Z0niK0fT3r0HWefmlVVd0Iqd9EZPbSaW56q4v1W5y4nNp0ZfWBAI
BAJNNZ3aTPAluv4zYvoBrk/A3l2em7uZ/Sy7fPMptkPheha0R95ubgNBnJOEJBAIBAKBQKDb
RABJIBAIBAKBQBICSAKBQCAQCASSEEASCAQCgUAgkIQAkkAgEAgEAoEkBJAEAoFAIBAIJCGA
JBAIBAKBQCAJETk5OaFwJxgMBoPBYDAYN5GTk9PTNwgGg8FgMBgMxg2QBAaDwWAwGCxhgCQw
GAwGg8FgCQMkgcFgMBgMBksYIAkMBoPBYDBYwpMPSbl5ZVVVWtyTHhQwGAwGg8HgSYYkMSEB
J4HBYDAYDJ4KnjhIKiqqieFNv/5Dbl5Zbl6ZQuEBSAKDwWAwGDzpniBIKiqqiX1TS5aTAJLA
YDAYDAZPBU8oJLF5o9zcstzcst//vgB5DCCp9b1Hl73XxC0pfYWg9egexaRHGQwGg8FjZ+4K
T4t3/Y/5kY2lSeyoZBO9ZWEXc5sY4jwtIKlpzzJB5RVvZGuo9BWCeKVk8gMNBoPB4DFy6SvE
sl2t3JKmPcvi9N+t7z1KbCpOdkds582+uL0McY4JSbI5SzT3Pdx+/92WuXOU876hmTdHPy+t
9eG7TXO/rpz3kPa+u81z7z2XtjiR3eCQJCCkUUISXWe8yCp2LePqtXgjJJPAYDB4OlnYeff0
KXYtYy/1il3LmMwH+rbc+t6jTCZkV+tgT8kmcWqkac8yLvkh6LO5jyff/d/ahjjHyyTVfHMp
NX8e9Ugateh+asF8avF91CNpQ4vmUovnUguWXLx3foK7QZCE8GhsIUkY677BHsgkgcFg8HS2
uPPmel/s+o+9ZjMcre89yn2W6yBidd59UzHDAXGeGMcfbpOlPTSw+EFqYdrwgjRq0f1DSx6g
FtxPPTI3cULq4UMSjkfjBEnYSKqwgsFgMBh8i1ui8+4p2URsLBW+xeuzJfITxRtv1c4b4jwx
TmhOUsOcRdSCB6nFc6n591FLF1MPL738zYVJ7QZBkhiPxgeS8OyRYteyJGeQgcFgMHhKO3qG
gxuyYSXReRdv5N5Gg0e3XOcNcZ4YJzpxW3bvfGrxg9SitBuPpF2Z91Cyuxn3WwDgkRVEeeoF
HQwGg8GjsPRcmVdK0FtSM1p4qQ5uDgY7aRXvvLnpyVO484Y4T4yT+HVb3Zx51MKHqueljWxP
sW8muXfvQYXCg1x1YQwzSbxxUzAYDAbf+hZ23sUbuanB+DW/ac8yYYYD7yxa33uUvU0Mt7z0
FXai8RTuvCHOE+PkbgFw6Z4lo9/l3r0HWefmlVVd0Iqd9GYFkcUzgUBIYDAYPK0c+/492K+u
2GwHNwyEvbvsvV1cr88u31Qs0WejPd5uk1whzpP97DYwGAwGg8HgqWmAJDAYDAaDwWAJAySB
wWAwGAwGSxggCQwGg8FgMFjCAElgMBgMBoPBEgZIAoPBYDAYDJYwQBIYDAaDwWCwhImcnJxQ
uBMMBoPBYDAYjJvIycmJ/cAQEAgEAoFAoNtQAEkgEAgEAoFAEgJIAoFAIBAIBJIQQBIIBAKB
QCCQhACSQCAQCAQCgSQ0+ZCUm1dWVaXFPbnlAYFAIBAIBKImHZLEhAScBAKBQCAQaCpo4iCp
qKgmhjf9+g+5eWW5eWUKhQcgCQQCgUAg0KRrgiCpqKgm7gqIkwCSQKCkNAwCgUDjr8m+1E2O
JhSS2LxRbm5Zbm7Z739fgEyNHpJUucuX56q4v09tJmhtPjVmRwECTSnh16+bN2/evHlzaGho
ENMACDQ9Vfoqwdeyd+UJfuTV0uT2g7Ysf3dZ/F1MB+EXkKGhLzYJ4rw8RxXnssR0vkl1vac2
E6gHF3blk6/pAEmq3OUEQRBcZE9t5ipIlbscOAk03STAI5aN+vv7+/v7+/r6ekGg6aySjcSj
u1u5v1t3P8pfIFLr7keJjSUj3o9wh9NcfX19/f39/f2fbyKWvacYHBoaunnz5vDwsDJnObE8
RxkjqzSyLpeFJPbFlFEsSJLNWaK57+H2+++2zJ2jnPcNzbw5+nlprQ/fbZr7deW8h7T33W2e
e++5tMWJ7AaHJAEhjRKSVLnLieW5KjyyfBQFSgJNMwmyR4iNEBhdu3atp6enu7u7q6urq6ur
EwSanjr2c2JpdgO+pCF7KbuoIXspk/n4+TH+gqXZDZ2dx37OpUbwz6CV8c2jF9zH2TWmrdCl
o7u7u6en59q14o3Eo3ta+/r7+wcHB2/evHnzZgeDScNsgoLLG3ELlueq8BEdLovB65EFbMR9
fAr12XEySTXfXErNn0c9kkYtup9aMJ9afB/1SNrQornU4rnUgiUX752f4G4QJCE8GltIooVD
Eh9FaYpKeosg0FSUIIE0MDDQ19d3/fr1np6ezs7OcDgcDAYDgYCPkRcEmobK/ymx5J1a3qLa
d5YQP81H76EXvNfs27XvLOE+K/E+b/PiF9Na7HUjEAgEg8Fw+OjPiaW7mnquX7/e19c3MDAw
NDTUkbOc2Fw2PDxchg3ZcMM3LATxul6J9+nFggTSrZVJQpKlPTSw+EFqYdrwgjRq0f1DSx6g
FtxPPTI3cUKi+JCE49G4QBJ1ajOHrac280biQKBbWywhsQkkhEfBYNDj8TgcDpIkLRaLyWQy
gkDTVh9vIBb+n3OiZRs+Fr517v8sJDZ8jL+Q/JDgfXYb4he3gUwmk9VqtdvtLtehnxJLtl8N
dXZ29vT0oDG4wdJNxKay4eEyUTaCZSOJPNCpzdMXkiiKapiziFrwILV4LjX/PmrpYurhpZe/
uTCp3SBIEuPR+EASnvTbfGrqBR0EGrEQJA0NDfX39/f29nZ3d4dCIY/HY7VadTqdQqFobW1t
ampqaGioZ1RXV1cPAk0r5a4mHn69iLeo6PWHidW59UWvPyyYa0yszuXe5j7PCm2I9z67efGL
20BNTU2tra0dHR1abd6LxKI/Vzv9fn84HFn1Tu0AACAASURBVO7p6ent7W1/dxmx6YthZc5y
YZwlIEk84DY9IYmiKNm986nFD1KL0m48knZl3kPJ7mbcbwEQPbIwJwk0bcSmkdAoW3d3dyAQ
cLlcLpertbW1vLz8pZ+9OgL/+OVNYDB4ZB7ZSTfFfeTIkZaWlubmveuIBX84o3e73aFQqKen
p7e39d1lxKYvhoaHyzZL9qy8lJIEDuE9MjciNw0giaKoujnzqIUPVc9LG9meYt9Mcu/egwqF
B7nqwuggCa8bmJEEmi7CB9rQKFsoFHI6nSaTqaSk5KWXX3154y/OlJ8Pd10Pd10Pd8dzF+dQ
17VQ17VQJ+0g3yGxIz1JORjuHpkDCblL2qGxc7grwJQnGO4ORrpDkZ5QZ08oIg7OddpdMd05
YktVx63pYKSHdcw67fZHrZRoTrDl0B5xO8QPYcTuud736aGCYKQn0nU9IXf3jpODndc+PXyk
p/fGmfLzL2/8xUsvv3r48JuriYd/Xyq3WCwejyccDhe/QhDL9igGB2/evFm2mSA2l6GrE9fr
shAk6pTpv7jl2GSY6QFJFEVdumfJ6He5d+9B1rl5ZVUXtGInvdGow22QRQJNE+EDbdevX+/s
7PR4PBaLpaSk5IU1GWcrLrj8XWZnUGv10SZjWWP18u1BVlu8aotXbfaozR6V2S1tk0vaxqhW
9 years, 4 months
oVirt 3.6.1 ISO storage domain with all-in-one local disk install not working as expected
by Matthew Bohnsack
Hello,
I installed an all-in-one oVirt 3.6.1 system on CentOS 7.2 with local disk
configured for images and ISOs. However, the engine web GUI doesn't
mention the ISO_DOMAIN, and I can't seem to select an uploaded ISO and
attach it to a VM. Any idea what I'm doing wrong here?
Thanks,
-Matthew
Additional detail...
I ran engine-setup with the following answers, where all paths are on local disk:
OVESETUP_CONFIG/isoDomainName=str:ISO_DOMAIN
OVESETUP_CONFIG/isoDomainACL=str:*(rw)
OVESETUP_CONFIG/isoDomainMountPoint=str:/home/ovirt-localdisk/iso
OVESETUP_AIO/storageDomainDir=str:/home/ovirt-localdisk/images/
With this, I was able to upload an image like so:
[root@host ~]# engine-iso-uploader upload -i ISO_DOMAIN
./CentOS-7-x86_64-Minimal-1511.iso
Please provide the REST API password for the admin@internal oVirt Engine
user (CTRL+D to abort):
Uploading, please wait...
INFO: Start uploading ./CentOS-7-x86_64-Minimal-1511.iso
Uploading: [########################################] 100%
INFO: ./CentOS-7-x86_64-Minimal-1511.iso uploaded successfully
After this, the local filesystem had my image:
[root@host ~]# find /home/ovirt-localdisk/iso/ -iname \*.iso
/home/ovirt-localdisk/iso/d33449b5-f660-4e90-abd4-5d55f9c27b85/images/11111111-1111-1111-1111-111111111111/CentOS-7-x86_64-Minimal-1511.iso
But this seems to say something's wrong:
[root@host ~]# engine-iso-uploader list
Please provide the REST API password for the admin@internal oVirt Engine
user (CTRL+D to abort):
ERROR: There are no ISO storage domains.
Further, the engine web GUI makes no mention of the ISO storage domain, nor
does CentOS-7-x86_64-Minimal-1511.iso show up as an option to attach as a
CD-ROM to guests.
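One unconfirmed explanation for both symptoms: the ISO_DOMAIN that engine-setup
creates is not attached to any data center by default, and an unattached domain
is neither listed by engine-iso-uploader nor offered as a CD-ROM source. A
minimal sketch of attaching it through the REST API, where ENGINE_FQDN,
PASSWORD and DC_ID are placeholders and not values from this setup:
# attach the existing ISO_DOMAIN to the data center (activate it afterwards in the GUI)
curl -k -u admin@internal:PASSWORD \
    -H "Content-Type: application/xml" \
    -d '<storage_domain><name>ISO_DOMAIN</name></storage_domain>' \
    https://ENGINE_FQDN/ovirt-engine/api/datacenters/DC_ID/storagedomains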
9 years, 4 months
Re: [ovirt-users] Self-hosted engine: Host cannot activate - network issues
by Yedidyah Bar David
On Tue, Dec 29, 2015 at 4:15 AM, Alan Murrell <lists(a)murrell.ca> wrote:
> I am attempting to install oVirt 3.6 on a CentOS 7 host. It is a single
> server, so I am trying to install a self-hosted engine. I am able to
> install oVirt 3.5 self-hosted engine, but am having some problems doing it
> with 3.6.
>
> Everything appears to go fine; the engine-setup completes successfully. On
> the host, I pressed "1" to indicate engine-setup was complete, and it can
> connect to the webadmin but the host never becomes operational.
>
> I am able to log in to the webadmin, and indeed the host is in a
> non-operational state. When I click on the host then click on the "Virtual
> networks" tab, all my interfaces are showing a red arrow.
>
> When I click on "Setup Host Networks", my "ovirtmgmt" network is unassigned.
> Believing this to be why my host is not operational (since it is under
> "Required"), I assigned it to my management interface, but the interfaces
> still remain red. In addition, the "ovirtmgmt" network does have a green
> arrow, but also the icon indicating "Out of sync". If I click on "Sync All
> Networks", I get the following error:
>
> "Error while executing action SyncAllHostNetworks: Network is currently
> being used"
>
> which makes sense, since "ovirtmgmt" is assigned/in use. I am unable to
> unassign ovirtmgmt.
>
> Current status is this:
>
> - All network interfaces are showing the red "down" arrow
> - "ovirtmgmt" is assigned to my management NIC, but indicating that it is
> out of sync
> - I am unable to unassign the "ovirtmgmt" network (or at least, if I do,
> the "OK" button is greyed out)
>
> I could destroy the engine VM and go through the setup again, with the idea
> of once it is installed and waiting for the host to become operational, I
> could do a "Sync All Networks" and see if the networks turn green and go
> from there.
>
> I wanted to see what sort of insight I could get from here first. I am not
> sure what logs would be useful to you, so let me know what you want to see
> and I can make them available (I will likely zip them up and post a link)
from host:
/var/log/ovirt-hosted-engine-*/* (-setup and -ha)
/var/log/vdsm/*
from engine:
/var/log/ovirt-engine/*
/var/log/ovirt-engine/host-deploy/*
If the last one is empty, it means the engine did not manage to get
host-deploy logs. Please check and try to copy what you find from /tmp
of the host while waiting for host-deploy ("Waiting for the host to
become operational").
>
> Something else I could try if this becomes a "puzzler" is to do a 3.5.x
> self-hosted engine then perform an in-place upgrade to 3.6 and see if that
> works? I would rather try to help make a direct 3.6 install work, though...
Did you do essentially the same thing in 3.5 and 3.6?
In both cases just had a simple network interface (no vlan/bonding/whatever)
and input it when asked?
What exact versions (OS and ovirt)?
I also changed the subject.
Best,
--
Didi
9 years, 4 months
ovirt 3.6 centos7.2 all-in-one install: The VDSM host was found in a failed state.
by Matthew Bohnsack
Hello,
I am attempting to build a fairly basic proof-of-concept oVirt 3.6 machine
with the engine, host and guests all on a single physical CentOS 7.2 box
with local storage (to start), but am running into an issue where
engine-setup produces the following error:
[ ERROR ] The VDSM host was found in a failed state. Please check engine
and bootstrap installation logs.
The engine web console seems to be working fine after this - I can log in and
click around - but there are no host CPU/storage resources available, as
expected. From the logs shown below, there seems to be an issue with the
vdsm installation process being unable to contact the engine host during
installation. Any ideas what's going wrong and/or what I can do to debug
things further so I can move beyond this issue?
Thanks,
-Matthew
Steps I took to install and diagnose the problem:
1. Installed system with our configuration management system.
2. Deleted the pre-existing users occupying UID=36 (needed by the vdsm user)
and UID=108 (needed by the ovirt user).
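A sketch of commands that would do this, where 'olduser' is a hypothetical
stand-in for whatever account actually held each UID:
# see which accounts currently occupy the UIDs that vdsm (36) and ovirt (108) need
getent passwd 36 108
# remove the conflicting account(s) reported above
userdel olduser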
3. Ensured that /etc/sudoers contained the line "#includedir
/etc/sudoers.d" so that /etc/sudoers.d/50_vdsm will take effect.
4. Made directories for isos and images:
# mkdir /state/partition1/images/; chmod 777 /state/partition1/images/
# mkdir /state/partition1/iso/; chmod 777 /state/partition1/iso/
5. Ensured selinux was disabled and no firewall rules were installed.
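Quick checks that would confirm step 5 (a sketch):
getenforce        # should print Disabled (or Permissive)
iptables -L -n    # chains should be empty, with policy ACCEPT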
6. Installed RPMs:
# yum -y install http://resources.ovirt.org/pub/yum-repo/ovirt-release36.rpm
# yum -y install ovirt-engine ovirt-engine-setup-plugin-allinone
# rpm -qa | grep ovirt-release36
ovirt-release36-002-2.noarch
7. Installed engine with an all-in-one configuration (Configure VDSM on
this host? Yes):
# cat /root/ovirt-engine.ans
# action=setup
[environment:default]
OVESETUP_DIALOG/confirmSettings=bool:True
OVESETUP_CONFIG/applicationMode=str:virt
OVESETUP_CONFIG/remoteEngineSetupStyle=none:None
OVESETUP_CONFIG/sanWipeAfterDelete=bool:False
OVESETUP_CONFIG/storageIsLocal=bool:False
OVESETUP_CONFIG/firewallManager=none:None
OVESETUP_CONFIG/remoteEngineHostRootPassword=none:None
OVESETUP_CONFIG/firewallChangesReview=none:None
OVESETUP_CONFIG/updateFirewall=bool:False
OVESETUP_CONFIG/remoteEngineHostSshPort=none:None
OVESETUP_CONFIG/fqdn=<...host.fqdn...>
OVESETUP_CONFIG/storageType=none:None
OSETUP_RPMDISTRO/requireRollback=none:None
OSETUP_RPMDISTRO/enableUpgrade=none:None
OVESETUP_DB/secured=bool:False
OVESETUP_DB/host=str:localhost
OVESETUP_DB/user=str:engine
OVESETUP_DB/dumper=str:pg_custom
OVESETUP_DB/database=str:engine
OVESETUP_DB/fixDbViolations=none:None
OVESETUP_DB/port=int:5432
OVESETUP_DB/filter=none:None
OVESETUP_DB/restoreJobs=int:2
OVESETUP_DB/securedHostValidation=bool:False
OVESETUP_ENGINE_CORE/enable=bool:True
OVESETUP_CORE/engineStop=none:None
OVESETUP_SYSTEM/memCheckEnabled=bool:True
OVESETUP_SYSTEM/nfsConfigEnabled=bool:True
OVESETUP_PKI/organization=str:<...dn...>
OVESETUP_PKI/renew=none:None
OVESETUP_CONFIG/isoDomainName=str:ISO_DOMAIN
OVESETUP_CONFIG/engineHeapMax=str:7975M
OVESETUP_CONFIG/adminPassword=str:<...password...>
OVESETUP_CONFIG/isoDomainACL=str:*(rw)
OVESETUP_CONFIG/isoDomainMountPoint=str:/state/partition1/iso
OVESETUP_CONFIG/engineDbBackupDir=str:/var/lib/ovirt-engine/backups
OVESETUP_CONFIG/engineHeapMin=str:7975M
OVESETUP_AIO/configure=bool:True
OVESETUP_AIO/storageDomainName=str:local_storage
OVESETUP_AIO/storageDomainDir=str:/state/partition1/images/
OVESETUP_PROVISIONING/postgresProvisioningEnabled=bool:True
OVESETUP_APACHE/configureRootRedirection=bool:True
OVESETUP_APACHE/configureSsl=bool:True
OVESETUP_VMCONSOLE_PROXY_CONFIG/vmconsoleProxyConfig=bool:True
OVESETUP_ENGINE_CONFIG/fqdn=str:<...fqdn...>
OVESETUP_CONFIG/websocketProxyConfig=bool:True
# engine-setup --config-append=/root/ovirt-engine.ans
...
[ INFO ] Starting engine service
[ INFO ] Restarting httpd
[ INFO ] Waiting for VDSM host to become operational. This may take
several minutes...
[ ERROR ] The VDSM host was found in a failed state. Please check engine
and bootstrap installation logs.
[WARNING] Local storage domain not added because the VDSM host was not up.
Please add it manually.
[ INFO ] Restarting ovirt-vmconsole proxy service
[ INFO ] Stage: Clean up
Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20151228112813-iksxhe.log
[ INFO ] Generating answer file
'/var/lib/ovirt-engine/setup/answers/20151228113003-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ INFO ] Execution of setup completed successfully
8. Examined /var/log/ovirt-engine/setup/ovirt-engine-setup-20151228112813-iksxhe.log
and found this error message which seems to indicate that the vdsm
installation process was unable to contact the engine:
2015-12-28 11:29:29 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:941 execute-output: ('/bin/systemctl', 'start',
'httpd.service') stderr:
2015-12-28 11:29:29 DEBUG otopi.context context._executeMethod:142 Stage
closeup METHOD
otopi.plugins.ovirt_engine_setup.ovirt_engine.all-in-one.vdsmi.Plugin._closeup
2015-12-28 11:29:29 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.all-in-one.vdsmi
vdsmi._closeup:211 Connecting to the Engine
2015-12-28 11:29:29 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.all-in-one.vdsmi
vdsmi._waitEngineUp:103 Waiting Engine API response
2015-12-28 11:29:29 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.all-in-one.vdsmi
vdsmi._waitEngineUp:133 Cannot connect to engine
Traceback (most recent call last):
File
"/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/all-in-one/vdsmi.py",
line 127, in _waitEngineUp
insecure=True,
File "/usr/lib/python2.7/site-packages/ovirtsdk/api.py", line 191, in
__init__
url=''
File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py",
line 115, in request
persistent_auth=self.__persistent_auth
File
"/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py",
line 79, in do_request
persistent_auth)
File
"/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py",
line 155, in __do_request
raise errors.RequestError(response_code, response_reason, response_body)
RequestError: ^M
status: 503^M
reason: Service Unavailable^M
detail:
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>503 Service Unavailable</title>
</head><body>
<h1>Service Unavailable</h1>
<p>The server is temporarily unable to service your
request due to maintenance downtime or capacity
problems. Please try again later.</p>
</body></html>
2015-12-28 11:29:36 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.all-in-one.vdsmi
vdsmi._waitEngineUp:133 Cannot connect to engine
Traceback (most recent call last):
File
"/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/all-in-one/vdsmi.py",
line 127, in _waitEngineUp
insecure=True,
File "/usr/lib/python2.7/site-packages/ovirtsdk/api.py", line 191, in
__init__
url=''
File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py",
line 115, in request
persistent_auth=self.__persistent_auth
File
"/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py",
line 79, in do_request
persistent_auth)
File
"/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py",
line 155, in __do_request
raise errors.RequestError(response_code, response_reason, response_body)
RequestError: ^M
status: 404^M
reason: Not Found^M
detail:
...
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-type" content="text/html; charset=utf-8" />
<link id="id-link-favicon" rel="shortcut icon"
href="/ovirt-engine/theme-resource/favicon" type="image/x-icon" />
<title>404 - Page not found</title>
...
</html>
2015-12-28 11:29:50 DEBUG otopi.ovirt_engine_setup.engine_common.database
database.execute:171 Database: 'None', Statement: '
select version, option_value
from vdc_options
where option_name = %(name)s
', args: {'name': 'SupportedClusterLevels'}
2015-12-28 11:29:50 DEBUG otopi.ovirt_engine_setup.engine_common.database
database.execute:221 Result: [{'version': 'general', 'option_value':
'3.0,3.1,3.2,3.3,3.4,3.5,3.6'}]
2015-12-28 11:29:50 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.all-in-one.vdsmi
vdsmi._closeup:225 engine SupportedClusterLevels
[3.0,3.1,3.2,3.3,3.4,3.5,3.6], PACKAGE_VERSION [3.6.1.3],
2015-12-28 11:29:50 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.all-in-one.vdsmi
vdsmi._getSupportedClusterLevels:181 Attempting to load the dsaversion vdsm
module
2015-12-28 11:29:50 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.all-in-one.vdsmi
vdsmi._closeup:236 VDSM SupportedClusterLevels [['3.4', '3.5', '3.6']],
VDSM VERSION [4.17.13-0.el7.centos],
2015-12-28 11:29:50 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.all-in-one.vdsmi
vdsmi._closeup:259 Creating the local data center
2015-12-28 11:29:50 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.all-in-one.vdsmi
vdsmi._closeup:269 Creating the local cluster into the local data center
2015-12-28 11:29:52 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.all-in-one.vdsmi
vdsmi._closeup:284 Adding the local host to the local cluster
2015-12-28 11:29:55 INFO
otopi.plugins.ovirt_engine_setup.ovirt_engine.all-in-one.vdsmi
vdsmi._waitVDSMHostUp:58 Waiting for VDSM host to become operational. This
may take several minutes...
2015-12-28 11:29:55 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.all-in-one.vdsmi
vdsmi._waitVDSMHostUp:87 VDSM host in installing state
2015-12-28 11:29:56 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.all-in-one.vdsmi
vdsmi._waitVDSMHostUp:87 VDSM host in installing state
2015-12-28 11:29:58 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.all-in-one.vdsmi
vdsmi._waitVDSMHostUp:87 VDSM host in installing state
2015-12-28 11:29:59 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.all-in-one.vdsmi
vdsmi._waitVDSMHostUp:87 VDSM host in installing state
2015-12-28 11:30:00 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.all-in-one.vdsmi
vdsmi._waitVDSMHostUp:87 VDSM host in installing state
2015-12-28 11:30:01 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.all-in-one.vdsmi
vdsmi._waitVDSMHostUp:87 VDSM host in installing state
2015-12-28 11:30:02 ERROR
otopi.plugins.ovirt_engine_setup.ovirt_engine.all-in-one.vdsmi
vdsmi._waitVDSMHostUp:77 The VDSM host was found in a failed state. Please
check engine and bootstrap installation logs.
2015-12-28 11:30:02 WARNING
otopi.plugins.ovirt_engine_setup.ovirt_engine.all-in-one.vdsmi
vdsmi._closeup:306 Local storage domain not added because the VDSM host was
not up. Please add it manually.
2015-12-28 11:30:02 DEBUG otopi.context context._executeMethod:142 Stage
closeup METHOD
otopi.plugins.ovirt_engine_setup.vmconsole_proxy_helper.system.Plugin._closeup
9. Looked at the vdsm logs and examined service status:
# ls -l /var/log/vdsm/
total 8
drwxr-xr-x 2 vdsm kvm 6 Dec 9 03:24 backup
-rw-r--r-- 1 vdsm kvm 0 Dec 28 11:13 connectivity.log
-rw-r--r-- 1 vdsm kvm 0 Dec 28 11:13 mom.log
-rw-r--r-- 1 root root 2958 Dec 28 11:30 supervdsm.log
-rw-r--r-- 1 root root 1811 Dec 28 11:30 upgrade.log
-rw-r--r-- 1 vdsm kvm 0 Dec 28 11:13 vdsm.log
# cat /var/log/vdsm/upgrade.log
MainThread::DEBUG::2015-12-28
11:30:02,803::upgrade::90::upgrade::(apply_upgrade) Running upgrade
upgrade-unified-persistence
MainThread::DEBUG::2015-12-28
11:30:02,806::libvirtconnection::160::root::(get) trying to connect libvirt
MainThread::DEBUG::2015-12-28 11:30:02,813::utils::669::root::(execCmd)
/sbin/ip route show to 0.0.0.0/0 table main (cwd None)
MainThread::DEBUG::2015-12-28 11:30:02,826::utils::687::root::(execCmd)
SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2015-12-28
11:30:02,826::unified_persistence::46::root::(run)
upgrade-unified-persistence upgrade persisting networks {} and bondings {}
MainThread::INFO::2015-12-28
11:30:02,827::netconfpersistence::179::root::(_clearDisk) Clearing
/var/run/vdsm/netconf/nets/ and /var/run/vdsm/netconf/bonds/
MainThread::DEBUG::2015-12-28
11:30:02,827::netconfpersistence::187::root::(_clearDisk) No existent
config to clear.
MainThread::INFO::2015-12-28
11:30:02,827::netconfpersistence::179::root::(_clearDisk) Clearing
/var/run/vdsm/netconf/nets/ and /var/run/vdsm/netconf/bonds/
MainThread::DEBUG::2015-12-28
11:30:02,827::netconfpersistence::187::root::(_clearDisk) No existent
config to clear.
MainThread::INFO::2015-12-28
11:30:02,827::netconfpersistence::129::root::(save) Saved new config
RunningConfig({}, {}) to /var/run/vdsm/netconf/nets/ and
/var/run/vdsm/netconf/bonds/
MainThread::DEBUG::2015-12-28 11:30:02,827::utils::669::root::(execCmd)
/usr/share/vdsm/vdsm-store-net-config unified (cwd None)
MainThread::DEBUG::2015-12-28 11:30:02,836::utils::687::root::(execCmd)
SUCCESS: <err> = 'cp: cannot stat
\xe2\x80\x98/var/run/vdsm/netconf\xe2\x80\x99: No such file or
directory\n'; <rc> = 0
MainThread::DEBUG::2015-12-28
11:30:02,837::upgrade::51::upgrade::(_upgrade_seal) Upgrade
upgrade-unified-persistence successfully performed
# cat /var/log/vdsm/supervdsm.log
MainThread::DEBUG::2015-12-28
11:30:02,415::supervdsmServer::539::SuperVdsm.Server::(main) Making sure
I'm root - SuperVdsm
MainThread::DEBUG::2015-12-28
11:30:02,415::supervdsmServer::548::SuperVdsm.Server::(main) Parsing cmd
args
MainThread::DEBUG::2015-12-28
11:30:02,415::supervdsmServer::551::SuperVdsm.Server::(main) Cleaning old
socket /var/run/vdsm/svdsm.sock
MainThread::DEBUG::2015-12-28
11:30:02,415::supervdsmServer::555::SuperVdsm.Server::(main) Setting up
keep alive thread
MainThread::DEBUG::2015-12-28
11:30:02,415::supervdsmServer::561::SuperVdsm.Server::(main) Creating
remote object manager
MainThread::DEBUG::2015-12-28
11:30:02,416::fileUtils::192::Storage.fileUtils::(chown) Changing owner for
/var/run/vdsm/svdsm.sock, to (36:36)
MainThread::DEBUG::2015-12-28
11:30:02,416::supervdsmServer::572::SuperVdsm.Server::(main) Started
serving super vdsm object
sourceRoute::DEBUG::2015-12-28
11:30:02,416::sourceroutethread::79::root::(_subscribeToInotifyLoop)
sourceRouteThread.subscribeToInotifyLoop started
restore-net::DEBUG::2015-12-28
11:30:03,080::libvirtconnection::160::root::(get) trying to connect libvirt
restore-net::INFO::2015-12-28
11:30:03,188::vdsm-restore-net-config::86::root::(_restore_sriov_numvfs)
SRIOV network device which is not persisted found at: 0000:01:00.1.
restore-net::INFO::2015-12-28
11:30:03,189::vdsm-restore-net-config::86::root::(_restore_sriov_numvfs)
SRIOV network device which is not persisted found at: 0000:01:00.0.
restore-net::INFO::2015-12-28
11:30:03,189::vdsm-restore-net-config::86::root::(_restore_sriov_numvfs)
SRIOV network device which is not persisted found at: 0000:01:00.3.
restore-net::INFO::2015-12-28
11:30:03,189::vdsm-restore-net-config::86::root::(_restore_sriov_numvfs)
SRIOV network device which is not persisted found at: 0000:01:00.2.
restore-net::INFO::2015-12-28
11:30:03,189::vdsm-restore-net-config::385::root::(restore) starting
network restoration.
restore-net::DEBUG::2015-12-28
11:30:03,189::vdsm-restore-net-config::183::root::(_remove_networks_in_running_config)
Not cleaning running configuration since it is empty.
restore-net::INFO::2015-12-28
11:30:03,205::netconfpersistence::179::root::(_clearDisk) Clearing
/var/run/vdsm/netconf/nets/ and /var/run/vdsm/netconf/bonds/
restore-net::DEBUG::2015-12-28
11:30:03,206::netconfpersistence::187::root::(_clearDisk) No existent
config to clear.
restore-net::INFO::2015-12-28
11:30:03,206::netconfpersistence::129::root::(save) Saved new config
RunningConfig({}, {}) to /var/run/vdsm/netconf/nets/ and
/var/run/vdsm/netconf/bonds/
restore-net::DEBUG::2015-12-28
11:30:03,207::vdsm-restore-net-config::329::root::(_wait_for_for_all_devices_up)
All devices are up.
restore-net::INFO::2015-12-28
11:30:03,214::netconfpersistence::71::root::(setBonding) Adding
bond0({'nics': [], 'options': ''})
restore-net::INFO::2015-12-28
11:30:03,214::vdsm-restore-net-config::396::root::(restore) restoration
completed successfully.
# systemctl status supervdsmd
● supervdsmd.service - Auxiliary vdsm service for running helper functions
as root
Loaded: loaded (/usr/lib/systemd/system/supervdsmd.service; static;
vendor preset: enabled)
Active: active (running) since Mon 2015-12-28 11:30:02 EST; 1h 8min ago
Main PID: 81535 (supervdsmServer)
CGroup: /system.slice/supervdsmd.service
└─81535 /usr/bin/python /usr/share/vdsm/supervdsmServer
--sockfile /var/run/vdsm/svdsm.sock
Dec 28 11:30:02 hostname systemd[1]: Started Auxiliary vdsm service for
running helper functions as root.
Dec 28 11:30:02 hostname systemd[1]: Starting Auxiliary vdsm service for
running helper functions as root.
9 years, 4 months
Network instability after upgrade 3.6.0 -> 3.6.1
by Stefano Danzi
Hello,
I have one testing host (only one host) with a self-hosted engine and 2 VMs
(one Linux and one Windows).
After upgrading oVirt from 3.6.0 to 3.6.1 the network connection works
only intermittently.
Every 10 minutes the HA agent restarts the hosted-engine VM because it is
reported as down. But the machine is UP;
only the network stops working for some minutes.
I activated global maintenance mode to prevent engine reboots. If I ssh to
the hosted engine, sometimes
the connection works and sometimes it doesn't. Using a VNC connection to
the engine I see that sometimes the VM reaches the external network
and sometimes it doesn't.
If I do a tcpdump on the physical ethernet interface I don't see any
packets when the network in the VM doesn't work.
The same thing happens for the other two VMs.
Before the upgrade I never had network problems.
9 years, 4 months
virt-viewer error
by 靳占冰
The virt-viewer-x64-3.0.msi version cannot connect to a virtual machine.
Error: Cannot find the specified file
9 years, 4 months
Hosted engine vs. hypervisor hosts naming on hyperconverged setup
by Will Dennis
Hi all,
I have a hyperconverged setup where I have a hosted engine that runs on one of three hosts, which are named "ovirt-node-[01,02,03]":
[root@ovirt-node-01 ~]# hosted-engine --vm-status | grep -e "Hostname" -e "Engine"
Hostname : ovirt-node-01
Engine status : {"health": "good", "vm": "up", "detail": "up"}
Hostname : ovirt-node-02
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Hostname : ovirt-node-03
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
When I deployed the hosted engine, I gave it a separate hostname/IP, as I expected it would need one (hostname = "ovirt-engine-01").
However, when I look at the hosts in the oVirt web admin screen, I see that the first host has the name "ovirt-engine-01", whereas it has the hostname "ovirt-node-01".
I also notice that even though the 1st host shows the number of VMs running as "1", when I click on the "VMs" node, there are no VMs showing.
Not sure what "should be", but I would expect that the first host would have a name equal to its DNS hostname (like the other two do), and that under VMs I would see the engine VM. But is that not how a hosted-engine setup works?
And if not, if the engine VM migrates over to another host, will that host gain the name "ovirt-engine-01"?
I ask this now because I want to set up a storage domain on these hosts using GlusterFS, and I have to select a host to base the connection on. In the "Use Host" drop-down, I currently see the values:
ovirt-node-03
ovirt-node-02
ovirt-engine-01
I would expect that the last entry would be for "ovirt-node-01", not "ovirt-engine-01"...
I don’t want to set up the storage domain until I figure this out, so as to prevent potential breakage...
Thanks,
Will
9 years, 4 months
Hosted engine support
by Budur Nagaraju
Hi,
I installed KVM on the host, on which VT is enabled.
I then installed one CentOS 6.7 64-bit VM on that KVM server and tried to
configure the hosted engine, and got the error below:
"Failed to execute stage 'Environment Customization': Hardware
virtualization support is not available: please check BIOS settings and
turn on NX support if available"
Even though VT is enabled on the host I am facing this issue; does any
setting need to be changed in the VM?
Thanks,
Nagaraju
9 years, 4 months
oVirt 3.6.1 USB Host Device passthrough
by ovirt@timmi.org
Hi list,
I'm currently trying to get USB devices into a VM.
My host is running a oVirt 3.6.1.
I'm trying to use the GUI feature to add the device into the VM and I'm
facing two issues:
1. I receive the following message while starting the VM:
Error while executing action:
<VM_Name>:
* Cannot run VM. There is no host that satisfies current scheduling
constraints. See below for details:
* The host <host_name> did not satisfy internal filter HostDevice
because it does not support host device passthrough..
2. The USB device list never get refreshed while removing or inserting
different devices.
Does anyone know what I'm doing wrong or what I need to check?
I found for example the following description for oVirt 3.5.
http://wiki.rosalab.com/en/index.php/Blog:ROSA_Planet/How_to_use_USB_devi...
But this is not helping me as I will have multiple USB devices with the
same ID in the host.
Best regards and Merry Christmas
Christoph
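A side note on the first error: the "does not support host device passthrough" filter message generally points at missing IOMMU support on the host. A minimal check of that precondition, as a sketch only (it assumes a Linux host with standard sysfs paths, and it is not oVirt's actual filter logic):

# Sketch: does this host expose IOMMU groups? Device passthrough
# needs the IOMMU enabled in BIOS (VT-d / AMD-Vi) and in the kernel.
import os

groups = '/sys/kernel/iommu_groups'
if os.path.isdir(groups) and os.listdir(groups):
    print('IOMMU enabled: %d groups found' % len(os.listdir(groups)))
else:
    print('No IOMMU groups; check BIOS VT-d/AMD-Vi and the '
          'intel_iommu=on / amd_iommu=on kernel boot options')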
9 years, 4 months
oVirt 3.6: Dummy interfaces
by Alan Murrell
Hello,
I just upgraded my oVirt 3.5.1 install to 3.6. I have a self-hosted
engine setup all on a single server.
I make use of dummy interfaces for internal networks for my various
labs/testing setups. Normally I edit the following file:
/usr/lib/python2.7/site-packages/vdsm/config.py
I then go to the variable 'fake_nics' and add 'dummy*', so it looks like this:
('fake_nics', 'dummy_*,veth_*,dummy*', ),
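Before restarting anything, the pattern list can be replayed against the expected device names with Python's fnmatch; this is only a sanity-check sketch (vdsm's own matching code may behave differently):

# Sketch: which device names would the fake_nics patterns catch?
from fnmatch import fnmatch

patterns = ['dummy_*', 'veth_*', 'dummy*']
devices = ['dummy0', 'dummy_lab1', 'veth_test', 'eth0', 'ovirtmgmt']

for dev in devices:
    matched = any(fnmatch(dev, pat) for pat in patterns)
    print('%-12s %s' % (dev, 'fake NIC' if matched else 'ignored'))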
After I added my entry, I even restarted the whole server (which
restarts the VDSM on the host as well as the engine VM), but when I go
into the "Network Interfaces" on the host the dummy interfaces are not
showing up.
Are dummy interfaces still supported in 3.6? I see a feature called
"Isolated Networks" in 3.6; is this essentially a replacement? The
Features page is a bit confusing to me about how to create an isolated
network.
Thanks!
Regards,
Alan
9 years, 4 months
Self-hosted engine: Host cannot activate
by Alan Murrell
I am attempting to install oVirt 3.6 on a CentOS 7 host. It is a
single server, so I am trying to install a self-hosted engine. I am
able to install oVirt 3.5 self-hosted engine, but am having some
problems doing it with 3.6.
Everything appears to go fine; the engine-setup completes
successfully. On the host, I pressed "1" to indicate engine-setup was
complete, and it can connect to the webadmin but the host never
becomes operational.
I am able to log in to the webadmin, and indeed the host is in a
non-operational state. When I click on the host then click on the
"Virtual networks" tab, all my interfaces are showing a red arrow.
When I click on "Setup Host Networks", my "ovirtmgmt" network is
unassigned. Believing this to be why my host is not operational
(since it is under "Required"), I assigned it to my management
interface, but the interfaces still remain red. In addition, the
"ovirtmgmt" network does have a green arrow, but also the icon
indicating "Out of sync". If I click on "Sync All Networks", I get
the following error:
"Error while executing action SyncAllHostNetworks: Network is
currently being used"
which makes sense, since "ovirtmgmt" is assigned/in use. I am unable
to unassign ovirtmgmt.
Current status is this:
- All network interfaces are showing the red "down" arrow
- "ovirtmgmt" is assigned to my management NIC, but indicating that
it is out of sync
- I am unable to unassign the "ovirtmgmt" network (or at least, if
I do, the "OK" button is greyed out)
I could destroy the engine VM and go through the setup again, with the
idea of once it is installed and waiting for the host to become
operational, I could do a "Sync All Networks" and see if the networks
turn green and go from there.
I wanted to see what sort of insight I could get from here first. I
am not sure what logs would be useful to you, so let me know what you
want to see and I and make them available (I will likely zip them up
and post a link)
Something else I could try if this becomes a "puzzler" is to do a
3.5.x self-hosted engine then perform an in-place upgrade to 3.6 and
see if that works? I would rather try to help make a direct 3.6
install work, though...
Thanks, in advance! :-)
Regards,
Alan
9 years, 4 months
Re: [ovirt-users] R: Re: Network instability after upgrade 3.6.0 -> 3.6.1 [SOLVED]
by Roy Golan
On Mon, Dec 28, 2015 at 4:06 PM, Yedidyah Bar David <didi(a)redhat.com> wrote:
> On Mon, Dec 28, 2015 at 3:48 PM, Stefano Danzi <s.danzi(a)hawai.it> wrote:
> > Problem solved!!!
> >
> > The file hosted-engine.conf had a wrong fqdn.
> > I don't think that this happened during the upgrade... maybe my colleague
> > did something wrong...
>
> Thanks for the report :-)
>
> >
> >
> > Il 20/12/2015 14.52, Stefano Danzi ha scritto:
> >
> > Network problems was solved after changing Bond mode (and it's strange.
> I
> > have to investigate around qemu-kvm, cento 7.2 and switch firmware ), but
> > broker problem still exist. If I turn on the host, ha agent start engine
> > vm. When engine VM is up, broker strats to send email. Now I haven't
> here
> > detailed logs.
> >
> >
> > -------- Messaggio originale --------
> > Da: Yedidyah Bar David <didi(a)redhat.com>
> > Data: 20/12/2015 11:20 (GMT+01:00)
> > A: Stefano Danzi <s.danzi(a)hawai.it>, Dan Kenigsberg <danken(a)redhat.com>
> > Cc: users <users(a)ovirt.org>
> > Oggetto: Re: [ovirt-users] Network instability after upgrade 3.6.0 ->
> 3.6.1
> >
> > On Fri, Dec 18, 2015 at 5:31 PM, Stefano Danzi <s.danzi(a)hawai.it> wrote:
> >> I found this in vdsm.log and I think that could be the problem:
> >>
> >> Thread-3771::ERROR::2015-12-18
> >>
> >>
> 16:18:58,597::brokerlink::279::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(_communicate)
> >> Connection closed: Connection closed
> >> Thread-3771::ERROR::2015-12-18
> 16:18:58,597::API::1847::vds::(_getHaInfo)
> >> failed to retrieve Hosted Engine HA info
> >> Traceback (most recent call last):
> >> File "/usr/share/vdsm/API.py", line 1827, in _getHaInfo
> >> stats = instance.get_all_stats()
> >> File
> >>
> >>
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
> >> line 103, in get_all_stats
> >> self._configure_broker_conn(broker)
> >> File
> >>
> >>
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
> >> line 180, in _configure_broker_conn
> >> dom_type=dom_type)
> >> File
> >>
> >>
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
> >> line 176, in set_storage_domain
> >> .format(sd_type, options, e))
> >> RequestError: Failed to set storage domain FilesystemBackend, options
> >> {'dom_type': 'nfs3', 'sd_uuid': '46f55a31-f35f-465c-b3e2-df45c05e06a7'}:
> >> Connection closed
> >
> > My guess is that this is a consequence of your networking problems.
> >
> > Adding Dan.
> >
> >>
> >>
> >> Il 17/12/2015 18.51, Stefano Danzi ha scritto:
> >>>
> >>> I partially solve the problem.
> >>>
> >>> My host machine has 2 network interfaces with a bond. The bond was
> >>> configured with mode=4 (802.3ad) and switch was configured in the same
> >>> way.
> >>> If I remove one network cable the network become stable. With both
> cables
> >>> attached the network is instable.
> >>>
> >>> I removed the link aggregation configuration from switch and change the
> >>> bond in mode=2 (balance-xor). Now the network are stable.
> >>> The strange thing is that previous configuration worked fine for one
> >>> year... since the last upgrade.
> >>>
> >>> Now ha-agent don't reboot the hosted-engine anymore, but I receive two
> >>> emails from brocker evere 2/5 minutes.
> >>> First a mail with "ovirt-hosted-engine state transition
> >>> StartState-ReinitializeFSM" and after "ovirt-hosted-engine state
> >>> transition
> >>> ReinitializeFSM-EngineStarting"
> >>>
> >>>
> >>> Il 17/12/2015 10.51, Stefano Danzi ha scritto:
> >>>>
> >>>> Hello,
> >>>> I have one testing host (only one host) with self hosted engine and 2
> VM
> >>>> (one linux and one windows).
> >>>>
> >>>> After upgrade ovirt from 3.6.0 to 3.6.1 the network connection works
> >>>> discontinuously.
> >>>> Every 10 minutes HA agent restart hosted engine VM because result
> down.
> >>>> But the machine is UP,
> >>>> only the network stop to work for some minutes.
> >>>> I activate global maintenace mode to prevent engine reboot. If I ssh
> to
> >>>> the hosted engine sometimes
> >>>> the connection work and sometimes no. Using VNC connection to engine
> I
> >>>> see that sometime VM reach external network
> >>>> and sometimes no.
> >>>> If I do a tcpdump on phisical ethernet interface I don't see any
> packet
> >>>> when network on vm don't work.
> >>>>
> >>>> Same thing happens fo others two VM.
> >>>>
> >>>> Before the upgrade I never had network problems.
> >>>> _______________________________________________
> >>>> Users mailing list
> >>>> Users(a)ovirt.org
> >>>> http://lists.ovirt.org/mailman/listinfo/users
> >>>>
> >>>
> >>> _______________________________________________
> >>> Users mailing list
> >>> Users(a)ovirt.org
> >>> http://lists.ovirt.org/mailman/listinfo/users
> >>
> >> _______________________________________________
> >> Users mailing list
> >> Users(a)ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/users
> >
> >
> >
> > --
> > Didi
> >
> >
> > _______________________________________________
> > Users mailing list
> > Users(a)ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
> >
>
>
>
> --
> Didi
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
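Since the fix in this thread came down to the bond mode, it is worth confirming the mode the kernel is actually running, not just the one in the ifcfg files. A small sketch (it assumes the bond is named bond0):

# Sketch: read the kernel's view of the bonding mode.
def bond_mode(bond='bond0'):
    with open('/proc/net/bonding/%s' % bond) as f:
        for line in f:
            if line.startswith('Bonding Mode:'):
                return line.split(':', 1)[1].strip()

# Prints e.g. 'IEEE 802.3ad Dynamic link aggregation' for mode=4,
# or 'load balancing (xor)' after the change to mode=2.
print(bond_mode())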
9 years, 4 months
R: Re: Network instability after upgrade 3.6.0 -> 3.6.1
by Stefano Danzi
Network problems were solved after changing the bond mode (and it's
strange; I have to investigate qemu-kvm, CentOS 7.2 and the switch
firmware), but the broker problem still exists. If I turn on the host,
the HA agent starts the engine VM. When the engine VM is up, the broker
starts to send email. I don't have detailed logs here right now.

-------- Original message --------
From: Yedidyah Bar David <didi(a)redhat.com>
Date: 20/12/2015 11:20 (GMT+01:00)
To: Stefano Danzi <s.danzi(a)hawai.it>, Dan Kenigsberg <danken(a)redhat.com>
Cc: users <users(a)ovirt.org>
Subject: Re: [ovirt-users] Network instability after upgrade 3.6.0 -> 3.6.1
9 years, 4 months
Wrong nvidia drivers dependency for ovirt-hosted-engine-setup on CentOS 7.2
by Aleksey Chudov
Hi,
I have installed the package
ovirt-hosted-engine-setup-1.3.1.4-1.el7.centos.noarch on CentOS 7.2, and
nvidia drivers were installed as dependencies. If I manually try to remove
the nvidia drivers, yum proposes to remove ovirt packages as well. I don't
have any nvidia devices in my servers, so I consider this a dependency bug.
Please check the output below.
# rpm -qa '*nvidia*' '*ovirt*'
nvidia-x11-drv-352.63-1.el7.elrepo.x86_64
ovirt-host-deploy-1.4.1-1.el7.centos.noarch
yum-plugin-nvidia-1.0.2-1.el7.elrepo.noarch
libgovirt-0.3.3-1.el7.x86_64
kmod-nvidia-352.63-1.el7.elrepo.x86_64
ovirt-release36-002-2.noarch
ovirt-engine-sdk-python-3.6.0.3-1.el7.centos.noarch
ovirt-hosted-engine-ha-1.3.3.6-1.el7.centos.noarch
ovirt-vmconsole-1.0.0-1.el7.centos.noarch
ovirt-vmconsole-host-1.0.0-1.el7.centos.noarch
ovirt-setup-lib-1.0.0-1.el7.centos.noarch
ovirt-hosted-engine-setup-1.3.1.4-1.el7.centos.noarch
# rpm -qa | grep nvidia | xargs yum history info
Loaded plugins: fastestmirror, nvidia
[nvidia]: No NVIDIA display devices found
Transaction ID : 18
Begin time : Thu Dec 24 14:14:14 2015
Begin rpmdb : 429:07df3601154cf590c9c5c0435c6dd92b88beb824
End time : 14:18:53 2015 (279 seconds)
End rpmdb : 697:c3c9771baf9f1abe81798d792470334cf2eb8ba6
User : root <root>
Return-Code : Success
Command Line : -d 2 -y install ovirt-hosted-engine-setup
Transaction performed with:
Installed rpm-4.11.3-17.el7.x86_64 @anaconda
Installed yum-3.4.3-132.el7.centos.0.1.noarch @anaconda
Installed yum-plugin-fastestmirror-1.1.31-34.el7.noarch @anaconda
Packages Altered:
...
Dep-Install
kmod-nvidia-352.63-1.el7.elrepo.x86_64 @elrepo
...
Dep-Install
nvidia-x11-drv-352.63-1.el7.elrepo.x86_64 @elrepo
...
Install
ovirt-hosted-engine-setup-1.3.1.4-1.el7.centos.noarch @ovirt-3.6
...
Dep-Install
yum-plugin-nvidia-1.0.2-1.el7.elrepo.noarch @elrepo
Scriptlet output:
1 depmod: WARNING: could not open
/lib/modules/3.10.0-229.el7.x86_64/modules.order: No such file or directory
2 depmod: WARNING: could not open
/lib/modules/3.10.0-229.el7.x86_64/modules.builtin: No such file or
directory
3 gtk-query-immodules-2.0-64: error while loading shared libraries:
libpangocairo-1.0.so.0: cannot open shared object file: No such file or
directory
4 warning: %post(gtk2-2.24.28-8.el7.x86_64) scriptlet failed, exit
status 127
5 gtk-query-immodules-3.0-64: error while loading shared libraries:
libEGL.so.1: cannot open shared object file: No such file or directory
6 Working. This may take some time ...
7 Done.
history info
Regards,
Aleksey
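One way to see which installed package dragged the drivers in is repoquery from yum-utils; a sketch (it assumes yum-utils is installed and is run on the affected host):

# Sketch: ask yum which installed packages require kmod-nvidia.
import subprocess

out = subprocess.check_output(
    ['repoquery', '--installed', '--whatrequires', 'kmod-nvidia'])
print(out.decode() or 'nothing requires kmod-nvidia')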
9 years, 4 months
host xxx did not satisfy internal filter Memory because its swap value was illegal.
by pc
host xxx did not satisfy internal filter Memory because its swap value was illegal.

### Description ###
1. problem
1) Migrating vm {name: xyz001, mem(min, max) = (2G,4G)} from ovirt host n33 to n34 failed.
2) After shutting down vm {name: test001, mem(min, max) = (1G,1G)} on n34 and updating test001's config (Host -> Start Running On: Specific(n34)), then starting test001, it is running on n33.

2. err message
Error while executing action: migrate
[engine gui]
xyz001:
Cannot migrate VM. There is no host that satisfies current scheduling constraints. See below for details:
The host n33.ovirt did not satisfy internal filter Memory because has availabe 1863 MB memory. Insufficient free memory to run the VM.
The host n34.ovirt did not satisfy internal filter Memory because its swap value was illegal.

[engine.log]
INFO  [org.ovirt.engine.core.bll.MigrateVmCommand] (default task-23) [5916aa3b] Lock Acquired to object 'EngineLock:{exclusiveLocks='[73351885-9a92-4317-baaf-e4f2bed1171a=<VM, ACTION_TYPE_FAILED_VM_IS_BEING_MIGRATED$VmName test11>]', sharedLocks='null'}'
INFO  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-23) [5916aa3b] Candidate host 'n34' ('2ae3a219-ae9a-4347-b1e2-0e100360231e') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'Memory' (correlation id: null)
INFO  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-23) [5916aa3b] Candidate host 'n33' ('688aec34-5630-478e-ae5e-9d57990804e5') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'Memory' (correlation id: null)
WARN  [org.ovirt.engine.core.bll.MigrateVmCommand] (default task-23) [5916aa3b] CanDoAction of action 'MigrateVm' failed for user admin@internal. Reasons: VAR__ACTION__MIGRATE,VAR__TYPE__VM,SCHEDULING_ALL_HOSTS_FILTERED_OUT,VAR__FILTERTYPE__INTERNAL,$hostName n33,$filterName Memory,$availableMem 1863,VAR__DETAIL__NOT_ENOUGH_MEMORY,SCHEDULING_HOST_FILTERED_REASON_WITH_DETAIL,VAR__FILTERTYPE__INTERNAL,$hostName n34,$filterName Memory,VAR__DETAIL__SWAP_VALUE_ILLEGAL,SCHEDULING_HOST_FILTERED_REASON_WITH_DETAIL
INFO  [org.ovirt.engine.core.bll.MigrateVmCommand] (default task-23) [5916aa3b] Lock freed to object 'EngineLock:{exclusiveLocks='[73351885-9a92-4317-baaf-e4f2bed1171a=<VM, ACTION_TYPE_FAILED_VM_IS_BEING_MIGRATED$VmName test11>]', sharedLocks='null'}'

3. DC
Compatibility Version: 3.5

4. Cluster
Memory Optimization: For Server Load - Allow scheduling of 150% of physical memory
Memory Balloon: Enable Memory Balloon Optimization
Enable KSM: Share memory pages across all available memory (best KSM effectiveness)

5. HOST
name: n33, n34
mem: 32G

6. VM
[n33] 11 vms
(min, max) = (2G,4G) = 8
(min, max) = (2G,8G) = 1
(min, max) = (2G,2G) = 2
total: 22G/44G

[n34] 7 vms
(min, max) = (0.5G,1G) = 1
(min, max) = (1G,2G) = 1
(min, max) = (2G,2G) = 1
(min, max) = (2G,4G) = 3
(min, max) = (8G,8G) = 1
total: 17.5G/25G
--------------------------------------------
(min, max) = (2G,4G) stands for:
Memory Size: 4G
Physical Memory Guaranteed: 2G
Memory Balloon Device Enabled: checked
--------------------------------------------

7. rpm version
[root@n33 ~]# rpm -qa |grep vdsm
vdsm-yajsonrpc-4.16.27-0.el6.noarch
vdsm-jsonrpc-4.16.27-0.el6.noarch
vdsm-cli-4.16.27-0.el6.noarch
vdsm-python-zombiereaper-4.16.27-0.el6.noarch
vdsm-xmlrpc-4.16.27-0.el6.noarch
vdsm-python-4.16.27-0.el6.noarch
vdsm-4.16.27-0.el6.x86_64

[root@engine ~]# rpm -qa |grep ovirt
ovirt-release36-001-2.noarch
ovirt-engine-setup-base-3.6.0.3-1.el6.noarch
ovirt-engine-setup-3.6.0.3-1.el6.noarch
ovirt-image-uploader-3.6.0-1.el6.noarch
ovirt-engine-wildfly-8.2.0-1.el6.x86_64
ovirt-engine-setup-plugin-vmconsole-proxy-helper-3.6.0.3-1.el6.noarch
ovirt-host-deploy-1.4.0-1.el6.noarch
ovirt-engine-backend-3.6.0.3-1.el6.noarch
ovirt-engine-webadmin-portal-3.6.0.3-1.el6.noarch
ovirt-engine-jboss-as-7.1.1-1.el6.x86_64
ovirt-engine-lib-3.6.0.3-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-3.6.0.3-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-3.6.0.3-1.el6.noarch
ovirt-engine-setup-plugin-websocket-proxy-3.6.0.3-1.el6.noarch
ovirt-engine-sdk-python-3.6.0.3-1.el6.noarch
ovirt-iso-uploader-3.6.0-1.el6.noarch
ovirt-vmconsole-proxy-1.0.0-1.el6.noarch
ovirt-engine-extensions-api-impl-3.6.0.3-1.el6.noarch
ovirt-engine-websocket-proxy-3.6.0.3-1.el6.noarch
ovirt-engine-vmconsole-proxy-helper-3.6.0.3-1.el6.noarch
ebay-cors-filter-1.0.1-0.1.ovirt.el6.noarch
ovirt-host-deploy-java-1.4.0-1.el6.noarch
ovirt-engine-tools-3.6.0.3-1.el6.noarch
ovirt-engine-restapi-3.6.0.3-1.el6.noarch
ovirt-engine-3.6.0.3-1.el6.noarch
ovirt-engine-extension-aaa-jdbc-1.0.1-1.el6.noarch
ovirt-engine-cli-3.6.0.1-1.el6.noarch
ovirt-vmconsole-1.0.0-1.el6.noarch
ovirt-engine-wildfly-overlay-001-2.el6.noarch
ovirt-engine-dbscripts-3.6.0.3-1.el6.noarch
ovirt-engine-userportal-3.6.0.3-1.el6.noarch
ovirt-guest-tools-iso-3.6.0-0.2_master.fc22.noarch

### DB ###
[root@engine ~]# su postgres
bash-4.1$ cd ~
bash-4.1$ psql engine
engine=# select vds_id, physical_mem_mb, mem_commited, vm_active, vm_count, reserved_mem, guest_overhead, transparent_hugepages_state, pending_vmem_size from vds_dynamic;
                vds_id                | physical_mem_mb | mem_commited | vm_active | vm_count | reserved_mem | guest_overhead | transparent_hugepages_state | pending_vmem_size
--------------------------------------+-----------------+--------------+-----------+----------+--------------+----------------+-----------------------------+-------------------
 688aec34-5630-478e-ae5e-9d57990804e5 |           32057 |        45836 |        11 |       11 |          321 |             65 |                           2 |                 0
 2ae3a219-ae9a-4347-b1e2-0e100360231e |           32057 |        26120 |         7 |        7 |          321 |             65 |                           2 |                 0
(2 rows)

### memory ###
[n33]
# free -m
             total       used       free     shared    buffers     cached
Mem:         32057      31770        287          0         41       6347
-/+ buffers/cache:      25381       6676
Swap:        29999      10025      19974

Physical Memory: 32057 MB total, 25646 MB used, 6411 MB free
Swap Size: 29999 MB total, 10025 MB used, 19974 MB free
Max free Memory for scheduling new VMs: 1928.5 MB

[n34]
# free -m
             total       used       free     shared    buffers     cached
Mem:         32057      31713        344          0         78      13074
-/+ buffers/cache:      18560      13497
Swap:        29999       5098      24901

Physical Memory: 32057 MB total, 18593 MB used, 13464 MB free
Swap Size: 29999 MB total, 5098 MB used, 24901 MB free
Max free Memory for scheduling new VMs: 21644.5 MB

### code ###
## from: https://github.com/oVirt/ovirt-engine
v3.6.0

## from: D:\code\java\ovirt-engine\backend\manager\modules\dal\src\main\resources\bundles\AppErrors.properties
VAR__DETAIL__SWAP_VALUE_ILLEGAL=$detailMessage its swap value was illegal

## from: D:\code\java\ovirt-engine\backend\manager\modules\bll\src\main\java\org\ovirt\engine\core\bll\scheduling\policyunits\MemoryPolicyUnit.java
#-----------code--------------1#
    private boolean isVMSwapValueLegal(VDS host) {
        if (!Config.<Boolean> getValue(ConfigValues.EnableSwapCheck)) {
            return true;
        }
        (omitted..)
        return ((swap_total - swap_free - mem_available) * 100 / physical_mem_mb) <= Config.<Integer> getValue(ConfigValues.BlockMigrationOnSwapUsagePercentage)
        (omitted..)
    }
#-----------code--------------1#
If EnableSwapCheck = False then it returns True, so can we simply disable this option? Any suggestion?

[root@engine ~]# engine-config --get BlockMigrationOnSwapUsagePercentage
BlockMigrationOnSwapUsagePercentage: 0 version: general

so,
Config.<Integer> getValue(ConfigValues.BlockMigrationOnSwapUsagePercentage) = 0
so,
(swap_total - swap_free - mem_available) * 100 / physical_mem_mb <= 0
so,
swap_total - swap_free - mem_available <= 0
right?
so, if (swap_total - swap_free) <= mem_available then it returns True, else it returns False.

#-----------code--------------2#
        for (VDS vds : hosts) {
            if (!isVMSwapValueLegal(vds)) {
                log.debug("Host '{}' swap value is illegal", vds.getName());
                messages.addMessage(vds.getId(), EngineMessage.VAR__DETAIL__SWAP_VALUE_ILLEGAL.toString());
                continue;
            }
            if (!memoryChecker.evaluate(vds, vm)) {
                int hostAavailableMem = SlaValidator.getInstance().getHostAvailableMemoryLimit(vds);
                log.debug("Host '{}' has {} MB available. Insufficient memory to run the VM",
                        vds.getName(),
                        hostAavailableMem);
                messages.addMessage(vds.getId(), String.format("$availableMem %1$d", hostAavailableMem));
                messages.addMessage(vds.getId(), EngineMessage.VAR__DETAIL__NOT_ENOUGH_MEMORY.toString());
                continue;
            }
            (omitted..)
        }
#-----------code--------------2#
If !isVMSwapValueLegal then the host is skipped (filtered out), right?
so, when we migrate a vm from n33 to n34, the swap status on n34 actually is:
(swap_total - swap_free) > mem_available

swap_used > mem_available? Confused...

so, the logic is:
1) check n33: swap [passed], then memory [failed], then goto (for..continue..loop)
2) check n34: swap [failed], then goto (for..continue..loop)

If I have misunderstood anything, please let me know.

### conclusion ###
1) n33 does not have enough memory. [yes, I know that.]
2) n34's swap value is illegal. [why, and how to solve it?]
3) what I tried:
-- change config: BlockMigrationOnSwapUsagePercentage
[root@engine ~]# engine-config --set BlockMigrationOnSwapUsagePercentage=75 -cver general
[root@engine ~]# engine-config --get BlockMigrationOnSwapUsagePercentage
BlockMigrationOnSwapUsagePercentage: 75 version: general
Result: failed.

-- disable EnableSwapCheck
How? The option is not found in 'engine-config --list'; should I update the table field directly in the db?

-- add a swap partition on the host
Should I do this operation?

-- update ovirt-engine?
No useful information found in the latest release note; should I do this operation?

### help ###
Any help would be appreciated.

ZYXW. Reference
http://www.ovirt.org/Sla/FreeMemoryCalculation
http://lists.ovirt.org/pipermail/users/2012-November/010858.html
http://lists.ovirt.org/pipermail/users/2013-March/013201.html
http://comments.gmane.org/gmane.comp.emulators.ovirt.user/19288
http://jim.rippon.me.uk/2013/07/ovirt-testing-english-instructions-for.html
------=ALIBOUNDARY_54603_493d9940_568114ce_5020f
Content-Type: text/html; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
<div class=3D"__aliyun_email_body_block"><div style=3D"clear: both;"><span sty=
le=3D"font-family: Tahoma, Arial, STHeiti, SimSun; font-size: 14px; color: rgb=
(0, 0, 0);">host xxx did no satisfy internal fil=
ter Memory because its swap value was illeg=
al.<br ><br >### Description ###<br >1. problem<br >1) mig=
rate vm {name:xyz001, mem(min, max) =3D (2G,4G)}=
from ovirt host n33 to n34, failed.<br >2)=
shutting down vm {name: test001, mem(min, =
max) =3D (1G,1G)} on n34, update test001's =
config: Host->Start Running On: Specific(n34), the=
n start test001, while, it's running on n33=
.<br ><br >2. err message <br >Error while executing&=
nbsp;action: migrate <br >[engine gui]<br >xyz001:<br >Cannot&n=
bsp;migrate VM. There is no host that satis=
fies current scheduling constraints. See below f=
or details:<br >The host n33.ovirt did not satis=
fy internal filter Memory because has availabe&n=
bsp;1863 MB memory. Insufficient free memory to&=
nbsp;run the VM.<br >The host n34.ovirt did not&=
nbsp;satisfy internal filter Memory because its =
swap value was illegal.<br ><br ><br >[engine.log]<br >INFO&nbs=
p; [org.ovirt.engine.core.bll.MigrateVmCommand] (default task-2=
3) [5916aa3b] Lock Acquired to object 'EngineLoc=
k:{exclusiveLocks=3D'[73351885-9a92-4317-baaf-e4f2bed1171a=3D<VM, ACTI=
ON_TYPE_FAILED_VM_IS_BEING_MIGRATED$VmName test11>]', sharedLocks=
=3D'null'}'<br >INFO [org.ovirt.engine.core.bll.scheduling.Scheduli=
ngManager] (default task-23) [5916aa3b] Candidate hos=
t 'n34' ('2ae3a219-ae9a-4347-b1e2-0e100360231e') was filte=
red out by 'VAR__FILTERTYPE__INTERNAL' filter 'Memory=
' (correlation id: null)<br >INFO [org.ovirt.engine.=
core.bll.scheduling.SchedulingManager] (default task-23) [5916a=
a3b] Candidate host 'n33' ('688aec34-5630-478e-ae5e-9d5799=
0804e5') was filtered out by 'VAR__FILTERTYPE__INTERN=
AL' filter 'Memory' (correlation id: null)<br >WARN&n=
bsp; [org.ovirt.engine.core.bll.MigrateVmCommand] (default task=
-23) [5916aa3b] CanDoAction of action 'MigrateVm'&nbs=
p;failed for user admin@internal. Reasons: VAR__ACTIO=
N__MIGRATE,VAR__TYPE__VM,SCHEDULING_ALL_HOSTS_FILTERED_OUT,VAR__FILTERTYPE__IN=
TERNAL,$hostName n33,$filterName Memory,$availableMem 1863,VAR_=
_DETAIL__NOT_ENOUGH_MEMORY,SCHEDULING_HOST_FILTERED_REASON_WITH_DETAIL,VAR__FI=
LTERTYPE__INTERNAL,$hostName n34,$filterName Memory,VAR__DETAIL__SWA=
P_VALUE_ILLEGAL,SCHEDULING_HOST_FILTERED_REASON_WITH_DETAIL<br >INFO &nbs=
p;[org.ovirt.engine.core.bll.MigrateVmCommand] (default task-23)&nbs=
p;[5916aa3b] Lock freed to object 'EngineLock:{exclus=
iveLocks=3D'[73351885-9a92-4317-baaf-e4f2bed1171a=3D<VM, ACTION_TYPE_F=
AILED_VM_IS_BEING_MIGRATED$VmName test11>]', sharedLocks=3D'null'=
}'<br ><br ><br >3. DC<br >Compatibility Version: 3.5<br ><br >=
4. Cluster<br >Memory Optimization: For Server Load&n=
bsp;- Allow scheduling of 150% of physical =
memory<br >Memory Balloon: Enable Memory Balloon Opti=
mization<br >Enable KSM: Share memory pages across&nb=
sp;all available memory (best KSM effectivness)<br ><=
br >5. HOST<br >name: n33, n34<br >mem: 32G<br ><br >6.&nb=
sp;VM<br >[n33] 11 vms<br >(min, max) =3D (2G,4G)&nbs=
p;=3D 8<br >(min, max) =3D (2G,8G) =3D 1<br >(mi=
n, max) =3D (2G,2G) =3D 2<br >total: 22G/44G<br =
><br >[n34] 7 vms<br >(min, max) =3D (0.5G,1G) =3D=
1<br >(min, max) =3D (1G,2G) =3D 1<br >(min,&nb=
sp;max) =3D (2G,2G) =3D 1<br >(min, max) =3D&nbs=
p;(2G,4G) =3D 3<br >(min, max) =3D (8G,8G) =3D&n=
bsp;1<br >total: 17.5G/25G<br >------------------------------------------=
--<br >(min, max) =3D (2G,4G) stands for: <br >M=
emory Size: 4G<br >Physical Memory Guaranteed: 2G<br =
>Memory Balloon Device Enabled: checked<br >--------------=
------------------------------<br ><br >7. rpm version<br >[root@n33=
~]# rpm -qa |grep vdsm<br >vdsm-yajsonrpc-4.16.27-0.=
el6.noarch<br >vdsm-jsonrpc-4.16.27-0.el6.noarch<br >vdsm-cli-4.16.27-0.el6.no=
arch<br >vdsm-python-zombiereaper-4.16.27-0.el6.noarch<br >vdsm-xmlrpc-4.16.27=
-0.el6.noarch<br >vdsm-python-4.16.27-0.el6.noarch<br >vdsm-4.16.27-0.el6.x86_=
64<br ><br >[root@engine ~]# rpm -qa |grep ovirt<br >=
ovirt-release36-001-2.noarch<br >ovirt-engine-setup-base-3.6.0.3-1.el6.noarch<=
br >ovirt-engine-setup-3.6.0.3-1.el6.noarch<br >ovirt-image-uploader-3.6.0-1.e=
l6.noarch<br >ovirt-engine-wildfly-8.2.0-1.el6.x86_64<br >ovirt-engine-setup-p=
lugin-vmconsole-proxy-helper-3.6.0.3-1.el6.noarch<br >ovirt-host-deploy-1.4.0-=
1.el6.noarch<br >ovirt-engine-backend-3.6.0.3-1.el6.noarch<br >ovirt-engine-we=
badmin-portal-3.6.0.3-1.el6.noarch<br >ovirt-engine-jboss-as-7.1.1-1.el6.x86_6=
4<br >ovirt-engine-lib-3.6.0.3-1.el6.noarch<br >ovirt-engine-setup-plugin-ovir=
t-engine-common-3.6.0.3-1.el6.noarch<br >ovirt-engine-setup-plugin-ovirt-engin=
e-3.6.0.3-1.el6.noarch<br >ovirt-engine-setup-plugin-websocket-proxy-3.6.0.3-1=
.el6.noarch<br >ovirt-engine-sdk-python-3.6.0.3-1.el6.noarch<br >ovirt-iso-upl=
oader-3.6.0-1.el6.noarch<br >ovirt-vmconsole-proxy-1.0.0-1.el6.noarch<br >ovir=
t-engine-extensions-api-impl-3.6.0.3-1.el6.noarch<br >ovirt-engine-websocket-p=
roxy-3.6.0.3-1.el6.noarch<br >ovirt-engine-vmconsole-proxy-helper-3.6.0.3-1.el=
6.noarch<br >ebay-cors-filter-1.0.1-0.1.ovirt.el6.noarch<br >ovirt-host-deploy=
-java-1.4.0-1.el6.noarch<br >ovirt-engine-tools-3.6.0.3-1.el6.noarch<br >ovirt=
-engine-restapi-3.6.0.3-1.el6.noarch<br >ovirt-engine-3.6.0.3-1.el6.noarch<br =
>ovirt-engine-extension-aaa-jdbc-1.0.1-1.el6.noarch<br >ovirt-engine-cli-3.6.0=
.1-1.el6.noarch<br >ovirt-vmconsole-1.0.0-1.el6.noarch<br >ovirt-engine-wildfl=
y-overlay-001-2.el6.noarch<br >ovirt-engine-dbscripts-3.6.0.3-1.el6.noarch<br =
>ovirt-engine-userportal-3.6.0.3-1.el6.noarch<br >ovirt-guest-tools-iso-3.6.0-=
0.2_master.fc22.noarch<br ><br ><br >### DB ###<br >[root@engine&nbs=
p;~]# su postgres<br >bash-4.1$ cd ~<br >bash-4.1$ ps=
ql engine<br >engine=3D# select vds_id, physical_mem_mb,&n=
bsp;mem_commited, vm_active, vm_count, reserved_mem, guest=
_overhead, transparent_hugepages_state, pending_vmem_size from&=
nbsp;vds_dynamic;<br > &n=
bsp; vds_id &n=
bsp; | physica=
l_mem_mb | mem_commited | vm_active | vm_count&n=
bsp;| reserved_mem | guest_overhead | transparent_hug=
epages_state | pending_vmem_size <br >-------------------------=
-------------+-----------------+--------------+-----------+----------+--------=
------+----------------+-----------------------------+-------------------<br >=
688aec34-5630-478e-ae5e-9d57990804e5 |  =
; 32057 |  =
; 45836 |  =
;11 | 11 |  =
; 321 | &=
nbsp; 65 | &nbs=
p; &nbs=
p; 2 |&n=
bsp; &n=
bsp; 0<br > 2ae3a219-ae9a-4347-b1e2-0e100360231e |&=
nbsp; 32057 |&=
nbsp; 26120 | &=
nbsp; 7 |  =
; 7 | &nb=
sp;321 |  =
; 65 | &n=
bsp; &n=
bsp; 2 | =
0<br >(2 rows=
)<br ><br ><br ><br >### memory ###<br >[n33]<br ># free -=
m<br > =
total used &nb=
sp; free shared  =
; buffers cached<br >Mem: &=
nbsp; 32057 31=
770 287 =
0  =
; 41 6347<br >-/+ buf=
fers/cache: 25381 &n=
bsp; 6676<br >Swap: =
29999 10025 &n=
bsp;19974<br ><br >Physical Memory: &n=
bsp; &n=
bsp; 32057 MB total,&=
nbsp;25646 MB used, 6411 MB free<br >Swap Size:&=
nbsp; &=
nbsp; &=
nbsp; 29999 MB total, =
10025 MB used, 19974 MB free<br >Max free M=
emory for scheduling new VMs:  =
;1928.5 MB<br ><br ><br >[n34]<br ># free -m<br > &n=
bsp; total &nb=
sp; used  =
;free shared buffers =
cached<br >Mem: &nb=
sp; 32057 31713  =
; 344 &n=
bsp; 0 78 =
; 13074<br >-/+ buffers/cache: &n=
bsp; 18560 13497<br >Swap:=
29999 &=
nbsp; 5098 24901<br ><br >Physic=
al Memory: &nb=
sp; &nb=
sp; 32057 MB total, 18593 MB u=
sed, 13464 MB free<br >Swap Size: &=
nbsp; &=
nbsp; &=
nbsp; 29999 MB total, 5098 MB used,&=
nbsp;24901 MB free<br >Max free Memory for sched=
uling new VMs: 21644.5 MB<br ><br =
><br ><br >### code ###<br >##from: https://github.com/oVirt/ov=
irt-engine<br >v3.6.0<br ><br >##from: D:\code\java\ovirt-engine\backend\=
manager\modules\dal\src\main\resources\bundles\AppErrors.properties<br >VAR__D=
ETAIL__SWAP_VALUE_ILLEGAL=3D$detailMessage its swap value =
was illegal<br ><br >##from: D:\code\java\ovirt-engine\backend\manag=
er\modules\bll\src\main\java\org\ovirt\engine\core\bll\scheduling\policyunits\=
MemoryPolicyUnit.java<br >#-----------code--------------1#<br > &nb=
sp; private boolean isVMSwapValueLegal(VDS host) {<br=
> if (!Config.<Boolean=
> getValue(ConfigValues.EnableSwapCheck)) {<br >  =
;  =
; return true;<br > &=
nbsp; }<br > &=
nbsp; (omitted..)<br > &nb=
sp;return ((swap_total - swap_free - mem_available)&n=
bsp;* 100 / physical_mem_mb) <=3D Config.<Integ=
er> getValue(ConfigValues.BlockMigrationOnSwapUsagePercentage)<br >&nb=
sp; (omitted..)<br > } <br >#----=
-------code--------------1#<br >if EnableSwapCheck =3D False&nb=
sp;then return True, so we can simply disab=
le this option? Any Suggestion?<br ><br >[root@engine =
;~]# engine-config --get BlockMigrationOnSwapUsagePercentage<br=
>BlockMigrationOnSwapUsagePercentage: 0 version: general<br ><=
br >so,,<br >Config.<Integer> getValue(ConfigValues.BlockMigrationO=
nSwapUsagePercentage) =3D 0<br >so,,<br >(swap_total - swa=
p_free - mem_available) * 100 / physical_mem_mb&=
nbsp;<=3D 0<br >so,,<br >swap_total - swap_free - =
mem_available <=3D 0<br >right?<br >so,, if (swap_total=
- swap_free) <=3D mem_available then return&=
nbsp;True else return False<br ><br ><br >#-----------code-----=
---------2#<br > for (VDS v=
ds : hosts) {<br > &nb=
sp; if (!isVMSwapValueLegal(vds)) {<br > =
;  =
; log.debug("Host '{}' swap value is illeg=
al", vds.getName());<br > =
messages.addMessage(vds.getId(=
), EngineMessage.VAR__DETAIL__SWAP_VALUE_ILLEGAL.toString());<br > &=
nbsp; &=
nbsp; continue;<br >  =
; }<br >  =
; if (!memoryChecker.evaluate(vds, vm)) {<br >=
=
int hostAavailableMem =3D SlaValidator.getIns=
tance().getHostAvailableMemoryLimit(vds);<br > &n=
bsp; log.debug("Hos=
t '{}' has {} MB available. Insufficient me=
mory to run the VM",<br > &nb=
sp; &nb=
sp; vds.getName(),<br > &n=
bsp; &n=
bsp; hostAavailableMem);<br > &n=
bsp; &n=
bsp;messages.addMessage(vds.getId(), String.format("$availableMem %1=
$d", hostAavailableMem));<br > &=
nbsp; messages.addMessage(vds.g=
etId(), EngineMessage.VAR__DETAIL__NOT_ENOUGH_MEMORY.toString());<br >&nb=
sp; &nb=
sp; continue;<br > &=
nbsp; }<br > &=
nbsp; (omitted..)<br > &nb=
sp; }<br ><br >#-----------code--------------2#<br >!isVMSwapValueLegal&n=

So if !isVMSwapValueLegal, the host is filtered out (a message is added and
the loop continues), right?
Which means that when we migrate a VM from n33 to n34, the swap status on n34
actually is:
(swap_total - swap_free) > mem_available

swap_used > mem_available? Confused...

So the logic is:
1) check n33: swap [passed], then memory [failed], then goto (for..continue..loop)
2) check n34: swap [failed], then goto (for..continue..loop)

If I have misunderstood anything, please let me know.
<br ><br ><br ><br >### conclusion ###<br >1) n33 do =
not have enough memory. [yes, I know that.]=
<br >2) n34 memory is illegal [why and how&=
nbsp;to solve it?]<br >3) what I tried:<br >--change&=
nbsp;config: BlockMigrationOnSwapUsagePercentage<br >[root@engine ~]=
# engine-config --set BlockMigrationOnSwapUsagePercentage=3D75&=
nbsp;-cver general<br >[root@engine ~]# engine-config --ge=
t BlockMigrationOnSwapUsagePercentage<br >BlockMigrationOnSwapUsagePercen=
tage: 75 version: general<br ><br >Result: failed.<br ><br=
>--disable EnableSwapCheck<br >How? Option not found =
;from 'engine-config --list', should I update ta=
ble field direct from db?<br ><br ><br >--disk swap&n=
bsp;partition on host<br >Should I do this opera=
tion?<br ><br >--update ovirt-engine?<br >No useful infomation&=
nbsp;found in latest release note, should I =
;do this operation?<br ><br ><br >### help ###<br >any help&nbs=
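
On the EnableSwapCheck point: if I recall correctly, ConfigValues entries
that engine-config does not expose still live in the engine database's
vdc_options table, so a read-only check is at least possible. A hedged
Python sketch (table name and default credentials are assumptions for a
stock 3.6 install; changing the value behind the engine's back is
unsupported):

import psycopg2  # assumes DB access from the engine host

conn = psycopg2.connect(dbname="engine", user="engine",
                        host="localhost", password="...")  # site-specific
cur = conn.cursor()
cur.execute("SELECT option_name, option_value, version "
            "FROM vdc_options WHERE option_name = 'EnableSwapCheck'")
print(cur.fetchall())
# Setting option_value to 'false' here would disable the check, but only
# do that with a DB backup and an engine restart afterwards.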

### help ###
Any help would be appreciated.

ZYXW. Reference
http://www.ovirt.org/Sla/FreeMemoryCalculation
http://lists.ovirt.org/pipermail/users/2012-November/010858.html
http://lists.ovirt.org/pipermail/users/2013-March/013201.html
http://comments.gmane.org/gmane.comp.emulators.ovirt.user/19288
http://jim.rippon.me.uk/2013/07/ovirt-testing-english-instructions-for.html
9 years, 4 months
Add node to engine 3.6: "no unique id"
by gregor
Hi,
I added an oVirt node (3.6) to my engine (3.6) and get the following
message when I try to activate it:
---------------
Cannot activate Host. Host has no unique id.
---------------
Image: ovirt-node-iso-3.6-0.999.201512132115.el7.centos.iso
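One commonly reported cause (an assumption here, not verified against this
exact image) is a missing or empty /etc/vdsm/vdsm.id on the node. A minimal
Python sketch to regenerate it before retrying activation:

# Sketch: regenerate the host UUID that vdsm reports to the engine.
# Assumption: the engine's "no unique id" check reads /etc/vdsm/vdsm.id.
import uuid

with open("/etc/vdsm/vdsm.id", "w") as f:
    f.write(str(uuid.uuid4()) + "\n")
# Restart vdsmd afterwards; on ovirt-node images the file may also need
# to be persisted so it survives a reboot.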
regards
gregor
9 years, 4 months
VM CPU stuck after upgrade to 3.6
by gregor
Hi,
I recently upgraded from 3.5 to 3.6. The way I did it may seem complicated
to others, but it worked: export the VMs, remove 3.5, install 3.6
fresh, and import the VMs. The normal upgrade path did not work in my
case, and after many tries I did it the way described.
Now I have sometimes the problem that my Centos VM's throw a kernel
error like this:
--------------
kernel:BUG: soft lockup - CPU#0 stuck for 22s! [python:730]
--------------
What's wrong here? I see no resource problem; the host usage is: memory
85%, CPU 62%.
regards
gregor
9 years, 4 months
ovirt-aaa-jdbc-tool error on engine-setup
by Kevin C
Hi list,
I cannot set up an ovirt-engine on a new server. It fails with: [ ERROR ]
Failed to execute stage 'Misc configuration': Command
'/usr/bin/ovirt-aaa-jdbc-tool' failed to execute.
What do I need to do to debug this?
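A first step that usually helps (hedged; the path below is the default
engine-setup log location, assumed unchanged on your install) is to pull
the tool's output from the newest setup log. A small Python sketch:

# Sketch: scan the most recent engine-setup log for the failing command.
# Assumption: default log directory /var/log/ovirt-engine/setup/.
import glob
import os

logs = sorted(glob.glob("/var/log/ovirt-engine/setup/ovirt-engine-setup-*.log"),
              key=os.path.getmtime)
with open(logs[-1], errors="replace") as f:
    for line in f:
        if "ovirt-aaa-jdbc-tool" in line or "ERROR" in line:
            print(line.rstrip())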
Regards
Kevin C
9 years, 4 months
Hosted Engine RAM
by Justin Foreman
My FCP HE is now successfully imported into oVirt after 3.6.2 rc1. I wanted to increase its RAM, but found that I couldn’t. I found this article and now understand that doing most management tasks on the HE through the UI is a work in progress:
http://www.ovirt.org/Hosted_engine_VM_management
I went to edit /etc/ovirt-hosted-engine/vm.conf to increase memSize but vm.conf doesn’t exist. Is there any way that I can increase the RAM during this time of transition for HE management?
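In case it helps while the UI path is unfinished: in 3.6 the HA agent keeps a
runtime copy of the VM definition extracted from the shared storage (the path
below is an assumption for a default install, and edits to it do not
persist). A Python sketch that at least confirms the current memSize:

# Sketch: read memSize from the HA agent's runtime copy of vm.conf.
# Assumption: /var/run/ovirt-hosted-engine-ha/vm.conf on a 3.6 host;
# the authoritative copy lives on the hosted-engine shared storage.
with open("/var/run/ovirt-hosted-engine-ha/vm.conf") as f:
    for line in f:
        if line.startswith("memSize"):
            print(line.strip())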
Thanks,
Justin
9 years, 4 months
Re: [ovirt-users] mount a usb
by Fernando Fuentes
I am a bit confused by your reply.
You run this from the oVirt engine. Once you run the command it will ask you
what oVirt version... Choose the correct one and restart the engine... Then
you attach your USB device to the host where your VM resides... Once
attached, you edit your VM and add your hot plug.
What puzzles me is that you mention you are trying to attach it to a
VMware?
You are running a hypervisor within a hypervisor?
On Wed, Dec 23, 2015 at 11:44 PM, alireza sadeh seighalan
<seighalani(a)gmail.com> wrote:
hi again
i double checked that document but it doesn't work. i have 2 servers (main and
hypervisor1). i installed vdsm-hook-hostusb, then ran this command on the
main server:
# engine-config -s
"UserDefinedVMProperties=hostusb=^0x[0-9a-fA-F]{4}:0x[0-9a-fA-F]{4}$"
after that I restarted ovirt-engine, attached the USB device to hypervisor1,
ran the lsusb command and took the USB id, for example:
0x920:0x3214
and added it in my vmware. I start vmware but it didn't start.
what should i do? (passthrough in vmware is done in a few easy steps)!!
On Wed, Dec 23, 2015 at 8:32 AM, alireza sadeh seighalan
<seighalani(a)gmail.com> wrote:
hi Fernando Fuentes
thanks for your kind help :)
best regards
On Wed, Dec 23, 2015 at 7:42 AM, Fernando Fuentes <ffuentes(a)darktcp.net>
wrote:
I followed this blog and it worked for me like a charm.
http://blog.conoracallahan.com/blog/2014/07/19/ovirt-usb-passthrough/
--
Fernando Fuentes
ffuentes(a)txweather.org [ffuentes(a)txweather.org]
http://www.txweather.org [http://www.txweather.org]
On Tue, Dec 22, 2015, at 10:08 PM, alireza sadeh seighalan wrote:
hi everyone
i want to mount a usb to a windows vm in ovirt3.6.1. how can i run it?
thanks in advance
9 years, 4 months
VM database update ( VMrestore)
by paf1@email.cz
Hello,
1) If I've got a full backup of the VMs' datastore (e.g. via rsync) and
restore one VM's files to a different store with an empty oVirt database,
will that database be updated automatically, or are more actions needed?
2) Are VM files stored in the oVirt database by name only? (no checksum
or other IDs) - meaning, can I replace a VM file with another one
of a different size?
regs.
Paf1
9 years, 4 months
aaa-LDAP schema selection
by Jamie Lawrence
Hello all,
I'd like to get the LDAP plugin working. We have a lovely LDAP setup
deployed (OpenLDAP), and nobody here has a clue how to map what we have to
the options the installer presents.

Well, a clue, yes.

We include the core, cosine, nis, inetorgperson and misc schemas in the
config.

The RHDS, 389, AD, IPA and Novell options are eliminated because we aren't
running any of that. I eliminated 'RFC-2307 Schema (Generic)' by finding
attributes not included in the RFC, but added by OpenLDAP.

Assuming what we are running maps to any of them, one of the 'OpenLDAP
[RFC-2307|Standard] Schema' options seems likely.

Does anyone know of a test (an attribute that should be in one, or not in
another, or some such) to figure this out? Can it be inferred from my
schema includes (listed above)? I fear that determining this via process
of elimination is going to be brutal due to difficult-to-replicate
weirdness because of only minor differences, and the fact that there are
other moving parts at the moment with this setup.
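
One possible discriminator (an assumption based on the published schemas,
not verified against this particular directory): plain RFC 2307 posixGroup
entries carry memberUid strings, while rfc2307bis-style ("Standard") setups
also use DN-valued member/uniqueMember attributes. A minimal sketch with
the third-party ldap3 Python library; host, bind DN and base DN below are
placeholders:

# Sketch: see which membership attribute your groups actually use.
from ldap3 import Server, Connection

server = Server("ldap://ldap.example.com")
conn = Connection(server, user="cn=admin,dc=example,dc=com",
                  password="...", auto_bind=True)
conn.search("ou=groups,dc=example,dc=com", "(objectClass=posixGroup)",
            attributes=["memberUid", "member", "uniqueMember"])
for entry in conn.entries:
    print(entry.entry_dn, entry.entry_attributes_as_dict)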
And to those who enjoy them, happy holidays.
-j
9 years, 4 months
How to add a Gluster storage domain on hyper-converged?
by Will Dennis
--_000_F3282EEAFF180F43BAF1AD0A7C34739D391C0Fnjmailneclabscom_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
Hi all,
I have a three-node hyper-converged oVirt datacenter running; now I need to
add my first storage domain. I had prepped for this before installing oVirt
by creating two distributed Gluster volumes with 3x replicas (one for the
hosted engine, one for VM storage) -

[root@ovirt-node-01 ~]# gluster volume info | grep -e "Name" -e "Type" -e "Number"
Volume Name: engine
Type: Distributed-Replicate
Number of Bricks: 2 x 3 = 6
Volume Name: vmdata
Type: Distributed-Replicate
Number of Bricks: 2 x 3 = 6

Now I'd like to use the "vmdata" volume for my storage domain. When in
webadmin I select "New Domain" I get a dialog that lets me select GlusterFS
as the storage type, but then requires a "Use host:" setting, and a path. Is
it possible for me to select one of my oVirt hosts (they all have the
'vmdata' volume), and then use "localhost:/vmdata" for the path? Or will
this not work?

I know this isn't officially supported yet, but if I can get it to work
somehow, that'd be great :) It's a non-production (PoC) setup, so the cost
of failure should be low... That said, I don't want to trash my rig and
have to redo the whole thing all over ;)

Thanks,
Will
9 years, 4 months
Re: [ovirt-users] gwt super dev mode
by Alexander Wels
On Wednesday, December 23, 2015 11:15:17 AM royin rolland wrote:
> Hi,
>
> Depending on your patch, appear “Error while executing action: A Request to
> the Server failed with the following Status Code: 500".
Did you do step 2?
Set GWT_CODESVR=localhost:9876 in the shell you are going to start the
engine in. This will allow the engine to know where to get the permutations
generated by the "code" server below.
Before you start the engine, which I am assuming you are running from a shell
like so:
$HOME/ovirt-engine/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py
start
Make sure you do this:
GWT_CODESVR=localhost:9876
export GWT_CODESVR
Then run the command to start the engine.
> > 在 2015年12月21日,22:54,Alexander Wels <awels(a)redhat.com> 写道:
> >
> > On Thursday, December 17, 2015 09:05:10 AM Vojtech Szocs wrote:
> >> Hi,
> >>
> >> oVirt UI currently uses GWT SDK version 2.6.1
> >> In GWT 2.6.x "classic" dev mode is still the default one.
> >>
> >> We tried to use "super" dev mode some time ago [1] but it
> >> didn't work for us, probably due to using direct-eval RPC
> >> mechanism (we got HTTP 500 responses for RPC requests).
> >>
> >> [1] https://gerrit.ovirt.org/#/c/26093/
> >>
> >> Because "classic" dev mode relies on NPAPI-based browser
> >> plugin, the downside is that developers must use old'ish
> >> browsers that still support NPAPI (Firefox <= 26 etc).
> >>
> >> I think we can revisit this and try to experiment with
> >> "super" dev mode as I believe [1] was done in context of
> >> GWT 2.5.x anyway.
> >>
> >> Regards,
> >> Vojtech
> >
> > Hi,
> >
> > I updated the initial WIP patch and I think it should work if you follow
> > the instructions in the new patch [2]. Feel free to give it a whirl and
> > give me feedback.
> >
> > [2] https://gerrit.ovirt.org/#/c/50742/
> > <https://gerrit.ovirt.org/#/c/50742/>>
> >> ----- Original Message -----
> >>
> >>> From: "royin rolland" <royinrolland(a)yahoo.com
> >>> <mailto:royinrolland@yahoo.com>> To: vszocs(a)redhat.com
> >>> <mailto:vszocs@redhat.com>
> >>> Cc: users(a)ovirt.org <mailto:users@ovirt.org>
> >>> Sent: Wednesday, December 16, 2015 2:49:25 AM
> >>> Subject: gwt super dev mode
> >>>
> >>> hi,vszos:
> >>>
> >>> ovirt engine when it supports super dev mode?ovirt 3.6?
9 years, 4 months
Ovirt 3.5 host gluster storage connection failure
by Steve Dainard
I have two hosts, only one of them was running VM's at the time of
this crash so I can't tell if this is a node specific problem.
rpm -qa | egrep -i 'gluster|vdsm|libvirt' |sort
glusterfs-3.6.7-1.el7.x86_64
glusterfs-api-3.6.7-1.el7.x86_64
glusterfs-cli-3.6.7-1.el7.x86_64
glusterfs-fuse-3.6.7-1.el7.x86_64
glusterfs-libs-3.6.7-1.el7.x86_64
glusterfs-rdma-3.6.7-1.el7.x86_64
libvirt-client-1.2.8-16.el7_1.5.x86_64
libvirt-daemon-1.2.8-16.el7_1.5.x86_64
libvirt-daemon-config-nwfilter-1.2.8-16.el7_1.5.x86_64
libvirt-daemon-driver-interface-1.2.8-16.el7_1.5.x86_64
libvirt-daemon-driver-network-1.2.8-16.el7_1.5.x86_64
libvirt-daemon-driver-nodedev-1.2.8-16.el7_1.5.x86_64
libvirt-daemon-driver-nwfilter-1.2.8-16.el7_1.5.x86_64
libvirt-daemon-driver-qemu-1.2.8-16.el7_1.5.x86_64
libvirt-daemon-driver-secret-1.2.8-16.el7_1.5.x86_64
libvirt-daemon-driver-storage-1.2.8-16.el7_1.5.x86_64
libvirt-daemon-kvm-1.2.8-16.el7_1.5.x86_64
libvirt-lock-sanlock-1.2.8-16.el7_1.5.x86_64
libvirt-python-1.2.8-7.el7_1.1.x86_64
vdsm-4.16.30-0.el7.centos.x86_64
vdsm-cli-4.16.30-0.el7.centos.noarch
vdsm-jsonrpc-4.16.30-0.el7.centos.noarch
vdsm-python-4.16.30-0.el7.centos.noarch
vdsm-python-zombiereaper-4.16.30-0.el7.centos.noarch
vdsm-xmlrpc-4.16.30-0.el7.centos.noarch
vdsm-yajsonrpc-4.16.30-0.el7.centos.noarch
VM's were in a paused state, with errors in UI:
2015-Dec-22, 15:06
VM pcic-apps has paused due to unknown storage error.
2015-Dec-22, 15:06
Host compute2 is not responding. It will stay in Connecting state for
a grace period of 82 seconds and after that an attempt to fence the
host will be issued.
2015-Dec-22, 15:03
Invalid status on Data Center EDC2. Setting Data Center status to Non
Responsive (On host compute2, Error: General Exception).
2015-Dec-22, 15:03
VM pcic-storage has paused due to unknown storage error.
2015-Dec-22, 15:03
VM docker1 has paused due to unknown storage error.
VDSM logs look normal until:
Dummy-99::DEBUG::2015-12-22
23:03:58,949::storage_mailbox::731::Storage.Misc.excCmd::(_checkForMail)
dd if=/rhev/data-center/f72ec125-69a1-4c1b-a5e1-313fcb70b6ff/mastersd/dom_md/inbox
iflag=direct,fullblock count=1 bs=1024000 (cwd None)
Dummy-99::DEBUG::2015-12-22
23:03:58,963::storage_mailbox::731::Storage.Misc.excCmd::(_checkForMail)
SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0
MB) copied, 0.00350501 s, 292 MB/s\n'; <rc> = 0
VM Channels Listener::INFO::2015-12-22
23:03:59,527::guestagent::180::vm.Vm::(_handleAPIVersion)
vmId=`7067679e-43aa-43c0-b263-b0a711ade2e2`::Guest API version changed
from 2 to 1
Thread-245428::DEBUG::2015-12-22
23:03:59,718::libvirtconnection::151::root::(wrapper) Unknown
libvirterror: ecode: 80 edom: 20 level: 2 message: metadata not found:
Requested metadata element is not present
libvirtEventLoop::INFO::2015-12-22
23:04:00,447::vm::4982::vm.Vm::(_onIOError)
vmId=`376e98b7-7798-46e8-be03-5dddf6cfb54f`::abnormal vm stop device
virtio-disk0 error eother
libvirtEventLoop::DEBUG::2015-12-22
23:04:00,447::vm::5666::vm.Vm::(_onLibvirtLifecycleEvent)
vmId=`376e98b7-7798-46e8-be03-5dddf6cfb54f`::event Suspended detail 2
opaque None
libvirtEventLoop::INFO::2015-12-22
23:04:00,447::vm::4982::vm.Vm::(_onIOError)
vmId=`376e98b7-7798-46e8-be03-5dddf6cfb54f`::abnormal vm stop device
virtio-disk0 error eother
...
libvirtEventLoop::INFO::2015-12-22
23:04:00,843::vm::4982::vm.Vm::(_onIOError)
vmId=`97fbbf97-944b-4b77-b0bf-6a831f9090d8`::abnormal vm stop device
virtio-disk1 error eother
libvirtEventLoop::DEBUG::2015-12-22
23:04:00,844::vm::5666::vm.Vm::(_onLibvirtLifecycleEvent)
vmId=`97fbbf97-944b-4b77-b0bf-6a831f9090d8`::event Suspended detail 2
opaque None
Dummy-99::DEBUG::2015-12-22
23:04:00,973::storage_mailbox::731::Storage.Misc.excCmd::(_checkForMail)
dd if=/rhev/data-center/f72ec125-69a1-4c1b-a5e1-313fcb70b6ff/mastersd/dom_md/inbox
iflag=direct,fullblock count=1 bs=1024000 (cwd None)
Dummy-99::DEBUG::2015-12-22
23:04:00,983::storage_mailbox::731::Storage.Misc.excCmd::(_checkForMail)
FAILED: <err> = "dd: failed to open
'/rhev/data-center/f72ec125-69a1-4c1b-a5e1-313fcb70b6ff/mastersd/dom_md/inbox':
Transport endpoint is not connected\n"; <rc> = 1
Dummy-99::ERROR::2015-12-22
23:04:00,983::storage_mailbox::787::Storage.MailBox.SpmMailMonitor::(run)
Error checking for mail
Traceback (most recent call last):
File "/usr/share/vdsm/storage/storage_mailbox.py", line 785, in run
self._checkForMail()
File "/usr/share/vdsm/storage/storage_mailbox.py", line 734, in _checkForMail
"Could not read mailbox: %s" % self._inbox)
IOError: [Errno 5] _handleRequests._checkForMail - Could not read
mailbox: /rhev/data-center/f72ec125-69a1-4c1b-a5e1-313fcb70b6ff/mastersd/dom_md/inbox
Dummy-99::DEBUG::2015-12-22
23:04:02,987::storage_mailbox::731::Storage.Misc.excCmd::(_checkForMail)
dd if=/rhev/data-center/f72ec125-69a1-4c1b-a5e1-313fcb70b6ff/mastersd/dom_md/inbox
iflag=direct,fullblock count=1 bs=1024000 (cwd None)
Dummy-99::DEBUG::2015-12-22
23:04:02,994::storage_mailbox::731::Storage.Misc.excCmd::(_checkForMail)
FAILED: <err> = "dd: failed to open
'/rhev/data-center/f72ec125-69a1-4c1b-a5e1-313fcb70b6ff/mastersd/dom_md/inbox':
Transport endpoint is not connected\n"; <rc> = 1
Dummy-99::ERROR::2015-12-22
23:04:02,994::storage_mailbox::787::Storage.MailBox.SpmMailMonitor::(run)
Error checking for mail
Traceback (most recent call last):
File "/usr/share/vdsm/storage/storage_mailbox.py", line 785, in run
self._checkForMail()
File "/usr/share/vdsm/storage/storage_mailbox.py", line 734, in _checkForMail
"Could not read mailbox: %s" % self._inbox)
IOError: [Errno 5] _handleRequests._checkForMail - Could not read
mailbox: /rhev/data-center/f72ec125-69a1-4c1b-a5e1-313fcb70b6ff/mastersd/dom_md/inbox
Thread-563692::DEBUG::2015-12-22
23:04:03,539::__init__::481::jsonrpc.JsonRpcServer::(_serveRequest)
Calling 'StoragePool.getSpmStatus' in bridge with {u'storagepoolID':
u'f72ec125-69a1-4c1b-a5e1-313fcb70b6ff'}
Thread-563692::DEBUG::2015-12-22
23:04:03,540::task::595::Storage.TaskManager.Task::(_updateState)
Task=`6121360f-af29-48cf-8a8c-957480472a9d`::moving from state init ->
state preparing
Thread-563692::INFO::2015-12-22
23:04:03,540::logUtils::44::dispatcher::(wrapper) Run and protect:
getSpmStatus(spUUID=u'f72ec125-69a1-4c1b-a5e1-313fcb70b6ff',
options=None)
Thread-563692::ERROR::2015-12-22
23:04:03,541::task::866::Storage.TaskManager.Task::(_setError)
Task=`6121360f-af29-48cf-8a8c-957480472a9d`::Unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 873, in _run
return fn(*args, **kargs)
File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/storage/hsm.py", line 633, in getSpmStatus
status = self._getSpmStatusInfo(pool)
File "/usr/share/vdsm/storage/hsm.py", line 627, in _getSpmStatusInfo
(pool.spmRole,) + pool.getSpmStatus()))
File "/usr/share/vdsm/storage/sp.py", line 126, in getSpmStatus
return self._backend.getSpmStatus()
File "/usr/share/vdsm/storage/spbackends.py", line 416, in getSpmStatus
lVer, spmId = self.masterDomain.inquireClusterLock()
File "/usr/share/vdsm/storage/sd.py", line 515, in inquireClusterLock
return self._clusterLock.inquire()
File "/usr/share/vdsm/storage/clusterlock.py", line 321, in inquire
resource = sanlock.read_resource(self._leasesPath, SDM_LEASE_OFFSET)
SanlockException: (107, 'Sanlock resource read failure', 'Transport
endpoint is not connected')
...
I noticed the gluster mount point was still listed by 'mount', but
when I tried to ls the mount point I got a "transport endpoint is not
connected" error.
I haven't been able to find anything more interesting in the logs, and
the journal is blank before the recent host restart.
In the gluster client logs from the host I see:
...
[2015-12-22 22:52:23.238369] W [fuse-bridge.c:1261:fuse_err_cbk]
0-glusterfs-fuse: 31263720: REMOVEXATTR() /__DIRECT_IO_TEST__ => -1
(No dat
a available)
[2015-12-22 22:57:23.778018] W [fuse-bridge.c:1261:fuse_err_cbk]
0-glusterfs-fuse: 31316031: REMOVEXATTR() /__DIRECT_IO_TEST__ => -1
(No dat
a available)
[2015-12-22 23:02:24.376491] W [fuse-bridge.c:1261:fuse_err_cbk]
0-glusterfs-fuse: 31473552: REMOVEXATTR() /__DIRECT_IO_TEST__ => -1
(No dat
a available)
pending frames:
frame : type(1) op(FSTAT)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(0) op(0)
frame : type(0) op(0)
...
frame : type(1) op(READ)
frame : type(0) op(0)
frame : type(1) op(OPEN)
frame : type(1) op(OPEN)
frame : type(1) op(OPEN)
frame : type(1) op(OPEN)
frame : type(0) op(0)
patchset: git://git.gluster.com/glusterfs.git
signal received: 6
time of crash:
2015-12-22 23:04:00
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.6.7
/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xb2)[0x7f0d091f6392]
/lib64/libglusterfs.so.0(gf_print_trace+0x32d)[0x7f0d0920d88d]
/lib64/libc.so.6(+0x35650)[0x7f0d0820f650]
/lib64/libc.so.6(gsignal+0x37)[0x7f0d0820f5d7]
/lib64/libc.so.6(abort+0x148)[0x7f0d08210cc8]
/lib64/libc.so.6(+0x75e07)[0x7f0d0824fe07]
/lib64/libc.so.6(+0x7d1fd)[0x7f0d082571fd]
/usr/lib64/glusterfs/3.6.7/xlator/protocol/client.so(client_local_wipe+0x39)[0x7f0cfe8acdf9]
/usr/lib64/glusterfs/3.6.7/xlator/protocol/client.so(client3_3_readv_cbk+0x487)[0x7f0cfe8c1197]
/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)[0x7f0d08fca100]
/lib64/libgfrpc.so.0(rpc_clnt_notify+0x174)[0x7f0d08fca374]
/lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7f0d08fc62c3]
/usr/lib64/glusterfs/3.6.7/rpc-transport/socket.so(+0x8790)[0x7f0d047f3790]
/usr/lib64/glusterfs/3.6.7/rpc-transport/socket.so(+0xaf84)[0x7f0d047f5f84]
/lib64/libglusterfs.so.0(+0x767c2)[0x7f0d0924b7c2]
/usr/sbin/glusterfs(main+0x502)[0x7f0d096a0fe2]
/lib64/libc.so.6(__libc_start_main+0xf5)[0x7f0d081fbaf5]
/usr/sbin/glusterfs(+0x6381)[0x7f0d096a1381]
In /var/log/messages:
...
Dec 22 23:00:49 compute2 journal: metadata not found: Requested
metadata element is not present
Dec 22 23:00:59 compute2 journal: metadata not found: Requested
metadata element is not present
Dec 22 23:01:01 compute2 systemd: Created slice user-0.slice.
Dec 22 23:01:01 compute2 systemd: Starting Session 1316 of user root.
Dec 22 23:01:01 compute2 systemd: Started Session 1316 of user root.
Dec 22 23:01:01 compute2 systemd: Failed to reset devices.list on
/machine.slice: Invalid argument
Dec 22 23:01:03 compute2 journal: metadata not found: Requested
metadata element is not present
...
Dec 22 23:04:00 compute2
rhev-data-center-mnt-glusterSD-10.0.231.50:_vm-storage[14098]: pending
frames:
Dec 22 23:04:00 compute2
rhev-data-center-mnt-glusterSD-10.0.231.50:_vm-storage[14098]: frame :
type(1) op(FSTAT)
Dec 22 23:04:00 compute2
rhev-data-center-mnt-glusterSD-10.0.231.50:_vm-storage[14098]: frame :
type(1) op(READ)
Dec 22 23:04:00 compute2
rhev-data-center-mnt-glusterSD-10.0.231.50:_vm-storage[14098]: frame :
type(1) op(READ)
...
Dec 22 23:04:00 compute2
rhev-data-center-mnt-glusterSD-10.0.231.50:_vm-storage[14098]: frame :
type(1) op(OPEN)
Dec 22 23:04:00 compute2
rhev-data-center-mnt-glusterSD-10.0.231.50:_vm-storage[14098]: frame :
type(0) op(0)
Dec 22 23:04:00 compute2
rhev-data-center-mnt-glusterSD-10.0.231.50:_vm-storage[14098]:
patchset: git://git.gluster.com/glusterfs.git
Dec 22 23:04:00 compute2
rhev-data-center-mnt-glusterSD-10.0.231.50:_vm-storage[14098]: signal
received: 6
Dec 22 23:04:00 compute2
rhev-data-center-mnt-glusterSD-10.0.231.50:_vm-storage[14098]: time of
crash:
Dec 22 23:04:00 compute2
rhev-data-center-mnt-glusterSD-10.0.231.50:_vm-storage[14098]:
2015-12-22 23:04:00
Dec 22 23:04:00 compute2
rhev-data-center-mnt-glusterSD-10.0.231.50:_vm-storage[14098]:
configuration details:
Dec 22 23:04:00 compute2
rhev-data-center-mnt-glusterSD-10.0.231.50:_vm-storage[14098]: argp 1
Dec 22 23:04:00 compute2
rhev-data-center-mnt-glusterSD-10.0.231.50:_vm-storage[14098]:
backtrace 1
Dec 22 23:04:00 compute2
rhev-data-center-mnt-glusterSD-10.0.231.50:_vm-storage[14098]: dlfcn 1
Dec 22 23:04:00 compute2
rhev-data-center-mnt-glusterSD-10.0.231.50:_vm-storage[14098]:
libpthread 1
Dec 22 23:04:00 compute2
rhev-data-center-mnt-glusterSD-10.0.231.50:_vm-storage[14098]:
llistxattr 1
Dec 22 23:04:00 compute2
rhev-data-center-mnt-glusterSD-10.0.231.50:_vm-storage[14098]: setfsid
1
Dec 22 23:04:00 compute2
rhev-data-center-mnt-glusterSD-10.0.231.50:_vm-storage[14098]:
spinlock 1
Dec 22 23:04:00 compute2
rhev-data-center-mnt-glusterSD-10.0.231.50:_vm-storage[14098]: epoll.h
1
Dec 22 23:04:00 compute2
rhev-data-center-mnt-glusterSD-10.0.231.50:_vm-storage[14098]: xattr.h
1
Dec 22 23:04:00 compute2
rhev-data-center-mnt-glusterSD-10.0.231.50:_vm-storage[14098]:
st_atim.tv_nsec 1
Dec 22 23:04:00 compute2
rhev-data-center-mnt-glusterSD-10.0.231.50:_vm-storage[14098]:
package-string: glusterfs 3.6.7
Dec 22 23:04:00 compute2
rhev-data-center-mnt-glusterSD-10.0.231.50:_vm-storage[14098]:
---------
Dec 22 23:04:03 compute2 sanlock[944]: 2015-12-22 23:04:03+0000 944341
[948]: open error -107
/rhev/data-center/mnt/glusterSD/10.0.231.50:_vm-storage/a5a83df1-47e2-4927-9add-079199ca7ef8/dom_md/leases
Dec 22 23:04:03 compute2 sanlock[944]: 2015-12-22 23:04:03+0000 944342
[944]: ci 8 fd 17 pid -1 recv errno 104
Dec 22 23:04:03 compute2 sanlock[944]: 2015-12-22 23:04:03+0000 944342
[949]: open error -107
/rhev/data-center/mnt/glusterSD/10.0.231.50:_vm-storage/a5a83df1-47e2-4927-9add-079199ca7ef8/dom_md/leases
Dec 22 23:04:03 compute2 sanlock[944]: 2015-12-22 23:04:03+0000 944342
[944]: ci 9 fd 18 pid -1 recv errno 104
Dec 22 23:04:03 compute2 sanlock[944]: 2015-12-22 23:04:03+0000 944342
[948]: open error -1
/rhev/data-center/mnt/glusterSD/10.0.231.50:_vm-storage/a5a83df1-47e2-4927-9add-079199ca7ef8/dom_md/leases
Dec 22 23:04:03 compute2 sanlock[944]: 2015-12-22 23:04:03+0000 944342
[948]: r1 release_token open error -107
Dec 22 23:04:03 compute2 journal: vdsm root ERROR Panic: Unrecoverable
errors during SPM stop process.
Traceback (most recent call last):
File "/usr/share/vdsm/storage/sp.py", line 414, in stopSpm
self.masterDomain.releaseClusterLock()
File "/usr/share/vdsm/storage/sd.py", line 512, in releaseClusterLock
self._clusterLock.release()
File "/usr/share/vdsm/storage/clusterlock.py", line 342, in release
raise se.ReleaseLockFailure(self._sdUUID, e)
ReleaseLockFailure: Cannot release lock:
(u'a5a83df1-47e2-4927-9add-079199ca7ef8', SanlockException(107,
'Sanlock resource not released', 'Transport endpoint is not
connected'))
Dec 22 23:04:03 compute2 journal: End of file while reading data:
Input/output error
Dec 22 23:04:03 compute2 journal: End of file while reading data:
Input/output error
Dec 22 23:04:03 compute2 systemd: vdsmd.service: main process exited,
code=killed, status=9/KILL
Dec 22 23:04:03 compute2 systemd: Failed to reset devices.list on
/machine.slice: Invalid argument
Dec 22 23:04:03 compute2 vdsmd_init_common.sh: vdsm: Running run_final_hooks
Dec 22 23:04:03 compute2 systemd: Unit vdsmd.service entered failed state.
Dec 22 23:04:03 compute2 systemd: vdsmd.service holdoff time over,
scheduling restart.
Dec 22 23:04:03 compute2 systemd: Stopping Virtual Desktop Server Manager...
Dec 22 23:04:03 compute2 systemd: Starting Virtual Desktop Server Manager...
Dec 22 23:04:03 compute2 systemd: Failed to reset devices.list on
/machine.slice: Invalid argument
At this point I tried restarting vdsmd but ended up having to restart
the host. There was network connectivity to gluster before I
restarted, and I was able to manually mount the vm-storage volume on
the host and ls without issue.
After restarting the host I'm up and running again but I'm sure I'll
get hit by this again if I can't sort out what the issue is.
9 years, 4 months
Calculating Requirements
by Taste-Of-IT
Hello,
i have a test system with 16GB RAM and oVirt as a self-hosted engine on
CentOS 7. I have no VM started and the host consumes 2.5GB RAM. The
documentation says the engine needs approx. 3GB RAM and the host system
itself 1GB RAM - nearly 4GB RAM in all. So I can use approx. 12GB RAM
for virtual machines if I map RAM 1:1. Is that a good basis for the
calculation, or are there other points to take into account?
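As a rough illustration of that arithmetic (the 3GB/1GB figures are the
ones quoted above; the overcommit factor is a placeholder, since clusters
can be configured to schedule above 100%), a small Python sketch:

# Sketch of the headroom calculation above; all figures in GB.
total_ram = 16
engine_vm = 3      # self-hosted engine VM, per the docs cited above
host_os = 1        # hypervisor OS reservation, per the docs cited above

usable = total_ram - engine_vm - host_os   # ~12 GB at 1:1
overcommit = 1.0   # placeholder; e.g. 1.2 or 1.5 if the cluster allows it
print("schedulable VM RAM: %.1f GB" % (usable * overcommit))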
thx
Taste
9 years, 4 months
[ANN] oVirt Bugzilla Now Supports Votes.
by Yaniv Dary
Hi all,
I'm happy to announce that voting has been enabled for all the oVirt
projects.
Let us know what RFEs and bugs you hold dear and help us in prioritizing
what to fix first and what will have the most impact.
To vote, please log in, open any oVirt bug and press the vote button (see
attached image).
Thanks!
Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109
Tel : +972 (9) 7692306
8272306
Email: ydary(a)redhat.com
IRC : ydary
9 years, 4 months
[ANN] oVirt 3.6.2 First Release Candidate is now available for testing
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability
of the First Release Candidate of oVirt 3.6.2 for testing, as of December
23rd, 2015.
This release is available now for Fedora 22,
Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and
Red Hat Enterprise Linux >= 7.2, CentOS Linux >= 7.2 (or similar).
This release supports Hypervisor Hosts running
Red Hat Enterprise Linux >= 7.2, CentOS Linux >= 7.2 (or similar) and
Fedora 22.
Highly experimental support for Debian 8.2 Jessie has been added too.
This release candidate includes updated packages for:
- ovirt-engine
- ovirt-engine-dwh
- ovirt-engine-reports
- ovirt-engine-sdk-java
- ovirt-engine-sdk-python
- ovirt-hosted-engine-setup
- ovirt-setup-lib
- vdsm
This release of oVirt 3.6.2 includes numerous bug fixes.
See the release notes [1] for an initial list of the new features and bugs
fixed.
Please refer to release notes [1] for Installation / Upgrade instructions.
A new oVirt Live ISO is already available [2].
Please note that mirrors [3] usually need one day before being
synchronized.
Please refer to the release notes for known issues in this release.
[1] http://www.ovirt.org/OVirt_3.6.2_Release_Notes
[2] http://resources.ovirt.org/pub/ovirt-3.6-pre/iso/
[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
9 years, 4 months
Spice SSL Certificate
by Kristof VAN DEN EYNDEN
I was trying to get Spice or VNC to work on Firefox. After activating the
ovirt-websocket-proxy settings (using this guide
https://access.redhat.com/solutions/718653)

I kept on getting the error - Server disconnected (code: 1006). This pointed
me to other posts stating it was a certificate issue. After doing some
research I found this post: https://bugzilla.redhat.com/show_bug.cgi?id=1098574

So I started tracing the messages: grep -i 'websocket.*trace' /var/log/messages

Dec 23 13:47:07 ovirt36 2015-12-23 13:47:07,314 ovirt-websocket-proxy: INFO msg:824 Got SIGTERM, exiting
Dec 23 13:47:07 ovirt36 2015-12-23 13:47:07,314 ovirt-websocket-proxy: INFO msg:824 In exit
Dec 23 13:47:07 ovirt36 2015-12-23 13:47:07,514 ovirt-websocket-proxy: INFO msg:824 WebSocket server settings:
Dec 23 13:47:07 ovirt36 2015-12-23 13:47:07,515 ovirt-websocket-proxy: INFO msg:824   - Listen on *:6100
Dec 23 13:47:07 ovirt36 2015-12-23 13:47:07,515 ovirt-websocket-proxy: INFO msg:824   - Flash security policy server
Dec 23 13:47:07 ovirt36 2015-12-23 13:47:07,515 ovirt-websocket-proxy: INFO msg:824   - SSL/TLS support
Dec 23 13:47:07 ovirt36 2015-12-23 13:47:07,515 ovirt-websocket-proxy: INFO msg:824   - Deny non-SSL/TLS connections
Dec 23 13:47:07 ovirt36 2015-12-23 13:47:07,515 ovirt-websocket-proxy: INFO msg:824   - Recording to '/tmp/websocketproxy_trace.log.*'
Dec 23 13:47:07 ovirt36 2015-12-23 13:47:07,519 ovirt-websocket-proxy: INFO msg:824   - proxying from *:6100 to targets in /dummy
Dec 23 13:47:19 ovirt36 2015-12-23 13:47:19,543 ovirt-websocket-proxy: INFO msg:824 handler exception: [Errno 1] _ssl.c:1390: error:14094412:SSL routines:SSL3_READ_BYTES:sslv3 alert bad certificate
Dec 23 13:48:12 ovirt36 2015-12-23 13:48:12,328 ovirt-websocket-proxy: INFO msg:824 handler exception: [Errno 1] _ssl.c:1390: error:14094412:SSL routines:SSL3_READ_BYTES:sslv3 alert bad certificate
Dec 23 13:49:49 ovirt36 2015-12-23 13:49:49,420 ovirt-websocket-proxy: INFO msg:824 handler exception: [Errno 1] _ssl.c:1390: error:14094412:SSL routines:SSL3_READ_BYTES:sslv3 alert bad certificate
Dec 23 13:55:36 ovirt36 2015-12-23 13:55:36,114 ovirt-websocket-proxy: INFO msg:824 handler exception: [Errno 1] _ssl.c:1390: error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca
Dec 23 13:56:40 ovirt36 2015-12-23 13:56:40,201 ovirt-websocket-proxy: INFO msg:824 handler exception: [Errno 1] _ssl.c:1390: error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca

So I added the certificate by surfing to https://(ovirt)/ca.crt
which gives the following box in Firefox:

[cid:image001.png@01D13D83.B2200CB0]

So I assume it would be OK now? Nevertheless it still doesn't work!
/var/log/messages still shows the same error? On another post I found that
someone surfed to https://(ovirt):6100 and accepted the certificate there.
So I did the same thing, which solved my problem immediately.

I don't quite understand the issue; it feels like the CA is not getting
authorized, or the 2 certificates do not belong to the same CA?

I can continue like this, but I feel it should be easier to complete?
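
One way to narrow this down (a hedged sketch; host name, port and file name
are placeholders, and it assumes the downloaded ca.crt is the engine CA):
try to validate the proxy's certificate chain against that CA from Python.
A verification failure here would confirm the proxy serves a certificate
not signed by the CA you imported into the browser:

# Sketch: does the websocket proxy's cert on :6100 chain to ca.crt?
import socket
import ssl

ctx = ssl.create_default_context(cafile="ca.crt")
ctx.check_hostname = False  # isolate chain validation from hostname checks

with socket.create_connection(("ovirt.example.com", 6100)) as sock:
    try:
        with ctx.wrap_socket(sock) as tls:
            print("chain OK:", tls.getpeercert() is not None)
    except ssl.SSLError as e:
        print("chain validation failed:", e)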
KiIkiGMzOCSyjAkiiz0chpCDSwyYSSzWHjAYbAiHlcQm+NT/VfXdXd3T18g9o19/9LFHPa9evfet
6npdr1pdFUNDQ5J6LF68WPuMDyAAAiAAAiBQDAKvv/76vn37tm/frimvkEMRBaEjjjjiqquuev/7
3z99+vRi1A2dIAACIAACIEAE9u/f/9LbQ2d/4f/sfv7pN994g86wUERx6MILL1y2bNl9z74FTCAA
AiAAAiAwAgTmn/G+jdveaD77QxSNWCg677zzfvazn/362bdHoG5UAQIgAAIgAAIyger5J1745S9v
XXVnxamnnrpmzZr7n98BNCAAAiAAAiAwwgSqahNfq30fC0WPPfbYnY9sHuHqUR0IgAAIgECcCYzf
8cZDP/7hto1/jNbI4/7xnxZdfuXeyUfLag/94Oz/rDubhaKnnnrqtnURVxat6dAGAiAAAiAwwgTu
/b8XN3/9y/M/koy23t//Idv9/Z9+6r9ul9WOO/mErk+poej2DEJRtLShDQRAAARKm8DKCz91z0O/
eHdwQ7RuTKo++TOLLrhw5b2y2gM1J9z8aTUUrVz/v9FWBm0gAAIgAAIlTeD2hnPXZH6+a/C5igru
x7DiDf1PZ4b5r+wb/lkWUY+KYWmYlRqukNhHfpr9yv6fVP2Bz6a+cHHv/bLw1jnHr64/e4z8S4X1
eOy2he/7ovrz7dWv2ATCnCDlF6zbFkyDWnbbz74dXIlc9SvrWlQfW372Z1/mGGt/8hoGijREYJIv
IyAMAiAAAkUkQKHhwP59w/v2HdjLf+jDfvYjn7knt/Wf7xlopp81A/SZvl2T28p+ZT9PszOsyF6l
oKxk3779XIkx6Ow7wGOQvFb0i0fz5uj9aM/Z3cev7D33eDpNny99aOmtd339LLNM4N+Myv0qCVPW
UNdrd/zbhd+dcf3mptP5ydfuuP/Vi86VPzsewqoZHEnT48ObiBzxUSNEQQAEQMAHgZ76VN99t+/6
U1aezcgzHO24ct2f+n75E/nX+vO/smj21Ideett4pvPj72cxRiugTq0O/adk/dKLm/oy8jd/mnHM
/ec7zopYeTXanr385/960n0P0iQtosOo3K/KMGW1ul5Zu+K7M7+XX/5h9cz0iz+hfXa0SFg1nTxl
+gy/XpB8JI4EqBdFQAAEQMATAYoTbCa0Z+/+feznwB6a0/DPe9m/C2dNpgh0gB+9d/2Y4hD9K/9K
5+nbYSrCf9RSNDHievaZZ0U8vKkJujFShflH/lI5OWPBR2ruevpJ/uv2O645K3EB/+nhZ577gfKh
YswjPWctu3+7KvaDR5Qzq78ly1+z+mVZoa7cpk1WKMtfwDSYKr1m9Ra1rFYX/2CrQrPzmtV36FYp
2tb/Iff50z5iddns3beeY8JMec8PlnF7Ln1Qeua/vyBbJddO/yonuWsG98e8fH+zlZLBKb2g4qPO
Qa4XPyAAAiBwkAnIs5b9lJTjubXh/SyK8A/s30/Oqlow88iGz19Ob/Gh4647fyR/oDN0/hMnVDFh
Oa2n/FAc2je8l8UhNltSR7k9fNFJCUVjKsZYftisSNJP8kkS/fqbngu+M+v7L6367UurfnHNK99Y
dv9rY0752OcffOw3rPgTD75SI/3h91vp859//2vphAR9qKDhe0uKyX//8//7X7c8xytSlQu00ben
fJPr/+3t59zbTfqNlbZIv35QNUyzUFSFrtlYRPGRPKuZe5zNZV7Rr+f/gtf+fWn1KuYIKX/lhP9U
7JFOuYi+/eZC1YWFXyEj+cmOhtkGv7bef/nCPyx8hJd66SsfYS6bndILcm2Ceu0tgjMgAAIgMJIE
KEK89afnXrh75QtrVtK/g/SB/8gfBtesPDF7/wf+/trnv/jPO9SDPtMZOq+IrdFLKWXXrCSdLPao
I/Be06zIOl3jUct4yGmo1156RfrChz7Cv5hxybJPPfPaa/Rh7vtefOn1iopnf/PCRy9eWvHww69X
vPLMw9JHP3oCf+7ilIsvXMDkP5I6R3phO8nzjCE7I9RG559oX3bG7GVnXPyAJlZzzad4pcee33KO
YphmoagK0iwoovkjVeQ2k8HWg9nzzB0XUNWzl339rv/d8pJsP3fEYLbpsxGU5tfDf8h9Ydn5cin1
sDilQVA4WOr1NH2GEAiAAAgUj4A8fRl7yIQxEyaMnaD8Sx/GTqgce0glfXjqiDnPTTz+xu9/a7d6
0Gc6Q+eZABXkP0yeFZkg/yizItVuUyiyzonGsLUMOpTzrz38+1z18TNpjsCf3FPDGZthkczMRR+t
+HX2tUeeeqnu1Pn0efMbJF9RdyrJm/VoOpUPQm2v3d7+/6Rv/eHlu//w6CXzuA3cFs1CqxJhFQ5F
FCVk8LyfP/WEfSZI3n2BV81/2hY62W88bwQl9ItVanfKaDbjYK3X3iI4AwIgAAIjSYCN9mPHjpt0
6PiJh46bqPxLH8ZNnDhu0qQnDz3h2fHHXH/dN/fw46tf/678gc7Q+ScPm8UK8p9xEyexH/rAf0gn
mxWpcWQvf4JOS9CxEd/4o4QcdvKRmxu+PevG/ziVRZ05s6SfP/k4l3zt9lX3XPjh+fR59mmLpd99
p+vlxYuOZ59fePKuzRL7bAhd5s+KcqG21za/MO8kVva19b97noc9Env+27/ilb7R20VPXbCTWhgz
fNBPOhRRHZz9qUsuzHxt1s2yI9yXX9Nnbs+q3pcdOKhhWOiL8eTMRWfMM+uxO2U0W1SvpTnwKwiA
AAiMMAF5SjT+0MPGHXo4/Yw/7DD6zH8Op3+fqZj63W9/dS8/rrr6e6eO+xv9K/9K5+lbJiYXoX/Z
B1aKfuSJkeaLKRSZwxCXkQZurZ/5mQ/Tz3/NXP1q83y54KLm1de+8lV+vr7/zNXX1/KR/Lgz6iqe
l848Y478+ZU1g/JnrodNaayflZMibfMvb5KuvZz0f3vTCfPksouaOy/M8Ep/KNV9XFNo/6BX51BE
s2T+9b/SHGF1SacxB5k9FTd+jHs98zPXP2q2f9FHPsuZWM4LHJzzqZ70CaqeHz8+pkLklEGbqF7r
vcEId0JUBwIgMNoJsFBUWTn+sMMP4T/jDz2CPvMf9uH0ibu/9W83yj/0eWHVsOUMDz+8CAtCPA6x
gkeQTr5Go+CVH2NQ/q6o/+ltcv6uDI9Xb/nmZzdf8NT1p5Shb3AJBEAABIpF4Ifnnvnfd1z/50fX
RVSB8rqFo876+JcuuurK+x+T1d458fAXL1ro9LaF4q2EjbTmZ9LXDp5Mz8vhAAEQAAEQ8EGA4oSy
xkMLReqKEV8roiUfde1n4qTxymc6o60J8Q/8R15n4ktN9Kv8eRKfFSnHPvkFQvLbFn79jL7HeEQB
8OCqeelXF3/0FuUlfl+8duB7px5cc1A7CIAACJQagZ4vnvfPX730gzVzozX82dymH914a9Odd8tq
fzr+sG2XLFRC0f9kX4+2MmgDARAAARAoaQLbnnvmvhXX7nwn4g2+D5sydWnbtcd9QFkz6R572J8v
XaCEorXPvlHSyGA8CIAACIBAKRLorJj0dpM6K1r33J9L0QfYDAIgAAIgUNIEvj88cWi5GooyG94s
aWdgPAiAAAiAQCkSuP5A5V8pFJHpw8PDFIpOn3VIKboBm0EABEAABEqUwBMv77luf+WjpxyhPMxd
om7AbBAAARAAgTIggFBUBo0IF0AABECgtAkgFJV2+8F6EAABECgDAghFZdCIcAEEQAAESpsAQlFp
tx+sBwEQAIEyIIBQVAaNCBdAAARAoLQJIBSVdvvBehAAARAoAwIIRWXQiHABBEAABEqbAEJRabcf
rAcBEACBMiCAUFQGjQgXQAAEQKC0CSAUlXb7wXoQAAEQKAMCCEVl0IhwAQRAAARKm0CQUJRpqVrQ
ndf9pt9bMoEwWDUFUmIpVAydXuzyW69feS82lIRMvntBlXKYupEH41lRtQzxo4N+M570oMOXCKvE
X98O1KyBXGCFdNscqVpcCGSfL2YQBoEgBIKEolRrh9TeqQaffPeKXEdrKkjlKGMi4H2U8C5pRxym
bOgGYwNm7WDbkHL0SGsL38MYDE40rx9a35wgMzItDenG3iH2m37Si3l+3Gddu7Ext8J43yWqw49O
oY1Gv8y3eY4u5bub+uoHuvh150LV5kKqa6C+r6mQR15IQgYEoiQQJBRJiea2xrRygWY622va+OiA
AwTcCdDw2V7TOySPn+xINDcHv4dJVhe51+XX9kn1ra31Ut9aQw4gHo3MbVvCAbhRFblAV2+NficZ
D3dgBQgECkWSxCZG7AKlm650Y506nBiSBGrmwHjD6HDzqJbSsjVy6oUdTIs5D6Hp0OsSJVBsOrka
Xal6L2k8w1V3K3XbU0dmq5R7UbNGuTtxQdko50oXdG8ydz4q1ZCWsu219rKWFJFFkpnd0kLOUZVC
2kYjbLWoRti806xzwGJ2zZCjNdhAMsbGoXExq3cWk/tWUJpThNKIRVaue8Gbyei1QJGh2V0g23uR
MtonlhhjkZ2wCKlb99Na095wRlXdpvSb9dIxRiJnqpLQBboDqE7mNsUuumIwHt0EAoYiNjGiW6tu
ujfTknOZllp2z8uOgY5cg8d1gGz7CqmHivQ2ZtVbtVSXnL/p5VMvPgXrV/I4mf50I5uDUV2UnpCl
JFsCRdU5QIlEORdhlxdqyLYP1sk1a8Zo3cNsFamk6aDsreE2nwWfBolOszt/YaVKoR6pL23qeaS+
t1FKdpBTSlknmBZJ0pLNVRNCfbJhUmyy015WkbV6Z1Jhx2J1LVWntlGmP5dUphFsILTMXMQzGXFb
cKd4Y+hYZLN0L+RsnXoQfK1bKDjMftkhO/cibbQ3xyLbcCFoDluXdro0LA1nVNUs6va6p4PZmrma
607zQycXEnNrsoMIRaN75I+d90FDEY0HNPq0G5Jz+U05Sb3nZeHDY2dPdvTw4YTUSeqtmnKLTrfD
8shjHOd45GN18QkEv2m21aTq1Mywy4s1JJW4ajTGOD2Q65PP0K1l2hpv+5rYUKjEBGGlag3MNJfO
4A9mUknVCBWK7BQIWpibJGxY7K6pN9oUierb5GmE4c5d0ybsFA5t4eaU0FM+67Imi138cu1FButZ
LPKT0bJ1acfWdG04e7d36jAOl5qzC5gWxW4ghkHBQxEPHkXI16vzCja3SiqxqLWDrR2zJVh9fGKL
1srhMB+QQ5Z62OW9aDDccPPZjm4VW2oe6pGatGwczU4kKWkeFvxUUaTOKLDTVpOAeUFrzK4pUwcW
iZakaOgezNsjEckktfmtRX9xQBX2y6lemktm1Zudqtr2rORkeEFOwQUoC27v9jZ1jlRdXMibJlXB
LURJEIiMQIhQZLGBZv3a9aovIdH9lzo4U3LNi9l0mcgBjt3kKgX4QNfZ2ac+IMHrcn6wKausM2uL
E3b5Ahrsg7XAKjY1al5PAVOdzSXre9b3SupUSVipmvhjhFxoiGEWxOdI22ynV+8c6xPQ403UJN8r
0Odcf+eguq6uqeG51gbTE8jdlHn12xZOZvFB2dQrRH1JL+1SL/VVnitVD2pkORb578+skPDSKNia
jCPv9jaQptUeB6qOLvA7NOUeMvSzf4WdgAQIeCIQXSiiDD5bIuJJMz1NpVwn7GS/5JaS0qzlj4qz
3FvTYI0yK6KLmS7KdLpGe0CC1SVLqU83GL1N1gyyyQrdzdb0yjMmu7y7Bhs7u1XqQj9VYkwK8Ydl
a/kTuaJKaf2LI2qS6m00aJqpPbYghKkzMkgaTRXRttpprEUtK2Tu1n8E9FgTZZVBk8WitGEGqxve
xdcRlXYjCktY87i3hchgsW0UcHtr1F7BnkMQ+GWFLOxFbBg35Qe1qYewPxe00L01jc6YVbEVWer2
9idUzetXtMZko+rsAoUiTIo8DY4QGkkCFVTZ8PBwZsObp886ZCQrRl0gAAIFCdCNRH+d6IkUyj02
ST3mxzYKauMCwUt60w8pEPBB4ImX91y3v/LRU46IcFbko3qIggAIFCbg8ufjieYemn37exMEq5A/
zIc/BCzMHhIjTACzohEGjupAwAsB9mB6e5YWrIJMfLxUABkQiAMBzIri0AqwAQScCPAHH5W3HIES
CJQ/ASToyr+N4SEIgAAIxJwAQlHMGwjmgQAIgED5E0AoKv82hocgAAIgEHMCCEUxbyCYBwIgAALl
TwChqPzbGB6CAAiAQMwJIBTFvIFgHgiAAAiUPwGEovJvY3gIAiAAAjEngFAU8waCeSAAAiBQ/gQQ
isq/jeEhCIAACMScAEJRzBsI5oEACIBA+RNAKCr/NoaHIAACIBBzAghFMW8gmAcCIAAC5U8Aoaj8
2xgeggAIgEDMCSAUxbyBYB4IgAAIlD8BhKLyb2N4CAIgAAIxJ4BQFPMGgnkgAAIgUP4EEIrKv43h
IQiAAAjEnABCUcwbCOaBAAiAQPkTQCgq/zaGhyAAAiAQcwIIRTFvIJgHAiAAAuVPAKGo/NsYHoIA
CIBAzAkgFMW8gWAeCIAACJQ/AYSi8m9jeAgCIAACMSeAUBTzBoJ5IAACIFD+BBCKyr+N4SEIgAAI
xJwAQlHMGwjmgQAIgED5E0AoKv82hocgAAIgEHMCwUJRpqVKPxZ052PuZLzNI5gtGUn+1/OR715Q
5Qre0ES6Xu1kwTYrqJ8ZXFCLR3dYZfKhmKqfqFIrscmQbuaOzQbTSWe2MorgHvh3v0CNmkL/mmXM
hZvMY3PEQUzrqX4uijgYDhuCEggWiqi2ZMfAED96a9prfY2iQU2NqpzwUvd+/XuXtBvsUDZZnZAS
1Y30r+cj0bx+aH2ztYCuP59olZtnqLcx3cCvZxqqGnJKq9mLWmoupN+zoRZBEYHMWqnHbCor1Ngr
2y+bapXhkalfarTake/uzyWNJ8VsMy0NaVZBQQ4m/eGaPkiNfjCLm8yPhvjIZjZV8+GFOu8K3OnG
p12KaUngUKQZleoa6Eim+33c0RfTnxLUnaiuid7qREINU6k6ebzOdLZLHT226BV91b41ppoVq5ip
uU3CObZVho27Q1111roynX3V9QacLmxZjBrZY+RrHFn/IqxNbe5Etem+IsIaoCpuBMKHIklKLKnX
YpEhtWKeWrMvTJkiOTcikDfee1rvQw1pLMNXpISr1pNSSk1cplvJJvIK6UxDWsrSRM5sjfmkoxeW
4kx/Swvdn8sZNi3do302ppWEVZPVieYuGorlf/lhrV2IzpTP4TbQYXeNlK1IN9alpEx/OlkvdZry
YEpnDKVfoc6AByagXhT5Tblk/RKFQrpBmEEzyViuJvJVamueazhrY8uDstYHlAZz6IRqy8r6RM2n
FtTzfKI8orcaRSODzTDX/m9O8Zm6valf8UvCkprkZTNKolT9Ttigtj5vLSW8qF1ysPK1Kk7A1vbV
x/LuKW7DeDnYE0Uo0jlkWmrba+TEykBHrsHY3RPNbY3a3InGxcY2Gnld5MVw6b5ZUZJhiZi+tTyc
re2T+C1uqkvLSWnT+mz7YJ081c+2d2aYSG+jnF3sSql1WE66WGUvns1VU3ZJ12Wym2YiCg6SEFZt
d9NeuxCdsaBsA3dSd025tmsH2zTr+iQFhZKyU3QE06+FzQaJWtwBAJuLeSWQ725qr2G9gkVnmvPw
TiS11xpvaQwyAnJUFwXdgofeEDw/59TclpYVNH37Cp5ZVLoWD1c0dsqJ617JkFnyWKPFcoFh7v3f
UN7S7WXblKbokfrSAkhZxR0Gvck1K2YkYy8l4mnqBiJQVgEem1ZUD/jLoBZsegjEl0BUoYgnH+iO
VWJ34Oxg41t20JhsMV1GHa0k5i4vhEYzdp7CoUhU31bPYxGLRMq9tDItosmBdiR5TRSmnJM/lor8
WaXfxgvsJWvTpoBcuCMIa7ejMykS2qAO53X92gSwfomMotWaUQ2in2nqa2JDr1MY4t3AKwF5ALOo
Yp3IkLITyqgkaEqUU9q6MGWDhGNzu7YsU5BU8p1a12Kq+ISbT1DNvd9LjWa7hYa593+Xbs9mkwoe
TtV+qO7Yr1yrrJGMrZST2cYLwQ7K3k/yg9mauSOdQ/XVcyAcKYEoQhHFAk+9hobAHN0qsjFDS8P4
dYaSgSz+sEi0JEWfB/NaJGLL8uwGnd1MxyTDzMNBj9SkPxvm119FPgQ6cQy2rFsE05+l4dhxwJUt
90ZAaTq3mMafuqDmdZShoUsOA3KWMvizcQGbSCmmPm1B3dDVnYC1uPT/gBrtxVig8H+4lhJ0AzMo
uwDNJIsB0L9jKDEyBMKHIsOtamJujaQm4dQ1CqMb/Drq7NQmMWJ5ukFSxzdK5FkxcB1NcjCjz7n+
zkFlTkRjkTzAstAYgl4hLxxUO5pNVxnFRofleJsyh9pVdEoGy4d7fJFoSYIiEs9Q0kHJEOXWQV8M
CKQ/Wd+zvlfSpn1+Cai1y2kt0bjDHnSTb8GdZVQUanZWyVJ6zuwEbG5RC3BVhZ/48lijw9Xh2P/d
+wRpU9ufXZoC4ayc7uaXj5zbcL8SZRW2Us7e6ReCAyjTlWJewfTR4SFamgQChyI1EVFFCV3t9oU9
TZfjy81VwtGF5v416bQ+nArl+dKFrETwtC6LP1JWiT4sFqXVKRbd2dPKApVqGqxxmxXxQdn02AI1
nfGkuxfC4uy6FZitPkdBuSe+COJU1th3HGpnbhM697UQTX+39pc6VTSV4MOyrpc90m0b+P3p15+X
JLX1fbV8FTwYAT4war2JJRP1FWxmKTPeSUZ9TCPUFKhQp9Ubp2DzMVVyHzT8kZR9YPBYo1jMsf8X
GIBSXeyxfmZXk1QvTNDVDLLpe5WeKC1wJfIKk7ZSIrNtF4IVlFWgNEdTWB2CQAWVHR4ezmx48/RZ
h4TQ47Uodbn+Osy8veKCHAhET4BivfFpFlYBXZcBHhIIVip6h6CxdAk88fKe6/ZXPnrKEYFnRYF8
D7q0HKgyFAIBEBAQYAla/I0TukbMCIxYKOJZF/yZQMyaH+aMGgKGtCe9aMLzUtqo4QNHDzKBkU7Q
HWR3UT0IgAAIgEBsCBykBF1s/IchIAACIAAC8SEwYgm6+LgMS0AABEAABOJFAKEoXu0Ba0AABEBg
FBJAKBqFjQ6XQQAEQCBeBBCK4tUesAYEQAAERiEBhKJR2OhwGQRAAATiRQChKF7tAWtAAARAYBQS
QCgahY0Ol0EABEAgXgQQiuLVHrAGBEAABEYhAYSiUdjocBkEQAAE4kUg+It/brv9di+uXHLxxV7E
IAMCIAACIDDaCGgv/gkVir68fLkd3Lhx4yoqKvbv33/gwIGf3nILQtFo61vwFwRAAAQ8EojsHXSZ
9b8z/vxxcPPKNY9f84N7t73xDgUkoTXqLlnKBmP0X6idzxw91rco1UXY64m91Ka/xtgizoxv0TeO
84jbJCa2QWStR+1GhTJcLy56VM52simkzitVr1VCDgRAYLQRiGCtaMb06fLPzBkzTpwzO/NYfiC3
Pb/lLadQpO783NsoJTsGhugI9MZ64RBZaNykLYsL1sYGVrazmHL0SGu10MP2W2pszAn3jC5Utdaz
vNhQuBsaqtMVsg24G3sDAi1cp5OE0YCCcSt4NSgJAiBQtgQiCEUymzFjKqZNqfrNky++M7Rr9oyp
J1cfQwm6EsSW725qr+k17DObaG5OKX7k1/ZJ9a2t9VLf2nxcXcOeaHFtGdgFAiDgTCCCUDRpUuW0
qVWVEw6ZMvmINes2jBkz5vylyZ27dvrEzm7zW1pofz3KfxlnGNpnPWnWkqGTDWkp215rTJeJTqqF
1ByTUJvBUoo22cY6NfaYXeCRaEkiscQei2xVGzJ8tnyewTvdvE16XUZH2Vku380zb0ruzVKdrFA/
uaA7QzM7vV7zhI1+U78yWyKftdbO7bJitAhZDVAqsKoSqvbZTSAOAiBQjgTChqKjj5q2813pt09v
PeaoKY8/8/Kbb++acdzkM06bs2f3Hv+4srnqniHDhMSkgbZBptkKP7pSlORT83tdWtywn8y2r5BI
I8lm2ztNKzwmbWZLneYVSiSSJEEsslSdaalVjR3oyDU4rbXoYj1SX1qxgk721fO85VCvpKYCs+2D
dfyM7EgB99c3p5rbGtP9iseZ/nRjW3NC9TJVp36V6c8llRkec66aRBxqlzEOdEjtTd00IRTTs0Mw
O+LC3H9fQQkQAIFyIhA2FB179LSV9wzceffTr2zbufp+mhJVfO7cD1JqbkJlpX9MSZpzOJZKVCfT
joO6Q6lkRw8fgmn0lXKbjEk1F23ZQWH2TYtEciyyhDaTAflNOUmdWiUoKDhp3JRLdrTySMqkZBWs
LJ/t0dGQVkuqcnZHnHiZAo5SiyJLvnMYFInq2+QZnuqcU+0yRs0XL21hV+WllP9OgxIgAAJlQCBs
KNqxY6j1krPoCYWrb7jv1deHjpp6+IL5cykUjR8/LmI6bG18qEdqoiE65CNs8tgv1kZRJqlNJ4we
0C19Vg0RVbXtWUkoFY3P7MkD5dDnfH5Vp1o72PMV7EkLS4RXZnUsEi1JUVQdzBvCrCS51M7Cixs9
q41mVdG2oF8ekAcBEIgxgbChaPfu3YcdOmHRGSdVSBSPKj6dmlchDdNy0fhxYUIR3T+r8wHKLhnw
0Wg20CHf00dwiLSxO/90gyHa5bu7KdFFZiiP+8kxgqxwjkWJuTVapKJQkHZYfCIxNW/IpGSHeFnh
I3p+HeYRp7OTr2+Zy/JvmuQQRZ9z/Z2DipC49qzymIZ5Ia1AWzg4Yijl+ZlDv55DHgRAoNQIhA1F
48aOGXrnzfpzT144P7H4jLkL58/9286du3a9O3ZsqFAkxwN29EtK5kr9cyRaheELH5SBsjy2QOyF
J0VtYtVmkKElD7bAo/7ZU5O0JMUjkWlEt8+ejFWnujQNbOHHaWqT6upV3GyS6hU3aR2ILcnIGTqX
CaAHTymq1qTTMi17LMqq0YdiUVqbOAlrT9YMsskoTQZrerkvjvSsEEyOuDAvtcsG9oIACERLIJq3
LdB8iF6yMDw8TC9ZoH81E/G2hWhby682Gvz765yeA/GrDPIgAAIgEDGBaF7848UovPjHC6WiyNCz
001ST6C/Hy6KPVAKAiAAAmYCEYQiII0xAYpC9GgFrW4hEMW4lWAaCIx6ApG9g27Uk4wnAP6wWsFX
HMXTdlgFAiAw+giEfWxh9BGDxyAAAiAAAhETQCiKGCjUgQAIgAAI+CWAUOSXGORBAARAAAQiJoBQ
FDFQqAMBEAABEPBLAKHILzHIgwAIgAAIREwAoShioFAHAiAAAiDglwBCkV9ikAcBEAABEIiYAEJR
xEChDgRAAARAwC8BhCK/xCAPAiAAAiAQMQGEooiBQh0IgAAIgIBfAqHezO2lMrwO1QslyIAACIDA
KCQQwetQb7v99i8vX25nR7tF0J4RtFsE7eWKTSJGYd+CyyAAAiDgkUBkr0PNrP+d8eePg5tXrnn8
mh/cu+2NdyggCa1Rd1BT96arqlrQHc2urObqRJuE0hurPdXG5JTDbBwzPuR+5mIbQmxpalQowy0O
UBNevdIQlnvsrRADARAodwIRrBXNmD5d/pk5Y8aJc2ZnHssP5Lbnt7zlFIpol1R+9DZKyh7dgXYy
EI6AhYZF9sbqQrWxQbZ2sE02cmioR1pLG4rLB+37nWtszAn3+y5UtdaTvNhQuNsZqtMVZloa0o29
I/JK7mi8KOwnJEAABEYDgQhCkYxpzJiKaVOqfvPki+8M7Zo9Y+rJ1cdQgq4ECea7m2jbbMPOp4nm
ZraHNo9Ea/uk+tbWeqlvbTGmcZHQSlbbtg+PRC+UgAAIgEDxCEQQiiZNqpw2tapywiFTJh+xZt2G
MWPGnL80uXPXTp9Gs9v8lhbKi1H+yzjD0D7rSbOWDJ1sSEvZ9lpjukx0Ui2kpqyE2gyWUrTJNtap
scfsAo9ESxKJJfZYZKvakOGz5fMM3unmbdLrMjrKznL5bp55U3JvlupkhfrJBd0Zmtnp9donbMIq
WCF2UEHtey3Vp6dVFb0us8CCzScHdi0JKms0dgDrdz77EsRBAARKjEDYUHT0UdN2viv99umtxxw1
5fFnXn7z7V0zjpt8xmlz9uze459ENlfdM2SYkJg0ZDrZbIUfXSlK8qn5vS4tbthPZttXSKSRZLPt
nVqajY97Rm1mS53mFUokkiRBLLJUnWmpVY0d6Mg1OC3d6GI9Ul9asYJO9tUPyClMSU0FZtsH6+Sk
JnekgPvrm1PNbY3pfsXjTH+6sa3ZMFlyqEJllW6ooo3IDdUxy/S0alqYnyzU2lbgYhuUDuDSOoXq
wfcgAAKlSCBsKDr26Gkr7xm48+6nX9m2c/X9NCWq+Ny5H6TU3ITKSv84kjTncCyVqE6mHQd1h1LJ
jh4+BKfqGqXcJmNSzUVbdlCYfdMikRyLLKHNZEB+U05Sp1YJCgpOGjflkh2tPJIyKVkFK8tne3Q0
pNWSqpzdESdeJKnEokx/TqlFkXWqQmelNoSxOmVaRJPRQIcFuIMNSgcI0taBrEIhEACBeBAIG4p2
7BhqveQsekLh6hvue/X1oaOmHr5g/lwKRePHj4vYQb5Hdo/UJGeQwh5O2ijKJLXphLESulHPqiGi
qrY9Kwmlwpoll2dPHiiHPufzqzrV2sGer2BPWggivM8qKGHWIHGrBjqSfi3h8gLgzjZE29aB7EUh
EACBkSQQNhTt3r37sEMnLDrjpAqJ4lHFp1PzKqRhWi4aPy5MKKK7YnU+QNklAw8ao2gsNE9wguMS
aWPzE8pQ6dEu391NkY/MUB73k2MEWeEcixJza7RIRaEg7bD4RGJq3pBJyX7wsoFSYFYOPI/Y2cnX
t0zfBagiP5iV85ZsLa0gcA/N58EGQ+t4fjqxoGkQAAEQiCWBsKFo3NgxQ++8WX/uyQvnJxafMXfh
/Ll/27lz1653x44NFYrkeMCOfknJXKnr5rQKwxc+KHlkeWyBAAtPisBbtRlkaFWELfCof1fUJC1J
8UhkGtHtsydj1akuTQNb+HGa2qS6ehU3m6R6xU1alBnokJQMncsE0IOnFFVr0mmZlunwWIWRSati
U9NgTeFZkRLO3ZvPzU2X1onlVQSjQAAEQhII9eIf7W0LNB+ilywMDw/TSxboX80mvG0hZPOELE5D
en+d03MgIXWjOAiAAAiEJRDNi3+8WIF30HmhVBQZWuChJ+EK/UVvUaqGUhAAARDwQCCCUOShFogc
LAIUhejRClrdQiA6WE2AekEABAoTiOwddIWrgsRBIMAfQSv4iqODYBiqBAEQAAEBgbCPLQAqCIAA
CIAACIQkgFAUEiCKgwAIgAAIhCWAUBSWIMqDAAiAAAiEJIBQFBIgioMACIAACIQlgFAUliDKgwAI
gAAIhCSAUBQSIIqDAAiAAAiEJYBQFJYgyoMACIAACIQkgFAUEiCKgwAIgAAIhCWAUBSWIMqDAAiA
AAiEJIBQFBIgioMACIAACIQlEOrN3F4qx+tQvVCCDAiAAAiMQgIRvA71tttv1zaJMBKk3SJozwja
LYL2csUmEaOwb8FlEAABEPBIILLXoWbW/87488fBzSvXPH7ND+7d9sY7FJCE1qj7oql701VVLejO
e7Tbj5ho6096Y7Wn2piccpiNY8aH3M9cbEOIjUqNCmW4kQCNUJWPZgvBgdUSsrgPQyEKAiAQIYEI
1opmTJ8u/8ycMePEObMzj+UHctvzW95yCkW0Syo/ehslZY/uQDsZCAedQiMRe2N1odrYyF472CYb
OTTUI62lDcXlg/b9zjU25oT7fReqWms0LzYUbmFDdbrCTEtDurE3kldyG1V5dq2w2ZAAARAAAQGB
CEKRrHXMmIppU6p+8+SL7wztmj1j6snVx1CCrgSR57ub2mt6DTufJpqbU2okWtsn1be21kt9a4sx
jYuEVrLaun14YLURqgpsAwqCAAiMCgIRhKJJkyqnTa2qnHDIlMlHrFm3YcyYMecvTe7ctdMnP3br
3dJCeTHKfxlvw7XPetKsJUMnG9JStr3WmC4TnVQLqSkroTaDpfm1fdnGOjX2mF2g76T6JYnEEnss
slVtyPDZ8nkG73TzNul1GR1lZ7l8N8+8Kbk3S3WyQv3kgu4Mzez0eq2zGmsFNNvT8pG8lFEVVayj
pi9UtWYv1GJKVlMRMrYpm1OqtdiQFObAJAQeedBJJplsZmzMWUwRcLUrulTgs4NDHARAwJlA2FB0
9FHTdr4r/fbprcccNeXxZ15+8+1dM46bfMZpc/bs3uMfezZX3TNkmJCYNGQ62WyFH10pSvKp+b0u
LW7YT2bbV0ikkWSz7Z1amo0P70ZtZkudJgNKJJIkQSyyVJ1pqVWNHejINTgt3ehiPVJfWrGCTvbV
D8gpTElNBWbbB+vkpCZ3pID765tTzW2N6X7F40x/urGtWZ8sWX23W6vrX9/cbESdqlPVZvpzSWV2
yMDwuZieek1rOUytTYV+yT574SCJPPKkc8jUBGqXGOiQ2pv4GqUYuNIVXfqJ//6NEiAAAk4Ewoai
Y4+etvKegTvvfvqVbTtX309ToorPnftBSs1NqKz0Dz1Jcw7HUonqZNpxUHcolezo4UMwjaBSbpMx
qeaiLTsozL5pkUiORZbQZjIgvyknqVOrBA2hTho35ZIdrTySMilZBSvLZ3t0NKTVkqqc3REnXqag
odSiyFp892itXJjKcpAUierb5NmhAYzyRArNorRDbVOxX6rLXjjYPXLXKW4CtUto7eIAXOmKQXqd
/46PEiAw6gmEDUU7dgy1XnIWPaFw9Q33vfr60FFTD18wfy6FovHjx0XMlu+R3SM1GfItIWpw0kZR
JqlNJ4z66fY4q4aIqtr2rCSUCmGRoSh78kA59DmfX9Wp1g72fAV70sIS4cOQVGaELBItSVFEHsxr
kYhSWQ0St3ygIym01q9fFnmhR351aoaxEKQezkrCsPLbYpAHgVFMIGwo2r1792GHTlh0xkkVEsWj
ik+n5lVIw7RcNH5cmFBE96LqfICyS4bmoZGBxjnzBCd464m0sbvldINhWSLf3U2JLjJDedxPjhFk
hXMsSsyt0SIVhYK0w+ITial5QyalTDtYWeEjen795FGjs5Ovb9nL6r57tFadUjGtTXJ4oxpy/Z2D
SgX5wayc22TrbbYKeS1ivzxzUD2qUdKN7jrFTZBVHjnRFgVdlKhOGPoJHib02w0hDwKeCIQNRePG
jhl65836c09eOD+x+Iy5C+fP/dvOnbt2vTt2bKhQJMcDdvRLSuZK/XMkWoXhIxHlayyPLZDDwpMi
EFZtBhla8WCrC+rfFTVJS1I8EplGdPvsyVh1qkvTwBZ+nKY2qa5exc0mqV5xkxZc2DKGnKFz+Rsm
D55SVK1Jp9VxW/fQRrKQtaa6WDzIKtGHxaK0OumiSYtsd9NgjWBW5OKXZw6sbvJIe6zEVae4CZI1
g2xiTRPbml65XVyBu/QTT9cXhEAABDwRCPXiH+1tCzQfopcsDA8P00sW6F+tZrxtwVMjFE2IBtL+
OqfnQIpWa3wVE48V1QOF/rAsvvbDMhAoNwLRvPjHCxW8g84LpaLI0OJNk9SDgdc0H0QoKkpfg1IQ
CEYgglAUrGKUGhECFIXo0Qpa3UIgMvLGrGhEeh8qAQHPBBCKPKOCIAiAAAiAQHEIRPY61OKYB60g
AAIgAAKjiEDYJ+hGESq4CgIgAAIgUBwCCEXF4QqtIAACIAACngkgFHlGBUEQAAEQAIHiEEAoKg5X
aAUBEAABEPBMAKHIMyoIggAIgAAIFIcAQlFxuEIrCIAACICAZwIIRZ5RQRAEQAAEQKA4BBCKisMV
WkEABEAABDwTQCjyjAqCIAACIAACxSGAUFQcrtAKAiAAAiDgmQBCkWdUEAQBEAABECgOAYSi4nCF
VhAAARAAAc8EEIo8o4IgCIAACIBAcQggFBWHK7SCAAiAAAh4JoBQ5BkVBEEABEAABIpDAKGoOFyh
FQRAAARAwDMBhCLPqCAIAiAAAiBQHAIIRcXhCq0gAAIgAAKeCSAUeUYFQRAAARAAgeIQQCgqDldo
BQEQAAEQ8EwAocgzKgiCAAiAAAgUhwBCUXG4QisIgAAIgIBnAghFnlFBEARAAARAoDgEEIqKwxVa
QQAEQAAEPBNAKPKMCoIgAAIgAALFIYBQVByu0AoCIAACIOCZQLBQlGmp0o+WjHNtsuCC7rxng4oo
mO9eEI0p5JXNJa/KmZxymHUwVE4oY4WxiC10UFSLWtOHIcLiIXV6q95rl7NpC1zQm126lLEirQ+P
WO0FrLW1UbEMUysqln67nyPS/fx2hkLywUIRaU12DAyxY6Aj1+A8gjakG3uHhtY3JwrZ4fp9GLKG
sonm9cFNKWSDF+WsL9YOtnFudPRIa7Uonu9ekWtszK0QBe1MSzQYQ7UBCseOgJcuJzRaL1ioVwfx
WXjFGfqwP7OLYaGDV8XFIkn+HA+CvqTLBA5FqteJuTVSbpPjrCdZHS4KlTRcs/H57qb2mt6hrpSG
rrlZ/Zxf2yfVt7bWS31rhSiBsYw6wih1BX14lDa8R7dDh6JMf7qxTZn16MknmifR7UxDWsq216rJ
LENqSp1GsVuelhbKWPETpuKa/boeLmUsYrxj0j4btQjK8mkHF+5Wsox6nkwtyr8zps8serhturAc
PIQGGFqBok22sU6LQ6b24ZFoSSKxxB6L/GNUNetZVAW33Wvmgz6ltd6A8t8zSkJRxSFkbmsUaylj
42o12tpb3AE0Ug7WOvQrdX5pv60WVWNtTaHBQt+NTayx2iS4/EzVGpI1qlZxe6kkqZk0Bba2KNCf
bZlfuUpLr3ahX7CltA5AuWflqteuVmNF3HIjRqtmMwTbdedso4febrls7W3kC4vw6hB2G+NQpo0q
Rk/cL0PGSzkcL+SCrnkMBwdTLHAoYjGGHQ1Sr3Kfn2mp7auXs3a90oruRNdQbyPP4/H8HH3L5gRq
Tk9rkmyuumeITRUsxbXJQUrTo04otCIicJlOtRoSt5fVimTbB+u4qY3Z9k6eJ9Mt7JH60ibdAhva
V0hkt6G4FgGMBpgtdLovVCIRTeHtsUiv2htGvUYqKTdGY1rL+1m8TjS3Nab7lSyh8aZC1ZJV3Bzo
kNqbXFf8jI1iLyVqfVNLyfyN/cc+OxRa69SvnC8qa70kqRos6gw8Be1ltdOl/9i9k5rX99awjkeZ
K6mXN624vdRulm6oauI9TtDlmK/C/uzO09yrBVhkhjRQag0jX4LClpI7ALdPzt7r039LHzbeW1g0
WyBYrju3HuKht7tf45pVHrEYuo1+dXjsjRakBS7D6Fw7mJGmYN2BQ5G2VlS9Qp5A5Dfl+ByIx6d0
dtA8lLBv1TkBI699n6TZAO/wrsVNfqhFhM4lqpNpLyNHsqOVz1BSdY1ygpEMUM8xAwuQS3b08Kmg
VlyTdzHACkUpo0UiORYpkVFoQEGMxlLK3RTdpWqHzWuyX4lFmf6cgsSgQ3XT1GRCy4yNYislNNsC
yksHsFvrCMS5/QQNZGvNAGrd+4/Au1RXr9RguJVT737N7aV3M5WwvcsxX0X92eV6tONx6rd8Nq/m
PeTYJLxUXa9KcWe2aebBQhlB7EXce4iX3u7vGucWuI0n3vq5wBGb4+6XYZFcKxgcRlYgcChSzaTB
M6kNsewZBeXQ74m8OxSyuNx16NkEeiCgiTq026N93o3yK+lkAAOlzUBMEaOzPavG8Kra9qwklPJr
Bt140e22PAlNuhROtXawhyXYYxPKTYFjDPRrgjxoOR4CUAU7gEdrXS09aD3E1Tuv7eWrEQryNGjz
h8WPZu8mF4bgUG/hgt6NMEt6wuLazz3V7Nyxi+eaJ8NGTih0KGI3TTzxxJ5f0DNBVg/4t0oqiIa9
tG3VxL24GAjdsahRkLJL5suKBmCXpykE+sgANVXHDAzXBNSBbQbwaXiDIUDmu7vZ4ld/Wp1hKulL
ccSSA20hjJrZ+UG5WSTWQG7O8KRgZydfrLLJZZXHKPSFLkfmhrK2Us5m66C8dQCrtWLNhY0UNZCx
Awk5u6p17z8C73hqjnLZ8gOontvLa8f0xtOiTYCF30KZnuwMpFl0xdk0u0NwqdcjvaDXuENv8dHP
Te7bkSq5edFlWGTXvHan4ssFDkXqfTxL9soPa6e6WM5UztDZpyTs21wD/4oVsU+aXIrT9JVXZ53m
KIM709kvKSk1dY2PVqZ4VsGprB0tJUxYqKCjSaq3Jeg867EaYKiIkr4aBF7NEloio0hkCgKOsyeu
qCBGtTq6zZIbo2mwxm1WxOaRbTXptEzLeiRrBtn8kiZrNcqSoIh54VIis20tZe0/9gcOWDC2WCsE
4mqkSwPpjvhXS03j2n/M3nXTpJUnRNntMEsoe28vr4OC+/UoazH0akcsNA7TspZyXfNLsMCV7nC1
2u22aRZAMF53zvV6pefeRpqFXrCQsP3q8Hh52pE6X4YRu+a194y8XAVVOTw8nNnw5umzDhn56mNa
I1tWpL//CZJkjKlHrmbRKNRfZ/eWTq+oVu4zvLsVrJRAv4NVbDVBZK13CyEJAuEJRNbPNVNGZ8d+
4uU91+2vfPSUIwLPisK3ZXw10MNESnYrvjZGZxlbJrI/sBCd/oCa2AN9gmffY2ptQCdRDAQUAqO+
YyMUadeC/qQ/e7lB2DdElMQlxl2mfKn8mFa8DspmWmalcbY2XuxgTUkRQMdmzYUEXUl1WhgLAiAA
AmVEAAm6MmpMuAICIAACJU4ACboSb0CYDwIgAAKlTwChqPTbEB6AAAiAQIkTQCgq8QaE+SAAAiBQ
+gTiHIqEf+IYAXLlRbiGlyMrb77y8tJL9/p1nS7GC79ylC8WhQhAQgUIgAAIREMgzqEoGg+tWmjb
IPltD8adsqLam64Iu2PRH3DX97m/Fbs4oKAVBEAABEaKwKgLRYbXYBsZx3lfL/auG7fXdY9UX0E9
IAACIFAkAsFCkXDbKG37LMs+eMZXxxXcg8u4JZ22t5UxR6V9dqpOflGd0x5bxkhk3SnLkp8z71gl
3t5KvKuVeX8MmwxvSvtebVoL222n93D6e7drkXoL1IIACIBAUQgEC0VkinBTNeM+ePaN8rzsweW+
/5gFgfO2e877gA1ma+aaXi3gtK+XZRcv8fZWol2trO3ktiWaYEs60f5g7I3Clg2gitIboBQEQAAE
DgqBwKFIuKmacR8820Z5Xvbg8re3lfO2e1430HNlbtmxSri9lctOX5pu4c5X8rt27FvSifcHw7To
oFwdqBQEQGCECAQORXo6yW17tMJeRLUHl1mPpw2vXI0T7Fhl297Ky65WBWSEm27ZmdCmJZbJXGGy
kAABEACBUiEQOBTZN1UzuCzc0MzLHlziva0K7YTmsKeWcP86r6suoh2r1H3b1L19vOxqJZZxpif0
hU0W+T547Knz8I+cl0rnhJ0gAAKjhUDgUGTfNsqIzGHnscJ7cAn3tiq4XZttTy3nfcDYnqVrzc8V
CNtauGMVi0W0y5y6d4GXXa3EMi70RPuDYVI0Wq5H+AkCo5RAsDdzR79t1Mjhp4xZk9RTSntAlJ7F
I9eaqAkEQKCUCYziN3Mnmnvq+2qte5PHtzH5M4XC3b7jazMsAwEQAAFfBAIn6HzVEi9h9kxD6ewV
zp4FLx1r49XSsAYEQKBECAQLRTQ8llKGq0TaAmaCAAiAwCglECwUjVJYcBsEQAAEQKAYBBCKikEV
OkEABEAABHwQQCjyAQuiIAACIAACxSCAUFQMqtAJAiAAAiDggwBCkQ9YEAUBEAABECgGAYSiYlCF
ThAAARAAAR8EEIp8wIIoCIAACIBAMQggFBWDKnSCAAiAAAj4IIBQ5AMWREEABEAABIpBAKGoGFSh
EwRAAARAwAcBhCIfsCAKAiAAAiBQDAIIRcWgCp0gAAIgAAI+CCAU+YAFURAAARAAgWIQQCgqBlXo
BAEQAAEQ8EEAocgHLIiCAAiAAAgUgwBCUTGoQicIgAAIgIAPAghFPmBBFARAAARAoBgEEIqKQRU6
QQAEQAAEfBBAKPIBC6IgAAIgAALFIIBQVAyq0AkCIAACIOCDAEKRD1gQBQEQAAEQKAYBhKJiUIVO
EAABEAABHwQQinzAgigIgAAIgEAxCCAUFYMqdIIACIAACPggUEGyw8PDmQ1vnj7rEO/l7rjjju3b
t+/evdt7kZhLTpgw4dhjj73oootibifMAwEQAIGyIfDEy3uu21/56ClHBAlFP7755kkTJ55//vnT
pk0rGyJvvfXWL3/5y3f//vfLL7usbJyCIyAAAiAQZwKhQlFHR8c3vvGNqqqq9957L85O+rKtsrJy
aGjohhtuaG9v91UQwiAAAiAAAsEIaKEoyFrR3r17p0yZUk5xiCCSO+QUuRYMKEqBAAiAAAgEJhAk
FFFltLxUlkdgjigIAiAAAiAQmEDAUFSW0SgwRBQEARAAARAIQyB4KApTaymXzbTQKpl6tGSicCXf
vaBqQXc+sCpWXjnMapit0ZhosM1orQyDKg3rQmDfURAEQKAcCAQPReWXoPPcnsmOAXrEgR1dKc+F
XAQTzeuH1jcnAqliMaB2sE2xZ6hHWqtFx3z3ilxjY25FiCCnmUQxR41yurWZloZ0Y+8Qs92fCwZt
gZxGIRAAgTIjEDAUURw6EPpY1zJ5wY8262ro95Z1Bzb/aIH5dOh6vCogp0qtdfPdTe01vYaImGhu
VqNjfm2fVN/aWi/1rQ0+4SoEJFkdLIIW0ovvQQAERhWBgKEoEkbnLP3isy/oo+QD9935xaXnSHOu
eOjth66YE0kNI6DEkJlS7/X1FJ6SHONfZJQsGp3TEmrqREObJfAP3UoKUE+26Rk4c7qNok22sU48
M+ORaEkiscQpFtmUWk8wY1pa5NxfQ1rKttfK6T7ZWvpXOcntNE50BIqU/KFS3KhN0nFEn0wcgQ6A
KkAABCIgEDwURZCgm3PSB++8L6MoylAk+uTiCLQGVuEZJx+U2UEjK2WmemvaOzM0FjdIvTzNluqS
02W9jWktOZZtXyH1yOcaqpr4R/qcZQUtR7Z9sM78baaltq9eTgn2StZ0m9O0RIlEkiSORRQtNKXy
pEpYSzZXzU0lWyWeltQzkuSlctKcW7RrtgDRC3Jtrt55bhIIggAIlDSBgxqKZn/8PIpFD/DYQXOi
C+VI9MCV0xbflOcn8zctpjc6sONKkqIv2H92GfVs4CAkF/TckNpakTwwp7p6pYYqCkTqOK1Mi+je
XzuSHT18MShVR2M6TVZ4Ofqc22RNniU7WmWt6rf5TTl5RsInJ9lBcwHr70qNWiSSY5E15PHpVJtx
eUpci2qqZzSSXTMPc+rUyqbI3Tvv9UISBECglAkEDEX79u37awTHtI9+Yt6G5zeQpnvXrGxY+CGu
8t290v73drJzzaev+sRDeXbcuve6Gzd8aGHDyjX3MpF712yYt3/Vr1jBDb9atX/WURHY8te/klMR
NCXNCigqsZnEQEcyAn1cBXs4QPCkBEWZZLpf8BRfprM9q83datuzklDKap1DLSGdKAykOPWGNBvF
QQAERpBAwFAUlYUnLKqT+h/aIm3Jb5x34glmrXROev66RQl2XNr7/OYt0gknztuY3yJJj6zbWNeq
FHyoX6pbZCkZlXUe9fDUHOXOGvhCTn4wKyfN2PzAowZXscTcGknP9JlFE81tLOOnryDlu7vZak5/
Wn/KTw6K5ljEY5gp1+dWix8v7JrdgURVrx8bIQsCIBA3AsFDkdfn0tzlZi5YKvU/+PCD/dLSBTMV
UcqU8efz6P9lt2xSj46PHZCFX3x47calCz5Gnze9+KKpYEiTPLeNNt+Qn0ZoyLGcWqq1I9dAK/j0
v8TTaU2DNdHMilJdA4pKZX3KaCitvAxQxerfFTVJS1I8EilJQFnUPnuSl7j0JS+WZnSphaULtccW
3DnZNAuAGLW51+u5TSAIAiBQygSCvJn72muvve6661599dVIHN9y+3mpFbmatszdF8uTm0fbT/rh
iew3+rB8o36efUfCX+uXpLr/pK/p8y2b/3HjicvVgmHNmT59+tVXX03ehVWE8iAAAiAAAh4IhHod
KumP5O+K5EnMzLM/WSPVfPJsdU7EZ0MH2LTozO+su0pakTpJPq5eLwtLOUkWps8bV200Fgw5K/Lz
5IIHxhABARAAARDwRiDgrIj2idi6dau3KkpGasaMGbRDBGZFJdNgMBQEQKDECYSdFZH7IacgMSxe
4m0K80EABECgVAkEfGwhwgRdfGISEnSl2othNwiAQIkTCBiKStxrmA8CIAACIBAjAsFDUXxmM1FZ
EqNmgSkgAAIgMJoIBAxFIV+xE9vio6np4SsIgAAIxIVAkFA0fvz4HTt2HH/88VFNR+Kgh9whp8i1
uLQM7AABEACBUUMgyMPcN99884QJE5YtW3bkkUeWDai//OUvq1at2r1792WXXVY2TsEREAABEIgz
Ae1h7iChiBy76aabaA6xd+/eODvpyzaaD02ePPmKK67wVQrCIAACIAACgQmEDUWBK0ZBEAABEAAB
EJAJRPAnrkAJAiAAAiAAApEQCPLYQiQVQwkIgAAIgAAIyAQQitATQAAEQAAEDjIBhKKD3ACoHgRA
AARAAKEIfQAEQAAEQOAgE0AoOsgNgOpBAARAAAQQitAHQAAEQAAEDjIBhKKD3ACoHgRAAARAAKEI
fQAEQAAEQOAgEwj44p+DbDWqBwEQKGUC9zzz6v88t23b0N9L2Qnd9uOqJn7iA8d95pTpHt254447
tm/fTm+89CgffzF6K+mxxx570UUX+TUVL/7xSwzyIAAC0RC4++mt/z24Y8LpHxhz+KRoNB5sLQf+
9u7uJ577UvXk806dUdCWH99886SJE88///xp06YVFC4VgbfeeuuXv/zlu3//++U+XyeNUFQqTQw7
QaDcCFx62xN/+9iHT5xc+R//UCbTgn95c8LmHe8d/pvHb73k9IKt1dHR8Y1vfKOqquq9994rKFwq
ApWVlUNDQzfccEN7e7svm/EOOl+4IAwCIBAZgTf+8t6YQyeVTRwiLuQLeUR+eWFEGxpMmTKlnOIQ
eU3ukFNh9mrAYwteOg9kQAAEIiYQ262cgxnmi06wKuJfyhcEizBCURh6KAsCIBCQQPwHVl8W+qXg
S3lJCPslgFAUkhiKgwAIgAAIREwAs6KIgUIdCIBAQQIlcZvv18iCXhsF/CqPv7wv9+3CCEUhAaI4
CIBAEAIHinGsa5msHgt+tFmtgc5qv23+0YLJk1vWRV65LwQUV0IYwF2QD4OTIRQaixpZ+VNJTvmC
gARdGFwoCwIgEFcCD1w59YLcd556mx9PfW71aYtuetFi6wNXnrb6c0+9/cNz4upDIbtevGnR1NNe
+BfZx7ffvkVa+0ChIiXyPWZFJdJQMBMEyojAsBR5winf/R93fvGuBy+fLWuefflP/1361x9m+C9E
jv7Ndy+6QNIlIrZA8jcnCFR7vnv5v9bc9VbnYrX07Msv1z4H0mgvJLMKcITsnghFIQGiOAiAgH8C
kcei/Nq7n73wk8ZxefbHz/tgblOeRSIWJdZe+aG7z3vixqhHbnXM9hmJAg33dh/V2h+4kl7dwI8r
H+Dn6MTim25Szi6+iVHgR/6mxWZB6wnuR4BAFC47R3UiFPm/ilACBEAgHIF9+/b9Ndpj53v75806
yqSTTj2by9Gpd/dKz15zzcqGlgunRVupQRt55B1JQPcFPioWfOjf8/y4tWHldTduUF3OLZTPPXvN
D+5lghtuXHj6qk88JIv++4fozL3N2olb9/KSxGr/ezuDYPJFwM4Koch7/4EkCIBAjAk8v3mL1bp5
J54gn5p39a1Xb7z0M7faJGLsj8g0gY9c7JG2BDsu7dULzbv6K2ez387+eIO0MU+Ob3mo//mG1ksV
IuyrLfmN0vPXLVKKOukeGUQIRSPDGbWAAAjoBCgD5O/xrILSMxcsnde79mGD3IsP9j9fPXsmO0NJ
p+GZF999S/V1i/7FKFJQq3cBeUXK++Fdsy5p91H+7sWez1w6fMsmOh5omyfJZJnLakn1s/Gc/B2d
WcYL8qPjY5aCvoz07rtQEqEoJEAUBwEQCEIg0HqES6FZF//fZauWX/2IIvLI1eesGG5bfpb61AJb
/zirI9O2cflnb3sp6rr9xiHiFcgE2ce5mpPDL912Gzm8ZfPzNSfOIo0vPdz/vLwKx2pQ61A/z1qw
tGbVD43ez5pTLZnPmAr6szFIJzCUQSgKCRDFQQAEfBOgYdLXHbcn4TO/M/iT4eUnycfy4Z8M9n2J
z4n4zf8BXuHML/X9pHpF6rxbX/Kk0YeQ/GyExyP4pJB8XHfVRtXJk7524OwzDxw4s+kqaUWKnP7a
puoaxVPNZYP7ivcyoKvX0zdnfmedUlQ7ZSzow3+C7HNeaGGFrfM8dh6IgQAIRENg6Y2Pjvvswpsr
X4tGXTy0XPbe8fvWPHzfV88qaM61115L+0Rs3bq1oGRpCcyYMYN2iCDvfJmNTSJ84YIwCIBApASG
JV933PEX9jMpYiTj75FfC0P2DyToQgJEcRAAAd8Egmeo/A6QIyXvKz1Vfu6zNGC4BB1Cke+rCAVA
AARAAASiJYBQFC1PaAMBEPBEYKSmKyNUjyefDUIjZNYIVuOXgEUeoSgkQBQHARDwTcDfY8IlIu2d
Qok45NtM7wTskghFYeihLAiAgG8CRx9ZKb373vf2HTuCt+zFrYp8IY+YXx6O8ePH79ix4/jjjy+u
TSOrndwhp8g1DwDEIniYOzA6FAQBEAhC4O6nt972x7crav9RmjghSPkYlvn77uGBjZf809TzTp1R
0Lqbb755woQJy5YtO/LIIwsKl4rAX/7yl1WrVu3evfuyyy7zZbP2MDdCkS9uEAYBEIiAwKqnXvmf
57a9+bfdEeiKgYp/OHzCJz5w3LLTZnq05aabbqI5xN69ez3Kx1+M5kO0k98VV1zh11SEIr/EIA8C
IAACIBAxAfyJa8RAoQ4EQAAEQCAwATy2EBgdCoIACIAACERDAKEoGo7QAgIgAAIgEJgAQlFgdCgI
AiAAAiAQDQGEomg4QgsIgAAIgEBgAghFgdGhIAiAAAiAQDQEEIqi4QgtIAACIAACgQkgFAVGh4Ig
AAIgAALREEAoioYjtIAACIAACAQmgFAUGB0KggAIgAAIREMAoSgajtACAiAAAiAQmABCUWB0KAgC
IAACIBANAf3N3NHogxYQAAEQAAEQ8EzgygN7BpPTWSiqzr7quRQEQQAEQAAEQCBKAhSKolQHXSAA
AiAAAiAQgMD/B0OuQvfhjU3NAAAAAElFTkSuQmCC
--_004_961f2ca8a4324c9d85a5ce95e33ce72bEXCH2013politiewestkust_--
9 years, 4 months
Get Involved with oVirt project! Winter edition
by Sandro Bonazzola
Hi,
Have you got some free time, and do you want to get involved in the oVirt
project?
Do you like the idea of having fresh disk images of recent distributions in
the oVirt Glance repository?
You can help us by testing existing online images, ensuring they work with
cloud-init, or by creating one yourself and reporting your success to
devel(a)ovirt.org.
We'll be happy to upload the images once these are ready.
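If you want a quick local smoke test before reporting back, one possible
approach (just a sketch; the image name and password below are made-up
examples) is to feed the image a minimal cloud-init NoCloud seed and boot it:
  # write a minimal cloud-init user-data file
  printf '%s\n' '#cloud-config' 'password: testing123' \
    'chpasswd: { expire: False }' 'ssh_pwauth: True' > user-data
  cloud-localds seed.iso user-data   # cloud-localds ships in cloud-utils
  # boot the candidate image with the seed attached; log in on the console
  qemu-kvm -m 1024 -nographic \
    -drive file=CentOS-7-x86_64-GenericCloud.qcow2,if=virtio \
    -drive file=seed.iso,if=virtio,media=cdrom
If the configured password works at the console, cloud-init picked up the seed.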
Do you like Debian and do you have some programming skill?
Help us get VDSM running on it! We have started releasing highly
experimental packages, and it's a good time to give them a try.
You can follow the progress here: http://www.ovirt.org/VDSM_on_Debian
Here are some bugs you can try to help with:
Bug ID   Status    Whiteboard   Target Milestone  Summary
1120588  NEW       docs         ovirt-4.0.0       update log collector documentation
1120586  NEW       docs         ovirt-4.0.0       update iso uploader documentation
1120585  NEW       docs         ovirt-4.0.0       update image uploader documentation
1159784  NEW       docs         ---               [RFE] Document when and where new features are available when upgrading cluster / datacenters
1074301  NEW       infra        ovirt-4.0.0       [RFE] ovirt-shell has no man page
1227019  NEW       integration  ovirt-3.6.2       Require sos >= 3.3 when available - ovirt sosreport plugin doesn't obfuscate password used in aaa extensions
1251965  NEW       integration  ovirt-3.6.2       Appliance based setup should default to using /var/tmp for unpacking the image
1156060  NEW       integration  ovirt-4.0.0       [text] engine admin password prompt consistency
1237132  NEW       integration  ovirt-4.0.0       [TEXT] New package listing of engine-setup when upgrading packages is not user friendly
1115059  ASSIGNED  network      ovirt-4.0.0       Incomplete error message when adding VNIC profile to running VM
772931   NEW       reports      ---               [RFE] Reports should include the name of the oVirt engine
Are you great at packaging software? Do you prefer a distribution which is
currently unsupported by oVirt?
Do you want to have packages included in your preferred distribution? Help
get oVirt ported there!
Fedora: http://lists.ovirt.org/pipermail/devel/2015-September/011426.html
CentOS: https://wiki.centos.org/SpecialInterestGroup/Virtualization
Gentoo: https://wiki.gentoo.org/wiki/OVirt
Debian Jessie: http://www.ovirt.org/Features/Debian_support_for_hosts
Archlinux: http://www.ovirt.org/OVirt_on_Arch_Linux
OpenSUSE: https://build.opensuse.org/project/show/Virtualization:oVirt
Do you love "DevOps?", you count stable builds in jenkins ci while trying
to fall a sleep?
Then oVirt infra team is looking for you!, join the infra team and dive in
to do the newest and coolest devops tools today!
Here are some of our open tasks you can help with:
https://ovirt-jira.atlassian.net/secure/RapidBoard.jspa?rapidView=1&proje...
You can also help us by sharing how you use oVirt in your DevOps
environment (please use [DevOps] in the subject)
You don't have programming skills, not enough time for DevOps but you want
still to contribute?
Here are some bugs you can take care of, also without writing a line of
code:
https://bugzilla.redhat.com/buglist.cgi?quicksearch=classification%3Aovir...
Do you prefer to test things? We have some test cases[5] you can try using
nightly snapshots[6].
Do you want to contribute test cases? Most of the features[7] included in
oVirt are missing a test case; you're welcome to contribute one!
Do you want to contribute artwork? oVirt Live backgrounds and covers,
release banners, stickers, .... Take a look at Fedora Artwork[9] as an
example of what you can do.
Is this the first time you try to contribute to the oVirt project?
You can start from here [1][2]!
You don't know gerrit very well? You can find some more docs here [3].
Any other questions about development? Feel free to ask on devel(a)ovirt.org
or on the IRC channel[4].
You don't really have time / skills for any development / documentation /
testing related task?
Spread the word[8]!
Let us know you're getting involved: introduce yourself and tell us what
you're going to do, and you'll be welcome!
[1] http://www.ovirt.org/Develop
[2] http://www.ovirt.org/Working_with_oVirt_Gerrit
[3] https://gerrit-review.googlesource.com/Documentation
[4] http://www.ovirt.org/Community
[5] http://www.ovirt.org/Category:TestCase
[6] http://www.ovirt.org/Install_nightly_snapshot
[7] http://www.ovirt.org/Category:Feature
[8]
http://www.zdnet.com/article/how-much-longer-can-red-hats-ovirt-remain-co...
[9] https://fedoraproject.org/wiki/Artwork#Resources
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
9 years, 4 months
uncaught exception with engine vm in 3.6.1 selecting hosted_storage
by Gianluca Cecchi
Hello, after updating my HE environment from 3.6.0 to 3.6.1, I'm not able to
import/activate the hosted_storage domain because it shows up as unattached.
Moreover, I get the popup message
Uncaught exception occurred. Please try reloading the page. Details:
(TypeError) __gwt$exception: <skipped>: c is null
when I select the row of the hosted_storage domain in the storage tab. See:
https://drive.google.com/file/d/0BwoPbcrMv8mvVlc1XzNUZ18yWWs/view?usp=sha...
I apparently can't find any ERROR messages when I do this, so I don't know
where to look.
On engine, where the db lives, I have under /var/lib/pgsql/data/pg_log
[root@ractorshe pg_log]# ls -lrt
total 12
-rw-------. 1 postgres postgres 0 Dec 16 01:00 postgresql-Wed.log
-rw-------. 1 postgres postgres 0 Dec 17 01:00 postgresql-Thu.log
-rw-------. 1 postgres postgres 0 Dec 18 01:00 postgresql-Fri.log
-rw-------. 1 postgres postgres 7480 Dec 20 00:39 postgresql-Sat.log
-rw-------. 1 postgres postgres 3488 Dec 20 01:29 postgresql-Sun.log
-rw-------. 1 postgres postgres 0 Dec 21 01:00 postgresql-Mon.log
-rw-------. 1 postgres postgres 0 Dec 22 01:00 postgresql-Tue.log
On Sunday logs, when the engine was started I see
LOG: database system was not properly shut down; automatic recovery in
progress
LOG: redo starts at 1/254C6710
LOG: record with zero length at 1/254EB6B8
LOG: redo done at 1/254EB688
LOG: last completed transaction was at log time 2015-12-19
23:49:47.920147+00
LOG: database system is ready to accept connections
LOG: autovacuum launcher started
ERROR: insert or update on table "storage_domain_dynamic" violates foreign
key constraint "fk_stora
ge_domain_dynamic_storage_domain_static"
DETAIL: Key (id)=(00000000-0000-0000-0000-000000000000) is not present in
table "storage_domain_static".
CONTEXT: SQL statement "INSERT INTO
storage_domain_dynamic(available_disk_size, id, used_disk_size)
VALUES(v_available_disk_size, v_id, v_used_disk_size)"
PL/pgSQL function
insertstorage_domain_dynamic(integer,uuid,integer) line 3 at SQL statement
STATEMENT: select * from insertstorage_domain_dynamic($1, $2, $3) as result
and then many lines like:
LOG: autovacuum: found orphan temp table "pg_temp_8"."tt_temp22" in
database "engine"
So there are messages related to storage domains, which could be the reason
for the exception and for the inability to import/attach my HE storage domain.
Let me know if I can provide more logs or output of queries from the
database.
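For instance, a sketch of one such check (table name and id taken from the
error above; run on the engine VM as the postgres user):
  su - postgres -c 'psql engine'
  -- then, at the psql prompt:
  SELECT id FROM storage_domain_static
   WHERE id = '00000000-0000-0000-0000-000000000000';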
Gianluca
9 years, 4 months
How to run "engine-backup"?
by Will Dennis
Yay, I *finally* have my 3-host hyper-converged oVirt datacenter stood up :)
[root@ovirt-node-01 ~]# hosted-engine --vm-status
--== Host 1 status ==--
Status up-to-date            : True
Hostname                     : ovirt-node-01
Host ID                      : 1
Engine status                : {"health": "good", "vm": "up", "detail": "up"}
Score                        : 3400
stopped                      : False
Local maintenance            : False
crc32                        : 65c41ca5
Host timestamp               : 217522
--== Host 2 status ==--
Status up-to-date            : True
Hostname                     : ovirt-node-02
Host ID                      : 2
Engine status                : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                        : 3400
stopped                      : False
Local maintenance            : False
crc32                        : a7a599d8
Host timestamp               : 56101
--== Host 3 status ==--
Status up-to-date            : True
Hostname                     : ovirt-node-03
Host ID                      : 3
Engine status                : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                        : 3400
stopped                      : False
Local maintenance            : False
crc32                        : 6e138d0b
Host timestamp               : 432658
Now in the oVirt webadmin UI, down in the "Alerts" section, I am seeing this
message:
"There is no full backup available, please run engine-backup to prevent data
loss in case of corruption."
I do not see an "engine-backup" CLI command on my hosts; how does one do
this? (I have searched ovirt.org to no avail...)
Thanks,
Will
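My best guess so far: engine-backup appears to ship with the engine packages
themselves, so it would live on the engine VM rather than on the hypervisor
hosts. Something like the following (untested on my side) should then work
when run on the engine VM:
  engine-backup --mode=backup --scope=all \
    --file=/root/engine-backup-$(date +%Y%m%d).tar.gz \
    --log=/root/engine-backup.log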
9 years, 4 months
mount a usb
by alireza sadeh seighalan
hi everyone
I want to mount a USB device to a Windows VM in oVirt 3.6.1. How can I do
it? Thanks in advance.
9 years, 4 months
virtual Disk
by Taste-Of-IT
Hello,
I am testing oVirt 3.6 as a Self-Hosted Engine and created a virtual machine.
Now I want to change the size of its disk, and I found both an option to
change the disk size and a field to grow the disk. The oVirt manual
describes changing the value of the grow field. My question is: what is the
difference, and what are the results of each? E.g., what happens if I only
change the disk size from 8 to 10? Is it the same as changing the grow size
from 0 to 2?
Thanks for the technical explanation.
9 years, 4 months
After update to 3.6.1 profile internal does not exist message
by Gianluca Cecchi
Hello.
I updated my test self hosted engine vm to 3.6.1 and to CentOS 7.2.
Now it seems it isn't able to log in to webadmin, due to this in engine.log
(the profile field is empty in the web admin GUI):
2015-12-19 13:59:05,182 ERROR
[org.ovirt.engine.core.bll.aaa.LoginBaseCommand] (default task-17) []
Can't login because authentication profile 'internal' doesn't exist.
Is it an already known problem?
The first ERROR message I find in it is dated 13:34, and it is an SQL one:
2015-12-19 13:34:39,882 ERROR [org.ovirt.engine.core.bll.Backend]
(ServerService Thread Pool -- 42) [] Failed to run compensation on startup
for Command
'org.ovirt.engine.core.bll.storage.AddExistingFileStorageDomainCommand',
Command Id 'bb7ee4a3-6b35-4c62-9823-a434a40e5b38':
CallableStatementCallback; SQL [{call insertstorage_domain_dynamic(?, ?,
?)}]; ERROR: insert or update on table "storage_domain_dynamic" violates
foreign key constraint "fk_storage_domain_dynamic_storage_domain_static"
Detail: Key (id)=(00000000-0000-0000-0000-000000000000) is not present in
table "storage_domain_static".
Where: SQL statement "INSERT INTO
storage_domain_dynamic(available_disk_size, id, used_disk_size)
VALUES(v_available_disk_size, v_id, v_used_disk_size)"
PL/pgSQL function insertstorage_domain_dynamic(integer,uuid,integer) line 3
at SQL statement; nested exception is org.postgresql.util.PSQLException:
ERROR: insert or update on table "storage_domain_dynamic" violates foreign
key constraint "fk_storage_domain_dynamic_storage_domain_static"
Detail: Key (id)=(00000000-0000-0000-0000-000000000000) is not present in
table "storage_domain_static".
Where: SQL statement "INSERT INTO
storage_domain_dynamic(available_disk_size, id, used_disk_size)
VALUES(v_available_disk_size, v_id, v_used_disk_size)"
PL/pgSQL function insertstorage_domain_dynamic(integer,uuid,integer) line 3
at SQL statement
2015-12-19 13:34:39,882 ERROR [org.ovirt.engine.core.bll.Backend]
(ServerService Thread Pool -- 42) [] Exception:
org.springframework.dao.DataIntegrityViolationException:
CallableStatementCallback; SQL [{call insertstorage_domain_dynamic(?, ?,
?)}]; ERROR: insert or update on table "storage_domain_dynamic" violates
foreign key constraint "fk_storage_domain_dynamic_storage_domain_static"
Detail: Key (id)=(00000000-0000-0000-0000-000000000000) is not present in
table "storage_domain_static".
I think it is not related to the update from CentOS 7.1 to 7.2 (which also
updates PostgreSQL), because that happened 10 minutes later:
from the installed version
Nov 04 12:55:30 Installed: postgresql-server-9.2.13-1.el7_1.x86_64
to the 7.2 version
Dec 19 13:45:51 Updated: postgresql-server-9.2.14-1.el7_1.x86_64
BTW: my order was
yum update "ovirt-engine-setup-*"
engine-setup
The engine-setup run completed successfully with:
...
[ INFO ] Stage: Misc configuration
[ INFO ] Backing up database localhost:engine to
'/var/lib/ovirt-engine/backups/engine-20151219132925.5OoFQA.dump'.
[ INFO ] Creating/refreshing Engine database schema
[ INFO ] Creating/refreshing Engine 'internal' domain database schema
[ INFO ] Upgrading CA
[ INFO ] Configuring WebSocket Proxy
[ INFO ] Generating post install configuration file
'/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf'
[ INFO ] Stage: Transaction commit
[ INFO ] Stage: Closing up
--== SUMMARY ==--
SSH fingerprint: 19:56:8d:3e:50:fc:90:37:5a:ba:6c:57:30:b1:7d:93
Internal CA
DA:E6:04:34:99:A0:DB:CE:3F:0A:7B:A2:96:67:4C:7F:19:CA:95:5F
Note! If you want to gather statistical information you can
install Reports and/or DWH:
http://www.ovirt.org/Ovirt_DWH
http://www.ovirt.org/Ovirt_Reports
Web access is enabled at:
http://ractorshe.mydomain:80/ovirt-engine
https://ractorshe.mydomain:443/ovirt-engine
--== END OF SUMMARY ==--
[ INFO ] Starting engine service
[ INFO ] Restarting httpd
[ INFO ] Restarting ovirt-vmconsole proxy service
[ INFO ] Stage: Clean up
Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20151219132415-2pfee1.log
[ INFO ] Generating answer file
'/var/lib/ovirt-engine/setup/answers/20151219133045-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ INFO ] Execution of setup completed successfully
Strangely, the VM was shut down...
In /var/log/libvirt/qemu/HostedEngine.log I saw (in UTC)
2015-12-19 12:31:19.494+0000: shutting down
So I exited from global maintenance, the VM was started, and I verified that
the admin web GUI was OK with the new version.
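(For reference, the maintenance toggling in this procedure is done with the
standard hosted-engine CLI on one of the hosts:)
  hosted-engine --set-maintenance --mode=global   # HA agents stop managing the engine VM
  hosted-engine --set-maintenance --mode=none     # exit global maintenance
  hosted-engine --vm-status                       # check agent and engine state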
Then I enabled maintenance again and ran on the engine VM:
systemctl stop ovirt-engine
yum update (I forgot to also shut down PostgreSQL first...)
shutdown -r now
As it was in maintenance, it wasn't started again, so I exited from
maintenance and the VM was started; but now I'm not able to see the internal
profile...
Let me know if you need further info.
Gianluca
9 years, 4 months
Re: [ovirt-users] Cannot retrieve answer file from 1st HE host when setting up 2nd host
by Simone Tiraboschi
On Tue, Dec 22, 2015 at 3:06 PM, Will Dennis <wdennis(a)nec-labs.com> wrote:
> See attached for requested logs
>
Thanks, the issue is here:
Dec 21 19:40:53 ovirt-node-03 etc-glusterfs-glusterd.vol[1079]: [2015-12-22
00:40:53.496109] C [MSGID: 106002]
[glusterd-server-quorum.c:351:glusterd_do_volume_quorum_action]
0-management: Server quorum lost for volume engine. Stopping local bricks.
Dec 21 19:40:53 ovirt-node-03 etc-glusterfs-glusterd.vol[1079]: [2015-12-22
00:40:53.496410] C [MSGID: 106002]
[glusterd-server-quorum.c:351:glusterd_do_volume_quorum_action]
0-management: Server quorum lost for volume vmdata. Stopping local bricks.
So at that point gluster lost its quorum and the file system became read-only.
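As a quick check, the quorum state can be confirmed from any node with the
gluster CLI, for instance:
  gluster peer status            # every peer should show 'Connected'
  gluster volume status engine   # bricks stopped by quorum loss show port N/A
  gluster volume info engine     # lists any server-quorum options on the volume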
On the getStorageDomainsList call, VDSM internally raises an exception
because the file system is read-only:
Thread-141::DEBUG::2015-12-21
11:29:59,666::fileSD::157::Storage.StorageDomainManifest::(__init__)
Reading domain in path
/rhev/data-center/mnt/glusterSD/localhost:_engine/e89b6e64-bd7d-4846-b970-9af32a3295ee
Thread-141::DEBUG::2015-12-21
11:29:59,666::__init__::320::IOProcessClient::(_run) Starting IOProcess...
Thread-141::DEBUG::2015-12-21
11:29:59,680::persistentDict::192::Storage.PersistentDict::(__init__)
Created a persistent dict with FileMetadataRW backend
Thread-141::ERROR::2015-12-21
11:29:59,686::hsm::2898::Storage.HSM::(getStorageDomainsList) Unexpected
error
Traceback (most recent call last):
File "/usr/share/vdsm/storage/hsm.py", line 2882, in getStorageDomainsList
dom = sdCache.produce(sdUUID=sdUUID)
File "/usr/share/vdsm/storage/sdc.py", line 100, in produce
domain.getRealDomain()
File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
return self._cache._realProduce(self._sdUUID)
File "/usr/share/vdsm/storage/sdc.py", line 124, in _realProduce
domain = self._findDomain(sdUUID)
File "/usr/share/vdsm/storage/sdc.py", line 143, in _findDomain
dom = findMethod(sdUUID)
File "/usr/share/vdsm/storage/glusterSD.py", line 32, in findDomain
return GlusterStorageDomain(GlusterStorageDomain.findDomainPath(sdUUID))
File "/usr/share/vdsm/storage/fileSD.py", line 198, in __init__
validateFileSystemFeatures(manifest.sdUUID, manifest.mountpoint)
File "/usr/share/vdsm/storage/fileSD.py", line 93, in
validateFileSystemFeatures
oop.getProcessPool(sdUUID).directTouch(testFilePath)
File "/usr/share/vdsm/storage/outOfProcess.py", line 350, in directTouch
ioproc.touch(path, flags, mode)
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 543,
in touch
self.timeout)
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 427,
in _sendCommand
raise OSError(errcode, errstr)
OSError: [Errno 30] Read-only file system
But instead of reporting a failure to hosted-engine-setup, it reported a
successful execution in which it wasn't able to find any storage domain
there (this one is a real bug; I'm going to open a bug on that. Can I
attach your logs there?):
Thread-141::INFO::2015-12-21
11:29:59,702::logUtils::51::dispatcher::(wrapper) Run and protect:
getStorageDomainsList, Return response: {'domlist': []}
Thread-141::DEBUG::2015-12-21
11:29:59,702::task::1191::Storage.TaskManager.Task::(prepare)
Task=`96a9ea03-dc13-483e-9b17-b55a759c9b44`::finished: {'domlist': []}
Thread-141::DEBUG::2015-12-21
11:29:59,702::task::595::Storage.TaskManager.Task::(_updateState)
Task=`96a9ea03-dc13-483e-9b17-b55a759c9b44`::moving from state preparing ->
state finished
Thread-141::DEBUG::2015-12-21
11:29:59,703::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-141::DEBUG::2015-12-21
11:29:59,703::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-141::DEBUG::2015-12-21
11:29:59,703::task::993::Storage.TaskManager.Task::(_decref)
Task=`96a9ea03-dc13-483e-9b17-b55a759c9b44`::ref 0 aborting False
Thread-141::INFO::2015-12-21
11:29:59,704::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:39718 stopped
And so, because VDSM doesn't report any existing storage domain,
hosted-engine-setup assumes that you are going to deploy the first host;
hence your original issue.
>
>
> *From:* Simone Tiraboschi [mailto:stirabos@redhat.com]
> *Sent:* Tuesday, December 22, 2015 8:56 AM
> *To:* Will Dennis
> *Cc:* Sahina Bose; Yedidyah Bar David
>
> *Subject:* Re: [ovirt-users] Cannot retrieve answer file from 1st HE host
> when setting up 2nd host
>
>
>
>
>
> On Tue, Dec 22, 2015 at 2:44 PM, Will Dennis <wdennis(a)nec-labs.com> wrote:
>
> Which logs are needed?
>
>
> Let's start with vdsm.log and /var/log/messages
> Also, it's quite strange that you have that amount of data in mom.log, so
> that one could be interesting too.
>
>
>
>
>
> /var/log/vdsm
>
> total 24M
>
> drwxr-xr-x 3 vdsm kvm 4.0K Dec 18 20:10 .
>
> drwxr-xr-x. 13 root root 4.0K Dec 20 03:15 ..
>
> drwxr-xr-x 2 vdsm kvm 6 Dec 9 03:24 backup
>
> -rw-r--r-- 1 vdsm kvm 2.5K Dec 21 11:29 connectivity.log
>
> -rw-r--r-- 1 vdsm kvm 173K Dec 21 11:21 mom.log
>
> -rw-r--r-- 1 vdsm kvm 2.0M Dec 17 10:09 mom.log.1
>
> -rw-r--r-- 1 vdsm kvm 2.0M Dec 17 04:06 mom.log.2
>
> -rw-r--r-- 1 vdsm kvm 2.0M Dec 16 22:03 mom.log.3
>
> -rw-r--r-- 1 vdsm kvm 2.0M Dec 16 16:00 mom.log.4
>
> -rw-r--r-- 1 vdsm kvm 2.0M Dec 16 09:57 mom.log.5
>
> -rw-r--r-- 1 root root 115K Dec 21 11:29 supervdsm.log
>
> -rw-r--r-- 1 root root 2.7K Oct 16 11:38 upgrade.log
>
> -rw-r--r-- 1 vdsm kvm 13M Dec 22 08:44 vdsm.log
>
>
>
>
>
> *From:* Simone Tiraboschi [mailto:stirabos@redhat.com]
> *Sent:* Tuesday, December 22, 2015 3:58 AM
> *To:* Will Dennis; Sahina Bose
> *Cc:* Yedidyah Bar David; users
> *Subject:* Re: [ovirt-users] Cannot retrieve answer file from 1st HE host
> when setting up 2nd host
>
>
>
>
>
>
>
> On Tue, Dec 22, 2015 at 2:09 AM, Will Dennis <wdennis(a)nec-labs.com> wrote:
>
> http://ur1.ca/ocstf
>
>
>
>
> 2015-12-21 11:28:39 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:219 DIALOG:SEND Please specify the full
> shared storage connection path to use (example: host:/path):
> 2015-12-21 11:28:55 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:219 DIALOG:RECEIVE localhost:/engine
>
>
>
> OK, so you are trying to deploy hosted-engine on GlusterFS in a
> hyper-converged way (using the same hosts for virtualization and for
> serving GlusterFS). Unfortunately I have to advise you that this is not a
> supported configuration on oVirt 3.6, due to several open bugs.
>
> So I'm glad you can help us test it, but I should warn you that today
> this schema is not production-ready.
>
>
>
> In your case it seems that VDSM correctly connects the GlusterFS volume,
> seeing all the bricks
>
>
> 2015-12-21 11:28:55 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.storage.nfs plugin.execute:936
> execute-output: ('/sbin/gluster', '--mode=script', '--xml', 'volume',
> 'info', 'engine', '--remote-host=localhost') stdout:
> <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
> <cliOutput>
> <opRet>0</opRet>
> <opErrno>0</opErrno>
> <opErrstr/>
> <volInfo>
> <volumes>
> <volume>
> <name>engine</name>
> <id>974c9da4-b236-4fc1-b26a-645f14601db8</id>
> <status>1</status>
> <statusStr>Started</statusStr>
> <brickCount>6</brickCount>
> <distCount>3</distCount>
>
>
>
> but then VDSM doesn't find any storage domain there:
>
>
>
>
> otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._late_customization
> 2015-12-21 11:29:58 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.storage.storage
> storage._getExistingDomain:476 _getExistingDomain
> 2015-12-21 11:29:58 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.storage.storage
> storage._storageServerConnection:638 connectStorageServer
> 2015-12-21 11:29:58 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.storage.storage
> storage._storageServerConnection:701 {'status': {'message': 'OK', 'code':
> 0}, 'statuslist': [{'status': 0, 'id':
> '67ece152-dd66-444c-8d18-4249d1b8f488'}]}
> 2015-12-21 11:29:58 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.storage.storage
> storage._getStorageDomainsList:595 getStorageDomainsList
> 2015-12-21 11:29:59 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.storage.storage
> storage._getStorageDomainsList:598 {'status': {'message': 'OK', 'code': 0},
> 'domlist': []}
>
>
>
> Can you please also attach the corresponding VDSM logs?
>
>
>
> Adding Sahina here.
>
>
>
>
>
> On Dec 21, 2015, at 11:58 AM, Simone Tiraboschi <stirabos(a)redhat.com
> <mailto:stirabos@redhat.com>> wrote:
>
>
> On Mon, Dec 21, 2015 at 5:52 PM, Will Dennis <wdennis(a)nec-labs.com<mailto:
> wdennis(a)nec-labs.com>> wrote:
>
> However, when I went to the 3rd host and did the setup, I selected
> 'glusterfs' and gave the path of the engine volume, it came back and
> incorrectly identified it as the first host, instead of an additional
> host... How does setup determine that? I confirmed that on this 3rd host
> that the engine volume is available and has the GUID subfolder of the
> hosted engine...
>
>
> Can you please attach a log of hosted-engine-setup also from there?
9 years, 4 months
Re: [ovirt-users] Hosted Engine crash - state = EngineUp-EngineUpBadHealth
by Simone Tiraboschi
On Tue, Dec 22, 2015 at 4:03 PM, Will Dennis <wdennis(a)nec-labs.com> wrote:
> I believe IPtables may be the culprit...
>
>
>
> Host 1:
> -------
> [root@ovirt-node-01 ~]# iptables -L
> Chain INPUT (policy ACCEPT)
> target     prot opt source    destination
> ACCEPT     all  --  anywhere  anywhere   state RELATED,ESTABLISHED
> ACCEPT     icmp --  anywhere  anywhere
> ACCEPT     all  --  anywhere  anywhere
> ACCEPT     tcp  --  anywhere  anywhere   tcp dpt:54321
> ACCEPT     tcp  --  anywhere  anywhere   tcp dpt:sunrpc
> ACCEPT     udp  --  anywhere  anywhere   udp dpt:sunrpc
> ACCEPT     tcp  --  anywhere  anywhere   tcp dpt:ssh
> ACCEPT     udp  --  anywhere  anywhere   udp dpt:snmp
> ACCEPT     tcp  --  anywhere  anywhere   tcp dpt:16514
> ACCEPT     tcp  --  anywhere  anywhere   multiport dports rockwell-csp2
> ACCEPT     tcp  --  anywhere  anywhere   multiport dports rfb:6923
> ACCEPT     tcp  --  anywhere  anywhere   multiport dports 49152:49216
> REJECT     all  --  anywhere  anywhere   reject-with icmp-host-prohibited
>
> Chain FORWARD (policy ACCEPT)
> target     prot opt source    destination
> REJECT     all  --  anywhere  anywhere   PHYSDEV match ! --physdev-is-bridged reject-with icmp-host-prohibited
>
> Chain OUTPUT (policy ACCEPT)
> target     prot opt source    destination
>
> Host 2:
> -------
> [root@ovirt-node-02 ~]# iptables -L
> Chain INPUT (policy ACCEPT)
> target     prot opt source    destination
> ACCEPT     all  --  anywhere  anywhere   state RELATED,ESTABLISHED
> ACCEPT     icmp --  anywhere  anywhere
> ACCEPT     all  --  anywhere  anywhere
> ACCEPT     tcp  --  anywhere  anywhere   tcp dpt:54321
> ACCEPT     tcp  --  anywhere  anywhere   tcp dpt:sunrpc
> ACCEPT     udp  --  anywhere  anywhere   udp dpt:sunrpc
> ACCEPT     tcp  --  anywhere  anywhere   tcp dpt:ssh
> ACCEPT     udp  --  anywhere  anywhere   udp dpt:snmp
> ACCEPT     tcp  --  anywhere  anywhere   tcp dpt:16514
> ACCEPT     tcp  --  anywhere  anywhere   multiport dports rockwell-csp2
> ACCEPT     tcp  --  anywhere  anywhere   multiport dports rfb:6923
> ACCEPT     tcp  --  anywhere  anywhere   multiport dports 49152:49216
> REJECT     all  --  anywhere  anywhere   reject-with icmp-host-prohibited
>
> Chain FORWARD (policy ACCEPT)
> target     prot opt source    destination
> REJECT     all  --  anywhere  anywhere   PHYSDEV match ! --physdev-is-bridged reject-with icmp-host-prohibited
>
> Chain OUTPUT (policy ACCEPT)
> target     prot opt source    destination
>
> Host 3:
> -------
> [root@ovirt-node-03 ~]# iptables -L
> Chain INPUT (policy ACCEPT)
> target     prot opt source    destination
>
> Chain FORWARD (policy ACCEPT)
> target     prot opt source    destination
>
> Chain OUTPUT (policy ACCEPT)
> target     prot opt source    destination
>
> An example of my Gluster engine volume status (from host #2):
>
> [root@ovirt-node-02 ~]# gluster volume status
> Status of volume: engine
> Gluster process                                    TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick ovirt-node-02:/gluster_brick2/engine_brick   49217     0          Y       2973
> Brick ovirt-node-03:/gluster_brick3/engine_brick   N/A       N/A        N       N/A
> Brick ovirt-node-02:/gluster_brick4/engine_brick   49218     0          Y       2988
> Brick ovirt-node-03:/gluster_brick5/engine_brick   N/A       N/A        N       N/A
> NFS Server on localhost                            2049      0          Y       3007
> Self-heal Daemon on localhost                      N/A       N/A        Y       3012
> NFS Server on ovirt-node-03                        2049      0          Y       1671
> Self-heal Daemon on ovirt-node-03                  N/A       N/A        Y       1707
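A hedged side note on the two N/A bricks above: offline bricks can usually be
restarted in place, without disturbing the healthy ones:
  gluster volume start engine force   # restarts only the bricks that are down
  gluster volume heal engine info     # then check for pending self-heal entries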
>
>
>
> I had changed the base port # per instructions found at
> http://www.ovirt.org/Features/Self_Hosted_Engine_Hyper_Converged_Gluster_...
> :
>
> “By default gluster uses a port that vdsm also wants, so we need to change
> base-port setting avoiding the clash between the two daemons. We need to add
>
>
>
> option base-port 49217
>
> to /etc/glusterfs/glusterd.vol
>
>
>
> and ensure glusterd service is enabled and started before proceeding.”
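On CentOS 7 that enable/start step amounts to roughly:
  systemctl enable glusterd
  systemctl restart glusterd     # pick up the new base-port setting
  gluster volume status engine   # bricks should now report ports >= 49217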
>
>
>
> So I did that on all the hosts:
>
>
>
> [root@ovirt-node-02 ~]# cat /etc/glusterfs/glusterd.vol
>
> volume management
>
> type mgmt/glusterd
>
> option working-directory /var/lib/glusterd
>
> option transport-type socket,rdma
>
> option transport.socket.keepalive-time 10
>
> option transport.socket.keepalive-interval 2
>
> option transport.socket.read-fail-log off
>
> option ping-timeout 30
>
> # option base-port 49152
>
> option base-port 49217
>
> option rpc-auth-allow-insecure on
>
> end-volume
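For reference, a minimal sketch of applying that change on a host (service
names assume EL7 with systemd; only restart glusterd when the node is not
actively serving bricks):

  # edit /etc/glusterfs/glusterd.vol to set "option base-port 49217", then:
  systemctl enable glusterd
  systemctl restart glusterd
  gluster volume status   # verify the bricks now bind at 49217 and above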
>
>
>
>
>
> Question: does oVirt really need iptables to be enforcing rules, or can I
> just set everything wide open? If I can, how do I specify that in setup?
>
hosted-engine-setup asks:
iptables was detected on your computer, do you wish setup to
configure it? (Yes, No)[Yes]:
You just have to say no here.
If you answer no, it's completely up to you to configure iptables, either
opening the required ports or disabling it entirely if you don't care.
The issue with the gluster ports is that hosted-engine-setup only configures
iptables for the ports it knows you'll need, and on 3.6 it always assumes
that the gluster volume is served by external hosts.
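If you answer no but keep iptables enabled, note that the rules shown above
only open 49152:49216, while the relocated bricks listen on 49217 and up. A
rough sketch of opening the new range by hand (the exact range is an
assumption based on the base-port 49217 setting and two bricks per host):

  # relocated gluster brick ports
  iptables -I INPUT -p tcp -m multiport --dports 49217:49222 -j ACCEPT
  # glusterd management ports
  iptables -I INPUT -p tcp -m multiport --dports 24007:24008 -j ACCEPT
  # persist the rules, assuming the iptables-services package is in use
  service iptables save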
>
>
>
> W.
>
>
>
>
>
> *From:* Sahina Bose [mailto:sabose@redhat.com]
> *Sent:* Tuesday, December 22, 2015 9:19 AM
> *To:* Will Dennis; Simone Tiraboschi; Dan Kenigsberg
>
> *Subject:* Re: [ovirt-users] Hosted Engine crash - state =
> EngineUp-EngineUpBadHealth
>
>
>
>
>
> On 12/22/2015 07:47 PM, Sahina Bose wrote:
>
>
>
> On 12/22/2015 07:28 PM, Will Dennis wrote:
>
> See attached for requested log files
>
>
> From gluster logs
>
> [2015-12-22 00:40:53.501341] W [MSGID: 108001]
> [afr-common.c:3924:afr_notify] 0-engine-replicate-1: Client-quorum is not
> met
> [2015-12-22 00:40:53.502288] W [socket.c:588:__socket_rwv]
> 0-engine-client-2: readv on 138.15.200.93:49217 failed (No data available)
>
> [2015-12-22 00:41:17.667302] W [fuse-bridge.c:2292:fuse_writev_cbk]
> 0-glusterfs-fuse: 3875597: WRITE => -1 (Read-only file system)
>
> Could you check if the gluster ports are open on all nodes?
>
>
> It's possible you ran into this? -
> https://bugzilla.redhat.com/show_bug.cgi?id=1288979
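A quick way to verify from each peer, as a sketch (host and port taken from
the log excerpt above):

  # is the brick port reachable from the other node?
  nc -z -w3 138.15.200.93 49217 && echo open || echo blocked
  # what do the local rules actually allow?
  iptables -L -n | grep -E '49217|49152'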
>
>
>
>
>
>
>
> *From:* Sahina Bose [mailto:sabose@redhat.com <sabose(a)redhat.com>]
> *Sent:* Tuesday, December 22, 2015 4:59 AM
> *To:* Simone Tiraboschi; Will Dennis; Dan Kenigsberg
> *Cc:* users
> *Subject:* Re: [ovirt-users] Hosted Engine crash - state =
> EngineUp-EngineUpBadHealth
>
>
>
>
>
> On 12/22/2015 02:38 PM, Simone Tiraboschi wrote:
>
>
>
>
>
> On Tue, Dec 22, 2015 at 2:31 AM, Will Dennis <wdennis(a)nec-labs.com> wrote:
>
> OK, another problem :(
>
> I was having the same problem with my second oVirt host that I had with my
> first one: after I ran “hosted-engine --deploy” on it and it completed
> successfully, I was experiencing a ~50sec lag when SSH’ing into the
> node…
>
> vpnp71:~ will$ time ssh root@ovirt-node-02 uptime
> 19:36:06 up 4 days, 8:31, 0 users, load average: 0.68, 0.70, 0.67
>
> real 0m50.540s
> user 0m0.025s
> sys 0m0.008s
>
>
> So, in the oVirt web admin console, I put the "ovirt-node-02” node into
> Maintenance mode, then SSH’d to the server and rebooted it. Sure enough,
> after the server came back up, SSH was fine (no delay), which again was the
> same experience I had had with the first oVirt host. So, I went back to the
> web console, and chose the “Confirm host has been rebooted” option, which
> I thought would be the right action to take after a reboot. The system
> opened a dialog box with a spinner, which never stopped spinning… So
> finally, I closed the dialog box with the upper right (X) symbol, and then
> for this same host chose “Activate” from the menu. It was then I noticed I
> had received a state transition email notifying me that
> "EngineUp-EngineUpBadHealth” and sure enough, the web UI was then
> unresponsive. I checked on the first oVirt host, the VM with the name
> “HostedEngine” is still running, but obviously isn’t working…
>
> So, looks like I need to restart the HostedEngine VM or take whatever
> action is needed to return oVirt to operation… Hate to keep asking this
> question, but what’s the correct action at this point?
>
>
>
> ovirt-ha-agent should always restart it for you after a few minutes, but
> the point is that the network configuration does not seem to be very stable.
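While waiting for the agent, the standard CLI gives a view from either host;
a brief sketch:

  # show the HA agents' view of the engine VM across hosts
  hosted-engine --vm-status
  # if it never recovers on its own, a manual start is possible
  hosted-engine --vm-start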
>
>
>
> I know from another thread that you are trying to deploy hosted-engine
> over GlusterFS in a hyperconverged way and this, as I said, is currently
> not supported.
>
> I think that it may also require some specific configuration on the
> network side.
>
>
> For hyperconverged gluster+engine, it should work without any specific
> configuration on the network side. However, if the network is flaky, it is
> possible that there are errors with gluster volume access. Could you
> provide the ovirt-ha-agent logs as well as the gluster mount logs?
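For anyone gathering the same data, the usual locations are roughly as
follows (paths assumed for oVirt 3.6 on EL7; the gluster mount log name
varies with the mount point):

  # ovirt-ha-agent log on each hosted-engine host
  tail -f /var/log/ovirt-hosted-engine-ha/agent.log
  # fuse mount log for the gluster-backed storage domain
  tail -f /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*.log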
>
>
>
>
> Adding Sahina and Dan here.
>
>
>
> Thanks, again,
> Will
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
>
>
>
>
>
>
9 years, 4 months
Restart of vdsmd
by gflwqs gflwqs
Hi list,
Do I need to put the host into maintenance before I restart vdsmd if I need
to change a parameter in /etc/vdsm/vdsm.conf?
Thanks!
Christian
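A cautious sketch of one common sequence, assuming the host can be drained
first (restarting vdsmd briefly drops the host from the engine's view, so
maintenance is the safe default):

  # put the host into maintenance from the engine UI first, then:
  vi /etc/vdsm/vdsm.conf      # change the parameter
  systemctl restart vdsmd
  # finally, activate the host again from the engine UI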
9 years, 4 months
iLO2
by Eriks Goodwin
I have been tinkering with various settings and configurations trying to get power management working properly--but it keeps giving me "Fail:" (without any details) every time I test the settings. The servers I am using are HP ProLiant DL380 G6 with integrated iLO2. Any tips?
I can use the same credentials to log into the system via the web interface--so I'm sure I have the address, username, and password right. :-)
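One way to narrow down a bare "Fail:" is to run the fence agent by hand on a
host; a sketch with a placeholder address and credentials (assumes the
fence-agents package is installed):

  # query power status directly through the iLO2 interface, verbosely
  fence_ilo2 -a 10.0.0.50 -l Administrator -p 'secret' -o status -v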
9 years, 4 months