Network Address Change
by Paul.LKW
Hi All:
I just have a case where I need to change the oVirt host and engine IP addresses
due to a data center decommission. I checked on the hosted-engine host and
there are some files I could change;
in ovirt-hosted-engine/hosted-engine.conf
ca_subject="O=simple.com, CN=1.2.3.4"
gateway=1.2.3.254
and of course I need to change the ovirtmgmt interface IP too. I think
just changing the lines above could do the trick, but where can I change
the other hosts' IPs in the cluster?
I think I will lose all the hosts once the hosted-engine host IP is
changed, as it is in a different subnet.
Is there any command line tool that could do that, or does someone have
such experience to share?
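Not an authoritative procedure, but for reference, a minimal sketch of where those values live on a hosted-engine host, assuming the default path /etc/ovirt-hosted-engine/hosted-engine.conf and an ifcfg-based ovirtmgmt setup; the gateway address below is a placeholder, and the HA services have to be restarted to pick up the change:

  # Back up and edit the hosted-engine HA configuration (placeholder gateway).
  cp /etc/ovirt-hosted-engine/hosted-engine.conf /etc/ovirt-hosted-engine/hosted-engine.conf.bak
  sed -i 's/^gateway=.*/gateway=10.0.0.254/' /etc/ovirt-hosted-engine/hosted-engine.conf

  # Update the ovirtmgmt bridge address (path assumes ifcfg-style networking).
  vi /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt

  # Restart the HA services so the agent re-reads its configuration.
  systemctl restart ovirt-ha-broker ovirt-ha-agent

The other hosts' addresses are kept by the engine, so they would normally be changed per host from the Administration Portal (host network setup) rather than from files on the hosted-engine host.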
Best Regards,
Paul.LKW
2 years, 4 months
OVS switch type for hosted-engine
by Devin A. Bougie
Is it possible to set up a hosted engine using the OVS switch type instead of Legacy? If it's not possible to start out with OVS, instructions for switching from Legacy to OVS after the fact would be greatly appreciated.
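Not an authoritative answer, but for illustration: in recent engine versions the switch type is a cluster-level property, so one way to inspect or change it outside the UI is the REST API. A hedged sketch, assuming a 4.x engine that exposes switch_type on the cluster resource (engine FQDN, cluster ID and credentials are placeholders):

  # Inspect the current switch type of a cluster.
  curl -s -k -u admin@internal:password \
       -H "Accept: application/xml" \
       https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_ID

  # Attempt to switch the cluster to OVS; hosts typically need to be
  # reinstalled or reactivated afterwards for the change to take effect.
  curl -s -k -u admin@internal:password \
       -X PUT -H "Content-Type: application/xml" \
       -d '<cluster><switch_type>ovs</switch_type></cluster>' \
       https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_ID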
Many thanks,
Devin
2 years, 12 months
OVN routing and firewalling in oVirt
by Gianluca Cecchi
Hello,
how do we manage routing between different OVN networks in oVirt?
And between OVN networks and physical ones?
Based on the architecture described here:
http://openvswitch.org/support/dist-docs/ovn-architecture.7.html
I see the terms logical router and gateway router, respectively, but how do
they apply to an oVirt configuration?
Do I have to choose between setting up a specialized VM or a physical one?
Is it applicable/advisable to put the gateway functionality on the oVirt
host itself?
Is there any security policy (like security groups in OpenStack) to
implement?
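For context, here is a minimal hand-built sketch of what the ovn-architecture terms look like with ovn-nbctl: two logical switches (OVN networks) joined by a logical router. The names, MACs and subnets are made up, and driving OVN by hand like this bypasses ovirt-provider-ovn, so it is illustration only:

  # Two logical switches and a logical router.
  ovn-nbctl ls-add net_blue
  ovn-nbctl ls-add net_red
  ovn-nbctl lr-add router0

  # One router port per network (placeholder MACs/subnets).
  ovn-nbctl lrp-add router0 rp-blue 00:00:00:00:01:01 192.168.1.1/24
  ovn-nbctl lrp-add router0 rp-red  00:00:00:00:02:01 192.168.2.1/24

  # Patch each switch to its router port.
  ovn-nbctl lsp-add net_blue blue-to-router
  ovn-nbctl lsp-set-type blue-to-router router
  ovn-nbctl lsp-set-addresses blue-to-router 00:00:00:00:01:01
  ovn-nbctl lsp-set-options blue-to-router router-port=rp-blue
  ovn-nbctl lsp-add net_red red-to-router
  ovn-nbctl lsp-set-type red-to-router router
  ovn-nbctl lsp-set-addresses red-to-router 00:00:00:00:02:01
  ovn-nbctl lsp-set-options red-to-router router-port=rp-red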
Thanks,
Gianluca
3 years
Vm suddenly paused with error "vm has paused due to unknown storage error"
by Jasper Siero
Hi all,
Since we upgraded our oVirt nodes to CentOS 7, a VM (not a specific one, but never more than one at a time) will sometimes pause suddenly with the error "VM ... has paused due to unknown storage error". It has now happened twice in a month.
The oVirt node uses SAN storage for the VMs running on it. When a specific VM pauses with an error, the other VMs keep running without problems.
The VM runs without problems after unpausing it.
Versions:
CentOS Linux release 7.1.1503
vdsm-4.14.17-0
libvirt-daemon-1.2.8-16
vdsm.log:
VM Channels Listener::DEBUG::2015-10-25 07:43:54,382::vmChannels::95::vds::(_handle_timeouts) Timeout on fileno 78.
libvirtEventLoop::INFO::2015-10-25 07:43:56,177::vm::4602::vm.Vm::(_onIOError) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::abnormal vm stop device virtio-disk0 error eother
libvirtEventLoop::DEBUG::2015-10-25 07:43:56,178::vm::5204::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::event Suspended detail 2 opaque None
libvirtEventLoop::INFO::2015-10-25 07:43:56,178::vm::4602::vm.Vm::(_onIOError) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::abnormal vm stop device virtio-disk0 error eother
...........
libvirtEventLoop::INFO::2015-10-25 07:43:56,180::vm::4602::vm.Vm::(_onIOError) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::abnormal vm stop device virtio-disk0 error eother
The specific error in the libvirt VM log:
block I/O error in device 'drive-virtio-disk0': Unknown error 32758 (32758)
...........
block I/O error in device 'drive-virtio-disk0': Unknown error 32758 (32758)
engine.log:
2015-10-25 07:44:48,945 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-40) [a43dcc8] VM diataal-prod-cas1 77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb moved from
Up --> Paused
2015-10-25 07:44:49,003 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-40) [a43dcc8] Correlation ID: null, Call Stack: null, Custom Event
ID: -1, Message: VM diataal-prod-cas1 has paused due to unknown storage error.
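Not a fix, but a hedged sketch of the first checks usually done for this pattern, assuming multipath-backed SAN LUNs; the idea is to correlate the pause timestamp with path events on the host that ran the VM:

  # Find the abnormal-stop events around the pause time.
  grep -i "abnormal vm stop" /var/log/vdsm/vdsm.log

  # Look for failed or recently recovered paths to the storage domain LUNs.
  multipath -ll
  grep -iE "path.*(down|fail)" /var/log/messages

Once the paths look healthy, the paused VM can simply be resumed from the engine, as you already observed.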
Has anyone experienced the same problem or knows a way to solve this?
Kind regards,
Jasper
5 years, 2 months
Unable to find OVF_STORE after recovery / upgrade
by Sam Cappello
Hi,
so I was running a 3.4 hosted engine two node setup on CentOS 6, had
some disk issues so I tried to upgrade to CentOS 7 and follow the path
3.4 > 3.5 > 3.6 > 4.0. I screwed up big time somewhere between 3.6 and
4.0, so I wiped the drives, installed a fresh 4.0.3, then created the
database and restored the 3.6 engine backup before running engine-setup
as per the docs. Things seemed to work, but I have the following
issues / symptoms:
- ovirt-ha-agent running 100% CPU on both nodes
- messages in the UI that the Hosted Engine storage Domain isn't active
and Failed to import the Hosted Engine Storage Domain
- hosted engine is not visible in the UI
and the following repeating in the agent.log:
MainThread::INFO::2016-10-03
12:38:27,718::hosted_engine::461::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 3400)
MainThread::INFO::2016-10-03
12:38:27,720::hosted_engine::466::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host vmhost1.oracool.net (id: 1, score: 3400)
MainThread::INFO::2016-10-03
12:38:37,979::states::421::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm running on localhost
MainThread::INFO::2016-10-03
12:38:37,985::hosted_engine::612::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
Initializing VDSM
MainThread::INFO::2016-10-03
12:38:45,645::hosted_engine::639::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
Connecting the storage
MainThread::INFO::2016-10-03
12:38:45,647::storage_server::219::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
Connecting storage server
MainThread::INFO::2016-10-03
12:39:00,543::storage_server::226::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
Connecting storage server
MainThread::INFO::2016-10-03
12:39:00,562::storage_server::233::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
Refreshing the storage domain
MainThread::INFO::2016-10-03
12:39:01,235::hosted_engine::666::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
Preparing images
MainThread::INFO::2016-10-03
12:39:01,236::image::126::ovirt_hosted_engine_ha.lib.image.Image::(prepare_images)
Preparing images
MainThread::INFO::2016-10-03
12:39:09,295::hosted_engine::669::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
Reloading vm.conf from the shared storage domain
MainThread::INFO::2016-10-03
12:39:09,296::config::206::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
Trying to get a fresher copy of vm configuration from the OVF_STORE
MainThread::WARNING::2016-10-03
12:39:16,928::ovf_store::107::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
Unable to find OVF_STORE
MainThread::ERROR::2016-10-03
12:39:16,934::config::235::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
Unable to get vm.conf from OVF_STORE, falling back to initial vm.conf
I have searched a bit and not really found a solution, and have come to
the conclusion that I have made a mess of things, and am wondering if
the best solution is to export the VMs, reinstall everything, and then
import them back.
I am using remote NFS storage.
If I try to add the hosted engine storage domain, it says it is already
registered.
I have also upgraded and am now running oVirt Engine Version:
4.0.4.4-1.el7.centos
Hosts were installed using ovirt-node, currently at kernel
3.10.0-327.28.3.el7.x86_64.
If a fresh install is best, any advice / pointer to a doc that explains
the best way to do this?
I have not moved my most important server over to this cluster yet, so I
can take some downtime to reinstall.
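For what it's worth, before deciding on a reinstall it may be worth capturing the HA state and restarting the HA services so the agent reconnects the storage and rescans for the OVF_STORE volumes; a small sketch, assuming a standard hosted-engine host:

  # Overall HA view from any hosted-engine host.
  hosted-engine --vm-status

  # Restart the HA services so the agent reconnects storage and rescans
  # for the OVF_STORE disks.
  systemctl restart ovirt-ha-broker ovirt-ha-agent

  # Follow the agent log to see whether the OVF_STORE is found this time.
  tail -f /var/log/ovirt-hosted-engine-ha/agent.log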
thanks!
sam
5 years, 11 months
[Users] Problem Creating "oVirtEngine" Machine
by Richie@HIP
I can't agree with you more. Modifying every box's or Virtual Machine's
HOSTS file with an FQDN and IP SHOULD work, but in my case it is not.
There are several reasons I've come to believe could be the problem
during my trial-and-error testing and learning.
FIRST - MACHINE IPs.
- The machines' "Names" were not appearing in the Microsoft Active
Directory DHCP along with their assigned IPs; in other words, the DHCP
just showed an "Assigned IP", equal to the Linux machine's IP, with an
<empty> (i.e. blank, none, zilch, plain old "no-letters-or-numbers")
"Name" in the "Name" column (i.e. the machine's "network name", or the
FQDN value used by the Windows AD DNS service).
- If your IP is appearing with an <empty> "Name", there is no "host
name" to associate with the IP, which makes it difficult to define an
FQDN; and that isn't very useful if we're going to use the HOSTS files
on all participating machines in an oVirt installation.
- I kept banging my head for three (3) long hours trying to find the
problem.
  - In Fedora 18, I couldn't find where the "network name" of the
machine could be defined.
  - I tried putting the "Additional Search Domains" and/or "DHCP Client
ID" in the Fedora 18 Desktop, under "System Settings > Hardware >
Network > Options > IPv4 Settings".
    - The DHCP went crazy, showing an "Aberrant MAC Address" (i.e. a
really long string value where the machine's MAC address should be),
and we knew the MAC address as we obtained it using "ifconfig" on the
machine getting its IP from the DHCP. So we reverted these entries,
rebooted, and got an assigned IP with the proper MAC address, but
still no "Name".
  - I kept wandering around the "Settings" and seeing which one made
sense, but what the heck, I went for it.
    - Under "System Settings > System > Details" I found the
information about GNOME and the machine's hardware.
    - There was a field for "Device Name" that originally had
"localhost.localdomain"; I changed the value to "ovirtmanager", and
under "Graphics" changed "Forced Fallback Mode" to "ON".
    - I also installed all the Kerberos libraries and clients (i.e.
authconfig-gtk, authhub, authhub-client, krb5-apple-clents,
krb5-auth-dialog, krb5-workstation, pam-kcoda, pam-krb5, root-net.krb5)
and rebooted.
    - VOILA…!!!
  - I don't know if it was the change of "Device Name" from
"localhost.localdomain" to "ovirtengine", the Kerberos libraries
install, or both. But finally the MS AD DHCP was showing the assigned
IP, the machine "Name" and the proper MAC address. Regardless, setting
the machine's "Network Name" under "System Settings > System > Details
> Device Name", with no explanation of what "Device Name" meant or was
used for, was the last place I would have imagined this network setting
could be defined.
  - NOTE - Somebody has to try the two steps I did together separately,
to see which one is the real problem-solver; for me it is working, and
"if it ain't broke, don't fix it…"
Now that I have the DHCP / IP thing sorted, I have to do the DNS stuff.
To this point, I've addressed the DHCP and "Network Name" of the
IP lease (required for the DNS to work). This still doesn't completely
explain why modifying the HOSTS file (allowing me to set an IP and a
non-DNS FQDN) lets me install the oVirtEngine "as long as I do not use
the default HTTPd service parameters as suggested by the install". By
using the HOSTS file to "define" FQDNs, AND NOT using the default HTTPd
suggested changes, I'm able to install the oVirtEngine (given that I use
ports 8700 and 8701) and to access the "oVirtEngine Welcome Screen", BUT
NONE of the "oVirt Portals" work… YET…!!!
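For reference, a minimal sketch of the kind of /etc/hosts entries being described here, with made-up names and addresses, replicated on the engine machine and on every virtualization host:

  # /etc/hosts on the engine and on each host (hypothetical values).
  192.168.1.10   ovirtengine.example.lan   ovirtengine
  192.168.1.21   ovirtnode01.example.lan   ovirtnode01
  192.168.1.22   ovirtnode02.example.lan   ovirtnode02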
More to come during the week
Richie
José E ("Richie") Piovanetti, MD, MS
M: 787-615-4884 | richiepiovanetti(a)healthcareinfopartners.com
On Aug 2, 2013, at 3:10 AM, Joop <jvdwege(a)xs4all.nl> wrote:
> Hello Ritchie,
>
>> In a conversation via IRC, someone suggested that I activate "dnsmasq" to overcome what appears to be a DNS problem. I'll try that other possibility once I get home later today.
>>
>> In the meantime, what do you mean by "fixing the hostname"…? I opened and fixed the HOSTNAMES and changed it from "localhost-localdomain" to "localhost.localdomain" and that made no difference. Albeit, after changing I didn't restart, remove ovirtEngine (using "engine-cleanup") and reinstall via "engine-setup". Is that what you mean…?
>>
>> In the meantime, the fact that even if I resolve the issue of oVirtEngine I will not be able to connect to the oVirt Nodes unless I have DNS resolution apparently means I should do something about resolving via DNS in my home LAN (i.e. implement some sort of "DNS cache" so I can resolve my home computers via DNS inside my LAN).
>>
>> Any suggestions are MORE THAN WELCOME…!!!
>>
>
> Having set up oVirt more times than I can count, I share your feeling that it isn't always clear why things are going wrong, but in this case I suspect that there is a rather small thing missing.
> In short: if you set up ovirt-engine, either using VirtualBox or on real hardware, and you give your host a meaningful name AND you add that info to your /etc/hosts file, then things SHOULD work; no need for dnsmasq or even bind. That would make things easier once you start adding virt hosts to your infrastructure, since you will need to duplicate these actions on each host (add the engine name/IP to each host, add each host to the others, and all hosts to the engine).
>
> Just ask if you need more assistance and I will write down a small howto that should work out of the box, or else I might have some time to see if I can get things going.
>
> Regards,
>
> Joop
>
5 years, 11 months
Re: [ovirt-users] Question about the ovirt-engine-sdk-java
by Michael Pasternak
Hi Salifou,
Actually the Java SDK is intentionally hiding transport-level internals so developers can stay in the Java domain. If your headers are static, the easiest way would be to use a reverse proxy in the middle to intercept requests.
Can you tell me why you need this?
On Friday, October 16, 2015 1:14 AM, Salifou Sidi M. Malick <ssidimah(a)redhat.com> wrote:
Hi Michael,
I have a question about the ovirt-engine-sdk-java.
Is there a way to add custom request headers to each RHEVM API call?
Here is an example of a request that I would like to do:
$ curl -v -k \
          -H "ID: user1(a)ad.xyz.com" \
          -H "PASSWORD: Pwssd" \
          -H "TARGET: kobe" \
          https://vm0.smalick.com/api/hosts
I would like to add ID, PASSWORD and TARGET as HTTP request headers.
Thanks,
Salifou
5 years, 11 months
[Users] oVirt Weekly Sync Meeting Minutes -- 2012-05-23
by Mike Burns
Minutes: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-05-23-14.00.html
Minutes (text): http://ovirt.org/meetings/ovirt/2012/ovirt.2012-05-23-14.00.txt
Log: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-05-23-14.00.log.html
=========================
#ovirt: oVirt Weekly Sync
=========================
Meeting started by mburns at 14:00:23 UTC. The full logs are available
at http://ovirt.org/meetings/ovirt/2012/ovirt.2012-05-23-14.00.log.html
.
Meeting summary
---------------
* agenda and roll call (mburns, 14:00:41)
* Status of next release (mburns, 14:05:17)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=822145 (mburns,
14:05:29)
* AGREED: freeze date and beta release delayed by 1 week to 2012-06-07
(mburns, 14:12:33)
* post freeze, release notes flag needs to be used where required
(mburns, 14:14:21)
* https://bugzilla.redhat.com/show_bug.cgi?id=821867 is a VDSM blocker
for 3.1 (oschreib, 14:17:27)
* ACTION: dougsland to fix upstream vdsm right now, and open a bug on
libvirt augeas (oschreib, 14:21:44)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=822158 (mburns,
14:23:39)
* assignee not available, update to come tomorrow (mburns, 14:24:59)
* ACTION: oschreib to make sure BZ#822158 is handled quickly
(oschreib, 14:25:29)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=824397 (mburns,
14:28:55)
* 824397 expected to be merged prior next week's meeting (mburns,
14:29:45)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=824420 (mburns,
14:30:15)
* tracker for node based on F17 (mburns, 14:30:28)
* blocked by util-linux bug currently (mburns, 14:30:40)
* new build expected from util-linux maintainer in next couple days
(mburns, 14:30:55)
* sub-project status -- engine (mburns, 14:32:49)
* nothing to report outside of blockers discussed above (mburns,
14:34:00)
* sub-project status -- vdsm (mburns, 14:34:09)
* nothing outside of blockers above (mburns, 14:35:36)
* sub-project status -- node (mburns, 14:35:43)
* working on f17 migration, but blocked by util-linux bug (mburns,
14:35:58)
* should be ready for freeze deadline (mburns, 14:36:23)
* Review decision on Java 7 and Fedora jboss rpms in oVirt Engine
(mburns, 14:36:43)
* Java7 basically working (mburns, 14:37:19)
* LINK: http://gerrit.ovirt.org/#change,4416 (oschreib, 14:39:35)
* engine will make ack/nack statement next week (mburns, 14:39:49)
* fedora jboss rpms patch is in review, short tests passed (mburns,
14:40:04)
* engine ack on fedora jboss rpms and java7 needed next week (mburns,
14:44:47)
* Upcoming Workshops (mburns, 14:45:11)
* NetApp workshop set for Jan 22-24 2013 (mburns, 14:47:16)
* already at half capacity for Workshop at LinuxCon Japan (mburns,
14:47:37)
* please continue to promote it (mburns, 14:48:19)
* proposal: board meeting to be held at all major workshops (mburns,
14:48:43)
* LINK: http://www.ovirt.org/wiki/OVirt_Global_Workshops (mburns,
14:49:30)
* Open Discussion (mburns, 14:50:12)
* oVirt/Quantum integration discussion will be held separately
(mburns, 14:50:43)
Meeting ended at 14:52:47 UTC.
Action Items
------------
* dougsland to fix upstream vdsm right now, and open a bug on libvirt
augeas
* oschreib to make sure BZ#822158 is handled quickly
Action Items, by person
-----------------------
* dougsland
* dougsland to fix upstream vdsm right now, and open a bug on libvirt
augeas
* oschreib
* oschreib to make sure BZ#822158 is handled quickly
* **UNASSIGNED**
* (none)
People Present (lines said)
---------------------------
* mburns (98)
* oschreib (55)
* doronf (12)
* lh (11)
* sgordon (8)
* dougsland (8)
* ovirtbot (6)
* ofrenkel (4)
* cestila (2)
* RobertMdroid (2)
* ydary (2)
* rickyh (1)
* yzaslavs (1)
* cctrieloff (1)
* mestery_ (1)
* dustins (1)
Generated by `MeetBot`_ 0.1.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot
5 years, 11 months
[QE][ACTION REQUIRED] oVirt 3.5.1 RC status - postponed
by Sandro Bonazzola
Hi,
We still have blockers for the oVirt 3.5.1 RC release, so we need to postpone it until they are fixed.
The bug tracker [1] shows 1 open blocker:
Bug ID Whiteboard Status Summary
1160846 sla NEW Can't add disk to VM without specifying disk profile when the storage domain has more than one disk profile
In order to stabilize the release a new branch ovirt-engine-3.5.1 will be created from the same git hash used for composing the RC.
- ACTION: Gilad please provide ETA on above blocker, the new proposed RC date will be decided on the given ETA.
Maintainers:
- Please be sure that the 3.5 snapshot allows creating VMs
- Please be sure that no pending patches are going to block the release
- If any patch must block the RC release please raise the issue as soon as possible.
There are still 57 bugs [2] targeted to 3.5.1.
Excluding node and documentation bugs we still have 37 bugs [3] targeted to 3.5.1.
Maintainers / Assignee:
- Please add the bugs to the tracker if you think that 3.5.1 should not be released without them fixed.
- ACTION: Please update the target to 3.5.2 or later for bugs that won't be in 3.5.1:
it will ease gathering the blocking bugs for next releases.
- ACTION: Please fill release notes, the page has been created here [4]
Community:
- If you're testing oVirt 3.5 nightly snapshot, please add yourself to the test page [5]
[1] http://bugzilla.redhat.com/1155170
[2] http://goo.gl/7G0PDV
[3] http://goo.gl/6gUbVr
[4] http://www.ovirt.org/OVirt_3.5.1_Release_Notes
[5] http://www.ovirt.org/Testing/oVirt_3.5.1_Testing
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
5 years, 11 months
VM has paused due to no storage space error
by Sandvik Agustin
Hi users,
I have this problem where sometimes 1 to 3 VMs just automatically pause, without
user interaction, with the error "VM has paused due to no storage
space error". Any input from you guys is very appreciated.
TIA
Sandvik
5 years, 11 months
VM Import from remote libvirt Server on web gui with Host key verification failed or permission denied error
by Rogério Ceni Coelho
Hi oVirt Jedis !!!
First of all, congratulations on this amazing product. I am a VMware and
Hyper-V engineer, but I am very excited about oVirt.
Now, let's get to work ... :-)
I am trying to import a VM from a remote libvirt server in the web GUI, but
I have been unable to solve the problem so far.
[image: pasted1]
Node logs :
[root@hlg-rbs-ovirt-kvm01-poa ~]# tail -f /var/log/vdsm/vdsm.log | grep -i
error | grep -v Host.getStats
jsonrpc.Executor/0::ERROR::2016-10-06
10:24:52,432::v2v::151::root::(get_external_vms) error connection to
hypervisor: 'Cannot recv data: Permission denied, please try
again.\r\nPermission denied, please try again.\r\nPermission denied
(publickey,gssapi-keyex,gssapi-with-mic,password).: Connection reset by
peer'
jsonrpc.Executor/0::INFO::2016-10-06
10:24:52,433::__init__::513::jsonrpc.JsonRpcServer::(_serveRequest) RPC
call Host.getExternalVMs failed (error 65) in 10.05 seconds
[root@hlg-rbs-ovirt-kvm02-poa ~]# grep error /var/log/vdsm/vdsm.log | grep
get_external_vms
jsonrpc.Executor/7::ERROR::2016-10-06
10:25:37,344::v2v::151::root::(get_external_vms) error connection to
hypervisor: 'Cannot recv data: Host key verification failed.: Connection
reset by peer'
[root@hlg-rbs-ovirt-kvm02-poa ~]#
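Both failures above ("Permission denied (publickey,...)" and "Host key verification failed") point at the SSH connection that vdsm opens from the oVirt host to the remote libvirt server as the vdsm user, not as root. A hedged sketch of the usual preparation on each host that will run the import, assuming the vdsm account's home is /var/lib/vdsm and the qemu+ssh URL shown above:

  # Run as root on each oVirt host used for the import.
  sudo -u vdsm mkdir -p /var/lib/vdsm/.ssh
  sudo -u vdsm ssh-keygen -t rsa -N '' -f /var/lib/vdsm/.ssh/id_rsa
  sudo -u vdsm ssh-copy-id root@prd-openshift-kvm03-poa.rbs.com.br

  # Connect once interactively so the remote host key gets accepted and
  # "Host key verification failed" goes away for later connections.
  sudo -u vdsm ssh root@prd-openshift-kvm03-poa.rbs.com.br 'echo ok'

  # Optional sanity check from the host.
  sudo -u vdsm virsh -c qemu+ssh://root@prd-openshift-kvm03-poa.rbs.com.br/system list --all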
Engine Logs :
[root@hlg-rbs-ovirt01-poa ~]# tail -f /var/log/ovirt-engine/engine.log
2016-10-06 10:24:42,377 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-97) [] START, GetVmsFromExternalProviderVDSCommand(HostName =
hlg-rbs-ovirt-kvm01-poa.rbs.com.br,
GetVmsFromExternalProviderParameters:{runAsync='true',
hostId='5feddfba-d7b2-423e-a946-ac2bf36906fa', url='qemu+ssh://
root(a)prd-openshift-kvm03-poa.rbs.com.br/system', username='root',
originType='KVM'}), log id: eb750c7
*2016-10-06 10:24:53,435 WARN
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-97) [] Unexpected return value: StatusForXmlRpc [code=65,
message=Cannot recv data: Permission denied, please try again.*
*Permission denied, please try again.*
*Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).:
Connection reset by peer]*
2016-10-06 10:24:53,435 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-97) [] Failed in 'GetVmsFromExternalProviderVDS' method
2016-10-06 10:24:53,435 WARN
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-97) [] Unexpected return value: StatusForXmlRpc [code=65,
message=Cannot recv data: Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).:
Connection reset by peer]
2016-10-06 10:24:53,454 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-97) [] Correlation ID: null, Call Stack: null, Custom Event
ID: -1, Message: VDSM hlg-rbs-ovirt-kvm01-poa.rbs.com.br command failed:
Cannot recv data: Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).:
Connection reset by peer
2016-10-06 10:24:53,454 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-97) [] Command
'org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand'
return value
'org.ovirt.engine.core.vdsbroker.vdsbroker.VMListReturnForXmlRpc@6c6f696c'
2016-10-06 10:24:53,454 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-97) [] HostName = hlg-rbs-ovirt-kvm01-poa.rbs.com.br
2016-10-06 10:24:53,454 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-97) [] Command 'GetVmsFromExternalProviderVDSCommand(HostName
= hlg-rbs-ovirt-kvm01-poa.rbs.com.br,
GetVmsFromExternalProviderParameters:{runAsync='true',
hostId='5feddfba-d7b2-423e-a946-ac2bf36906fa', url='qemu+ssh://
root(a)prd-openshift-kvm03-poa.rbs.com.br/system', username='root',
originType='KVM'})' execution failed: VDSGenericException:
VDSErrorException: Failed to GetVmsFromExternalProviderVDS, error = Cannot
recv data: Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).:
Connection reset by peer, code = 65
2016-10-06 10:24:53,454 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-97) [] FINISH, GetVmsFromExternalProviderVDSCommand, log id:
eb750c7
2016-10-06 10:24:53,459 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-97) [] Correlation ID: null, Call Stack: null, Custom Event
ID: -1, Message: Failed to retrieve VMs information from external server
qemu+ssh://root@prd-openshift-kvm03-poa.rbs.com.br/system
2016-10-06 10:24:53,459 ERROR
[org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuery] (default
task-97) [] Query 'GetVmsFromExternalProviderQuery' failed:
EngineException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to
GetVmsFromExternalProviderVDS, error = Cannot recv data: Permission denied,
please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).:
Connection reset by peer, code = 65 (Failed with error unexpected and code
16)
2016-10-06 10:24:53,460 ERROR
[org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuery] (default
task-97) [] Exception: org.ovirt.engine.core.common.errors.EngineException:
EngineException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to
GetVmsFromExternalProviderVDS, error = Cannot recv data: Permission denied,
please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).:
Connection reset by peer, code = 65 (Failed with error unexpected and code
16)
at
org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:114)
[bll.jar:]
at
org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.runVdsCommand(VDSBrokerFrontendImpl.java:33)
[bll.jar:]
at
org.ovirt.engine.core.bll.QueriesCommandBase.runVdsCommand(QueriesCommandBase.java:257)
[bll.jar:]
at
org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuery.getVmsFromExternalProvider(GetVmsFromExternalProviderQuery.java:32)
[bll.jar:]
at
org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuery.executeQueryCommand(GetVmsFromExternalProviderQuery.java:27)
[bll.jar:]
at
org.ovirt.engine.core.bll.QueriesCommandBase.executeCommand(QueriesCommandBase.java:103)
[bll.jar:]
at
org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33)
[dal.jar:]
at org.ovirt.engine.core.bll.Backend.runQueryImpl(Backend.java:558)
[bll.jar:]
at org.ovirt.engine.core.bll.Backend.runQuery(Backend.java:529)
[bll.jar:]
at sun.reflect.GeneratedMethodAccessor181.invoke(Unknown Source)
[:1.8.0_102]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_102]
at java.lang.reflect.Method.invoke(Method.java:498)
[rt.jar:1.8.0_102]
at
org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437)
at
org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.delegateInterception(Jsr299BindingsInterceptor.java:70)
[wildfly-weld-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.doMethodInterception(Jsr299BindingsInterceptor.java:80)
[wildfly-weld-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:93)
[wildfly-weld-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437)
at
org.ovirt.engine.core.bll.interceptors.CorrelationIdTrackerInterceptor.aroundInvoke(CorrelationIdTrackerInterceptor.java:13)
[bll.jar:]
at sun.reflect.GeneratedMethodAccessor179.invoke(Unknown Source)
[:1.8.0_102]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_102]
at java.lang.reflect.Method.invoke(Method.java:498)
[rt.jar:1.8.0_102]
at
org.jboss.as.ee.component.ManagedReferenceLifecycleMethodInterceptor.processInvocation(ManagedReferenceLifecycleMethodInterceptor.java:89)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.invocationmetrics.ExecutionTimeInterceptor.processInvocation(ExecutionTimeInterceptor.java:43)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437)
at
org.jboss.weld.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:73)
[weld-core-impl-2.3.2.Final.jar:2.3.2.Final]
at
org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:83)
[wildfly-weld-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(ConcurrentContextInterceptor.java:45)
[wildfly-ee-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InitialInterceptor.processInvocation(InitialInterceptor.java:21)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
at
org.jboss.as.ee.component.interceptors.ComponentDispatcherInterceptor.processInvocation(ComponentDispatcherInterceptor.java:52)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.singleton.SingletonComponentInstanceAssociationInterceptor.processInvocation(SingletonComponentInstanceAssociationInterceptor.java:53)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:263)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:374)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:243)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.invocationmetrics.WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:43)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.security.SecurityContextInterceptor.processInvocation(SecurityContextInterceptor.java:100)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.interceptors.ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:66)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:64)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:356)
at
org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:636)
at
org.jboss.invocation.AccessCheckingInterceptor.processInvocation(AccessCheckingInterceptor.java:61)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:356)
at
org.jboss.invocation.PrivilegedWithCombinerInterceptor.processInvocation(PrivilegedWithCombinerInterceptor.java:80)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
at
org.jboss.as.ee.component.ViewService$View.invoke(ViewService.java:195)
at
org.jboss.as.ee.component.ViewDescription$1.processInvocation(ViewDescription.java:185)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
at
org.jboss.as.ee.component.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:73)
at
org.ovirt.engine.core.common.interfaces.BackendLocal$$$view3.runQuery(Unknown
Source) [common.jar:]
at
org.ovirt.engine.ui.frontend.server.gwt.GenericApiGWTServiceImpl.runQuery(GenericApiGWTServiceImpl.java:53)
at sun.reflect.GeneratedMethodAccessor222.invoke(Unknown Source)
[:1.8.0_102]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_102]
at java.lang.reflect.Method.invoke(Method.java:498)
[rt.jar:1.8.0_102]
at
com.google.gwt.rpc.server.RPC.invokeAndStreamResponse(RPC.java:196)
at
com.google.gwt.rpc.server.RpcServlet.processCall(RpcServlet.java:172)
at
com.google.gwt.rpc.server.RpcServlet.processPost(RpcServlet.java:233)
at
com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
[jboss-servlet-api_3.1_spec-1.0.0.Final.jar:1.0.0.Final]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
[jboss-servlet-api_3.1_spec-1.0.0.Final.jar:1.0.0.Final]
at
io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:85)
at
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:129)
at
org.ovirt.engine.core.utils.servlet.HeaderFilter.doFilter(HeaderFilter.java:94)
[utils.jar:]
at
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
at
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at
org.ovirt.engine.ui.frontend.server.gwt.GwtCachingFilter.doFilter(GwtCachingFilter.java:132)
at
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
at
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at
org.ovirt.engine.core.branding.BrandingFilter.doFilter(BrandingFilter.java:73)
[branding.jar:]
at
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
at
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at
org.ovirt.engine.core.utils.servlet.LocaleFilter.doFilter(LocaleFilter.java:66)
[utils.jar:]
at
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
at
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at
io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84)
at
io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
at
io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
at
org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at
io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131)
at
io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at
io.undertow.security.handlers.AuthenticationConstraintHandler.handleRequest(AuthenticationConstraintHandler.java:51)
at
io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
at
io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
at
io.undertow.servlet.handlers.security.ServletSecurityConstraintHandler.handleRequest(ServletSecurityConstraintHandler.java:56)
at
io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60)
at
io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77)
at
io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
at
io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43)
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at
org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at
io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:284)
at
io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:263)
at
io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81)
at
io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:174)
at
io.undertow.server.Connectors.executeRootHandler(Connectors.java:202)
at
io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:793)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[rt.jar:1.8.0_102]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[rt.jar:1.8.0_102]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_102]
2016-10-06 10:25:27,202 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-113) [] START, GetVmsFromExternalProviderVDSCommand(HostName
= hlg-rbs-ovirt-kvm02-poa.rbs.com.br,
GetVmsFromExternalProviderParameters:{runAsync='true',
hostId='f9c9d929-b460-4102-bb29-de1e6ad6ad72', url='qemu+ssh://
root(a)prd-openshift-kvm03-poa.rbs.com.br/system', username='root',
originType='KVM'}), log id: 4f3174a6
*2016-10-06 10:25:38,338 WARN
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-113) [] Unexpected return value: StatusForXmlRpc [code=65,
message=Cannot recv data: Host key verification failed.: Connection reset
by peer]*
2016-10-06 10:25:38,338 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-113) [] Failed in 'GetVmsFromExternalProviderVDS' method
2016-10-06 10:25:38,338 WARN
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-113) [] Unexpected return value: StatusForXmlRpc [code=65,
message=Cannot recv data: Host key verification failed.: Connection reset
by peer]
2016-10-06 10:25:38,343 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-113) [] Correlation ID: null, Call Stack: null, Custom Event
ID: -1, Message: VDSM hlg-rbs-ovirt-kvm02-poa.rbs.com.br command failed:
Cannot recv data: Host key verification failed.: Connection reset by peer
2016-10-06 10:25:38,343 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-113) [] Command
'org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand'
return value
'org.ovirt.engine.core.vdsbroker.vdsbroker.VMListReturnForXmlRpc@42dd60af'
2016-10-06 10:25:38,343 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-113) [] HostName = hlg-rbs-ovirt-kvm02-poa.rbs.com.br
2016-10-06 10:25:38,343 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-113) [] Command
'GetVmsFromExternalProviderVDSCommand(HostName =
hlg-rbs-ovirt-kvm02-poa.rbs.com.br,
GetVmsFromExternalProviderParameters:{runAsync='true',
hostId='f9c9d929-b460-4102-bb29-de1e6ad6ad72', url='qemu+ssh://
root(a)prd-openshift-kvm03-poa.rbs.com.br/system', username='root',
originType='KVM'})' execution failed: VDSGenericException:
VDSErrorException: Failed to GetVmsFromExternalProviderVDS, error = Cannot
recv data: Host key verification failed.: Connection reset by peer, code =
65
2016-10-06 10:25:38,343 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-113) [] FINISH, GetVmsFromExternalProviderVDSCommand, log id:
4f3174a6
2016-10-06 10:25:38,344 ERROR
[org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuery] (default
task-113) [] Query 'GetVmsFromExternalProviderQuery' failed:
EngineException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to
GetVmsFromExternalProviderVDS, error = Cannot recv data: Host key
verification failed.: Connection reset by peer, code = 65 (Failed with
error unexpected and code 16)
2016-10-06 10:25:38,345 ERROR
[org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuery] (default
task-113) [] Exception:
org.ovirt.engine.core.common.errors.EngineException: EngineException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to
GetVmsFromExternalProviderVDS, error = Cannot recv data: Host key
verification failed.: Connection reset by peer, code = 65 (Failed with
error unexpected and code 16)
at
org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:114)
[bll.jar:]
at
org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.runVdsCommand(VDSBrokerFrontendImpl.java:33)
[bll.jar:]
at
org.ovirt.engine.core.bll.QueriesCommandBase.runVdsCommand(QueriesCommandBase.java:257)
[bll.jar:]
at
org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuery.getVmsFromExternalProvider(GetVmsFromExternalProviderQuery.java:32)
[bll.jar:]
at
org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuery.executeQueryCommand(GetVmsFromExternalProviderQuery.java:27)
[bll.jar:]
at
org.ovirt.engine.core.bll.QueriesCommandBase.executeCommand(QueriesCommandBase.java:103)
[bll.jar:]
at
org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33)
[dal.jar:]
at org.ovirt.engine.core.bll.Backend.runQueryImpl(Backend.java:558)
[bll.jar:]
at org.ovirt.engine.core.bll.Backend.runQuery(Backend.java:529)
[bll.jar:]
at sun.reflect.GeneratedMethodAccessor181.invoke(Unknown Source)
[:1.8.0_102]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_102]
at java.lang.reflect.Method.invoke(Method.java:498)
[rt.jar:1.8.0_102]
at
org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437)
at
org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.delegateInterception(Jsr299BindingsInterceptor.java:70)
[wildfly-weld-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.doMethodInterception(Jsr299BindingsInterceptor.java:80)
[wildfly-weld-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:93)
[wildfly-weld-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437)
at
org.ovirt.engine.core.bll.interceptors.CorrelationIdTrackerInterceptor.aroundInvoke(CorrelationIdTrackerInterceptor.java:13)
[bll.jar:]
at sun.reflect.GeneratedMethodAccessor179.invoke(Unknown Source)
[:1.8.0_102]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_102]
at java.lang.reflect.Method.invoke(Method.java:498)
[rt.jar:1.8.0_102]
at
org.jboss.as.ee.component.ManagedReferenceLifecycleMethodInterceptor.processInvocation(ManagedReferenceLifecycleMethodInterceptor.java:89)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.invocationmetrics.ExecutionTimeInterceptor.processInvocation(ExecutionTimeInterceptor.java:43)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437)
at
org.jboss.weld.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:73)
[weld-core-impl-2.3.2.Final.jar:2.3.2.Final]
at
org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:83)
[wildfly-weld-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(ConcurrentContextInterceptor.java:45)
[wildfly-ee-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InitialInterceptor.processInvocation(InitialInterceptor.java:21)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
at
org.jboss.as.ee.component.interceptors.ComponentDispatcherInterceptor.processInvocation(ComponentDispatcherInterceptor.java:52)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.singleton.SingletonComponentInstanceAssociationInterceptor.processInvocation(SingletonComponentInstanceAssociationInterceptor.java:53)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:263)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:374)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:243)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.invocationmetrics.WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:43)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.security.SecurityContextInterceptor.processInvocation(SecurityContextInterceptor.java:100)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.interceptors.ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:66)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:64)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:356)
at
org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:636)
at
org.jboss.invocation.AccessCheckingInterceptor.processInvocation(AccessCheckingInterceptor.java:61)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:356)
at
org.jboss.invocation.PrivilegedWithCombinerInterceptor.processInvocation(PrivilegedWithCombinerInterceptor.java:80)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
at
org.jboss.as.ee.component.ViewService$View.invoke(ViewService.java:195)
at
org.jboss.as.ee.component.ViewDescription$1.processInvocation(ViewDescription.java:185)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
at
org.jboss.as.ee.component.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:73)
at
org.ovirt.engine.core.common.interfaces.BackendLocal$$$view3.runQuery(Unknown
Source) [common.jar:]
at
org.ovirt.engine.ui.frontend.server.gwt.GenericApiGWTServiceImpl.runQuery(GenericApiGWTServiceImpl.java:53)
at sun.reflect.GeneratedMethodAccessor222.invoke(Unknown Source)
[:1.8.0_102]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_102]
at java.lang.reflect.Method.invoke(Method.java:498)
[rt.jar:1.8.0_102]
at
com.google.gwt.rpc.server.RPC.invokeAndStreamResponse(RPC.java:196)
at
com.google.gwt.rpc.server.RpcServlet.processCall(RpcServlet.java:172)
at
com.google.gwt.rpc.server.RpcServlet.processPost(RpcServlet.java:233)
at
com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
[jboss-servlet-api_3.1_spec-1.0.0.Final.jar:1.0.0.Final]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
[jboss-servlet-api_3.1_spec-1.0.0.Final.jar:1.0.0.Final]
at
io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:85)
at
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:129)
at
org.ovirt.engine.core.utils.servlet.HeaderFilter.doFilter(HeaderFilter.java:94)
[utils.jar:]
at
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
at
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at
org.ovirt.engine.ui.frontend.server.gwt.GwtCachingFilter.doFilter(GwtCachingFilter.java:132)
at
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
at
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at
org.ovirt.engine.core.branding.BrandingFilter.doFilter(BrandingFilter.java:73)
[branding.jar:]
at
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
at
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at
org.ovirt.engine.core.utils.servlet.LocaleFilter.doFilter(LocaleFilter.java:66)
[utils.jar:]
at
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
at
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at
io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84)
at
io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
at
io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
at
org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at
io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131)
at
io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at
io.undertow.security.handlers.AuthenticationConstraintHandler.handleRequest(AuthenticationConstraintHandler.java:51)
at
io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
at
io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
at
io.undertow.servlet.handlers.security.ServletSecurityConstraintHandler.handleRequest(ServletSecurityConstraintHandler.java:56)
at
io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60)
at
io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77)
at
io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
at
io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43)
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at
org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at
io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:284)
at
io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:263)
at
io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81)
at
io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:174)
at
io.undertow.server.Connectors.executeRootHandler(Connectors.java:202)
at
io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:793)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[rt.jar:1.8.0_102]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[rt.jar:1.8.0_102]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_102]
2016-10-06 10:25:52,527 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-118) [] START, GetVmsFromExternalProviderVDSCommand(HostName
= hlg-rbs-ovirt-kvm03-poa.rbs.com.br,
GetVmsFromExternalProviderParameters:{runAsync='true',
hostId='02ead14e-0208-4a74-b1c2-4c19383820f9', url='qemu+ssh://
root(a)prd-openshift-kvm03-poa.rbs.com.br/system', username='root',
originType='KVM'}), log id: 7c8b1e2d
2016-10-06 10:26:02,695 WARN
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-118) [] Unexpected return value: StatusForXmlRpc [code=65,
message=Cannot recv data: Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).:
Connection reset by peer]
2016-10-06 10:26:02,695 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-118) [] Failed in 'GetVmsFromExternalProviderVDS' method
2016-10-06 10:26:02,696 WARN
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-118) [] Unexpected return value: StatusForXmlRpc [code=65,
message=Cannot recv data: Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).:
Connection reset by peer]
2016-10-06 10:26:02,701 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-118) [] Correlation ID: null, Call Stack: null, Custom Event
ID: -1, Message: VDSM hlg-rbs-ovirt-kvm03-poa.rbs.com.br command failed:
Cannot recv data: Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).:
Connection reset by peer
2016-10-06 10:26:02,701 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-118) [] Command
'org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand'
return value
'org.ovirt.engine.core.vdsbroker.vdsbroker.VMListReturnForXmlRpc@5ce9f7f2'
2016-10-06 10:26:02,701 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-118) [] HostName = hlg-rbs-ovirt-kvm03-poa.rbs.com.br
2016-10-06 10:26:02,701 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-118) [] Command
'GetVmsFromExternalProviderVDSCommand(HostName =
hlg-rbs-ovirt-kvm03-poa.rbs.com.br,
GetVmsFromExternalProviderParameters:{runAsync='true',
hostId='02ead14e-0208-4a74-b1c2-4c19383820f9', url='qemu+ssh://
root(a)prd-openshift-kvm03-poa.rbs.com.br/system', username='root',
originType='KVM'})' execution failed: VDSGenericException:
VDSErrorException: Failed to GetVmsFromExternalProviderVDS, error = Cannot
recv data: Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).:
Connection reset by peer, code = 65
2016-10-06 10:26:02,701 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-118) [] FINISH, GetVmsFromExternalProviderVDSCommand, log id:
7c8b1e2d
2016-10-06 10:26:02,701 ERROR
[org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuery] (default
task-118) [] Query 'GetVmsFromExternalProviderQuery' failed:
EngineException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to
GetVmsFromExternalProviderVDS, error = Cannot recv data: Permission denied,
please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).:
Connection reset by peer, code = 65 (Failed with error unexpected and code
16)
2016-10-06 10:26:02,701 ERROR
[org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuery] (default
task-118) [] Exception:
org.ovirt.engine.core.common.errors.EngineException: EngineException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to
GetVmsFromExternalProviderVDS, error = Cannot recv data: Permission denied,
please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).:
Connection reset by peer, code = 65 (Failed with error unexpected and code
16)
at
org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:114)
[bll.jar:]
at
org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.runVdsCommand(VDSBrokerFrontendImpl.java:33)
[bll.jar:]
at
org.ovirt.engine.core.bll.QueriesCommandBase.runVdsCommand(QueriesCommandBase.java:257)
[bll.jar:]
at
org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuery.getVmsFromExternalProvider(GetVmsFromExternalProviderQuery.java:32)
[bll.jar:]
at
org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuery.executeQueryCommand(GetVmsFromExternalProviderQuery.java:27)
[bll.jar:]
at
org.ovirt.engine.core.bll.QueriesCommandBase.executeCommand(QueriesCommandBase.java:103)
[bll.jar:]
at
org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33)
[dal.jar:]
at org.ovirt.engine.core.bll.Backend.runQueryImpl(Backend.java:558)
[bll.jar:]
at org.ovirt.engine.core.bll.Backend.runQuery(Backend.java:529)
[bll.jar:]
at sun.reflect.GeneratedMethodAccessor181.invoke(Unknown Source)
[:1.8.0_102]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_102]
at java.lang.reflect.Method.invoke(Method.java:498)
[rt.jar:1.8.0_102]
at
org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437)
at
org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.delegateInterception(Jsr299BindingsInterceptor.java:70)
[wildfly-weld-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.doMethodInterception(Jsr299BindingsInterceptor.java:80)
[wildfly-weld-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:93)
[wildfly-weld-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437)
at
org.ovirt.engine.core.bll.interceptors.CorrelationIdTrackerInterceptor.aroundInvoke(CorrelationIdTrackerInterceptor.java:13)
[bll.jar:]
at sun.reflect.GeneratedMethodAccessor179.invoke(Unknown Source)
[:1.8.0_102]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_102]
at java.lang.reflect.Method.invoke(Method.java:498)
[rt.jar:1.8.0_102]
at
org.jboss.as.ee.component.ManagedReferenceLifecycleMethodInterceptor.processInvocation(ManagedReferenceLifecycleMethodInterceptor.java:89)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.invocationmetrics.ExecutionTimeInterceptor.processInvocation(ExecutionTimeInterceptor.java:43)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437)
at
org.jboss.weld.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:73)
[weld-core-impl-2.3.2.Final.jar:2.3.2.Final]
at
org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:83)
[wildfly-weld-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(ConcurrentContextInterceptor.java:45)
[wildfly-ee-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InitialInterceptor.processInvocation(InitialInterceptor.java:21)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
at
org.jboss.as.ee.component.interceptors.ComponentDispatcherInterceptor.processInvocation(ComponentDispatcherInterceptor.java:52)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.singleton.SingletonComponentInstanceAssociationInterceptor.processInvocation(SingletonComponentInstanceAssociationInterceptor.java:53)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:263)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:374)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:243)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.invocationmetrics.WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:43)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.security.SecurityContextInterceptor.processInvocation(SecurityContextInterceptor.java:100)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.interceptors.ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:66)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:64)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:356)
at
org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:636)
at
org.jboss.invocation.AccessCheckingInterceptor.processInvocation(AccessCheckingInterceptor.java:61)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:356)
at
org.jboss.invocation.PrivilegedWithCombinerInterceptor.processInvocation(PrivilegedWithCombinerInterceptor.java:80)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
at
org.jboss.as.ee.component.ViewService$View.invoke(ViewService.java:195)
at
org.jboss.as.ee.component.ViewDescription$1.processInvocation(ViewDescription.java:185)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
at
org.jboss.as.ee.component.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:73)
at
org.ovirt.engine.core.common.interfaces.BackendLocal$$$view3.runQuery(Unknown
Source) [common.jar:]
at
org.ovirt.engine.ui.frontend.server.gwt.GenericApiGWTServiceImpl.runQuery(GenericApiGWTServiceImpl.java:53)
at sun.reflect.GeneratedMethodAccessor222.invoke(Unknown Source)
[:1.8.0_102]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_102]
at java.lang.reflect.Method.invoke(Method.java:498)
[rt.jar:1.8.0_102]
at
com.google.gwt.rpc.server.RPC.invokeAndStreamResponse(RPC.java:196)
at
com.google.gwt.rpc.server.RpcServlet.processCall(RpcServlet.java:172)
at
com.google.gwt.rpc.server.RpcServlet.processPost(RpcServlet.java:233)
at
com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
[jboss-servlet-api_3.1_spec-1.0.0.Final.jar:1.0.0.Final]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
[jboss-servlet-api_3.1_spec-1.0.0.Final.jar:1.0.0.Final]
at
io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:85)
at
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:129)
at
org.ovirt.engine.core.utils.servlet.HeaderFilter.doFilter(HeaderFilter.java:94)
[utils.jar:]
at
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
at
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at
org.ovirt.engine.ui.frontend.server.gwt.GwtCachingFilter.doFilter(GwtCachingFilter.java:132)
at
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
at
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at
org.ovirt.engine.core.branding.BrandingFilter.doFilter(BrandingFilter.java:73)
[branding.jar:]
at
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
at
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at
org.ovirt.engine.core.utils.servlet.LocaleFilter.doFilter(LocaleFilter.java:66)
[utils.jar:]
at
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
at
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at
io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84)
at
io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
at
io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
at
org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at
io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131)
at
io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at
io.undertow.security.handlers.AuthenticationConstraintHandler.handleRequest(AuthenticationConstraintHandler.java:51)
at
io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
at
io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
at
io.undertow.servlet.handlers.security.ServletSecurityConstraintHandler.handleRequest(ServletSecurityConstraintHandler.java:56)
at
io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60)
at
io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77)
at
io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
at
io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43)
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at
org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at
io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:284)
at
io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:263)
at
io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81)
at
io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:174)
at
io.undertow.server.Connectors.executeRootHandler(Connectors.java:202)
at
io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:793)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[rt.jar:1.8.0_102]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[rt.jar:1.8.0_102]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_102]
^C
[root@hlg-rbs-ovirt01-poa ~]#
When I try with virsh, it succeeds.
[root@hlg-rbs-ovirt-kvm01-poa ~]# virsh -c qemu+ssh://
root(a)prd-openshift-kvm03-poa.rbs.com.br/system
The authenticity of host 'prd-openshift-kvm03-poa.rbs.com.br (10.1.8.32)'
can't be established.
ECDSA key fingerprint is af:e9:12:29:65:ad:41:ab:0a:3c:a1:f0:73:1c:62:a5.
Are you sure you want to continue connecting (yes/no)? yes
root(a)prd-openshift-kvm03-poa.rbs.com.br's password:
Welcome to virsh, the virtualization interactive terminal.
Type: 'help' for help with commands
'quit' to quit
virsh # list
Id Name State
----------------------------------------------------
5 prd-openshift-etcd03-poa running
6 prd-openshift-master03-poa running
8 prd-openshift-node03-poa running
virsh # list --all
Id Name State
----------------------------------------------------
5 prd-openshift-etcd03-poa running
6 prd-openshift-master03-poa running
8 prd-openshift-node03-poa running
- teste shut off
- teste1 shut off
- tpl-centos72-64 shut off
virsh # quit
[root@hlg-rbs-ovirt-kvm01-poa ~]#
Thanks.
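The virsh test above works because it is interactive: the host key is
accepted by hand and the password is typed in. The engine, on the other
hand, runs the qemu+ssh connection non-interactively through VDSM on the
proxy host, which most likely explains both the "Host key verification
failed" and the later "Permission denied (publickey,...)" errors. A minimal
sketch of the usual preparation, assuming VDSM runs as the vdsm user with
home directory /var/lib/vdsm on the proxy host (names and paths may differ
on your installation):

# on the oVirt host selected as proxy, as root
sudo -u vdsm mkdir -p -m 700 /var/lib/vdsm/.ssh
sudo -u vdsm ssh-keygen -t rsa -N '' -f /var/lib/vdsm/.ssh/id_rsa
sudo -u vdsm ssh-copy-id root@prd-openshift-kvm03-poa.rbs.com.br
# one interactive connection so the target's host key lands in vdsm's known_hosts
sudo -u vdsm ssh root@prd-openshift-kvm03-poa.rbs.com.br 'true'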
5 years, 11 months
Change host names/IPs
by Davide Ferrari
Hello
Is there a clean way, ideally without downtime, to change the hostnames
and IP addresses of all the hosts in a running oVirt cluster?
--
Davide Ferrari
Senior Systems Engineer
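As far as I know there is no supported in-place rename of a host's address
inside the engine, so the usual approach, which avoids VM downtime as long
as live migration is available, is to cycle the hosts one at a time:
migrate the VMs off, put the host into maintenance, remove it from the
engine, reconfigure its IP/hostname on the OS, and add it back under the
new address. A rough sketch against the REST API (endpoints and element
names are from memory and should be checked against your version; IDs,
credentials and FQDNs are placeholders):

curl -k -u admin@internal:PASSWORD -X POST -H 'Content-Type: application/xml' \
     -d '<action/>' https://ENGINE/api/hosts/HOST_ID/deactivate
curl -k -u admin@internal:PASSWORD -X DELETE https://ENGINE/api/hosts/HOST_ID
# change the IP/hostname on the host itself, then re-add it:
curl -k -u admin@internal:PASSWORD -X POST -H 'Content-Type: application/xml' \
     -d '<host><name>node01</name><address>node01.new.example.com</address><root_password>SECRET</root_password></host>' \
     https://ENGINE/api/hosts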
5 years, 11 months
[Users] Lifecycle / upgradepath
by Sven Kieske
Hi Community,
Currently there is no single document describing supported
(which means: working) upgrade scenarios.
I think the project has matured enough to have such a supported
upgrade path, and it should be considered in the development of new
releases.
As far as I know, it is currently supported to upgrade
from x.y.z to x.y.z+1 and from x.y.z to x.y+1.z,
but not from x.y-1.z to x.y+1.z directly.
Maybe this should be put together in a wiki page, at least.
It would also be good to know how long a single "release"
will be supported.
In this context I would define a release as a version
bump from x.y.z to x.y+1.z or to x+1.y.z;
a bump in z would be a bugfix release.
The question is: how long will we get bugfix releases
for a given version?
What are your thoughts?
--
Mit freundlichen Grüßen / Regards
Sven Kieske
Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
5 years, 11 months
[Users] Nested virtualization with Opteron 2nd generation and oVirt 3.1 possible?
by Gianluca Cecchi
Hello,
I have two physical servers with 2nd-generation Opteron CPUs.
They run CentOS 6.3 and already have some VMs configured on them.
Their /proc/cpuinfo contains
...
model name : Dual-Core AMD Opteron(tm) Processor 8222
...
kvm_amd kernel module is loaded with its default enabled nested option
# systool -m kvm_amd -v
Module = "kvm_amd"
Attributes:
initstate = "live"
refcnt = "15"
srcversion = "43D8067144E7D8B0D53D46E"
Parameters:
nested = "1"
npt = "1"
...
I have already configured a Fedora 17 VM as an oVirt 3.1 engine.
I'm trying to configure another VM as an oVirt 3.1 node with
ovirt-node-iso-2.5.5-0.1.fc17.iso.
It seems I'm not able to configure it in a way that keeps the oVirt
install from complaining.
After some attempts, I tried this in my vm.xml for the cpu:
<cpu mode='custom' match='exact'>
<model fallback='allow'>athlon</model>
<vendor>AMD</vendor>
<feature policy='require' name='pni'/>
<feature policy='require' name='rdtscp'/>
<feature policy='force' name='svm'/>
<feature policy='require' name='clflush'/>
<feature policy='require' name='syscall'/>
<feature policy='require' name='lm'/>
<feature policy='require' name='cr8legacy'/>
<feature policy='require' name='ht'/>
<feature policy='require' name='lahf_lm'/>
<feature policy='require' name='fxsr_opt'/>
<feature policy='require' name='cx16'/>
<feature policy='require' name='extapic'/>
<feature policy='require' name='mca'/>
<feature policy='require' name='cmp_legacy'/>
</cpu>
Inside node /proc/cpuinfo becomes
processor : 3
vendor_id : AuthenticAMD
cpu family : 6
model : 2
model name : QEMU Virtual CPU version 0.12.1
stepping : 3
microcode : 0x1000065
cpu MHz : 3013.706
cache size : 512 KB
fpu : yes
fpu_exception : yes
cpuid level : 2
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat
pse36 clflush mmx fxsr sse sse2 syscall mmxext fxsr_opt lm nopl pni
cx16 hypervisor lahf_lm cmp_legacy cr8_legacy
bogomips : 6027.41
TLB size : 1024 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
Two questions:
1) Is there any combination I can put in the VM's XML file so that oVirt
doesn't complain about missing hardware virtualization with this
processor?
2) Supposing 1) is not possible in my case and I still want to test the
interface and try some config operations to see, for example, the
differences from RHEV 3.0, how can I do that?
At the moment this complaint about hardware virtualization prevents me
from activating the node.
I get
Installing Host f17ovn01. Step: RHEV_INSTALL.
Host f17ovn01 was successfully approved.
Host f17ovn01 running without virtualization hardware acceleration
Detected new Host f17ovn01. Host state was set to Non Operational.
Host f17ovn01 moved to Non-Operational state.
Host f17ovn01 moved to Non-Operational state as host does not meet the
cluster's minimum CPU level. Missing CPU features : CpuFlags
Can I lower the requirements to be able to operate without hw
virtualization in 3.1?
Thanks in advance,
Gianluca
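On question 1: instead of hand-picking features as in the <cpu> element
quoted above, one variant that is often used to hand the host's svm flag
straight through to the guest is host passthrough. This is a general
libvirt option, not something tested in this thread, and it needs a
reasonably recent libvirt/QEMU on the physical host:

  <cpu mode='host-passthrough'/>

With that, the guest's /proc/cpuinfo should list svm, which may be enough
to satisfy the cluster CPU check that is currently failing.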
5 years, 11 months
Need VM run once api
by Chandrahasa S
Hi Experts,
We are integrating oVirt with our internal cloud.
We installed cloud-init in a VM and then converted the VM to a template.
We deploy the template with the initial run parameters Hostname, IP
address, Gateway and DNS.
But when we power the VM on, the initial run parameters are not pushed
into the VM. It does work when we power on the VM using the Run Once
option in the oVirt portal.
I believe we need to power on the VM using the Run Once API, but we are
not able to find this API.
Can someone help with this?
I got a reply to this query last time, but unfortunately the mail got deleted.
Thanks & Regards
Chandrahasa S
=====-----=====-----=====
Notice: The information contained in this e-mail
message and/or attachments to it may contain
confidential or privileged information. If you are
not the intended recipient, any dissemination, use,
review, distribution, printing or copying of the
information contained in this e-mail message
and/or attachments to it are strictly prohibited. If
you have received this communication in error,
please notify us by reply e-mail or telephone and
immediately and permanently delete the message
and any attachments. Thank you
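For reference, a rough sketch of the REST call that corresponds to Run
Once with cloud-init. The <use_cloud_init> and <initialization>/<cloud_init>
element names are written from memory and should be verified against the
REST API documentation for your oVirt version; the engine address, VM id,
credentials and host name are placeholders:

curl -k -u admin@internal:PASSWORD -X POST -H 'Content-Type: application/xml' \
     https://ENGINE/api/vms/VM_ID/start \
     -d '<action>
           <use_cloud_init>true</use_cloud_init>
           <vm>
             <initialization>
               <cloud_init>
                 <host><address>myvm01.example.com</address></host>
               </cloud_init>
             </initialization>
           </vm>
         </action>'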
5 years, 12 months
[Users] importing from kvm into ovirt
by Jonathan Horne
I need to import a KVM virtual machine from a standalone KVM host into my
oVirt cluster. The standalone host uses local storage, and my oVirt cluster
uses iSCSI. Can I please have some advice on the best way to get this
system into oVirt?
Right now I see it as copying the .img file to somewhere... but I have no
idea where to start. I found this directory on one of my oVirt nodes:
/rhev/data-center/mnt/blockSD/fe633237-14b2-4f8b-aedd-bbf753bcafaf/master/vms
But inside are just directories that appear to have UUID-type names, and I
can't tell what belongs to which VM.
Any advice would be greatly appreciated.
Thanks,
jonathan
________________________________
This is a PRIVATE message. If you are not the intended recipient, please
delete without copying and kindly advise us by e-mail of the mistake in
delivery. NOTE: Regardless of content, this e-mail shall not operate to bind
SKOPOS to any order or other contract unless pursuant to explicit written
agreement or government initiative expressly permitting the use of e-mail
for such purpose.
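A sketch of the approach that is usually suggested for this, under the
assumption that an NFS export storage domain is attached to the target data
center (virt-v2v option names have changed between versions, so treat the
exact flags as illustrative and check the man page on your system):

# run on the standalone KVM host
virt-v2v -i libvirt -ic qemu:///system myguest \
         -o rhev -os nfsserver.example.com:/export/domain
# then, in the oVirt web admin, open the export domain and use Import to
# bring the VM into the iSCSI data domain

The UUID directories under .../master/vms are managed by oVirt itself and
are not meant to be populated by hand.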
6 years
Trying to reset password for ovirt wiki
by noc
Hoping someone can help me out.
For some reason I keep getting the following error when I try to reset
my password:
Reset password
* Error sending mail: Failed to add recipient: jvandewege(a)nieuwland.nl
[SMTP: Invalid response code received from server (code: 554,
response: 5.7.1 <jvandewege(a)nieuwland.nl>: Relay access denied)]
Complete this form to receive an e-mail reminder of your account details.
Since I receive the mailing list on this address, it is definitely a
working address.
I tried my home account too and got the same error, only then the relay was
denied by my home provider. Why?
A puzzled user,
Joop
6 years
ovirt-guest-agent issue on rhel5.5
by John Michael Mercado
Hi All,
I need your help. Has anyone encountered the error below and found a
solution? Can you help me fix it?
MainThread::INFO::2015-01-27
10:22:53,247::ovirt-guest-agent::57::root::Starting oVirt guest agent
MainThread::ERROR::2015-01-27
10:22:53,248::ovirt-guest-agent::138::root::Unhandled exception in oVirt
guest agent!
Traceback (most recent call last):
File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 132, in ?
agent.run(daemon, pidfile)
File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 63, in run
self.agent = LinuxVdsAgent(config)
File "/usr/share/ovirt-guest-agent/GuestAgentLinux2.py", line 371, in
__init__
AgentLogicBase.__init__(self, config)
File "/usr/share/ovirt-guest-agent/OVirtAgentLogic.py", line 171, in
__init__
self.vio = VirtIoChannel(config.get("virtio", "device"))
File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 150, in
__init__
self._stream = VirtIoStream(vport_name)
File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 131, in
__init__
self._vport = os.open(vport_name, os.O_RDWR)
OSError: [Errno 2] No such file or directory:
'/dev/virtio-ports/com.redhat.rhevm.vdsm'
Thanks
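The missing /dev/virtio-ports/com.redhat.rhevm.vdsm device usually means
the guest was started without the virtio-serial channel the agent expects,
so the agent has nothing to open. For reference, the guest's libvirt XML
normally carries a channel like the following (the source path is only an
example; oVirt generates one per VM):

  <channel type='unix'>
    <source mode='bind' path='/var/lib/libvirt/qemu/channels/VMID.com.redhat.rhevm.vdsm'/>
    <target type='virtio' name='com.redhat.rhevm.vdsm'/>
  </channel>

Note also that older EL5 guest kernels may lack virtio-serial support,
which would produce the same missing device node.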
6 years
[Users] oVirt Workshop at LinuxCon Japan 2012
by Leslie Hawthorn
Hello everyone,
As part of our efforts to raise awareness of and educate more developers
about the oVirt project, we will be holding an oVirt workshop at
LinuxCon Japan, taking place on June 8, 2012. You can find full details
of the workshop agenda on the LinuxCon Japan site. [0]
Registration for the workshop is now open and is free of charge for the
first 50 participants. We will also look at adding additional
participant slots to the workshop based on demand.
Attendees who register for LinuxCon Japan via the workshop registration
link [1] will also be eligible for a discount on their LinuxCon Japan
registration.
Please spread the word to folks you think would find the workshop
useful. If they have already registered for LinuxCon Japan, they can
simply edit their existing registration to include the workshop.
[0] -
https://events.linuxfoundation.org/events/linuxcon-japan/ovirt-gluster-wo...
[1] - http://www.regonline.com/Register/Checkin.aspx?EventID=1099949
Cheers,
LH
--
Leslie Hawthorn
Community Action and Impact
Open Source and Standards @ Red Hat
identi.ca/lh
twitter.com/lhawthorn
6 years
[Users] Moving iSCSI Master Data
by rni@chef.net
Hi,
it's me again...
I started my oVirt 'project' as a proof of concept, but, as always happens,
it became production.
Now I have to move the iSCSI master data domain to the real iSCSI target.
Is there any way to do this, and to get rid of the old master data domain?
Thank you for your help
Hans-Joachim
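The usual sequence, sketched here with placeholders and without having
verified the exact endpoints against your version: attach and activate the
new iSCSI data domain, move or export the disks off the old one, then put
the old master domain into maintenance; the master role migrates to another
active data domain on its own, after which the old domain can be detached
and removed.

curl -k -u admin@internal:PASSWORD -X POST -H 'Content-Type: application/xml' \
     -d '<action/>' \
     https://ENGINE/api/datacenters/DC_ID/storagedomains/OLD_SD_ID/deactivate
curl -k -u admin@internal:PASSWORD -X DELETE \
     https://ENGINE/api/datacenters/DC_ID/storagedomains/OLD_SD_ID

Deactivating only migrates the master role; it does not move any images.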
6 years
[Users] Can't access RHEV-H aka ovirt-node
by Scotto Alberto
Hi all,
I can't log in to the hypervisor, neither as root nor as admin, neither
from another computer via ssh nor directly on the machine.
I'm sure I remember the passwords. This is not the first time it happens:
last time I reinstalled the host. Everything worked OK for about 2 weeks,
and then...
What's going on? Is it a known behavior, somehow?
Before rebooting the hypervisor, I would like to try something. RHEV
Manager talks to RHEV-H without any problems. Can I log in with RHEV-M's
keys? How?
Thank you all.
Alberto Scotto
[Blue]
Via Cardinal Massaia, 83
10147 - Torino - ITALY
phone: +39 011 29100
al.scotto(a)reply.it
www.reply.it
________________________________
--
The information transmitted is intended for the person or entity to which
it is addressed and may contain confidential and/or privileged material.
Any review, retransmission, dissemination or other use of, or taking of any
action in reliance upon, this information by persons or entities other than
the intended recipient is prohibited. If you received this in error, please
contact the sender and delete the material from any computer.
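On the 'can I login with RHEV-M's keys' question: the Manager keeps the
SSH key it uses to deploy hosts under /etc/pki/ovirt-engine/keys/ (the file
name below is what oVirt 3.x uses; treat it as an assumption for your RHEV
release), so from the Manager machine something like this may give you a
root shell on the hypervisor without its password:

ssh -i /etc/pki/ovirt-engine/keys/engine_id_rsa root@RHEVH_HOST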
6 years, 1 month
Unable to make Single Sign on working on Windows 7 Guest
by Felipe Herrera Martinez
In case I am able to create an installer, what is the name of the application that needs to be there in order for oVirt to detect that the oVirt Guest Agent is installed?
I have created an installer adding the OvirtGuestService files and the Product Name to be shown, apart from the command-line post-install steps.
I have tried with "ovirt-guest-agent" and "Ovirt guest agent" as names for the application installed on the Windows 7 guest, and even though both are presented in the oVirt VM Applications tab, in either case the LogonVDSCommand appears.
Is there another option to make it work now?
Thanks in advance,
Felipe
6 years, 1 month
Re: [ovirt-users] Need VM run once api
by Chandrahasa S
Can anyone help on this?
Thanks & Regards
Chandrahasa S
From: Chandrahasa S/MUM/TCS
To: users(a)ovirt.org
Date: 28-07-2015 15:20
Subject: Need VM run once api
Hi Experts,
We are integrating ovirt with our internal cloud.
Here we installed cloud-init in the VM and then converted the VM to a template. We deploy the template with the initial run parameters Hostname, IP Address, Gateway and DNS.
But when we power the VM on, the initial run parameters are not getting pushed inside the VM. It does work when we power on the VM using the Run Once option in the oVirt portal.
I believe we need to power on the VM using the Run Once API, but we are not able to find this API.
Can someone help on this?
I got a reply on this query last time but unfortunately the mail got deleted.
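For reference, what I am trying to do looks roughly like the sketch below, using the v4 Python SDK (ovirt-engine-sdk-python); the URL, credentials, VM name and addresses are all placeholders, so please correct me if the call should look different.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Sketch: start an existing VM "run once"-style, pushing cloud-init data in.
# All values below are placeholders.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,
)
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]
vm_service = vms_service.vm_service(vm.id)

# As far as I understand, start() with use_cloud_init=True is what the
# "Run Once" dialog does for the initial run.
vm_service.start(
    use_cloud_init=True,
    vm=types.Vm(
        initialization=types.Initialization(
            host_name='myvm.example.com',
            dns_servers='192.0.2.53',
            nic_configurations=[
                types.NicConfiguration(
                    name='eth0',
                    on_boot=True,
                    boot_protocol=types.BootProtocol.STATIC,
                    ip=types.Ip(
                        address='192.0.2.10',
                        netmask='255.255.255.0',
                        gateway='192.0.2.1',
                    ),
                ),
            ],
        ),
    ),
)
connection.close()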
Thanks & Regards
Chandrahasa S
6 years, 1 month
Re: [ovirt-users] Problem Windows guests start in pause
by Dafna Ron
Hi Lucas,
Please send mails to the list next time.
can you please do rpm -qa |grep qemu.
also, can you try a different windows image?
Thanks,
Dafna
On 07/14/2014 02:03 PM, lucas castro wrote:
> On the host where I've tried to run the VM, I use CentOS 6.5
> and checked; there is no update for qemu, libvirt or related packages.
--
Dafna Ron
6 years, 1 month
Feature: Hosted engine VM management
by Roy Golan
Hi all,
Upcoming in 3.6 is an enhancement for managing the hosted engine VM.
In short, we want to:
* Allow editing the Hosted engine VM, storage domain, disks, networks etc
* Have a shared configuration for the hosted engine VM
* Have a backup for the hosted engine VM configuration
please review and comment on the wiki below:
http://www.ovirt.org/Hosted_engine_VM_management
Thanks,
Roy
6 years, 2 months
Re: [ovirt-users] Large DWH Database, how to empty
by Matt .
Hi,
OK thanks! I saw that after upgrading from 4.0.4 to 4.0.5 the DB immediately dropped by around 500MB and is now about 2GB smaller.
Does this sound familiar to you with other settings in 4.0.5?
Thanks,
Matt
2017-01-08 10:45 GMT+01:00 Shirly Radco <sradco(a)redhat.com>:
> No. That will corrupt your database.
>
> Are you using the full dwh or the smaller version for the dashboards?
>
> Please set the delete thresholds to save less data, and the data older than
> the time you set will be deleted.
> Add a file to /ovirt-engine-dwhd.conf.d/
> update_time_to_keep_records.conf
>
> Add these lines with the new configurations. The numbers represent the hours
> to keep the data.
>
> DWH_TABLES_KEEP_SAMPLES=24
> DWH_TABLES_KEEP_HOURLY=1440
> DWH_TABLES_KEEP_DAILY=43800
>
>
> These are the configurations for a full dwh.
>
> The smaller version configurations are:
> DWH_TABLES_KEEP_SAMPLES=24
> DWH_TABLES_KEEP_HOURLY=720
> DWH_TABLES_KEEP_DAILY=0
>
> The delete process runs by default at 3am every day (DWH_DELETE_JOB_HOUR=3)
>
> Best regards,
>
> Shirly Radco
>
> BI Software Engineer
> Red Hat Israel Ltd.
> 34 Jerusalem Road
> Building A, 4th floor
> Ra'anana, Israel 4350109
>
>
> On Fri, Jan 6, 2017 at 6:35 PM, Matt . <yamakasi.014(a)gmail.com> wrote:
>>
>> Hi,
>>
>> I seem to have some large database for the DWH logging and I wonder
>> how I can empty it safely.
>>
>> Can I just simply empty the database ?
>>
>> Have a good weekend!
>>
>> Cheers,
>>
>> Matt
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
6 years, 2 months
Re: [ovirt-users] Packet loss
by Doron Fediuck
Hi Kyle,
We may have seen something similar in the past but I think there were vlans involved.
Is it the same for you?
Tony / Dan, does it ring a bell?
6 years, 2 months
Unable to backend oVirt with Cinder
by Logan Kuhn
I've got Cinder configured and pointed at Ceph for its back-end storage. I can run ceph commands on the cinder machine, and cinder is configured for noauth; I've also tried it with Keystone for auth. I can run various cinder commands and they return as expected.
When I configure it in oVirt it adds the external provider fine, but when I go to create a disk it doesn't populate the volume type field; it's just empty. The corresponding cinder commands, cinder type-list and cinder type-show <name>, return fine and the type is public.
Ovirt and Cinder are on the same host so it isn't a firewall issue.
Cinder config:
[DEFAULT]
rpc_backend = rabbit
#auth_strategy = keystone
auth_strategy = noauth
enabled_backends = ceph
#glance_api_servers = http://10.128.7.252:9292
#glance_api_version = 2
#[keystone_authtoken]
#auth_uri = http://10.128.7.252:5000/v3
#auth_url = http://10.128.7.252:35357/v3
#auth_type = password
#memcached_servers = localhost:11211
#project_domain_name = default
#user_domain_name = default
#project_name = services
#username = user
#password = pass
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = ovirt-images
rbd_user = cinder
rbd_secret_uuid = <secret>
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = true
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
#glance_api_version = 2
[database]
connection = postgresql://user:pass@10.128.2.33/cinder
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_rabbit]
rabbit_host = localhost
rabbit_port = 5672
rabbit_userid = user
rabbit_password = pass
Regards,
Logan
6 years, 10 months
vdsClient is removed and replaced by vdsm-client
by Irit Goihman
Hi All,
vdsClient will be removed from master branch today.
It uses the XML-RPC protocol, which has been deprecated and replaced by JSON-RPC.
A new client for vdsm was introduced in 4.1: vdsm-client.
This is a simple client that uses the JSON-RPC protocol, which was introduced in oVirt 3.5.
The client is not aware of the available methods and parameters, and you
should consult
the schema [1] in order to construct the desired command.
Future versions should parse the schema and provide online help.
If you're using vdsClient, we will be happy to assist you in migrating to
the new vdsm client.
*vdsm-client usage:*
vdsm-client [-h] [-a ADDRESS] [-p PORT] [--unsecure] [--timeout TIMEOUT]
[-f FILE] namespace method [name=value [name=value] ...]
Invoking simple methods:
# vdsm-client Host getVMList
['b3f6fa00-b315-4ad4-8108-f73da817b5c5']
For invoking methods with many or complex parameters, you can read
the parameters from a JSON format file:
# vdsm-client Lease info -f lease.json
where lease.json file content is:
{
"lease": {
"sd_id": "75ab40e3-06b1-4a54-a825-2df7a40b93b2",
"lease_id": "b3f6fa00-b315-4ad4-8108-f73da817b5c5"
}
}
It is also possible to read parameters from standard input, creating
complex parameters interactively:
# cat <<EOF | vdsm-client Lease info -f -
{
"lease": {
"sd_id": "75ab40e3-06b1-4a54-a825-2df7a40b93b2",
"lease_id": "b3f6fa00-b315-4ad4-8108-f73da817b5c5"
}
}
EOF
*Constructing a command from vdsm schema:*
Let's take VM.getStats as an example.
This is the entry in the schema:
VM.getStats:
    added: '3.1'
    description: Get statistics about a running virtual machine.
    params:
    -   description: The UUID of the VM
        name: vmID
        type: *UUID
    return:
        description: An array containing a single VmStats record
        type:
        - *VmStats
namespace: VM
method name: getStats
params: vmID
The vdsm-client command is:
# vdsm-client VM getStats vmID=b3f6fa00-b315-4ad4-8108-f73da817b5c5
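Until that online help exists, you can also pull a verb's parameter list straight out of the schema with a few lines of Python (just a sketch; the schema path below is an assumption, so point it at wherever your vdsm package installs vdsm-api.yml):

import yaml

# Sketch: print the description and parameters of one vdsm verb.
# The path is an assumption; adjust it to your installation.
SCHEMA = '/usr/lib/python2.7/site-packages/vdsm/api/vdsm-api.yml'

with open(SCHEMA) as f:
    schema = yaml.safe_load(f)

entry = schema['VM.getStats']
print(entry['description'])
for param in entry.get('params', []):
    print('  %s - %s' % (param['name'], param['description']))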
*Invoking getVdsCaps command:*
# vdsm-client Host getCapabilities
Please consult vdsm-client help and man page for further details and
options.
[1] https://github.com/oVirt/vdsm/blob/master/lib/api/vdsm-api.yml
--
Irit Goihman
Software Engineer
Red Hat Israel Ltd.
7 years
Passing VLAN trunk to VM
by Simon Vincent
Is it possible to pass multiple VLANs to a VM (pfSense) using a single
virtual NIC? All my existing oVirt networks are setup as a single tagged
VLAN. I know this didn't use to be supported, but wondered if this has
changed. My other option is to pass each VLAN as a separate NIC to the VM
however if I needed to add a new VLAN I would have to add a new interface
and reboot the VM as hot-add of NICs is not supported by pfSense.
7 years, 1 month
HostedEngine with HA
by Carlos Rodrigues
Hello,
I have one cluster with two hosts with power management correctly
configured and one virtual machine with HostedEngine over shared
storage with Fibre Channel.
When I shut down the network of the host running the HostedEngine VM, should it be possible for the HostedEngine VM to migrate automatically to another host?
What is the expected behaviour in this HA scenario?
Regards,
--
Carlos Rodrigues
Engenheiro de Software Sénior
Eurotux Informática, S.A. | www.eurotux.com
(t) +351 253 680 300 (m) +351 911 926 110
7 years, 1 month
oVIRT 4.1 / iSCSI Multipathing
by Devin Acosta
I am using the latest release of oVirt, 4.1.3, and I am connecting a Dell Compellent SAN that has 2 fault domains, each on a separate VLAN, that I have attached to oVirt. From what I understand I am supposed to go into the “iSCSI Multipathing” option and add a bond of the iSCSI interfaces. I have done this, selecting the 2 logical networks together for iSCSI. I notice that there is an option below to select Storage Targets, but if I select the storage targets below along with the logical networks, the cluster goes crazy and appears to be mad. Storage, nodes, and everything goes offline, even though I have NFS also attached to the cluster.
How should this best be configured? What we notice happens is that when the server reboots it seems to log into the SAN correctly, but according to the Dell SAN it is only logged into one controller, so it only pulls both fault domains from a single controller.
Please advise.
Devin
7 years, 2 months
when creating VMs, I don't want hosted_storage to be an option
by Mike Farnam
Hi All - Is there a way to mark hosted_storage somehow so that it’s not available to add new VMs to? Right now it’s the default storage domain when adding a VM. At the least, I’d like to make another storage domain the default.
Is there a way to do this?
Thanks
7 years, 2 months
Official Hyperconverged Gluster oVirt upgrade procedure?
by Hanson
Hi Guys,
Just wondering if we have an updated manual, or what's the current
procedure for upgrading the nodes in a hyperconverged oVirt gluster pool?
I.e. the nodes run oVirt 4.0 as well as GlusterFS, with hosted-engine running
in a gluster storage domain.
Put the node in maintenance mode, disable glusterfs from the oVirt GUI, and run
yum update?
Thanks!
7 years, 3 months
[vdsm] status update: running containers alongside VMs
by Francesco Romani
Hi everyone,
I'm happy to share some progress about the former "convirt"[1] project,
which aims to let Vdsm run containers alongside VMs, on bare metal.
In the last couple of months I kept updating the patch series, which
is approaching readiness to be merged into Vdsm.
Please read through this mail to see what the patchset can do now,
how you could try it *now*, even before it is merged.
Everyone is invited to share thoughts and ideas about how this effort
could evolve.
This will be a long mail; I will amend, enhance and polish the content
and make a blog post (on https://mojaves.github.io) to make it easier
to consume and to have some easy-to-find documentation. Later on the
same content will appear also on the oVirt blog.
Happy hacking!
+++
# How to try how the experimental container support for Vdsm.
Vdsm is gaining *experimental* support to run containers alongside VMs.
Vdsm had since long time the ability to manage VMs which run containers,
and recently gained support for
[atomic guests](http://www.projectatomic.io/blog/2015/01/running-ovirt-guest-agent-as-privileged-container/).
With the new support we are describing, you will be able to manage containers
with the same, proven infrastructure that let you manage VMs.
This feature is currently being developed and it is still not merged in the
Vdsm codebase, so some extra work is needed if you want to try it out.
We are aiming to merge it in the oVirt 4.1.z cycle.
## What works, aka what to expect
The basic features are expected to work:
1. Run any docker image on the public docker registry
2. Make the container accessible from the outside (aka not just from localhost)
3. Use file-based storage for persistent volumes
## What does not yet work, aka what NOT to expect
Few things are planned and currently under active development:
1. Monitoring. Engine will not get any update from the container besides "VM" status (Up, Down...)
One important drawback is that you will not be told the IP of the container from Engine,
you will need to connect to the Vdsm host to discover it using standard docker tools.
2. Proper network integration. Some steps still need manual intervention
3. Stability and recovery - it's pre-alpha software after all! :)
## 1. Introduction and prerequisites
Trying out container support affects only the host and the Vdsm.
Besides adding a few custom properties (totally safe and supported since early
3.z), there are zero changes required to the DB and to Engine.
Nevertheless, we recommend to dedicate one oVirt 4.y environment,
or at least one 4.y host, to try out the container feature.
To get started, first thing you need is to setup a vanilla oVirt 4.y
installation. We will need to make changes to the Vdsm and to the
Vdsm host, so hosted engine and/or oVirt node may add extra complexity,
better to avoid them at the moment.
The reminder of this tutorial assumes you are using two hosts,
one for Vdsm (will be changed) and one for Engine (will require zero changes);
furthermore, we assume the Vdsm host is running on CentOS 7.y.
We require:
- one test host for Vdsm. This host needs to have one NIC dedicated to containers.
We will use the [docker macvlan driver](https://raesene.github.io/blog/2016/07/23/Docker-MacVLAN/),
so this NIC *must not be* part of one bridge.
- docker >= 1.12
- oVirt >= 4.0.5 (Vdsm >= 4.18.15)
- CentOS >= 7.2
Docker >= 1.12 is available for download [here](https://docs.docker.com/engine/installation/linux/centos/)
Caveats:
1. docker from the official rpms conflicts with docker from CentOS, and has a different package name: docker-engine vs docker.
Please note that the kubernetes package from CentOS, for example, require 'docker', not 'docker-engine'.
2. you may want to replace the default service file
[with this one](https://github.com/mojaves/convirt/blob/master/patches/centos72/syst...
and to use this
[sysconfig file](https://github.com/mojaves/convirt/blob/master/patches/centos72/sys....
Here I'm just adding the storage options docker requires, much like the CentOS docker is configured.
Configuring docker like this can save you some troubleshooting, especially if you had docker from CentOS installed
on the testing box.
## 2. Patch Vdsm to support containers
You need to patch and rebuild Vdsm.
Fetch [this patch](https://github.com/mojaves/convirt/blob/master/patches/vdsm/4.18.1...
and apply it against Vdsm 4.18.15.1. Vdsm 4.18.15.{1,2,...} are supported as well.
Rebuild Vdsm and reinstall on your box.
[centos 7.2 packages are here](https://github.com/mojaves/convirt/tree/master/rpms/centos72)
Make sure you install the Vdsm command line client (vdsm-cli)
Restart *both* Vdsm and Supervdsm, make sure Engine still works flawlessly with patched Vdsm.
This ensure that no regression is introduced, and that your environment can run VMs just as before.
Now we can proceed adding the container support.
start docker:
# systemctl start docker-engine
(optional)
# systemctl enable docker-engine
Restart Vdsm again
# systemctl restart vdsm
Now we can check if Vdsm detects docker, so you can use it:
still on the same Vdsm host, run
$ vdsClient -s 0 getVdsCaps | grep containers
containers = ['docker', 'fake']
This means this Vdsm can run containers using 'docker' and 'fake' runtimes.
Ignore the 'fake' runtime; as the name suggests, is a test driver, kinda like /dev/null.
Now we need to make sure the host network configuration is fine.
### 2.1. Configure the docker network for Vdsm
PLEASE NOTE
that the suggested network configuration assumes that
* you have one network, `ovirtmgmt` (the default one) you use for everything
* you have one Vdsm host with at least two NICs, one bound to the `ovirtmgmt` network, and one spare
_This step is not yet automated by Vdsm_, so manual action is needed; Vdsm will take
care of this automatically in the future.
You can use
[this helper script](https://github.com/mojaves/convirt/blob/master/patches/vdsm/cont-...,
which reuses the Vdsm libraries. Make sure
you have patched Vdsm to support container before to use it.
Let's review what the script needs:
# ./cont-setup-net -h
usage: cont-setup-net [-h] [--name [NAME]] [--bridge [BRIDGE]]
[--interface [INTERFACE]] [--gateway [GATEWAY]]
[--subnet [SUBNET]] [--mask [MASK]]
optional arguments:
-h, --help show this help message and exit
--name [NAME] network name to use
--bridge [BRIDGE] bridge to use
--interface [INTERFACE]
interface to use
--gateway [GATEWAY] address of the gateway
--subnet [SUBNET] subnet to use
--mask [MASK] netmask to use
So we need to feed --name, --interface, --gateway, --subnet and optionally --mask (default, /24, is often fine).
For my case the default mask was indeed fine, so I used the script like this:
# ./cont-setup-net --name ovirtmgmt --interface enp3s0 --gateway 192.168.1.1 --subnet 192.168.1.0
This is the output I got:
DEBUG:virt.containers.runtime:configuring runtime 'docker'
DEBUG:virt.containers.command:* calling ['/bin/docker', 'network', 'inspect', 'ovirtmgmt']
Error: No such network: ovirtmgmt
DEBUG:virt.containers.command:* called ['/bin/docker', 'network', 'inspect', 'ovirtmgmt']
DEBUG:virt.containers.runtime.Docker:config: cannot load 'ovirtmgmt', ignored
DEBUG:virt.containers.command:* calling ['/bin/docker', 'network', 'create', '-d', 'macvlan', '--subnet=192.168.1.0/24', '--gateway=192.168.1.1', '--ip-range=192.168.1.0/24', '-o', 'parent=enp3s0', 'ovirtmgmt']
DEBUG:virt.containers.command:* called ['/bin/docker', 'network', 'create', '-d', 'macvlan', '--subnet=192.168.1.0/24', '--gateway=192.168.1.1', '--ip-range=192.168.1.0/24', '-o', 'parent=enp3s0', 'ovirtmgmt']
DEBUG:virt.containers.runtime:configuring runtime 'fake'
You can clearly see what the script did, and why it needed root privileges. Let's double-check using the docker tools:
# docker network ls
NETWORK ID NAME DRIVER SCOPE
91535f3425a8 bridge bridge local
d42f7e5561b5 host host local
621ab6dd49b1 none null local
f4b88e4a67eb ovirtmgmt macvlan local
# docker network inspect ovirtmgmt
[
{
"Name": "ovirtmgmt",
"Id": "f4b88e4a67ebb7886ec74073333d613b1893272530cae4d407c95ab587c5fea1",
"Scope": "local",
"Driver": "macvlan",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.1.0/24",
"IPRange": "192.168.1.0/24",
"Gateway": "192.168.1.1"
}
]
},
"Internal": false,
"Containers": {},
"Options": {
"parent": "enp3s0"
},
"Labels": {}
}
]
Looks good! the host configuration is completed. Let's move to the Engine side.
## 3. Configure Engine
As mentioned above, we need now to configure Engine. This boils down to:
Add a few custom properties for VMs:
In case you were already using custom properties, you need to amend the command
line to not overwrite your existing ones.
# engine-config -s UserDefinedVMProperties='volumeMap=^[a-zA-Z_-]+:[a-zA-Z_-]+$;containerImage=^[a-zA-Z]+(://|)[a-zA-Z]+$;containerType=^(docker|rkt)$' --cver=4.0
It is worth stressing that while the variables are container-specific,
the VM custom properties are totally unintrusive and an old concept in oVirt, so
this step is totally safe.
Now restart Engine to let it use the new variables:
# systemctl restart ovirt-engine
The next step is actually configure one "container VM" and run it.
## 4. Create the container "VM"
To finally run a container, you start creating a VM much like you always did, with
few changes
1. most of the hardware-related configuration isn't relevant for container "VMs",
besides cpu share and memory limits; this will be better documented in the
future; unneeded configuration will just be ignored
2. You need to set some custom properties for your container "VM". Those are
actually needed to enable the container flow, and they are documented in
the next section. You *need* to set at least `containerType` and `containerImage`.
### 4.1. Custom variables for container support
The container support needs some custom properties to be properly configured:
1. `containerImage` (*needed* to enable the container system).
Just select the target image you want to run. You can use the standard syntax of the
container runtimes.
2. `containerType` (*needed* to enable the container system).
Selects the container runtime you want to use. All the available options are always showed.
Please note that unavailable container options are not yet grayed out.
If you *do not* have rkt support on your host, you still can select it, but it won't work.
3. `volumeMap`, a key:value map. You can map one "VM" disk (key) to one container volume (value),
to have persistent storage. Only file-based storage is supported.
Example configuration:
`containerImage = redis`
`containerType = docker`
`volumeMap = vda:data` (this may not be needed, and the volume label is just for illustrative purposes)
### 4.2. A little bit of extra work: preload the images on the Vdsm host
This step is not needed by the flow, and will be handled by oVirt in the future.
The issue is how the container image are handled. They are stored by the container
management system (rkt, docker) on each host, and they are not pre-downloaded.
To shorten the duration of the first boot, you are advised to pre-download
the image(s) you want to run. For example
## on the Vdsm host you want to use with containers
# docker pull redis
## 5. Run the container "VM"
You are now all set to run your "VM" using oVirt Engine, just like any existing VM.
Some actions don't make sense for a container "VM", like live migration.
Engine won't stop you from trying those actions, but they will fail gracefully
using the standard errors.
## 6. Next steps
What to expect from this project in the future?
For the integration with Vdsm, we want to fix the existing known issues, most notably:
* add proper monitoring/reporting of the container health
* ensure proper integration of the container image store with oVirt storage management
* streamline the network configuration
What is explicitly excluded for now is any Engine change. This is a Vdsm-only change at the
moment, so fixing the following is currently unplanned:
* First and foremost, Engine will not distinguish between real VMs and container VMs.
Actions unavailable to container will not be hidden from UI. Same for monitoring
and configuration data, which will be ignored.
* Engine is NOT aware of the volumes one container can use. You must inspect and do the
mapping manually.
* Engine is NOT aware of the available container runtimes. You must select it carefully
Proper integration with Engine may be added in the future once this feature exits
from the experimental/provisional stage.
Thanks for reading, make sure to share your thoughts on the oVirt mailing lists!
+++
[1] we keep calling it that way _only_ internally, because it's a short
name we are used to. After the merge/once we release it, we will use
a different name, like "vdsm-containers" or something like it.
--
Francesco Romani
Red Hat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani
7 years, 3 months
Empty cgroup files on centos 7.3 host
by Florian Schmid
Hi,
I wanted to monitor disk IO and R/W on all of our oVirt CentOS 7.3 hypervisor hosts, but it looks like all those files are empty.
For example:
ls -al /sys/fs/cgroup/blkio/machine.slice/machine-qemu\\x2d14\\x2dHostedEngine.scope/
insgesamt 0
drwxr-xr-x. 2 root root 0 30. Mai 10:09 .
drwxr-xr-x. 16 root root 0 26. Jun 09:25 ..
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_merged
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_merged_recursive
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_queued
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_queued_recursive
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_service_bytes
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_service_bytes_recursive
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_serviced
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_serviced_recursive
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_service_time
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_service_time_recursive
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_wait_time
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_wait_time_recursive
-rw-r--r--. 1 root root 0 30. Mai 10:09 blkio.leaf_weight
-rw-r--r--. 1 root root 0 30. Mai 10:09 blkio.leaf_weight_device
--w-------. 1 root root 0 30. Mai 10:09 blkio.reset_stats
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.sectors
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.sectors_recursive
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.throttle.io_service_bytes
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.throttle.io_serviced
-rw-r--r--. 1 root root 0 30. Mai 10:09 blkio.throttle.read_bps_device
-rw-r--r--. 1 root root 0 30. Mai 10:09 blkio.throttle.read_iops_device
-rw-r--r--. 1 root root 0 30. Mai 10:09 blkio.throttle.write_bps_device
-rw-r--r--. 1 root root 0 30. Mai 10:09 blkio.throttle.write_iops_device
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.time
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.time_recursive
-rw-r--r--. 1 root root 0 30. Mai 10:09 blkio.weight
-rw-r--r--. 1 root root 0 30. Mai 10:09 blkio.weight_device
-rw-r--r--. 1 root root 0 30. Mai 10:09 cgroup.clone_children
--w--w--w-. 1 root root 0 30. Mai 10:09 cgroup.event_control
-rw-r--r--. 1 root root 0 30. Mai 10:09 cgroup.procs
-rw-r--r--. 1 root root 0 30. Mai 10:09 notify_on_release
-rw-r--r--. 1 root root 0 30. Mai 10:09 tasks
I thought I could get the values I need from there, but all the files are empty.
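For reference, this is the sort of collection loop I was trying (a minimal sketch; the glob pattern just follows the machine.slice layout shown above):

import glob
import os

# Sketch: dump the per-VM blkio throttle counters from cgroupfs.
pattern = '/sys/fs/cgroup/blkio/machine.slice/machine-qemu*.scope/blkio.throttle.io_service_bytes'
for path in glob.glob(pattern):
    scope = os.path.basename(os.path.dirname(path))
    with open(path) as f:
        data = f.read().strip()
    print(scope)
    print(data if data else '  (file is empty)')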
Looking at this post: http://lists.ovirt.org/pipermail/users/2017-January/079011.html
this should work.
Is this normal on CentOS 7.3 with oVirt installed? How can I get those values without monitoring all VMs directly?
oVirt Version we use:
4.1.1.8-1.el7.centos
BR Florian
7 years, 4 months
Re: [ovirt-users] iSCSI domain on 4kn drives
by Martijn Grendelman
On 7-8-2016 at 8:19, Yaniv Kaul wrote:
>
> On Fri, Aug 5, 2016 at 4:42 PM, Martijn Grendelman
> <martijn.grendelman(a)isaac.nl <mailto:martijn.grendelman@isaac.nl>> wrote:
>
>     On 4-8-2016 at 18:36, Yaniv Kaul wrote:
>> On Thu, Aug 4, 2016 at 11:49 AM, Martijn Grendelman
>> <martijn.grendelman(a)isaac.nl
>> <mailto:martijn.grendelman@isaac.nl>> wrote:
>>
>> Hi,
>>
>> Does oVirt support iSCSI storage domains on target LUNs using
>> a block
>> size of 4k?
>>
>>
>> No, we do not - not if it exposes 4K blocks.
>> Y.
>
> Is this on the roadmap?
>
>
> Not in the short term roadmap.
> Of course, patches are welcome. It's mainly in VDSM.
> I wonder if it'll work in NFS.
> Y.
I don't think I ever replied to this, but I can confirm that in RHEV 3.6
it works with NFS.
Best regards,
Martijn.
7 years, 5 months
Re: [ovirt-users] How to import a qcow2 disk into ovirt
by Martín Follonier
Hi,
I've followed all the recommendations in this thread, and I'm still getting the
"Paused by System" message just after the transfer starts.
Honestly I don't know where else to look, because I can't find any log
entry or packet capture that gives me a hint about what is happening.
I'll appreciate any help! Thank you in advance!
Regards
Martin
On Thu, Sep 1, 2016 at 5:01 PM, Amit Aviram <aavi...(a)redhat.com> wrote:
> You can do both,
> Through the database, the table is "vdc_options". change "option_value"
> where "option_name" = 'ImageProxyAddress' .
>
> On Thu, Sep 1, 2016 at 4:56 PM, Gianluca Cecchi <gianluca.cec...(a)gmail.com
> > wrote:
>
>> On Thu, Sep 1, 2016 at 3:53 PM, Amit Aviram <aavi...(a)redhat.com> wrote:
>>
>>> You can just replace this value in the DB and change it to the right
>>> FQDN, it is a config value named "ImageProxyAddress". replace "localhost"
>>> with the right address (notice that the port is there too).
>>>
>>> If this will keep happen after users will have the latest version, we
>>> will have to open a bug and fix whatever causes the URL to be "localhost".
>>>
>>>
>> Do you mean through "engine-config" or directly into database?
>> In this second case which is the table involved?
>>
>> Gianluca
>>
>
>
[root@ractorshe bin]# systemctl stop ovirt-imageio-proxy
engine=# select * from vdc_options where option_name='ImageProxyAddress';
option_id | option_name | option_value | version
-----------+-------------------+-----------------+---------
950 | ImageProxyAddress | localhost:54323 | general
(1 row)
engine=# update vdc_options set option_value='ractorshe.mydomain:54323'
where option_name='ImageProxyAddress';
UPDATE 1
engine=# select * from vdc_options where option_name='ImageProxyAddress';
option_id | option_name | option_value |
version
-----------+-------------------+--------------------------------------+---------
950 | ImageProxyAddress | ractorshe.mydomain:54323 | general
(1 row)
engine=#
engine=# select * from vdc_options where option_name='ImageProxyAddress';
option_id | option_name | option_value |
version
-----------+-------------------+--------------------------------------+---------
950 | ImageProxyAddress | ractorshe.mydomain:54323 | general
(1 row)
systemctl stop ovirt-engine
(otherwise it remained localhost)
systemctl start ovirt-engine
systemctl start ovirt-imageio-proxy
Now transfer is ok.
I tried a qcow2 disk configured as 40 GB but containing about 1.6 GB of data.
I'm going to connect it to a VM and see if all is ok also from a contents
point of view.
Gianluca
7 years, 7 months
ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
by yayo (j)
Hi all,
We have an oVirt cluster, hyperconverged with hosted engine, on 3 fully replicated nodes. This cluster has 2 gluster volumes:
- data: volume for the Data (Master) Domain (for VMs)
- engine: volume for the hosted_storage Domain (for the hosted engine)
We have this problem: the "engine" gluster volume always has unsynced elements and we can't fix the problem; on the command line we have tried to use the "heal" command but the elements always remain unsynced.
Below is the heal command "status":
[root@node01 ~]# gluster volume heal engine info
Brick node01:/gluster/engine/brick
/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.48
/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.64
/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.60
/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.2
/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.68
/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/19d71267-52a4-42a3-bb1e-e3145361c0c2/7a215635-02f3-47db-80db-8b689c6a8f01
/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/88d41053-a257-4272-9e2e-2f3de0743b81/6573ed08-d3ed-4d12-9227-2c95941e1ad6
/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.61
/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.1
/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/dom_md/ids
/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.20
/__DIRECT_IO_TEST__
Status: Connected
Number of entries: 12
Brick node02:/gluster/engine/brick
/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/19d71267-52a4-42a3-bb1e-e3145361c0c2/7a215635-02f3-47db-80db-8b689c6a8f01
<gfid:9a601373-bbaa-44d8-b396-f0b9b12c026f>
/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/dom_md/ids
<gfid:1e309376-c62e-424f-9857-f9a0c3a729bf>
<gfid:e3565b50-1495-4e5b-ae88-3bceca47b7d9>
<gfid:4e33ac33-dddb-4e29-b4a3-51770b81166a>
/__DIRECT_IO_TEST__
<gfid:67606789-1f34-4c15-86b8-c0d05b07f187>
<gfid:9ef88647-cfe6-4a35-a38c-a5173c9e8fc0>
/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/88d41053-a257-4272-9e2e-2f3de0743b81/6573ed08-d3ed-4d12-9227-2c95941e1ad6
<gfid:9ad720b2-507d-4830-8294-ec8adee6d384>
<gfid:d9853e5d-a2bf-4cee-8b39-7781a98033cf>
Status: Connected
Number of entries: 12
Brick node04:/gluster/engine/brick
Status: Connected
Number of entries: 0
running the "gluster volume heal engine" don't solve the problem...
Some extra info:
We have recently changed the gluster setup from 2 (fully replicated) + 1 arbiter
to a 3-node fully replicated cluster, but I don't know if this is the problem...
The "data" volume is good and healthy and has no unsynced entries.
oVirt refuses to put node02 and node01 into "maintenance mode" and
complains about "unsynced elements".
How can I fix this?
Thank you
7 years, 7 months
oVIRT Node / Network Manager / oVirt 4.1
by Devin Acosta
I noticed that for some reason, when I am running oVirt Node on my hosts with
NetworkManager disabled, it keeps turning itself back on, and
my understanding is that NetworkManager should be disabled. I had to force-remove
NetworkManager in order to get it to stay disabled. I have been
seeing strangeness with my VMs where they disconnect, and where
hosted-engine keeps trying non-stop to migrate to another host. Just wanted
to first confirm the expected NetworkManager state for the oVirt Node image.
--
Devin Acosta
Red Hat Certified Architect
7 years, 8 months
Problemas with ovirtmgmt network used to connect VMs
by FERNANDO FREDIANI
Has anyone had problems when using the ovirtmgmt bridge to connect VMs?
I am still facing a bizarre problem where some VMs connected to this
bridge stop passing traffic. Checking the problem further, I see the VM's mac
address stops being learned by the bridge, and the problem is resolved
only with a VM reboot.
When I last saw the problem I ran brctl showmacs ovirtmgmt and it showed
me the VM's mac address with ageing timer 200.19. After the VM reboot I
see the same mac with ageing timer 0.00.
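To try to catch it in the act, the kind of polling I have in mind looks like this (a rough sketch; the MAC address and interval are placeholders):

import subprocess
import time

# Sketch: poll `brctl showmacs ovirtmgmt` and log when a given MAC disappears.
MAC = '00:1a:4a:16:01:51'
BRIDGE = 'ovirtmgmt'

while True:
    output = subprocess.check_output(['brctl', 'showmacs', BRIDGE])
    if MAC not in output.decode():
        print('%s: %s no longer learned on %s' % (time.ctime(), MAC, BRIDGE))
    time.sleep(30)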
I don't see it in another environment where the ovirtmgmt is not used
for VMs.
Does anyone have any clue about this type of behavior ?
Fernando
7 years, 9 months
supervdsmd IOError to /dev/stdout
by Richard Chan
After an upgrade to 4.0 I have a single host that cannot start supervdsmd
because of IOError on /dev/stdout. All other hosts upgraded correctly.
In the systemd unit I have to hack StandardOutput=null.
Anything I have overlooked? The hosts are all identical and it is just
this one
that has this weird behaviour.
--
Richard Chan
7 years, 9 months
sanlock ids file broken after server crash
by Johan Bernhardsson
Hello,
The ids file for sanlock is broken on one setup. The first host id in
the file is wrong.
From the logfile I have:
verify_leader 1 wrong space name 0924ff77-ef51-435b-b90d-50bfbf2e�ke7
0924ff77-ef51-435b-b90d-50bfbf2e8de7 /rhev/data-center/mnt/glusterSD/
Note the broken char in the space name.
This also appears, and it seems the host id too is broken in the ids
file:
leader4 sn 0924ff77-ef51-435b-b90d-50bfbf2e�ke7 rn ��7afa5-3a91-415b-
a04c-221d3e060163.vbgkvm01.a ts 4351980 cs eefa4dd7
Note the broken chars there as well.
If I check the ids file with less or strings, the first row, where my
vbgkvm01 host is, has broken chars.
Can this be repaired in some way without taking down all the virtual
machines on that storage?
/Johan
7 years, 9 months
VDSM Command failed: Heartbeat Exceeded
by Neil
Hi guys,
Please could someone assist me, my DC seems to be trying to re-negotiate
SPM and apparently it's failing. I tried to delete an old autogenerated
snapshot and shortly after that the issue seemed to start, however after
about an hour, the snapshot said successfully deleted, and then SPM
negotiated again albeit for a short period before it started trying to
re-negotiate again.
Last week I upgraded from ovirt 3.5 to 3.6, I also upgraded one of my 4
hosts using the 3.6 repo to the latest available from that repo and did a
yum update too.
I have 4 nodes and my ovirt engine is a KVM guest on another physical
machine on the network. I'm using an FC SAN with ATTO HBA's and recently
we've started seeing some degraded IO. The SAN appears to be alright and
the disks all seem to check out, but we are having rather slow IOPS at the
moment, which we are trying to track down.
ovirt engine CentOS release 6.9 (Final)
ebay-cors-filter-1.0.1-0.1.ovirt.el6.noarch
ovirt-engine-3.6.7.5-1.el6.noarch
ovirt-engine-backend-3.6.7.5-1.el6.noarch
ovirt-engine-cli-3.6.2.0-1.el6.noarch
ovirt-engine-dbscripts-3.6.7.5-1.el6.noarch
ovirt-engine-extension-aaa-jdbc-1.0.7-1.el6.noarch
ovirt-engine-extensions-api-impl-3.6.7.5-1.el6.noarch
ovirt-engine-jboss-as-7.1.1-1.el6.x86_64
ovirt-engine-lib-3.6.7.5-1.el6.noarch
ovirt-engine-restapi-3.6.7.5-1.el6.noarch
ovirt-engine-sdk-python-3.6.7.0-1.el6.noarch
ovirt-engine-setup-3.6.7.5-1.el6.noarch
ovirt-engine-setup-base-3.6.7.5-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-3.6.7.5-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-3.6.7.5-1.el6.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-3.6.7.5-1.el6.noarch
ovirt-engine-setup-plugin-websocket-proxy-3.6.7.5-1.el6.noarch
ovirt-engine-tools-3.6.7.5-1.el6.noarch
ovirt-engine-tools-backup-3.6.7.5-1.el6.noarch
ovirt-engine-userportal-3.6.7.5-1.el6.noarch
ovirt-engine-vmconsole-proxy-helper-3.6.7.5-1.el6.noarch
ovirt-engine-webadmin-portal-3.6.7.5-1.el6.noarch
ovirt-engine-websocket-proxy-3.6.7.5-1.el6.noarch
ovirt-engine-wildfly-8.2.1-1.el6.x86_64
ovirt-engine-wildfly-overlay-8.0.5-1.el6.noarch
ovirt-host-deploy-1.4.1-1.el6.noarch
ovirt-host-deploy-java-1.4.1-1.el6.noarch
ovirt-image-uploader-3.6.0-1.el6.noarch
ovirt-iso-uploader-3.6.0-1.el6.noarch
ovirt-release34-1.0.3-1.noarch
ovirt-release35-006-1.noarch
ovirt-release36-3.6.7-1.noarch
ovirt-setup-lib-1.0.1-1.el6.noarch
ovirt-vmconsole-1.0.2-1.el6.noarch
ovirt-vmconsole-proxy-1.0.2-1.el6.noarch
node01 (CentOS 6.9)
vdsm-4.16.30-0.el6.x86_64
vdsm-cli-4.16.30-0.el6.noarch
vdsm-jsonrpc-4.16.30-0.el6.noarch
vdsm-python-4.16.30-0.el6.noarch
vdsm-python-zombiereaper-4.16.30-0.el6.noarch
vdsm-xmlrpc-4.16.30-0.el6.noarch
vdsm-yajsonrpc-4.16.30-0.el6.noarch
gpxe-roms-qemu-0.9.7-6.16.el6.noarch
qemu-img-rhev-0.12.1.2-2.479.el6_7.2.x86_64
qemu-kvm-rhev-0.12.1.2-2.479.el6_7.2.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.479.el6_7.2.x86_64
libvirt-0.10.2-62.el6.x86_64
libvirt-client-0.10.2-62.el6.x86_64
libvirt-lock-sanlock-0.10.2-62.el6.x86_64
libvirt-python-0.10.2-62.el6.x86_64
node01 was upgraded out of desperation after I tried changing my DC and
cluster version to 3.6, but then found that none of my hosts could be
activated out of maintenance due to an incompatibility with 3.6 (I'm still
not sure why, as searching seemed to indicate CentOS 6.x was compatible). I
then had to remove all 4 hosts, and change the cluster version back to 3.5
and then re-add them. When I tried changing the cluster version to 3.6 I
did get a complaint about using the "legacy protocol" so on each host under
Advanced, I changed them to use the JSON protocol, and this seemed to
resolve it, however once changing the DC/Cluster back to 3.5 the option to
change the protocol back to Legacy is no longer shown.
node02 (Centos 6.7)
vdsm-4.16.30-0.el6.x86_64
vdsm-cli-4.16.30-0.el6.noarch
vdsm-jsonrpc-4.16.30-0.el6.noarch
vdsm-python-4.16.30-0.el6.noarch
vdsm-python-zombiereaper-4.16.30-0.el6.noarch
vdsm-xmlrpc-4.16.30-0.el6.noarch
vdsm-yajsonrpc-4.16.30-0.el6.noarch
gpxe-roms-qemu-0.9.7-6.14.el6.noarch
qemu-img-rhev-0.12.1.2-2.479.el6_7.2.x86_64
qemu-kvm-rhev-0.12.1.2-2.479.el6_7.2.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.479.el6_7.2.x86_64
libvirt-0.10.2-54.el6_7.6.x86_64
libvirt-client-0.10.2-54.el6_7.6.x86_64
libvirt-lock-sanlock-0.10.2-54.el6_7.6.x86_64
libvirt-python-0.10.2-54.el6_7.6.x86_64
node03 CentOS 6.7
vdsm-4.16.30-0.el6.x86_64
vdsm-cli-4.16.30-0.el6.noarch
vdsm-jsonrpc-4.16.30-0.el6.noarch
vdsm-python-4.16.30-0.el6.noarch
vdsm-python-zombiereaper-4.16.30-0.el6.noarch
vdsm-xmlrpc-4.16.30-0.el6.noarch
vdsm-yajsonrpc-4.16.30-0.el6.noarch
gpxe-roms-qemu-0.9.7-6.14.el6.noarch
qemu-img-rhev-0.12.1.2-2.479.el6_7.2.x86_64
qemu-kvm-rhev-0.12.1.2-2.479.el6_7.2.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.479.el6_7.2.x86_64
libvirt-0.10.2-54.el6_7.6.x86_64
libvirt-client-0.10.2-54.el6_7.6.x86_64
libvirt-lock-sanlock-0.10.2-54.el6_7.6.x86_64
libvirt-python-0.10.2-54.el6_7.6.x86_64
node04 (Centos 6.7)
vdsm-4.16.20-1.git3a90f62.el6.x86_64
vdsm-cli-4.16.20-1.git3a90f62.el6.noarch
vdsm-jsonrpc-4.16.20-1.git3a90f62.el6.noarch
vdsm-python-4.16.20-1.git3a90f62.el6.noarch
vdsm-python-zombiereaper-4.16.20-1.git3a90f62.el6.noarch
vdsm-xmlrpc-4.16.20-1.git3a90f62.el6.noarch
vdsm-yajsonrpc-4.16.20-1.git3a90f62.el6.noarch
gpxe-roms-qemu-0.9.7-6.15.el6.noarch
qemu-img-0.12.1.2-2.491.el6_8.1.x86_64
qemu-kvm-0.12.1.2-2.491.el6_8.1.x86_64
qemu-kvm-tools-0.12.1.2-2.503.el6_9.3.x86_64
libvirt-0.10.2-60.el6.x86_64
libvirt-client-0.10.2-60.el6.x86_64
libvirt-lock-sanlock-0.10.2-60.el6.x86_64
libvirt-python-0.10.2-60.el6.x86_64
I'm seeing a rather confusing error in the /var/log/messages on all 4 hosts
as follows....
Jul 31 16:41:36 node01 multipathd: 36001b4d80001c80d0000000000000000: sdb -
directio checker reports path is down
Jul 31 16:41:41 node01 kernel: sd 7:0:0:0: [sdb] Result:
hostbyte=DID_ERROR driverbyte=DRIVER_OK
Jul 31 16:41:41 node01 kernel: sd 7:0:0:0: [sdb] CDB: Read(10): 28 00 00 00
00 00 00 00 01 00
Jul 31 16:41:41 node01 kernel: end_request: I/O error, dev sdb, sector 0
I say confusing, because I don't have a 3000GB LUN
[root@node01 ~]# fdisk -l | grep 3000
Disk /dev/sdb: 3000.0 GB, 2999999528960 bytes
I did have one on Friday last week, but I trashed it and changed it to a
1500GB LUN instead, so I'm not sure if perhaps this error means something is still trying
to connect to the old LUN?
My LUNS are as follows...
Disk /dev/sdb: 3000.0 GB, 2999999528960 bytes (this one doesn't actually
exist anymore)
Disk /dev/sdc: 1000.0 GB, 999999668224 bytes
Disk /dev/sdd: 1000.0 GB, 999999668224 bytes
Disk /dev/sde: 1000.0 GB, 999999668224 bytes
Disk /dev/sdf: 1000.0 GB, 999999668224 bytes
Disk /dev/sdg: 1000.0 GB, 999999668224 bytes
Disk /dev/sdh: 1000.0 GB, 999999668224 bytes
Disk /dev/sdi: 1000.0 GB, 999999668224 bytes
Disk /dev/sdj: 1000.0 GB, 999999668224 bytes
Disk /dev/sdk: 1000.0 GB, 999999668224 bytes
Disk /dev/sdm: 1000.0 GB, 999999668224 bytes
Disk /dev/sdl: 1000.0 GB, 999999668224 bytes
Disk /dev/sdn: 1000.0 GB, 999999668224 bytes
Disk /dev/sdo: 1000.0 GB, 999999668224 bytes
Disk /dev/sdp: 1000.0 GB, 999999668224 bytes
Disk /dev/sdq: 1000.0 GB, 999999668224 bytes
Disk /dev/sdr: 1000.0 GB, 999988133888 bytes
Disk /dev/sds: 1500.0 GB, 1499999764480 bytes
Disk /dev/sdt: 1500.0 GB, 1499999502336 bytes
I'm quite low on SAN disk space currently so I'm a little hesitant to
migrate VM's around for fear of the migrations creating too many snapshots
and filling up my SAN. We are in the process of expanding the SAN Array
too, but we are trying to get to the bottom of the bad IOPS at the moment
before adding on additional overhead.
Ping tests between hosts and engine all look alright, so I don't suspect
network issues.
I know this is very vague, everything is currently operational, however as
you can see in the attached logs, I'm getting lots of ERROR messages.
Any help or guidance is greatly appreciated.
Thanks.
Regards.
Neil Wilson.
7 years, 9 months
oVirt and Foreman
by Davide Ferrari
Hello list
is anybody successfully using oVirt + Foreman for VM creation +
provisioning?
I'm using Foremn (latest version, 1.15.2) with latest oVirt version
(4.1.3) but I'm encountering several problem, especially related to
disks. For example:
- I cannot create a VM with multiple disks through the Foreman CLI (hammer)
- if I create a multi-disk VM from Foreman, the second disk always gets
the "bootable" flag instead of the primary image, making the VMs not
bootable at all.
Any other Foreman users sharing the pain here? Foreman's list is not so
useful, so I'm trying to ask here. How do you programmatically create
virtual machines with oVirt and Foreman? Should I switch to using the
oVirt API directly?
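In case a starting point helps: this is roughly how a VM plus an explicitly bootable
disk can be created against the oVirt 4.1 REST API with plain curl. Names (Default,
Blank, mydata, testvm), the size and ENGINE_FQDN are placeholders, and -k is only
there for the self-signed certificate:

# 1) create the VM (the response contains the new VM id)
curl -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' -X POST \
  -d '<vm><name>testvm</name><cluster><name>Default</name></cluster><template><name>Blank</name></template></vm>' \
  'https://ENGINE_FQDN/ovirt-engine/api/vms'

# 2) attach the first disk and mark it bootable; repeat without <bootable> for the extra disks
curl -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' -X POST \
  -d '<disk_attachment><bootable>true</bootable><interface>virtio</interface><active>true</active><disk><name>testvm_disk1</name><format>cow</format><provisioned_size>10737418240</provisioned_size><storage_domains><storage_domain><name>mydata</name></storage_domain></storage_domains></disk></disk_attachment>' \
  'https://ENGINE_FQDN/ovirt-engine/api/vms/VM_ID/diskattachments'

Doing the disk attachments yourself is also the only way I know to control exactly
which disk ends up bootable.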
Thanks in advance
Davide
7 years, 9 months
Deploying training lab
by Andy Michielsen
Hello all,
I don't know if this is the right place to ask, but I would like to set
up a training lab with oVirt.
I have deployed an engine and a host with local storage and want to run 1
server and 5 desktops off it.
The desktops will be used from thin clients or old laptops with a minimal
OS installation running a SPICE client or a web browser.
I was wondering if anyone can give me a pointer on how to set up a minimal
laptop that only needs to run a SPICE client.
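For what it's worth, the client side can stay very small. On a minimal CentOS/Fedora
laptop something like this is usually enough (package names may differ slightly
between distributions):

yum install -y virt-viewer        # pulls in remote-viewer and the spice-gtk bits
remote-viewer console.vv          # open the console.vv file downloaded from the User Portal
# or, if you script the connection details yourself:
remote-viewer 'spice://HOST?port=5900&tls-port=5901'

The ports and the ticket/password come from the engine, so the console.vv route is
the simpler one for non-technical users.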
Kind regards.
7 years, 9 months
fence issue adding host
by Bill James
I'm adding 3 hardware nodes to our cluster. All 3 are the same type of server
and software: HP DL360 G8, CentOS 7.
One fails the fence agent test.
The one I'm having problems with has a newer iLO firmware version, not sure
if that's related.
Troublemaker: 2.54
Others: 2.53
ovirt-engine-4.1.0.4-1.el7.centos.noarch
vdsm-4.19.4-1.el7.centos.x86_64
2017-07-26 15:14:30,215-07 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-52) [4d72d360-2b92-43f6-b2df-e80ee305c622] EVENT_ID:
FENCE_OPERATION_USING_AGENT_AND_PROXY_STARTED(9,020), Correlation ID:
null, Call Stack: null, Custom Event ID: -1, Message: Executing power
management status on Host ovirt6.j2noc.com using Proxy Host
ovirt1.j2noc.com and Fence Agent ilo4:10.144.254.89.
2017-07-26 15:14:30,216-07 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (default
task-52) [4d72d360-2b92-43f6-b2df-e80ee305c622] START,
FenceVdsVDSCommand(HostName = ovirt1.j2noc.com,
FenceVdsVDSCommandParameters:{runAsync='true',
hostId='23d2c0ab-5dd1-43af-9db3-2a426a539faf',
targetVdsId='00000000-0000-0000-0000-000000000000', action='STATUS',
agent='FenceAgent:{id='null', hostId='null', order='1', type='ilo4',
ip='10.144.254.89', port='null', user='Administrator', password='***',
encryptOptions='false', options='power_wait=4'}', policy='null'}), log
id: 1498b3c4
*2017-07-26 15:14:30,414-07 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (default
task-52) [4d72d360-2b92-43f6-b2df-e80ee305c622] FINISH,
FenceVdsVDSCommand, return: FenceOperationResult:{status='ERROR',
powerStatus='UNKNOWN', message='[Failed: Unable to obtain correct plug
status or plug is not available, , ]'}, log id: 1498b3c4*
2017-07-26 15:14:30,420-07 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-52) [4d72d360-2b92-43f6-b2df-e80ee305c622] EVENT_ID:
FENCE_OPERATION_USING_AGENT_AND_PROXY_FAILED(9,021), Correlation ID:
null, Call Stack: null, Custom Event ID: -1, Message: Execution of power
management status on Host ovirt6.j2noc.com using Proxy Host
ovirt1.j2noc.com and Fence Agent ilo4:10.144.254.89 failed.
2017-07-26 15:14:30,420-07 WARN
[org.ovirt.engine.core.bll.pm.FenceAgentExecutor] (default task-52)
[4d72d360-2b92-43f6-b2df-e80ee305c622] Fence action failed using proxy
host '10.144.110.99', trying another proxy
2017-07-26 15:14:30,740-07 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-52) [4d72d360-2b92-43f6-b2df-e80ee305c622] EVENT_ID:
FENCE_OPERATION_USING_AGENT_AND_PROXY_STARTED(9,020), Correlation ID:
null, Call Stack: null, Custom Event ID: -1, Message: Executing power
management status on Host ovirt6.j2noc.com using Proxy Host
ovirt2.j2noc.com and Fence Agent ilo4:10.144.254.89.
2017-07-26 15:14:30,741-07 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (default
task-52) [4d72d360-2b92-43f6-b2df-e80ee305c622] START,
FenceVdsVDSCommand(HostName = ovirt2.j2noc.com,
FenceVdsVDSCommandParameters:{runAsync='true',
hostId='91d8fa70-fd24-4530-90f7-982ff068230b',
targetVdsId='00000000-0000-0000-0000-000000000000', action='STATUS',
agent='FenceAgent:{id='null', hostId='null', order='1', type='ilo4',
ip='10.144.254.89', port='null', user='Administrator', password='***',
encryptOptions='false', options='power_wait=4'}', policy='null'}), log
id: 67d837da
2017-07-26 15:14:30,898-07 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (default
task-52) [4d72d360-2b92-43f6-b2df-e80ee305c622] FINISH,
FenceVdsVDSCommand, return: FenceOperationResult:{status='ERROR',
powerStatus='UNKNOWN', message='[Failed: Unable to obtain correct plug
status or plug is not available, , ]'}, log id: 67d837da
2017-07-26 15:14:30,903-07 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-52) [4d72d360-2b92-43f6-b2df-e80ee305c622] EVENT_ID:
FENCE_OPERATION_USING_AGENT_AND_PROXY_FAILED(9,021), Correlation ID:
null, Call Stack: null, Custom Event ID: -1, Message: Execution of power
management status on Host ovirt6.j2noc.com using Proxy Host
ovirt2.j2noc.com and Fence Agent ilo4:10.144.254.89 failed.
I'm not sure of the right syntax for fence_ipmilan, since even a "good" host
fails:
[root@ovirt4 prod vdsm]# fence_ipmilan -a 10.144.254.87 -P -l
Administrator -p *** -o status -v chassis power status
Executing: /usr/bin/ipmitool -I lanplus -H 10.144.254.87 -U
Administrator -P [set] -p 623 -L ADMINISTRATOR chassis power status
1 Error: Unable to establish IPMI v2 / RMCP+ session
Failed: Unable to obtain correct plug status or plug is not available
Any ideas on what the issue is?
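For comparison, this is roughly the long-option form of what the engine runs for an
ilo4 agent (ilo4 is essentially fence_ipmilan with lanplus switched on); the IP and
password are placeholders from your setup:

fence_ipmilan --ip=10.144.254.89 --username=Administrator --password='SECRET' \
              --lanplus --power-wait=4 --action=status
# the raw ipmitool equivalent it wraps:
ipmitool -I lanplus -H 10.144.254.89 -U Administrator -P 'SECRET' chassis power status

If both of those also fail with "Unable to establish IPMI v2 / RMCP+ session", I'd
check in the iLO web UI (Administration -> Access Settings, if I remember the menu
right) whether "IPMI/DCMI over LAN" is still enabled on the 2.54 firmware - having
that disabled produces exactly this error.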
7 years, 9 months
problem while moving/copying disks: vdsm low level image copy failed
by Johan Bernhardsson
Hello,
We get this error message while moving or copying some of the disks on
our main cluster running 4.1.2 on centos7
This is shown in the engine:
VDSM vbgkvm02 command HSMGetAllTasksStatusesVDS failed: low level Image
copy failed
I can copy it inside the host. And i can use dd to copy. Haven't tried
to run qemu-img manually yet.
This is from vdsm.log on the host:
2017-07-28 09:07:22,741+0200 ERROR (tasks/6) [root] Job u'c82d4c53-
3eb4-405e-a2d5-c4c77519360e' failed (jobs:217)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/jobs.py", line 154, in
run
self._run()
File "/usr/share/vdsm/storage/sdm/api/copy_data.py", line 88, in _run
self._operation.wait_for_completion()
File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line 329, in
wait_for_completion
self.poll(timeout)
File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line 324, in
poll
self.error)
QImgError: cmd=['/usr/bin/taskset', '--cpu-list', '0-15',
'/usr/bin/nice', '-n', '19', '/usr/bin/ionice', '-c', '3',
'/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f',
'raw', u'/rhev/data-center/mnt/glusterSD/vbgsan02:_fs02/0924ff77-ef51-
43
5b-b90d-50bfbf2e8de7/images/750f4184-b852-4b00-94fc-
476f3f5b93c7/3fe43487-3302-4b34-865a-07c5c6aedbf2', '-O', 'raw',
u'/rhev/data-center/mnt/glusterSD/10.137.30.105:_fs03/5d47a297-a21f-
4587-bb7c-dd00d52010d5/images/750f4184-b852-4b00-94fc-
476f3f5b93c7/3fe43487-3302-4b34-865
a-07c5c6aedbf2'], ecode=1, stdout=, stderr=qemu-img: error while
reading sector 12197886: No data available
, message=None
The storage domains are all based on gluster. The storage domains we see
this on are configured as dispersed volumes.
Found a way to "fix" the problem. And that is to run dd if=/dev/vda
of=/dev/null bs=1M inside the virtual guest. After that we can copy an
image or use storage livemigration.
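Before the full dd pass, it might also be worth probing the exact sector qemu-img
complains about, straight on the gluster mount (use the full source image path from
the error above; 12197886 is the sector from the message, in 512-byte units):

dd if=<source image path from the error above> of=/dev/null bs=512 skip=12197886 count=1 iflag=direct

If that read also returns "No data available" (ENODATA), it points at gluster / the
dispersed volume rather than at qemu-img or vdsm, and the gluster client log for that
mount would be the next thing I'd look at.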
Is this a gluster problem or a vdsm problem? Or could it be something
with qemu-img?
/Johan
7 years, 9 months
Yum error while installing host on Centos7
by Iurcev, Massimiliano
I installed ovirt-engine on CentOS 7. My goal is to have a single host
machine.
During the installation of the (first and unique) host, I get an error:
Failed to install Host xxxx. Yum Non-fatal POSTIN scriptlet failure in rpm
package gtk2-2.24.28-8.el7.x86_64.
This error is followed by other errors:
Failed to install Host xxxx. Failed to execute stage 'Package
installation': One or more elements within Yum transaction failed.
and finally:
Host xxxx installation failed. Command returned failure code 1 during SSH
session 'root(a)xxxx.mydomain.it'.
All other libraries are yum-installed without problems.
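The complete yum transcript usually ends up in the host-deploy log, which tends to be
more telling than the event message. Assuming the gtk2 %post scriptlet is the only
failure, this is what I'd look at (log file names include a timestamp and the host name):

# on the engine:
less /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-*.log
# on the host, re-run the failing scriptlet by hand and see what it prints:
yum reinstall -y gtk2
rpm -q --scripts gtk2     # shows what the %post actually does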
7 years, 9 months
ovirt and mixed selinux
by Bill James
I was hoping to migrate my systems to SELinux gradually.
I added 3 new nodes with SELinux in permissive mode.
Migration fails to any of the previous hosts that currently have SELinux
disabled.
Is it an all-or-nothing deal? Obviously it's not easy to reboot all nodes
at once.
2017-07-28 09:35:43,616 ERROR (migsrc/8c566813) [virt.vm]
(vmId='8c566813-4bee-4f04-be23-c9fc10e1e1f2') unsupported configuration:
Unable to find security driver for model selinux (migration:265)
2017-07-28 09:35:43,641 ERROR (migsrc/8c566813) [virt.vm]
(vmId='8c566813-4bee-4f04-be23-c9fc10e1e1f2') Failed to migrate
(migration:405)
Traceback (most recent call last):
ovirt-engine-4.1.0.4-1.el7.centos.noarch
libselinux-utils-2.5-6.el7.x86_64
related: http://lists.ovirt.org/pipermail/users/2016-October/076878.html
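As far as I understand it, the error is raised by libvirt because a guest started with
the selinux security driver can only be received by a host whose libvirt also loaded
that driver, and a host booted with SELinux disabled has no such driver at all. A
quick, read-only way to see which camp each node is in:

getenforce                                      # Disabled vs Permissive/Enforcing
virsh -r capabilities | grep -A2 '<secmodel>'   # which security driver libvirt loaded

So it looks like going from Disabled to Permissive does require editing SELINUX= in
/etc/selinux/config and rebooting each node, and until a node has been rebooted it
cannot receive migrations from the permissive ones.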
7 years, 9 months
[SOLVED] Re: How to stop "Failed to stop image transfer session. Ticket does not exist for image" spam
by Richard Chan
Solved: I did not see the "Cancel" option in the disk page when the disk
upload is stalled. Apologies for the noise.
On Sat, Jul 29, 2017 at 2:57 PM, Richard Chan <richard(a)treeboxsolutions.com>
wrote:
> Using oVirt 4.1.3 - I have some failed disk uploads but the logs are being
> spammed by
>
>
> "Failed to stop image transfer session. Ticket does not exist for image"
>
> How to stop this? Thnaks!
>
>
> --
> Richard Chan
>
>
--
Richard Chan
7 years, 9 months
Chrome 59 still will not upload to imageio, firefox ok
by Richard Chan
oVirt 4.1.4
I have the imageio proxy working with the web UI on Firefox.
On Chrome 59 I read bz# https://bugzilla.redhat.com/show_bug.cgi?id=1430598
and set "EnableCommonNameFallbackForLocalAnchors": true.
However, the upload fails with:
Make sure ovirt-imageio-proxy service is installed and configured, and
ovirt-engine's certificate is registered as a valid CA in the browser. The
certificate can be fetched from https://
<engine_url>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA
In the terminal I see
:ERROR:cert_verify_proc_nss.cc(923)] CERT_PKIXVerifyCert for XXXXXXXX
failed err=-8172
for both the web UI and imageio proxy. Any suggestions?
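One thing that might narrow it down: check whether the certificate the proxy actually
presents chains to the CA you imported, and make sure the CA is in Chrome's own store -
on Linux, Chrome uses ~/.pki/nssdb rather than Firefox's profile store. A sketch, with
ENGINE_FQDN as a placeholder and 54323 being the default imageio-proxy port:

curl -k -o ca.pem 'https://ENGINE_FQDN/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'
openssl s_client -connect ENGINE_FQDN:54323 -CAfile ca.pem </dev/null   # look for "Verify return code: 0 (ok)"
certutil -d sql:$HOME/.pki/nssdb -A -t 'C,,' -n ovirt-engine-CA -i ca.pem   # import into Chrome's NSS db (needs nss-tools)

If the verify step fails, the proxy is presenting a certificate that does not match
the CA the UI hands out, which would explain why importing the CA does not help.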
--
Richard Chan
7 years, 9 months
Migration network used also as vm network
by Gianluca Cecchi
Hello,
I have a 10Gbit VLAN defined as the migration network and currently not enabled
as a VM network.
When you configure a host interface and assign a VLAN that is defined as a
migration network, you must assign an IP to it on the host.
Suppose I edit this VLAN in the DC so that it is also enabled as a VM network:
are there any drawbacks to having, on a host, both the IP for this VLAN (the
migration IP) and one or more running VMs with their vNICs configured on this
VLAN?
Thanks in advance,
Gianluca
7 years, 9 months
Regarding Ovirt Node ISO
by TranceWorldLogic .
Hi,
I want to know the package list of the oVirt Node ISO.
Is there documentation for this, or some automated output from a Jenkins job?
Please let me know.
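I'm not aware of a published document for it, but two low-tech ways that might do:

# on any installed/booted node of that version:
rpm -qa | sort > ovirt-node-package-list.txt

and, if I remember correctly, the node-ng build jobs also publish a *.manifest-rpm
file next to the squashfs/ISO artifacts (on resources.ovirt.org / the oVirt Jenkins),
which is essentially the same rpm -qa listing taken at build time.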
Thanks,
~Rohit
7 years, 9 months
Add CentOS 7 host fails during setup validation with 'Cannot locate vdsm package, possible cause is incorrect channels.'
by Schorschi
Trying to add a CentOS 7 host to an oVirt instance, but it fails during setup
validation with 'Cannot locate vdsm package, possible cause is incorrect
channels.' Of course, given this is oVirt (not RHEV) and CentOS (not RHEL),
there is no applicable RHEL channel. Anyone else have this issue?
Version of KVM host (CentOS)...
Linux node01.dachshund-digital.org 3.10.0-514.26.2.el7.x86_64 #1 SMP Tue
Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
CentOS Linux release 7.3.1611 (Core)
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
CentOS Linux release 7.3.1611 (Core)
CentOS Linux release 7.3.1611 (Core)
Version of oVirt (and CentOS hosting oVirt)... Is release 4.1 per
install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
Linux ovirt.dachshund-digital.org 3.10.0-514.26.2.el7.x86_64 #1 SMP Tue
Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
CentOS Linux release 7.3.1611 (Core)
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
CentOS Linux release 7.3.1611 (Core)
CentOS Linux release 7.3.1611 (Core)
Any help appreciated. Thanks.
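For what it's worth, that message usually just means the host itself cannot resolve
vdsm from its yum repositories - host deploy installs vdsm over SSH on the host, so
the oVirt release repo has to be present on the host being added, not only on the
engine machine. A quick check/fix sketch on node01:

yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
yum clean all
yum info vdsm      # should now resolve from the ovirt-4.1* repositories

If yum info vdsm resolves, retrying the Add Host / Reinstall from the engine should
get past the validation.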
7 years, 9 months
oVirt VM backups
by Abi Askushi
Hi All,
For VM backups I am using a Python script to automate the snapshot ->
clone -> export -> delete steps (although with some issues when trying to
back up a Windows 10 VM).
I was wondering whether there is any plan to integrate VM backups into the
GUI, and what other recommended approaches exist out there.
Thanx,
Abi
7 years, 9 months
Windows 2016 Sysprep - Fixed IP
by Sven Achtelik
Hi All,
I'm looking into configuring a Windows 2016 template via Run Once so that it
gets a static IP. Is there any way to do that? Maybe via the Sysprep section?
I didn't find an answer about how to use it or what syntax should be used.
Must I put a complete answer file into it, or will this section only
overwrite specific parts of the original file that resides on the engine?
Thank you,
Sven
7 years, 9 months
Feature: Maximum memory size | BUG
by Ladislav Humenik
Hello, after updating the engine from 4.0.6 to 4.1.2 we hit the following bug.
*The feature description says:*
Maximum memory value is stored in |VmBase.maxMemorySizeMb| property. It
is validated against range [/memory of VM/, /*MaxMemorySizeInMB/], where
/*MaxMemorySizeInMB/ is one of |VM32BitMaxMemorySizeInMB|,
|VM64BitMaxMemorySizeInMB| and |VMPpc64BitMaxMemorySizeInMB|
configuration options depending on selected operating system of the VM.
Default value in webadmin UI is 4x size of memory.
During migration of engine 4.0 -> 4.1 all VM-like entities will get max
memory = 4x memory.
If a VM (or template) is imported (from export domain, snapshot,
external system) and doesn't have max memory set yet, the maximum value
of max memory is set (|*MaxMemorySizeInMB| config options).
*Our engine settings:*
[root@ovirt]# engine-config -g VM64BitMaxMemorySizeInMB
VM64BitMaxMemorySizeInMB: 8388608 version: 4.1
VM64BitMaxMemorySizeInMB: 8388608 version: 3.6
VM64BitMaxMemorySizeInMB: 8388608 version: 4.0
[root@ovirt# engine-config -g VM32BitMaxMemorySizeInMB
VM32BitMaxMemorySizeInMB: 20480 version: general
*Template:*
engine=# select vm_guid,vm_name,mem_size_mb,max_memory_size_mb from
vm_static where vm_name LIKE 'Blank';
vm_guid | vm_name | mem_size_mb |
max_memory_size_mb
--------------------------------------+---------+-------------+--------------------
00000000-0000-0000-0000-000000000000 | Blank | 8192
| 32768
(1 row)
*Created VM*
- expected: max_memory_size_mb taken from VM64BitMaxMemorySizeInMB
- actual: mem_size_mb * 4 (the default)
*Engine:*
engine=# select vm_guid,vm_name,mem_size_mb,max_memory_size_mb
from vm_static where vm_name LIKE 'vm-hotplug%';
vm_guid | vm_name | mem_size_mb |
max_memory_size_mb
--------------------------------------+-------------+-------------+--------------------
254a0c61-3c0a-41e7-a2ec-5f77cabbe533 | vm-hotplug | 1024
| 4096
c0794a03-58ba-4e68-8f43-e0320032830c | vm-hotplug2 | 3072
| 12288
(2 rows)
*Question:*
Is it possible to change this (default * 4) behavior in the DB?
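I'm not aware of a supported config knob for the 4x default itself, but the per-VM
value can at least be set explicitly through the v4 REST API instead of editing the
DB directly. A hedged sketch (8 GiB expressed in bytes; ENGINE_FQDN, VM_ID and the
password are placeholders, and memory_policy/max is how I believe 4.1 exposes the
maximum memory):

curl -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' -X PUT \
  -d '<vm><memory_policy><max>8589934592</max></memory_policy></vm>' \
  'https://ENGINE_FQDN/ovirt-engine/api/vms/VM_ID'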
Kind Regards,
Ladislav Humenik, System administrator
7 years, 9 months
Wrong CPU type set on ovirt node on ovirt version 3.6.4.1-1.el6 - anyone know a fix ?
by Joseph Kelly
Hello,
Has anyone come across a fix for this? I have a one-node cluster in which the
cluster CPU type is reporting correctly as:
Cluster CPU Type: Intel SandyBridge Family
But the only node is showing:
CPU Type: Intel Haswell-noTSX Family
However, on my node both vdsm and cpuinfo agree on the CPU model; it's just
that the webadmin GUI is reporting "Intel Haswell-noTSX Family" as the CPU type.
# vdsClient -s 0 getVdsCapabilities | egrep -i cpuModel
cpuModel = 'Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz'
# cat /proc/cpuinfo | grep "model name" | head -n 1
model name : Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz
We're still using oVirt version 3.6.4.1-1.el6 but are trying to move to 4.1.x.
I suspect a possible fix might be to:
1) shut down the VMs on the problem node, put it into maintenance mode and
activate it again;
2) if that doesn't work, put it into maintenance mode again, restart vdsmd
and activate the node again.
Or does this need to be updated directly in the postgres DB for that node?
If so, does anyone know how to do that?
Thanks,
Joe.
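(For what it's worth, the E5-2670 v3 is a Haswell-generation part, so the per-host
"Haswell-noTSX" type may simply be vdsm reporting the real CPU, while the cluster
level stays at the lower SandyBridge baseline.) Before touching anything by hand, it
might be worth comparing what the engine has cached for the host with what vdsm
reports now - the column names below are from memory, so verify them with
\d vds_dynamic first:

# on the engine host:
su - postgres -c "psql engine -c 'select vds_id, cpu_model, cpu_flags from vds_dynamic;'"

If the cached values there are stale, maintenance -> restart vdsmd -> activate (your
option 2) should refresh them without any direct DB update.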
--
J. Kelly
Infrastructure Engineer
TradingScreen
www.tradingscreen.com
7 years, 9 months
user can see other user's vms
by Hari Gowtham
Hi,
we have been trying to use oVirt to let the other devs in our team spawn VMs
and use them.
We are nearly done with the setup: we have the hosted engine up, created
gluster volumes, and now we have set up login to the portal too. While trying
to spawn VMs we noticed the following.
The admin can view, start or stop the VMs on the machines, which is fine.
But users can see each other's VMs too.
This makes it possible for one user to start or stop another user's VM, which
we don't want to happen.
We need to avoid this situation where one user has access to another user's VM.
A particular user should be able to see only the VMs created by him, so that
he can stop only his own VMs. What is the best way to create this setup?
Is this the way it was designed to work, or have I done something wrong here?
Do let me know.
--
Regards,
Hari Gowtham.
7 years, 9 months
iSCSI Multipath issues
by Vinícius Ferrão
Hello,

I'm joining the crowd with iSCSI Multipath issues on oVirt here. I'm trying to
enable the feature without success too.

Here's what I've done, step by step.

1. Installed oVirt Node 4.1.3 with the following network settings:

eno1 and eno2 on a 802.3ad (LACP) bond, creating a bond0 interface.
eno3 with 9216 MTU.
eno4 with 9216 MTU.
vlan11 on eno3 with 9216 MTU and fixed IP addresses.
vlan12 on eno4 with 9216 MTU and fixed IP addresses.

eno3 and eno4 are my iSCSI MPIO interfaces, completely segregated, on different switches.

2. Started the installation of the self-hosted engine after three hours of
waiting, because: https://bugzilla.redhat.com/show_bug.cgi?id=1454536

3. Selected iSCSI as the default interface for the Hosted Engine. Everything was fine.

4. On the Hosted Engine I've done the following:

a. System > Data Centers > Default > Networks
- Created iSCSI1 with VLAN 11 and MTU 9216, removed the VM Network option.
- Created iSCSI2 with VLAN 12 and MTU 9216, removed the VM Network option.

b. System > Data Centers > Default > Clusters > Default > Hosts > ovirt3.cc.if.ufrj.br (my machine)

Selected Setup Host Networks and moved iSCSI1 to eno3 and iSCSI2 to eno4. Both
icons went green, indicating an "up" state.

c. System > Data Centers > Default > Clusters

Selected Logical Networks and then Manage Network. Removed the Required
checkbox from both iSCSI connections.

d. System > Data Centers > Default > Storage

Added an iSCSI share with two initiators. Both show up correctly.

e. System > Data Centers

Now the iSCSI Multipath tab is visible. Selected it and added an iSCSI Bond:
- iSCSI1 and iSCSI2 selected on Logical Networks.
- Two iqn's selected on Storage Targets.

5. oVirt just goes down. VDSM gets crazy and everything "crashes". iSCSI is
still alive, since we can still talk with the Self Hosted Engine, but
**NOTHING** works. If the iSCSI Bond is removed everything regenerates to a
usable state.

I've added the following files on my public page to help with debugging:
/var/log/vdsm/vdsm.log
/var/log/sanlock.log
/var/log/vdsm/mom.log
/var/log/messages

There are some random images of my configuration too: http://www.if.ufrj.br/~ferrao/ovirt/

What should be done now? Issue a bug fix request?

Thanks,
V.

PS: My machine is reachable over the internet. So if anyone would like to
connect to it, just let me know.
7 years, 9 months
Ovirt Node
by Jose Vicente Rosello Vila
Hello users,
I installed ovirt-engine 4.1.3.5-1.el7.centos and tried to install 2 hosts,
but the result was "install failed" for both.
Both nodes have been installed from the CD image.
What can I do?
[inline screenshot of the failed host installs omitted]
Thanks,
Josep Vicent Roselló Vila
Àrea de Sistemes d'Informació i Comunicacions
Universitat Politècnica de València <http://www.upv.es/>
Camí de Vera, s/n
46022 VALÈNCIA
Edifici 4
Tel. +34 963 879 075 (ext. 78746)
rosello(a)asic.upv.es
Before printing this message, consider whether it is really necessary.
Caring for the environment is everyone's responsibility!
[inline image attachment image002.jpg (signature logo) omitted]
cyYAUDB781r66ZdW8baJ4UupC9klq15eKDgXJX5VVvVd3JHeuvudNsbyxaxubSGW1ZdpiZBtx9O1
cd4Pvbl/D/iLR7mZ7gaPcT2sMshyzRBSVBPcgcflWr8OFC/D3RAP+fUH9TVTX4w3xP8ACrd1huz/
AOOLSfE3T7EeA9auhZW/nmEEy+Uu/O5ec4zXR6Xp9laW0MttZwQyNEoZ44lUkYHGQK4PwRc6RBo1
zFdaLPdv/aVwPMTTjKv+sOPmwR/hWv4vd9T8UaD4VLMljd+ZcXiqdvmpGOI+OxPUV08+j6Zc6edP
m0+2e0K7fJMQ2Y+naub8eaOdY/sPTre7ksZftjNBPF1idYXKn6ZA/Cp/Cvie4v5J9C12NbXXrFf3
0Y+7cJ2lT1B7+lUTCqfBWePAA/siQ/8AjpNQ+Ib+5034daDp9hM1vLqX2SxEycGJXUbiPfAI/Guv
sdE0zTdOXT7WyhS2VdpQoDv92z1J7k1JFDY6JphSGOO1s7VGbaowqKMk/wBa4S1lvNG8XaV4iuzI
IfE2ba4jY8QsTm3GO3y/L9Sa9GrnfBTqmjz2GR5lje3ELL6DzWK/oRUHiRGh1qKYAkXFo6AerROs
oH4qH/KobDVrbTJ4ri6eGG3f/RmeIEqHLZTJCgZJ3dOmRXnni8keN9CQ8FNdn/WWMj9DT9E/5Kr4
2/69Lz+YrntN/wCSIax/2FYv5LXp/hnVbbT/AA5dtJMFlvZlgtlAJLuLeNcfgQfyqPzDBBd3IgaN
oYpCisgVjhSiDpnlnXua7zTrUWOm21rx+5hSMn1woH9Kx/Cx8+6129H3Z9SdVPqI1WP+amrXiy1u
r7wlqtpZRGW5ntJI4kBA3MVIA54qfQLea08O6bbXCbJobSKORCc7WCgEfnWV4j0O9fW9O8SaQiS3
1gGikt3baLiFuqhuzDqM8etWZ9fvntmWx0C/e8Iwsc6rHGrf7T5xj6ZqtofhqXQvC99bPILvUb3z
bi5kUYEkzg8DPQdAKo+ErrV9F8M6XpN14a1Ay28SxSOrwlBz1+/nH4Vb1fTNQuPiD4f1GC3L2dpD
cLPLuGELKAvGcnOKm8fadeav4I1PT9PgM9zPGFjjBAydwPU8dBW5aqyWkKuu1ljUEehxXF+E5NZ8
O6PNZXPhq/mla8nlDRPCVKu5YdXHrWn4p0O/vrnS9e0hE/tPS3LLBK20TRuMPGW7HHQ9M1ZHiK9k
hCxeG9T+1HjypQiID7ybsY9xn6U/V7S7uda0GeKDdHb3EjzsGGEBiZR9eSBUHi7wsPEFtFdWUxs9
YsjvsrxeCjf3T6qe4qpNpWpP8KpNJFuTqLaYYDEGHMhXBGenWnap4ZuNc8C2GnMwtNRtIoJYXbkR
TxgYzjtnIP1q1a+ItSW3WPUPDeoJeqMMluEkic+qvuAx9cUmtwaprOl2NibHyVvJl+3r5oYRRL8x
Ukdd2AvHqayWsfFWv+CdTt9esre21SKVpdPW3YfeTDRnOSB8wxn06109lf3kljbyXemzxXDRKZYw
VIR8cjOexzWBd29zoniy8e0aLb4hhxEJsiNbqNOASORuXJ47rVfRvDWv22iXVvfXzz3dnOr6c0jA
x/JkqR/EN25lbcScVSvZvPgiNrPJDbvG6rHOqsLaQE70wRgMgyMnJIIxgc1zGuaJf3Gv+HdUkYyS
NdwiYbNuJlKBwR6lFRvfDUzRP+SreNf+vS7/AJisrwvZHUfhHqNoASZdXiG1fvPwvyqO5PQV39jp
F3Y6XInm2clvbQeUiOhLPcliZNjA5yGwoIHO0DtWtY6ZHPqy2NvJJJb2TpLeSyYJMqjMcPAAO0ne
ffHrWfbJ4i8MPq2s6nc/a4WBhto52zM5BxCAF+Q72bnjPSut8PaYdH0G0sXbfLGmZX/vSE7nP4sT
Vq/vYdN0+4vrjd5NtE0r7Rk7VGTgVjw+L4rixjvYdF1iS3lQSI62oO5SMggBs9Par+na9puraW2p
WFwJoE3b8KQyFeqlTyCPQ1mad41tNWgt7iy0rV5be4I8uYWh2EZxnOeldJVXUr+LS9NuL+dZGito
zI4jXc20cnA70unX9vqmnW+oWj77e5jWSNsYyCMio7TVba+vr2zg3s9i6pM235dxXdgHuQCM+maZ
Brlhc65daLFNm9tIkllT0Vs4/p+Yo1rW7PQLFby+MnlNKkQ2LuO5jgcfWtCuKGt20HivUZdMTWLk
xSCLULeO3M0W8LwU5+VsY6cH0710Go+IbHSdHi1XUfNtbaRkVjKmGjLkAbh2wTz6Vpo6yIrowZWG
QwOQRVXTtTttUglmti2yKaSBt64+ZGKt+GRWcniy1u3lGlWV7qkcTFHmtY18vcOoDMQGI9s1ND4l
sZdFvNWeK6ghsd/2iOaBkkQqMkbT149OKv6ffQ6lp9vf2xYw3Mayxlhg7WGRkVXttbsLvWrzR4Zt
13ZIjzJ6B84/lz9RWhWfrmkprWlyWjSGKTIkgmXrDIpyrj6ED+VV/D+tPqUMlrexi31SzIS7t/Q9
nX1RuoP4dRWVr9tAddK2iILo24ublZTtglVThS7DlH67XHoQcip1vNAvLFtF1O2/s5peWt7s7Szf
3lkzhznkMDn6Vyi+ELnw54k1DVUju9Ui1O0kt1mt1Ejs74wXGRg8fe+6epxWj4R8JaX4JsornWLy
CKaPLxxPKNsTEYLc/ecjjIHA4HvpX97HcpJf2lsmlWQ/12rzwbZCCcfu1Izzn77AAdcGui0u2sbL
TYotPKfZQuVdX3Bs8li3ck8k96w7Rz4r1yPUF50bTXP2Y9rqcZBkH+wvIB7kk9hXUVj+Lv8AkTtZ
/wCvGb/0A07wrx4S0f8A68Yf/QBXL6YBD8Q/GUVvxbtaQyyqv3RKUOT9SKk+HU2uDwboaRWNkbLy
gDK1y3mbMnJ27MZ9s13VNdFkRkdQysMEHoRXDeE79PCtpr2g3jHZobtPbg9XtnyyAeuDlfritrSV
HhrwlJfakcTlXvbwjqZG+YgfThR9BXMatZT+F20TxhKu24E5XV8d45zzn2RtoH0FbPxKIbwvbkHI
Oo2vTv8AvVrr68/8PTaxH4m8W/2XZWc4Oojcbi4aPB8teMBTmtzxfBHe6LYWuoQRyJcX9sk0X3lO
XGR7jrWJp9zP8OtUi0bUZXl8OXT7dPvJDk2jH/li5/u+h/yG6xPPYfCrxBPaMVlM918ynkBp2BI/
Amux0C1trLw/p9taKqwR20YTb0I2jn8etVfGX/Ila1/14zf+gGotMvodG8AWN7LzHbadE2B1Y7Bg
D3JwPxrm73TbnwxqPh7xI+PPmn+zauw6ETtnJPojkAe2K9DorI1vQv7SkivrO4NlqlsD5F0q54PV
HH8SHuPxGDXMz3hXVJU1m3Ww1O82Qlrp91kyqDhozwHOSTsYg5PQ4qXSrq4lh0ewvZeLmW7nukul
DboVchQQ44GWTHTgVROmwSzay1ro9nutZpEhVoniicLt43K2M8nPHbNTWt/pVndK0VhBpQSeOG8Y
wKJrNmV+Gc5+Usq4fphqsX89tpMk9w2pGaw1OBoZp5pPmEyfcdAOWyDtwg6qCO9N0rw5LqqOv2F9
G0a42tPbqWSS9IGPu5/dIepH3m74rt4oo4IUhhRY441CoijAUDoAKfVHW9PbVtDvtOSQRNdW7xBy
MhdwIzj8aybDTPE9jpFtpsWoaYi28KwrP9mdmwoAzt34zxVrSvDcGkaXd20UzzXV6Xkububl5ZGG
Nxx0A7AcAVnaBoXiXQNIstKh1HTJLe0UJua2k3sucn+PGa6uiuZ1vwcmr+KdO1oXRhW3Ty7qELkX
KBg6KfYMM1pa3pMmsCzg89Y7aK5Sa4QrkzBOVX2G7aT9KydM8G3MPh7VtI1jWp9WTUXkIeYcxKw4
A5Pfn0z0pl74S1K+8E2OhTanE93ZyQt9paIlXEbZXK5z0AHWtiwj8QreBtRutOe22nKW8Dq+e3JY
j9KybTw7r2latq95puoafs1O588rcW7sU+UDGQwz0rT1HSbzU7DT47i5hW4trqG4ldIyFfY2SACc
jP1q9qWm2mr6dNp9/As1tOu10buP8feszQfC9vpHhX/hHrh/tltiVCXHLo7E4Pvg4qtpmjeIdAtV
06wvrO+sYhtt/tqsssS9lLLkOB24Bq3faPqWp+F9Q0y91CJrq9ikjEscO1IgwwAFzkgepOTULeGp
5NJ0PTJLxTBppia4wn+vMa4UdeBuAbv0qlZeCryPwlq+h6nrs2pvqBkMc86kmHcPl4z2OG61u2UG
rW9jbwT3NvPNHEqyS7GG9gMFsZ7nmtGiorm2gvIHt7mGOeFxho5FDKw9wa56XwNYJg6ZeXenAHKx
RuJIgfaOQMo/DFVj4O1QGbbrdo3n7vML6WuX3dc7XAye5xVoeGNWmZjeeJ7jayhWFpaxQllHQFiG
OOT371e0zwvo+lT/AGmC18y6IwbmdzLKf+BNkj6DArXoooooooooooooooooooooor//2Q==
--_005_B30A1AA2D347B0428A4AFC31D8C738529C79EF55AGENDA1upvnetup_
Content-Type: image/jpeg; name="image001.jpg"
Content-Description: image001.jpg
Content-Disposition: inline; filename="image001.jpg"; size=63605;
creation-date="Tue, 25 Jul 2017 17:13:27 GMT";
modification-date="Tue, 25 Jul 2017 17:13:27 GMT"
Content-ID: <image001.jpg(a)01D3057A.117AB600>
Content-Transfer-Encoding: base64
/9j/4AAQSkZJRgABAQEAYABgAAD/2wBDAAoHBwkHBgoJCAkLCwoMDxkQDw4ODx4WFxIZJCAmJSMg
IyIoLTkwKCo2KyIjMkQyNjs9QEBAJjBGS0U+Sjk/QD3/2wBDAQsLCw8NDx0QEB09KSMpPT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT3/wAARCAJsBL4DASIA
AhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQA
AAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3
ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWm
p6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEA
AwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSEx
BhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElK
U1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3
uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwDiq0dP
0trpIZmkjRZbgW8SuCRI/BwcdByBn3ql5En9w1dsr67so0jSGORY5RPGJBnZIBjcOfpweOBVkmnd
aHbQ6zN8myzlEf2dWJIVpDgKSOTtIfP+7UcXhuEXttHc3qgXHm7Y40JYbCwPJ4xlapPqmoyW9pDJ
hktJmmjJHJYtu5PcZz+Zpf7Vv/tlvcmNC8G/aCvBDliwPP8AtGlYDMOM/KcjscYzQFLOFUZZjgAd
zUjQyFiRFtBPAHQULFMrBlUhgcgjsaYGl/wjdx5jKLm1ZU3iV1ZmETIAWU4Gc89gRRceHLm1gMs8
0CjzfKUfMdx4xzjA4YdcGnPrmolnaGKK3Z95cwrtLM4AZ+vXjtx7VCmo3Qs1t5LeObZIZVkk3Fwx
IJP3sHoOooAbfaJdabOIroxITOYN27IyADuz/dww5pw0C7yVYxqwZl2luflcIT9NzfoabfX99qNt
HBcqGSOSSRcDBBc5Iz6ccDtUsms6jLcCd1QuIkh+7xhWDA9epIyT35oAkPhmdbkRPeWqbhIVdt4B
2feGNueOvTBqk+lzLGzq0bqIBcAqT8yFtoxx1zVibVtRuLiOebEkqRyRhm5O185HXtuOKINVvLe2
iiW3gJiCqJGTLFFbcFPOMZoAivNFubF7hZjGTBGkjFWyCGIHHuCcH6GnWuiSXMkaG5t4WlhE6+YW
xs5ySQDjGOc1O2uahMGF5DDd7kKN5ydRv3/wkdD/ADNMj1W5W3eBrK2kjaNYsMpztUkgZDDjJ/QU
ARQaLPNBaTtLDFFdyGKNnJ4PQZABIBOcfQ0/+wptzkXNsYY2dXm3NtXZt3HpnqwHTrTjrOomCOAq
hgiMZjj2DCFDkEd/XP1NNi1O8iDIYYpI3aRnjdcq+/G4Hnp8ox6YoAk/4Ri+27lMLJ5vlblfI+5v
Df7pB4PrUOn6De6nDay2yoUuZzbqS2NrAbst6DHf2qePXtUin82NY1+8NgT5drKF24z0AUY9MVFY
arqOmW6wWuFjDbiCM55U8/8AfI/M+tAEcWiXU0NtKpjCXNx9nTLchs4BPopOefY1Knh29a+itG8p
JZGkXLNhV2dST6HjB75FPTXdTSOOMLGYY9hSMoMKVbcD65yT+ZoXXdVSGNEKq0UZiSQKC4UsGxk/
QD6CgCqNKl+yLPJLFGzEhIXJ3sA20npgDPHJ7Grb+Gp47y4tmurcPbgGU7ZPly20DG3J574xQ+u6
i6TAwwh5dwMgTDBWbcyjnGCefxOKVte1BbuW6t4o7aeUhneHcCxDbu7HuOnpQBlXNvJaXUtvMAJI
nKNg5GQcVFU0qSyzPIYgpdixC9Bk54pvkSf3DQBHRUnkSf3DR5En9w0AR0VJ5En9w0eRJ/cNAEdF
SeRJ/cNHkSf3DQBHRUnkSf3DR5En9w0AR0VJ5En9w0eRJ/cNAEdFSeRJ/cNHkSf3DQBHRUnkSf3D
R5En9w0AR0VJ5En9w0eRJ/cNAEdFSeRJ/cNHkSf3DQBHRUnkSf3DR5En9w0AR0VJ5En9w0eRJ/cN
AEdFSeRJ/cNHkSf3DQBHRUnkSf3DR5En9w0AR0VJ5En9w0eRJ/cNAEdFSeRJ/cNHkSf3DQBHRUnk
Sf3DR5En9w0AR0VJ5En9w0eRJ/cNAEdFSeRJ/cNHkSf3DQBHRUnkSf3DR5En9w0AR0VJ5En9w0eR
J/cNAEdFSeRJ/cNHkSf3DQBHRUnkSf3DR5En9w0AR0VJ5En9w0eRJ/cNAEdFSeRJ/cNHkSf3DQBH
RUnkSf3DR5En9w0AR0VJ5En9w0eRJ/cNAEdFSeRJ/cNHkSf3DQBHRUnkSf3DR5En9w0AR0VJ5En9
w0eRJ/cNAEdFSeRJ/cNHkSf3DQBHRUnkSf3DR5En9w0AR0VJ5En9w0eRJ/cNAEdFSeRJ/cNHkSf3
DQBHRUnkSf3DR5En9w0AR0VJ5En9w0eRJ/cNAEdFSeRJ/cNHkSf3DQBHRUnkSf3DR5En9w0AR0VJ
5En9w0eRJ/cNAEdFSeRJ/cNHkSf3DQBHRUnkSf3DR5En9w0AR0VJ5En9w0eRJ/cNAEdFSeRJ/cNH
kSf3DQBHRUnkSf3DR5En9w0AR0VJ5En9w0eRJ/cNAEdFSeRJ/cNHkSf3DQBHRUnkSf3DR5En9w0A
R0VJ5En9w0eRJ/cNAEdFSeRJ/cNHkSf3DQBHRUnkSf3DR5En9w0AIf8AUj/eP8hWho+jHV0vWFxH
D9lhMvz/AMXtVLyZPKA2HO4/0rR0vV7vSYJIreztH83h2li3Mw9OvT2oAzrS0lvpfLgUFsZ5OKsj
R7gxqd8IYttKlwNvA6n1+YcVGkt1DNJLbr5Jc8qgGBznAB9KVrm/ZQpkfC9Ont/gPyoAcdJuBnJi
GOrFxtA+v1GKlGhzfZ2d5EWQZ/d9c/jVYzXrIVLuVOcjj61I15ftA0TMxVjye5/GgBf7InDEM8Kg
AklnHAAPUds7TSNo92hAdY1Jx1kHUnAH1pn2i+3l977j1Jxz1/xP50jTXrEMWbIxzgA8dM+uKAJE
0e6diB5ZUYJYNkYJxx64psWl3U5k8tMiNsEt8ueccZoS6v402LI4X044pEuL5EZVkkCscnkdf6UA
Ph0mWWMO0sMYYkAM3PAJz9PlNQz2MtvEXkMeA+zAcE5xnp6YqV7u/kBDSOQRjGB7/wCJ/OmSy3k8
SxylnROVBA4/HrS1AjFrMbczhP3I/jzxn0+vtTI/vH/dP8qXyZcY2tjrjNOSGQMcoeh/lTAS2g+0
SMu4IqoXZiM4A68d6mTTZZgGgZHRuVLEKducbiOwyDUcQuIJBJEGVx3GKl+0X23aHYLnOAAP8j2o
AmXRJ2gDrJE0jH5UVs5HHOfT5hUTaTdLMsWIyzDIw4x1A/8AZhS293eQSxt8zqnRCeMcf4D8hTpb
69d2MW6JGOdq468c59yAaAI/7KufJ87915XOX3/KMd8+lOGkXDAbShZsFVLAEg4+b6AkU1bm/SNU
V32KMAYBGPT3/GgXV+BgSPjj07f/AKhQBXnga3kCOVJIDAqcgg9CDTZOo/3R/Knuk743hjtAUZ7A
dBSyQyEjCE/KP5UATy6ZJGr7WLtGMyfLhV4zjd3PND6TcpG7bVPloXcZ6DJ6evSgXV+oAEjYAx0X
njHPrx60Nd6g6urSyMHzuzjnPX+dAFVPuv8A7v8AUVNp9jJqN15ERwdpYkgkDH/18D8aYkMgD/Ie
V/qKWP7TCGERdN2M7TjOOaAJYtKupSVVU3Bd7KXAKg9M+maRtMuE2hjEGYhQu/JJIzjH0IzStdX7
kl3ZiRg7lU5Hv60JcX0ZJV3BJ3cgHnGP5cUALHpU73jW7mONkYKzM3GT0x61FdWM9kIzOoXzBkAH
JH19OtSLcXySvKruHfG44HOOlRS/aZ9plBYqMAkDP59/xoAvadpN5qxkFlD5hjxu+YDGfrV7/hEN
a/58/wDyIv8AjWt8O/8AW3/+6n8zXT32qGyvDD5Esq/Z2mBjjLkEHHOOgqWxpHBf8IhrX/Pn/wCR
F/xo/wCEQ1r/AJ8//Ii/4128Ou74o3a0kZWCqXQg/vSm/YF6+2fWiDWp7q4iSCxDq8LyELONwKkD
b9ee/Si4WOI/4RDWv+fP/wAiL/jR/wAIhrX/AD5/+RF/xr0K5vniuPs8MBkkERldtwAjXoD789h6
VVttbElnDI8MjyMVjYIByxj8zgfpii4WOH/4RDWv+fP/AMiL/jR/wiGtf8+f/kRf8a7M+Iiu2VrU
C3EDzSMJQzJtIGMY689KntNaN5PFClnMrtuLbzgKFIBOSBnqOg9aLhY4X/hENa/58/8AyIv+NH/C
Ia1/z5/+RF/xr0FdUhbUjYhT5o7709M9N279Kqf2rcREzzrB9k+1Pb4UEOuM4brg9OmBRcLHE/8A
CIa1/wA+f/kRf8aP+EQ1r/nz/wDIi/41148QS3n2YWtu0KySgPJL0CbC+RkAHgc+lWbTX4buXy1h
cNuYZByMBN4OeM5FFwscP/wiGtf8+f8A5EX/ABp0fg3XJZFjSyy7ZwPMXt+Ndla63PdX0McdurQz
NwS20xr5av8AifmrotN/5CsH+6/8hRcLHmP/AAgHiT/oHH/v6n+NH/CAeJP+gcf+/qf417hRRcLH
h/8AwgHiT/oHH/v6n+NH/CAeJP8AoHH/AL+p/jXuFFFwseH/APCAeJP+gcf+/qf40f8ACAeJP+gc
f+/qf417hRRcLHh//CAeJP8AoHH/AL+p/jR/wgHiT/oHH/v6n+Ne4UUXCx4f/wAIB4k/6Bx/7+p/
jR/wgHiT/oHH/v6n+Ne4UUXCx4f/AMIB4k/6Bx/7+p/jR/wgHiT/AKBx/wC/qf417hRRcLHh/wDw
gHiT/oHH/v6n+NH/AAgHiT/oHH/v6n+Ne4UUXCx4f/wgHiT/AKBx/wC/qf40f8IB4k/6Bx/7+p/j
XuFFFwseH/8ACAeJP+gcf+/qf40f8IB4k/6Bx/7+p/jXuFFFwseH/wDCAeJP+gcf+/qf40f8IB4k
/wCgcf8Av6n+Ne4UUXCx4f8A8IB4k/6Bx/7+p/jR/wAIB4k/6Bx/7+p/jXuFFFwseH/8IB4k/wCg
cf8Av6n+NH/CAeJP+gcf+/qf417hRRcLHh//AAgHiT/oHH/v6n+NH/CAeJP+gcf+/qf417hRRcLH
h/8AwgHiT/oHH/v6n+NH/CAeJP8AoHH/AL+p/jXuFFFwseH/APCAeJP+gcf+/qf40f8ACAeJP+gc
f+/qf417hRRcLHh//CAeJP8AoHH/AL+p/jR/wgHiT/oHH/v6n+Ne4UUXCx4f/wAIB4k/6Bx/7+p/
jR/wgHiT/oHH/v6n+Ne4UUXCx4f/AMIB4k/6Bx/7+p/jR/wgHiT/AKBx/wC/qf417hRRcLHh/wDw
gHiT/oHH/v6n+NH/AAgHiT/oHH/v6n+Ne4UUXCx4f/wgHiT/AKBx/wC/qf40f8IB4k/6Bx/7+p/j
XuFFFwseH/8ACAeJP+gcf+/qf40f8IB4k/6Bx/7+p/jXuFFFwseH/wDCAeJP+gcf+/qf40f8IB4k
/wCgcf8Av6n+Ne4UUXCx4f8A8IB4k/6Bx/7+p/jR/wAIB4k/6Bx/7+p/jXuFFFwseH/8IB4k/wCg
cf8Av6n+NH/CAeJP+gcf+/qf417hRRcLHh//AAgHiT/oHH/v6n+NH/CAeJP+gcf+/qf417hRRcLH
h/8AwgHiT/oHH/v6n+NH/CAeJP8AoHH/AL+p/jXuFFFwseH/APCAeJP+gcf+/qf40f8ACAeJP+gc
f+/qf417hRRcLHh//CAeJP8AoHH/AL+p/jR/wgHiT/oHH/v6n+Ne4UUXCx4f/wAIB4k/6Bx/7+p/
jR/wgHiT/oHH/v6n+Ne4UUXCx4f/AMIB4k/6Bx/7+p/jR/wgHiT/AKBx/wC/qf417hRRcLHh/wDw
gHiT/oHH/v6n+NH/AAgHiT/oHH/v6n+Ne4UUXCx4f/wgHiT/AKBx/wC/qf40f8IB4k/6Bx/7+p/j
XuFFFwseH/8ACAeJP+gcf+/qf40N4C8RqpZtOIAGT+9T/GvcKiuf+PaX/cP8qLhY8TXwF4idQy6c
SpGQfNT/ABpf+EA8Sf8AQOP/AH9T/GvZo2kTS1aFBJKIQURm2hjt4Ge31rI8M+LE8TzTi1tGiitg
End5ASs3dAB1A/vdD2ouFjy8eBPELOyDTzuXGR5i8Z/Gl/4QDxJ/0Dj/AN/U/wAa9mh/5CN1/up/
WsS78WyW1+0Y0u6MMDFJlOzzWY/6vy13fMDg/wCc0XCx5m/gTxDGAW08gEhR+8Xqenel/wCEA8Sf
9A4/9/U/xr2S6cyW9u5RkLSxkq2MjkcHFZ2valexahp+l6bJb29zfeY32idC6oqAEgKCNzHI7jgE
0XCx5Z/wgHiT/oHH/v6n+NYt/Y3GmXslpeR+XPHjeuQccZ7V7n4b1S41OyuBeCI3FpcvbSSQ58uQ
rj5lB6deRzg5FeRePT/xWupf7y/+gihMGjBzRmmZozTEPzRmmZozQA/NGaZmjNAD80ZpmaM0APzR
mmZozQA/NGaZmjNAD80ZpmaM0APzRmmZozQA/NGaZmjNAD80ZpmaM0APzRmmZozQA/NGaZmjNAD8
0ZpmaM0APzRmmZozQA/NGaZmjNAD80ZpmaM0APzRmmZozQA/NGaZmjNAD80ZpmaM0APzRmmZozQA
/NGaZmjNAD80ZpmaM0APzRmmZozQA/NGaZmjNAD80ZpmaM0APzRmmZozQA/NGaZmjNAD80ZpmaM0
APzRmmZozQA/NGaZmjNAD80ZpmaM0APzRmmZozQB1ngG8t7WS+NzPFCGVMb2Az1rrm1DSXkMjXlo
XKGMnzR9084615J9mmP/ACzNH2aX/nkaVh3PVA2grIrrcWgZVCjE+AMDaDjOM44z196b/wASDGPt
NtnDAt9pO5t2M5OcnoOvoK8t+zS/88jR9ml/55GiwXPWJrrRriVJJLu2LoNoYT4OPQ4PI9jUQ/sA
SLIJ7QMoAXE+AMLtBxnGccZ615Z9ml/55Gj7NL/zyNFguepj+wAADcWrfeyWuMlt2M7iTznA6+gq
WG70e3ZWjvbfcqlQzXG4gE5IySfQV5P9ml/55Gj7NL/zyNFguev/ANsad/z/ANr/AN/RVVJNCS4M
4ubUyFmb5p9wBbqQCcAn2FeVfZpf+eRo+zS/88jRYLnqqSaFGioLm1KKSVVp9wXIK4AJ4GCRjpUY
Tw8I9nn22Mg5+0ndwNvXOenGPSvLvs0v/PI0fZpf+eRosFz1VJdCjMZjuLRDGwZCs2MEKF9fQAY9
qu23iHSbO9hnn1C3WNQwLB84yOOleO/Zpf8AnkaPs0v/ADyNFgue7f8ACd+G/wDoLQf+Pf4Uf8J3
4b/6C0H/AI9/hXhP2eb/AJ5tR9nm/wCebUWC7Pdv+E78N/8AQWg/8e/wo/4Tvw3/ANBaD/x7/CvC
fs83/PNqPs83/PNqLBdnu3/Cd+G/+gtB/wCPf4Uf8J34b/6C0H/j3+FeE/Z5v+ebUfZ5v+ebUWC7
Pdv+E78N/wDQWg/8e/wo/wCE78N/9BaD/wAe/wAK8J+zzf8APNqPs83/ADzaiwXZ7t/wnfhv/oLQ
f+Pf4Uf8J34b/wCgtB/49/hXhP2eb/nm1H2eb/nm1Fguz3b/AITvw3/0FoP/AB7/AAo/4Tvw3/0F
oP8Ax7/CvCDBKOqEUeTJ/d/UUWC7Pd/+E78N/wDQWg/8e/wo/wCE78N/9BaD/wAe/wAK8I8mT+7+
oo8mT+7+oosF2e7/APCd+G/+gtB/49/hR/wnfhv/AKC0H/j3+FeEeTJ/d/UUeTJ/d/UUWC7Pd/8A
hO/Df/QWg/8AHv8ACj/hO/Df/QWg/wDHv8K8JEEp6ITR9nm/55tRYLs92/4Tvw3/ANBaD/x7/Cj/
AITvw3/0FoP/AB7/AArwn7PN/wA82o+zzf8APNqLBdnu3/Cd+G/+gtB/49/hR/wnfhv/AKC0H/j3
+FeE/Z5v+ebUfZ5v+ebUWC7Pdv8AhO/Df/QWg/8AHv8ACj/hO/Df/QWg/wDHv8K8J+zzf882o+zz
f882osF2e7f8J34b/wCgtB/49/hR/wAJ34b/AOgtB/49/hXhP2eb/nm1H2eb/nm1Fguz3b/hO/Df
/QWg/wDHv8KP+E78N/8AQWg/8e/wrwn7PN/zzaj7PN/zzaiwXZ7t/wAJ34b/AOgtB/49/hR/wnfh
v/oLQf8Aj3+FeE/Z5v8Anm1H2eb/AJ5tRYLs92/4Tvw3/wBBaD/x7/Cj/hO/Df8A0FoP/Hv8K8J+
zzf882o+zzf882osF2e7f8J34b/6C0H/AI9/hR/wnfhv/oLQf+Pf4V4T9nm/55tR9nm/55tRYLs9
2/4Tvw3/ANBaD/x7/Cj/AITvw3/0FoP/AB7/AArwn7PN/wA82o+zzf8APNqLBdnu3/Cd+G/+gtB/
49/hR/wnfhv/AKC0H/j3+FeE/Z5v+ebUfZ5v+ebUWC7Pdv8AhO/Df/QWg/8AHv8ACj/hO/Df/QWg
/wDHv8K8J+zzf882o+zzf882osF2e7f8J34b/wCgtB/49/hR/wAJ34b/AOgtB/49/hXhP2eb/nm1
H2eb/nm1Fguz3b/hO/Df/QWg/wDHv8KP+E78N/8AQWg/8e/wrwn7PN/zzaj7PN/zzaiwXZ7t/wAJ
34b/AOgtB/49/hR/wnfhv/oLQf8Aj3+FeE/Z5v8Anm1H2eb/AJ5tRYLs92/4Tvw3/wBBaD/x7/Cj
/hO/Df8A0FoP/Hv8K8J+zzf882o+zzf882osF2e7f8J34b/6C0H/AI9/hR/wnfhv/oLQf+Pf4V4T
9nm/55tR9nm/55tRYLs92/4Tvw3/ANBaD/x7/Cj/AITvw3/0FoP/AB7/AArwn7PN/wA82o+zzf8A
PNqLBdnu3/Cd+G/+gtB/49/hR/wnfhv/AKC0H/j3+FeE/Z5v+ebUfZ5v+ebUWC7Pdv8AhO/Df/QW
g/8AHv8ACj/hO/Df/QWg/wDHv8K8J+zzf882o+zzf882osF2e7f8J34b/wCgtB/49/hR/wAJ34b/
AOgtB/49/hXhP2eb/nm1H2eb/nm1Fguz3b/hO/Df/QWg/wDHv8KP+E78N/8AQWg/8e/wrwn7PN/z
zaj7PN/zzaiwXZ7t/wAJ34b/AOgtB/49/hR/wnfhv/oLQf8Aj3+FeE/Z5v8Anm1H2eb/AJ5tRYLs
92/4Tvw3/wBBaD/x7/Cj/hO/Df8A0FoP/Hv8K8J+zzf882o+zzf882osF2e7f8J34b/6C0H/AI9/
hTJvHPhx4JFXVoCSpA6+n0rwz7PN/wA82o+zzf8APNqLBdntw8ZeGZtPFtcapbsjxeW6ndyCMEdK
jtfEngyynWa1u7KGRYRbho1I/djovToK8V+zzf8APNqPs83/ADzaiwXZ7fF438OreTudVg2uFwee
cZz2qnc6z4IvL03lxe2z3W5GWYl9yFPu7T2/Dr3rxz7PN/zzaj7PN/zzaiwXZ7fd+N/DsiRhNVgJ
WVGPXoDz2qLU/Evg3WbX7PqN9aTxA7gGDAqfUEDIPuK8V+zzf882o+zzf882osF2ew6RrPg3Qp7h
tN1VIIJ8E2wZvKVh1ZVxwT39cV5v4yvrfUPFd9dWcqywSMpR16H5QKxvs83/ADzaj7PN/wA82piG
5ozTvs83/PNqPs83/PNqAG5ozTvs83/PNqPs83/PNqAG5ozTvs83/PNqPs83/PNqAG5ozTvs83/P
NqPs83/PNqAG5ozTvs83/PNqPs83/PNqAG5ozTvs83/PNqPs83/PNqAG5ozTvs83/PNqPs83/PNq
AG5ozTvs83/PNqPs83/PNqAG5ozTvs83/PNqPs83/PNqAG5ozTvs83/PNqPs83/PNqAG5ozTvs83
/PNqPs83/PNqAG5ozTvs83/PNqPs83/PNqAG5ozTvs83/PNqPs83/PNqAG5ozTvs83/PNqPs83/P
NqAG5ozTvs83/PNqPs83/PNqAG5ozTvs83/PNqPs83/PNqAG5ozTvs83/PNqPs83/PNqAG5ozTvs
83/PNqPs83/PNqAG5ozTvs83/PNqPs83/PNqAG5ozTvs83/PNqPs83/PNqAG5ozTvs83/PNqPs83
/PNqAG5ozTvs83/PNqPs83/PNqAG5ozTvs83/PNqPs83/PNqAG5ozTvs83/PNqPs83/PNqAG5ozT
vs83/PNqPs83/PNqAG5ozTvs83/PNqPs83/PNqAG5ozTvs83/PNqPs83/PNqAG5ozTvs83/PNqPs
83/PNqAG5ozTvs83/PNqPs83/PNqAG5ozTvs83/PNqPs83/PNqAG5ozTvs83/PNqPs83/PNqAG5o
zTvs83/PNqPs83/PNqANOipfstx/z7z/APftv8KPstx/z7z/APftv8KkoioqX7Lcf8+8/wD37b/C
j7Lcf8+8/wD37b/CgCKipfstx/z7z/8Aftv8KPstx/z7z/8Aftv8KAIqKl+y3H/PvP8A9+2/wo+y
3H/PvP8A9+2/woAioqX7Lcf8+8//AH7b/Cj7Lcf8+8//AH7b/CgCKipfstx/z7z/APftv8KPstx/
z7z/APftv8KAIqKl+y3H/PvP/wB+2/wo+y3H/PvP/wB+2/woAioqX7Lcf8+8/wD37b/Cj7Lcf8+8
/wD37b/CgCKipfstx/z7z/8Aftv8KPstx/z7z/8Aftv8KAIqKl+y3H/PvP8A9+2/wo+y3H/PvP8A
9+2/woAioqX7Lcf8+8//AH7b/Cj7Lcf8+8//AH7b/CgCKipfstx/z7z/APftv8KPstx/z7z/APft
v8KAPQ/hjbwyaRevJEjN9oxllBONo/xrtfslv/zwi/74Fcb8NHW20i8WdhExufuyHafur2Ndl9st
v+fiL/vsUAZj6ppsd6bVoI/NFwINuFzym/dj0xUov9GZVYS2hDNsXgcnrUVxpum3LyNJdKfMn89g
JF67NmPpiqx0HT22F9QYupTD70DbVGAAQOOO455qtBal6zv9Iv2RbcwM75KoUAYgEjpj2NXvslv/
AM8Iv++BWRaaTYWl5DcJf5MIYKu9BwSTgkckDceta/2y2/5+Iv8AvsUnboCv1PN/idDHFqlgY40T
dC2doxn5h/jXFV23xOkSbU9PMTq4EL5KnOORXFYPofypDEopcH0P5UYPofyoASilwfQ/lRg+h/Kg
BKKXB9D+VGD6H8qAEopcH0P5UYPofyoASilwfQ/lRg+h/KgBKKXB9D+VGD6H8qAEopcH0P5UYPof
yoASilwfQ/lRg+h/KgBKKXB9D+VGD6H8qAEopcH0P5UYPofyoASilwfQ/lRg+h/KgBKKXB9D+VGD
6H8qAEopcH0P5UYPofyoASilwfQ/lRg+h/KgBKKXB9D+VGD6H8qAEopcH0P5UYPofyoASilwfQ/l
Rg+h/KgBKKXB9D+VGD6H8qAEopcH0P5UYPofyoASilwfQ/lRg+h/KgBKKXB9D+VGD6H8qAEopcH0
P5UYPofyoASilwfQ/lRg+h/KgBKKXB9D+VGD6H8qAEopcH0P5UYPofyoASilwfQ/lRg+h/KgBKKX
B9D+VGD6H8qAEopcH0P5UYPofyoASilwfQ/lRg+h/KgBKKXB9D+VGD6H8qAEopcH0P5UYPofyoAS
ilwfQ/lRg+h/KgBKKXB9D+VGD6H8qAEopcH0P5UYPofyoASilwfQ/lRg+h/KgBKKXB9D+VGD6H8q
AEopcH0P5UYPofyoASilwfQ/lRg+h/KgBKKXB9D+VGD6H8qAEopcH0P5UYPofyoASilwfQ/lRg+h
/KgBKKXB9D+VGD6H8qAEopcH0P5UYPofyoASilwfQ/lRg+h/KgBKKXB9D+VGD6H8qAEopcH0P5UY
PofyoASilwfQ/lRg+h/KgBKKXB9D+VGD6H8qAEopcH0P5UYPofyoASilwfQ/lRg+h/KgBKKXB9D+
VGD6H8qAEopcH0P5UYPofyoASilwfQ/lRg+h/KgBKKXB9D+VGD6H8qAEopcH0P5UYPofyoASilwf
Q/lRg+h/KgBKKXB9D+VGD6H8qAEopcH0P5UYPofyoASilwfQ/lRg+h/KgBKKXB9D+VGD6H8qAPeZ
57a2eNbi5SJpW2xh5Apc+gz1NSIEkBKSFgCQSGzyOornPFuiXup3tjNYW8TyRZUyPIu1QWUncjKQ
y8dsNxVC48Pa0b+wni+cxXczsHuD5ao0u4HAwc7fT6UIGdkqo4JVywBI4bvTvKH95vzrz248H61H
BDBYgwQx3MzYjueTucFJRnuBxj/Grd34R1OaW7nWSXzpGnZWF2ygnchiOAcDGHOPehAdv5Q/vN+d
HlD+8351zGiaTq9t4uvb+8SOO1mRlCxzFgx3Aq2CeOM11VADPKH95vzo8of3m/On0UAM8of3m/Oj
yh/eb86fRQAzyh/eb86PKH95vzp9FADPKH95vzo8of3m/On0UAM8of3m/Ojyh/eb86fRQAzyh/eb
86PKH95vzp9FADPKH95vzo8of3m/On0UAM8of3m/Ojyh/eb86fRQAzyh/eb86o6tqMGj2n2ifzWB
baqqeSa0awfF1ncXmmRfZYjK8UwcoBnIwe3egCj4duodV8RajciIhHjTCyAEjHFdI62kZxIsCnBb
DADgdT+ormfCrTP4g1J7iEwSuis0ZGNuTW1q2kHUpI5ElEbxoVUlc9WUn8CFI/GgC/8AZ4f+eMf/
AHyKPs8P/PGP/vkVht4eu5JYWlvsrFKGCqWGVHc8/e/Tmphoc4nt5Ddb/KkLEnOQoI24567RtOeu
TQBqtDbopZo4lVRkkqAAKSNLWUkRrC5XGdoBxkZH6Ut3B9ps5oMgeYjJk9sjFYi+GpkC7bpc7wxb
aQRgAAjB6gAAZ96ANl2s4d28wJt+9kgY+v5j86kEcTAEIhB5BAFYEfhZyX86WJlZ1fAVvmIKnccn
r8v60ybwzdyWxiW8QMWUmX597YzjPOOD09hQB0fkx/8APNP++RUTSWiXCwM8AmYZWMkbiPYdabp9
n9itjGW3Ozs7N/eJOaoahokt5qn2lJxGpRRkFtykbugzg/e7jI7UAa3kx/8APNP++RR5Mf8AzzT/
AL5FYkHhwgJ58qNtK4Rd21QGLFRk9DxTtK0GayujNdXAmw7uqjOFZgozyfY/nQBsGKIAkogA9hTf
9H2o37ra+Np4w2emPWm39u93ZyQRymIyDaXAyQO+PwrFn8NTyCJBdqyRZCM6ENGOfu7SB3x06AUA
b3lxBgNiZPQYFIVgCliI8A4JwODWCfCp81cXOIed0fzYPJx37ZH5U2TwvcyXDyPdpIrrhlcMQTkE
Drx07UAdF5Mf/PNP++RTD9nVwhWPcTgDHfGf5Vhv4auPPiaK88tUhZGxuyzMDk9e5I/KlHh2485H
FzHGAAAiB8Rf7mT36c9qAN7yY/8Anmn/AHyKPJj/AOeaf98isiTw1b3WkNp91JL5RfejQyMjIcDo
c565/OtaCCO2gjhhXbHGoVRnOAKYCKkL52rGcHBwBwaUJCSQFjyvUYHFYk/ht5JbuRbnDTs2MZGF
IPy8H1warjwvdjySLyMOrxu7hSCxUAevoCPxpAdGscLqGVY2UjIIAIIpq/Z2k2KsZYDOAO2cVz8P
hi6h83N1HIj7MRsGCNtyOQD6EdP7oqZfDcxmcy3QdHOW+9ll3A7Dz04P5mgDd8mP/nmn/fIo8mP/
AJ5p/wB8isqfw3a3o09rxpnlsgMMkrKJMDowB5Geea1n3bG2EBscE9M0MBirA+dixtg4OADg+lKs
cLDKohHqAKwZvDVz9nEUWoFv3iy5dAPnGcn5cZznv6dajHh+7k1F1mcNbHlpCxzJlgdmM9Bg/maA
Oj8mP/nmn/fIprJCgyyxqOmSAKw4fDlwL/zri7EkR8vcnzfNt6Z5p154dlutRnuTcBg7IyI+4hcY
4xnHGCR9aANsRREAhEIPQgCgxwhgpWPJ6DA5rnYvDt/CzKt1HjaFMh3Ev94FiM/e5HtQ3haf9632
pXkdnw0m4nDYz34JxjigDo/Jj/55p/3yKQxxKCWRAB1JArI0nRLnTr4Ty3YlX7OsTA5yxAAzyeOn
61q3MJuIvLD7QSNxxnI9KAH+VH/zzT8hSGOIEAomT04FUf7OlaQh5AVChQ3OWHoaV9MdmJEigb8g
YPvz16jP6UAXvJj/AOeaf98ijyY/+eaf98iqlvYPFOZJJd43FgOevrV6gBnkx/8APNP++RR5Mf8A
zzT/AL5FPooAZ5Mf/PNP++RR5Mf/ADzT/vkU+igBnkx/880/75FHkx/880/75FPooAZ5Mf8AzzT/
AL5FHkx/880/75FPooAZ5Mf/ADzT/vkUeTH/AM80/wC+RT6KAGeTH/zzT/vkUeTH/wA80/75FPoo
AZ5Mf/PNP++RR5Mf/PNP++RT6KAGeTH/AM80/wC+RR5Mf/PNP++RT6KAGeTH/wA80/75FHkx/wDP
NP8AvkU+igBnkx/880/75FHkx/8APNP++RT6KAGeTH/zzT/vkUeTH/zzT/vkU+igBnkx/wDPNP8A
vkUeTH/zzT/vkU+igBnkx/8APNP++RR5Mf8AzzT/AL5FPooAZ5Mf/PNP++RR5Mf/ADzT/vkU+igB
nkx/880/75FHkx/880/75FPooAZ5Mf8AzzT/AL5FHkx/880/75FPooAZ5Mf/ADzT/vkUeTH/AM80
/wC+RT6KAGeTH/zzT/vkUeTH/wA80/75FPooAZ5Mf/PNP++RR5Mf/PNP++RT6KAGeTH/AM80/wC+
RR5Mf/PNP++RT6KAGeTH/wA80/75FHkx/wDPNP8AvkU+igBnkx/880/75FHkx/8APNP++RT6KAGe
TH/zzT/vkUeTH/zzT/vkU+igBnkx/wDPNP8AvkUeTH/zzT/vkU+igBnkx/8APNP++RR5Mf8AzzT/
AL5FPooAZ5Mf/PNP++RR5Mf/ADzT/vkU+igBnkx/880/75FHkx/880/75FPooAZ5Mf8AzzT/AL5F
Hkx/880/75FPooAZ5Mf/ADzT/vkUeTH/AM80/wC+RT6KAGeTH/zzT/vkUeTH/wA80/75FPooAZ5M
f/PNP++RR5Mf/PNP++RT6KAGeTH/AM80/wC+RR5Mf/PNP++RT6KAGeTH/wA80/75FHkx/wDPNP8A
vkU+igBnkx/880/75FHkx/8APNP++RT6KAGeTH/zzT/vkUeTH/zzT/vkU+igBnkx/wDPNP8AvkUe
TH/zzT/vkU+igBnkx/8APNP++RR5Mf8AzzT/AL5FPooAZ5Mf/PNP++RR5Mf/ADzT/vkU+igBnkx/
880/75FHkx/880/75FPooAZ5Mf8AzzT/AL5FHkx/880/75FPooAZ5Mf/ADzT/vkUeVH/AM80/IU+
qWqeb9kIitxOpPzoecj6d6cVd2FJ2Vy15Mf/ADzT8hR5Mf8AzzT/AL5FUNPtGhtogpm2GTcFk4KL
g8YzxTNF03ULGW7bUNUlvVeT9wrKo8tO2cAZP+Apyik9GKLbWxpeTH/zzT/vkUeTH/zzT/vkVW1a
O4m0u4SzkeOcodhTGc+gz0zVW9F48toqxzJiQbpIpMgKMZ3LxnPTv1JqSjRdIkAJjU5OAAopv7v/
AJ4f+Oinzfwf739DUMKlXbeCXP8AF2IppAOzCfuxA8Z4UUZiH/LD/wAdFNi+6cjIwKIVZVOQQufl
U9QKLAT+Wf8Ano/6f4UeWf8Ano/6f4VgSave291f4U3DRyhYrYDDFcdR8v45yenaop/E91HDcPFb
RzGNto2B8d+Dx1OOKQHSeWf+ej/p/hR5Z/56P+n+Fc/d65qCQSlLdIySfLco5wMsACMfeO0Y7c1M
mp6hJpVzcGNI3jlVE3IxJG4biR9D2oA2vLP/AD0f9P8ACjyz/wA9H/T/AArnZNfv2RAluqM0oXIR
znlcoOPvcnJ6cVraRfS39q0k0YQhsDaCAeAe/PBJB9xQBc8s/wDPR/0/wo8s/wDPR/0/wrO1PU57
K6SOKEODGXAIYmQjPyrgHkYyc+tUJdb1JrVnigiGEOJNjkE/NhgMfdwv15oA6Dyz/wA9H/T/AAo8
s/8APR/0/wAKwJfEF9DLCv2ES+asjbUDZwN23qO+3n6iprXVdQu7C4lMEUTR25dCAzbn+bGPbj9a
ANnyz/z0f9P8KPLP/PR/0/wrnJ/EF+LWbZDGrqMCTY5CnBxkY5JwCPqM1qaTqE9686zwiPyyAOCC
OvBz34B49RQBf8s/89H/AE/wo8s/89H/AE/wp9FADPLP/PR/0/wo8s/89H/T/Cn0UAM8s/8APR/0
/wAKPLP/AD0f9P8ACn0UAM8s/wDPR/0/wo8s/wDPR/0/wp9FADPLP/PR/wBP8KPLP/PR/wBP8KfR
QAzyz/z0f9P8KPLP/PR/0/wp9Y9zfTPqPlWs+yRCFMUu0K455HcnNXCDlsTKSiVdOOzxjquST+7j
5/AVvecvoawLP/kcNW/65R/yFbNJK4yUzqASQcCl85fQ1Xk/1bfSnU+VATecvoaPOX0NQ0UcqAm8
5fQ0hnUY4PNRVDdTfZ7eSYjd5SM+M9cKTihpIC55y+ho85fQ1yth43sbuHfLDNCRguoG/wAsbUJZ
vQAuBUh8Z6cQ+yK6yFJUvFtVuGxznuUYfhQ0kB03nL6Gjzl9DXNx+MdPMStMlzE/ltLIjR/6pQqs
SfbDrjHXNaWl6pb6vZi5tRKE3FSJUKsCPajlQXNLzl9DR5y+hqGijlQE3nL6Gjzl9DUNFHKgJvOX
0NHnL6GoaKOVATecvoaTz1yBg81FVTUr6PTbGa8lVmjgjZ2C9TjtRyoDR85fQ0ecvoaw5dfWxCLq
du0MrAviFvNVYxgbycDAywHSq6eMtLZ8MLmNP+eskOEx82DnPfY35UWQHSecvoaPOX0Nc3H4z0uV
InUXeJSesBG0ZUZPt869PWmJ4200RI1zHcwM0ckgRo9xwhOeh6kKSBRyoDp/OX0NJ565xg5rn/8A
hLLFJpIpknjeNhvBjyY1O35m9BlwOM1e1i//ALK024vfKM3kRl/LU4LcjgGjlQGn5y+ho85fQ1zN
v4z0+aIu6zIqsVZlXcqfMyrk9ixU4FNl8bWC28skdvdl0XcqSR7N/CtgHJ52sDRyoDqPOX0NHnL6
Gudk8X6bEspdboeSuZR5XKHeUCnn7xYEYp174otLXSI76OK4m82N5I4ghViEGW3Z+6B/nNFkB0Hn
L6Gjzl9DXP2/imxvIL57USyNZxNKwKFVfbnIVuh5BFR2/jDT5oo2dLmJyAZEaPPkg7eWPpl15HrR
yoLnSecvoaPOX0Nc9b+KrO8uYIbWC5cy3JtmJTbsIUtkgnOOKm/4SK0b7WIllZ7eKWQbl2rKIzht
p9jxRZBqbfnL6Gjzl9DXN6L4n/tjUXtls2RVUkyB920gKcNwAM7uOT0Nb1PlC5N5y+hpPPXOMHNR
VS1TUBplt55jMjMyRImcZZmCjJ7DJ60uVAafnL6Gjzl9DXN3nib7BNJb3lsIbhYfMGH8xCfnJGQA
ekZP6UTeL7FDOkMNzNLCYtyBNuQ5ABGTyPmFFkB0nnL6Gjzl9DXPW/i7TbtlW2F1KzzCFAsJ+ckM
cjPGPlbn2rbo5UBN5y+ho85fQ1DRRyoCbzl9DR5y+hqGijlQE3nL6Gjzl9DUNFHKgJvOX0NJ565I
wcioqo6lqB08RbIvNmuJkgiQttBY5PJwcDAPajlQGp5y+ho85fQ1z0/iyxsp3tr1ZY7mGLzZljQy
LH8pbG4dTgZFRnxjZCbb9mvNoUlm8sZDeYEC4z1JYe1FkB0vnL6Gjzl9DWLB4isZnnGZoxDG8haS
PAZUOHI9dp4P9aoxeNtPN00E8N1DIZAiKYyzFSqncQOg+cDvRZAdR5y+ho85fQ1z+oeKLXT9RW1e
KZwGKSyqPlQhN+Pc4xx71DB4zsLhv3UF2yMiNGVjBMhIYkAZ7BGJz6UWQHTecvoaPOX0NY2n+IbH
VL6S0tWkaSONZdzJtVlIBBGevUVp0cqC5N5y+ho85fQ1DRRyoCbzl9DSeepJ4PFRVmapqr6dcWkM
VsZ3u5jGME/Lhck8KT29KOVAbPnL6Gjzl9DXOHxdYmUxxQ3LlLkW0nyBdhwx3cnOPlNQt4405oFl
to55R5ipJlNoiBfZuY/nj1xRZAdT5y+ho85fQ1l6bq9tqqyG3EqlApKyJtO1hlW+hFXqOVATecvo
aPOX0NQ0UcqAm85fQ0gnU54PFRVk6vriaNcWaSQmRLqVkZw2PLAXOcd+wo5UBuecvoaPOX0Ncjbe
OrORLZ7qJ7fzoGlMeC7Bg2AoI4ycVcHi/TSjcXCurFCjxEEN84wfxjaiyA6Lzl9DR5y+hrmh400w
QySOLkCNSxxFw2CAQvrjcPSrdj4jsdRvJLa285pI4RNzHgMpAI2569RRyoDa85fQ0ecvoa5w+MNO
3p5YmkRwNpSM5ZiVAAH1cA88HNLB4tspCVnhubdxKYgsiD+/sB4PHIo5UB0XnL6Gjzl9DXKnx3pe
4HbP5XK52fMXDABQvfO7Oc1ftPEVjf34soDN5zQiYb4yoKkZ7855osgNoTqQDg80vnL6GuV1Txeu
lXl1amyeVrdYSpEgHmb+vbjaOfepv+Ey0thJ5f2qXZJ5Y2QH5z82SvqPkb8qLINTpPOX0NHnL6Gu
dTxhpjpuAueBuZTFyiYU72GeFw6+/PSmS+MtPhRpXS4EIV2DeXy+xtp2juM/yosgOkM6jsaXzl9D
VSKZLm3hmiOY5QrqfUHBFYknih49KXUfsSC1mkWOF2nxnL7csApK+vGaHFINzpvOX0NHnL6Guc0z
xONRuoIGspYHlYqQ7cqRGr8jH+1it2jlQXJvOX0NJ565xg5qKqWr340rTbi9MfmCCMvszjPI70cq
A0/OX0NHnL6GuWtfG+nS+WtwJIpG3ElFMkYUFgG3YHB2nHHatLSNZTVzdeXBLCIHVB5owWBQMDjt
w1HKgua/nL6Gjzl9DUNFHKgJfPUk8Hil85fQ1kaxqbaRYvdLbtOA6qwDYCA9WJwcAeuKxbjxwIXu
tmnNMkEvl7o5sg/Mq8nbgZDZGCc4PSjlQHY+cvoaPOX0NY+jaymsi8aOExpb3BhUls+YAAQ3tnPS
tKjlQXJvOX0NHnL6GoaKOVATecvoaPOX0NQ0UcqAlE6nPB4pfOX0NV16t9adRyoCbzl9DR5y+hqG
ijlQE3nL6Gjzl9DUNFHKgJvOX0NHnL6GoaKOVATecvoaPOX0NQ0UcqAm85fQ0ecvoahoo5UBN5y+
ho85fQ1DRRyoCbzl9DR5y+hqGijlQE3nL6Gjzl9DUNFHKgJvOX0NHnL6GoaKOVATecvoaPOX0NQ0
UcqAm85fQ0ecvoahoo5UBK0iOMMpIpmIv7hptFPlQDyYjjKHgYpMQ/3CfrTaKOVAWqRVC52gDJzw
O9VI9WsZbr7NHdwtPkr5Yb5sjrxUl3draqgxukkbai5xk9etRYZYoqpa3rSy+VND5TldykNuV19Q
fxH51DFr1hMnmRyuYz92QxsEb5tvDEYPNIDRoqg2uaatxHB9utzJIGKgOCPlGTk9Bx61KdUsVVWN
7bBX5UmVcH6c0AWqKrrf2jypGt1A0jjKKJASw9QO9WKACiiigAooooAKKKKACiiigAooooAKKKKA
CiiigAppjQuHKqWXoxHIp1FAHOWf/I46t/1zj/kK2aydO/5HPVv+ucf8hWodW04Zzf2gx1/fLx+t
NSsNRctkEn+rb6U6nz3MUFv5zHMZxgrznPT+dTYHpR7RXsLldrlairOB6UYHpVcwrFamSosi7HAZ
GyrA9CCDkVcwPSg4UEnAA5JNLmCxj/2DpeVP2C3BV94wv8WAM/ko/IU46LppUKbKDaMcbfTOP/Qm
/M1cXVNPZFdb21Ku21WEq4J9Bz15FOW/s3aILdW5MqlowJB849R6inzBYoQaJptqmyCxgRdrJgL/
AAtgEfQ4H5CrFraQWUAhtY1iiHIVaf8A2tp+zzPtlv5WCfM8wbOCB97p1Iq2pV1DKQVIyCOho5gs
V6Ks4HpRgelHMFitRVnA9KMD0o5gsVqKs4HpRgelHMFitUcqJL+7kVXRlKsrDIIPUGruB6UYHpS5
gsYw0HS1SNBYw7Y23pkZweP8B+Qp40fThtxZQYXGBt6YyR/6E35mtbA9KMD0p8wWMhNE01EVFsoQ
qggDGcAkE/qq/kKjPhzSDIXOnW+45ydvXOc/zP5mtvA9KMD0o5gsZMuj6fNMJpbOFpQ2/cV53cDP
v0H5CrE8MVyphnRZI3UhlYZB5FXsD0pskkcMbSSsqIoyzMcAD3NLmCxk/wBhaXvVvsFvld2ML/eJ
J/Mkn8TSyaHpkqFJLGBlIwQV9gv8gB+Aq6dV08KrG9tdrqWU+auCBnJHPTg/lTjf2YcobqAMqeYQ
ZBkJ/e+nvT5gsUV0XTkheFbKERugRl29RkkA/iSfrSz6Pp91ax209pFJBF9xGHC9jVr+1bAAE3du
EbG1zINrZzjB6HoamkuraGZIpZ4UlkyURnAZsdcDvRzBYoR6XYw+eI7WJRcArKAOHB6gj8TTDomm
l0Y2MG5G3KdvQ4A/9lH5CtGK8tZ4llhuIZI2baHRwQT6Z9aZ/aNj5nl/a7bfjdt81c49cZo5gsUY
tE02Bw8NlCjiQShlHIcAgH8ifzp66XYpLcSLaxB7kFZiB98HqD9e9XpLu1hieWW4hSNG2s7OAFPo
T2NKl3bSzmGOeF5VG4orgsB64pcwWKttawWcZjtokiQncVQYGemf0FTVZwPSjA9KfMFitUNxbw3S
PDcRrJE64ZGGQeav4HpUF7dw2FsZ7jcEBVflQsSSQAAByeSKXMFjMk8PaTNCIpNPgeMDGCvbnv1/
iP5mnHQtLZ5HaxgLSKFc46gEEfqB+Qq5Bq1jOinz0jYyeVsm/dvv/u7Wwc8jj3plprumX3+ovISS
zKAWCltpIYgHqBg8jjinzBYih0mwt3Dw2kSMJPNBA6NgjI9Op/M1bqK61rTrJ41uLuJDIQFyeOSo
HP8AwJfzq9gelHMBWoqzgelGB6UcwWK1FWcD0owPSjmCxWoqzgelGB6UcwWK1QXFrBeo8NzEssZ2
na3qOh9jWhgelVr6+g0+JZLjfh3EahIy7Mx6AAAmlzBYonQ9MPWxg/1flY29U6Y/ImlOjac0gkNl
DvBJDbe+Qf5gH8KtQ6rYzJGy3EamQsqpIdjZX7w2nnI7imWWu6ZqEKS295AyuhkUMwVio/iwece9
PmCxHHptlDPPNHbRLLPkSsF+/nrn69/WoYtA0qHZ5VhApR96kDkNxz+g/IVcn1nT7a7jtprqNJpG
CIpPVicAfX2q7gelHMBkXWiabfTtPdWUE0rDaWdckjGP5Uw+HtJMez+z4NuAMY7Akj9SfzNbWB6U
YHpS5gMyDS7G2u2uoLWKOdl2F1GDjjj9B+VWqs4HpRgelPmCxWoqzgelGB6UcwWK1Vbmwtb9l+1Q
rL5T74ycgqcYyCK08D0qGe6gtWjWd1Qykhc9OASee3ANLmCxmHQNKbzN1hAfNcO+V+8wzg/qfzNN
Xw7pCeXt063HltuT5ehyDn8wK0P7V0/y0f7bbFHzsIlXDY6455xUa65pj24mS9gZDGJQFYFth77e
uOaOYLDbOwtNPRks7eOFXOWCDqasVDJrNnHBBMDJIs7skYihZ2YjOeAM8YNSf2pZcgzxiRVDNET8
6g4xleo6jrT5gsOopsOqWE6Bo7qHmPzcFgCE/vEHkD3qSzvbbUITLaSpLGHKb15GQcHB70cwWG1V
udPtNQG28t451UnaHGcZxn+QrTwPSjA9KOYLGKfD2kM5c6dbFipUnZ2Oc/zP51Fb+F9Itwu2yjco
zMrSfMV3EkjPp8x/Ot/A9KMD0pcwGK/h/SZI9j6fblc5xt78H+g/KpbbSbCzuPPtrSKKYJ5YdRyF
44/QflWrgelVF1O0fUDZBz54z/AdpIAJAbGCQCDinzBYqjR9PBQizgGxiy4X7pLbiR/wIZ+tRyaD
pUswmksIGkDmTcV/iJyT+fNbOB6UYHpS5gsYieHdIjjMaadbqhGMBfp3/AflU6aVYw3Quo7WJbhU
2LIByFxjH5DFamB6UYHpT5gsZE2j6fdyCa4s4ZZSBl2XJPGP5Uo0fTwzsLOHLvvb5f4sEZ/8eP5m
tbA9KoT6mtrDFLcGCJZSAu5z3/ClzDKzaHpjlS1jASpDD5e4AA/QAfgKJtD0y4jWOaxgdEztBX7u
Tk4+p5q1BqK3MjRwtbu6MUKiQ5BHX+GrMMjO8iOiqyEdDkHI+lPmFYrrGkEMcUShI02qqjoAOgqs
mj6fHK0qWcKuziQkD+IHIPtzzx3rWwPSmTP5URZVDHIABOOpxRzBYojT7QX7Xwt4xdsu0y4+YirF
Qz6pHbXHkTSWyS7C+0yHoOf7vsfyqS2vDeRh7fyJFIByHPfp2o5gsOqKeCK5QwzoskTqQyMOD0q3
BJ5sKuVCk9R1p+B6UcwWMn+x9P8APE/2OHzQWO7b3br+eT+ZqSy06z01HSyt44Fc7mCDqcY/kMVd
lkZZESNFYtk/McYx+HvVFtbtlZ1aa2BjcRsPMPDHgD7vsfyNHMFi1RSfaZPIMypC8YzyrnnH4Vaw
PSjmCxm3VjbahG0N5Ck0e4Ha3rik/s2y+zm3+yxeSX8wx7fl3euPWrV3qFpYSWyXUqxNcyiGLI+8
+CQP0NJFqdjNt2XMW5txCswViFJBODzgEH8qXMOxBZ2Nrp8Zjs4I4EOMqgwOAAP0AH4VYpo1bTmx
tvrQ554mX1x6+pFDapYpJ5f2mFpBIsRRGDFWY4AIHT8afMKw6irOB6UYHpRzBYrUVZwPSjA9KOYL
FRerfWnVZwPSjA9KOYLFairOB6UYHpRzBYrUVZwPSjA9KOYLFairOB6UYHpRzBYrUVZwPSjA9KOY
LFairOB6UYHpRzBYrUVZwPSjA9KOYLFairOB6UYHpRzBYrUVZwPSjA9KOYLFairOB6UYHpRzBYrU
VZwPSjA9KOYLFairOB6UYHpRzBYrUVZwPSjA9KOYLFairOB6UYHpRzBYzbfQ7e31y41NB+8mQLtx
909z+PFWNQszeQFUZUlUHYzLuAJGOlW6gN7bLN5TXEIkzjYXGc+mKm7GZ2h6NcadGxvLv7RNyFKr
hUBxnGfoKpXHg5blWVrsRqcZSGEIjkOHyy5wTx2A6mpPFviZ/DkFr5MCzTXLlV3HgYGax9M+IU8t
1BBqNgkfmyBA8cnTcQB8p68kZwaHeTuJaKxbfwEkibDfsq7WQIkWFUEAcc57Z647YFSDwNAZ2mku
QXZlZsRcEiQP3JPOMHmt5NWsZL42a3KGcEjb7jqM9CR3HWrAniIUiVMP907hz9KXmM5u18GG1ubS
Qag7R20wlRPKA6buOD/td8muoqvFfWs8CzRXETROu5XDjBHrUwkQuUDKWXqoPIoAdRUAvbYyyRie
PfHjeNw+XOQM/kaW4uoLRA9xKkakgAscZJ4/rQBNRUDXlukiRtPGHkcxqu4ZLAEkfXANPE8TBCsi
EPwuGHzfSgCSiiigAooooAKKKKACiiigAooooAKKKKAOf07/AJHPVv8ArnH/ACFNufBGkXWvpqsk
PzD5nhAHlyP2Yj/Of50pWZfFmpbWI+ROh9hVj7Sf+e//AI/T9mprUunXqUW/Ztq+mhuajbPdWnlR
hc7lOCxUYBB7VbrmWuGVdxlfb67jS+bJ/wA9H/76NCprmbT1M3N8qXQ6Wiua82T/AJ6P/wB9GjzZ
P+ej/wDfRquQnnOlqO4iM1vJEr7C6lQ2AcZHXB4Nc95sn/PR/wDvo0ebJ/z0f/vo0cg+cZF4KXbP
9pvfOaZSuTF93Pl5xkn/AJ5j86nPhRPNmxcjyZ5fNkQwgnIdmUA54HzYIwc+2TTPMlzjfJn6mlVp
m+60h/4EaOQOchHgoCFVN6C6DCZiO1Pu4x824Y2/3u/pxXR2kH2WzhgLmTykCbyMFsDGeKwPNlHV
3/M0ebJ/z0f/AL6NPkFznS0VzXmyf89H/wC+jR5sn/PR/wDvo0uQOc6Wiua82T/no/8A30aPNk/5
6P8A99GjkDnOlormvNk/56P/AN9GjzZP+ej/APfRo5A5zpaK5rzZP+ej/wDfRo82T/no/wD30aOQ
Oc6Wiua82T/no/8A30aPNk/56P8A99GjkDnOlormvNk/56P/AN9GjzZP+ej/APfRo5A5zparahat
e2E1vHL5LSDAk2htv4GsPzZP+ej/APfRpRLKTgO+f940cgc4Wfg+O3O6a585/OSXLR5+7Iz4yST1
fue1NXwai2n2U3SvCBuAeEE79mzJOeVx29e/FO8yX+/J+ZpQ8xBIZyB/tGjkHzkT+DN4BN+fNxjz
DESVGXztO7P8fcnpzmr19oD300rPdKEni8qUeSCdo3bdpJ+U/NycH8Kq+bLnG98/U0eZL/fk/M0c
gc5cbRriSzET3cIkWVJldLYKNykEZXdyMD1/HtWZZ+D3WWWG6nV7Vdu0iMB5G8sqWJzwPmPHsOam
82X++/5ml8yX++/HuaOQXMSy+GDJpNtZ/bWLwOJfOKndJJzuZtrDruPAIxT9J8NLpN8J4rgeWI9n
lrHtBOFG48kZ+XsAfUmq/my/33/M0nmyf33/AO+jT5A5jpaK5vzZf77/AJmk82X++/HuaXIHOdLV
PVLJtQsjDHKInDpIrlNwBVgwyMjPT1rG82Ufxv8AmaVXmdtqu5Ppuo5B84XfhH7dcfabi9LXDkiY
hCqMp28BQ3GAo6k0kng8vPCft7CGNy/liPGclyeQcfx9wcY4xTmeZThmccZ+8aGaZPvNIO/U0cgc
41PCLhhJJf75lKFT5ACjb5eOM/8ATP17mulrnFadgSrSEDrgmm+bJ/z0f/vo0+QXMjpaK5rzZP8A
no//AH0aPNk/56P/AN9GlyBznS0VzXmyf89H/wC+jR5sn/PR/wDvo0cgc50tFc15sn/PR/8Avo0e
bJ/z0f8A76NHIHOdLVHVLCS/igEM6wyQTLMrMm8ZGeCMj19ayPNk/wCej/8AfRpyvM5wruT1+9Ry
D5xH8ILJepdS3jPKWDz/ACkK5D7hgBsD053dB35qNvBZfy0bUGMUcRiCiLHWPZ649+me2cVIzzKS
GaQEe5pWaZCQzSAj3NHIHMOi8LOt4t3Jfb5xP5xPkgKTuyRjPHHHX866GucDTspYNIQOpyab5sn/
AD0f/vo0+ToLm6nS0VzXmyf89H/76NHmyf8APR/++jS5A5zpaK5rzZP+ej/99GjzZP8Ano//AH0a
OQOc6Wiua82T/no//fRo82T/AJ6P/wB9GjkDnOlrP1fSV1aFI2lMYUOOFzncjJ/7NmsrzZP+ej/9
9GlEkpBIdzjr8xo5B84+Xwmj3gmjufLG/cQsWDjC/KDnGPl7g9eMHmoD4NkdovM1IlIo/LVVh28b
Nv8Aex2znGffHFSeZL/fk/M0u6fdt3Sbs4xk9aOQXMWm8N28ttZQXDedHayvJtZfvlt38t36VVuf
CbXOoPcNfsIzkJEI/uglTjrj+DrjPPJNKPPJI3Pkcn5ulN3zZxvf/vo0cg+chTwNEjH/AExyp+YB
lJw+AM43bccdMfjW5pOnHTbaSNpVleSZ5mZU2DLHJAGTWSXmUkF5ARx1NBklXG53GRnkmnyC5jpK
K5rzZP8Ano//AH0aPNk/56P/AN9GlyBznS0VzXmyf89H/wC+jR5sn/PR/wDvo0cgc50tZUeieXq3
2v7QTEJGmWHZyHZQpO7PIxnjHfrWf5sn/PR/++jTt02zfuk2+u40cg+c6Oiua82T/no//fRo82T/
AJ6P/wB9GjkFznS0VzXmyf8APR/++jR5sn/PR/8Avo0cgc50tZF7pE17DFG0yxmNTHuTqyEYYcju
O/aqPmyf89H/AO+jTUnuJBlRIeM8PRyD5i1L4bR7xLmLZA6SiXEZIDEdj9a14Y5FeR5CuXI4X2Fc
+ZbgDJEgHX79Is8pJBZwR/tUcgcx09RzxtJEVUgNkEZ6cHNc95sn/PR/++jSNPIq53ufbdRyC5i/
caM9zffaJGQq2C8J+65AKgnjPRjxUdjoB06++027qv7sx+Xk7eTnP9PwqoJbkjIEhGcZ30Ga4BwQ
4Pu9HIPmOit4zFCqMQSOuOlSVzKzyMoO9xntuNL5sn/PR/8Avo0cguY35o5GkjeIplcjDZ7/AP6q
yP8AhHAwmWVklV/9Wrf8svmLfLx1yx5NVWuJQQAzkn/axTvMuePlk5GR89HIPmL1jo0mnw3MUUgK
TuXwxJ2ewrXrl2uJlBLbwB/t0/zZP+ej/wDfRo5BcxranpMWqmAXBPlxMxKY+9lSvXtjOc+1Y8fg
zZam3fUHlR8GR3iG9mG7B3Z4+9yMc+2aeHmYMVeQhRk4J4FBeZTgtIPxNHIPnJJvCVvMXJlALhhn
yhxmERfpjP40yHwm6akt3JqDSbGVgnl44D7gOuPbgD160nmS/wB+T8zQHmPR5Omepp8j3FzaWOko
rmvNk/56P/30aPNk/wCej/8AfRpcgc50tFc15sn/AD0f/vo0ebJ/z0f/AL6NHIHOdLRXNebJ/wA9
H/76NHmyf89H/wC+jRyBznS0VzXmyf8APR/++jR5sn/PR/8Avo0cgc50tFc15sn/AD0f/vo0ebJ/
z0f/AL6NHIHOdLRXNebJ/wA9H/76NHmyf89H/wC+jRyBznS0VzXmyf8APR/++jR5sn/PR/8Avo0c
gc50tFc15sn/AD0f/vo0ebJ/z0f/AL6NHIHOdLRXNebJ/wA9H/76NHmyf89H/wC+jRyBznS0VzXm
yf8APR/++jR5sn/PR/8Avo0cgc50tFc15sn/AD0f/vo0ebJ/z0f/AL6NHIHOdLRXNebJ/wA9H/76
NHmyf89H/wC+jRyBznS0VzXmyf8APR/++jR5sn/PR/8Avo0cgc50tFc15sn/AD0f/vo0ebJ/z0f/
AL6NHIHOdLRXNebJ/wA9H/76NHmyf89H/wC+jRyBznS0VzXmyf8APR/++jR5sn/PR/8Avo0cgc50
tY8vh63l8SRaqcbkTBTHV+zfl/StiiouWcl8QdEn1bTraa3jaZrWQsYVUsXDDHGOeOtc9Y+HdTv9
T0/dFNEkEgmdpzJgAMpxhlxk47V6Yzqn3mAz6mlBBGQcg9xTTtcVtdzAuNAvJ7O4sFu4o7OQyMjC
M+aC+TjOcdWPI5xxVWDwcUCtJNCZAcrhCQh8xWO3J4yFxxjrVr/hNdLM0iILh442KvMkeUXHUnnO
B9K1p9TsbaaOGe7gilkAKI8gBYHgYFK1hvzMC38INbW6bfsMksbJtV4PkZVQphhn33fWrOj+Gn0r
UWmFwrRmLYSFO9zhRkkk4+724rcnuIbWIy3EqRRggFnYAcnA5+tSUAcbJ4HmdCFuLdSoVAqRlRIA
rLufnlvmzmtLVPDT3zWuyaIiGNIyZkLldrBty88McYNbkdxDK7JHKjsv3grAkc45/EH8qkoA48eC
7lpZHku7cFz/AAQ42/LIpYeh+cflU48JzfbYLrzLWJ1kDMsURCIAV4RScc7eeM5PWupooAKKKKAC
iiigAooooAKKKKACiiigAooooA4+b/kbNS/3E/kKybjwhp9xrC3pBWI/NJbj7rt6+w9RXSWCLJ4y
1UOoYeXH1HsK3fs0P/PJP++RTajJWkrl0q1Wi26crXVjl7hC0GyMdxgDA4zUua6P7ND/AM8k/wC+
RR9mh/55J/3yKaaUnLuZNNpI5yiuj+zQ/wDPJP8AvkUfZof+eSf98iq5xchzlOQjeuSAMiuh+zQ/
88k/75FH2aH/AJ5J/wB8ijnDkMV7vIYKCCcjcD19P0pReDP3O3PPfua2fs0P/PJP++RR9mh/55J/
3yKXMh2ZhTzLKFCgjbnqc1DXR/Zof+eSf98ij7ND/wA8k/75FPnQuU5yiuj+zQ/88k/75FH2aH/n
kn/fIo5w5DnKK6P7ND/zyT/vkUfZof8Ankn/AHyKOcOQ5yiuj+zQ/wDPJP8AvkUfZof+eSf98ijn
DkOcoro/s0P/ADyT/vkUfZof+eSf98ijnDkOcoro/s0P/PJP++RR9mh/55J/3yKOcOQ5yiuj+zQ/
88k/75FH2aH/AJ5J/wB8ijnDkOcqSBgsysxwAc1v/Zof+eSf98ij7ND/AM8k/wC+RRzhymIbw7cK
Cp4yQep7mnfbFyfkIXsAegrZ+zQ/88k/75FH2aH/AJ5J/wB8ilzIfKzGFyjTxsRtCjBJ696abvIC
4bABBO7npjP1rb+zQ/8APJP++RR9mh/55J/3yKOZBysxhe4OdmTnqT29Pz5pGugyMpDYIx1ra+zQ
/wDPJP8AvkUfZof+eSf98ijmQuVmItwoUD5/u7eG6e/1pI51RySGbJDcnnIrc+zQ/wDPJP8AvkUf
Zof+eSf98inzoOVmO12oYDBdeMgn26Co47nYGyCSx3E+9bn2aH/nkn/fIo+zQ/8APJP++RS5kPlZ
gzTCXbwcjqSabE4R8kZGCOPcV0H2aH/nkn/fIo+zQ/8APJP++RT50LlMVLpUQIqnAHBJ57/40ovB
g/KxJHduO3+FbP2aH/nkn/fIo+zQ/wDPJP8AvkUuZD5WY32tQCArAHPfnnP+NVa6P7ND/wA8k/75
FH2aH/nkn/fIp86FynOUV0f2aH/nkn/fIo+zQ/8APJP++RRzhyHOUV0f2aH/AJ5J/wB8ij7ND/zy
T/vkUc4chzlFdH9mh/55J/3yKPs0P/PJP++RRzhyHOU+N1QtuBIZSvFdB9mh/wCeSf8AfIo+zQ/8
8k/75FHOHKYouwE2hTgDA59sc077avJ2sSTnlvfNbH2aH/nkn/fIo+zQ/wDPJP8AvkUuZD5WYpul
8soqsBt25zVauj+zQ/8APJP++RR9mh/55J/3yKfOhcpzlFdH9mh/55J/3yKPs0P/ADyT/vkUc4ch
zlFdH9mh/wCeSf8AfIo+zQ/88k/75FHOHIc5RXR/Zof+eSf98ij7ND/zyT/vkUc4chzlSQzCJicZ
zj+ea3/s0P8AzyT/AL5FH2aH/nkn/fIo5w5TFF38uCGJx6077YoJOxiSc8t75rY+zQ/88k/75FH2
aH/nkn/fIpcyHZmEbj5pCB98Ac84xT0ulWMLtJI9+O/+NbX2aH/nkn/fIo+zQ/8APJP++RRzIXKz
HN6OPk/X/PNQTSCRgRnhQPmPJxW/9mh/55J/3yKPs0P/ADyT/vkUcyDlZzlFdH9mh/55J/3yKPs0
P/PJP++RT5w5DnKK6P7ND/zyT/vkUfZof+eSf98ijnDkOcqUzAxbcfNgKTnjAOa3vs0P/PJP++RR
9mh/55J/3yKOcOU5yiuj+zQ/88k/75FH2aH/AJ5J/wB8ijnDkOcoro/s0P8AzyT/AL5FH2aH/nkn
/fIo5w5DnKIJHhUDYrYx/Fjp+FdH9mh/55J/3yKPs0P/ADyT/vkUc4cpz4uZNhVo0bjGS3/1qiUH
LFsDOOhrpfs0P/PJP++RR9mh/wCeSf8AfIo50HKc5TXBZcDGeOtdL9mh/wCeSf8AfIo+zQ/88k/7
5FHOHKc8s8ipgIuR0O7/AOtSvcPImDGmc9d3/wBaug+zQ/8APJP++RR9mh/55J/3yKXOg5Tm0G1A
DjNLXR/Zof8Ankn/AHyKPs0P/PJP++RT5w5TmmB3KVwcZ6nFTfaZBjCKPX5uv6Vv/Zof+eSf98ij
7ND/AM8k/wC+RRzoOU5yeR51xsRTjBIbr+lGa6P7ND/zyT/vkUfZof8Ankn/AHyKOcOUwIpjFuI6
kY/WpjeLvDBCAOi547f4Vs/Zof8Ankn/AHyKPs0P/PJP++RS5kHKzFF4Rj7x/H3zQbpTGVCHkdz0
4xW19mh/55J/3yKPs0P/ADyT/vkUcyHys5yiuj+zQ/8APJP++RR9mh/55J/3yKfOLkOcoro/s0P/
ADyT/vkUfZof+eSf98ijnDkOcoro/s0P/PJP++RR9mh/55J/3yKOcOQ5yiuj+zQ/88k/75FH2aH/
AJ5J/wB8ijnDkOcoro/s0P8AzyT/AL5FH2aH/nkn/fIo5w5DnKK6P7ND/wA8k/75FH2aH/nkn/fI
o5w5DnKK6P7ND/zyT/vkUfZof+eSf98ijnDkOcoro/s0P/PJP++RR9mh/wCeSf8AfIo5w5DnKK6P
7ND/AM8k/wC+RR9mh/55J/3yKOcOQ5yiuj+zQ/8APJP++RR9mh/55J/3yKOcOQ5yiuj+zQ/88k/7
5FH2aH/nkn/fIo5w5DnKK6P7ND/zyT/vkUfZof8Ankn/AHyKOcOQ5yiuj+zQ/wDPJP8AvkUfZof+
eSf98ijnDkOcoro/s0P/ADyT/vkUfZof+eSf98ijnDkOcoro/s0P/PJP++RR9mh/55J/3yKOcOQ5
yiuj+zQ/88k/75FH2aH/AJ5J/wB8ijnDkJaKKKzLMbxHoK67bQoGCSRSAhv9k/eH5fyrWhiSCFIo
lCoihVA7AU+ii4HlreEtW/0uP+z+XZ9jAAg56HIcfmR+FdrqGlXtxcTmBbVormzW2cyu2UwWyQAC
G4b1HSt2svVdet9KlSJ4pppGXdtjA4HTJyR6H8qesrISVlYxF8H3ZuZGmuY5k85ZFMjZL4kDDcAo
5AGByevamQ+F9Xghn2TWivJnCK7BAWjKFuFHcg45Pqc811VpfQXtil3C/wC5dd2W4xjrn0xg1Mrq
33WB4B4PY0vIZhaJ4fl0vU57qTyHM0e0uud332b05GG/St+iigAooooAKKRWDDKkEdOKMjIGRk9q
AFooooAKKKKACiiigAooooAKKKKAOf07/kc9W/65x/yFdBXP6d/yOerf9c4/5CugoAKKKKACiiig
AooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKAC
iiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKK
KKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooo
oAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiig
AooooAKKKKACiiigAooooAKKKKACiiigArlPFUB+1NKYpDuhRI3VWPzBmJHH1FdXUYnjM5hDgyqN
xXuB61UZcruBzdtpN/N4Os9MAW3aU4uDIMlIyxYjHckYGPc1TTwzq6vLGZwEihMUMsUxjL7VbyyQ
Om0sBjnpmuzzilqXqByVzoutLcxC1f8AdRzh0drlyVXcpYEE88Z9fTgVHPoetPDEsLyRMjHzCLtm
8x8cSjP3QOfl9+nFdjRQBycuh6skUmyQzGYkyhrl+T5hKkcgDCkccCo4vD2svaq1xdSfaRGVJW6f
GREoX/yICf8AGuwooWgHHtoWvfbWkF2Qp3ldsuAmS5wfXOV7ce2KG8N6lHPG8bNII0KDddPuwREX
GTz8xRxntuzXYUUAUdIt57awWO5J3bmKqXLlFJyq7j1wOM1eoooAKKKKACiiigAooooAKKKKAOf0
7/kc9W/65x/yFdBXP6d/yOerf9c4/wCQroKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAoo
ooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiii
gAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKA
CiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAK
KKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigArO
15LyTSJl04sLoldhU4P3h/StGigClpSaglko1SSF5/WIY49/esi50ZtMm+3JdalctvGETazf8COM
lfaukppdQcFgD7mhq402k7HNXUd34hsre3aB0aG5R5JZoB5brhuQrdcHAwao22l+IrR4IYW8qONm
+aPb5ZJcktt3DClSMLg4wfqe0DBvukH6UtC0JV7anDHSvESPLJbm6QyFN++ZXdmCY3D5x8m7PBPf
pjit3RINVh1K8N+0rwNyjSOOuTwACRjHsv0PWtyigYUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUU
UUAc/p3/ACOerf8AXOP+QroK5/Tv+Rz1b/rnH/IV0FABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFAB
RRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFF
FFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUU
UAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQ
AUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFAB
RRRQAUUUUAFVbjeuSsRfJH3etWqqT6na21wIZZQr8ZODhc9Nx6DPvTQFDXbG5v7a2igUDddI0m7J
ATac7gCpIzjjNYt9aa1pKyxaebycmNfLePG0N8524bcQvIH4DJrswQRkcg0tIDkriPXJWSSR7wRt
cB2SMIDGqypgDAycqWyDnOKqQxa7bbpWivAzKFPlsDtz5fOOfQ9BnrXcUUAcxoc+uvqNv/aKz+W0
A8xWRVVGwO4HJJz0I69MV09FFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFAHP6d/yOerf9c4/5Cug
rn9O/wCRz1b/AK5x/wAhXQUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFF
ABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUA
FFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAU
UUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRR
RQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABVbUUuJNPmWzk8u
42kxtgHn8as0UAZXhw6g+lJLqkjNPId20qF2DsOKpPNs1O8tnwwlnDGE/KXXYOfcZH6V0VV7uygv
UC3Ee7acqwOGU+oI5FUpWdwOdjnaS0uBaidLOPUNkq2+dyRhBkLjkDcRnbzgnFRzS3aOQialNpzJ
tYTI5cqS/THzf3Rk846109tbQWNuIoEWONefxPUk9z71PUvVgcWt/r8hRHFxEqmB2EVsRsXKblOR
z1bOCeB2xUlnqXiW4OGhKMCzMDCRhgjHy8lQNuQozz1612FFAHGwav4gMdu0kchBlCsotm3MPlyO
VAAGW9P97jnsqKKACiiigAooooAKKKKACiiigAooooAKKKKAOf07/kc9W/65x/yFdBXP6d/yOerf
9c4/5CugoAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKK
KKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooo
oAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiig
AooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKAC
iiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKo/2vb+kn5Uf2vb+kn5U+Viui9RVH+17f
0k/Kj+17f0k/KjlYXRerB0e41iXWr2G+aP7NbnCkR4355HP0rQ/te39JPyo/te39JPyp2fYLobq8
hihjfClQSSHGR0OM1Whu2FnqEduZfMggDKG+YhipPH6cVaOq2zAgq5B6grTIb+yt1IhiZATk4XrS
5WCkjm01u9tbRJLG4mvYmjj8+WdciBz15+XP0zxxTJfEWr2gnu5Sd0gQx2xgJRQI1ZgpyOSWPqeO
BxXVf2vbnPD8e1L/AGvb+kn5U7MLo56/8QalOxt7RWimRm3lYWO3DkKDng5XBrpNKme40m0llk8y
R4UZn27dxwMnHamf2vb+kn5Uf2vb+kn5UcrsF0XqKo/2vb+kn5Uf2vb+kn5UuVhdF6iqP9r2/pJ+
VH9r2/pJ+VHKwui9RVH+17f0k/Kj+17f0k/KjlYXReoqj/a9v6SflR/a9v6SflRysLovUVR/te39
JPyo/te39JPyo5WF0XqKo/2vb+kn5Uf2vb+kn5UcrC6M3Tv+Rz1b/rnH/IV0Fc5pEqz+LtUkTO0x
x9foK6OkMKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKK
KACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAoooo
AKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigA
ooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACi
iigAooooAKKKKACiiigAooooAKKKKACiiigAooooA5eirf8AZl1/cX/voUf2Zdf3F/76FbXRlZlS
irf9mXX9xf8AvoUf2Zdf3F/76FF0FmVKKt/2Zdf3F/76FH9mXX9xf++hRdBZmHJqrxau1n9neRcK
Q0YyRn1rQclY2I6gE1cGl3IJIjXJ6ncOaP7Luf7i/wDfQougsyjPLEtzZwRwrG/ls7EHO4e578+t
XEgjbblsEx5x7+tCaNPGSViUHGPvdB6fSnf2Zdf3B/30KG0PXsILVTnlwfQjmkmt0VWZN3HY/hT/
AOzLr+4P++qT+zLr+4P++hSv5hbyKlFW/wCzLr+4v/fQo/sy6/uL/wB9CndCsypRVv8Asy6/uL/3
0KP7Muv7i/8AfQougsypRVv+zLr+4v8A30KP7Muv7i/99Ci6CzKlFW/7Muv7i/8AfQo/sy6/uL/3
0KLoLMqUVb/sy6/uL/30KP7Muv7i/wDfQougsypRVv8Asy6/uL/30KP7Muv7i/8AfQougsyn4d/5
GTUv+uaV1NcxoMTQ+KNTRxhhGma6esnuaLYKKKKQwooooAKKKKACiiigAooooAKKKKACiiigAooo
oAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiig
AooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKAC
iiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKK
KKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooo
oAKKKKACoL1nSxnaPO8RsV29c4qeihAcFa67qs91Ywg3Hml/9LTbkRjd8vvyOtXb/wAVX9rcajbI
kHmJcAWzFTjyht8wtzyRn2+8K61YY1cusaBz1YLyaQ28LEkxRknOcqOc4z/IflVN3YHON4kvI96C
GOaazyl0q/Lly4VMdcZXL4wTjAHWt3Tb0ajp0F0oAEqBsA5A/HA/lUsltBKsiyQxusuN4ZQQ319a
eiLGioihVUYCgYAFSA6iiigAooooAKKKKACiiigAooooAKKKKAOf07/kc9W/65x/yFdBXP6d/wAj
nq3/AFzj/kK6CgAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACii
igAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKK
ACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooA
KKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAo
oooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigArEm16eLxEmlCwLF8MJfM42dzjHatu
ozDGZxMUXzQpUPjkA9v0poCvqmq2mj2TXV9MsUQOAT3PYD3rjrbxY15cK0N6i+bcgLGZATtLgYGD
g8V2l/p1pqdv5F9Ak8QYNtcZGR0NVZPDejyX0d42m232iIgpIIwCCDkdPc04S5WU+Xlt1MoeJb9J
Z5Xs/NtYpZYztiZMbX2L87Ha2TjoBjn0qy3ipICGurR4YSzL5u9SAEYI5PoAxH161sm1gMLQmGPy
2YsU2jBOc5x9eaiOlWLNcM1nATcjE2YwfMHv61JJhnxvboJGksrlVjTJ+XPzbQwU9ujDnND+McoH
isZAsciLcs5x5YZiOAcFvun9K3H0uxlmeaS0gaR02MxQElemPypq6NpymEixt8wnMZ8sfKc54/Gg
DJfxekWRLYyqyxG4cB1O2ILu3e59qdB4tSaaCNrGeMSSeW7vwikkAYJAzncPT2zWtDpVjbgiGzgQ
HdwsY79fzxTI9F02IxGOxt1MR3IRGMqfWgChdeKI7fUrizS1mlaAEFhwCwTeRk8Abe5PXiqKeNvM
CzLYObedI3tiHyz7t5+YAHbwmcde1dBNpVjcztNPaQSSsu0syAkimPommyb99jbnect+7HPOf5kn
8aAMo+LwyGSHT53jycFnVT8sYdsg9MA/nTpfFipGJI7GeRJS3kFSDvCsFYkDJUAn3rYGnWYXaLWE
LzwEGORtP5jj6VG+jac/m7rG3Pm43/ux82OlAE9pcpeWcNxH9yVA45B4I9RU1NjjSKNY41CIowqq
MACnUAFFFFABRRRQAUUUUAFFFFAHP6d/yOerf9c4/wCQroK5/Tv+Rz1b/rnH/IV0FABRRRQAUUUU
AFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQA
UUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABR
RRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFF
FABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUU
AFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAcO/inW206XU1js1s4rkwldrFz8+319xXS3uuQ2MzRtB
PJsVWkaNQQm4kKOSCSSMcfjis1vAOjvK7sLkq7mQx+cdoJOelbUumWk93HcyQhpo8bWye3IyOhwe
melZwU0/eOrETpSS9mvwt2/4JmnxZaNMiwRTTLJhU2AfOx2cDJA43jOenNNj8VwHS4bqeMwyO3z2
+4M6LluePXbWhHomnQ+V5dpGvksXjwPuktuJ/wC+uacdIsWtBatbqYA5cISSATnOPzP51ocpQTxV
bNvDWl3GygnDqgyQFJGd2Pusp5498jFNj8X2MkSzLDc+R0klCqViPzcHB5+6emR09avzaHp1wuJb
VD827OSCDhRkEdDhV/KqDeE7P7fBNGfLgiO7yQudzfN1JPOdx6g/UUAhYfF+mzWMtzudTGCfKO0u
/APy4JB4I78d6b/wmFl9olt/IuTPG2wxqFJL5UFeG4wWHJwOuCcVpRaRZQ2clqkH7iX76sxbPbqT
ntVe/wDDtjfiUshR5mVnZWPOGUnAzgE7RkjmgCD/AISq0w7fZ7rEbbJcICY5OQEIByTkdsjpzzVd
vGUGbZo7aRopT853LmNRuyxwSD93oOa0xoGmAofsifKu0cnnryeeTyeTzzSL4e0xQo+yKSDu3MxJ
Jz3JOT+P0oAqv4pgQlDZXpmVDI0QVdyoADu+9joeg59q07C/h1KAz2xLQ7iqv2fHce1QRaLaWqE2
cSwzBWVJTlyuQB3PI4HHtU+n2MOm6fBZ26hYoUCKAMUAWaKKKACiiigAooooAKKKKAOf07/kc9W/
65x/yFdBXP6d/wAjnq3/AFzj/kK6CgAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACii
igAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKK
ACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooA
KKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAo
oooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAgu7pLOHz
JMkZwAO5rE0zxCXvpbe8UK0s2IdrbuD0z6fStHW9Ok1TTzBDMInDBgxGRx2rm9O8G6jFrEN5d3UA
SGQMFQFi2PrjFc1T2vOuXYtRg1d7mrD4rheW682OJYrd3Rgk2+U7X2fcC9z796nbxRZRoHkjukj8
0xF2hIAYHB59jx+dTDQLTMgL3Jikcu0RnbZuLbiQM8c80y78Nade7fOikwGdsLIwyWbc2ef73NdJ
BDD4rs3jgaeK4t/O3bfMjP8ACW9Ov3T09vWtHTNSg1azFzbb/LJwNy4NUf8AhFdM+0JNsm3I25R5
z4BySO/bcfzq9p+m2+mQvHbBsO29mdixY4xkk+wFAFuiiigAooooAKKKKACiiigAooooAKKKKACi
iigAooooA5/Tv+Rz1b/rnH/IV0Fc/p3/ACOerf8AXOP+QroKACiiigAooooAKKKKACiiigAooooA
KKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAo
oooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACii
igAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKK
ACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooA
KKKKACiiigBsj+XGz4J2gnA6ms7R9X/tTzsxhfLI5U5HPb6jvVy+tRe2M9sXKCVCm4dRkVmaLoEm
l3ktzNem4Z4xGBs2gDOfU1nLm51bbqO6StYdH4ltHnulcFIrYsHkLqcFTg5QHcOenHPHqKefEumH
cEuNzqhdlCMCuM8Nx8p+U8HHSmXfhmzv3drySacMGChyp8vPocZPQdSRUX/CIWBEYLzYjVgACq/e
zk8KMfePAwOmRWgi03iLTVUlrg8EKMRsdxzjC4Hzc8cZ5pH8S6TGMveIq7BJuKttwRkDOMZxzt6+
1Mj8NWcUsThpyIX3woX4i+bcQOOhIGc5/CoU8IadHL5kZlRtmwkFcngrnJGc4PTOPagC/JrNlFYr
dvI4idiqjym3FueNuM54PbtUdhr9hqLQpBN+9lQOqMp7qGxnpkAgkA5pqeH7aPTorSKWePyZDIkq
sA6sc5xxgDDEYxjmo9P8L2GmXqXVsHDqu3DbTk7QuScZzgeuPagCabxBpsCqz3I+Y4UKjMWO4rwA
MnlT+VPbXLBVz5+7kjCozEkYyAAMn7y/nVKXwnZS+cDNc4l42llKoNxbABBHUk85/QUj+ErJ2JE9
2oxgKJeFztyeRznYM5yOvrQvMCf/AISXTDs23GQ2CWKMoUHOMkjAPB4PPFWY9Xs5bSS5SRzHGQGH
lNuBOMDbjdzkdu9UIfCOnQ2Ztv3zxFlYhn643Y6Af3j0qydBgaxntnnuXNwytJMXG9sYx2xjAAxj
B75zQBFceKdMt7Vp/NkkVUL4jhcnvweOD8p4ODxUqeINPYSbpthjfy2Uqc55xxj/AGT+RqrD4Ssr
e2e3inu0hkVlkRZABJndyeO249MDp1xUreGLJpFkZ7jzQsq+YJMNmT7x47jJx6ZNAEsfiLTZZI41
uG8yRioQxOGBBAOQRleWHJx1rTrBh8IWEPl7ZLghJvOxlRlvl9FGPuj7uO9b1ABRRRQAUUUUAFFF
FAHP6af+Kz1Yd/Kj/kK6CsDTdo8Y6sAi7ikZLY5PA4rfoAKKKKACiiigAooooAKKKKACiiigAooo
oAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiig
AooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKAC
iiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKK
KKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooo
oAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooA5/Tv8Akc9W
/wCucf8AIV0Fc/p3/I56t/1zj/kK6CgAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACi
iigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKK
KACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAoooo
AKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigA
ooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACi
iigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigDn9O/5HPVv+ucf8hXQVz+nf8A
I56t/wBc4/5CugoAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAo
oooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACii
igAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKK
ACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooA
KKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAo
oooAKKKKACiiigAooooAKKKKACiiigAooooA5/Tv+Rz1b/rnH/IV0Fc/p3/I56t/1zj/AJCugoAK
KKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAoo
ooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiii
gAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKA
CiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAK
KKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAoo
ooAKKKKACiiigAooooA5/Tv+Rz1b/rnH/IV0Fc/p3/I56t/1zj/kK6CgAooooAKKKKACiiigAooo
oAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiig
AooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKAC
iiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKK
KKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooo
oAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiig
Dir1r8eLr5dNm8uVlTOFB3DaPWp/L8Uf8/Lf9+0qW2/5KBdf7g/9AFburXradpdxdIgdo1yAemen
PtQBzvl+KP8An5b/AL9pR5fij/n5b/v2laMmvDTpHt7mU3VxvVVRITH1UnqSQR8pwRQ3iq2RmL28
6wqMlyBkHy/Mxt69M/jT5WK6M7y/FH/Py3/ftKPL8Uf8/Lf9+0rTi8TRSGIfZpVLltxYgBVVQxbJ
6jBplt4iN9e2sUMLRBpzHKJBzjyt6kH8qfKwujMlPiSCMyS3hRB1JiWlU+JXUMt2SpGQfLWuh1z/
AJA9x9B/MVQ1C6ls9Cee32+ckS7NwyMnA5H41IzOx4m/5+z/AN+0ox4m/wCfs/8AftKWPX5nkuht
QgJGIQq8lznPfnkHj2qyniGGRYmSGVkcLluBglC4GPoKAKuPE3/P2f8Av2lGPE3/AD9n/v2lWX1+
P5HWNwgG5wQCSDEZABzxRL4ihigaRreXchO5AQSAFDZ9+GFAFbHib/n7P/ftKMeJv+fs/wDftK07
fU0uZbmNIZA8AztYgFxzyB+FVJPEMYCzJG5gCOxGBliqqcA54xnFAFfHib/n7P8A37SjHib/AJ+z
/wB+0q0de8p51uLVk8suBtcNnagc/wA6Zd+IFRJ0gibzEiZ0duUJXGQf++qAIMeJv+fs/wDftKbI
/iSJC8l4VUdSY14q6fEEP77ZbzyeWSAVH3iGCn6cmrFzOtzorzJ92SLcOc/qKAMpG8SSIHS8JU8g
+WvNOx4m/wCfs/8AftK0Zrh7Tw+1xFt3xQbl3DIziq1x4gEQlEds7OhZVywAYqyg/T7woAr48Tf8
/Z/79pRjxN/z9n/v2lWz4hhxIVt5m2NtGMYY7wh57cn8qhXX2h843cJOJ2QLGQdihguT68mgCLHi
b/n7P/ftKMeJv+fs/wDftKvSa3HHEsn2eQpJJ5cRyPnIJB9x909afaavHeXa26wyxlow6mQYzwCQ
B14zQBnY8Tf8/Z/79pRjxN/z9n/v2lTN4hVbnJib7Oyfu/7zt5mz8BxV06ov2KC4WGQmeRYljOFI
YkjnPbigDMx4m/5+z/37SkY+JUUs14QoGSfLWrkXiCCY4jgmJZwkfYOS23r0HIqazupL3QxcShQ7
xvkKOO4/pQBlxv4jmQPHellPQiNafjxN/wA/Z/79pWlZOYtEEi43JGzDPqMmqS6vcRPAk5WQypDJ
lI8H5wcrjPtwaAIseJv+fs/9+0ox4m/5+z/37SrK+I4mQN9ml4UvINy5jAIHI9fmHFLJ4hhRN4t5
mVgWjxj51DbSfbB9aAKuPE3/AD9n/v2lGPE3/P2f+/aU+XxA6mcRIGRVkcSYHyhVUjjPzfe9RVs6
3H5nlCCVpQ4jK5A+YtgDPv1+lAFHHib/AJ+z/wB+0ox4m/5+z/37Sr9prUN3dxwLFIhdSQz4AyM5
UeuMHpUEeveW0yXMLFlkdYimMOA4QD2OSOaAK+PE3/P2f+/aUyV/EcKF5b0qo6kxrVu48RxwrIPs
0nmIr5DMAu9c5XPfp2q1qDtJozuy7WZFYrnOM4oAzFPiV1DLdkqRkHy1pceJv+fs/wDftK07q6ay
0gToAWVEA3dATgZPsM1Xm1gWAnSZmuZYmwUSPy2A27uh68AnigCpjxN/z9n/AL9pRjxN/wA/Z/79
pVtvENujNuhlEag4fjkhA+MfRhSTaxLb36RzWxWIxgsNwLKS+0HIPI56UAVceJv+fs/9+0ox4m/5
+z/37SrL+IoUj3/ZpiGXemCPmTJG4+nI71JqmrG0ts26b5Wh85ScbVXIGT69e1AFLHib/n7P/ftK
MeJv+fs/9+0qefX1MU32aNgyMFV3XKtiQI3068VYu9Uaz1byHj3QGNCWHVWZyoPuOlAFDHib/n7P
/ftKMeJgM/az/wB+0qy/iKFIw5tpsMokXkcoSRuJ7dO9XdOvWv4JJTGIwsrRrhs7gDjNAGLFL4in
UtDfF1BxkRL/AIVJjxN/z9n/AL9pWjo3/Hkf+ujVSXWLmNpp5kLWsMkyybYsbQpwuGzyTQBHjxN/
z9n/AL9pRjxN/wA/Z/79pVttfjVSfs02VSR3GQMBMZxnrnIxTJtfCxTgQPFIivhmIYZUAnjPowoA
r48Tf8/Z/wC/aUY8Tf8AP2f+/aVcXXY2naBbeUz79ioSAX4Jz7D5TUljrMN/ceVHHIoMYkVnwNw4
6D2zQBkyXWvwvsl1DY2M4MS/4U37frf/AEEx/wB+l/wq9rH/AB+r/wBcx/M1RoAPt+t/9BMf9+l/
wo+363/0Ex/36X/CiigA+363/wBBMf8Afpf8KPt+t/8AQTH/AH6X/CiigA+363/0Ex/36X/Cj7fr
f/QTH/fpf8KKKAD7frf/AEEx/wB+l/wo+363/wBBMf8Afpf8KKKAAXutBc/2oS+f+eS4x+VH2/W/
+gmP+/S/4UUUAH2/W/8AoJj/AL9L/hR9v1v/AKCY/wC/S/4UUUAH2/W/+gmP+/S/4Ufb9b/6CY/7
9L/hRRQAfb9b/wCgmP8Av0v+FH2/W/8AoJj/AL9L/hRRQAfb9b/6CY/79L/hR9v1v/oJj/v0v+FF
FAB9v1v/AKCY/wC/S/4Ufb9b/wCgmP8Av0v+FFFAB9v1v/oJj/v0v+FH2/W/+gmP+/S/4UUUAH2/
W/8AoJj/AL9L/hR9v1v/AKCY/wC/S/4UUUAH2/W/+gmP+/S/4Ufb9b/6CY/79L/hRRQAfb9b/wCg
mP8Av0v+FH2/W/8AoJj/AL9L/hRRQAfb9b/6CY/79L/hR9v1v/oJj/v0v+FFFAB9v1v/AKCY/wC/
S/4Ufb9b/wCgmP8Av0v+FFFAB9v1v/oJj/v0v+FH2/W/+gmP+/S/4UUUAH2/W/8AoJj/AL9L/hR9
v1v/AKCY/wC/S/4UUUAH2/W/+gmP+/S/4Ufb9b/6CY/79L/hRRQAfb9b/wCgmP8Av0v+FH2/W/8A
oJj/AL9L/hRRQAfb9b/6CY/79L/hR9v1v/oJj/v0v+FFFAB9v1v/AKCY/wC/S/4Ufb9b/wCgmP8A
v0v+FFFAB9v1v/oJj/v0v+FH2/W/+gmP+/S/4UUUAH2/W/8AoJj/AL9L/hR9v1v/AKCY/wC/S/4U
UUAH2/W/+gmP+/S/4Ufb9b/6CY/79L/hRRQAfb9b/wCgmP8Av0v+FH2/W/8AoJj/AL9L/hRRQAfb
9b/6CY/79L/hR9v1v/oJj/v0v+FFFAB9v1v/AKCY/wC/S/4Ufb9b/wCgmP8Av0v+FFFAB9v1v/oJ
j/v0v+FH2/W/+gmP+/S/4UUUAH2/W/8AoJj/AL9L/hR9v1v/AKCY/wC/S/4UUUAH2/W/+gmP+/S/
4Ufb9b/6CY/79L/hRRQAfb9b/wCgmP8Av0v+FH2/W/8AoJj/AL9L/hRRQAfb9b/6CY/79L/hR9v1
7 years, 9 months
oVirt memory quota
by Staniforth, Paul
Hello,
when I restart the ovirt engine all quotas show 100% usage for memory; if I open the quota in edit mode and close it again, it updates the memory used.
I'm using 4.1.3.5-1.el7.centos
Regards,
Paul S.
To view the terms under which this email is distributed, please go to:-
http://disclaimer.leedsbeckett.ac.uk/disclaimer/disclaimer.html
7 years, 9 months
Ovirt node ready for production env?
by Lionel Caignec
Hi,
I did not test it myself, so I prefer to ask before using it (https://www.ovirt.org/node/).
Can oVirt Node be used in a production environment?
Is it possible to add some software on the host (e.g. backup tools, ossec, ...)?
How do security updates work? Are they managed by oVirt, or can I plug oVirt Node into Spacewalk/Katello?
Sorry for my "noob questions".
Regards
--
Lionel
7 years, 9 months
Problems with oVirt3.5 engine + CentOS6 Host
by Antonio Sebastian Salles M.
Hello friends,
For a few days I've been trying to register a CentOS 6 host on an
oVirt 3.5 engine, but I have not been able to finish the process
successfully.
I copied the SSH keys and started vdsmd without problems, then tried
to add the host through the admin portal in Firefox, but without success.
Could you help me? This is the error ... Thank you very much!
[root@ovirt ovirt-engine]# tail -f -n0 /var/log/ovirt-engine/engine.log
2017-07-19 17:21:25,078 WARN
[org.ovirt.engine.core.compat.backendcompat.PropertyInfo]
(ajp--127.0.0.1-8702-3) Unable to get value of property: vdsName for
class org.ovirt.engine.core.common.businessentities.VdsStatic
2017-07-19 17:21:25,079 WARN
[org.ovirt.engine.core.compat.backendcompat.PropertyInfo]
(ajp--127.0.0.1-8702-3) Unable to get value of property: vdsName for
class org.ovirt.engine.core.common.businessentities.VdsStatic
2017-07-19 17:21:25,079 INFO
[org.ovirt.engine.core.bll.InstallVdsCommand] (ajp--127.0.0.1-8702-3)
[7a173239] Running command: InstallVdsCommand internal: false.
Entities affected : ID: 9b82c66f-46c8-49dc-8ab3-2b5e27c7bdd5 Type:
VDSAction group EDIT_HOST_CONFIGURATION with role type ADMIN
2017-07-19 17:21:25,088 WARN
[org.ovirt.engine.core.compat.backendcompat.PropertyInfo]
(ajp--127.0.0.1-8702-3) Unable to get value of property: vdsName for
class org.ovirt.engine.core.common.businessentities.VdsStatic
2017-07-19 17:21:25,088 WARN
[org.ovirt.engine.core.compat.backendcompat.PropertyInfo]
(ajp--127.0.0.1-8702-3) Unable to get value of property: vdsName for
class org.ovirt.engine.core.common.businessentities.VdsStatic
2017-07-19 17:21:25,090 INFO
[org.ovirt.engine.core.bll.InstallVdsInternalCommand]
(ajp--127.0.0.1-8702-3) [7a173239] Lock Acquired to object EngineLock
[exclusiveLocks= key: 9b82c66f-46c8-49dc-8ab3-2b5e27c7bdd5 value: VDS
, sharedLocks= ]
2017-07-19 17:21:25,093 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-3) [7a173239] Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: Failed to verify Power Management
configuration for Host kvm2.segic.cl.
2017-07-19 17:21:25,095 INFO
[org.ovirt.engine.core.bll.InstallVdsInternalCommand]
(org.ovirt.thread.pool-8-thread-32) [7a173239] Running command:
InstallVdsInternalCommand internal: true. Entities affected : ID:
9b82c66f-46c8-49dc-8ab3-2b5e27c7bdd5 Type: VDS
2017-07-19 17:21:25,095 INFO
[org.ovirt.engine.core.bll.InstallVdsInternalCommand]
(org.ovirt.thread.pool-8-thread-32) [7a173239] Before Installation
host 9b82c66f-46c8-49dc-8ab3-2b5e27c7bdd5, kvm2.segic.cl
2017-07-19 17:21:25,105 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-3) [7a173239] Correlation ID: 7a173239, Call
Stack: null, Custom Event ID: -1, Message: Host kvm2.segic.cl
configuration was updated by admin@internal.
2017-07-19 17:21:25,106 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(org.ovirt.thread.pool-8-thread-32) [7a173239] START,
SetVdsStatusVDSCommand(HostName = kvm2.segic.cl, HostId =
9b82c66f-46c8-49dc-8ab3-2b5e27c7bdd5, status=Installing,
nonOperationalReason=NONE, stopSpmFailureLogged=false), log id: 8c3992
2017-07-19 17:21:25,109 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(org.ovirt.thread.pool-8-thread-32) [7a173239] FINISH,
SetVdsStatusVDSCommand, log id: 8c3992
2017-07-19 17:21:25,133 INFO
[org.ovirt.engine.core.bll.InstallerMessages]
(org.ovirt.thread.pool-8-thread-32) [7a173239] Installation
158.170.39.12: Connected to host 158.170.39.12 with SSH key
fingerprint: 16:2b:79:78:60:ea:d2:24:0a:8d:7c:2f:2e:8e:20:51
2017-07-19 17:21:25,138 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-32) [7a173239] Correlation ID:
7a173239, Call Stack: null, Custom Event ID: -1, Message: Installing
Host kvm2.segic.cl. Connected to host 158.170.39.12 with SSH key
fingerprint: 16:2b:79:78:60:ea:d2:24:0a:8d:7c:2f:2e:8e:20:51.
2017-07-19 17:21:25,223 INFO [org.ovirt.engine.core.bll.VdsDeploy]
(org.ovirt.thread.pool-8-thread-32) [7a173239] Installation of
158.170.39.12. Executing command via SSH umask 0077;
MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d -t ovirt-XXXXXXXXXX)";
trap "chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1; rm -fr
\"${MYTMP}\" > /dev/null 2>&1" 0; tar --warning=no-timestamp -C
"${MYTMP}" -x && "${MYTMP}"/setup DIALOG/dialect=str:machine
DIALOG/customization=bool:True <
/var/cache/ovirt-engine/ovirt-host-deploy.tar
2017-07-19 17:21:25,223 INFO
[org.ovirt.engine.core.utils.archivers.tar.CachedTar]
(org.ovirt.thread.pool-8-thread-32) Tarball
'/var/cache/ovirt-engine/ovirt-host-deploy.tar' refresh
2017-07-19 17:21:25,254 INFO
[org.ovirt.engine.core.uutils.ssh.SSHDialog]
(org.ovirt.thread.pool-8-thread-32) SSH execute root@158.170.39.12
'umask 0077; MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d -t
ovirt-XXXXXXXXXX)"; trap "chmod -R u+rwX \"${MYTMP}\" > /dev/null
2>&1; rm -fr \"${MYTMP}\" > /dev/null 2>&1" 0; tar
--warning=no-timestamp -C "${MYTMP}" -x && "${MYTMP}"/setup
DIALOG/dialect=str:machine DIALOG/customization=bool:True'
2017-07-19 17:21:25,351 ERROR [org.ovirt.engine.core.bll.VdsDeploy]
(VdsDeploy) Error during deploy dialog: java.lang.NullPointerException
at org.ovirt.engine.core.bll.VdsDeploy._threadMain(VdsDeploy.java:835)
[bll.jar:]
at org.ovirt.engine.core.bll.VdsDeploy.access$2000(VdsDeploy.java:83) [bll.jar:]
at org.ovirt.engine.core.bll.VdsDeploy$51.run(VdsDeploy.java:969) [bll.jar:]
at java.lang.Thread.run(Thread.java:748) [rt.jar:1.7.0_141]
2017-07-19 17:21:25,353 ERROR [org.ovirt.engine.core.bll.VdsDeploy]
(org.ovirt.thread.pool-8-thread-32) [7a173239] Error during host
158.170.39.12 install: java.lang.NullPointerException
at org.ovirt.engine.core.bll.VdsDeploy._threadMain(VdsDeploy.java:835)
[bll.jar:]
at org.ovirt.engine.core.bll.VdsDeploy.access$2000(VdsDeploy.java:83) [bll.jar:]
at org.ovirt.engine.core.bll.VdsDeploy$51.run(VdsDeploy.java:969) [bll.jar:]
at java.lang.Thread.run(Thread.java:748) [rt.jar:1.7.0_141]
2017-07-19 17:21:25,354 ERROR
[org.ovirt.engine.core.bll.InstallerMessages]
(org.ovirt.thread.pool-8-thread-32) [7a173239] Installation
158.170.39.12: null
2017-07-19 17:21:25,357 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-32) [7a173239] Correlation ID:
7a173239, Call Stack: null, Custom Event ID: -1, Message: Failed to
install Host kvm2.segic.cl. <UNKNOWN>.
2017-07-19 17:21:25,357 ERROR [org.ovirt.engine.core.bll.VdsDeploy]
(org.ovirt.thread.pool-8-thread-32) [7a173239] Error during host
158.170.39.12 install, prefering first exception:
java.lang.NullPointerException
at org.ovirt.engine.core.bll.VdsDeploy._threadMain(VdsDeploy.java:835)
[bll.jar:]
at org.ovirt.engine.core.bll.VdsDeploy.access$2000(VdsDeploy.java:83) [bll.jar:]
at org.ovirt.engine.core.bll.VdsDeploy$51.run(VdsDeploy.java:969) [bll.jar:]
at java.lang.Thread.run(Thread.java:748) [rt.jar:1.7.0_141]
2017-07-19 17:21:25,358 ERROR
[org.ovirt.engine.core.bll.InstallVdsInternalCommand]
(org.ovirt.thread.pool-8-thread-32) [7a173239] Host installation
failed for host 9b82c66f-46c8-49dc-8ab3-2b5e27c7bdd5, kvm2.segic.cl.:
java.lang.NullPointerException
at org.ovirt.engine.core.bll.VdsDeploy._threadMain(VdsDeploy.java:835)
[bll.jar:]
at org.ovirt.engine.core.bll.VdsDeploy.access$2000(VdsDeploy.java:83) [bll.jar:]
at org.ovirt.engine.core.bll.VdsDeploy$51.run(VdsDeploy.java:969) [bll.jar:]
at java.lang.Thread.run(Thread.java:748) [rt.jar:1.7.0_141]
2017-07-19 17:21:25,359 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(org.ovirt.thread.pool-8-thread-32) [7a173239] START,
SetVdsStatusVDSCommand(HostName = kvm2.segic.cl, HostId =
9b82c66f-46c8-49dc-8ab3-2b5e27c7bdd5, status=InstallFailed,
nonOperationalReason=NONE, stopSpmFailureLogged=false), log id:
5c71c632
2017-07-19 17:21:25,362 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(org.ovirt.thread.pool-8-thread-32) [7a173239] FINISH,
SetVdsStatusVDSCommand, log id: 5c71c632
2017-07-19 17:21:25,365 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-32) [7a173239] Correlation ID:
7a173239, Call Stack: null, Custom Event ID: -1, Message: Host
kvm2.segic.cl installation failed. Please refer to
/var/log/ovirt-engine/engine.log and log logs under
/var/log/ovirt-engine/host-deploy/ for further details..
2017-07-19 17:21:25,365 INFO
[org.ovirt.engine.core.bll.InstallVdsInternalCommand]
(org.ovirt.thread.pool-8-thread-32) [7a173239] Lock freed to object
EngineLock [exclusiveLocks= key: 9b82c66f-46c8-49dc-8ab3-2b5e27c7bdd5
value: VDS , sharedLocks= ]
--
Antonio Salles | Fedora Ambassador
7 years, 9 months
NullPointerException when changing compatibility version to 4.0
by Marcel Hanke
Hi,
I currently have a problem with changing one of our clusters to compatibility
version 4.0.
The log shows a NullPointerException after several successful VMs:
2017-07-19 11:19:45,886 ERROR [org.ovirt.engine.core.bll.UpdateVmCommand]
(default task-31) [1acd2990] Error during ValidateFailure.:
java.lang.NullPointerException
at
org.ovirt.engine.core.bll.UpdateVmCommand.validate(UpdateVmCommand.java:632)
[bll.jar:]
at
org.ovirt.engine.core.bll.CommandBase.internalValidate(CommandBase.java:886)
[bll.jar:]
at
org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:391)
[bll.jar:]
at org.ovirt.engine.core.bll.Backend.runAction(Backend.java:493)
[bll.jar:]
.....
On other clusters with the exact same configuration, the change to 4.0 was
successful without a problem.
Turning off the cluster for the change is also not possible because of the >1200
VMs running on it.
Does anyone have an idea what to do, or what to look for?
Thanks
Marcel
7 years, 9 months
ovirt-hosted-engine state transition messages
by Jim Kusznir
Hello:
I find that I often get random-seeming messages. A lot of them mention
"ReintializeFSM", but I also get engine down, engine start, etc.
messages. The whole time, nothing appears to be happening on the cluster,
and I can rarely find anything wrong or any trigger/cause. Is this
normal? What causes this (beyond obvious hardware issues / hosts
rebooting)? Most of the time when I get these, my cluster is going along
smoothly, and nothing (not even administrative access) is interrupted.
Could ISP issues cause these messages to be generated?
Thanks!
--Jim
7 years, 9 months
ovirt on sdcard?
by Lionel Caignec
Hi,
I'm planning to install some new hypervisors (oVirt) and I'm wondering if it's possible to get them installed on an SD card.
I know there are write limitations on this kind of storage device.
Is it a viable solution? Is there a tutorial somewhere about tuning oVirt for this kind of storage?
Thanks
--
Lionel
7 years, 9 months
workflow suggestion for the creating and destroying the VMs?
by Arman Khalatyan
Hi,
Can someone share their experience with dynamically creating and removing VMs
based on the load?
Currently I am just creating a clone of the Apache worker with the Python SDK;
is there a way to copy some config files to the VM before starting it?
Thanks,
Arman.
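A minimal sketch of one way to do this, assuming the version 4 Python SDK (ovirtsdk4) and a template-based worker: clone the VM from the template, then start it with a cloud-init custom script that writes the config file on first boot. The engine URL, credentials, cluster, template and VM names below are placeholders, not values from the original post.

import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Connect to the engine API (placeholder URL and credentials).
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)
vms_service = connection.system_service().vms_service()

# Clone a new worker VM from the template.
vm = vms_service.add(
    types.Vm(
        name='apache-worker-01',
        cluster=types.Cluster(name='Default'),
        template=types.Template(name='apache-worker-template'),
    )
)

# Wait until the clone is ready (status reaches DOWN) before starting it.
vm_service = vms_service.vm_service(vm.id)
while vm_service.get().status != types.VmStatus.DOWN:
    time.sleep(5)

# First boot with cloud-init: write a worker-specific config file.
custom_script = """#cloud-config
write_files:
  - path: /etc/httpd/conf.d/worker.conf
    content: |
      # per-worker settings go here
"""
vm_service.start(
    use_cloud_init=True,
    vm=types.Vm(
        initialization=types.Initialization(
            host_name='apache-worker-01',
            custom_script=custom_script,
        ),
    ),
)

connection.close()

Tearing a worker down again would be the reverse: stop() the VM through the same vm_service and call its remove() method once it reports DOWN.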
7 years, 9 months
Ovirt Engine Reports oVirt >= 4
by Victor José Acosta Domínguez
Hello everyone, quick question: is ovirt-engine-reports deprecated on oVirt
>= 4?
I ask because I can't find the ovirt-engine-reports package in the oVirt 4 repo.
Regards
Victor Acosta
7 years, 9 months
Problems with oVirt3.5 engine + CentOS6 Host
by Antonio Sallés
Hello friends,
Since yesterday I've been trying to register a CentOS 6 host on an oVirt
3.5 engine, but I have not been able to finish the process successfully.
I copied the SSH keys and started vdsmd without problems, then tried to
add the host through the admin portal in Firefox, but without success.
Could you help me? This is the error ... Thank you very much!
[root@ovirt ovirt-engine]# tail -f -n0 /var/log/ovirt-engine/engine.log
2017-07-19 17:21:25,078 WARN
[org.ovirt.engine.core.compat.backendcompat.PropertyInfo]
(ajp--127.0.0.1-8702-3) Unable to get value of property: vdsName for
class org.ovirt.engine.core.common.businessentities.VdsStatic
2017-07-19 17:21:25,079 WARN
[org.ovirt.engine.core.compat.backendcompat.PropertyInfo]
(ajp--127.0.0.1-8702-3) Unable to get value of property: vdsName for
class org.ovirt.engine.core.common.businessentities.VdsStatic
2017-07-19 17:21:25,079 INFO
[org.ovirt.engine.core.bll.InstallVdsCommand] (ajp--127.0.0.1-8702-3)
[7a173239] Running command: InstallVdsCommand internal: false. Entities
affected : ID: 9b82c66f-46c8-49dc-8ab3-2b5e27c7bdd5 Type: VDSAction
group EDIT_HOST_CONFIGURATION with role type ADMIN
2017-07-19 17:21:25,088 WARN
[org.ovirt.engine.core.compat.backendcompat.PropertyInfo]
(ajp--127.0.0.1-8702-3) Unable to get value of property: vdsName for
class org.ovirt.engine.core.common.businessentities.VdsStatic
2017-07-19 17:21:25,088 WARN
[org.ovirt.engine.core.compat.backendcompat.PropertyInfo]
(ajp--127.0.0.1-8702-3) Unable to get value of property: vdsName for
class org.ovirt.engine.core.common.businessentities.VdsStatic
2017-07-19 17:21:25,090 INFO
[org.ovirt.engine.core.bll.InstallVdsInternalCommand]
(ajp--127.0.0.1-8702-3) [7a173239] Lock Acquired to object EngineLock
[exclusiveLocks= key: 9b82c66f-46c8-49dc-8ab3-2b5e27c7bdd5 value: VDS
, sharedLocks= ]
2017-07-19 17:21:25,093 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-3) [7a173239] Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: Failed to verify Power Management
configuration for Host kvm2.segic.cl.
2017-07-19 17:21:25,095 INFO
[org.ovirt.engine.core.bll.InstallVdsInternalCommand]
(org.ovirt.thread.pool-8-thread-32) [7a173239] Running command:
InstallVdsInternalCommand internal: true. Entities affected : ID:
9b82c66f-46c8-49dc-8ab3-2b5e27c7bdd5 Type: VDS
2017-07-19 17:21:25,095 INFO
[org.ovirt.engine.core.bll.InstallVdsInternalCommand]
(org.ovirt.thread.pool-8-thread-32) [7a173239] Before Installation host
9b82c66f-46c8-49dc-8ab3-2b5e27c7bdd5, kvm2.segic.cl
2017-07-19 17:21:25,105 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-3) [7a173239] Correlation ID: 7a173239, Call Stack:
null, Custom Event ID: -1, Message: Host kvm2.segic.cl configuration was
updated by admin@internal.
2017-07-19 17:21:25,106 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(org.ovirt.thread.pool-8-thread-32) [7a173239] START,
SetVdsStatusVDSCommand(HostName = kvm2.segic.cl, HostId =
9b82c66f-46c8-49dc-8ab3-2b5e27c7bdd5, status=Installing,
nonOperationalReason=NONE, stopSpmFailureLogged=false), log id: 8c3992
2017-07-19 17:21:25,109 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(org.ovirt.thread.pool-8-thread-32) [7a173239] FINISH,
SetVdsStatusVDSCommand, log id: 8c3992
2017-07-19 17:21:25,133 INFO
[org.ovirt.engine.core.bll.InstallerMessages]
(org.ovirt.thread.pool-8-thread-32) [7a173239] Installation
158.170.39.12: Connected to host 158.170.39.12 with SSH key fingerprint:
16:2b:79:78:60:ea:d2:24:0a:8d:7c:2f:2e:8e:20:51
2017-07-19 17:21:25,138 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-32) [7a173239] Correlation ID: 7a173239,
Call Stack: null, Custom Event ID: -1, Message: Installing Host
kvm2.segic.cl. Connected to host 158.170.39.12 with SSH key fingerprint:
16:2b:79:78:60:ea:d2:24:0a:8d:7c:2f:2e:8e:20:51.
2017-07-19 17:21:25,223 INFO [org.ovirt.engine.core.bll.VdsDeploy]
(org.ovirt.thread.pool-8-thread-32) [7a173239] Installation of
158.170.39.12. Executing command via SSH umask 0077;
MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d -t ovirt-XXXXXXXXXX)"; trap
"chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1; rm -fr \"${MYTMP}\" >
/dev/null 2>&1" 0; tar --warning=no-timestamp -C "${MYTMP}" -x &&
"${MYTMP}"/setup DIALOG/dialect=str:machine
DIALOG/customization=bool:True <
/var/cache/ovirt-engine/ovirt-host-deploy.tar
2017-07-19 17:21:25,223 INFO
[org.ovirt.engine.core.utils.archivers.tar.CachedTar]
(org.ovirt.thread.pool-8-thread-32) Tarball
'/var/cache/ovirt-engine/ovirt-host-deploy.tar' refresh
2017-07-19 17:21:25,254 INFO
[org.ovirt.engine.core.uutils.ssh.SSHDialog]
(org.ovirt.thread.pool-8-thread-32) SSH execute root(a)158.170.39.12
'umask 0077; MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d -t
ovirt-XXXXXXXXXX)"; trap "chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1;
rm -fr \"${MYTMP}\" > /dev/null 2>&1" 0; tar --warning=no-timestamp -C
"${MYTMP}" -x && "${MYTMP}"/setup DIALOG/dialect=str:machine
DIALOG/customization=bool:True'
2017-07-19 17:21:25,351 ERROR [org.ovirt.engine.core.bll.VdsDeploy]
(VdsDeploy) Error during deploy dialog: java.lang.NullPointerException
at org.ovirt.engine.core.bll.VdsDeploy._threadMain(VdsDeploy.java:835)
[bll.jar:]
at org.ovirt.engine.core.bll.VdsDeploy.access$2000(VdsDeploy.java:83)
[bll.jar:]
at org.ovirt.engine.core.bll.VdsDeploy$51.run(VdsDeploy.java:969)
[bll.jar:]
at java.lang.Thread.run(Thread.java:748) [rt.jar:1.7.0_141]
2017-07-19 17:21:25,353 ERROR [org.ovirt.engine.core.bll.VdsDeploy]
(org.ovirt.thread.pool-8-thread-32) [7a173239] Error during host
158.170.39.12 install: java.lang.NullPointerException
at org.ovirt.engine.core.bll.VdsDeploy._threadMain(VdsDeploy.java:835)
[bll.jar:]
at org.ovirt.engine.core.bll.VdsDeploy.access$2000(VdsDeploy.java:83)
[bll.jar:]
at org.ovirt.engine.core.bll.VdsDeploy$51.run(VdsDeploy.java:969)
[bll.jar:]
at java.lang.Thread.run(Thread.java:748) [rt.jar:1.7.0_141]
2017-07-19 17:21:25,354 ERROR
[org.ovirt.engine.core.bll.InstallerMessages]
(org.ovirt.thread.pool-8-thread-32) [7a173239] Installation
158.170.39.12: null
2017-07-19 17:21:25,357 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-32) [7a173239] Correlation ID: 7a173239,
Call Stack: null, Custom Event ID: -1, Message: Failed to install Host
kvm2.segic.cl. <UNKNOWN>.
2017-07-19 17:21:25,357 ERROR [org.ovirt.engine.core.bll.VdsDeploy]
(org.ovirt.thread.pool-8-thread-32) [7a173239] Error during host
158.170.39.12 install, prefering first exception:
java.lang.NullPointerException
at org.ovirt.engine.core.bll.VdsDeploy._threadMain(VdsDeploy.java:835)
[bll.jar:]
at org.ovirt.engine.core.bll.VdsDeploy.access$2000(VdsDeploy.java:83)
[bll.jar:]
at org.ovirt.engine.core.bll.VdsDeploy$51.run(VdsDeploy.java:969)
[bll.jar:]
at java.lang.Thread.run(Thread.java:748) [rt.jar:1.7.0_141]
2017-07-19 17:21:25,358 ERROR
[org.ovirt.engine.core.bll.InstallVdsInternalCommand]
(org.ovirt.thread.pool-8-thread-32) [7a173239] Host installation failed
for host 9b82c66f-46c8-49dc-8ab3-2b5e27c7bdd5, kvm2.segic.cl.:
java.lang.NullPointerException
at org.ovirt.engine.core.bll.VdsDeploy._threadMain(VdsDeploy.java:835)
[bll.jar:]
at org.ovirt.engine.core.bll.VdsDeploy.access$2000(VdsDeploy.java:83)
[bll.jar:]
at org.ovirt.engine.core.bll.VdsDeploy$51.run(VdsDeploy.java:969)
[bll.jar:]
at java.lang.Thread.run(Thread.java:748) [rt.jar:1.7.0_141]
2017-07-19 17:21:25,359 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(org.ovirt.thread.pool-8-thread-32) [7a173239] START,
SetVdsStatusVDSCommand(HostName = kvm2.segic.cl, HostId =
9b82c66f-46c8-49dc-8ab3-2b5e27c7bdd5, status=InstallFailed,
nonOperationalReason=NONE, stopSpmFailureLogged=false), log id: 5c71c632
2017-07-19 17:21:25,362 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(org.ovirt.thread.pool-8-thread-32) [7a173239] FINISH,
SetVdsStatusVDSCommand, log id: 5c71c632
2017-07-19 17:21:25,365 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-32) [7a173239] Correlation ID: 7a173239,
Call Stack: null, Custom Event ID: -1, Message: Host kvm2.segic.cl
installation failed. Please refer to /var/log/ovirt-engine/engine.log
and log logs under /var/log/ovirt-engine/host-deploy/ for further details..
2017-07-19 17:21:25,365 INFO
[org.ovirt.engine.core.bll.InstallVdsInternalCommand]
(org.ovirt.thread.pool-8-thread-32) [7a173239] Lock freed to object
EngineLock [exclusiveLocks= key: 9b82c66f-46c8-49dc-8ab3-2b5e27c7bdd5
value: VDS
, sharedLocks= ]
--
Antonio Salles | Fedora Ambassador
7 years, 9 months
Re: [ovirt-users] Host stuck unresponsive after Network Outage
by Pavel Gashev
Anthony,
Output of “systemctl status -l vdsm-network” would help.
From: <users-bounces(a)ovirt.org> on behalf of "Anthony.Fillmore"
<Anthony.Fillmore(a)target.com>
Date: Tuesday, 18 July 2017 at 18:13
To: "users(a)ovirt.org" <users(a)ovirt.org>
Cc: "Brandon.Markgraf" <Brandon.Markgraf(a)target.com>,
"Sandeep.Mendiratta" <Sandeep.Mendiratta(a)target.com>
Subject: [ovirt-users] Host stuck unresponsive after Network Outage
7 years, 9 months
Hosted Engine/NFS Troubles
by Phillip Bailey
Hi,
I'm having trouble with my hosted engine setup (v4.0) and could use some
help. The problem I'm having is that whenever I try to add additional hosts
to the setup via webadmin, the operation fails due to storage-related
issues.
webadmin shows the following error messages:
"Host <host name> cannot access the Storage Domain(s) hosted_storage
attached to the Data Center Default. Setting Host state to Non-Operational.
Failed to connect Host ovirt-node-1 to Storage Pool Default"
The VDSM log from the host shows the following error message:
"Thread-18::ERROR::2017-07-17 13:01:11,483::sdc::146::
Storage.StorageDomainCache::(_findDomain) domain
ca044720-e5cf-40a8-8b21-57a17026db7c
not found
Traceback (most recent call last):
File "/usr/share/vdsm/storage/sdc.py", line 144, in _findDomain
dom = findMethod(sdUUID)
File "/usr/share/vdsm/storage/sdc.py", line 174, in _findUnfetchedDomain
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist:
(u'ca044720-e5cf-40a8-8b21-57a17026db7c',)"
The engine log shows the following error messages:
"2017-07-17 18:32:11,409 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
(org.ovirt.thread.pool-6-thread-34) [] Domain
'ca044720-e5cf-40a8-8b21-57a17026db7c:hosted_storage'
was reported with error code '358'
2017-07-17 18:32:11,410 ERROR [org.ovirt.engine.core.bll.InitVdsOnUpCommand]
(org.ovirt.thread.pool-6-thread-34) [] Storage Domain 'hosted_storage' of
pool 'Default' is in problem in host 'ovirt-node-1'
2017-07-17 18:32:11,487 ERROR [org.ovirt.engine.core.dal.
dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-34)
[] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message:
Host ovirt-node-1 reports about one of the Active Storage Domains as
Problematic."
I have ownership set to vdsm/kvm and full rwx rights enabled on both
directories. I have successfully mounted both the master domain and the
hosted_storage manually on one of the hosts I'm trying to add. I have
attached the engine log and the VDSM log for that host.
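For reference, this is roughly how the manual mount test mentioned above
can be reproduced from the host being added (a minimal sketch; the server
name and export path are placeholders, not the actual ones from this setup):
# check that the export is visible from the new host
showmount -e nfs-server.example.com
# mount the hosted_storage export by hand, then check ownership
mkdir -p /mnt/nfstest
mount -t nfs nfs-server.example.com:/exports/hosted_storage /mnt/nfstest
ls -ln /mnt/nfstest    # files should be owned by 36:36 (vdsm:kvm)
umount /mnt/nfstest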
Could someone please help me figure out what's causing this?
-Phillip Bailey
7 years, 9 months
[kolla] Looking for Docker images for Cinder, Glance etc for oVirt
by Leni Kadali Mutungi
Hello all.
I am trying to use the Cinder and Glance Docker images you provide in
relation to the setup here:
http://www.ovirt.org/develop/release-management/features/cinderglance-doc...
I tried to run `sudo docker pull
kollaglue/centos-rdo-glance-registry:latest` and got a "not found" error.
I thought it might be possible to use a Dockerfile to spin up an
equivalent image, so I would like some guidance on how to go about doing
that, best practices and so on. Alternatively, if the images mentioned in
the guide have been superseded by something else, could you point me in
the direction of their replacements? Thanks.
CCing the oVirt users and devel lists to see if anyone has experienced
something similar.
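A rough sketch of two things worth trying (the explicit tag name and the
kolla-build invocation below are assumptions on my part, not verified
against Docker Hub or the Kolla release matching that guide):
# list which repositories are published under the kollaglue namespace
docker search kollaglue
# if a repository exists but 'latest' is gone, try an explicit tag (hypothetical)
sudo docker pull kollaglue/centos-rdo-glance-registry:kilo
# alternatively, build the Glance and Cinder images locally from the Kolla project
pip install kolla
kolla-build --base centos --type binary glance cinder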
--
- Warm regards
Leni Kadali Mutungi
7 years, 9 months
test email
by Abi Askushi
I have not received any email from this list for several days.
Please test back.
Abi
7 years, 9 months
[ANN] oVirt 4.1.4 Second Release Candidate is now available
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the Second
Release Candidate of oVirt 4.1.4 for testing, as of July 19th, 2017
This is pre-release software. Please take a look at our community page[1]
to know how to ask questions and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This pre-release should not be used in production.
This update is the second release candidate of the fourth in a series of
stabilization updates to the 4.1 series.
4.1.4 brings more than 20 enhancements and more than 80 bug fixes,
including more than 50 high- or urgent-severity fixes, on top of the
oVirt 4.1 series.
This release is available now for:
* Fedora 24 (tech preview)
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later
This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later
* oVirt Node 4.1
* Fedora 24 (tech preview)
See the release notes draft [3] for installation / upgrade instructions and
a list of new features and bugs fixed.
Notes:
- oVirt Appliance is already available
- oVirt Live is already available[4]
- oVirt Node is already available[4]
Additional Resources:
* Read more about the oVirt 4.1.4 release highlights:
http://www.ovirt.org/release/4.1.4/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.1.4/
[4] resources.ovirt.org/pub/ovirt-4.1-pre/iso/
--
SANDRO BONAZZOLA
ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
7 years, 9 months
"remove" option greyed out on Permissions tab
by Ian Neilsen
Hey guys
I've just noticed that I am unable to choose the "remove" option on any
"Permissions" tab in oVirt self-hosted 4.1.
Does anyone have a suggestion on how to fix this? I'm logged in as admin,
the original admin created during installation.
Thanks in Advance
--
Ian Neilsen
Mobile: 0424 379 762
Linkedin: http://au.linkedin.com/in/ianneilsen
Twitter : ineilsen
7 years, 9 months
Backup oVirt Node configuration
by Fernando Frediani
Folks, I have had to reinstall an oVirt Node a few times these days, and
each time I had to reconfigure everything in order to add it back to the
oVirt Engine.
What is the best way to back up an oVirt Node configuration, so that when
you reinstall it (or it fails completely and you rebuild it) you can
simply restore the backed-up files with the network configuration, UUID,
VDSM settings, etc.?
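Something like the following is what I have in mind (a minimal sketch;
the file list is a guess at where a node keeps its identity and network
configuration, not an official or complete list):
# on the node, before reinstalling
tar czf /root/node-config-backup.tar.gz \
    /etc/vdsm/vdsm.id \
    /etc/vdsm/vdsm.conf \
    /etc/sysconfig/network-scripts \
    /etc/iscsi/initiatorname.iscsi \
    /etc/hosts /etc/resolv.conf
# after reinstalling, restore the files and restart services before re-adding the host
tar xzf /root/node-config-backup.tar.gz -C /
systemctl restart network vdsmd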
Thanks
Fernando
7 years, 9 months
Host stuck unresponsive after Network Outage
by Anthony.Fillmore
Hey Ovirt Users and Team,
I have a host that I am unable to recover post a network outage. The host
is stuck in unresponsive mode, even though the host is on the network,
able to SSH and seems to be healthy. I've tried several things to recover
the host in Ovirt, but have had no success so far. I'd like to reach out
to the community before blowing away and rebuilding the host.
Environment: I have an Ovengine server with about 26 Datacenters, with 2
to 3 hosts per Datacenter. My Ovengine server is hosted centrally, with my
hosts being bare-metal and distributed throughout my environment. Ovengine
is version 4.0.6.
What I've tried: put into maintenance mode, rebooted the host. Confirmed
the host was rebooted and tried to activate it; it goes back to
unresponsive. Attempted a reinstall, which fails.
Checking from the host perspective, I can see the following problems:
[boxname~]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
   Active: inactive (dead)
Jul 14 12:34:28 boxname systemd[1]: Dependency failed for Virtual Desktop Server Manager.
Jul 14 12:34:28 boxname systemd[1]: Job vdsmd.service/start failed with result 'dependency'.
Going a bit deeper, the results of journalctl -xe:
[root@boxname ~]# journalctl -xe
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit libvirtd.service has begun shutting down.
Jul 18 09:07:31 boxname systemd[1]: Stopped Virtualization daemon.
-- Subject: Unit libvirtd.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit libvirtd.service has finished shutting down.
Jul 18 09:07:31 boxname systemd[1]: Reloading.
Jul 18 09:07:31 boxname systemd[1]: Binding to IPv6 address not available since kernel does not support IPv6.
Jul 18 09:07:31 boxname systemd[1]: [/usr/lib/systemd/system/rpcbind.socket:6] Failed to parse address value, ignoring: [::
Jul 18 09:07:31 boxname systemd[1]: Started Auxiliary vdsm service for running helper functions as root.
-- Subject: Unit supervdsmd.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit supervdsmd.service has finished starting up.
--
-- The start-up result is done.
Jul 18 09:07:31 boxname systemd[1]: Starting Auxiliary vdsm service for running helper functions as root...
-- Subject: Unit supervdsmd.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit supervdsmd.service has begun starting up.
Jul 18 09:07:31 boxname systemd[1]: Starting Virtualization daemon...
-- Subject: Unit libvirtd.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit libvirtd.service has begun starting up.
Jul 18 09:07:32 boxname systemd[1]: Started Virtualization daemon.
-- Subject: Unit libvirtd.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit libvirtd.service has finished starting up.
--
-- The start-up result is done.
Jul 18 09:07:32 boxname systemd[1]: Starting Virtual Desktop Server Manager network restoration...
-- Subject: Unit vdsm-network.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit vdsm-network.service has begun starting up.
lines 2751-2797/2797 (END)
Does the community have suggestions on what can be done next to recover
this host within Ovirt? I can provide additional log dumps as needed;
please let me know what you need to assist further.
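For completeness, the "Dependency failed" message above can be chased with
standard systemd commands (a sketch; the assumption is that
vdsm-network.service, the unit shown starting at the end of the journal
excerpt, is what vdsmd is waiting on):
systemctl --failed                   # list every unit currently in a failed state
systemctl list-dependencies vdsmd    # show which units vdsmd requires before it can start
systemctl status -l vdsm-network     # the network restoration unit from the journal above
journalctl -u vdsm-network -b        # its full log for the current boot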
Thank you,
Tony
7 years, 9 months
ISCSI storage with multiple nics on same subnet disabled on host activation
by Nelson Lameiras
Hello,
In our oVirt hosts we are using a DELL EqualLogic SAN, with each server
connecting to the SAN via 2 physical interfaces. Since both interfaces
share the same network (an EqualLogic limitation), we must tune the Linux
kernel to allow iSCSI multipath with multiple NICs in the same subnet,
using sysctl:
--------------------------------------------------------------------------------
net.ipv4.conf.p2p1.arp_ignore=1
net.ipv4.conf.p2p1.arp_announce=2
net.ipv4.conf.p2p1.rp_filter=2
net.ipv4.conf.p2p2.arp_ignore=1
net.ipv4.conf.p2p2.arp_announce=2
net.ipv4.conf.p2p2.rp_filter=2
--------------------------------------------------------------------------------
This works great in most setups, but for a strange reason, on some of our
setups the sysctl configuration is updated by VDSM when activating a host
and the second interface stops working immediately:
--------------------------------------------------------------------------------
vdsm.log
2017-06-07 11:51:51,063+0200 INFO  (jsonrpc/5) [storage.ISCSI] Setting strict mode rp_filter for device 'p2p2'. (iscsi:602)
2017-06-07 11:51:51,064+0200 ERROR (jsonrpc/5) [storage.HSM] Could not connect to storageServer (hsm:2392)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2389, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 433, in connect
    iscsi.addIscsiNode(self._iface, self._target, self._cred)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/iscsi.py", line 232, in addIscsiNode
    iscsiadm.node_login(iface.name, target.address, target.iqn)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/iscsiadm.py", line 337, in node_login
    raise IscsiNodeError(rc, out, err)
--------------------------------------------------------------------------------
"strict mode" is enforced for the second interface, and it no longer
works. This means, at the very least, that there is no redundancy in case
of hardware failure, which is not acceptable for our production needs.
What is really strange is that we have another "twin" site in another
geographic region, with similar hardware configuration and the same oVirt
installation, and this problem does not happen there.
What can be the root cause of this behaviour? How can I correct it?
Our setup:
hosted engine: CentOS 7.3, oVirt 4.1.2
3 physical nodes: CentOS 7.3, oVirt 4.1.2
SAN: DELL EqualLogic
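For reference, this is how the values can be checked after host activation
and persisted across reboots (a minimal sketch; the drop-in file name is
arbitrary, and it does not by itself answer whether VDSM rewrites
rp_filter on activation, which is exactly the problem described above):
# check what is currently applied on both iSCSI interfaces
sysctl net.ipv4.conf.p2p1.rp_filter net.ipv4.conf.p2p2.rp_filter
# persist the settings in a sysctl drop-in and reload everything
cat > /etc/sysctl.d/99-equallogic-iscsi.conf <<'EOF'
net.ipv4.conf.p2p1.arp_ignore=1
net.ipv4.conf.p2p1.arp_announce=2
net.ipv4.conf.p2p1.rp_filter=2
net.ipv4.conf.p2p2.arp_ignore=1
net.ipv4.conf.p2p2.arp_announce=2
net.ipv4.conf.p2p2.rp_filter=2
EOF
sysctl --system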
Cordialement / Regards,
Nelson LAMEIRAS
Ingénieur Systèmes et Réseaux / Systems and Networks engineer
Tel: +33 5 32 09 09 70
nelson.lameiras(a)lyra-network.com
www.lyra-network.com | www.payzen.eu
Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE
BAQFBAYFBQYJBgUGCQsIBgYICwwKCgsKCgwQDAwMDAwMEAwODxAPDgwTExQUExMcGxsbHB8fHx8f
Hx8fHx8BBwcHDQwNGBAQGBoVERUaHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8f
Hx8fHx8fHx8fHx8fH//AABEIAF4AcwMBEQACEQEDEQH/xACdAAEBAAIDAQEAAAAAAAAAAAAACAUH
AgMGBAEBAQADAQEAAAAAAAAAAAAAAAACBAUBAxAAAAUDAgAHDQcCBwAAAAAAAAECAwQRBQYSByFR
E7RWFzcxQSKS0nOT03RVlQgYYXEyQiMUFYEzsVJicmODOBEAAgECAwcEAgIDAAAAAAAAAAECEQNR
EwQhgbESMlIzMUGhFHEVwQUiQzT/2gAMAwEAAhEDEQA/AKpAE9b/AGQZdF3FtNpsd4l29M2DHSll
iS6w0bzsp5slqJCiLvERnTuENTRQg7bckntK15vmojj1WfMP0rc+Jy/JD7Fjt+EMueJ6bbnBN4bR
lkWdkt+XPtDaHSfjKnSHyUpTZkg+TcLSdFGRjyv3rUo0iqP8EoQkntZ7fcOy5VdrXGYxuccGW2+S
3nCecYq3oUWnU2RmfhGXAPPSXLcJNzVVT8kNVbnKKUHRmppW1/zBOSXVtZS4lpSjNtP8nLKhfcSR
alqdPXp+ERhauKKTe38nX1WfMP0qc+Jy/JHPsWO34RPLniOqz5h+lTnxOX5IfYsdvwhlzxHVZ8w/
Spz4nL8kPsWO34Qy54jqs+YfpU58Tl+SH2LHb8IZc8R1WfMP0qc+Jy/JD7Fjt+EMueJ2bCX3MH9y
LxZr7eJdxKBCkoW0/JdfaJ5mUy2a0EszL/MRHTuGGshDLTiqVf8AAst81GUGMssgAAAAE5b89s+L
+zwefPDV0fhlv4FW71oo0ZRaAAAAAAAAAAAAACdtj+3DLfNXDn7I1dX4Y7uBWtdbKJGUWQAAAACc
t+e2fF/Z4PPnhq6Pwy38Crd60UaMotAAAAAAAAAAAAABO2x/bhlvmrhz9kaur8Md3ArWutlEjKLI
AAAABOW/PbPi/s8Hnzw1dH4Zb+BVu9aKNGUWjAKzrGk5cnEjkq/nVo5RMbk3NOnkzd/uU0fgKvdH
rky5eb2I86rQz48iQAAAAAAAAABO2x/bhlvmrhz9kaur8Md3ArWutlEjKLIAAAABOW/PbPi/s8Hn
zw1dH4Zb+BVu9aKNGUWjSD//AKkj+xnzJQ0V/wA2/wDkr/7D3e7GX5DiOMfzlniMTEMPIRORIJfg
tOeCladCk/nNJH94raa1GcuVnpck4qqOrJdz4dr2yazKOhDjkxhlUGMszop98i/TOlD/AE/CNX+0
x23p27nIJXKRqeeyjdnLMd27sGSzLdEO4Xd4uWifqkhDDiFut0qrUSzQlNa9wz7g9bemjK44puiI
yuNRTOzGs53hud+gvzsYjwMVnqNfLuqUTrEck69bitZmStPc1NJIz4hy5ZtKLpKskIzk36bDHRt2
9yMsuMw9v8ejybNBc5M5s5Rp5Uy4aEZuMJSai4dPCZFStBN6a3BLne1nMyT6Uffgu8GQ5DnysXuN
mbtZx461TG1ms3kyGiLWRHXToMz8Hg7nfMQvaWMYcydTsLjcqG2BSPYnbY/twy3zVw5+yNXV+GO7
gVrXWyiRlFkAAAAAnLfntnxf2eDz54auj8Mt/Aq3etFGjKLROmdZbAxL5g/5yey6/GjREJW2wSTc
M3YpoKms0F3VcY1bNpzscqKs5UnU9pad4cG3Fefw1UWXFO7x3mULkpaJJnoM6JNC10WREak/aXGK
8tLO1/ns2HorilsNTYzar/kOQWTa+5tmmBjk+W9cKGqimkrJS68RcCkoV/yC7clGMXcXrJI8Yptq
OBsj5oEpTiNmSRESSuKSJJFwUJlzgoKv9f1P8Hrf9Db1wh/vbVJh6tH7lhbOvi5RBpr/AEqKMXR1
PZrYaC2x3Jgbawp+IZjBkwpkWS480603rSslkRU7qTOuiqFlUlEfepw6WosO61ODK9ufLsZ27f36
RkG/8q9OwXbe3PgKciMPp0uHHS22hpxRf60o1f4VC9DlsUrWjEHWdSghllknbY/twy3zVw5+yNXV
+GO7gVrXWyiRlFkAAAAAnLfntnxf2eDz54auj8Mt/Aq3etFGjKLRxU00o6qQlR8ZkRhUGMyO7W7H
7FOvUpCSZgMrfMqERqNJeCgj41qokvtMTtxcpJL3OSdFU1n8vFimPwbrnF18O55DIXybhlw8ihZm
s08RLdqVOJJC5rZqqgvSJ42V7v3MpM3dTBnyWbtBQmHFnXFpLzdXOWjW9tWo2irwPpd0pWhVPBPU
R0qILTVWx+y+STuYmQj7sxHH1Kctb7NuahnMflKW2akk3J/bvGlojqtpv8ZuJP8ACR8Ag9M8dtTu
YdErdfEJTCZKrY/L5JSySbrbCdKmicdWRLeWlJK/bs8skq+ElSacJ8HVppr3OZiP2RuxFK4W5xi3
LK2T5D8NmY8baHHnWySSeTSa6to1K8JTumn3gtM6PbtQzDYYqnqTtsf24Zb5q4c/ZGrq/DHdwK1r
rZRIyiyAAAAATlvz2z4v7PB588NXR+GW/gVbvWijRlFoADGZHjVmyO2Ltd4YOTAcUlbjJLcbJRoO
qam2pCuA+GlROFxwdV6nJRT2M+q226DbLfHt8BkmIcRtLMdlNaJQgqEVTqZ/eYjKTbqwlQ5Hb4Bp
Uk4zRpUpa1FoTQ1uEaVqPg7qiUZGffDmYodSbNZ0toaTBjk02hbTaCaQSUtu/wBxCSpQkr/MXfHe
d4iiP12z2h5k2XYMdxlSkuKbW0hSTW2kkIUZGVKpSkiI+8RApPEURydtltdQpDsRlxCtepKm0mR8
qdXKkZfn/NxjnMxQ+kcOk7bH9uGW+auHP2Rq6vwx3cCta62USMosgAAAAE5b89s+L+zwefPDV0fh
lv4FW71oo0ZRaAAAAAAAAAAAAACdtj+3DLfNXDn7I1dX4Y7uBWtdbKJGUWQAAAACb9/3mWN4cbff
WTTLUWEt1xX4UpTNeNSj+wiGtolW1LfwKt7qRuTrY226RwfSkKH1rnaz3zI4jrZ226RwfSkH1rna
xmRxHWztt0jg+lIPrXO1jMjiOtnbbpHB9KQfWudrGZHEdbO23SOD6Ug+tc7WMyOJ+o3W24WtKE5F
CUpR0SknCqZmH1rnazjuxW2pnbTfbPeGnHbXMamNtK0OLaVqJKjKtDELlqUNklQ7C5GfS6mOu+f4
XZpy4F0vMWHMbJKlsOrJKyJRVTUvtIdjZnJVSOuaXqfF1sbbdI4PpSEvrXO1nMyOJp7YiQxJ3oyi
RHcS9Hejz3GXUcKVoVOZNKi+wyOovaxUsx3cDwtdbKNGUWgAAAADX+4GzGPZveWbtcpsuO+xGTFS
iObRINCFrcIz1oWdaun3xas6qVtUSR5ztKTqeZ+lrCvely8Zj1Q9f2M8EQyEPpawr3pcvGY9UH7G
eCGQh9LWFe9Ll4zHqg/YzwQyEPpawr3pcvGY9UH7GeCGQh9LWFe9Ll4zHqg/YzwQyEc2flfwxl5t
1NzuJqbUSiI1MUqX/UOr+xmn6I5LTRaaxNg4XhNuxOHIiwX3n0SXCdWb5pMyMk6eDSlPEPDU6mV5
pteg0+nVpNI8zm+xeNZfkDt7nzprEl5Dbam2FNEgibTpKmttR97jErWslCNEkTlaUnUwP0tYV70u
XjMeqHp+xngiOQj023+zGPYReXrtbZsuQ+/GVFUiQbRoJC1ocMy0IQdatF3x5XtVK4qNInC0oups
AVT0AAAAAAAAAAAAAAAAAAAAAAAAAAAAD//Z
------=_Part_57353690_157338222.1496840388734
Content-Type: image/jpeg; name=element-signature_logo_YouTube_32x28.jpg
Content-Disposition: attachment;
filename=element-signature_logo_YouTube_32x28.jpg
Content-Transfer-Encoding: base64
Content-ID: <2806a89cb9fbba6f879bd3fd54a93b515defd6d2@zimbra>
/9j/4QAYRXhpZgAASUkqAAgAAAAAAAAAAAAAAP/sABFEdWNreQABAAQAAAA8AAD/4QOBaHR0cDov
L25zLmFkb2JlLmNvbS94YXAvMS4wLwA8P3hwYWNrZXQgYmVnaW49Iu+7vyIgaWQ9Ilc1TTBNcENl
aGlIenJlU3pOVGN6a2M5ZCI/PiA8eDp4bXBtZXRhIHhtbG5zOng9ImFkb2JlOm5zOm1ldGEvIiB4
OnhtcHRrPSJBZG9iZSBYTVAgQ29yZSA1LjYtYzEzMiA3OS4xNTkyODQsIDIwMTYvMDQvMTktMTM6
MTM6NDAgICAgICAgICI+IDxyZGY6UkRGIHhtbG5zOnJkZj0iaHR0cDovL3d3dy53My5vcmcvMTk5
OS8wMi8yMi1yZGYtc3ludGF4LW5zIyI+IDxyZGY6RGVzY3JpcHRpb24gcmRmOmFib3V0PSIiIHht
bG5zOnhtcE1NPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvbW0vIiB4bWxuczpzdFJlZj0i
aHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wL3NUeXBlL1Jlc291cmNlUmVmIyIgeG1sbnM6eG1w
PSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvIiB4bXBNTTpPcmlnaW5hbERvY3VtZW50SUQ9
InhtcC5kaWQ6ZGQwZjkyYzUtNzhhZi1jZTQ1LTgxYmQtMTYwMTFjZjk5YWVjIiB4bXBNTTpEb2N1
bWVudElEPSJ4bXAuZGlkOjQ1REY5Njk1OEExNzExRTY5OUVDOTI1QTU3QzAwNzI0IiB4bXBNTTpJ
bnN0YW5jZUlEPSJ4bXAuaWlkOjQ1REY5Njk0OEExNzExRTY5OUVDOTI1QTU3QzAwNzI0IiB4bXA6
Q3JlYXRvclRvb2w9IkFkb2JlIFBob3Rvc2hvcCBDQyAyMDE1LjUgKFdpbmRvd3MpIj4gPHhtcE1N
OkRlcml2ZWRGcm9tIHN0UmVmOmluc3RhbmNlSUQ9InhtcC5paWQ6ZGQwZjkyYzUtNzhhZi1jZTQ1
LTgxYmQtMTYwMTFjZjk5YWVjIiBzdFJlZjpkb2N1bWVudElEPSJ4bXAuZGlkOmRkMGY5MmM1LTc4
YWYtY2U0NS04MWJkLTE2MDExY2Y5OWFlYyIvPiA8L3JkZjpEZXNjcmlwdGlvbj4gPC9yZGY6UkRG
PiA8L3g6eG1wbWV0YT4gPD94cGFja2V0IGVuZD0iciI/Pv/uAA5BZG9iZQBkwAAAAAH/2wCEAAYE
BAQFBAYFBQYJBgUGCQsIBgYICwwKCgsKCgwQDAwMDAwMEAwODxAPDgwTExQUExMcGxsbHB8fHx8f
Hx8fHx8BBwcHDQwNGBAQGBoVERUaHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8f
Hx8fHx8fHx8fHx8fH//AABEIABwAIAMBEQACEQEDEQH/xAB5AAADAQEAAAAAAAAAAAAAAAAABwgF
BgEAAgMBAAAAAAAAAAAAAAAAAAQCAwUBEAABBAECBAUFAAAAAAAAAAABAgMEBQARBiESEwcxUdIU
CEEik1UYEQACAQMCBQUAAAAAAAAAAAAAAQIRAwQhMVFhEiITQaHBYhX/2gAMAwEAAhEDEQA/AKpw
AnD5Hbg3JA35XxKqxmxUPVrJEaI862FuqkPp4IbI5lHQDNXBhFwbaW4tebqcMIvfYjURtz6eesv1
YzWz9fYh38zR7eX2+mu5lHV3VhaNOe8aTJgTHnweVXEBba1cQRkb8IeNtJbHYN9WpW+YY2T53inQ
4HfzZk6a4lmHGaiOSHl8EoT7mSOZR+gBIzUxU3Yklz+Be4+9DRl30N3dMOxjbyro9EzHW3MqupHc
L7qlaoX1SsFvlHl44moPpa6XXiW113FPuu3qrT5I7dfrJLUxllMRl15hQWjqBbiuXmTqCQlQx23F
rHdSqTrNFFZlDAou7vZa43xuSNaw7CPFZZhIiKbeSsqKkOuuE/aCNNHcexspW40a9Sm5bcmcH/J1
5+0rvxOenGf0o8GQ8DN7ZXxzv9vbpqrhy0huMQJCHlstocBKU+ITqANcru50ZRao9SUbLTqPvMwv
DAAwAMAP/9k=
------=_Part_57353690_157338222.1496840388734
Content-Type: image/jpeg; name=element-signature_logo_LinkedIn_41x28.jpg
Content-Disposition: attachment;
filename=element-signature_logo_LinkedIn_41x28.jpg
Content-Transfer-Encoding: base64
Content-ID: <878634ba055bb6ae851a7ef4540f754afa3ff86c@zimbra>
/9j/4QAYRXhpZgAASUkqAAgAAAAAAAAAAAAAAP/sABFEdWNreQABAAQAAAA8AAD/4QOBaHR0cDov
L25zLmFkb2JlLmNvbS94YXAvMS4wLwA8P3hwYWNrZXQgYmVnaW49Iu+7vyIgaWQ9Ilc1TTBNcENl
aGlIenJlU3pOVGN6a2M5ZCI/PiA8eDp4bXBtZXRhIHhtbG5zOng9ImFkb2JlOm5zOm1ldGEvIiB4
OnhtcHRrPSJBZG9iZSBYTVAgQ29yZSA1LjYtYzEzMiA3OS4xNTkyODQsIDIwMTYvMDQvMTktMTM6
MTM6NDAgICAgICAgICI+IDxyZGY6UkRGIHhtbG5zOnJkZj0iaHR0cDovL3d3dy53My5vcmcvMTk5
OS8wMi8yMi1yZGYtc3ludGF4LW5zIyI+IDxyZGY6RGVzY3JpcHRpb24gcmRmOmFib3V0PSIiIHht
bG5zOnhtcE1NPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvbW0vIiB4bWxuczpzdFJlZj0i
aHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wL3NUeXBlL1Jlc291cmNlUmVmIyIgeG1sbnM6eG1w
PSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvIiB4bXBNTTpPcmlnaW5hbERvY3VtZW50SUQ9
InhtcC5kaWQ6ZGQwZjkyYzUtNzhhZi1jZTQ1LTgxYmQtMTYwMTFjZjk5YWVjIiB4bXBNTTpEb2N1
bWVudElEPSJ4bXAuZGlkOjY2MEYzRTFGOEExNzExRTY5QzI5OUNGQkY1MzM2MUQ1IiB4bXBNTTpJ
bnN0YW5jZUlEPSJ4bXAuaWlkOjY2MEYzRTFFOEExNzExRTY5QzI5OUNGQkY1MzM2MUQ1IiB4bXA6
Q3JlYXRvclRvb2w9IkFkb2JlIFBob3Rvc2hvcCBDQyAyMDE1LjUgKFdpbmRvd3MpIj4gPHhtcE1N
OkRlcml2ZWRGcm9tIHN0UmVmOmluc3RhbmNlSUQ9InhtcC5paWQ6ZGQwZjkyYzUtNzhhZi1jZTQ1
LTgxYmQtMTYwMTFjZjk5YWVjIiBzdFJlZjpkb2N1bWVudElEPSJ4bXAuZGlkOmRkMGY5MmM1LTc4
YWYtY2U0NS04MWJkLTE2MDExY2Y5OWFlYyIvPiA8L3JkZjpEZXNjcmlwdGlvbj4gPC9yZGY6UkRG
PiA8L3g6eG1wbWV0YT4gPD94cGFja2V0IGVuZD0iciI/Pv/uAA5BZG9iZQBkwAAAAAH/2wCEAAYE
BAQFBAYFBQYJBgUGCQsIBgYICwwKCgsKCgwQDAwMDAwMEAwODxAPDgwTExQUExMcGxsbHB8fHx8f
Hx8fHx8BBwcHDQwNGBAQGBoVERUaHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8f
Hx8fHx8fHx8fHx8fH//AABEIABwAKQMBEQACEQEDEQH/xAB2AAEBAQEAAAAAAAAAAAAAAAAABggH
AQEAAwEAAAAAAAAAAAAAAAAAAwQFAhAAAQMEAgICAgMBAAAAAAAAAQIDBAARBQYhEjEHMghBE1Fh
IhcRAAICAQIDCQAAAAAAAAAAAAABAgMREgQxkTIhQVGhscETMxX/2gAMAwEAAhEDEQA/ANU0Bl33
zsuzQfZcmHj8tMiMKaihDLMl1psKcQkXslSUjk8mtjZ1xdeWkVbW9RDZ/ZN8wOXkYnI7LKMuL1Lq
o+RddaIWkLHVwLAPCuf4NWYVwksqPkcNtd5V+ldr2ef7MwkaZmJsqK6p/uy9JdcbUBGcULpUog8i
9Q7uuKreEjquT1GsKxC2KAUBk37B2/6y72t1tB7X8dbIvf8Aq3mtvZfVzKlvUWWw7Dr+Dfy8rCt4
d153Z8ew0FNx3kohuxmUyFtAfFPyBUOBz+arwhKWE89L9SRtLmTuns4xj7LLaxYaGNE6SYojlJZ6
rhLUf19f89eyj4qa1t7ft449ziPWairGLQoBQHFvZ/onObht0jNxclFjMPNtNpZdS4Vj9aAk36gj
mtDb7yMIaWiGdTbySSfqnsKbdctAFh1Fm3Bwfx8fFT/ox8GcfAyk9dfX7OapuGNzj+SiPRoJcKmG
kuBRDjS2x1uAPK71DfvYzg444nUKmnk7lWcTigFAKAUAoBQCgP/Z
------=_Part_57353690_157338222.1496840388734
Content-Type: image/jpeg; name=element-signature_logo_Twitter_42x28.jpg
Content-Disposition: attachment;
filename=element-signature_logo_Twitter_42x28.jpg
Content-Transfer-Encoding: base64
Content-ID: <18a767bbdad6a1133fc4152b03ea78228ce25a14@zimbra>
/9j/4QAYRXhpZgAASUkqAAgAAAAAAAAAAAAAAP/sABFEdWNreQABAAQAAAA8AAD/4QOBaHR0cDov
L25zLmFkb2JlLmNvbS94YXAvMS4wLwA8P3hwYWNrZXQgYmVnaW49Iu+7vyIgaWQ9Ilc1TTBNcENl
aGlIenJlU3pOVGN6a2M5ZCI/PiA8eDp4bXBtZXRhIHhtbG5zOng9ImFkb2JlOm5zOm1ldGEvIiB4
OnhtcHRrPSJBZG9iZSBYTVAgQ29yZSA1LjYtYzEzMiA3OS4xNTkyODQsIDIwMTYvMDQvMTktMTM6
MTM6NDAgICAgICAgICI+IDxyZGY6UkRGIHhtbG5zOnJkZj0iaHR0cDovL3d3dy53My5vcmcvMTk5
OS8wMi8yMi1yZGYtc3ludGF4LW5zIyI+IDxyZGY6RGVzY3JpcHRpb24gcmRmOmFib3V0PSIiIHht
bG5zOnhtcE1NPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvbW0vIiB4bWxuczpzdFJlZj0i
aHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wL3NUeXBlL1Jlc291cmNlUmVmIyIgeG1sbnM6eG1w
PSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvIiB4bXBNTTpPcmlnaW5hbERvY3VtZW50SUQ9
InhtcC5kaWQ6ZGQwZjkyYzUtNzhhZi1jZTQ1LTgxYmQtMTYwMTFjZjk5YWVjIiB4bXBNTTpEb2N1
bWVudElEPSJ4bXAuZGlkOjdDMkI3MjMyOEExNzExRTY5MDUyRTk1NEIyREUwNjVCIiB4bXBNTTpJ
bnN0YW5jZUlEPSJ4bXAuaWlkOjdDMkI3MjMxOEExNzExRTY5MDUyRTk1NEIyREUwNjVCIiB4bXA6
Q3JlYXRvclRvb2w9IkFkb2JlIFBob3Rvc2hvcCBDQyAyMDE1LjUgKFdpbmRvd3MpIj4gPHhtcE1N
OkRlcml2ZWRGcm9tIHN0UmVmOmluc3RhbmNlSUQ9InhtcC5paWQ6ZGQwZjkyYzUtNzhhZi1jZTQ1
LTgxYmQtMTYwMTFjZjk5YWVjIiBzdFJlZjpkb2N1bWVudElEPSJ4bXAuZGlkOmRkMGY5MmM1LTc4
YWYtY2U0NS04MWJkLTE2MDExY2Y5OWFlYyIvPiA8L3JkZjpEZXNjcmlwdGlvbj4gPC9yZGY6UkRG
PiA8L3g6eG1wbWV0YT4gPD94cGFja2V0IGVuZD0iciI/Pv/uAA5BZG9iZQBkwAAAAAH/2wCEAAYE
BAQFBAYFBQYJBgUGCQsIBgYICwwKCgsKCgwQDAwMDAwMEAwODxAPDgwTExQUExMcGxsbHB8fHx8f
Hx8fHx8BBwcHDQwNGBAQGBoVERUaHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8f
Hx8fHx8fHx8fHx8fH//AABEIABwAKgMBEQACEQEDEQH/xAB3AAEBAQEBAAAAAAAAAAAAAAAABwUI
BgEBAAMBAQAAAAAAAAAAAAAAAAMEBQIBEAABAwIEBwEBAQAAAAAAAAABAgMEAAUREhMGITFBFBUH
CIEiFhEAAgIBAgQHAAAAAAAAAAAAAAERAgMxBCHBEgVBUWGBQhMV/9oADAMBAAIRAxEAPwDC9g7u
3dH33uJiNeri2wxcJQQ01JfCUNodVwCUqwSlI/AK3sGKropS0Kd7OWZMXcPsyZDdnQ7he5UFhQQ/
LZelOMoURiEqWlRAOFdumNOGqyeTYtPzFfL1df8ASm5XCTP0ey0e5eceyZu4zZc5VhjlGOFZ/cKJ
dMKNeRNhbclzrOJxQCgOSW7o5a/oKXMRFXNT5iYw/EaAUtbL61tOBKTwXglWOXrW30zgj0RUmLlb
9p3rZ/r/ANd3GxWJtm3XC8B1EO2RsEqDkng88pAP8JSnEk8ugqlt6Xy5Fa3FImyNVUI8z8nJCU7o
SOSRAA/O5qbuXx9+Rxg8ToGsssCgFAc/7q+b9xXrc11u7V3iNN3CW/JbQUuhaUuuFYBIHMA9K08e
+rWqUPgivbC25MofK25Qoq81DKjwKlJeUT+kE13+jXyZ59DKX6a9WXPYfmO+msTPJdtp6AWMuhq4
5swHPVFVN1uFkiFoS46dJSqqEgoBQCgFAKAUAoBQH//Z
------=_Part_57353690_157338222.1496840388734
Content-Type: image/jpeg; name=element-signature_payzen_61x28.jpg
Content-Disposition: attachment; filename=element-signature_payzen_61x28.jpg
Content-Transfer-Encoding: base64
Content-ID: <d3fc119a0aa72bf985c19ca559c993b5b4b54ca3@zimbra>
/9j/4QAYRXhpZgAASUkqAAgAAAAAAAAAAAAAAP/sABFEdWNreQABAAQAAAA8AAD/4QOBaHR0cDov
L25zLmFkb2JlLmNvbS94YXAvMS4wLwA8P3hwYWNrZXQgYmVnaW49Iu+7vyIgaWQ9Ilc1TTBNcENl
aGlIenJlU3pOVGN6a2M5ZCI/PiA8eDp4bXBtZXRhIHhtbG5zOng9ImFkb2JlOm5zOm1ldGEvIiB4
OnhtcHRrPSJBZG9iZSBYTVAgQ29yZSA1LjYtYzEzMiA3OS4xNTkyODQsIDIwMTYvMDQvMTktMTM6
MTM6NDAgICAgICAgICI+IDxyZGY6UkRGIHhtbG5zOnJkZj0iaHR0cDovL3d3dy53My5vcmcvMTk5
OS8wMi8yMi1yZGYtc3ludGF4LW5zIyI+IDxyZGY6RGVzY3JpcHRpb24gcmRmOmFib3V0PSIiIHht
bG5zOnhtcE1NPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvbW0vIiB4bWxuczpzdFJlZj0i
aHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wL3NUeXBlL1Jlc291cmNlUmVmIyIgeG1sbnM6eG1w
PSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvIiB4bXBNTTpPcmlnaW5hbERvY3VtZW50SUQ9
InhtcC5kaWQ6ZGQwZjkyYzUtNzhhZi1jZTQ1LTgxYmQtMTYwMTFjZjk5YWVjIiB4bXBNTTpEb2N1
bWVudElEPSJ4bXAuZGlkOkU1NjczQjM2OEEzNDExRTZCOEJBQ0Y4Mzg3RTEzODkyIiB4bXBNTTpJ
bnN0YW5jZUlEPSJ4bXAuaWlkOkU1NjczQjM1OEEzNDExRTZCOEJBQ0Y4Mzg3RTEzODkyIiB4bXA6
Q3JlYXRvclRvb2w9IkFkb2JlIFBob3Rvc2hvcCBDQyAyMDE1LjUgKFdpbmRvd3MpIj4gPHhtcE1N
OkRlcml2ZWRGcm9tIHN0UmVmOmluc3RhbmNlSUQ9InhtcC5paWQ6ZGQwZjkyYzUtNzhhZi1jZTQ1
LTgxYmQtMTYwMTFjZjk5YWVjIiBzdFJlZjpkb2N1bWVudElEPSJ4bXAuZGlkOmRkMGY5MmM1LTc4
YWYtY2U0NS04MWJkLTE2MDExY2Y5OWFlYyIvPiA8L3JkZjpEZXNjcmlwdGlvbj4gPC9yZGY6UkRG
PiA8L3g6eG1wbWV0YT4gPD94cGFja2V0IGVuZD0iciI/Pv/uAA5BZG9iZQBkwAAAAAH/2wCEAAYE
BAQFBAYFBQYJBgUGCQsIBgYICwwKCgsKCgwQDAwMDAwMEAwODxAPDgwTExQUExMcGxsbHB8fHx8f
Hx8fHx8BBwcHDQwNGBAQGBoVERUaHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8f
Hx8fHx8fHx8fHx8fH//AABEIABwAPQMBEQACEQEDEQH/xABvAAABBAMAAAAAAAAAAAAAAAAAAQUG
BwIDBAEBAAAAAAAAAAAAAAAAAAAAABAAAQQABAQEAgsBAAAAAAAAAQIDBAUAERIGITETFEEiFQfS
I1FhcYGhMlIkRMQWhhEBAAAAAAAAAAAAAAAAAAAAAP/aAAwDAQACEQMRAD8AvqZ7q7OhzZMSS9IQ
Yjy2JT/avqZbW2rSvU6lBSACOeeAljLzTzSHmVhxpxIW24k5pUlQzBBHgRgM8BBJ9fKuvcGwr3La
wgxIlfGeaagyVsJ1uOOBRUBmOSRgI0bnZgfQwdx7qDrroYZSVzBrcJ0hKM2xq+7AWBU7PTXTEShc
2svRx6MuWp5o5gjigj68BombFRKmPyTfXTPXWpzoszloaRqOelCQPKkeAwEP6lp6f6V6vYaP9Z6b
3fcr7ntuhno6vPnxwGyohb2ntbpr6Y17NdKt7Bp5+YHlPJLitKyltI0KGk+XPANVgzV1Eyyqbe3s
IMmnhsMbVaYcebDraY+XVbQ15XXFOghQVnkMBkbjcVPTUs6M5IkPbnpWa1hKlrWG7QZBp0hROnUh
xRUfHTgJDWuwtsbptVTpC1xaqhgJfkuFTji9DjoKiTmpSlq/HAcO2d11d/cuXUySJG4C0+NvbfCH
CmK2hClHUop0GQ6E+dWfAeUYBi2lY2E+zgW7twhVlHDk+7AkzFO9u2lRdjORFIEZrTmAkA+GYwDl
TwLJhjZFouxsF3NzN6r7Lj7imExHUOSHWi0TpyCdORIzzwHT/J/7n+vgLVAA5YBChBUlRSCpOelR
HEZ88sA2WG3YVhb1lnJW4VVJcXFjAgM9VxOjqKTlmVJTnp48M8Ax2+3N3J3TJu6KVAbRKisxnWpr
brh+SpasxoKf14BBD91hxE2kB+nt5Px4AEL3VGoiZSAr4qyjyOP2+fAL2nuvw/e0vDl8iT8eAbv8
Lur0vPvYXrPrfrfU6bvb59Lp6NOev83HngLBwBgDAGAMAYAwBgP/2Q==
------=_Part_57353690_157338222.1496840388734--
------=_Part_57353689_1234561196.1496840388734--
7 years, 9 months
Active Directory authentication setup
by Todd Punderson
Hi,
I've been pulling my hair out over this one. Here's the output of ovirt-engine-extension-aaa-ldap-setup. Everything works fine if I use "plain" but I don't really want to do that. I searched the error that's shown below and tried several different "fixes" but none of them helped. These are Server 2016 DCs. Not too sure where to go next.
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files: ['/etc/ovirt-engine-extension-aaa-ldap-setup.conf.d/10-packaging.conf']
Log file: /tmp/ovirt-engine-extension-aaa-ldap-setup-20170715170953-wfo1pk.log
Version: otopi-1.6.2 (otopi-1.6.2-1.el7.centos)
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment customization
Welcome to LDAP extension configuration program
Available LDAP implementations:
1 - 389ds
2 - 389ds RFC-2307 Schema
3 - Active Directory
4 - IBM Security Directory Server
5 - IBM Security Directory Server RFC-2307 Schema
6 - IPA
7 - Novell eDirectory RFC-2307 Schema
8 - OpenLDAP RFC-2307 Schema
9 - OpenLDAP Standard Schema
10 - Oracle Unified Directory RFC-2307 Schema
11 - RFC-2307 Schema (Generic)
12 - RHDS
13 - RHDS RFC-2307 Schema
14 - iPlanet
Please select: 3
Please enter Active Directory Forest name: home.doonga.org
[ INFO ] Resolving Global Catalog SRV record for home.doonga.org
[ INFO ] Resolving LDAP SRV record for home.doonga.org
NOTE:
It is highly recommended to use secure protocol to access the LDAP server.
Protocol startTLS is the standard recommended method to do so.
Only in cases in which the startTLS is not supported, fallback to non standard ldaps protocol.
Use plain for test environments only.
Please select protocol to use (startTLS, ldaps, plain) [startTLS]: ldaps
Please select method to obtain PEM encoded CA certificate (File, URL, Inline, System, Insecure): System
[ INFO ] Resolving SRV record 'home.doonga.org'
[ INFO ] Connecting to LDAP using 'ldaps://DC1.home.doonga.org:636'
[WARNING] Cannot connect using 'ldaps://DC1.home.doonga.org:636': {'info': 'TLS error -8157:Certificate extension not found.', 'desc': "Can't contact LDAP server"}
[ INFO ] Connecting to LDAP using 'ldaps://DC2.home.doonga.org:636'
[WARNING] Cannot connect using 'ldaps://DC2.home.doonga.org:636': {'info': 'TLS error -8157:Certificate extension not found.', 'desc': "Can't contact LDAP server"}
[ INFO ] Connecting to LDAP using 'ldaps://DC3.home.doonga.org:636'
[WARNING] Cannot connect using 'ldaps://DC3.home.doonga.org:636': {'info': 'TLS error -8157:Certificate extension not found.', 'desc': "Can't contact LDAP server"}
[ ERROR ] Cannot connect using any of available options
Also:
2017-07-15 18:18:06 INFO otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._connectLDAP:391 Connecting to LDAP using 'ldap://DC2.home.doonga.org:389'
2017-07-15 18:18:06 INFO otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._connectLDAP:442 Executing startTLS
2017-07-15 18:18:06 DEBUG otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._connectLDAP:459 Exception
Traceback (most recent call last):
  File "/usr/share/ovirt-engine-extension-aaa-ldap/setup/bin/../plugins/ovirt-engine-extension-aaa-ldap/ldap/common.py", line 443, in _connectLDAP
    c.start_tls_s()
  File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 564, in start_tls_s
    return self._ldap_call(self._l.start_tls_s)
  File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 99, in _ldap_call
    result = func(*args,**kwargs)
CONNECT_ERROR: {'info': 'TLS error -8157:Certificate extension not found.', 'desc': 'Connect error'}
2017-07-15 18:18:06 WARNING otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._connectLDAP:463 Cannot connect using 'ldap://DC2.home.doonga.org:389': {'info': 'TLS error -8157:Certificate extension not found.', 'desc': 'Connect error'}
2017-07-15 18:18:06 INFO otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._connectLDAP:391 Connecting to LDAP using 'ldap://DC3.home.doonga.org:389'
2017-07-15 18:18:06 INFO otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._connectLDAP:442 Executing startTLS
2017-07-15 18:18:06 DEBUG otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._connectLDAP:459 Exception
Traceback (most recent call last):
  File "/usr/share/ovirt-engine-extension-aaa-ldap/setup/bin/../plugins/ovirt-engine-extension-aaa-ldap/ldap/common.py", line 443, in _connectLDAP
    c.start_tls_s()
  File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 564, in start_tls_s
    return self._ldap_call(self._l.start_tls_s)
  File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 99, in _ldap_call
    result = func(*args,**kwargs)
CONNECT_ERROR: {'info': 'TLS error -8157:Certificate extension not found.', 'desc': 'Connect error'}
Any help would be appreciated!
Thanks
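One way to take the setup tool out of the picture is to reproduce the same handshake with python-ldap directly, since that is the library the traceback above comes from. The sketch below rests on assumptions: DC1.home.doonga.org is simply the first DC from the setup output, and /tmp/ad-ca.pem stands in for wherever the AD CA chain has been exported, so adjust both to the real environment.

import ldap

URI = "ldaps://DC1.home.doonga.org:636"  # first DC from the setup output above

conn = ldap.initialize(URI)
# Point the client at an explicit CA bundle instead of the system store that
# the setup tool's "System" choice relies on.
conn.set_option(ldap.OPT_X_TLS_CACERTFILE, "/tmp/ad-ca.pem")  # assumed export path
conn.set_option(ldap.OPT_X_TLS_NEWCTX, 0)  # make the TLS options take effect
try:
    conn.simple_bind_s()  # anonymous bind is enough to drive the TLS handshake
    print("TLS handshake OK against %s" % URI)
except ldap.LDAPError as e:
    # Hitting the same -8157 here would point at the certificate the DC
    # presents (an extension NSS expects but the issued cert lacks), not at
    # the setup tool itself.
    print("TLS handshake failed: %s" % e)

If the handshake still fails with the real CA file, re-issuing the DC certificate (or, as a stop-gap, the File or Insecure options the tool offers) is probably the direction to look.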
7 years, 9 months
Engine HA-Issues
by Sven Achtelik
Hi All,
After running solid for several months, my ovirt-engine started rebooting on several hosts. I've looked at the hosted-engine --vm-status output and it shows that the engine is up on one host but not reachable. At the same time I can access the GUI and everything is working fine. After some time the engine shuts down and all hosts try to start the engine until one is the winner, at least that is what it looks like. Any clues where to look to find the issue with the liveliness check?
--------------------------------------------------------------------------------------------------------
--== Host 1 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt-node01
Host ID : 1
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 3eb33843
local_conf_timestamp : 17128
Host timestamp : 17113
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=17113 (Fri Jul 14 11:50:23 2017)
host-id=1
score=3400
vm_conf_refresh_time=17128 (Fri Jul 14 11:50:38 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
--== Host 2 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt-node02.mgmt.lan
Host ID : 2
Engine status : {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "up"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 2a8c86cc
local_conf_timestamp : 523182
Host timestamp : 523167
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=523167 (Fri Jul 14 11:50:25 2017)
host-id=2
score=3400
vm_conf_refresh_time=523182 (Fri Jul 14 11:50:40 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineStarting
stopped=False
--== Host 3 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt-node03.mgmt.lan
Host ID : 3
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : f8490d79
local_conf_timestamp : 527698
Host timestamp : 527683
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=527683 (Fri Jul 14 11:50:33 2017)
host-id=3
score=3400
vm_conf_refresh_time=527698 (Fri Jul 14 11:50:47 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
----------------------------------------------------------------------------------------------
Thank you,
Sven
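One note on the "failed liveliness check" reason in that output: it comes from the HA broker's HTTP probe of the engine health page, so running the same probe by hand from each host shows whether the hosts can actually reach the engine they report as up. A minimal sketch, assuming the hosted-engine FQDN is engine.mgmt.lan (replace it with the real one) and the Python 2.7 that ships on these hosts:

import urllib2  # Python 2.7, as on the CentOS 7 hosts in this thread

URL = "http://engine.mgmt.lan/ovirt-engine/services/health"  # assumed engine FQDN

try:
    resp = urllib2.urlopen(URL, timeout=10)
    # A healthy engine answers 200 here; a timeout, DNS failure or 5xx from a
    # given host is what shows up as "failed liveliness check" on that host.
    print("%s %s" % (resp.getcode(), resp.read()))
except Exception as e:
    print("health check failed from this host: %s" % e)

If the page answers from wherever the GUI works but not from ovirt-node02, DNS or routing on the management network is a more likely culprit than the engine itself.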
7 years, 9 months
Re: [ovirt-users] oVIRT 4.1.3 / iSCSI / VM Multiple Disks / Snapshot deletion issue.
by Benny Zlotnik
[Adding ovirt-users]
On Sun, Jul 16, 2017 at 12:58 PM, Benny Zlotnik <bzlotnik(a)redhat.com> wrote:
> We can see a lot of related errors in the engine log but we are unable
> to correlate to the vdsm log. Do you have more hosts? If yes, please
> attach their logs as well.
> And just to be sure, were you attempting to perform a cold merge?
>
> On Fri, Jul 14, 2017 at 7:32 PM, Devin Acosta <devin(a)pabstatencio.com> wrote:
>>
>> You can get my logs from:
>>
>> https://files.linuxstack.cloud/s/NjoyMF11I38rJpH
>>
>> They were a little too big to attach to this e-mail. I would like to know if
>> this is similar to the bug that Richard indicated is a possibility.
>>
>> --
>>
>> Devin Acosta
>> Red Hat Certified Architect, LinuxStack
>> 602-354-1220 || devin(a)linuxguru.co
>>
>> On July 14, 2017 at 9:18:08 AM, Devin Acosta (devin(a)pabstatencio.com) wrote:
>>
>> I have attached the logs.
>>
>>
>>
>> --
>>
>> Devin Acosta
>> Red Hat Certified Architect, LinuxStack
>> 602-354-1220 || devin(a)linuxguru.co
>>
>> On July 13, 2017 at 9:22:03 AM, richard anthony falzini
>> (richardfalzini(a)gmail.com) wrote:
>>
>> Hi,
>> i have the same problem with gluster.
>> this is a bug that i opened
>> https://bugzilla.redhat.com/show_bug.cgi?id=1461029 .
>> In the bug i used single disk vm but i start to notice the problem with
>> multiple disk vm.
>>
>>
>> 2017-07-13 0:07 GMT+02:00 Devin Acosta <devin(a)pabstatencio.com>:
>>>
>>> We are running a fresh install of oVIRT 4.1.3, using ISCSI, the VM in
>>> question has multiple Disks (4 to be exact). It snapshotted OK while on
>>> iSCSI however when I went to delete the single snapshot that existed it went
>>> into Locked state and never came back. The deletion has been going for well
>>> over an hour, and I am not convinced since the snapshot is less than 12
>>> hours old that it’s really doing anything.
>>>
>>> I have seen that doing some Googling indicates there might be some known
>>> issues with iSCSI/Block Storage/Multiple Disk Snapshot issues.
>>>
>>> In the logs on the engine it shows:
>>>
>>> 2017-07-12 21:59:42,473Z INFO
>>> [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
>>> (DefaultQuartzScheduler2) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
>>> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
>>> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
>>> type:'PrepareMerge' to complete
>>> 2017-07-12 21:59:52,480Z INFO
>>> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
>>> (DefaultQuartzScheduler2) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
>>> 'RemoveSnapshot' (id: '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on
>>> child command id: '75c535fd-4558-459a-9992-875c48578a97'
>>> type:'ColdMergeSnapshotSingleDisk' to complete
>>> 2017-07-12 21:59:52,483Z INFO
>>> [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
>>> (DefaultQuartzScheduler2) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
>>> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
>>> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
>>> type:'PrepareMerge' to complete
>>> 2017-07-12 22:00:02,490Z INFO
>>> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
>>> (DefaultQuartzScheduler6) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
>>> 'RemoveSnapshot' (id: '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on
>>> child command id: '75c535fd-4558-459a-9992-875c48578a97'
>>> type:'ColdMergeSnapshotSingleDisk' to complete
>>> 2017-07-12 22:00:02,493Z INFO
>>> [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
>>> (DefaultQuartzScheduler6) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
>>> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
>>> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
>>> type:'PrepareMerge' to complete
>>> 2017-07-12 22:00:12,498Z INFO
>>> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
>>> (DefaultQuartzScheduler3) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
>>> 'RemoveSnapshot' (id: '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on
>>> child command id: '75c535fd-4558-459a-9992-875c48578a97'
>>> type:'ColdMergeSnapshotSingleDisk' to complete
>>> 2017-07-12 22:00:12,501Z INFO
>>> [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
>>> (DefaultQuartzScheduler3) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
>>> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
>>> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
>>> type:'PrepareMerge' to complete
>>> 2017-07-12 22:00:22,508Z INFO
>>> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
>>> (DefaultQuartzScheduler5) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
>>> 'RemoveSnapshot' (id: '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on
>>> child command id: '75c535fd-4558-459a-9992-875c48578a97'
>>> type:'ColdMergeSnapshotSingleDisk' to complete
>>> 2017-07-12 22:00:22,511Z INFO
>>> [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
>>> (DefaultQuartzScheduler5) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
>>> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
>>> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
>>> type:'PrepareMerge' to complete
>>>
>>> This is what I seen on the SPM when I grep’d the Snapshot ID.
>>>
>>> 2017-07-12 14:22:18,773-0700 INFO (jsonrpc/6) [vdsm.api] START
>>> createVolume(sdUUID=u'0c02a758-4295-4199-97de-b041744b3b15',
>>> spUUID=u'00000001-0001-0001-0001-000000000311',
>>> imgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d', size=u'107374182400',
>>> volFormat=4, preallocate=2, diskType=2,
>>> volUUID=u'5921ba71-0f00-46cd-b0be-3c2ac1396845', desc=u'',
>>> srcImgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d',
>>> srcVolUUID=u'0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae', initialSize=None)
>>> from=::ffff:10.4.64.7,60016, flow_id=e94eebf8-75dc-407a-8916-f4ff632f843e
>>> (api:46)
>>> 2017-07-12 14:22:19,095-0700 WARN (tasks/6) [root] File:
>>> /rhev/data-center/00000001-0001-0001-0001-000000000311/0c02a758-4295-4199-97de-b041744b3b15/images/6a887015-67cd-4f7b-b709-eef97142258d/5921ba71-0f00-46cd-b0be-3c2ac1396845
>>> already removed (utils:120)
>>> 2017-07-12 14:22:19,096-0700 INFO (tasks/6) [storage.Volume] Request to
>>> create snapshot
>>> 6a887015-67cd-4f7b-b709-eef97142258d/5921ba71-0f00-46cd-b0be-3c2ac1396845 of
>>> volume
>>> 6a887015-67cd-4f7b-b709-eef97142258d/0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae
>>> (blockVolume:545)
>>> 2017-07-12 14:22:19,676-0700 INFO (tasks/6) [storage.LVM] Change LV tags
>>> (vg=0c02a758-4295-4199-97de-b041744b3b15,
>>> lv=5921ba71-0f00-46cd-b0be-3c2ac1396845, delTags=['OVIRT_VOL_INITIALIZING'],
>>> addTags=['MD_10', u'PU_0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae',
>>> u'IU_6a887015-67cd-4f7b-b709-eef97142258d']) (lvm:1344)
>>> 2017-07-12 14:22:36,010-0700 INFO (jsonrpc/5) [vdsm.api] START
>>> getVolumeInfo(sdUUID=u'0c02a758-4295-4199-97de-b041744b3b15',
>>> spUUID=u'00000001-0001-0001-0001-000000000311',
>>> imgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d',
>>> volUUID=u'5921ba71-0f00-46cd-b0be-3c2ac1396845', options=None)
>>> from=::ffff:10.4.64.7,59664, flow_id=e94eebf8-75dc-407a-8916-f4ff632f843e
>>> (api:46)
>>> 2017-07-12 14:22:36,077-0700 INFO (jsonrpc/5) [storage.VolumeManifest]
>>> Info request: sdUUID=0c02a758-4295-4199-97de-b041744b3b15
>>> imgUUID=6a887015-67cd-4f7b-b709-eef97142258d volUUID =
>>> 5921ba71-0f00-46cd-b0be-3c2ac1396845 (volume:238)
>>> 2017-07-12 14:22:36,185-0700 INFO (jsonrpc/5) [storage.VolumeManifest]
>>> 0c02a758-4295-4199-97de-b041744b3b15/6a887015-67cd-4f7b-b709-eef97142258d/5921ba71-0f00-46cd-b0be-3c2ac1396845
>>> info is {'status': 'OK', 'domain': '0c02a758-4295-4199-97de-b041744b3b15',
>>> 'voltype': 'LEAF', 'description': '', 'parent':
>>> '0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae', 'format': 'COW', 'generation': 0,
>>> 'image': '6a887015-67cd-4f7b-b709-eef97142258d', 'ctime': '1499894539',
>>> 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize':
>>> '1073741824', 'children': [], 'pool': '', 'capacity': '107374182400',
>>> 'uuid': u'5921ba71-0f00-46cd-b0be-3c2ac1396845', 'truesize': '1073741824',
>>> 'type': 'SPARSE', 'lease': {'owners': [], 'version': None}} (volume:272)
>>> 2017-07-12 14:22:36,186-0700 INFO (jsonrpc/5) [vdsm.api] FINISH
>>> getVolumeInfo return={'info': {'status': 'OK', 'domain':
>>> '0c02a758-4295-4199-97de-b041744b3b15', 'voltype': 'LEAF', 'description':
>>> '', 'parent': '0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae', 'format': 'COW',
>>> 'generation': 0, 'image': '6a887015-67cd-4f7b-b709-eef97142258d', 'ctime':
>>> '1499894539', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0',
>>> 'apparentsize': '1073741824', 'children': [], 'pool': '', 'capacity':
>>> '107374182400', 'uuid': u'5921ba71-0f00-46cd-b0be-3c2ac1396845', 'truesize':
>>> '1073741824', 'type': 'SPARSE', 'lease': {'owners': [], 'version': None}}}
>>> from=::ffff:10.4.64.7,59664, flow_id=e94eebf8-75dc-407a-8916-f4ff632f843e
>>> (api:52)
>>> 2017-07-12 14:24:24,854-0700 INFO (jsonrpc/1) [vdsm.api] START
>>> deleteVolume(sdUUID=u'0c02a758-4295-4199-97de-b041744b3b15',
>>> spUUID=u'00000001-0001-0001-0001-000000000311',
>>> imgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d',
>>> volumes=[u'5921ba71-0f00-46cd-b0be-3c2ac1396845'], postZero=u'false',
>>> force=u'false', discard=False) from=::ffff:10.4.64.7,60016,
>>> flow_id=c5e4bda4-9cd3-461d-8164-51d5614b995e (api:46)
>>> 2017-07-12 14:24:25,010-0700 INFO (tasks/7) [storage.Volume] Request to
>>> delete LV 5921ba71-0f00-46cd-b0be-3c2ac1396845 of image
>>> 6a887015-67cd-4f7b-b709-eef97142258d in VG
>>> 0c02a758-4295-4199-97de-b041744b3b15 (blockVolume:579)
>>> 2017-07-12 14:24:25,130-0700 INFO (tasks/7) [storage.VolumeManifest]
>>> sdUUID=0c02a758-4295-4199-97de-b041744b3b15
>>> imgUUID=6a887015-67cd-4f7b-b709-eef97142258d volUUID =
>>> 5921ba71-0f00-46cd-b0be-3c2ac1396845 legality = ILLEGAL (volume:398)
>>> 2017-07-12 14:24:38,881-0700 INFO (jsonrpc/2) [vdsm.api] START
>>> getVolumeInfo(sdUUID=u'0c02a758-4295-4199-97de-b041744b3b15',
>>> spUUID=u'00000001-0001-0001-0001-000000000311',
>>> imgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d',
>>> volUUID=u'5921ba71-0f00-46cd-b0be-3c2ac1396845', options=None)
>>> from=::ffff:10.4.64.7,59664, flow_id=c5e4bda4-9cd3-461d-8164-51d5614b995e
>>> (api:46)
>>> 2017-07-12 14:24:49,911-0700 INFO (jsonrpc/1) [vdsm.api] START
>>> getVolumeInfo(sdUUID=u'0c02a758-4295-4199-97de-b041744b3b15',
>>> spUUID=u'00000001-0001-0001-0001-000000000311',
>>> imgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d',
>>> volUUID=u'0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae', options=None)
>>> from=::ffff:10.4.64.7,59664, flow_id=c5e4bda4-9cd3-461d-8164-51d5614b995e
>>> (api:46)
>>> 2017-07-12 14:24:49,912-0700 INFO (jsonrpc/1) [storage.VolumeManifest]
>>> Info request: sdUUID=0c02a758-4295-4199-97de-b041744b3b15
>>> imgUUID=6a887015-67cd-4f7b-b709-eef97142258d volUUID =
>>> 0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae (volume:238)
>>> 2017-07-12 14:24:50,036-0700 INFO (jsonrpc/1) [storage.VolumeManifest]
>>> 0c02a758-4295-4199-97de-b041744b3b15/6a887015-67cd-4f7b-b709-eef97142258d/0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae
>>> info is {'status': 'OK', 'domain': '0c02a758-4295-4199-97de-b041744b3b15',
>>> 'voltype': 'LEAF', 'description': '', 'parent':
>>> '00000000-0000-0000-0000-000000000000', 'format': 'COW', 'generation': 0,
>>> 'image': '6a887015-67cd-4f7b-b709-eef97142258d', 'ctime': '1499885619',
>>> 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize':
>>> '110729625600', 'children': [], 'pool': '', 'capacity': '107374182400',
>>> 'uuid': u'0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae', 'truesize': '110729625600',
>>> 'type': 'SPARSE', 'lease': {'owners': [], 'version': None}} (volume:272)
>>> 2017-07-12 14:24:50,037-0700 INFO (jsonrpc/1) [vdsm.api] FINISH
>>> getVolumeInfo return={'info': {'status': 'OK', 'domain':
>>> '0c02a758-4295-4199-97de-b041744b3b15', 'voltype': 'LEAF', 'description':
>>> '', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'COW',
>>> 'generation': 0, 'image': '6a887015-67cd-4f7b-b709-eef97142258d', 'ctime':
>>> '1499885619', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0',
>>> 'apparentsize': '110729625600', 'children': [], 'pool': '', 'capacity':
>>> '107374182400', 'uuid': u'0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae', 'truesize':
>>> '110729625600', 'type': 'SPARSE', 'lease': {'owners': [], 'version': None}}}
>>> from=::ffff:10.4.64.7,59664, flow_id=c5e4bda4-9cd3-461d-8164-51d5614b995e
>>> (api:52)
>>>
>>> HELP, Right now I am starting to think Block Storage and oVIRT = BAD!
>>>
>>>
>>>
>>>
>>> --
>>>
>>> Devin Acosta
>>> Red Hat Certified Architect, LinuxStack
>>>
>>>
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users(a)ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>
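A side note for anyone watching a stuck removal like this one: the snapshot state can also be polled over the REST API, which is sometimes clearer than the UI's lock icon. This is only a rough sketch -- the engine URL, the credentials and the VM id are placeholders, and certificate verification is switched off purely for brevity:

import requests

ENGINE = "https://engine.example.com/ovirt-engine/api"  # placeholder engine URL
VM_ID = "00000000-0000-0000-0000-000000000000"          # placeholder VM id

resp = requests.get(
    "%s/vms/%s/snapshots" % (ENGINE, VM_ID),
    auth=("admin@internal", "password"),                # placeholder credentials
    headers={"Accept": "application/json"},
    verify=False,  # for brevity only; point this at the engine CA in real use
)
resp.raise_for_status()
for snap in resp.json().get("snapshot", []):
    # A merge that is progressing eventually flips the status from "locked"
    # back to "ok"; one that never does matches the symptom in this thread.
    print("%s %s %s" % (snap["id"], snap.get("description"), snap.get("snapshot_status")))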
7 years, 9 months
oVIRT 4.1.3 / iSCSI / VM Multiple Disks / Snapshot deletion issue.
by Devin Acosta
We are running a fresh install of oVirt 4.1.3 using iSCSI, and the VM in
question has multiple disks (4 to be exact). It snapshotted OK while on
iSCSI; however, when I went to delete the single snapshot that existed, it
went into a Locked state and never came back. The deletion has been running for
well over an hour, and since the snapshot is less than 12 hours old I am not
convinced it’s really doing anything.
Some Googling indicates there might be some known issues with
iSCSI/block storage and multiple-disk snapshots.
In the logs on the engine it shows:
2017-07-12 21:59:42,473Z INFO
[org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
(DefaultQuartzScheduler2) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
type:'PrepareMerge' to complete
2017-07-12 21:59:52,480Z INFO
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
(DefaultQuartzScheduler2) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
'RemoveSnapshot' (id: '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on
child command id: '75c535fd-4558-459a-9992-875c48578a97'
type:'ColdMergeSnapshotSingleDisk' to complete
2017-07-12 21:59:52,483Z INFO
[org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
(DefaultQuartzScheduler2) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
type:'PrepareMerge' to complete
2017-07-12 22:00:02,490Z INFO
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
(DefaultQuartzScheduler6) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
'RemoveSnapshot' (id: '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on
child command id: '75c535fd-4558-459a-9992-875c48578a97'
type:'ColdMergeSnapshotSingleDisk' to complete
2017-07-12 22:00:02,493Z INFO
[org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
(DefaultQuartzScheduler6) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
type:'PrepareMerge' to complete
2017-07-12 22:00:12,498Z INFO
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
(DefaultQuartzScheduler3) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
'RemoveSnapshot' (id: '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on
child command id: '75c535fd-4558-459a-9992-875c48578a97'
type:'ColdMergeSnapshotSingleDisk' to complete
2017-07-12 22:00:12,501Z INFO
[org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
(DefaultQuartzScheduler3) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
type:'PrepareMerge' to complete
2017-07-12 22:00:22,508Z INFO
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
(DefaultQuartzScheduler5) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
'RemoveSnapshot' (id: '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on
child command id: '75c535fd-4558-459a-9992-875c48578a97'
type:'ColdMergeSnapshotSingleDisk' to complete
2017-07-12 22:00:22,511Z INFO
[org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
(DefaultQuartzScheduler5) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
type:'PrepareMerge' to complete
This is what I saw on the SPM when I grepped for the snapshot ID.
2017-07-12 14:22:18,773-0700 INFO (jsonrpc/6) [vdsm.api] START
createVolume(sdUUID=u'0c02a758-4295-4199-97de-b041744b3b15',
spUUID=u'00000001-0001-0001-0001-000000000311',
imgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d', size=u'107374182400',
volFormat=4, preallocate=2, diskType=2,
volUUID=u'5921ba71-0f00-46cd-b0be-3c2ac1396845', desc=u'',
srcImgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d',
srcVolUUID=u'0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae', initialSize=None)
from=::ffff:10.4.64.7,60016, flow_id=e94eebf8-75dc-407a-8916-f4ff632f843e
(api:46)
2017-07-12 14:22:19,095-0700 WARN (tasks/6) [root] File:
/rhev/data-center/00000001-0001-0001-0001-000000000311/0c02a758-4295-4199-97de-b041744b3b15/images/6a887015-67cd-4f7b-b709-eef97142258d/5921ba71-0f00-46cd-b0be-3c2ac1396845
already removed (utils:120)
2017-07-12 14:22:19,096-0700 INFO (tasks/6) [storage.Volume] Request to
create snapshot
6a887015-67cd-4f7b-b709-eef97142258d/5921ba71-0f00-46cd-b0be-3c2ac1396845
of volume
6a887015-67cd-4f7b-b709-eef97142258d/0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae
(blockVolume:545)
2017-07-12 14:22:19,676-0700 INFO (tasks/6) [storage.LVM] Change LV tags
(vg=0c02a758-4295-4199-97de-b041744b3b15,
lv=5921ba71-0f00-46cd-b0be-3c2ac1396845,
delTags=['OVIRT_VOL_INITIALIZING'], addTags=['MD_10',
u'PU_0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae',
u'IU_6a887015-67cd-4f7b-b709-eef97142258d']) (lvm:1344)
2017-07-12 14:22:36,010-0700 INFO (jsonrpc/5) [vdsm.api] START
getVolumeInfo(sdUUID=u'0c02a758-4295-4199-97de-b041744b3b15',
spUUID=u'00000001-0001-0001-0001-000000000311',
imgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d',
volUUID=u'5921ba71-0f00-46cd-b0be-3c2ac1396845', options=None)
from=::ffff:10.4.64.7,59664, flow_id=e94eebf8-75dc-407a-8916-f4ff632f843e
(api:46)
2017-07-12 14:22:36,077-0700 INFO (jsonrpc/5) [storage.VolumeManifest]
Info request: sdUUID=0c02a758-4295-4199-97de-b041744b3b15
imgUUID=6a887015-67cd-4f7b-b709-eef97142258d volUUID =
5921ba71-0f00-46cd-b0be-3c2ac1396845 (volume:238)
2017-07-12 14:22:36,185-0700 INFO (jsonrpc/5) [storage.VolumeManifest]
0c02a758-4295-4199-97de-b041744b3b15/6a887015-67cd-4f7b-b709-eef97142258d/5921ba71-0f00-46cd-b0be-3c2ac1396845
info is {'status': 'OK', 'domain': '0c02a758-4295-4199-97de-b041744b3b15',
'voltype': 'LEAF', 'description': '', 'parent':
'0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae', 'format': 'COW', 'generation': 0,
'image': '6a887015-67cd-4f7b-b709-eef97142258d', 'ctime': '1499894539',
'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize':
'1073741824', 'children': [], 'pool': '', 'capacity': '107374182400',
'uuid': u'5921ba71-0f00-46cd-b0be-3c2ac1396845', 'truesize': '1073741824',
'type': 'SPARSE', 'lease': {'owners': [], 'version': None}} (volume:272)
2017-07-12 14:22:36,186-0700 INFO (jsonrpc/5) [vdsm.api] FINISH
getVolumeInfo return={'info': {'status': 'OK', 'domain':
'0c02a758-4295-4199-97de-b041744b3b15', 'voltype': 'LEAF', 'description':
'', 'parent': '0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae', 'format': 'COW',
'generation': 0, 'image': '6a887015-67cd-4f7b-b709-eef97142258d', 'ctime':
'1499894539', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0',
'apparentsize': '1073741824', 'children': [], 'pool': '', 'capacity':
'107374182400', 'uuid': u'5921ba71-0f00-46cd-b0be-3c2ac1396845',
'truesize': '1073741824', 'type': 'SPARSE', 'lease': {'owners': [],
'version': None}}} from=::ffff:10.4.64.7,59664,
flow_id=e94eebf8-75dc-407a-8916-f4ff632f843e (api:52)
2017-07-12 14:24:24,854-0700 INFO (jsonrpc/1) [vdsm.api] START
deleteVolume(sdUUID=u'0c02a758-4295-4199-97de-b041744b3b15',
spUUID=u'00000001-0001-0001-0001-000000000311',
imgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d',
volumes=[u'5921ba71-0f00-46cd-b0be-3c2ac1396845'], postZero=u'false',
force=u'false', discard=False) from=::ffff:10.4.64.7,60016,
flow_id=c5e4bda4-9cd3-461d-8164-51d5614b995e (api:46)
2017-07-12 14:24:25,010-0700 INFO (tasks/7) [storage.Volume] Request to
delete LV 5921ba71-0f00-46cd-b0be-3c2ac1396845 of image
6a887015-67cd-4f7b-b709-eef97142258d in VG
0c02a758-4295-4199-97de-b041744b3b15 (blockVolume:579)
2017-07-12 14:24:25,130-0700 INFO (tasks/7) [storage.VolumeManifest]
sdUUID=0c02a758-4295-4199-97de-b041744b3b15
imgUUID=6a887015-67cd-4f7b-b709-eef97142258d volUUID =
5921ba71-0f00-46cd-b0be-3c2ac1396845 legality = ILLEGAL (volume:398)
2017-07-12 14:24:38,881-0700 INFO (jsonrpc/2) [vdsm.api] START
getVolumeInfo(sdUUID=u'0c02a758-4295-4199-97de-b041744b3b15',
spUUID=u'00000001-0001-0001-0001-000000000311',
imgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d',
volUUID=u'5921ba71-0f00-46cd-b0be-3c2ac1396845', options=None)
from=::ffff:10.4.64.7,59664, flow_id=c5e4bda4-9cd3-461d-8164-51d5614b995e
(api:46)
2017-07-12 14:24:49,911-0700 INFO (jsonrpc/1) [vdsm.api] START
getVolumeInfo(sdUUID=u'0c02a758-4295-4199-97de-b041744b3b15',
spUUID=u'00000001-0001-0001-0001-000000000311',
imgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d',
volUUID=u'0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae', options=None)
from=::ffff:10.4.64.7,59664, flow_id=c5e4bda4-9cd3-461d-8164-51d5614b995e
(api:46)
2017-07-12 14:24:49,912-0700 INFO (jsonrpc/1) [storage.VolumeManifest]
Info request: sdUUID=0c02a758-4295-4199-97de-b041744b3b15
imgUUID=6a887015-67cd-4f7b-b709-eef97142258d volUUID =
0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae (volume:238)
2017-07-12 14:24:50,036-0700 INFO (jsonrpc/1) [storage.VolumeManifest]
0c02a758-4295-4199-97de-b041744b3b15/6a887015-67cd-4f7b-b709-eef97142258d/0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae
info is {'status': 'OK', 'domain': '0c02a758-4295-4199-97de-b041744b3b15',
'voltype': 'LEAF', 'description': '', 'parent':
'00000000-0000-0000-0000-000000000000', 'format': 'COW', 'generation': 0,
'image': '6a887015-67cd-4f7b-b709-eef97142258d', 'ctime': '1499885619',
'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize':
'110729625600', 'children': [], 'pool': '', 'capacity': '107374182400',
'uuid': u'0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae', 'truesize':
'110729625600', 'type': 'SPARSE', 'lease': {'owners': [], 'version': None}}
(volume:272)
2017-07-12 14:24:50,037-0700 INFO (jsonrpc/1) [vdsm.api] FINISH
getVolumeInfo return={'info': {'status': 'OK', 'domain':
'0c02a758-4295-4199-97de-b041744b3b15', 'voltype': 'LEAF', 'description':
'', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'COW',
'generation': 0, 'image': '6a887015-67cd-4f7b-b709-eef97142258d', 'ctime':
'1499885619', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0',
'apparentsize': '110729625600', 'children': [], 'pool': '', 'capacity':
'107374182400', 'uuid': u'0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae',
'truesize': '110729625600', 'type': 'SPARSE', 'lease': {'owners': [],
'version': None}}} from=::ffff:10.4.64.7,59664,
flow_id=c5e4bda4-9cd3-461d-8164-51d5614b995e (api:52)
HELP, Right now I am starting to think Block Storage and oVIRT = BAD!
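One rough way to tell whether the cold merge is actually making progress, offered as a generic sketch rather than guidance specific to this case (it assumes the copy runs as a qemu-img process on the SPM host, which may not hold for every flow):
# on the SPM host: is a qemu-img process running, and for how long?
ps -eo pid,etime,cmd | grep '[q]emu-img'
# follow the flow_id already seen in the logs above
grep c5e4bda4-9cd3-461d-8164-51d5614b995e /var/log/vdsm/vdsm.log | tail -20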
--
Devin Acosta
Red Hat Certified Architect, LinuxStack
7 years, 9 months
oVirt Metrics
by Arsène Gschwind
Hi all,
I'm trying to set up oVirt metrics as described at
https://www.ovirt.org/develop/release-management/features/engine/metrics-...
using SSO.
My oVirt installation is based on Version: 4.1.3.5-1.el7.centos.
The doc says to register a new SSO client with a tool called
ovirt-register-sso-client, but I'm missing it. I couldn't figure out which
package contains that tool. Is it included in the latest distribution?
Thanks for any help.
rgds,
Arsène
--
*Arsène Gschwind*
Fa. Sapify AG im Auftrag der Universität Basel
IT Services
Klingelbergstr. 70 | CH-4056 Basel | Switzerland
Tel. +41 79 449 25 63 | http://its.unibas.ch <http://its.unibas.ch/>
ITS-ServiceDesk: support-its(a)unibas.ch | +41 61 267 14 11
7 years, 9 months
Fwd: Windows USB Redirection
by Abi Askushi
Hi All,
I have oVirt 4.1 with 3 nodes on top of glusterfs.
I have 2 VMs: Windows 2016 64-bit and Windows 10 64-bit.
When I attach a USB flash disk to a VM (from host devices), the VM cannot
see the USB drive and reports a driver issue in Device Manager (see attached).
This happens with both VMs when tested.
When testing with Windows 7 or Windows XP, the USB is attached and accessed
normally.
Have you encountered such an issue?
I have installed the latest guest tools on both VMs.
Many thanx
7 years, 9 months
Removing iSCSI domain: host side part
by Gianluca Cecchi
Hello,
I have cleanly removed an iSCSI domain from oVirt. There is another one
(connecting to another storage array) that is the master domain.
But I see that oVirt hosts still maintain the iscsi session to the LUN.
So I want to clean from os point of view before removing the LUN itself
from storage.
At the moment I still see the multipath lun on both hosts
[root@ov301 network-scripts]# multipath -l
. . .
364817197b5dfd0e5538d959702249b1c dm-2 EQLOGIC ,100E-00
size=4.0T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 9:0:0:0 sde 8:64 active undef running
`- 10:0:0:0 sdf 8:80 active undef running
and
[root@ov301 network-scripts]# iscsiadm -m session
tcp: [1] 10.10.100.9:3260,1
iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910
(non-flash)
tcp: [2] 10.10.100.9:3260,1
iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910
(non-flash)
. . .
Do I have to remove the multipath paths and the multipath device first and
then log out of iSCSI, or is it sufficient to log out of iSCSI so that the
multipath device and its paths are cleanly removed from the OS point of view?
I would like to avoid leaving the multipath device in a stale condition.
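For what it's worth, a minimal sketch of the order that usually avoids stale devices, reusing the WWID and IQN shown above (check first that nothing still holds the device open):
# flush the multipath map first
multipath -f 364817197b5dfd0e5538d959702249b1c
# then log out of the target and remove the node records
iscsiadm -m node -T iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910 -p 10.10.100.9:3260 -u
iscsiadm -m node -T iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910 -p 10.10.100.9:3260 -o delete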
Thanks
Gianluca
7 years, 9 months
Failed to create template
by aduckers
I’m running 4.1 with a hosted engine, using FC SAN storage. I’ve uploaded a qcow2 image, then created a VM and attached that image.
When trying to create a template from that VM, we get failures with:
failed: low level image copy failed
VDSM command DeleteImageGroupVDS failed: Image does not exist in domain
failed to create template
What should I be looking at to resolve this? Anyone recognize this issue?
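A generic place to start looking, not a diagnosis: the "low level image copy failed" message generally comes from the storage copy task on the SPM host, so its vdsm.log around the time of the attempt usually contains the underlying qemu-img error.
# on the SPM host
grep -iE 'low level|qemu-img|copyImage' /var/log/vdsm/vdsm.log | tail -50
# and on the engine
grep -i 'low level' /var/log/ovirt-engine/engine.log | tail -20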
Thanks
7 years, 9 months
Re: [ovirt-users] Detach disk from one VM, attach to another VM
by Victor José Acosta Domínguez
Hello
Yes, it is possible. You must remove the disk from the first VM's
configuration (be careful, do not delete your virtual disk).
After that you can attach that disk to another VM.
Process should be:
- Detach disk from VM1
- Delete disk from VM1's configuration
- Attach disk to VM2
Victor Acosta
7 years, 9 months
2 hosts starting the engine at the same time?
by Gianluca Cecchi
Hello.
I'm on 4.1.3 with self hosted engine and glusterfs as storage.
I updated the kernel on the engine, so I executed these steps:
- enable global maintenance from the web admin GUI
- wait some minutes
- shutdown the engine vm from inside its OS
- wait some minutes
- execute on one host
[root@ovirt02 ~]# hosted-engine --set-maintenance --mode=none
I see that the qemu-kvm process for the engine starts on two hosts, and then
on one of them it gets a "kill -15" and stops.
Is this expected behaviour? It seems somewhat dangerous to me.
- when in maintenance
[root@ovirt02 ~]# hosted-engine --vm-status
!! Cluster is in GLOBAL MAINTENANCE mode !!
--== Host 1 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt01.localdomain.local
Host ID : 1
Engine status : {"health": "good", "vm": "up",
"detail": "up"}
Score : 2597
stopped : False
Local maintenance : False
crc32 : 7931c5c3
local_conf_timestamp : 19811
Host timestamp : 19794
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=19794 (Sun Jul 9 21:31:50 2017)
host-id=1
score=2597
vm_conf_refresh_time=19811 (Sun Jul 9 21:32:06 2017)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
--== Host 2 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : 192.168.150.103
Host ID : 2
Engine status : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 616ceb02
local_conf_timestamp : 2829
Host timestamp : 2812
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=2812 (Sun Jul 9 21:31:52 2017)
host-id=2
score=3400
vm_conf_refresh_time=2829 (Sun Jul 9 21:32:09 2017)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
--== Host 3 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt03.localdomain.local
Host ID : 3
Engine status : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 871204b2
local_conf_timestamp : 24584
Host timestamp : 24567
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=24567 (Sun Jul 9 21:31:52 2017)
host-id=3
score=3400
vm_conf_refresh_time=24584 (Sun Jul 9 21:32:09 2017)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
!! Cluster is in GLOBAL MAINTENANCE mode !!
[root@ovirt02 ~]#
- then I exit global maintenance
[root@ovirt02 ~]# hosted-engine --set-maintenance --mode=none
- During monitoring of status, at some point I see "EngineStart" on both
host2 and host3
[root@ovirt02 ~]# hosted-engine --vm-status
--== Host 1 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt01.localdomain.local
Host ID : 1
Engine status : {"reason": "bad vm status", "health":
"bad", "vm": "down", "detail": "down"}
Score : 3230
stopped : False
Local maintenance : False
crc32 : 25cadbfb
local_conf_timestamp : 20055
Host timestamp : 20040
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=20040 (Sun Jul 9 21:35:55 2017)
host-id=1
score=3230
vm_conf_refresh_time=20055 (Sun Jul 9 21:36:11 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
--== Host 2 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : 192.168.150.103
Host ID : 2
Engine status : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : e6951128
local_conf_timestamp : 3075
Host timestamp : 3058
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=3058 (Sun Jul 9 21:35:59 2017)
host-id=2
score=3400
vm_conf_refresh_time=3075 (Sun Jul 9 21:36:15 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineStart
stopped=False
--== Host 3 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt03.localdomain.local
Host ID : 3
Engine status : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 382efde5
local_conf_timestamp : 24832
Host timestamp : 24816
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=24816 (Sun Jul 9 21:36:01 2017)
host-id=3
score=3400
vm_conf_refresh_time=24832 (Sun Jul 9 21:36:17 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineStart
stopped=False
[root@ovirt02 ~]#
and then
[root@ovirt02 ~]# hosted-engine --vm-status
--== Host 1 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt01.localdomain.local
Host ID : 1
Engine status : {"reason": "bad vm status", "health":
"bad", "vm": "down", "detail": "down"}
Score : 3253
stopped : False
Local maintenance : False
crc32 : 3fc39f31
local_conf_timestamp : 20087
Host timestamp : 20070
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=20070 (Sun Jul 9 21:36:26 2017)
host-id=1
score=3253
vm_conf_refresh_time=20087 (Sun Jul 9 21:36:43 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
--== Host 2 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : 192.168.150.103
Host ID : 2
Engine status : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 4a05c31e
local_conf_timestamp : 3109
Host timestamp : 3079
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=3079 (Sun Jul 9 21:36:19 2017)
host-id=2
score=3400
vm_conf_refresh_time=3109 (Sun Jul 9 21:36:49 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineStarting
stopped=False
--== Host 3 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt03.localdomain.local
Host ID : 3
Engine status : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 382efde5
local_conf_timestamp : 24832
Host timestamp : 24816
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=24816 (Sun Jul 9 21:36:01 2017)
host-id=3
score=3400
vm_conf_refresh_time=24832 (Sun Jul 9 21:36:17 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineStart
stopped=False
[root@ovirt02 ~]#
and
[root@ovirt02 ~]# hosted-engine --vm-status
--== Host 1 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt01.localdomain.local
Host ID : 1
Engine status : {"reason": "bad vm status", "health":
"bad", "vm": "down", "detail": "down"}
Score : 3253
stopped : False
Local maintenance : False
crc32 : 3fc39f31
local_conf_timestamp : 20087
Host timestamp : 20070
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=20070 (Sun Jul 9 21:36:26 2017)
host-id=1
score=3253
vm_conf_refresh_time=20087 (Sun Jul 9 21:36:43 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
--== Host 2 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : 192.168.150.103
Host ID : 2
Engine status : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 4a05c31e
local_conf_timestamp : 3109
Host timestamp : 3079
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=3079 (Sun Jul 9 21:36:19 2017)
host-id=2
score=3400
vm_conf_refresh_time=3109 (Sun Jul 9 21:36:49 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineStarting
stopped=False
--== Host 3 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt03.localdomain.local
Host ID : 3
Engine status : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : fc1e8cf9
local_conf_timestamp : 24868
Host timestamp : 24836
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=24836 (Sun Jul 9 21:36:21 2017)
host-id=3
score=3400
vm_conf_refresh_time=24868 (Sun Jul 9 21:36:53 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineStarting
stopped=False
[root@ovirt02 ~]#
and at the end Host3 goes to "ForceStop" for the engine
[root@ovirt02 ~]# hosted-engine --vm-status
--== Host 1 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt01.localdomain.local
Host ID : 1
Engine status : {"reason": "bad vm status", "health":
"bad", "vm": "down", "detail": "down"}
Score : 3312
stopped : False
Local maintenance : False
crc32 : e9d53432
local_conf_timestamp : 20120
Host timestamp : 20102
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=20102 (Sun Jul 9 21:36:58 2017)
host-id=1
score=3312
vm_conf_refresh_time=20120 (Sun Jul 9 21:37:15 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
--== Host 2 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : 192.168.150.103
Host ID : 2
Engine status : {"reason": "bad vm status", "health":
"bad", "vm": "up", "detail": "powering up"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 7d2330be
local_conf_timestamp : 3141
Host timestamp : 3124
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=3124 (Sun Jul 9 21:37:04 2017)
host-id=2
score=3400
vm_conf_refresh_time=3141 (Sun Jul 9 21:37:21 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineStarting
stopped=False
--== Host 3 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt03.localdomain.local
Host ID : 3
Engine status : {"reason": "Storage of VM is locked.
Is another host already starting the VM?", "health": "bad", "vm":
"already_locked", "detail": "down"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 179825e8
local_conf_timestamp : 24900
Host timestamp : 24883
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=24883 (Sun Jul 9 21:37:08 2017)
host-id=3
score=3400
vm_conf_refresh_time=24900 (Sun Jul 9 21:37:24 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineForceStop
stopped=False
[root@ovirt02 ~]#
Comparing /var/log/libvirt/qemu/HostedEngine of host2 and host3
Host2:
2017-07-09 19:36:36.094+0000: starting up libvirt version: 2.0.0, package:
10.el7_3.9 (CentOS BuildSystem <http://bugs.centos.org>,
2017-05-25-20:52:28, c1bm.rdu2.centos.org), qemu version: 2.6.0
(qemu-kvm-ev-2.6.0-28.el7.10.1), hostname: ovirt02.localdomain.local
... char device redirected to /dev/pts/1 (label charconsole0)
warning: host doesn't support requested feature: CPUID.07H:EBX.erms [bit 9]
Host3:
2017-07-09 19:36:38.143+0000: starting up libvirt version: 2.0.0, package:
10.el7_3.9 (CentOS BuildSystem <http://bu
gs.centos.org>, 2017-05-25-20:52:28, c1bm.rdu2.centos.org), qemu version:
2.6.0 (qemu-kvm-ev-2.6.0-28.el7.10.1), hos
tname: ovirt03.localdomain.local
... char device redirected to /dev/pts/1 (label charconsole0)
2017-07-09 19:36:38.584+0000: shutting down
2017-07-09T19:36:38.589729Z qemu-kvm: terminating on signal 15 from pid 1835
Any comment?
Is it only a matter of the VM being powered on in paused mode before the OS
itself starts, or do I risk corruption due to two qemu-kvm processes trying
to start the engine VM's OS?
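A way to confirm after the fact that the second start was blocked by the storage lease rather than by luck (generic checks; the log path is the hosted-engine HA default):
grep -i already_locked /var/log/ovirt-hosted-engine-ha/agent.log | tail
sanlock client status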
Thanks,
Gianluca
7 years, 9 months
Installation of oVirt 4.1, Gluster Storage and Hosted Engine
by Simone Marchioni
Hi to all,
I have an old installation of oVirt 3.3 with the Engine on a separate
server. I wanted to test the latest oVirt 4.1 with Gluster Storage and
Hosted Engine.
Followed the following tutorial:
http://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-glust...
I have 3 hosts as shown in the tutorial. I installed CentOS 7.3, the oVirt
4.1 repo and all required packages, and configured passwordless ssh as stated.
Then I logged in to the Cockpit web interface, selected "Hosted Engine with
Gluster", hit the Start button and configured the parameters as shown in
the tutorial.
In the last step (5) this is the generated gdeploy configuration (note: I
replaced the real domain with "domain.it"):
#gdeploy configuration generated by cockpit-gluster plugin
[hosts]
ha1.domain.it
ha2.domain.it
ha3.domain.it
[script1]
action=execute
ignore_script_errors=no
file=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d sdb
-h ha1.domain.it,ha2.domain.it,ha3.domain.it
[disktype]
raid6
[diskcount]
12
[stripesize]
256
[service1]
action=enable
service=chronyd
[service2]
action=restart
service=chronyd
[shell2]
action=execute
command=vdsm-tool configure --force
[script3]
action=execute
file=/usr/share/ansible/gdeploy/scripts/disable-multipath.sh
[pv1]
action=create
devices=sdb
ignore_pv_errors=no
[vg1]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no
[lv1:{ha1.domain.it,ha2.domain.it}]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=110GB
poolmetadatasize=1GB
[lv2:ha3.domain.it]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=80GB
poolmetadatasize=1GB
[lv3:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=50GB
[lv4:ha3.domain.it]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB
[lv5:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB
[lv6:ha3.domain.it]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB
[lv7:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_export
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/export
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB
[lv8:ha3.domain.it]
action=create
lvname=gluster_lv_export
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/export
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB
[lv9:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_iso
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/iso
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB
[lv10:ha3.domain.it]
action=create
lvname=gluster_lv_iso
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/iso
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB
[selinux]
yes
[service3]
action=restart
service=glusterd
slice_setup=yes
[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp
services=glusterfs
[script2]
action=execute
file=/usr/share/ansible/gdeploy/scripts/disable-gluster-hooks.sh
[shell3]
action=execute
command=usermod -a -G gluster qemu
[volume1]
action=create
volname=engine
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bricks/engine/engine,ha2.domain.it:/gluster_bricks/engine/engine,ha3.domain.it:/gluster_bricks/engine/engine
ignore_volume_errors=no
arbiter_count=1
[volume2]
action=create
volname=data
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bricks/data/data,ha2.domain.it:/gluster_bricks/data/data,ha3.domain.it:/gluster_bricks/data/data
ignore_volume_errors=no
arbiter_count=1
[volume3]
action=create
volname=export
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bricks/export/export,ha2.domain.it:/gluster_bricks/export/export,ha3.domain.it:/gluster_bricks/export/export
ignore_volume_errors=no
arbiter_count=1
[volume4]
action=create
volname=iso
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bricks/iso/iso,ha2.domain.it:/gluster_bricks/iso/iso,ha3.domain.it:/gluster_bricks/iso/iso
ignore_volume_errors=no
arbiter_count=1
When I hit "Deploy" button the Deployment fails with the following error:
PLAY [gluster_servers]
*********************************************************
TASK [Run a shell script]
******************************************************
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The
conditional check 'result.rc != 0' failed. The error was: error while
evaluating conditional (result.rc != 0): 'dict object' has no attribute
'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The
conditional check 'result.rc != 0' failed. The error was: error while
evaluating conditional (result.rc != 0): 'dict object' has no attribute
'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The
conditional check 'result.rc != 0' failed. The error was: error while
evaluating conditional (result.rc != 0): 'dict object' has no attribute
'rc'"}
to retry, use: --limit @/tmp/tmpcV3lam/run-script.retry
PLAY RECAP
*********************************************************************
ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
What am I doing wrong? Maybe I need to initialize glusterfs in some way...
Which logs record the status of this deployment, so I can
check the errors?
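One generic way to get past the unhelpful "'dict object' has no attribute 'rc'" wrapper, not a confirmed fix: run the referenced sanity-check script by hand on one host with the same arguments and see what it actually prints. If the script is missing at that path on any of the three hosts, that alone would explain the failure.
bash -x /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h ha1.domain.it,ha2.domain.it,ha3.domain.it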
Thanks in advance!
Simone
7 years, 9 months
Problems with vdsm-tool and ovn-config option
by Gianluca Cecchi
Hello,
In February I installed the OVN controller on some hypervisors (CentOS 7.3
hosts).
At that time the vdsm-tool command was part
of vdsm-python-4.19.4-1.el7.centos.noarch.
I was able to configure my hosts with the command
vdsm-tool ovn-config OVN_central_server_IP local_OVN_tunneling_IP
as described here:
https://www.ovirt.org/blog/2016/11/ovirt-provider-ovn/
and now here
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/...
I have 2 hosts that I added later and on which I want to configure OVN
external network provider too.
But now, with vdsm-python at
version vdsm-python-4.19.10.1-1.el7.centos.noarch, I get a usage error
trying to execute the command:
[root@ov301 ~]# vdsm-tool ovn-config 10.4.192.43 10.4.167.41
Usage: /bin/vdsm-tool [options] <action> [arguments]
Valid options:
-h, --help
Show this help menu.
-l, --logfile <path>
...
Also, the changelog of the package seems quite broken:
[root@ov301 ~]# rpm -q --changelog vdsm-python
* Wed Aug 03 2016 Yaniv Bronhaim <ybronhei(a)redhat.com> - 4.18.999
- Re-review of vdsm.spec to return it to fedora Bug #1361659
* Sun Oct 13 2013 Yaniv Bronhaim <ybronhei(a)redhat.com> - 4.13.0
- Removing vdsm-python-cpopen from the spec
- Adding dependency on formal cpopen package
* Sun Apr 07 2013 Yaniv Bronhaim <ybronhei(a)redhat.com> - 4.9.0-1
- Adding cpopen package
* Wed Oct 12 2011 Federico Simoncelli <fsimonce(a)redhat.com> - 4.9.0-0
- Initial upstream release
* Thu Nov 02 2006 Simon Grinberg <simong(a)qumranet.com> - 0.0-1
- Initial build
[root@ov301 ~]#
How can I configure OVN?
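An exploratory check rather than a fix: see whether the ovn-config verb exists at all on the newer hosts and which installed package, if any, ships it (the path assumes the usual CentOS 7 python2.7 layout for vdsm-tool verbs):
find /usr/lib/python2.7/site-packages/vdsm/tool/ -name '*ovn*'
rpm -qa | grep -i -e ovn -e openvswitch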
Thanks in advance,
Gianluca
7 years, 9 months
Manually moving disks from FC to iSCSI
by Gianluca Cecchi
Hello,
I have a source oVirt environment with storage domain on FC
I have a destination oVirt environment with storage domain on iSCSI
The two environments can communicate only via the network of their
respective hypervisors.
The source environment, in particular, is almost isolated and I cannot
attach an export domain to it or something similar.
So I'm planning a direct move of the disks of some VMs through dd.
The workflow would be:
On the destination, create a new VM with the same configuration and the same
number of disks, each the same size as the corresponding source one.
Also, I think, the same allocation policy (thin provisioned vs preallocated).
Using lvs -o+lv_tags I can detect the names of my origin and destination
LVs, corresponding to the disks
When a VM is powered down, the LV that maps the disk will not be open, so I
have to force its activation (both on source and on destination):
lvchange --config 'global {use_lvmetad=0}' -ay vgname/lvname
Copy the source disk with dd through the network (I use gzip basically to
limit network usage):
on src_host:
dd if=/dev/src_vg/src_lv bs=1024k | gzip | ssh dest_host "gunzip | dd
bs=1024k of=/dev/dest_vg/dest_lv"
deactivate LVs on source and dest
lvchange --config 'global {use_lvmetad=0}' -an vgname/lvname
Try to power on the VM on destination
Some questions:
- about the overall workflow
- about dd flags, in particular when the source disks are thin vs preallocated (a sketch follows below)
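On the dd-flags question, a hedged sketch rather than a tested recipe: if the destination LV is thin, adding conv=sparse on the receiving dd makes it seek over all-zero blocks instead of writing them, so they are never allocated; for a preallocated destination the original command line is fine as is.
dd if=/dev/src_vg/src_lv bs=1024k | gzip | ssh dest_host "gunzip | dd bs=1024k conv=sparse of=/dev/dest_vg/dest_lv"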
Thanks,
Gianluca
7 years, 9 months
Bizarre oVirt network problem
by FERNANDO FREDIANI
Hello.
I am facing a pretty bizarre problem on two of my Nodes running oVirt. A
given VM running a few hundred Mbps of traffic simply stops passing
traffic and only recovers after a reboot. Checking the bridge with
'brctl showmacs BRIDGE', I see the VM's MAC address missing during this
event.
It seems the bridge simply unlearns the VM's MAC address, which only
returns when the VM is rebooted.
This problem happened on two different Nodes running on different
hardware, in different datacenters, with different network architectures,
different switch vendors and different bonding modes.
The main differences these Nodes have compared to others I have and
which don't show this problem are:
- The CentOS 7 installed is a Minimal installation instead of oVirt-NG
- The Kernel used is 4.12 (elrepo) instead of the default 3.10
- The ovirtmgmt network is used also for the Virtual Machine
showing this problem.
Does anyone have any idea whether it may have anything to do with oVirt (any
filters) or with any of the components that differ from an oVirt-NG installation?
Thanks
Fernando
7 years, 9 months
Deleted the default Datacenter and hosted-engine was there...
by Vinícius Ferrão
Hello,
As some of you may know, I'm learning my way through oVirt. During the configuration of my production oVirt pool I created a new datacenter and a cluster to put my hosts in, and deleted the default one.
But now I don't know what to do. I can't add one of my machines to the new datacenter since it's still in the "old", deleted datacenter. The hosted engine is running on that same machine.
What are the next steps? I think I've broken the whole installation by removing the default DC.
Any help is appreciated,
V.
PS: I still have access to the Hosted Engine, and it’s running fine.
7 years, 9 months
[ovirt 4.1] how to increase HA VM retry count ?
by TranceWorldLogic .
Hi,
I have created an HA VM with priority set to High.
I then blocked the host's network connection using firewall rules as shown below,
e.g.
iptables -I INPUT -j DROP; iptables -I OUTPUT -j DROP;
As expected, oVirt called fencing to power the host off and power it on again.
In the background oVirt tried 10 times to start that VM, but as the host was down
it failed to start.
After 10 tries it gave up starting the VM.
Is there any way to increase the retry count?
Please also let me know how to change the retry timer.
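For what it's worth, the retry count and interval usually live in engine-config on the engine rather than on the hosts. The key names below are an assumption to verify against your own engine-config listing before relying on them:
# confirm the exact key names exist in your version first (assumed names)
engine-config --list | grep -i autostart
engine-config -s MaxNumOfTriesToRunFailedAutoStartVm=20
engine-config -s RetryToRunAutoStartVmIntervalInSeconds=60
systemctl restart ovirt-engine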
Thanks,
~Rohit
7 years, 9 months
Add disk image from node command line?
by Chris Adams
I have a qcow2 disk image sitting on the local filesystem of one node.
Is there a way to copy this image to oVirt (into an iSCSI storage
domain) without copying it to my desktop and uploading through the web
UI?
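One hand-rolled sketch with placeholder names and several assumptions (the disk already exists in the iSCSI domain, it is preallocated/raw, and it is at least as large as the image's virtual size); the supported route is the image upload API, this only avoids the desktop round-trip:
# find the LV that backs the new, empty disk: the IU_ tag matches the disk ID shown in the UI
lvs -o lv_name,lv_size,lv_tags <storage_domain_vg>
lvchange -ay <storage_domain_vg>/<volume_uuid>
qemu-img convert -O raw /path/to/image.qcow2 /dev/<storage_domain_vg>/<volume_uuid>
lvchange -an <storage_domain_vg>/<volume_uuid>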
--
Chris Adams <cma(a)cmadams.net>
7 years, 9 months
trying to use raw disks from foreign KVM directly in oVirt
by Matthias Leopold
hi,
I'm using a KVM system that creates its disks in raw format as LVM
logical volumes. There's one VG with one LV per disk. The physical
devices for these LVM entities are iSCSI devices. Now my idea was to use
these disks directly in oVirt as "Direct LUN". I did the following:
- stopped the foreign KVM domain
- deactivated the LV that is the disk
- reconfigured the SAN so the iSCSI device is removed from foreign KVM
host and is visible to the oVirt hosts
- created an oVirt "Direct LUN" disk with the iSCSI device
- created a VM in oVirt, attached the "Direct LUN" disk to it and set
the "bootable" flag
- started the VM
- console displays "boot failed: not a bootable disk" :-(
I tried virtIO, virtIO-SCSI and IDE interfaces for the disk, no change.
I ran "scan alignment" for the disk, no change.
I tried without the bootable flag, no change.
A strange thing is the wrong virtual size the oVirt GUI displays for this
disk: the GUI says the virtual size is 372GB, while "qemu-img info" (on the
oVirt host) says the virtual size is 47GB (which was the size in the foreign
KVM system).
What could be wrong? Can this work at all?
Could LVM metadata from the old system be a problem?
I know the whole operation is a little crazy, but if this worked the
migration process would be so much easier...
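A quick check that speaks directly to the LVM-metadata question (device names are placeholders; this is diagnosis, not a fix): look at what is actually on the LUN as the oVirt host sees it. If the whole LUN is an LVM physical volume from the old system, the guest is being handed the PV rather than the 47GB LV inside it, which would fit both the size mismatch and the "not a bootable disk" message.
# on an oVirt host that sees the LUN
pvs
lsblk /dev/mapper/<wwid_of_the_lun>
qemu-img info /dev/mapper/<wwid_of_the_lun>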
thx
matthias
7 years, 9 months
[ovirt 4.1] tutorial for external Scheduling policy
by TranceWorldLogic .
Hi,
Can someone point me to a document or tutorial for external scheduling
policies?
I want to understand the "external scheduling policy" feature but have no
idea how to implement it.
Please point me to some documentation (a tutorial or sample code would be best).
Thanks,
~Rohit
7 years, 9 months
oVIRT 4.1 / iSCSI Multipathing / Dell Compellent
by Devin Acosta
I just installed a brand new Dell Compellent SAN for use with our oVIRT
4.1.3 fresh installation. I presented a LUN of 30TB to the cluster over
iSCSI 10G. I went into the Storage domain and added a new storage mount
called “dell-storage” and logged into each of the ports for the target. It
detects the targets just right and the Dell SAN is happy, until a host is
rebooted at which point the iSCSI seems to choose to log into only one of
the controllers and not all the paths that it originally logged into. At
this point the Dell SAN shows only 1/2 connected and therefore my e-mail.
When I looked at the original iscsiadm session information after initially
joining to domain it shows correct connected to (1f,21,1e,20) ports.
tcp: [11] 10.4.77.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe1f
(non-flash)
tcp: [12] 10.4.77.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe21
(non-flash)
tcp: [13] 10.4.78.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe1e
(non-flash)
tcp: [14] 10.4.78.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe20
(non-flash)
tcp: [15] 10.4.77.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe1f
(non-flash)
tcp: [16] 10.4.78.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe1e
(non-flash)
After I reboot the hypervisor and it re-connects to the cluster it shows:
tcp: [1] 10.4.78.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe1e
(non-flash)
tcp: [2] 10.4.78.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe1e
(non-flash)
tcp: [3] 10.4.77.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe1f
(non-flash)
tcp: [4] 10.4.77.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe1f
(non-flash)
What is bizarre is that it shows multiple connections to the same IP: 2
connections to 1e and 2 connections to 1f. It seems to have selected
only the top controller on each fault domain and not the bottom controller
as well.
I did configure a “bond” inside iSCSI Multipathing, selecting only
the 2 iSCSI VLANs together. I didn’t select a target with it
because I wasn’t sure of the proper configuration for this. If I selected
both the virtual networks and the target ports, the cluster went down hard.
Any ideas?
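Some generic iscsiadm checks, not a Compellent-specific fix, to compare what the host has on record for automatic login with what it actually logged into after the reboot (one of the IQNs above is reused as an example):
iscsiadm -m node
iscsiadm -m node -T iqn.2002-03.com.compellent:5000d310013dfe20 -o show | grep node.startup
# log into every recorded node by hand and compare with the post-reboot session list
iscsiadm -m node -L all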
Devin Acosta
7 years, 9 months
Template for imported VMs?
by Eduardo Mayoral
Hi,
I am using oVirt 4.1 and I am importing a significant number of VMs
from VMWare. So far, everything is fine.
However, I find that after importing each VM I have to modify some
parameters of the imported VM (Timezone, VM type from desktop to server,
VNC console...).
I have those parameters changed on the "blank" template, but it
seems that the import process is not picking that template as the base
for the imported VM. Where does the import process pick the defaults
for the imported VM from? Can those be changed? I have failed to find
that information in the docs.
Thanks in advance!
--
Eduardo Mayoral Jimeno (emayoral(a)arsys.es)
Administrador de sistemas. Departamento de Plataformas. Arsys internet.
+34 941 620 145 ext. 5153
7 years, 9 months
[Ovirt 4.1] Hosted engine with glusterfs (2 node)
by TranceWorldLogic .
Hi,
I was trying to set up the hosted engine on 2 host machines, but it does not
allow setting up the hosted engine on a 2-node gluster file system.
I have also modified vdsm.conf to allow a replica count of 2.
But later I found that in the code it is hard-coded to 3, as shown below:
----------------------------------------------------------------
src/plugins/gr-he-setup/storage/nfs.py
if replicaCount != '3':
    raise RuntimeError(
        _(
            'GlusterFS Volume is not using replica 3'
        )
    )
----------------------------------------------------------------
Can I get to know the reason it is hard-coded to 3?
From reading online I learned that split-brain issues do not get resolved
automatically with a 2-node gluster setup, but please tell me more (why
3? why not 2 or 4?).
If I am missing something, please let me know.
I also want to understand: can quorum solve the 2-node gluster
(split-brain) issue?
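For illustration only, with placeholder host names: the usual way to keep a two-data-copy footprint while still satisfying the 3-node expectation is an arbiter volume, where the third brick holds only metadata and provides the quorum vote that breaks split-brain ties.
gluster volume create engine replica 3 arbiter 1 \
  host1:/gluster_bricks/engine/engine \
  host2:/gluster_bricks/engine/engine \
  arbiter-host:/gluster_bricks/engine/engine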
Thanks,
~Rohit
7 years, 9 months
virt-v2v to glusterfs storage domain
by Ramachandra Reddy Ankireddypalle
Hi,
Does the virt-v2v command work with a glusterfs storage domain? I have an
OVA image that needs to be imported into a glusterfs storage domain. Please
provide some pointers.
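I can't say whether direct output into a glusterfs data domain is supported; the commonly documented path in this time frame goes through an export storage domain and then an import from the web UI. Roughly, with placeholder paths and server names (older virt-v2v builds spell the output mode -o rhev rather than -o rhv):
virt-v2v -i ova /path/to/vm.ova -o rhv -os nfs.server:/export/export_domain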
Thanks and Regards,
Ram
7 years, 9 months
oVirt Node 4.1.2 offering RC update?
by Vinícius Ferrão
Hello,
I’ve noted a strange thing on oVirt. On the Hosted Engine an update was offered and I was a bit confused, since I’m running the latest oVirt Node release.
To check if 4.1.3 was already released I issued an “yum update” on the command line and for my surprise an RC release was offered. This not seems to be right:
======================================================================================================================
Package Arch Version Repository Size
======================================================================================================================
Installing:
ovirt-node-ng-image-update noarch 4.1.3-0.3.rc3.20170622082156.git47b4302.el7.centos ovirt-4.1 544 M
replacing ovirt-node-ng-image-update-placeholder.noarch 4.1.2-1.el7.centos
Updating:
ovirt-engine-appliance noarch 4.1-20170622.1.el7.centos ovirt-4.1 967 M
Transaction Summary
======================================================================================================================
Install 1 Package
Upgrade 1 Package
Total download size: 1.5 G
Is this ok [y/d/N]: N
Is this normal behavior? This isn't really good, since it can move production systems from stable to unstable packages. If this is normal, how can we avoid it?
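A couple of generic yum checks, not a statement about the repos themselves, to see which repository the release-candidate build comes from and which oVirt repos are enabled on the node:
yum list available ovirt-node-ng-image-update --showduplicates
yum repolist enabled | grep -i ovirt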
Thanks,
V.
7 years, 9 months
node ng upgrade failed
by Grundmann, Christian
Hi,
I tried to update to node ng 4.1.3 (from 4.1.1) which failed
Jul 10 10:10:49 imgbased: 2017-07-10 10:10:49,986 [INFO] Extracting image '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.1.3-0.20170709.0.el7.squashfs.img'
Jul 10 10:10:50 imgbased: 2017-07-10 10:10:50,816 [INFO] Starting base creation
Jul 10 10:10:50 imgbased: 2017-07-10 10:10:50,816 [INFO] New base will be: ovirt-node-ng-4.1.3-0.20170709.0
Jul 10 10:10:51 imgbased: 2017-07-10 10:10:51,539 [INFO] New LV is: <LV 'onn_cs-kvm-001/ovirt-node-ng-4.1.3-0.20170709.0' />
Jul 10 10:10:53 imgbased: 2017-07-10 10:10:53,070 [INFO] Creating new filesystem on base
Jul 10 10:10:53 imgbased: 2017-07-10 10:10:53,412 [INFO] Writing tree to base
Jul 10 10:12:04 imgbased: 2017-07-10 10:12:04,344 [INFO] Adding a new layer after <Base ovirt-node-ng-4.1.3-0.20170709.0 [] />
Jul 10 10:12:04 imgbased: 2017-07-10 10:12:04,344 [INFO] Adding a new layer after <Base ovirt-node-ng-4.1.3-0.20170709.0 [] />
Jul 10 10:12:04 imgbased: 2017-07-10 10:12:04,345 [INFO] New layer will be: <Layer ovirt-node-ng-4.1.3-0.20170709.0+1 />
Jul 10 10:12:52 imgbased: 2017-07-10 10:12:52,714 [ERROR] Failed to migrate etc#012Traceback (most recent call last):#012 File "/tmp/tmp.EMsKrrbmZs/usr/lib/python2.7/site-packages/imgbased/plugins/osupdater.py", line 119, in on_new_layer#012 check_nist_layout(imgbase, new_lv)#012 File "/tmp/tmp.EMsKrrbmZs/usr/lib/python2.7/site-packages/imgbased/plugins/osupdater.py", line 173, in check_nist_layout#012 v.create(t, paths[t]["size"], paths[t]["attach"])#012 File "/tmp/tmp.EMsKrrbmZs/usr/lib/python2.7/site-packages/imgbased/volume.py", line 48, in create#012 "Path is already a volume: %s" % where#012AssertionError: Path is already a volume: /var/log
Jul 10 10:12:53 python: detected unhandled Python exception in '/tmp/tmp.EMsKrrbmZs/usr/lib/python2.7/site-packages/imgbased/__main__.py'
Jul 10 10:12:53 abrt-server: Executable '/tmp/tmp.EMsKrrbmZs/usr/lib/python2.7/site-packages/imgbased/__main__.py' doesn't belong to any package and ProcessUnpackaged is set to 'no'
Jul 10 10:15:10 imgbased: 2017-07-10 10:15:10,079 [INFO] Extracting image '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.1.3-0.20170622.0.el7.squashfs.img'
Jul 10 10:15:11 imgbased: 2017-07-10 10:15:11,226 [INFO] Starting base creation
Jul 10 10:15:11 imgbased: 2017-07-10 10:15:11,226 [INFO] New base will be: ovirt-node-ng-4.1.3-0.20170622.0
Jul 10 10:15:11 python: detected unhandled Python exception in '/tmp/tmp.pqf2qhifaY/usr/lib/python2.7/site-packages/imgbased/__main__.py'
Jul 10 10:15:12 abrt-server: Executable '/tmp/tmp.pqf2qhifaY/usr/lib/python2.7/site-packages/imgbased/__main__.py' doesn't belong to any package and ProcessUnpackaged is set to 'no'
How can I fix it?
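The assertion suggests imgbased found /var/log already set up as a separate volume before it tried to create one. Two generic commands to see the current layout (diagnosis only, not a confirmed fix):
lvs
findmnt /var/log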
Thx Christian
--_000_6A17C71B52524C408E7AAF69103E9E490FCE2C1Bfabamailserverf_
Content-Type: text/html; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
<html xmlns:v=3D"urn:schemas-microsoft-com:vml" xmlns:o=3D"urn:schemas-micr=
osoft-com:office:office" xmlns:w=3D"urn:schemas-microsoft-com:office:word" =
xmlns:m=3D"http://schemas.microsoft.com/office/2004/12/omml" xmlns=3D"http:=
//www.w3.org/TR/REC-html40">
<head>
<meta http-equiv=3D"Content-Type" content=3D"text/html; charset=3Dus-ascii"=
>
<meta name=3D"Generator" content=3D"Microsoft Word 15 (filtered medium)">
<style><!--
/* Font Definitions */
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0cm;
margin-bottom:.0001pt;
font-size:11.0pt;
font-family:"Calibri",sans-serif;
mso-fareast-language:EN-US;}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:#0563C1;
text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
{mso-style-priority:99;
color:#954F72;
text-decoration:underline;}
span.E-MailFormatvorlage17
{mso-style-type:personal-compose;
font-family:"Calibri",sans-serif;
color:windowtext;}
.MsoChpDefault
{mso-style-type:export-only;
font-family:"Calibri",sans-serif;
mso-fareast-language:EN-US;}
@page WordSection1
{size:612.0pt 792.0pt;
margin:70.85pt 70.85pt 2.0cm 70.85pt;}
div.WordSection1
{page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext=3D"edit" spidmax=3D"1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext=3D"edit">
<o:idmap v:ext=3D"edit" data=3D"1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang=3D"DE-AT" link=3D"#0563C1" vlink=3D"#954F72">
<div class=3D"WordSection1">
<p class=3D"MsoNormal"><span lang=3D"EN-US">Hi,<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">I tried to update to node ng 4.=
1.3 (from 4.1.1) which failed<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US"><o:p> </o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">Jul 10 10:10:49 imgbased: 2017-=
07-10 10:10:49,986 [INFO] Extracting image '/usr/share/ovirt-node-ng/image/=
/ovirt-node-ng-4.1.3-0.20170709.0.el7.squashfs.img'<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">Jul 10 10:10:50 imgbased: 2017-=
07-10 10:10:50,816 [INFO] Starting base creation<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">Jul 10 10:10:50 imgbased: 2017-=
07-10 10:10:50,816 [INFO] New base will be: ovirt-node-ng-4.1.3-0.20170709.=
0<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">Jul 10 10:10:51 imgbased: 2017-=
07-10 10:10:51,539 [INFO] New LV is: <LV 'onn_cs-kvm-001/ovirt-node-ng-4=
.1.3-0.20170709.0' /><o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">Jul 10 10:10:53 imgbased: 2017-=
07-10 10:10:53,070 [INFO] Creating new filesystem on base<o:p></o:p></span>=
</p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">Jul 10 10:10:53 imgbased: 2017-=
07-10 10:10:53,412 [INFO] Writing tree to base<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">Jul 10 10:12:04 imgbased: 2017-=
07-10 10:12:04,344 [INFO] Adding a new layer after <Base ovirt-node-ng-4=
.1.3-0.20170709.0 [] /><o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">Jul 10 10:12:04 imgbased: 2017-=
07-10 10:12:04,344 [INFO] Adding a new layer after <Base ovirt-node-ng-4=
.1.3-0.20170709.0 [] /><o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">Jul 10 10:12:04 imgbased: 2017-=
07-10 10:12:04,345 [INFO] New layer will be: <Layer ovirt-node-ng-4.1.3-=
0.20170709.0+1 /><o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">Jul 10 10:12:52 imgbased: 2017-=
07-10 10:12:52,714 [ERROR] Failed to migrate etc#012Traceback (most recent =
call last):#012 File "/tmp/tmp.EMsKrrbmZs/usr/lib/python2.7/site=
-packages/imgbased/plugins/osupdater.py", line
119, in on_new_layer#012 check_nist_layout(imgbase, new_=
lv)#012 File "/tmp/tmp.EMsKrrbmZs/usr/lib/python2.7/site-package=
s/imgbased/plugins/osupdater.py", line 173, in check_nist_layout#012&n=
bsp; v.create(t, paths[t]["size"], paths[t]["att=
ach"])#012 File
"/tmp/tmp.EMsKrrbmZs/usr/lib/python2.7/site-packages/imgbased/volume.=
py", line 48, in create#012 "Path is already a =
volume: %s" % where#012AssertionError: Path is already a volume: /var/=
log<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">Jul 10 10:12:53 python: detecte=
d unhandled Python exception in '/tmp/tmp.EMsKrrbmZs/usr/lib/python2.7/site=
-packages/imgbased/__main__.py'<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">Jul 10 10:12:53 abrt-server: Ex=
ecutable '/tmp/tmp.EMsKrrbmZs/usr/lib/python2.7/site-packages/imgbased/__ma=
in__.py' doesn't belong to any package and ProcessUnpackaged is set to 'no'=
<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">Jul 10 10:15:10 imgbased: 2017-=
07-10 10:15:10,079 [INFO] Extracting image '/usr/share/ovirt-node-ng/image/=
/ovirt-node-ng-4.1.3-0.20170622.0.el7.squashfs.img'<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">Jul 10 10:15:11 imgbased: 2017-=
07-10 10:15:11,226 [INFO] Starting base creation<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">Jul 10 10:15:11 imgbased: 2017-=
07-10 10:15:11,226 [INFO] New base will be: ovirt-node-ng-4.1.3-0.20170622.=
0<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">Jul 10 10:15:11 python: detecte=
d unhandled Python exception in '/tmp/tmp.pqf2qhifaY/usr/lib/python2.7/site=
-packages/imgbased/__main__.py'<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">Jul 10 10:15:12 abrt-server: Ex=
ecutable '/tmp/tmp.pqf2qhifaY/usr/lib/python2.7/site-packages/imgbased/__ma=
in__.py' doesn't belong to any package and ProcessUnpackaged is set to 'no'=
<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US"><o:p> </o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US"><o:p> </o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">How can I fix it?<o:p></o:p></s=
pan></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US"><o:p> </o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">Thx Christian<o:p></o:p></span>=
</p>
</div>
</body>
</html>
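For anyone hitting the same assertion: imgbased is saying that /var/log already sits on its own volume. A quick way to confirm what it sees, as a rough sketch only (the VG name onn_cs-kvm-001 is taken from the log above, adjust to your node):

    # which device /var/log is currently mounted from, if any
    findmnt /var/log
    # list the node's LVs and thin pools; look for an existing var_log-style LV
    lvs -o lv_name,pool_lv,lv_size,lv_attr onn_cs-kvm-001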
7 years, 9 months
Re: [ovirt-users] Access VM Console on a Smart Phone with User Permission
by Tomas Jelinek
On Tue, Jun 27, 2017 at 12:08 PM, Jerome R <jerzkie102030(a)gmail.com> wrote:
> I tried this workaround, I tried to logon the user acct to moVirt with 1
> of the admin permission resources, it works I can access to the VM assigned
> however I'm able to see what Admin can see in the portal though not able to
> perform action. So far that's one of my concern the user should be able to
> see just his/her VM assigned.
>
yes, this is a consequence of using the admin API - you can see all the
entities and do actions only on the ones you have explicit rights to.
Unfortunately, until the https://github.com/oVirt/moVirt/issues/282 is
done, there is nothing better I can offer you.
We can try to give that item a priority, just need to get the current RC
out of the door (hopefully soon).
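For reference, the admin-versus-user split mentioned here is driven on the REST API side by the Filter header; a minimal sketch of the difference (hostname and credentials are placeholders, and the exact behaviour depends on the engine version):

    # admin-level view (default): the account needs an admin role somewhere
    curl -k -u 'someuser@internal:password' \
         https://engine.example.com/ovirt-engine/api/vms
    # user-level view: results are filtered to what the account has permissions on
    curl -k -u 'someuser@internal:password' -H 'Filter: true' \
         https://engine.example.com/ovirt-engine/api/vms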
>
> Thanks,
> Jerome
>
> On Tue, Jun 27, 2017 at 3:20 PM, Tomas Jelinek <tjelinek(a)redhat.com>
> wrote:
>
>>
>>
>> On Tue, Jun 27, 2017 at 10:13 AM, Jerome Roque <jerzkie102030(a)gmail.com>
>> wrote:
>>
>>> Hi Tomas,
>>>
>>> Thanks for your response. What do you mean by "removing the support for
>>> user permissions"? I'm using
>>>
>>
>> The oVirt permission model expects to be told explicitly by one header if
>> the logged in user has some admin permissions or not. In the past the API
behaved differently in these two cases so we needed to remove the option to
>> use it without admin permissions.
>>
>> Now the situation is better so we may be able to bring this support back,
>> but it will require some testing.
>>
>>
>>> the latest version of moVirt 1.7.1, and ovirt-engine 4.1.
>>> Is there anyone tried running user role in moVirt?
>>>
>>
>> you will get permission denied from the API if you try to log in with a
>> user which has no admin permission. If you give him any admin permission on
>> any resource, it might work as a workaround.
>>
>>
>>>
>>> Best Regards,
>>> Jerome
>>>
>>> On Tue, Jun 20, 2017 at 5:14 PM, Tomas Jelinek <tjelinek(a)redhat.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Fri, Jun 16, 2017 at 6:14 AM, Jerome Roque <jerzkie102030(a)gmail.com>
>>>> wrote:
>>>>
>>>>> Good day oVirt Users,
>>>>>
>>>>> I need some little help. I have a KVM and used oVirt for the
>>>>> management of VMs. What I want is that my client will log on to their
>>>>> account and access their virtual machine using their Smart phone. I tried
>>>>> to install mOvirt and yes can connect to the console of my machine, but it
>>>>> is only accessible for admin console.
>>>>>
>>>>
>>>> moVirt originally worked both with admin and user permissions. We had
>>>> to remove the support for user permissions since the oVirt API did not
>>>> provide all features moVirt needed for user permissions (search for
>>>> example). But the API moved significantly since then (the search works also
>>>> for users now for one) so we can move it back. I have opened an issue about
>>>> it: https://github.com/oVirt/moVirt/issues/282 - we can try to do it
>>>> in next version.
>>>>
>>>>
>>>>> Tried to use web console, it downloaded console.vv but can't open it.
>>>>> By any chance could make this thing possible?
>>>>>
>>>>
>>>> If you want to use a web console for users, I would suggest to try the
>>>> new ovirt-web-ui [1] - you have a link to it from oVirt landing page and
>>>> since 4.1 it is installed by default with oVirt.
>>>>
>>>> The .vv file can not be opened using aSPICE AFAIK - adding Iordan as
>>>> the author of aSPICE to comment on this.
>>>>
>>>> [1]: https://github.com/oVirt/ovirt-web-ui
>>>>
>>>>
>>>>>
>>>>> Thank you,
>>>>> Jerome
>>>>>
>>>>> _______________________________________________
>>>>> Users mailing list
>>>>> Users(a)ovirt.org
>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>
>>>>>
>>>>
>>>
>>
>
7 years, 9 months
op-version for reset-brick (Was: Re: Upgrading HC from 4.0 to 4.1)
by Sahina Bose
On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi <gianluca.cecchi(a)gmail.com>
wrote:
>
>
> On Wed, Jul 5, 2017 at 7:42 AM, Sahina Bose <sabose(a)redhat.com> wrote:
>
>>
>>
>>> ...
>>>
>>> then the commands I need to run would be:
>>>
>>> gluster volume reset-brick export ovirt01.localdomain.local:/gluster/brick3/export
>>> start
>>> gluster volume reset-brick export ovirt01.localdomain.local:/gluster/brick3/export
>>> gl01.localdomain.local:/gluster/brick3/export commit force
>>>
>>> Correct?
>>>
>>
>> Yes, correct. gl01.localdomain.local should resolve correctly on all 3
>> nodes.
>>
>
>
> It fails at first step:
>
> [root@ovirt01 ~]# gluster volume reset-brick export
> ovirt01.localdomain.local:/gluster/brick3/export start
> volume reset-brick: failed: Cannot execute command. The cluster is
> operating at version 30712. reset-brick command reset-brick start is
> unavailable in this version.
> [root@ovirt01 ~]#
>
> It seems somehow in relation with this upgrade not of the commercial
> solution Red Hat Gluster Storage
> https://access.redhat.com/documentation/en-US/Red_Hat_
> Storage/3.1/html/Installation_Guide/chap-Upgrading_Red_Hat_Storage.html
>
> So ti seems I have to run some command of type:
>
> gluster volume set all cluster.op-version XXXXX
>
> with XXXXX > 30712
>
> It seems that latest version of commercial Red Hat Gluster Storage is 3.1
> and its op-version is indeed 30712..
>
> So the question is which particular op-version I have to set and if the
> command can be set online without generating disruption....
>
It should have worked with the glusterfs 3.10 version from Centos repo.
Adding gluster-users for help on the op-version
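In the meantime, the relevant values can be read straight from gluster before raising anything; a sketch (cluster.max-op-version needs a reasonably recent glusterfs, around 3.10, if memory serves):

    gluster volume get all cluster.op-version        # what the cluster currently runs at
    gluster volume get all cluster.max-op-version    # highest op-version the installed bits support
    # only then, and only if every peer supports it:
    gluster volume set all cluster.op-version <value reported as max-op-version>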
>
> Thanks,
> Gianluca
>
7 years, 9 months
Re: [ovirt-users] vm freezes when using yum update
by M Mahboubian
Hi Yaniv,

Thanks for your reply. Apologies for my late reply, we had a long holiday here.

To answer you:

Yes, the guest VM becomes completely frozen and non-responsive as soon as its disk has any activity, for example when we shut down or do a yum update.

> Versions of all the components involved - guest OS, host OS (qemu-kvm version), how do you run the VM (vdsm log would be helpful here), exact storage specification (1Gb or 10Gb link? What is the NFS version? What is it hosted on? etc.)
> Y.

Some facts about our environment:

1) Previously, this environment was using Xen with raw disks and we changed it to oVirt (oVirt was able to read the VMs' disks without any conversion).
2) The issue we are facing is not happening for any of the existing VMs.
3) This issue only happens for new VMs.
4) Guest (kernel v3.10) and host (kernel v4.1) OSes are both CentOS 7 minimal installations.
5) NFS version 4, using oVirt 4.1.
6) The network speed is 1 GB.
7) The output of rpm -qa | grep qemu-kvm shows:
      qemu-kvm-common-ev-2.6.0-28.e17_3.6.1.x86_64
      qemu-kvm-tools-ev-2.6.0-28.e17_3.6.1.x86_64
      qemu-kvm-ev-2.6.0-28.e17_3.6.1.x86_64
8) The storage is from a SAN device which is connected to the NFS server using fibre channel.

So, for example, during shutdown it also froze and showed something like this in the Events section:

VM ILMU_WEB has been paused due to storage I/O problem.

More information:

VDSM log at the time of this issue (the issue happened at Jul 3, 2017 9:50:43 AM):
2017-07-03 09:50:37,113+0800 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:515)
2017-07-03 09:50:37,897+0800 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmIoTunePolicies succeeded in 0.02 seconds (__init__:515)
2017-07-03 09:50:42,510+0800 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:515)
2017-07-03 09:50:43,548+0800 INFO (jsonrpc/3) [dispatcher] Run and protect: repoStats(options=None) (logUtils:51)
2017-07-03 09:50:43,548+0800 INFO (jsonrpc/3) [dispatcher] Run and protect: repoStats, Return response: {u'e01186c1-7e44-4808-b551-4722f0f8e84b': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000144822', 'lastCheck': '8.9', 'valid': True}, u'721b5233-b0ba-4722-8a7d-ba2a372190a0': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000327909', 'lastCheck': '8.9', 'valid': True}, u'94775bd3-3244-45b4-8a06-37eff8856afa': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000256425', 'lastCheck': '8.9', 'valid': True}, u'731bb771-5b73-4b5c-ac46-56499df97721': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000238159', 'lastCheck': '8.9', 'valid': True}, u'f620781f-93d4-4410-8697-eb41045cacd6': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.00022004', 'lastCheck': '8.9', 'valid': True}, u'a1a7d0a4-e3b6-4bd5-862b-96e70dae3f29': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000298581', 'lastCheck': '8.8', 'valid': True}} (logUtils:54)
2017-07-03 09:50:43,563+0800 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:515)
2017-07-03 09:50:46,737+0800 INFO (periodic/3) [dispatcher] Run and protect: getVolumeSize(sdUUID=u'721b5233-b0ba-4722-8a7d-ba2a372190a0', spUUID=u'b04ca6e4-2660-4eaa-acdb-c1dae4e21f2d', imgUUID=u'3c26476e-1dae-44d7-9208-531b91ae5ae1', volUUID=u'a7e789fb-6646-4d0a-9b51-f5ab8242c8d5', options=None) (logUtils:51)
2017-07-03 09:50:46,738+0800 INFO (periodic/0) [dispatcher] Run and protect: getVolumeSize(sdUUID=u'f620781f-93d4-4410-8697-eb41045cacd6', spUUID=u'b04ca6e4-2660-4eaa-acdb-c1dae4e21f2d', imgUUID=u'2158fdae-54e1-413d-a844-73da5d1bb4ca', volUUID=u'6ee0b0eb-0bba-4e18-9c00-c1539b632e8a', options=None) (logUtils:51)
2017-07-03 09:50:46,740+0800 INFO (periodic/2) [dispatcher] Run and protect: getVolumeSize(sdUUID=u'f620781f-93d4-4410-8697-eb41045cacd6', spUUID=u'b04ca6e4-2660-4eaa-acdb-c1dae4e21f2d', imgUUID=u'a967016d-a56b-41e8-b7a2-57903cbd2825', volUUID=u'784514cb-2b33-431c-b193-045f23c596d8', options=None) (logUtils:51)
2017-07-03 09:50:46,741+0800 INFO (periodic/1) [dispatcher] Run and protect: getVolumeSize(sdUUID=u'721b5233-b0ba-4722-8a7d-ba2a372190a0', spUUID=u'b04ca6e4-2660-4eaa-acdb-c1dae4e21f2d', imgUUID=u'bb35c163-f068-4f08-a1c2-28c4cb1b76d9', volUUID=u'fce7e0a0-7411-4d8c-b72c-2f46c4b4db1e', options=None) (logUtils:51)
2017-07-03 09:50:46,743+0800 INFO (periodic/0) [dispatcher] Run and protect: getVolumeSize, Return response: {'truesize': '6361276416', 'apparentsize': '107374182400'} (logUtils:54)
............
2017-07-03 09:52:16,941+0800 INFO (libvirt/events) [virt.vm] (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') abnormal vm stop device scsi0-0-0-0 error eio (vm:4112)
2017-07-03 09:52:16,941+0800 INFO (libvirt/events) [virt.vm] (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') CPU stopped: onIOError (vm:4997)
2017-07-03 09:52:16,942+0800 INFO (libvirt/events) [virt.vm] (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') CPU stopped: onSuspend (vm:4997)
2017-07-03 09:52:16,942+0800 INFO (libvirt/events) [virt.vm] (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') abnormal vm stop device scsi0-0-0-0 error eio (vm:4112)
2017-07-03 09:52:16,943+0800 INFO (libvirt/events) [virt.vm] (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') CPU stopped: onIOError (vm:4997)
2017-07-03 09:52:16,943+0800 INFO (libvirt/events) [virt.vm] (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') abnormal vm stop device scsi0-0-0-0 error eio (vm:4112)
2017-07-03 09:52:16,944+0800 INFO (libvirt/events) [virt.vm] (vmId='c84f519e-398d-40a3-85b2-b7e53f3d7f67') CPU stopped: onIOError
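When a VM gets paused on an EIO like the one above, it usually pays to look at the NFS transport on the host around the same timestamps; a rough sketch of that kind of check (the mount path is the usual vdsm location, adjust as needed):

    # how the storage domain is mounted (vers, proto, rsize/wsize, timeo)
    mount | grep /rhev/data-center/mnt
    # client-side RPC retransmissions and timeouts
    nfsstat -rc
    # kernel messages around the time of the pause
    dmesg -T | grep -iE 'nfs|i/o error' | tail -n 50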
On Thursday, June 22, 2017, 2:48 PM, Yaniv Kaul <ykaul(a)redhat.com> wrote:

On Thu, Jun 22, 2017 at 5:07 AM, M Mahboubian <m_mahboubian(a)yahoo.com> wrote:

Dear all,
I appreciate it if anybody could possibly help with the issue I am facing.

In our environment we have 2 hosts, 1 NFS server and 1 oVirt engine server. The NFS server provides storage to the VMs in the hosts.

I can create new VMs and install an OS, but once I do something like yum update the VM freezes. I can reproduce this every single time I do yum update.

Is it paused, or completely frozen?

What information/log files should I provide you to troubleshoot this?

Versions of all the components involved - guest OS, host OS (qemu-kvm version), how do you run the VM (vdsm log would be helpful here), exact storage specification (1Gb or 10Gb link? What is the NFS version? What is it hosted on? etc.)
Y.

Regards

_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
7 years, 9 months
Re: [ovirt-users] [Libguestfs] virt-v2v import from KVM without storage-pool ?
by Richard W.M. Jones
On Wed, Jul 05, 2017 at 11:14:09AM +0200, Matthias Leopold wrote:
> hi,
>
> i'm trying to import a VM in oVirt from a KVM host that doesn't use
> storage pools. this fails with the following message in
> /var/log/vdsm/vdsm.log:
>
> 2017-07-05 09:34:20,513+0200 ERROR (jsonrpc/5) [root] Error getting
> disk size (v2v:1089)
> Traceback (most recent call last):
> File "/usr/lib/python2.7/site-packages/vdsm/v2v.py", line 1078, in
> _get_disk_info
> vol = conn.storageVolLookupByPath(disk['alias'])
> File "/usr/lib64/python2.7/site-packages/libvirt.py", line 4770,
> in storageVolLookupByPath
> if ret is None:raise libvirtError('virStorageVolLookupByPath()
> failed', conn=self)
> libvirtError: Storage volume not found: no storage vol with matching path
>
> the disks in the origin VM are defined as
>
> <disk type='file' device='disk'>
> <driver name='qemu' type='raw' cache='writethrough'/>
> <source file='/dev/kvm108/kvm108_img'/>
>
> <disk type='file' device='cdrom'>
> <driver name='qemu' type='raw'/>
> <source file='/some/path/CentOS-7-x86_64-Minimal-1611.iso'/>
>
> is this a virt-v2v or oVirt problem?
Well the stack trace is in the oVirt code, so I guess it's an oVirt
problem. Adding ovirt-users mailing list.
Rich.
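One possible workaround on the source KVM host (untested here, and assuming the disk really is an LV in a volume group named kvm108) is to wrap the existing storage in a libvirt pool so that virStorageVolLookupByPath can resolve it:

    # define and start a 'logical' pool on top of the existing VG
    virsh pool-define-as kvm108 logical --source-name kvm108 --target /dev/kvm108
    virsh pool-start kvm108
    virsh pool-refresh kvm108
    virsh vol-list kvm108     # kvm108_img should now show up as a volume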
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-df lists disk usage of guests without needing to install any
software inside the virtual machine. Supports Linux and Windows.
http://people.redhat.com/~rjones/virt-df/
7 years, 9 months
SQL : last time halted?
by Nicolas Ecarnot
Hello,
I'm trying to find a way to clean up the VMs list of my DCs.
I think some of my users have created VMs they're not using anymore, but
it's difficult to sort them out.
In some cases, I can shutdown some of them and wait.
Is the date of the last VM extinction stored somewhere in the DB tables?
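One place that seems to keep this kind of information is the engine's audit_log table, so something along these lines might do; a sketch only (table and column names quoted from memory, and the message filter is a guess):

    su - postgres -c "psql engine -c \"
        SELECT vm_name, MAX(log_time) AS last_seen_down
        FROM audit_log
        WHERE message ILIKE '%is down%'
        GROUP BY vm_name
        ORDER BY last_seen_down;\""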
Thank you.
--
Nicolas ECARNOT
7 years, 9 months
Very poor GlusterFS performance
by Chris Boot
Hi folks,
I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10
configuration. My VMs run off a replica 3 arbiter 1 volume comprised of
6 bricks, which themselves live on two SSDs in each of the servers (one
brick per SSD). The bricks are XFS on LVM thin volumes straight onto the
SSDs. Connectivity is 10G Ethernet.
Performance within the VMs is pretty terrible. I experience very low
throughput and random IO is really bad: it feels like a latency issue.
On my oVirt nodes the SSDs are not generally very busy. The 10G network
seems to run without errors (iperf3 gives bandwidth measurements of >=
9.20 Gbits/sec between the three servers).
To put this into perspective: I was getting better behaviour from NFS4
on a gigabit connection than I am with GlusterFS on 10G: that doesn't
feel right at all.
My volume configuration looks like this:
Volume Name: vmssd
Type: Distributed-Replicate
Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: ovirt3:/gluster/ssd0_vmssd/brick
Brick2: ovirt1:/gluster/ssd0_vmssd/brick
Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
Brick4: ovirt3:/gluster/ssd1_vmssd/brick
Brick5: ovirt1:/gluster/ssd1_vmssd/brick
Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
Options Reconfigured:
nfs.disable: on
transport.address-family: inet6
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
features.shard-block-size: 128MB
performance.strict-o-direct: on
network.ping-timeout: 30
cluster.granular-entry-heal: enable
I would really appreciate some guidance on this to try to improve things
because at this rate I will need to reconsider using GlusterFS altogether.
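To turn "feels like a latency issue" into numbers, a short fio run inside a guest compared with the same run directly against one of the bricks usually makes the gap obvious; a sketch only, paths and sizes are placeholders:

    # run once inside a VM, then once on a host against /gluster/ssd0_vmssd
    fio --name=randwrite-lat --filename=fio.test --size=1G \
        --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 --direct=1 \
        --ioengine=libaio --runtime=60 --time_based
    # compare the clat percentiles, not just the bandwidth line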
Cheers,
Chris
--
Chris Boot
bootc(a)bootc.net
7 years, 9 months
ovirt can't find user
by Fabrice Bacchella
I tried to add a user in ovirt, but it's not identified:
2017-06-28 16:48:48,505+02 ERROR [org.ovirt.engine.core.sso.utils.NegotiateAuthUtils] (default task-22) [] External Authentication Failed: Cannot resolve principal 'rexecutor@internal'
/usr/bin/ovirt-aaa-jdbc-tool user show rexecutor
-- User rexecutor(b1727291-5ad4-4575-b8ec-53bdc9ce4aef) --
Namespace: *
Name: rexecutor
ID: b1727291-5ad4-4575-b8ec-53bdc9ce4aef
Display Name:
Email:
First Name:
Last Name:
Department:
Title:
Description:
Account Disabled: false
Account Locked: false
Account Unlocked At: 2017-06-16 13:49:31Z
Account Valid From: 2017-06-15 16:41:14Z
Account Valid To: 2217-06-15 16:41:14Z
Account Without Password: true
Last successful Login At: 1970-01-01 00:00:00Z
Last unsuccessful Login At: 1970-01-01 00:00:00Z
Password Valid To: 2025-08-15 10:30:00Z
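As an aside, when a principal shows up fine in ovirt-aaa-jdbc-tool but SSO still cannot resolve it, it is worth checking which authz extension the 'internal' authn profile actually hands the principal to; a sketch (property names quoted from memory, file names vary per setup):

    grep -H -e 'ovirt.engine.aaa.authn.profile.name' \
            -e 'ovirt.engine.aaa.authn.authz.plugin' \
            /etc/ovirt-engine/extensions.d/*authn*.properties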
It's listed as a known user:
<user href="/ovirt-engine/api/users/49a12b6e-de03-4095-b6ed-2c1883f5542e" id="49a12b6e-de03-4095-b6ed-2c1883f5542e">
  <department></department>
  <domain_entry_id>62313732373239312D356164342D343537352D623865632D353362646339636534616566</domain_entry_id>
  <email></email>
  <last_name></last_name>
  <name></name>
  <namespace>*</namespace>
  <principal>rexecutor</principal>
  <user_name>rexecutor@internal-authz</user_name>
  <domain href="/ovirt-engine/api/domains/696E7465726E616C2D617574687A" id="696E7465726E616C2D617574687A">
    <name>internal-authz</name>
  </domain>
  <permissions href="/ovirt-engine/api/users/49a12b6e-de03-4095-b6ed-2c1883f5542e/permissions"/>
  <roles href="/ovirt-engine/api/users/49a12b6e-de03-4095-b6ed-2c1883f5542e/roles"/>
  <ssh_public_keys href="/ovirt-engine/api/users/49a12b6e-de03-4095-b6ed-2c1883f5542e/sshpublickeys"/>
  <tags href="/ovirt-engine/api/users/49a12b6e-de03-4095-b6ed-2c1883f5542e/tags"/>
</user>
My admin domain authentication looks OK:
config.datasource.jdbcurl=jdbc:postgresql://pgdb:5432/ovirt_engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
config.datasource.dbuser=ovirt
config.datasource.dbpassword=XXX
config.datasource.jdbcdriver=org.postgresql.Driver
config.datasource.schemaname=aaa_jdbc
I tried to increase the org.ovirt.engine.core.sso.utils debug log level by modifying /usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.xml.in

diff ovirt-engine.xml.in*
201,204d200
< <logger category="org.ovirt.engine.core.sso.utils">
< <level name="ALL"/>
< </logger>
<
I just got in the log:
2017-06-28 17:17:09,404+02 DEBUG [org.ovirt.engine.core.sso.utils.NonInteractiveAuth] (default task-7) [] Performing Negotiate Auth
2017-06-28 17:17:09,404+02 DEBUG [org.ovirt.engine.core.sso.utils.NegotiateAuthUtils] (default task-7) [] Performing external authentication
2017-06-28 17:17:09,410+02 ERROR [org.ovirt.engine.core.sso.utils.NegotiateAuthUtils] (default task-7) [] External Authentication Failed: Cannot resolve principal 'rexecutor@internal'
2017-06-28 17:17:09,410+02 DEBUG [org.ovirt.engine.core.sso.utils.NegotiateAuthUtils] (default task-7) [] External Authentication Failed: Class: class org.ovirt.engine.core.extensions.mgr.ExtensionInvokeCommandFailedException
Input:
{Extkey[name=EXTENSION_INVOKE_COMMAND;type=class org.ovirt.engine.api.extensions.ExtUUID;uuid=EXTENSION_INVOKE_COMMAND[485778ab-bede-4f1a-b823-77b262a2f28d];]=AAA_AUTHZ_FETCH_PRINCIPAL_RECORD[5a5bf9bb-9336-4376-a823-26efe1ba26df],
 Extkey[name=AAA_AUTHZ_QUERY_FLAGS;type=class java.lang.Integer;uuid=AAA_AUTHZ_QUERY_FLAGS[97d226e9-8d87-49a0-9a7f-af689320907b];]=3,
 Extkey[name=EXTENSION_INVOKE_CONTEXT;type=class org.ovirt.engine.api.extensions.ExtMap;uuid=EXTENSION_INVOKE_CONTEXT[886d2ebb-312a-49ae-9cc3-e1f849834b7d];]={
   Extkey[name=AAA_AUTHZ_AVAILABLE_NAMESPACES;type=interface java.util.Collection;uuid=AAA_AUTHZ_AVAILABLE_NAMESPACES[6dffa34c-955f-486a-bd35-0a272b45a711];]=[DC=XXX],
   Extkey[name=EXTENSION_LICENSE;type=class java.lang.String;uuid=EXTENSION_LICENSE[8a61ad65-054c-4e31-9c6d-1ca4d60a4c18];]=ASL 2.0,
   Extkey[name=EXTENSION_GLOBAL_CONTEXT;type=class org.ovirt.engine.api.extensions.ExtMap;uuid=EXTENSION_GLOBAL_CONTEXT[9799e72f-7af6-4cf1-bf08-297bc8903676];]=*skip*,
   Extkey[name=EXTENSION_NAME;type=class java.lang.String;uuid=EXTENSION_NAME[651381d3-f54f-4547-bf28-b0b01a103184];]=ovirt-engine-extension-aaa-ldap.authz,
   Extkey[name=EXTENSION_MANAGER_TRACE_LOG;type=interface org.slf4j.Logger;uuid=EXTENSION_MANAGER_TRACE_LOG[863db666-3ea7-4751-9695-918a3197ad83];]=org.slf4j.impl.Slf4jLogger(org.ovirt.engine.core.extensions.mgr.ExtensionsManager.trace.ovirt-engine-extension-aaa-ldap.authz.XXX-authz),
   Extkey[name=EXTENSION_CONFIGURATION_SENSITIVE_KEYS;type=interface java.util.Collection;uuid=EXTENSION_CONFIGURATION_SENSITIVE_KEYS[a456efa1-73ff-4204-9f9b-ebff01e35263];]=[],
   Extkey[name=EXTENSION_VERSION;type=class java.lang.String;uuid=EXTENSION_VERSION[fe35f6a8-8239-4bdb-ab1a-af9f779ce68c];]=1.3.1,
   Extkey[name=EXTENSION_PROVIDES;type=interface java.util.Collection;uuid=EXTENSION_PROVIDES[8cf373a6-65b5-4594-b828-0e275087de91];]=[org.ovirt.engine.api.extensions.aaa.Authz],
   Extkey[name=EXTENSION_AUTHOR;type=class java.lang.String;uuid=EXTENSION_AUTHOR[ef242f7a-2dad-4bc5-9aad-e07018b7fbcc];]=The oVirt Project,
   Extkey[name=EXTENSION_LOCALE;type=class java.lang.String;uuid=EXTENSION_LOCALE[0780b112-0ce0-404a-b85e-8765d778bb29];]=en_US,
   Extkey[name=EXTENSION_CONFIGURATION_FILE;type=class java.lang.String;uuid=EXTENSION_CONFIGURATION_FILE[4fb0ffd3-983c-4f3f-98ff-9660bd67af6a];]=/etc/ovirt-engine/extensions.d/XXXX-authz.properties,
   Extkey[name=EXTENSION_HOME_URL;type=class java.lang.String;uuid=EXTENSION_HOME_URL[4ad7a2f4-f969-42d4-b399-72d192e18304];]=http://www.ovirt.org,
   Extkey[name=EXTENSION_CONFIGURATION;type=class java.util.Properties;uuid=EXTENSION_CONFIGURATION[2d48ab72-f0a1-4312-b4ae-5068a226b0fc];]=***,
   Extkey[name=EXTENSION_INTERFACE_VERSION_MAX;type=class java.lang.Integer;uuid=EXTENSION_INTERFACE_VERSION_MAX[f4cff49f-2717-4901-8ee9-df362446e3e7];]=0,
   Extkey[name=AAA_AUTHZ_QUERY_MAX_FILTER_SIZE;type=class java.lang.Integer;uuid=AAA_AUTHZ_QUERY_MAX_FILTER_SIZE[2eb1f541-0f65-44a1-a6e3-014e247595f5];]=50,
   Extkey[name=EXTENSION_INTERFACE_VERSION_MIN;type=class java.lang.Integer;uuid=EXTENSION_INTERFACE_VERSION_MIN[2b84fc91-305b-497b-a1d7-d961b9d2ce0b];]=0,
   Extkey[name=EXTENSION_INSTANCE_NAME;type=class java.lang.String;uuid=EXTENSION_INSTANCE_NAME[65c67ff6-aeca-4bd5-a245-8674327f011b];]=XXXX-authz,
   Extkey[name=EXTENSION_BUILD_INTERFACE_VERSION;type=class java.lang.Integer;uuid=EXTENSION_BUILD_INTERFACE_VERSION[cb479e5a-4b23-46f8-aed3-56a4747a8ab7];]=0,
   Extkey[name=EXTENSION_NOTES;type=class java.lang.String;uuid=EXTENSION_NOTES[2da5ad7e-185a-4584-aaff-97f66978e4ea];]=Display name: ovirt-engine-extension-aaa-ldap-1.3.1-1.el7.centos},
 Extkey[name=AAA_AUTHN_AUTH_RECORD;type=class org.ovirt.engine.api.extensions.ExtMap;uuid=AAA_AUTHN_AUTH_RECORD[e9462168-b53b-44ac-9af5-f25e1697173e];]={Extkey[name=AAA_AUTHN_AUTH_RECORD_PRINCIPAL;type=class java.lang.String;uuid=AAA_AUTHN_AUTH_RECORD_PRINCIPAL[c3498f07-11fe-464c-958c-8bd7490b119a];]=rexecutor@internal}}
Output:
{Extkey[name=EXTENSION_INVOKE_MESSAGE;type=class java.lang.String;uuid=EXTENSION_INVOKE_MESSAGE[b7b053de-dc73-4bf7-9d26-b8bdb72f5893];]=Cannot resolve principal 'rexecutor@internal',
 Extkey[name=EXTENSION_INVOKE_RESULT;type=class java.lang.Integer;uuid=EXTENSION_INVOKE_RESULT[0909d91d-8bde-40fb-b6c0-099c772ddd4e];]=2}
        at org.ovirt.engine.core.extensions.mgr.ExtensionProxy.invoke(ExtensionProxy.java:95) [extensions-manager.jar:]
        at org.ovirt.engine.core.extensions.mgr.ExtensionProxy.invoke(ExtensionProxy.java:109) [extensions-manager.jar:]
        at org.ovirt.engine.core.sso.utils.NegotiateAuthUtils.doAuth(NegotiateAuthUtils.java:122) [enginesso.jar:]
        at org.ovirt.engine.core.sso.utils.NegotiateAuthUtils.doAuth(NegotiateAuthUtils.java:68) [enginesso.jar:]
        at org.ovirt.engine.core.sso.utils.NonInteractiveAuth$2.doAuth(NonInteractiveAuth.java:51) [enginesso.jar:]
        at org.ovirt.engine.core.sso.servlets.OAuthTokenServlet.issueTokenUsingHttpHeaders(OAuthTokenServlet.java:183) [enginesso.jar:]
        at org.ovirt.engine.core.sso.servlets.OAuthTokenServlet.service(OAuthTokenServlet.java:72) [enginesso.jar:]
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) [jboss-servlet-api_3.1_spec-1.0.0.Final.jar:1.0.0.Final]
        at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:85) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:129) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at org.ovirt.engine.core.branding.BrandingFilter.doFilter(BrandingFilter.java:73) [branding.jar:]
        at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at org.ovirt.engine.core.utils.servlet.LocaleFilter.doFilter(LocaleFilter.java:66) [utils.jar:]
        at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at org.ovirt.engine.core.utils.servlet.HeaderFilter.doFilter(HeaderFilter.java:94) [utils.jar:]
        at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) [undertow-core-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) [undertow-core-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.security.handlers.AuthenticationConstraintHandler.handleRequest(AuthenticationConstraintHandler.java:53) [undertow-core-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46) [undertow-core-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.security.ServletSecurityConstraintHandler.handleRequest(ServletSecurityConstraintHandler.java:59) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60) [undertow-core-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50) [undertow-core-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43) [undertow-core-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) [undertow-core-1.4.0.Final.jar:1.4.0.Final]
        at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) [undertow-core-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) [undertow-core-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:292) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.ServletInitialHandler.access$100(ServletInitialHandler.java:81) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:138) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:135) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:48) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.api.LegacyThreadSetupActionWrapper$1.call(LegacyThreadSetupActionWrapper.java:44) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.api.LegacyThreadSetupActionWrapper$1.call(LegacyThreadSetupActionWrapper.java:44) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.api.LegacyThreadSetupActionWrapper$1.call(LegacyThreadSetupActionWrapper.java:44) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.api.LegacyThreadSetupActionWrapper$1.call(LegacyThreadSetupActionWrapper.java:44) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:272) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:104) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.server.Connectors.executeRootHandler(Connectors.java:202) [undertow-core-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:805) [undertow-core-1.4.0.Final.jar:1.4.0.Final]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_121]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_121]
        at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_121]
2017-06-28 17:17:09,414+02 DEBUG [org.ovirt.engine.core.sso.utils.NegotiateAuthUtils] (default task-7) [] External Authentication result: false
2017-06-28 17:17:09,414+02 ERROR [org.ovirt.engine.core.sso.utils.SsoUtils] (default task-7) [] OAuthException access_denied: Cannot authenticate user Authentication failed..
2017-06-28 17:17:09,414+02 DEBUG [org.ovirt.engine.core.sso.utils.SsoUtils] (default task-7) [] Exception: org.ovirt.engine.core.sso.utils.OAuthException: Cannot authenticate user Authentication failed..
        at org.ovirt.engine.core.sso.utils.SsoUtils.sendJsonDataWithMessage(SsoUtils.java:569) [enginesso.jar:]
        at org.ovirt.engine.core.sso.servlets.OAuthTokenServlet.service(OAuthTokenServlet.java:81) [enginesso.jar:]
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) [jboss-servlet-api_3.1_spec-1.0.0.Final.jar:1.0.0.Final]
        at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:85) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:129) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at org.ovirt.engine.core.branding.BrandingFilter.doFilter(BrandingFilter.java:73) [branding.jar:]
        at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at org.ovirt.engine.core.utils.servlet.LocaleFilter.doFilter(LocaleFilter.java:66) [utils.jar:]
        at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at org.ovirt.engine.core.utils.servlet.HeaderFilter.doFilter(HeaderFilter.java:94) [utils.jar:]
        at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) [undertow-core-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) [undertow-core-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.security.handlers.AuthenticationConstraintHandler.handleRequest(AuthenticationConstraintHandler.java:53) [undertow-core-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46) [undertow-core-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.security.ServletSecurityConstraintHandler.handleRequest(ServletSecurityConstraintHandler.java:59) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60) [undertow-core-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50) [undertow-core-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43) [undertow-core-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) [undertow-core-1.4.0.Final.jar:1.4.0.Final]
        at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) [undertow-core-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) [undertow-core-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:292) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.ServletInitialHandler.access$100(ServletInitialHandler.java:81) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:138) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:135) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:48) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.api.LegacyThreadSetupActionWrapper$1.call(LegacyThreadSetupActionWrapper.java:44) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.api.LegacyThreadSetupActionWrapper$1.call(LegacyThreadSetupActionWrapper.java:44) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.api.LegacyThreadSetupActionWrapper$1.call(LegacyThreadSetupActionWrapper.java:44) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.api.LegacyThreadSetupActionWrapper$1.call(LegacyThreadSetupActionWrapper.java:44) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:272) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:104) [undertow-servlet-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.server.Connectors.executeRootHandler(Connectors.java:202) [undertow-core-1.4.0.Final.jar:1.4.0.Final]
        at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:805) [undertow-core-1.4.0.Final.jar:1.4.0.Final]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_121]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_121]
        at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_121]
Caused by: org.ovirt.engine.core.sso.utils.AuthenticationException: Cannot authenticate user Authentication failed..
        at org.ovirt.engine.core.sso.servlets.OAuthTokenServlet.issueTokenUsingHttpHeaders(OAuthTokenServlet.java:214) [enginesso.jar:]
at =
org.ovirt.engine.core.sso.servlets.OAuthTokenServlet.service(OAuthTokenSer=
vlet.java:72) [enginesso.jar:]
... 50 more
2017-06-28 17:17:09,419+02 TRACE =
[org.ovirt.engine.core.sso.utils.SsoUtils] (default task-7) [] Sending =
json data {"error_code":"access_denied","error":"Cannot authenticate =
user Authentication failed.."}
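For completeness, I guess the same access_denied answer can be reproduced outside the browser by asking the SSO token endpoint directly, which makes it easier to test a profile/credential pair. This is only an untested sketch; the engine FQDN and password are placeholders:

    curl -k -H "Accept: application/json" 'https://ENGINE_FQDN/ovirt-engine/sso/oauth/token?grant_type=password&scope=ovirt-app-api&username=rexecutor@internal&password=PASSWORD'

A working login should return an access_token, a failing one the same {"error_code":"access_denied",...} JSON as in the TRACE line above. And since the account was apparently created without a password ("Account Without Password: true"), I assume one has to be set before the internal login can work at all, e.g.:

    /usr/bin/ovirt-aaa-jdbc-tool user password-reset rexecutor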
style=3D"margin: 0px; line-height: normal;" class=3D""><span =
style=3D"font-variant-ligatures: no-common-ligatures" class=3D""><span =
class=3D"Apple-tab-span" style=3D"white-space:pre"> </span>at =
io.undertow.servlet.api.LegacyThreadSetupActionWrapper$1.call(LegacyThread=
SetupActionWrapper.java:44) =
[undertow-servlet-1.4.0.Final.jar:1.4.0.Final]</span></div><div =
style=3D"margin: 0px; line-height: normal;" class=3D""><span =
style=3D"font-variant-ligatures: no-common-ligatures" class=3D""><span =
class=3D"Apple-tab-span" style=3D"white-space:pre"> </span>at =
io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(Servlet=
InitialHandler.java:272) =
[undertow-servlet-1.4.0.Final.jar:1.4.0.Final]</span></div><div =
style=3D"margin: 0px; line-height: normal;" class=3D""><span =
style=3D"font-variant-ligatures: no-common-ligatures" class=3D""><span =
class=3D"Apple-tab-span" style=3D"white-space:pre"> </span>at =
io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletIniti=
alHandler.java:81) =
[undertow-servlet-1.4.0.Final.jar:1.4.0.Final]</span></div><div =
style=3D"margin: 0px; line-height: normal;" class=3D""><span =
style=3D"font-variant-ligatures: no-common-ligatures" class=3D""><span =
class=3D"Apple-tab-span" style=3D"white-space:pre"> </span>at =
io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(Servlet=
InitialHandler.java:104) =
[undertow-servlet-1.4.0.Final.jar:1.4.0.Final]</span></div><div =
style=3D"margin: 0px; line-height: normal;" class=3D""><span =
style=3D"font-variant-ligatures: no-common-ligatures" class=3D""><span =
class=3D"Apple-tab-span" style=3D"white-space:pre"> </span>at =
io.undertow.server.Connectors.executeRootHandler(Connectors.java:202) =
[undertow-core-1.4.0.Final.jar:1.4.0.Final]</span></div><div =
style=3D"margin: 0px; line-height: normal;" class=3D""><span =
style=3D"font-variant-ligatures: no-common-ligatures" class=3D""><span =
class=3D"Apple-tab-span" style=3D"white-space:pre"> </span>at =
io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:805) =
[undertow-core-1.4.0.Final.jar:1.4.0.Final]</span></div><div =
style=3D"margin: 0px; line-height: normal;" class=3D""><span =
style=3D"font-variant-ligatures: no-common-ligatures" class=3D""><span =
class=3D"Apple-tab-span" style=3D"white-space:pre"> </span>at =
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:=
1142) [rt.jar:1.8.0_121]</span></div><div style=3D"margin: 0px; =
line-height: normal;" class=3D""><span style=3D"font-variant-ligatures: =
no-common-ligatures" class=3D""><span class=3D"Apple-tab-span" =
style=3D"white-space:pre"> </span>at =
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java=
:617) [rt.jar:1.8.0_121]</span></div><div style=3D"margin: 0px; =
line-height: normal;" class=3D""><span style=3D"font-variant-ligatures: =
no-common-ligatures" class=3D""><span class=3D"Apple-tab-span" =
style=3D"white-space:pre"> </span>at =
java.lang.Thread.run(Thread.java:745) =
[rt.jar:1.8.0_121]</span></div><div style=3D"margin: 0px; line-height: =
normal;" class=3D""><span style=3D"font-variant-ligatures: =
no-common-ligatures" class=3D"">Caused by: =
org.ovirt.engine.core.sso.utils.AuthenticationException: Cannot =
authenticate user Authentication failed..</span></div><div =
style=3D"margin: 0px; line-height: normal;" class=3D""><span =
style=3D"font-variant-ligatures: no-common-ligatures" class=3D""><span =
class=3D"Apple-tab-span" style=3D"white-space:pre"> </span>at =
org.ovirt.engine.core.sso.servlets.OAuthTokenServlet.issueTokenUsingHttpHe=
aders(OAuthTokenServlet.java:214) [enginesso.jar:]</span></div><div =
style=3D"margin: 0px; line-height: normal;" class=3D""><span =
style=3D"font-variant-ligatures: no-common-ligatures" class=3D""><span =
class=3D"Apple-tab-span" style=3D"white-space:pre"> </span>at =
org.ovirt.engine.core.sso.servlets.OAuthTokenServlet.service(OAuthTokenSer=
vlet.java:72) [enginesso.jar:]</span></div><div style=3D"margin: 0px; =
line-height: normal;" class=3D""><span style=3D"font-variant-ligatures: =
no-common-ligatures" class=3D""><span class=3D"Apple-tab-span" =
style=3D"white-space:pre"> </span>... 50 more</span></div><div =
style=3D"margin: 0px; line-height: normal; min-height: 13px;" =
class=3D""><span style=3D"font-variant-ligatures: no-common-ligatures" =
class=3D""></span><br class=3D""></div><div style=3D"margin: 0px; =
line-height: normal;" class=3D""><span style=3D"font-variant-ligatures: =
no-common-ligatures" class=3D"">2017-06-28 17:17:09,419+02 TRACE =
[org.ovirt.engine.core.sso.utils.SsoUtils] (default task-7) [] Sending =
json data {"error_code":"access_denied","error":"Cannot authenticate =
user Authentication failed.."}</span></div></span></div></div><div =
class=3D""><span style=3D"font-variant-ligatures: no-common-ligatures" =
class=3D""><br class=3D""></span></div><div class=3D""><span =
style=3D"font-variant-ligatures: no-common-ligatures" class=3D""><br =
class=3D""></span></div></span></div></div></body></html>=
7 years, 9 months
ovirt-ng upgrade
by Nathanaël Blanchet
Hello,
I've been used to installing vdsm on regular CentOS to provision my hosts.
I recently installed ovirt-ng on new hosts, but the CentOS "base" and "updates" repos are disabled by default, so I updated CentOS by enabling these repos.
Several questions:
* Is this a good practice, or should I wait for a new ovirt-ng image and update the whole system with
 # ovirt-node-upgrade --iso=/path/to/ovirt-node-image.iso --reboot=1
* Now that ovirt 4.1.3 is out, I have to uncomment "includepkgs=ovirt-node-ng-image-update ovirt-node-ng-image ovirt-engine-appliance" in the ovirt repo to get the latest vdsm, but the dependencies are broken:
[root@ulysses yum.repos.d]# yum update --enablerepo=base --enablerepo=updates -y
...
--> Finished Dependency Resolution
Error: Package: ovirt-hosted-engine-setup-2.1.3.3-1.el7.centos.noarch (ovirt-4.1)
           Requires: rubygem-fluent-plugin-viaq_data_model
Is this a good alternative way to do it, or may I have issues if I do such a thing?
--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastructures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
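For reference, on oVirt Node 4.1 the image-based update path is roughly the sketch below; the package name comes from the includepkgs line quoted above, while the repo file path is an assumption, so check your own repo file first:
# grep includepkgs /etc/yum.repos.d/ovirt-4.1.repo   # confirm which packages the node repo is allowed to pull
# yum install ovirt-node-ng-image-update             # layered node image update instead of a plain yum update of the whole OS
# reboot                                             # the new image layer is selected at the next boot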
7 years, 9 months
oVirt Guest Agent on FreeBSD
by Vinícius Ferrão
Hello,
Has anyone had success compiling and running the oVirt Guest Agent on FreeBSD?
I was able to compile the package on 11.0-RELEASE but it was in an unusable state. Since FreeBSD appears to be supported on oVirt, how can I install the guest agent? Is there any binary package available?
Thanks,
V.
7 years, 10 months
ovirt 4.1 : Can't deploy second node on self hosted engine cluster and host with hosted engine deployed (and also hosted engine VM) is not added to interface
by yayo (j)
Hi at all,
I have correctly deployed an hosted engine using node01 via:
hosted-engine --deploy
Using FC shared storage.
Everything seems to work, but when I log in to the oVirt web interface I can't find the hosted engine under the VM tab (nor the node01 server).
So I have tried to add node02 (no problem) and added the "Data Domain" storage (another FC shared storage).
Now (I think) I need to deploy the hosted engine here too, but I can't.
The only way is to put node02 into maintenance mode, but after that the newly added "Data Domain" goes down and, when I try to deploy the hosted engine, the interface says:
Error while executing action:
node02:
Cannot edit Host. You are using an unmanaged hosted engine VM. Please add
the first storage domain in order to start the hosted engine import process.
What is this "first storage domain"?
I have not added node01 yet, because last time the hosted engine crashed and the only way out was to restart the installation...
Note: Hosted engine seems to be correcty deployed because all work good,
but deploy script ended with error:
[ INFO ] Engine-setup successfully completed
[ INFO ] Engine is still unreachable
[ INFO ] Engine is still not reachable, waiting...
[more of these messages]
[ ERROR ] Engine is still not reachable
[ ERROR ] Failed to execute stage 'Closing up': Engine is still not
reachable
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20170123113113.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not reliable,
please check the issue,fix and redeploy
Log file is located at
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170123105043-kvzx84.log
Is related to this bug? https://bugzilla.redhat.com/show_bug.cgi?id=1415822
7 years, 10 months
Import OVA to KVM
by Ramachandra Reddy Ankireddypalle
Hi,
I am trying to import a Windows OVA file. The oVirt engine is up and running. The target storage domain is going to be a glusterfs storage domain. Is there any documentation that can be looked at to achieve this?
Thanks and Regards,
Ram
7 years, 10 months
ovirt engine "Configure Local Storage" , why a cluster can have only one host ?
by 转圈圈
ovirt engine "Configure Local Storage", why a cluster can have only one host?
7 years, 10 months
user permissions
by Fabrice Bacchella
I'm trying to give a user the permissions to stop/start a specific server.
This user is given the generic UserRole for the System.
I tried to give him the roles :
UserVmManager
UserVmRunTimeManager
UserInstanceManager
InstanceCreator
UserRole
for that specific VM, I always get: query execution failed due to insufficient permissions.
As soon as I give him the SuperUser role, he can stop/start it.
What role should I give him for that VM ? I don't want to give the privilege to destroy the vm, or add disks. But he should be able to change the os settings too.
7 years, 10 months
How to create a new Gluster volume
by Gianluca Cecchi
Hello,
I'm trying to create a new volume. I'm in 4.1.2
I'm following these indications:
http://www.ovirt.org/documentation/admin-guide/chap-Working_with_Gluster_...
When I click the "add brick" button, I don't see anything in the "Brick Directory" dropdown field and I cannot manually input a directory name.
On the 3 nodes I already have formatted and mounted fs
[root@ovirt01 ~]# df -h /gluster/brick3/
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/gluster-export 50G 33M 50G 1% /gluster/brick3
[root@ovirt01 ~]#
The guide tells
7. Click the Add Bricks button to select bricks to add to the volume.
Bricks must be created externally on the Gluster Storage nodes.
What does it mean with "created externally"?
The next step from os point would be volume creation but it is indeed what
I would like to do from the gui...
Thanks,
Gianluca
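For what it's worth, "created externally" seems to mean that the brick directories (and, if you prefer, the volume itself) are prepared on the nodes outside the web UI. A minimal CLI sketch, assuming hostnames ovirt01/02/03 and a volume named "data" (only the /gluster/brick3 mount point comes from the post; add "arbiter 1" after "replica 3" if the third node is an arbiter):
# mkdir /gluster/brick3/data   # on each of the 3 nodes: the brick is a directory inside the mounted filesystem
# gluster volume create data replica 3 \
    ovirt01:/gluster/brick3/data ovirt02:/gluster/brick3/data ovirt03:/gluster/brick3/data
# gluster volume start data    # once started, the volume should become usable from the web UI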
7 years, 10 months
Cloning VM on NFS Leads to Locked Disks
by Charles Tassell
Hi Everyone,
I'm having some issues with my oVirt 4.1 (fully updated to latest
release as of yesterday) cluster. When I clone a VM the disks of both
the original and the clone stay in the locked state, and the only way I
can resolve it is to go into the database on the engine and run "update
images set imagestatus=1 where imagestatus=2;"
I'm using NFS4 as a datastore and the disks seem to copy fine (file
sizes match and everything), but the locking worries me. To clone the
VM I just shut the source VM down and then right click on it and select
"Clone"
I've attached the full VDSM log from my last attempt, but here is the
excerpt of the lines just referencing the two disks
(d73206ed-89ba-48a9-82ff-c107c1af60f0 is the original VMs disk and
670a7b20-fecd-45c6-af5c-3c7b98258224 is the clone.)
2017-05-10 11:36:20,120-0300 INFO (jsonrpc/2) [dispatcher] Run and
protect: copyImage(sdUUID=u'20423d5e-188c-4e10-9893-588ceb81b354',
spUUID=u'00000001-0001-0001-0001-000000000311', vmUUID='',
srcImgUUID=u'd73206ed-89ba-48a9-82ff-c107c1af60f0',
srcVolUUID=u'bba1813d-9e91-4e37-bed7-40beedda8051',
dstImgUUID=u'670a7b20-fecd-45c6-af5c-3c7b98258224',
dstVolUUID=u'013d8855-4e49-4984-8266-6a5e9437dff7', description=u'',
dstSdUUID=u'20423d5e-188c-4e10-9893-588ceb81b354', volType=8,
volFormat=5, preallocate=2, postZero=u'false', force=u'false',
discard=False) (logUtils:51)
2017-05-10 11:36:20,152-0300 INFO (jsonrpc/2) [storage.Image] image
d73206ed-89ba-48a9-82ff-c107c1af60f0 in domain
20423d5e-188c-4e10-9893-588ceb81b354 has vollist
[u'bba1813d-9e91-4e37-bed7-40beedda8051'] (image:319)
2017-05-10 11:36:20,169-0300 INFO (jsonrpc/2) [storage.Image]
sdUUID=20423d5e-188c-4e10-9893-588ceb81b354
imgUUID=d73206ed-89ba-48a9-82ff-c107c1af60f0
chain=[<storage.fileVolume.FileVolume object at 0x278e450>] (image:249)
2017-05-10 11:36:20,292-0300 INFO (tasks/0) [storage.Image]
sdUUID=20423d5e-188c-4e10-9893-588ceb81b354
imgUUID=d73206ed-89ba-48a9-82ff-c107c1af60f0
chain=[<storage.fileVolume.FileVolume object at 0x302c6d0>] (image:249)
2017-05-10 11:36:20,295-0300 INFO (tasks/0) [storage.Image]
sdUUID=20423d5e-188c-4e10-9893-588ceb81b354 vmUUID=
srcImgUUID=d73206ed-89ba-48a9-82ff-c107c1af60f0
srcVolUUID=bba1813d-9e91-4e37-bed7-40beedda8051
dstImgUUID=670a7b20-fecd-45c6-af5c-3c7b98258224
dstVolUUID=013d8855-4e49-4984-8266-6a5e9437dff7
dstSdUUID=20423d5e-188c-4e10-9893-588ceb81b354 volType=8 volFormat=RAW
preallocate=SPARSE force=False postZero=False discard=False (image:765)
2017-05-10 11:36:20,305-0300 INFO (tasks/0) [storage.Image] copy source
20423d5e-188c-4e10-9893-588ceb81b354:d73206ed-89ba-48a9-82ff-c107c1af60f0:bba1813d-9e91-4e37-bed7-40beedda8051
vol size 41943040 destination
20423d5e-188c-4e10-9893-588ceb81b354:670a7b20-fecd-45c6-af5c-3c7b98258224:013d8855-4e49-4984-8266-6a5e9437dff7
apparentsize 41943040 (image:815)
2017-05-10 11:36:20,306-0300 INFO (tasks/0) [storage.Image] image
670a7b20-fecd-45c6-af5c-3c7b98258224 in domain
20423d5e-188c-4e10-9893-588ceb81b354 has vollist [] (image:319)
2017-05-10 11:36:20,306-0300 INFO (tasks/0) [storage.Image] Create
placeholder
/rhev/data-center/00000001-0001-0001-0001-000000000311/20423d5e-188c-4e10-9893-588ceb81b354/images/670a7b20-fecd-45c6-af5c-3c7b98258224
for image's volumes (image:149)
2017-05-10 11:36:20,392-0300 INFO (tasks/0) [storage.Volume] Request to
create RAW volume
/rhev/data-center/00000001-0001-0001-0001-000000000311/20423d5e-188c-4e10-9893-588ceb81b354/images/670a7b20-fecd-45c6-af5c-3c7b98258224/013d8855-4e49-4984-8266-6a5e9437dff7
with size = 20480 sectors (fileVolume:439)
2017-05-10 11:37:58,453-0300 INFO (tasks/0) [storage.VolumeManifest]
sdUUID=20423d5e-188c-4e10-9893-588ceb81b354
imgUUID=670a7b20-fecd-45c6-af5c-3c7b98258224 volUUID =
013d8855-4e49-4984-8266-6a5e9437dff7 legality = LEGAL (volume:393)
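As a gentler alternative to hand-editing the images table, the engine ships an unlock helper; the exact invocation below is a sketch from memory (check the -h output first), and <disk-id> is a placeholder for the ID of the locked disk:
# /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -h                            # list supported entity types and options
# /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t disk -u engine <disk-id>   # unlock a specific disk image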
7 years, 10 months
How to reset volume info in storage domain metadata LV
by Kuko Armas
Recently I got a disk snapshot marked as illegal in a VM due to a failed live merge
I fixed the snapshot manually with qemu-img rebase, now qemu-img check shows everything OK
I also set the imagestatus to OK in the engine database, as shown below
engine=# select image_guid,parentid,imagestatus,vm_snapshot_id,volume_type,volume_format,active from images where image_group_id='c35ccdc5-a256-4460-8dd2-9e639b8430e9';
image_guid                           | parentid                             | imagestatus | vm_snapshot_id                       | volume_type | volume_format | active
-------------------------------------+--------------------------------------+-------------+--------------------------------------+-------------+---------------+--------
07729f62-2cd2-45d0-993a-ec8d7fbb6ee0 | 00000000-0000-0000-0000-000000000000 |           1 | 7ae58a5b-eacf-4f6b-a06f-bda1d85170b5 |           1 |             5 | f
ee733323-308a-40c8-95d4-b33ca6307362 | 07729f62-2cd2-45d0-993a-ec8d7fbb6ee0 |           1 | 2ae07078-cc48-4e20-a249-52d3f44082b4 |           2 |             4 | t
But when I try to boot the VM it still fails with this error in the SPM
Thread-292345::ERROR::2017-07-06 14:20:33,155::task::866::Storage.TaskManager.Task::(_setError) Task=`2d262a05-5a03-4ff9-8347-eaa22b6e143c`::Unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 873, in _run
return fn(*args, **kargs)
File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/storage/hsm.py", line 3227, in prepareImage
raise se.prepareIllegalVolumeError(volUUID)
prepareIllegalVolumeError: Cannot prepare illegal volume: ('ee733323-308a-40c8-95d4-b33ca6307362',)
Thread-292345::DEBUG::2017-07-06 14:20:33,156::task::885::Storage.TaskManager.Task::(_run) Task=`2d262a05-5a03-4ff9-8347-eaa22b6e143c`::Task._run: 2d262a05-5a03-4ff9-8347-eaa22b6e143c (u'146dca57-05fd-4b3f-af8d-b253a7ca6f6e', u'00000001-0001-0001-0001-00000000014d', u'c35ccdc5-a256-4460-8dd2-9e639b8430e9', u'ee733323-308a-40c8-95d4-b33ca6307362') {} failed - stopping task
Just before that error I see this operation in the log:
/usr/bin/taskset --cpu-list 0-23 /usr/bin/dd iflag=direct skip=11 bs=512 if=/dev/146dca57-05fd-4b3f-af8d-b253a7ca6f6e/metadata count=1
If I run it manually I get this:
[root@blade6 ~]# /usr/bin/taskset --cpu-list 0-23 /usr/bin/dd iflag=direct skip=11 bs=512 if=/dev/146dca57-05fd-4b3f-af8d-b253a7ca6f6e/metadata count=1
DOMAIN=146dca57-05fd-4b3f-af8d-b253a7ca6f6e
VOLTYPE=LEAF
CTIME=1497399380
FORMAT=COW
IMAGE=c35ccdc5-a256-4460-8dd2-9e639b8430e9
DISKTYPE=2
PUUID=07729f62-2cd2-45d0-993a-ec8d7fbb6ee0
LEGALITY=ILLEGAL
MTIME=0
POOL_UUID=
SIZE=209715200
TYPE=SPARSE
DESCRIPTION=
EOF
So I guess the volume info is cached in the storage domain's metadata LV, and it's still in ILLEGAL status there
Is there a way to force ovirt to update the information in the metadata LV?
Of course I've thought of updating it manually with dd, but it seems too risky (and scary) to do in production
Salu2!
--
Miguel Armas
CanaryTek Consultoria y Sistemas SL
http://www.canarytek.com/
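For completeness, the manual route being discussed looks roughly like the sketch below. Only the device path and the slot offset (skip/seek=11) come from the log above; everything else is an assumption, and the write-back is exactly the risky part, so keep a copy of the original block, keep the domain as quiet as possible, and double-check the offset before writing:
# dd iflag=direct if=/dev/146dca57-05fd-4b3f-af8d-b253a7ca6f6e/metadata bs=512 skip=11 count=1 of=/tmp/slot11.md   # read and back up the 512-byte slot
# cp /tmp/slot11.md /tmp/slot11.md.bak
# sed -i 's/^LEGALITY=ILLEGAL$/LEGALITY=LEGAL/' /tmp/slot11.md    # edit the copy only
# truncate -s 512 /tmp/slot11.md                                  # pad back to exactly one 512-byte block
# dd oflag=direct conv=notrunc if=/tmp/slot11.md of=/dev/146dca57-05fd-4b3f-af8d-b253a7ca6f6e/metadata bs=512 seek=11 count=1   # write it back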
7 years, 10 months
Installation problem - missing dependencies?
by Paweł Zaskórski
Hi everyone!
I'm trying to install oVirt on a fresh CentOS 7 (up-to-date). According to the documentation I did:
# yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
# yum install ovirt-engine
Unfortunately, I'm getting an error:
--> Finished Dependency Resolution
Error: Package: ovirt-engine-4.1.3.5-1.el7.centos.noarch (ovirt-4.1)
Requires: ovirt-engine-cli >= 3.6.2.0
Error: Package:
ovirt-engine-setup-plugin-ovirt-engine-4.1.3.5-1.el7.centos.noarch
(ovirt-4.1)
Requires: ovirt-engine-dwh-setup >= 4.0
Error: Package: ovirt-engine-4.1.3.5-1.el7.centos.noarch (ovirt-4.1)
Requires: ovirt-iso-uploader >= 4.0.0
Error: Package: ovirt-engine-4.1.3.5-1.el7.centos.noarch (ovirt-4.1)
Requires: ovirt-engine-wildfly >= 10.1.0
Error: Package: ovirt-engine-4.1.3.5-1.el7.centos.noarch (ovirt-4.1)
Requires: ovirt-engine-wildfly-overlay >= 10.0.0
Error: Package: ovirt-engine-4.1.3.5-1.el7.centos.noarch (ovirt-4.1)
Requires: ovirt-imageio-proxy
Error: Package:
ovirt-engine-setup-plugin-ovirt-engine-4.1.3.5-1.el7.centos.noarch
(ovirt-4.1)
Requires: ovirt-imageio-proxy-setup
Error: Package: ovirt-engine-4.1.3.5-1.el7.centos.noarch (ovirt-4.1)
Requires: ovirt-engine-dashboard >= 1.0.0
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
My enabled repos:
# yum --noplugins repolist | awk 'FNR > 1 {print $1}' | head -n-1
base/7/x86_64
centos-opstools-release/7/x86_64
extras/7/x86_64
ovirt-4.1/7
ovirt-4.1-centos-gluster38/x86_64
ovirt-4.1-epel/x86_64
ovirt-4.1-patternfly1-noarch-epel/x86_64
ovirt-centos-ovirt41/7/x86_64
sac-gdeploy/x86_64
updates/7/x86_64
virtio-win-stable
Did I miss something? Which repository do packages like ovirt-engine-dashboard or ovirt-engine-dwh-setup come from?
Thank you in advance for your help!
Best regards,
Paweł
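For troubleshooting, it can help to ask yum which enabled repository is expected to provide the missing packages (they normally come from the oVirt 4.1 repos rather than from base/updates); these are plain yum commands, nothing oVirt-specific assumed:
# yum clean all && yum makecache
# yum --noplugins provides ovirt-engine-dashboard ovirt-engine-dwh-setup ovirt-imageio-proxy
# yum repolist enabled | grep -i ovirt   # compare against the repo list above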
7 years, 10 months
Configure an ovirt 4.1 hosted engine using SAS storage (aka DAS storage)
by yayo (j)
Hi all,
I'm trying to install a new oVirt 4.1 cluster (CentOS 7) configured to use a SAN that exposes LUNs via SAS. When I start to deploy oVirt and the engine using "hosted-engine --deploy" the only options I have are:
(glusterfs, iscsi, fc, nfs3, nfs4)
There is no option for "local" storage (which is not really local, but multipath devices exposed by the SAN as LUNs)
Can you help me? What is the right configuration?
Thank you
7 years, 10 months
Networking and oVirt 4.1
by Gabriel Stein
Hi all,
I'm installing oVirt for the first time and I'm having some issues with the
Networking.
Setup:
OS: CentOS 7 Mininal
3 Bare Metal Servers(1 for Engine, 2 for Nodes).
Network:
Nn Trunk Interfaces with VLANs and Bridges.
e.g.:
trunk.100, VLAN: 100, Bridge: vmbr100. IPV4 only.
I have already a VLAN for MGMNT, without DHCP Server(not needed for oVirt,
but explaining my setup).
Networking works as expected, I can ping/ssh each host without problems.
On the two nodes, I have an interface named ovirtmgmt and dhcp...
Question 1: What kind of configuration can I use here? Can I set static IPs from the MGMNT VLAN and put everything from oVirt on that VLAN? oVirt doesn't have an internal DHCP server for nodes, does it?
Question 2: Should I let oVirt set it up (the ovirtmgmt interface) for me?
Problems:
I configured the Engine with the IP 1.1.1.1, and I reach the web interface
with https://FQDN( which is IP: 1.1.1.1)
But, when I add a Host to the Cluster, I have some errors:
"Host XXXX does not comply with the cluster Default networks, the following
networks are missing on host: 'ovirtmgmt'"
Question 3: I saw that Engine tries to call dhclient and Setup an IP for
it, but could I have static IPs? Where can I configure it?
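For what it's worth, a static ovirtmgmt ends up looking roughly like the ifcfg sketch below (the addresses are placeholders borrowed from the 172.30.0.0/16 subnet visible in the supervdsm.log further down); in practice it is usually set from the engine UI (Hosts > Network Interfaces > Setup Host Networks > edit ovirtmgmt > Static) so that VDSM persists it:
# cat /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
DEVICE=ovirtmgmt
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none          # static instead of dhcp
IPADDR=172.30.0.21      # placeholder address in the MGMNT VLAN
NETMASK=255.255.0.0
GATEWAY=172.30.255.254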
* vdsm.log
2017-07-03 15:15:01,772+0200 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getCapabilities succeeded in 0.11 seconds (__init__:533)
2017-07-03 15:15:01,808+0200 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getHardwareInfo succeeded in 0.01 seconds (__init__:533)
2017-07-03 15:15:06,870+0200 INFO (periodic/3) [dispatcher] Run and protect: repoStats(options=None) (logUtils:51)
2017-07-03 15:15:06,871+0200 INFO (periodic/3) [dispatcher] Run and protect: repoStats, Return response: {} (logUtils:54)
2017-07-03 15:15:10,059+0200 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:533)
2017-07-03 15:15:11,643+0200 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:533)
2017-07-03 15:15:12,270+0200 INFO (jsonrpc/3) [dispatcher] Run and protect: repoStats(options=None) (logUtils:51)
2017-07-03 15:15:12,271+0200 INFO (jsonrpc/3) [dispatcher] Run and protect: repoStats, Return response: {} (logUtils:54)
2017-07-03 15:15:12,277+0200 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.00 seconds (__init__:533)
2017-07-03 15:15:21,915+0200 INFO (periodic/3) [dispatcher] Run and protect: repoStats(options=None) (logUtils:51)
2017-07-03 15:15:21,916+0200 INFO (periodic/3) [dispatcher] Run and protect: repoStats, Return response: {} (logUtils:54)
2017-07-03 15:15:25,078+0200 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:533)
2017-07-03 15:15:27,273+0200 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:533)
2017-07-03 15:15:28,330+0200 INFO (jsonrpc/6) [dispatcher] Run and protect: repoStats(options=None) (logUtils:51)
2017-07-03 15:15:28,330+0200 INFO (jsonrpc/6) [dispatcher] Run and protect: repoStats, Return response: {} (logUtils:54)
2017-07-03 15:15:28,337+0200 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.00 seconds (__init__:533)
2017-07-03 15:15:36,960+0200 INFO (periodic/3) [dispatcher] Run and protect: repoStats(options=None) (logUtils:51)
2017-07-03 15:15:36,960+0200 INFO (periodic/3) [dispatcher] Run and protect: repoStats, Return response: {} (logUtils:54)
2017-07-03 15:15:40,096+0200 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:533)
2017-07-03 15:15:43,280+0200 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:533)
2017-07-03 15:15:44,408+0200 INFO (jsonrpc/1) [dispatcher] Run and protect: repoStats(options=None) (logUtils:51)
2017-07-03 15:15:44,408+0200 INFO (jsonrpc/1) [dispatcher] Run and protect: repoStats, Return response: {} (logUtils:54)
2017-07-03 15:15:44,415+0200 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:533)
2017-07-03 15:15:52,006+0200 INFO (periodic/3) [dispatcher] Run and protect: repoStats(options=None) (logUtils:51)
2017-07-03 15:15:52,006+0200 INFO (periodic/3) [dispatcher] Run and protect: repoStats, Return response: {} (logUtils:54)
2017-07-03 15:15:55,115+0200 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:533)
2017-07-03 15:15:59,287+0200 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:533)
2017-07-03 15:16:00,465+0200 INFO (jsonrpc/4) [dispatcher] Run and protect: repoStats(options=None) (logUtils:51)
2017-07-03 15:16:00,465+0200 INFO (jsonrpc/4) [dispatcher] Run and protect: repoStats, Return response: {} (logUtils:54)
* supervdsm.log
MainProcess|jsonrpc/7::DEBUG::2017-07-03 15:15:01,661::supervdsmServer::93::SuperVdsm.ServerCallback::(wrapper) call network_caps with () {}
MainProcess|jsonrpc/7::DEBUG::2017-07-03 15:15:01,693::commands::69::root::(execCmd) /usr/bin/taskset --cpu-list 0-23 /sbin/ip route show to 0.0.0.0/0 table main (cwd None)
MainProcess|jsonrpc/7::DEBUG::2017-07-03 15:15:01,697::commands::93::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/7::DEBUG::2017-07-03 15:15:01,748::commands::69::root::(execCmd) /usr/bin/taskset --cpu-list 0-23 /usr/sbin/tc qdisc show (cwd None)
MainProcess|jsonrpc/7::DEBUG::2017-07-03 15:15:01,753::commands::93::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/7::DEBUG::2017-07-03 15:15:01,754::supervdsmServer::100::SuperVdsm.ServerCallback::(wrapper) return network_caps with {'bridges': {'vmbr200': {'ipv6autoconf': True,
'addr': '', 'dhcpv6': False, 'ipv6addrs': [], 'gateway': '', 'dhcpv4':
False, 'netmask': '', 'ipv4defaultroute': False, 'stp': 'on', 'ipv4addrs':
[], 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['em1.200'], 'opts':
{'multicast_last_member_count': '2', 'hash_elasticity': '4',
'multicast_query_response_interval': '1000', 'group_fwd_mask': '0x0',
'multicast_snooping': '1', 'multicast_startup_query_interval': '3125',
'hello_timer': '50', 'multicast_querier_interval': '25500', 'max_age':
'2000', 'hash_max': '512', 'stp_state': '1', 'topology_change_detected':
'0', 'priority': '32768', 'multicast_membership_interval': '26000',
'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0',
'multicast_startup_query_count': '2', 'nf_call_iptables': '0',
'topology_change': '0', 'hello_time': '200', 'root_id':
'8000.b8ac6f90cb99', 'bridge_id': '8000.b8ac6f90cb99',
'topology_change_timer': '0', 'ageing_time': '30000', 'nf_call_ip6tables':
'0', 'gc_timer': '525', 'nf_call_arptables': '0', 'group_addr':
'1:80:c2:0:0:0', 'multicast_last_member_interval': '100', 'default_pvid':
'1', 'multicast_query_interval': '12500', 'tcn_timer': '0',
'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': '500'}},
'vmbr201': {'ipv6autoconf': True, 'addr': '172.30.0.11', 'dhcpv6': False,
'ipv6addrs': [], 'gateway': '172.30.255.254', 'dhcpv4': False, 'netmask':
'255.255.0.0', 'ipv4defaultroute': True, 'stp': 'on', 'ipv4addrs':
['1.1.1.1/16' <http://1.1.1.1/16'>], 'mtu': '1500', 'ipv6gateway':
'fe80::be5f:f4ff:fe52:96c1', 'ports': ['em1.201'], 'opts':
{'multicast_last_member_count': '2', 'hash_elasticity': '4',
'multicast_query_response_interval': '1000', 'group_fwd_mask': '0x0',
'multicast_snooping': '1', 'multicast_startup_query_interval': '3125',
'hello_timer': '0', 'multicast_querier_interval': '25500', 'max_age':
'2000', 'hash_max': '512', 'stp_state': '1', 'topology_change_detected':
'0', 'priority': '32768', 'multicast_membership_interval': '26000',
'root_path_cost': '100', 'root_port': '1', 'multicast_querier': '0',
'multicast_startup_query_count': '2', 'nf_call_iptables': '0',
'topology_change': '0', 'hello_time': '200', 'root_id':
'8000.0013725e0ff1', 'bridge_id': '8000.b8ac6f90cb99',
'topology_change_timer': '0', 'ageing_time': '30000', 'nf_call_ip6tables':
'0', 'gc_timer': '627', 'nf_call_arptables': '0', 'group_addr':
'1:80:c2:0:0:0', 'multicast_last_member_interval': '100', 'default_pvid':
'1', 'multicast_query_interval': '12500', 'tcn_timer': '0',
'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': '500'}},
'vmbr202': {'ipv6autoconf': True, 'addr': '', 'dhcpv6': False, 'ipv6addrs':
[], 'gateway': '', 'dhcpv4': False, 'netmask': '', 'ipv4defaultroute':
False, 'stp': 'on', 'ipv4addrs': [], 'mtu': '1500', 'ipv6gateway': '::',
'ports': ['em1.202'], 'opts': {'multicast_last_member_count': '2',
'hash_elasticity': '4', 'multicast_query_response_interval': '1000',
'group_fwd_mask': '0x0', 'multicast_snooping': '1',
'multicast_startup_query_interval': '3125', 'hello_timer': '50',
'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max':
'512', 'stp_state': '1', 'topology_change_detected': '0', 'priority':
'32768', 'multicast_membership_interval': '26000', 'root_path_cost': '0',
'root_port': '0', 'multicast_querier': '0',
'multicast_startup_query_count': '2', 'nf_call_iptables': '0',
'topology_change': '0', 'hello_time': '200', 'root_id':
'8000.b8ac6f90cb99', 'bridge_id': '8000.b8ac6f90cb99',
'topology_change_timer': '0', 'ageing_time': '30000', 'nf_call_ip6tables':
'0', 'gc_timer': '525', 'nf_call_arptables': '0', 'group_addr':
'1:80:c2:0:0:0', 'multicast_last_member_interval': '100', 'default_pvid':
'1', 'multicast_query_interval': '12500', 'tcn_timer': '0',
'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': '500'}},
'vmbr210': {'ipv6autoconf': True, 'addr': '', 'dhcpv6': False, 'ipv6addrs':
[], 'gateway': '', 'dhcpv4': False, 'netmask': '', 'ipv4defaultroute':
False, 'stp': 'on', 'ipv4addrs': [], 'mtu': '1500', 'ipv6gateway': '::',
'ports': ['em1.210'], 'opts': {'multicast_last_member_count': '2',
'hash_elasticity': '4', 'multicast_query_response_interval': '1000',
'group_fwd_mask': '0x0', 'multicast_snooping': '1',
'multicast_startup_query_interval': '3125', 'hello_timer': '51',
'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max':
'512', 'stp_state': '1', 'topology_change_detected': '0', 'priority':
'32768', 'multicast_membership_interval': '26000', 'root_path_cost': '0',
'root_port': '0', 'multicast_querier': '0',
'multicast_startup_query_count': '2', 'nf_call_iptables': '0',
'topology_change': '0', 'hello_time': '200', 'root_id':
'8000.b8ac6f90cb99', 'bridge_id': '8000.b8ac6f90cb99',
'topology_change_timer': '0', 'ageing_time': '30000', 'nf_call_ip6tables':
'0', 'gc_timer': '526', 'nf_call_arptables': '0', 'group_addr':
'1:80:c2:0:0:0', 'multicast_last_member_interval': '100', 'default_pvid':
'1', 'multicast_query_interval': '12500', 'tcn_timer': '0',
'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': '500'}},
'vmbr206': {'ipv6autoconf': True, 'addr': '', 'dhcpv6': False, 'ipv6addrs':
[], 'gateway': '', 'dhcpv4': False, 'netmask': '', 'ipv4defaultroute':
False, 'stp': 'on', 'ipv4addrs': [], 'mtu': '1500', 'ipv6gateway': '::',
'ports': ['em1.206'], 'opts': {'multicast_last_member_count': '2',
'hash_elasticity': '4', 'multicast_query_response_interval': '1000',
'group_fwd_mask': '0x0', 'multicast_snooping': '1',
'multicast_startup_query_interval': '3125', 'hello_timer': '50',
'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max':
'512', 'stp_state': '1', 'topology_change_detected': '0', 'priority':
'32768', 'multicast_membership_interval': '26000', 'root_path_cost': '0',
'root_port': '0', 'multicast_querier': '0',
'multicast_startup_query_count': '2', 'nf_call_iptables': '0',
'topology_change': '0', 'hello_time': '200', 'root_id':
'8000.b8ac6f90cb99', 'bridge_id': '8000.b8ac6f90cb99',
'topology_change_timer': '0', 'ageing_time': '30000', 'nf_call_ip6tables':
'0', 'gc_timer': '526', 'nf_call_arptables': '0', 'group_addr':
'1:80:c2:0:0:0', 'multicast_last_member_interval': '100', 'default_pvid':
'1', 'multicast_query_interval': '12500', 'tcn_timer': '0',
'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': '500'}},
'vmbr260': {'ipv6autoconf': True, 'addr': '', 'dhcpv6': False, 'ipv6addrs':
[], 'gateway': '', 'dhcpv4': False, 'netmask': '', 'ipv4defaultroute':
False, 'stp': 'on', 'ipv4addrs': [], 'mtu': '1500', 'ipv6gateway': '::',
'ports': ['em1.260'], 'opts': {'multicast_last_member_count': '2',
'hash_elasticity': '4', 'multicast_query_response_interval': '1000',
'group_fwd_mask': '0x0', 'multicast_snooping': '1',
'multicast_startup_query_interval': '3125', 'hello_timer': '51',
'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max':
'512', 'stp_state': '1', 'topology_change_detected': '0', 'priority':
'32768', 'multicast_membership_interval': '26000', 'root_path_cost': '0',
'root_port': '0', 'multicast_querier': '0',
'multicast_startup_query_count': '2', 'nf_call_iptables': '0',
'topology_change': '0', 'hello_time': '200', 'root_id':
'8000.b8ac6f90cb99', 'bridge_id': '8000.b8ac6f90cb99',
'topology_change_timer': '0', 'ageing_time': '30000', 'nf_call_ip6tables':
'0', 'gc_timer': '526', 'nf_call_arptables': '0', 'group_addr':
'1:80:c2:0:0:0', 'multicast_last_member_interval': '100', 'default_pvid':
'1', 'multicast_query_interval': '12500', 'tcn_timer': '0',
'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': '500'}},
'vmbr207': {'ipv6autoconf': True, 'addr': '', 'dhcpv6': False, 'ipv6addrs':
[], 'gateway': '', 'dhcpv4': False, 'netmask': '', 'ipv4defaultroute':
False, 'stp': 'on', 'ipv4addrs': [], 'mtu': '1500', 'ipv6gateway': '::',
'ports': ['em1.207'], 'opts': {'multicast_last_member_count': '2',
'hash_elasticity': '4', 'multicast_query_response_interval': '1000',
'group_fwd_mask': '0x0', 'multicast_snooping': '1',
'multicast_startup_query_interval': '3125', 'hello_timer': '51',
'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max':
'512', 'stp_state': '1', 'topology_change_detected': '0', 'priority':
'32768', 'multicast_membership_interval': '26000', 'root_path_cost': '0',
'root_port': '0', 'multicast_querier': '0',
'multicast_startup_query_count': '2', 'nf_call_iptables': '0',
'topology_change': '0', 'hello_time': '200', 'root_id':
'8000.b8ac6f90cb99', 'bridge_id': '8000.b8ac6f90cb99',
'topology_change_timer': '0', 'ageing_time': '30000', 'nf_call_ip6tables':
'0', 'gc_timer': '526', 'nf_call_arptables': '0', 'group_addr':
'1:80:c2:0:0:0', 'multicast_last_member_interval': '100', 'default_pvid':
'1', 'multicast_query_interval': '12500', 'tcn_timer': '0',
'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': '500'}},
'vmbr208': {'ipv6autoconf': True, 'addr': '', 'dhcpv6': False, 'ipv6addrs':
[], 'gateway': '', 'dhcpv4': False, 'netmask': '', 'ipv4defaultroute':
False, 'stp': 'on', 'ipv4addrs': [], 'mtu': '1500', 'ipv6gateway': '::',
'ports': ['em1.208'], 'opts': {'multicast_last_member_count': '2',
'hash_elasticity': '4', 'multicast_query_response_interval': '1000',
'group_fwd_mask': '0x0', 'multicast_snooping': '1',
'multicast_startup_query_interval': '3125', 'hello_timer': '51',
'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max':
'512', 'stp_state': '1', 'topology_change_detected': '0', 'priority':
'32768', 'multicast_membership_interval': '26000', 'root_path_cost': '0',
'root_port': '0', 'multicast_querier': '0',
'multicast_startup_query_count': '2', 'nf_call_iptables': '0',
'topology_change': '0', 'hello_time': '200', 'root_id':
'8000.b8ac6f90cb99', 'bridge_id': '8000.b8ac6f90cb99',
'topology_change_timer': '0', 'ageing_time': '30000', 'nf_call_ip6tables':
'0', 'gc_timer': '526', 'nf_call_arptables': '0', 'group_addr':
'1:80:c2:0:0:0', 'multicast_last_member_interval': '100', 'default_pvid':
'1', 'multicast_query_interval': '12500', 'tcn_timer': '0',
'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': '500'}},
'vmbr101': {'ipv6autoconf': True, 'addr': '', 'dhcpv6': False, 'ipv6addrs':
[], 'gateway': '', 'dhcpv4': False, 'netmask': '', 'ipv4defaultroute':
False, 'stp': 'on', 'ipv4addrs': [], 'mtu': '1500', 'ipv6gateway': '::',
'ports': [], 'opts': {'multicast_last_member_count': '2',
'hash_elasticity': '4', 'multicast_query_response_interval': '1000',
'group_fwd_mask': '0x0', 'multicast_snooping': '1',
'multicast_startup_query_interval': '3125', 'hello_timer': '51',
'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max':
'512', 'stp_state': '1', 'topology_change_detected': '0', 'priority':
'32768', 'multicast_membership_interval': '26000', 'root_path_cost': '0',
'root_port': '0', 'multicast_querier': '0',
'multicast_startup_query_count': '2', 'nf_call_iptables': '0',
'topology_change': '0', 'hello_time': '200', 'root_id':
'8000.000000000000', 'bridge_id': '8000.000000000000',
'topology_change_timer': '0', 'ageing_time': '30000', 'nf_call_ip6tables':
'0', 'gc_timer': '527', 'nf_call_arptables': '0', 'group_addr':
'1:80:c2:0:0:0', 'multicast_last_member_interval': '100', 'default_pvid':
'1', 'multicast_query_interval': '12500', 'tcn_timer': '0',
'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': '500'}},
'vmbr100': {'ipv6autoconf': True, 'addr': '', 'dhcpv6': False, 'ipv6addrs':
[], 'gateway': '', 'dhcpv4': False, 'netmask': '', 'ipv4defaultroute':
False, 'stp': 'on', 'ipv4addrs': [], 'mtu': '1500', 'ipv6gateway': '::',
'ports': ['em1.100'], 'opts': {'multicast_last_member_count': '2',
'hash_elasticity': '4', 'multicast_query_response_interval': '1000',
'group_fwd_mask': '0x0', 'multicast_snooping': '1',
'multicast_startup_query_interval': '3125', 'hello_timer': '49',
'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max':
Thanks in Advance!
Best Regards,
Gabriel
7 years, 10 months
Upgrading HC from 4.0 to 4.1
by Gianluca Cecchi
Hello,
I'm going to try to update to 4.1 an HC environment, currently on 4.0 with
3 nodes in CentOS 7.3 and one of them configured as arbiter
Any particular caveat in HC?
Are the steps below, normally used for Self Hosted Engine environments the
only ones to consider?
- update repos on the 3 hosts and on the engine vm
- global maintenance
- update engine
- update also os packages of engine vm
- shutdown engine vm
- disable global maintenance
- verify engine vm boots and functionality is ok
Then
- update hosts: is the preferred way from the GUI itself, which takes care of moving VMs, maintenance and so on, or should I proceed manually?
Is there a preferred order in which I have to update the hosts after updating the engine? Arbiter first, last, or not important at all?
Are there any possible problems from having misaligned glusterfs package versions until I complete all 3 hosts? Any known bugs going from 4.0 to 4.1 and the related glusterfs components?
Thanks in advance,
Gianluca
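For reference, the engine-side steps above usually boil down to something like the following (package glob and ordering assumed for a 4.0 to 4.1 self-hosted engine; hosts are then updated one at a time, either from the web UI or manually):
# hosted-engine --set-maintenance --mode=global   # on one of the hosts
# yum update "ovirt-*-setup*"                     # on the engine VM, after switching it to the 4.1 repo
# engine-setup
# yum update                                      # remaining OS packages on the engine VM, then restart it
# hosted-engine --set-maintenance --mode=none     # back on the host, once the engine VM is up again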
7 years, 10 months
How can a Responsive website benefit your business?
by gajla parvin
Hello Sir/Mam,
By the end of 2017, the number of customers using mobile devices to browse the Web is projected to reach 5 billion people because customers are looking for your business on their mobile phones, it's the perfect opportunity to expand your online presence with a mobile website.
How can a mobile website benefit your business?
When customers are looking for a business on their mobile devices, it's not because they're casually shopping or researching. They're searching for a particular product or service because they have a pressing need to locate that product or service. So when you have a mobile website that appears during a user's search-and-find mission, you let your customers know that you're open and ready for business . . . right at the moment they need you.
We offer a new mobile website service that makes it easy for mobile users to find your business online. With our service, you can:
Launch a mobile website in weeks: With our intuitive user interface, we can create a mobile website for your business in weeks.
Leverage your current website address: Our technology detects when customers are using a mobile device and then presents your mobile website to them using the same domain address. That way, there's no need to choose a new website address for your business.
Optimize your website for mobile users: A screen with easy-to-read icons helps users quickly navigate to the information they need.
Maximize your business presence: Because mobile users can search for your business anywhere, anytime, a mobile website gives your business 24x7 exposure.
OUR PROCESS:
- STEP 1 - PLANNING
- STEP 2 - DESIGN
- STEP 3 - CODING
- STEP 4 - CONTENT
- STEP 5 - WEBSITE SETUP
- STEP 6 - REVIEW
- STEP 7 - TESTING & LAUNCH
- STEP 8 - HANDOVER & SUPPORT
Some of the Web Design & Development services that we offer:
- HTML5 & CSS3 Coding
- AJAX & JQuery Development
- PHP Web Development
- Responsive Web Design
- Mobile Web Design
- Content Management Systems & Framework
If you're ready to make your mobile website now, please email us back.
Thanks & Regards
Marketing Executive
Note: - If this is something you are interested in, please respond to this email. If this is not your interest, don't worry, we will not email you again.
7 years, 10 months
Best practices for LACP bonds on oVirt
by Vinícius Ferrão
Hello,
I’m deploying oVirt for the first time and a question has emerged: what is the good practice for enabling LACP on oVirt Node? Should I create the 802.3ad bond during the oVirt Node installation in Anaconda, or should it be done later from within the Hosted Engine manager?
In my deployment we have 4x GbE interfaces. eth0 and eth1 will be a LACP bond for management and servers VLAN’s, while eth1 and eth2 are Multipath iSCSI disks (MPIO).
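For reference, the 802.3ad bond I have in mind would look roughly like the ifcfg sketch below if created before adding the host (names, options and paths are only illustrative placeholders; as far as I can tell the same bond can also be built later from the engine's Setup Host Networks dialog):
# /etc/sysconfig/network-scripts/ifcfg-bond0 (example)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=802.3ad miimon=100"
ONBOOT=yes
BOOTPROTO=none
# /etc/sysconfig/network-scripts/ifcfg-eth0 (and likewise for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none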
Thanks,
V.
7 years, 10 months
ovirt-guest-agent - Ubuntu 16.04
by FERNANDO FREDIANI
Hello
Is the maintainer of ovirt-guest-agent for Ubuntu on this mailing list?
I have noticed that if you install the ovirt-guest-agent package from the Ubuntu
repositories it doesn't start; it throws an error about Python and never
starts. Has anyone noticed the same? The OS in this case is a clean
minimal install of Ubuntu 16.04.
Installing it from the following repository works fine -
http://download.opensuse.org/repositories/home:/evilissimo:/ubuntu:/16.04...
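To capture the actual Python error for a bug report, the standard systemd tooling shows the traceback from the failing start (adjust the unit name if the Ubuntu package uses a different one):
systemctl status ovirt-guest-agent.service
journalctl -u ovirt-guest-agent.service --no-pager -n 50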
Fernando
7 years, 10 months
oVirt 4.1.2 and rubygem-fluent-plugin packages missing
by Gianluca Cecchi
Hello,
an environment with engine in 4.1.2 and 3 hosts too (updated all from 4.0.5
3 days ago).
In the web admin GUI the 3 hosts keep showing the symbol that updates are
available.
In the Events message board I have:
Check for available updates on host ovirt01.localdomain.local was completed
successfully with message 'found updates for packages
rubygem-fluent-plugin-collectd-nest-0.1.3-1.el7,
rubygem-fluent-plugin-viaq_data_model-0.0.3-1.el7'.
But on host:
[root@ovirt01 qemu]# yum update
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: it.centos.contactlab.it
* epel: mirror.spreitzer.ch
* extras: it.centos.contactlab.it
* ovirt-4.1: ftp.nluug.nl
* ovirt-4.1-epel: mirror.spreitzer.ch
* updates: it.centos.contactlab.it
No packages marked for update
[root@ovirt01 qemu]#
And
[root@ovirt01 qemu]# rpm -q rubygem-fluent-plugin-collectd-nest
rubygem-fluent-plugin-viaq_data_model
package rubygem-fluent-plugin-collectd-nest is not installed
package rubygem-fluent-plugin-viaq_data_model is not installed
[root@ovirt01 qemu]#
Is it a bug in 4.1.2? Or should I manually install these two packages?
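A quick check of whether those two packages are even available from the enabled repositories (plain yum, nothing oVirt-specific):
yum list available rubygem-fluent-plugin-collectd-nest rubygem-fluent-plugin-viaq_data_model
yum search rubygem-fluent-plugin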
Thanks,
Gianluca
7 years, 10 months
[Ovirt 4.0.6] Suggestion required for Network Throughput options
by TranceWorldLogic .
Hi,
To increase network throughput we have changed the txqueuelen of the network
device and the bridge manually, and observed improved throughput.
But in oVirt I do not see any option to increase txqueuelen.
Can someone suggest the right way to increase throughput?
Note: I am trying to increase throughput for ipsec packets.
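For reference, what we currently run by hand looks roughly like this (device names and the 10000 value are only examples); the question is how to have oVirt apply something like it persistently, e.g. through a vdsm hook, instead of doing it manually:
ip link set dev eth0 txqueuelen 10000
ip link set dev ovirtmgmt txqueuelen 10000
ip link set dev vnet0 txqueuelen 10000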
Thanks,
~Rohit
7 years, 10 months
iSCSI multipathing setup troubles
by Matthias Leopold
hi,
i'm trying to use iSCSI multipathing for a LUN shared by a Hitachi SAN.
i can't figure out how this is supposed to work, maybe my setup isn't
applicable at all...
our storage admin shared the same LUN for me on two targets, which are
located in two logical networks connected to different switches (i asked
him to do so). on the oVirt hypervisor side there is only one bonded
interface for storage traffic, so i configured two VLAN interfaces
located in these networks on the bond interface.
now i create the storage domain logging in to one of the targets
connecting through its logical network. when i try to create a "second"
storage domain for the same LUN logging in to the second target, oVirt
tells me "LUN is already in use". i understand this, but now i can't
configure an oVirt "iSCSI Bond" in any way.
how is this supposed to work?
right now the only working setup i can think of would be an iSCSI target
that uses a redundant bond interface (with only one IP addresss) to
which my hypervisors connect through different routed networks (using
either dedicated network cards or vlan interfaces). is that correct?
i feel like i'm missing something, but i couldn't find any examples for
real world IP setups for iSCSI multipathing.
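for whatever setup ends up working, this is how i'd check that both paths are actually logged in and grouped under one multipath device (plain iscsiadm/multipath, nothing ovirt-specific):
iscsiadm -m session -P 1     # expect one session per target portal
multipath -ll                # the LUN should appear as one map with two paths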
thanks for explaining
matthias
7 years, 10 months
download/export a VM image
by aduckers
Running a 4.1 cluster with FC SAN storage. I’ve got a VM that I’ve customized, and would now like to pull that out of oVirt in order to share with folks outside the environment.
What’s the easiest way to do that?
I see that the export domain is being deprecated, though I can still set one up at this time. Even in the case of an NFS export domain though, it looks like I’d need to drill down into the exported file system and find the correct image based on VMID (I think..).
Is there a simple way to grab a VM image?
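If an NFS export domain ends up being the route after all, then once the image file is located under images/<disk-id>/ on the export share, I assume something like qemu-img can repackage it for sharing (paths below are placeholders):
qemu-img convert -O qcow2 /mnt/export/<sd-id>/images/<disk-id>/<image-id> my-vm-disk.qcow2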
Thanks
7 years, 10 months
add fourth full gluster node and remove arbiter: ovirt 4.1 with hosted engine
by yayo (j)
Hi at all,
we have a 3 node cluster with this configuration:
oVirt 4.1, 3-node hyperconverged with gluster. 2 nodes are "full
replicas" and 1 node is the arbiter.
Now we have a new server to add to the cluster, and we want to add this new
server and remove the arbiter (or make this new server a "full replica"
and keep an arbiter role? I don't know).
Can you please help me to understand the right way to do this? Or can
you give me any doc or link that explains the steps?
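(For reference, my unverified understanding is that the gluster-level steps are a remove-brick/add-brick pair along these lines, with healing allowed to finish in between; the volume name, hosts and brick paths are placeholders, and the exact replica-count syntax should be checked against the docs for the gluster version in use:)
gluster volume remove-brick VOLNAME replica 2 arbiter-host:/gluster/brick/VOLNAME force
gluster volume add-brick VOLNAME replica 3 new-host:/gluster/brick/VOLNAME
gluster volume heal VOLNAME full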
Thank you in advance!
7 years, 10 months
Re: [ovirt-users] Recovering hosted-engine
by Andrew Dent
Hi Didi
Fair enough.
If I'm in this situation.....
I have 3 hosts with 6 production VMs.
The hosted-engine VM is completely toast and not recoverable.
However I have a backup of the hosted-engine database (do I need
anything else?).
Is it possible to build a new VM, import the backup of the previous
hosted-engine database and reconnect the storage domains and VMs in
their running state without any VMs experiencing an outage?
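For reference, if a fresh backup still needs to be taken: the documented restore flow works from a full engine-backup archive rather than only a database dump, and taking one is a single command with the standard engine-backup tool (file names below are examples):
engine-backup --mode=backup --scope=all --file=engine-backup.tar.gz --log=engine-backup.log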
The URL
http://www.ovirt.org/documentation/self-hosted/chap-Backing_up_and_Restor...
looks to be longer now.
I'll review, test and see if this will give me what I'm looking for.
The broken link still seems to be broken.
When I click the link, the browser ends up at this
http://www.ovirt.org/documentation/self-hosted/chap-Backing_up_and_Restor...
But I suspect it should be
http://www.ovirt.org/documentation/self-hosted/chap-Installing_Additional...
Kind regards
Andrew
------ Original Message ------
From: "Yedidyah Bar David" <didi(a)redhat.com>
To: "Andrew Dent" <adent(a)ctcroydon.com.au>
Cc: "users" <users(a)ovirt.org>
Sent: 3/07/2017 11:12:05 PM
Subject: Re: [ovirt-users] Recovering hosted-engine
>On Mon, Jul 3, 2017 at 3:46 PM, Andrew Dent <adent(a)ctcroydon.com.au>
>wrote:
>> Has anyone successfully completed a hosted-engine recovery on a
>>multiple
>> host setup with production VMs?
>
>I'd like to clarify that "recovery" can span a large spectrum of
>flows, from a trivial "I did some change to the engine database
>that broke stuff and I want to restore a backup I took prior to
>this change" to a full system restoration including purchasing
>and deploying new (perhaps different) hosts/network/storage
>hardware, including many other flows in between.
>
>So when you plan for recovery, you should define very well what
>flows you plan to handle, and how you handle each.
>
>The linked procedure correctly says it's "providing an example".
>
>>
>> Kind regards
>>
>>
>> Andrew
>>
>>
>>
>> ------ Original Message ------
>> From: "Andrew Dent" <adent(a)ctcroydon.com.au>
>> To: "users" <users(a)ovirt.org>
>> Sent: 2/07/2017 2:22:16 PM
>> Subject: [ovirt-users] Recovering hosted-engine
>>
>> Hi
>>
>> A couple of questions about hosted-engine recovery.
>> Part way through this URL, in the section "Workflow for Restoring the
>> Self-Hosted Engine Environment"
>>
>>http://www.ovirt.org/documentation/self-hosted/chap-Backing_up_and_Restor...
>> it looks like once the hosted-engine is recovered on Host 1, the VMs
>>on Host
>> 2 and 3 will be running, but not accessible to the recovered Hosted
>>Engine.
>> Is that correct?
>
>I am pretty certain that the procedure assumed that all hosts need
>restoration,
>not that some are still up-and-running.
>
>> If so, how do you remove host 2 and host 3 from the environment, then
>>add
>> back in again while keeping the VMs running?
>
>That's a good question.
>
>Please try to describe the exact flow you have in mind. What's broken
>and
>needs restoration, and how do you plan to do that?
>
>>
>> Host 2 and Host 3 are not recoverable in their current state. These
>>hosts
>> need to be removed from the environment, and then added again to the
>> environment using the hosted-engine deployment script. For more
>>information
>> on these actions, see the Removing Non-Operational Hosts from a
>>Restored
>> Self-Hosted Engine Environment section below and Chapter 7:
>>Installing
>> Additional Hosts to a Self-Hosted Environment.
>>
>> BTW: The link referring to chapter 7 is broken.
>
>You are right. The link in the bottom of the page seems working.
>Now pushed [1] to fix. Thanks for the report!
>
>[1]
>
>Best,
>--
>Didi
7 years, 10 months
Virtual Machine loses connectivity with no clear explanation
by FERNANDO FREDIANI
I have a rather strange issue which is affecting one of my most recently deployed
hypervisors. It is a CentOS 7 host (not an oVirt Node) which runs only 3
virtual machines.
One of these VMs has fairly high output traffic at peaks (500 -
700 Mbps) and the hypervisor underneath is connected to the switch via a
bond (mode=2) which in turn creates bond0.XX interfaces which are
connected to different bridges for each network. The VM in question is
connected to the bridge "ovirtmgmt".
When the problem happens the VM stops passing traffic and cannot reach
even the router or other VMs in the same Layer 2. It seems the bridge stops
passing traffic for that particular VM. The other VMs have worked fine since they
were created. When this problem happens I just need to go to the VM's console
and run a reboot (Ctrl-Alt-Del); I don't even need to Power Off and Power
On again using oVirt Engine.
I have even re-installed this VM's operating system from scratch but the
problem persists. I have also changed the vNIC MAC address in case of a
conflicting MAC address somewhere in that Layer 2 (already checked).
Lastly, my hypervisor machine (due to a mistake) has been running with
SELinux disabled; I'm not sure if that could have anything to do with this
behavior.
Anyway, has anyone ever seen behavior like that?
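When it happens again, one thing worth capturing is the bridge's forwarding table for that VM's MAC before and after the hang, to see whether the entry ages out or moves to the wrong port (generic bridge tooling; the MAC below is a placeholder):
brctl showmacs ovirtmgmt | grep -i 00:1a:4a:xx:xx:xx
bridge fdb show br ovirtmgmt | grep -i 00:1a:4a:xx:xx:xx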
Thanks
Fernando
7 years, 10 months
Re: [ovirt-users] Recovering hosted-engine
by Andrew Dent
Has anyone successfully completed a hosted-engine recovery on a multiple
host setup with production VMs?
Kind regards
Andrew
------ Original Message ------
From: "Andrew Dent" <adent(a)ctcroydon.com.au>
To: "users" <users(a)ovirt.org>
Sent: 2/07/2017 2:22:16 PM
Subject: [ovirt-users] Recovering hosted-engine
>Hi
>
>A couple of questions about hosted-engine recovery.
>Part way through this URL, in the section "Workflow for Restoring the
>Self-Hosted Engine Environment"
>http://www.ovirt.org/documentation/self-hosted/chap-Backing_up_and_Restoring_an_EL-Based_Self-Hosted_Environment/
>it looks like once the hosted-engine is recovered on Host 1, the VMs on
>Host 2 and 3 will be running, but not accessible to the recovered
>Hosted Engine.
>Is that correct?
>If so, how do you remove host 2 and host 3 from the environment, then
>add back in again while keeping the VMs running?
>
>Host 2 and Host 3 are not recoverable in their current state. These
>hosts need to be removed from the environment, and then added again to
>the environment using the hosted-engine deployment script. For more
>information on these actions, see the Removing Non-Operational Hosts
>from a Restored Self-Hosted Engine Environment section below and
>Chapter 7: Installing Additional Hosts to a Self-Hosted Environment
><http://www.ovirt.org/documentation/self-hosted/chap-Backing_up_and_Restoring_an_EL-Based_Self-Hosted_Environment/chap-Installing_Additional_Hosts_to_a_Self-Hosted_Environment>.
>
>BTW: The link referring to chapter 7 is broken.
>
>Kind regards
>
>Andrew
7 years, 10 months
Gluster issue with /var/lib/glusterd/peers/<ip> file
by Mike DePaulo
Hi everyone,
I have ovirt 4.1.1/4.1.2 running on 3 hosts with a gluster hosted engine.
I was working on setting up a network for gluster storage and
migration. The addresses for it will be 10.0.20.x, rather than
192.168.1.x for the management network. However, I switched gluster
storage and migration back over to the management network.
I updated and rebooted one of my hosts (death-star, 10.0.20.52) and on
reboot, the glusterd service would start, but wouldn't seem to work.
The engine webgui reported that its bricks were down, and commands
like this would fail:
[root@death-star glusterfs]# gluster pool list
pool list: failed
[root@death-star glusterfs]# gluster peer status
peer status: failed
Upon further investigation, I had under /var/lib/glusterd/peers/ the 2
existing UUID files, plus a new 3rd one:
[root@death-star peers]# cat 10.0.20.53
uuid=00000000-0000-0000-0000-000000000000
state=0
hostname1=10.0.20.53
I moved that file out of there, restarted glusterd, and now gluster is
working again.
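For reference, my (unverified) understanding is that the hostname2=/hostname3= entries in those peer files normally get added by probing the extra address of an already-known peer from a healthy node, e.g.:
gluster peer probe 10.0.20.52
gluster peer status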
I am guessing that this is a bug. Let me know if I should attach other
log files; I am not sure which ones.
And yes, 10.0.20.53 is the IP of one of the other hosts.
-Mike
7 years, 10 months
Other portals work, user portal experiences console errors
by Sophia Valentine
Hi all!
I installed the nightly from the repo on CentOS 7.0.
I currently have an issue where visiting the user portal triggers the
following errors in the browser console. I believe that I will need to
compile the engine from source, due to the current source being
compacted, thus obscuring the stacktrace.
If that is needed, can I generate an RPM with the debug build I create
(might need some help with this, if that's okay)? I'm hoping it's a
"known error" and that there's a simple fix. Would love to get to the
bottom of this!
Thanks.
Stacktrace below:
Mon Jul 03 09:13:51 GMT+100 2017
com.gwtplatform.mvp.client.presenter.slots.LegacySlotConvertor
SEVERE: Warning: You're using an untyped slot!
Untyped slots are dangerous! Please upgrade your slots using
the Arcbee's easy upgrade tool at
https://arcbees.github.io/gwtp-slot-upgrader
uGc @ userportal-0.js:12779
userportal-0.js:12779 Mon Jul 03 09:13:51 GMT+100 2017
SEVERE: Uncaught exception
com.google.web.bindery.event.shared.UmbrellaException: 2
exceptions caught: Adding non GroupedTabData; Adding non
GroupedTabData
at Unknown.lp(userportal-0.js)
at Unknown.xp(userportal-0.js)
at Unknown.dQ(userportal-0.js)
at Unknown.EP(userportal-0.js)
at Unknown.HP(userportal-0.js)
at Unknown.YP(userportal-0.js)
at Unknown.Omd(userportal-0.js)
at Unknown.Und(userportal-28.js)
at Unknown.bmd(userportal-0.js)
at Unknown.mnd(userportal-28.js)
at Unknown.amd(userportal-0.js)
at Unknown.Rqo(userportal-2.js)
at Unknown.dro(userportal-2.js)
at Unknown.yq(userportal-0.js)
at Unknown.Dq(userportal-0.js)
at Unknown.Zq(userportal-0.js)
at Unknown.ar(userportal-0.js)
at Unknown.eval(userportal-0.js)
at Unknown.anonymous(userportal-2.js)
at
Unknown.userportal.__installRunAsyncCode(https://vmengine.x.com/ovirt-eng...
at Unknown.__gwtInstallCode(userportal-0.js)
at Unknown.mr(userportal-0.js)
at Unknown.Kr(userportal-0.js)
at Unknown.eval(userportal-0.js)
at Unknown.Zq(userportal-0.js)
at Unknown.ar(userportal-0.js)
at Unknown.eval(userportal-0.js)
at
Unknown.anonymous(https://vmengine.x.com/ovirt-engine/userportal/deferred...
Suppressed: java.lang.RuntimeException: Adding non
GroupedTabData
at Unknown.kp(userportal-0.js)
at Unknown.up(userportal-0.js)
at Unknown.wp(userportal-0.js)
at Unknown.XLk(userportal-2.js)
at Unknown.w2j(userportal-28.js)
at Unknown.Rnd(userportal-28.js)
at Unknown.znd(userportal-28.js)
at Unknown.Bnd(userportal-28.js)
at Unknown.EP(userportal-0.js)
at Unknown.HP(userportal-0.js)
at Unknown.YP(userportal-0.js)
at Unknown.Omd(userportal-0.js)
at Unknown.Und(userportal-28.js)
at Unknown.bmd(userportal-0.js)
at Unknown.mnd(userportal-28.js)
at Unknown.amd(userportal-0.js)
at Unknown.Rqo(userportal-2.js)
at Unknown.dro(userportal-2.js)
at Unknown.yq(userportal-0.js)
at Unknown.Dq(userportal-0.js)
at Unknown.Zq(userportal-0.js)
at Unknown.ar(userportal-0.js)
at Unknown.eval(userportal-0.js)
at Unknown.anonymous(userportal-2.js)
at
Unknown.userportal.__installRunAsyncCode(https://vmengine.x.com/ovirt-eng...
at Unknown.__gwtInstallCode(userportal-0.js)
at Unknown.mr(userportal-0.js)
at Unknown.Kr(userportal-0.js)
at Unknown.eval(userportal-0.js)
at Unknown.Zq(userportal-0.js)
at Unknown.ar(userportal-0.js)
at Unknown.eval(userportal-0.js)
at
Unknown.anonymous(https://vmengine.x.com/ovirt-engine/userportal/deferred...
Caused by: java.lang.RuntimeException: Adding non GroupedTabData
at Unknown.kp(userportal-0.js)
at Unknown.up(userportal-0.js)
at Unknown.wp(userportal-0.js)
at Unknown.XLk(userportal-2.js)
at Unknown.w2j(userportal-28.js)
at Unknown.Rnd(userportal-28.js)
at Unknown.znd(userportal-28.js)
at Unknown.Bnd(userportal-28.js)
at Unknown.EP(userportal-0.js)
at Unknown.HP(userportal-0.js)
at Unknown.YP(userportal-0.js)
at Unknown.Omd(userportal-0.js)
at Unknown.Und(userportal-28.js)
at Unknown.bmd(userportal-0.js)
at Unknown.mnd(userportal-28.js)
at Unknown.amd(userportal-0.js)
at Unknown.Rqo(userportal-2.js)
at Unknown.dro(userportal-2.js)
at Unknown.yq(userportal-0.js)
at Unknown.Dq(userportal-0.js)
at Unknown.Zq(userportal-0.js)
at Unknown.ar(userportal-0.js)
at Unknown.eval(userportal-0.js)
at Unknown.anonymous(userportal-2.js)
at
Unknown.userportal.__installRunAsyncCode(https://vmengine.x.com/ovirt-eng...
at Unknown.__gwtInstallCode(userportal-0.js)
at Unknown.mr(userportal-0.js)
at Unknown.Kr(userportal-0.js)
at Unknown.eval(userportal-0.js)
at Unknown.Zq(userportal-0.js)
at Unknown.ar(userportal-0.js)
at Unknown.eval(userportal-0.js)
at
Unknown.anonymous(https://vmengine.x.com/ovirt-engine/userportal/deferred...
uGc @ userportal-0.js:12779
userportal-0.js:12779 Mon Jul 03 09:13:51 GMT+100 2017 remote
SEVERE: Uncaught exception
com.google.web.bindery.event.shared.UmbrellaException: 2
exceptions caught: Adding non GroupedTabData; Adding non
GroupedTabData
at Unknown.lp(userportal-0.js)
at Unknown.xp(userportal-0.js)
at Unknown.dQ(userportal-0.js)
at Unknown.EP(userportal-0.js)
at Unknown.HP(userportal-0.js)
at Unknown.YP(userportal-0.js)
at Unknown.Omd(userportal-0.js)
at Unknown.Und(userportal-28.js)
at Unknown.bmd(userportal-0.js)
at Unknown.mnd(userportal-28.js)
at Unknown.amd(userportal-0.js)
at Unknown.Rqo(userportal-2.js)
at Unknown.dro(userportal-2.js)
at Unknown.yq(userportal-0.js)
at Unknown.Dq(userportal-0.js)
at Unknown.Zq(userportal-0.js)
at Unknown.ar(userportal-0.js)
at Unknown.eval(userportal-0.js)
at Unknown.anonymous(userportal-2.js)
at
Unknown.userportal.__installRunAsyncCode(https://vmengine.x.com/ovirt-eng...
at Unknown.__gwtInstallCode(userportal-0.js)
at Unknown.mr(userportal-0.js)
at Unknown.Kr(userportal-0.js)
at Unknown.eval(userportal-0.js)
at Unknown.Zq(userportal-0.js)
at Unknown.ar(userportal-0.js)
at Unknown.eval(userportal-0.js)
at
Unknown.anonymous(https://vmengine.x.com/ovirt-engine/userportal/deferred...
Suppressed: java.lang.RuntimeException: Adding non
GroupedTabData
at Unknown.kp(userportal-0.js)
at Unknown.up(userportal-0.js)
at Unknown.wp(userportal-0.js)
at Unknown.XLk(userportal-2.js)
at Unknown.w2j(userportal-28.js)
at Unknown.Rnd(userportal-28.js)
at Unknown.znd(userportal-28.js)
at Unknown.Bnd(userportal-28.js)
at Unknown.EP(userportal-0.js)
at Unknown.HP(userportal-0.js)
at Unknown.YP(userportal-0.js)
at Unknown.Omd(userportal-0.js)
at Unknown.Und(userportal-28.js)
at Unknown.bmd(userportal-0.js)
at Unknown.mnd(userportal-28.js)
at Unknown.amd(userportal-0.js)
at Unknown.Rqo(userportal-2.js)
at Unknown.dro(userportal-2.js)
at Unknown.yq(userportal-0.js)
at Unknown.Dq(userportal-0.js)
at Unknown.Zq(userportal-0.js)
at Unknown.ar(userportal-0.js)
at Unknown.eval(userportal-0.js)
at Unknown.anonymous(userportal-2.js)
at
Unknown.userportal.__installRunAsyncCode(https://vmengine.x.com/ovirt-eng...
at Unknown.__gwtInstallCode(userportal-0.js)
at Unknown.mr(userportal-0.js)
at Unknown.Kr(userportal-0.js)
at Unknown.eval(userportal-0.js)
at Unknown.Zq(userportal-0.js)
at Unknown.ar(userportal-0.js)
at Unknown.eval(userportal-0.js)
at
Unknown.anonymous(https://vmengine.x.com/ovirt-engine/userportal/deferred...
Caused by: java.lang.RuntimeException: Adding non GroupedTabData
at Unknown.kp(userportal-0.js)
at Unknown.up(userportal-0.js)
at Unknown.wp(userportal-0.js)
at Unknown.XLk(userportal-2.js)
at Unknown.w2j(userportal-28.js)
at Unknown.Rnd(userportal-28.js)
at Unknown.znd(userportal-28.js)
at Unknown.Bnd(userportal-28.js)
at Unknown.EP(userportal-0.js)
at Unknown.HP(userportal-0.js)
at Unknown.YP(userportal-0.js)
at Unknown.Omd(userportal-0.js)
at Unknown.Und(userportal-28.js)
at Unknown.bmd(userportal-0.js)
at Unknown.mnd(userportal-28.js)
at Unknown.amd(userportal-0.js)
at Unknown.Rqo(userportal-2.js)
at Unknown.dro(userportal-2.js)
at Unknown.yq(userportal-0.js)
at Unknown.Dq(userportal-0.js)
at Unknown.Zq(userportal-0.js)
at Unknown.ar(userportal-0.js)
at Unknown.eval(userportal-0.js)
at Unknown.anonymous(userportal-2.js)
at
Unknown.userportal.__installRunAsyncCode(https://vmengine.x.com/ovirt-eng...
at Unknown.__gwtInstallCode(userportal-0.js)
at Unknown.mr(userportal-0.js)
at Unknown.Kr(userportal-0.js)
at Unknown.eval(userportal-0.js)
at Unknown.Zq(userportal-0.js)
at Unknown.ar(userportal-0.js)
at Unknown.eval(userportal-0.js)
at
Unknown.anonymous(https://vmengine.x.com/ovirt-engine/userportal/deferred...
uGc @ userportal-0.js:12779
--
Sophia Valentine System administrator
https://apertron.net
7 years, 10 months
Gluster storage network not being used for Gluster
by Mike DePaulo
Hi,
I configured a "Gluster storage" network, but it doesn't look like it
is being used for Gluster. Specifically, the switch's LEDs are not
blinking, and the hosts' "Total Tx" and "Total Rx" counts are not
changing (and they're tiny, under 1 MB.) The management network must
still be being used.
I have 3 hosts running oVirt Node 4.1.x. I set them up via the gluster
hosted engine. The gluster storage network is 10.0.20.x. These are the
contents of /var/lib/glusterd/peers:
[root@centerpoint peers]# cat 8a83fddd-df7e-4e3b-9fc7-ca1c9bf9deaa
uuid=8a83fddd-df7e-4e3b-9fc7-ca1c9bf9deaa
state=3
hostname1=death-star.ad.depaulo.org
hostname2=death-star
hostname3=192.168.1.52
hostname4=10.0.20.52
[root@centerpoint peers]# cat b6b96427-a0dd-47ff-b3e0-038eb0967fb9
uuid=b6b96427-a0dd-47ff-b3e0-038eb0967fb9
state=3
hostname1=starkiller-base.ad.depaulo.org
hostname2=starkiller-base
hostname3=192.168.1.53
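In case it helps, this is how I've been checking which addresses the bricks and peers actually use (volume name below is a placeholder):
gluster volume info VOLNAME | grep -i brick
gluster volume status VOLNAME
gluster peer status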
Thanks in advance,
-Mike
7 years, 10 months
vdsm changing disk scheduler when starting, configurable?
by Darrell Budic
It seems vdsmd under 4.1.x (or something under its control) changes the disk schedulers when it starts or a host node is activated, and I'd like to avoid this. Is it preventable? Or configurable anywhere? This was probably happening under earlier versions, but I just noticed it while upgrading some converged boxes today.
It likes to set deadline, which I understand is the RHEL default for CentOS 7 on non-SATA disks. But I'd rather have NOOP on my SSDs because they're SSDs, and NOOP on my SATA spinning platters because ZFS does its own scheduling, and running anything other than NOOP can cause increased CPU utilization for no gain. It's also fighting ZFS, which tries to set NOOP on whole disks it controls, as well as my kernel command line setting.
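(For reference, the generic way I know to pin the elevator outside of vdsm/tuned is a udev rule like the sketch below; whether it actually wins against whatever vdsm does at activation is exactly what I'm asking about. Adjust the device match to your disks.)
# /etc/udev/rules.d/60-io-scheduler.rules (example path)
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="noop"
# re-apply on demand after activating the host:
#   udevadm trigger --action=change --subsystem-match=block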
Thanks,
-Darrell
7 years, 10 months
vdsm (4.1) restarts glusterd when activating a node, even if it's already running
by Darrell Budic
Upgrading some nodes today, and noticed that vdsmd restarts glusterd on a node when it activates it. This is causing a short break in healing when the shd gets disconnected, forcing some extra healing when the healing process reports “Transport Endpoint Disconnected” (N/A in the ovirt gui).
This is on a converged cluster (3 nodes, gluster replica volume across all 3, ovirt-engine running elsewhere). CentOS 7 install, just upgraded to oVirt 4.1.2, running gluster 3.10 from the CentOS SIG.
The process I’m observing:
Place a node into maintenance via GUI
Update node from command line
Reboot node (kernel update)
Watch gluster heal itself after reboot
Activate node in GUI
gluster is completely stopped on this node
gluster is started on this node
healing begins again, but isn’t working
“gluster vol heal XXXX info” reports this node’s information not available because “Transport endpoint not connected”.
This clears up in 5-10 minutes, then volume heals normally
Someone with a similar setup want to check this and see if it’s something specific to my nodes, or just a general problem with the way it’s restarting gluster? Looking for a little confirmation before I file a bug report on it.
Or a dev want to comment on why it stops and starts gluster, instead of a restart which would presumably leave the brick processes and shd running and not causing this effect?
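Before filing it, I'll grab the journal around an activation, which should show plainly whether glusterd got a stop/start or a restart, e.g.:
journalctl -u glusterd -u vdsmd --since "30 min ago" --no-pager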
Thanks,
-Darrell
7 years, 10 months
Storage, extend or new lun ?
by Enrico Becchetti Gmail
Dear All,
I have oVirt 3.5.3-1-1.el6 with two kinds of storage, NFS and Fibre
Channel. Each hypervisor
has an HBA controller and the Data Cluster has four storage domains named: DATA_FC,
DATA_NFS, EXPORT
and ISO-DOMAIN.
Does anybody know how I can extend DATA_FC? What is the best practice?
Extend the LUN from the HP MSA controller, or add a new LUN?
If I extend the LUN inside the HP MSA, will oVirt see the new size?
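For the OS side of it (whichever option is chosen in oVirt), growing an existing FC LUN normally needs a SCSI rescan of each path plus a resize of the multipath map on every hypervisor before anything above can see the new size; a generic sketch, device and map names are placeholders:
echo 1 > /sys/block/sdX/device/rescan      # repeat for every sdX path of that LUN
multipathd -k"resize map <map-name>"
multipath -ll                              # should now report the new size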
Enrico
Thanks a lot.
Best Regards
7 years, 10 months
Cannot find master domain
by Iman Darabi
hi.
I moved my host to maintenance mode and upgraded it, then restarted the server.
After restarting, I activated the server, but the data domain is inactive and I
get the following error:
"Cannot find master domain: u'spUUID=9a1584cf-81bb-4ce8-975f-659e643a14b5,
msdUUID=193072f2-9c4a-405c-86a8-c4b036413171'", 'code': 304
The output of "multipath -v2" is as follows:
Jul 01 12:38:34 | sdb: emc prio: path not correctly configured for failover
Jul 01 12:38:34 | sdc: emc prio: path not correctly configured for failover
Jul 01 12:38:34 | 3600508b1001cbeb7dc20cfc250a2e506: ignoring map
Jul 01 12:38:34 | sdb: emc prio: path not correctly configured for failover
Jul 01 12:38:34 | sdc: emc prio: path not correctly configured for failover
Jul 01 12:38:34 | DM message failed [fail_if_no_path]
reject: 3600601604d003a00036edddd268de611 undef DGC ,VNX5400WDVR
size=2.0T features='0' hwhandler='1 emc' wp=undef
|-+- policy='service-time 0' prio=0 status=undef
| |- 12:0:0:16384 sdb 8:16 undef faulty running
| `- 12:0:1:16384 sdc 8:32 undef faulty running
|-+- policy='service-time 0' prio=50 status=undef
| `- 11:0:1:0 sde 8:64 undef ready running
`-+- policy='service-time 0' prio=10 status=undef
`- 11:0:0:0 sdd 8:48 undef ready running
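For what it's worth, the repeated "emc prio: path not correctly configured for failover" lines usually point at the device section for the DGC/VNX array in /etc/multipath.conf; a generic stanza (please verify it against EMC's recommended settings for the VNX5400 and its failover mode, ALUA vs. PNR, before using it) looks something like:
device {
    vendor                 "DGC"
    product                ".*"
    path_grouping_policy   group_by_prio
    path_checker           emc_clariion
    hardware_handler       "1 alua"
    prio                   alua
    failback               immediate
    no_path_retry          60
}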
thanks.
--
R&D expert at Ferdowsi University of Mashhad
https://ir.linkedin.com/in/imandarabi
<https://www.linkedin.com/profile/public-profile-settings?trk=prof-edit-ed...>
7 years, 10 months
Recovering hosted-engine
by Andrew Dent
Hi
A couple of questions about hosted-engine recovery.
Part way through this URL, in the section "Workflow for Restoring the
Self-Hosted Engine Environment"
http://www.ovirt.org/documentation/self-hosted/chap-Backing_up_and_Restoring_an_EL-Based_Self-Hosted_Environment/
it looks like once the hosted-engine is recovered on Host 1, the VMs on
Host 2 and 3 will be running, but not accessible to the recovered Hosted
Engine.
Is that correct?
If so, how do you remove host 2 and host 3 from the environment, then
add back in again while keeping the VMs running?
Host 2 and Host 3 are not recoverable in their current state. These
hosts need to be removed from the environment, and then added again to
the environment using the hosted-engine deployment script. For more
information on these actions, see the Removing Non-Operational Hosts
from a Restored Self-Hosted Engine Environment section below and Chapter
7: Installing Additional Hosts to a Self-Hosted Environment
<http://www.ovirt.org/documentation/self-hosted/chap-Backing_up_and_Restoring_an_EL-Based_Self-Hosted_Environment/chap-Installing_Additional_Hosts_to_a_Self-Hosted_Environment>.
BTW: The link referring to chapter 7 is broken.
Kind regards
Andrew
7 years, 10 months
oVirt Node update not updating kernel, glibc, etc
by Mike DePaulo
Hi,
I installed 3 oVirt 4.1.1 nodes in early May. Today I used oVirt's
web GUI to update ("Upgrade") 2 of them. They are both reporting oVirt
Node 4.1.2 & vdsm-4.19.15-1.el7.centos now.
However, it appears that the kernel was not updated:
[root@death-star peers]# uname -a
Linux death-star 3.10.0-514.16.1.el7.x86_64 #1 SMP Wed Apr 12 15:04:24
UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
And neither was glibc. The installed version is still 2.17-157.el7_3.1
rather than 2.17-157.el7_3.4.
The following lines were appended to /var/log/yum.log:
Jul 01 18:47:49 Installed: ovirt-node-ng-image-4.1.2-1.el7.centos.noarch
Jul 01 18:50:27 Installed: ovirt-node-ng-image-update-4.1.2-1.el7.centos.noarch
Jul 01 18:50:27 Erased:
ovirt-node-ng-image-update-placeholder-4.1.1.1-1.el7.centos.noarch
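(If I understand the oVirt Node update model right, the newer kernel/glibc ship inside the 4.1.2 image layer and only take effect once the host is booted into that layer; this is how I'd check which layer is actually running, assuming I have the tooling right:)
nodectl info
imgbase layout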
Thanks in advance,
-Mike
7 years, 10 months
Re: [ovirt-users] [ovirt-devel] problem about vm in ovirt node can ping node, but cannot ping gateway or the third host in LAN
by Edward Haas
If I understood correctly, you are using a nested solution where the node
is itself a VM.
If this is the case, you may have a security feature active on the physical
host/node.
In oVirt we have the default filter which allows only a single source mac
to exit the vNIC, but you disabled it.
I guess you should have something similar on the upper node.
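One quick way to narrow it down is to capture on the node's physical uplink as well as on vnet0: if the ARP request leaves eth0 but the reply never comes back in, the drop is on the outer host/vSwitch (on ESXi that typically means the port group needs promiscuous mode and forged transmits allowed for nested bridging). Generic tcpdump, adjust interface names:
tcpdump -n -i eth0 arp
tcpdump -n -i vnet0 arp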
Please note that such topics fit the user list, not devel.
Thanks,
Edy.
On Fri, Jun 30, 2017 at 1:46 PM, pengyixiang <yxpengi386(a)163.com> wrote:
> hi, everyone
>
> I am new to oVirt, and I have ported oVirt node to a Debian
> Jessie (customized) system; it works well now. Then we create a VM using the default
> nic1, like this:
>
> in the vm, we ping the node, it's ok; and in the node, we ping the vm, it's ok too;
> then in the node, we ping the gateway or another computer in the LAN, it's ok too;
>
> but when in the vm, we ping the gateway or another computer in the LAN, it's not ok;
> why is this?
>
>
> >>> next is some test information:
>
> 1) after vm up, vnet0 is created, and use :
>
> # brctl show ovirtmgmt
>
> we get this:
>
> bridge name bridge id STP enabled interfaces
> ovirtmgmt 8000.000c29e3037e no eth0
>
> vnet0
>
>
> 2) then we test the network on the three hosts at the same time
>
> in vm
>
> # tcpdump -i eth0 arp
>
> vm send request
>
> vm not recv reply
>
> # ping hostA
>
>
> in node(vm in)
>
> # tcpdump -i vnet0 arp
>
> vnet0 send request
>
> vnet0 not recv reply
>
>
> in third host in the lan(hostA)
>
> # tcpdump -i eth0 arp
>
> eth0 recv request
>
> eth0 send reply
>
>
> problem like this:
>
>
> why is this? did I miss some configuration?
>
>
> ovirtmgmt configuration:
>
>
> vnet info:(node installed in ESXI)
>
>
>
>
> Look forward to your reply
>
>
> Have a nice day
>
> --
>
>
> _______________________________________________
> Devel mailing list
> Devel(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
7 years, 10 months