Network Address Change
by Paul.LKW
Hi All,
I just have a case where I need to change the oVirt host and engine IP addresses
due to a data center decommission. I checked on the hosted-engine host and
there are some files I could change;
in ovirt-hosted-engine/hosted-engine.conf:
ca_subject="O=simple.com, CN=1.2.3.4"
gateway=1.2.3.254
and of course I need to change the ovirtmgmt interface IP too. I think
just changing the above lines could do the trick, but where could I change
the other hosts' IPs in the cluster?
I think I will lose all the hosts once the hosted-engine host's IP is
changed, as it is in a different subnet.
Are there any command-line tools that could do that, or can someone with such
experience share?
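A minimal sketch of the kind of edits being described, assuming the stock hosted-engine paths; the maintenance and restart steps are an assumption, not a verified re-addressing procedure:
    # on the hosted-engine host, with the engine in global maintenance
    hosted-engine --set-maintenance --mode=global
    # update the addresses referenced in the config
    vi /etc/ovirt-hosted-engine/hosted-engine.conf    # gateway=..., ca_subject=...
    # re-address the management bridge (the ifcfg file name is an assumption)
    vi /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
    systemctl restart ovirt-ha-agent ovirt-ha-broker
    hosted-engine --set-maintenance --mode=none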
Best Regards,
Paul.LKW
2 years, 3 months
OVS switch type for hosted-engine
by Devin A. Bougie
Is it possible to set up a hosted engine using the OVS switch type instead of Legacy? If it's not possible to start out with OVS, instructions for switching from Legacy to OVS after the fact would be greatly appreciated.
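For reference, the cluster switch type is exposed through the REST API, so a hedged sketch of flipping an existing cluster to OVS might look like the following (engine URL, credentials, and CLUSTER_ID are placeholders; whether a hosted-engine cluster survives the change is exactly the open question here):
    $ curl -k -u admin@internal:password \
          -H "Content-Type: application/xml" \
          -X PUT -d '<cluster><switch_type>ovs</switch_type></cluster>' \
          https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_ID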
Many thanks,
Devin
2 years, 11 months
OVN routing and firewalling in oVirt
by Gianluca Cecchi
Hello,
how do we manage routing between different OVN networks in oVirt?
And between OVN networks and physical ones?
Based on the architecture described here:
http://openvswitch.org/support/dist-docs/ovn-architecture.7.html
I see the terms "logical routers" and "gateway routers" respectively, but how do
they apply to an oVirt configuration?
Do I have to choose between setting up a specialized VM or a physical one:
is it applicable/advisable to put the gateway functionality on the oVirt
host itself?
Is there any security policy (like security groups in OpenStack) to
implement?
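For the first question, a minimal sketch with plain OVN tooling, assuming two existing logical switches net1 and net2 (names, MACs, and subnets are illustrative; how much of this oVirt's OVN provider drives for you is part of the question):
    # on the host running ovn-northd
    ovn-nbctl lr-add router0
    ovn-nbctl lrp-add router0 rp-net1 00:00:00:00:01:01 192.168.1.1/24
    ovn-nbctl lrp-add router0 rp-net2 00:00:00:00:02:01 192.168.2.1/24
    # patch net1 into the router; repeat these four commands for net2/rp-net2
    ovn-nbctl lsp-add net1 net1-router0
    ovn-nbctl lsp-set-type net1-router0 router
    ovn-nbctl lsp-set-addresses net1-router0 router
    ovn-nbctl lsp-set-options net1-router0 router-port=rp-net1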
Thanks,
Gianluca
3 years
deprecating export domain?
by Charles Kozler
Hello,
I recently read on this list from a Red Hat member that the export domain is
either deprecated or being considered for deprecation.
To that end, can you share details? Can you share any notes/postings/BZs
that document this? I would imagine something like this would be discussed
with a larger audience.
This seems like a fairly significant change to make, and I am curious
when this is scheduled. Currently, a lot of my backups rely explicitly on
an export domain for online snapshots, so I'd like to plan accordingly.
Thanks!
4 years, 7 months
Vm suddenly paused with error "vm has paused due to unknown storage error"
by Jasper Siero
Hi all,
Since we upgraded our oVirt nodes to CentOS 7, a VM (not a specific one, but never more than one at a time) will sometimes pause suddenly with the error "VM ... has paused due to unknown storage error". It has now happened twice in a month.
The oVirt node uses SAN storage for the VMs running on it. When a specific VM pauses with this error, the other VMs keep running without problems.
The VM runs without problems after unpausing it.
Versions:
CentOS Linux release 7.1.1503
vdsm-4.14.17-0
libvirt-daemon-1.2.8-16
vdsm.log:
VM Channels Listener::DEBUG::2015-10-25 07:43:54,382::vmChannels::95::vds::(_handle_timeouts) Timeout on fileno 78.
libvirtEventLoop::INFO::2015-10-25 07:43:56,177::vm::4602::vm.Vm::(_onIOError) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::abnormal vm stop device virtio-disk0 error eother
libvirtEventLoop::DEBUG::2015-10-25 07:43:56,178::vm::5204::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::event Suspended detail 2 opaque None
libvirtEventLoop::INFO::2015-10-25 07:43:56,178::vm::4602::vm.Vm::(_onIOError) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::abnormal vm stop device virtio-disk0 error eother
...........
libvirtEventLoop::INFO::2015-10-25 07:43:56,180::vm::4602::vm.Vm::(_onIOError) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::abnormal vm stop device virtio-disk0 error eother
specific error part in libvirt vm log:
block I/O error in device 'drive-virtio-disk0': Unknown error 32758 (32758)
...........
block I/O error in device 'drive-virtio-disk0': Unknown error 32758 (32758)
engine.log:
2015-10-25 07:44:48,945 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-40) [a43dcc8] VM diataal-prod-cas1 77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb moved from
Up --> Paused
2015-10-25 07:44:49,003 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-40) [a43dcc8] Correlation ID: null, Call Stack: null, Custom Event
ID: -1, Message: VM diataal-prod-cas1 has paused due to unknown storage error.
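A hedged diagnostic starting point on the affected node (the multipath check assumes the SAN paths flapped, which is only a guess at the cause):
    # correlate the pause timestamps with I/O errors and path state
    grep 'abnormal vm stop' /var/log/vdsm/vdsm.log
    multipath -ll    # look for failed/faulty paths around 07:43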
Has anyone experienced the same problem or knows a way to solve this?
Kind regards,
Jasper
5 years, 2 months
Unable to find OVF_STORE after recovery / upgrade
by Sam Cappello
Hi,
I was running a 3.4 hosted-engine two-node setup on CentOS 6, had
some disk issues, so I tried to upgrade to CentOS 7 and follow the path
3.4 > 3.5 > 3.6 > 4.0. I screwed up big time somewhere between 3.6 and
4.0, so I wiped the drives, installed a fresh 4.0.3, then created the
database and restored the 3.6 engine backup before running engine-setup,
as per the docs. Things seemed to work, but I have the following
issues / symptoms:
- ovirt-ha-agent running at 100% CPU on both nodes
- messages in the UI that the Hosted Engine storage domain isn't active,
and "Failed to import the Hosted Engine Storage Domain"
- the hosted engine is not visible in the UI
and the following repeating in agent.log:
MainThread::INFO::2016-10-03
12:38:27,718::hosted_engine::461::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 3400)
MainThread::INFO::2016-10-03
12:38:27,720::hosted_engine::466::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host vmhost1.oracool.net (id: 1, score: 3400)
MainThread::INFO::2016-10-03
12:38:37,979::states::421::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm running on localhost
MainThread::INFO::2016-10-03
12:38:37,985::hosted_engine::612::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
Initializing VDSM
MainThread::INFO::2016-10-03
12:38:45,645::hosted_engine::639::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
Connecting the storage
MainThread::INFO::2016-10-03
12:38:45,647::storage_server::219::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
Connecting storage server
MainThread::INFO::2016-10-03
12:39:00,543::storage_server::226::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
Connecting storage server
MainThread::INFO::2016-10-03
12:39:00,562::storage_server::233::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
Refreshing the storage domain
MainThread::INFO::2016-10-03
12:39:01,235::hosted_engine::666::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
Preparing images
MainThread::INFO::2016-10-03
12:39:01,236::image::126::ovirt_hosted_engine_ha.lib.image.Image::(prepare_images)
Preparing images
MainThread::INFO::2016-10-03
12:39:09,295::hosted_engine::669::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
Reloading vm.conf from the shared storage domain
MainThread::INFO::2016-10-03
12:39:09,296::config::206::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
Trying to get a fresher copy of vm configuration from the OVF_STORE
MainThread::WARNING::2016-10-03
12:39:16,928::ovf_store::107::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
Unable to find OVF_STORE
MainThread::ERROR::2016-10-03
12:39:16,934::config::235::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
Unable to get vm.conf from OVF_STORE, falling back to initial vm.conf
I have searched a bit and not really found a solution, and have come to
the conclusion that I have made a mess of things. I am wondering if
the best solution is to export the VMs, reinstall everything, then
import them back?
I am using remote NFS storage.
If I try to add the hosted-engine storage domain, it says it is already
registered.
I have also upgraded and am now running oVirt Engine version
4.0.4.4-1.el7.centos.
Hosts were installed using ovirt-node, currently at kernel
3.10.0-327.28.3.el7.x86_64.
If a fresh install is best, any advice / pointers to docs that explain the
best way to do this?
I have not moved my most important server over to this cluster yet, so I
can take some downtime to reinstall.
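As a hedged first check rather than a fix (both commands exist on hosted-engine hosts; whether restarting the agents clears the stale OVF_STORE view here is an assumption):
    # confirm what the HA agents currently see
    hosted-engine --vm-status
    # restart the agents after storage/config changes
    systemctl restart ovirt-ha-broker ovirt-ha-agent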
thanks!
sam
5 years, 11 months
[Users] Problem Creating "oVirtEngine" Machine
by Richie@HIP
I can't agree with you more. Modifying every box's or virtual machine's
HOSTS file with a FQDN and IP SHOULD work, but in my case it does not.
There are several reasons I've come to believe could be the problem,
from my trial-and-error testing and learning.

FIRST - MACHINE IPs.

- The machines' "Names" were not appearing in the Microsoft Active
Directory DHCP along with their assigned IPs; in other words, the DHCP
just showed an "Assigned IP", equal to the Linux machine's IP, with an
<empty> (i.e. blank, none, zilch, plain old "no-letters-or-numbers")
"Name" in the "Name" column (i.e. the machine's "network name", or the
FQDN value used by the Windows AD DNS service).
- If your IP is appearing with an <empty> "Name", there is no "host
name" to associate with the IP, which makes it difficult to define a FQDN;
that isn't that useful if we're going to use the HOSTS files on all
participating machines in an oVirt installation.
- I kept banging my head for three (3) long hours trying to find the
problem.
  - In Fedora 18, I couldn't find where the "network name" of the machine
  could be defined.
  - I tried putting the "Additional Search Domains" and/or "DHCP Client ID"
  in the Fedora 18 Desktop, under "System Settings > Hardware > Network >
  Options > IPv4 Settings".
    - The DHCP went crazy, showing an aberrant MAC address (i.e. a really
    long string value where the machine's MAC address should be), and we knew
    the MAC address, as we had obtained it using "ifconfig" on the machine
    getting its IP from the DHCP. So we reverted these entries, rebooted,
    and got an assigned IP with the proper MAC address, but still no "Name".
  - I kept wandering around the "Settings", seeing which one made sense,
  and, what the heck, I went for it.
    - Under "System Settings > System > Details" I found the information about
    GNOME and the machine's hardware.
    - There was a field for "Device Name" that originally had
    "localhost.localdomain"; I changed the value to "ovirtmanager", and
    under "Graphics" changed "Forced Fallback Mode" to "ON".
    - I also installed all the Kerberos libraries and clients (i.e. authconfig-gtk,
    authhub, authhub-client, krb5-apple-clents, krb5-auth-dialog,
    krb5-workstation, pam-kcoda, pam-krb5, root-net.krb5) and rebooted.
    - VOILA…!!!
  - I don't know if it was the change of "Device Name" from
  "localhost.localdomain" to "ovirtengine", the Kerberos libraries
  install, or both. But finally the MS AD DHCP was showing the
  assigned IP, the machine "Name", and the proper MAC address. Regardless,
  setting the machine's "network name" under "System Settings > System >
  Details > Device Name", with no explanation of what "Device Name" meant
  or was used for, was the last place I would have imagined this network
  setting could be defined.
  - NOTE - Somebody has to try the two steps I did together separately, to
  see which one is the real problem-solver; for me it is working, and "if
  it ain't broke, don't fix it…"

Now that I have the DHCP / IP thing sorted, I have to do the DNS stuff.

To this point, I've addressed the DHCP and the "network name" of the
IP lease (required for the DNS to work). This still doesn't completely
explain why modifying the HOSTS file (allowing me to set an IP and a
non-DNS FQDN) lets me install the oVirtEngine "as long as I do not
use the default HTTPd service parameters as suggested by the install". By
using the HOSTS file to "define" FQDNs, AND NOT using the default HTTPd
suggested changes, I'm able to install the oVirtEngine (given that I use
ports 8700 and 8701) and access the "oVirtEngine Welcome Screen", BUT
NONE of the "oVirt Portals" work… YET…!!!

More to come during the week

Richie

José E ("Richie") Piovanetti, MD, MS
M: 787-615-4884 | richiepiovanetti(a)healthcareinfopartners.com
On Aug 2, 2013, at 3:10 AM, Joop <jvdwege(a)xs4all.nl> wrote:
> Hello Ritchie,
>
>> In a conversation via IRC, someone suggested that I activate "dnsmasq"
>> to overcome what appears to be a DNS problem. I'll try that other
>> possibility once I get home later today.
>>
>> In the meantime, what do you mean by "fixing the hostname"…? I opened
>> and fixed the HOSTNAMES and changed it from "localhost-localdomain" to
>> "localhost.localdomain" and that made no difference. Albeit, after
>> changing I didn't restart; I removed ovirtEngine (using "engine-cleanup")
>> and reinstalled via "engine-setup". Is that what you mean…?
>>
>> In the meantime, the fact that even if I resolve the issue of
>> oVirtEngine I will not be able to connect to the oVirt Nodes unless I
>> have DNS resolution apparently means I should do something about
>> resolving via DNS in my home LAN (i.e. implement some sort of "DNS cache"
>> so I can resolve my home computers via DNS inside my LAN).
>>
>> Any suggestions are MORE THAN WELCOME…!!!
>
> Having set up ovirt more times than I can count, right now I share your
> feeling that it isn't always clear why things are going wrong, but in
> this case I suspect that there is a rather small thing missing.
> In short: if you set up ovirt-engine, either using virtualbox or on real
> hardware, and you give your host a meaningful name AND you also add that
> info to your /etc/hosts file, then things SHOULD work; no need for
> dnsmasq or even bind. That would make things easier once you start adding
> virt hosts to your infrastructure, since you will need to duplicate these
> actions on each host (add the engine name/ip to each host, add each host
> to the others, and all hosts to the engine).
>
> Just ask if you need more assistance and I will write down a small
> howto that should work out of the box; otherwise I might have some time to
> see if I can get things going.
>
> Regards,
>
> Joop
5 years, 11 months
Re: [ovirt-users] Question about the ovirt-engine-sdk-java
by Michael Pasternak
Hi Salifou,
Actually, the java sdk is intentionally hiding transport-level internals
so that developers can stay in the java domain. If your headers are static,
the easiest way would be to use a reverse proxy in the middle to intercept
the requests.
Can you tell me why you need this?
On Friday, October 16, 2015 1:14 AM, Salifou Sidi M. Malick
<ssidimah(a)redhat.com> wrote:
Hi Michael,
I have a question about the ovirt-engine-sdk-java.
Is there a way to add custom request headers to each RHEVM API call?
Here is an example of a request that I would like to do:
$ curl -v -k \
      -H "ID: user1(a)ad.xyz.com" \
      -H "PASSWORD: Pwssd" \
      -H "TARGET: kobe" \
      https://vm0.smalick.com/api/hosts
I would like to add ID, PASSWORD and TARGET as HTTP request headers.
Thanks,
Salifou
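Since the java sdk does not expose per-request headers, Michael's reverse-proxy suggestion could be sketched like this with nginx (server names, port, and TLS files are placeholders; this is an illustration, not a tested config):
    cat > /etc/nginx/conf.d/rhevm-proxy.conf <<'EOF'
    server {
        listen 8443 ssl;
        ssl_certificate     /etc/nginx/proxy.crt;
        ssl_certificate_key /etc/nginx/proxy.key;
        location / {
            # inject the static headers, then forward to the real API
            proxy_set_header ID       "user1@ad.xyz.com";
            proxy_set_header PASSWORD "Pwssd";
            proxy_set_header TARGET   "kobe";
            proxy_pass https://vm0.smalick.com;
        }
    }
    EOF
    # then point the sdk at the proxy instead of the engine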
5 years, 11 months
[Users] oVirt Weekly Sync Meeting Minutes -- 2012-05-23
by Mike Burns
Minutes: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-05-23-14.00.html
Minutes (text): http://ovirt.org/meetings/ovirt/2012/ovirt.2012-05-23-14.00.txt
Log: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-05-23-14.00.log.html
=========================
#ovirt: oVirt Weekly Sync
=========================
Meeting started by mburns at 14:00:23 UTC. The full logs are available
at http://ovirt.org/meetings/ovirt/2012/ovirt.2012-05-23-14.00.log.html
.
Meeting summary
---------------
* agenda and roll call (mburns, 14:00:41)
* Status of next release (mburns, 14:05:17)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=822145 (mburns,
14:05:29)
* AGREED: freeze date and beta release delayed by 1 week to 2012-06-07
(mburns, 14:12:33)
* post freeze, release notes flag needs to be used where required
(mburns, 14:14:21)
* https://bugzilla.redhat.com/show_bug.cgi?id=821867 is a VDSM blocker
for 3.1 (oschreib, 14:17:27)
* ACTION: dougsland to fix upstream vdsm right now, and open a bug on
libvirt augeas (oschreib, 14:21:44)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=822158 (mburns,
14:23:39)
* assignee not available, update to come tomorrow (mburns, 14:24:59)
* ACTION: oschreib to make sure BZ#822158 is handled quickly
(oschreib, 14:25:29)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=824397 (mburns,
14:28:55)
* 824397 expected to be merged prior next week's meeting (mburns,
14:29:45)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=824420 (mburns,
14:30:15)
* tracker for node based on F17 (mburns, 14:30:28)
* blocked by util-linux bug currently (mburns, 14:30:40)
* new build expected from util-linux maintainer in next couple days
(mburns, 14:30:55)
* sub-project status -- engine (mburns, 14:32:49)
* nothing to report outside of blockers discussed above (mburns,
14:34:00)
* sub-project status -- vdsm (mburns, 14:34:09)
* nothing outside of blockers above (mburns, 14:35:36)
* sub-project status -- node (mburns, 14:35:43)
* working on f17 migration, but blocked by util-linux bug (mburns,
14:35:58)
* should be ready for freeze deadline (mburns, 14:36:23)
* Review decision on Java 7 and Fedora jboss rpms in oVirt Engine
(mburns, 14:36:43)
* Java7 basically working (mburns, 14:37:19)
* LINK: http://gerrit.ovirt.org/#change,4416 (oschreib, 14:39:35)
* engine will make ack/nack statement next week (mburns, 14:39:49)
* fedora jboss rpms patch is in review, short tests passed (mburns,
14:40:04)
* engine ack on fedora jboss rpms and java7 needed next week (mburns,
14:44:47)
* Upcoming Workshops (mburns, 14:45:11)
* NetApp workshop set for Jan 22-24 2013 (mburns, 14:47:16)
* already at half capacity for Workshop at LinuxCon Japan (mburns,
14:47:37)
* please continue to promote it (mburns, 14:48:19)
* proposal: board meeting to be held at all major workshops (mburns,
14:48:43)
* LINK: http://www.ovirt.org/wiki/OVirt_Global_Workshops (mburns,
14:49:30)
* Open Discussion (mburns, 14:50:12)
* oVirt/Quantum integration discussion will be held separately
(mburns, 14:50:43)
Meeting ended at 14:52:47 UTC.
Action Items
------------
* dougsland to fix upstream vdsm right now, and open a bug on libvirt
augeas
* oschreib to make sure BZ#822158 is handled quickly
Action Items, by person
-----------------------
* dougsland
* dougsland to fix upstream vdsm right now, and open a bug on libvirt
augeas
* oschreib
* oschreib to make sure BZ#822158 is handled quickly
* **UNASSIGNED**
* (none)
People Present (lines said)
---------------------------
* mburns (98)
* oschreib (55)
* doronf (12)
* lh (11)
* sgordon (8)
* dougsland (8)
* ovirtbot (6)
* ofrenkel (4)
* cestila (2)
* RobertMdroid (2)
* ydary (2)
* rickyh (1)
* yzaslavs (1)
* cctrieloff (1)
* mestery_ (1)
* dustins (1)
Generated by `MeetBot`_ 0.1.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot
5 years, 11 months
[QE][ACTION REQUIRED] oVirt 3.5.1 RC status - postponed
by Sandro Bonazzola
Hi,
We still have blockers for the oVirt 3.5.1 RC release, so we need to postpone it until they're fixed.
The bug tracker [1] shows 1 open blocker:
Bug ID Whiteboard Status Summary
1160846 sla NEW Can't add disk to VM without specifying disk profile when the storage domain has more than one disk profile
In order to stabilize the release a new branch ovirt-engine-3.5.1 will be created from the same git hash used for composing the RC.
- ACTION: Gilad, please provide an ETA on the above blocker; the new proposed RC date will be decided based on the given ETA.
Maintainers:
- Please be sure that the 3.5 snapshot allows creating VMs
- Please be sure that no pending patches are going to block the release
- If any patch must block the RC release please raise the issue as soon as possible.
There are still 57 bugs [2] targeted to 3.5.1.
Excluding node and documentation bugs we still have 37 bugs [3] targeted to 3.5.1.
Maintainers / Assignee:
- Please add the bugs to the tracker if you think that 3.5.1 should not be released without them fixed.
- ACTION: Please update the target to 3.5.2 or later for bugs that won't be in 3.5.1:
it will ease gathering the blocking bugs for next releases.
- ACTION: Please fill release notes, the page has been created here [4]
Community:
- If you're testing oVirt 3.5 nightly snapshot, please add yourself to the test page [5]
[1] http://bugzilla.redhat.com/1155170
[2] http://goo.gl/7G0PDV
[3] http://goo.gl/6gUbVr
[4] http://www.ovirt.org/OVirt_3.5.1_Release_Notes
[5] http://www.ovirt.org/Testing/oVirt_3.5.1_Testing
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
5 years, 11 months
VM has paused due to no storage space error
by Sandvik Agustin
Hi users,
I have this problem where 1 to 3 VMs sometimes just pause automatically,
without any user interaction, with the error "VM has paused due to no storage
space error". Any input from you guys is very appreciated.
TIA
Sandvik
5 years, 11 months
VM Import from remote libvirt Server on web gui with Host key verification failed or permission denied error
by Rogério Ceni Coelho
Hi Ovirt Jedi´s !!!
First of all, congratulations for this amazing product. I am an Vmware and
Hyper-V Engineer but i am very excited with oVirt.
Now, let´s go to work ... :-)
I am trying to Import vm from remote libvirt server on web gui but i was
unable to solve the problem until now.
[image: pasted1]
Node logs :
[root@hlg-rbs-ovirt-kvm01-poa ~]# tail -f /var/log/vdsm/vdsm.log | grep -i
error | grep -v Host.getStats
jsonrpc.Executor/0::ERROR::2016-10-06
10:24:52,432::v2v::151::root::(get_external_vms) error connection to
hypervisor: 'Cannot recv data: Permission denied, please try
again.\r\nPermission denied, please try again.\r\nPermission denied
(publickey,gssapi-keyex,gssapi-with-mic,password).: Connection reset by
peer'
jsonrpc.Executor/0::INFO::2016-10-06
10:24:52,433::__init__::513::jsonrpc.JsonRpcServer::(_serveRequest) RPC
call Host.getExternalVMs failed (error 65) in 10.05 seconds
[root@hlg-rbs-ovirt-kvm02-poa ~]# grep error /var/log/vdsm/vdsm.log | grep
get_external_vms
jsonrpc.Executor/7::ERROR::2016-10-06
10:25:37,344::v2v::151::root::(get_external_vms) error connection to
hypervisor: 'Cannot recv data: Host key verification failed.: Connection
reset by peer'
[root@hlg-rbs-ovirt-kvm02-poa ~]#
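Both failures above are the vdsm user's SSH session being rejected by the remote libvirt host (password auth denied on one node, an unaccepted host key on the other). A hedged sketch of the usual preparation, run on each oVirt node (assuming key-based root login on the KVM host is acceptable):
    # vdsm opens the qemu+ssh connection as the vdsm user
    sudo -u vdsm ssh-keygen
    sudo -u vdsm ssh-copy-id root@prd-openshift-kvm03-poa.rbs.com.br
    # connect once so the remote host key is accepted
    sudo -u vdsm ssh root@prd-openshift-kvm03-poa.rbs.com.br 'exit'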
Engine Logs :
[root@hlg-rbs-ovirt01-poa ~]# tail -f /var/log/ovirt-engine/engine.log
2016-10-06 10:24:42,377 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-97) [] START, GetVmsFromExternalProviderVDSCommand(HostName =
hlg-rbs-ovirt-kvm01-poa.rbs.com.br,
GetVmsFromExternalProviderParameters:{runAsync='true',
hostId='5feddfba-d7b2-423e-a946-ac2bf36906fa', url='qemu+ssh://
root(a)prd-openshift-kvm03-poa.rbs.com.br/system', username='root',
originType='KVM'}), log id: eb750c7
*2016-10-06 10:24:53,435 WARN
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-97) [] Unexpected return value: StatusForXmlRpc [code=65,
message=Cannot recv data: Permission denied, please try again.*
*Permission denied, please try again.*
*Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).:
Connection reset by peer]*
2016-10-06 10:24:53,435 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-97) [] Failed in 'GetVmsFromExternalProviderVDS' method
2016-10-06 10:24:53,435 WARN
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-97) [] Unexpected return value: StatusForXmlRpc [code=65,
message=Cannot recv data: Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).:
Connection reset by peer]
2016-10-06 10:24:53,454 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-97) [] Correlation ID: null, Call Stack: null, Custom Event
ID: -1, Message: VDSM hlg-rbs-ovirt-kvm01-poa.rbs.com.br command failed:
Cannot recv data: Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).:
Connection reset by peer
2016-10-06 10:24:53,454 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-97) [] Command
'org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand'
return value
'org.ovirt.engine.core.vdsbroker.vdsbroker.VMListReturnForXmlRpc@6c6f696c'
2016-10-06 10:24:53,454 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-97) [] HostName = hlg-rbs-ovirt-kvm01-poa.rbs.com.br
2016-10-06 10:24:53,454 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-97) [] Command 'GetVmsFromExternalProviderVDSCommand(HostName
= hlg-rbs-ovirt-kvm01-poa.rbs.com.br,
GetVmsFromExternalProviderParameters:{runAsync='true',
hostId='5feddfba-d7b2-423e-a946-ac2bf36906fa', url='qemu+ssh://
root(a)prd-openshift-kvm03-poa.rbs.com.br/system', username='root',
originType='KVM'})' execution failed: VDSGenericException:
VDSErrorException: Failed to GetVmsFromExternalProviderVDS, error = Cannot
recv data: Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).:
Connection reset by peer, code = 65
2016-10-06 10:24:53,454 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-97) [] FINISH, GetVmsFromExternalProviderVDSCommand, log id:
eb750c7
2016-10-06 10:24:53,459 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-97) [] Correlation ID: null, Call Stack: null, Custom Event
ID: -1, Message: Failed to retrieve VMs information from external server
qemu+ssh://root@prd-openshift-kvm03-poa.rbs.com.br/system
2016-10-06 10:24:53,459 ERROR
[org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuery] (default
task-97) [] Query 'GetVmsFromExternalProviderQuery' failed:
EngineException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to
GetVmsFromExternalProviderVDS, error = Cannot recv data: Permission denied,
please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).:
Connection reset by peer, code = 65 (Failed with error unexpected and code
16)
2016-10-06 10:24:53,460 ERROR
[org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuery] (default
task-97) [] Exception: org.ovirt.engine.core.common.errors.EngineException:
EngineException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to
GetVmsFromExternalProviderVDS, error = Cannot recv data: Permission denied,
please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).:
Connection reset by peer, code = 65 (Failed with error unexpected and code
16)
at
org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:114)
[bll.jar:]
at
org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.runVdsCommand(VDSBrokerFrontendImpl.java:33)
[bll.jar:]
at
org.ovirt.engine.core.bll.QueriesCommandBase.runVdsCommand(QueriesCommandBase.java:257)
[bll.jar:]
at
org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuery.getVmsFromExternalProvider(GetVmsFromExternalProviderQuery.java:32)
[bll.jar:]
at
org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuery.executeQueryCommand(GetVmsFromExternalProviderQuery.java:27)
[bll.jar:]
at
org.ovirt.engine.core.bll.QueriesCommandBase.executeCommand(QueriesCommandBase.java:103)
[bll.jar:]
at
org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33)
[dal.jar:]
at org.ovirt.engine.core.bll.Backend.runQueryImpl(Backend.java:558)
[bll.jar:]
at org.ovirt.engine.core.bll.Backend.runQuery(Backend.java:529)
[bll.jar:]
at sun.reflect.GeneratedMethodAccessor181.invoke(Unknown Source)
[:1.8.0_102]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_102]
at java.lang.reflect.Method.invoke(Method.java:498)
[rt.jar:1.8.0_102]
at
org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437)
at
org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.delegateInterception(Jsr299BindingsInterceptor.java:70)
[wildfly-weld-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.doMethodInterception(Jsr299BindingsInterceptor.java:80)
[wildfly-weld-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:93)
[wildfly-weld-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437)
at
org.ovirt.engine.core.bll.interceptors.CorrelationIdTrackerInterceptor.aroundInvoke(CorrelationIdTrackerInterceptor.java:13)
[bll.jar:]
at sun.reflect.GeneratedMethodAccessor179.invoke(Unknown Source)
[:1.8.0_102]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_102]
at java.lang.reflect.Method.invoke(Method.java:498)
[rt.jar:1.8.0_102]
at
org.jboss.as.ee.component.ManagedReferenceLifecycleMethodInterceptor.processInvocation(ManagedReferenceLifecycleMethodInterceptor.java:89)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.invocationmetrics.ExecutionTimeInterceptor.processInvocation(ExecutionTimeInterceptor.java:43)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437)
at
org.jboss.weld.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:73)
[weld-core-impl-2.3.2.Final.jar:2.3.2.Final]
at
org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:83)
[wildfly-weld-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(ConcurrentContextInterceptor.java:45)
[wildfly-ee-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InitialInterceptor.processInvocation(InitialInterceptor.java:21)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
at
org.jboss.as.ee.component.interceptors.ComponentDispatcherInterceptor.processInvocation(ComponentDispatcherInterceptor.java:52)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.singleton.SingletonComponentInstanceAssociationInterceptor.processInvocation(SingletonComponentInstanceAssociationInterceptor.java:53)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:263)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:374)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:243)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.invocationmetrics.WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:43)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.security.SecurityContextInterceptor.processInvocation(SecurityContextInterceptor.java:100)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.interceptors.ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:66)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:64)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:356)
at
org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:636)
at
org.jboss.invocation.AccessCheckingInterceptor.processInvocation(AccessCheckingInterceptor.java:61)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:356)
at
org.jboss.invocation.PrivilegedWithCombinerInterceptor.processInvocation(PrivilegedWithCombinerInterceptor.java:80)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
at
org.jboss.as.ee.component.ViewService$View.invoke(ViewService.java:195)
at
org.jboss.as.ee.component.ViewDescription$1.processInvocation(ViewDescription.java:185)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
at
org.jboss.as.ee.component.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:73)
at
org.ovirt.engine.core.common.interfaces.BackendLocal$$$view3.runQuery(Unknown
Source) [common.jar:]
at
org.ovirt.engine.ui.frontend.server.gwt.GenericApiGWTServiceImpl.runQuery(GenericApiGWTServiceImpl.java:53)
at sun.reflect.GeneratedMethodAccessor222.invoke(Unknown Source)
[:1.8.0_102]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_102]
at java.lang.reflect.Method.invoke(Method.java:498)
[rt.jar:1.8.0_102]
at
com.google.gwt.rpc.server.RPC.invokeAndStreamResponse(RPC.java:196)
at
com.google.gwt.rpc.server.RpcServlet.processCall(RpcServlet.java:172)
at
com.google.gwt.rpc.server.RpcServlet.processPost(RpcServlet.java:233)
at
com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
[jboss-servlet-api_3.1_spec-1.0.0.Final.jar:1.0.0.Final]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
[jboss-servlet-api_3.1_spec-1.0.0.Final.jar:1.0.0.Final]
at
io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:85)
at
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:129)
at
org.ovirt.engine.core.utils.servlet.HeaderFilter.doFilter(HeaderFilter.java:94)
[utils.jar:]
at
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
at
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at
org.ovirt.engine.ui.frontend.server.gwt.GwtCachingFilter.doFilter(GwtCachingFilter.java:132)
at
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
at
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at
org.ovirt.engine.core.branding.BrandingFilter.doFilter(BrandingFilter.java:73)
[branding.jar:]
at
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
at
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at
org.ovirt.engine.core.utils.servlet.LocaleFilter.doFilter(LocaleFilter.java:66)
[utils.jar:]
at
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
at
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at
io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84)
at
io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
at
io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
at
org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at
io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131)
at
io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at
io.undertow.security.handlers.AuthenticationConstraintHandler.handleRequest(AuthenticationConstraintHandler.java:51)
at
io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
at
io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
at
io.undertow.servlet.handlers.security.ServletSecurityConstraintHandler.handleRequest(ServletSecurityConstraintHandler.java:56)
at
io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60)
at
io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77)
at
io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
at
io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43)
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at
org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at
io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:284)
at
io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:263)
at
io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81)
at
io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:174)
at
io.undertow.server.Connectors.executeRootHandler(Connectors.java:202)
at
io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:793)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[rt.jar:1.8.0_102]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[rt.jar:1.8.0_102]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_102]
2016-10-06 10:25:27,202 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-113) [] START, GetVmsFromExternalProviderVDSCommand(HostName
= hlg-rbs-ovirt-kvm02-poa.rbs.com.br,
GetVmsFromExternalProviderParameters:{runAsync='true',
hostId='f9c9d929-b460-4102-bb29-de1e6ad6ad72', url='qemu+ssh://
root(a)prd-openshift-kvm03-poa.rbs.com.br/system', username='root',
originType='KVM'}), log id: 4f3174a6
*2016-10-06 10:25:38,338 WARN
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-113) [] Unexpected return value: StatusForXmlRpc [code=65,
message=Cannot recv data: Host key verification failed.: Connection reset
by peer]*
2016-10-06 10:25:38,338 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-113) [] Failed in 'GetVmsFromExternalProviderVDS' method
2016-10-06 10:25:38,338 WARN
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-113) [] Unexpected return value: StatusForXmlRpc [code=65,
message=Cannot recv data: Host key verification failed.: Connection reset
by peer]
2016-10-06 10:25:38,343 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-113) [] Correlation ID: null, Call Stack: null, Custom Event
ID: -1, Message: VDSM hlg-rbs-ovirt-kvm02-poa.rbs.com.br command failed:
Cannot recv data: Host key verification failed.: Connection reset by peer
2016-10-06 10:25:38,343 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-113) [] Command
'org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand'
return value
'org.ovirt.engine.core.vdsbroker.vdsbroker.VMListReturnForXmlRpc@42dd60af'
2016-10-06 10:25:38,343 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-113) [] HostName = hlg-rbs-ovirt-kvm02-poa.rbs.com.br
2016-10-06 10:25:38,343 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-113) [] Command
'GetVmsFromExternalProviderVDSCommand(HostName =
hlg-rbs-ovirt-kvm02-poa.rbs.com.br,
GetVmsFromExternalProviderParameters:{runAsync='true',
hostId='f9c9d929-b460-4102-bb29-de1e6ad6ad72', url='qemu+ssh://root@prd-openshift-kvm03-poa.rbs.com.br/system', username='root',
originType='KVM'})' execution failed: VDSGenericException:
VDSErrorException: Failed to GetVmsFromExternalProviderVDS, error = Cannot
recv data: Host key verification failed.: Connection reset by peer, code =
65
2016-10-06 10:25:38,343 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-113) [] FINISH, GetVmsFromExternalProviderVDSCommand, log id:
4f3174a6
2016-10-06 10:25:38,344 ERROR
[org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuery] (default
task-113) [] Query 'GetVmsFromExternalProviderQuery' failed:
EngineException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to
GetVmsFromExternalProviderVDS, error = Cannot recv data: Host key
verification failed.: Connection reset by peer, code = 65 (Failed with
error unexpected and code 16)
2016-10-06 10:25:38,345 ERROR
[org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuery] (default
task-113) [] Exception:
org.ovirt.engine.core.common.errors.EngineException: EngineException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to
GetVmsFromExternalProviderVDS, error = Cannot recv data: Host key
verification failed.: Connection reset by peer, code = 65 (Failed with
error unexpected and code 16)
at
org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:114)
[bll.jar:]
at
org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.runVdsCommand(VDSBrokerFrontendImpl.java:33)
[bll.jar:]
at
org.ovirt.engine.core.bll.QueriesCommandBase.runVdsCommand(QueriesCommandBase.java:257)
[bll.jar:]
at
org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuery.getVmsFromExternalProvider(GetVmsFromExternalProviderQuery.java:32)
[bll.jar:]
at
org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuery.executeQueryCommand(GetVmsFromExternalProviderQuery.java:27)
[bll.jar:]
at
org.ovirt.engine.core.bll.QueriesCommandBase.executeCommand(QueriesCommandBase.java:103)
[bll.jar:]
at
org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33)
[dal.jar:]
at org.ovirt.engine.core.bll.Backend.runQueryImpl(Backend.java:558)
[bll.jar:]
at org.ovirt.engine.core.bll.Backend.runQuery(Backend.java:529)
[bll.jar:]
at sun.reflect.GeneratedMethodAccessor181.invoke(Unknown Source)
[:1.8.0_102]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_102]
at java.lang.reflect.Method.invoke(Method.java:498)
[rt.jar:1.8.0_102]
at
org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437)
at
org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.delegateInterception(Jsr299BindingsInterceptor.java:70)
[wildfly-weld-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.doMethodInterception(Jsr299BindingsInterceptor.java:80)
[wildfly-weld-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:93)
[wildfly-weld-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437)
at
org.ovirt.engine.core.bll.interceptors.CorrelationIdTrackerInterceptor.aroundInvoke(CorrelationIdTrackerInterceptor.java:13)
[bll.jar:]
at sun.reflect.GeneratedMethodAccessor179.invoke(Unknown Source)
[:1.8.0_102]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_102]
at java.lang.reflect.Method.invoke(Method.java:498)
[rt.jar:1.8.0_102]
at
org.jboss.as.ee.component.ManagedReferenceLifecycleMethodInterceptor.processInvocation(ManagedReferenceLifecycleMethodInterceptor.java:89)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.invocationmetrics.ExecutionTimeInterceptor.processInvocation(ExecutionTimeInterceptor.java:43)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437)
at
org.jboss.weld.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:73)
[weld-core-impl-2.3.2.Final.jar:2.3.2.Final]
at
org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:83)
[wildfly-weld-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(ConcurrentContextInterceptor.java:45)
[wildfly-ee-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InitialInterceptor.processInvocation(InitialInterceptor.java:21)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
at
org.jboss.as.ee.component.interceptors.ComponentDispatcherInterceptor.processInvocation(ComponentDispatcherInterceptor.java:52)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.singleton.SingletonComponentInstanceAssociationInterceptor.processInvocation(SingletonComponentInstanceAssociationInterceptor.java:53)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:263)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:374)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:243)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.invocationmetrics.WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:43)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.security.SecurityContextInterceptor.processInvocation(SecurityContextInterceptor.java:100)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.interceptors.ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:66)
[wildfly-ejb3-10.0.0.Final.jar:10.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:64)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:356)
at
org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:636)
at
org.jboss.invocation.AccessCheckingInterceptor.processInvocation(AccessCheckingInterceptor.java:61)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:356)
at
org.jboss.invocation.PrivilegedWithCombinerInterceptor.processInvocation(PrivilegedWithCombinerInterceptor.java:80)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
at
org.jboss.as.ee.component.ViewService$View.invoke(ViewService.java:195)
at
org.jboss.as.ee.component.ViewDescription$1.processInvocation(ViewDescription.java:185)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
at
org.jboss.as.ee.component.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:73)
at
org.ovirt.engine.core.common.interfaces.BackendLocal$$$view3.runQuery(Unknown
Source) [common.jar:]
at
org.ovirt.engine.ui.frontend.server.gwt.GenericApiGWTServiceImpl.runQuery(GenericApiGWTServiceImpl.java:53)
at sun.reflect.GeneratedMethodAccessor222.invoke(Unknown Source)
[:1.8.0_102]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_102]
at java.lang.reflect.Method.invoke(Method.java:498)
[rt.jar:1.8.0_102]
at
com.google.gwt.rpc.server.RPC.invokeAndStreamResponse(RPC.java:196)
at
com.google.gwt.rpc.server.RpcServlet.processCall(RpcServlet.java:172)
at
com.google.gwt.rpc.server.RpcServlet.processPost(RpcServlet.java:233)
at
com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
[jboss-servlet-api_3.1_spec-1.0.0.Final.jar:1.0.0.Final]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
[jboss-servlet-api_3.1_spec-1.0.0.Final.jar:1.0.0.Final]
at
io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:85)
at
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:129)
at
org.ovirt.engine.core.utils.servlet.HeaderFilter.doFilter(HeaderFilter.java:94)
[utils.jar:]
at
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
at
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at
org.ovirt.engine.ui.frontend.server.gwt.GwtCachingFilter.doFilter(GwtCachingFilter.java:132)
at
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
at
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at
org.ovirt.engine.core.branding.BrandingFilter.doFilter(BrandingFilter.java:73)
[branding.jar:]
at
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
at
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at
org.ovirt.engine.core.utils.servlet.LocaleFilter.doFilter(LocaleFilter.java:66)
[utils.jar:]
at
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
at
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at
io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84)
at
io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
at
io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
at
org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at
io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131)
at
io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at
io.undertow.security.handlers.AuthenticationConstraintHandler.handleRequest(AuthenticationConstraintHandler.java:51)
at
io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
at
io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
at
io.undertow.servlet.handlers.security.ServletSecurityConstraintHandler.handleRequest(ServletSecurityConstraintHandler.java:56)
at
io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60)
at
io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77)
at
io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
at
io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43)
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at
org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at
io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:284)
at
io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:263)
at
io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81)
at
io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:174)
at
io.undertow.server.Connectors.executeRootHandler(Connectors.java:202)
at
io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:793)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[rt.jar:1.8.0_102]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[rt.jar:1.8.0_102]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_102]
2016-10-06 10:25:52,527 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-118) [] START, GetVmsFromExternalProviderVDSCommand(HostName
= hlg-rbs-ovirt-kvm03-poa.rbs.com.br,
GetVmsFromExternalProviderParameters:{runAsync='true',
hostId='02ead14e-0208-4a74-b1c2-4c19383820f9', url='qemu+ssh://root@prd-openshift-kvm03-poa.rbs.com.br/system', username='root',
originType='KVM'}), log id: 7c8b1e2d
2016-10-06 10:26:02,695 WARN
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-118) [] Unexpected return value: StatusForXmlRpc [code=65,
message=Cannot recv data: Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).:
Connection reset by peer]
2016-10-06 10:26:02,695 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-118) [] Failed in 'GetVmsFromExternalProviderVDS' method
2016-10-06 10:26:02,696 WARN
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-118) [] Unexpected return value: StatusForXmlRpc [code=65,
message=Cannot recv data: Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).:
Connection reset by peer]
2016-10-06 10:26:02,701 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-118) [] Correlation ID: null, Call Stack: null, Custom Event
ID: -1, Message: VDSM hlg-rbs-ovirt-kvm03-poa.rbs.com.br command failed:
Cannot recv data: Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).:
Connection reset by peer
2016-10-06 10:26:02,701 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-118) [] Command
'org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand'
return value
'org.ovirt.engine.core.vdsbroker.vdsbroker.VMListReturnForXmlRpc@5ce9f7f2'
2016-10-06 10:26:02,701 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-118) [] HostName = hlg-rbs-ovirt-kvm03-poa.rbs.com.br
2016-10-06 10:26:02,701 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-118) [] Command
'GetVmsFromExternalProviderVDSCommand(HostName =
hlg-rbs-ovirt-kvm03-poa.rbs.com.br,
GetVmsFromExternalProviderParameters:{runAsync='true',
hostId='02ead14e-0208-4a74-b1c2-4c19383820f9', url='qemu+ssh://root@prd-openshift-kvm03-poa.rbs.com.br/system', username='root',
originType='KVM'})' execution failed: VDSGenericException:
VDSErrorException: Failed to GetVmsFromExternalProviderVDS, error = Cannot
recv data: Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).:
Connection reset by peer, code = 65
2016-10-06 10:26:02,701 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFromExternalProviderVDSCommand]
(default task-118) [] FINISH, GetVmsFromExternalProviderVDSCommand, log id:
7c8b1e2d
2016-10-06 10:26:02,701 ERROR
[org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuery] (default
task-118) [] Query 'GetVmsFromExternalProviderQuery' failed:
EngineException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to
GetVmsFromExternalProviderVDS, error = Cannot recv data: Permission denied,
please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).:
Connection reset by peer, code = 65 (Failed with error unexpected and code
16)
2016-10-06 10:26:02,701 ERROR
[org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuery] (default
task-118) [] Exception:
org.ovirt.engine.core.common.errors.EngineException: EngineException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to
GetVmsFromExternalProviderVDS, error = Cannot recv data: Permission denied,
please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).:
Connection reset by peer, code = 65 (Failed with error unexpected and code
16)
[stack trace identical to the one above elided]
^C
[root@hlg-rbs-ovirt01-poa ~]#
When I try with virsh, I have success.
[root@hlg-rbs-ovirt-kvm01-poa ~]# virsh -c qemu+ssh://root@prd-openshift-kvm03-poa.rbs.com.br/system
The authenticity of host 'prd-openshift-kvm03-poa.rbs.com.br (10.1.8.32)'
can't be established.
ECDSA key fingerprint is af:e9:12:29:65:ad:41:ab:0a:3c:a1:f0:73:1c:62:a5.
Are you sure you want to continue connecting (yes/no)? yes
root@prd-openshift-kvm03-poa.rbs.com.br's password:
Welcome to virsh, the virtualization interactive terminal.
Type: 'help' for help with commands
'quit' to quit
virsh # list
 Id    Name                           State
----------------------------------------------------
 5     prd-openshift-etcd03-poa       running
 6     prd-openshift-master03-poa     running
 8     prd-openshift-node03-poa       running
virsh # list --all
 Id    Name                           State
----------------------------------------------------
 5     prd-openshift-etcd03-poa       running
 6     prd-openshift-master03-poa     running
 8     prd-openshift-node03-poa      running
 -     teste                          shut off
 -     teste1                         shut off
 -     tpl-centos72-64                shut off
virsh # quit
[root@hlg-rbs-ovirt-kvm01-poa ~]#
Thanks.
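A plausible reading of the two failures above, given that virsh works: virsh runs interactively and can prompt you to accept the host key and type a password, while the engine runs the qemu+ssh connection non-interactively on the proxy host as the vdsm user, so the source host's key and key-based authentication must already be in place for that user. A minimal sketch, assuming vdsm's home directory is /var/lib/vdsm (verify with getent passwd vdsm):
sudo -u vdsm mkdir -p /var/lib/vdsm/.ssh
sudo -u vdsm ssh-keygen -t rsa -N '' -f /var/lib/vdsm/.ssh/id_rsa
sudo -u vdsm ssh-copy-id -i /var/lib/vdsm/.ssh/id_rsa.pub root@prd-openshift-kvm03-poa.rbs.com.br
sudo -u vdsm ssh root@prd-openshift-kvm03-poa.rbs.com.br 'virsh -r list'
The first two commands create a key for the vdsm user, the third installs it on the source host, and the last both records the source's host key in vdsm's known_hosts (answer yes once) and verifies that the connection now works without a password prompt. Repeat on each oVirt host used as the import proxy.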
5 years, 11 months
Change host names/IPs
by Davide Ferrari
Hello
Is there a clean way, ideally without downtime, to change the hostnames
and IP addresses of all the hosts in a running oVirt cluster?
--
Davide Ferrari
Senior Systems Engineer
5 years, 11 months
[Users] Lifecycle / upgradepath
by Sven Kieske
Hi Community,
Currently, there is no single document describing supported
(which means: working) upgrade scenarios.
I think the project has matured enough to have such a supported
upgrade path, which should be considered in the development of new
releases.
As far as I know, currently it is supported to upgrade
from x.y.z to x.y.z+1 and from x.y.z to x.y+1.z
but not from x.y-1.z to x.y+1.z directly.
maybe this should be put together in a wiki page at least.
also it would be cool to know how long a single "release"
would be supported.
In this context I would define a release as a version
bump from x.y.z to x.y+1.z or to x+1.y.z
a bump in z would be a bugfix release.
The question is, how long will we get bugfix releases
for a given version?
What are your thoughts?
--
Mit freundlichen Grüßen / Regards
Sven Kieske
Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
5 years, 11 months
[Users] Nested virtualization with Opteron 2nd generation and oVirt 3.1 possible?
by Gianluca Cecchi
Hello,
I have 2 physical servers with 2nd generation Opteron CPUs.
They run CentOS 6.3, with some VMs already configured on them.
Their /proc/cpuinfo contains
...
model name : Dual-Core AMD Opteron(tm) Processor 8222
...
kvm_amd kernel module is loaded with its default enabled nested option
# systool -m kvm_amd -v
Module = "kvm_amd"
Attributes:
initstate = "live"
refcnt = "15"
srcversion = "43D8067144E7D8B0D53D46E"
Parameters:
nested = "1"
npt = "1"
...
I already configured a Fedora 17 VM as an oVirt 3.1 engine.
I'm trying to configure another VM as an oVirt 3.1 node with
ovirt-node-iso-2.5.5-0.1.fc17.iso.
It seems I'm not able to configure it so that the oVirt install doesn't complain.
After some attempts, I tried this in my vm.xml for the cpu:
<cpu mode='custom' match='exact'>
<model fallback='allow'>athlon</model>
<vendor>AMD</vendor>
<feature policy='require' name='pni'/>
<feature policy='require' name='rdtscp'/>
<feature policy='force' name='svm'/>
<feature policy='require' name='clflush'/>
<feature policy='require' name='syscall'/>
<feature policy='require' name='lm'/>
<feature policy='require' name='cr8legacy'/>
<feature policy='require' name='ht'/>
<feature policy='require' name='lahf_lm'/>
<feature policy='require' name='fxsr_opt'/>
<feature policy='require' name='cx16'/>
<feature policy='require' name='extapic'/>
<feature policy='require' name='mca'/>
<feature policy='require' name='cmp_legacy'/>
</cpu>
Inside the node, /proc/cpuinfo becomes
processor : 3
vendor_id : AuthenticAMD
cpu family : 6
model : 2
model name : QEMU Virtual CPU version 0.12.1
stepping : 3
microcode : 0x1000065
cpu MHz : 3013.706
cache size : 512 KB
fpu : yes
fpu_exception : yes
cpuid level : 2
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat
pse36 clflush mmx fxsr sse sse2 syscall mmxext fxsr_opt lm nopl pni
cx16 hypervisor lahf_lm cmp_legacy cr8_legacy
bogomips : 6027.41
TLB size : 1024 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
2 questions:
1) Is there any combination in the XML file I can give my VM so that
oVirt doesn't complain about missing hardware virtualization with this
processor?
2) Suppose 1) is not possible in my case and I still want to test the
interface and try some config operations to see, for example, the
differences with RHEV 3.0; how can I do that?
At the moment this complaint about hw virtualization prevents me from
activating the node.
I get
Installing Host f17ovn01. Step: RHEV_INSTALL.
Host f17ovn01 was successfully approved.
Host f17ovn01 running without virtualization hardware acceleration
Detected new Host f17ovn01. Host state was set to Non Operational.
Host f17ovn01 moved to Non-Operational state.
Host f17ovn01 moved to Non-Operational state as host does not meet the
cluster's minimum CPU level. Missing CPU features : CpuFlags
Can I lower the requirements to be able to operate without hw
virtualization in 3.1?
Thanks in advance,
Gianluca
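On question 1, a sketch of one thing worth trying, assuming the libvirt on CentOS 6.3 is new enough for it: pass the host CPU straight through so the guest sees the svm flag (the nested=1 setting of kvm_amd shown above is what makes the flag usable by guests). In the node VM's definition, replace the whole <cpu> element with:
<cpu mode='host-passthrough'/>
or, if host-passthrough is rejected by your libvirt version:
<cpu mode='host-model'>
  <feature policy='require' name='svm'/>
</cpu>
On question 2, vdsm historically shipped a fake_kvm_support option (in the [vars] section of /etc/vdsm/vdsm.conf) meant exactly for trying the interface without hardware virtualization; whether your 3.1 build honours it is worth checking before relying on it.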
5 years, 11 months
Need VM run once api
by Chandrahasa S
Hi Experts,
We are integrating oVirt with our internal cloud.
Here we installed cloud-init in a VM and then converted the VM to a
template. We deploy the template with the initial run parameters:
hostname, IP address, gateway and DNS.
But when we power it on, the initial run parameters are not pushed
into the VM. It does work when we power on the VM using the Run Once
option in the oVirt portal.
I believe we need to power on the VM using a run-once API, but we are
not able to find this API.
Can someone help with this?
I got a reply to this query last time, but unfortunately the mail got
deleted.
Thanks & Regards
Chandrahasa S
=====-----=====-----=====
Notice: The information contained in this e-mail
message and/or attachments to it may contain
confidential or privileged information. If you are
not the intended recipient, any dissemination, use,
review, distribution, printing or copying of the
information contained in this e-mail message
and/or attachments to it are strictly prohibited. If
you have received this communication in error,
please notify us by reply e-mail or telephone and
immediately and permanently delete the message
and any attachments. Thank you
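For reference, the run-once behaviour is exposed in the REST API as the VM's start action. A minimal sketch, assuming an oVirt 3.5-or-later engine; the URL, credentials and VM id are placeholders, and element names can differ between API versions:
curl -k -u 'admin@internal:PASSWORD' \
     -X POST -H 'Content-Type: application/xml' \
     -d '<action><use_cloud_init>true</use_cloud_init></action>' \
     'https://engine.example.com/api/vms/VM-ID/start'
With use_cloud_init set, the engine starts the VM with the cloud-init payload (hostname, IP address, gateway, DNS) from the template's initial run configuration; the payload can also be supplied inline in the action's <vm><initialization> element.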
5 years, 11 months
[Users] importing from kvm into ovirt
by Jonathan Horne
I need to import a KVM virtual machine from a standalone KVM host into my
oVirt cluster. The standalone host uses local storage, and my oVirt cluster
uses iSCSI. Can I please have some advice on the best way to get this
system into oVirt?
Right now I see it as copying the .img file to somewhere… but I have no
idea where to start. I found this directory on one of my ovirt nodes:
/rhev/data-center/mnt/blockSD/fe633237-14b2-4f8b-aedd-bbf753bcafaf/master/vms
But inside are just directories that appear to have UUID-type names, and
I can't tell what belongs to which VM.
Any advice would be greatly appreciated.
Thanks,
jonathan
________________________________
This is a PRIVATE message. If you are not the intended recipient, please
delete without copying and kindly advise us by e-mail of the mistake in
delivery. NOTE: Regardless of content, this e-mail shall not operate to
bind SKOPOS to any order or other contract unless pursuant to explicit
written agreement or government initiative expressly permitting the use
of e-mail for such purpose.
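A sketch of one common route, with placeholder hostnames and paths: let virt-v2v pull the guest from the standalone host over libvirt into an oVirt export storage domain, then import it from the web admin UI, which copies it onto the iSCSI data domain. Flags vary between virt-v2v releases, so treat this as a starting point rather than the exact invocation:
# run on a machine that can reach both the standalone KVM host and the export NFS share
virt-v2v -ic qemu+ssh://root@standalone-kvm.example.com/system \
         -o rhev -os nfs.example.com:/export/ovirt-export \
         name-of-the-vm
The UUID-named directories under master/vms are the storage domain's internal metadata, so copying the .img file in there by hand is not a supported path.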
5 years, 12 months
Trying to reset password for ovirt wiki
by noc
Hoping someone can help me out.
For some reason I keep getting the following error when I try to reset
my password:
Reset password
 * Error sending mail: Failed to add recipient: jvandewege@nieuwland.nl
   [SMTP: Invalid response code received from server (code: 554,
   response: 5.7.1 <jvandewege@nieuwland.nl>: Relay access denied)]
Complete this form to receive an e-mail reminder of your account details.
Since I receive the ML on this address, it is definitely a working address.
I tried my home account too and got the same error, but then for my home
provider. Relay denied??
A puzzled user,
Joop
6 years
ovirt-guest-agent issue on rhel5.5
by John Michael Mercado
Hi All,
I need your help. Has anyone encountered the error below and found a
solution? Can you tell me how to fix it?
10:22:53,247::ovirt-guest-agent::57::root::Starting oVirt guest agent
MainThread::ERROR::2015-01-27
10:22:53,248::ovirt-guest-agent::138::root::Unhandled exception in oVirt
guest agent!
Traceback (most recent call last):
File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 132, in ?
agent.run(daemon, pidfile)
File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 63, in run
self.agent = LinuxVdsAgent(config)
File "/usr/share/ovirt-guest-agent/GuestAgentLinux2.py", line 371, in
__init__
AgentLogicBase.__init__(self, config)
File "/usr/share/ovirt-guest-agent/OVirtAgentLogic.py", line 171, in
__init__
self.vio = VirtIoChannel(config.get("virtio", "device"))
File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 150, in
__init__
self._stream = VirtIoStream(vport_name)
File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 131, in
__init__
self._vport = os.open(vport_name, os.O_RDWR)
OSError: [Errno 2] No such file or directory:
'/dev/virtio-ports/com.redhat.rhevm.vdsm'
Thanks
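The traceback says the guest cannot see the virtio-serial port the agent talks through. Two checks worth making, sketched below on the assumption that you can reach both the guest and its host; the GUID in the path is illustrative:
# inside the guest: does udev expose the port at all?
ls -l /dev/virtio-ports/
# on the host: does the domain definition carry the channel?
virsh dumpxml the-vm | grep -B1 -A2 'com.redhat.rhevm.vdsm'
# expected, roughly:
#   <channel type='unix'>
#     <source mode='bind' path='/var/lib/libvirt/qemu/channels/GUID.com.redhat.rhevm.vdsm'/>
#     <target type='virtio' name='com.redhat.rhevm.vdsm'/>
#   </channel>
If the channel is missing, the agent has nothing to open; note also that a RHEL 5.5 guest kernel may simply lack virtio-serial support, in which case the device will not appear even with the channel defined.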
6 years
[Users] oVirt Workshop at LinuxCon Japan 2012
by Leslie Hawthorn
Hello everyone,
As part of our efforts to raise awareness of and educate more developers
about the oVirt project, we will be holding an oVirt workshop at
LinuxCon Japan, taking place on June 8, 2012. You can find full details
of the workshop agenda on the LinuxCon Japan site. [0]
Registration for the workshop is now open and is free of charge for the
first 50 participants. We will also look at adding additional
participant slots to the workshop based on demand.
Attendees who register for LinuxCon Japan via the workshop registration
link [1] will also be eligible for a discount on their LinuxCon Japan
registration.
Please spread the word to folks you think would find the workshop
useful. If they have already registered for LinuxCon Japan, they can
simply edit their existing registration to include the workshop.
[0] -
https://events.linuxfoundation.org/events/linuxcon-japan/ovirt-gluster-wo...
[1] - http://www.regonline.com/Register/Checkin.aspx?EventID=1099949
Cheers,
LH
--
Leslie Hawthorn
Community Action and Impact
Open Source and Standards @ Red Hat
identi.ca/lh
twitter.com/lhawthorn
6 years
[Users] Moving iSCSI Master Data
by rni@chef.net
Hi,
it's me again...
I started my oVirt 'project' as a proof of concept, but it happened as
always: it became production.
Now I have to move the iSCSI master data to the real iSCSI target.
Is there any way to do this, and to get rid of the old master data?
Thank you for your help
Hans-Joachim
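The usual route, sketched here: attach and activate the new iSCSI data domain in the same data center, then put the old master domain into maintenance; on deactivation the engine migrates the master role to another active data domain, after which the old domain can be detached and removed (move or export any VM disks still on it first). The same deactivation is available over the REST API; the engine URL, credentials and IDs below are placeholders:
curl -k -u 'admin@internal:PASSWORD' \
     -X POST -H 'Content-Type: application/xml' -d '<action/>' \
     'https://engine.example.com/api/datacenters/DC-ID/storagedomains/SD-ID/deactivate'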
6 years
[Users] Can't access RHEV-H aka ovirt-node
by Scotto Alberto
Hi all,
I can't log in to the hypervisor, neither as root nor as admin, neither
from another computer via ssh nor directly on the machine.
I'm sure I remember the passwords. This is not the first time it happens:
last time I reinstalled the host. Everything worked OK for about 2 weeks,
and then...
What's going on? Is it a known behavior, somehow?
Before rebooting the hypervisor, I would like to try something. RHEV
Manager talks to RHEV-H without any problems. Can I log in with RHEV-M's
keys? How?
Thank you all.
Alberto Scotto
Via Cardinal Massaia, 83
10147 - Torino - ITALY
phone: +39 011 29100
al.scotto@reply.it
www.reply.it
________________________________
--
The information transmitted is intended for the person or entity to which
it is addressed and may contain confidential and/or privileged material.
Any review, retransmission, dissemination or other use of, or taking of
any action in reliance upon, this information by persons or entities other
than the intended recipient is prohibited. If you received this in error,
please contact the sender and delete the material from any computer.
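On the last question: in principle yes, because the engine installs its public key in the host's root authorized_keys when the host is deployed. A sketch, with the caveat that the private key path varies by version (on oVirt it is typically /etc/pki/ovirt-engine/keys/engine_id_rsa; older RHEV-M releases kept it under /etc/pki/rhevm/keys/):
# run on the RHEV-M / engine machine
ssh -i /etc/pki/ovirt-engine/keys/engine_id_rsa root@rhev-h-host
If that gets you a shell, you can reset the root and admin passwords from there before deciding whether a reinstall is needed.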
6 years, 1 month
Unable to make Single Sign on working on Windows 7 Guest
by Felipe Herrera Martinez
In case I am able to create an installer, what is the application name that needs to be there in order for oVirt to detect that the oVirt Guest Agent is installed?
I have created an installer adding the OvirtGuestService files and the product name to be shown, apart from the post-install command lines.
I have tried both "ovirt-guest-agent" and "Ovirt guest agent" as names for the application installed on the Windows 7 guest, and even though both are presented on the oVirt VM Applications tab, in neither case does the LogonVDSCommand appear.
Is there another option to make it work now?
Thanks in advance,
Felipe
6 years, 1 month
Re: [ovirt-users] Need VM run once api
by Chandrahasa S
Can anyone help on this?
Thanks & Regards
Chandrahasa S
From: Chandrahasa S/MUM/TCS
To: users(a)ovirt.org
Date: 28-07-2015 15:20
Subject: Need VM run once api
Hi Experts,
We are integrating ovirt with our internal cloud.
Here we installed cloud-init in a VM and then converted the VM to a template.
We deploy the template with the initial run parameters Hostname, IP Address,
Gateway and DNS.
But when we power the VM on, the initial run parameters are not getting
pushed inside the VM. It works when we power on the VM using the "Run Once"
option on the oVirt portal.
I believe we need to power on the VM using a run-once API, but we are not
able to find this API.
Can someone help on this?
I got reply on this query last time but unfortunately mail got deleted.
Thanks & Regards
Chandrahasa S
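For the archives: the portal's "Run Once" maps to the start action of the REST
API, which can carry the cloud-init payload. A rough sketch with curl follows;
the engine URL, credentials and VM id are placeholders, and the exact
cloud-init XML elements vary between API versions, so treat this as a starting
point and check your engine's /api?rsdl listing rather than as the definitive
call:
# a sketch; engine FQDN, credentials and the VM id are placeholders
curl -k -u 'admin@internal:password' \
  -H 'Content-Type: application/xml' \
  -X POST 'https://engine.example.com/api/vms/<vm-id>/start' \
  -d '<action>
        <use_cloud_init>true</use_cloud_init>
        <vm>
          <initialization>
            <cloud_init>
              <host><address>myhostname</address></host>
            </cloud_init>
          </initialization>
        </vm>
      </action>'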
Re: [ovirt-users] Problem Windows guests start in pause
by Dafna Ron
Hi Lucas,
Please send mails to the list next time.
can you please do rpm -qa |grep qemu.
also, can you try a different windows image?
Thanks,
Dafna
On 07/14/2014 02:03 PM, lucas castro wrote:
> On the host where I've tried to run the VM I use CentOS 6.5,
> and I checked: there are no updates for qemu, libvirt or related packages.
--
Dafna Ron
6 years, 1 month
Feature: Hosted engine VM management
by Roy Golan
Hi all,
Upcoming in 3.6 is enhancement for managing the hosted engine VM.
In short, we want to:
* Allow editing the Hosted engine VM, storage domain, disks, networks etc
* Have a shared configuration for the hosted engine VM
* Have a backup for the hosted engine VM configuration
please review and comment on the wiki below:
http://www.ovirt.org/Hosted_engine_VM_management
Thanks,
Roy
6 years, 1 month
Re: [ovirt-users] Large DWH Database, how to empty
by Matt .
Hi,
OK thanks! I saw that after upgrading to 4.0.5 from 4.0.4 the DB
already dropped by around 500MB directly and is now 2GB smaller.
Does this sound familiar to you with other settings in 4.0.5?
Thanks,
Matt
2017-01-08 10:45 GMT+01:00 Shirly Radco <sradco(a)redhat.com>:
> No. That will corrupt your database.
>
> Are you using the full dwh or the smaller version for the dashboards?
>
> Please set the delete thresholds to save less data; the data older than
> the time you set will be deleted.
> Add a file to /ovirt-engine-dwhd.conf.d/
> update_time_to_keep_records.conf
>
> Add these lines with the new configurations. The numbers represent the hours
> to keep the data.
>
> DWH_TABLES_KEEP_SAMPLES=24
> DWH_TABLES_KEEP_HOURLY=1440
> DWH_TABLES_KEEP_DAILY=43800
>
>
> These are the configurations for a full dwh.
>
> The smaller version configurations are:
> DWH_TABLES_KEEP_SAMPLES=24
> DWH_TABLES_KEEP_HOURLY=720
> DWH_TABLES_KEEP_DAILY=0
>
> The delete process runs by default at 3am every day (DWH_DELETE_JOB_HOUR=3)
>
> Best regards,
>
> Shirly Radco
>
> BI Software Engineer
> Red Hat Israel Ltd.
> 34 Jerusalem Road
> Building A, 4th floor
> Ra'anana, Israel 4350109
>
>
> On Fri, Jan 6, 2017 at 6:35 PM, Matt . <yamakasi.014(a)gmail.com> wrote:
>>
>> Hi,
>>
>> I seem to have some large database for the DWH logging and I wonder
>> how I can empty it safely.
>>
>> Can I just simply empty the database ?
>>
>> Have a good weekend!
>>
>> Cheers,
>>
>> Matt
>
>
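For the archives, applying the retention settings quoted above boils down to
something like the following. This is a sketch: the dropin directory path and
the file name are assumptions based on a default engine + DWH install, so
verify them on your setup before using it.
# cat > /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/99-retention.conf <<EOF
DWH_TABLES_KEEP_SAMPLES=24
DWH_TABLES_KEEP_HOURLY=720
DWH_TABLES_KEEP_DAILY=0
EOF
# systemctl restart ovirt-engine-dwhd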
6 years, 2 months
Re: [ovirt-users] Packet loss
by Doron Fediuck
Hi Kyle,
We may have seen something similar in the past but I think there were vlans involved.
Is it the same for you?
Tony / Dan, does it ring a bell?
6 years, 2 months
Unable to backend oVirt with Cinder
by Logan Kuhn
I've got Cinder configured and pointed at Ceph for its back-end storage. I can run ceph commands on the cinder machine, and cinder is configured for noauth; I've also tried it with Keystone for auth. I can run various cinder commands and they return as expected.
When I configure it in oVirt it'll add the external provider fine, but when I go to create a disk it doesn't populate the volume type field, it's just empty. The corresponding command for cinder: cinder type-list and cinder type-show <name> returns fine and it is public.
Ovirt and Cinder are on the same host so it isn't a firewall issue.
Cinder config:
[DEFAULT]
rpc_backend = rabbit
#auth_strategy = keystone
auth_strategy = noauth
enabled_backends = ceph
#glance_api_servers = http://10.128.7.252:9292
#glance_api_version = 2
#[keystone_authtoken]
#auth_uri = http://10.128.7.252:5000/v3
#auth_url = http://10.128.7.252:35357/v3
#auth_type = password
#memcached_servers = localhost:11211
#project_domain_name = default
#user_domain_name = default
#project_name = services
#username = user
#password = pass
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = ovirt-images
rbd_user = cinder
rbd_secret_uuid = <secret>
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = true
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
#glance_api_version = 2
[database]
connection = postgresql://user:pass@10.128.2.33/cinder
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_rabbit]
rabbit_host = localhost
rabbit_port = 5672
rabbit_userid = user
rabbit_password = pass
Regards,
Logan
6 years, 10 months
Mailing-Lists upgrade
by Marc Dequènes (Duck)
Quack,
On behalf of the oVirt infra team, I'd like to announce the current
Mailing-Lists system is going to be upgraded to a brand new Mailman 3
installation on Monday during the slot 11:00-12:00 JST.
It should not take a full hour to migrate as we already made incremental
synchronization with the current system but better keep some margin. The
system will then take over delivery of the mails but might be a bit slow
at first as it needs to reindex all the archived mails (which might take
a few hours).
To manage your subscriptions and delivery settings you can do this
easily on the much nicer web interface (https://lists.ovirt.org). There
is a notion of account so you don't need to login separately for each ML.
You can Sign In using Fedora, GitHub or Google or create a local account
if you prefer. Please keep in mind signing in with a different method
would create separate accounts (which cannot be merged at the moment).
But you can easily link your account to other authentication methods in
your settings (click on you name in the up-right corner -> Account ->
Account Connections).
As for the original mail archives, because the previous system did not
have stable URLs, we cannot create mappings to the new pages. We decided
to keep the old archives around on the same URL (/pipermail), so the
Internet links would still work fine.
Hope you'd be happy with the new system.
\_o<
--AUFHCWPQQknM84fF2kF4uGjIv4blC9bMv--
--IfGqXrbgT9wNNdIsvS8kI8WdhEQHlgkiq
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"
-----BEGIN PGP SIGNATURE-----
iQIzBAEBCgAdFiEEcpcqg+UmRT3yiF+BVen596wcRD8FAlmwx0MACgkQVen596wc
RD9LTQ/+LtUncsq9K8D/LX8wqUTd6VyPwAD5UnAk5c3H/2tmyVA0u7FIfhEyPsXs
Z//LE9FEneTDqDVRi1Dw9I54K0ZwxPBemi71dXfwgBI7Ay0ezkbLWrA168Mt9spE
tHAODEuxPt2to2aqaS4ujogrkp/gvEP8ILoxPEqoTPCJ/eDTPAu/I1a2JzjMPK3n
2BBS6D8z0TLAf7w1n72TsgX2QzJW57ig/0HELyjvat2/K8V3HSrkwiKlsdULQDWe
zB+aMde7r6UoyVKHqlu4asTl2tU/lGZ+e31Hd9Bnx1/oZOJdzslGOhEo9Qoz6763
AHWU9LKiK4NtxYHj2UQTWhndr8PiTtTmR73eIDmkb0cuRXxzjl9VQbwYJ0Kbrmfp
attTqpc2CnEojTNXUUNSmNxotZoYXZiX8ZvjPfSgRVr15TUYujzlOfG+lUynbQMV
9rQ9/m58wgwYUymMpOIsRGaIcAKzjm+WpuuVnO+bS2AfmcBkGMQRoIhfV+3SkS8q
kT9cDXgcDZOzVFcnZZB4EjbycMcPgZDcoHxU88VdYH+jFJYvvb21esgswVF/wJ2Z
uEI/chp4+ADaQhl8ehZNWMSZq125v6SeirPhBNgLG7zFVZI1S9Tm/6qFmH+ajQY7
nCk1X9HZlB1ubex1X+HibRz9QKOilkMgkADyJ4yMDckwYj93sx0=
=l6uN
-----END PGP SIGNATURE-----
--IfGqXrbgT9wNNdIsvS8kI8WdhEQHlgkiq--
6 years, 12 months
vdsClient is removed and replaced by vdsm-client
by Irit Goihman
Hi All,
vdsClient will be removed from master branch today.
It is using XMLRPC protocol which has been deprecated and replaced by
JSON-RPC.
A new client for vdsm was introduced in 4.1: vdsm-client.
This is a simple client that uses JSON-RPC protocol which was introduced in
ovirt 3.5.
The client is not aware of the available methods and parameters, and you
should consult
the schema [1] in order to construct the desired command.
Future versions should parse the schema and provide online help.
If you're using vdsClient, we will be happy to assist you in migrating to
the new vdsm client.
*vdsm-client usage:*
vdsm-client [-h] [-a ADDRESS] [-p PORT] [--unsecure] [--timeout TIMEOUT]
[-f FILE] namespace method [name=value [name=value] ...]
Invoking simple methods:
# vdsm-client Host getVMList
['b3f6fa00-b315-4ad4-8108-f73da817b5c5']
For invoking methods with many or complex parameters, you can read
the parameters from a JSON format file:
# vdsm-client Lease info -f lease.json
where lease.json file content is:
{
"lease": {
"sd_id": "75ab40e3-06b1-4a54-a825-2df7a40b93b2",
"lease_id": "b3f6fa00-b315-4ad4-8108-f73da817b5c5"
}
}
It is also possible to read parameters from standard input, creating
complex parameters interactively:
# cat <<EOF | vdsm-client Lease info -f -
{
"lease": {
"sd_id": "75ab40e3-06b1-4a54-a825-2df7a40b93b2",
"lease_id": "b3f6fa00-b315-4ad4-8108-f73da817b5c5"
}
}
EOF
*Constructing a command from vdsm schema:*
Let's take VM.getStats as an example.
This is the entry in the schema:
VM.getStats:
added: '3.1'
description: Get statistics about a running virtual machine.
params:
- description: The UUID of the VM
name: vmID
type: *UUID
return:
description: An array containing a single VmStats record
type:
- *VmStats
namespace: VM
method name: getStats
params: vmID
The vdsm-client command is:
# vdsm-client VM getStats vmID=b3f6fa00-b315-4ad4-8108-f73da817b5c5
*Invoking getVdsCaps command:*
# vdsm-client Host getCapabilities
Please consult vdsm-client help and man page for further details and
options.
[1] https://github.com/oVirt/vdsm/blob/master/lib/api/vdsm-api.yml
--
Irit Goihman
Software Engineer
Red Hat Israel Ltd.
7 years
Passing VLAN trunk to VM
by Simon Vincent
Is it possible to pass multiple VLANs to a VM (pfSense) using a single
virtual NIC? All my existing oVirt networks are setup as a single tagged
VLAN. I know this didn't use to be supported, but wondered if this has
changed. My other option is to pass each VLAN as a separate NIC to the VM
however if I needed to add a new VLAN I would have to add a new interface
and reboot the VM as hot-add of NICs is not supported by pfSense.
7 years, 1 month
HostedEngine with HA
by Carlos Rodrigues
Hello,
I have one cluster with two hosts with power management correctly
configured and one virtual machine with HostedEngine over shared
storage with FiberChannel.
When I shut down the network of the host with the HostedEngine VM, should it
be possible for the HostedEngine VM to migrate automatically to another host?
What is the expected behaviour on this HA scenario?
Regards,
--
Carlos Rodrigues
Engenheiro de Software Sénior
Eurotux Informática, S.A. | www.eurotux.com
(t) +351 253 680 300 (m) +351 911 926 110
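For reference, the HA state and scores that drive this behaviour can be
inspected on any HA host; a quick check, assuming a standard hosted-engine
deployment:
# hosted-engine --vm-status
Each host publishes a score there, and when a host is isolated the agents on
the surviving hosts restart the engine VM on the best-scoring one once the
isolated host's storage lease expires; so the expected behaviour is an
automatic restart elsewhere rather than a live migration.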
7 years, 1 month
oVIRT 4.1 / iSCSI Multipathing
by Devin Acosta
I am using the latest release of oVirt 4.1.3, and I am connecting a Dell
Compellent SAN that has 2 fault domains, each on a separate VLAN, that I have
attached to oVirt. From what I understand I am supposed to go into the “iSCSI
Multipathing” option and add a BOND of the iSCSI interfaces. I have done
this, selecting the 2 logical networks together for iSCSI. I notice that
there is an option below to select Storage Targets, but if I select the
storage targets below with the logical networks, the cluster goes crazy
and appears to be mad. Storage, nodes, and everything go offline, even
though I also have NFS attached to the cluster.
How should this best be configured? What we notice is that when the
server reboots it seems to log into the SAN correctly, but according to the
Dell SAN it is only logged into one controller, so it only pulls both fault
domains from a single controller.
Please Advise.
Devin
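For reference, a quick way to verify from the host which portals it actually
logged into, and what multipath built out of them, is the standard
open-iscsi/device-mapper tooling:
# list sessions with portal/target detail
iscsiadm -m session -P 1
# show the multipath maps and the paths behind each LUN
multipath -ll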
7 years, 1 month
when creating VMs, I don't want hosted_storage to be an option
by Mike Farnam
Hi All - Is there a way to mark hosted_storage somehow so that it's not available to add new VMs to? Right now it's the default storage domain when adding a VM. At the least, I'd like to make another storage domain the default.
Is there a way to do this?
Thanks
7 years, 2 months
qemu-kvm images corruption
by Nicolas Ecarnot
TL;DR:
How to avoid images corruption?
Hello,
On two of our old 3.6 DCs, a recent series of VM migrations led to some
issues:
- I'm putting a host into maintenance mode
- most of the VM are migrating nicely
- one remaining VM never migrates, and the logs are showing :
* engine.log : "...VM has been paused due to I/O error..."
* vdsm.log : "...Improbable extension request for volume..."
After digging amongst the RH BZ tickets, I saved the day by :
- stopping the VM
- lvchange -ay the adequate /dev/...
- qemu-img check [-r all] /rhev/blahblah
- lvchange -an...
- boot the VM
- enjoy!
Yesterday this worked for a VM where only one error occurred on the qemu
image, and the repair was easily done by qemu-img.
Today, facing the same issue on another VM, it failed because the errors
were very numerous, and also because of this message :
[...]
Rebuilding refcount structure
ERROR writing refblock: No space left on device
qemu-img: Check failed: No space left on device
[...]
The PV/VG/LV are far from being full, so I don't know where to look.
I tried many ways to solve it, but I'm not comfortable at all with qemu
images, corruption and repair, so I ended up exporting this VM (to an
NFS export domain) and importing it into another DC: this had the side
effect of using qemu-img convert from qcow2 to qcow2, and (maybe?????)
of solving some errors???
I also copied it into another qcow2 file in the same qemu-img convert
way, and it leads to another clean qcow2 image without errors.
I saw that on 4.x some bugs are fixed about VM migrations, but this is
not the point here.
I checked my SANs, my network layers, my blades, the OS (CentOS 7.2) of
my hosts, but I see nothing special.
The real reason behind my message is not to learn how to repair anything,
but rather to understand what could have led to this situation.
Where should I keep a keen eye?
--
Nicolas ECARNOT
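For the archives, one low-level check worth doing when a repair fails with
ENOSPC is comparing the image's sizes against the LV that backs it: a qcow2
whose metadata needs to grow past the end of the logical volume can fail the
refcount rebuild with exactly this error. A sketch, where VG/LV are
placeholders for the IDs from the symlink:
# lvchange -ay /dev/VG/LV
# qemu-img info /dev/VG/LV        # virtual size and disk size of the qcow2
# lvs --units b -o lv_name,lv_size VG   # actual size of the backing LV
# lvchange -an /dev/VG/LV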
7 years, 2 months
Official Hyperconverged Gluster oVirt upgrade procedure?
by Hanson
Hi Guys,
Just wondering if we have an updated manual, or what's the current
procedure for upgrading the nodes in a hyperconverged oVirt gluster pool?
Ie Nodes run 4.0 oVirt, as well as GlusterFS, and hosted-engine running
in a gluster storage domain.
Put node in maintenance mode and disable glusterfs from ovirt gui, run
yum update?
Thanks!
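For the archives, the commonly described per-node loop looks roughly like the
sketch below. It is not an official procedure; the volume name "engine" is a
placeholder, and the maintenance step is done in the UI with the "Stop
Gluster service" option ticked:
# one node at a time:
#   1. UI: put the host into maintenance (tick "Stop Gluster service")
yum update
reboot                          # if a new kernel or node image was pulled in
#   2. UI: activate the host, then wait for self-heal to finish
gluster volume heal engine info # repeat until no entries remain, then do the next node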
7 years, 3 months
[vdsm] status update: running containers alongside VMs
by Francesco Romani
Hi everyone,
I'm happy to share some progress about the former "convirt"[1] project,
which aims to let Vdsm containers alongside VMs, on bare metal.
In the last couple of months I kept updating the patch series, which
is approaching readiness to be merged into Vdsm.
Please read through this mail to see what the patchset can do now,
how you could try it *now*, even before it is merged.
Everyone is invited to share thoughts and ideas about how this effort
could evolve.
This will be a long mail; I will amend, enhance and polish the content
and make a blog post (on https://mojaves.github.io) to make it easier
to consume and to have some easy-to-find documentation. Later on the
same content will appear also on the oVirt blog.
Happy hacking!
+++
# How to try how the experimental container support for Vdsm.
Vdsm is gaining *experimental* support to run containers alongside VMs.
Vdsm has long had the ability to manage VMs which run containers,
and recently gained support for
[atomic guests](http://www.projectatomic.io/blog/2015/01/running-ovirt-guest-agent-as-privileged-container/).
With the new support we are describing, you will be able to manage containers
with the same, proven infrastructure that let you manage VMs.
This feature is currently being developed and it is still not merged in the
Vdsm codebase, so some extra work is needed if you want to try it out.
We are aiming to merge it in the oVirt 4.1.z cycle.
## What works, aka what to expect
The basic features are expected to work:
1. Run any docker image on the public docker registry
2. Make the container accessible from the outside (aka not just from localhost)
3. Use file-based storage for persistent volumes
## What does not yet work, aka what NOT to expect
Few things are planned and currently under active development:
1. Monitoring. Engine will not get any update from the container besides "VM" status (Up, Down...)
One important drawback is that you will not be told the IP of the container from Engine,
you will need to connect to the Vdsm host to discover it using standard docker tools.
2. Proper network integration. Some steps still need manual intervention
3. Stability and recovery - it's pre-alpha software after all! :)
## 1. Introduction and prerequisites
Trying out container support affects only the host and the Vdsm.
Besides adding a few custom properties (totally safe and supported since early
3.z), there are zero changes required to the DB and to Engine.
Nevertheless, we recommend to dedicate one oVirt 4.y environment,
or at least one 4.y host, to try out the container feature.
To get started, first thing you need is to setup a vanilla oVirt 4.y
installation. We will need to make changes to the Vdsm and to the
Vdsm host, so hosted engine and/or oVirt node may add extra complexity,
better to avoid them at the moment.
The reminder of this tutorial assumes you are using two hosts,
one for Vdsm (will be changed) and one for Engine (will require zero changes);
furthermore, we assume the Vdsm host is running on CentOS 7.y.
We require:
- one test host for Vdsm. This host need to have one NIC dedicated to containers.
We will use the [docker macvlan driver](https://raesene.github.io/blog/2016/07/23/Docker-MacVLAN/),
so this NIC *must not be* part of one bridge.
- docker >= 1.12
- oVirt >= 4.0.5 (Vdsm >= 4.18.15)
- CentOS >= 7.2
Docker >= 1.12 is available for download [here](https://docs.docker.com/engine/installation/linux/centos/)
Caveats:
1. docker from the official rpms conflicts with docker from CentOS, and has a different package name: docker-engine vs docker.
Please note that the kubernetes package from CentOS, for example, require 'docker', not 'docker-engine'.
2. you may want to replace the default service file
[with this one](https://github.com/mojaves/convirt/blob/master/patches/centos72/syst...
and to use this
[sysconfig file](https://github.com/mojaves/convirt/blob/master/patches/centos72/sys....
Here I'm just adding the storage options docker requires, much like the CentOS docker is configured.
Configuring docker like this can save you some troubleshooting, especially if you had docker from CentOS installed
on the testing box.
## 2. Patch Vdsm to support containers
You need to patch and rebuild Vdsm.
Fetch [this patch](https://github.com/mojaves/convirt/blob/master/patches/vdsm/4.18.1...
and apply it against Vdsm 4.18.15.1. Vdsm 4.18.15.{1,2,...} are supported as well.
Rebuild Vdsm and reinstall on your box.
[centos 7.2 packages are here](https://github.com/mojaves/convirt/tree/master/rpms/centos72)
Make sure you install the Vdsm command line client (vdsm-cli)
Restart *both* Vdsm and Supervdsm, make sure Engine still works flawlessly with patched Vdsm.
This ensures that no regression is introduced, and that your environment can run VMs just as before.
Now we can proceed adding the container support.
start docker:
# systemctl start docker-engine
(optional)
# systemctl enable docker-engine
Restart Vdsm again
# systemctl restart vdsm
Now we can check if Vdsm detects docker, so you can use it:
still on the same Vdsm host, run
$ vdsClient -s 0 getVdsCaps | grep containers
containers = ['docker', 'fake']
This means this Vdsm can run containers using 'docker' and 'fake' runtimes.
Ignore the 'fake' runtime; as the name suggests, is a test driver, kinda like /dev/null.
Now we need to make sure the host network configuration is fine.
### 2.1. Configure the docker network for Vdsm
PLEASE NOTE
that the suggested network configuration assumes that
* you have one network, `ovirtmgmt` (the default one) you use for everything
* you have one Vdsm host with at least two NICs, one bound to the `ovirtmgmt` network, and one spare
_This step is not yet automated by Vdsm_, so manual action is needed; Vdsm will take
care of this automatically in the future.
You can use
[this helper script](https://github.com/mojaves/convirt/blob/master/patches/vdsm/cont-...,
which reuses the Vdsm libraries. Make sure
you have patched Vdsm to support container before to use it.
Let's review what the script needs:
# ./cont-setup-net -h
usage: cont-setup-net [-h] [--name [NAME]] [--bridge [BRIDGE]]
[--interface [INTERFACE]] [--gateway [GATEWAY]]
[--subnet [SUBNET]] [--mask [MASK]]
optional arguments:
-h, --help show this help message and exit
--name [NAME] network name to use
--bridge [BRIDGE] bridge to use
--interface [INTERFACE]
interface to use
--gateway [GATEWAY] address of the gateway
--subnet [SUBNET] subnet to use
--mask [MASK] netmask to use
So we need to feed --name, --interface, --gateway, --subnet and optionally --mask (default, /24, is often fine).
For my case the default mask was indeed fine, so I used the script like this:
# ./cont-setup-net --name ovirtmgmt --interface enp3s0 --gateway 192.168.1.1 --subnet 192.168.1.0
This is the output I got:
DEBUG:virt.containers.runtime:configuring runtime 'docker'
DEBUG:virt.containers.command:* calling ['/bin/docker', 'network', 'inspect', 'ovirtmgmt']
Error: No such network: ovirtmgmt
DEBUG:virt.containers.command:* called ['/bin/docker', 'network', 'inspect', 'ovirtmgmt']
DEBUG:virt.containers.runtime.Docker:config: cannot load 'ovirtmgmt', ignored
DEBUG:virt.containers.command:* calling ['/bin/docker', 'network', 'create', '-d', 'macvlan', '--subnet=192.168.1.0/24', '--gateway=192.168.1.1', '--ip-range=192.168.1.0/24', '-o', 'parent=enp3s0', 'ovirtmgmt']
DEBUG:virt.containers.command:* called ['/bin/docker', 'network', 'create', '-d', 'macvlan', '--subnet=192.168.1.0/24', '--gateway=192.168.1.1', '--ip-range=192.168.1.0/24', '-o', 'parent=enp3s0', 'ovirtmgmt']
DEBUG:virt.containers.runtime:configuring runtime 'fake'
You can clearly see what the script did, and why it needed root privileges. Let's double-check using the docker tools:
# docker network ls
NETWORK ID NAME DRIVER SCOPE
91535f3425a8 bridge bridge local
d42f7e5561b5 host host local
621ab6dd49b1 none null local
f4b88e4a67eb ovirtmgmt macvlan local
# docker network inspect ovirtmgmt
[
{
"Name": "ovirtmgmt",
"Id": "f4b88e4a67ebb7886ec74073333d613b1893272530cae4d407c95ab587c5fea1",
"Scope": "local",
"Driver": "macvlan",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.1.0/24",
"IPRange": "192.168.1.0/24",
"Gateway": "192.168.1.1"
}
]
},
"Internal": false,
"Containers": {},
"Options": {
"parent": "enp3s0"
},
"Labels": {}
}
]
Looks good! The host configuration is complete. Let's move to the Engine side.
## 3. Configure Engine
As mentioned above, we now need to configure Engine. This boils down to:
Add a few custom properties for VMs:
In case you were already using custom properties, you need to amend the command
line to not overwrite your existing ones.
# engine-config -s UserDefinedVMProperties='volumeMap=^[a-zA-Z_-]+:[a-zA-Z_-]+$;containerImage=^[a-zA-Z]+(://|)[a-zA-Z]+$;containerType=^(docker|rkt)$' --cver=4.0
It is worth stressing that while the variables are container-specific,
the VM custom properties are a totally unintrusive and old concept in oVirt, so
this step is totally safe.
Now restart Engine to let it use the new variables:
# systemctl restart ovirt-engine
The next step is actually configure one "container VM" and run it.
## 4. Create the container "VM"
To finally run a container, you start creating a VM much like you always did, with
few changes
1. most of the hardware-related configuration isn't relevant for container "VMs",
besides cpu share and memory limits; this will be better documented in the
future; unneeded configuration will just be ignored
2. You need to set some custom properties for your container "VM". Those are
actually needed to enable the container flow, and they are documented in
the next section. You *need* to set at least `containerType` and `containerImage`.
### 4.2. Custom variables for container support
The container support needs some custom properties to be properly configured:
1. `containerImage` (*needed* to enable the container system).
Just select the target image you want to run. You can use the standard syntax of the
container runtimes.
2. `containerType` (*needed* to enable the container system).
Selects the container runtime you want to use. All the available options are always showed.
Please note that unavailable container options are not yet grayed out.
If you *do not* have rkt support on your host, you still can select it, but it won't work.
3. `volumeMap` key:value like. You can map one "VM" disk (key) to one container volume (value),
to have persistent storage. Only file-based storage is supported.
Example configuration:
`containerImage = redis`
`containerType = docker`
`volumeMap = vda:data` (this may not be needed, and the volume label is just for illustrative purposes)
### 4.3. A little bit of extra work: preload the images on the Vdsm host
This step is not needed by the flow, and will be handled by oVirt in the future.
The issue is how the container image are handled. They are stored by the container
management system (rkt, docker) on each host, and they are not pre-downloaded.
To shorten the duration of the first boot, you are advised to pre-download
the image(s) you want to run. For example
## on the Vdsm host you want to use with containers
# docker pull redis
## 5. Run the container "VM"
You are now all set to run your "VM" using oVirt Engine, just like any existing VM.
Some actions don't make sense for a container "VM", like live migration.
Engine won't stop you from trying those actions, but they will fail gracefully
with the standard errors.
## 6. Next steps
What to expect from this project in the future?
For the integration with Vdsm, we want to fix the existing known issues, most notably:
* add proper monitoring/reporting of the container health
* ensure proper integration of the container image store with oVirt storage management
* streamline the network configuration
What is explicitly excluded for now is any Engine change. This is a Vdsm-only change at the
moment, so fixing the following is currently unplanned:
* First and foremost, Engine will not distinguish between real VMs and container VMs.
Actions unavailable to container will not be hidden from UI. Same for monitoring
and configuration data, which will be ignored.
* Engine is NOT aware of the volumes one container can use. You must inspect and do the
mapping manually.
* Engine is NOT aware of the available container runtimes. You must select it carefully
Proper integration with Engine may be added in the future once this feature exits
from the experimental/provisional stage.
Thanks for reading, make sure to share your thoughts on the oVirt mailing lists!
+++
[1] we keep calling it that way _only_ internally, because it's a short
name we are used to. After the merge/once we release it, we will use
a different name, like "vdsm-containers" or something like it.
--
Francesco Romani
Red Hat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani
7 years, 3 months
Empty cgroup files on centos 7.3 host
by Florian Schmid
Hi,
I wanted to monitor disk IO and R/W on all of our oVirt CentOS 7.3 hypervisor
hosts, but it looks like all those files are empty.
ls -al /sys/fs/cgroup/blkio/machine.slice/machine-qemu\\x2d14\\x2dHostedEngine.scope/
total 0
drwxr-xr-x.  2 root root 0 May 30 10:09 .
drwxr-xr-x. 16 root root 0 Jun 26 09:25 ..
-r--r--r--. 1 root root 0 May 30 10:09 blkio.io_merged
-r--r--r--. 1 root root 0 May 30 10:09 blkio.io_merged_recursive
-r--r--r--. 1 root root 0 May 30 10:09 blkio.io_queued
-r--r--r--. 1 root root 0 May 30 10:09 blkio.io_queued_recursive
-r--r--r--. 1 root root 0 May 30 10:09 blkio.io_service_bytes
-r--r--r--. 1 root root 0 May 30 10:09 blkio.io_service_bytes_recursive
-r--r--r--. 1 root root 0 May 30 10:09 blkio.io_serviced
-r--r--r--. 1 root root 0 May 30 10:09 blkio.io_serviced_recursive
-r--r--r--. 1 root root 0 May 30 10:09 blkio.io_service_time
-r--r--r--. 1 root root 0 May 30 10:09 blkio.io_service_time_recursive
-r--r--r--. 1 root root 0 May 30 10:09 blkio.io_wait_time
-r--r--r--. 1 root root 0 May 30 10:09 blkio.io_wait_time_recursive
-rw-r--r--. 1 root root 0 May 30 10:09 blkio.leaf_weight
-rw-r--r--. 1 root root 0 May 30 10:09 blkio.leaf_weight_device
--w-------. 1 root root 0 May 30 10:09 blkio.reset_stats
-r--r--r--. 1 root root 0 May 30 10:09 blkio.sectors
-r--r--r--. 1 root root 0 May 30 10:09 blkio.sectors_recursive
-r--r--r--. 1 root root 0 May 30 10:09 blkio.throttle.io_service_bytes
-r--r--r--. 1 root root 0 May 30 10:09 blkio.throttle.io_serviced
-rw-r--r--. 1 root root 0 May 30 10:09 blkio.throttle.read_bps_device
-rw-r--r--. 1 root root 0 May 30 10:09 blkio.throttle.read_iops_device
-rw-r--r--. 1 root root 0 May 30 10:09 blkio.throttle.write_bps_device
-rw-r--r--. 1 root root 0 May 30 10:09 blkio.throttle.write_iops_device
-r--r--r--. 1 root root 0 May 30 10:09 blkio.time
-r--r--r--. 1 root root 0 May 30 10:09 blkio.time_recursive
-rw-r--r--. 1 root root 0 May 30 10:09 blkio.weight
-rw-r--r--. 1 root root 0 May 30 10:09 blkio.weight_device
-rw-r--r--. 1 root root 0 May 30 10:09 cgroup.clone_children
--w--w--w-. 1 root root 0 May 30 10:09 cgroup.event_control
-rw-r--r--. 1 root root 0 May 30 10:09 cgroup.procs
-rw-r--r--. 1 root root 0 May 30 10:09 notify_on_release
-rw-r--r--. 1 root root 0 May 30 10:09 tasks
I thought I could get the values I need from there, but all files are empty.
Looking at this post: http://lists.ovirt.org/pipermail/users/2017-January/079011.html
this should work.
Is this normal on centos 7.3 with oVirt installed? How can I get those values, without monitoring all VMs directly?
oVirt Version we use:
4.1.1.8-1.el7.centos
BR Florian
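For what it's worth, libvirt keeps per-disk I/O counters that don't depend on
those cgroup files, so they can be read on the host without touching the VMs.
A quick check as a sketch; the domain name and the disk alias are placeholders:
# list the running domains through a read-only connection
virsh -r list
# read/write requests and bytes for one disk of one domain
virsh -r domblkstat HostedEngine vda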
7 years, 4 months
Reg: Ovirt mouse not responding
by syedquadeer@ctel.in
Dear Team,
I am using oVirt 3.x on a 3-node CentOS cluster, on which Ubuntu 14.04
64-bit VMs are installed. The end users who are using these VMs
are facing some issues daily; the issues are mentioned below:
1. The keyboard randomly stops responding; after checking the
log file in the VM, it shows a psmouse sync issue.
2. If the VM is restarted it gives a black screen; then the VM needs to be
powered off and started again.
Please provide a solution for the above issues. Thanks in advance...
--
Thanks & Regards,
Syed Abdul Qadeer.
7660022818.
7 years, 4 months
Re: [ovirt-users] iSCSI domain on 4kn drives
by Martijn Grendelman
On 7-8-2016 at 8:19, Yaniv Kaul wrote:
>
> On Fri, Aug 5, 2016 at 4:42 PM, Martijn Grendelman
> <martijn.grendelman(a)isaac.nl <mailto:martijn.grendelman@isaac.nl>> wrote:
>
> On 4-8-2016 at 18:36, Yaniv Kaul wrote:
>> On Thu, Aug 4, 2016 at 11:49 AM, Martijn Grendelman
>> <martijn.grendelman(a)isaac.nl
>> <mailto:martijn.grendelman@isaac.nl>> wrote:
>>
>> Hi,
>>
>> Does oVirt support iSCSI storage domains on target LUNs using
>> a block
>> size of 4k?
>>
>>
>> No, we do not - not if it exposes 4K blocks.
>> Y.
>
> Is this on the roadmap?
>
>
> Not in the short term roadmap.
> Of course, patches are welcome. It's mainly in VDSM.
> I wonder if it'll work in NFS.
> Y.
I don't think I ever replied to this, but I can confirm that in RHEV 3.6
it works with NFS.
Best regards,
Martijn.
7 years, 5 months
Engine crash, storage won't activate, hosts won't shutdown, template locked, gpu passthrough failed
by M R
Hello!
I have been using Ovirt for last four weeks, testing and trying to get
things working.
I have collected here the problems I have found and this might be a bit
long but help to any of these or maybe to all of them from several people
would be wonderful.
My version is ovirt node 4.1.5 and 4.1.6 downloaded from website latest
stable release at the time. Also tested with CentOS minimal +ovirt repo. In
this case, 3. is solved, but other problems persist.
1. Power off host
First day after installing ovirt node, it was able to reboot and shutdown
clean. No problems at all. After few days of using ovir, I have noticed
that hosts are unable to shutdown. I have tested this in several different
ways and come to the following conclusion. IF engine has not been started
after boot, all hosts are able to shutdown clean. But if engine is started
even once, none of the hosts are able to shutdown anymore. The only way to
get power off is to unplug or press power button for a longer time as hard
reset. I have failed to find a way to have the engine running and then
shutdown host. This effects to all hosts in the cluster.
2. Glusterfs failed
Every time I have booted hosts, glusterfs has failed. For some reason, it
turns inactive state even if I have setup systemctl enable glusterd. Before
this command it was just inactive. After this command, it will say "failed
(inactive). There is still a way to get glusterfs working. I have to give
command systemctl start glusterd manually and everything starts working.
Why do I have to give manual commands to start glusterfs? I have used this
for CentOS before and never had this problem before. Node installer is that
much different from the CentOS core?
3. Epel
As I said that I have used CentOS before, I would like to able to install
some packets from repo. But even if I install epel-release, it won't find
packets such as nano or htop. I have read about how to add epel-release to
ovirt node from here: https://www.ovirt.org/release/4.1.1/#epel
I have tested even manually edit repolist, but it will fail to find normal
epel packets. I have setup additional exclude=collectd* as guided in the
link above. This doesn't make any difference. All being said I am able to
install manually packets which are downloaded with other CentOS machine and
transferred with scp to ovirt node. Still, this once again needs a lot of
manual input and is just a workaround for the bug.
4. Engine startup
When I try to start the engine when glusterfs is up, it will say vm doesn't
exist, starting up. Still, it won't startup automatically. I have to give
several times command hosted-engine --vm-start. I wait for about 5minutes
until I give it next time. This will take usually about 30minutes and then
randomly. Completely randomly after one of the times, I give this command
engine shoots up and is up in 1minute. This has happened every time I boot
up. And the times that I have to give a command to start the engine, has
been changing. At best it's been 3rd time at worst it has been 7th time.
Calculating from there it might take from 15minutes to 35minutes to get the
engine up.Nevertheless, it will eventually come up every time. If there is
a way to get it up on the first try or even better, automatically up, it
would be great.
5. Activate storage
Once the engine is up, there has been a problem with storage. When I go to
storage tab, it will show all sources red. Even if I wait for 15~20minutes,
it won't get storage green itself. I have to go and press active button
from main data storage. Then it will get main storage up in
2~3munutes.Sometimes it fails it once, but will definitely get main data
storage up on the seconds try. And then magically at the same time all
other storages instantly go green. Main storage is glusterfs and I have 3
NFS storages as well. This is only a problem when starting up and once
storages are on green they stay green. Still annoying that it cannot get it
done by itself.
6.Template locked
I try to create a template from existing VM and it resulted in original VM
going into locked state and template being locked. I have read that some
other people had a similar problem and they were suggested to restart
engine to see if it solves it. For me it has been now a week and several
restarts of engine and hosts, but there is still one VM locked and template
locked as well. This is not a big problem, but still a problem. Everything
is grey and cannot delete this bugged VM or template.
7. unable to use GPU
I have been trying to do GPU passthrough with my VM. First, there was a
problem with qemu cmd line, but once I figure out a way to get commands, it
maybe is working(?). Log shows up fine, but it still doesn't give
functionality I¨m looking for. As I mentioned in the other email that I
have found this: https://www.mail-archive.com/users@ovirt.org/msg40422.html
. It will give right syntax in log, but still, won't fix error 43 with
nvidia drivers. If anybody got this working or has ideas how to do it,
would really like to know how it's done properly. I have also tested with
AMD graphics cards such as vega, but as soon as drivers have installed, I
will get a black screen. Even if I restart VM or hosts or both. I will only
see black screen and unable to use VM at all. I might be able to live with
the other six things listed above, but this one is a bit of a problem for
me. My use of VMs will eventually need graphical performance and therefore
I will have to get this working or find an alternative to ovirt..I have
found several things that I really like in ovirt and would prefer to use
it.
Best regards
Mikko
7 years, 6 months
VM remote noVNC console
by Alex K
Hi all,
I am trying to get the VM console of a VM through SSH socks proxy.
This is a scenario I will frequently face, as the ovirt cluster will be
available only though a remote SSH tunnel.
I am trying several console options without success.
With SPICE or VNC I get issue with virt-viewer saying "Unable to connect to
libvirt with URI [none]'
With noVNC I get a separate tab on browser where it is stuck showing
"loading".
Has anyone had success with this kind of remote console access?
Thanx,
Alex
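For reference, a typical dynamic-forward setup for this scenario is sketched
below. Host names and the local port are placeholders; the browser has to be
pointed at the SOCKS proxy so that both the portal and the websocket proxy
(port 6100 on the engine host by default, if unchanged) are reached through
the tunnel:
# open a SOCKS5 proxy on localhost:1080 through the remote gateway
ssh -D 1080 -N user@remote-gateway.example.com
# then configure the browser to use a SOCKS5 proxy at localhost:1080
# (with remote DNS, so the engine FQDN resolves on the far side)
# and open the engine portal / noVNC tab as usual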
7 years, 6 months
LVM structure
by Nicolas Ecarnot
Hello,
I'm still coping with my qemu image corruption, and I'm following some
Redhat guidelines that explains the way to go :
- Start the VM
- Identify the host
- On this host, run the ps command to identify the disk image location :
# ps ax|grep qemu-kvm|grep vm_name
- Look for "-drive
file=/rhev/data-center/00000001-0001-0001-0001-00000000033e/b72773dc-c99c-472a-9548-503c122baa0b/images/91bfb2b4-5194-4ab3-90c8-3c172959f712/e7174214-3c2b-4353-98fd-2e504de72c75"
(YMMV)
- Resolve this symbolic link
# ls -la
/rhev/data-center/00000001-0001-0001-0001-00000000033e/b72773dc-c99c-472a-9548-503c122baa0b/images/91bfb2b4-5194-4ab3-90c8-3c172959f712/e7174214-3c2b-4353-98fd-2e504de72c75
lrwxrwxrwx 1 vdsm kvm 78 3 oct. 2016
/rhev/data-center/00000001-0001-0001-0001-00000000033e/b72773dc-c99c-472a-9548-503c122baa0b/images/91bfb2b4-5194-4ab3-90c8-3c172959f712/e7174214-3c2b-4353-98fd-2e504de72c75
->
/dev/b72773dc-c99c-472a-9548-503c122baa0b/e7174214-3c2b-4353-98fd-2e504de72c75
- Shutdown the VM
- On the SPM, activate the logical volume :
# lvchange -ay
/dev/b72773dc-c99c-472a-9548-503c122baa0b/e7174214-3c2b-4353-98fd-2e504de72c75
- Verify the state of the qemu image :
# qemu-img check
/dev/b72773dc-c99c-472a-9548-503c122baa0b/e7174214-3c2b-4353-98fd-2e504de72c75
- If needed, attempt a repair :
# qemu-img check -r all /dev/...
- In any case, deactivate the LV :
# lvchange -an /dev/...
I followed these steps tens of times, and finding the LV and activating
it was obvious and successful.
Since yesterday, I'm finding some VMs on which these steps are not
working: I can identify the symbolic link, but neither the SPM nor the host
is able to find the LV device, and thus cannot activate it:
# lvchange -ay
/dev/de2fdaa0-6e09-4dd2-beeb-1812318eb893/ce13d349-151e-4631-b600-c42b82106a8d
Failed to find logical volume
"de2fdaa0-6e09-4dd2-beeb-1812318eb893/ce13d349-151e-4631-b600-c42b82106a8d"
Either I need two more coffees, or I may be missing a step or
something to check.
Looking at the SPM's /dev/disk/* structure, it looks very sound (I
can see my three storage domains' dm-name-* series of links).
As the VM can nicely be run and stopped, does the host activate
something more before the VM is launched?
--
Nicolas ECARNOT
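For the archives, when an LV "disappears" like this it is worth checking
whether the host sees the volume group at all before blaming the link; an LVM
filter in /etc/lvm/lvm.conf hiding the multipath devices is one possible
cause. A quick check, reusing the IDs from above:
# rescan for volume groups, then look for the VG and LV by name
vgscan
vgs de2fdaa0-6e09-4dd2-beeb-1812318eb893
lvs de2fdaa0-6e09-4dd2-beeb-1812318eb893 | grep ce13d349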
7 years, 6 months
Re: [ovirt-users] How to import a qcow2 disk into ovirt
by Martín Follonier
Hi,
I've followed all the recommendations in this thread, and I'm still getting the
"Paused by System" message just after the transfer starts.
Honestly I don't know where else to look, because I don't find any log
entry or packet capture that gives me a hint about what is happening.
I'll appreciate any help! Thank you in advance!
Regards
Martin
On Thu, Sep 1, 2016 at 5:01 PM, Amit Aviram <aavi...(a)redhat.com> wrote:
> You can do both,
> Through the database, the table is "vdc_options". change "option_value"
> where "option_name" = 'ImageProxyAddress' .
>
> On Thu, Sep 1, 2016 at 4:56 PM, Gianluca Cecchi <gianluca.cec...(a)gmail.com
> > wrote:
>
>> On Thu, Sep 1, 2016 at 3:53 PM, Amit Aviram <aavi...(a)redhat.com> wrote:
>>
>>> You can just replace this value in the DB and change it to the right
>>> FQDN, it is a config value named "ImageProxyAddress". replace "localhost"
>>> with the right address (notice that the port is there too).
>>>
>>> If this will keep happen after users will have the latest version, we
>>> will have to open a bug and fix whatever causes the URL to be "localhost".
>>>
>>>
>> Do you mean through "engine-config" or directly into database?
>> In this second case which is the table involved?
>>
>> Gianluca
>>
>
>
[root@ractorshe bin]# systemctl stop ovirt-imageio-proxy
engine=# select * from vdc_options where option_name='ImageProxyAddress';
option_id | option_name | option_value | version
-----------+-------------------+-----------------+---------
950 | ImageProxyAddress | localhost:54323 | general
(1 row)
engine=# update vdc_options set option_value='ractorshe.mydomain:54323'
where option_name='ImageProxyAddress';
UPDATE 1
engine=# select * from vdc_options where option_name='ImageProxyAddress';
 option_id |    option_name    |       option_value       | version
-----------+-------------------+--------------------------+---------
       950 | ImageProxyAddress | ractorshe.mydomain:54323 | general
(1 row)
engine=#
engine=# select * from vdc_options where option_name='ImageProxyAddress';
 option_id |    option_name    |       option_value       | version
-----------+-------------------+--------------------------+---------
       950 | ImageProxyAddress | ractorshe.mydomain:54323 | general
(1 row)
systemctl stop ovirt-engine
(otherwise it remained localhost)
systemctl start ovirt-engine
systemctl start ovirt-imageio-proxy
Now transfer is ok.
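A note for the archives: the same value can presumably be set with
engine-config instead of editing the DB directly, assuming the key is
exposed there (I did not verify this):
# engine-config -s ImageProxyAddress=ractorshe.mydomain:54323
# systemctl restart ovirt-engine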
I tried a qcow2 disk configured as 40Gb but containing about 1.6Gb of data.
I'm going to connect it to a VM and see if all is ok also from a contents
point of view.
Gianluca
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
7 years, 6 months
Hosted engine setup question
by Demeter Tibor
Hi,
I just installed a hosted-engine-based four-node cluster on gluster storage.
It seems to be working fine, but I have some questions about it.
- I would like to make my own cluster and datacenter. Is it possible to remove a host and re-add it to another cluster while it is running the hosted engine?
- Is it possible to remove the default datacenter without any problems?
- I have a production oVirt cluster that is based on the 3.5 series. It is using a shared NFS storage. Is it possible to migrate VMs from 3.5 to 4.1 by detaching the shared storage from the old cluster and attaching it to the new cluster?
- If yes, what will happen with the VM properties? For example MAC addresses, limits, etc. Will those be migrated or not?
Thanks in advance,
Regards
Tibor
7 years, 6 months
Help with Power Management network
by ~Stack~
Greetings,
I hit up the IRC earlier, but only crickets. Guess no one wants to stick
around late on a Friday night. :-D
I'm an ovirt newb here. I've been going through the docs setting up 4.1
on Scientific Linux 7.4. For the most part everything is going well once
I learn how to do it. I'm, however, stuck on power management.
I have multiple networks:
192.168.1.x is my BMC/ilo network. The security team wants as few entry
points into this as possible and wants as much segregation as possible.
192.168.2.x is my "management" access network. For my other machines on
this network this means admin-SSH/rsyslog/SaltStack configuration
management/etc.
192.168.3.x is my high speed network where my NFS storage sits and
applications that need the bandwidth do their thing.
10.10.86.x is my "public" access
All networks are configured on the Host network settings. Mostly
confident I got it right...at least each network/IP matches the right
interface. ;-)
Right now I only have the engine server and one hypervisor. On either
host I can ssh into the command line and run "fence_ipmilan -a
192.168.1.x -l USER -p PASS -o status -v -P" and it works, all is good.
However, when I try to add it in the ovirt interface I get an error. :-/
Edit Host -> Power Management:
Address: 192.168.1.14
User Name: root
Password: SorryCantTellYou
Type: ipmilan
Options: <blank>
Test
Test failed: Failed to run fence status-check on host '192.168.2.14'. No
other host was available to serve as proxy for the operation.
Yes, same host because I only have one right now. :-)
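If I read that error correctly, it is about proxy selection rather than
IPMI itself: the engine asks *another* host to run the fence check, so
with a single host there is nothing to proxy through. My assumption
(untested) is that the same settings will pass once a second host is up,
with at most something like this in Options if the BMC insists on lanplus:
lanplus=1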
Any help or guidance would be much appreciated. In the meantime I'm
going back to the docs to poke at a few other things I need to figure
out. :-)
Thanks!
~Stack~
7 years, 6 months
libvirt: XML-RPC error : authentication failed: Failed to start SASL
by Ozan Uzun
Hello,
Today I updated my oVirt engine v3.5 and all my hosts in one datacenter
(the CentOS 7.4 ones), and suddenly my vdsm and vdsm-network services
stopped working.
btw: My other DC is CentOS 6 based (managed from the same oVirt engine);
everything works just fine there.
vdsm fails as a dependent of the vdsm-network service, with lots of RPC errors.
I tried to run vdsm-tool configure --force, deleted everything
(vdsm-libvirt), and reinstalled.
Could not make it work.
My logs are filled with the following:
Sep 18 23:06:01 node6 python[5340]: GSSAPI Error: Unspecified GSS failure.
Minor code may provide more information (No Kerberos credentials available
(default cache: KEYRING:persistent:0))
Sep 18 23:06:01 node6 vdsm-tool[5340]: libvirt: XML-RPC error :
authentication failed: Failed to start SASL negotiation: -1 (SASL(-1):
generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may
provide more information (No Kerberos credent
Sep 18 23:06:01 node6 libvirtd[4312]: 2017-09-18 20:06:01.954+0000: 4312:
error : virNetSocketReadWire:1808 : End of file while reading data:
Input/output error
-------
journalctl -xe output for vdsm-network
Sep 18 23:06:02 node6 vdsm-tool[5340]: libvirt: XML-RPC error :
authentication failed: Failed to start SASL negotiation: -1 (SASL(-1):
generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may
provide more information (No Kerberos credent
Sep 18 23:06:02 node6 vdsm-tool[5340]: Traceback (most recent call last):
Sep 18 23:06:02 node6 vdsm-tool[5340]: File "/usr/bin/vdsm-tool", line 219,
in main
Sep 18 23:06:02 node6 libvirtd[4312]: 2017-09-18 20:06:02.558+0000: 4312:
error : virNetSocketReadWire:1808 : End of file while reading data:
Input/output error
Sep 18 23:06:02 node6 vdsm-tool[5340]: return
tool_command[cmd]["command"](*args)
Sep 18 23:06:02 node6 vdsm-tool[5340]: File
"/usr/lib/python2.7/site-packages/vdsm/tool/upgrade_300_networks.py", line
83, in upgrade_networks
Sep 18 23:06:02 node6 vdsm-tool[5340]: networks = netinfo.networks()
Sep 18 23:06:02 node6 vdsm-tool[5340]: File
"/usr/lib/python2.7/site-packages/vdsm/netinfo.py", line 112, in networks
Sep 18 23:06:02 node6 vdsm-tool[5340]: conn = libvirtconnection.get()
Sep 18 23:06:02 node6 vdsm-tool[5340]: File
"/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 159, in
get
Sep 18 23:06:02 node6 vdsm-tool[5340]: conn = _open_qemu_connection()
Sep 18 23:06:02 node6 vdsm-tool[5340]: File
"/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 95, in
_open_qemu_connection
Sep 18 23:06:02 node6 vdsm-tool[5340]: return utils.retry(libvirtOpen,
timeout=10, sleep=0.2)
Sep 18 23:06:02 node6 vdsm-tool[5340]: File
"/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1108, in retry
Sep 18 23:06:02 node6 vdsm-tool[5340]: return func()
Sep 18 23:06:02 node6 vdsm-tool[5340]: File
"/usr/lib64/python2.7/site-packages/libvirt.py", line 105, in openAuth
Sep 18 23:06:02 node6 vdsm-tool[5340]: if ret is None:raise
libvirtError('virConnectOpenAuth() failed')
Sep 18 23:06:02 node6 vdsm-tool[5340]: libvirtError: authentication failed:
Failed to start SASL negotiation: -1 (SASL(-1): generic failure: GSSAPI
Error: Unspecified GSS failure. Minor code may provide more information
(No Kerberos credentials availa
Sep 18 23:06:02 node6 systemd[1]: vdsm-network.service: control process
exited, code=exited status=1
Sep 18 23:06:02 node6 systemd[1]: Failed to start Virtual Desktop Server
Manager network restoration.
-----
libvirt is running but throws some errors.
[root@node6 ~]# systemctl status libvirtd
● libvirtd.service - Virtualization daemon
Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled;
vendor preset: enabled)
Drop-In: /etc/systemd/system/libvirtd.service.d
└─unlimited-core.conf
Active: active (running) since Mon 2017-09-18 23:15:47 +03; 19min ago
Docs: man:libvirtd(8)
http://libvirt.org
Main PID: 6125 (libvirtd)
CGroup: /system.slice/libvirtd.service
└─6125 /usr/sbin/libvirtd --listen
Sep 18 23:15:56 node6 libvirtd[6125]: 2017-09-18 20:15:56.195+0000: 6125:
error : virNetSocketReadWire:1808 : End of file while reading data:
Input/output error
Sep 18 23:15:56 node6 libvirtd[6125]: 2017-09-18 20:15:56.396+0000: 6125:
error : virNetSocketReadWire:1808 : End of file while reading data:
Input/output error
Sep 18 23:15:56 node6 libvirtd[6125]: 2017-09-18 20:15:56.597+0000: 6125:
error : virNetSocketReadWire:1808 : End of file while reading data:
Input/output error
----------------
[root@node6 ~]# virsh
Welcome to virsh, the virtualization interactive terminal.
Type: 'help' for help with commands
'quit' to quit
virsh # list
error: failed to connect to the hypervisor
error: authentication failed: Failed to start SASL negotiation: -1
(SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor
code may provide more information (No Kerberos credentials available
(default cache: KEYRING:persistent:0)))
=================
I do not want to lose all my virtual servers; is there any way to recover
them? Currently everything is down. I am ok with installing a new oVirt
engine if somehow I can restore my virtual servers. I can also split the
CentOS 6 and CentOS 7 hosts into separate oVirt engines.
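P.S. One thing I still have to test, based on what I read about libvirt 3.2
changing its default SASL mechanism to GSSAPI - this is an assumption on my
part, not a confirmed fix, and the mech name vdsm expects may differ:
# grep mech_list /etc/sasl2/libvirt.conf
# vdsm-tool configure --module libvirt --force
# systemctl restart libvirtd vdsmd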
7 years, 6 months
iSCSI VLAN host connections - bond or multipath & IPv6
by Ben Bradley
Hi All
I'm looking to add a new host to my oVirt lab installation.
I'm going to share out some LVs from a separate box over iSCSI and will
hook the new host up to that.
I have 2 NICs on the storage host and 2 NICs on the new Ovirt host to
dedicate to the iSCSI traffic.
I also have 2 separate switches so I'm looking for redundancy here. Both
iSCSI host and oVirt host plugged into both switches.
If this was non-iSCSI traffic and without oVirt I would create bonded
interfaces in active-backup mode and layer the VLANs on top of that.
But for iSCSI traffic without oVirt involved I wouldn't bother with a
bond and just use multipath.
From scanning the oVirt docs it looks like there is an option to have
oVirt configure iSCSI multipathing.
So what's the best/most-supported option for oVirt?
Manually create active-backup bonds so oVirt just sees a single storage
link between host and storage?
Or leave them as separate interfaces on each side and use oVirt's
multipath/bonding?
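For context, on the non-oVirt setups I mentioned I verify the
separate-interface approach with the usual commands below; I assume the
same checks apply on an oVirt host, since vdsm uses standard
iscsiadm/multipathd underneath:
# iscsiadm -m session -P 3 | egrep 'Iface|Current Portal'
# multipath -ll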
Also I quite like the idea of using IPv6 for the iSCSI VLAN, purely down
to the fact I could use link-local addressing and not have to worry
about setting up static IPv4 addresses or DHCP. Is IPv6 iSCSI supported
by oVirt?
Thanks, Ben
7 years, 6 months
Re: [ovirt-users] 4.2 downgrade
by Yaniv Kaul
On Sep 30, 2017 8:09 AM, "Ryan Mahoney" <ryan(a)beaconhillentertainment.com>
wrote:
Accidentally upgraded a 4.0 environment to 4.2 (didn't realize the "master"
repo was development repo). What's my chances/best way if possible to roll
back to 4.0 (or 4.1 for that matter).
There is no rollback of an oVirt installation.
That being said, I believe the Alpha quality is good. It is not feature
complete and we of course have more polishing to do, but it's very usable
and we will continue to ship updates to it. Let us know promptly what
issues you encounter.
Y.
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
7 years, 6 months
DR with oVirt: no data on OVF_STORE
by Luca 'remix_tj' Lorenzetto
Hello,
i'm experimenting a DR architecture that involves block storage
replication from the storage side (EMC VNX 8000).
Our idea is to import the replicated data storage domain on another
datacenter, managed by another engine, with the option "Import Domain"
and then import all the vms contained.
The idea works, but we encountered an issue that we don't want to have
again: we imported an SD and *no* VMs were listed in the "VM
Import" tab. Disks were available, but no VM information.
What we did:
- on storage side: split the replica between the main disk in Site A
and the secondary disk Site B
- on storage side: added the disk to the storage group of the
"recovery" cluster
- from engine UI: Imported storage domain, confirming that i want to
activate even if seems to be attached to another DC
- from engine UI: move out from maintenance the storage domain and
click on the "VM Import" tab of the new SD.
What happened: *no* vm were listed
To better identify what's happening, I found here some indications
on how LVM for block storage works, and I identified the commands to
find and read the OVF_STORE.
Looking inside the OVF_STORE showed why no VMs were listed: it was
empty (no .ovf files listed with tar tvf).
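For reference, the check itself boiled down to something like this (the
VG/LV names below are placeholders for the OVF_STORE volume I located,
not literal values):
# lvchange -ay /dev/<vg_uuid>/<ovf_store_lv>
# tar tvf /dev/<vg_uuid>/<ovf_store_lv>
# lvchange -an /dev/<vg_uuid>/<ovf_store_lv>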
So, without the possibility import vms, i did a rollback, detaching
the storage domain and re-establishing the replication between the
main site and the DR site.
Then, after a day of replications (secondary volume is aligned every
30 minutes), i tried again and i've been able to import also vms
(OVF_STORE was populated).
So my question is: how do I force the OVF_STORE to be updated at
least as frequently as the replication? I want to have the VM disks
replicated to the remote site along with the VM OVF information.
Is it possible to have the OVF_STORE information updated when a VM is
created/edited, or with a scheduled task? Is this too I/O expensive?
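While digging I also noticed the OvfUpdateIntervalInMinutes engine-config
key; assuming it does what its name suggests (I have not verified this),
aligning it with the 30-minute replica would look like:
# engine-config -s OvfUpdateIntervalInMinutes=30
# systemctl restart ovirt-engine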
Thank you,
Luca
--
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)
"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)
Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <lorenzetto.luca(a)gmail.com>
7 years, 6 months
oVirt 4.2 hosted-engine command damaged
by Julián Tete
I updated my lab environment from oVirt 4.1.x to oVirt 4.2 Alpha.
The hosted-engine command is now broken.
An example:
hosted-engine --vm-status
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/vm_status.py", line 213, in <module>
    if not status_checker.print_status():
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/vm_status.py", line 110, in print_status
    all_host_stats = self._get_all_host_stats()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/vm_status.py", line 75, in _get_all_host_stats
    all_host_stats = ha_cli.get_all_host_stats()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 154, in get_all_host_stats
    return self.get_all_stats(self.StatModes.HOST)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 99, in get_all_stats
    stats = broker.get_stats_from_storage(service)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 147, in get_stats_from_storage
    for host_id, data in six.iteritems(result):
  File "/usr/lib/python2.7/site-packages/six.py", line 599, in iteritems
    return d.iteritems(**kw)
AttributeError: 'NoneType' object has no attribute 'iteritems'
hosted-engine --set-maintenance --mode=none
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/set_maintenance.py", line 88, in <module>
    if not maintenance.set_mode(sys.argv[1]):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/set_maintenance.py", line 76, in set_mode
    value=m_global,
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 240, in set_maintenance_mode
    str(value))
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 187, in set_global_md_flag
    all_stats = broker.get_stats_from_storage(service)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 147, in get_stats_from_storage
    for host_id, data in six.iteritems(result):
  File "/usr/lib/python2.7/site-packages/six.py", line 599, in iteritems
    return d.iteritems(**kw)
AttributeError: 'NoneType' object has no attribute 'iteritems'
hosted-engine --vm-start
VM exists and its status is Up
Hardware
Manufacturer: HP
Family: ProLiant
Product Name: ProLiant BL460c Gen8
CPU Model Name: Intel (R) Xeon (R) CPU E5-2667 v2 @ 3.30GHz
CPU Type: Intel SandyBridge Family
CPU Sockets: 2
CPU Cores per Socket: 8
CPU Threads per Core: 2 (SMT Enabled)
Software:
OS Version: RHEL - 7 - 4.1708.el7.centos
OS Description: CentOS Linux 7 (Core)
Kernel Version: 4.12.0 - 1.el7.elrepo.x86_64
KVM Version: 2.9.0 - 16.el7_4.5.1
LIBVIRT Version: libvirt-3.2.0-14.el7_4.3
VDSM Version: vdsm-4.20.3-95.git0813890.el7.centos
SPICE Version: 0.12.8 - 2.el7.1
GlusterFS Version: glusterfs-3.12.1-2.el7
CEPH Version: librbd1-0.94.5-2.el7
7 years, 6 months
Re: [ovirt-users] Engine crash, storage won't activate, hosts won't shutdown, template locked, gpu passthrough failed
by Yaniv Kaul
On Sep 30, 2017 7:50 PM, "M R" <gr8nextmail(a)gmail.com> wrote:
Hello!
I have been using Ovirt for last four weeks, testing and trying to get
things working.
I have collected here the problems I have found and this might be a bit
long but help to any of these or maybe to all of them from several people
would be wonderful.
It's a bit difficult and inefficient to list all issues in a single post -
unless you feel they are related?
Also, it'd be challenging to understand them without logs.
Lastly, it's usually a good habit, when something doesn't work, to solve it
rather than continue. I do suspect your issues are somehow related.
Y.
My version is oVirt Node 4.1.5 and 4.1.6, downloaded from the website as the
latest stable release at the time. Also tested with CentOS minimal + the
oVirt repo; in this case, issue 3 is solved, but the other problems persist.
1. Power off host
On the first day after installing oVirt Node, it was able to reboot and shut
down cleanly. No problems at all. After a few days of using oVirt, I have
noticed that hosts are unable to shut down. I have tested this in several
different ways and come to the following conclusion. IF the engine has not
been started after boot, all hosts are able to shut down cleanly. But if the
engine is started even once, none of the hosts are able to shut down anymore.
The only way to power off is to unplug or press the power button for a longer
time as a hard reset. I have failed to find a way to have the engine running
and then shut down a host. This affects all hosts in the cluster.
2. Glusterfs failed
Every time I have booted the hosts, glusterfs has failed. For some reason, it
turns to inactive state even though I have run systemctl enable glusterd.
Before this command it was just inactive; after this command, it will say
"failed (inactive)". There is still a way to get glusterfs working: I have to
give the command systemctl start glusterd manually and everything starts
working. Why do I have to give manual commands to start glusterfs? I have
used this on CentOS before and never had this problem. Is the Node installer
that much different from the CentOS core? (My manual workaround, plus the
check I use, is below.)
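For reference, this is what I run manually after every boot, plus how I
check why it failed (plain systemd commands, nothing oVirt-specific):
# systemctl is-enabled glusterd
# systemctl start glusterd
# journalctl -u glusterd -b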
3. Epel
As I said, I have used CentOS before, and I would like to be able to install
some packages from the repo. But even if I install epel-release, it won't
find packages such as nano or htop. I have read about how to add epel-release
to oVirt Node here: https://www.ovirt.org/release/4.1.1/#epel
I have even tested manually editing the repo list, but it still fails to find
normal EPEL packages. I have set up the additional exclude=collectd* as
guided in the link above. This doesn't make any difference. All that being
said, I am able to manually install packages that were downloaded on another
CentOS machine and transferred with scp to the oVirt Node. Still, this once
again needs a lot of manual input and is just a workaround for the bug.
4. Engine startup
When I try to start the engine while glusterfs is up, it will say the vm
doesn't exist, starting up. Still, it won't start up automatically. I have to
give the command hosted-engine --vm-start several times. I wait for about
5 minutes before giving it the next time. This usually takes about 30 minutes,
and then, completely randomly, after one of the tries the engine shoots up
and is up in 1 minute. This has happened every time I boot up. And the number
of tries before the engine starts keeps changing: at best it's been the 3rd
try, at worst the 7th. Calculating from there, it might take from 15 minutes
to 35 minutes to get the engine up. Nevertheless, it will eventually come up
every time. If there is a way to get it up on the first try or, even better,
automatically, it would be great.
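For what it's worth, while waiting I keep checking the state with the
standard tooling (maybe the ha-agent/broker logs hold the real answer
here - I have not dug into them yet):
# hosted-engine --vm-status
# systemctl status ovirt-ha-agent ovirt-ha-broker
# journalctl -u ovirt-ha-agent -b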
5. Activate storage
Once the engine is up, there has been a problem with storage. When I go to
the storage tab, it shows all sources red. Even if I wait for 15~20 minutes,
it won't turn the storage green by itself. I have to go and press the
activate button on the main data storage. Then it will get the main storage
up in 2~3 minutes. Sometimes it fails once, but it will definitely get the
main data storage up on the second try. And then, magically, at the same time
all the other storages instantly go green. The main storage is glusterfs and
I have 3 NFS storages as well. This is only a problem when starting up; once
the storages are green they stay green. Still, it is annoying that it cannot
get this done by itself.
6. Template locked
I tried to create a template from an existing VM and it resulted in the
original VM going into a locked state and the template being locked. I have
read that some other people had a similar problem and were advised to restart
the engine to see if it solves it. For me it has now been a week and several
restarts of the engine and hosts, but there is still one VM locked and the
template locked as well. This is not a big problem, but still a problem.
Everything is grey and I cannot delete this bugged VM or template.
7. unable to use GPU
I have been trying to do GPU passthrough with my VM. First, there was a
problem with the qemu cmd line, but once I figured out a way to pass the
commands, it may be working(?). The log shows up fine, but it still doesn't
give the functionality I'm looking for. As I mentioned in the other email, I
have found this: https://www.mail-archive.com/users@ovirt.org/msg40422.html
It will give the right syntax in the log, but still won't fix error 43 with
the nvidia drivers. If anybody got this working or has ideas on how to do it,
I would really like to know how it's done properly. I have also tested with
AMD graphics cards such as Vega, but as soon as the drivers are installed, I
get a black screen. Even if I restart the VM or the hosts or both, I will
only see a black screen and am unable to use the VM at all. I might be able
to live with the other six things listed above, but this one is a bit of a
problem for me. My use of VMs will eventually need graphical performance, and
therefore I will have to get this working or find an alternative to oVirt. I
have found several things that I really like in oVirt and would prefer to use
it.
Best regards
Mikko
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
7 years, 6 months
Passing through a display port to a GUEST vm
by Alexander Witte
Hi,

Our server has 2 display ports on an integrated graphics card. One port
displays the host OS (CentOS 7 with KVM installed) and we would like the
second display port to display one of the GUEST VMs (a Windows 10 server).
I was just curious if anyone had set this kind of thing up before, or if
this is even possible as there is no external video card. This is all in an
oVirt environment.

If the passthrough on the display port is not possible, I was thinking
maybe of using a USB to HDMI adapter and passing through the USB port to
the guest VM?

Here's the server we're using:

https://www.menmicro.com/products/box-pcs/bl70w/

If anyone has done this or has any thoughts it would be helpful!

Thanks,

Alex Witte
IHRob3VnaHRzIGl0IHdvdWxkIGJlIGhlbHBmdWwhDQoNClRoYW5rcywNCg0KQWxleCBXaXR0ZQ0K
--_000_6DDEB273B96F45CC8FB776FCE14DD4FFbaicanadacom_
Content-Type: text/html; charset="utf-8"
Content-ID: <F67DA902C0572B4DBC4D4E4B308B3190(a)baicanada.local>
Content-Transfer-Encoding: base64
PGh0bWw+DQo8aGVhZD4NCjxtZXRhIGh0dHAtZXF1aXY9IkNvbnRlbnQtVHlwZSIgY29udGVudD0i
dGV4dC9odG1sOyBjaGFyc2V0PXV0Zi04Ij4NCjwvaGVhZD4NCjxib2R5IHN0eWxlPSJ3b3JkLXdy
YXA6IGJyZWFrLXdvcmQ7IC13ZWJraXQtbmJzcC1tb2RlOiBzcGFjZTsgLXdlYmtpdC1saW5lLWJy
ZWFrOiBhZnRlci13aGl0ZS1zcGFjZTsiIGNsYXNzPSIiPg0KPGRpdiBjbGFzcz0iIj5IaSw8L2Rp
dj4NCjxkaXYgY2xhc3M9IiI+PGJyIGNsYXNzPSIiPg0KPC9kaXY+DQo8ZGl2IGNsYXNzPSIiPk91
ciBzZXJ2ZXIgaGFzIDIgZGlzcGxheSBwb3J0cyBvbiBhbiBpbnRlZ3JhdGVkIGdyYXBoaWNzIGNh
cmQuICZuYnNwO09uZSBwb3J0IGRpc3BsYXlzIHRoZSBob3N0IE9TIChDZW50b3M3IHdpdGggS1ZN
IGluc3RhbGxlZCkgYW5kIHdlIHdvdWxkIGxpa2UgdGhlIHNlY29uZCBkaXNwbGF5IHBvcnQgdG8g
ZGlzcGxheSBvbmUgb2YgdGhlIEdVRVNUIFZNcyAoYSBXaW5kb3dzIDEwIHNlcnZlcikuICZuYnNw
O0kgd2FzIGp1c3QgY3VyaW91cyBpZg0KIGFueW9uZSBoYWQgc2V0IHRoaXMga2luZCBvZiB0aGlu
ZyB1cCBiZWZvcmUgb3IgaWYgdGhpcyBpcyBldmVuIHBvc3NpYmxlIGFzIHRoZXJlIGlzIG5vdCBl
eHRlcm5hbCBWaWRlbyBjYXJkLiAmbmJzcDtUaGlzIGlzIGFsbCBpbiBhbiBvVmlydCBlbnZpcm9u
bWVudC48L2Rpdj4NCjxkaXYgY2xhc3M9IiI+PGJyIGNsYXNzPSIiPg0KPC9kaXY+DQo8ZGl2IGNs
YXNzPSIiPklmIHRoZSBwYXNzdGhyb3VnaCBvbiB0aGUgZGlzcGxheSBwb3J0IGlzIG5vdCBwb3Nz
aWJsZSBJIHdhcyB0aGlua2luZyBtYXliZSBvZiB1c2luZyBhIHVzYiB0byBoZG1pIGFkYXB0ZXIg
YW5kIHBhc3NpbmcgdGhyb3VnaCB0aGUgVVNCIHBvcnQgdG8gdGhlIGd1ZXN0IFZNPzwvZGl2Pg0K
PGRpdiBjbGFzcz0iIj48YnIgY2xhc3M9IiI+DQo8L2Rpdj4NCjxkaXYgY2xhc3M9IiI+SGVyZeKA
mXMgdGhlIHNlcnZlciB3ZeKAmXJlIHVzaW5nOjwvZGl2Pg0KPGRpdiBjbGFzcz0iIj48YnIgY2xh
c3M9IiI+DQo8L2Rpdj4NCjxkaXYgY2xhc3M9IiI+PGEgaHJlZj0iaHR0cHM6Ly93d3cubWVubWlj
cm8uY29tL3Byb2R1Y3RzL2JveC1wY3MvYmw3MHcvIiBjbGFzcz0iIj5odHRwczovL3d3dy5tZW5t
aWNyby5jb20vcHJvZHVjdHMvYm94LXBjcy9ibDcwdy88L2E+PC9kaXY+DQo8ZGl2IGNsYXNzPSIi
PjxiciBjbGFzcz0iIj4NCjwvZGl2Pg0KPGRpdiBjbGFzcz0iIj5JZiBhbnlvbmUgaGFzIGRvbmUg
dGhpcyBvciBoYXMgYW55IHRob3VnaHRzIGl0IHdvdWxkIGJlIGhlbHBmdWwhPC9kaXY+DQo8ZGl2
IGNsYXNzPSIiPjxiciBjbGFzcz0iIj4NCjwvZGl2Pg0KVGhhbmtzLA0KPGRpdiBjbGFzcz0iIj48
YnIgY2xhc3M9IiI+DQo8ZGl2IGNsYXNzPSIiPg0KPGRpdiBzdHlsZT0iY29sb3I6IHJnYigwLCAw
LCAwKTsgZm9udC1mYW1pbHk6IEhlbHZldGljYTsgZm9udC1zaXplOiAxMnB4OyBmb250LXN0eWxl
OiBub3JtYWw7IGZvbnQtdmFyaWFudC1jYXBzOiBub3JtYWw7IGZvbnQtd2VpZ2h0OiBub3JtYWw7
IGxldHRlci1zcGFjaW5nOiBub3JtYWw7IG9ycGhhbnM6IGF1dG87IHRleHQtYWxpZ246IHN0YXJ0
OyB0ZXh0LWluZGVudDogMHB4OyB0ZXh0LXRyYW5zZm9ybTogbm9uZTsgd2hpdGUtc3BhY2U6IG5v
cm1hbDsgd2lkb3dzOiBhdXRvOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXNpemUt
YWRqdXN0OiBhdXRvOyAtd2Via2l0LXRleHQtc3Ryb2tlLXdpZHRoOiAwcHg7IHdvcmQtd3JhcDog
YnJlYWstd29yZDsgLXdlYmtpdC1uYnNwLW1vZGU6IHNwYWNlOyAtd2Via2l0LWxpbmUtYnJlYWs6
IGFmdGVyLXdoaXRlLXNwYWNlOyIgY2xhc3M9IiI+DQo8ZGl2IHN0eWxlPSJjb2xvcjogcmdiKDAs
IDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0aWNhOyBmb250LXNpemU6IDEycHg7IGZvbnQtc3R5
bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNhcHM6IG5vcm1hbDsgZm9udC13ZWlnaHQ6IG5vcm1h
bDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsgdGV4dC1hbGlnbjogc3RhcnQ7IHRleHQtaW5kZW50
OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25lOyB3aGl0ZS1zcGFjZTogbm9ybWFsOyB3b3JkLXNw
YWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0cm9rZS13aWR0aDogMHB4OyI+DQpBbGV4IFdpdHRl
PGJyIGNsYXNzPSIiPg0KPC9kaXY+DQo8L2Rpdj4NCjwvZGl2Pg0KPC9kaXY+DQo8L2JvZHk+DQo8
L2h0bWw+DQo=
--_000_6DDEB273B96F45CC8FB776FCE14DD4FFbaicanadacom_--
7 years, 6 months
Re: [ovirt-users] Qemu prevents vm from starting up properly
by M R
Hello!
I have maybe found a way to do this.
I found this older email archive where similar problem was described:
https://www.mail-archive.com/users@ovirt.org/msg40422.html
With this -cpu arguments show up corretcly in log.
But the it still won't fix nvidia problem 43, which is annoying "bug"
implemented by nvidia.
I have several gtx graphic cards collecting dust and would like to use
them, but fail to do so...
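For anyone finding this later, the value that finally showed up correctly
in my log follows the pattern from that archived mail - note there is no
space after the comma, hv_vendor_id is limited to 12 characters, and
adding an explicit CPU model (host) is my own assumption:
["-cpu", "host,kvm=off,hv_vendor_id=sometext"]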
best regards
Mikko
On Thu, Sep 28, 2017 at 8:46 AM, Yedidyah Bar David <didi(a)redhat.com> wrote:
> On Wed, Sep 27, 2017 at 8:32 PM, M R <gr8nextmail(a)gmail.com> wrote:
> > Hello!
> >
> > Thank you very much! I had misunderstood how it was suppose to be
> written in
> > qemu_cmdline. There was a typo in syntax and error log revealed it. It is
> > working IF I use ["-spice", "tls-ciphers=DES-CBC3-SHA"].
> > So I believe that installation is correctly done.
> >
> > Though, my problem still exists.
> > This is what I have been trying to use for qemu_cmdline:
> > ["-cpu", "kvm=off, hv_vendor_id=sometext"]
> > It does not work and most likely is incorrectly written.
>
> You should first come up with something that works when you try it
> manually, then try adapting that to the hook's syntax.
>
> >
> > I understood that qemu commands are often exported into xml files and the
> > command I'm trying to write is the following:
> >
> > <features>
> > <hyperv>
> > <vendor_id state='on' value='customvalue'/>
> > </hyperv>
> > <kvm>
> > <hidden state='on'/>
> > </kvm>
> > </features>
>
> I guess you refer above to libvirt xml. This isn't strictly
> related to qemu, although in practice most usage of libvirt
> is with qemu.
>
> >
> > How do I write this in custom properties for qemu_cmdline?
>
> If you have a working libvirt vm, with the options you need,
> simply check how it translated your xml to qemu's command line.
> You can see this either in its logs, or using ps.
>
> Best,
>
> >
> >
> > best regards
> >
> > Mikko
> >
> >
> >
> > On 27 Sep 2017 3:27 pm, "Yedidyah Bar David" <didi(a)redhat.com> wrote:
> >>
> >> On Wed, Sep 27, 2017 at 1:14 PM, M R <gr8nextmail(a)gmail.com> wrote:
> >> > Hello!
> >> >
> >> > I did check logs from hosts, but didnt notice anything that would help
> >> > me. I
> >> > can copy paste logs later.
> >> >
> >> > I was not trying to get qemu crash vm.
> >> > I'm trying to add new functionalities with qemu.
> >> >
> >> > I wasnt sure if my syntax was correct, so I copy pasted the example
> >> > command
> >> > for spice from that website. And it still behaves similarly.
> >> >
> >> > My conclusion is that qemu cmdline is setup wrong or it's not working
> at
> >> > all. But I dont know how to check that.
> >>
> >> Please check/share /var/log/libvirt/qemu/* and /var/log/vdsm/* . Thanks.
> >>
> >> >
> >> > On 27 Sep 2017 12:32, "Yedidyah Bar David" <didi(a)redhat.com> wrote:
> >> >>
> >> >> On Wed, Sep 27, 2017 at 11:32 AM, M R <gr8nextmail(a)gmail.com> wrote:
> >> >> > Hello!
> >> >> >
> >> >> > I have followed instructions in
> >> >> > https://www.ovirt.org/develop/developer-guide/vdsm/hook/
> qemucmdline/
> >> >> >
> >> >> > After adding any command for qemu cmdline, vm will try to start,
> but
> >> >> > will
> >> >> > immediately shutdown.
> >> >> >
> >> >> > Is this a bug?
> >> >>
> >> >> If you intended, with the params you passed, to make qemu fail, for
> >> >> whatever
> >> >> reason (debugging qemu?), then it's not a bug :-) Otherwise, it is,
> but
> >> >> we
> >> >> can't know where.
> >> >>
> >> >> > or is the information in the link insufficient?
> >> >> > If it would be possible to confirm this and if there's a way to
> fix,
> >> >> > I
> >> >> > would
> >> >> > really like to have step by step guide of how to get this working.
> >> >>
> >> >> Did you check relevant logs? vdsm/libvirt?
> >> >>
> >> >> Best,
> >> >> --
> >> >> Didi
> >>
> >>
> >>
> >> --
> >> Didi
>
>
>
> --
> Didi
>
7 years, 6 months
4.2 downgrade
by Ryan Mahoney
Accidentally upgraded a 4.0 environment to 4.2 (didn't realize the "master"
repo was development repo). What's my chances/best way if possible to roll
back to 4.0 (or 4.1 for that matter).
7 years, 6 months
changing ip of host and its ovirtmgmt vlan
by Gianluca Cecchi
Hello,
a host not maintained by me was modified so that its mgmt network had
become ovirtmgmntZ2Z3.
Originally the host had been added into the engine with its hostname and not
its IP, and this simplifies things.
So a DNS entry change was done while the host was in maintenance
(as far as I have understood...)
The guy changed the /etc/sysconfig/network-scripts/ files and apparently the
change was activated ok, but when the host rebooted the config was reverted
due to the persistence of vdsm.
As he urgently needed this host to become operational again, in the mean
time I worked like this, having now a working host:
- modified the /etc/sysconfig/network-scripts/ files with the new required
configuration
- modified the files under /var/lib/vdsm/persistence/netconf/nets/,
e.g. the file ovirtmgmntZ2Z3, with its correct IP and VLAN information
- ran sync
- then powered the host off and on
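If this workflow is legit, I suppose the power cycle could be replaced by
asking vdsm to reapply the persisted config (I have not tested this):
# vdsm-tool restore-nets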
The host comes up fine and, as it had previously been put into maintenance,
it was able to be activated and to power on some VMs.
Can I consider this workflow ok, or is there any IP/network information about
the host stored in the engine DB or in other parts of the engine or hosts?
I have then a question for the ovirtmgmt logical network itself, but I will
open a new thread for it...
Thanks in advance,
Gianluca
7 years, 7 months
Real Noob question- setting a static IP on host
by Alexander Witte
I am incredibly sorry over this noob question but I am really bashing my
head trying to simply change an IP address on an oVirt host. oVirt was
pushed to this host through the server web interface. It is running on top
of CentOS 7.

From the docs it says to log into the host and edit the ifcfg-ovirtmgmt
file, and I have done so; here are the latest settings:

#Generated by VDSM version 4.19.28-1.e17.centos
DEVICE:ovirtmgmt
TYPE:Bridge
DELAY=0
STP=off
ONBOOT=yes
BOOTPROTO=none
MTU=1500
DEFROUTE=yes
NM_CONTROLLED=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPADDR=10.0.0.226
GATEWAY=10.0.0.1
PREFIX=13
DNS1=10.0.0.9
DNS2=8.8.8.8

The server can reach everything on the network fine. Although it cannot be
reached through the oVirt web interface and the host is in a "connecting"
status. In the oVirt web interface, if I attempt to edit the NIC settings
from DHCP to Static to reflect the changes I've made above, I run into this
error:

  *   Cannot setup Networks. Another Setup Networks or Host Refresh process
      in progress on the host. Please try later.

What is the correct procedure to change a host management IP from DHCP to
STATIC? Should I make these changes manually on the host or through the NIC
settings in the oVirt web interface (when I tried this it just seemed to
hang..)

Any help is greatly appreciated.

Thanks!!

Alex
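P.S. The workaround I am about to try for the stuck "Setup Networks in
progress" state is simply restarting vdsm on the host and then redoing the
change from the web interface - guessing here, so corrections welcome:
# systemctl restart vdsmd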
7 years, 7 months
Host cannot connect to hosted storage domain
by Alexander Witte
Hi!

Question hopefully someone can help me out with:

In my Self Hosted Engine environment, the local storage domain DATA (NFS)
that was created with the self engine installation has been configured as
localhost:/shares

I suspect this is preventing me from adding any additional hosts to the
oVirt Datacenter, as I am receiving a VDSM error that I cannot mount that
domain. I think since the domain is set as localhost it cannot be resolved
correctly by any additional hosts...? The ISO, Data(Master) and EXPORT
domains are set as FQDN:/shares/iso and I am not seeing problems specific
to them.

I am curious what the correct procedure is to change this hosted engine
storage domain path from localhost:/shares to FQDN:/shares? I have
attempted this:

1) Put hosted engine in Global Maintenance Mode
2) Shutdown hosted engine
3) edit the /etc/ovirt-hosted-engine/hosted-engine.conf file and change:
storage=10.0.0.223:/shares   to
storage=menmaster.traindemo.local:/shares
4) Restart hosted engine

Although I'm not having any luck restarting the hosted engine after, and
running journalctl -u on the ovirt-ha-agent service is giving me this:

Sep 27 20:17:19 menmaster.traindemo.local ovirt-ha-agent[2052]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 191, in _run_agent
    return action(he)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 64, in action_proper
    return he.start_monitoring()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 409, in start_monitoring
    self._initialize_storage_images(force=True)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 651, in _initialize_storage_images
    img.teardown_images()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/image.py", line 218, in teardown_images
    volumeID=volUUID,
  File "/usr/lib/python2.7/site-packages/vdsm/jsonrpcvdscli.py", line 155, in _callMethod
    (methodName, args, e))
  Exception: Attempt to call function: teardownImage with arguments: () error: 'teardownImage'
Sep 27 20:17:19 menmaster.traindemo.local ovirt-ha-agent[2052]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Trying to restart agent
Sep 27 20:17:24 menmaster.traindemo.local ovirt-ha-agent[2052]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Too many errors occurred, giving up. Please review the log and consider filing
Sep 27 20:17:24 menmaster.traindemo.local systemd[1]: ovirt-ha-agent.service: main process exited, code=exited, status=157/n/a
Sep 27 20:17:24 menmaster.traindemo.local systemd[1]: Unit ovirt-ha-agent.service entered failed state.
Sep 27 20:17:24 menmaster.traindemo.local systemd[1]: ovirt-ha-agent.service failed.
Sep 27 20:17:25 menmaster.traindemo.local systemd[1]: ovirt-ha-agent.service holdoff time over, scheduling restart.
Sep 27 20:17:25 menmaster.traindemo.local systemd[1]: Started oVirt Hosted Engine High Availability Monitoring Agent.
Sep 27 20:17:25 menmaster.traindemo.local systemd[1]: Starting oVirt Hosted Engine High Availability Monitoring Agent...
Sep 27 20:17:35 menmaster.traindemo.local ovirt-ha-agent[2626]: ovirt-ha-agent ovirt_hosted_engine_ha.lib.storage_server.StorageServer ERROR The hosted-engine storage domain is already mounted on '/rhev/d
Sep 27 20:17:42 menmaster.traindemo.local ovirt-ha-agent[2626]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Traceback (most recent call last):
  [same traceback as above, from _run_agent down to the 'teardownImage' exception]
Sep 27 20:17:42 menmaster.traindemo.local ovirt-ha-agent[2626]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Trying to restart agent

Any thoughts? I can attach more logs tomorrow or provide further
information if needed.

Thanks!!

Alex Witte
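P.S. Based on the "already mounted" error above, what I plan to try next
is stopping the HA services, unmounting the stale localhost:/shares mount,
and starting them again (the mount path below follows vdsm's usual
server:_path naming, but I have not confirmed it on this box):
# systemctl stop ovirt-ha-agent ovirt-ha-broker
# umount /rhev/data-center/mnt/localhost:_shares
# systemctl start ovirt-ha-broker ovirt-ha-agent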
cDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNw
OyAmbmJzcDsgcmV0dXJuIGhlLnN0YXJ0X21vbml0b3JpbmcoKTwvZGl2Pg0KPGRpdiBzdHlsZT0i
bWFyZ2luOiAwcHg7IGxpbmUtaGVpZ2h0OiBub3JtYWw7IGZvbnQtZmFtaWx5OiAnSGVsdmV0aWNh
IE5ldWUnOyBjb2xvcjogcmdiKDY5LCA2OSwgNjkpOyIgY2xhc3M9IiI+DQombmJzcDsgJm5ic3A7
ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsg
Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAm
bmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZu
YnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyBGaWxlICZxdW90
Oy91c3IvbGliL3B5dGhvbjIuNy9zaXRlLXBhY2thZ2VzL292aXJ0X2hvc3RlZF9lbmdpbmVfaGEv
YWdlbnQvaG9zdGVkX2VuZ2luZS5weSZxdW90OywgbGluZSA0MDksIGluIHN0YXJ0X21vbml0b3Jp
bmc8L2Rpdj4NCjxkaXYgc3R5bGU9Im1hcmdpbjogMHB4OyBsaW5lLWhlaWdodDogbm9ybWFsOyBm
b250LWZhbWlseTogJ0hlbHZldGljYSBOZXVlJzsgY29sb3I6IHJnYig2OSwgNjksIDY5KTsiIGNs
YXNzPSIiPg0KJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7
ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsg
Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAm
bmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZu
YnNwOyAmbmJzcDsgJm5ic3A7IHNlbGYuX2luaXRpYWxpemVfc3RvcmFnZV9pbWFnZXMoZm9yY2U9
VHJ1ZSk8L2Rpdj4NCjxkaXYgc3R5bGU9Im1hcmdpbjogMHB4OyBsaW5lLWhlaWdodDogbm9ybWFs
OyBmb250LWZhbWlseTogJ0hlbHZldGljYSBOZXVlJzsgY29sb3I6IHJnYig2OSwgNjksIDY5KTsi
IGNsYXNzPSIiPg0KJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5i
c3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJz
cDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNw
OyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7
ICZuYnNwOyAmbmJzcDsgRmlsZSAmcXVvdDsvdXNyL2xpYi9weXRob24yLjcvc2l0ZS1wYWNrYWdl
cy9vdmlydF9ob3N0ZWRfZW5naW5lX2hhL2FnZW50L2hvc3RlZF9lbmdpbmUucHkmcXVvdDssIGxp
bmUgNjUxLCBpbiBfaW5pdGlhbGl6ZV9zdG9yYWdlX2ltYWdlczwvZGl2Pg0KPGRpdiBzdHlsZT0i
bWFyZ2luOiAwcHg7IGxpbmUtaGVpZ2h0OiBub3JtYWw7IGZvbnQtZmFtaWx5OiAnSGVsdmV0aWNh
IE5ldWUnOyBjb2xvcjogcmdiKDY5LCA2OSwgNjkpOyIgY2xhc3M9IiI+DQombmJzcDsgJm5ic3A7
ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsg
Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAm
bmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZu
YnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgaW1n
LnRlYXJkb3duX2ltYWdlcygpPC9kaXY+DQo8ZGl2IHN0eWxlPSJtYXJnaW46IDBweDsgbGluZS1o
ZWlnaHQ6IG5vcm1hbDsgZm9udC1mYW1pbHk6ICdIZWx2ZXRpY2EgTmV1ZSc7IGNvbG9yOiByZ2Io
NjksIDY5LCA2OSk7IiBjbGFzcz0iIj4NCiZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJz
cDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNw
OyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7
ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsg
Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7IEZpbGUgJnF1b3Q7L3Vzci9saWIvcHl0aG9uMi43
L3NpdGUtcGFja2FnZXMvb3ZpcnRfaG9zdGVkX2VuZ2luZV9oYS9saWIvaW1hZ2UucHkmcXVvdDss
IGxpbmUgMjE4LCBpbiB0ZWFyZG93bl9pbWFnZXM8L2Rpdj4NCjxkaXYgc3R5bGU9Im1hcmdpbjog
MHB4OyBsaW5lLWhlaWdodDogbm9ybWFsOyBmb250LWZhbWlseTogJ0hlbHZldGljYSBOZXVlJzsg
Y29sb3I6IHJnYig2OSwgNjksIDY5KTsiIGNsYXNzPSIiPg0KJm5ic3A7ICZuYnNwOyAmbmJzcDsg
Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAm
bmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZu
YnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5i
c3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7IHZvbHVtZUlEPXZv
bFVVSUQsPC9kaXY+DQo8ZGl2IHN0eWxlPSJtYXJnaW46IDBweDsgbGluZS1oZWlnaHQ6IG5vcm1h
bDsgZm9udC1mYW1pbHk6ICdIZWx2ZXRpY2EgTmV1ZSc7IGNvbG9yOiByZ2IoNjksIDY5LCA2OSk7
IiBjbGFzcz0iIj4NCiZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZu
YnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5i
c3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJz
cDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNw
OyAmbmJzcDsgJm5ic3A7IEZpbGUgJnF1b3Q7L3Vzci9saWIvcHl0aG9uMi43L3NpdGUtcGFja2Fn
ZXMvdmRzbS9qc29ucnBjdmRzY2xpLnB5JnF1b3Q7LCBsaW5lIDE1NSwgaW4gX2NhbGxNZXRob2Q8
L2Rpdj4NCjxkaXYgc3R5bGU9Im1hcmdpbjogMHB4OyBsaW5lLWhlaWdodDogbm9ybWFsOyBmb250
LWZhbWlseTogJ0hlbHZldGljYSBOZXVlJzsgY29sb3I6IHJnYig2OSwgNjksIDY5KTsiIGNsYXNz
PSIiPg0KJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZu
YnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5i
c3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJz
cDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNw
OyAmbmJzcDsgJm5ic3A7IChtZXRob2ROYW1lLCBhcmdzLCBlKSk8L2Rpdj4NCjxkaXYgc3R5bGU9
Im1hcmdpbjogMHB4OyBsaW5lLWhlaWdodDogbm9ybWFsOyBmb250LWZhbWlseTogJ0hlbHZldGlj
YSBOZXVlJzsgY29sb3I6IHJnYig2OSwgNjksIDY5KTsiIGNsYXNzPSIiPg0KJm5ic3A7ICZuYnNw
OyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7
ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsg
Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAm
bmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyBFeGNlcHRpb246IEF0dGVt
cHQgdG8gY2FsbCBmdW5jdGlvbjogdGVhcmRvd25JbWFnZSB3aXRoIGFyZ3VtZW50czogKCkgZXJy
b3I6ICd0ZWFyZG93bkltYWdlJzwvZGl2Pg0KPGRpdiBzdHlsZT0ibWFyZ2luOiAwcHg7IGxpbmUt
aGVpZ2h0OiBub3JtYWw7IGZvbnQtZmFtaWx5OiAnSGVsdmV0aWNhIE5ldWUnOyBjb2xvcjogcmdi
KDY5LCA2OSwgNjkpOyIgY2xhc3M9IiI+DQpTZXAgMjcgMjA6MTc6MTkgbWVubWFzdGVyLnRyYWlu
ZGVtby5sb2NhbCBvdmlydC1oYS1hZ2VudFsyMDUyXTogb3ZpcnQtaGEtYWdlbnQgb3ZpcnRfaG9z
dGVkX2VuZ2luZV9oYS5hZ2VudC5hZ2VudC5BZ2VudCBFUlJPUiBUcnlpbmcgdG8gcmVzdGFydCBh
Z2VudDwvZGl2Pg0KPGRpdiBzdHlsZT0ibWFyZ2luOiAwcHg7IGxpbmUtaGVpZ2h0OiBub3JtYWw7
IGZvbnQtZmFtaWx5OiAnSGVsdmV0aWNhIE5ldWUnOyBjb2xvcjogcmdiKDY5LCA2OSwgNjkpOyIg
Y2xhc3M9IiI+DQpTZXAgMjcgMjA6MTc6MjQgbWVubWFzdGVyLnRyYWluZGVtby5sb2NhbCBvdmly
dC1oYS1hZ2VudFsyMDUyXTogb3ZpcnQtaGEtYWdlbnQgb3ZpcnRfaG9zdGVkX2VuZ2luZV9oYS5h
Z2VudC5hZ2VudC5BZ2VudCBFUlJPUiBUb28gbWFueSBlcnJvcnMgb2NjdXJyZWQsIGdpdmluZyB1
cC4gUGxlYXNlIHJldmlldyB0aGUgbG9nIGFuZCBjb25zaWRlciBmaWxpbmc8L2Rpdj4NCjxkaXYg
c3R5bGU9Im1hcmdpbjogMHB4OyBsaW5lLWhlaWdodDogbm9ybWFsOyBmb250LWZhbWlseTogJ0hl
bHZldGljYSBOZXVlJzsgY29sb3I6IHJnYig2OSwgNjksIDY5KTsiIGNsYXNzPSIiPg0KU2VwIDI3
IDIwOjE3OjI0IG1lbm1hc3Rlci50cmFpbmRlbW8ubG9jYWwgc3lzdGVtZFsxXTogb3ZpcnQtaGEt
YWdlbnQuc2VydmljZTogbWFpbiBwcm9jZXNzIGV4aXRlZCwgY29kZT1leGl0ZWQsIHN0YXR1cz0x
NTcvbi9hPC9kaXY+DQo8ZGl2IHN0eWxlPSJtYXJnaW46IDBweDsgbGluZS1oZWlnaHQ6IG5vcm1h
bDsgZm9udC1mYW1pbHk6ICdIZWx2ZXRpY2EgTmV1ZSc7IGNvbG9yOiByZ2IoNjksIDY5LCA2OSk7
IiBjbGFzcz0iIj4NClNlcCAyNyAyMDoxNzoyNCBtZW5tYXN0ZXIudHJhaW5kZW1vLmxvY2FsIHN5
c3RlbWRbMV06IFVuaXQgb3ZpcnQtaGEtYWdlbnQuc2VydmljZSBlbnRlcmVkIGZhaWxlZCBzdGF0
ZS48L2Rpdj4NCjxkaXYgc3R5bGU9Im1hcmdpbjogMHB4OyBsaW5lLWhlaWdodDogbm9ybWFsOyBm
b250LWZhbWlseTogJ0hlbHZldGljYSBOZXVlJzsgY29sb3I6IHJnYig2OSwgNjksIDY5KTsiIGNs
YXNzPSIiPg0KU2VwIDI3IDIwOjE3OjI0IG1lbm1hc3Rlci50cmFpbmRlbW8ubG9jYWwgc3lzdGVt
ZFsxXTogb3ZpcnQtaGEtYWdlbnQuc2VydmljZSBmYWlsZWQuPC9kaXY+DQo8ZGl2IHN0eWxlPSJt
YXJnaW46IDBweDsgbGluZS1oZWlnaHQ6IG5vcm1hbDsgZm9udC1mYW1pbHk6ICdIZWx2ZXRpY2Eg
TmV1ZSc7IGNvbG9yOiByZ2IoNjksIDY5LCA2OSk7IiBjbGFzcz0iIj4NClNlcCAyNyAyMDoxNzoy
NSBtZW5tYXN0ZXIudHJhaW5kZW1vLmxvY2FsIHN5c3RlbWRbMV06IG92aXJ0LWhhLWFnZW50LnNl
cnZpY2UgaG9sZG9mZiB0aW1lIG92ZXIsIHNjaGVkdWxpbmcgcmVzdGFydC48L2Rpdj4NCjxkaXYg
c3R5bGU9Im1hcmdpbjogMHB4OyBsaW5lLWhlaWdodDogbm9ybWFsOyBmb250LWZhbWlseTogJ0hl
bHZldGljYSBOZXVlJzsgY29sb3I6IHJnYig2OSwgNjksIDY5KTsiIGNsYXNzPSIiPg0KU2VwIDI3
IDIwOjE3OjI1IG1lbm1hc3Rlci50cmFpbmRlbW8ubG9jYWwgc3lzdGVtZFsxXTogU3RhcnRlZCBv
VmlydCBIb3N0ZWQgRW5naW5lIEhpZ2ggQXZhaWxhYmlsaXR5IE1vbml0b3JpbmcgQWdlbnQuPC9k
aXY+DQo8ZGl2IHN0eWxlPSJtYXJnaW46IDBweDsgbGluZS1oZWlnaHQ6IG5vcm1hbDsgZm9udC1m
YW1pbHk6ICdIZWx2ZXRpY2EgTmV1ZSc7IGNvbG9yOiByZ2IoNjksIDY5LCA2OSk7IiBjbGFzcz0i
Ij4NClNlcCAyNyAyMDoxNzoyNSBtZW5tYXN0ZXIudHJhaW5kZW1vLmxvY2FsIHN5c3RlbWRbMV06
IFN0YXJ0aW5nIG9WaXJ0IEhvc3RlZCBFbmdpbmUgSGlnaCBBdmFpbGFiaWxpdHkgTW9uaXRvcmlu
ZyBBZ2VudC4uLjwvZGl2Pg0KPGRpdiBzdHlsZT0ibWFyZ2luOiAwcHg7IGxpbmUtaGVpZ2h0OiBu
b3JtYWw7IGZvbnQtZmFtaWx5OiAnSGVsdmV0aWNhIE5ldWUnOyBjb2xvcjogcmdiKDY5LCA2OSwg
NjkpOyIgY2xhc3M9IiI+DQpTZXAgMjcgMjA6MTc6MzUgbWVubWFzdGVyLnRyYWluZGVtby5sb2Nh
bCBvdmlydC1oYS1hZ2VudFsyNjI2XTogb3ZpcnQtaGEtYWdlbnQgb3ZpcnRfaG9zdGVkX2VuZ2lu
ZV9oYS5saWIuc3RvcmFnZV9zZXJ2ZXIuU3RvcmFnZVNlcnZlciBFUlJPUiBUaGUgaG9zdGVkLWVu
Z2luZSBzdG9yYWdlIGRvbWFpbiBpcyBhbHJlYWR5IG1vdW50ZWQgb24gJy9yaGV2L2Q8L2Rpdj4N
CjxkaXYgc3R5bGU9Im1hcmdpbjogMHB4OyBsaW5lLWhlaWdodDogbm9ybWFsOyBmb250LWZhbWls
eTogJ0hlbHZldGljYSBOZXVlJzsgY29sb3I6IHJnYig2OSwgNjksIDY5KTsiIGNsYXNzPSIiPg0K
U2VwIDI3IDIwOjE3OjQyIG1lbm1hc3Rlci50cmFpbmRlbW8ubG9jYWwgb3ZpcnQtaGEtYWdlbnRb
MjYyNl06IG92aXJ0LWhhLWFnZW50IG92aXJ0X2hvc3RlZF9lbmdpbmVfaGEuYWdlbnQuYWdlbnQu
QWdlbnQgRVJST1IgVHJhY2ViYWNrIChtb3N0IHJlY2VudCBjYWxsIGxhc3QpOjwvZGl2Pg0KPGRp
diBzdHlsZT0ibWFyZ2luOiAwcHg7IGxpbmUtaGVpZ2h0OiBub3JtYWw7IGZvbnQtZmFtaWx5OiAn
SGVsdmV0aWNhIE5ldWUnOyBjb2xvcjogcmdiKDY5LCA2OSwgNjkpOyIgY2xhc3M9IiI+DQombmJz
cDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNw
OyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7
ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsg
Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyBG
aWxlICZxdW90Oy91c3IvbGliL3B5dGhvbjIuNy9zaXRlLXBhY2thZ2VzL292aXJ0X2hvc3RlZF9l
bmdpbmVfaGEvYWdlbnQvYWdlbnQucHkmcXVvdDssIGxpbmUgMTkxLCBpbiBfcnVuX2FnZW50PC9k
aXY+DQo8ZGl2IHN0eWxlPSJtYXJnaW46IDBweDsgbGluZS1oZWlnaHQ6IG5vcm1hbDsgZm9udC1m
YW1pbHk6ICdIZWx2ZXRpY2EgTmV1ZSc7IGNvbG9yOiByZ2IoNjksIDY5LCA2OSk7IiBjbGFzcz0i
Ij4NCiZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJz
cDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNw
OyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7
ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsg
Jm5ic3A7ICZuYnNwOyByZXR1cm4gYWN0aW9uKGhlKTwvZGl2Pg0KPGRpdiBzdHlsZT0ibWFyZ2lu
OiAwcHg7IGxpbmUtaGVpZ2h0OiBub3JtYWw7IGZvbnQtZmFtaWx5OiAnSGVsdmV0aWNhIE5ldWUn
OyBjb2xvcjogcmdiKDY5LCA2OSwgNjkpOyIgY2xhc3M9IiI+DQombmJzcDsgJm5ic3A7ICZuYnNw
OyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7
ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsg
Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAm
bmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyBGaWxlICZxdW90Oy91c3Iv
bGliL3B5dGhvbjIuNy9zaXRlLXBhY2thZ2VzL292aXJ0X2hvc3RlZF9lbmdpbmVfaGEvYWdlbnQv
YWdlbnQucHkmcXVvdDssIGxpbmUgNjQsIGluIGFjdGlvbl9wcm9wZXI8L2Rpdj4NCjxkaXYgc3R5
bGU9Im1hcmdpbjogMHB4OyBsaW5lLWhlaWdodDogbm9ybWFsOyBmb250LWZhbWlseTogJ0hlbHZl
dGljYSBOZXVlJzsgY29sb3I6IHJnYig2OSwgNjksIDY5KTsiIGNsYXNzPSIiPg0KJm5ic3A7ICZu
YnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5i
c3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJz
cDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNw
OyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7
IHJldHVybiBoZS5zdGFydF9tb25pdG9yaW5nKCk8L2Rpdj4NCjxkaXYgc3R5bGU9Im1hcmdpbjog
MHB4OyBsaW5lLWhlaWdodDogbm9ybWFsOyBmb250LWZhbWlseTogJ0hlbHZldGljYSBOZXVlJzsg
Y29sb3I6IHJnYig2OSwgNjksIDY5KTsiIGNsYXNzPSIiPg0KJm5ic3A7ICZuYnNwOyAmbmJzcDsg
Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAm
bmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZu
YnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5i
c3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgRmlsZSAmcXVvdDsvdXNyL2xp
Yi9weXRob24yLjcvc2l0ZS1wYWNrYWdlcy9vdmlydF9ob3N0ZWRfZW5naW5lX2hhL2FnZW50L2hv
c3RlZF9lbmdpbmUucHkmcXVvdDssIGxpbmUgNDA5LCBpbiBzdGFydF9tb25pdG9yaW5nPC9kaXY+
DQo8ZGl2IHN0eWxlPSJtYXJnaW46IDBweDsgbGluZS1oZWlnaHQ6IG5vcm1hbDsgZm9udC1mYW1p
bHk6ICdIZWx2ZXRpY2EgTmV1ZSc7IGNvbG9yOiByZ2IoNjksIDY5LCA2OSk7IiBjbGFzcz0iIj4N
CiZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsg
Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAm
bmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZu
YnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5i
c3A7ICZuYnNwOyBzZWxmLl9pbml0aWFsaXplX3N0b3JhZ2VfaW1hZ2VzKGZvcmNlPVRydWUpPC9k
aXY+DQo8ZGl2IHN0eWxlPSJtYXJnaW46IDBweDsgbGluZS1oZWlnaHQ6IG5vcm1hbDsgZm9udC1m
YW1pbHk6ICdIZWx2ZXRpY2EgTmV1ZSc7IGNvbG9yOiByZ2IoNjksIDY5LCA2OSk7IiBjbGFzcz0i
Ij4NCiZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJz
cDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNw
OyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7
ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsg
Jm5ic3A7IEZpbGUgJnF1b3Q7L3Vzci9saWIvcHl0aG9uMi43L3NpdGUtcGFja2FnZXMvb3ZpcnRf
aG9zdGVkX2VuZ2luZV9oYS9hZ2VudC9ob3N0ZWRfZW5naW5lLnB5JnF1b3Q7LCBsaW5lIDY1MSwg
aW4gX2luaXRpYWxpemVfc3RvcmFnZV9pbWFnZXM8L2Rpdj4NCjxkaXYgc3R5bGU9Im1hcmdpbjog
MHB4OyBsaW5lLWhlaWdodDogbm9ybWFsOyBmb250LWZhbWlseTogJ0hlbHZldGljYSBOZXVlJzsg
Y29sb3I6IHJnYig2OSwgNjksIDY5KTsiIGNsYXNzPSIiPg0KJm5ic3A7ICZuYnNwOyAmbmJzcDsg
Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAm
bmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZu
YnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5i
c3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7IGltZy50ZWFyZG93
bl9pbWFnZXMoKTwvZGl2Pg0KPGRpdiBzdHlsZT0ibWFyZ2luOiAwcHg7IGxpbmUtaGVpZ2h0OiBu
b3JtYWw7IGZvbnQtZmFtaWx5OiAnSGVsdmV0aWNhIE5ldWUnOyBjb2xvcjogcmdiKDY5LCA2OSwg
NjkpOyIgY2xhc3M9IiI+DQombmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNw
OyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7
ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsg
Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAm
bmJzcDsgJm5ic3A7ICZuYnNwOyBGaWxlICZxdW90Oy91c3IvbGliL3B5dGhvbjIuNy9zaXRlLXBh
Y2thZ2VzL292aXJ0X2hvc3RlZF9lbmdpbmVfaGEvbGliL2ltYWdlLnB5JnF1b3Q7LCBsaW5lIDIx
OCwgaW4gdGVhcmRvd25faW1hZ2VzPC9kaXY+DQo8ZGl2IHN0eWxlPSJtYXJnaW46IDBweDsgbGlu
ZS1oZWlnaHQ6IG5vcm1hbDsgZm9udC1mYW1pbHk6ICdIZWx2ZXRpY2EgTmV1ZSc7IGNvbG9yOiBy
Z2IoNjksIDY5LCA2OSk7IiBjbGFzcz0iIj4NCiZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAm
bmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZu
YnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5i
c3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJz
cDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyB2b2x1bWVJRD12b2xVVUlELDwv
ZGl2Pg0KPGRpdiBzdHlsZT0ibWFyZ2luOiAwcHg7IGxpbmUtaGVpZ2h0OiBub3JtYWw7IGZvbnQt
ZmFtaWx5OiAnSGVsdmV0aWNhIE5ldWUnOyBjb2xvcjogcmdiKDY5LCA2OSwgNjkpOyIgY2xhc3M9
IiI+DQombmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5i
c3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJz
cDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNw
OyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7
ICZuYnNwOyBGaWxlICZxdW90Oy91c3IvbGliL3B5dGhvbjIuNy9zaXRlLXBhY2thZ2VzL3Zkc20v
anNvbnJwY3Zkc2NsaS5weSZxdW90OywgbGluZSAxNTUsIGluIF9jYWxsTWV0aG9kPC9kaXY+DQo8
ZGl2IHN0eWxlPSJtYXJnaW46IDBweDsgbGluZS1oZWlnaHQ6IG5vcm1hbDsgZm9udC1mYW1pbHk6
ICdIZWx2ZXRpY2EgTmV1ZSc7IGNvbG9yOiByZ2IoNjksIDY5LCA2OSk7IiBjbGFzcz0iIj4NCiZu
YnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5i
c3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJz
cDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNw
OyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7
ICZuYnNwOyAobWV0aG9kTmFtZSwgYXJncywgZSkpPC9kaXY+DQo8ZGl2IHN0eWxlPSJtYXJnaW46
IDBweDsgbGluZS1oZWlnaHQ6IG5vcm1hbDsgZm9udC1mYW1pbHk6ICdIZWx2ZXRpY2EgTmV1ZSc7
IGNvbG9yOiByZ2IoNjksIDY5LCA2OSk7IiBjbGFzcz0iIj4NCiZuYnNwOyAmbmJzcDsgJm5ic3A7
ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsg
Jm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAm
bmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZu
YnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgRXhjZXB0aW9uOiBBdHRlbXB0IHRvIGNh
bGwgZnVuY3Rpb246IHRlYXJkb3duSW1hZ2Ugd2l0aCBhcmd1bWVudHM6ICgpIGVycm9yOiAndGVh
cmRvd25JbWFnZSc8L2Rpdj4NCjxkaXYgc3R5bGU9Im1hcmdpbjogMHB4OyBsaW5lLWhlaWdodDog
bm9ybWFsOyBmb250LWZhbWlseTogJ0hlbHZldGljYSBOZXVlJzsgY29sb3I6IHJnYig2OSwgNjks
IDY5KTsiIGNsYXNzPSIiPg0KU2VwIDI3IDIwOjE3OjQyIG1lbm1hc3Rlci50cmFpbmRlbW8ubG9j
YWwgb3ZpcnQtaGEtYWdlbnRbMjYyNl06IG92aXJ0LWhhLWFnZW50IG92aXJ0X2hvc3RlZF9lbmdp
bmVfaGEuYWdlbnQuYWdlbnQuQWdlbnQgRVJST1IgVHJ5aW5nIHRvIHJlc3RhcnQgYWdlbnQ8L2Rp
dj4NCjwvZGl2Pg0KPGRpdiBjbGFzcz0iIj48YnIgY2xhc3M9IiI+DQo8L2Rpdj4NCjxkaXYgY2xh
c3M9IiI+PGJyIGNsYXNzPSIiPg0KPC9kaXY+DQo8ZGl2IGNsYXNzPSIiPjxiciBjbGFzcz0iIj4N
CjwvZGl2Pg0KPGRpdiBjbGFzcz0iIj5BbnkgdGhvdWdodHM/ICZuYnNwO0kgY2FuIGF0dGFjaCBt
b3JlIGxvZ3MgdG9tb3Jyb3cgb3IgcHJvdmlkZSBmdXJ0aGVyIGluZm9ybWF0aW9uIGlmIG5lZWRl
ZC48L2Rpdj4NCjxkaXYgY2xhc3M9IiI+PGJyIGNsYXNzPSIiPg0KPC9kaXY+DQo8ZGl2IGNsYXNz
PSIiPlRoYW5rcyEhPC9kaXY+DQo8ZGl2IGNsYXNzPSIiPjxiciBjbGFzcz0iIj4NCjwvZGl2Pg0K
PGJyIGNsYXNzPSIiPg0KPGRpdiBjbGFzcz0iIj4NCjxkaXYgc3R5bGU9ImNvbG9yOiByZ2IoMCwg
MCwgMCk7IGZvbnQtZmFtaWx5OiBIZWx2ZXRpY2E7IGZvbnQtc2l6ZTogMTJweDsgZm9udC1zdHls
ZTogbm9ybWFsOyBmb250LXZhcmlhbnQtY2Fwczogbm9ybWFsOyBmb250LXdlaWdodDogbm9ybWFs
OyBsZXR0ZXItc3BhY2luZzogbm9ybWFsOyBvcnBoYW5zOiBhdXRvOyB0ZXh0LWFsaWduOiBzdGFy
dDsgdGV4dC1pbmRlbnQ6IDBweDsgdGV4dC10cmFuc2Zvcm06IG5vbmU7IHdoaXRlLXNwYWNlOiBu
b3JtYWw7IHdpZG93czogYXV0bzsgd29yZC1zcGFjaW5nOiAwcHg7IC13ZWJraXQtdGV4dC1zaXpl
LWFkanVzdDogYXV0bzsgLXdlYmtpdC10ZXh0LXN0cm9rZS13aWR0aDogMHB4OyB3b3JkLXdyYXA6
IGJyZWFrLXdvcmQ7IC13ZWJraXQtbmJzcC1tb2RlOiBzcGFjZTsgLXdlYmtpdC1saW5lLWJyZWFr
OiBhZnRlci13aGl0ZS1zcGFjZTsiIGNsYXNzPSIiPg0KPGRpdiBzdHlsZT0iY29sb3I6IHJnYigw
LCAwLCAwKTsgZm9udC1mYW1pbHk6IEhlbHZldGljYTsgZm9udC1zaXplOiAxMnB4OyBmb250LXN0
eWxlOiBub3JtYWw7IGZvbnQtdmFyaWFudC1jYXBzOiBub3JtYWw7IGZvbnQtd2VpZ2h0OiBub3Jt
YWw7IGxldHRlci1zcGFjaW5nOiBub3JtYWw7IHRleHQtYWxpZ246IHN0YXJ0OyB0ZXh0LWluZGVu
dDogMHB4OyB0ZXh0LXRyYW5zZm9ybTogbm9uZTsgd2hpdGUtc3BhY2U6IG5vcm1hbDsgd29yZC1z
cGFjaW5nOiAwcHg7IC13ZWJraXQtdGV4dC1zdHJva2Utd2lkdGg6IDBweDsiPg0KQWxleCBXaXR0
ZTxiciBjbGFzcz0iIj4NCjxiciBjbGFzcz0iIj4NCjxiciBjbGFzcz0iIj4NCjwvZGl2Pg0KPC9k
aXY+DQo8YnIgY2xhc3M9IiI+DQo8L2Rpdj4NCjxiciBjbGFzcz0iIj4NCjwvYm9keT4NCjwvaHRt
bD4NCg==
--_000_99C25ABF4B9749298A9C9C8058D4162Ebaicanadacom_--
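A note on the "already mounted" error above: one possible avenue, sketched
below under the assumption that the old IP-based mount is still active, is
to clear the stale mount before the agent retries. The mount point name is
a guess derived from the old storage= value; verify it first with
'mount | grep shares'.

# Minimal sketch, assuming global maintenance is still on and the engine VM is down.
systemctl stop ovirt-ha-agent ovirt-ha-broker
mount | grep _shares                                # locate the stale NFS mount
umount '/rhev/data-center/mnt/10.0.0.223:_shares'   # hypothetical path built from the old storage= value
systemctl start ovirt-ha-broker ovirt-ha-agent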
7 years, 7 months
oVirt 4.2 alpha upgrade failed
by Maton, Brett
Upgrading from oVirt 4.1.7
hosted-engine VM:
4GB RAM
hosted-engine setup failed, setup log shows this error:
Running upgrade sql script
'/usr/share/ovirt-engine/dbscripts/upgrade/04_02_0140_add_max_memory_constraint.sql'...
2017-09-28 16:56:22,951+0100 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema plugin.execute:926
execute-output: ['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s',
'localhost', '-p', '5432', '-u', 'engine', '-d', 'engine', '-l',
'/var/log/ovirt-engine/setup/ovirt-engine-setup-20170928164338-0rkilb.log',
'-c', 'apply'] stderr:
psql:/usr/share/ovirt-engine/dbscripts/upgrade/04_02_0140_add_max_memory_constraint.sql:2:
ERROR: check constraint "vm_static_max_memory_size_lower_bound" is
violated by some row
FATAL: Cannot execute sql command:
--file=/usr/share/ovirt-engine/dbscripts/upgrade/04_02_0140_add_max_memory_constraint.sql
2017-09-28 16:56:22,951+0100 ERROR
otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema schema._misc:374
schema.sh: FATAL: Cannot execute sql command:
--file=/usr/share/ovirt-engine/dbscripts/upgrade/04_02_0140_add_max_memory_constraint.sql
2017-09-28 16:56:22,952+0100 DEBUG otopi.context context._executeMethod:143
method exception
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in
_executeMethod
method['method']()
File
"/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/db/schema.py",
line 376, in _misc
raise RuntimeError(_('Engine schema refresh failed'))
RuntimeError: Engine schema refresh failed
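For what it's worth, the violated check constraint names vm_static, so a
query along these lines could list the offending rows before re-running the
upgrade. This is a sketch: the column names mem_size_mb and
max_memory_size_mb are an assumption based on the constraint name, so verify
them with \d vm_static first.

# Run on the engine host; 'engine' is the default database name.
sudo -u postgres psql engine -c \
  "SELECT vm_guid, vm_name, mem_size_mb, max_memory_size_mb FROM vm_static WHERE max_memory_size_mb < mem_size_mb;"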
What's the minimum RAM required now?
Regards,
Brett
7 years, 7 months
Failure while using ovirt-image-template role
by Marc Seward
Hi,
I'm trying to use the ovirt-image-template role to import a Glance image as
a template into oVirt and I'm running into this error with
python-ovirt-engine-sdk4-4.1.6-1.el7ev.x86_64.
I'd appreciate any pointers.
TASK [ovirt.ovirt-ansible-roles/roles/ovirt-image-template : Find data domain] ************************************************
task path: /etc/ansible/roles/ovirt.ovirt-ansible-roles/roles/ovirt-image-template/tasks/glance_image.yml:21
fatal: [localhost]: FAILED! => {
    "failed": true,
    "msg": "You need to install \"jmespath\" prior to running json_query filter"
}
TASK [ovirt.ovirt-ansible-roles/roles/ovirt-image-template : Logout from oVirt] ***********************************************
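The failure is Ansible's json_query filter missing its Python dependency
rather than anything oVirt-specific. A hedged fix, assuming the control node
runs the system Python 2 that Ansible uses (the package name varies between
python-jmespath and python2-jmespath depending on the repo):

# On the machine running ansible-playbook:
yum install -y python-jmespath || pip install jmespath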
7 years, 7 months
[ANN] oVirt 4.2.0 First Alpha Release is now available for testing
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the First
Alpha Release of oVirt 4.2.0, as of September 28th, 2017
This is pre-release software. This pre-release should not be used in
production.
Please take a look at our community page[1] to learn how to ask questions
and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This update is the first alpha release of the 4.2.0 version. This release
brings more than 120 enhancements and more than 670 bug fixes, including
more than 260 high or urgent severity fixes, on top of oVirt 4.1 series.
What's new in oVirt 4.2.0?
* The Administration Portal has been completely redesigned using Patternfly,
a widely adopted standard in web application design. It now features a
cleaner, more intuitive design, for an improved user experience.
* There is an all-new VM Portal for non-admin users.
* A new High Performance virtual machine type has been added to the New VM
dialog box in the Administration Portal.
* Open Virtual Network (OVN) adds support for Open vSwitch software
defined networking (SDN).
* oVirt now supports Nvidia vGPU.
* The ovirt-ansible-roles package helps users with common administration
tasks.
* Virt-v2v now supports Debian/Ubuntu based VMs.
For more information about these and other features, check out the oVirt
4.2.0 blog post <https://ovirt.org/blog/2017/09/introducing-ovirt-4.2.0/>.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later
* oVirt Node 4.2 (available for x86_64 only)
See the release notes draft [3] for installation / upgrade instructions and
a list of new features and bugs fixed.
Notes:
- oVirt Appliance is already available.
- An async release of oVirt Node will follow soon.
Additional Resources:
* Read more about the oVirt 4.2.0 release highlights:
http://www.ovirt.org/release/4.2.0/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.2.0/
[4] http://resources.ovirt.org/pub/ovirt-4.2-pre/iso/
--
SANDRO BONAZZOLA
ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
7 years, 7 months
Performance of cloning
by Gianluca Cecchi
Hello,
I'm on 4.1.5 and I'm cloning a snapshot of a VM with 3 disks, for a total of
about 200 GB to copy.
The target I chose is on a different domain from the source one.
They are both FC storage domains, with the source on SSD disks and the
target on SAS disks.
The disks are preallocated.
Now I have 3 processes of this kind:
/usr/bin/qemu-img convert -p -t none -T none -f raw
/rhev/data-center/59b7af54-0155-01c2-0248-000000000195/fad05d79-254d-4f40-8201-360757128ede/images/8f62600a-057d-4d59-9655-631f080a73f6/21a8812f-6a89-4015-a79e-150d7e202450
-O raw
/rhev/data-center/mnt/blockSD/6911716c-aa99-4750-a7fe-f83675a2d676/images/c3973d1b-a168-4ec5-8c1a-630cfc4b66c4/27980581-5935-4b23-989a-4811f80956ca
but despite the hardware's capabilities it seems to be copying using very
few system resources.
I see this using both iotop and vmstat.
vmstat 3 gives:
----io---- -system-- ------cpu-----
bi bo in cs us sy id wa st
2527 698 3771 29394 1 0 89 10 0
iotop -d 5 -k -o -P gives:
Total DISK READ :     472.73 K/s | Total DISK WRITE :      17.05 K/s
Actual DISK READ:    1113.23 K/s | Actual DISK WRITE:      55.86 K/s
  PID  PRIO  USER     DISK READ>  DISK WRITE  SWAPIN      IO     COMMAND
 2124  be/4  sanlock  401.39 K/s    0.20 K/s  0.00 %   0.00 %  sanlock daemon
 2146  be/4  vdsm      50.96 K/s    0.00 K/s  0.00 %   0.00 %  python /usr/share/o~a-broker --no-daemon
30379  be/0  root       7.06 K/s    0.00 K/s  0.00 %  98.09 %  lvm vgck --config ~50-a7fe-f83675a2d676
30380  be/0  root       4.70 K/s    0.00 K/s  0.00 %  98.09 %  lvm lvchange --conf~59-b931-4eb61e43b56b
30381  be/0  root       4.70 K/s    0.00 K/s  0.00 %  98.09 %  lvm lvchange --conf~83675a2d676/metadata
30631  be/0  root       3.92 K/s    0.00 K/s  0.00 %  98.09 %  lvm vgs --config d~f6-9466-553849aba5e9
 2052  be/3  root       0.00 K/s    2.35 K/s  0.00 %   0.00 %  [jbd2/dm-34-8]
 6458  be/4  qemu       0.00 K/s    4.70 K/s  0.00 %   0.00 %  qemu-kvm -name gues~x7 -msg timestamp=on
 2064  be/3  root       0.00 K/s    0.00 K/s  0.00 %   0.00 %  [jbd2/dm-32-8]
 2147  be/4  root       0.00 K/s    4.70 K/s  0.00 %   0.00 %  rsyslogd -n
 9145  idle  vdsm       0.00 K/s    0.59 K/s  0.00 %  24.52 %  qemu-img convert -p~23-989a-4811f80956ca
13313  be/4  root       0.00 K/s    0.00 K/s  0.00 %   0.00 %  [kworker/u112:3]
 9399  idle  vdsm       0.00 K/s    0.59 K/s  0.00 %  24.52 %  qemu-img convert -p~51-9c8c-8d9aaa7e8f58
 1310  ?dif  root       0.00 K/s    0.00 K/s  0.00 %   0.00 %  multipathd
 3996  be/4  vdsm       0.00 K/s    0.78 K/s  0.00 %   0.00 %  python /usr/sbin/mo~c /etc/vdsm/mom.conf
 6391  be/4  root       0.00 K/s    0.00 K/s  0.00 %   0.00 %  [kworker/u112:0]
 2059  be/3  root       0.00 K/s    3.14 K/s  0.00 %   0.00 %  [jbd2/dm-33-8]
Is it expected? Any way to speed up the process?
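Not an answer for what vdsm itself launches, but for comparison a manual
copy can be tried with the newer qemu-img knobs. A sketch, assuming QEMU
>= 2.9 where -m (parallel coroutines) and -W (out-of-order writes) are
available; check 'qemu-img --help' on your hosts first. Paths below are
shortened placeholders:

# Hypothetical manual test copy of one volume:
qemu-img convert -p -t none -T none -m 8 -W -f raw /rhev/.../source-volume -O raw /rhev/.../target-volume

If the manual run is much faster, the bottleneck is likely the serialized
single-coroutine copy rather than the SAS target itself.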
Thanks,
Gianluca
7 years, 7 months
Renaming or deleting ovirtmgmt
by Gianluca Cecchi
Hello,
I notice that in a cluster I can create a new network and then select it as
the Management network, but at the same time the default "ovirtmgmt"
logical network in "Manage Networks" is always checked both as "Assign"
and "Required", and is also grayed out so that I cannot change anything
about it... and any host in this cluster must have ovirtmgmt even if it
doesn't use it at all.
Is there any RFE to change this model?
So, for example, to be able to change its name or delete it after we have
defined a new network as the management one.
It seems similar to the concept of the "Default" DC, which we can now
finally rename and/or delete.
Gianluca
7 years, 7 months
Huge pages in guest with newer oVirt versions
by Gianluca Cecchi
Hello,
I would like to dig deeper into what I preliminarily tested and already
discussed here:
http://lists.ovirt.org/pipermail/users/2017-April/081320.html
I'm testing an oVirt node in 4.1.6-pre
I don't find the vdsm hook for huge pages; doing a search I get these:
vdsm-hook-ethtool-options.noarch : Allow setting custom ethtool options for
vdsm controlled nics
vdsm-hook-fcoe.noarch : Hook to enable FCoE support
vdsm-hook-openstacknet.noarch : OpenStack Network vNICs support for VDSM
vdsm-hook-vfio-mdev.noarch : Hook to enable mdev-capable devices.
vdsm-hook-vhostmd.noarch : VDSM hook set for interaction with vhostmd
vdsm-hook-vmfex-dev.noarch : VM-FEX vNIC support for VDSM
Did anything change between 4.1.1 and 4.1.5/4.1.6?
I'm making preliminary tests with an Oracle RDBMS and HammerDB on both a
physical server and a "big" VM inside another server with the same
hardware, configured with oVirt.
Results are not bad, but I would like to see how having huge pages inside
the guest could change the results.
Just for reference:
The 2 physical servers are blades, each one with:
2 sockets, each with 14 cores and HT enabled, so 56 computational threads in total
256 GB RAM
huge pages enabled
VM configured with this virtual hw on one of them:
2 sockets, each with 6 cores and HT, so 24 computational threads in total
64 GB RAM
no huge pages at the moment
Oracle SGA is 32 GB on both the physical RDBMS and the virtual one.
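In case it helps while the hook situation is unclear: reserving huge pages
on the host is independent of any vdsm hook and can be tested first. A
minimal sketch, assuming 2 MiB pages and a sizing of 33 GiB worth (enough
for the 32 GB SGA; the figure is an example only):

# On the hypervisor:
echo 16896 > /proc/sys/vm/nr_hugepages    # 16896 x 2 MiB = 33 GiB, example sizing
grep -i hugepages /proc/meminfo           # verify HugePages_Total / HugePages_Free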
Thanks for any insight on testing huge pages in the guest.
Gianluca
7 years, 7 months
oVirt Node update question
by Matthias Leopold
hi,
I still don't completely understand the oVirt Node update process and
the involved rpm packages.
We have 4 nodes, all running oVirt Node 4.1.3. Three of them show
'ovirt-node-ng-image-update-4.1.6-0.1.rc1.20170823083853.gitd646d2f.el7.centos'
as the available update (I don't want to run release candidates); one of them shows
'ovirt-node-ng-image-update-4.1.5-1.el7.centos' (this is what I like).
The node that doesn't want to upgrade to '4.1.6-0.1.rc1' lacks the rpm
package 'ovirt-node-ng-image-update-4.1.3-1.el7.centos.noarch' and only has
'ovirt-node-ng-image-update-placeholder-4.1.3-1.el7.centos.noarch'. Also
the version of ovirt-node-ng-nodectl is '4.1.3-0.20170709.0.el7' instead
of '4.1.3-0.20170705.0.el7'. This node was the last one I installed and
it never went through a version update before.
I only began using oVirt with 4.1, but I have already completed minor
version upgrades of oVirt nodes. IIRC this 'mysterious'
ovirt-node-ng-image-update package comes into play when updating a node
for the first time after initial installation. Usually I wouldn't care
about all of this, but now I have this RC update situation that I don't
want. How is this supposed to work? How can I resolve it?
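A hedged way to check and steer this from the node itself: 'nodectl info'
shows the installed image layers, and an RC being offered usually means a
pre-release repo is enabled. The repo id pattern below is an assumption;
list yours with 'yum repolist' first.

nodectl info                                                  # inspect installed image layers
yum repolist enabled                                          # look for a *-pre repo
yum --disablerepo='*pre*' update ovirt-node-ng-image-update   # skip RC builds, assumed repo id pattern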
thx
matthias
7 years, 7 months
Qemu prevents vm from starting up properly
by M R
Hello!
I have followed the instructions in
https://www.ovirt.org/develop/developer-guide/vdsm/hook/qemucmdline/
After adding any command for the qemu cmdline, the VM will try to start,
but will immediately shut down.
Is this a bug? Or is the information in the link insufficient?
If it is possible to confirm this, and if there's a way to fix it, I
would really like a step-by-step guide on how to get this working.
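For reference, the page linked above expects the custom property to be
registered on the engine before the hook does anything. A sketch of that
step; note that -s UserDefinedVMProperties overwrites any existing custom
properties (merge yours in), and you may need --cver for your cluster level:

# On the engine host:
engine-config -s 'UserDefinedVMProperties=qemu_cmdline=^.*$'
systemctl restart ovirt-engine
# Then set the VM custom property as a JSON list, e.g. (illustrative value):
#   qemu_cmdline=["-cdrom","/path/to/image.iso"]

If the property is registered and the VM still dies instantly,
/var/log/libvirt/qemu/<vm>.log on the host usually shows which injected
argument QEMU rejected.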
thank you,
Mikko
7 years, 7 months
resizing and reordering column in webui
by Nathanaël Blanchet
The web UI currently (and usefully) allows us to reorder columns by
right-clicking. Is there a way to keep the chosen configuration for the
next sessions?
--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
7 years, 7 months
Restrict a single storage domain for users/roles
by Marco Paolone
Hello,
This is my first message on the ML.
First off, a short recap: I'm setting up an oVirt test environment based
on version 4.1.5.2. Once in production, I'm planning to let users access
the dashboard with a custom role for basic operations (like startup,
shutdown or creation of VMs).
For storage domains I'm using both NFS and Cinder, and I'd like to know
if there's a way to restrict users so that they can create new volumes
only on Cinder.
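One direction worth testing: oVirt permissions are scoped, so instead of
giving users a disk-creation role at data-center or system level, the role
can be attached to the Cinder storage domain only. A rough REST sketch; all
IDs are placeholders to be looked up under /api/roles, /api/users and
/api/storagedomains first, and I have not verified every custom role
combination behaves as expected:

curl --cacert /etc/pki/ovirt-engine/ca.pem -u 'admin@internal:PASSWORD' \
     -H 'Content-Type: application/xml' -X POST \
     -d '<permission><role id="ROLE_UUID"/><user id="USER_UUID"/></permission>' \
     'https://engine.example.com/ovirt-engine/api/storagedomains/SD_UUID/permissions'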
Best regards,
Marco
7 years, 7 months
ansible ovirt_vms parameter cloud_init_nics
by TranceWorldLogic .
Hi,
I was trying to initialize more than one nic via cloud init using ansible
as shown below
vars:
myNicList: [ { nic_name: "eth0", nic_boot_protocol: "dhcp", nic_on_boot:
"true" }, { nic_name: "eth0", nic_boot_protocol: "dhcp", nic_on_boot: "true" } ]
ovirt_vms:
auth: "{{ ovirt_auth }}"
name: test
...
cloud_init_nics: "{{ myNicList }}"
Here I am getting an error that the object is of type None.
When I tried to debug the ovirt_vms module it showed me the output below:
cloud_init_nics: [
{},
{}
]
My question is: how can I pass a list of dictionaries in Ansible to
ovirt_vms via a variable?
Please help me, I am stuck.
Thanks,
~Rohit
7 years, 7 months
ERROR Failed extracting VM OVF from the OVF_STORE volume
by gabriel_skupien@o2.pl
After a fresh installation of oVirt Node 4.1.5 and self-hosted engine I can see hundreds of log entries like the ones below. What is this about? How to mitigate it?

07:08 ovirt-ha-agent ovirt_hosted_engine_ha.agent.h ERROR Failed extracting VM OVF from the OVF_STORE volume, falling back to initial vm.conf (ovirt-ha-agent)
07:08 ovirt-ha-agent ovirt_hosted_engine_ha.lib.ovf ERROR Unable to extract HEVM OVF (ovirt-ha-agent)
07:07 ovirt-ha-agent ovirt_hosted_engine_ha.agent.h ERROR Failed extracting VM OVF from the OVF_STORE volume, falling back to initial vm.conf (ovirt-ha-agent)
07:07 ovirt-ha-agent ovirt_hosted_engine_ha.lib.ovf ERROR Unable to extract HEVM OVF (ovirt-ha-agent)
[the same two messages repeat, twice per minute, down through 07:04]

Regards,
Gabriel
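If it is the usual cause, the messages should stop once the engine has
created the OVF_STORE disks on the first data storage domain; this is a
sketch of how to watch for that from the host, not a confirmed diagnosis:

hosted-engine --vm-status            # overall HA state
journalctl -u ovirt-ha-agent -n 50   # confirm the OVF errors stop after the first data domain is added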
7 years, 7 months
Preventing VM's from starting on specific Hosts
by Mark Steele
Hello,
We've added two new hosts to our oVirt 3.5.0.1 installation and want to
restrict which VMs can start on them - specifically, we do not want any
VMs to be able to auto-select these two hosts.
How do I prevent VMs from starting on these two HVs without specifically
tying every VM to a host?
Best regards,
***
*Mark Steele*
CIO / VP Technical Operations | TelVue Corporation
TelVue - We Share Your Vision
16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
800.885.8886 x128 | msteele(a)telvue.com | http://www.telvue.com
twitter: http://twitter.com/telvue | facebook:
https://www.facebook.com/telvue
7 years, 7 months
Update engine from 4.1.5 to 4.1.6 pushes 4.1.7 for dwh
by Gianluca Cecchi
Hello,
I'm updating my engine from 4.1.5 to 4.1.6
But I see that the command
# yum update "ovirt-*-setup*"
retrieves these updates
 ovirt-engine-dwh-setup                            noarch  4.1.7-1.el7.centos    ovirt-4.1   69 k
 ovirt-engine-setup                                noarch  4.1.6.2-1.el7.centos  ovirt-4.1  9.8 k
 ovirt-engine-setup-base                           noarch  4.1.6.2-1.el7.centos  ovirt-4.1   98 k
 ovirt-engine-setup-plugin-ovirt-engine            noarch  4.1.6.2-1.el7.centos  ovirt-4.1  158 k
 ovirt-engine-setup-plugin-ovirt-engine-common     noarch  4.1.6.2-1.el7.centos  ovirt-4.1   96 k
 ovirt-engine-setup-plugin-vmconsole-proxy-helper  noarch  4.1.6.2-1.el7.centos  ovirt-4.1   29 k
 ovirt-engine-setup-plugin-websocket-proxy         noarch  4.1.6.2-1.el7.centos  ovirt-4.1   27 k
Updating for dependencies:
 ovirt-engine-lib                                  noarch  4.1.6.2-1.el7.centos  ovirt-4.1   31 k
Is the 4.1.7 version correct for ovirt-engine-dwh-setup?
Thanks,
Gianluca
7 years, 7 months
VM won't start if a Cinder disk is attached
by Maxence SARTIAUX
Hello
I have an oVirt 4.1.5.2-1 cluster with a Ceph Luminous & OpenStack Ocata Cinder.
I can create / remove / attach Cinder disks with oVirt, but when I attach a disk to a VM, the VM stays in "starting" mode (grey double up arrow) and never comes up; oVirt tries every available hypervisor, ends up detaching the disk, and the VM stays in the "starting up" state.
All I see in the libvirt logs is "connection timeout", nothing more; the hypervisors can contact the Ceph cluster.
Nothing related in the oVirt logs & Cinder.
Any ideas?
Thank you!
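Since the libvirt message is only a connection timeout, one hedged check is
whether the hypervisor can open the RBD volume with the same cephx identity
Cinder hands out. Pool name, user, volume and paths below are placeholders:

# On the hypervisor; assumes the 'cinder' cephx user and keyring are present there:
rbd -p volumes --id cinder --keyring /etc/ceph/ceph.client.cinder.keyring ls
qemu-img info rbd:volumes/volume-UUID:id=cinder:conf=/etc/ceph/ceph.conf

If these hang, the problem is host-to-Ceph auth/connectivity rather than
oVirt; note oVirt normally passes the cephx key via a libvirt secret, so
the keyring file above is only for the manual test.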
Maxence Sartiaux | System & Network Engineer
Boulevard Initialis, 28 - 7000 Mons
Tel : +32 (0)65 84 23 85 (ext: 6016)
Fax : +32 (0)65 84 66 76 www.it-optics.com
7 years, 7 months
Snapshot removal
by Lionel Caignec
Hi,
I'm wondering if it is possible to delete snapshots of different VMs at the same time, or is it necessary to do them only one at a time?
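For what it's worth, snapshot removal on different VMs can be requested
independently through the REST API, so in principle they can run at the
same time (the engine may still serialize work per storage domain; that
part is an assumption). A sketch with placeholder UUIDs and credentials:

ENGINE='https://engine.example.com/ovirt-engine/api'
for pair in 'VM1_UUID:SNAP1_UUID' 'VM2_UUID:SNAP2_UUID'; do
  vm=${pair%%:*}; snap=${pair##*:}
  curl --cacert ca.pem -u 'admin@internal:PASSWORD' \
       -X DELETE "$ENGINE/vms/$vm/snapshots/$snap" &   # one removal per VM, in parallel
done
wait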
--
Lionel
7 years, 7 months
Fresh install vs. suspicious logs
by gabriel_skupien@o2.pl
It is a little bit strange for me that after the fresh install there are plenty of errors in the log. Is it normal? :)
ps. for the OVF_STORE/ovirt-ha-agent problem I already posted the separate thread.

14:18 ovirt-ha-agent ovirt_hosted_engine_ha.agent.h ERROR Unable to identify the OVF_STORE volume, falling back to initial vm.conf. Please ensure you already added your first data domain for regular VMs (ovirt-ha-agent 145)
14:18 Could not set session3 priority. READ/WRITE throughout and latency could be affected. (iscsid)
14:18 flush_on_last_del in progress (multipathd 2)
14:18 360000000000000000e00000000020 can't flush (multipathd)
14:17 ovirt-ha-agent ovirt_hosted_engine_ha.agent.h ERROR Unable to identify the OVF_STORE volume, falling back to initial vm.conf. Please ensure you already added your first data domain for regular VMs (ovirt-ha-agent 2)
14:17 Could not set session2 priority. READ/WRITE throughout and latency could be affected. (iscsid)
14:15 ovirt-ha-agent ovirt_hosted_engine_ha.agent.h ERROR Unable to identify the OVF_STORE volume, falling back to initial vm.conf. Please ensure you already added your first data domain for regular VMs (ovirt-ha-agent 6)
14:15 kvm [9486]: vcpu0 unhandled rdmsr: 0x619 (kernel)
14:15 kvm [9486]: vcpu0 unhandled rdmsr: 0x641 (kernel)
14:15 kvm [9486]: vcpu0 unhandled rdmsr: 0x639 (kernel)
14:15 kvm [9486]: vcpu0 unhandled rdmsr: 0x611 (kernel)
[the four rdmsr messages above repeat once more]
14:15 kvm [9486]: vcpu2 unhandled rdmsr: 0x606 (kernel)
14:15 End of file while reading data: Input/output error (virtlogd)
14:15 Failed to acquire lock: File exists (libvirtd)
14:12 ovirt-ha-agent ovirt_hosted_engine_ha.agent.h ERROR Unable to identify the OVF_STORE volume, falling back to initial vm.conf. Please ensure you already added your first data domain for regular VMs (ovirt-ha-agent 9)
14:12 End of file while reading data: Input/output error (virtlogd)
14:12 internal error: End of file from monitor (libvirtd)
14:12 vdsm root ERROR failed to retrieve Hosted Engine HA info (vdsm)
    Traceback (most recent call last):
      File "/usr/lib/python2.7/site-packa line 231, in _getHaInfo
        stats = instance.get_all_stats()
      File "/usr/lib/python2.7/site-packa line 102, in get_all_stats
        with broker.connection(self._retrie self._wait):
      File "/usr/lib64/python2.7/contextl line 17, in __enter__
        return self.gen.next()
      File "/usr/lib/python2.7/site-packa line 99, in connection
        self.connect(retries, wait)
      File "/usr/lib/python2.7/site-packa line 78, in connect
        raise BrokerConnectionError(error_ms
    BrokerConnectionError: Failed to connect to broker, the number of errors has exceeded the limit (1)
14:12 vdsm ovirt_hosted_engine_ha.lib.bro ERROR Failed to connect to broker, the number of errors has exceeded the limit (1) (vdsm)
[the same pair - the HA-info traceback followed by the broker error - repeats many more times at 14:12 and 14:11]
14:10 error deleting entry from sasldb: BDB0073 DB_NOTFOUND: No matching key/data pair found (saslpasswd2 3)
14:10 End of file while reading data: Input/output error (libvirtd)
14:10 vdsm root ERROR failed to retrieve Hosted Engine HA info [same traceback as above] (vdsm)
14:10 vdsm ovirt_hosted_engine_ha.lib.bro ERROR Failed to connect to broker, the number of errors has exceeded the limit
(1) vdsm 14:10 vdsm ovirt_hosted_engine_ha.lib.bro ERROR Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:09 vdsm root ERROR failed to retrieve Hosted Engine HA info Traceback (most recent call last): File "/usr/lib/python2.7/site-packa line 231, in _getHaInfo stats = instance.get_all_stats() File "/usr/lib/python2.7/site-packa line 102, in get_all_stats with broker.connection(self._retrie self._wait): File "/usr/lib64/python2.7/contextl line 17, in __enter__ return self.gen.next() File "/usr/lib/python2.7/site-packa line 99, in connection self.connect(retries, wait) File "/usr/lib/python2.7/site-packa line 78, in connect raise BrokerConnectionError(error_ms BrokerConnectionError: Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:09 vdsm ovirt_hosted_engine_ha.lib.bro ERROR Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:09 vdsm root ERROR failed to retrieve Hosted Engine HA info Traceback (most recent call last): File "/usr/lib/python2.7/site-packa line 231, in _getHaInfo stats = instance.get_all_stats() File "/usr/lib/python2.7/site-packa line 102, in get_all_stats with broker.connection(self._retrie self._wait): File "/usr/lib64/python2.7/contextl line 17, in __enter__ return self.gen.next() File "/usr/lib/python2.7/site-packa line 99, in connection self.connect(retries, wait) File "/usr/lib/python2.7/site-packa line 78, in connect raise BrokerConnectionError(error_ms BrokerConnectionError: Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:09 vdsm ovirt_hosted_engine_ha.lib.bro ERROR Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:09 vdsm root ERROR failed to retrieve Hosted Engine HA info Traceback (most recent call last): File "/usr/lib/python2.7/site-packa line 231, in _getHaInfo stats = instance.get_all_stats() File "/usr/lib/python2.7/site-packa line 102, in get_all_stats with broker.connection(self._retrie self._wait): File "/usr/lib64/python2.7/contextl line 17, in __enter__ return self.gen.next() File "/usr/lib/python2.7/site-packa line 99, in connection self.connect(retries, wait) File "/usr/lib/python2.7/site-packa line 78, in connect raise BrokerConnectionError(error_ms BrokerConnectionError: Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:09 vdsm ovirt_hosted_engine_ha.lib.bro ERROR Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:09 vdsm root ERROR failed to retrieve Hosted Engine HA info Traceback (most recent call last): File "/usr/lib/python2.7/site-packa line 231, in _getHaInfo stats = instance.get_all_stats() File "/usr/lib/python2.7/site-packa line 102, in get_all_stats with broker.connection(self._retrie self._wait): File "/usr/lib64/python2.7/contextl line 17, in __enter__ return self.gen.next() File "/usr/lib/python2.7/site-packa line 99, in connection self.connect(retries, wait) File "/usr/lib/python2.7/site-packa line 78, in connect raise BrokerConnectionError(error_ms BrokerConnectionError: Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:09 vdsm ovirt_hosted_engine_ha.lib.bro ERROR Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:08 vdsm root ERROR failed to retrieve Hosted Engine HA info Traceback (most recent call last): File "/usr/lib/python2.7/site-packa line 231, in _getHaInfo stats = instance.get_all_stats() File 
"/usr/lib/python2.7/site-packa line 102, in get_all_stats with broker.connection(self._retrie self._wait): File "/usr/lib64/python2.7/contextl line 17, in __enter__ return self.gen.next() File "/usr/lib/python2.7/site-packa line 99, in connection self.connect(retries, wait) File "/usr/lib/python2.7/site-packa line 78, in connect raise BrokerConnectionError(error_ms BrokerConnectionError: Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:08 vdsm ovirt_hosted_engine_ha.lib.bro ERROR Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:08 vdsm root ERROR failed to retrieve Hosted Engine HA info Traceback (most recent call last): File "/usr/lib/python2.7/site-packa line 231, in _getHaInfo stats = instance.get_all_stats() File "/usr/lib/python2.7/site-packa line 102, in get_all_stats with broker.connection(self._retrie self._wait): File "/usr/lib64/python2.7/contextl line 17, in __enter__ return self.gen.next() File "/usr/lib/python2.7/site-packa line 99, in connection self.connect(retries, wait) File "/usr/lib/python2.7/site-packa line 78, in connect raise BrokerConnectionError(error_ms BrokerConnectionError: Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:08 vdsm ovirt_hosted_engine_ha.lib.bro ERROR Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:08 vdsm root ERROR failed to retrieve Hosted Engine HA info Traceback (most recent call last): File "/usr/lib/python2.7/site-packa line 231, in _getHaInfo stats = instance.get_all_stats() File "/usr/lib/python2.7/site-packa line 102, in get_all_stats with broker.connection(self._retrie self._wait): File "/usr/lib64/python2.7/contextl line 17, in __enter__ return self.gen.next() File "/usr/lib/python2.7/site-packa line 99, in connection self.connect(retries, wait) File "/usr/lib/python2.7/site-packa line 78, in connect raise BrokerConnectionError(error_ms BrokerConnectionError: Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:08 vdsm ovirt_hosted_engine_ha.lib.bro ERROR Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:08 vdsm root ERROR failed to retrieve Hosted Engine HA info Traceback (most recent call last): File "/usr/lib/python2.7/site-packa line 231, in _getHaInfo stats = instance.get_all_stats() File "/usr/lib/python2.7/site-packa line 102, in get_all_stats with broker.connection(self._retrie self._wait): File "/usr/lib64/python2.7/contextl line 17, in __enter__ return self.gen.next() File "/usr/lib/python2.7/site-packa line 99, in connection self.connect(retries, wait) File "/usr/lib/python2.7/site-packa line 78, in connect raise BrokerConnectionError(error_ms BrokerConnectionError: Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:08 vdsm ovirt_hosted_engine_ha.lib.bro ERROR Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:07 vdsm root ERROR failed to retrieve Hosted Engine HA info Traceback (most recent call last): File "/usr/lib/python2.7/site-packa line 231, in _getHaInfo stats = instance.get_all_stats() File "/usr/lib/python2.7/site-packa line 102, in get_all_stats with broker.connection(self._retrie self._wait): File "/usr/lib64/python2.7/contextl line 17, in __enter__ return self.gen.next() File "/usr/lib/python2.7/site-packa line 99, in connection self.connect(retries, wait) File "/usr/lib/python2.7/site-packa line 78, in connect raise 
BrokerConnectionError(error_ms BrokerConnectionError: Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:07 vdsm ovirt_hosted_engine_ha.lib.bro ERROR Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:07 vdsm root ERROR failed to retrieve Hosted Engine HA info Traceback (most recent call last): File "/usr/lib/python2.7/site-packa line 231, in _getHaInfo stats = instance.get_all_stats() File "/usr/lib/python2.7/site-packa line 102, in get_all_stats with broker.connection(self._retrie self._wait): File "/usr/lib64/python2.7/contextl line 17, in __enter__ return self.gen.next() File "/usr/lib/python2.7/site-packa line 99, in connection self.connect(retries, wait) File "/usr/lib/python2.7/site-packa line 78, in connect raise BrokerConnectionError(error_ms BrokerConnectionError: Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:07 vdsm ovirt_hosted_engine_ha.lib.bro ERROR Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:07 vdsm root ERROR failed to retrieve Hosted Engine HA info Traceback (most recent call last): File "/usr/lib/python2.7/site-packa line 231, in _getHaInfo stats = instance.get_all_stats() File "/usr/lib/python2.7/site-packa line 102, in get_all_stats with broker.connection(self._retrie self._wait): File "/usr/lib64/python2.7/contextl line 17, in __enter__ return self.gen.next() File "/usr/lib/python2.7/site-packa line 99, in connection self.connect(retries, wait) File "/usr/lib/python2.7/site-packa line 78, in connect raise BrokerConnectionError(error_ms BrokerConnectionError: Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:07 vdsm ovirt_hosted_engine_ha.lib.bro ERROR Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:07 vdsm root ERROR failed to retrieve Hosted Engine HA info Traceback (most recent call last): File "/usr/lib/python2.7/site-packa line 231, in _getHaInfo stats = instance.get_all_stats() File "/usr/lib/python2.7/site-packa line 102, in get_all_stats with broker.connection(self._retrie self._wait): File "/usr/lib64/python2.7/contextl line 17, in __enter__ return self.gen.next() File "/usr/lib/python2.7/site-packa line 99, in connection self.connect(retries, wait) File "/usr/lib/python2.7/site-packa line 78, in connect raise BrokerConnectionError(error_ms BrokerConnectionError: Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:07 vdsm ovirt_hosted_engine_ha.lib.bro ERROR Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:06 vdsm root ERROR failed to retrieve Hosted Engine HA info Traceback (most recent call last): File "/usr/lib/python2.7/site-packa line 231, in _getHaInfo stats = instance.get_all_stats() File "/usr/lib/python2.7/site-packa line 102, in get_all_stats with broker.connection(self._retrie self._wait): File "/usr/lib64/python2.7/contextl line 17, in __enter__ return self.gen.next() File "/usr/lib/python2.7/site-packa line 99, in connection self.connect(retries, wait) File "/usr/lib/python2.7/site-packa line 78, in connect raise BrokerConnectionError(error_ms BrokerConnectionError: Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:06 vdsm ovirt_hosted_engine_ha.lib.bro ERROR Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:06 vdsm root ERROR failed to retrieve Hosted Engine HA info Traceback (most recent call 
last): File "/usr/lib/python2.7/site-packa line 231, in _getHaInfo stats = instance.get_all_stats() File "/usr/lib/python2.7/site-packa line 102, in get_all_stats with broker.connection(self._retrie self._wait): File "/usr/lib64/python2.7/contextl line 17, in __enter__ return self.gen.next() File "/usr/lib/python2.7/site-packa line 99, in connection self.connect(retries, wait) File "/usr/lib/python2.7/site-packa line 78, in connect raise BrokerConnectionError(error_ms BrokerConnectionError: Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:06 vdsm ovirt_hosted_engine_ha.lib.bro ERROR Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:06 vdsm root ERROR failed to retrieve Hosted Engine HA info Traceback (most recent call last): File "/usr/lib/python2.7/site-packa line 231, in _getHaInfo stats = instance.get_all_stats() File "/usr/lib/python2.7/site-packa line 102, in get_all_stats with broker.connection(self._retrie self._wait): File "/usr/lib64/python2.7/contextl line 17, in __enter__ return self.gen.next() File "/usr/lib/python2.7/site-packa line 99, in connection self.connect(retries, wait) File "/usr/lib/python2.7/site-packa line 78, in connect raise BrokerConnectionError(error_ms BrokerConnectionError: Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:06 vdsm ovirt_hosted_engine_ha.lib.bro ERROR Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:06 vdsm root ERROR failed to retrieve Hosted Engine HA info Traceback (most recent call last): File "/usr/lib/python2.7/site-packa line 231, in _getHaInfo stats = instance.get_all_stats() File "/usr/lib/python2.7/site-packa line 102, in get_all_stats with broker.connection(self._retrie self._wait): File "/usr/lib64/python2.7/contextl line 17, in __enter__ return self.gen.next() File "/usr/lib/python2.7/site-packa line 99, in connection self.connect(retries, wait) File "/usr/lib/python2.7/site-packa line 78, in connect raise BrokerConnectionError(error_ms BrokerConnectionError: Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:06 vdsm ovirt_hosted_engine_ha.lib.bro ERROR Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:05 vdsm root ERROR failed to retrieve Hosted Engine HA info Traceback (most recent call last): File "/usr/lib/python2.7/site-packa line 231, in _getHaInfo stats = instance.get_all_stats() File "/usr/lib/python2.7/site-packa line 102, in get_all_stats with broker.connection(self._retrie self._wait): File "/usr/lib64/python2.7/contextl line 17, in __enter__ return self.gen.next() File "/usr/lib/python2.7/site-packa line 99, in connection self.connect(retries, wait) File "/usr/lib/python2.7/site-packa line 78, in connect raise BrokerConnectionError(error_ms BrokerConnectionError: Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:05 vdsm ovirt_hosted_engine_ha.lib.bro ERROR Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:05 vdsm root ERROR failed to retrieve Hosted Engine HA info Traceback (most recent call last): File "/usr/lib/python2.7/site-packa line 231, in _getHaInfo stats = instance.get_all_stats() File "/usr/lib/python2.7/site-packa line 102, in get_all_stats with broker.connection(self._retrie self._wait): File "/usr/lib64/python2.7/contextl line 17, in __enter__ return self.gen.next() File "/usr/lib/python2.7/site-packa line 99, in connection 
self.connect(retries, wait) File "/usr/lib/python2.7/site-packa line 78, in connect raise BrokerConnectionError(error_ms BrokerConnectionError: Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:05 vdsm ovirt_hosted_engine_ha.lib.bro ERROR Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:05 kvm [6290]: vcpu0 unhandled rdmsr: 0x639 kernel 14:05 kvm [6290]: vcpu0 unhandled rdmsr: 0x611 kernel 14:05 kvm [6290]: vcpu0 unhandled rdmsr: 0x619 kernel 14:05 kvm [6290]: vcpu0 unhandled rdmsr: 0x641 kernel 14:05 kvm [6290]: vcpu0 unhandled rdmsr: 0x639 kernel 14:05 kvm [6290]: vcpu0 unhandled rdmsr: 0x611 kernel 14:05 kvm [6290]: vcpu0 unhandled rdmsr: 0x619 kernel 14:05 kvm [6290]: vcpu0 unhandled rdmsr: 0x641 kernel 14:05 kvm [6290]: vcpu0 unhandled rdmsr: 0x639 kernel 14:05 kvm [6290]: vcpu0 unhandled rdmsr: 0x611 kernel 14:05 kvm [6290]: vcpu0 unhandled rdmsr: 0x606 kernel 14:05 vdsm root ERROR failed to retrieve Hosted Engine HA info Traceback (most recent call last): File "/usr/lib/python2.7/site-packa line 231, in _getHaInfo stats = instance.get_all_stats() File "/usr/lib/python2.7/site-packa line 102, in get_all_stats with broker.connection(self._retrie self._wait): File "/usr/lib64/python2.7/contextl line 17, in __enter__ return self.gen.next() File "/usr/lib/python2.7/site-packa line 99, in connection self.connect(retries, wait) File "/usr/lib/python2.7/site-packa line 78, in connect raise BrokerConnectionError(error_ms BrokerConnectionError: Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:05 vdsm ovirt_hosted_engine_ha.lib.bro ERROR Failed to connect to broker, the number of errors has exceeded the limit (1) vdsm 14:05 End of file while reading data: Input/output error virtlogd 14:05 Failed to acquire lock: File exists libvirtd 13:49 connection1:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4295021332, last ping 4295026334, now 4295031344 kernel 13:49 error deleting entry from sasldb: BDB0073 DB_NOTFOUND: No matching key/data pair found saslpasswd2 3 13:49 End of file while reading data: Input/output error libvirtd 13:45 Could not set session1 priority. READ/WRITE throughout and latency could be affected. iscsid 13:43 iSCSI daemon with pid=1554 started! 
iscsid 13:43 error deleting entry from sasldb: BDB0073 DB_NOTFOUND: No matching key/data pair found saslpasswd2 3 13:43 dhywgZgYAYtrCSW: option arp_all_targets: invalid value (2) kernel 13:43 dhywgZgYAYtrCSW: option primary_reselect: invalid value (3) kernel 13:43 dhywgZgYAYtrCSW: option xmit_hash_policy: invalid value (5) kernel 13:43 dhywgZgYAYtrCSW: option arp_validate: mode dependency failed, not supported in mode balance-alb(6) kernel 13:43 dhywgZgYAYtrCSW: option lacp_rate: mode dependency failed, not supported in mode balance-alb(6) kernel 13:43 dhywgZgYAYtrCSW: option ad_select: invalid value (3) kernel 13:43 dhywgZgYAYtrCSW: option fail_over_mac: invalid value (3) kernel 13:43 dhywgZgYAYtrCSW: option arp_all_targets: invalid value (2) kernel 13:43 dhywgZgYAYtrCSW: option primary_reselect: invalid value (3) kernel 13:43 dhywgZgYAYtrCSW: option xmit_hash_policy: invalid value (5) kernel 13:43 dhywgZgYAYtrCSW: option arp_validate: mode dependency failed, not supported in mode balance-tlb(5) kernel 13:43 dhywgZgYAYtrCSW: option lacp_rate: mode dependency failed, not supported in mode balance-tlb(5) kernel 13:43 dhywgZgYAYtrCSW: option ad_select: invalid value (3) kernel 13:43 dhywgZgYAYtrCSW: option fail_over_mac: invalid value (3) kernel 13:43 dhywgZgYAYtrCSW: option arp_all_targets: invalid value (2) kernel 13:43 dhywgZgYAYtrCSW: option primary_reselect: invalid value (3) kernel 13:43 dhywgZgYAYtrCSW: option xmit_hash_policy: invalid value (5) kernel 13:43 dhywgZgYAYtrCSW: option arp_validate: mode dependency failed, not supported in mode 802.3ad(4) kernel 13:43 dhywgZgYAYtrCSW: option lacp_rate: invalid value (2) kernel 13:43 dhywgZgYAYtrCSW: option ad_select: invalid value (3) kernel 13:43 dhywgZgYAYtrCSW: option fail_over_mac: invalid value (3) kernel 13:43 dhywgZgYAYtrCSW: option arp_all_targets: invalid value (2) kernel 13:43 dhywgZgYAYtrCSW: option primary_reselect: invalid value (3) kernel 13:43 dhywgZgYAYtrCSW: option xmit_hash_policy: invalid value (5) kernel 13:43 dhywgZgYAYtrCSW: option arp_validate: invalid value (7) kernel 13:43 dhywgZgYAYtrCSW: option lacp_rate: mode dependency failed, not supported in mode broadcast(3) kernel 13:43 dhywgZgYAYtrCSW: option ad_select: invalid value (3) kernel 13:43 dhywgZgYAYtrCSW: option fail_over_mac: invalid value (3) kernel 13:43 dhywgZgYAYtrCSW: option arp_all_targets: invalid value (2) kernel 13:43 dhywgZgYAYtrCSW: option primary_reselect: invalid value (3) kernel 13:43 dhywgZgYAYtrCSW: option xmit_hash_policy: invalid value (5) kernel 13:43 dhywgZgYAYtrCSW: option arp_validate: invalid value (7) kernel 13:43 dhywgZgYAYtrCSW: option lacp_rate: mode dependency failed, not supported in mode balance-xor(2) kernel 13:43 dhywgZgYAYtrCSW: option ad_select: invalid value (3) kernel 13:43 dhywgZgYAYtrCSW: option fail_over_mac: invalid value (3) kernel 13:43 dhywgZgYAYtrCSW: option arp_all_targets: invalid value (2) kernel 13:43 dhywgZgYAYtrCSW: option primary_reselect: invalid value (3) kernel 13:43 dhywgZgYAYtrCSW: option xmit_hash_policy: invalid value (5) kernel 13:43 dhywgZgYAYtrCSW: option arp_validate: invalid value (7) kernel 13:43 dhywgZgYAYtrCSW: option lacp_rate: mode dependency failed, not supported in mode active-backup(1) kernel 13:43 dhywgZgYAYtrCSW: option ad_select: invalid value (3) kernel 13:43 dhywgZgYAYtrCSW: option fail_over_mac: invalid value (3) kernel 13:43 dhywgZgYAYtrCSW: option arp_all_targets: invalid value (2) kernel 13:43 dhywgZgYAYtrCSW: option primary_reselect: invalid value (3) kernel 13:43 
dhywgZgYAYtrCSW: option xmit_hash_policy: invalid value (5) kernel 13:43 dhywgZgYAYtrCSW: option arp_validate: invalid value (7) kernel 13:43 dhywgZgYAYtrCSW: option lacp_rate: mode dependency failed, not supported in mode balance-rr(0) kernel 13:43 dhywgZgYAYtrCSW: option ad_select: invalid value (3) kernel 13:43 dhywgZgYAYtrCSW: option fail_over_mac: invalid value (3) kernel 13:43 /dev/watchdog0 armed with fire_timeout 60 wdmd 13:43 wdmd started S0 H1 G179 wdmd 13:43 uevent trigger error multipathd 13:43 3644a842020ec46001d3fcaba23cb1 failed in domap for addition of new path sda multipathd 13:43 device-mapper: table: 253:5: multipath: error getting device kernel 13:43 sda: spurious uevent, path already in pathvec multipathd 15:43 cannot save any registration rpcbind 15:43 cannot open file = /run/rpcbind/portmap.xdr for writing rpcbind 15:43 cannot save any registration rpcbind 15:43 cannot open file = /run/rpcbind/rpcbind.xdr for writing rpcbind 15:43 can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf iscsid 15:43 can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi iscsid 15:43 Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formated InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or d iscsid 15:43 can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi iscsid 15:43 iSCSI daemon with pid=406 started! iscsid 15:43 could not find module by name='ipmi_msghandler' systemd-udevd 15:43 could not find module by name='ipmi_devintf' systemd-udevd 15:43 could not find module by name='ipmi_si' systemd-udevd 15:43 Cannot add dependency dev-onn-ovirt\x2dnode\x2dng\x2 to initrd.target, ignoring: Invalid argument systemd 15:43 i8042: No controller found kernel . . . . 14:18 ovirt-ha-agent ovirt_hosted_engine_ha.agent.h ERROR Unable to identify the OVF_STORE volume, falling back to initial vm.conf. Please ensure you already added your first data domain for regular VMs ovirt-ha-agent 145 14:18 Could not set session3 priority. READ/WRITE throughout and latency could be affected. iscsid 14:18 flush_on_last_del in progress multipathd 2 14:18 360000000000000000e00000000020 can't flush multipathd 14:17 ovirt-ha-agent ovirt_hosted_engine_ha.agent.h ERROR Unable to identify the OVF_STORE volume, falling back to initial vm.conf. Please ensure you already added your first data domain for regular VMs ovirt-ha-agent 2 14:17 Could not set session2 priority. READ/WRITE throughout and latency could be affected. iscsid 14:15 ovirt-ha-agent ovirt_hosted_engine_ha.agent.h ERROR Unable to identify the OVF_STORE volume, falling back to initial vm.conf. 
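Most of the noise above is the HA agent and vdsm failing to reach ovirt-ha-broker, so that is the first thing worth checking. A minimal sketch, assuming the standard hosted-engine service names on the node (these commands are the usual checks, not taken from the log):

    # is the broker actually running, and why did it stop if not?
    systemctl status ovirt-ha-broker ovirt-ha-agent
    journalctl -u ovirt-ha-broker --since "14:00"

    # once the broker answers again, this should report the engine VM state
    hosted-engine --vm-status

If the broker restarts cleanly and the BrokerConnectionError flood stops, the earlier entries were most likely just ordering during deployment: vdsm started polling HA status before the broker was up.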
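The iscsid complaints about /etc/iscsi/initiatorname.iscsi are easy to verify by hand. A small sketch, assuming iscsi-initiator-utils is installed (iscsi-iname ships with that package):

    # does the initiator name file exist and contain an InitiatorName= line?
    cat /etc/iscsi/initiatorname.iscsi

    # if it is missing, generate a fresh IQN and restart the daemon
    echo "InitiatorName=$(iscsi-iname)" > /etc/iscsi/initiatorname.iscsi
    systemctl restart iscsid

The 15:43 entries at the bottom look like early-boot messages from before the node's configuration was laid down; if the later "iSCSI daemon with pid=1554 started!" run no longer warns, there may be nothing to fix.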
7 years, 7 months
Ovirt-NFS-ISO-Storage-Initial setup Fail --- Please support
by Anzar Esmail Sainudeen
Dear All,
hope all is going fine.
The oVirt environment is almost ready for operation; I am stuck at the ISO storage
type.
I decided to set up NFS and did the configuration, but some errors are
coming from exportfs. I will share them below:
[root@ovirtnode4 ~]# exportfs -r
exportfs: No options for Create Exports: suggest Exports(sync) to avoid
warning
exportfs: Failed to resolve Exports
exportfs: Failed to resolve Exports
exportfs: No options for Create List: suggest List(sync) to avoid warning
exportfs: Failed to resolve List
exportfs: Failed to resolve List
exportfs: No host name given with Create (Create, suggest *(Create to avoid
warning
exportfs: /etc/exports:1: syntax error: bad option list
cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4
localhost4.localdomain4
::1 localhost localhost.localdomain localhost6
localhost6.localdomain6
172.23.4.11 ovirtnode1.thumbaytechlabs.int
172.23.4.12 ovirtnode2.thumbaytechlabs.int
172.23.4.13 ovirtnode3.thumbaytechlabs.int
172.23.4.14 ovirtnode4.thumbaytechlabs.int
172.23.4.4 ovirtengine.thumbaytechlabs.int
vi /etc/exports
Create Exports List (Create Export List for NFS you want to share)
/exports/data 172.23.4.14(rw)
/exports/export *(rw)
/exports/iso
*(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
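The exportfs errors above correspond token for token to the first line of /etc/exports: "Create", "Exports" and "List" are parsed as export entries because that descriptive line is not commented out. A minimal sketch of a valid file with the same paths (the leading "#" on the first line is the only assumed change):

# Export list for NFS shares used by oVirt
/exports/data    172.23.4.14(rw)
/exports/export  *(rw)
/exports/iso     *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)

Running "exportfs -ra" afterwards re-reads the file and should come back without warnings.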
Following are the references I used to set up NFS in oVirt:
https://www.ovirt.org/documentation/admin-guide/chap-Storage/
https://www.ovirt.org/documentation/how-to/troubleshooting/troubleshooting-nfs-storage-issues/
iptables, firewalld and SELinux are disabled for testing, but there is no change
in the exportfs output.
Please support.
Anzar Esmail Sainudeen
Group Datacenter Incharge| IT Infra Division | Thumbay Technologies |Thumbay
Group
P.O Box : 4184 | Ajman | United Arab Emirates.
Mobile: 055-8633699|Tel: 06 7431333 | Extn :1303
Email: <mailto:anzar@it.thumbay.com> anzar(a)it.thumbay.com | Website:
<http://www.thumbay.com/> www.thumbay.com
7 years, 7 months
CentOS 7.4, qemu 2.9 -> Gluster 3.xx?
by Markus Stockhausen
Hi,

given the fact that a current yum update will bring CentOS 7.4 and qemu 2.9
to the nodes, I wonder if a gluster update through the oVirt repos is already close
to release? Not only is 3.8 EOL, but I would also like to minimize the update steps to
a new stable package level.

Best regards.

Markus
7 years, 7 months
Cannot set storage domain on maintenance
by nicolas@devels.es
Hi,
We're running ovirt-engine-4.1.5.2-1 and we're trying to put one of the
storage domains into maintenance in order to remove it, as there's nothing
left on it besides some templates.
To do so, I open the 'Data Centers' tab, then the 'Storage' subtab,
select the storage domain and click on 'Maintenance'. When asked for
confirmation I click OK. After that, this error message is shown:
Error while executing action: Cannot deactivate Storage while there are
running tasks on this Storage.
-Please wait until tasks will finish and try again.
As far as I know, there are no pending tasks on this storage. The Events
tab only shows a movement of a disk which has nothing to do with this
storage domain.
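If the engine still believes a task is running, the engine database can be checked directly. A sketch, assuming shell access on the engine host (table and column names as shipped with 4.1, worth verifying against your schema):

su - postgres -c "psql engine -c 'SELECT task_id, action_type, status FROM async_tasks;'"
# the taskcleaner helper shipped with ovirt-engine can list/clear stale tasks;
# review its options with -h before touching anything:
/usr/share/ovirt-engine/setup/dbutils/taskcleaner.sh -h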
What can I do to solve this?
Thanks.
Nicolás
7 years, 7 months
Hyper converged network setup
by Tailor, Bharat
Hi,
I am trying to deploy a 3-host hyper-converged setup.
I am using CentOS and have installed KVM on all hosts.
Host-1
Hostname - test1.localdomain
eth0 - 192.168.100.15/24
GW - 192.168.100.1
Host-2
Hostname - test2.localdomain
eth0 - 192.168.100.16/24
GW - 192.168.100.1
Host-3
Hostname - test3.localdomain
eth0 - 192.168.100.16/24
GW - 192.168.100.1
I have created two gluster volumes, "engine" and "data", with replica 3.
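For reference, replica-3 volumes like those are typically created along these lines (a sketch; the brick paths here are hypothetical):

gluster volume create engine replica 3 \
  test1.localdomain:/gluster_bricks/engine/brick \
  test2.localdomain:/gluster_bricks/engine/brick \
  test3.localdomain:/gluster_bricks/engine/brick
gluster volume start engine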
I have added FQDN entries in /etc/hosts on all hosts for name resolution.
I want to deploy the oVirt self-hosted engine OVA to manage all the hosts and
production VMs, and the ovirt-engine VM should have HA enabled.
I found multiple docs on the internet for deploying the self-hosted engine OVA,
but I don't know what kind of network configuration I have to do on the CentOS
network cards and in KVM. The KVM docs suggest creating a bridge from the
physical NIC to the virtual NICs, but if I configure a bridge br0 over eth0,
I can't see eth0 in the NIC choice while deploying the ovirt-engine setup.
Kindly help me with the correct configuration of the CentOS hosts, KVM and the
ovirt-engine VM for an HA-enabled DC.
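For what it's worth, a classic Linux bridge on CentOS is configured roughly as below (a sketch using Host-1's addressing; note that hosted-engine deploy normally creates its own ovirtmgmt bridge from the NIC you select, so eth0 is usually left un-bridged before the deploy):

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BRIDGE=br0

# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.100.15
PREFIX=24
GATEWAY=192.168.100.1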
Regards
Bharat Kumar
G15- Vinayak Nagar complex,Opp.Maa Satiya, ayad
Udaipur (Raj.)
313001
Mob: +91-9950-9960-25
7 years, 7 months
Error cloning snapshot
by Maton, Brett
oVirt: Version 4.1.7.1-1.el7.centos
CentOS 7.4
Kernel 3.10.0-693.2.2.el7.x86_64
I'm trying to clone a snapshot in order to export it, but I'm getting the
following errors; any suggestions?
In web-ui:
Uncaught exception occurred. Please try reloading the page. Details:
(TypeError) __gwt$exception: <skipped>: qib(...).e is null
Please have your administrator check the UI logs
ui.log:
2017-09-26 08:59:40,871+01 ERROR
[org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
(default task-41) [] Permutation name: 7DB6BC4D2EE32C078419FA8B25F06F9C
2017-09-26 08:59:40,871+01 ERROR
[org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
(default task-41) [] Uncaught exception:
com.google.gwt.core.client.JavaScriptException: (TypeError)
__gwt$exception: <skipped>: qib(...).e is null
at
org.ovirt.engine.ui.uicommonweb.models.vms.UnitVmModel.$isHostedEngine(UnitVmModel.java:1888)
at
org.ovirt.engine.ui.common.widget.uicommon.popup.AbstractVmPopupWidget.$edit(AbstractVmPopupWidget.java:1515)
at
org.ovirt.engine.ui.common.widget.uicommon.popup.AbstractVmPopupWidget.edit(AbstractVmPopupWidget.java:1515)
at
org.ovirt.engine.ui.common.widget.uicommon.popup.AbstractVmPopupWidget.edit(AbstractVmPopupWidget.java:1515)
at
org.ovirt.engine.ui.common.widget.uicommon.popup.AbstractModeSwitchingPopupWidget.edit(AbstractModeSwitchingPopupWidget.java:78)
at
org.ovirt.engine.ui.common.view.popup.AbstractModelBoundWidgetPopupView.$edit(AbstractModelBoundWidgetPopupView.java:36)
at
org.ovirt.engine.ui.common.view.popup.AbstractModelBoundWidgetPopupView.edit(AbstractModelBoundWidgetPopupView.java:36)
at
org.ovirt.engine.ui.common.presenter.AbstractModelBoundPopupPresenterWidget.$init(AbstractModelBoundPopupPresenterWidget.java:143)
at
org.ovirt.engine.ui.common.widget.popup.AbstractVmBasedPopupPresenterWidget.$init(AbstractVmBasedPopupPresenterWidget.java:68)
at
org.ovirt.engine.ui.common.widget.popup.AbstractVmBasedPopupPresenterWidget.init(AbstractVmBasedPopupPresenterWidget.java:68)
at
org.ovirt.engine.ui.common.widget.popup.AbstractVmBasedPopupPresenterWidget.init(AbstractVmBasedPopupPresenterWidget.java:68)
at
org.ovirt.engine.ui.common.uicommon.model.ModelBoundPopupHandler.$handleWindowModelChange(ModelBoundPopupHandler.java:96)
at
org.ovirt.engine.ui.common.uicommon.model.ModelBoundPopupHandler$1.$eventRaised(ModelBoundPopupHandler.java:77)
at
org.ovirt.engine.ui.common.uicommon.model.ModelBoundPopupHandler$1.eventRaised(ModelBoundPopupHandler.java:77)
at org.ovirt.engine.ui.uicompat.Event.$raise(Event.java:99)
at
org.ovirt.engine.ui.uicommonweb.models.Model.$onPropertyChanged(Model.java:471)
at
org.ovirt.engine.ui.uicommonweb.models.Model.onPropertyChanged(Model.java:471)
at
org.ovirt.engine.ui.uicommonweb.models.Model.$setWindow(Model.java:93)
at
org.ovirt.engine.ui.uicommonweb.models.vms.VmSnapshotListModel.$cloneVM(VmSnapshotListModel.java:703)
at
org.ovirt.engine.ui.uicommonweb.models.vms.VmSnapshotListModel.executeCommand(VmSnapshotListModel.java:912)
at
org.ovirt.engine.ui.uicommonweb.UICommand.$execute(UICommand.java:163)
at
org.ovirt.engine.ui.common.widget.action.UiCommandButtonDefinition.$onClick(UiCommandButtonDefinition.java:130)
at
org.ovirt.engine.ui.common.widget.action.UiCommandButtonDefinition.onClick(UiCommandButtonDefinition.java:130)
at
org.ovirt.engine.ui.common.widget.action.AbstractActionPanel$11.execute(AbstractActionPanel.java:575)
at com.google.gwt.user.client.ui.MenuBar$1.execute(MenuBar.java:893)
[gwt-servlet.jar:]
at
com.google.gwt.core.client.impl.SchedulerImpl.runScheduledTasks(SchedulerImpl.java:164)
[gwt-servlet.jar:]
at
com.google.gwt.core.client.impl.SchedulerImpl.$flushFinallyCommands(SchedulerImpl.java:270)
[gwt-servlet.jar:]
at com.google.gwt.core.client.impl.Impl.exit(Impl.java:378)
[gwt-servlet.jar:]
at com.google.gwt.core.client.impl.Impl.entry0(Impl.java:335)
[gwt-servlet.jar:]
at Unknown.Ix/<(Unknown Source)
at Unknown.anonymous(Unknown Source)
7 years, 7 months
ovirt 4.0 repos
by Edward Clay
Hello, I've recently run into a couple of problems. The one I'm
noticing right now is that when I do a yum check-update I get a
404:
yum clean all
yum check-update
Loaded plugins: fastestmirror, versionlock
base                                              | 3.6 kB  00:00:00
http://mirror.centos.org/centos/7/virt/x86_64/ovirt-4.0/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
To address this issue please refer to the below knowledge base article

https://access.redhat.com/articles/1320623

If above article doesn't help to resolve this issue please create a bug on https://bugs.centos.org/

 One of the configured repositories failed (CentOS-7 - oVirt 4.0),
 and yum doesn't have enough cached data to continue. At this point the only
 safe thing yum can do is fail. There are a few ways to work "fix" this:

     1. Contact the upstream for the repository and get them to fix the problem.

     2. Reconfigure the baseurl/etc. for the repository, to point to a working
        upstream. This is most often useful if you are using a newer
        distribution release than is supported by the repository (and the
        packages for the previous distribution release still work).

     3. Run the command with the repository temporarily disabled
            yum --disablerepo=centos-ovirt40-release ...

     4. Disable the repository permanently, so yum won't use it by default. Yum
        will then just ignore the repository until you permanently enable it
        again or use --enablerepo for temporary usage:

            yum-config-manager --disable centos-ovirt40-release
        or
            subscription-manager repos --disable=centos-ovirt40-release

     5. Configure the failing repository to be skipped, if it is unavailable.
        Note that yum will try to contact the repo. when it runs most commands,
        so will have to try and fail each time (and thus. yum will be be much
        slower). If it is a very temporary problem though, this is often a nice
        compromise:

            yum-config-manager --save --setopt=centos-ovirt40-release.skip_if_unavailable=true

failure: repodata/repomd.xml from centos-ovirt40-release: [Errno 256] No more mirrors to try.
http://mirror.centos.org/centos/7/virt/x86_64/ovirt-4.0/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found

I have the following two files in /etc/yum.repo.d that I believe are used
to point me at the oVirt repos.
-rw-r--r--. 1 root root 1672 Feb  7  2017 ovirt-4.0-dependencies.repo

[ovirt-4.0-epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
includepkgs=epel-release,python-uinput,puppet,python-lockfile,python-cpopen,python-ordereddict,python-pthreading,python-inotify,python-argparse,novnc,python-ply,python-kitchen,python-daemon,python-websockify,livecd-tools,spice-html5,mom,python-IPy,python-ioprocess,ioprocess,safelease,python-paramiko,python2-paramiko,python2-crypto,libtomcrypt,libtommath,python-cheetah,python-ecdsa,python2-ecdsa,python-markdown,rubygem-rgen,ovirt-guest-agent*,userspace-rcu,protobuf-java,objenesis,python34*
gpgcheck=1
gpgkey=https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7

[ovirt-4.0-centos-gluster37]
name=CentOS-7 - Gluster 3.7
baseurl=http://mirror.centos.org/centos/7/storage/$basearch/gluster-3.7/
gpgcheck=1
enabled=1
gpgkey=https://raw.githubusercontent.com/CentOS-Storage-SIG/centos-release-storage-common/master/RPM-GPG-KEY-CentOS-SIG-Storage

[ovirt-4.0-patternfly1-noarch-epel]
name=Copr repo for patternfly1 owned by patternfly
baseurl=http://copr-be.cloud.fedoraproject.org/results/patternfly/patternfly1/epel-7-$basearch/
enabled=1
skip_if_unavailable=1
gpgcheck=0

[virtio-win-stable]
name=virtio-win builds roughly matching what was shipped in latest RHEL
baseurl=http://fedorapeople.org/groups/virt/virtio-win/repo/stable
enabled=1
skip_if_unavailable=1
gpgcheck=0

[centos-ovirt40-release]
name=CentOS-7 - oVirt 4.0
baseurl=http://mirror.centos.org/centos/7/virt/$basearch/ovirt-4.0/
gpgcheck=0
enabled=1

-rw-r--r--. 1 root root 289 Feb  7  2017 ovirt-4.0.repo

[ovirt-4.0]
name=Latest oVirt 4.0 Release
#baseurl=http://resources.ovirt.org/pub/ovirt-4.0/rpm/el$releasever/
mirrorlist=http://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-4.0-el$releasever
enabled=1
skip_if_unavailable=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-ovirt-4.0
Should the 4.0 version/repos still be available, and if so, what's wrong
with my current config?
7 years, 7 months
Engine migration and host import
by Ben Bradley
Hi All
I've been running a single-host ovirt setup for several months, having
previously used a basic QEMU/KVM for a few years in lab environments.
I currently have the ovirt engine running at the bare-metal level, with
the box also acting as the single host. I am also running this with
local storage.
I now have an extra host I can use and would like to migrate to a hosted
engine. The following documentation appears to be perfect and pretty
clear about the steps involved:
https://www.ovirt.org/develop/developer-guide/engine/migrate-to-hosted-en...
and
https://www.ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_...
However I'd like to try and get a bit more of an understanding of the
process that happens behind the scenes during the cut-over from one
engine to a new/hosted engine.
As an experiment I attempted the following:
- created a new VM within my current environment (bare-metal engine)
- created an engine-backup
- stopped the bare-metal engine
- restored the backup into the new VM
- ran engine-setup within the new VM
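(The backup/restore steps above map to commands roughly like the following; a sketch, as exact flags vary between engine versions:)

engine-backup --mode=backup --file=engine.bak --log=backup.log
# on the new VM, before running engine-setup:
engine-backup --mode=restore --file=engine.bak --log=restore.log --provision-db --restore-permissions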
The new engine started up ok and I was able to connect and login to the
web UI. However my host was "unresponsive" and I was unable to manage it
in any way from the VM. I shut the VM down and started the bare-metal
ovirt-engine again on the host and everything worked as before. I didn't
try very hard to make it work however.
The magic missing from the basic process I tried is the synchronising
and importing of the existing host, which is what the hosted-engine
utility does.
Can anyone describe that process in a bit more detail?
Is it possible to perform any part of that process manually?
I'm planning to expand my lab and dev environments so for me it's
important to discover the following...
- That I'm able to reverse the process back to bare-metal engine if I
ever need/want to
- That I can setup a new VM or host with nothing more than an
engine-backup but still be able to regain control of exiting hosts and
VMs within the cluster
My main concern after my basic attempt at a "restore/migration" above is
that I might not be able to re-import/sync an existing host after I have
restored the engine from a backup.
I have been able to export VMs to storage, remove them from ovirt,
re-install engine and restore, then import VMs from the export domain.
That all worked fine. But it involved shutting down all VMs and removing
their definitions from the environment.
Are there any pre-requisites to being able to re-import an existing
running host (and VMs), such as placing ALL hosts into maintenance mode
and shutting down any VMs first?
Any insight into host recovery/import/sync processes and steps will be
greatly appreciated.
Best regards
Ben
7 years, 7 months
Re: [ovirt-users] [ovirt-devel] vdsm vds.dispatcher
by Gary Pedretty
The bug I assumed this was related to is:
Bug 1417708 - ovirt-ha-agent should reuse json-rpc connections
This is a glusterized 4-host self-hosted engine setup. All storage
domains are glusterfs replica 3 volumes on 3 of the 4 hosts that are
part of the cluster.
Centos 3.10.0-514.26.2.el7.x86_64
vdsm-4.18.21-1.el7.centos.x86_64
libvirt-2.0.0-10.el7_3.9.x86_64
glusterfs-3.7.20-1.el7.x86_64
qemu-kvm-tools-ev-2.6.0-28.el7_3.3.1.x86_64
Host and engine are all updated to the latest versions
Here are samples of the messages log, vdsm.log, supervdsm.log and mom.log:
messages
Aug 28 01:20:11 fai-kvm-1 journal: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Aug 28 01:20:16 fai-kvm-1 journal: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Aug 28 01:20:16 fai-kvm-1 journal: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Aug 28 01:20:19 fai-kvm-1 journal: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Aug 28 01:20:22 fai-kvm-1 journal: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Aug 28 01:20:24 fai-kvm-1 journal: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Aug 28 01:20:37 fai-kvm-1 journal: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Aug 28 01:20:41 fai-kvm-1 journal: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Aug 28 01:20:41 fai-kvm-1 journal: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Aug 28 01:20:44 fai-kvm-1 journal: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Aug 28 01:20:47 fai-kvm-1 journal: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Aug 28 01:20:49 fai-kvm-1 journal: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
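Those repeating "unexpected eof" entries line up with the broker reconnect cycle described in the bug above. A quick way to watch the agent and broker alongside them (a sketch using the standard hosted-engine tooling):

systemctl status ovirt-ha-agent ovirt-ha-broker
hosted-engine --vm-status
journalctl -u ovirt-ha-agent -u ovirt-ha-broker --since "01:20"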
vdsm.log
Thread-316519::DEBUG::2017-08-28 =
01:21:13,242::bindingxmlrpc::319::vds::(wrapper) client [::1]
Thread-316519::DEBUG::2017-08-28 =
01:21:13,242::task::597::Storage.TaskManager.Task::(_updateState) =
Task=3D`7c850537-b9e2-4e4b-b346-5fc8ba40f2d1`::moving from state
init -> state preparing
Thread-316519::INFO::2017-08-28 =
01:21:13,242::logUtils::49::dispatcher::(wrapper) Run and protect: =
repoStats(options=3DNone)
Thread-316519::INFO::2017-08-28 =
01:21:13,243::logUtils::52::dispatcher::(wrapper) Run and protect: =
repoStats, Return
response: {u'5e39db25-561f-490a-81b6-46a7225f02b6': {'code': 0, =
'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000833501', =
'lastCheck': '1.1', 'valid': True}, =
u'403df7fb-6cff-46e1-8e04-41213fbc0e6e': {'code': 0, 'actual': True, =
'version': 3, 'acquired': False, 'delay': '0.000893509', 'lastCheck': =
'1.1', 'valid': True}, u'cf2f4c75-966b-4f69-90a2-d8e7d21fc052': {'code': =
0, 'actual': True, 'version':
3, 'acquired': True, 'delay': '0.000776587', 'lastCheck': '1.0', =
'valid': True}, u'6654cb0c-3e57-42ca-b996-3660a3d32a43': {'code': 0, =
'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000513211', =
'lastCheck': '1.1', 'valid': True}, =
u'2bf8f623-1d82-444e-ae16-d32e5fe4dc9e': {'code': 0, 'actual': True, =
'version': 3, 'acquired': True, 'delay': '0.000493711', 'lastCheck': =
'1.3', 'valid':
True}, u'feaebec7-aae3-45b5-9684-1feedede7bec': {'code': 0, 'actual': =
True, 'version': 0, 'acquired': True, 'delay':
'0.000512272', 'lastCheck': '1.1', 'valid': True}}
Thread-316519::DEBUG::2017-08-28 =
01:21:13,243::task::1193::Storage.TaskManager.Task::(prepare) =
Task=3D`7c850537-b9e2-4e4b-b346-5fc8ba40f2d1`::finished: =
{u'5e39db25-561f-490a-81b6-46a7225f02b6': {'code':
0, 'actual': True, 'version': 0,
'acquired': True, 'delay': '0.000833501', 'lastCheck': '1.1', 'valid': =
True}, u'403df7fb-6cff-46e1-8e04-41213fbc0e6e': {'code': 0, 'actual': =
True, 'version': 3, 'acquired': False, 'delay': '0.000893509', =
'lastCheck': '1.1', 'valid': True}, =
u'cf2f4c75-966b-4f69-90a2-d8e7d21fc052': {'code': 0, 'actual': True, =
'version': 3, 'acquired': True, 'delay': '0.000776587', 'lastCheck': =
'1.0', 'valid': True}, u'6654cb0c-3e57-42ca-b996-3660a3d32a43': {'code': =
0, 'actual': True, 'version': 3, 'acquired': True, 'delay': =
'0.000513211', 'lastCheck': '1.1', 'valid': True}, =
u'2bf8f623-1d82-444e-ae16-d32e5fe4dc9e':
{'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay': =
'0.000493711', 'lastCheck': '1.3', 'valid': True}, =
u'feaebec7-aae3-45b5-9684-1feedede7bec':
{'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': =
'0.000512272', 'lastCheck': '1.1', 'valid': True}}
Thread-316519::DEBUG::2017-08-28 =
01:21:13,243::task::597::Storage.TaskManager.Task::(_updateState) =
Task=3D`7c850537-b9e2-4e4b-b346-5fc8ba40f2d1`::moving from state
preparing -> state finished
Thread-316519::DEBUG::2017-08-28 =
01:21:13,243::resourceManager::952::Storage.ResourceManager.Owner::(releas=
eAll) Owner.releaseAll requests {} resources {}
Thread-316519::DEBUG::2017-08-28 =
01:21:13,243::resourceManager::989::Storage.ResourceManager.Owner::(cancel=
All) Owner.cancelAll requests {}
Thread-316519::DEBUG::2017-08-28 =
01:21:13,243::task::995::Storage.TaskManager.Task::(_decref) =
Task=3D`7c850537-b9e2-4e4b-b346-5fc8ba40f2d1`::ref 0 aborting False
Thread-316519::INFO::2017-08-28 =
01:21:13,243::bindingxmlrpc::331::vds::(wrapper) RPC call =
storageRepoGetStats finished (code=3D0) in 0.00 seconds
Thread-316519::INFO::2017-08-28 =
01:21:13,246::xmlrpc::91::vds.XMLRPCServer::(_process_requests) Request =
handler for
::1:51992 stopped
Thread-68::DEBUG::2017-08-28 01:21:14,093::fileSD::159::Storage.StorageDomainManifest::(__init__) Reading domain in path /rhev/data-center/mnt/glusterSD/glustermount:data2/2bf8f623-1d82-444e-ae16-d32e5fe4dc9e
Thread-68::DEBUG::2017-08-28 01:21:14,093::persistent::194::Storage.PersistentDict::(__init__) Created a persistent dict with FileMetadataRW backend
Thread-68::DEBUG::2017-08-28 01:21:14,096::persistent::236::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=data2', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'MASTER_VERSION=10', 'POOL_DESCRIPTION=Default', 'POOL_DOMAINS=cf2f4c75-966b-4f69-90a2-d8e7d21fc052:Active,5e39db25-561f-490a-81b6-46a7225f02b6:Active,95d4ed8a-9184-4863-84d5-af2bedd690da:Attached,3e04b3ad-c46c-4903-a7b2-b9af681318d9:Attached,bd1f0364-0d3a-44fa-842a-5d339caac412:Attached,403df7fb-6cff-46e1-8e04-41213fbc0e6e:Active,2bf8f623-1d82-444e-ae16-d32e5fe4dc9e:Active,6654cb0c-3e57-42ca-b996-3660a3d32a43:Active,88886a7d-f4cc-45a6-901e-89999aa35d78:Attached,feaebec7-aae3-45b5-9684-1feedede7bec:Active', 'POOL_SPM_ID=-1', 'POOL_SPM_LVER=-1', 'POOL_UUID=5990e442-0395-0118-005c-0000000003a1', 'REMOTE_PATH=glustermount:data2', 'ROLE=Master', 'SDUUID=2bf8f623-1d82-444e-ae16-d32e5fe4dc9e', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=65b61dc4f10d80c67c6133ffaee280919458a06a']
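
The POOL_DOMAINS value in that metadata is a single comma-separated field of <uuid>:<status> pairs, six Active and four Attached here. A minimal sketch that splits it out (the string is shortened to two of the ten pairs above):

# split the POOL_DOMAINS metadata field into (uuid, status) pairs
pool_domains = ('403df7fb-6cff-46e1-8e04-41213fbc0e6e:Active,'
                '95d4ed8a-9184-4863-84d5-af2bedd690da:Attached')
for entry in pool_domains.split(','):
    sd_uuid, status = entry.rsplit(':', 1)  # rsplit: the UUID itself has no ':'
    print('%s %s' % (sd_uuid, status))
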
Thread-68::DEBUG::2017-08-28 01:21:14,105::fileSD::679::Storage.StorageDomain::(imageGarbageCollector) Removing remnants of deleted images []
Thread-68::INFO::2017-08-28 01:21:14,105::sd::604::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace 2bf8f623-1d82-444e-ae16-d32e5fe4dc9e_imageNS already registered
Thread-68::INFO::2017-08-28 01:21:14,105::sd::612::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace 2bf8f623-1d82-444e-ae16-d32e5fe4dc9e_volumeNS already registered
jsonrpc.Executor/4::DEBUG::2017-08-28 01:21:15,244::__init__::530::jsonrpc.JsonRpcServer::(_handle_request) Calling 'Host.getStats' in bridge with {}
jsonrpc.Executor/4::DEBUG::2017-08-28 01:21:15,245::task::597::Storage.TaskManager.Task::(_updateState) Task=`5d8c4aff-86cb-447c-a833-e27e5c23c54d`::moving from state init -> state preparing
jsonrpc.Executor/4::INFO::2017-08-28 01:21:15,245::logUtils::49::dispatcher::(wrapper) Run and protect: repoStats(options=None)
jsonrpc.Executor/4::INFO::2017-08-28 01:21:15,245::logUtils::52::dispatcher::(wrapper) Run and protect: repoStats, Return response: {u'5e39db25-561f-490a-81b6-46a7225f02b6': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000833501', 'lastCheck': '3.1', 'valid': True}, u'403df7fb-6cff-46e1-8e04-41213fbc0e6e': {'code': 0, 'actual': True, 'version': 3, 'acquired': False, 'delay': '0.000893509', 'lastCheck': '3.1', 'valid': True}, u'cf2f4c75-966b-4f69-90a2-d8e7d21fc052': {'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000776587', 'lastCheck': '3.0', 'valid': True}, u'6654cb0c-3e57-42ca-b996-3660a3d32a43': {'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000513211', 'lastCheck': '1.3', 'valid': True}, u'2bf8f623-1d82-444e-ae16-d32e5fe4dc9e': {'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000493711', 'lastCheck': '1.1', 'valid': True}, u'feaebec7-aae3-45b5-9684-1feedede7bec': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000512272', 'lastCheck': '3.1', 'valid': True}}
jsonrpc.Executor/4::DEBUG::2017-08-28 01:21:15,246::task::1193::Storage.TaskManager.Task::(prepare) Task=`5d8c4aff-86cb-447c-a833-e27e5c23c54d`::finished: {u'5e39db25-561f-490a-81b6-46a7225f02b6': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000833501', 'lastCheck': '3.1', 'valid': True}, u'403df7fb-6cff-46e1-8e04-41213fbc0e6e': {'code': 0, 'actual': True, 'version': 3, 'acquired': False, 'delay': '0.000893509', 'lastCheck': '3.1', 'valid': True}, u'cf2f4c75-966b-4f69-90a2-d8e7d21fc052': {'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000776587', 'lastCheck': '3.0', 'valid': True}, u'6654cb0c-3e57-42ca-b996-3660a3d32a43': {'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000513211', 'lastCheck': '1.3', 'valid': True}, u'2bf8f623-1d82-444e-ae16-d32e5fe4dc9e': {'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000493711', 'lastCheck': '1.1', 'valid': True}, u'feaebec7-aae3-45b5-9684-1feedede7bec': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000512272', 'lastCheck': '3.1', 'valid': True}}
jsonrpc.Executor/4::DEBUG::2017-08-28 01:21:15,246::task::597::Storage.TaskManager.Task::(_updateState) Task=`5d8c4aff-86cb-447c-a833-e27e5c23c54d`::moving from state preparing -> state finished
jsonrpc.Executor/4::DEBUG::2017-08-28 01:21:15,246::resourceManager::952::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
jsonrpc.Executor/4::DEBUG::2017-08-28 01:21:15,246::resourceManager::989::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
jsonrpc.Executor/4::DEBUG::2017-08-28 01:21:15,246::task::995::Storage.TaskManager.Task::(_decref) Task=`5d8c4aff-86cb-447c-a833-e27e5c23c54d`::ref 0 aborting False
Reactor thread::INFO::2017-08-28 01:21:15,291::protocoldetector::76::ProtocolDetector.AcceptorImpl::(handle_accept) Accepted connection from ::1:51994
Reactor thread::DEBUG::2017-08-28 01:21:15,299::protocoldetector::92::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2017-08-28 01:21:15,301::protocoldetector::128::ProtocolDetector.Detector::(handle_read) Detected protocol stomp from ::1:51994
Reactor thread::INFO::2017-08-28 01:21:15,301::stompreactor::101::Broker.StompAdapter::(_cmd_connect) Processing CONNECT request
Reactor thread::DEBUG::2017-08-28 01:21:15,301::stompreactor::492::protocoldetector.StompDetector::(handle_socket) Stomp detected from ('::1', 51994)
JsonRpc (StompReactor)::INFO::2017-08-28 01:21:15,313::stompreactor::128::Broker.StompAdapter::(_cmd_subscribe) Subscribe command received
jsonrpc.Executor/4::DEBUG::2017-08-28 01:21:15,356::__init__::555::jsonrpc.JsonRpcServer::(_handle_request) Return 'Host.getStats' in bridge with {'cpuStatistics': {'1': {'cpuUser': '2.27', 'nodeIndex': 0, 'cpuSys': '1.80', 'cpuIdle': '95.93'}, '0': {'cpuUser': '0.67', 'nodeIndex': 0, 'cpuSys': '1.13', 'cpuIdle': '98.20'}, '3': {'cpuUser': '5.67', 'nodeIndex': 0, 'cpuSys': '0.60', 'cpuIdle': '93.73'}, '2': {'cpuUser': '0.93', 'nodeIndex': 0, 'cpuSys': '0.53', 'cpuIdle': '98.54'}, '5': {'cpuUser': '0.87', 'nodeIndex': 0, 'cpuSys': '1.40', 'cpuIdle': '97.73'}, '4': {'cpuUser': '1.27', 'nodeIndex': 0, 'cpuSys': '0.53', 'cpuIdle': '98.20'}, '7': {'cpuUser': '0.27', 'nodeIndex': 0, 'cpuSys': '0.47', 'cpuIdle': '99.26'}, '6': {'cpuUser': '2.07', 'nodeIndex': 0, 'cpuSys': '0.53', 'cpuIdle': '97.40'}}, 'numaNodeMemFree': {'0': {'memPercent': 98, 'memFree': '335'}}, 'memShared': 0, 'haScore': 3400, 'thpState': 'always', 'ksmMergeAcrossNodes': True, 'rxRate': '0.04', 'vmCount': 1, 'memUsed': '41', 'storageDomains': {u'403df7fb-6cff-46e1-8e04-41213fbc0e6e': {'code': 0, 'actual': True, 'version': 3, 'acquired': False, 'delay': '0.000893509', 'lastCheck': '3.1', 'valid': True}, u'cf2f4c75-966b-4f69-90a2-d8e7d21fc052': {'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000776587', 'lastCheck': '3.0', 'valid': True}, u'6654cb0c-3e57-42ca-b996-3660a3d32a43': {'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000513211', 'lastCheck': '1.3', 'valid': True}, u'5e39db25-561f-490a-81b6-46a7225f02b6': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000833501', 'lastCheck': '3.1', 'valid': True}, u'2bf8f623-1d82-444e-ae16-d32e5fe4dc9e': {'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000493711', 'lastCheck': '1.1', 'valid': True}, u'feaebec7-aae3-45b5-9684-1feedede7bec': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000512272', 'lastCheck': '3.1', 'valid': True}}, 'incomingVmMigrations': 0, 'network': {'enp2s0': {'rxErrors': '0', 'txRate': '0.2', 'rxRate': '0.1', 'txErrors': '0', 'speed': '1000', 'rxDropped': '18453', 'name': 'enp2s0', 'tx': '884509425955', 'txDropped': '0', 'sampleTime': 1503912069.554311, 'rx': '134148067121', 'state': 'up'}, 'eno1': {'rxErrors': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'speed': '1000', 'rxDropped': '0', 'name': 'eno1', 'tx': '54488653089', 'txDropped': '0', 'sampleTime': 1503912069.554311, 'rx': '46979728698', 'state': 'up'}, ';vdsmdummy;': {'rxErrors': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'speed': '1000', 'rxDropped': '0', 'name': ';vdsmdummy;', 'tx': '0', 'txDropped': '0', 'sampleTime': 1503912069.554311, 'rx': '0', 'state': 'down'}, 'enp2s0.4': {'rxErrors': '0', 'txRate': '0.2', 'rxRate': '0.1', 'txErrors': '0', 'speed': '1000', 'rxDropped': '0', 'name': 'enp2s0.4', 'tx': '843385870486', 'txDropped': '0', 'sampleTime': 1503912069.554311, 'rx': '124132253793', 'state': 'up'}, 'lo': {'rxErrors': '0', 'txRate': '0.3', 'rxRate': '0.3', 'txErrors': '0', 'speed': '1000', 'rxDropped': '0', 'name': 'lo', 'tx': '444179233658', 'txDropped': '0', 'sampleTime': 1503912069.554311, 'rx': '444179233658', 'state': 'up'}, 'enp4s0': {'rxErrors': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'speed': '1000', 'rxDropped': '14290', 'name': 'enp4s0', 'tx': '9009494', 'txDropped': '0', 'sampleTime': 1503912069.554311, 'rx': '187852658', 'state': 'up'}, 'enp4s0.100': {'rxErrors': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'speed': '1000', 'rxDropped': '0', 'name': 'enp4s0.100', 'tx': '8798732', 'txDropped': '0', 'sampleTime': 1503912069.554311, 'rx': '95777939', 'state': 'up'}, 'enp2s0.2': {'rxErrors': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'speed': '1000', 'rxDropped': '0', 'name': 'enp2s0.2', 'tx': '1156', 'txDropped': '0', 'sampleTime': 1503912069.554311, 'rx': '66329343', 'state': 'up'}, 'enp2s0.3': {'rxErrors': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'speed': '1000', 'rxDropped': '0', 'name': 'enp2s0.3', 'tx': '1156', 'txDropped': '0', 'sampleTime': 1503912069.554311, 'rx': '5930851', 'state': 'up'}, 'vnet0': {'rxErrors': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'speed': '1000', 'rxDropped': '0', 'name': 'vnet0', 'tx': '97194738', 'txDropped': '0', 'sampleTime': 1503912069.554311, 'rx': '8798642', 'state': 'up'}, 'ovirtmgmt': {'rxErrors': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'speed': '1000', 'rxDropped': '2', 'name': 'ovirtmgmt', 'tx': '52893439231', 'txDropped': '0', 'sampleTime': 1503912069.554311, 'rx': '43528048077', 'state': 'up'}, 'ovirt-3': {'rxErrors': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'speed': '1000', 'rxDropped': '2', 'name': 'ovirt-3', 'tx': '578', 'txDropped': '0', 'sampleTime': 1503912069.554311, 'rx': '4621679', 'state': 'up'}, 'ovirt-4': {'rxErrors': '0', 'txRate': '0.2', 'rxRate': '0.1', 'txErrors': '0', 'speed': '1000', 'rxDropped': '0', 'name': 'ovirt-4', 'tx': '843385869908', 'txDropped': '0', 'sampleTime': 1503912069.554311, 'rx': '124132039409', 'state': 'up'}, 'ovirt-2': {'rxErrors': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'speed': '1000', 'rxDropped': '0', 'name': 'ovirt-2', 'tx': '578', 'txDropped': '0', 'sampleTime': 1503912069.554311, 'rx': '19462774', 'state': 'up'}, 'dmz': {'rxErrors': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'speed': '1000', 'rxDropped': '4', 'name': 'dmz', 'tx': '578', 'txDropped': '0', 'sampleTime': 1503912069.554311, 'rx': '31428052', 'state': 'up'}}, 'txDropped': '0', 'cpuUser': '1.75', 'ksmPages': 1250, 'elapsedTime': '127838.51', 'cpuLoad': '2.39', 'cpuSys': '0.86', 'diskStats': {'/var/log': {'free': '47378'}, '/var/log/core': {'free': '47378'}, '/var/run/vdsm/': {'free': '7868'}, '/tmp': {'free': '47378'}}, 'cpuUserVdsmd': '2.13', 'netConfigDirty': 'False', 'memCommitted': 2113, 'ksmState': False, 'vmMigrating': 0, 'ksmCpu': 0, 'memAvailable': 9216, 'txRate': '0.06', 'bootTime': '1503727419', 'haStats': {'active': True, 'configured': True, 'score': 3400, 'localMaintenance': False, 'globalMaintenance': False}, 'momStatus': 'active', 'rxDropped': '32751', 'outgoingVmMigrations': 0, 'swapTotal': 7999, 'swapFree': 7842, 'dateTime': '2017-08-28T09:21:15 GMT', 'anonHugePages': '2280', 'memFree': 9455, 'cpuIdle': '97.39', 'vmActive': 1, 'v2vJobs': {}, 'cpuSysVdsmd': '0.87'}
jsonrpc.Executor/4::INFO::2017-08-28 01:21:15,357::__init__::513::jsonrpc.JsonRpcServer::(_serveRequest) RPC call Host.getStats succeeded in 0.12 seconds
jsonrpc.Executor/6::DEBUG::2017-08-28 01:21:15,580::__init__::530::jsonrpc.JsonRpcServer::(_handle_request) Calling 'Host.getHardwareInfo' in bridge with {}
jsonrpc.Executor/6::DEBUG::2017-08-28 01:21:15,581::__init__::555::jsonrpc.JsonRpcServer::(_handle_request) Return 'Host.getHardwareInfo' in bridge with {'systemProductName': '', 'systemUUID': '20DC6410-4F9F-DF11-9A25-7071BC772964', 'systemSerialNumber': '', 'systemVersion': '', 'systemManufacturer': ''}
jsonrpc.Executor/6::INFO::2017-08-28 01:21:15,581::__init__::513::jsonrpc.JsonRpcServer::(_serveRequest) RPC call Host.getHardwareInfo succeeded in 0.00 seconds
JsonRpc (StompReactor)::ERROR::2017-08-28 01:21:15,583::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: unexpected eof
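
That recv error is logged when the peer, here a local client on ::1, drops the connection without a clean TLS shutdown (no close_notify), consistent with short-lived client connections such as the per-poll connections tracked in Bug 1417708. To gauge how often it fires, a minimal sketch that tallies the warnings per minute straight from vdsm.log (log path assumed; the timestamp layout is copied from the lines above):

# count "unexpected eof" warnings per minute in vdsm.log
import collections
import re

pat = re.compile(r'::ERROR::(\d{4}-\d{2}-\d{2} \d{2}:\d{2}):\d{2},\d+::'
                 r'.*SSL error during reading data: unexpected eof')
per_minute = collections.Counter()
with open('/var/log/vdsm/vdsm.log') as log:
    for line in log:
        m = pat.search(line)
        if m:
            per_minute[m.group(1)] += 1
for minute in sorted(per_minute):
    print('%s  %d' % (minute, per_minute[minute]))
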
Reactor thread::INFO::2017-08-28 01:21:16,325::protocoldetector::76::ProtocolDetector.AcceptorImpl::(handle_accept) Accepted connection from ::1:51996
Reactor thread::DEBUG::2017-08-28 01:21:16,332::protocoldetector::92::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2017-08-28 01:21:16,332::protocoldetector::128::ProtocolDetector.Detector::(handle_read) Detected protocol xml from ::1:51996
Reactor thread::DEBUG::2017-08-28 01:21:16,332::bindingxmlrpc::1307::XmlDetector::(handle_socket) xml over http detected from ('::1', 51996)
BindingXMLRPC::INFO::2017-08-28 01:21:16,332::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for ::1:51996
supervdsm.log
MainProcess|Thread-316590::DEBUG::2017-08-28 01:21:56,517::commands::68::root::(execCmd) /usr/bin/taskset --cpu-list 0-7 /usr/sbin/tc class show dev eno1 classid 0:1388 (cwd None)
MainProcess|Thread-316590::DEBUG::2017-08-28 01:21:56,545::commands::86::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|Thread-316590::DEBUG::2017-08-28 01:21:56,545::commands::68::root::(execCmd) /usr/bin/taskset --cpu-list 0-7 /usr/sbin/tc class show dev enp2s0 classid 0:4 (cwd None)
MainProcess|Thread-316590::DEBUG::2017-08-28 01:21:56,572::commands::86::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|Thread-316590::DEBUG::2017-08-28 01:21:56,573::commands::68::root::(execCmd) /usr/bin/taskset --cpu-list 0-7 /usr/sbin/tc class show dev enp2s0 classid 0:2 (cwd None)
MainProcess|Thread-316590::DEBUG::2017-08-28 01:21:56,600::commands::86::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|Thread-316590::DEBUG::2017-08-28 01:21:56,600::commands::68::root::(execCmd) /usr/bin/taskset --cpu-list 0-7 /usr/sbin/tc class show dev enp4s0 classid 0:64 (cwd None)
MainProcess|Thread-316590::DEBUG::2017-08-28 01:21:56,628::commands::86::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|Thread-316590::DEBUG::2017-08-28 01:21:56,628::commands::68::root::(execCmd) /usr/bin/taskset --cpu-list 0-7 /usr/sbin/tc class show dev enp2s0 classid 0:3 (cwd None)
MainProcess|Thread-316590::DEBUG::2017-08-28 01:21:56,689::commands::86::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|Thread-316590::DEBUG::2017-08-28 01:21:56,690::vsctl::57::root::(commit) Executing commands: /usr/bin/ovs-vsctl --oneline --format=json -- list Bridge -- list Port -- list Interface
MainProcess|Thread-316590::DEBUG::2017-08-28 01:21:56,690::commands::68::root::(execCmd) /usr/bin/taskset --cpu-list 0-7 /usr/bin/ovs-vsctl --oneline --format=json -- list Bridge -- list Port -- list Interface (cwd None)
MainProcess|Thread-316590::DEBUG::2017-08-28 01:21:56,720::commands::86::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
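
supervdsm runs each of those helpers pinned to the host CPU set with taskset and logs stderr plus the return code. A minimal sketch of the same pattern (the command is copied verbatim from the first line above; the 0-7 CPU list matches this 8-core host):

# run a helper the way supervdsm's execCmd does: pinned with taskset,
# stderr and return code captured for the SUCCESS/FAILED log line
import subprocess

cmd = ['/usr/bin/taskset', '--cpu-list', '0-7',
       '/usr/sbin/tc', 'class', 'show', 'dev', 'eno1', 'classid', '0:1388']
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
print('SUCCESS: <err> = %r; <rc> = %r' % (err, proc.returncode))
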
MainProcess|Thread-316590::DEBUG::2017-08-28 01:21:56,724::supervdsmServer::99::SuperVdsm.ServerCallback::(wrapper) return network_caps with {'bridges': {'ovirtmgmt': {'ipv6autoconf': False, 'addr': '10.9.2.61', 'cfg': {'IPV6INIT': 'no', 'DEFROUTE': 'yes', 'DNS1': '127.0.0.1', 'IPADDR': '10.9.2.61', 'GATEWAY': '10.9.2.1', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'STP': 'off', 'DNS2': '209.193.4.7', 'DEVICE': 'ovirtmgmt', 'MTU': '1500', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'gateway': '10.9.2.1', 'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6': False, 'stp': 'off', 'ipv4addrs': ['10.9.2.61/24'], 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['eno1'], 'opts': {'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '1000', 'group_fwd_mask': '0x0', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3125', 'hello_timer': '104', 'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max': '512', 'stp_state': '0', 'topology_change_detected': '0', 'priority': '32768', 'multicast_membership_interval': '26000', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'nf_call_iptables': '0', 'topology_change': '0', 'hello_time': '200', 'root_id': '8000.7071bc772964', 'bridge_id': '8000.7071bc772964', 'topology_change_timer': '0', 'ageing_time': '30000', 'nf_call_ip6tables': '0', 'gc_timer': '16704', 'nf_call_arptables': '0', 'group_addr': '1:80:c2:0:0:0', 'multicast_last_member_interval': '100', 'default_pvid': '1', 'multicast_query_interval': '12500', 'tcn_timer': '0', 'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': '0'}}, 'ovirt-4': {'ipv6autoconf': False, 'addr': '10.9.3.61', 'cfg': {'IPV6INIT': 'no', 'DEFROUTE': 'no', 'IPADDR': '10.9.3.61', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'ovirt-4', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'gateway': '10.9.3.1', 'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6': False, 'stp': 'off', 'ipv4addrs': ['10.9.3.61/24'], 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['enp2s0.4'], 'opts': {'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '1000', 'group_fwd_mask': '0x0', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3125', 'hello_timer': '102', 'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max': '512', 'stp_state': '0', 'topology_change_detected': '0', 'priority': '32768', 'multicast_membership_interval': '26000', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'nf_call_iptables': '0', 'topology_change': '0', 'hello_time': '200', 'root_id': '8000.6805ca4606af', 'bridge_id': '8000.6805ca4606af', 'topology_change_timer': '0', 'ageing_time': '30000', 'nf_call_ip6tables': '0', 'gc_timer': '8307', 'nf_call_arptables': '0', 'group_addr': '1:80:c2:0:0:0', 'multicast_last_member_interval': '100', 'default_pvid': '1', 'multicast_query_interval': '12500', 'tcn_timer': '0', 'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': '0'}}, 'dmz': {'ipv6autoconf': False, 'addr': '', 'cfg': {'IPV6INIT': 'no', 'DEFROUTE': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'dmz', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'gateway': '', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'stp': 'off', 'ipv4addrs': [], 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['vnet0', 'enp4s0.100'], 'opts': {'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '1000', 'group_fwd_mask': '0x0', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3125', 'hello_timer': '103', 'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max': '512', 'stp_state': '0', 'topology_change_detected': '0', 'priority': '32768', 'multicast_membership_interval': '26000', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'nf_call_iptables': '0', 'topology_change': '0', 'hello_time': '200', 'root_id': '8000.001b212f794d', 'bridge_id': '8000.001b212f794d', 'topology_change_timer': '0', 'ageing_time': '30000', 'nf_call_ip6tables': '0', 'gc_timer': '7181', 'nf_call_arptables': '0', 'group_addr': '1:80:c2:0:0:0', 'multicast_last_member_interval': '100', 'default_pvid': '1', 'multicast_query_interval': '12500', 'tcn_timer': '0', 'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': '0'}}, 'ovirt-2': {'ipv6autoconf': False, 'addr': '', 'cfg': {'IPV6INIT': 'no', 'DEFROUTE': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'ovirt-2', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'gateway': '', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'stp': 'off', 'ipv4addrs': [], 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['enp2s0.2'], 'opts': {'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '1000', 'group_fwd_mask': '0x0', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3125', 'hello_timer': '2', 'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max': '512', 'stp_state': '0', 'topology_change_detected': '0', 'priority': '32768', 'multicast_membership_interval': '26000', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'nf_call_iptables': '0', 'topology_change': '0', 'hello_time': '200', 'root_id': '8000.6805ca4606af', 'bridge_id': '8000.6805ca4606af', 'topology_change_timer': '0', 'ageing_time': '30000', 'nf_call_ip6tables': '0', 'gc_timer': '320', 'nf_call_arptables': '0', 'group_addr': '1:80:c2:0:0:0', 'multicast_last_member_interval': '100', 'default_pvid': '1', 'multicast_query_interval': '12500', 'tcn_timer': '0', 'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': '0'}}, 'ovirt-3': {'ipv6autoconf': False, 'addr': '', 'cfg': {'IPV6INIT': 'no', 'DEFROUTE': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'ovirt-3', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'gateway': '', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'stp': 'off', 'ipv4addrs': [], 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['enp2s0.3'], 'opts': {'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '1000', 'group_fwd_mask': '0x0', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3125', 'hello_timer': '1', 'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max': '512', 'stp_state': '0', 'topology_change_detected': '0', 'priority': '32768', 'multicast_membership_interval': '26000', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'nf_call_iptables': '0', 'topology_change': '0', 'hello_time': '200', 'root_id': '8000.6805ca4606af', 'bridge_id': '8000.6805ca4606af', 'topology_change_timer': '0', 'ageing_time': '30000', 'nf_call_ip6tables': '0', 'gc_timer': '14656', 'nf_call_arptables': '0', 'group_addr': '1:80:c2:0:0:0', 'multicast_last_member_interval': '100', 'default_pvid': '1', 'multicast_query_interval': '12500', 'tcn_timer': '0', 'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': '0'}}}, 'bondings': {}, 'nameservers': ['127.0.0.1', '209.193.4.7', '209.112.128.2'], 'nics': {'enp2s0': {'ipv6gateway': '::', 'ipv6autoconf': False, 'addr': '', 'cfg': {'IPV6INIT': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'DEVICE': 'enp2s0', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'hwaddr': '68:05:ca:46:06:af', 'speed': 1000, 'gateway': ''}, 'eno1': {'ipv6gateway': '::', 'ipv6autoconf': False, 'addr': '', 'cfg': {'BRIDGE': 'ovirtmgmt', 'IPV6INIT': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'DEVICE': 'eno1', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'hwaddr': '70:71:bc:77:29:64', 'speed': 1000, 'gateway': ''}, 'enp4s0': {'ipv6gateway': '::', 'ipv6autoconf': False, 'addr': '', 'cfg': {'IPV6INIT': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'DEVICE': 'enp4s0', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'hwaddr': '00:1b:21:2f:79:4d', 'speed': 1000, 'gateway': ''}}, 'supportsIPv6': True, 'vlans': {'enp2s0.4': {'iface': 'enp2s0', 'ipv6autoconf': False, 'addr': '', 'cfg': {'BRIDGE': 'ovirt-4', 'IPV6INIT': 'no', 'VLAN': 'yes', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'DEVICE': 'enp2s0.4', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'vlanid': 4, 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': ''}, 'enp4s0.100': {'iface': 'enp4s0', 'ipv6autoconf': False, 'addr': '', 'cfg': {'BRIDGE': 'dmz', 'IPV6INIT': 'no', 'VLAN': 'yes', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'DEVICE': 'enp4s0.100', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'vlanid': 100, 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': ''}, 'enp2s0.2': {'iface': 'enp2s0', 'ipv6autoconf': False, 'addr': '', 'cfg': {'BRIDGE': 'ovirt-2', 'IPV6INIT': 'no', 'VLAN': 'yes', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'DEVICE': 'enp2s0.2', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'vlanid': 2, 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': ''}, 'enp2s0.3': {'iface': 'enp2s0', 'ipv6autoconf': False, 'addr': '', 'cfg': {'BRIDGE': 'ovirt-3', 'IPV6INIT': 'no', 'VLAN': 'yes', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'DEVICE': 'enp2s0.3', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'vlanid': 3, 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': ''}}, 'networks': {'ovirtmgmt': {'iface': 'ovirtmgmt', 'ipv6autoconf': False, 'addr': '10.9.2.61', 'cfg': {'IPV6INIT': 'no', 'DEFROUTE': 'yes', 'DNS1': '127.0.0.1', 'IPADDR': '10.9.2.61', 'GATEWAY': '10.9.2.1', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'STP': 'off', 'DNS2': '209.193.4.7', 'DEVICE': 'ovirtmgmt', 'MTU': '1500', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'bridged': True, 'ipv6addrs': [], 'switch': 'legacy', 'gateway': '10.9.2.1', 'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6': False, 'stp': 'off', 'ipv4addrs': ['10.9.2.61/24'], 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['eno1']}, 'ovirt-4': {'iface': 'ovirt-4', 'ipv6autoconf': False, 'addr': '10.9.3.61', 'cfg': {'IPV6INIT': 'no', 'DEFROUTE': 'no', 'IPADDR': '10.9.3.61', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'ovirt-4', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'bridged': True, 'ipv6addrs': [], 'switch': 'legacy', 'gateway': '10.9.3.1', 'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6': False, 'stp': 'off', 'ipv4addrs': ['10.9.3.61/24'], 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['enp2s0.4']}, 'ovirt-2': {'iface': 'ovirt-2', 'ipv6autoconf': False, 'addr': '', 'cfg': {'IPV6INIT': 'no', 'DEFROUTE': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'ovirt-2', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'bridged': True, 'ipv6addrs': [], 'switch': 'legacy', 'gateway': '', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'stp': 'off', 'ipv4addrs': [], 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['enp2s0.2']}, 'dmz': {'iface': 'dmz', 'ipv6autoconf': False, 'addr': '', 'cfg': {'IPV6INIT': 'no', 'DEFROUTE': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'dmz', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'bridged': True, 'ipv6addrs': [], 'switch': 'legacy', 'gateway': '', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'stp': 'off', 'ipv4addrs': [], 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['vnet0', 'enp4s0.100']}, 'ovirt-3': {'iface': 'ovirt-3', 'ipv6autoconf': False, 'addr': '', 'cfg': {'IPV6INIT': 'no', 'DEFROUTE': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'ovirt-3', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'bridged': True, 'ipv6addrs': [], 'switch': 'legacy', 'gateway': '', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'stp': 'off', 'ipv4addrs': [], 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['enp2s0.3']}}}
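
The network_caps blob is easier to read once reduced to one line per network; note that every network here is on the legacy switch ('switch': 'legacy'). A minimal sketch (the caps stub copies two of the five networks from the dump above):

# one-line summary per network: switch type, address, bridge ports
caps = {'networks': {
    'ovirtmgmt': {'switch': 'legacy', 'addr': '10.9.2.61', 'ports': ['eno1']},
    'dmz': {'switch': 'legacy', 'addr': '', 'ports': ['vnet0', 'enp4s0.100']},
}}
for name, net in sorted(caps['networks'].items()):
    print('%-10s switch=%-7s addr=%-10s ports=%s'
          % (name, net['switch'], net['addr'] or '-', ','.join(net['ports'])))
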
MainProcess|jsonrpc.Executor/3::DEBUG::2017-08-28 01:21:58,027::supervdsmServer::92::SuperVdsm.ServerCallback::(wrapper) call getHardwareInfo with () {}
MainProcess|jsonrpc.Executor/3::DEBUG::2017-08-28 01:21:58,027::supervdsmServer::99::SuperVdsm.ServerCallback::(wrapper) return getHardwareInfo with {'systemProductName': '', 'systemUUID': '20DC6410-4F9F-DF11-9A25-7071BC772964', 'systemSerialNumber': '', 'systemVersion': '', 'systemManufacturer': ''}
------------------------------------------------------------------------
Gary Pedretty                                  gary(a)ravnalaska.net
Systems Manager                                www.flyravn.com
Ravn Alaska                       /\           907-450-7251
5245 Airport Industrial Road     /  \/\        907-450-7238 fax
Fairbanks, Alaska  99709        /\  /  \ \     Second greatest commandment
Serving All of Alaska          /  \/  /\ \ \/\   “Love your neighbor as
Green, green as far as the eyes can see        yourself” Matt 22:39
------------------------------------------------------------------------

> On Aug 27, 2017, at 11:24 PM, Yaniv Kaul <ykaul(a)redhat.com> wrote:
>
> vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
'::', 'ports': ['enp2s0.4'], 'opts': {'multicast_last_member_count': =
'2', 'hash_elasticity': '4', 'multicast_query_response_interval': =
'1000', 'group_fwd_mask': '0x0', 'multicast_snooping': '1', =
'multicast_startup_query_interval': '3125', 'hello_timer': '102', =
'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max': =
'512', 'stp_state': '0', 'topology_change_detected': '0', 'priority': =
'32768', 'multicast_membership_interval': '26000', 'root_path_cost': =
'0', 'root_port': '0', 'multicast_querier': '0', =
'multicast_startup_query_count': '2', 'nf_call_iptables': '0', =
'topology_change': '0', 'hello_time': '200', 'root_id': =
'8000.6805ca4606af', 'bridge_id': '8000.6805ca4606af', =
'topology_change_timer': '0', 'ageing_time': '30000', =
'nf_call_ip6tables': '0', 'gc_timer': '8307', 'nf_call_arptables': '0', =
'group_addr': '1:80:c2:0:0:0', 'multicast_last_member_interval': '100', =
'default_pvid': '1', 'multicast_query_interval': '12500', 'tcn_timer': =
'0', 'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': =
'0'}}, 'dmz': {'ipv6autoconf': False, 'addr': '', 'cfg': {'IPV6INIT': =
'no', 'DEFROUTE': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': =
'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'dmz', 'TYPE': =
'Bridge', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'gateway': '', 'dhcpv4': =
False, 'netmask': '', 'dhcpv6': False, 'stp': 'off', 'ipv4addrs': [], =
'mtu': '1500', 'ipv6gateway': '::', 'ports': ['vnet0', 'enp4s0.100'], =
'opts': {'multicast_last_member_count': '2', 'hash_elasticity': '4', =
'multicast_query_response_interval': '1000', 'group_fwd_mask': '0x0', =
'multicast_snooping': '1', 'multicast_startup_query_interval': '3125', =
'hello_timer': '103', 'multicast_querier_interval': '25500', 'max_age': =
'2000', 'hash_max': '512', 'stp_state': '0', 'topology_change_detected': =
'0', 'priority': '32768', 'multicast_membership_interval': '26000', =
'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', =
'multicast_startup_query_count': '2', 'nf_call_iptables': '0', =
'topology_change': '0', 'hello_time': '200', 'root_id': =
'8000.001b212f794d', 'bridge_id': '8000.001b212f794d', =
'topology_change_timer': '0', 'ageing_time': '30000', =
'nf_call_ip6tables': '0', 'gc_timer': '7181', 'nf_call_arptables': '0', =
'group_addr': '1:80:c2:0:0:0', 'multicast_last_member_interval': '100', =
'default_pvid': '1', 'multicast_query_interval': '12500', 'tcn_timer': =
'0', 'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': =
'0'}}, 'ovirt-2': {'ipv6autoconf': False, 'addr': '', 'cfg': =
{'IPV6INIT': 'no', 'DEFROUTE': 'no', 'MTU': '1500', 'DELAY': '0', =
'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': =
'ovirt-2', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ipv6addrs': [], =
'gateway': '', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'stp': =
'off', 'ipv4addrs': [], 'mtu': '1500', 'ipv6gateway': '::', 'ports': =
['enp2s0.2'], 'opts': {'multicast_last_member_count': '2', =
'hash_elasticity': '4', 'multicast_query_response_interval': '1000', =
'group_fwd_mask': '0x0', 'multicast_snooping': '1', =
'multicast_startup_query_interval': '3125', 'hello_timer': '2', =
'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max': =
'512', 'stp_state': '0', 'topology_change_detected': '0', 'priority': =
'32768', 'multicast_membership_interval': '26000', 'root_path_cost': =
'0', 'root_port': '0', 'multicast_querier': '0', =
'multicast_startup_query_count': '2', 'nf_call_iptables': '0', =
'topology_change': '0', 'hello_time': '200', 'root_id': =
'8000.6805ca4606af', 'bridge_id': '8000.6805ca4606af', =
'topology_change_timer': '0', 'ageing_time': '30000', =
'nf_call_ip6tables': '0', 'gc_timer': '320', 'nf_call_arptables': '0', =
'group_addr': '1:80:c2:0:0:0', 'multicast_last_member_interval': '100', =
'default_pvid': '1', 'multicast_query_interval': '12500', 'tcn_timer': =
'0', 'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': =
'0'}}, 'ovirt-3': {'ipv6autoconf': False, 'addr': '', 'cfg': =
{'IPV6INIT': 'no', 'DEFROUTE': 'no', 'MTU': '1500', 'DELAY': '0', =
'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': =
'ovirt-3', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ipv6addrs': [], =
'gateway': '', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'stp': =
'off', 'ipv4addrs': [], 'mtu': '1500', 'ipv6gateway': '::', 'ports': =
['enp2s0.3'], 'opts': {'multicast_last_member_count': '2', =
'hash_elasticity': '4', 'multicast_query_response_interval': '1000', =
'group_fwd_mask': '0x0', 'multicast_snooping': '1', =
'multicast_startup_query_interval': '3125', 'hello_timer': '1', =
'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max': =
'512', 'stp_state': '0', 'topology_change_detected': '0', 'priority': =
'32768', 'multicast_membership_interval': '26000', 'root_path_cost': =
'0', 'root_port': '0', 'multicast_querier': '0', =
'multicast_startup_query_count': '2', 'nf_call_iptables': '0', =
'topology_change': '0', 'hello_time': '200', 'root_id': =
'8000.6805ca4606af', 'bridge_id': '8000.6805ca4606af', =
'topology_change_timer': '0', 'ageing_time': '30000', =
'nf_call_ip6tables': '0', 'gc_timer': '14656', 'nf_call_arptables': '0', =
'group_addr': '1:80:c2:0:0:0', 'multicast_last_member_interval': '100', =
'default_pvid': '1', 'multicast_query_interval': '12500', 'tcn_timer': =
'0', 'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': =
'0'}}}, 'bondings': {}, 'nameservers': ['127.0.0.1', '209.193.4.7', =
'209.112.128.2'], 'nics': {'enp2s0': {'ipv6gateway': '::', =
'ipv6autoconf': False, 'addr': '', 'cfg': {'IPV6INIT': 'no', 'MTU': =
'1500', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'DEVICE': 'enp2s0', =
'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': False, =
'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'hwaddr': =
'68:05:ca:46:06:af', 'speed': 1000, 'gateway': ''}, 'eno1': =
{'ipv6gateway': '::', 'ipv6autoconf': False, 'addr': '', 'cfg': =
{'BRIDGE': 'ovirtmgmt', 'IPV6INIT': 'no', 'MTU': '1500', =
'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'DEVICE': 'eno1', 'ONBOOT': =
'yes'}, 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': False, 'netmask': '', =
'dhcpv6': False, 'ipv4addrs': [], 'hwaddr': '70:71:bc:77:29:64', =
'speed': 1000, 'gateway': ''}, 'enp4s0': {'ipv6gateway': '::', =
'ipv6autoconf': False, 'addr': '', 'cfg': {'IPV6INIT': 'no', 'MTU': =
'1500', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'DEVICE': 'enp4s0', =
'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': False, =
'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'hwaddr': =
'00:1b:21:2f:79:4d', 'speed': 1000, 'gateway': ''}}, 'supportsIPv6': =
True, 'vlans': {'enp2s0.4': {'iface': 'enp2s0', 'ipv6autoconf': False, =
'addr': '', 'cfg': {'BRIDGE': 'ovirt-4', 'IPV6INIT': 'no', 'VLAN': =
'yes', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', =
'DEVICE': 'enp2s0.4', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'vlanid': 4, =
'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, =
'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': ''}, 'enp4s0.100': =
{'iface': 'enp4s0', 'ipv6autoconf': False, 'addr': '', 'cfg': {'BRIDGE': =
'dmz', 'IPV6INIT': 'no', 'VLAN': 'yes', 'MTU': '1500', 'NM_CONTROLLED': =
'no', 'BOOTPROTO': 'none', 'DEVICE': 'enp4s0.100', 'ONBOOT': 'yes'}, =
'ipv6addrs': [], 'vlanid': 100, 'mtu': '1500', 'dhcpv4': False, =
'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'ipv6gateway': '::', =
'gateway': ''}, 'enp2s0.2': {'iface': 'enp2s0', 'ipv6autoconf': False, =
'addr': '', 'cfg': {'BRIDGE': 'ovirt-2', 'IPV6INIT': 'no', 'VLAN': =
'yes', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', =
'DEVICE': 'enp2s0.2', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'vlanid': 2, =
'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, =
'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': ''}, 'enp2s0.3': =
{'iface': 'enp2s0', 'ipv6autoconf': False, 'addr': '', 'cfg': {'BRIDGE': =
'ovirt-3', 'IPV6INIT': 'no', 'VLAN': 'yes', 'MTU': '1500', =
'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'DEVICE': 'enp2s0.3', =
'ONBOOT': 'yes'}, 'ipv6addrs': [], 'vlanid': 3, 'mtu': '1500', 'dhcpv4': =
False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'ipv6gateway': =
'::', 'gateway': ''}}, 'networks': {'ovirtmgmt': {'iface': 'ovirtmgmt', =
'ipv6autoconf': False, 'addr': '10.9.2.61', 'cfg': {'IPV6INIT': 'no', =
'DEFROUTE':</div><div class=3D"">'yes', 'DNS1': '127.0.0.1', 'IPADDR': =
'10.9.2.61', 'GATEWAY': '10.9.2.1', 'DELAY': '0', 'NM_CONTROLLED': 'no', =
'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'STP': 'off', 'DNS2': =
'209.193.4.7', 'DEVICE': 'ovirtmgmt', 'MTU': '1500', 'TYPE': 'Bridge', =
'ONBOOT': 'yes'}, 'bridged': True, 'ipv6addrs': [], 'switch': 'legacy', =
'gateway': '10.9.2.1', 'dhcpv4': False, 'netmask': '255.255.255.0', =
'dhcpv6': False, 'stp': 'off', 'ipv4addrs': ['10.9.2.61/24'], 'mtu': =
'1500', 'ipv6gateway': '::', 'ports': ['eno1']}, 'ovirt-4': {'iface': =
'ovirt-4', 'ipv6autoconf': False, 'addr': '10.9.3.61', 'cfg': =
{'IPV6INIT': 'no', 'DEFROUTE': 'no', 'IPADDR': '10.9.3.61', 'MTU': =
'1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', =
'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'ovirt-4', 'TYPE': =
'Bridge', 'ONBOOT': 'yes'}, 'bridged': True, 'ipv6addrs': [],</div><div =
class=3D"">'switch': 'legacy', 'gateway': '10.9.3.1', 'dhcpv4': False, =
'netmask': '255.255.255.0', 'dhcpv6': False, 'stp': 'off', 'ipv4addrs': =
['10.9.3.61/24'], 'mtu': '1500', 'ipv6gateway': '::', 'ports': =
['enp2s0.4']}, 'ovirt-2': {'iface': 'ovirt-2', 'ipv6autoconf': False, =
'addr': '', 'cfg': {'IPV6INIT': 'no', 'DEFROUTE': 'no', 'MTU': '1500', =
'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', =
'DEVICE': 'ovirt-2', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'bridged': =
True, 'ipv6addrs': [], 'switch': 'legacy', 'gateway': '', 'dhcpv4': =
False, 'netmask': '', 'dhcpv6': False, 'stp': 'off', 'ipv4addrs': [], =
'mtu': '1500', 'ipv6gateway': '::', 'ports': ['enp2s0.2']}, 'dmz': =
{'iface': 'dmz', 'ipv6autoconf': False, 'addr': '', 'cfg': {'IPV6INIT': =
'no', 'DEFROUTE': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': =
'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'dmz', 'TYPE': =
'Bridge', 'ONBOOT': 'yes'}, 'bridged': True, 'ipv6addrs': [], 'switch': =
'legacy', 'gateway': '', 'dhcpv4': False, 'netmask': '', 'dhcpv6': =
False, 'stp': 'off', 'ipv4addrs': [], 'mtu': '1500', 'ipv6gateway': =
'::', 'ports': ['vnet0', 'enp4s0.100']}, 'ovirt-3': {'iface': 'ovirt-3', =
'ipv6autoconf': False, 'addr': '', 'cfg': {'IPV6INIT': 'no', 'DEFROUTE': =
'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': =
'none', 'STP': 'off', 'DEVICE': 'ovirt-3', 'TYPE': 'Bridge', 'ONBOOT': =
'yes'}, 'bridged': True, 'ipv6addrs': [], 'switch': 'legacy', 'gateway': =
'', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'stp': 'off', =
'ipv4addrs': [], 'mtu': '1500', 'ipv6gateway': '::', 'ports': =
['enp2s0.3']}}}</div><div =
class=3D"">MainProcess|jsonrpc.Executor/3::DEBUG::2017-08-28 =
01:21:58,027::supervdsmServer::92::SuperVdsm.ServerCallback::(wrapper) =
call getHardwareInfo with () {}</div><div =
class=3D"">MainProcess|jsonrpc.Executor/3::DEBUG::2017-08-28 =
01:21:58,027::supervdsmServer::99::SuperVdsm.ServerCallback::(wrapper) =
return getHardwareInfo with {'systemProductName': '', 'systemUUID': =
'20DC6410-4F9F-DF11-9A25-7</div></div><div class=3D"" =
style=3D"font-family: LucidaGrande;"><br class=3D""></div><div class=3D"" =
style=3D"font-family: LucidaGrande;"><br class=3D""></div><div class=3D"" =
style=3D"font-family: LucidaGrande;"><br class=3D""><div class=3D""><div =
class=3D"" style=3D"word-wrap: break-word; -webkit-nbsp-mode: space; =
-webkit-line-break: after-white-space;"><div class=3D"" =
style=3D"word-wrap: break-word; -webkit-nbsp-mode: space; =
-webkit-line-break: after-white-space;"><div class=3D"" =
style=3D"word-wrap: break-word; -webkit-nbsp-mode: space; =
-webkit-line-break: after-white-space;"><div class=3D"" =
style=3D"word-wrap: break-word; -webkit-nbsp-mode: space; =
-webkit-line-break: after-white-space;"><div class=3D"" =
style=3D"word-wrap: break-word; -webkit-nbsp-mode: space; =
-webkit-line-break: after-white-space;"><div class=3D"" =
style=3D"word-wrap: break-word; -webkit-nbsp-mode: space; =
-webkit-line-break: after-white-space;"><div class=3D"" =
style=3D"word-wrap: break-word; -webkit-nbsp-mode: space; =
-webkit-line-break: after-white-space;"><div class=3D"" =
style=3D"word-wrap: break-word; -webkit-nbsp-mode: space; =
-webkit-line-break: after-white-space;"><div class=3D"" =
style=3D"word-wrap: break-word; -webkit-nbsp-mode: space; =
-webkit-line-break: after-white-space;"><div class=3D"" =
style=3D"word-wrap: break-word; -webkit-nbsp-mode: space; =
-webkit-line-break: after-white-space;"><div class=3D"" =
style=3D"word-wrap: break-word; -webkit-nbsp-mode: space; =
-webkit-line-break: after-white-space;"><div class=3D"" =
style=3D"word-wrap: break-word; -webkit-nbsp-mode: space; =
-webkit-line-break: after-white-space;"><font face=3D"Menlo" class=3D"" =
style=3D"font-size: 12px;"><div =
class=3D"">---------------------------------------------------------------=
---------</div><div class=3D"">Gary Pedretty =
=
<a =
href=3D"mailto:gary@eraalaska.net" =
class=3D"">gary(a)ravnalaska.net</a></div><div class=3D"">Systems Manager =
=
=
<a href=3D"http://www.flyravn.com" =
class=3D"">www.flyravn.com</a></div><div class=3D"">Ravn Alaska =
=
/\ =
907-450-7251</div><div class=3D"">5245 Airport Industrial =
Road / \/\ =
907-450-7238 fax</div><div class=3D"">Fairbanks, Alaska =
99709 /\ / \ \ =
Second greatest commandment</div></font><font face=3D"Monaco" =
class=3D""><span class=3D"" style=3D"font-size: 12px;">Serving All of =
Alaska / \/ /\ \ \/\ =
=E2=80=9CLove your neighbor as</span></font><br class=3D"" =
style=3D"font-family: Monaco;"><font face=3D"Menlo" class=3D""><span =
class=3D"" style=3D"font-size: 12px;">Green, green as far as the eyes =
can see yourself=E2=80=9D Matt =
22:39</span></font><div class=3D"" style=3D"font-family: =
Menlo;"></div><font face=3D"Menlo" class=3D"" style=3D"font-size: =
12px;"></font><span class=3D"" style=3D"font-size: 12px;"><font =
face=3D"Menlo" class=3D""><div =
class=3D"">---------------------------------------------------------------=
---------</div></font></span><div class=3D""><font face=3D"Menlo" =
class=3D"" style=3D"font-size: 12px;"><br =
class=3D""></font></div></div><span class=3D"" style=3D"font-size: =
12px;"><br class=3D"Apple-interchange-newline"></span></div><span =
class=3D"" style=3D"font-size: 12px;"><br =
class=3D"Apple-interchange-newline"></span></div><span class=3D"" =
style=3D"font-size: 12px;"><br =
class=3D"Apple-interchange-newline"></span></div><br =
class=3D"Apple-interchange-newline"></div><br =
class=3D"Apple-interchange-newline"></div><br =
class=3D"Apple-interchange-newline"></div><br =
class=3D"Apple-interchange-newline"></div><br =
class=3D"Apple-interchange-newline"></div><br =
class=3D"Apple-interchange-newline"></div><br =
class=3D"Apple-interchange-newline"></div><br =
class=3D"Apple-interchange-newline"></div><br =
class=3D"Apple-interchange-newline"><br =
class=3D"Apple-interchange-newline"></div><br class=3D""><blockquote =
type=3D"cite" class=3D"">On Aug 27, 2017, at 11:24 PM, Yaniv Kaul <<a =
href=3D"mailto:ykaul@redhat.com" class=3D"">ykaul(a)redhat.com</a>> =
wrote:<br class=3D""><br class=3D"">vdsm vds.dispatcher ERROR SSL error =
during reading data: unexpected eof</blockquote></div></body></html>=
7 years, 7 months
Addition of hosts fail at network setup
by Paul-Erik Törrönen
Hello,
I'm attempting to add another host to the cluster, and I'm getting the
following error when I add the ovirtmgmt network to the physical network
device:
Error while executing action HostSetupNetworks: Unexpected exception
In the host vdsm.log I have the following:
2017-09-25 19:58:35,060+0300 ERROR (jsonrpc/4) [jsonrpc.JsonRpcServer]
Internal server error (__init__:577)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 572, in _handle_request
    res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 202, in _dynamicMethod
    result = fn(*methodArgs)
  File "/usr/share/vdsm/API.py", line 1575, in setupNetworks
    supervdsm.getProxy().setupNetworks(networks, bondings, options)
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 53, in __call__
    return callMethod()
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 51, in <lambda>
    **kwargs)
  File "<string>", line 2, in setupNetworks
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
IOError: [Errno 2] No such file or directory: u'/etc/sysconfig/network-scripts/ifcfg-ovirtmgmt'
2017-09-25 19:58:35,061+0300 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer]
RPC call Host.setupNetworks failed (error -32603) in 0.18 seconds
(__init__:539)
I thought the ifcfg-ovirtmgmt file was supposed to be created by the
oVirt setup?
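For reference, a minimal ifcfg-ovirtmgmt of the kind VDSM normally
persists looks roughly like this (just a sketch; the addresses below are
placeholders, not values from my setup):

    DEVICE=ovirtmgmt
    TYPE=Bridge
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=192.0.2.10
    NETMASK=255.255.255.0
    GATEWAY=192.0.2.1
    DELAY=0
    STP=off
    NM_CONTROLLED=no

But I would expect the host deploy / setup networks flow to write this
file itself rather than me having to create it by hand.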
I'm running the latest ovirt 4.1.6 on CentOS7.4 x86_64
(ovirt-release41-4.1.6-1.el7.centos.noarch).
Poltsi
7 years, 7 months
Clone of VM and copy of disks doubts
by Gianluca Cecchi
Hello,
I have a VM VM1 that I would like to clone.
It has 3 disks: 2 of them on storage domain SDA, 1 of them (size 80 GB) on a
second storage domain SDB
I want the new VM to have all the 3 disks on SDA, because SDB has not
enough space to accommodate the cloned disk
I power off VM1, select it and then "Clone VM". It seems I have no options
at all and that I can only specify the name of the new VM, while oVirt
would copy the VM with exactly the same disks, and so I receive the error
Error while executing action:
VM2:
- Cannot add VM. Low disk space on Storage Domain SDB.
Is this expected? If so, it seems not very flexible.
So I create a snapshot of VM1 and then "Clone". Inside the "Resource
Allocation" section, the drop-down for the 3 disks seems not enabled, in
the sense that for every disk it contains only the name of the origin SD,
so for 2 disks it contains SDA and no other SD to be chosen, for 1 disk it
contains SDB and no other SD to be chosen.
See here:
https://drive.google.com/file/d/0BwoPbcrMv8mvdFd0RFMzRXhmRzQ/view?usp=sha...
Can anyone replicate this in 4.1.5 to see if it is a problem of mine?
So as a last resource I decide to copy disk by disk.
Apparently my VM1 disk in SDB is preallocated: at least this is what I see
if I select VM1, Snapshots, select the "Current" line (it is the only one as
there are no snapshots), then "Disks" in the right subpane
See here for screenshot:
https://drive.google.com/file/d/0BwoPbcrMv8mvclV5bDdsdE9Ubmc/view?usp=sha...
Is there a more convenient and reliable way to see the disk type from the web admin GUI?
When I copy the disk it seems it tries to create a thin provisioned disk,
and it takes about 4 hours to copy 80 GB from the SSD to the SAS SD, with
very little I/O activity on the hypervisor in the meantime (I have only one
node in this test), when I would expect around 40 minutes at a throughput
of at least 40 MB/s (80 GB at 40 MB/s is roughly 35 minutes) from a dd-like
copy, which is what the qemu-img command should essentially be... instead
the command I see from the command line is of this kind:
/usr/bin/qemu-img convert -p -t none -T none -f qcow2
/rhev/data-center/59b7af54-0155-01c2-0248-000000000195/70411599-c19f-4a6a-80b5-abdeffc1616e/images/3d357b96-57e1-47bd-be65-558252a591d3/77ba5e2b-dfbe-4801-b5ec-dc9ffae2c481
-O qcow2 -o compat=1.1
/rhev/data-center/mnt/blockSD/fad05d79-254d-4f40-8201-360757128ede/images/73292653-f5d9-49ae-8132-f17d288bf822/deb4b841-95cd-4523-9523-f2ffdbb9b4a9
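If I wanted to force a preallocated destination instead, I suppose something
like the following could work (a sketch only; it assumes the qcow2
preallocation option is supported by the qemu-img build on the host, which
I have not verified):

    qemu-img convert -p -t none -T none -f qcow2 -O qcow2 \
        -o compat=1.1,preallocation=full \
        /path/to/source/volume /path/to/destination/volume

where the two paths stand in for the long /rhev/data-center/... paths as in
the command above.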
Also, if I take this VM and create a snapshot of it I see this:
- starting point all the 3 disks seem preallocated (same method as above)
- after creating a snapshot, if I select the "Current" line the 3 disks are
all thin provision, while if I select the line of the taken snapshot the
disks are all preallocated.
Any comments or suggestions about the points above?
Thanks,
Gianluca
7 years, 7 months
Snapshot removal vs selinux enforced
by Lionel Caignec
Hi,
I have a problem with SELinux in enforcing mode.
When I tried to live-remove a snapshot the operation failed. After some headache I found the source of the problem: SELinux.
When I "setenforce 0" the removal task works; when I "setenforce 1" the removal task fails.
Log from audit.log:
avc: denied { write } for pid=28360 tcontext=system_u:object_r:fixed_disk_device_t:s0 tclass=blk_file
I'm on RHEL 7.4 with oVirt 4.1; is there some specific configuration to do?
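In case it helps, this is how I inspected the denial and generated a local
policy module to work around it (a sketch; it assumes the audit and
policycoreutils-python packages are installed, the module name is arbitrary,
and I do not know if this is the recommended fix):

    # show the recent AVC denials
    ausearch -m avc -ts recent

    # build and load a local module allowing them
    ausearch -m avc -ts recent | audit2allow -M vdsm_snapshot_local
    semodule -i vdsm_snapshot_local.pp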
Thanks for your help.
7 years, 7 months
Failed gdeploy
by Sean McMurray
My latest attempt to deploy went like this
(/tmp/tmpaQJuTG/run-script.yml and /tmp/gdeployConfig.conf are pasted
below the gdeploy transcript):
# gdeploy -k -vv -c /tmp/gdeployConfig.conf --trace
Using /etc/ansible/ansible.cfg as config file
PLAYBOOK: run-script.yml
*********************************************************************************************************************
1 plays in /tmp/tmpaQJuTG/run-script.yml
PLAY [gluster_servers]
***********************************************************************************************************************
META: ran handlers
TASK [Run a shell script]
********************************************************************************************************************
task path: /tmp/tmpaQJuTG/run-script.yml:7
changed: [192.168.1.3] =>
(item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
192.168.1.1,192.168.1.2,192.168.1.3) => {"changed": true, "failed":
false, "failed_when_result": false, "item":
"/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
192.168.1.1,192.168.1.2,192.168.1.3", "rc": 0, "stderr": "Shared
connection to 192.168.1.3 closed.\r\n", "stdout": "", "stdout_lines": []}
changed: [192.168.1.2] =>
(item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
192.168.1.1,192.168.1.2,192.168.1.3) => {"changed": true, "failed":
false, "failed_when_result": false, "item":
"/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
192.168.1.1,192.168.1.2,192.168.1.3", "rc": 0, "stderr": "Shared
connection to 192.168.1.2 closed.\r\n", "stdout": "", "stdout_lines": []}
changed: [192.168.1.1] =>
(item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
192.168.1.1,192.168.1.2,192.168.1.3) => {"changed": true, "failed":
false, "failed_when_result": false, "item":
"/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
192.168.1.1,192.168.1.2,192.168.1.3", "rc": 0, "stderr": "Shared
connection to 192.168.1.1 closed.\r\n", "stdout": "scp:
/tmp/*_host_ip_2017_09_22.txt: No such file or directory\r\nscp:
/tmp/*_host_ip_2017_09_22.txt: No such file or directory\r\n",
"stdout_lines": ["scp: /tmp/*_host_ip_2017_09_22.txt: No such file or
directory", "scp: /tmp/*_host_ip_2017_09_22.txt: No such file or
directory"]}
META: ran handlers
META: ran handlers
PLAY RECAP
***********************************************************************************************************************************
192.168.1.1 : ok=1 changed=1 unreachable=0 failed=0
192.168.1.2 : ok=1 changed=1 unreachable=0 failed=0
192.168.1.3 : ok=1 changed=1 unreachable=0 failed=0
Using /etc/ansible/ansible.cfg as config file
PLAYBOOK: chkconfig_service.yml
**************************************************************************************************************
1 plays in /tmp/tmpaQJuTG/chkconfig_service.yml
PLAY [gluster_servers]
***********************************************************************************************************************
META: ran handlers
TASK [Enable or disable services]
************************************************************************************************************
task path: /tmp/tmpaQJuTG/chkconfig_service.yml:7
ok: [192.168.1.3] => (item=chronyd) => {"changed": false, "enabled":
true, "item": "chronyd", "name": "chronyd", "status":
{"ActiveEnterTimestamp": "Fri 2017-09-22 08:05:49 PDT",
"ActiveEnterTimestampMonotonic": "57218106256", "ActiveExitTimestamp":
"Fri 2017-09-22 08:05:49 PDT", "ActiveExitTimestampMonotonic":
"57218037256", "ActiveState": "active", "After":
"systemd-journald.socket var.mount tmp.mount ntpd.service -.mount
system.slice sntp.service basic.target ntpdate.service", "AllowIsolate":
"no", "AmbientCapabilities": "0", "AssertResult": "yes",
"AssertTimestamp": "Fri 2017-09-22 08:05:49 PDT",
"AssertTimestampMonotonic": "57218053902", "Before": "multi-user.target
imgbase-config-vdsm.service shutdown.target", "BlockIOAccounting": "no",
"BlockIOWeight": "18446744073709551615", "CPUAccounting": "no",
"CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0",
"CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no",
"CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload":
"no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet":
"18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp":
"Fri 2017-09-22 08:05:49 PDT", "ConditionTimestampMonotonic":
"57218053870", "Conflicts": "shutdown.target systemd-timesyncd.service
ntpd.service", "ControlGroup": "/system.slice/chronyd.service",
"ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no",
"Description": "NTP client/server", "DevicePolicy": "auto",
"Documentation": "man:chronyd(8) man:chrony.conf(5)", "EnvironmentFile":
"/etc/sysconfig/chronyd (ignore_errors=yes)", "ExecMainCode": "0",
"ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "7300",
"ExecMainStartTimestamp": "Fri 2017-09-22 08:05:49 PDT",
"ExecMainStartTimestampMonotonic": "57218077328", "ExecMainStatus": "0",
"ExecStart": "{ path=/usr/sbin/chronyd ; argv[]=/usr/sbin/chronyd
$OPTIONS ; ignore_errors=no ; start_time=[Fri 2017-09-22 08:05:49 PDT] ;
stop_time=[Fri 2017-09-22 08:05:49 PDT] ; pid=7298 ; code=exited ;
status=0 }", "ExecStartPost": "{ path=/usr/libexec/chrony-helper ;
argv[]=/usr/libexec/chrony-helper update-daemon ; ignore_errors=no ;
start_time=[Fri 2017-09-22 08:05:49 PDT] ; stop_time=[Fri 2017-09-22
08:05:49 PDT] ; pid=7302 ; code=exited ; status=0 }", "FailureAction":
"none", "FileDescriptorStoreMax": "0", "FragmentPath":
"/usr/lib/systemd/system/chronyd.service", "GuessMainPID": "yes",
"IOScheduling": "0", "Id": "chronyd.service", "IgnoreOnIsolate": "no",
"IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes",
"InactiveEnterTimestamp": "Fri 2017-09-22 08:05:49 PDT",
"InactiveEnterTimestampMonotonic": "57218052782",
"InactiveExitTimestamp": "Fri 2017-09-22 08:05:49 PDT",
"InactiveExitTimestampMonotonic": "57218054578", "JobTimeoutAction":
"none", "JobTimeoutUSec": "0", "KillMode": "control-group",
"KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE":
"18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA":
"18446744073709551615", "LimitFSIZE": "18446744073709551615",
"LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536",
"LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096",
"LimitNPROC": "62043", "LimitRSS": "18446744073709551615",
"LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615",
"LimitSIGPENDING": "62043", "LimitSTACK": "18446744073709551615",
"LoadState": "loaded", "MainPID": "7300", "MemoryAccounting": "no",
"MemoryCurrent": "18446744073709551615", "MemoryLimit":
"18446744073709551615", "MountFlags": "0", "Names": "chronyd.service",
"NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no",
"NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0",
"OnFailureJobMode": "replace", "PIDFile": "/var/run/chronyd.pid",
"PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork":
"no", "PrivateTmp": "yes", "ProtectHome": "yes", "ProtectSystem":
"full", "RefuseManualStart": "no", "RefuseManualStop": "no",
"RemainAfterExit": "no", "Requires": "basic.target -.mount var.mount",
"RequiresMountsFor": "/var/tmp", "Restart": "no", "RestartUSec":
"100ms", "Result": "success", "RootDirectoryStartOnly": "no",
"RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits":
"0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice",
"StandardError": "inherit", "StandardInput": "null", "StandardOutput":
"journal", "StartLimitAction": "none", "StartLimitBurst": "5",
"StartLimitInterval": "10000000", "StartupBlockIOWeight":
"18446744073709551615", "StartupCPUShares": "18446744073709551615",
"StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running",
"SyslogLevelPrefix": "yes", "SyslogPriority": "30",
"SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no",
"TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent":
"18446744073709551615", "TasksMax": "18446744073709551615",
"TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s",
"TimerSlackNSec": "50000", "Transient": "no", "Type": "forking",
"UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState":
"enabled", "WantedBy": "multi-user.target", "Wants": "system.slice",
"WatchdogTimestamp": "Fri 2017-09-22 08:05:49 PDT",
"WatchdogTimestampMonotonic": "57218077340", "WatchdogUSec": "0"}}
ok: [192.168.1.2] => (item=chronyd) => {"changed": false, "enabled":
true, "item": "chronyd", "name": "chronyd", "status":
{"ActiveEnterTimestamp": "Fri 2017-09-22 15:04:50 PDT",
"ActiveEnterTimestampMonotonic": "139348506558", "ActiveExitTimestamp":
"Fri 2017-09-22 15:04:50 PDT", "ActiveExitTimestampMonotonic":
"139348424046", "ActiveState": "active", "After": "ntpd.service
sntp.service systemd-journald.socket tmp.mount ntpdate.service -.mount
system.slice basic.target var.mount", "AllowIsolate": "no",
"AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp":
"Fri 2017-09-22 15:04:50 PDT", "AssertTimestampMonotonic":
"139348447946", "Before": "shutdown.target multi-user.target
imgbase-config-vdsm.service", "BlockIOAccounting": "no",
"BlockIOWeight": "18446744073709551615", "CPUAccounting": "no",
"CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0",
"CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no",
"CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload":
"no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet":
"18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp":
"Fri 2017-09-22 15:04:50 PDT", "ConditionTimestampMonotonic":
"139348447884", "Conflicts": "shutdown.target systemd-timesyncd.service
ntpd.service", "ControlGroup": "/system.slice/chronyd.service",
"ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no",
"Description": "NTP client/server", "DevicePolicy": "auto",
"Documentation": "man:chronyd(8) man:chrony.conf(5)", "EnvironmentFile":
"/etc/sysconfig/chronyd (ignore_errors=yes)", "ExecMainCode": "0",
"ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "10148",
"ExecMainStartTimestamp": "Fri 2017-09-22 15:04:50 PDT",
"ExecMainStartTimestampMonotonic": "139348474428", "ExecMainStatus":
"0", "ExecStart": "{ path=/usr/sbin/chronyd ; argv[]=/usr/sbin/chronyd
$OPTIONS ; ignore_errors=no ; start_time=[Fri 2017-09-22 15:04:50 PDT] ;
stop_time=[Fri 2017-09-22 15:04:50 PDT] ; pid=10146 ; code=exited ;
status=0 }", "ExecStartPost": "{ path=/usr/libexec/chrony-helper ;
argv[]=/usr/libexec/chrony-helper update-daemon ; ignore_errors=no ;
start_time=[Fri 2017-09-22 15:04:50 PDT] ; stop_time=[Fri 2017-09-22
15:04:50 PDT] ; pid=10150 ; code=exited ; status=0 }", "FailureAction":
"none", "FileDescriptorStoreMax": "0", "FragmentPath":
"/usr/lib/systemd/system/chronyd.service", "GuessMainPID": "yes",
"IOScheduling": "0", "Id": "chronyd.service", "IgnoreOnIsolate": "no",
"IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes",
"InactiveEnterTimestamp": "Fri 2017-09-22 15:04:50 PDT",
"InactiveEnterTimestampMonotonic": "139348445639",
"InactiveExitTimestamp": "Fri 2017-09-22 15:04:50 PDT",
"InactiveExitTimestampMonotonic": "139348449304", "JobTimeoutAction":
"none", "JobTimeoutUSec": "0", "KillMode": "control-group",
"KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE":
"18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA":
"18446744073709551615", "LimitFSIZE": "18446744073709551615",
"LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536",
"LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096",
"LimitNPROC": "62271", "LimitRSS": "18446744073709551615",
"LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615",
"LimitSIGPENDING": "62271", "LimitSTACK": "18446744073709551615",
"LoadState": "loaded", "MainPID": "10148", "MemoryAccounting": "no",
"MemoryCurrent": "18446744073709551615", "MemoryLimit":
"18446744073709551615", "MountFlags": "0", "Names": "chronyd.service",
"NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no",
"NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0",
"OnFailureJobMode": "replace", "PIDFile": "/var/run/chronyd.pid",
"PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork":
"no", "PrivateTmp": "yes", "ProtectHome": "yes", "ProtectSystem":
"full", "RefuseManualStart": "no", "RefuseManualStop": "no",
"RemainAfterExit": "no", "Requires": "basic.target -.mount var.mount",
"RequiresMountsFor": "/var/tmp", "Restart": "no", "RestartUSec":
"100ms", "Result": "success", "RootDirectoryStartOnly": "no",
"RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits":
"0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice",
"StandardError": "inherit", "StandardInput": "null", "StandardOutput":
"journal", "StartLimitAction": "none", "StartLimitBurst": "5",
"StartLimitInterval": "10000000", "StartupBlockIOWeight":
"18446744073709551615", "StartupCPUShares": "18446744073709551615",
"StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running",
"SyslogLevelPrefix": "yes", "SyslogPriority": "30",
"SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no",
"TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent":
"18446744073709551615", "TasksMax": "18446744073709551615",
"TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s",
"TimerSlackNSec": "50000", "Transient": "no", "Type": "forking",
"UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState":
"enabled", "WantedBy": "multi-user.target", "Wants": "system.slice",
"WatchdogTimestamp": "Fri 2017-09-22 15:04:50 PDT",
"WatchdogTimestampMonotonic": "139348474458", "WatchdogUSec": "0"}}
ok: [192.168.1.1] => (item=chronyd) => {"changed": false, "enabled":
true, "item": "chronyd", "name": "chronyd", "status":
{"ActiveEnterTimestamp": "Fri 2017-09-22 08:05:55 PDT",
"ActiveEnterTimestampMonotonic": "64535125522", "ActiveExitTimestamp":
"Fri 2017-09-22 08:05:54 PDT", "ActiveExitTimestampMonotonic":
"64535042941", "ActiveState": "active", "After": "ntpd.service tmp.mount
systemd-journald.socket system.slice sntp.service basic.target var.mount
ntpdate.service -.mount", "AllowIsolate": "no", "AmbientCapabilities":
"0", "AssertResult": "yes", "AssertTimestamp": "Fri 2017-09-22 08:05:54
PDT", "AssertTimestampMonotonic": "64535053307", "Before":
"shutdown.target multi-user.target imgbase-config-vdsm.service",
"BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615",
"CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity",
"CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0",
"CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615",
"CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop":
"yes", "CapabilityBoundingSet": "18446744073709551615",
"ConditionResult": "yes", "ConditionTimestamp": "Fri 2017-09-22 08:05:54
PDT", "ConditionTimestampMonotonic": "64535053202", "Conflicts":
"shutdown.target ntpd.service systemd-timesyncd.service",
"ControlGroup": "/system.slice/chronyd.service", "ControlPID": "0",
"DefaultDependencies": "yes", "Delegate": "no", "Description": "NTP
client/server", "DevicePolicy": "auto", "Documentation": "man:chronyd(8)
man:chrony.conf(5)", "EnvironmentFile": "/etc/sysconfig/chronyd
(ignore_errors=yes)", "ExecMainCode": "0",
"ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "9436",
"ExecMainStartTimestamp": "Fri 2017-09-22 08:05:55 PDT",
"ExecMainStartTimestampMonotonic": "64535083184", "ExecMainStatus": "0",
"ExecStart": "{ path=/usr/sbin/chronyd ; argv[]=/usr/sbin/chronyd
$OPTIONS ; ignore_errors=no ; start_time=[Fri 2017-09-22 08:05:54 PDT] ;
stop_time=[Fri 2017-09-22 08:05:55 PDT] ; pid=9434 ; code=exited ;
status=0 }", "ExecStartPost": "{ path=/usr/libexec/chrony-helper ;
argv[]=/usr/libexec/chrony-helper update-daemon ; ignore_errors=no ;
start_time=[Fri 2017-09-22 08:05:55 PDT] ; stop_time=[Fri 2017-09-22
08:05:55 PDT] ; pid=9438 ; code=exited ; status=0 }", "FailureAction":
"none", "FileDescriptorStoreMax": "0", "FragmentPath":
"/usr/lib/systemd/system/chronyd.service", "GuessMainPID": "yes",
"IOScheduling": "0", "Id": "chronyd.service", "IgnoreOnIsolate": "no",
"IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes",
"InactiveEnterTimestamp": "Fri 2017-09-22 08:05:54 PDT",
"InactiveEnterTimestampMonotonic": "64535051400",
"InactiveExitTimestamp": "Fri 2017-09-22 08:05:54 PDT",
"InactiveExitTimestampMonotonic": "64535054635", "JobTimeoutAction":
"none", "JobTimeoutUSec": "0", "KillMode": "control-group",
"KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE":
"18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA":
"18446744073709551615", "LimitFSIZE": "18446744073709551615",
"LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536",
"LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096",
"LimitNPROC": "14440", "LimitRSS": "18446744073709551615",
"LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615",
"LimitSIGPENDING": "14440", "LimitSTACK": "18446744073709551615",
"LoadState": "loaded", "MainPID": "9436", "MemoryAccounting": "no",
"MemoryCurrent": "18446744073709551615", "MemoryLimit":
"18446744073709551615", "MountFlags": "0", "Names": "chronyd.service",
"NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no",
"NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0",
"OnFailureJobMode": "replace", "PIDFile": "/var/run/chronyd.pid",
"PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork":
"no", "PrivateTmp": "yes", "ProtectHome": "yes", "ProtectSystem":
"full", "RefuseManualStart": "no", "RefuseManualStop": "no",
"RemainAfterExit": "no", "Requires": "basic.target -.mount var.mount",
"RequiresMountsFor": "/var/tmp", "Restart": "no", "RestartUSec":
"100ms", "Result": "success", "RootDirectoryStartOnly": "no",
"RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits":
"0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice",
"StandardError": "inherit", "StandardInput": "null", "StandardOutput":
"journal", "StartLimitAction": "none", "StartLimitBurst": "5",
"StartLimitInterval": "10000000", "StartupBlockIOWeight":
"18446744073709551615", "StartupCPUShares": "18446744073709551615",
"StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running",
"SyslogLevelPrefix": "yes", "SyslogPriority": "30",
"SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no",
"TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent":
"18446744073709551615", "TasksMax": "18446744073709551615",
"TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s",
"TimerSlackNSec": "50000", "Transient": "no", "Type": "forking",
"UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState":
"enabled", "WantedBy": "multi-user.target", "Wants": "system.slice",
"WatchdogTimestamp": "Fri 2017-09-22 08:05:55 PDT",
"WatchdogTimestampMonotonic": "64535083221", "WatchdogUSec": "0"}}
META: ran handlers
META: ran handlers
PLAY RECAP
***********************************************************************************************************************************
192.168.1.1 : ok=1 changed=0 unreachable=0 failed=0
192.168.1.2 : ok=1 changed=0 unreachable=0 failed=0
192.168.1.3 : ok=1 changed=0 unreachable=0 failed=0
Using /etc/ansible/ansible.cfg as config file
PLAYBOOK: service_management.yml
*************************************************************************************************************
1 plays in /tmp/tmpaQJuTG/service_management.yml
PLAY [gluster_servers]
***********************************************************************************************************************
META: ran handlers
TASK [start/stop/restart/reload services]
****************************************************************************************************
task path: /tmp/tmpaQJuTG/service_management.yml:7
changed: [192.168.1.3] => (item=chronyd) => {"changed": true, "item":
"chronyd", "name": "chronyd", "state": "started", "status":
{"ActiveEnterTimestamp": "Fri 2017-09-22 08:05:49 PDT",
"ActiveEnterTimestampMonotonic": "57218106256", "ActiveExitTimestamp":
"Fri 2017-09-22 08:05:49 PDT", "ActiveExitTimestampMonotonic":
"57218037256", "ActiveState": "active", "After":
"systemd-journald.socket var.mount tmp.mount ntpd.service -.mount
system.slice sntp.service basic.target ntpdate.service", "AllowIsolate":
"no", "AmbientCapabilities": "0", "AssertResult": "yes",
"AssertTimestamp": "Fri 2017-09-22 08:05:49 PDT",
"AssertTimestampMonotonic": "57218053902", "Before": "multi-user.target
imgbase-config-vdsm.service shutdown.target", "BlockIOAccounting": "no",
"BlockIOWeight": "18446744073709551615", "CPUAccounting": "no",
"CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0",
"CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no",
"CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload":
"no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet":
"18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp":
"Fri 2017-09-22 08:05:49 PDT", "ConditionTimestampMonotonic":
"57218053870", "Conflicts": "shutdown.target systemd-timesyncd.service
ntpd.service", "ControlGroup": "/system.slice/chronyd.service",
"ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no",
"Description": "NTP client/server", "DevicePolicy": "auto",
"Documentation": "man:chronyd(8) man:chrony.conf(5)", "EnvironmentFile":
"/etc/sysconfig/chronyd (ignore_errors=yes)", "ExecMainCode": "0",
"ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "7300",
"ExecMainStartTimestamp": "Fri 2017-09-22 08:05:49 PDT",
"ExecMainStartTimestampMonotonic": "57218077328", "ExecMainStatus": "0",
"ExecStart": "{ path=/usr/sbin/chronyd ; argv[]=/usr/sbin/chronyd
$OPTIONS ; ignore_errors=no ; start_time=[Fri 2017-09-22 08:05:49 PDT] ;
stop_time=[Fri 2017-09-22 08:05:49 PDT] ; pid=7298 ; code=exited ;
status=0 }", "ExecStartPost": "{ path=/usr/libexec/chrony-helper ;
argv[]=/usr/libexec/chrony-helper update-daemon ; ignore_errors=no ;
start_time=[Fri 2017-09-22 08:05:49 PDT] ; stop_time=[Fri 2017-09-22
08:05:49 PDT] ; pid=7302 ; code=exited ; status=0 }", "FailureAction":
"none", "FileDescriptorStoreMax": "0", "FragmentPath":
"/usr/lib/systemd/system/chronyd.service", "GuessMainPID": "yes",
"IOScheduling": "0", "Id": "chronyd.service", "IgnoreOnIsolate": "no",
"IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes",
"InactiveEnterTimestamp": "Fri 2017-09-22 08:05:49 PDT",
"InactiveEnterTimestampMonotonic": "57218052782",
"InactiveExitTimestamp": "Fri 2017-09-22 08:05:49 PDT",
"InactiveExitTimestampMonotonic": "57218054578", "JobTimeoutAction":
"none", "JobTimeoutUSec": "0", "KillMode": "control-group",
"KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE":
"18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA":
"18446744073709551615", "LimitFSIZE": "18446744073709551615",
"LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536",
"LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096",
"LimitNPROC": "62043", "LimitRSS": "18446744073709551615",
"LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615",
"LimitSIGPENDING": "62043", "LimitSTACK": "18446744073709551615",
"LoadState": "loaded", "MainPID": "7300", "MemoryAccounting": "no",
"MemoryCurrent": "18446744073709551615", "MemoryLimit":
"18446744073709551615", "MountFlags": "0", "Names": "chronyd.service",
"NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no",
"NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0",
"OnFailureJobMode": "replace", "PIDFile": "/var/run/chronyd.pid",
"PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork":
"no", "PrivateTmp": "yes", "ProtectHome": "yes", "ProtectSystem":
"full", "RefuseManualStart": "no", "RefuseManualStop": "no",
"RemainAfterExit": "no", "Requires": "basic.target -.mount var.mount",
"RequiresMountsFor": "/var/tmp", "Restart": "no", "RestartUSec":
"100ms", "Result": "success", "RootDirectoryStartOnly": "no",
"RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits":
"0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice",
"StandardError": "inherit", "StandardInput": "null", "StandardOutput":
"journal", "StartLimitAction": "none", "StartLimitBurst": "5",
"StartLimitInterval": "10000000", "StartupBlockIOWeight":
"18446744073709551615", "StartupCPUShares": "18446744073709551615",
"StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running",
"SyslogLevelPrefix": "yes", "SyslogPriority": "30",
"SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no",
"TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent":
"18446744073709551615", "TasksMax": "18446744073709551615",
"TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s",
"TimerSlackNSec": "50000", "Transient": "no", "Type": "forking",
"UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState":
"enabled", "WantedBy": "multi-user.target", "Wants": "system.slice",
"WatchdogTimestamp": "Fri 2017-09-22 08:05:49 PDT",
"WatchdogTimestampMonotonic": "57218077340", "WatchdogUSec": "0"}}
changed: [192.168.1.2] => (item=chronyd) => {"changed": true, "item":
"chronyd", "name": "chronyd", "state": "started", "status":
{"ActiveEnterTimestamp": "Fri 2017-09-22 15:04:50 PDT",
"ActiveEnterTimestampMonotonic": "139348506558", "ActiveExitTimestamp":
"Fri 2017-09-22 15:04:50 PDT", "ActiveExitTimestampMonotonic":
"139348424046", "ActiveState": "active", "After": "ntpd.service
sntp.service systemd-journald.socket tmp.mount ntpdate.service -.mount
system.slice basic.target var.mount", "AllowIsolate": "no",
"AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp":
"Fri 2017-09-22 15:04:50 PDT", "AssertTimestampMonotonic":
"139348447946", "Before": "shutdown.target multi-user.target
imgbase-config-vdsm.service", "BlockIOAccounting": "no",
"BlockIOWeight": "18446744073709551615", "CPUAccounting": "no",
"CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0",
"CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no",
"CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload":
"no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet":
"18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp":
"Fri 2017-09-22 15:04:50 PDT", "ConditionTimestampMonotonic":
"139348447884", "Conflicts": "shutdown.target systemd-timesyncd.service
ntpd.service", "ControlGroup": "/system.slice/chronyd.service",
"ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no",
"Description": "NTP client/server", "DevicePolicy": "auto",
"Documentation": "man:chronyd(8) man:chrony.conf(5)", "EnvironmentFile":
"/etc/sysconfig/chronyd (ignore_errors=yes)", "ExecMainCode": "0",
"ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "10148",
"ExecMainStartTimestamp": "Fri 2017-09-22 15:04:50 PDT",
"ExecMainStartTimestampMonotonic": "139348474428", "ExecMainStatus":
"0", "ExecStart": "{ path=/usr/sbin/chronyd ; argv[]=/usr/sbin/chronyd
$OPTIONS ; ignore_errors=no ; start_time=[Fri 2017-09-22 15:04:50 PDT] ;
stop_time=[Fri 2017-09-22 15:04:50 PDT] ; pid=10146 ; code=exited ;
status=0 }", "ExecStartPost": "{ path=/usr/libexec/chrony-helper ;
argv[]=/usr/libexec/chrony-helper update-daemon ; ignore_errors=no ;
start_time=[Fri 2017-09-22 15:04:50 PDT] ; stop_time=[Fri 2017-09-22
15:04:50 PDT] ; pid=10150 ; code=exited ; status=0 }", "FailureAction":
"none", "FileDescriptorStoreMax": "0", "FragmentPath":
"/usr/lib/systemd/system/chronyd.service", "GuessMainPID": "yes",
"IOScheduling": "0", "Id": "chronyd.service", "IgnoreOnIsolate": "no",
"IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes",
"InactiveEnterTimestamp": "Fri 2017-09-22 15:04:50 PDT",
"InactiveEnterTimestampMonotonic": "139348445639",
"InactiveExitTimestamp": "Fri 2017-09-22 15:04:50 PDT",
"InactiveExitTimestampMonotonic": "139348449304", "JobTimeoutAction":
"none", "JobTimeoutUSec": "0", "KillMode": "control-group",
"KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE":
"18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA":
"18446744073709551615", "LimitFSIZE": "18446744073709551615",
"LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536",
"LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096",
"LimitNPROC": "62271", "LimitRSS": "18446744073709551615",
"LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615",
"LimitSIGPENDING": "62271", "LimitSTACK": "18446744073709551615",
"LoadState": "loaded", "MainPID": "10148", "MemoryAccounting": "no",
"MemoryCurrent": "18446744073709551615", "MemoryLimit":
"18446744073709551615", "MountFlags": "0", "Names": "chronyd.service",
"NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no",
"NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0",
"OnFailureJobMode": "replace", "PIDFile": "/var/run/chronyd.pid",
"PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork":
"no", "PrivateTmp": "yes", "ProtectHome": "yes", "ProtectSystem":
"full", "RefuseManualStart": "no", "RefuseManualStop": "no",
"RemainAfterExit": "no", "Requires": "basic.target -.mount var.mount",
"RequiresMountsFor": "/var/tmp", "Restart": "no", "RestartUSec":
"100ms", "Result": "success", "RootDirectoryStartOnly": "no",
"RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits":
"0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice",
"StandardError": "inherit", "StandardInput": "null", "StandardOutput":
"journal", "StartLimitAction": "none", "StartLimitBurst": "5",
"StartLimitInterval": "10000000", "StartupBlockIOWeight":
"18446744073709551615", "StartupCPUShares": "18446744073709551615",
"StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running",
"SyslogLevelPrefix": "yes", "SyslogPriority": "30",
"SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no",
"TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent":
"18446744073709551615", "TasksMax": "18446744073709551615",
"TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s",
"TimerSlackNSec": "50000", "Transient": "no", "Type": "forking",
"UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState":
"enabled", "WantedBy": "multi-user.target", "Wants": "system.slice",
"WatchdogTimestamp": "Fri 2017-09-22 15:04:50 PDT",
"WatchdogTimestampMonotonic": "139348474458", "WatchdogUSec": "0"}}
changed: [192.168.1.1] => (item=chronyd) => {"changed": true, "item":
"chronyd", "name": "chronyd", "state": "started", "status":
{"ActiveEnterTimestamp": "Fri 2017-09-22 08:05:55 PDT",
"ActiveEnterTimestampMonotonic": "64535125522", "ActiveExitTimestamp":
"Fri 2017-09-22 08:05:54 PDT", "ActiveExitTimestampMonotonic":
"64535042941", "ActiveState": "active", "After": "ntpd.service tmp.mount
systemd-journald.socket system.slice sntp.service basic.target var.mount
ntpdate.service -.mount", "AllowIsolate": "no", "AmbientCapabilities":
"0", "AssertResult": "yes", "AssertTimestamp": "Fri 2017-09-22 08:05:54
PDT", "AssertTimestampMonotonic": "64535053307", "Before":
"shutdown.target multi-user.target imgbase-config-vdsm.service",
"BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615",
"CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity",
"CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0",
"CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615",
"CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop":
"yes", "CapabilityBoundingSet": "18446744073709551615",
"ConditionResult": "yes", "ConditionTimestamp": "Fri 2017-09-22 08:05:54
PDT", "ConditionTimestampMonotonic": "64535053202", "Conflicts":
"shutdown.target ntpd.service systemd-timesyncd.service",
"ControlGroup": "/system.slice/chronyd.service", "ControlPID": "0",
"DefaultDependencies": "yes", "Delegate": "no", "Description": "NTP
client/server", "DevicePolicy": "auto", "Documentation": "man:chronyd(8)
man:chrony.conf(5)", "EnvironmentFile": "/etc/sysconfig/chronyd
(ignore_errors=yes)", "ExecMainCode": "0",
"ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "9436",
"ExecMainStartTimestamp": "Fri 2017-09-22 08:05:55 PDT",
"ExecMainStartTimestampMonotonic": "64535083184", "ExecMainStatus": "0",
"ExecStart": "{ path=/usr/sbin/chronyd ; argv[]=/usr/sbin/chronyd
$OPTIONS ; ignore_errors=no ; start_time=[Fri 2017-09-22 08:05:54 PDT] ;
stop_time=[Fri 2017-09-22 08:05:55 PDT] ; pid=9434 ; code=exited ;
status=0 }", "ExecStartPost": "{ path=/usr/libexec/chrony-helper ;
argv[]=/usr/libexec/chrony-helper update-daemon ; ignore_errors=no ;
start_time=[Fri 2017-09-22 08:05:55 PDT] ; stop_time=[Fri 2017-09-22
08:05:55 PDT] ; pid=9438 ; code=exited ; status=0 }", "FailureAction":
"none", "FileDescriptorStoreMax": "0", "FragmentPath":
"/usr/lib/systemd/system/chronyd.service", "GuessMainPID": "yes",
"IOScheduling": "0", "Id": "chronyd.service", "IgnoreOnIsolate": "no",
"IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes",
"InactiveEnterTimestamp": "Fri 2017-09-22 08:05:54 PDT",
"InactiveEnterTimestampMonotonic": "64535051400",
"InactiveExitTimestamp": "Fri 2017-09-22 08:05:54 PDT",
"InactiveExitTimestampMonotonic": "64535054635", "JobTimeoutAction":
"none", "JobTimeoutUSec": "0", "KillMode": "control-group",
"KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE":
"18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA":
"18446744073709551615", "LimitFSIZE": "18446744073709551615",
"LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536",
"LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096",
"LimitNPROC": "14440", "LimitRSS": "18446744073709551615",
"LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615",
"LimitSIGPENDING": "14440", "LimitSTACK": "18446744073709551615",
"LoadState": "loaded", "MainPID": "9436", "MemoryAccounting": "no",
"MemoryCurrent": "18446744073709551615", "MemoryLimit":
"18446744073709551615", "MountFlags": "0", "Names": "chronyd.service",
"NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no",
"NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0",
"OnFailureJobMode": "replace", "PIDFile": "/var/run/chronyd.pid",
"PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork":
"no", "PrivateTmp": "yes", "ProtectHome": "yes", "ProtectSystem":
"full", "RefuseManualStart": "no", "RefuseManualStop": "no",
"RemainAfterExit": "no", "Requires": "basic.target -.mount var.mount",
"RequiresMountsFor": "/var/tmp", "Restart": "no", "RestartUSec":
"100ms", "Result": "success", "RootDirectoryStartOnly": "no",
"RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits":
"0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice",
"StandardError": "inherit", "StandardInput": "null", "StandardOutput":
"journal", "StartLimitAction": "none", "StartLimitBurst": "5",
"StartLimitInterval": "10000000", "StartupBlockIOWeight":
"18446744073709551615", "StartupCPUShares": "18446744073709551615",
"StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running",
"SyslogLevelPrefix": "yes", "SyslogPriority": "30",
"SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no",
"TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent":
"18446744073709551615", "TasksMax": "18446744073709551615",
"TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s",
"TimerSlackNSec": "50000", "Transient": "no", "Type": "forking",
"UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState":
"enabled", "WantedBy": "multi-user.target", "Wants": "system.slice",
"WatchdogTimestamp": "Fri 2017-09-22 08:05:55 PDT",
"WatchdogTimestampMonotonic": "64535083221", "WatchdogUSec": "0"}}
META: ran handlers
META: ran handlers
PLAY RECAP
***********************************************************************************************************************************
192.168.1.1 : ok=1 changed=1 unreachable=0 failed=0
192.168.1.2 : ok=1 changed=1 unreachable=0 failed=0
192.168.1.3 : ok=1 changed=1 unreachable=0 failed=0
Using /etc/ansible/ansible.cfg as config file
PLAYBOOK: shell_cmd.yml
**********************************************************************************************************************
1 plays in /tmp/tmpaQJuTG/shell_cmd.yml
PLAY [gluster_servers]
***********************************************************************************************************************
META: ran handlers
TASK [Run a command in the shell]
************************************************************************************************************
task path: /tmp/tmpaQJuTG/shell_cmd.yml:7
changed: [192.168.1.3] => (item=vdsm-tool configure --force) =>
{"changed": true, "cmd": "vdsm-tool configure --force", "delta":
"0:00:00.666325", "end": "2017-09-22 08:13:13.528752", "item":
"vdsm-tool configure --force", "rc": 0, "start": "2017-09-22
08:13:12.862427", "stderr": "", "stderr_lines": [], "stdout":
"\nChecking configuration status...\n\nabrt is already configured for
vdsm\nlvm is configured for vdsm\nlibvirt is already configured for
vdsm\nSUCCESS: ssl configured to true. No conflicts\nCurrent revision of
multipath.conf detected, preserving\n\nRunning
configure...\nReconfiguration of abrt is done.\nReconfiguration of
passwd is done.\nReconfiguration of libvirt is done.\n\nDone configuring
modules to VDSM.", "stdout_lines": ["", "Checking configuration
status...", "", "abrt is already configured for vdsm", "lvm is
configured for vdsm", "libvirt is already configured for vdsm",
"SUCCESS: ssl configured to true. No conflicts", "Current revision of
multipath.conf detected, preserving", "", "Running configure...",
"Reconfiguration of abrt is done.", "Reconfiguration of passwd is
done.", "Reconfiguration of libvirt is done.", "", "Done configuring
modules to VDSM."]}
changed: [192.168.1.2] => (item=vdsm-tool configure --force) =>
{"changed": true, "cmd": "vdsm-tool configure --force", "delta":
"0:00:02.812365", "end": "2017-09-22 15:12:16.320833", "item":
"vdsm-tool configure --force", "rc": 0, "start": "2017-09-22
15:12:13.508468", "stderr": "", "stderr_lines": [], "stdout":
"\nChecking configuration status...\n\nabrt is already configured for
vdsm\nlvm is configured for vdsm\nlibvirt is already configured for
vdsm\nSUCCESS: ssl configured to true. No conflicts\nCurrent revision of
multipath.conf detected, preserving\n\nRunning
configure...\nReconfiguration of abrt is done.\nReconfiguration of
passwd is done.\nReconfiguration of libvirt is done.\n\nDone configuring
modules to VDSM.", "stdout_lines": ["", "Checking configuration
status...", "", "abrt is already configured for vdsm", "lvm is
configured for vdsm", "libvirt is already configured for vdsm",
"SUCCESS: ssl configured to true. No conflicts", "Current revision of
multipath.conf detected, preserving", "", "Running configure...",
"Reconfiguration of abrt is done.", "Reconfiguration of passwd is
done.", "Reconfiguration of libvirt is done.", "", "Done configuring
modules to VDSM."]}
changed: [192.168.1.1] => (item=vdsm-tool configure --force) =>
{"changed": true, "cmd": "vdsm-tool configure --force", "delta":
"0:00:02.004199", "end": "2017-09-22 08:13:18.078810", "item":
"vdsm-tool configure --force", "rc": 0, "start": "2017-09-22
08:13:16.074611", "stderr": "", "stderr_lines": [], "stdout":
"\nChecking configuration status...\n\nabrt is already configured for
vdsm\nlvm is configured for vdsm\nlibvirt is already configured for
vdsm\nSUCCESS: ssl configured to true. No conflicts\nCurrent revision of
multipath.conf detected, preserving\n\nRunning
configure...\nReconfiguration of abrt is done.\nReconfiguration of
passwd is done.\nReconfiguration of libvirt is done.\n\nDone configuring
modules to VDSM.", "stdout_lines": ["", "Checking configuration
status...", "", "abrt is already configured for vdsm", "lvm is
configured for vdsm", "libvirt is already configured for vdsm",
"SUCCESS: ssl configured to true. No conflicts", "Current revision of
multipath.conf detected, preserving", "", "Running configure...",
"Reconfiguration of abrt is done.", "Reconfiguration of passwd is
done.", "Reconfiguration of libvirt is done.", "", "Done configuring
modules to VDSM."]}
META: ran handlers
META: ran handlers
PLAY RECAP
***********************************************************************************************************************************
192.168.1.1 : ok=1 changed=1 unreachable=0 failed=0
192.168.1.2 : ok=1 changed=1 unreachable=0 failed=0
192.168.1.3 : ok=1 changed=1 unreachable=0 failed=0
Using /etc/ansible/ansible.cfg as config file
PLAYBOOK: run-script.yml
*********************************************************************************************************************
1 plays in /tmp/tmpaQJuTG/run-script.yml
PLAY [gluster_servers]
***********************************************************************************************************************
META: ran handlers
TASK [Run a shell script]
********************************************************************************************************************
task path: /tmp/tmpaQJuTG/run-script.yml:7
fatal: [192.168.1.3]: FAILED! => {"failed": true, "msg": "The
conditional check 'result.rc != 0' failed. The error was: error while
evaluating conditional (result.rc != 0): 'dict object' has no attribute
'rc'"}
fatal: [192.168.1.2]: FAILED! => {"failed": true, "msg": "The
conditional check 'result.rc != 0' failed. The error was: error while
evaluating conditional (result.rc != 0): 'dict object' has no attribute
'rc'"}
fatal: [192.168.1.1]: FAILED! => {"failed": true, "msg": "The
conditional check 'result.rc != 0' failed. The error was: error while
evaluating conditional (result.rc != 0): 'dict object' has no attribute
'rc'"}
to retry, use: --limit @/tmp/tmpaQJuTG/run-script.retry
PLAY RECAP
***********************************************************************************************************************************
192.168.1.1 : ok=0 changed=0 unreachable=0 failed=1
192.168.1.2 : ok=0 changed=0 unreachable=0 failed=1
192.168.1.3 : ok=0 changed=0 unreachable=0 failed=1
You can view the generated configuration files inside /tmp/tmpaQJuTG
/tmp/tmpaQJuTG/run-script.yml is:
---
- hosts: gluster_servers
  remote_user: root
  gather_facts: no
  tasks:
    - name: Run a shell script
      script: "{{ item }}"
      register: result
      failed_when: result.rc != 0
      with_items: "{{ script }}"
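(Side note: the "'dict object' has no attribute 'rc'" failures above look
like failed_when being evaluated against a registered result that has no
rc key. A workaround I am only guessing at, not the official gdeploy fix,
would be to guard the check:

- name: Run a shell script
  script: "{{ item }}"
  register: result
  failed_when: result.rc is defined and result.rc != 0
  with_items: "{{ script }}"

I have not verified this against gdeploy upstream.)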
/tmp/gdeployConfig.conf is:
#gdeploy configuration generated by cockpit-gluster plugin
[hosts]
192.168.1.1
192.168.1.2
192.168.1.3
[script1]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h 192.168.1.1,192.168.1.2,192.168.1.3
[disktype]
raid6
[diskcount]
12
[stripesize]
256
[service1]
action=enable
service=chronyd
[service2]
action=restart
service=chronyd
[shell2]
action=execute
command=vdsm-tool configure --force
[script3]
action=execute
file=/usr/share/gdeploy/scripts/blacklist_all_disks.sh
ignore_script_errors=no
[pv1]
action=create
devices=sdb
ignore_pv_errors=no
[vg1]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no
[lv1]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=1005GB
poolmetadatasize=5GB
[lv2]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
size=100GB
lvtype=thick
[lv3]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=500GB
[lv4]
action=create
lvname=gluster_lv_vmstore
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/vmstore
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=500GB
[selinux]
yes
[service3]
action=restart
service=glusterd
slice_setup=yes
[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp,54322/tcp
services=glusterfs
[script2]
action=execute
file=/usr/share/gdeploy/scripts/disable-gluster-hooks.sh
[shell3]
action=execute
command=usermod -a -G gluster qemu
[volume1]
action=create
volname=engine
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal,features.shard-block-size
value=virt,36,36,30,on,off,enable,64MB
brick_dirs=192.168.1.1:/gluster_bricks/engine/engine,192.168.1.2:/gluster_bricks/engine/engine,192.168.1.3:/gluster_bricks/engine/engine
ignore_volume_errors=no
[volume2]
action=create
volname=data
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal,features.shard-block-size
value=virt,36,36,30,on,off,enable,64MB
brick_dirs=192.168.1.1:/gluster_bricks/data/data,192.168.1.2:/gluster_bricks/data/data,192.168.1.3:/gluster_bricks/data/data
ignore_volume_errors=no
[volume3]
action=create
volname=vmstore
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal,features.shard-block-size
value=virt,36,36,30,on,off,enable,64MB
brick_dirs=192.168.1.1:/gluster_bricks/vmstore/vmstore,192.168.1.2:/gluster_bricks/vmstore/vmstore,192.168.1.3:/gluster_bricks/vmstore/vmstore
ignore_volume_errors=no
7 years, 7 months
API Snapshot removal ending event
by Lionel Caignec
Hi,
For snap creation I use a loop like this to know when the task is finished:
> snap = snapshots_service.add(types.Snapshot(description=description,
> persist_memorystate=False))
> snap_service = snapshots_service.snapshot_service(snap.id)
> while snap.snapshot_status != types.SnapshotStatus.OK:
Is it possible to do the same thing for snap deletion? I did not find anything in the API doc.
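What I imagine doing, reusing snap_service from above (a minimal sketch,
assuming the SDK raises NotFoundError once the snapshot is gone; I have
not confirmed this is the supported way):

import time
import ovirtsdk4 as sdk

snap_service.remove()            # start the asynchronous removal
while True:
    time.sleep(5)
    try:
        snap_service.get()       # still present: deletion in progress
    except sdk.NotFoundError:
        break                    # not found any more: removal finished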
Thank you
--
Lionel
7 years, 7 months
Restoring from backup
by Tyson Landon
The final step in restoring a backup has you run engine-setup. When I try to run engine-setup I get "[ ERROR ] Failed to execute stage 'Setup validation': Hosted Engine setup detected, but Global Maintenance is not set." I am in Global Maintenance mode, however. If I exit Global Maintenance, the hosted engine reboots every few minutes and it fails liveliness checks. Is there a service I have to stop so that the setup won't think HA is running?
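For reference, this is how I set and checked maintenance mode on one of
the hosts (assuming these are still the right commands for my version):

hosted-engine --set-maintenance --mode=global
hosted-engine --vm-status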
7 years, 7 months
update to centos 7.4
by Nathanaël Blanchet
Hi all,
Now that CentOS 7.4 is available, is it recommended to update the nodes
(and the engine OS), given that oVirt 4.1 is officially supported on 7.3 or later?
--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
7 years, 7 months
SSLHandshakeException: Received fatal alert: certificate_expired
by Neil
Hi guys,
Please could someone assist: my cluster is down and I can't access my VMs
to switch some of them back on.
I'm seeing the following error in the engine.log. However, I've checked the
certs on my hosts (as some of the Google results said to check), and the
certs haven't expired...
2017-09-21 15:09:45,077 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-4) Command
GetCapabilitiesVDSCommand(HostName = node02.mydomain.za, HostId =
d2debdfe-76e7-40cf-a7fd-78a0f50f14d4, vds=Host[node02.mydomain.za])
execution failed. Exception: VDSNetworkException:
javax.net.ssl.SSLHandshakeException: Received fatal alert:
certificate_expired
2017-09-21 15:09:45,086 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-10) Command
GetCapabilitiesVDSCommand(HostName = node01.mydomain.za, HostId =
b108549c-1700-11e2-b936-9f5243b8ce13, vds=Host[node01.mydomain.za])
execution failed. Exception: VDSNetworkException:
javax.net.ssl.SSLHandshakeException: Received fatal alert:
certificate_expired
2017-09-21 15:09:48,173 ERROR
My engine and host info is below...
[root@engine01 ovirt-engine]# rpm -qa | grep -i ovirt
ovirt-engine-lib-3.4.0-1.el6.noarch
ovirt-engine-restapi-3.4.0-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-3.4.0-1.el6.noarch
ovirt-engine-3.4.0-1.el6.noarch
ovirt-engine-setup-plugin-websocket-proxy-3.4.0-1.el6.noarch
ovirt-host-deploy-java-1.2.0-1.el6.noarch
ovirt-engine-setup-3.4.0-1.el6.noarch
ovirt-host-deploy-1.2.0-1.el6.noarch
ovirt-engine-backend-3.4.0-1.el6.noarch
ovirt-image-uploader-3.4.0-1.el6.noarch
ovirt-engine-tools-3.4.0-1.el6.noarch
ovirt-engine-sdk-python-3.4.0.7-1.el6.noarch
ovirt-engine-webadmin-portal-3.4.0-1.el6.noarch
ovirt-engine-cli-3.4.0.5-1.el6.noarch
ovirt-engine-setup-base-3.4.0-1.el6.noarch
ovirt-iso-uploader-3.4.0-1.el6.noarch
ovirt-engine-userportal-3.4.0-1.el6.noarch
ovirt-log-collector-3.4.1-1.el6.noarch
ovirt-engine-websocket-proxy-3.4.0-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-3.4.0-1.el6.noarch
ovirt-engine-dbscripts-3.4.0-1.el6.noarch
[root@engine01 ovirt-engine]# cat /etc/redhat-release
CentOS release 6.5 (Final)
[root@node02 ~]# openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -enddate
-noout ; date
notAfter=May 27 08:36:17 2019 GMT
Thu Sep 21 15:18:22 SAST 2017
CentOS release 6.5 (Final)
[root@node02 ~]# rpm -qa | grep vdsm
vdsm-4.14.6-0.el6.x86_64
vdsm-python-4.14.6-0.el6.x86_64
vdsm-cli-4.14.6-0.el6.noarch
vdsm-xmlrpc-4.14.6-0.el6.noarch
vdsm-python-zombiereaper-4.14.6-0.el6.noarch
[root@node01 ~]# openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -enddate
-noout ; date
notAfter=Jun 13 16:09:41 2018 GMT
Thu Sep 21 15:18:52 SAST 2017
CentOS release 6.5 (Final)
[root@node01 ~]# rpm -qa | grep -i vdsm
vdsm-4.14.6-0.el6.x86_64
vdsm-xmlrpc-4.14.6-0.el6.noarch
vdsm-cli-4.14.6-0.el6.noarch
vdsm-python-zombiereaper-4.14.6-0.el6.noarch
vdsm-python-4.14.6-0.el6.x86_64
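Could the problem be a certificate on the engine side instead? This is
what I assume I should check there (paths guessed from a default engine
install, please correct me if they differ):

openssl x509 -in /etc/pki/ovirt-engine/ca.pem -enddate -noout
openssl x509 -in /etc/pki/ovirt-engine/certs/engine.cer -enddate -noout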
Please could I have some assistance, I'm rather desperate.
Thank you.
Regards.
Neil Wilson
7 years, 7 months