Use of virtual disks
by Langley, Robert
My first experience with this situation using oVirt.
I come from VMware and have also been using oVirt for several years. We've also paid for RHEV, but migration is held up at the moment; I am trying to prepare for that.
My problem is that it does not appear to be simple to reuse VM disks. In VMware it is: snapshot or not, in vSphere I can take the virtual disk file and use it for another VM when needed. It doesn't make sense to me in oVirt. I have another entry here about the issue that led to this need, where I was not able to delete snapshot files for the disks I was attempting to live migrate, and there was an issue... Now the empty snapshot files are preventing some VMs from starting. It seems I should be able to take the VM disk files, without the snapshots, and use them with another VM. But that does not appear to be possible from what I can tell in oVirt.
I desperately need to get one specific VM going. The other two are no worry: I was able to restore one of the affected VMs from backup, and the third is not important at all and can easily be re-created.
Is anyone experienced with taking VM disks from one VM and using them (without snapshots) with another VM? I could really use some sort of workaround.
Thanks if anyone can come up with a good answer that would help.
-Robert L.
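
If the underlying disk images are intact, the usual oVirt equivalent of vSphere's "reuse the disk file" is to detach the disk from one VM and attach it to another, either in the Admin Portal (VM > Disks > Remove with the permanent-removal checkbox unchecked, then Attach on the target VM; exact wording varies by version) or via the REST API. A minimal sketch, with placeholder engine URL, credentials, and IDs; the detach_only flag is from 4.x-era API documentation, so double-check it against your version, and note the disks must not be flagged illegal for this to work:

    ENGINE=https://engine.example.com/ovirt-engine/api
    AUTH='admin@internal:password'

    # Detach the disk from the old VM, keeping the image on storage:
    curl -s -k -u "$AUTH" -X DELETE \
      "$ENGINE/vms/OLD_VM_ID/diskattachments/DISK_ID?detach_only=true"

    # Attach the same disk to the replacement VM:
    curl -s -k -u "$AUTH" -X POST -H 'Content-Type: application/xml' \
      -d '<disk_attachment><bootable>true</bootable><interface>virtio</interface><active>true</active><disk id="DISK_ID"/></disk_attachment>' \
      "$ENGINE/vms/NEW_VM_ID/diskattachments"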
Multiple 'scsi' controllers with index '0'.
by spfma.tech@e.mail.fr
Hi,

I just wanted to increase the number of CPUs for a VM, and after validating I got the following error when I try to start it:

VM vm-test is down with error. Exit message: XML error: Multiple 'scsi' controllers with index '0'.

I am sure it is a bug, but for now, what can I do to remove or edit the conflicting device definitions? I need to be able to start this machine.

4.2.0.2-1.el7.centos (as I still don't manage to update the hosted engine to something newer)

Regards
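
The duplicated controller usually shows up as two rows in the engine database's vm_device table. A diagnostic sketch, run on the engine host; the table and column names are from 4.x-era schemas and may differ slightly between versions, so back up the database before changing anything:

    sudo -u postgres psql -d engine -c "
      SELECT device_id, type, device, address
        FROM vm_device
       WHERE vm_id = (SELECT vm_guid FROM vm_static WHERE vm_name = 'vm-test')
         AND type = 'controller';"

If two rows claim the same index in their address field, removing the stale row (or detaching the corresponding device) should let the VM start again.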
Disk image upload pausing
by spfma.tech@e.mail.fr
Hi,

I am trying to build a new VM based on a VHD image coming from a Windows machine. I converted the image to raw, and I am now trying to import it into the engine.

After setting up the CA in my browser, the import process starts but stops after a while with "paused by system" status. I can resume it, but it pauses again without transferring more.

The engine logs don't explain much; I see a line for the start and the next one for the pause.

My network seems to work correctly, and I have plenty of space in the storage domain. What can cause the process to pause?

Regards
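
Uploads that repeatedly go "paused by system" are often a problem between the browser, the ovirt-imageio proxy on the engine, and the imageio daemon on the host, rather than the network itself. A diagnostic sketch with placeholder hostnames; the service and log names below are from 4.1/4.2-era installs:

    # On the engine:
    systemctl status ovirt-imageio-proxy
    tail -f /var/log/ovirt-imageio-proxy/image-proxy.log

    # On the host performing the transfer:
    systemctl status ovirt-imageio-daemon
    tail -f /var/log/ovirt-imageio-daemon/daemon.log

    # Check that the proxy's TLS endpoint is reachable and signed by the CA
    # you imported into the browser (54323 is the proxy's default port):
    openssl s_client -connect engine.example.com:54323 -CAfile ca.pem </dev/null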
Reinitializing lockspace
by Jamie Lawrence
Hello,
I have a sanlock problem. I don't fully understand the logs, but from what I can gather, messages like this mean it ain't working.
2018-02-16 14:51:46 22123 [15036]: s1 renewal error -107 delta_length 0 last_success 22046
2018-02-16 14:51:47 22124 [15036]: 53977885 aio collect RD 0x7fe5040008c0:0x7fe5040008d0:0x7fe518922000 result -107:0 match res
2018-02-16 14:51:47 22124 [15036]: s1 delta_renew read rv -107 offset 0 /rhev/data-center/mnt/glusterSD/sc5-gluster-10g-1.squaretrade.com:ovirt__images/53977885-0887-48d0-a02c-8d9e3faec93c/dom_md/ids
I attempted `hosted-engine --reinitialize-lockspace --force`, which didn't appear to do anything, but who knows.
I downed everything and tried `sanlock direct init -s ....`, which caused sanlock to dump core.
At this point the only thing I can think of to do is down everything, whack and manually recreate the lease files and try again. I'm worried that that will lose something that the setup did or will otherwise destroy the installation. It looks like this has been done by others[1], but the references I can find are a bit old, so I'm unsure if that is still a valid approach.
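
For reference, a sketch of that manual approach, under the assumption that every host and the HA services are fully stopped first. The lockspace name and path are taken from the log lines above, and sanlock's direct-init syntax is lockspace_name:host_id:path:offset with host_id 0 for initialization; verify the paths on your own system before running anything destructive:

    # On every host:
    systemctl stop ovirt-ha-agent ovirt-ha-broker

    IDS=/rhev/data-center/mnt/glusterSD/sc5-gluster-10g-1.squaretrade.com:ovirt__images/53977885-0887-48d0-a02c-8d9e3faec93c/dom_md/ids

    # Zero the ids file, then write a fresh lockspace into it.
    # Note: colons inside the path may need escaping (\:) for sanlock's -s parser.
    dd if=/dev/zero of="$IDS" bs=1M count=1 conv=notrunc
    sanlock direct init -s 53977885-0887-48d0-a02c-8d9e3faec93c:0:$IDS:0

    systemctl start ovirt-ha-broker ovirt-ha-agent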
So, questions:
- Will that work?
- Is there something I should do instead of that?
Thanks,
-j
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1116469
Fwd: Fwd: why host is not capable to run HE?
by Artem Tambovskiy
Thanks Martin.
As you suggested, I updated hosted-engine.conf with the correct host_id values and restarted the ovirt-ha-agent services on both hosts, and now I run into a problem with status "unknown-stale-data" :(
And the second host still doesn't look capable of running HE.
Should I stop the HE VM, bring down the ovirt-ha-agents, reinitialize the lockspace, and start the ovirt-ha-agents again?
Regards,
Artem
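
A concrete sketch of the fix discussed in the quoted thread below, assuming the vds_spm_id values from the psql output quoted there (ovirt1.local should carry host_id=2, ovirt2.local host_id=1); verify the file by hand rather than trusting the sed blindly:

    # On ovirt1.local:
    sed -i 's/^host_id=.*/host_id=2/' /etc/ovirt-hosted-engine/hosted-engine.conf
    # On ovirt2.local:
    sed -i 's/^host_id=.*/host_id=1/' /etc/ovirt-hosted-engine/hosted-engine.conf
    # Then, on both hosts:
    systemctl restart ovirt-ha-broker ovirt-ha-agent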
On Mon, Feb 19, 2018 at 6:45 PM, Martin Sivak <msivak(a)redhat.com> wrote:
> Hi Artem,
>
> just a restart of ovirt-ha-agent services should be enough.
>
> Best regards
>
> Martin Sivak
>
> On Mon, Feb 19, 2018 at 4:40 PM, Artem Tambovskiy
> <artem.tambovskiy(a)gmail.com> wrote:
> > Ok, understood.
> > Once I set the correct host_id on both hosts, how do I force the changes
> > to take effect? With minimal downtime? Or do I need to reboot both hosts
> > anyway?
> >
> > Regards,
> > Artem
> >
> > On 19 Feb 2018 at 18:18, "Simone Tiraboschi"
> > <stirabos(a)redhat.com> wrote:
> >
> >>
> >>
> >> On Mon, Feb 19, 2018 at 4:12 PM, Artem Tambovskiy
> >> <artem.tambovskiy(a)gmail.com> wrote:
> >>>
> >>>
> >>> Thanks a lot, Simone!
> >>>
> >>> This is clearly shows a problem:
> >>>
> >>> [root@ov-eng ovirt-engine]# sudo -u postgres psql -d engine -c 'select
> >>> vds_name, vds_spm_id from vds'
> >>> vds_name | vds_spm_id
> >>> -----------------+------------
> >>> ovirt1.local | 2
> >>> ovirt2.local | 1
> >>> (2 rows)
> >>>
> >>> While hosted-engine.conf on ovirt1.local has host_id=1, and
> >>> ovirt2.local has host_id=2. So the values are exactly swapped.
> >>> So how do I get this fixed in a simple way? Update the engine DB?
> >>
> >>
> >> I'd suggest manually fixing /etc/ovirt-hosted-engine/hosted-engine.conf
> >> on both hosts.
> >>
> >>>
> >>>
> >>> Regards,
> >>> Artem
> >>>
> >>> On Mon, Feb 19, 2018 at 5:37 PM, Simone Tiraboschi <
> stirabos(a)redhat.com>
> >>> wrote:
> >>>>
> >>>>
> >>>>
> >>>> On Mon, Feb 19, 2018 at 12:13 PM, Artem Tambovskiy
> >>>> <artem.tambovskiy(a)gmail.com> wrote:
> >>>>>
> >>>>> Hello,
> >>>>>
> >>>>> Last weekend my cluster suffered from a massive power outage due to
> >>>>> human error.
> >>>>> I'm using an SHE setup with Gluster. I managed to bring the cluster up
> >>>>> quickly, but once again I have a problem with a duplicated host_id
> >>>>> (https://bugzilla.redhat.com/show_bug.cgi?id=1543988) on the second
> >>>>> host, and because of this the second host is not capable of running HE.
> >>>>>
> >>>>> I manually updated hosted-engine.conf with the correct host_id and
> >>>>> restarted the agent & broker - no effect. Then I rebooted the host
> >>>>> itself - still no changes. How do I fix this issue?
> >>>>
> >>>>
> >>>> I'd suggest to run this command on the engine VM:
> >>>> sudo -u postgres scl enable rh-postgresql95 -- psql -d engine -c
> >>>> 'select vds_name, vds_spm_id from vds'
> >>>> (just sudo -u postgres psql -d engine -c 'select vds_name, vds_spm_id
> >>>> from vds' if still on 4.1) and check
> >>>> /etc/ovirt-hosted-engine/hosted-engine.conf on all the involved hosts.
> >>>> Maybe you also have a leftover configuration file on an undeployed
> >>>> host.
> >>>>
> >>>> When you find a conflict, you should manually bring down sanlock.
> >>>> If in doubt, a reboot of both hosts will solve it for sure.
> >>>>
> >>>>
> >>>>>
> >>>>>
> >>>>> Regards,
> >>>>> Artem
> >>>>>
cloud-init issue /IP address lost
by Shamam Amir
Hi All,
I am using a CentOS template which I imported from ovirt-image-repository. Whenever I make a VM from this template, configure its Initial Run section, and run it for the first time, everything goes well. But as soon as I reboot the server, the VM loses its network configuration (IP, gateway, DNS) and I have to enter these parameters again manually. In addition, when I shut down the VM and change its name, the same problem happens again. This actually causes quite long downtime to configure the parameters again.
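
A possible workaround, assuming the template's cloud-init reapplies (or wipes) network configuration on later boots, which is my reading of the symptom rather than something confirmed here: after the first successful Initial Run, tell cloud-init inside the guest to leave the network alone.

    # Inside the guest, after first provisioning:
    cat > /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg <<'EOF'
    network: {config: disabled}
    EOF
    # Or, on newer cloud-init versions, disable it entirely on later boots:
    touch /etc/cloud/cloud-init.disabled

With that in place, the IP/gateway/DNS written during the initial run should persist in the guest's own network scripts across reboots and renames.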
Your help is highly appreciated.
Best Regards
CPU queues on ovirt hosts.
by Endre Karlson
Hi guys, is there a way to bring the CPU run queue down when running a Java app on an oVirt host?

We have an IdM app where the CPU queue is constantly 2-3 while we are working with the configuration, but on ESX on a similar host it is much faster.
Spice Client Connection Issues Using aSpice
by Jeremy Tourville
Hello,

I am having trouble connecting to my guest VM (Kali Linux), which is running SPICE. My engine is running version 4.2.1.7-1.el7.centos. I am using oVirt Node as my host, running version 4.2.1.1.

I have taken the following steps to try and get everything running properly.

1. Download the root CA certificate: https://ovirtengine.lan/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA
2. Edit the VM and define the graphical console entries. Video type is set to QXL, graphics protocol is SPICE, USB support is enabled.
3. Install the guest agent in Debian per the instructions here: https://www.ovirt.org/documentation/how-to/guest-agent/install-the-guest-agent-in-debian/ (it is my understanding that installing the guest agent will also install the VirtIO device drivers).
4. Install the spice-vdagent per the instructions here: https://www.ovirt.org/documentation/how-to/guest-agent/install-the-spice-guest-agent/
5. On the aSpice client, import the CA certificate from step 1 above. I defined the connection using the IP of my Node and TLS port 5901.

To troubleshoot my connection issues I confirmed the port being used to listen:

virsh # domdisplay Kali
spice://172.30.42.12?tls-port=5901

I see the following when attempting to connect:

tail -f /var/log/libvirt/qemu/Kali.log
140400191081600:error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error:s3_pkt.c:1493:SSL alert number 80
((null):27595): Spice-Warning **: reds_stream.c:379:reds_stream_ssl_accept: SSL_accept failed, error=1

I came across some documentation that states, in the caveat section, "Certificate of spice SSL should be separate certificate."
https://www.ovirt.org/develop/release-management/features/infra/pki/

Is this still the case for version 4? The document references versions 3.2 and 3.3. If so, how do I generate a new certificate for use with SPICE? Please let me know if you require further info to troubleshoot; I am happy to provide it. Many thanks in advance.
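
To narrow down whether the failure is on the client or in the certificate chain, it may help to hit the TLS port directly. A sketch using the IP/port from the virsh output above and the CA file from step 1; for reference, the host's SPICE certificates normally live under /etc/pki/vdsm/libvirt-spice/ on the node:

    openssl s_client -connect 172.30.42.12:5901 -CAfile ca.pem </dev/null

    # "Verify return code: 0 (ok)" means the host cert chains to that CA, and
    # the problem is more likely on the client side; a handshake failure here
    # points at the certificate setup on the host instead.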
4.2 aaa LDAP setup issue
by Jamie Lawrence
Hello,
I'm bringing up a new 4.2 cluster and would like to use LDAP auth. Our LDAP servers are fine and function normally for a number of other services, but I can't get this working.
Our LDAP setup requires startTLS and a login. That last bit seems to be where the trouble is. After ovirt-engine-extension-aaa-ldap-setup asks for the cert and I pass it the path to the same cert used via nslcd/PAM for logging in to the host, it replies:
[ INFO ] Connecting to LDAP using 'ldap://x.squaretrade.com:389'
[ INFO ] Executing startTLS
[WARNING] Cannot connect using 'ldap://x.squaretrade.com:389': {'info': 'authentication required', 'desc': 'Server is unwilling to perform'}
[ ERROR ] Cannot connect using any of available options
"Unwilling to perform" makes me think -aaa-ldap-setup is trying something the backend doesn't support, but I'm having trouble guessing what that could be since the tool hasn't gathered sufficient information to connect yet - it asks for a DN/pass later in the script. And the log isn't much more forthcoming.
I double-checked the cert with openssl; it is a valid, PEM-encoded cert.
Before I head into the code, has anyone seen this?
Thanks,
-j
- - - - snip - - - -
Relevant log details:
2018-02-08 15:15:08,625-0800 DEBUG otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._getURLs:281 URLs: ['ldap://x.squaretrade.com:389']
2018-02-08 15:15:08,626-0800 INFO otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._connectLDAP:391 Connecting to LDAP using 'ldap://x.squaretrade.com:389'
2018-02-08 15:15:08,627-0800 INFO otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._connectLDAP:442 Executing startTLS
2018-02-08 15:15:08,640-0800 DEBUG otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._connectLDAP:445 Perform search
2018-02-08 15:15:08,641-0800 DEBUG otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._connectLDAP:459 Exception
Traceback (most recent call last):
File "/usr/share/ovirt-engine-extension-aaa-ldap/setup/bin/../plugins/ovirt-engine-extension-aaa-ldap/ldap/common.py", line 451, in _connectLDAP
timeout=60,
File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 555, in search_st
return self.search_ext_s(base,scope,filterstr,attrlist,attrsonly,None,None,timeout)
File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 546, in search_ext_s
return self.result(msgid,all=1,timeout=timeout)[1]
File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 458, in result
resp_type, resp_data, resp_msgid = self.result2(msgid,all,timeout)
File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 462, in result2
resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all,timeout)
File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 469, in result3
resp_ctrl_classes=resp_ctrl_classes
File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 476, in result4
ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop)
File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 99, in _ldap_call
result = func(*args,**kwargs)
UNWILLING_TO_PERFORM: {'info': 'authentication required', 'desc': 'Server is unwilling to perform'}
2018-02-08 15:15:08,642-0800 WARNING otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._connectLDAP:463 Cannot connect using 'ldap://x.squaretrade.com:389': {'info': 'authentication required', 'desc': 'Server is unwilling to perform'}
2018-02-08 15:15:08,643-0800 ERROR otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._customization_late:787 Cannot connect using any of available options
2018-02-08 15:15:08,644-0800 DEBUG otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._customization_late:788 Exception
Traceback (most recent call last):
File "/usr/share/ovirt-engine-extension-aaa-ldap/setup/bin/../plugins/ovirt-engine-extension-aaa-ldap/ldap/common.py", line 782, in _customization_late
insecure=insecure,
File "/usr/share/ovirt-engine-extension-aaa-ldap/setup/bin/../plugins/ovirt-engine-extension-aaa-ldap/ldap/common.py", line 468, in _connectLDAP
_('Cannot connect using any of available options')
SoftRuntimeError: Cannot connect using any of available options
oVirt 4.2 with Ceph
by Christoph Köhler
Hello,
does someone have experience with CephFS as a VM storage domain? I am
thinking about it, but have not found any hints so far...
Thanks for any pointers...
--
Christoph Köhler
Leibniz Universität IT Services
Schloßwender Straße 5, 30159 Hannover
Tel.: +49 511 762 794721
koehler(a)luis.uni-hannover.de
http://www.luis.uni-hannover.de/scientific_computing.html
Console button greyed out (4.2)
by nicolas@devels.es
Hi,

We upgraded one of our infrastructures to 4.2.0 recently, and since then some of our machines have the "Console" button greyed out in the Admin UI, as if it were disabled.

I changed their compatibility version to 4.2, but with no luck; they are still disabled.

Is there a way to know why that is, and how to solve it?

I'm attaching a screenshot.

Thanks.
[Attachment: Captura de pantalla de 2018-02-15 13-47-13.png]
Cannot delete auto-generated snapshot
by Langley, Robert
I was moving some virtual disks from one storage server to another. Now I have a couple of servers that have the auto-generated snapshot, without disks, and I cannot delete them. The VMs will not start, and there is the complaint that the disks are illegal.

Any help would be appreciated. I'm going to bed for now, but will try to wake up earlier.
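
When a failed live storage migration leaves disks flagged illegal and an orphaned auto-generated snapshot behind, the engine ships a DB utility that can list and unlock such entities. A sketch, run on the engine host after taking a database backup; flags vary a little between versions, so check ./unlock_entity.sh -h first:

    cd /usr/share/ovirt-engine/setup/dbutils
    ./unlock_entity.sh -q -t all                 # list locked VMs/templates/disks/snapshots
    ./unlock_entity.sh -t snapshot <snapshot_id> # unlock one snapshot by ID

The <snapshot_id> placeholder is whatever the query step reports for the stuck auto-generated snapshot.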
4.2 VM Portal -Create- VM section issue
by Vrgotic, Marko
Dear oVirt,

After setting all parameters for a new VM and clicking the "Create" button, no progress status or acknowledgement of the action is shown in the web UI.
In addition, when closing the add-VM section, I am asked whether I am sure, due to the changes made.

Is this expected behaviour? Can something be done about it?

Kindly awaiting your reply.

--
Met vriendelijke groet / Best regards,
Marko Vrgotic
System Engineer/Customer Care
ActiveVideo
Re: [ovirt-users] 2 Master on Storage Pool [Event Error]
by michael pagdanganan
Sorry, I can't attach the log file; it's too big.
VDSM.log for node 1
2766', 'lastCheck': '4.9', 'valid': True}} from=internal, task_id=645d456e-f59f-4b1c-9e97-fc82d19a36b1 (api:52)
2018-02-20 14:38:47,222+0800 INFO  (jsonrpc/3) [vdsm.api] START repoStats(options=None) from=::ffff:10.10.43.1,60554, flow_id=3b2e802e, task_id=28c795d1-1639-4e68-a9fe-a00006be268f (api:46)
2018-02-20 14:38:47,222+0800 INFO  (jsonrpc/3) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000119109', 'lastCheck': '6.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000195263', 'lastCheck': '6.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.00022988', 'lastCheck': '6.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000270206', 'lastCheck': '5.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000272766', 'lastCheck': '6.0', 'valid': True}} from=::ffff:10.10.43.1,60554, flow_id=3b2e802e, task_id=28c795d1-1639-4e68-a9fe-a00006be268f (api:52)
2018-02-20 14:38:47,226+0800 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.00 seconds (__init__:539)
2018-02-20 14:38:55,252+0800 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539)
2018-02-20 14:38:58,566+0800 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539)
2018-02-20 14:39:01,093+0800 INFO  (periodic/1) [vdsm.api] START repoStats(options=None) from=internal, task_id=04f55ded-5841-44ae-a376-4f6e723e4b10 (api:46)
2018-02-20 14:39:01,093+0800 INFO  (periodic/1) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.3888e-05', 'lastCheck': '9.9', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000197535', 'lastCheck': '9.8', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000191456', 'lastCheck': '9.8', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.00026365', 'lastCheck': '9.8', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000243658', 'lastCheck': '0.0', 'valid': True}} from=internal, task_id=04f55ded-5841-44ae-a376-4f6e723e4b10 (api:52)
2018-02-20 14:39:02,295+0800 INFO  (jsonrpc/6) [vdsm.api] START repoStats(options=None) from=::ffff:10.10.43.1,60554, flow_id=13149d6e, task_id=5f19d76a-d343-4dc4-ad09-024ec27f7443 (api:46)
2018-02-20 14:39:02,295+0800 INFO  (jsonrpc/6) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.4759e-05', 'lastCheck': '1.1', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000183158', 'lastCheck': '1.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000222609', 'lastCheck': '1.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000211253', 'lastCheck': '1.0', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000243658', 'lastCheck': '1.1', 'valid': True}} from=::ffff:10.10.43.1,60554, flow_id=13149d6e, task_id=5f19d76a-d343-4dc4-ad09-024ec27f7443 (api:52)
2018-02-20 14:39:02,300+0800 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:539)
2018-02-20 14:39:10,270+0800 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539)
2018-02-20 14:39:13,631+0800 INFO  (jsonrpc/0) [vdsm.api] START connectStorageServer(domType=1, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id': u'5e4c94f1-f3b9-4fbd-a6c7-e732d0fe3123', u'connection': u'dev2node1.lares.com.ph:/run/media/root/Slave1Data/dataNode1', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'protocol_version': u'auto', u'password': '********', u'port': u''}], options=None) from=::ffff:10.10.43.1,60554, flow_id=15b57417, task_id=bfce1e70-ef4a-4e13-aaaa-7a66aaf44429 (api:46)
2018-02-20 14:39:13,633+0800 INFO  (jsonrpc/0) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'status': 0, 'id': u'5e4c94f1-f3b9-4fbd-a6c7-e732d0fe3123'}]} from=::ffff:10.10.43.1,60554, flow_id=15b57417, task_id=bfce1e70-ef4a-4e13-aaaa-7a66aaf44429 (api:52)
2018-02-20 14:39:13,634+0800 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call StoragePool.connectStorageServer succeeded in 0.00 seconds (__init__:539)
2018-02-20 14:39:13,806+0800 INFO  (jsonrpc/7) [vdsm.api] START connectStoragePool(spUUID=u'5a865884-0366-0330-02b8-0000000002d4', hostID=1, msdUUID=u'f3e372e3-1251-4195-a4b9-1027e40059df', masterVersion=65, domainsMap={u'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': u'active', u'4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': u'active', u'225e1975-8121-4370-b317-86e964ae326f': u'attached', u'f3e372e3-1251-4195-a4b9-1027e40059df': u'active', u'65ca2e2d-b472-4bee-85b4-09a161464b20': u'active', u'42e591b7-f86c-4b67-a3d2-40cc007f7662': u'active'}, options=None) from=::ffff:10.10.43.1,60554, flow_id=15b57417, task_id=927a5d9a-4304-4776-b5c9-22ba5d0cb853 (api:46)
2018-02-20 14:39:13,807+0800 INFO  (jsonrpc/7) [storage.StoragePoolMemoryBackend] new storage pool master version 65 and domains map {u'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': u'Active', u'4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': u'Active', u'225e1975-8121-4370-b317-86e964ae326f': u'Attached', u'f3e372e3-1251-4195-a4b9-1027e40059df': u'Active', u'65ca2e2d-b472-4bee-85b4-09a161464b20': u'Active', u'42e591b7-f86c-4b67-a3d2-40cc007f7662': u'Active'} (spbackends:450)
VDSM.log for node 2

2018-02-20 14:41:14,598+0800 INFO  (periodic/3) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000252423', 'lastCheck': '6.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '7.5567e-05', 'lastCheck': '6.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '8.2433e-05', 'lastCheck': '6.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000237247', 'lastCheck': '5.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '8.4376e-05', 'lastCheck': '6.0', 'valid': True}} from=internal, task_id=6213712b-9903-4db8-9836-3baf85cd63e4 (api:52)
2018-02-20 14:41:18,074+0800 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539)
2018-02-20 14:41:24,333+0800 INFO  (jsonrpc/4) [vdsm.api] START repoStats(options=None) from=::ffff:10.10.43.1,56540, flow_id=73f86113, task_id=e8231ecb-3543-4d8f-af54-4cf2b06ee98a (api:46)
2018-02-20 14:41:24,334+0800 INFO  (jsonrpc/4) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000172953', 'lastCheck': '5.7', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '7.701e-05', 'lastCheck': '5.7', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000117677', 'lastCheck': '5.7', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000185146', 'lastCheck': '5.7', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.1467e-05', 'lastCheck': '5.8', 'valid': True}} from=::ffff:10.10.43.1,56540, flow_id=73f86113, task_id=e8231ecb-3543-4d8f-af54-4cf2b06ee98a (api:52)
2018-02-20 14:41:24,338+0800 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.00 seconds (__init__:539)
2018-02-20 14:41:27,087+0800 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539)
2018-02-20 14:41:29,607+0800 INFO  (periodic/1) [vdsm.api] START repoStats(options=None) from=internal, task_id=f14b7aab-b64a-4903-9368-d665e39b49d1 (api:46)
2018-02-20 14:41:29,608+0800 INFO  (periodic/1) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000246237', 'lastCheck': '1.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '8.8773e-05', 'lastCheck': '1.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.7145e-05', 'lastCheck': '1.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000218729', 'lastCheck': '0.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.1035e-05', 'lastCheck': '1.0', 'valid': True}} from=internal, task_id=f14b7aab-b64a-4903-9368-d665e39b49d1 (api:52)
2018-02-20 14:41:33,079+0800 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539)
2018-02-20 14:41:40,456+0800 INFO  (jsonrpc/2) [vdsm.api] START repoStats(options=None) from=::ffff:10.10.43.1,56540, flow_id=3be8150c, task_id=1b0c86a8-6fd8-4882-a742-fbd56ccb4037 (api:46)
2018-02-20 14:41:40,457+0800 INFO  (jsonrpc/2) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000174027', 'lastCheck': '1.8', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.00011454', 'lastCheck': '1.8', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000145955', 'lastCheck': '1.8', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000209871', 'lastCheck': '1.8', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.5744e-05', 'lastCheck': '1.9', 'valid': True}} from=::ffff:10.10.43.1,56540, flow_id=3be8150c, task_id=1b0c86a8-6fd8-4882-a742-fbd56ccb4037 (api:52)
2018-02-20 14:41:40,461+0800 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:539)
2018-02-20 14:41:42,106+0800 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539)
2018-02-20 14:41:44,622+0800 INFO  (periodic/3) [vdsm.api] START repoStats(options=None) from=internal, task_id=20e0b29b-d3cd-4e44-b92c-213f9c984ab2 (api:46)
2018-02-20 14:41:44,622+0800 INFO  (periodic/3) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000174027', 'lastCheck': '6.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.00011454', 'lastCheck': '6.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000145955', 'lastCheck': '6.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000209871', 'lastCheck': '5.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.5744e-05', 'lastCheck': '6.0', 'valid': True}} from=internal, task_id=20e0b29b-d3cd-4e44-b92c-213f9c984ab2 (api:52)
2018-02-20 14:41:49,083+0800 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539)
Engine log
skId '404ccecc-aa7f-45ea-89e4-726956269bc9' task status 'finished'
2018-02-20 14:42:21,966+08 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-7-thread-38) [29528f9] spmStart polling ended, spm status: SPM
2018-02-20 14:42:21,967+08 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-7-thread-38) [29528f9] START, HSMClearTaskVDSCommand(HostName = Node1, HSMTaskGuidBaseVDSCommandParameters:{runAsync='true', hostId='7dee35bb-8c97-4f6a-b6cd-abc4258540e4', taskId='404ccecc-aa7f-45ea-89e4-726956269bc9'}), log id: 71688f70
2018-02-20 14:42:22,922+08 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-7-thread-38) [29528f9] FINISH, HSMClearTaskVDSCommand, log id: 71688f70
2018-02-20 14:42:22,923+08 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-7-thread-38) [29528f9] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@78332453, log id: 3ea35d5
2018-02-20 14:42:22,935+08 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7-thread-38) [29528f9] Initialize Irs proxy from vds: dev2node1.lares.com.ph
2018-02-20 14:42:22,951+08 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-38) [29528f9] EVENT_ID: IRS_HOSTED_ON_VDS(204), Correlation ID: null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Storage Pool Manager runs on Host Node1 (Address: dev2node1.lares.com.ph).
2018-02-20 14:42:22,952+08 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7-thread-38) [29528f9] -- executeIrsBrokerCommand: Attempting on storage pool '5a865884-0366-0330-02b8-0000000002d4'
2018-02-20 14:42:22,952+08 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7-thread-38) [29528f9] START, HSMGetAllTasksInfoVDSCommand(HostName = Node1, VdsIdVDSCommandParametersBase:{runAsync='true', hostId='7dee35bb-8c97-4f6a-b6cd-abc4258540e4'}), log id: 1bdbea9d
2018-02-20 14:42:22,955+08 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7-thread-7) [29528f9] START, SPMGetAllTasksInfoVDSCommand( IrsBaseVDSCommandParameters:{runAsync='true', storagePoolId='5a865884-0366-0330-02b8-0000000002d4', ignoreFailoverLimit='false'}), log id: 5c2422d6
2018-02-20 14:42:23,956+08 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7-thread-38) [29528f9] FINISH, HSMGetAllTasksInfoVDSCommand, return: [], log id: 1bdbea9d
2018-02-20 14:42:23,956+08 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7-thread-38) [29528f9] FINISH, SPMGetAllTasksInfoVDSCommand, return: [], log id: 58cbe7b7
2018-02-20 14:42:23,956+08 INFO  [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (org.ovirt.thread.pool-7-thread-38) [29528f9] Discovered no tasks on Storage Pool 'UnsecuredEnv'
2018-02-20 14:42:24,936+08 INFO  [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to dev2node1.lares.com.ph/10.10.43.2
2018-02-20 14:42:27,012+08 WARN  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7-thread-43) [] Master domain is not in sync between DB and VDSM. Domain Node1Container marked as master in DB and not in the storage
2018-02-20 14:42:27,026+08 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-43) [] EVENT_ID: SYSTEM_MASTER_DOMAIN_NOT_IN_SYNC(990), Correlation ID: null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Sync Error on Master Domain between Host Node1 and oVirt Engine. Domain: Node1Container is marked as Master in oVirt Engine database but not on the Storage side. Please consult with Support on how to fix this issue.
2018-02-20 14:42:27,103+08 INFO  [org.ovirt.engine.core.bll.storage.pool.ReconstructMasterDomainCommand] (org.ovirt.thread.pool-7-thread-43) [3e5965ca] Running command: ReconstructMasterDomainCommand internal: true. Entities affected :  ID: f3e372e3-1251-4195-a4b9-1027e40059df Type: Storage
2018-02-20 14:42:27,137+08 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.ResetIrsVDSCommand] (org.ovirt.thread.pool-7-thread-43) [3e5965ca] START, ResetIrsVDSCommand( ResetIrsVDSCommandParameters:{runAsync='true', storagePoolId='5a865884-0366-0330-02b8-0000000002d4', ignoreFailoverLimit='false', vdsId='7dee35bb-8c97-4f6a-b6cd-abc4258540e4', ignoreStopFailed='true'}), log id: 3e0a239d
2018-02-20 14:42:27,140+08 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (org.ovirt.thread.pool-7-thread-43) [3e5965ca] START, SpmStopVDSCommand(HostName = Node1, SpmStopVDSCommandParameters:{runAsync='true', hostId='7dee35bb-8c97-4f6a-b6cd-abc4258540e4', storagePoolId='5a865884-0366-0330-02b8-0000000002d4'}), log id: 7c67bf06
2018-02-20 14:42:28,144+08 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (org.ovirt.thread.pool-7-thread-43) [3e5965ca] SpmStopVDSCommand::Stopping SPM on vds 'Node1', pool id '5a865884-0366-0330-02b8-0000000002d4'
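
The warnings above say the engine DB and the storage disagree about which domain is master. One quick check is to read the master role and version straight from the domain metadata on storage and compare them with what the engine believes. A diagnostic sketch, assuming file-based (NFS/Gluster) domains mounted under the standard /rhev path; SDUUID, ROLE, and MASTER_VERSION are standard keys in file-domain metadata:

    # On a host with the storage domains mounted:
    for md in /rhev/data-center/mnt/*/*/dom_md/metadata; do
        echo "== $md"
        grep -E '^(SDUUID|ROLE|MASTER_VERSION)=' "$md"
    done

Exactly one domain should report ROLE=Master, with a MASTER_VERSION matching the engine's masterVersion (65 in the connectStoragePool call in the node 1 log above).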
On Tuesday, February 20, 2018 2:33 PM, michael pagdanganan <mhke_aj5566@yahoo.com> wrote:

Thanks for the quick response; see attachment.

On Tuesday, February 20, 2018 2:10 PM, Eyal Shenitzky <eshenitz(a)redhat.com> wrote:

Hi,
Can you please attach full Engine and VDSM logs?
Thanks,

On Tue, Feb 20, 2018 at 2:59 AM, michael pagdanganan <mhke_aj5566(a)yahoo.com> wrote:

My storage pool has 2 master domains (Stored2 on Node2, Node1Container on Node1), and my old master domain (DATANd01 on Node1) hung on "Preparing for Maintenance". When I tried to activate the old master domain (DATANd01 on Node1), all storage domains went down and up, and the master kept rotating.

oVirt version: oVirt Engine 4.1.9.1.el7.centos

Event error: Sync Error on Master Domain between Host Node2 and oVirt Engine. Domain Stored2 is marked as master in the oVirt engine database but not on the storage side. Please consult with support.

VDSM Node2 command ConnectStoragePoolVDS failed: Wrong Master domain or its version: u'SD=f3e372e3-1251-4195-a4b9-1027e40059df, pool=5a865884-0366-0330-02b8-0000
VDSM Node2 command HSMGetAllTasksStatusesVDS failed: Not SPM: ()
Failed to deactivate Storage Domain DATANd01 (Data Center UnsecuredEnv)

Here are the logs from the engine:

--------------------------------------------------------------------------------
[root@dev2engine ~]# tail /var/log/messages
Feb 20 07:01:01 dev2engine systemd: Starting Session 20 of user root.
Feb 20 07:01:01 dev2engine systemd: Removed slice User Slice of root.
Feb 20 07:01:01 dev2engine systemd: Stopping User Slice of root.
Feb 20 07:58:52 dev2engine systemd: Created slice User Slice of root.
Feb 20 07:58:52 dev2engine systemd: Starting User Slice of root.
Feb 20 07:58:52 dev2engine systemd-logind: New session 21 of user root.
Feb 20 07:58:52 dev2engine systemd: Started Session 21 of user root.
Feb 20 07:58:52 dev2engine systemd: Starting Session 21 of user root.
Feb 20 08:01:01 dev2engine systemd: Started Session 22 of user root.
Feb 20 08:01:01 dev2engine systemd: Starting Session 22 of user root.

--------------------------------------------------------------------------------
[root@dev2engine ~]# tail /var/log/ovirt-engine/engine.log
2018-02-20 08:01:16,062+08 INFO  [org.ovirt.engine.core.bll.eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7-thread-32) [102e9d3c] Finished reconstruct for pool '5a865884-0366-0330-02b8-0000000002d4'. Clearing event queue
2018-02-20 08:01:27,825+08 WARN  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7-thread-23) [] Master domain is not in sync between DB and VDSM. Domain Stored2 marked as master in DB and not in the storage
2018-02-20 08:01:27,862+08 WARN  [org.ovirt.engine.core.bll.storage.pool.ReconstructMasterDomainCommand] (org.ovirt.thread.pool-7-thread-23) [213f42b9] Validation of action 'ReconstructMasterDomain' failed for user SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_MASTER,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2,$status PreparingForMaintenance
2018-02-20 08:01:27,882+08 INFO  [org.ovirt.engine.core.bll.eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7-thread-20) [929330e] Finished reconstruct for pool '5a865884-0366-0330-02b8-0000000002d4'. Clearing event queue
2018-02-20 08:01:40,106+08 WARN  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7-thread-17) [] Master domain is not in sync between DB and VDSM. Domain Stored2 marked as master in DB and not in the storage
2018-02-20 08:01:40,197+08 WARN  [org.ovirt.engine.core.bll.storage.pool.ReconstructMasterDomainCommand] (org.ovirt.thread.pool-7-thread-17) [7af552c1] Validation of action 'ReconstructMasterDomain' failed for user SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_MASTER,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2,$status PreparingForMaintenance
2018-02-20 08:01:40,246+08 INFO  [org.ovirt.engine.core.bll.eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7-thread-22) [73673040] Finished reconstruct for pool '5a865884-0366-0330-02b8-0000000002d4'. Clearing event queue
2018-02-20 08:01:51,809+08 WARN  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7-thread-26) [] Master domain is not in sync between DB and VDSM. Domain Stored2 marked as master in DB and not in the storage
2018-02-20 08:01:51,846+08 WARN  [org.ovirt.engine.core.bll.storage.pool.ReconstructMasterDomainCommand] (org.ovirt.thread.pool-7-thread-26) [20307cbe] Validation of action 'ReconstructMasterDomain' failed for user SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_MASTER,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2,$status PreparingForMaintenance
2018-02-20 08:01:51,866+08 INFO  [org.ovirt.engine.core.bll.eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7-thread-49) [2c11a866] Finished reconstruct for pool '5a865884-0366-0330-02b8-0000000002d4'. Clearing event queue
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

--
Regards,
Eyal Shenitzky

2 Master on Storage Pool [Event Error]
by michael pagdanganan
My storage pool has two master domains (Stored2 on Node2, Node1Container on Node1), and my old master domain (DATANd01 on Node1) hung on "Preparing for Maintenance". When I tried to activate the old master domain (DATANd01 on Node1), all storage domains went down and came back up, and the master role kept rotating.
oVirt Engine Version: 4.1.9.1.el7.centos
Event Error: Sync Error on Master Domain between Host Node2 and oVirt Engine. Domain Stored2 is marked as Master in the oVirt Engine database but not on the Storage side. Please consult with Support.
VDSM Node2 command ConnectStoragePoolVDS failed: Wrong Master domain or its version: u'SD=f3e372e3-1251-4195-a4b9-1027e40059df, pool=5a865884-0366-0330-02b8-0000
VDSM Node2 command HSMGetAllTasksStatusesVDS failed: Not SPM: ()
Failed to deactivate Storage Domain DATANd01 (Data Center UnsecuredEnv)
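
In case it helps to see what the storage side itself records, this is the kind of check I mean, run on one of the hosts (a rough sketch only: the metadata path assumes a file-based NFS/Gluster domain mounted under /rhev, and the UUIDs are the ones from the errors above):

  # role and master version according to the domain metadata that VDSM reads
  grep -E 'ROLE|MASTER_VERSION|POOL_SPM_ID' \
      /rhev/data-center/mnt/*/f3e372e3-1251-4195-a4b9-1027e40059df/dom_md/metadata

  # which host VDSM currently considers SPM for the pool
  vdsClient -s 0 getSpmStatus 5a865884-0366-0330-02b8-0000000002d4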
Here are the logs from the engine:
-------------------------------------------------------------------------------
[root@dev2engine ~]# tail /var/log/messages
Feb 20 07:01:01 dev2engine systemd: Starting Session 20 of user root.
Feb 20 07:01:01 dev2engine systemd: Removed slice User Slice of root.
Feb 20 07:01:01 dev2engine systemd: Stopping User Slice of root.
Feb 20 07:58:52 dev2engine systemd: Created slice User Slice of root.
Feb 20 07:58:52 dev2engine systemd: Starting User Slice of root.
Feb 20 07:58:52 dev2engine systemd-logind: New session 21 of user root.
Feb 20 07:58:52 dev2engine systemd: Started Session 21 of user root.
Feb 20 07:58:52 dev2engine systemd: Starting Session 21 of user root.
Feb 20 08:01:01 dev2engine systemd: Started Session 22 of user root.
Feb 20 08:01:01 dev2engine systemd: Starting Session 22 of user root.
-------------------------------------------------------------------------------
[root@dev2engine ~]# tail /var/log/ovirt-engine/engine.log
2018-02-20 08:01:16,062+08 INFO  [org.ovirt.engine.core.bll.eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7-thread-32) [102e9d3c] Finished reconstruct for pool '5a865884-0366-0330-02b8-0000000002d4'. Clearing event queue
2018-02-20 08:01:27,825+08 WARN  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7-thread-23) [] Master domain is not in sync between DB and VDSM. Domain Stored2 marked as master in DB and not in the storage
2018-02-20 08:01:27,862+08 WARN  [org.ovirt.engine.core.bll.storage.pool.ReconstructMasterDomainCommand] (org.ovirt.thread.pool-7-thread-23) [213f42b9] Validation of action 'ReconstructMasterDomain' failed for user SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_MASTER,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2,$status PreparingForMaintenance
2018-02-20 08:01:27,882+08 INFO  [org.ovirt.engine.core.bll.eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7-thread-20) [929330e] Finished reconstruct for pool '5a865884-0366-0330-02b8-0000000002d4'. Clearing event queue
2018-02-20 08:01:40,106+08 WARN  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7-thread-17) [] Master domain is not in sync between DB and VDSM. Domain Stored2 marked as master in DB and not in the storage
2018-02-20 08:01:40,197+08 WARN  [org.ovirt.engine.core.bll.storage.pool.ReconstructMasterDomainCommand] (org.ovirt.thread.pool-7-thread-17) [7af552c1] Validation of action 'ReconstructMasterDomain' failed for user SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_MASTER,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2,$status PreparingForMaintenance
2018-02-20 08:01:40,246+08 INFO  [org.ovirt.engine.core.bll.eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7-thread-22) [73673040] Finished reconstruct for pool '5a865884-0366-0330-02b8-0000000002d4'. Clearing event queue
2018-02-20 08:01:51,809+08 WARN  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7-thread-26) [] Master domain is not in sync between DB and VDSM. Domain Stored2 marked as master in DB and not in the storage
2018-02-20 08:01:51,846+08 WARN  [org.ovirt.engine.core.bll.storage.pool.ReconstructMasterDomainCommand] (org.ovirt.thread.pool-7-thread-26) [20307cbe] Validation of action 'ReconstructMasterDomain' failed for user SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_MASTER,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2,$status PreparingForMaintenance
2018-02-20 08:01:51,866+08 INFO  [org.ovirt.engine.core.bll.eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7-thread-49) [2c11a866] Finished reconstruct for pool '5a865884-0366-0330-02b8-0000000002d4'. Clearing event queue
Setup ovirt-guest-agent from tarball possible?
by Oliver Dietzel
Hi, I am trying to install ovirt-guest-agent on a Clear Linux VM (already up and running in our oVirt test cluster).
The usual Fedora / EL7 RPMs do not work.
Is it possible to install ovirt-guest-agent from a tarball, or do I have to rebuild a source RPM?
And where do I find the latest tarball and source RPM of this package / these packages?
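
For the record, what I was hoping would work is roughly this (an untested sketch: I am assuming the upstream git tree at the URL below is current, still autotools-based, and that the service name matches the package name):

  git clone https://github.com/oVirt/ovirt-guest-agent.git
  cd ovirt-guest-agent
  ./autogen.sh                 # assumption: generates ./configure
  ./configure --prefix=/usr
  make && sudo make install
  sudo systemctl enable --now ovirt-guest-agent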
Any help appreciated, thx in advance
Oli
___________________________________________________________
Oliver Dietzel
RTO GmbH
Hanauer Landstraße 439
60314 Frankfurt
7 years, 2 months
Moving VMs to another cluster
by Yeun, Chris (DNWK)
Hello,
How do you move a VM to another cluster within the same data center? I have a cluster running oVirt 3.5 nodes. I created another cluster with hosts running CentOS 7 (oVirt 3.6 version) and want to move VMs to this cluster. The compatibility mode for everything is 3.5.
I tried shutting down a VM, but I cannot select the other cluster. Live migration to the new cluster fails as well.
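For a powered-off VM, the cluster can in principle also be changed through the
REST API; a minimal sketch with placeholder URL, credentials and VM id (whether
the engine accepts the change still depends on the version and on CPU/network
compatibility between the clusters):

    # update the (down) VM's cluster; both clusters must be in the same data center
    curl -k -u 'admin@internal:PASSWORD' -X PUT \
      -H 'Content-Type: application/xml' \
      -d '<vm><cluster><name>NewCluster</name></cluster></vm>' \
      https://engine.example.com/ovirt-engine/api/vms/VM_ID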
Thanks,
Chris
7 years, 2 months
Fwd: why host is not capable to run HE?
by Artem Tambovskiy
Thanks a lot, Simone!
This clearly shows the problem:
[root@ov-eng ovirt-engine]# sudo -u postgres psql -d engine -c 'select
vds_name, vds_spm_id from vds'
vds_name | vds_spm_id
-----------------+------------
ovirt1.local | 2
ovirt2.local | 1
(2 rows)
While hosted-engine.conf on ovirt1.local has host_id=1, and ovirt2.local
has host_id=2 - so exactly the opposite values.
So how do I get this fixed in the simplest way? Update the engine DB?
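A minimal sketch of the conservative fix - aligning each host's configuration
file with the DB instead of editing the DB (host_id values taken from the query
output above; run the matching line on each host, then restart the HA services):

    # on ovirt1.local (DB says vds_spm_id = 2):
    sed -i 's/^host_id=.*/host_id=2/' /etc/ovirt-hosted-engine/hosted-engine.conf
    # on ovirt2.local (DB says vds_spm_id = 1):
    sed -i 's/^host_id=.*/host_id=1/' /etc/ovirt-hosted-engine/hosted-engine.conf
    # then, on both hosts:
    systemctl restart ovirt-ha-broker ovirt-ha-agent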
Regards,
Artem
On Mon, Feb 19, 2018 at 5:37 PM, Simone Tiraboschi <stirabos(a)redhat.com>
wrote:
>
>
> On Mon, Feb 19, 2018 at 12:13 PM, Artem Tambovskiy <
> artem.tambovskiy(a)gmail.com> wrote:
>
>> Hello,
>>
>> Last weekend my cluster suffered from a massive power outage due to human
>> mistake.
>> I'm using SHE setup with Gluster, I managed to bring the cluster up
>> quickly, but once again I have a problem with duplicated host_id (
>> https://bugzilla.redhat.com/show_bug.cgi?id=1543988) on the second host, and
>> because of this the second host is not capable of running HE.
>>
>> I manually updated the file hosted-engine.conf with the correct host_id and
>> restarted agent & broker - no effect. Then I rebooted the host itself -
>> still no changes. How can I fix this issue?
>>
>
> I'd suggest to run this command on the engine VM:
> sudo -u postgres scl enable rh-postgresql95 -- psql -d engine -c 'select
> vds_name, vds_spm_id from vds'
> (just sudo -u postgres psql -d engine -c 'select vds_name, vds_spm_id
> from vds' if still on 4.1) and check /etc/ovirt-hosted-engine/hosted-engine.conf
> on all the involved hosts.
> Maybe you also have a leftover configuration file on an undeployed host.
>
> When you find a conflict you should manually bring down sanlock.
> If in doubt, a reboot of both hosts will solve it for sure.
>
>
>
>>
>> Regards,
>> Artem
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
7 years, 2 months
How to specify a logical network for ovirt live migration traffic
by simone.sanna@trssistemi.com
Hello to everyone,
I have found the article "How to specify a logical network for RHEV
live migration traffic" at https://access.redhat.com/solutions/70412 but
I can't read it because I don't have an "active Red Hat subscription".
Is there an article like "How to specify a logical network for oVirt
live migration traffic" or similar?
Is it possible to do that (for example, to dedicate a NIC ethX to live
migration traffic between two hosts)?
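In oVirt this is done by giving a logical network the "Migration Network" role
at the cluster level (Compute -> Clusters -> Logical Networks -> Manage
Networks in the UI). A hedged REST sketch of the same operation, with
placeholder IDs and credentials - verify the endpoint against your API version:

    # flag an already-attached cluster network with the 'migration' usage
    curl -k -u 'admin@internal:PASSWORD' -X PUT \
      -H 'Content-Type: application/xml' \
      -d '<network><usages><usage>migration</usage></usages></network>' \
      https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_ID/networks/NETWORK_ID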
Many thanks for your replies,
Simone
7 years, 2 months
why host is not capable to run HE?
by Artem Tambovskiy
Hello,
Last weekend my cluster suffered from a massive power outage due to human
mistake.
I'm using SHE setup with Gluster, I managed to bring the cluster up
quickly, but once again I have a problem with duplicated host_id (
https://bugzilla.redhat.com/show_bug.cgi?id=1543988) on the second host, and
because of this the second host is not capable of running HE.
I manually updated the file hosted-engine.conf with the correct host_id and
restarted agent & broker - no effect. Then I rebooted the host itself -
still no changes. How can I fix this issue?
Regards,
Artem
7 years, 2 months
WG: IPMI config
by Markus.Schaufler@ooe.gv.at
Hi!
When configuring Power Management (respectively the fence agent) I get the following error.
Any idea on this?
2018-02-19 15:01:26,625+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (default task-14) [7ab055b8-7afa-4495-a287-b3b66fd6a81e] START, FenceVdsVDSCommand(HostName = VIGT01-101.res01.ads.ooe.local, FenceVdsVDSCommandParameters:{hostId='1210495a-0680-4f5a-bcd0-345b9debf48c', targetVdsId='169e902e-9993-42c2-ad06-0925d3f217d6', action='STATUS', agent='FenceAgent:{id='null', hostId='null', order='1', type='ipmilan', ip='10.1.46.115', port='623', user='s.oVirt', password='***', encryptOptions='false', options=''}', policy='null'}), log id: 7274de5a
2018-02-19 15:01:26,732+01 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-14) [7ab055b8-7afa-4495-a287-b3b66fd6a81e] EVENT_ID: VDS_ALERT_FENCE_TEST_FAILED(9,001), Power Management test failed for Host VIRZ01-101.res01.ads.ooe.local. Internal JSON-RPC error
[screenshot attachment: image001.png]
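A quick way to take the engine out of the loop is to test the BMC directly
from one of the hosts; a sketch using the values from the log above (option
spelling varies between fence-agents versions, and modern BMCs usually need
lanplus):

    # status query with the same credentials the engine uses
    fence_ipmilan -a 10.1.46.115 -l s.oVirt -p 'PASSWORD' -P -o status
    # or with plain ipmitool
    ipmitool -I lanplus -H 10.1.46.115 -U s.oVirt -P 'PASSWORD' chassis power status

If this fails too, the problem is on the BMC/network side rather than in oVirt;
if it succeeds, adding lanplus=1 to the agent options in the engine is worth a try.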
7 years, 2 months
VM Portal - ADD Nic
by Thomas Fecke
Hey Guys,
We have about 50 users and 50 VLANs. Every user has his own VLAN.
With 4.1 they could log in to the User Portal, select a template or create a new VM, add a disk and connect it to their NIC.
I see there is no option to add a disk anymore with 4.2 -> okay, that's fine for me.
So they can just use templates. But there is no option to attach the VM to a NIC either, so I guess the template's NIC is being used.
But our templates don't have a NIC, because each user has his own networks.
Does that mean I need to add about XX more templates, one for every NIC?
Oh, come on :)
No way to add a NIC via the VM Portal? That really makes the VM Portal unusable for us.
We can't be the only ones using templates like that. Now every VM set up in the VM Portal ends up in one network; that's not good, or do I miss something?
7 years, 2 months
Install Windows VM issues
by Markus.Schaufler@ooe.gv.at
Hi!
I'm new here - hope you can forgive my "newbie questions".
I want to install a Server 2016, so I uploaded both the Windows ISO and the virtio drivers ISO to the ISO domain location. In the VM options I can choose both ISO files.
But as referred to in a howto, I need to use a floppy device with a .vfd file. I found the VFD drivers file, but I cannot find any floppy device - there's no option to choose.
So I tried to add a second CD-ROM, because in Proxmox that already worked. But I cannot find any option to add a second CD-ROM either.
Any idea how I can provide the drivers for the Windows installation?
Thanks for any help!
Markus
7 years, 2 months
Fwd: Install Windows VM issues
by Jon bae
Hi Markus,
I installed Windows 2016 just two weeks ago, so in general it must work.
Have you inserted the Windows ISO into the DVD drive and the floppy file into the
floppy? After booting and selecting installation, a bit later a menu must come up
where you normally choose the hard drive for installation, but this is empty.
There is a button "install driver" or something similar. When you click
on it, you can navigate to the floppy drive.
When this is not working, you can also change your HDD to IDE and install
Windows. Add a second drive with VirtIO, insert the driver CD and install the
driver. Reboot, remove the second HDD and change the main disk to VirtIO too.
But this is a bit hacky :).
Jonathan
Hi Jonathan,
thanks for your quick reply!
I started with Run Once and attached the VFD to the floppy, but still
there's no floppy or CD-ROM drive with the drivers on it.
Any hint on this?
Markus
Hi Markus,
you need to use the "Run Once" option, to be able to insert the floppy.
Jonathan
2018-02-19 8:58 GMT+01:00 <Markus.Schaufler(a)ooe.gv.at>:
Hi!
I'm new here - hope you can forgive my "newbie questions".
I want to install a Server 2016, so I uploaded both the Windows ISO and
the virtio drivers ISO to the ISO domain location. In the VM options I can
choose both ISO files.
But as referred to in a howto, I need to use a floppy device with a .vfd file. I
found the VFD drivers file, but I cannot find any floppy device - there's
no option to choose.
So I tried to add a second CD-ROM, because in Proxmox that already worked.
But I cannot find any option to add a second CD-ROM either.
Any idea how I can provide the drivers for the Windows installation?
Thanks for any help!
Markus
7 years, 2 months
Unable to connect to the graphic server
by Alex Bartonek
I've built and rebuilt about 4 oVirt servers. Consider myself pretty good at this. LOL.
So I am setting up an oVirt server for a friend on his R710. CentOS 7, oVirt 4.2.
/etc/hosts has the correct IP and FQDN setup.
When I build a VM and try to open a console session via SPICE I am unable to
connect to the graphic server. I'm connecting from a Windows 10 box, using
virt-manager to connect.
I've googled and I just can't seem to find any resolution to this. Now, I did
build the server on my home network, but the subnet it's on is the same..
internal 192.168.1.xxx. The web interface is accessible also.
Any hints as to what else I can check?
Thanks!
Sent with ProtonMail (https://protonmail.com) Secure Email.
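A few host-side checks worth running; a sketch, not a diagnosis:

    # is the VM's console port listening? oVirt allocates SPICE/VNC ports from 5900 upwards
    ss -tlnp | grep qemu
    # is that port range open in the host firewall?
    firewall-cmd --list-ports

Also note that the console.vv file the engine hands out is meant for Remote
Viewer (virt-viewer); connecting with virt-manager bypasses the SPICE ticket
and TLS settings oVirt generates, which by itself can produce exactly this error.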
7 years, 2 months
qcow2 images corruption
by Nicolas Ecarnot
Hello,
TL; DR : qcow2 images keep getting corrupted. Any workaround?
Long version:
This discussion has already been launched by me on the oVirt and on
qemu-block mailing list, under similar circumstances but I learned
further things since months and here are some informations :
- We are using 2 oVirt 3.6.7.5-1.el7.centos datacenters, using CentOS
7.{2,3} hosts
- Hosts :
- CentOS 7.2 1511 :
- Kernel = 3.10.0 327
- KVM : 2.3.0-31
- libvirt : 1.2.17
- vdsm : 4.17.32-1
- CentOS 7.3 1611 :
- Kernel 3.10.0 514
- KVM : 2.3.0-31
- libvirt 2.0.0-10
- vdsm : 4.17.32-1
- Our storage is 2 Equallogic SANs connected via iSCSI on a dedicated
network
- Depending on the week, but all in all, there are around 32 hosts, 8 storage
domains and, for various reasons, very few VMs (less than 200).
- One peculiar point is that most of our VMs are provided an additional
dedicated network interface that is iSCSI-connected to some volumes of
our SAN - these volumes not being part of the oVirt setup. That could
lead to a lot of additional iSCSI traffic.
From time to time, a random VM appears paused by oVirt.
Digging into the oVirt engine logs, then into the host vdsm logs, it
appears that the host considers the qcow2 image as corrupted.
In what I consider conservative behavior, vdsm stops any
interaction with this image and marks it as paused.
Any attempt to unpause it leads back to the same conservative pause.
After having found (https://access.redhat.com/solutions/1173623) the
right logical volume hosting the qcow2 image, I can run qemu-img check
on it.
- On 80% of my VMs, I find no errors.
- On 15% of them, I find Leaked cluster errors that I can correct using
"qemu-img check -r all"
- On 5% of them, I find Leaked clusters errors and further fatal errors,
which can not be corrected with qemu-img.
In rare cases, qemu-img can correct them, but it destroys large parts of
the image (which becomes unusable), and in other cases it can not correct them
at all.
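For reference, the check/repair sequence described above, roughly - run it only
with the VM down, against the activated LV found via the Red Hat solution
linked above (the VG/LV names are placeholders):

    lv=/dev/VG_NAME/LV_NAME
    qemu-img check "$lv"            # read-only check
    qemu-img check -r leaks "$lv"   # repair leaked clusters only
    qemu-img check -r all "$lv"     # aggressive; can destroy data on real corruption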
Months ago, I already sent a similar message but the error message was
about No space left on device
(https://www.mail-archive.com/qemu-block@gnu.org/msg00110.html).
This time, I don't have this message about space, but only corruption.
I kept reading and found a similar discussion in the Proxmox group :
https://lists.ovirt.org/pipermail/users/2018-February/086750.html
https://forum.proxmox.com/threads/qcow2-corruption-after-snapshot-or-heav...
What I read similar to my case is :
- usage of qcow2
- heavy disk I/O
- using the virtio-blk driver
In the proxmox thread, they tend to say that using virtio-scsi is the
solution. I asked this question to oVirt experts
(https://lists.ovirt.org/pipermail/users/2018-February/086753.html), but
it's not clear the driver is to blame.
I agree with the answer Yaniv Kaul gave to me, saying I have to properly
report the issue, so I'd like to know which particular information I
can give you now.
As you can imagine, all this setup is in production, and for most of the
VMs, I can not "play" with them. Moreover, we launched a campaign of
nightly stopping every VM, running qemu-img check on each one, then booting.
So it might take some time before I find another corrupted image.
(which I'll preciously store for debug)
Other information: we very rarely do snapshots, but I am inclined to
suspect that automated migrations of VMs could trigger similar behavior
on qcow2 images.
Last point about the versions we use : yes that's old, yes we're
planning to upgrade, but we don't know when.
Regards,
--
Nicolas ECARNOT
7 years, 2 months
Ovirt backups lead to unresponsive VM
by Alex K
Hi all,
I have a cluster with 3 nodes, using oVirt 4.1 in a self-hosted setup on
top of GlusterFS.
On some VMs (especially one Windows Server 2016 64-bit with a 500 GB disk)
I almost always observe that during the backup the VM is rendered
unresponsive (the dashboard shows a question mark at the VM status and the
VM does not respond to ping or to anything). Guest agents are installed on
the VMs.
For scheduled backups I use:
https://github.com/wefixit-AT/oVirtBackup
The script does the following:
1. Snapshot the VM (this is done OK without any failure; see the sketch after this list)
2. Clone the snapshot (this step renders the VM unresponsive)
3. Export the clone
4. Delete the clone
5. Delete the snapshot
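For reference, step 1 as a bare REST call - a sketch with placeholder URL,
credentials and VM id; the clone and export steps are analogous POSTs:

    curl -k -u 'admin@internal:PASSWORD' -X POST \
      -H 'Content-Type: application/xml' \
      -d '<snapshot><description>nightly-backup</description></snapshot>' \
      https://engine.example.com/ovirt-engine/api/vms/VM_ID/snapshots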
Do you have any similar experience? Any suggestions to address this?
I have never seen such an issue with the hosted Linux VMs.
The cluster has enough storage to accommodate the clone.
Thanx,
Alex
7 years, 2 months
Failing live migration with SPICE
by Alex K
Hi all,
I am running a 3-node oVirt 4.1 self-hosted setup.
I have consistently observed that Windows 10 VMs with a SPICE console fail to
live migrate. Other VMs (Windows Server 2016) do migrate normally.
VDSM log indicates:
internal error: unable to execute QEMU command 'migrate': qxl: guest bug:
command not in ram bar (migration:287)
2018-02-18 11:41:59,586+0000 ERROR (migsrc/2cf3a254) [virt.vm]
(vmId='2cf3a254-8450-44cf-b023-e0a49827dac0') Failed to migrate
(migration:429)
if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed',
dom=self)
libvirtError: internal error: unable to execute QEMU command 'migrate':
qxl: guest bug: command not in ram bar
Seems like a guest-side (QXL driver) bug on Windows 10? Is there any fix?
Thanx,
Alex
7 years, 2 months
ovirt change of email alert
by Alex K
Hi all,
I had set a specific alert email address during the deployment and then I wanted to
change it.
I did the following:
At one of the hosts I ran:
hosted-engine --set-shared-config destination-emails alerts(a)domain.com
--type=broker
systemctl restart ovirt-ha-broker.service
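To verify what the broker actually picked up, there is a matching getter
(a quick sanity check, same type flag):

    hosted-engine --get-shared-config destination-emails --type=broker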
I had to do the above since changing the email from the GUI did not have any
effect.
After the above, the emails are received at the new email address, but the
cluster seems to have some issue recognizing the state of the engine: I am
flooded with emails saying "EngineMaybeAway-EngineUnexpectedlyDown".
I have also restarted the ovirt-ha-agent.service at each host.
I put the cluster into global maintenance and then disabled global
maintenance.
In the host agent logs I have:
MainThread::ERROR::2018-02-18 11:12:20,751::hosted_engine::720::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock) cannot get lock on host id 1: host already holds lock on a different host id
Another host logs:
MainThread::INFO::2018-02-18 11:20:23,692::states::682::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score) Score is 0 due to unexpected vm shutdown at Sun Feb 18 11:15:13 2018
MainThread::INFO::2018-02-18 11:20:23,692::hosted_engine::453::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUnexpectedlyDown (score: 0)
The engine status on 3 hosts is:
hosted-engine --vm-status
--== Host 1 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : v0
Host ID : 1
Engine status : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 0
stopped : False
Local maintenance : False
crc32 : cfd15dac
local_conf_timestamp : 4721144
Host timestamp : 4721144
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=4721144 (Sun Feb 18 11:20:33 2018)
host-id=1
score=0
vm_conf_refresh_time=4721144 (Sun Feb 18 11:20:33 2018)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Tue Feb 24 15:29:44 1970
--== Host 2 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : v1
Host ID : 2
Engine status : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 0
stopped : False
Local maintenance : False
crc32 : 5cbcef4c
local_conf_timestamp : 2499416
Host timestamp : 2499416
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=2499416 (Sun Feb 18 11:20:46 2018)
host-id=2
score=0
vm_conf_refresh_time=2499416 (Sun Feb 18 11:20:46 2018)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Thu Jan 29 22:18:42 1970
--== Host 3 status ==--
conf_on_shared_storage : True
Status up-to-date : False
Hostname : v2
Host ID : 3
Engine status : unknown stale-data
Score : 3400
stopped : False
Local maintenance : False
crc32 : f064d529
local_conf_timestamp : 2920612
Host timestamp : 2920611
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=2920611 (Sun Feb 18 10:47:31 2018)
host-id=3
score=3400
vm_conf_refresh_time=2920612 (Sun Feb 18 10:47:32 2018)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
Putting each host into maintenance and then activating it back does not resolve
the issue. It seems I have to avoid defining an email address during deployment
and set it only later in the GUI.
How can one recover from this situation?
Thanx,
Alex
7 years, 2 months
Re: [ovirt-users] Import Domain and snapshot issue ... please help !!!
by Enrico Becchetti
Hi,
you can also download them through these
links:
https://owncloud.pg.infn.it/index.php/s/QpsTyGxtRTPYRTD
https://owncloud.pg.infn.it/index.php/s/ph8pLcABe0nadeb
Thanks again !!!!
Best Regards
Enrico
> Il 13/02/2018 14:52, Maor Lipchuk ha scritto:
>>
>>
>> On Tue, Feb 13, 2018 at 3:51 PM, Maor Lipchuk <mlipchuk(a)redhat.com=20
>> <mailto:mlipchuk@redhat.com>> wrote:
>>
>>
>> On Tue, Feb 13, 2018 at 3:42 PM, Enrico Becchetti
>> <enrico.becchetti(a)pg.infn.it
>> <mailto:enrico.becchetti@pg.infn.it>> wrote:
>>
>> see the attach files please ... thanks for your attention !!!
>>
>>
>>
>> Seems like the engine logs do not contain the entire process;
>> can you please share older logs, going back to the import operation?
>>
>>
>> And VDSM logs as well from your host
>>
>> Best Regards
>> Enrico
>>
>>
>> On 13/02/2018 14:09, Maor Lipchuk wrote:
>>>
>>>
>>> On Tue, Feb 13, 2018 at 1:48 PM, Enrico Becchetti
>>> <enrico.becchetti(a)pg.infn.it> wrote:
>>>
>>> Dear All,
>>> I have been using ovirt for a long time, with three
>>> hypervisors and an external engine running in a CentOS VM.
>>>
>>> These three hypervisors have HBAs and access to Fibre
>>> Channel storage. Until recently I used version 3.5; then
>>> I reinstalled everything from scratch and now I have 4.2.
>>>
>>> Before formatting everything, I detached the storage data
>>> domain (FC) with the virtual machines and reimported it
>>> into the new 4.2, and all went well. In
>>> this domain there were virtual machines with and without
>>> snapshots.
>>>
>>> Now I have two problems. The first is that if I try to
>>> delete a snapshot, the process does not end successfully and
>>> remains hanging; the second problem is that
>>> in one case I lost the virtual machine !!!
>>>
>>>
>>>
>>> Not sure that I fully understand the scenario.
>>> How did the virtual machine get lost if you only tried to
>>> delete a snapshot?
>>>
>>>
>>> So I need your help to kill the three running zombie
>>> tasks, because with taskcleaner.sh I can't do anything,
>>> and then I need to know how I can delete the old snapshots
>>> made with 3.5 without losing other data and without
>>> leaving new processes that fail to terminate correctly.
>>>
>>> If you want some log files please let me know.
>>>
>>>
>>>
>>> Hi Enrico,
>>>
>>> Can you please attach the engine and VDSM logs
>>>
>>>
>>> Thank you so much.
>>> Best Regards
>>> Enrico
>>>
>>>
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users(a)ovirt.org <mailto:Users@ovirt.org>
>>> http://lists.ovirt.org/mailman/listinfo/users
>>> <http://lists.ovirt.org/mailman/listinfo/users>
>>>
>>>
>>
>> --
>> _______________________________________________________________________
>> Enrico Becchetti          Servizio di Calcolo e Reti
>> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
>> Via Pascoli, c/o Dipartimento di Fisica 06123 Perugia (ITALY)
>> Phone: +39 075 5852777   Mail: Enrico.Becchetti<at>pg.infn.it
>> _______________________________________________________________________
>>
>
> --
> _______________________________________________________________________
> Enrico Becchetti Servizio di Calcolo e Reti
>
> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
> Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY)
> Phone:+39 075 5852777 Mail: Enrico.Becchetti<at>pg.infn.it
> ______________________________________________________________________
--
_______________________________________________________________________
Enrico Becchetti Servizio di Calcolo e Reti
Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY)
Phone:+39 075 5852777 Mail: Enrico.Becchetti<at>pg.infn.it
______________________________________________________________________
7 years, 2 months
database restoration
by Fabrice Bacchella
I'm running a restoration test and getting the following log generated by engine-backup --mode=restore:
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 4274; 0 0 COMMENT EXTENSION plpgsql
pg_restore: [archiver (db)] could not execute query: ERROR: must be owner of extension plpgsql
Command was: COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language';
pg_restore: WARNING: no privileges could be revoked for "public"
pg_restore: WARNING: no privileges could be revoked for "public"
pg_restore: WARNING: no privileges were granted for "public"
pg_restore: WARNING: no privileges were granted for "public"
WARNING: errors ignored on restore: 1
Do I need to worry, given that this error is ignored?
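For what it's worth, that TOC entry is only a COMMENT on the plpgsql extension, which the restoring user doesn't own; no engine data is involved. A hedged sanity check after the restore (assuming the default database name "engine"):
# the schema should be present and populated
sudo -u postgres psql engine -c '\dt' | head
sudo -u postgres psql engine -c 'select count(*) from vds_static;'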
7 years, 2 months
Unable to put Host into Maintenance mode
by Mark Steele
I have a host that is currently reporting down with NO VMs on it or
associated with it. However, when I attempt to put it into maintenance mode,
I get the following error:
Host hv-01 cannot change into maintenance mode - not all Vms have been
migrated successfully. Consider manual intervention: stopping/migrating
Vms: <UNKNOWN> (User: admin)
I am running
oVirt Engine Version: 3.5.0.1-1.el6
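A hedged sketch of how one might narrow this down on 3.5 (host name and credentials are placeholders, and the REST path reflects the old /api root): ask the engine which VMs it still ties to the host, and if none show up, use "Confirm Host has been Rebooted" in the UI so the engine clears its stale <UNKNOWN> VM state before retrying maintenance.
# list VMs the engine still associates with the host
curl -k -u admin@internal:PASSWORD \
  "https://ENGINE-FQDN/api/vms?search=host%3Dhv-01"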
***
*Mark Steele*
CIO / VP Technical Operations | TelVue Corporation
TelVue - We Share Your Vision
16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
800.885.8886 x128 | msteele(a)telvue.com | http://www.telvue.com
twitter: http://twitter.com/telvue | facebook:
https://www.facebook.com/telvue
7 years, 2 months
Username / password for ovirt-shell
by Mark Steele
Hello,
I'm not the original system architect of our Cluster and I'm not able to
locate any documentation regarding the username and password for our
ovirt-shell CLI.
Is there a config file on the HostedEngine that would point me in the right
direction?
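In case it helps while waiting for answers: ovirt-shell authenticates with an engine user (typically admin@internal and the password chosen during engine-setup), not an OS account. A hedged example connection, with the engine URL as a placeholder:
# connect with the engine's internal admin user
ovirt-shell -l "https://ENGINE-FQDN/api" -u admin@internal
# stored defaults, if any, usually live in ~/.ovirtshellrc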
Best regards,
***
*Mark Steele*
CIO / VP Technical Operations | TelVue Corporation
TelVue - We Share Your Vision
16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
800.885.8886 x128 | msteele(a)telvue.com | http://www.telvue.com
twitter: http://twitter.com/telvue | facebook:
https://www.facebook.com/telvue
7 years, 2 months
Requirements for basic host
by Mark Steele
Hello again,
I'm building a new host for my cluster and have a quick question about
required software for joining the host to my cluster.
In my notes from a previous colleague, I am instructed to do the following:
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm
yum install ovirt-hosted-engine-setup
hosted-engine --deploy
We already have a HostedEngine running on another server in the cluster -
so do I need to install ovirt-hosted-engine-setup and then deploy it for
this server to join the cluster and operate properly?
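For reference, a hedged sketch of the two paths (assuming the cluster is 3.5 hosted-engine, as in the notes above): only run the hosted-engine deploy if this server should also be able to host the engine VM; otherwise a plain UI add is enough.
# extra hosted-engine host: deploy detects the existing HE storage and
# registers this server as an additional HA host
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm
yum install ovirt-hosted-engine-setup
hosted-engine --deploy
# regular compute host: skip the above and add it from the web UI
# (Hosts -> New); the engine pushes vdsm via host-deploy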
As always - thank you for your time.
***
*Mark Steele*
CIO / VP Technical Operations | TelVue Corporation
TelVue - We Share Your Vision
16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
800.885.8886 x128 | msteele(a)telvue.com | http://www.telvue.com
twitter: http://twitter.com/telvue | facebook:
https://www.facebook.com/telvue
7 years, 2 months
ERROR - some other host already uses IP ###.###.###.###
by Mark Steele
Good morning,
We had a storage crash early this morning that messed up a couple of our
ovirt hosts. Networking seemed to be the biggest issue. I have decided to
remove the bridge information in /etc/sysconfig/network-scripts and re-IP the
NICs in order to re-import them into my ovirt installation (I have already
removed the hosts).
One of the NIC's refuses to come up and is generating the following error:
ERROR : [/etc/sysconfig/network-scripts/ifup-eth] Error, some other
host (0C:C4:7A:5B:11:5C) already uses address ###.###.###.###.
When I ARP on this server, I do not see that MAC address - and none of my
other hosts are using it either. I'm not sure where to go next, other than
completely reinstalling CentOS on this server and starting over.
Ovirt version is oVirt Engine Version: 3.5.0.1-1.el6
OS version is
CentOS Linux release 7.4.1708 (Core)
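Before reinstalling, two hedged checks (interface name eth0 is an assumption): see what actually answers for the address, and if nothing legitimate owns it, disable the initscripts duplicate-address check so the NIC can come up.
# which MAC answers for the address from this host?
arping -D -I eth0 -c 3 XXX.XXX.XXX.XXX
# if it's a stale/ghost entry, skip the check for this NIC
echo "ARPCHECK=no" >> /etc/sysconfig/network-scripts/ifcfg-eth0
ifup eth0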
Thank you
***
*Mark Steele*
CIO / VP Technical Operations | TelVue Corporation
TelVue - We Share Your Vision
16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
800.885.8886 x128 | msteele(a)telvue.com | http://www.telvue.com
twitter: http://twitter.com/telvue | facebook:
https://www.facebook.com/telvue
7 years, 2 months
Internal Server Error while add Permission [cli]
by Thomas Fecke
Hey dear Community,
I work a bit with the ovirt shell. That has worked pretty well, but I hit some problems when I try to add a permission:
What I want to do:
Add a role to a VM
What I did:
add permission --parent-vm-name vm1 --user-id user1 --role-id UserVmCreator
Error:
status: 500
reason: Internal Server Error
detail:
<html><head><title>Error</title></head><body>Internal Server Error</body></html>
Any other CLI command works fine for me. What am I doing wrong? Thank you!
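One hedged guess at the cause: in ovirt-shell the --user-id and --role-id options expect UUIDs rather than names, and passing names can surface as a bare 500. A sketch of looking the IDs up first (the UUID placeholders are mine):
# inside ovirt-shell
list users --show-all
list roles --show-all
add permission --parent-vm-name vm1 --user-id <user-uuid> --role-id <role-uuid>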
7 years, 2 months
XML error ovirt-4.2.1 release
by Ladislav Humenik
Hello all,
we just tested the 4.2.0 release; it worked fine so far.
Yesterday we updated to the latest 4.2.1, and since then we cannot
send and receive responses; the error has to do with the response from the
server:
checkContentType(XML_CONTENT_TYPE_RE, "XML",
response.getFirstHeader("content-type").getValue());
it seems it is not XML type
the error is:
throw new Error("Failed to send request", e);
Through the web API I can connect and see everything, but through the
SDK it exits. I've tried both the 4.2.0 and 4.2.1 oVirt SDKs.
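A hedged way to see what the server actually hands back to the SDK (engine address and password are placeholders): if the Content-Type header below is not application/xml, the client-side check quoted above would fail exactly like this.
curl -k -s -D - -o /dev/null \
  -u admin@internal:PASSWORD \
  -H "Accept: application/xml" \
  "https://ENGINE-FQDN/ovirt-engine/api" | grep -i content-type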
--
Ladislav Humenik
7 years, 2 months
Partition Trouble on oVirt Node
by Matt Simonsen
Hello all,
This may not be oVirt specific (but it may be) so thank you in advance
for any assistance.
I have a system installed with oVirt Node Next 4.1.9 that was installed
to /dev/sda
I had a separate RAID volume, /dev/sdb, that should not have been used,
but now that the operating system is loaded I'm struggling to get the
device partitioned.
I've tried mkfs.ext4 on the device and also pvcreate, with the errors
below. I've also rebooted a couple times and tried to disable
multipathd. Is multipathd even safe to disable on Node Next?
Below are the errors I've received, and thank you again for any tips.
[root@node1-g6-h3 ~]# mkfs.ext4 /dev/sdb
mke2fs 1.42.9 (28-Dec-2013)
/dev/sdb is entire device, not just one partition!
Proceed anyway? (y,n) y
/dev/sdb is apparently in use by the system; will not make a filesystem
here!
[root@node1-g6-h3 ~]# gdisk
GPT fdisk (gdisk) version 0.8.6
Type device filename, or press <Enter> to exit: /dev/sdb
Caution: invalid main GPT header, but valid backup; regenerating main header
from backup!
Caution! After loading partitions, the CRC doesn't check out!
Warning! Main partition table CRC mismatch! Loaded backup partition table
instead of main partition table!
Warning! One or more CRCs don't match. You should repair the disk!
Partition table scan:
MBR: not present
BSD: not present
APM: not present
GPT: damaged
Found invalid MBR and corrupt GPT. What do you want to do? (Using the
GPT MAY permit recovery of GPT data.)
1 - Use current GPT
2 - Create blank GPT
Your answer: 2
Command (? for help): n
Partition number (1-128, default 1):
First sector (34-16952264590, default = 2048) or {+-}size{KMGTP}:
Last sector (2048-16952264590, default = 16952264590) or {+-}size{KMGTP}:
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): 8e00
Changed type of partition to 'Linux LVM'
Command (? for help): w
Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!
Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sdb.
The operation has completed successfully.
[root@node1-g6-h3 ~]# pvcreate /dev/sdb1
Device /dev/sdb1 not found (or ignored by filtering).
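The "apparently in use by the system" message usually means multipathd has claimed the disk as a dm device, which would also explain the pvcreate filtering error. Rather than disabling multipathd (Node relies on it for real multipath storage), a hedged sketch of blacklisting just this disk — the wwid and drop-in path are assumptions to adapt:
# find the wwid and map name multipathd assigned to the disk
multipath -ll
# flush its map, then blacklist the disk so it stays untouched
multipath -f <map-name>
cat >> /etc/multipath/conf.d/local.conf <<'EOF'
blacklist {
    wwid "<wwid-of-sdb>"
}
EOF
multipathd reconfigure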
7 years, 2 months
Virtual networks in oVirt 4.2 and MTU 1500
by Dmitry Semenov
I have a smallish cluster on oVirt 4.2.
Each node has a bond, which in turn carries several VLANs.
I use OVN virtual networks (External Provider -> ovirt-provider-ovn).
While testing I noticed that inside a virtual network the MTU must be less than 1500, so my question is: can I change something in the network or in the bond so that everything in the virtual network works correctly with MTU 1500?
Below link with my settings:
https://pastebin.com/F7ssCVFa
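The limitation comes from the tunnel encapsulation OVN uses (Geneve), which consumes roughly 50-60 bytes per packet. The usual remedy is to give the underlay headroom instead of shrinking the guest MTU — a hedged sketch, with interface names and the 1600 value as assumptions:
# on every host, raise the MTU of the path carrying tunnel traffic
ip link set bond0 mtu 1600
ip link set bond0.100 mtu 1600
# make it persistent (MTU=1600 in the ifcfg files, or set the MTU on the
# oVirt logical network so vdsm applies it everywhere); the physical
# switches must accept the larger frames too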
--
Best regards
7 years, 2 months
hosted-engine deploy 4.2.1 fails when ovirtmgmt is defined on vlan subinterface
by Kuko Armas
I'm not sure if I should submit a bug report about this, so I ask around here first...
I've found a bug that "seems" related but I think it's not (https://bugzilla.redhat.com/show_bug.cgi?id=1523661)
This is the problem:
- I'm trying to do a clean HE deploy with oVirt 4.2.1 on a clean CentOS 7.4 host
- I have a LACP bond (bond0) and I need my management network to be on vlan 1005, so I have created interface bond0.1005 on the host and everything works
- I run hosted-engine deploy, and it always fails with
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true, "cmd": "ip rule list | grep ovirtmgmt | sed s/\\\\[.*\\\\]\\ //g | awk '{ print $9 }'", "delta": "0:00:00.006473", "end": "2018-02-15 13:57:11.132359", "rc": 0, "start": "2018-02-15 13:57:11.125886", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
- Looking at the ansible playbook, I see it's trying to look for an ip rule using a custom routing table, but I have no such rule
[root@ovirt1 ~]# ip rule
0: from all lookup local
32766: from all lookup main
32767: from all lookup default
- I also find that I have no "ovirtmgmt" bridge
bridge name bridge id STP enabled interfaces
;vdsmdummy; 8000.000000000000 no
virbr0 8000.525400e6ca97 yes virbr0-nic
vnet0
- But I haven't found any reference in the ansible playbook to this network creation.
- The HE VM gets created and I can connect with SSH, so I tried to find out if the ovirtmgmt network is created via vdsm from the engine
- Looking at the engine.log I found this:
2018-02-15 13:49:26,850Z INFO [org.ovirt.engine.core.bll.host.HostConnectivityChecker] (EE-ManagedThreadFactory-engine-Thread-1) [15c7e33a] Engine managed to communicate with VDSM agent on host 'ovirt1' with address 'ovirt1' ('06651b32-4ef8-4b5d-ab2d-c38e84c2d790')
2018-02-15 13:49:30,302Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-1) [15c7e33a] EVENT_ID: VLAN_ID_MISMATCH_FOR_MANAGEMENT_NETWORK_CONFIGURATION(1,119), Failed to configure management network on host ovirt1. Host ovirt1 has an interface bond0.1005 for the management network configuration with VLAN-ID (1005), which is different from data-center definition (none).
2018-02-15 13:49:30,302Z ERROR [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-1) [15c7e33a] Exception: org.ovirt.engine.core.bll.network.NetworkConfigurator$NetworkConfiguratorException: Failed to configure management network
- So I guess that the engine tried to create the ovirtmgmt bridge on the host via vdsm, but it failed because "Host ovirt1 has an interface bond0.1005 for the management network configuration with VLAN-ID (1005), which is different from data-center definition (none)"
- Of course I haven't had the opportunity to set up the management network's VLAN in the datacenter yet, because I'm still trying to deploy the Hosted Engine
Is there a workaround?
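One possible workaround, clearly hedged since I have not verified it against the 4.2.1 deploy flow: while the deploy loops waiting for the host, tag the ovirtmgmt network definition in the bootstrap engine with VLAN 1005 so the host-deploy check quoted above passes. Via the API it would look roughly like:
# find the network id
curl -k -u admin@internal:PASSWORD \
  "https://ENGINE-FQDN/ovirt-engine/api/networks?search=name%3Dovirtmgmt"
# set the vlan tag on it (id taken from the previous call)
curl -k -u admin@internal:PASSWORD -X PUT \
  -H "Content-Type: application/xml" \
  -d '<network><vlan id="1005"/></network>' \
  "https://ENGINE-FQDN/ovirt-engine/api/networks/<network-id>"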
Salu2!
--
Miguel Armas
CanaryTek Consultoria y Sistemas SL
http://www.canarytek.com/
7 years, 2 months
hosted-engine 4.2.1-pre setup on a clean node..
by Thomas Davis
Is this supported?
I have a node with CentOS 7.4 minimal installed on it, and an interface
set up with an IP address.
I've yum installed nothing else except the ovirt-4.2.1-pre rpm, run screen,
and then do the 'hosted-engine --deploy' command.
It hangs on:
[ INFO ] changed: [localhost]
[ INFO ] TASK [Get ovirtmgmt route table id]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true,
"cmd": "ip rule list | grep ovirtmgmt | sed s/\\\\[.*\\\\]\\ //g | awk '{
print $9 }'", "delta": "0:00:00.004845", "end": "2018-02-02
12:03:30.794860", "rc": 0, "start": "2018-02-02 12:03:30.790015", "stderr":
"", "stderr_lines": [], "stdout": "", "stdout_lines": []}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing
ansible-playbook
[ INFO ] Stage: Clean up
[ INFO ] Cleaning temporary resources
[ INFO ] TASK [Gathering Facts]
[ INFO ] ok: [localhost]
[ INFO ] TASK [Remove local vm dir]
[ INFO ] ok: [localhost]
[ INFO ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20180202120333.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: please check the logs for the
issue, fix accordingly or re-deploy from scratch.
Log file is located at
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180202115038-r11nh1.log
but the VM is up and running, just attached to the 192.168.122.0/24 subnet
[root@d8-r13-c2-n1 ~]# ssh root(a)192.168.122.37
root(a)192.168.122.37's password:
Last login: Fri Feb 2 11:54:47 2018 from 192.168.122.1
[root@ovirt ~]# systemctl status ovirt-engine
● ovirt-engine.service - oVirt Engine
Loaded: loaded (/usr/lib/systemd/system/ovirt-engine.service; enabled;
vendor preset: disabled)
Active: active (running) since Fri 2018-02-02 11:54:42 PST; 11min ago
Main PID: 24724 (ovirt-engine.py)
CGroup: /system.slice/ovirt-engine.service
├─24724 /usr/bin/python
/usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py
--redirect-output --systemd=notify start
└─24856 ovirt-engine -server -XX:+TieredCompilation -Xms3971M
-Xmx3971M -Djava.awt.headless=true -Dsun.rmi.dgc.client.gcInterval=3600000
-Dsun.rmi.dgc.server.gcInterval=3600000 -Djsse...
Feb 02 11:54:41 ovirt.crt.nersc.gov systemd[1]: Starting oVirt Engine...
Feb 02 11:54:41 ovirt.crt.nersc.gov ovirt-engine.py[24724]: 2018-02-02
11:54:41,767-0800 ovirt-engine: INFO _detectJBossVersion:187 Detecting
JBoss version. Running: /usr/lib/jvm/jre/...600000', '-
Feb 02 11:54:42 ovirt.crt.nersc.gov ovirt-engine.py[24724]: 2018-02-02
11:54:42,394-0800 ovirt-engine: INFO _detectJBossVersion:207 Return code:
0, | stdout: '[u'WildFly Full 11.0.0....tderr: '[]'
Feb 02 11:54:42 ovirt.crt.nersc.gov systemd[1]: Started oVirt Engine.
Feb 02 11:55:25 ovirt.crt.nersc.gov python2[25640]: ansible-stat Invoked
with checksum_algorithm=sha1 get_checksum=True follow=False
path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True
Feb 02 11:55:29 ovirt.crt.nersc.gov python2[25698]: ansible-stat Invoked
with checksum_algorithm=sha1 get_checksum=True follow=False
path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True
Feb 02 11:55:30 ovirt.crt.nersc.gov python2[25741]: ansible-stat Invoked
with checksum_algorithm=sha1 get_checksum=True follow=False
path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True
Feb 02 11:55:30 ovirt.crt.nersc.gov python2[25767]: ansible-stat Invoked
with checksum_algorithm=sha1 get_checksum=True follow=False
path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True
Feb 02 11:55:31 ovirt.crt.nersc.gov python2[25795]: ansible-stat Invoked
with checksum_algorithm=sha1 get_checksum=True follow=False
path=/etc/ovirt-engine-metrics/config.yml get_md5...ributes=True
The 'ip rule list' output never has an ovirtmgmt rule/table in it, which means
the ansible script loops and then dies; vdsmd has never configured the network
on the node.
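For comparison, a hedged pair of checks on the node (vdsm-client ships with 4.2) to confirm that vdsm never built the management network:
# what vdsm currently reports about networks and bridges
vdsm-client Host getCapabilities | grep -i -A2 ovirtmgmt
# a successful deploy leaves an ovirtmgmt bridge and extra "ip rule"
# entries pointing at a dedicated routing table
ip rule list
brctl show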
[root@d8-r13-c2-n1 ~]# systemctl status vdsmd -l
● vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor
preset: enabled)
Active: active (running) since Fri 2018-02-02 11:55:11 PST; 14min ago
Main PID: 7654 (vdsmd)
CGroup: /system.slice/vdsmd.service
└─7654 /usr/bin/python2 /usr/share/vdsm/vdsmd
Feb 02 11:55:11 d8-r13-c2-n1 vdsmd_init_common.sh[7551]: vdsm: Running
dummybr
Feb 02 11:55:11 d8-r13-c2-n1 vdsmd_init_common.sh[7551]: vdsm: Running
tune_system
Feb 02 11:55:11 d8-r13-c2-n1 vdsmd_init_common.sh[7551]: vdsm: Running
test_space
Feb 02 11:55:11 d8-r13-c2-n1 vdsmd_init_common.sh[7551]: vdsm: Running
test_lo
Feb 02 11:55:11 d8-r13-c2-n1 systemd[1]: Started Virtual Desktop Server
Manager.
Feb 02 11:55:12 d8-r13-c2-n1 vdsm[7654]: WARN File:
/var/run/vdsm/trackedInterfaces/vnet0 already removed
Feb 02 11:55:12 d8-r13-c2-n1 vdsm[7654]: WARN Not ready yet, ignoring event
'|virt|VM_status|ba56a114-efb0-45e0-b2ad-808805ae93e0'
args={'ba56a114-efb0-45e0-b2ad-808805ae93e0': {'status': 'Powering up',
'displayInfo': [{'tlsPort': '-1', 'ipAddress': '127.0.0.1', 'type': 'vnc',
'port': '5900'}], 'hash': '5328187475809024041', 'cpuUser': '0.00',
'monitorResponse': '0', 'elapsedTime': '0', 'cpuSys': '0.00', 'vcpuPeriod':
100000L, 'timeOffset': '0', 'clientIp': '', 'pauseCode': 'NOERR',
'vcpuQuota': '-1'}}
Feb 02 11:55:13 d8-r13-c2-n1 vdsm[7654]: WARN MOM not available.
Feb 02 11:55:13 d8-r13-c2-n1 vdsm[7654]: WARN MOM not available, KSM stats
will be missing.
Feb 02 11:55:17 d8-r13-c2-n1 vdsm[7654]: WARN ping was deprecated in favor
of ping2 and confirmConnectivity
Do I need to install a complete ovirt-engine on the node first, bring the
node into ovirt, then bring up hosted-engine? I'd like to avoid this and
just go straight to hosted-engine setup.
thomas
7 years, 2 months
issue on engine deployment on oVirt node
by Vincent Kwiatkowski
Hi Folks,
I have tried a few times to configure a simple oVirt engine on oVirt node.
After a fresh install of the node, I connect to cockpit and launch the
engine setup; then at the end I get a message that I need to connect to
the VM with "hosted-engine --console" or via VNC.
Via VNC, I can't do anything; I get no prompt.
Using --console, I get the error:
internal error: character device console0 is not using a PTY
What can I do to continue the setup?
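Two hedged things that may unblock the console (both commands exist in the hosted-engine tooling, but whether they cure this PTY error is an assumption):
# set a temporary console password, then retry VNC
hosted-engine --add-console-password
# or attach through libvirt directly (may prompt for vdsm's SASL credentials)
virsh -c qemu:///system console HostedEngine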
Thx a lot in advance
--
Vincent Kwiatkowski | Production System Engineer |ULLINK | D: + 33 1
44 50 25 45 | T: +1 49 95 30 00
| 23/25 rue de Provence | 75009, Paris | vk(a)ullink.com
Please consider the environment before printing this email
--
*The information contained in or attached to this email is strictly
confidential. If you are not the intended recipient, please notify us
immediately by telephone and return the message to us.*
7 years, 2 months
Re: [ovirt-users] Manageiq ovn
by Alona Kaplan
On Thu, Feb 15, 2018 at 4:03 PM, Aliaksei Nazarenka <
aliaksei.nazarenka(a)gmail.com> wrote:
> and how i can change network in the created VM?
>
It is not possible via manageiq. Only via ovirt.
>
> Sorry for my intrusive questions)))
>
> 2018-02-15 16:51 GMT+03:00 Aliaksei Nazarenka <
> aliaksei.nazarenka(a)gmail.com>:
>
>> ovirt-provider-ovn-1.2.7-0.20180213232754.gitebd60ad.el7.centos.noarch
>> on hosted-engine
>> ovirt-provider-ovn-driver-1.2.5-1.el7.centos.noarch on ovirt hosts
>>
>> 2018-02-15 16:40 GMT+03:00 Alona Kaplan <alkaplan(a)redhat.com>:
>>
>>>
>>>
>>> On Thu, Feb 15, 2018 at 3:36 PM, Aliaksei Nazarenka <
>>> aliaksei.nazarenka(a)gmail.com> wrote:
>>>
>>>> when i try to create network router, i see this message: *Unable to
>>>> create Network Router "test_router": undefined method `[]' for nil:NilClass*
>>>>
>>>
>>> What ovn-provider version you're using? Can you please attach the ovn
>>> provider log ( /var/log/ovirt-provider-ovn.log)?
>>>
>>>
>>>>
>>>> 2018-02-15 16:20 GMT+03:00 Aliaksei Nazarenka <
>>>> aliaksei.nazarenka(a)gmail.com>:
>>>>
>>>>> Big Thank you! This work! But... Networks are created, but I do not
>>>>> see them in the ovirt manager, but through the ovn-nbctl command, I see all
>>>>> the networks. And maybe you can tell me how to assign a VM network from
>>>>> Manageiq?
>>>>>
>>>>> 2018-02-15 15:01 GMT+03:00 Alona Kaplan <alkaplan(a)redhat.com>:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Feb 15, 2018 at 1:54 PM, Aliaksei Nazarenka <
>>>>>> aliaksei.nazarenka(a)gmail.com> wrote:
>>>>>>
>>>>>>> Error - 1 Minute Ago
>>>>>>> undefined method `orchestration_stacks' for
>>>>>>> #<ManageIQ::Providers::Redhat::InfraManager:0x00000007bf9288> - I
>>>>>>> get this message if I try to create a network of overts and then try to
>>>>>>> check the status of the network manager.
>>>>>>>
>>>>>>
>>>>>> It is the same bug.
>>>>>> You need to apply the fixes in https://github.com/ManageIQ/ma
>>>>>> nageiq-providers-ovirt/pull/198/files to make it work.
>>>>>> The best option is to upgrade your version.
>>>>>>
>>>>>>
>>>>>>> 2018-02-15 14:28 GMT+03:00 Aliaksei Nazarenka <
>>>>>>> aliaksei.nazarenka(a)gmail.com>:
>>>>>>>
>>>>>>>> I tried to make changes to the file refresher_ovn_provider.yml -
>>>>>>>> changed the passwords, corrected the names of the names, but it was not
>>>>>>>> successful.
>>>>>>>>
>>>>>>>> 2018-02-15 14:26 GMT+03:00 Aliaksei Nazarenka <
>>>>>>>> aliaksei.nazarenka(a)gmail.com>:
>>>>>>>>
>>>>>>>>> Hi!
>>>>>>>>> I'm use oVirt 4.2.2 + Manageiq gaprindashvili-1.2018012514301
>>>>>>>>> 9_1450f27
>>>>>>>>> After i set this commits (upstream - https://bugzilla.redhat.com/
>>>>>>>>> 1542063) i no saw changes.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> 2018-02-15 11:22 GMT+03:00 Alona Kaplan <alkaplan(a)redhat.com>:
>>>>>>>>>
>>>>>>>>>> Hi,
>>>>>>>>>>
>>>>>>>>>> What version of manageiq you are using?
>>>>>>>>>> We had a bug https://bugzilla.redhat.com/1542152 (upstream -
>>>>>>>>>> https://bugzilla.redhat.com/1542063) that was fixed in version
>>>>>>>>>> 5.9.0.20
>>>>>>>>>>
>>>>>>>>>> Please let me know it upgrading the version helped you.
>>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>> Alona.
>>>>>>>>>>
>>>>>>>>>> On Wed, Feb 14, 2018 at 2:32 PM, Aliaksei Nazarenka <
>>>>>>>>>> aliaksei.nazarenka(a)gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Good afternoon!
>>>>>>>>>>> I read your article - https://www.ovirt.org/develop/
>>>>>>>>>>> release-management/features/network/manageiq_ovn/. I have only
>>>>>>>>>>> one question: how to create a network or subnet in Manageiq + ovirt 4.2.1.
>>>>>>>>>>> When I try to create a network, I need to select a tenant, but there is
>>>>>>>>>>> nothing that I could choose. How can it be?
>>>>>>>>>>>
>>>>>>>>>>> Sincerely. Alexey Nazarenko
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
7 years, 2 months
Re: [ovirt-users] Manageiq ovn
by Alona Kaplan
On Thu, Feb 15, 2018 at 3:36 PM, Aliaksei Nazarenka <
aliaksei.nazarenka(a)gmail.com> wrote:
> when i try to create network router, i see this message: *Unable to
> create Network Router "test_router": undefined method `[]' for nil:NilClass*
>
What ovn-provider version you're using? Can you please attach the ovn
provider log ( /var/log/ovirt-provider-ovn.log)?
>
> 2018-02-15 16:20 GMT+03:00 Aliaksei Nazarenka <
> aliaksei.nazarenka(a)gmail.com>:
>
>> Big Thank you! This work! But... Networks are created, but I do not see
>> them in the ovirt manager, but through the ovn-nbctl command, I see all the
>> networks. And maybe you can tell me how to assign a VM network from
>> Manageiq?
>>
>> 2018-02-15 15:01 GMT+03:00 Alona Kaplan <alkaplan(a)redhat.com>:
>>
>>>
>>>
>>> On Thu, Feb 15, 2018 at 1:54 PM, Aliaksei Nazarenka <
>>> aliaksei.nazarenka(a)gmail.com> wrote:
>>>
>>>> Error - 1 Minute Ago
>>>> undefined method `orchestration_stacks' for
>>>> #<ManageIQ::Providers::Redhat::InfraManager:0x00000007bf9288> - I get
>>>> this message if I try to create a network of overts and then try to check
>>>> the status of the network manager.
>>>>
>>>
>>> It is the same bug.
>>> You need to apply the fixes in https://github.com/ManageIQ/ma
>>> nageiq-providers-ovirt/pull/198/files to make it work.
>>> The best option is to upgrade your version.
>>>
>>>
>>>> 2018-02-15 14:28 GMT+03:00 Aliaksei Nazarenka <
>>>> aliaksei.nazarenka(a)gmail.com>:
>>>>
>>>>> I tried to make changes to the file refresher_ovn_provider.yml -
>>>>> changed the passwords, corrected the names of the names, but it was not
>>>>> successful.
>>>>>
>>>>> 2018-02-15 14:26 GMT+03:00 Aliaksei Nazarenka <
>>>>> aliaksei.nazarenka(a)gmail.com>:
>>>>>
>>>>>> Hi!
>>>>>> I'm use oVirt 4.2.2 + Manageiq gaprindashvili-1.2018012514301
>>>>>> 9_1450f27
>>>>>> After i set this commits (upstream - https://bugzilla.redhat.com/
>>>>>> 1542063) i no saw changes.
>>>>>>
>>>>>>
>>>>>> 2018-02-15 11:22 GMT+03:00 Alona Kaplan <alkaplan(a)redhat.com>:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> What version of manageiq you are using?
>>>>>>> We had a bug https://bugzilla.redhat.com/1542152 (upstream -
>>>>>>> https://bugzilla.redhat.com/1542063) that was fixed in version
>>>>>>> 5.9.0.20
>>>>>>>
>>>>>>> Please let me know it upgrading the version helped you.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Alona.
>>>>>>>
>>>>>>> On Wed, Feb 14, 2018 at 2:32 PM, Aliaksei Nazarenka <
>>>>>>> aliaksei.nazarenka(a)gmail.com> wrote:
>>>>>>>
>>>>>>>> Good afternoon!
>>>>>>>> I read your article - https://www.ovirt.org/develop/
>>>>>>>> release-management/features/network/manageiq_ovn/. I have only one
>>>>>>>> question: how to create a network or subnet in Manageiq + ovirt 4.2.1. When
>>>>>>>> I try to create a network, I need to select a tenant, but there is nothing
>>>>>>>> that I could choose. How can it be?
>>>>>>>>
>>>>>>>> Sincerely. Alexey Nazarenko
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
7 years, 2 months
Re: [ovirt-users] Manageiq ovn
by Alona Kaplan
Hi Alexey,
Please reply to the users list so all the users can benefit from the information.
Automatic sync of ovn networks to ovirt was added in version
ovirt-engine-4.2.1.3 (https://bugzilla.redhat.com/1511823).
If you use a lower version, you should import the network into ovirt manually
(Networks tab -> Import button).
Once the ovn network is imported into ovirt, a vnic profile is automatically
created for it.
In manageiq, you can assign this profile to a vm you provision (provision
vm -> network tab -> vlan field).
Alona.
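As a hedged sanity check before importing, the OVN side can be listed directly on the host running the provider; every logical switch shown should then be importable from the engine's Networks tab:
# on the ovirt-provider-ovn / OVN northbound host
ovn-nbctl ls-list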
On Thu, Feb 15, 2018 at 3:20 PM, Aliaksei Nazarenka <
aliaksei.nazarenka(a)gmail.com> wrote:
> Big Thank you! This work! But... Networks are created, but I do not see
> them in the ovirt manager, but through the ovn-nbctl command, I see all the
> networks. And maybe you can tell me how to assign a VM network from
> Manageiq?
>
> 2018-02-15 15:01 GMT+03:00 Alona Kaplan <alkaplan(a)redhat.com>:
>
>>
>>
>> On Thu, Feb 15, 2018 at 1:54 PM, Aliaksei Nazarenka <
>> aliaksei.nazarenka(a)gmail.com> wrote:
>>
>>> Error - 1 Minute Ago
>>> undefined method `orchestration_stacks' for
>>> #<ManageIQ::Providers::Redhat::InfraManager:0x00000007bf9288> - I get
>>> this message if I try to create a network of overts and then try to check
>>> the status of the network manager.
>>>
>>
>> It is the same bug.
>> You need to apply the fixes in https://github.com/ManageIQ/ma
>> nageiq-providers-ovirt/pull/198/files to make it work.
>> The best option is to upgrade your version.
>>
>>
>>> 2018-02-15 14:28 GMT+03:00 Aliaksei Nazarenka <
>>> aliaksei.nazarenka(a)gmail.com>:
>>>
>>>> I tried to make changes to the file refresher_ovn_provider.yml -
>>>> changed the passwords, corrected the names of the names, but it was not
>>>> successful.
>>>>
>>>> 2018-02-15 14:26 GMT+03:00 Aliaksei Nazarenka <
>>>> aliaksei.nazarenka(a)gmail.com>:
>>>>
>>>>> Hi!
>>>>> I'm use oVirt 4.2.2 + Manageiq gaprindashvili-1.2018012514301
>>>>> 9_1450f27
>>>>> After i set this commits (upstream - https://bugzilla.redhat.com/
>>>>> 1542063) i no saw changes.
>>>>>
>>>>>
>>>>> 2018-02-15 11:22 GMT+03:00 Alona Kaplan <alkaplan(a)redhat.com>:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> What version of manageiq you are using?
>>>>>> We had a bug https://bugzilla.redhat.com/1542152 (upstream -
>>>>>> https://bugzilla.redhat.com/1542063) that was fixed in version
>>>>>> 5.9.0.20
>>>>>>
>>>>>> Please let me know it upgrading the version helped you.
>>>>>>
>>>>>> Thanks,
>>>>>> Alona.
>>>>>>
>>>>>> On Wed, Feb 14, 2018 at 2:32 PM, Aliaksei Nazarenka <
>>>>>> aliaksei.nazarenka(a)gmail.com> wrote:
>>>>>>
>>>>>>> Good afternoon!
>>>>>>> I read your article - https://www.ovirt.org/develop/
>>>>>>> release-management/features/network/manageiq_ovn/. I have only one
>>>>>>> question: how to create a network or subnet in Manageiq + ovirt 4.2.1. When
>>>>>>> I try to create a network, I need to select a tenant, but there is nothing
>>>>>>> that I could choose. How can it be?
>>>>>>>
>>>>>>> Sincerely. Alexey Nazarenko
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
7 years, 2 months
Re: [ovirt-users] Manageiq ovn
by Alona Kaplan
On Thu, Feb 15, 2018 at 1:54 PM, Aliaksei Nazarenka <
aliaksei.nazarenka(a)gmail.com> wrote:
> Error - 1 Minute Ago
> undefined method `orchestration_stacks' for #<ManageIQ::Providers::Redhat:
> :InfraManager:0x00000007bf9288> - I get this message if I try to create a
> network of overts and then try to check the status of the network manager.
>
It is the same bug.
You need to apply the fixes in
https://github.com/ManageIQ/manageiq-providers-ovirt/pull/198/files to make
it work.
The best option is to upgrade your version.
> 2018-02-15 14:28 GMT+03:00 Aliaksei Nazarenka <
> aliaksei.nazarenka(a)gmail.com>:
>
>> I tried to make changes to the file refresher_ovn_provider.yml - changed
>> the passwords, corrected the names of the names, but it was not successful.
>>
>> 2018-02-15 14:26 GMT+03:00 Aliaksei Nazarenka <
>> aliaksei.nazarenka(a)gmail.com>:
>>
>>> Hi!
>>> I'm use oVirt 4.2.2 + Manageiq gaprindashvili-1.20180125143019_1450f27
>>> After i set this commits (upstream - https://bugzilla.redhat.com/
>>> 1542063) i no saw changes.
>>>
>>>
>>> 2018-02-15 11:22 GMT+03:00 Alona Kaplan <alkaplan(a)redhat.com>:
>>>
>>>> Hi,
>>>>
>>>> What version of manageiq you are using?
>>>> We had a bug https://bugzilla.redhat.com/1542152 (upstream -
>>>> https://bugzilla.redhat.com/1542063) that was fixed in version 5.9.0.20
>>>>
>>>> Please let me know it upgrading the version helped you.
>>>>
>>>> Thanks,
>>>> Alona.
>>>>
>>>> On Wed, Feb 14, 2018 at 2:32 PM, Aliaksei Nazarenka <
>>>> aliaksei.nazarenka(a)gmail.com> wrote:
>>>>
>>>>> Good afternoon!
>>>>> I read your article - https://www.ovirt.org/develop/
>>>>> release-management/features/network/manageiq_ovn/. I have only one
>>>>> question: how to create a network or subnet in Manageiq + ovirt 4.2.1. When
>>>>> I try to create a network, I need to select a tenant, but there is nothing
>>>>> that I could choose. How can it be?
>>>>>
>>>>> Sincerely. Alexey Nazarenko
>>>>>
>>>>
>>>>
>>>
>>
>
7 years, 2 months
hosted engine install fails on useless DHCP lookup
by Jamie Lawrence
Hello,
I'm seeing the hosted engine install fail on an Ansible playbook step. Log below. I tried looking at the file specified for retry, below (/usr/share/ovirt-hosted-engine-setup/ansible/bootstrap_local_vm.retry); it contains the word 'localhost'.
The log below didn't contain anything I could see that was actionable; given that it was an ansible error, I hunted down the config and enabled logging. On this run the error was different - the installer log was the same, but the reported error (from the installer) changed.
The first time, the installer said:
[ INFO ] TASK [Wait for the host to become non operational]
[ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": []}, "attempts": 150, "changed": false}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
[ INFO ] Stage: Clean up
Second:
[ INFO ] TASK [Get local vm ip]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true, "cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:11:e7:bd | awk '{ print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.093840", "end": "2018-02-13 16:53:08.658556", "rc": 0, "start": "2018-02-13 16:53:08.564716", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
[ INFO ] Stage: Clean up
Ansible log below; as with that second snippet, it appears that it was trying to parse out a host name from virsh's list of DHCP leases, couldn't, and died.
Which makes sense: I gave it a static IP, and unless I'm missing something, setup should not have been doing that. I verified that the answer file has the IP:
OVEHOSTED_VM/cloudinitVMStaticCIDR=str:10.181.26.150/24
Anyone see what is wrong here?
-j
hosted-engine --deploy log:
2018-02-13 16:20:32,138-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Force host-deploy in offline mode]
2018-02-13 16:20:33,041-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost]
2018-02-13 16:20:33,342-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [include_tasks]
2018-02-13 16:20:33,443-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost]
2018-02-13 16:20:33,744-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Obtain SSO token using username/password credentials]
2018-02-13 16:20:35,248-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost]
2018-02-13 16:20:35,550-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Add host]
2018-02-13 16:20:37,053-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost]
2018-02-13 16:20:37,355-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Wait for the host to become non operational]
2018-02-13 16:27:48,895-0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 {u'_ansible_parsed': True, u'_ansible_no_log': False, u'changed': False, u'attempts': 150, u'invocation': {u'module_args': {u'pattern': u'name=ovirt-1.squaretrade.com', u'fetch_nested': False, u'nested_attributes': []}}, u'ansible_facts': {u'ovirt_hosts': []}}
2018-02-13 16:27:48,995-0800 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:98 fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": []}, "attempts": 150, "changed": false}
2018-02-13 16:27:49,297-0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY RECAP [localhost] : ok: 42 changed: 17 unreachable: 0 skipped: 2 failed: 1
2018-02-13 16:27:49,397-0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY RECAP [ovirt-engine-1.squaretrade.com] : ok: 15 changed: 8 unreachable: 0 skipped: 4 failed: 0
2018-02-13 16:27:49,498-0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:180 ansible-playbook rc: 2
2018-02-13 16:27:49,498-0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:187 ansible-playbook stdout:
2018-02-13 16:27:49,499-0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:189 to retry, use: --limit @/usr/share/ovirt-hosted-engine-setup/ansible/bootstrap_local_vm.retry
2018-02-13 16:27:49,499-0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:190 ansible-playbook stderr:
2018-02-13 16:27:49,500-0800 DEBUG otopi.context context._executeMethod:143 method exception
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in _executeMethod
method['method']()
File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/misc.py", line 186, in _closeup
r = ah.run()
File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/ansible_utils.py", line 194, in run
raise RuntimeError(_('Failed executing ansible-playbook'))
RuntimeError: Failed executing ansible-playbook
2018-02-13 16:27:49,512-0800 ERROR otopi.context context._executeMethod:152 Failed to execute stage 'Closing up': Failed executing ansible-playbook
2018-02-13 16:27:49,513-0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN
- - - - - -
ansible log snip:
2018-02-13 16:52:47,548 ovirt-hosted-engine-setup-ansible ansible on_any args (<ansible.executor.task_result.TaskResult object at 0x7f00dc19f850>,) kwargs {}
2018-02-13 16:52:58,124 ovirt-hosted-engine-setup-ansible ansible on_any args (<ansible.executor.task_result.TaskResult object at 0x2a09310>,) kwargs {}
2018-02-13 16:53:08,954 ovirt-hosted-engine-setup-ansible var changed: host "localhost" var "local_vm_ip" type "<type 'dict'>" value: "{'stderr_lines': [], u'changed': True, u'end': u'2018-02-13 16:53:08.658556', u'stdout': u'', u'cmd': u"virsh -r net-dhcp-leases default | grep -i 00:16:3e:11:e7:bd | awk '{ print $5 }' | cut -f1 -d'/'", u'rc': 0, u'start': u'2018-02-13 16:53:08.564716', 'attempts': 50, u'stderr': u'', u'delta': u'0:00:00.093840', 'stdout_lines': [], 'failed': True}"
7 years, 2 months
Moving Combined Engine & Node to new network.
by Rulas Mur
Hi,
I set up the host+engine on CentOS 7 on my home network and everything
worked perfectly. However, when I connected it to my work network, networking
failed completely.
hostname -I would be blank.
lspci does list the hardware
nmcli d is empty
nmcli con show is empty
nmcli device status is empty
there is a device in /sys/class/net/
Is there a way to fix this? or do I have to reinstall?
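A minimal sketch of checks that might narrow this down, assuming the host uses NetworkManager and the device shows up under some name (eth0 below is a placeholder for whatever appears in /sys/class/net):

  # Is NetworkManager actually running? An empty 'nmcli d' often just
  # means the daemon is down or the device is unmanaged.
  systemctl status NetworkManager

  # What does the kernel itself see, independent of NetworkManager?
  ip link show
  ls /sys/class/net/

  # If the device exists but is unmanaged, mark it managed and restart:
  nmcli device set eth0 managed yes
  systemctl restart NetworkManager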
On another note, ovirt is amazing!
Thanks for the quality product,
Rulasmur
Q: Upgrade 4.2 -> 4.2.1 Dependency Problem
by Andrei V
Hi!
I ran into an unexpected problem upgrading an oVirt node (installed manually on CentOS).
This problem has to be fixed manually, otherwise the upgrade command from the engine also fails.
-> glusterfs-rdma = 3.12.5-2.el7
was installed manually as a dependency resolution for ovirt-host-4.2.1-1.el7.centos.x86_64.
Q: How do I get around this problem? Thanks in advance.
Error: Package: ovirt-host-4.2.1-1.el7.centos.x86_64 (ovirt-4.2)
Requires: glusterfs-rdma
Removing: glusterfs-rdma-3.12.5-2.el7.x86_64 (@ovirt-4.2-centos-gluster312)
glusterfs-rdma = 3.12.5-2.el7
Obsoleted By: mlnx-ofa_kernel-3.4-OFED.3.4.2.1.5.1.ged26eb5.1.rhel7u3.x86_64 (HP-spp)
Not found
Available: glusterfs-rdma-3.8.4-18.4.el7.centos.x86_64 (base)
glusterfs-rdma = 3.8.4-18.4.el7.centos
Available: glusterfs-rdma-3.12.0-1.el7.x86_64 (ovirt-4.2-centos-gluster312)
glusterfs-rdma = 3.12.0-1.el7
Available: glusterfs-rdma-3.12.1-1.el7.x86_64 (ovirt-4.2-centos-gluster312)
glusterfs-rdma = 3.12.1-1.el7
Available: glusterfs-rdma-3.12.1-2.el7.x86_64 (ovirt-4.2-centos-gluster312)
glusterfs-rdma = 3.12.1-2.el7
Available: glusterfs-rdma-3.12.3-1.el7.x86_64 (ovirt-4.2-centos-gluster312)
glusterfs-rdma = 3.12.3-1.el7
Available: glusterfs-rdma-3.12.4-1.el7.x86_64 (ovirt-4.2-centos-gluster312)
glusterfs-rdma = 3.12.4-1.el7
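One possible direction, assuming it is the HP-spp repository's mlnx-ofa_kernel package (the "Obsoleted By" line above) that pushes glusterfs-rdma out of the transaction, is to keep that repository out of the upgrade; a hedged sketch:

  # Retry the upgrade with the HP-spp repo disabled for this transaction:
  yum --disablerepo=HP-spp update ovirt-host

  # Or exclude the obsoleting package permanently in the repo file
  # (file name is a guess; use whichever file defines the HP-spp repo):
  #   /etc/yum.repos.d/HP-spp.repo:
  #     exclude=mlnx-ofa_kernel*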
Ovirt 3.6 to 4.2 upgrade
by Gary Lloyd
Hi
Is it possible/supported to upgrade from oVirt 3.6 straight to oVirt 4.2?
Does live migration still function between the older vdsm nodes and vdsm
nodes with software built against oVirt 4.2?
We changed a couple of the vdsm python files to enable iSCSI multipath on
direct LUNs.
(It's a fairly simple change to a couple of the python files.)
We've been running it this way since 2012 (oVirt 3.2).
Many Thanks
*Gary Lloyd*
________________________________________________
I.T. Systems:Keele University
Finance & IT Directorate
Keele:Staffs:IC1 Building:ST5 5NB:UK
+44 1782 733063
________________________________________________
ovirt 4.1 unable to deploy HostedEngine on next host: Configuration value not found: file=/etc/.../hosted-engine.conf
by Reznikov Alexei
Hi all!
After upgrading from oVirt 4.0 to 4.1, I have trouble adding the next
HostedEngine host to my cluster via the web UI... the host is added
successfully and comes up, but HE is not active on this host.
Logs from the troubled host:
# cat agent.log
> KeyError: 'Configuration value not found:
file=/etc/ovirt-hosted-engine/hosted-engine.conf, key=gateway'
# cat /etc/ovirt-hosted-engine/hosted-engine.conf
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
host_id=2
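The KeyError above complains about a missing 'gateway' key, and the pasted hosted-engine.conf indeed has no such line. A hedged sketch of a possible workaround, copying the gateway value a working HE host monitors (the address below is a placeholder):

  # On the troubled host, add the missing key:
  echo "gateway=192.168.1.254" >> /etc/ovirt-hosted-engine/hosted-engine.conf

  # Then restart the HA services so the agent re-reads the config:
  systemctl restart ovirt-ha-agent ovirt-ha-broker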
The deploy log from the engine is attached.
Troubled host:
ovirt-hosted-engine-setup-2.1.4-1.el7.centos.noarch
ovirt-host-deploy-1.6.7-1.el7.centos.noarch
vdsm-4.19.45-1.el7.centos.x86_64
CentOS Linux release 7.4.1708 (Core)
engine host:
ovirt-release41-4.1.9-1.el7.centos.noarch
ovirt-engine-4.1.9.1-1.el7.centos.noarch
CentOS Linux release 7.4.1708 (Core)
Please help me fix it.
Thanx, Alex.
Slow conversion from VMware in 4.1
by Luca 'remix_tj' Lorenzetto
Hello,
I've started my migrations from VMware today. I had already migrated over
200 VMs from VMware to another cluster based on 4.0, using our home-made
scripts interacting with the APIs. All the migrated VMs are running RHEL 6
or 7, with no SELinux.
We learned a lot about the requirements and also recorded some metrics
about migration times. In July, with 4.0 as the destination, we were
migrating a ~30 GB VM in ~40 minutes.
That was an acceptable time, considering that about 50% of our VMs are
around that size.
Today we started migrating to the production cluster, which instead runs
4.1.8. With the same scripts and the same API calls, for a VM of about
50 GB we expected to have it running in the new cluster after 70 minutes,
more or less.
Instead, the migration is taking more than 2 hours, and not because of
slow qemu-img conversion, given that we're transferring an entire disk
via HTTP.
Looking at the log, it seems the activities executed before qemu-img took
more than 2000 seconds. For example, it appears that dracut took more than
14 minutes, which in my opinion is a bit long.
Is there any option to get a quicker conversion? Tasks to run in the
guests before the conversion would also be acceptable.
We have to migrate ~300 VMs in 2.5 months, and we're only at 11 after
7 hours (and today an exception allowed us to start 4 hours early, but
usually our maintenance window is significantly shorter).
This is a filtered log reporting only the rows where we can see how much
time has passed:
[ 0.0] Opening the source -i libvirt -ic
vpx://vmwareuser%40domain@vcenter/DC/Cluster/Host?no_verify=1
vmtoconvert
[ 6.1] Creating an overlay to protect the source from being modified
[ 7.4] Initializing the target -o vdsm -os
/rhev/data-center/e8263fb4-114d-4706-b1c0-5defcd15d16b/a118578a-4cf2-4e0c-ac47-20e9f0321da1
--vdsm-image-uuid 1a93e503-ce57-4631-8dd2-eeeae45866ca --vdsm-vol-uuid
88d92582-0f53-43b0-89ff-af1c17ea8618 --vdsm-vm-uuid
1434e14f-e228-41c1-b769-dcf48b258b12 --vdsm-ovf-output
/var/run/vdsm/v2v
[ 7.4] Opening the overlay
[00034ms] /usr/libexec/qemu-kvm \
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Initializing cgroup subsys cpuacct
[ 0.000000] Linux version 3.10.0-693.11.1.el7.x86_64
(mockbuild@x86-041.build.eng.bos.redhat.com) (gcc version 4.8.5
20150623 (Red Hat 4.8.5-16) (GCC) ) #1 SMP Fri Oct 27 05:39:05 EDT
2017
[ 0.000000] Command line: panic=1 console=ttyS0 edd=off
udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1
cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable
8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1
guestfs_network=1 TERM=linux guestfs_identifier=v2v
[ 0.000000] e820: BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
[ 0.000000] BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000007cfddfff] usable
[ 0.000000] BIOS-e820: [mem 0x000000007cfde000-0x000000007cffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
[ 0.000000] NX (Execute Disable) protection: active
[ 0.000000] SMBIOS 2.8 present.
[ 0.000000] Hypervisor detected: KVM
[ 0.000000] e820: last_pfn = 0x7cfde max_arch_pfn = 0x400000000
[ 0.000000] x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106
[ 0.000000] found SMP MP-table at [mem 0x000f72f0-0x000f72ff]
mapped at [ffff8800000f72f0]
[ 0.000000] Using GB pages for direct mapping
[ 0.000000] RAMDISK: [mem 0x7ccb2000-0x7cfcffff]
[ 0.000000] Early table checksum verification disabled
[ 0.000000] ACPI: RSDP 00000000000f70d0 00014 (v00 BOCHS )
[ 0.000000] ACPI: RSDT 000000007cfe14d5 0002C (v01 BOCHS BXPCRSDT
00000001 BXPC 00000001)
[ 0.000000] ACPI: FACP 000000007cfe13e9 00074 (v01 BOCHS BXPCFACP
00000001 BXPC 00000001)
[ 0.000000] ACPI: DSDT 000000007cfe0040 013A9 (v01 BOCHS BXPCDSDT
00000001 BXPC 00000001)
[ 0.000000] ACPI: FACS 000000007cfe0000 00040
[ 0.000000] ACPI: APIC 000000007cfe145d 00078 (v01 BOCHS BXPCAPIC
00000001 BXPC 00000001)
[ 0.000000] No NUMA configuration found
[ 0.000000] Faking a node at [mem 0x0000000000000000-0x000000007cfddfff]
[ 0.000000] NODE_DATA(0) allocated [mem 0x7cc8b000-0x7ccb1fff]
[ 0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
[ 0.000000] kvm-clock: cpu 0, msr 0:7cc3b001, primary cpu clock
[ 0.000000] kvm-clock: using sched offset of 1030608733 cycles
[ 0.000000] Zone ranges:
[ 0.000000] DMA [mem 0x00001000-0x00ffffff]
[ 0.000000] DMA32 [mem 0x01000000-0xffffffff]
[ 0.000000] Normal empty
[ 0.000000] Movable zone start for each node
[ 0.000000] Early memory node ranges
[ 0.000000] node 0: [mem 0x00001000-0x0009efff]
[ 0.000000] node 0: [mem 0x00100000-0x7cfddfff]
[ 0.000000] Initmem setup node 0 [mem 0x00001000-0x7cfddfff]
[ 0.000000] ACPI: PM-Timer IO Port: 0x608
[ 0.000000] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
[ 0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
[ 0.000000] ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0])
[ 0.000000] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
[ 0.000000] Using ACPI (MADT) for SMP configuration information
[ 0.000000] smpboot: Allowing 1 CPUs, 0 hotplug CPUs
[ 0.000000] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
[ 0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
[ 0.000000] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
[ 0.000000] e820: [mem 0x7d000000-0xfeffbfff] available for PCI devices
[ 0.000000] Booting paravirtualized kernel on KVM
[ 0.000000] setup_percpu: NR_CPUS:5120 nr_cpumask_bits:1
nr_cpu_ids:1 nr_node_ids:1
[ 0.000000] PERCPU: Embedded 33 pages/cpu @ffff88007ca00000 s97048
r8192 d29928 u2097152
[ 0.000000] KVM setup async PF for cpu 0
[ 0.000000] kvm-stealtime: cpu 0, msr 7ca0f440
[ 0.000000] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes)
[ 0.000000] Built 1 zonelists in Node order, mobility grouping on.
Total pages: 503847
[ 0.000000] Policy zone: DMA32
[ 0.000000] Kernel command line: panic=1 console=ttyS0 edd=off
udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1
cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable
8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1
guestfs_network=1 TERM=linux guestfs_identifier=v2v
[ 0.000000] Disabling memory control group subsystem
[ 0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
[ 0.000000] x86/fpu: xstate_offset[2]: 0240, xstate_sizes[2]: 0100
[ 0.000000] xsave: enabled xstate_bv 0x7, cntxt size 0x340 using
standard form
[ 0.000000] Memory: 1994224k/2047864k available (6886k kernel code,
392k absent, 53248k reserved, 4545k data, 1764k init)
[ 0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
[ 0.000000] Hierarchical RCU implementation.
[ 0.000000] RCU restricting CPUs from NR_CPUS=5120 to nr_cpu_ids=1.
[ 0.000000] NR_IRQS:327936 nr_irqs:256 0
[ 0.000000] Console: colour *CGA 80x25
[ 0.000000] console [ttyS0] enabled
[ 0.000000] tsc: Detected 2099.998 MHz processor
[ 0.065500] Calibrating delay loop (skipped) preset value.. 4199.99
BogoMIPS (lpj=2099998)
[ 0.066153] pid_max: default: 32768 minimum: 301
[ 0.066548] Security Framework initialized
[ 0.066872] SELinux: Disabled at boot.
[ 0.067181] Yama: becoming mindful.
[ 0.067622] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes)
[ 0.068574] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes)
[ 0.069290] Mount-cache hash table entries: 4096 (order: 3, 32768 bytes)
[ 0.069813] Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes)
[ 0.070525] Initializing cgroup subsys memory
[ 0.070877] Initializing cgroup subsys devices
[ 0.071237] Initializing cgroup subsys freezer
[ 0.071589] Initializing cgroup subsys net_cls
[ 0.071932] Initializing cgroup subsys blkio
[ 0.072275] Initializing cgroup subsys perf_event
[ 0.072644] Initializing cgroup subsys hugetlb
[ 0.072984] Initializing cgroup subsys pids
[ 0.073316] Initializing cgroup subsys net_prio
[ 0.073721] CPU: Physical Processor ID: 0
[ 0.074810] mce: CPU supports 10 MCE banks
[ 0.075185] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
[ 0.075621] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0
[ 0.076030] tlb_flushall_shift: 6
[ 0.085827] Freeing SMP alternatives: 24k freed
[ 0.091125] ACPI: Core revision 20130517
[ 0.091976] ACPI: All ACPI Tables successfully acquired
[ 0.092448] ftrace: allocating 26586 entries in 104 pages
[ 0.116144] smpboot: Max logical packages: 1
[ 0.116640] Enabling x2apic
[ 0.116863] Enabled x2apic
[ 0.117290] Switched APIC routing to physical x2apic.
[ 0.118588] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[ 0.119054] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2683 v4 @
2.10GHz (fam: 06, model: 4f, stepping: 01)
[ 0.119813] Performance Events: 16-deep LBR, Broadwell events,
Intel PMU driver.
[ 0.121545] ... version: 2
[ 0.121847] ... bit width: 48
[ 0.122161] ... generic registers: 4
[ 0.122472] ... value mask: 0000ffffffffffff
[ 0.122874] ... max period: 000000007fffffff
[ 0.123276] ... fixed-purpose events: 3
[ 0.123584] ... event mask: 000000070000000f
[ 0.124004] KVM setup paravirtual spinlock
[ 0.125379] Brought up 1 CPUs
[ 0.125616] smpboot: Total of 1 processors activated (4199.99 BogoMIPS)
[ 0.126464] devtmpfs: initialized
[ 0.128347] EVM: security.selinux
[ 0.128608] EVM: security.ima
[ 0.128835] EVM: security.capability
[ 0.129796] atomic64 test passed for x86-64 platform with CX8 and with SSE
[ 0.130333] pinctrl core: initialized pinctrl subsystem
[ 0.130805] RTC time: 20:26:38, date: 01/24/18
[ 0.131217] NET: Registered protocol family 16
[ 0.131774] ACPI: bus type PCI registered
[ 0.132096] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[ 0.132660] PCI: Using configuration type 1 for base access
[ 0.133830] ACPI: Added _OSI(Module Device)
[ 0.134170] ACPI: Added _OSI(Processor Device)
[ 0.134514] ACPI: Added _OSI(3.0 _SCP Extensions)
[ 0.134872] ACPI: Added _OSI(Processor Aggregator Device)
[ 0.137001] ACPI: Interpreter enabled
[ 0.137303] ACPI: (supports S0 S5)
[ 0.137573] ACPI: Using IOAPIC for interrupt routing
[ 0.137971] PCI: Using host bridge windows from ACPI; if necessary,
use "pci=nocrs" and report a bug
[ 0.140442] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[ 0.140917] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI]
[ 0.141446] acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM
[ 0.141961] acpi PNP0A03:00: fail to add MMCONFIG information,
can't access extended PCI configuration space under this bridge.
[ 0.142997] acpiphp: Slot [2] registered
[ 0.143309] acpiphp: Slot [3] registered
[ 0.143625] acpiphp: Slot [4] registered
[ 0.143949] acpiphp: Slot [5] registered
[ 0.144260] acpiphp: Slot [6] registered
[ 0.144575] acpiphp: Slot [7] registered
[ 0.144887] acpiphp: Slot [8] registered
[ 0.145205] acpiphp: Slot [9] registered
[ 0.145523] acpiphp: Slot [10] registered
[ 0.145841] acpiphp: Slot [11] registered
[ 0.146161] acpiphp: Slot [12] registered
[ 0.146642] acpiphp: Slot [13] registered
[ 0.146960] acpiphp: Slot [14] registered
[ 0.147279] acpiphp: Slot [15] registered
[ 0.147602] acpiphp: Slot [16] registered
[ 0.147934] acpiphp: Slot [17] registered
[ 0.148255] acpiphp: Slot [18] registered
[ 0.148579] acpiphp: Slot [19] registered
[ 0.148896] acpiphp: Slot [20] registered
[ 0.149219] acpiphp: Slot [21] registered
[ 0.149546] acpiphp: Slot [22] registered
[ 0.149863] acpiphp: Slot [23] registered
[ 0.150178] acpiphp: Slot [24] registered
[ 0.150505] acpiphp: Slot [25] registered
[ 0.150824] acpiphp: Slot [26] registered
[ 0.151139] acpiphp: Slot [27] registered
[ 0.151461] acpiphp: Slot [28] registered
[ 0.151786] acpiphp: Slot [29] registered
[ 0.152104] acpiphp: Slot [30] registered
[ 0.152426] acpiphp: Slot [31] registered
[ 0.152741] PCI host bridge to bus 0000:00
[ 0.153059] pci_bus 0000:00: root bus resource [bus 00-ff]
[ 0.153478] pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
[ 0.153991] pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
[ 0.154508] pci_bus 0000:00: root bus resource [mem
0x000a0000-0x000bffff window]
[ 0.155072] pci_bus 0000:00: root bus resource [mem
0x7d000000-0xfebfffff window]
[ 0.162550] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
[ 0.163097] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
[ 0.163590] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
[ 0.164129] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
[ 0.165004] pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by
PIIX4 ACPI
[ 0.165564] pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
[ 0.223140] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
[ 0.223712] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
[ 0.224245] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
[ 0.224789] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
[ 0.225296] ACPI: PCI Interrupt Link [LNKS] (IRQs *9)
[ 0.225817] ACPI: Enabled 2 GPEs in block 00 to 0F
[ 0.226262] vgaarb: loaded
[ 0.227000] SCSI subsystem initialized
[ 0.227314] ACPI: bus type USB registered
[ 0.227640] usbcore: registered new interface driver usbfs
[ 0.228068] usbcore: registered new interface driver hub
[ 0.228487] usbcore: registered new device driver usb
[ 0.228936] PCI: Using ACPI for IRQ routing
[ 0.229436] NetLabel: Initializing
[ 0.230112] NetLabel: domain hash size = 128
[ 0.230455] NetLabel: protocols = UNLABELED CIPSOv4
[ 0.230843] NetLabel: unlabeled traffic allowed by default
[ 0.231317] amd_nb: Cannot enumerate AMD northbridges
[ 0.231722] Switched to clocksource kvm-clock
[ 0.235503] pnp: PnP ACPI init
[ 0.235767] ACPI: bus type PNP registered
[ 0.236396] pnp: PnP ACPI: found 5 devices
[ 0.236716] ACPI: bus type PNP unregistered
[ 0.242333] NET: Registered protocol family 2
[ 0.242806] TCP established hash table entries: 16384 (order: 5,
131072 bytes)
[ 0.243384] TCP bind hash table entries: 16384 (order: 6, 262144 bytes)
[ 0.243907] TCP: Hash tables configured (established 16384 bind 16384)
[ 0.244414] TCP: reno registered
[ 0.244668] UDP hash table entries: 1024 (order: 3, 32768 bytes)
[ 0.245130] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes)
[ 0.245656] NET: Registered protocol family 1
[ 0.246013] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[ 0.246473] pci 0000:00:01.0: PIIX3: Enabling Passive Release
[ 0.246924] pci 0000:00:01.0: Activating ISA DMA hang workarounds
[ 0.247457] Unpacking initramfs...
[ 0.249930] Freeing initrd memory: 3192k freed
[ 0.251174] sha1_ssse3: Using AVX optimized SHA-1 implementation
[ 0.251706] sha256_ssse3: Using AVX2 optimized SHA-256 implementation
[ 0.252355] futex hash table entries: 256 (order: 2, 16384 bytes)
[ 0.252836] Initialise system trusted keyring
[ 0.253187] audit: initializing netlink socket (disabled)
[ 0.253610] type=2000 audit(1516825598.479:1): initialized
[ 0.275426] HugeTLB registered 1 GB page size, pre-allocated 0 pages
[ 0.275927] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[ 0.277129] zpool: loaded
[ 0.277350] zbud: loaded
[ 0.277669] VFS: Disk quotas dquot_6.5.2
[ 0.277998] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[ 0.278609] msgmni has been set to 3901
[ 0.278956] Key type big_key registered
[ 0.279450] NET: Registered protocol family 38
[ 0.279810] Key type asymmetric registered
[ 0.280125] Asymmetric key parser 'x509' registered
[ 0.280523] Block layer SCSI generic (bsg) driver version 0.4
loaded (major 250)
[ 0.281107] io scheduler noop registered
[ 0.281416] io scheduler deadline registered (default)
[ 0.281839] io scheduler cfq registered
[ 0.282216] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[ 0.282648] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[ 0.283250] input: Power Button as
/devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
[ 0.283835] ACPI: Power Button [PWRF]
[ 0.284207] GHES: HEST is not enabled!
[ 0.284534] Serial: 8250/16550 driver, 1 ports, IRQ sharing enabled
[ 0.307889] 00:04: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[ 0.308457] Non-volatile memory driver v1.3
[ 0.308809] Linux agpgart interface v0.103
[ 0.309200] crash memory driver: version 1.1
[ 0.309568] rdac: device handler registered
[ 0.309913] hp_sw: device handler registered
[ 0.310247] emc: device handler registered
[ 0.310565] alua: device handler registered
[ 0.310922] libphy: Fixed MDIO Bus: probed
[ 0.311267] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[ 0.311780] ehci-pci: EHCI PCI platform driver
[ 0.312129] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[ 0.312609] ohci-pci: OHCI PCI platform driver
[ 0.312958] uhci_hcd: USB Universal Host Controller Interface driver
[ 0.313474] usbcore: registered new interface driver usbserial
[ 0.313926] usbcore: registered new interface driver usbserial_generic
[ 0.314428] usbserial: USB Serial support registered for generic
[ 0.314911] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU]
at 0x60,0x64 irq 1,12
[ 0.316032] serio: i8042 KBD port at 0x60,0x64 irq 1
[ 0.316418] serio: i8042 AUX port at 0x60,0x64 irq 12
[ 0.316857] mousedev: PS/2 mouse device common for all mice
[ 0.317468] input: AT Translated Set 2 keyboard as
/devices/platform/i8042/serio0/input/input1
[ 0.318561] input: VirtualPS/2 VMware VMMouse as
/devices/platform/i8042/serio1/input/input2
[ 0.319363] input: VirtualPS/2 VMware VMMouse as
/devices/platform/i8042/serio1/input/input3
[ 0.320042] rtc_cmos 00:00: RTC can wake from S4
[ 0.320573] rtc_cmos 00:00: rtc core: registered rtc_cmos as rtc0
[ 0.321099] rtc_cmos 00:00: alarms up to one day, y3k, 114 bytes nvram
[ 0.321632] cpuidle: using governor menu
[ 0.321989] hidraw: raw HID events driver (C) Jiri Kosina
[ 0.322467] usbcore: registered new interface driver usbhid
[ 0.322894] usbhid: USB HID core driver
[ 0.323734] drop_monitor: Initializing network drop monitor service
[ 0.324272] TCP: cubic registered
[ 0.324537] Initializing XFRM netlink socket
[ 0.324936] NET: Registered protocol family 10
[ 0.325410] NET: Registered protocol family 17
[ 0.325872] microcode: CPU0 sig=0x406f1, pf=0x1, revision=0x1
[ 0.326331] microcode: Microcode Update Driver: v2.01
<tigran@aivazian.fsnet.co.uk>, Peter Oruba
[ 0.327060] Loading compiled-in X.509 certificates
[ 0.327855] Loaded X.509 cert 'Red Hat Enterprise Linux Driver
Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
[ 0.329151] Loaded X.509 cert 'Red Hat Enterprise Linux kpatch
signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
[ 0.330379] Loaded X.509 cert 'Red Hat Enterprise Linux kernel
signing key: 34fc3b85a61b8fead6e9e905e7e602a1f7fa049a'
[ 0.331196] registered taskstats version 1
[ 0.331639] Key type trusted registered
[ 0.332056] Key type encrypted registered
[ 0.332920] IMA: No TPM chip found, activating TPM-bypass!
[ 0.333605] Magic number: 2:270:448
[ 0.333970] rtc_cmos 00:00: setting system clock to 2018-01-24
20:26:38 UTC (1516825598)
[ 0.335302] Freeing unused kernel memory: 1764k freed
[ 0.339427] alg: No test for crc32 (crc32-pclmul)
[ 0.342995] alg: No test for crc32 (crc32-generic)
[ 0.352535] scsi host0: ata_piix
[ 0.352853] scsi host1: ata_piix
[ 0.353127] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
[ 0.353645] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
[ 0.541003] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 10
[ 0.545766] random: fast init done
[ 0.548737] random: crng init done
[ 0.565923] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
[ 0.567592] scsi host2: Virtio SCSI HBA
[ 0.569801] scsi 2:0:0:0: Direct-Access QEMU QEMU HARDDISK
2.5+ PQ: 0 ANSI: 5
[ 0.570526] scsi 2:0:1:0: Direct-Access QEMU QEMU HARDDISK
2.5+ PQ: 0 ANSI: 5
[ 0.580538] sd 2:0:0:0: [sda] 104857600 512-byte logical blocks:
(53.6 GB/50.0 GiB)
[ 0.581264] sd 2:0:1:0: [sdb] 8388608 512-byte logical blocks:
(4.29 GB/4.00 GiB)
[ 0.581894] sd 2:0:0:0: [sda] Write Protect is off
[ 0.582312] sd 2:0:0:0: [sda] Write cache: enabled, read cache:
enabled, doesn't support DPO or FUA
[ 0.583032] sd 2:0:1:0: [sdb] Write Protect is off
[ 0.583444] sd 2:0:1:0: [sdb] Write cache: enabled, read cache:
enabled, doesn't support DPO or FUA
[ 0.586373] sd 2:0:1:0: [sdb] Attached SCSI disk
[ 0.602190] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 11
[ 0.636655] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10
[ 1.253809] tsc: Refined TSC clocksource calibration: 2099.994 MHz
[ 1.510203] sda: sda1 sda2 sda3
[ 1.511710] sd 2:0:0:0: [sda] Attached SCSI disk
[ 1.528245] EXT4-fs (sdb): mounting ext2 file system using the ext4 subsystem
[ 1.530353] EXT4-fs (sdb): mounted filesystem without journal. Opts:
[/usr/lib/tmpfiles.d/systemd.conf:11] Unknown group 'utmp'.
[/usr/lib/tmpfiles.d/systemd.conf:19] Unknown user 'systemd-network'.
[/usr/lib/tmpfiles.d/systemd.conf:20] Unknown user 'systemd-network'.
[/usr/lib/tmpfiles.d/systemd.conf:21] Unknown user 'systemd-network'.
[/usr/lib/tmpfiles.d/systemd.conf:25] Unknown group 'systemd-journal'.
[/usr/lib/tmpfiles.d/systemd.conf:26] Unknown group 'systemd-journal'.
[ 1.650422] input: PC Speaker as /devices/platform/pcspkr/input/input4
[ 1.655216] piix4_smbus 0000:00:01.3: SMBus Host Controller at
0x700, revision 0
[ 1.694118] sd 2:0:0:0: Attached scsi generic sg0 type 0
[ 1.696802] sd 2:0:1:0: Attached scsi generic sg1 type 0
[ 1.698009] FDC 0 is a S82078B
[ 1.710807] AES CTR mode by8 optimization enabled
[ 1.724293] ppdev: user-space parallel port driver
[ 1.732252] Error: Driver 'pcspkr' is already registered, aborting...
[ 1.734673] alg: No test for __gcm-aes-aesni (__driver-gcm-aes-aesni)
[ 1.746232] EDAC MC: Ver: 3.0.0
[ 1.749324] EDAC sbridge: Ver: 1.1.1
[ 25.658309] device-mapper: uevent: version 1.0.3
[ 25.659092] device-mapper: ioctl: 4.35.0-ioctl (2016-06-23)
initialised: dm-devel@redhat.com
[ 57.8] Inspecting the overlay
[ 51.302190] EXT4-fs (sda1): mounted filesystem with ordered data
mode. Opts: (null)
[ 58.667082] EXT4-fs (dm-1): mounted filesystem with ordered data
mode. Opts: (null)
[ 61.147593] EXT4-fs (dm-4): mounted filesystem with ordered data
mode. Opts: (null)
[ 63.977572] EXT4-fs (dm-0): mounted filesystem with ordered data
mode. Opts: (null)
[ 75.614795] EXT4-fs (dm-6): mounted filesystem with ordered data
mode. Opts: (null)
[ 80.782266] EXT4-fs (dm-5): mounted filesystem with ordered data
mode. Opts: (null)
[ 98.734329] EXT4-fs (dm-2): mounted filesystem with ordered data
mode. Opts: (null)
[ 102.090148] EXT4-fs (dm-7): mounted filesystem with ordered data
mode. Opts: (null)
[ 105.057661] EXT4-fs (dm-3): mounted filesystem with ordered data
mode. Opts: (null)
[ 108.085788] EXT4-fs (dm-9): mounted filesystem with ordered data
mode. Opts: (null)
[ 111.328257] EXT4-fs (dm-8): mounted filesystem with ordered data
mode. Opts: (null)
[ 112.201934] EXT4-fs (dm-0): mounted filesystem with ordered data
mode. Opts: (null)
[ 112.212101] EXT4-fs (dm-2): mounted filesystem with ordered data
mode. Opts: (null)
[ 112.221689] EXT4-fs (dm-5): mounted filesystem with ordered data
mode. Opts: (null)
[ 112.233016] EXT4-fs (dm-6): mounted filesystem with ordered data
mode. Opts: (null)
[ 112.971075] EXT4-fs (dm-4): mounted filesystem with ordered data
mode. Opts: (null)
[ 113.788961] EXT4-fs (dm-1): mounted filesystem with ordered data
mode. Opts: (null)
[ 113.799156] EXT4-fs (sda1): mounted filesystem with ordered data
mode. Opts: (null)
[ 113.811402] EXT4-fs (dm-3): mounted filesystem with ordered data
mode. Opts: (null)
[ 113.823347] EXT4-fs (dm-9): mounted filesystem with ordered data
mode. Opts: (null)
[ 115.345857] EXT4-fs (dm-7): mounted filesystem with ordered data
mode. Opts: (null)
[ 115.356280] EXT4-fs (dm-8): mounted filesystem with ordered data
mode. Opts: (null)
[ 476.5] Checking for sufficient free disk space in the guest
[ 476.5] Estimating space required on target for each disk
[ 476.5] Converting Red Hat Enterprise Linux Server 7.4 (Maipo) to run on KVM
[ 1072.265252] dracut[1565] No '/dev/log' or 'logger' included for
syslog logging
[ 1076.444899] dracut[1565] Executing: /sbin/dracut --verbose
--add-drivers "virtio virtio_ring virtio_blk virtio_scsi virtio_net
virtio_pci" /boot/initramfs-3.10.0-693.el7.x86_64.img
3.10.0-693.el7.x86_64
[ 1104.118050] dracut[1565] dracut module 'busybox' will not be
installed, because command 'busybox' could not be found!
[ 1111.893587] dracut[1565] dracut module 'crypt' will not be
installed, because command 'cryptsetup' could not be found!
[ 1112.694542] dracut[1565] dracut module 'dmraid' will not be
installed, because command 'dmraid' could not be found!
[ 1117.763735] dracut[1565] dracut module 'mdraid' will not be
installed, because command 'mdadm' could not be found!
[ 1117.769004] dracut[1565] dracut module 'multipath' will not be
installed, because command 'multipath' could not be found!
[ 1122.366992] dracut[1565] dracut module 'cifs' will not be
installed, because command 'mount.cifs' could not be found!
[ 1122.387968] dracut[1565] dracut module 'iscsi' will not be
installed, because command 'iscsistart' could not be found!
[ 1122.390569] dracut[1565] dracut module 'iscsi' will not be
installed, because command 'iscsi-iname' could not be found!
[ 1140.889553] dracut[1565] dracut module 'busybox' will not be
installed, because command 'busybox' could not be found!
[ 1140.910458] dracut[1565] dracut module 'crypt' will not be
installed, because command 'cryptsetup' could not be found!
[ 1140.915646] dracut[1565] dracut module 'dmraid' will not be
installed, because command 'dmraid' could not be found!
[ 1140.924489] dracut[1565] dracut module 'mdraid' will not be
installed, because command 'mdadm' could not be found!
[ 1140.928995] dracut[1565] dracut module 'multipath' will not be
installed, because command 'multipath' could not be found!
[ 1140.939832] dracut[1565] dracut module 'cifs' will not be
installed, because command 'mount.cifs' could not be found!
[ 1140.954810] dracut[1565] dracut module 'iscsi' will not be
installed, because command 'iscsistart' could not be found!
[ 1140.957229] dracut[1565] dracut module 'iscsi' will not be
installed, because command 'iscsi-iname' could not be found!
[ 1142.066303] dracut[1565] *** Including module: bash ***
[ 1142.073837] dracut[1565] *** Including module: nss-softokn ***
[ 1143.838047] dracut[1565] *** Including module: i18n ***
[ 1230.935044] dracut[1565] *** Including module: network ***
[ 1323.749409] dracut[1565] *** Including module: ifcfg ***
[ 1323.755682] dracut[1565] *** Including module: drm ***
[ 1340.716219] dracut[1565] *** Including module: plymouth ***
[ 1359.941093] dracut[1565] *** Including module: dm ***
[ 1366.392221] dracut[1565] Skipping udev rule: 64-device-mapper.rules
[ 1366.394670] dracut[1565] Skipping udev rule: 60-persistent-storage-dm.rules
[ 1366.397021] dracut[1565] Skipping udev rule: 55-dm.rules
[ 1375.796931] dracut[1565] *** Including module: kernel-modules ***
[ 1627.998656] dracut[1565] *** Including module: lvm ***
[ 1631.138460] dracut[1565] Skipping udev rule: 64-device-mapper.rules
[ 1631.141015] dracut[1565] Skipping udev rule: 56-lvm.rules
[ 1631.143409] dracut[1565] Skipping udev rule: 60-persistent-storage-lvm.rules
[ 1635.270706] dracut[1565] *** Including module: qemu ***
[ 1635.277842] dracut[1565] *** Including module: rootfs-block ***
[ 1636.845616] dracut[1565] *** Including module: terminfo ***
[ 1639.189294] dracut[1565] *** Including module: udev-rules ***
[ 1640.076624] dracut[1565] Skipping udev rule: 40-redhat-cpu-hotplug.rules
[ 1649.962889] dracut[1565] Skipping udev rule: 91-permissions.rules
[ 1651.008527] dracut[1565] *** Including module: biosdevname ***
[ 1651.921630] dracut[1565] *** Including module: systemd ***
[ 1685.124521] dracut[1565] *** Including module: usrmount ***
[ 1685.128532] dracut[1565] *** Including module: base ***
[ 1694.743507] dracut[1565] *** Including module: fs-lib ***
[ 1696.295216] dracut[1565] *** Including module: shutdown ***
[ 1698.578228] dracut[1565] *** Including modules done ***
[ 1699.586287] dracut[1565] *** Installing kernel module dependencies
and firmware ***
[ 1717.505952] dracut[1565] *** Installing kernel module dependencies
and firmware done ***
[ 1724.539224] dracut[1565] *** Resolving executable dependencies ***
[ 1844.709874] dracut[1565] *** Resolving executable dependencies done***
[ 1844.723313] dracut[1565] *** Hardlinking files ***
[ 1847.281611] dracut[1565] *** Hardlinking files done ***
[ 1847.284119] dracut[1565] *** Stripping files ***
[ 1908.635888] dracut[1565] *** Stripping files done ***
[ 1908.638262] dracut[1565] *** Generating early-microcode cpio image
contents ***
[ 1908.645054] dracut[1565] *** Constructing GenuineIntel.bin ****
[ 1909.567397] dracut[1565] *** Store current command line parameters ***
[ 1909.571686] dracut[1565] *** Creating image file ***
[ 1909.574239] dracut[1565] *** Creating microcode section ***
[ 1911.789907] dracut[1565] *** Created microcode section ***
[ 1921.680575] dracut[1565] *** Creating image file done ***
[ 1926.764407] dracut[1565] *** Creating initramfs image file
'/boot/initramfs-3.10.0-693.el7.x86_64.img' done ***
[1994.1] Mapping filesystem data to avoid copying unused and blank areas
[ 1984.841231] EXT4-fs (dm-8): mounted filesystem with ordered data
mode. Opts: discard
[ 1987.252106] EXT4-fs (dm-9): mounted filesystem with ordered data
mode. Opts: discard
[ 1990.531305] EXT4-fs (dm-3): mounted filesystem with ordered data
mode. Opts: discard
[ 1992.903109] EXT4-fs (dm-7): mounted filesystem with ordered data
mode. Opts: discard
[ 1995.876230] EXT4-fs (dm-2): mounted filesystem with ordered data
mode. Opts: discard
[ 1995.986384] EXT4-fs (dm-5): mounted filesystem with ordered data
mode. Opts: discard
[ 1997.748087] EXT4-fs (dm-6): mounted filesystem with ordered data
mode. Opts: discard
[ 1997.785914] EXT4-fs (dm-0): mounted filesystem with ordered data
mode. Opts: discard
[ 1997.824003] EXT4-fs (dm-4): mounted filesystem with ordered data
mode. Opts: discard
[ 2000.172658] EXT4-fs (dm-1): mounted filesystem with ordered data
mode. Opts: discard
[ 2001.214202] EXT4-fs (sda1): mounted filesystem with ordered data
mode. Opts: discard
[2010.7] Closing the overlay
[2010.7] Checking if the guest needs BIOS or UEFI to boot
[2010.7] Assigning disks to buses
[2010.7] Copying disk 1/1 to
/rhev/data-center/e8263fb4-114d-4706-b1c0-5defcd15d16b/a118578a-4cf2-4e0c-ac47-20e9f0321da1/images/1a93e503-ce57-4631-8dd2-eeeae45866ca/88d92582-0f53-43b0-89ff-af1c17ea8618
(raw)
[7000.4] Creating output metadata
[7000.4] Finishing off
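To see which stages dominate, the stage timestamps can be turned into per-stage durations; a small sketch, assuming the log is saved as v2v.log and that virt-v2v stage lines carry one decimal digit ("[ 476.5] ...") while kernel/dracut lines carry six ("[    0.000000] ..."):

  awk -F '[][]' '
  /^\[ *[0-9]+\.[0-9]\] / {
      t = $2 + 0
      if (started)
          printf "%8.1f s %s\n", t - prev, label  # time spent in previous stage
      started = 1; prev = t; label = $3
  }' v2v.log

On the log above this attributes roughly 420 s to inspecting the overlay, roughly 1500 s to the conversion (mostly the dracut rebuild), and roughly 5000 s to the disk copy itself.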
Any help is appreciated.
Luca
--
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)
"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)
Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <lorenzetto.luca@gmail.com>
VM with multiple vdisks can't migrate
by fsoyer
Hi all,
Yesterday I discovered a problem when migrating VMs with more than one vdisk.
On our test servers (oVirt 4.1, shared storage with Gluster), I created the
2 VMs needed for a test from a template with a 20G vdisk. To these VMs I
added a 100G vdisk (for these tests I didn't want to waste time extending
the existing vdisks... but I lost time in the end...). The VMs with the
2 vdisks worked well.
Then I saw some updates waiting on the host. I tried to put it into
maintenance... but it got stuck on those two VMs. They were marked
"migrating" but were no longer accessible. Other (small) VMs with only
1 vdisk were migrated without problem at the same time.
I saw that a kvm process for the (big) VMs was launched on the source AND
the destination host, but after tens of minutes the migration and the VMs
were still frozen. I tried to cancel the migration for the VMs: it failed.
The only way to stop it was to power off the VMs: the kvm processes died
on the 2 hosts and the GUI reported a failed migration.
Just in case, I deleted the second vdisk on one of these VMs: it then
migrated without error, and with no access problem.
On the second VM I extended the first vdisk and then deleted the second
vdisk: it now migrates without problem!
So, after another test with a VM with 2 vdisks, I can say that this is
what blocked the migration process :(
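For a frozen migration like this one, one way to see whether any data is still moving is to ask libvirt directly on the source host; a minimal sketch, assuming the libvirt domain name matches the oVirt VM name as it appears in the logs below:

  # Watch the live-migration job of the stuck VM (read-only query):
  watch -n 5 "virsh -r domjobinfo Oracle_SECONDARY"

  # If it never progresses, the job can be aborted from the source host:
  virsh domjobabort Oracle_SECONDARY

This is only a diagnostic sketch; it does not by itself explain why a second vdisk blocks the migration.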
In engine.log, for a VMs with 1 vdisk migrating well, we see :2018-02-1=
2 16:46:29,705+01 INFO =C2=A0[org.ovirt.engine.core.bll.MigrateVmToServ=
erCommand] (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Loc=
k Acquired to object 'EngineLock:{exclusiveLocks=3D'[3f57e669-5e4c-4d10=
-85cc-d573004a099d=3DVM]', sharedLocks=3D''}'
2018-02-12 16:46:29,955+01 INFO =C2=A0[org.ovirt.engine.core.bll.Migrat=
eVmToServerCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-=
46a8-82c8-fd8293da5725] Running command: MigrateVmToServerCommand inter=
nal: false. Entities affected : =C2=A0ID: 3f57e669-5e4c-4d10-85cc-d5730=
04a099d Type: VMAction group MIGRATE=5FVM with role type USER
2018-02-12 16:46:30,261+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-4=
6a8-82c8-fd8293da5725] START, MigrateVDSCommand( MigrateVDSCommandParam=
eters:{runAsync=3D'true', hostId=3D'ce3938b1-b23f-4d22-840a-f17d7cd87bb=
1', vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost=3D'192.168.0=
.6', dstVdsId=3D'd569c2dd-8f30-4878-8aea-858db285cf69', dstHost=3D'192.=
168.0.5:54321', migrationMethod=3D'ONLINE', tunnelMigration=3D'false', =
migrationDowntime=3D'0', autoConverge=3D'true', migrateCompressed=3D'fa=
lse', consoleAddress=3D'null', maxBandwidth=3D'500', enableGuestEvents=3D=
'true', maxIncomingMigrations=3D'2', maxOutgoingMigrations=3D'2', conve=
rgenceSchedule=3D'[init=3D[{name=3DsetDowntime, params=3D[100]}], stall=
ing=3D[{limit=3D1, action=3D{name=3DsetDowntime, params=3D[150]}}, {lim=
it=3D2, action=3D{name=3DsetDowntime, params=3D[200]}}, {limit=3D3, act=
ion=3D{name=3DsetDowntime, params=3D[300]}}, {limit=3D4, action=3D{name=
=3DsetDowntime, params=3D[400]}}, {limit=3D6, action=3D{name=3DsetDownt=
ime, params=3D[500]}}, {limit=3D-1, action=3D{name=3Dabort, params=3D[]=
}}]]'}), log id: 14f61ee0
2018-02-12 16:46:30,262+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) =
[2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateBrokerVDSCommand(H=
ostName =3D victor.local.systea.fr, MigrateVDSCommandParameters:{runAsy=
nc=3D'true', hostId=3D'ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId=3D'3=
f57e669-5e4c-4d10-85cc-d573004a099d', srcHost=3D'192.168.0.6', dstVdsId=
=3D'd569c2dd-8f30-4878-8aea-858db285cf69', dstHost=3D'192.168.0.5:54321=
', migrationMethod=3D'ONLINE', tunnelMigration=3D'false', migrationDown=
time=3D'0', autoConverge=3D'true', migrateCompressed=3D'false', console=
Address=3D'null', maxBandwidth=3D'500', enableGuestEvents=3D'true', max=
IncomingMigrations=3D'2', maxOutgoingMigrations=3D'2', convergenceSched=
ule=3D'[init=3D[{name=3DsetDowntime, params=3D[100]}], stalling=3D[{lim=
it=3D1, action=3D{name=3DsetDowntime, params=3D[150]}}, {limit=3D2, act=
ion=3D{name=3DsetDowntime, params=3D[200]}}, {limit=3D3, action=3D{name=
=3DsetDowntime, params=3D[300]}}, {limit=3D4, action=3D{name=3DsetDownt=
ime, params=3D[400]}}, {limit=3D6, action=3D{name=3DsetDowntime, params=
=3D[500]}}, {limit=3D-1, action=3D{name=3Dabort, params=3D[]}}]]'}), lo=
g id: 775cd381
2018-02-12 16:46:30,277+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) =
[2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand,=
log id: 775cd381
2018-02-12 16:46:30,285+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-4=
6a8-82c8-fd8293da5725] FINISH, MigrateVDSCommand, return: MigratingFrom=
, log id: 14f61ee0
2018-02-12 16:46:30,301+01 INFO =C2=A0[org.ovirt.engine.core.dal.dbbrok=
er.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-3=
2) [2f712024-5982-46a8-82c8-fd8293da5725] EVENT=5FID: VM=5FMIGRATION=5F=
START(62), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID=
: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: nu=
ll, Custom Event ID: -1, Message: Migration started (VM: Oracle=5FSECON=
DARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.=
fr, User: admin@internal-authz).
2018-02-12 16:46:31,106+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] STAR=
T, FullListVDSCommand(HostName =3D victor.local.systea.fr, FullListVDSC=
ommandParameters:{runAsync=3D'true', hostId=3D'ce3938b1-b23f-4d22-840a-=
f17d7cd87bb1', vmIds=3D'[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log =
id: 54b4b435
2018-02-12 16:46:31,147+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] FINI=
SH, FullListVDSCommand, return: [{acpiEnable=3Dtrue, emulatedMachine=3D=
pc-i440fx-rhel7.3.0, tabletEnable=3Dtrue, pid=3D1493, guestDiskMapping=3D=
{0QEMU=5FQEMU=5FHARDDISK=5Fd890fa68-fba4-4f49-9=3D{name=3D/dev/sda}, QE=
MU=5FDVD-ROM=5FQM00003=3D{name=3D/dev/sr0}}, transparentHugePages=3Dtru=
e, timeOffset=3D0, cpuType=3DNehalem, smp=3D2, pauseCode=3DNOERR, guest=
NumaNodes=3D[Ljava.lang.Object;@1d9042cd, smartcardEnable=3Dfalse, cust=
om=3D{device=5Ffbddd528-7d93-49c6-a286-180e021cb274device=5F879c93ab-4d=
f1-435c-af02-565039fcc254=3DVmDevice:{id=3D'VmDeviceId:{deviceId=3D'879=
c93ab-4df1-435c-af02-565039fcc254', vmId=3D'3f57e669-5e4c-4d10-85cc-d57=
3004a099d'}', device=3D'unix', type=3D'CHANNEL', bootOrder=3D'0', specP=
arams=3D'[]', address=3D'{bus=3D0, controller=3D0, type=3Dvirtio-serial=
, port=3D1}', managed=3D'false', plugged=3D'true', readOnly=3D'false', =
deviceAlias=3D'channel0', customProperties=3D'[]', snapshotId=3D'null',=
logicalName=3D'null', hostDevice=3D'null'}, device=5Ffbddd528-7d93-49c=
6-a286-180e021cb274device=5F879c93ab-4df1-435c-af02-565039fcc254device=5F=
8945f61a-abbe-4156-8485-a4aa6f1908dbdevice=5F017b5e59-01c4-4aac-bf0c-b5=
d9557284d6=3DVmDevice:{id=3D'VmDeviceId:{deviceId=3D'017b5e59-01c4-4aac=
-bf0c-b5d9557284d6', vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099d'}', d=
evice=3D'tablet', type=3D'UNKNOWN', bootOrder=3D'0', specParams=3D'[]',=
address=3D'{bus=3D0, type=3Dusb, port=3D1}', managed=3D'false', plugge=
d=3D'true', readOnly=3D'false', deviceAlias=3D'input0', customPropertie=
s=3D'[]', snapshotId=3D'null', logicalName=3D'null', hostDevice=3D'null=
'}, device=5Ffbddd528-7d93-49c6-a286-180e021cb274=3DVmDevice:{id=3D'VmD=
eviceId:{deviceId=3D'fbddd528-7d93-49c6-a286-180e021cb274', vmId=3D'3f5=
7e669-5e4c-4d10-85cc-d573004a099d'}', device=3D'ide', type=3D'CONTROLLE=
R', bootOrder=3D'0', specParams=3D'[]', address=3D'{slot=3D0x01, bus=3D=
0x00, domain=3D0x0000, type=3Dpci, function=3D0x1}', managed=3D'false',=
plugged=3D'true', readOnly=3D'false', deviceAlias=3D'ide', customPrope=
rties=3D'[]', snapshotId=3D'null', logicalName=3D'null', hostDevice=3D'=
null'}, device=5Ffbddd528-7d93-49c6-a286-180e021cb274device=5F879c93ab-=
4df1-435c-af02-565039fcc254device=5F8945f61a-abbe-4156-8485-a4aa6f1908d=
b=3DVmDevice:{id=3D'VmDeviceId:{deviceId=3D'8945f61a-abbe-4156-8485-a4a=
a6f1908db', vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099d'}', device=3D'=
unix', type=3D'CHANNEL', bootOrder=3D'0', specParams=3D'[]', address=3D=
'{bus=3D0, controller=3D0, type=3Dvirtio-serial, port=3D2}', managed=3D=
'false', plugged=3D'true', readOnly=3D'false', deviceAlias=3D'channel1'=
, customProperties=3D'[]', snapshotId=3D'null', logicalName=3D'null', h=
ostDevice=3D'null'}}, vmType=3Dkvm, memSize=3D8192, smpCoresPerSocket=3D=
1, vmName=3DOracle=5FSECONDARY, nice=3D0, status=3DMigration Source, ma=
xMemSize=3D32768, bootMenuEnable=3Dfalse, vmId=3D3f57e669-5e4c-4d10-85c=
c-d573004a099d, numOfIoThreads=3D2, smpThreadsPerCore=3D1, memGuarantee=
dSize=3D8192, kvmEnable=3Dtrue, pitReinjection=3Dfalse, displayNetwork=3D=
ovirtmgmt, devices=3D[Ljava.lang.Object;@28ae66d7, display=3Dvnc, maxVC=
pus=3D16, clientIp=3D, statusTime=3D4299484520, maxMemSlots=3D16}], log=
id: 54b4b435
2018-02-12 16:46:31,150+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) [27fac647] F=
etched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'
2018-02-12 16:46:31,151+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Re=
ceived a vnc Device without an address when processing VM 3f57e669-5e4c=
-4d10-85cc-d573004a099d devices, skipping device: {device=3Dvnc, specPa=
rams=3D{displayNetwork=3Dovirtmgmt, keyMap=3Dfr, displayIp=3D192.168.0.=
6}, type=3Dgraphics, deviceId=3D813957b1-446a-4e88-9e40-9fe76d2c442d, p=
ort=3D5901}
2018-02-12 16:46:31,151+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Re=
ceived a lease Device without an address when processing VM 3f57e669-5e=
4c-4d10-85cc-d573004a099d devices, skipping device: {lease=5Fid=3D3f57e=
669-5e4c-4d10-85cc-d573004a099d, sd=5Fid=3D1e51cecc-eb2e-47d0-b185-920f=
dc7afa16, deviceId=3D{uuid=3Da09949aa-5642-4b6d-94a4-8b0d04257be5}, off=
set=3D6291456, device=3Dlease, path=3D/rhev/data-center/mnt/glusterSD/1=
92.168.0.6:=5FDATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom=5Fmd/xlea=
ses, type=3Dlease}
2018-02-12 16:46:31,152+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e66=
9-5e4c-4d10-85cc-d573004a099d'(Oracle=5FSECONDARY) was unexpectedly det=
ected as 'MigratingTo' on VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(gi=
nger.local.systea.fr) (expected on 'ce3938b1-b23f-4d22-840a-f17d7cd87bb=
1')
2018-02-12 16:46:31,152+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e66=
9-5e4c-4d10-85cc-d573004a099d' is migrating to VDS 'd569c2dd-8f30-4878-=
8aea-858db285cf69'(ginger.local.systea.fr) ignoring it in the refresh u=
ntil migration is done
....
2018-02-12 16:46:41,631+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-=
4d10-85cc-d573004a099d' was reported as Down on VDS 'ce3938b1-b23f-4d22=
-840a-f17d7cd87bb1'(victor.local.systea.fr)
2018-02-12 16:46:41,632+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START, Destr=
oyVDSCommand(HostName =3D victor.local.systea.fr, DestroyVmVDSCommandPa=
rameters:{runAsync=3D'true', hostId=3D'ce3938b1-b23f-4d22-840a-f17d7cd8=
7bb1', vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099d', force=3D'false', =
secondsToWait=3D'0', gracefully=3D'false', reason=3D'', ignoreNoVm=3D't=
rue'}), log id: 560eca57
2018-02-12 16:46:41,650+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] FINISH, Dest=
royVDSCommand, log id: 560eca57
2018-02-12 16:46:41,650+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-=
4d10-85cc-d573004a099d'(Oracle=5FSECONDARY) moved from 'MigratingFrom' =
--> 'Down'
2018-02-12 16:46:41,651+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] Handing over VM '3=
f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle=5FSECONDARY) to Host 'd569c=
2dd-8f30-4878-8aea-858db285cf69'. Setting VM to status 'MigratingTo'
2018-02-12 16:46:42,163+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
monitoring.VmAnalyzer] (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4=
d10-85cc-d573004a099d'(Oracle=5FSECONDARY) moved from 'MigratingTo' -->=
'Up'
2018-02-12 16:46:42,169+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] START, =
MigrateStatusVDSCommand(HostName =3D ginger.local.systea.fr, MigrateSta=
tusVDSCommandParameters:{runAsync=3D'true', hostId=3D'd569c2dd-8f30-487=
8-8aea-858db285cf69', vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099d'}), =
log id: 7a25c281
2018-02-12 16:46:42,174+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH,=
MigrateStatusVDSCommand, log id: 7a25c281
2018-02-12 16:46:42,194+01 INFO =C2=A0[org.ovirt.engine.core.dal.dbbrok=
er.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-4) [] EVEN=
T=5FID: VM=5FMIGRATION=5FDONE(63), Correlation ID: 2f712024-5982-46a8-8=
2c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call St=
ack: null, Custom ID: null, Custom Event ID: -1, Message: Migration com=
pleted (VM: Oracle=5FSECONDARY, Source: victor.local.systea.fr, Destina=
tion: ginger.local.systea.fr, Duration: 11 seconds, Total: 11 seconds, =
Actual downtime: (N/A))
2018-02-12 16:46:42,201+01 INFO =C2=A0[org.ovirt.engine.core.bll.Migrat=
eVmToServerCommand] (ForkJoinPool-1-worker-4) [] Lock freed to object '=
EngineLock:{exclusiveLocks=3D'[3f57e669-5e4c-4d10-85cc-d573004a099d=3DV=
M]', sharedLocks=3D''}'
2018-02-12 16:46:42,203+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] START, FullL=
istVDSCommand(HostName =3D ginger.local.systea.fr, FullListVDSCommandPa=
rameters:{runAsync=3D'true', hostId=3D'd569c2dd-8f30-4878-8aea-858db285=
cf69', vmIds=3D'[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc6=
5298
2018-02-12 16:46:42,254+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, Full=
ListVDSCommand, return: [{acpiEnable=3Dtrue, emulatedMachine=3Dpc-i440f=
x-rhel7.3.0, afterMigrationStatus=3D, tabletEnable=3Dtrue, pid=3D18748,=
guestDiskMapping=3D{}, transparentHugePages=3Dtrue, timeOffset=3D0, cp=
uType=3DNehalem, smp=3D2, guestNumaNodes=3D[Ljava.lang.Object;@760085fd=
, custom=3D{device=5Ffbddd528-7d93-49c6-a286-180e021cb274device=5F879c9=
3ab-4df1-435c-af02-565039fcc254=3DVmDevice:{id=3D'VmDeviceId:{deviceId=3D=
'879c93ab-4df1-435c-af02-565039fcc254', vmId=3D'3f57e669-5e4c-4d10-85cc=
-d573004a099d'}', device=3D'unix', type=3D'CHANNEL', bootOrder=3D'0', s=
pecParams=3D'[]', address=3D'{bus=3D0, controller=3D0, type=3Dvirtio-se=
rial, port=3D1}', managed=3D'false', plugged=3D'true', readOnly=3D'fals=
e', deviceAlias=3D'channel0', customProperties=3D'[]', snapshotId=3D'nu=
ll', logicalName=3D'null', hostDevice=3D'null'}, device=5Ffbddd528-7d93=
-49c6-a286-180e021cb274device=5F879c93ab-4df1-435c-af02-565039fcc254dev=
ice=5F8945f61a-abbe-4156-8485-a4aa6f1908dbdevice=5F017b5e59-01c4-4aac-b=
f0c-b5d9557284d6=3DVmDevice:{id=3D'VmDeviceId:{deviceId=3D'017b5e59-01c=
4-4aac-bf0c-b5d9557284d6', vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099d=
'}', device=3D'tablet', type=3D'UNKNOWN', bootOrder=3D'0', specParams=3D=
'[]', address=3D'{bus=3D0, type=3Dusb, port=3D1}', managed=3D'false', p=
lugged=3D'true', readOnly=3D'false', deviceAlias=3D'input0', customProp=
erties=3D'[]', snapshotId=3D'null', logicalName=3D'null', hostDevice=3D=
'null'}, device=5Ffbddd528-7d93-49c6-a286-180e021cb274=3DVmDevice:{id=3D=
'VmDeviceId:{deviceId=3D'fbddd528-7d93-49c6-a286-180e021cb274', vmId=3D=
'3f57e669-5e4c-4d10-85cc-d573004a099d'}', device=3D'ide', type=3D'CONTR=
OLLER', bootOrder=3D'0', specParams=3D'[]', address=3D'{slot=3D0x01, bu=
s=3D0x00, domain=3D0x0000, type=3Dpci, function=3D0x1}', managed=3D'fal=
se', plugged=3D'true', readOnly=3D'false', deviceAlias=3D'ide', customP=
roperties=3D'[]', snapshotId=3D'null', logicalName=3D'null', hostDevice=
=3D'null'}, device=5Ffbddd528-7d93-49c6-a286-180e021cb274device=5F879c9=
3ab-4df1-435c-af02-565039fcc254device=5F8945f61a-abbe-4156-8485-a4aa6f1=
908db=3DVmDevice:{id=3D'VmDeviceId:{deviceId=3D'8945f61a-abbe-4156-8485=
-a4aa6f1908db', vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099d'}', device=
=3D'unix', type=3D'CHANNEL', bootOrder=3D'0', specParams=3D'[]', addres=
s=3D'{bus=3D0, controller=3D0, type=3Dvirtio-serial, port=3D2}', manage=
d=3D'false', plugged=3D'true', readOnly=3D'false', deviceAlias=3D'chann=
el1', customProperties=3D'[]', snapshotId=3D'null', logicalName=3D'null=
', hostDevice=3D'null'}}, vmType=3Dkvm, memSize=3D8192, smpCoresPerSock=
et=3D1, vmName=3DOracle=5FSECONDARY, nice=3D0, status=3DUp, maxMemSize=3D=
32768, bootMenuEnable=3Dfalse, vmId=3D3f57e669-5e4c-4d10-85cc-d573004a0=
99d, numOfIoThreads=3D2, smpThreadsPerCore=3D1, smartcardEnable=3Dfalse=
, maxMemSlots=3D16, kvmEnable=3Dtrue, pitReinjection=3Dfalse, displayNe=
twork=3Dovirtmgmt, devices=3D[Ljava.lang.Object;@2e4d3dd3, memGuarantee=
dSize=3D8192, maxVCpus=3D16, clientIp=3D, statusTime=3D4304259600, disp=
lay=3Dvnc}], log id: 7cc65298
2018-02-12 16:46:42,257+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a=
vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85=
cc-d573004a099d devices, skipping device: {device=3Dvnc, specParams=3D{=
displayNetwork=3Dovirtmgmt, keyMap=3Dfr, displayIp=3D192.168.0.5}, type=
=3Dgraphics, deviceId=3D813957b1-446a-4e88-9e40-9fe76d2c442d, port=3D59=
01}
2018-02-12 16:46:42,257+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a=
lease Device without an address when processing VM 3f57e669-5e4c-4d10-=
85cc-d573004a099d devices, skipping device: {lease=5Fid=3D3f57e669-5e4c=
-4d10-85cc-d573004a099d, sd=5Fid=3D1e51cecc-eb2e-47d0-b185-920fdc7afa16=
, deviceId=3D{uuid=3Da09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=3D62=
91456, device=3Dlease, path=3D/rhev/data-center/mnt/glusterSD/192.168.0=
.6:=5FDATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom=5Fmd/xleases, typ=
e=3Dlease}
2018-02-12 16:46:46,260+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@77951faf, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620, display=vnc}], log id: 58cdef4c
2018-02-12 16:46:46,267+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
2018-02-12 16:46:46,268+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}

For the VM with 2 vdisks we see:
2018-02-12 16:49:06,112+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]', sharedLocks=''}'
2018-02-12 16:49:06,407+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Running command: MigrateVmToServerCommand internal: false. Entities affected : ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction group MIGRATE_VM with role type USER
2018-02-12 16:49:06,712+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 3702a9e0
2018-02-12 16:49:06,713+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateBrokerVDSCommand(HostName = ginger.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 1840069c
2018-02-12 16:49:06,724+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand, log id: 1840069c
2018-02-12 16:49:06,732+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0
2018-02-12 16:49:06,753+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e, Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_PRIMARY, Source: ginger.local.systea.fr, Destination: victor.local.systea.fr, User: admin@internal-authz).
...
2018-02-12 16:49:16,453+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4) [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'
2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done
...
2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done
And so on; the last lines repeated indefinitely for hours, until we powered off the VM...
Is this a known issue? Any ideas about it?
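If it can help, this is what I plan to capture on the hosts the next time a migration hangs. A minimal sketch; virsh is used read-only so it does not disturb VDSM, and I am assuming vdsm-client is available (it ships with 4.1 hosts):

  # On the source host: state and progress of the libvirt migration job
  virsh -r list --all
  virsh -r domjobinfo Oracle_PRIMARY
  # VDSM-level view of the same VM, using the vmId from engine.log:
  vdsm-client VM getStats vmID=f7d4ec12-627a-4b83-b59e-886400d55474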
Thanks
--
Regards,
Frank Soyer
Re: [ovirt-users] Network configuration validation error
by spfma.tech@e.mail.fr
I did not see I had to enable another repo to get this update, so I was sure I had the latest version available!
After adding it, things went a lot better and I was able to update the engine and all the nodes flawlessly to version 4.2.1.6-1.el7.centos.
Thanks a lot for your help!

The "no default route" error has indeed disappeared.

But I still couldn't validate network setup modifications on one node, as I still had the following error in the GUI:

	* must match "^\b((25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)\.){3}(25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)"
	* Attribute: ipConfiguration.iPv4Addresses[0].gateway

So I tried a dummy thing: I put a value in the gateway field for the NIC which doesn't need one (NFS) and was able to validate. Then I edited it again, removed the value, and was able to validate again!

Regards

On 12-Feb-2018 10:42:30 +0100, mburman@redhat.com wrote:

The "no default route" bug was fixed only in 4.2.1; your current version doesn't have the fix.

On Mon, Feb 12, 2018 at 11:09 AM, spfma.tech@e.mail.fr wrote:

On 12-Feb-2018 08:06:43 +0100, jbelka@redhat.com wrote:

> This option is relevant only for the upgrade from 3.6 to 4.0 (the engine had
> different OS major versions); in all other cases the upgrade flow is very
> similar to the upgrade flow of a standard engine environment.
>
> 1. Put the hosted-engine environment into GlobalMaintenance (you can do it via
> the UI)
> 2. Update the engine packages (# yum update -y)
> 3. Run engine-setup
> 4. Disable GlobalMaintenance

So I followed these steps connected to the engine VM and didn't get any error message. But the version shown in the GUI is still 4.2.0.2-1.el7.centos. Yum had no newer packages to install. And I still have the "no default route" and network validation problems.

Regards

> Could someone explain to me at least what "Cluster PROD is at version 4.2 which
> is not supported by this upgrade flow. Please fix it before upgrading."
> means? As far as I know 4.2 is the most recent branch available, isn't it?

I have no idea where you got

"Cluster PROD is at version 4.2 which is not supported by this upgrade flow. Please fix it before upgrading."

Please do not cut the output; provide the exact message.

IIUC you should do 'yum update ovirt*setup*' and then 'engine-setup',
and only after that finishes successfully should you do 'yum -y update'.
Maybe that's your problem?

Jiri

--
Michael Burman
Senior Quality engineer - rhv network - redhat israel
Red Hat
mburman@redhat.com M: 0545355725 IM: mburman
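To recap the flow that worked here in command form (a sketch; the hosted-engine steps run on one of the hosts, the yum/engine-setup steps inside the engine VM):

  hosted-engine --set-maintenance --mode=global   # on a host
  # inside the engine VM:
  yum update ovirt\*setup\*
  engine-setup
  yum -y update
  # back on the host, once engine-setup has finished successfully:
  hosted-engine --set-maintenance --mode=none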
Import Domain and snapshot issue ... please help !!!
by Enrico Becchetti
Dear All,
I have been using oVirt for a long time, with three hypervisors and an external engine running in a CentOS VM.
These three hypervisors have HBAs and access to Fibre Channel storage. Until recently I used version 3.5, then I reinstalled everything from scratch and now I have 4.2.
Before formatting everything, I detached the storage data domain (FC) with the virtual machines and reimported it into the new 4.2, and all went well. In this domain there were virtual machines with and without snapshots.
Now I have two problems. The first is that if I try to delete a snapshot, the process does not end successfully and remains hanging. The second problem is that in one case I lost the virtual machine!!!
So I need your help to kill the three running zombie tasks, because with taskcleaner.sh I can't do anything, and I need to know how I can delete the old snapshots made with 3.5 without losing other data and without leaving new processes that fail to terminate.
If you want some log files please let me know.
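If it helps, on the engine VM I can run the database utilities shipped with the engine. A minimal sketch, assuming a standard 4.2 install (I would check each script's -h output before removing anything):

  cd /usr/share/ovirt-engine/setup/dbutils
  ./taskcleaner.sh -h            # review the options before touching any tasks
  ./unlock_entity.sh -t all -q   # query/list locked entities (VMs, templates, disks, snapshots)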
Thank you so much.
Best Regards
Enrico
Unable to start VM after oVirt Upgrade from 4.2.0 to 4.2.1
by Stefano Danzi
Hello!
In my test system I upgraded from 4.2.0 to 4.2.1 and now I can't start any VM.
The hosted engine starts regularly.
I have a single host with Hosted Engine.
The host CPU is an Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz.
When I start any VM I get this error: "The CPU type of the cluster is unknown. It's possible to change the cluster cpu or set a different one per VM."
All VMs have "Guest CPU Type: N/D".
The cluster now has CPU Type "Intel Conroe Family" (I don't remember the CPU type before the upgrade); my CPU should be Ivy Bridge, but it isn't in the dropdown list.
If I try to select a similar CPU (SandyBridge IBRS) I get an error: I can't change the cluster CPU type while I have running hosts with a lower CPU type.
I can't put the host into maintenance because the hosted engine is running on it.
How can I solve this?
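If useful, I can attach what the host itself reports. A quick sketch of read-only checks (assuming vdsm-client is present, as it should be on 4.2 hosts):

  # CPU model and flags as VDSM reports them to the engine:
  vdsm-client Host getCapabilities | grep -E 'cpuModel|cpuFlags'
  # CPU model as libvirt detects it:
  virsh -r capabilities | grep -A2 '<model'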
leftover of disk moving operation
by Gianluca Cecchi
Hello,
I had a problem during a disk migration from one storage domain to another in a 4.1.7 environment connected to SAN storage.
Now, after deleting the live storage migration snapshot, I want to retry (with the VM powered off), but at the destination the logical volume still exists and was not pruned after the initial failure.
I get:
HSMGetAllTasksStatusesVDS failed: Cannot create Logical Volume:
('c0097b1a-a387-4ffa-a62b-f9e6972197ef',
u'a20bb16e-7c7c-4ed4-85c0-cbf297048a8e')
I was able to move the other 4 disks that were part of this VM.
Can I simply lvremove the target LV at the host side (I have only one host running at this moment) and try the move again, or do I have to do anything more, e.g. at the engine RDBMS level?
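For clarity, what I had in mind on the host is roughly this (a sketch; I'm assuming that, as usual on block domains, the VG name is the storage domain UUID and the LV name is the volume UUID from the error above):

  # Check whether the leftover target LV really exists on the destination domain:
  lvs c0097b1a-a387-4ffa-a62b-f9e6972197ef
  # If it is only the orphaned target of the failed move, remove it:
  lvremove c0097b1a-a387-4ffa-a62b-f9e6972197ef/a20bb16e-7c7c-4ed4-85c0-cbf297048a8e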
Thanks,
Gianluca
Defining custom network filter or editing existing
by Tim Thompson
All,
I was wondering if someone can point me in the direction of the documentation on defining custom network filters (nwfilter) in 4.2. I found the docs on assigning a network filter to a vNIC profile, but I cannot find any mention of how to create your own. Normally you'd use 'virsh nwfilter-define', but that is locked out since vdsm manages everything. I need to expand clean-traffic's scope to include IPv6, since it doesn't seem to handle IPv6 at all by default.
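In the meantime, read-only queries against libvirt still work alongside vdsm, so the stock filter can at least be inspected; a sketch:

  # List the filters libvirt knows about and dump the one I want to extend:
  virsh -r nwfilter-list
  virsh -r nwfilter-dumpxml clean-traffic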
Thanks,
-Tim
VM is down with error: Bad volume specification
by Chris Boot
Hi all,
I'm running oVirt 4.2.0 and have been using oVirtBackup with it. So far it has been working fine, until this morning. One of my VMs seems to have had a snapshot created that I can't delete.
I noticed the problem when the VM failed to migrate to my other hosts, so I just shut it down to allow the host to go into maintenance. Now I can't start the VM with the snapshot, nor can I delete the snapshot.
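In the meantime I can inspect the snapshot chain of the affected disk from the host; a sketch (the full path comes from the alert below):

  # Show the qcow2 backing chain of the problem disk:
  qemu-img info --backing-chain \
    /rhev/data-center/00000001-0001-0001-0001-000000000311/23372fb9-51a5-409f-ae21-2521012a83fd/images/ec083085-52c1-4da5-88cf-4af02e42a212/aa10d05b-f2f0-483e-ab43-7c03a86cd6ab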
Please let me know what further information you need to help me diagnose
the issue and recover the VM.
Best regards,
Chris
-------- Forwarded Message --------
Subject: alertMessage (ovirt.boo.tc), [VM morse is down with error. Exit
message: Bad volume specification {'address': {'bus': '0', 'controller':
'0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'serial':
'ec083085-52c1-4da5-88cf-4af02e42a212', 'index': 0, 'iface': 'scsi',
'apparentsize': '12386304', 'cache': 'none', 'imageID':
'ec083085-52c1-4da5-88cf-4af02e42a212', 'truesize': '12386304', 'type':
'file', 'domainID': '23372fb9-51a5-409f-ae21-2521012a83fd', 'reqsize':
'0', 'format': 'cow', 'poolID': '00000001-0001-0001-0001-000000000311',
'device': 'disk', 'path':
'/rhev/data-center/00000001-0001-0001-0001-000000000311/23372fb9-51a5-409f-ae21-2521012a83fd/images/ec083085-52c1-4da5-88cf-4af02e42a212/aa10d05b-f2f0-483e-ab43-7c03a86cd6ab',
'propagateErrors': 'off', 'name': 'sda', 'bootOrder': '1', 'volumeID':
'aa10d05b-f2f0-483e-ab43-7c03a86cd6ab', 'diskType': 'file',
'specParams': {}, 'discard': True}.]
Date: Tue, 23 Jan 2018 11:32:21 +0000 (GMT)
From: engine@ovirt.boo.tc
To: bootc@bootc.net
Time:2018-01-23 11:30:39.677
Message:VM morse is down with error. Exit message: Bad volume
specification {'address': {'bus': '0', 'controller': '0', 'type':
'drive', 'target': '0', 'unit': '0'}, 'serial':
'ec083085-52c1-4da5-88cf-4af02e42a212', 'index': 0, 'iface': 'scsi',
'apparentsize': '12386304', 'cache': 'none', 'imageID':
'ec083085-52c1-4da5-88cf-4af02e42a212', 'truesize': '12386304', 'type':
'file', 'domainID': '23372fb9-51a5-409f-ae21-2521012a83fd', 'reqsize':
'0', 'format': 'cow', 'poolID': '00000001-0001-0001-0001-000000000311',
'device': 'disk', 'path':
'/rhev/data-center/00000001-0001-0001-0001-000000000311/23372fb9-51a5-409f-ae21-2521012a83fd/images/ec083085-52c1-4da5-88cf-4af02e42a212/aa10d05b-f2f0-483e-ab43-7c03a86cd6ab',
'propagateErrors': 'off', 'name': 'sda', 'bootOrder': '1', 'volumeID':
'aa10d05b-f2f0-483e-ab43-7c03a86cd6ab', 'diskType': 'file',
'specParams': {}, 'discard': True}.
Severity:ERROR
VM Name: morse
Host Name: ovirt2.boo.tc
Template Name: Blank
--
Chris Boot
bootc@boo.tc
effectiveness of "discard=unmap"
by Matthias Leopold
Hi,
I'm sorry to bother you again with my ignorance of the DISCARD feature for block devices in general.
After finding several ways to enable "discard=unmap" for oVirt disks (via the standard GUI option for iSCSI disks, or via the "diskunmap" custom property for Cinder disks), I wanted to check in the guest for the effectiveness of this feature. To my surprise, I couldn't find a difference between Linux guests with and without "discard=unmap" enabled in the VM. "lsblk -D" reports the same in both cases, and the fstrim/blkdiscard commands also appear to work with no difference. Why is this? Do I have to look at the underlying storage to find out what really happens? Shouldn't this be visible in the guest OS?
thx
matthias
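A minimal in-guest check, for reference (whether space is actually
reclaimed is only visible on the storage side, e.g. via lvs or qemu-img,
depending on the domain type):

lsblk -D      # non-zero DISC-GRAN/DISC-MAX means the device advertises discard
fstrim -v /   # issues discards on the mounted filesystem and reports bytes trimmed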
7 years, 2 months
Info about windows guest performance
by Gianluca Cecchi
Hello,
while in my activities to accomplish migration of a Windows 2008 R2 VM
(with an Oracle RDBMS inside) from vSphere to oVirt, I'm going to check
performance related things.
Up to now I only ran Windows guests inside my laptops and not inside an
oVirt infrastructure.
Now I successfully migrated this kind of VM to oVirt 4.1.9.
The guest had an LSI logic sas controller. Inside the oVirt host that I
used as proxy (for VMware virt-v2v) I initially didn't have the virtio-win
rpm.
I presume it is for this reason that the oVirt guest ended up
configured with IDE disks...
Can you confirm?
For this test I started with ide, then added a virtio-scsi disk and then
changed also the boot disk to virtio-scsi and all now goes well, with also
ovirt-guest-tools-iso-4.1-3 provided iso used to install qxl and so on...
So far so good.
I found this bugzilla:
https://bugzilla.redhat.com/show_bug.cgi?id=1277353
where it seems that
"
For optimum I/O performance it's critical to make sure that Windows
guests use the Hyper-V reference counter feature. QEMU command line
should include
-cpu ...,hv_time
and
-no-hpet
"
Analyzing my command line I see the "-no-hpet" but I don't see "hv_time".
See below for the full command.
Any hints?
Thanks,
Gianluca
/usr/libexec/qemu-kvm
-name guest=testmig,debug-threads=on
-S
-object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-12-testmig/master-key.aes
-machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off
-cpu Westmere,vmx=on
-m size=4194304k,slots=16,maxmem=16777216k
-realtime mlock=off
-smp 2,maxcpus=16,sockets=16,cores=1,threads=1
-numa node,nodeid=0,cpus=0-1,mem=4096
-uuid x-y-z-x-y
-smbios type=1,manufacturer=oVirt,product=oVirt
Node,version=7-4.1708.el7.centos,serial=xx,uuid=yy
-no-user-config
-nodefaults
-chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-12-testmig/monitor.sock,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control
-rtc base=2018-02-09T12:41:41,driftfix=slew
-global kvm-pit.lost_tick_policy=delay
-no-hpet
-no-shutdown
-boot strict=on
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
-device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x5
-device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4
-drive if=none,id=drive-ide0-1-0,readonly=on
-device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
-drive file=/rhev/data-center/ef17cad6-7724-4cd8-96e3-9af6e529db51/fa33df49-b09d-4f86-9719-ede649542c21/images/2de93ee3-7d6e-4a10-88c4-abc7a11fb687/a9f4e35b-4aa0-45e8-b775-1a046d1851aa,format=qcow2,if=none,id=drive-scsi0-0-0-1,serial=2de93ee3-7d6e-4a10-88c4-abc7a11fb687,cache=none,werror=stop,rerror=stop,aio=native
-device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi0-0-0-1,id=scsi0-0-0-1,bootindex=1
-drive file=/rhev/data-center/ef17cad6-7724-4cd8-96e3-9af6e529db51/fa33df49-b09d-4f86-9719-ede649542c21/images/f821da0a-cec7-457c-88a4-f83f33404e65/0d0c4244-f184-4eaa-b5bf-8dc65c7069bb,format=raw,if=none,id=drive-scsi0-0-0-0,serial=f821da0a-cec7-457c-88a4-f83f33404e65,cache=none,werror=stop,rerror=stop,aio=native
-device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0
-netdev tap,fd=30,id=hostnet0
-device e1000,netdev=hostnet0,id=net0,mac=00:50:56:9d:c9:29,bus=pci.0,addr=0x3
-chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/421d6f1b-58e3-54a4-802f-fb52f7831369.com.redhat.rhevm.vdsm,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/421d6f1b-58e3-54a4-802f-fb52f7831369.org.qemu.guest_agent.0,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent
-device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice tls-port=5900,addr=10.4.192.32,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-device qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2
-msg timestamp=on
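For comparison, a sketch of what the CPU line would be expected to look
like with the Hyper-V enlightenments on (the exact flag set is
illustrative; in oVirt these are normally generated when the VM's
Operating System type is set to a Windows variant, so that is the first
thing to check):

-cpu Westmere,vmx=on,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x2000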
7 years, 2 months
Network configuration validation error
by spfma.tech@e.mail.fr
Hi,
I am experiencing a new problem: when I try to modify something in
the network setup on the second node (added to the cluster after
installing the engine on the other one) using the Engine GUI, I get the
following error when validating:

must match "^\b((25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)\_){3}(25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)"
Attribut: ipConfiguration.iPv4Addresses[0].gateway

Moreover, on the general status of the server, I have a "Host has no
default route" alert. The ovirtmgmt network has a defined gateway of
course, and the storage network has none because it is not required.
Both servers have the same setup, with different addresses of course :-)

I have not been able to find anything useful in the logs.
Is this a bug or am I doing something wrong?
Regards
7 years, 2 months
Using network assigned to VM on CentOS host?
by Wesley Stewart
This might be a stupid question, but I am testing out a 10Gb network
directly connected to my FreeNAS box using a Cat6 crossover cable.
I set up the connection (on device eno4) and called the network "Crossover"
in oVirt.
I don't have DHCP on this, but I can easily assign VMs a NIC on the
"Crossover" network, give them an IP address (10.10.10.x), and everything
works fine. But I was curious about doing this for the CentOS host as
well. I want to test hosting VMs on the NFS share over the 10Gb
network, but I wasn't quite sure how to do this without breaking other
connections, and I did not want to do anything incorrectly.
I appreciate your feedback! I apologize if this is a stupid question.
Running oVirt 4.1.8 on CentOS 7.4
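A minimal check of the idea, assuming the host side of the Crossover
network gets e.g. 10.10.10.2/24 assigned through the engine's Setup Host
Networks dialog (the export path below is hypothetical):

ping -c3 10.10.10.1                                   # the FreeNAS end of the link
mount -t nfs 10.10.10.1:/mnt/tank/vmstore /mnt/test   # test the NFS share over the 10Gb path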
7 years, 2 months
Hosted-Engine mount .iso file CLI
by Russell Wecker
I have a hosted-engine setup that will not boot after I ran system
updates on it. I would like to boot it from a rescue CD image so I
can fix it. I have copied /var/run/ovirt-hosted-engine-ha/vm.conf to
/root and modified it, however I cannot seem to find the exact options to
configure the file for an .iso. My current settings are
devices={index:2,iface:ide,shared:false,readonly:true,deviceId:8c3179ac-b322-4f5c-9449-c52e3665e0ae,address:{controller:0,target:0,unit:0,bus:1,type:drive},device:cdrom,path:,type:disk}
How do I change it to boot from a local .iso?
Thanks
Any help would be most appreciated.
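A sketch of the two changes that should be involved - pointing the cdrom
device at the ISO in the edited copy, then starting the engine VM from
that copy (the ISO path is illustrative and the boot order may also need
changing; untested):

devices={index:2,iface:ide,shared:false,readonly:true,deviceId:8c3179ac-b322-4f5c-9449-c52e3665e0ae,address:{controller:0,target:0,unit:0,bus:1,type:drive},device:cdrom,path:/var/tmp/rescue.iso,type:disk}

hosted-engine --vm-start --vm-conf=/root/vm.conf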
7 years, 2 months
Issue with 4.2.1 RC and SSL
by ~Stack~
Greetings,
I was having a lot of issues with 4.2 and 95% of them are in the change
logs for 4.2.1. Since this is a new build, I just blew everything away
and started from scratch with the RC release.
The very first thing that I did after the engine-config was to set up my
SSL cert. I followed the directions from here:
https://www.ovirt.org/documentation/admin-guide/appe-oVirt_and_SSL/
Logged in the first time to the web interface and everything worked! Great.
I installed my hosts (also completely fresh installs - Scientific Linux 7,
fully updated) and none would finish the install...
I can send the full host debug log if you want; however, I'm pretty sure
the problem is because of the SSL somewhere. I've cut/pasted the
relevant part.
Any advice/help, please?
Thanks!
~Stack~
2018-02-07 16:56:21,697-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND **%EventEnd STAGE misc METHOD
otopi.plugins.ovirt_host_deploy.tune.tuned.Plugin._misc (None)
2018-02-07 16:56:21,698-0600 DEBUG otopi.context
context._executeMethod:128 Stage misc METHOD
otopi.plugins.ovirt_host_deploy.vdsm.vdsmid.Plugin._store_id
2018-02-07 16:56:21,698-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND **%EventStart STAGE misc METHOD
otopi.plugins.ovirt_host_deploy.vdsm.vdsmid.Plugin._store_id (None)
2018-02-07 16:56:21,699-0600 DEBUG otopi.transaction
transaction._prepare:61 preparing 'File transaction for '/etc/vdsm/vdsm.id''
2018-02-07 16:56:21,699-0600 DEBUG otopi.filetransaction
filetransaction.prepare:183 file '/etc/vdsm/vdsm.id' missing
2018-02-07 16:56:21,705-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND **%EventEnd STAGE misc METHOD
otopi.plugins.ovirt_host_deploy.vdsm.vdsmid.Plugin._store_id (None)
2018-02-07 16:56:21,706-0600 DEBUG otopi.context
context._executeMethod:128 Stage misc METHOD
otopi.plugins.ovirt_host_deploy.vdsmhooks.hooks.Plugin._hooks
2018-02-07 16:56:21,706-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND **%EventStart STAGE misc METHOD
otopi.plugins.ovirt_host_deploy.vdsmhooks.hooks.Plugin._hooks (None)
2018-02-07 16:56:21,707-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND **%EventEnd STAGE misc METHOD
otopi.plugins.ovirt_host_deploy.vdsmhooks.hooks.Plugin._hooks (None)
2018-02-07 16:56:21,707-0600 DEBUG otopi.context
context._executeMethod:128 Stage misc METHOD
otopi.plugins.ovirt_host_common.vdsm.pki.Plugin._misc
2018-02-07 16:56:21,708-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND **%EventStart STAGE misc METHOD
otopi.plugins.ovirt_host_common.vdsm.pki.Plugin._misc (None)
2018-02-07 16:56:21,708-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND ### Setting up PKI
2018-02-07 16:56:21,709-0600 DEBUG
otopi.plugins.ovirt_host_common.vdsm.pki plugin.executeRaw:813 execute:
('/usr/bin/openssl', 'req', '-new', '-newkey', 'rsa:2048', '-nodes',
'-subj', '/', '-keyout', '/tmp/tmpQkrIuV.tmp'), executable='None',
cwd='None', env=None
2018-02-07 16:56:21,756-0600 DEBUG
otopi.plugins.ovirt_host_common.vdsm.pki plugin.executeRaw:863
execute-result: ('/usr/bin/openssl', 'req', '-new', '-newkey',
'rsa:2048', '-nodes', '-subj', '/', '-keyout', '/tmp/tmpQkrIuV.tmp'), rc=0
2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND ###
2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND ###
2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND ### Please issue VDSM
certificate based on this certificate request
2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND ###
2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND ***D:MULTI-STRING
VDSM_CERTIFICATE_REQUEST --=451b80dc-996f-432e-9e4f-2b29ef6d1141=--
2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND -----BEGIN CERTIFICATE REQUEST-----
2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND
MIICRTCCAS0CAQAwADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMZm
2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND
eYTWbHKkN+GlQnZ8C6fdk++htyFE+IHSzkhTyTSZdM0bPTdvhomTeCwzNlWBWdU+
2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND
PrVB7j/1iksSt6RXDQUWlPDPBNfAa6NtZijEaGuxAe0RpI71G5feZmgVRmtIfrkE
2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND
5BjhnCMJW46y9Y7dc2TaXzQqeVj0nkWkHt0v6AVdRWP3OHfOCvqoABny1urStvFT
2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND
TeAhSBVBUWTaNczBrZBpMXhXrSAe/hhLXMF3VfBV1odOOwb7AeccYkGePMxUOg8+
2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND
XMAKdDCn7N0ZC4gSyEAP9mSobvOvNObcfw02NyYdny32/edgPrXKR+ISf4IwVd0d
2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND
mDonT4W2ROTE/A3M/mkCAwEAAaAAMA0GCSqGSIb3DQEBCwUAA4IBAQCpAKAMv/Vh
2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND
0ByC02R3fxtA6b/OZyys+xyIAfAGxo2NSDJDQsw9Gy1QWVtJX5BGsbzuhnNJjhRm
2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND
5yx0wrS/k34oEv8Wh+po1fwpI5gG1W9L96Sx+vF/+UXBenJbhEVfir/cOzjmP1Hg
2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND
TtK5nYnBM7Py5JdnnAPww6jPt6uRypDZqqM8YOct1OEsBr8gPvmQvt5hDGJKqW37
2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND
xFbad6ILwYIE0DXAu2h9y20Pl3fy4Kb2LQDjltiaQ2IBiHFRUB/H2DOxq0NpH4z7
2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND
wqU/ai7sXWT/Vq4R6jD+c0V0WP4+VgSkgqPvnSYHwqQUbc9Kh7RwRnVyzLupbWdM
2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND Pr+MZ2D1jg27
2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND -----END CERTIFICATE REQUEST-----
2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND --=451b80dc-996f-432e-9e4f-2b29ef6d1141=--
2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND **%QStart: VDSM_CERTIFICATE_CHAIN
2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND ###
2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND ###
2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND ### Please input VDSM
certificate chain that matches certificate request, top is issuer
2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND ###
2018-02-07 16:56:21,759-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND ### type
'--=451b80dc-996f-432e-9e4f-2b29ef6d1141=--' in own line to mark end,
'--=451b80dc-996f-ABORT-9e4f-2b29ef6d1141=--' aborts
2018-02-07 16:56:21,759-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND ***Q:MULTI-STRING
VDSM_CERTIFICATE_CHAIN --=451b80dc-996f-432e-9e4f-2b29ef6d1141=--
--=451b80dc-996f-ABORT-9e4f-2b29ef6d1141=--
2018-02-07 16:56:21,759-0600 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND **%QEnd: VDSM_CERTIFICATE_CHAIN
2018-02-07 16:56:22,765-0600 DEBUG otopi.context
context._executeMethod:143 method exception
Traceback (most recent call last):
File "/tmp/ovirt-h7XmTvEqc3/pythonlib/otopi/context.py", line 133, in
_executeMethod
method['method']()
File
"/tmp/ovirt-h7XmTvEqc3/otopi-plugins/ovirt-host-common/vdsm/pki.py",
line 241, in _misc
'\n\nPlease input VDSM certificate chain that '
File "/tmp/ovirt-h7XmTvEqc3/otopi-plugins/otopi/dialog/machine.py",
line 327, in queryMultiString
v = self._readline()
File "/tmp/ovirt-h7XmTvEqc3/pythonlib/otopi/dialog.py", line 248, in
_readline
raise IOError(_('End of file'))
IOError: End of file
2018-02-07 16:56:22,766-0600 ERROR otopi.context
context._executeMethod:152 Failed to execute stage 'Misc configuration':
End of file
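The deploy dialog above aborts while waiting for the VDSM certificate
chain, which suggests the engine-side PKI no longer matches after the
certificate swap. A quick sketch for checking the chain on the engine
(the stock oVirt paths; the hostname is illustrative):

openssl verify -CAfile /etc/pki/ovirt-engine/ca.pem /etc/pki/ovirt-engine/certs/apache.cer
openssl s_client -connect engine.example.com:443 -showcerts </dev/null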
7 years, 2 months
Few Questions on New Install
by Talk Jesus
Greetings,
Just installed oVirt:
Software Version: 4.2.0.2-1.el7.centos
How Do I:
- add a subnet of IPv4 addresses to assign to VMs
- download (or import) basic Linux templates like CentOS 7 or Ubuntu 16,
even if using a minimal ISO
- import from SolusVM based KVM nodes
Does oVirt support bulk IPv4 assignment to VMs? If I wish to assign say a
full /26 subnet of IPv4 to VM #1, is this a one click option?
Thank you. I read the docs, but everything is a bit confusing for me.
7 years, 2 months
Network Topologies
by aeR7Re
Hello,
I'm looking for some advice on or even just some examples of how other
oVirt users have configured networking inside their clusters.

Currently we're running a cluster with hosts spread across multiple racks
in our DC, with layer 2 spanned between them for VM networks. While this
is functional, it's 100% not ideal as there's multiple single points of
failure and at some point someone is going to accidentally loop it :)

What we're after is a method of providing a VM network across multiple
racks where there are no single points of failure. We've got layer 3
switches in racks capable of running an IGP/EGP.

Current ideas:
- Run a routing daemon on each VM and have it advertise a /32 to the
distribution switch
- OVN for layer 2 between hosts + potentially VRRP or similar on the
distribution switch

So as per my original paragraph, any advice on the most appropriate
network topology for an oVirt cluster? Or how have you set up your
networks?

Thank you

Sent with ProtonMail (https://protonmail.com) Secure Email.
7 years, 2 months
Re: [ovirt-users] Live migration of VM(0 downtime) while Hypervisor goes down in ovirt
by Luca 'remix_tj' Lorenzetto
What you're looking for is called fault tolerance in other hypervisors.
As far as I know, oVirt doesn't implement such a solution.
But if your system doesn't support the failure recovery done by the high
availability options, you should consider revising your application
architecture if you want to keep running on oVirt.
Luca
Il 10 feb 2018 8:31 AM, "Ranjith P" <ranjithspr13(a)yahoo.com> ha scritto:
Hi,
>>Who's shutting down the hypervisor? (Or perhaps it is shutdown
externally, due to overheating or otherwise?)
We need continuous availability of VMs in our production setup. If a
hypervisor goes down due to hardware failure or load, the VMs on that
hypervisor reboot and are started on the available hypervisors. This
works as expected, but it disrupts the VMs. Can you suggest a solution in
this case? Can we achieve this using glusterfs?
Thanks & Regards
Ranjith
Sent from Yahoo Mail on Android
<https://overview.mail.yahoo.com/mobile/?.src=Android>
On Sat, Feb 10, 2018 at 2:07 AM, Yaniv Kaul
<ykaul(a)redhat.com> wrote:
On Fri, Feb 9, 2018 at 9:25 PM, ranjithspr13(a)yahoo.com <
ranjithspr13(a)yahoo.com> wrote:
Hi,
Anyone can suggest how to setup VM Live migration (without restart vm)
while Hypervisor goes down in ovirt?
I think there are two parts to achieving this:
1. Have a script that migrates VMs off a specific host. This should be easy
to write using the Python/Ruby/Java SDK, Ansible or using REST directly.
2. Having this script run as a service when a host shuts down, in the right
order - well before libvirt and VDSM shut down, and would be fast enough
not to be terminated by systemd.
This is a bit more challenging.
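A minimal sketch of step 1 with the Python SDK (engine URL, credentials
and host name are illustrative):

import ovirtsdk4 as sdk

conn = sdk.Connection(url='https://engine.example.com/ovirt-engine/api',
                      username='admin@internal', password='secret',
                      ca_file='/etc/pki/ovirt-engine/ca.pem')
vms_service = conn.system_service().vms_service()
# migrate every running VM off the host that is about to go down;
# with no destination given, the engine picks a target host
for vm in vms_service.list(search='host=myhost and status=up'):
    vms_service.vm_service(vm.id).migrate()
conn.close()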
Who's shutting down the hypervisor? (Or perhaps it is shutdown externally,
due to overheating or otherwise?)
Y.
Using glusterfs is it possible? Then how?
Thanks & Regards
Ranjith
Sent from Yahoo Mail on Android
<https://overview.mail.yahoo.com/mobile/?.src=Android>
7 years, 2 months
VM backups - Bacchus
by Niyazi Elvan
Dear Friends,
It has been a while since I last had time to work on Bacchus. This weekend
I created an Ansible playbook to replace the installation procedure.
You simply download the installer.yml and settings.yml files from the git
repo and run the installer as "ansible-playbook installer.yml". Please check it at
https://github.com/openbacchus/bacchus . I recommend you to run the
installer on a fresh VM, which has no MySQL DB or previous installation.
Hope this helps to more people and please let me know about your ideas.
ps. Regarding oVirt 4.2, I had a chance to look at it and tried the new
domain type "Backup Domain". This is really cool feature and I am planning
to implement the support in Bacchus. Hopefully, CBT will show up soon and
we will have a better world :)
Kind Regards,
--
Niyazi Elvan
7 years, 2 months
Maximum time node can be offline.
by Thomas Letherby
Hello all,
Is there a maximum length of time an oVirt Node 4.2-based host can be
offline in a cluster before it has issues when powered back on?
The reason I ask is in my lab I currently have a three node cluster that
works really well, however a lot of the time I only actually need the
resources of one host, so to save power I'd like to keep the other two
offline until needed.
I can always script them to boot once a week or so if I need to.
Thanks,
Thomas
7 years, 2 months
Live migration of VM(0 downtime) while Hypervisor goes down in ovirt
by ranjithspr13@yahoo.com
Hi,
Can anyone suggest how to set up VM live migration (without restarting the
VM) when a hypervisor goes down in oVirt?
Is it possible using glusterfs? Then how?
Thanks & Regards
Ranjith
Sent from Yahoo Mail on Android
7 years, 2 months
Importing Libvirt Kvm Vms to oVirt Status: Released in oVirt 4.2 using ssh - Failed to communicate with the external provider
by maoz zadok
Hello there,
I'm following the
https://www.ovirt.org/develop/release-management/features/virt/KvmToOvirt/
guide in order to import VMs from libvirt into oVirt using ssh.
URL: "qemu+ssh://host1.example.org/system"
I get the following error:
Failed to communicate with the external provider, see log for additional
details.
oVirt agent log:
- Failed to retrieve VMs information from external server
qemu+ssh://XXX.XXX.XXX.XXX/system
- VDSM XXX command GetVmsNamesFromExternalProviderVDS failed: Cannot recv
data: Host key verification failed.: Connection reset by peer

remote host sshd DEBUG log:
Feb 7 16:38:29 XXX sshd[110005]: Connection from XXX.XXX.XXX.147 port 48148 on XXX.XXX.XXX.123 port 22
Feb 7 16:38:29 XXX sshd[110005]: debug1: Client protocol version 2.0; client software version OpenSSH_7.4
Feb 7 16:38:29 XXX sshd[110005]: debug1: match: OpenSSH_7.4 pat OpenSSH* compat 0x04000000
Feb 7 16:38:29 XXX sshd[110005]: debug1: Local version string SSH-2.0-OpenSSH_7.4
Feb 7 16:38:29 XXX sshd[110005]: debug1: Enabling compatibility mode for protocol 2.0
Feb 7 16:38:29 XXX sshd[110005]: debug1: SELinux support disabled [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: permanently_set_uid: 74/74 [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: list_hostkey_types: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_KEXINIT sent [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_KEXINIT received [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: algorithm: curve25519-sha256 [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: host key algorithm: ecdsa-sha2-nistp256 [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: client->server cipher: chacha20-poly1305(a)openssh.com MAC: <implicit> compression: none [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: server->client cipher: chacha20-poly1305(a)openssh.com MAC: <implicit> compression: none [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: curve25519-sha256 need=64 dh_need=64 [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: curve25519-sha256 need=64 dh_need=64 [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: expecting SSH2_MSG_KEX_ECDH_INIT [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: rekey after 134217728 blocks [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_NEWKEYS sent [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: expecting SSH2_MSG_NEWKEYS [preauth]
Feb 7 16:38:29 XXX sshd[110005]: Connection closed by XXX.XXX.XXX.147 port 48148 [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: do_cleanup [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: do_cleanup
Feb 7 16:38:29 XXX sshd[110005]: debug1: Killing privsep child 110006

(three further attempts with the identical pattern follow - pids 110007,
110009 and 110011 on client ports 48150, 48152 and 48154 - each closed by
the client immediately after key exchange)
Thank you!
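The "Host key verification failed" part suggests the import runs as a user
that has never accepted the remote host's key. A sketch of the usual fix
on the proxy host, assuming the import is executed by the vdsm user (whose
known_hosts would live under /var/lib/vdsm/.ssh):

sudo -u vdsm ssh root@host1.example.org   # accept the host key once, then exit and retry the import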
7 years, 2 months
Virt-viewer not working over VPN
by Vincent Royer
Hi, I asked this on the virt-viewer list, but it appears to be dead, so my
apologies if this isn't the right place for this question.
When I access my vm's locally using virt-viewer on windows clients,
everything works fine, spice or vnc.
When I access the same vm's remotely over a site-to-site VPN (setup between
the two firewalls), it fails with an error: unable to connect to libvirt
with uri: [none]. Similarly I cannot connect in a browser-based vnc
session (cannot connect to host).
I can resolve the DNS of the server from my remote client (domain override
in the firewall pointing to the DNS server locally) and everything else I
do seems completely unaware of the vpn link (SSH, RDP, etc). For example
connecting to https://ovirt-enginr.mydomain.com works as expected. The
only function not working remotely is virt-viewer.
Any clues would be appreciated!
7 years, 2 months
Re: [ovirt-users] Ovirt backups lead to unresponsive VM
by Alex K
Ok. I will reproduce and collect logs.
Thanx,
Alex
On Jan 29, 2018 20:21, "Mahdi Adnan" <mahdi.adnan(a)outlook.com> wrote:
I have Windows VMs, both client and server.
if you provide the engine.log file we might have a look at it.
--
Respectfully
*Mahdi A. Mahdi*
------------------------------
*From:* Alex K <rightkicktech(a)gmail.com>
*Sent:* Monday, January 29, 2018 5:40 PM
*To:* Mahdi Adnan
*Cc:* users
*Subject:* Re: [ovirt-users] Ovirt backups lead to unresponsive VM
Hi,
I have observed this logged at host when the issue occurs:
VDSM command GetStoragePoolInfoVDS failed: Connection reset by peer
or
VDSM host.domain command GetStatsVDS failed: Connection reset by peer
At engine logs have not been able to correlate.
Are you hosting Windows 2016 Server and Windows 10 VMs?
The weird thing is that I have the same setup on other clusters with no issues.
Thanx,
Alex
On Sun, Jan 28, 2018 at 9:21 PM, Mahdi Adnan <mahdi.adnan(a)outlook.com>
wrote:
Hi,
We have a cluster of 17 nodes, backed by GlusterFS storage, and using this
same script for backup.
we have no issues with it so far.
have you checked engine log file ?
--
Respectfully
*Mahdi A. Mahdi*
------------------------------
*From:* users-bounces(a)ovirt.org <users-bounces(a)ovirt.org> on behalf of Alex
K <rightkicktech(a)gmail.com>
*Sent:* Wednesday, January 24, 2018 4:18 PM
*To:* users
*Subject:* [ovirt-users] Ovirt backups lead to unresponsive VM
Hi all,
I have a cluster with 3 nodes, using oVirt 4.1 in a self-hosted setup on
top of glusterfs. Guest agents are installed on the VMs.
On some VMs (especially one Windows Server 2016 64-bit with 500 GB of disk)
I almost always observe that during the backup of the VM the VM is
rendered unresponsive (the dashboard shows a question mark at the VM
status and the VM does not respond to ping or to anything).
For scheduled backups I use:
https://github.com/wefixit-AT/oVirtBackup
The script does the following:
1. snapshot VM (this is done ok without any failure)
2. Clone snapshot (this steps renders the VM unresponsive)
3. Export Clone
4. Delete clone
5. Delete snapshot
Do you have any similar experience? Any suggestions to address this?
I have never seen such issue with hosted Linux VMs.
The cluster has enough storage to accommodate the clone.
Thanx,
Alex
7 years, 2 months
Cannot Remove Disk
by Donny Davis
oVirt 4.2 has been humming away quite nicely for me for the last few months,
but now I am hitting an issue with any API call that touches a
specific disk. This disk resides on a hyperconverged DC, and none of
the other disks seem to be affected. Here is the error thrown.
2018-02-08 10:13:20,005-05 ERROR
[org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] (default
task-22) [7b48d1ec-53a7-497a-af8e-938f30a321cf] Error during
ValidateFailure.:
org.ovirt.engine.core.bll.quota.InvalidQuotaParametersException: Quota
6156b8dd-50c9-4e8f-b1f3-4a6449b02c7b does not match storage pool
5a497956-0380-021e-0025-00000000035e
Any ideas what can be done to fix this?
7 years, 2 months
Info about exporting from vSphere
by Gianluca Cecchi
Hello,
I have this kind of situation.
Source env:
It is vSphere 6.5 (both vCenter Server appliance and ESXi hosts) where I
have an admin account to connect to, but currently only to vCenter and not
to the ESXi hosts
The VM to be migrated is Windows 2008 R2 SP1 with virtual hw version 8
(ESXi 5.0 and later) and has one boot disk 35Gb and one data disk 250Gb.
The SCSI controller is LSI logic sas and network vmxnet3
It has no snapshots at the moment
I see in my oVirt 4.1.9 that I can import from:
1) VMware
2) VMware Virtual Appliance
and found also related documentations in RHEV 4.1 Virtual Machine
Management pdf
Some doubts:
- which of the 2 methods is best, if I can choose? Their pros & cons?
- Does 1) imply that I also need the ESXi account? Currently my windows
domain account that gives me access to vcenter doesn't work connecting to
ESXi hosts
- also it seems that 1) is more intrusive, while for 2) I only need to put
the ova file into some nfs share...
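For method 2, a sketch of producing the OVA with VMware's ovftool
(inventory path and credentials are illustrative):

ovftool vi://admin@vcenter.example.com/Datacenter1/vm/myvm /nfs/export/myvm.ova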
Thanks in advance,
Gianluca
7 years, 2 months
when creating VMs, I don't want hosted_storage to be an option
by Mike Farnam
Hi All - Is there a way to mark hosted_storage somehow so that it's not available for adding new VMs? Right now it's the default storage domain when adding a VM. At the least, I'd like to make another storage domain the default.
Is there a way to do this?
Thanks
7 years, 2 months
oVirt CLI Question
by Andrei V
Hi,
How do I force power off, and then launch (after a timeout, e.g. 20 sec),
a particular VM from a bash or Python script?
Is 20 sec enough for the oVirt engine to update after a forced power off?
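A sketch with the Python SDK (URL and credentials illustrative) - polling
the status instead of sleeping a fixed 20 sec avoids guessing how long the
engine needs:

import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

conn = sdk.Connection(url='https://engine.example.com/ovirt-engine/api',
                      username='admin@internal', password='secret',
                      ca_file='/etc/pki/ovirt-engine/ca.pem')
vms_service = conn.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]
vm_service = vms_service.vm_service(vm.id)
vm_service.stop()                                  # force power off
while vm_service.get().status != types.VmStatus.DOWN:
    time.sleep(2)                                  # wait until the engine sees it down
vm_service.start()
conn.close()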
What happened with this wiki? Seems like it is deleted or moved.
http://wiki.ovirt.org/wiki/CLI#Usage
Is this project part of the oVirt distro? It looks like it is in active
development, with the last updates 2 months ago.
https://github.com/fbacchella/ovirtcmd
Thanks !
7 years, 2 months
IndexError python-sdk
by David David
Hi all.
python-ovirt-engine-sdk4-4.2.2-2.el7.centos.x86_64
The issue is that I can't upload a snapshot: I get an IndexError when
running upload_disk_snapshots.py
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload...
Output:
Traceback (most recent call last):
File "snapshot_upload.py", line 298, in <module>
images_chain = get_images_chain(disk_path)
File "snapshot_upload.py", line 263, in get_images_chain
base_volume = [v for v in volumes_info.values() if
'full-backing-filename' not in v ][0]
IndexError: list index out of range
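That line picks the volume without a backing file as the base, so the
empty list means none of the volumes in the chain looked like a base
volume to the script. A quick sketch to see what qemu reports for the
chain (same path as passed to the script):

qemu-img info --backing-chain /path/to/top_volume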
7 years, 2 months
Re: [ovirt-users] vdsmd fails after upgrade 4.1 -> 4.2
by Frank Rothenstein
Thanks Thomas,
it seems you were right. I followed the instructions to enable
hugepages via kernel command line and after reboot vdsmd starts
correctly.
(I went back to 4.1.9 in between, added the kernel command line and
upgraded to 4.2)
The docs/release notes should mention it - or did I miss it?
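For reference, a sketch of how those parameters would be made persistent
on CentOS 7 (sizes as suggested by Thomas below; adjust to the host's RAM):

sed -i 's/^GRUB_CMDLINE_LINUX="/&default_hugepagesz=1G hugepagesz=1G hugepages=4 hugepagesz=2M hugepages=1024 /' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg    # then reboot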
On Tuesday, 06.02.2018, 17:17 -0800, Thomas Davis wrote:
> sorry, make that:
>
> hugeadm --pool-list
>       Size  Minimum  Current  Maximum  Default
>    2097152     1024     1024     1024        *
> 1073741824        4        4        4
>
> On Tue, Feb 6, 2018 at 5:16 PM, Thomas Davis <tadavis(a)lbl.gov> wrote:
> > I found that you now need hugepage1g support. The error messages
> > are wrong - it's not truly a libvirt problem, it's hugepages1g are
> > missing for libvirt.
> >
> > add something like:
> >
> > default_hugepagesz=1G hugepagesz=1G hugepages=4 hugepagesz=2M
> > hugepages=1024 to the kernel command line.
> >
> > You can also do a 'yum install libhugetlbfs-utils', then do:
> >
> > hugeadm --list
> > Mount Point          Options
> > /dev/hugepages       rw,seclabel,relatime
> > /dev/hugepages1G     rw,seclabel,relatime,pagesize=1G
> >
> > if you do not see the /dev/hugepages1G listed, then vdsmd/libvirt
> > will not start.
> >
> > On Mon, Feb 5, 2018 at 5:49 AM, Frank Rothenstein <f.rothenstein(a)bodden-kliniken.de> wrote:
> > > Hi,
> > >
> > > I'm currently stuck - after upgrading 4.1 to 4.2 I cannot start the
> > > host-processes.
> > > systemctl start vdsmd fails with following lines in journalctl:
> > >
> > > <snip>
> > >
> > > Feb 05 14:40:15 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: vdsm: Running wait_for_network
> > > Feb 05 14:40:15 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: vdsm: Running run_init_hooks
> > > Feb 05 14:40:15 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: vdsm: Running check_is_configured
> > > Feb 05 14:40:15 glusternode1.bodden-kliniken.net sasldblistusers2[10440]: DIGEST-MD5 common mech free
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: Error:
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: One of the modules is not configured to work with VDSM.
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: To configure the module use the following:
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: 'vdsm-tool configure [--module module-name]'.
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: If all modules are not configured try to use:
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: 'vdsm-tool configure --force'
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: (The force flag will stop the module's service and start it
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: afterwards automatically to load the new configuration.)
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: abrt is already configured for vdsm
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: lvm is configured for vdsm
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: libvirt is not configured for vdsm yet
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: Current revision of multipath.conf detected, preserving
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: Modules libvirt are not configured
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: vdsm: stopped during execute check_is_configured task (task returned with error code 1).
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net systemd[1]: vdsmd.service: control process exited, code=exited status=1
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net systemd[1]: Failed to start Virtual Desktop Server Manager.
> > > -- Subject: Unit vdsmd.service has failed
> > > -- Defined-By: systemd
> > > -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> > > --
> > > -- Unit vdsmd.service has failed.
Frank Rothenstein
Systemadministrator
Fon: +49 3821 700 125
Fax: +49 3821 700 190
Internet: www.bodden-kliniken.de
E-Mail: f.rothenstein(a)bodden-kliniken.de
------------MIME--520934545-17562-delim
Content-Type: text/html;
charset="utf-8"
Content-Transfer-Encoding: quoted-printable
=3Chtml=3E
=3Cbody=3E
Thanks Thomas, <br>
<br>
it seems you were right. I followed the instructions to enable <br>
hugepages via kernel command line and after reboot vdsmd starts <br>
correctly. <br>
(I went back to 4.1.9 in between, added the kernel command line and <br=
>
upgraded to 4.2) <br>
<br>
The docs/release notes should mention it - or did I miss it? <br>
<br>
On Tuesday, 06.02.2018, at 17:17 -0800, Thomas Davis wrote:
> sorry, make that:
>
> hugeadm --pool-list
>        Size  Minimum  Current  Maximum  Default
>     2097152     1024     1024     1024        *
>  1073741824        4        4        4
>
>
> On Tue, Feb 6, 2018 at 5:16 PM, Thomas Davis <tadavis(a)lbl.gov> wrote:
> > I found that you now need hugepage1g support. The error messages
> > are wrong - it's not truly a libvirt problem, it's that 1G hugepages are
> > missing for libvirt.
> >
> > add something like:
> >
> > default_hugepagesz=1G hugepagesz=1G hugepages=4 hugepagesz=2M
> > hugepages=1024 to the kernel command line.
> >
> > You can also do a 'yum install libhugetlbfs-utils', then do:
> >
> > hugeadm --list
> > Mount Point         Options
> > /dev/hugepages      rw,seclabel,relatime
> > /dev/hugepages1G    rw,seclabel,relatime,pagesize=1G
> >
> > if you do not see the /dev/hugepages1G listed, then vdsmd/libvirt
> > will not start.
> >
> > On Mon, Feb 5, 2018 at 5:49 AM, Frank Rothenstein <f.rothenstein(a)bodden-kliniken.de> wrote:
> > > Hi,
> > >
> > > I'm currently stuck - after upgrading 4.1 to 4.2 I cannot start the
> > > host processes.
> > > systemctl start vdsmd fails with the following lines in journalctl:
> > >
> > > <snip>
> > >
> > > Feb 05 14:40:15 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: vdsm: Running wait_for_network
> > > Feb 05 14:40:15 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: vdsm: Running run_init_hooks
> > > Feb 05 14:40:15 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: vdsm: Running check_is_configured
> > > Feb 05 14:40:15 glusternode1.bodden-kliniken.net sasldblistusers2[10440]: DIGEST-MD5 common mech free
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: Error:
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: One of the modules is not configured to work with VDSM.
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: To configure the module use the following:
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: 'vdsm-tool configure [--module module-name]'.
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: If all modules are not configured try to use:
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: 'vdsm-tool configure --force'
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: (The force flag will stop the module's service and start it
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: afterwards automatically to load the new configuration.)
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: abrt is already configured for vdsm
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: lvm is configured for vdsm
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: libvirt is not configured for vdsm yet
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: Current revision of multipath.conf detected, preserving
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: Modules libvirt are not configured
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: vdsm: stopped during execute check_is_configured task (task returned with error code 1).
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net systemd[1]: vdsmd.service: control process exited, code=exited status=1
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net systemd[1]: Failed to start Virtual Desktop Server Manager.
> > > -- Subject: Unit vdsmd.service has failed
> > > -- Defined-By: systemd
> > > -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> > > --
> > > -- Unit vdsmd.service has failed.
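
On CentOS 7 the kernel arguments can be made persistent with grubby - a sketch, using the example hugepage counts from Thomas's mail, so adjust them to your own hosts:

  # Append the hugepage arguments to every installed kernel entry.
  grubby --update-kernel=ALL --args="default_hugepagesz=1G hugepagesz=1G hugepages=4 hugepagesz=2M hugepages=1024"
  reboot
  # After the reboot, verify the pools:
  hugeadm --pool-list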
Frank Rothenstein
Systemadministrator
BODDEN-KLINIKEN Ribnitz-Damgarten GmbH
7 years, 2 months
Re: [ovirt-users] ovn problem - Failed to communicate with the external provider, see log for additional details.
by George Sitov
Thank you! It was a certificate problem. I returned it to the engine PKI and now everything works.
On Feb 8, 2018 at 4:44 PM, "Marcin Mirecki" <mmirecki(a)redhat.com> wrote:
Hello George,
Probably your engine and provider certs do not match.
The engine pki should be in:
/etc/pki/ovirt-engine/certs/
The provider keys are defined in the SSL section of the config file
(/etc/ovirt-provider-ovn/conf.d/...):
[SSL]
https-enabled=true
ssl-key-file=...
ssl-cert-file=...
ssl-cacert-file=...
You can compare the keys/certs using openssl.
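For example, something like this (an RSA key is assumed, and the paths are placeholders for your configured files):

  # The two fingerprints should be identical if the engine and the
  # provider are meant to share the same certificate.
  openssl x509 -noout -fingerprint -in /path/to/ssl-cert-file
  openssl x509 -noout -fingerprint -in /etc/pki/ovirt-engine/certs/<provider>.cer
  # And the configured key must pair with the configured cert:
  openssl x509 -noout -modulus -in /path/to/ssl-cert-file | openssl md5
  openssl rsa -noout -modulus -in /path/to/ssl-key-file | openssl md5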
Was the provider created using engine-setup?
For testing purposes you can change the "https-enabled" to false and try
connecting using http.
Thanks,
Marcin
On Thu, Feb 8, 2018 at 12:58 PM, Ilya Fedotov <kosha79(a)gmail.com> wrote:
> Hello, Georgy
>
> Maybe the problem is that your domain name and your node name (local
> domain) differ, and so the certificate is not valid.
>
>
>
> with br, Ilya
>
> 2018-02-05 22:36 GMT+03:00 George Sitov <usual.man(a)gmail.com>:
>
>> Hello!
>>
>> I have a problem with configuring the external provider.
>>
>> I edited the config file - ovirt-provider-ovn.conf - and set the SSL parameters.
>> systemctl start ovirt-provider-ovn starts without problems.
>> In the external provider form in the web GUI I set:
>> Provider URL: https://ovirt.mydomain.com:9696
>> Username: admin@internal
>> Authentication URL: https://ovirt.mydomain.com:35357/v2.0/
>> But after I press the Test button I see the error - Failed to communicate with
>> the external provider, see log for additional details.
>>
>> /var/log/ovirt-engine/engine.log:
>> 2018-02-05 21:33:55,517+02 ERROR [org.ovirt.engine.core.bll.provider.network.openstack.BaseNetworkProviderProxy] (default task-29) [69fa312e-6e2e-4925-b081-385beba18a6a] Bad Gateway (OpenStack response error code: 502)
>> 2018-02-05 21:33:55,517+02 ERROR [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand] (default task-29) [69fa312e-6e2e-4925-b081-385beba18a6a] Command 'org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand' failed: EngineException: (Failed with error PROVIDER_FAILURE and code 5050)
>>
>> In /var/log/ovirt-provider-ovn.log:
>>
>> 2018-02-05 21:33:55,510 Starting new HTTPS connection (1): ovirt.astrecdata.com
>> 2018-02-05 21:33:55,516 [SSL: CERTIFICATE_VERIFY_FAILED] certificate
>> verify failed (_ssl.c:579)
>> Traceback (most recent call last):
>> File "/usr/share/ovirt-provider-ovn/handlers/base_handler.py", line
>> 126, in _handle_request
>> method, path_parts, content)
>> File "/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py",
>> line 176, in handle_request
>> return self.call_response_handler(handler, content, parameters)
>> File "/usr/share/ovirt-provider-ovn/handlers/keystone.py", line 33, in
>> call_response_handler
>> return response_handler(content, parameters)
>> File "/usr/share/ovirt-provider-ovn/handlers/keystone_responses.py",
>> line 60, in post_tokens
>> user_password=user_password)
>> File "/usr/share/ovirt-provider-ovn/auth/plugin_facade.py", line 26,
>> in create_token
>> return auth.core.plugin.create_token(user_at_domain, user_password)
>> File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/plugin.py",
>> line 48, in create_token
>> timeout=self._timeout())
>> File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line
>> 62, in create_token
>> username, password, engine_url, ca_file, timeout)
>> File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line
>> 53, in wrapper
>> response = func(*args, **kwargs)
>> File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line
>> 46, in wrapper
>> raise BadGateway(e)
>> BadGateway: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed
>> (_ssl.c:579)
>>
>> What am I doing wrong?
>> Please help.
>>
>> ----
>> With best regards Georgii.
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
7 years, 2 months
ovn problem - Failed to communicate with the external provider, see log for additional details.
by George Sitov
Hello!
I have a problem with configuring the external provider.
I edited the config file - ovirt-provider-ovn.conf - and set the SSL parameters.
systemctl start ovirt-provider-ovn starts without problems.
In the external provider form in the web GUI I set:
Provider URL: https://ovirt.mydomain.com:9696
Username: admin@internal
Authentication URL: https://ovirt.mydomain.com:35357/v2.0/
But after I press the Test button I see the error - Failed to communicate with
the external provider, see log for additional details.
/var/log/ovirt-engine/engine.log:
2018-02-05 21:33:55,517+02 ERROR
[org.ovirt.engine.core.bll.provider.network.openstack.BaseNetworkProviderProxy]
(default task-29) [69fa312e-6e2e-4925-b081-385beba18a6a] Bad Gateway
(OpenStack response error code: 502)
2018-02-05 21:33:55,517+02 ERROR
[org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand]
(default task-29) [69fa312e-6e2e-4925-b081-385beba18a6a] Command
'org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand'
failed: EngineException: (Failed with error PROVIDER_FAILURE and code 5050)
In /var/log/ovirt-provider-ovn.log:
2018-02-05 21:33:55,510 Starting new HTTPS connection (1):
ovirt.astrecdata.com
2018-02-05 21:33:55,516 [SSL: CERTIFICATE_VERIFY_FAILED] certificate
verify failed (_ssl.c:579)
Traceback (most recent call last):
File "/usr/share/ovirt-provider-ovn/handlers/base_handler.py", line 126,
in _handle_request
method, path_parts, content)
File "/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py", line
176, in handle_request
return self.call_response_handler(handler, content, parameters)
File "/usr/share/ovirt-provider-ovn/handlers/keystone.py", line 33, in
call_response_handler
return response_handler(content, parameters)
File "/usr/share/ovirt-provider-ovn/handlers/keystone_responses.py", line
60, in post_tokens
user_password=user_password)
File "/usr/share/ovirt-provider-ovn/auth/plugin_facade.py", line 26, in
create_token
return auth.core.plugin.create_token(user_at_domain, user_password)
File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/plugin.py", line
48, in create_token
timeout=self._timeout())
File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 62,
in create_token
username, password, engine_url, ca_file, timeout)
File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 53,
in wrapper
response = func(*args, **kwargs)
File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 46,
in wrapper
raise BadGateway(e)
BadGateway: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed
(_ssl.c:579)
What am I doing wrong?
Please help.
----
With best regards Georgii.
7 years, 2 months
Engine AAA LDAP startTLS Protocol Issue
by Alan Griffiths
Hi,
Trying to configure Engine to authenticate against OpenLDAP and I seem
to be hitting a protocol bug.
Attempts to test the login during the setup fail with
2018-02-07 12:27:37,872Z WARNING Exception: The connection reader was
unable to successfully complete TLS negotiation:
SSLException(message='Received fatal alert: protocol_version',
trace='getSSLException(Alerts.java:208) /
getSSLException(Alerts.java:154) / recvAlert(SSLSocketImpl.java:2033)
/ readRecord(SSLSocketImpl.java:1135) /
performInitialHandshake(SSLSocketImpl.java:1385) /
startHandshake(SSLSocketImpl.java:1413) /
startHandshake(SSLSocketImpl.java:1397) /
run(LDAPConnectionReader.java:301)', revision=0)
Running a packet trace I see that it's trying to negotiate with TLS
1.0, but my LDAP server only supports TLS 1.2.
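For reference, the server side is easy to confirm with openssl, assuming the directory also listens for ldaps on port 636 (the hostname here is a placeholder):

  # The first handshake should be rejected and the second should
  # succeed if the server really is TLS 1.2-only.
  openssl s_client -connect ldap.example.com:636 -tls1
  openssl s_client -connect ldap.example.com:636 -tls1_2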
This looks like a regression as it works fine in 4.0.
I see the issue in both 4.1 and 4.2
4.1.9.1
4.2.0.2
Should I submit a bug?
Thanks,
Alan
7 years, 2 months
oVirt DR: ansible with 4.1, only a subset of storage domain replicated
by Luca 'remix_tj' Lorenzetto
Hello,
i'm starting the implementation of our disaster recovery site with RHV
4.1.latest for our production environment.
Our production setup is very easy, with self hosted engine on dc
KVMPDCA, and virtual machines both in KVMPDCA and KVMPD dcs. All our
setup has an FC storage backend, which is EMC VPLEX/VMAX in KVMPDCA
and EMC VNX8000. Both storage arrays supports replication via their
own replication protocols (SRDF, MirrorView), so we'd like to delegate
to them the replication of data to the remote site, which is located
on another remote datacenter.
In the KVMPD DC we have some storage domains that contain non-critical
VMs, which we don't want to replicate to the remote site (in case of
failure they have a low priority and will be restored from a backup).
In our setup we won't replicate them, so they will not be available for
attachment on the remote site. Can this be an issue? Do we need to
replicate everything?
What about the master domain? Do I need the master storage domain to
stay on a replicated volume, or can it be any of the available ones?
I've seen that since 4.1 there's an API for updating OVF_STORE disks.
Do we need to invoke it with a frequency compatible with the
replication frequency on the storage side? At the moment we set the
RPO to 1 hr (even if the planned RPO requires 2 hrs). Does OVF_STORE
get updated with the required frequency?
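A rough sketch of how I plan to drive that refresh on a schedule through the REST API - note that the updateovfstore action name and URL layout are my assumption from the 4.1 DR material, so verify them against the engine API documentation before use:

  # Assumption: POSTing an empty <action/> to .../updateovfstore refreshes
  # the OVF_STORE disks of one storage domain; URL, credentials and the
  # storage domain UUID below are placeholders.
  curl -s -k -u 'admin@internal:PASSWORD' \
    -H 'Content-Type: application/xml' -d '<action/>' \
    'https://engine.example.com/ovirt-engine/api/storagedomains/<SD_UUID>/updateovfstore'

Run from cron slightly more often than the storage replication cycle, the replicated OVF_STORE copy should then never be older than the RPO.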
I've seen a recent presentation by Maor Lipchuk that is showing the
automagic ansible role for disaster recovery:
--
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)
"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)
Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <lorenzetto.luca(a)gmail.com>
7 years, 2 months
Spice Newb
by Marshall Mitchell
I'm attempting to get my first install of oVirt going in full swing. I have
all the hosts installed and an engine running. All is smooth. I'm now trying
to connect to the SPICE console with my Remote Viewer and I have no clue how
to figure out what port I should be connecting to. I've been all over the web
via Google looking for a process to install / configure / verify SPICE is
operational, but I've not been lucky. How do I go about connecting / finding
the port numbers for my VMs? I did open the firewall range required. I
appreciate the help.
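For example, is something like this on the host the right direction (read-only virsh; the VM name is a placeholder)?

  # List the running domains, then print the display URI of one of them.
  virsh -r list
  virsh -r domdisplay <vm-name>    # should print e.g. spice://127.0.0.1:5900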
-Marshall
7 years, 2 months
Clear name_server table entries
by Carlos Rodrigues
Hi,
I'm getting the following problem:
https://bugzilla.redhat.com/show_bug.cgi?id=1530944#c3
and after fix DNS entries no /etc/resolv.conf on host, i have to many
entries on name_server table:
engine=# select count(*) from name_server;
count
-------
31401
(1 row)
I would like to know if I may delete these entries.
Best regards,
--
Carlos Rodrigues
Engenheiro de Software Sénior
Eurotux Informática, S.A. | www.eurotux.com
(t) +351 253 680 300 (m) +351 911 926 110
7 years, 2 months
Re: [ovirt-users] After realizing HA migration, the virtual machine can still get the virtual machine information by using the "vdsm-client host getVMList" instruction on the host before the migration.
by Petr Kotas
Hi Pym,
the feature is now in testing. I am not sure when it will be released, but
I hope it will be soon.
Petr
On Tue, Feb 6, 2018 at 12:36 PM, Pym <pym0914(a)163.com> wrote:
> Thank you very much for your help, so is this patch released now? Where
> can I get this patch?
>
>
>
>
>
>
> At 2018-02-05 20:52:04, "Petr Kotas" <pkotas(a)redhat.com> wrote:
>
> Hi,
>
> I have experimented on the issue and figured out the reason for the
> original issue.
>
> You are right that vm1 is not properly stopped. This is due to a
> known issue in the graceful shutdown introduced in oVirt 4.2.
> The VMs on a host in shutdown are killed, but are not marked as stopped.
> This results in the behavior you have observed.
>
> Luckily, the patch is already done and present in the latest oVirt.
> However, beware that gracefully shutting down the host will result in a
> graceful shutdown of
> the VMs. This results in the engine not migrating them, since they have been
> terminated gracefully.
>
> Hope this helps.
>
> Best,
> Petr
>
>
> On Fri, Feb 2, 2018 at 6:00 PM, Simone Tiraboschi <stirabos(a)redhat.com>
> wrote:
>
>>
>>
>> On Thu, Feb 1, 2018 at 1:06 PM, Pym <pym0914(a)163.com> wrote:
>>
>>> The environment on my side may be different from the link. My VM1 can be
>>> used normally after it is started on host2, but there is still information
>>> left on host1 that is not cleaned up.
>>>
>>> Only the interface and background can still get the information of vm1
>>> on host1, but the vm2 has been successfully started on host2, with the HA
>>> function.
>>>
>>> I would like to ask a question, whether the UUID of the virtual machine
>>> is stored in the database or where is it maintained? Is it not successfully
>>> deleted after using the HA function?
>>>
>>>
>> I just encounter a similar behavior:
>> after a reboot of the host 'vdsm-client Host getVMFullList' is still
>> reporting an old VM that is not visible with 'virsh -r list --all'.
>>
>> I filed a bug to track it:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1541479
>>
>>
>>
>>>
>>>
>>>
>>>
>>> 2018-02-01 16:12:16,"Simone Tiraboschi" <stirabos(a)redhat.com> :
>>>
>>>
>>>
>>> On Thu, Feb 1, 2018 at 2:21 AM, Pym <pym0914(a)163.com> wrote:
>>>
>>>>
>>>> I checked vm1: it stays in the up state and can be used, but host1
>>>> still has, after the shutdown, a suspended vm1 that cannot be used - this is
>>>> the problem now.
>>>>
>>>> In host1, you can get the information of vm1 using the "vdsm-client
>>>> Host getVMList", but you can't get the vm1 information using the "virsh
>>>> list".
>>>>
>>>>
>>> Maybe a side effect of https://bugzilla.redhat.com
>>> /show_bug.cgi?id=1505399
>>>
>>> Arik?
>>>
>>>
>>>
>>>>
>>>>
>>>>
>>>> 2018-02-01 07:16:37,"Simone Tiraboschi" <stirabos(a)redhat.com> :
>>>>
>>>>
>>>>
>>>> On Wed, Jan 31, 2018 at 12:46 PM, Pym <pym0914(a)163.com> wrote:
>>>>
>>>>> Hi:
>>>>>
>>>>> The current environment is as follows:
>>>>>
>>>>> ovirt-engine version 4.2.0, compiled and installed from source. Two
>>>>> hosts are added, host1 and host2, respectively. On host1 a
>>>>> virtual machine vm1 is created, a vm2 is created on host2, and HA is
>>>>> configured.
>>>>>
>>>>> Operation steps:
>>>>>
>>>>> Use the shutdown -r command on host1. Vm1 successfully migrated to
>>>>> host2.
>>>>> When host1 is restarted, the following situation occurs:
>>>>>
>>>>> The state of the vm2 will be shown in two images, switching between up
>>>>> and pause.
>>>>>
>>>>> When I perform the "vdsm-client Host getVMList" in host1, I will get
>>>>> the information of vm1. When I execute the "vdsm-client Host getVMList" in
>>>>> host2, I will get the information of vm1 and vm2.
>>>>> When I do "virsh list" in host1, there is no virtual machine
>>>>> information. When I execute "virsh list" at host2, I will get information
>>>>> of vm1 and vm2.
>>>>>
>>>>> How to solve this problem?
>>>>>
>>>>> Is it the case that vm1 did not remove the information on host1 during
>>>>> the migration, or any other reason?
>>>>>
>>>>
>>>> Did you also check if your vms always remained up?
>>>> In 4.2 we have libvirt-guests service on the hosts which tries to
>>>> properly shutdown the running VMs on host shutdown.
>>>>
>>>>
>>>>>
>>>>> Thank you.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> Users mailing list
>>>>> Users(a)ovirt.org
>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>
>
>
7 years, 2 months
qcow2 images corruption
by Nicolas Ecarnot
Hello,
TL; DR : qcow2 images keep getting corrupted. Any workaround?
Long version:
This discussion has already been launched by me on the oVirt and on
qemu-block mailing list, under similar circumstances but I learned
further things since months and here are some informations :
- We are using 2 oVirt 3.6.7.5-1.el7.centos datacenters, using CentOS
7.{2,3} hosts
- Hosts :
- CentOS 7.2 1511 :
- Kernel = 3.10.0 327
- KVM : 2.3.0-31
- libvirt : 1.2.17
- vdsm : 4.17.32-1
- CentOS 7.3 1611 :
- Kernel 3.10.0 514
- KVM : 2.3.0-31
- libvirt 2.0.0-10
- vdsm : 4.17.32-1
- Our storage is 2 Equallogic SANs connected via iSCSI on a dedicated
network
- Depends on weeks, but all in all, there are around 32 hosts, 8 storage
domains and for various reasons, very few VMs (less than 200).
- One peculiar point is that most of our VMs are provided an additional
dedicated network interface that is iSCSI-connected to some volumes of
our SAN - these volumes not being part of the oVirt setup. That could
lead to a lot of additional iSCSI traffic.
From times to times, a random VM appears paused by oVirt.
Digging into the oVirt engine logs, then into the host vdsm logs, it
appears that the host considers the qcow2 image as corrupted.
Along what I consider as a conservative behavior, vdsm stops any
interaction with this image and marks it as paused.
Any try to unpause it leads to the same conservative pause.
After having found (https://access.redhat.com/solutions/1173623) the
right logical volume hosting the qcow2 image, I can run qemu-img check
on it.
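Concretely, on block storage the check/repair sequence looks roughly like this - the VG and LV names are placeholders, and the VM must be down while its volume is touched:

  # Activate the LV backing the image, check it, repair only the leaked
  # clusters, then deactivate it again.
  lvchange -ay /dev/<vg_uuid>/<image_lv>
  qemu-img check /dev/<vg_uuid>/<image_lv>
  qemu-img check -r leaks /dev/<vg_uuid>/<image_lv>
  lvchange -an /dev/<vg_uuid>/<image_lv>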
- On 80% of my VMs, I find no errors.
- On 15% of them, I find Leaked cluster errors that I can correct using
"qemu-img check -r all"
- On 5% of them, I find Leaked clusters errors and further fatal errors,
which can not be corrected with qemu-img.
In rare cases, qemu-img can correct them, but destroys large parts of
the image (becomes unusable), and on other cases it can not correct them
at all.
Months ago, I already sent a similar message but the error message was
about No space left on device
(https://www.mail-archive.com/qemu-block@gnu.org/msg00110.html).
This time, I don't have this message about space, but only corruption.
I kept reading and found a similar discussion in the Proxmox group :
https://lists.ovirt.org/pipermail/users/2018-February/086750.html
https://forum.proxmox.com/threads/qcow2-corruption-after-snapshot-or-heav...
What I read similar to my case is :
- usage of qcow2
- heavy disk I/O
- using the virtio-blk driver
In the proxmox thread, they tend to say that using virtio-scsi is the
solution. Having asked this question to oVirt experts
(https://lists.ovirt.org/pipermail/users/2018-February/086753.html) but
it's not clear the driver is to blame.
I agree with the answer Yaniv Kaul gave to me, saying I have to properly
report the issue, so I'd like to know which particular information I
can give you now.
As you can imagine, all this setup is in production, and for most of the
VMs, I can not "play" with them. Moreover, we launched a campaign of
nightly stopping every VM, qemu-img check them one by one, then boot.
So it might take some time before I find another corrupted image.
(which I'll preciously store for debug)
Other informations : We very rarely do snapshots, but I'm close to
imagine that automated migrations of VMs could trigger similar behaviors
on qcow2 images.
Last point about the versions we use : yes that's old, yes we're
planning to upgrade, but we don't know when.
Regards,
--
Nicolas ECARNOT
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
7 years, 2 months
Re: [ovirt-users] GUI trouble when adding FC datadomain
by Yaniv Kaul
On Feb 2, 2018 1:09 PM, "Roberto Nunin" <robnunin(a)gmail.com> wrote:
Hi Yaniv
Currently Engine is 4.2.0.2-1 on CentOS7.4
I've used the oVirt Node image 4.2-2017122007.iso.
LUN I need is certainly empty. (the second one in the list).
Please file a bug with logs, so we can understand the issue better.
Y.
2018-02-02 13:01 GMT+01:00 Yaniv Kaul <ykaul(a)redhat.com>:
> Which version are you using? Are you sure the LUNs are empty?
> Y.
>
>
> On Feb 2, 2018 11:19 AM, "Roberto Nunin" <robnunin(a)gmail.com> wrote:
>
>> Hi all
>>
>> I'm trying to setup ad HE cluster, with FC domain.
>> HE is also on FC.
>>
>> When I try to add the first domain in the datacenter, I've this form:
>>
>> [image: inline image 1 - screenshot of the new storage domain form]
>>
>> So I'm not able to choose any of the three volumes currently masked
>> towards the chosen host.
>> I've tried all browser I've: Firefox 58, Chrome 63, IE 11, MS Edge, with
>> no changes.
>>
>> Tried to click in the rows, scrolling etc. with no success.
>>
>> Someone has found the same issue ?
>> Thanks in advance
>>
>> --
>> Roberto Nunin
>>
>>
>>
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
--
Roberto Nunin
7 years, 2 months
Migration of a VM from ovirt 3.6 to ovirt 4.2
by eee ffff
Dear ovirt-users,
I would like to copy the VMs that I have now on a running ovirt 3.6 Data Center to a new ovirt 4.2 Data Center, located in a different building. An export domain is not an option,
as I would need to upgrade the ovirt 3.6 host to 4.2 and (as this is an operation that I would have to do multiple times) constantly upgrading and downgrading a host, so that it would be compatible to the ovirt environment does not make sense.
Do you have other suggestions?
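For completeness, the only workaround I can picture is copying each disk by hand with qemu-img and re-importing it on the 4.2 side. A rough sketch of the per-disk step (all paths are invented placeholders; the VM would have to be down, and convert flattens any snapshot chain):

import subprocess

# Both paths are placeholders; the source lives under the 3.6 storage
# domain, the destination is a staging file to carry to the new building.
SRC = "/rhev/data-center/<sd-path>/images/<img-id>/<vol-id>"
DST = "/tmp/staging/vm-disk.qcow2"

# Copy/flatten the disk into a standalone qcow2 file, with progress.
subprocess.run(["qemu-img", "convert", "-p", "-O", "qcow2", SRC, DST],
               check=True)

The resulting file could then be uploaded on the 4.2 side, but I would rather avoid scripting all of this.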
Cheers,
Eli
4.1 slot user portal vm creation
by Staniforth, Paul
Hello,
We are experiencing slow responses when trying to create a new VM in the
user portal (for some users the New Virtual Machine page doesn't get
created). Also, the Templates page of the user portal doesn't list the
templates; it just shows the 3 waiting-to-load icons flashing.
In the admin portal the templates are listed with no problem.
We are running 4.1.9 on the engine and nodes.
Any help appreciated.
Thanks,
Paul S.
To view the terms under which this email is distributed, please go to:-
http://disclaimer.leedsbeckett.ac.uk/disclaimer/disclaimer.html
Add a disk and set the console for a VM in the user portal
by nicolas@devels.es
Hi,
We recently upgraded to oVirt 4.2.0 and we're testing things so we can
determine whether our production system can also be upgraded. We make
extensive use of the User Portal. I've granted the VmCreator and
DiskProfileUser roles to a user (the user has a quota as well), logged
in to the user portal, and I can successfully create a VM, setting its
memory and CPUs, but:
1) I can't see a way to change the console type. By default, when the
machine is created, SPICE is chosen as the mechanism, and I'd like to
change it to VNC, but I can't find a way.
2) I can't see a way to add a disk to the VM.
I'm attaching a screenshot of what I see in the panel.
Are some new privileges needed to add a disk or change the console type?
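In the meantime, unless I'm missing something, I suppose point 1 could be
worked around through the Python SDK instead of the portal. An untested
sketch of what I have in mind (the engine URL, credentials and VM name
are placeholders, and I'm assuming ovirt-engine-sdk-python 4.x):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details.
connection = sdk.Connection(
    url="https://engine.example.com/ovirt-engine/api",
    username="admin@internal",
    password="secret",
    ca_file="ca.pem",
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search="name=myvm")[0]  # assumes the VM exists
consoles_service = vms_service.vm_service(vm.id).graphics_consoles_service()

# Drop the existing (SPICE) consoles and add a VNC one instead.
for console in consoles_service.list():
    consoles_service.console_service(console.id).remove()
consoles_service.add(types.GraphicsConsole(protocol=types.GraphicsType.VNC))

connection.close()

Still, our users won't have SDK access, so portal support is what we
really need.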
Thanks
[Attachment: "Captura de pantalla de 2018-02-06 11-43-35.png" (screenshot, PNG, ~12 KB)]
A possible bug on Fedora 27
by Valentin Bajrami
Hi Community,
Recently we discovered that our VMs became unstable after upgrading
from Fedora 26 to Fedora 27. The journalctl log shows the following:
Jan 29 20:03:28 host1.project.local libvirtd[2741]: 2018-01-29
19:03:28.789+0000: 2741: error : qemuMonitorIO:705 : internal error: End
of file from qemu monitor
Jan 29 20:09:14 host1.project.local libvirtd[2741]: 2018-01-29
19:09:14.111+0000: 2741: error : qemuMonitorIO:705 : internal error: End
of file from qemu monitor
Jan 29 20:10:29 host1.project.local libvirtd[2741]: 2018-01-29
19:10:29.584+0000: 2741: error : qemuMonitorIO:705 : internal error: End
of file from qemu monitor
A similar bug report is already present here:
https://bugzilla.redhat.com/show_bug.cgi?id=1523314 but it doesn't
reflect our problem entirely. That bug seems to be triggered only when a
VM is shut down gracefully. In our case it is triggered without any
attempt to shut down a VM. Again, this is causing the VMs to be
unstable, and eventually they shut down by themselves.
Do you have any clue what could be causing this?
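In case it helps correlate things, we count these monitor EOFs per hour
straight from the journal with roughly this (a quick sketch; adjust the
unit name and time window to your setup):

import re
import subprocess
from collections import Counter

# Pull the libvirtd journal for the last week.
out = subprocess.run(
    ["journalctl", "-u", "libvirtd", "--since", "7 days ago", "--no-pager"],
    capture_output=True, text=True, check=True,
).stdout

hits = Counter()
for line in out.splitlines():
    if "End of file from qemu monitor" in line:
        # Journal lines start like "Jan 29 20:03:28 host1 ...";
        # bucket them by month/day/hour.
        m = re.match(r"(\w+\s+\d+ \d+):", line)
        if m:
            hits[m.group(1)] += 1

for hour, count in sorted(hits.items()):
    print(hour, count)

This lets us see whether the spikes line up with the moments the VMs go
down.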
--
Kind regards,
Valentin Bajrami