Re: [Users] Host installation failed with error "Unable to set host time"
by Nicholas Kesick
Daniel,

It seems clear to me that for some reason Fedora 16 can't access the RTC (real time clock - hardware clock) on your computer's motherboard. Odd that RHEL 6.2 can, but I forget what it is based on. Anyway, can you try a Fedora 17 live CD and see if the "hwclock --show" command works? If it doesn't, I am thinking that you will want to file a Bugzilla. If it does work, then vdsm and ovirt-node-2.5.0 will work for you on Fedora 17.

- Nick
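
A minimal sketch of checks worth running from that live session, assuming the common x86 RTC driver (rtc_cmos) and the usual /dev/rtc and /dev/rtc0 device nodes:

ls -l /dev/rtc /dev/rtc0      # is a device node present at all?
lsmod | grep rtc              # is an RTC driver (typically rtc_cmos) loaded?
modprobe rtc_cmos             # as root, try loading the driver if it is missing
hwclock --show --debug        # retry, with full detail of the access-method search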

------------------------------------------------------------------------------------------
From: DYeung(a)TrustedCS.com
To: cybertimber2000(a)hotmail.com
Subject: RE: [Users] Host installation failed with error "Unable to set host time"
Hi, Nicholas,

I'd like to give you an update. I found that hwclock worked perfectly fine on the RHEL 6.2 system. I am still puzzled about why it fails in Fedora 16. I am thinking of installing oVirt 3.0 on RHEL 6.2 to see how it turns out.
If you find anything, please let me know.
Thanks a lot.

Daniel

From: DYeung(a)TrustedCS.com
To: cybertimber2000(a)hotmail.com
Subject: RE: [Users] Host installation failed with error "Unable to set host time"
Date: Wed, 1 Aug 2012 12:36:26 +0000

Here is the output:

# date
Wed Aug  1 08:29:59 EDT 2012

# hwclock --show
hwclock: Cannot access the Hardware Clock via any known method.
hwclock: Use the --debug option to see the details of our search for an access method.

# hwclock --debug
hwclock from util-linux 2.20.1
hwclock: Open of /dev/rtc failed: Device or resource busy
No usable clock interface found.
hwclock: Cannot access the Hardware Clock via any known method.

I am wondering if this is a bug in Fedora 16 or maybe I am missing the driver or a related rpm.
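
The "Device or resource busy" line above suggests something already holds the RTC device open; a quick sketch of how to check (assumes lsof and fuser are installed):

lsof /dev/rtc /dev/rtc0 2>/dev/null    # list any process holding the device open
fuser -v /dev/rtc0                     # same check via fuser
dmesg | grep -i rtc                    # kernel messages from the RTC driver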

Let me know if you find anything. Thank you for your help.

DY

From: Nicholas Kesick [cybertimber2000(a)hotmail.com]
Sent: Tuesday, July 31, 2012 5:38 PM
To: Daniel Yeung; oVirt Mailing List
Subject: RE: [Users] Host installation failed with error "Unable to set host time"

From: DYeung(a)TrustedCS.com
To: users(a)ovirt.org
Date: Tue, 31 Jul 2012 20:12:31 +0000
Subject: [Users] Host installation failed with error "Unable to set host time"

I created a new host and the installation failed with the following messages in the engine.log:

<BSTRAP component='SetSSHAccess' status='OK' message='SUCCESS'/>
<BSTRAP component='SET_SYSTEM_TIME' status='FAIL' message='Unable to set host time.'/>
<BSTRAP component='RHEV_INSTALL' status='FAIL'/>
. Error occured. (Stage: Running first installation script on Host)
2012-07-31 15:28:06,530 INFO  [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-5-thread-4) RunSSHCommand returns true
2012-07-31 15:28:06,530 INFO  [org.ovirt.engine.core.bll.VdsInstaller] (pool-5-thread-4) RunScript ended:true
2012-07-31 15:28:06,530 ERROR [org.ovirt.engine.core.bll.VdsInstaller] (pool-5-thread-4) Installation of 192.168.4.125. Operation failure. (Stage: Running first installation script on Host)
2012-07-31 15:28:06,531 INFO  [org.ovirt.engine.core.bll.InstallVdsCommand] (pool-5-thread-4) After Installation pool-5-thread-4
2012-07-31 15:28:06,532 INFO  [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (pool-5-thread-4) START, SetVdsStatusVDSCommand(vdsId = 1001e89e-db3f-11e1-99f0-bbd8c818bb29, status=InstallFailed, nonOperationalReason=NONE), log id: 3b08378
2012-07-31 15:28:06,544 INFO  [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (pool-5-thread-4) FINISH, SetVdsStatusVDSCommand, log id: 3b08378
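
The SET_SYSTEM_TIME step is, presumably, the engine pushing its clock to the newly added host; a rough sketch of trying that kind of operation by hand on the host (an assumption about what the step involves, not the exact bootstrap code):

date -s "2012-07-31 15:28:00"    # set the system clock to an example timestamp
hwclock --systohc                # copy it to the hardware clock; an unreadable RTC would make this part fail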

Here are the vdsm-related rpms on my Fedora 16 system.

vdsm-4.9.3.3-0.fc16.x86_64
vdsm-bootstrap-4.9.3.3-0.fc16.noarch
vdsm-cli-4.9.3.3-0.fc16.noarch

Has anyone encountered the same problem? Any hints?

Thank you.

DY

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

I'll take a stab at it.
Can you post the output of these two commands? Exclude the #.

# date
# hwclock --show
[Users] Failure to migrate from one host out of four
by Karli Sjöberg
Hi,
Wondering if anyone has encountered the same issue as me. On one host in my cluster, if I migrate in a guest, I cannot migrate it out to another host. They get "stuck" there, so to speak. The same happens when a guest is started on that particular host; it is impossible to migrate it out again.

iptables is flushed, to rule it out as a cause. vdsmd, libvirtd and sanlock are all running.
This is what I caught from libvirtd.log:
2012-08-02 11:44:07.542+0000: 4231: debug : qemuMonitorUnref:248 : QEMU_MONITOR_UNREF: mon=0x7fe7e40c8410 refs=3
2012-08-02 11:44:07.542+0000: 4232: debug : qemuMonitorUnref:248 : QEMU_MONITOR_UNREF: mon=0x7fe7e40c8410 refs=2
2012-08-02 11:44:07.542+0000: 4232: debug : virConnectIsAlive:18395 : conn=0x7fe7e00c4e50
2012-08-02 11:44:07.562+0000: 4236: debug : virDomainMigrateSetMaxDowntime:16565 : dom=0x7fe7d8000cc0, (VM: name=milli, uuid=2291f0d8-6341-4821-8774-2ed84b8bccc1), downtime=50, flags=0
2012-08-02 11:44:07.562+0000: 4236: debug : qemuDomainObjBeginJobInternal:753 : Starting job: migration operation (async=migration out)
2012-08-02 11:44:07.562+0000: 4236: debug : qemuDomainMigrateSetMaxDowntime:9500 : Setting migration downtime to 50ms
2012-08-02 11:44:07.562+0000: 4236: debug : qemuMonitorRef:239 : QEMU_MONITOR_REF: mon=0x7fe7e40c8410 refs=3
2012-08-02 11:44:07.562+0000: 4236: debug : qemuMonitorSetMigrationDowntime:1753 : mon=0x7fe7e40c8410 downtime=50
2012-08-02 11:44:07.562+0000: 4236: debug : qemuMonitorSend:861 : QEMU_MONITOR_SEND_MSG: mon=0x7fe7e40c8410 msg={"execute":"migrate_set_downtime","arguments":{"value":0,050000},"id":"libvirt-261"}
 fd=-1
2012-08-02 11:44:07.563+0000: 4231: debug : qemuMonitorRef:239 : QEMU_MONITOR_REF: mon=0x7fe7e40c8410 refs=4
2012-08-02 11:44:07.563+0000: 4231: debug : qemuMonitorIOWrite:470 : QEMU_MONITOR_IO_WRITE: mon=0x7fe7e40c8410 buf={"execute":"migrate_set_downtime","arguments":{"value":0,050000},"id":"libvirt-261"}
 len=86 ret=86 errno=11
2012-08-02 11:44:07.563+0000: 4231: debug : qemuMonitorUnref:248 : QEMU_MONITOR_UNREF: mon=0x7fe7e40c8410 refs=3
2012-08-02 11:44:07.564+0000: 4231: debug : qemuMonitorRef:239 : QEMU_MONITOR_REF: mon=0x7fe7e40c8410 refs=4
2012-08-02 11:44:07.564+0000: 4231: debug : qemuMonitorIOProcess:365 : QEMU_MONITOR_IO_PROCESS: mon=0x7fe7e40c8410 buf={"error": {"class": "JSONParsing", "desc": "Invalid JSON syntax", "data": {}}}
 len=80
2012-08-02 11:44:07.564+0000: 4231: debug : qemuMonitorUnref:248 : QEMU_MONITOR_UNREF: mon=0x7fe7e40c8410 refs=3
2012-08-02 11:44:07.564+0000: 4236: error : qemuMonitorJSONCheckError:331 : internal error unable to execute QEMU command 'migrate_set_downtime': Invalid JSON syntax
2012-08-02 11:44:07.564+0000: 4236: debug : qemuMonitorUnref:248 : QEMU_MONITOR_UNREF: mon=0x7fe7e40c8410 refs=2
2012-08-02 11:44:07.564+0000: 4236: debug : qemuDomainObjEndJob:870 : Stopping job: migration operation (async=migration out)
2012-08-02 11:44:07.564+0000: 4231: debug : qemuMonitorRef:239 : QEMU_MONITOR_REF: mon=0x7fe7e40c8410 refs=3
2012-08-02 11:44:07.564+0000: 4231: debug : qemuMonitorIOProcess:365 : QEMU_MONITOR_IO_PROCESS: mon=0x7fe7e40c8410 buf={"error": {"class": "JSONParsing", "desc": "Invalid JSON syntax", "data": {}}}
 len=80
2012-08-02 11:44:07.564+0000: 4236: debug : virDomainFree:2345 : dom=0x7fe7d8000cc0, (VM: name=milli, uuid=2291f0d8-6341-4821-8774-2ed84b8bccc1)
2012-08-02 11:44:07.564+0000: 4231: error : qemuMonitorJSONIOProcessLine:156 : internal error Unexpected JSON reply '{"error": {"class": "JSONParsing", "desc": "Invalid JSON syntax", "data": {}}}'
2012-08-02 11:44:07.564+0000: 4231: debug : qemuMonitorIO:645 : Error on monitor internal error Unexpected JSON reply '{"error": {"class": "JSONParsing", "desc": "Invalid JSON syntax", "data": {}}}'
2012-08-02 11:44:07.564+0000: 4231: debug : qemuMonitorUnref:248 : QEMU_MONITOR_UNREF: mon=0x7fe7e40c8410 refs=2
2012-08-02 11:44:07.564+0000: 4231: debug : qemuMonitorIO:679 : Triggering error callback
2012-08-02 11:44:07.564+0000: 4231: debug : qemuProcessHandleMonitorError:345 : Received error on 0x7fe7e4000e50 'milli'
2012-08-02 11:44:07.592+0000: 4232: debug : qemuDomainObjBeginJobInternal:753 : Starting job: async nested (async=migration out)
2012-08-02 11:44:07.593+0000: 4232: debug : qemuMonitorRef:239 : QEMU_MONITOR_REF: mon=0x7fe7e40c8410 refs=3
2012-08-02 11:44:07.593+0000: 4232: debug : qemuMonitorGetMigrationStatus:1776 : mon=0x7fe7e40c8410
2012-08-02 11:44:07.593+0000: 4232: debug : qemuMonitorSend:851 : Attempt to send command while error is set internal error Unexpected JSON reply '{"error": {"class": "JSONParsing", "desc": "Invalid JSON syntax", "data": {}}}'
2012-08-02 11:44:07.593+0000: 4232: debug : qemuMonitorUnref:248 : QEMU_MONITOR_UNREF: mon=0x7fe7e40c8410 refs=2
2012-08-02 11:44:07.593+0000: 4232: debug : doPeer2PeerMigrate3:2425 : Finish3 0x7fe7e00c4e50 ret=-1
2012-08-02 11:44:08.798+0000: 4233: debug : virDomainInterfaceStats:7299 : dom=0x7fe7e40c72d0, (VM: name=milli, uuid=2291f0d8-6341-4821-8774-2ed84b8bccc1), path=vnet0, stats=0x7fe803e45b10, size=64
2012-08-02 11:44:08.802+0000: 4233: debug : virDomainFree:2345 : dom=0x7fe7e40c72d0, (VM: name=milli, uuid=2291f0d8-6341-4821-8774-2ed84b8bccc1)
2012-08-02 11:44:12.564+0000: 4234: debug : virDomainGetJobInfo:16465 : dom=0x7fe7dc000e00, (VM: name=milli, uuid=2291f0d8-6341-4821-8774-2ed84b8bccc1), info=0x7fe803644af0
2012-08-02 11:44:12.564+0000: 4234: debug : virDomainFree:2345 : dom=0x7fe7dc000e00, (VM: name=milli, uuid=2291f0d8-6341-4821-8774-2ed84b8bccc1)
2012-08-02 11:44:13.804+0000: 4235: debug : virDomainGetInfo:4298 : dom=0x7fe7e8001bd0, (VM: name=milli, uuid=2291f0d8-6341-4821-8774-2ed84b8bccc1), info=0x7fe802e43b20
2012-08-02 11:44:13.805+0000: 4235: debug : qemudGetProcessInfo:1156 : Got status for 5448/0 user=1770 sys=1445 cpu=0 rss=185496
2012-08-02 11:44:13.805+0000: 4235: debug : virDomainFree:2345 : dom=0x7fe7e8001bd0, (VM: name=milli, uuid=2291f0d8-6341-4821-8774-2ed84b8bccc1)
2012-08-02 11:44:13.806+0000: 4238: debug : virDomainInterfaceStats:7299 : dom=0x7fe7f0002520, (VM: name=milli, uuid=2291f0d8-6341-4821-8774-2ed84b8bccc1), path=vnet0, stats=0x7fe801640b10, size=64
2012-08-02 11:44:13.807+0000: 4238: debug : virDomainFree:2345 : dom=0x7fe7f0002520, (VM: name=milli, uuid=2291f0d8-6341-4821-8774-2ed84b8bccc1)
2012-08-02 11:44:16.474+0000: 4232: error : virNetClientProgramDispatchError:174 : An error occurred, but the cause is unknown
2012-08-02 11:44:16.474+0000: 4232: debug : doPeer2PeerMigrate3:2458 : Confirm3 0x7fe7e40029c0 ret=-1 vm=0x7fe7e4000e50
2012-08-02 11:44:16.474+0000: 4232: debug : qemuMigrationConfirm:3109 : driver=0x7fe7f80bd9c0, conn=0x7fe7e40029c0, vm=0x7fe7e4000e50, cookiein=(null), cookieinlen=0, flags=3, retcode=1
2012-08-02 11:44:16.474+0000: 4232: debug : qemuMigrationEatCookie:752 : cookielen=0 cookie='(null)'
2012-08-02 11:44:16.475+0000: 4232: debug : qemuProcessStartCPUs:2644 : Using lock state '(null)'
2012-08-02 11:44:16.475+0000: 4232: debug : qemuDomainObjBeginJobInternal:753 : Starting job: async nested (async=migration out)
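
The monitor command that fails above carries "value":0,050000 (a comma rather than a dot in the downtime value), which is what the "Invalid JSON syntax" replies refer to; a quick sketch to confirm that only the comma form trips a JSON parser (assumes python is available on the host):

echo '{"execute":"migrate_set_downtime","arguments":{"value":0,050000},"id":"libvirt-261"}' \
  | python -c 'import json, sys; json.loads(sys.stdin.read())'    # raises ValueError: not valid JSON
echo '{"execute":"migrate_set_downtime","arguments":{"value":0.05},"id":"libvirt-261"}' \
  | python -c 'import json, sys; json.loads(sys.stdin.read())'    # parses cleanly
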
And this is from a migration gone well:
2012-08-02 11:41:28.386+0000: 618: debug : qemuProcessStop:3872 : Shutting down VM 'milli' pid=3776 flags=1
2012-08-02 11:41:28.386+0000: 618: debug : qemuMonitorClose:797 : QEMU_MONITOR_CLOSE: mon=0x7f49dc000d50 refs=2
2012-08-02 11:41:28.386+0000: 618: debug : qemuMonitorUnref:248 : QEMU_MONITOR_UNREF: mon=0x7f49dc000d50 refs=1
2012-08-02 11:41:28.386+0000: 606: debug : qemuMonitorUnref:248 : QEMU_MONITOR_UNREF: mon=0x7f49dc000d50 refs=0
2012-08-02 11:41:28.386+0000: 606: debug : qemuMonitorFree:225 : mon=0x7f49dc000d50
2012-08-02 11:41:28.386+0000: 618: debug : qemuProcessKill:3769 : vm=milli pid=3776 flags=5
2012-08-02 11:41:28.586+0000: 618: debug : qemuDomainCleanupRun:1921 : driver=0x7f49e806f1b0, vm=milli
2012-08-02 11:41:28.586+0000: 618: debug : qemuProcessAutoDestroyRemove:4320 : vm=milli
2012-08-02 11:41:28.586+0000: 618: debug : qemuDriverCloseCallbackUnset:578 : vm=milli, uuid=2291f0d8-6341-4821-8774-2ed84b8bccc1, cb=0x4900d0
2012-08-02 11:41:28.586+0000: 618: debug : virCgroupNew:603 : New group /libvirt/qemu/milli
2012-08-02 11:41:28.587+0000: 618: debug : virCgroupDetect:262 : Detected mount/mapping 0:cpu at /sys/fs/cgroup/cpu,cpuacct in /system/libvirtd.service
2012-08-02 11:41:28.587+0000: 618: debug : virCgroupDetect:262 : Detected mount/mapping 1:cpuacct at /sys/fs/cgroup/cpu,cpuacct in /system/libvirtd.service
2012-08-02 11:41:28.587+0000: 618: debug : virCgroupDetect:262 : Detected mount/mapping 2:cpuset at /sys/fs/cgroup/cpuset in
2012-08-02 11:41:28.587+0000: 618: debug : virCgroupDetect:262 : Detected mount/mapping 3:memory at /sys/fs/cgroup/memory in
2012-08-02 11:41:28.587+0000: 618: debug : virCgroupDetect:262 : Detected mount/mapping 4:devices at /sys/fs/cgroup/devices in
2012-08-02 11:41:28.587+0000: 618: debug : virCgroupDetect:262 : Detected mount/mapping 5:freezer at /sys/fs/cgroup/freezer in
2012-08-02 11:41:28.587+0000: 618: debug : virCgroupDetect:262 : Detected mount/mapping 6:blkio at /sys/fs/cgroup/blkio in
2012-08-02 11:41:28.587+0000: 618: debug : virCgroupMakeGroup:524 : Make group /libvirt/qemu/milli
2012-08-02 11:41:28.587+0000: 618: debug : virCgroupMakeGroup:546 : Make controller /sys/fs/cgroup/cpu,cpuacct/system/libvirtd.service/libvirt/qemu/milli/
2012-08-02 11:41:28.587+0000: 618: debug : virCgroupMakeGroup:546 : Make controller /sys/fs/cgroup/cpu,cpuacct/system/libvirtd.service/libvirt/qemu/milli/
2012-08-02 11:41:28.587+0000: 618: debug : virCgroupMakeGroup:546 : Make controller /sys/fs/cgroup/cpuset/libvirt/qemu/milli/
2012-08-02 11:41:28.587+0000: 618: debug : virCgroupMakeGroup:546 : Make controller /sys/fs/cgroup/memory/libvirt/qemu/milli/
2012-08-02 11:41:28.587+0000: 618: debug : virCgroupMakeGroup:546 : Make controller /sys/fs/cgroup/devices/libvirt/qemu/milli/
2012-08-02 11:41:28.587+0000: 618: debug : virCgroupMakeGroup:546 : Make controller /sys/fs/cgroup/freezer/libvirt/qemu/milli/
2012-08-02 11:41:28.587+0000: 618: debug : virCgroupMakeGroup:546 : Make controller /sys/fs/cgroup/blkio/libvirt/qemu/milli/
2012-08-02 11:41:28.587+0000: 618: debug : virCgroupRemove:758 : Removing cgroup /sys/fs/cgroup/cpu,cpuacct/system/libvirtd.service/libvirt/qemu/milli/ and all child cgroups
2012-08-02 11:41:28.588+0000: 618: debug : virCgroupRemoveRecursively:713 : Removing cgroup /sys/fs/cgroup/cpu,cpuacct/system/libvirtd.service/libvirt/qemu/milli//vcpu1
2012-08-02 11:41:28.595+0000: 618: debug : virCgroupRemoveRecursively:713 : Removing cgroup /sys/fs/cgroup/cpu,cpuacct/system/libvirtd.service/libvirt/qemu/milli//vcpu0
2012-08-02 11:41:28.603+0000: 618: debug : virCgroupRemoveRecursively:713 : Removing cgroup /sys/fs/cgroup/cpu,cpuacct/system/libvirtd.service/libvirt/qemu/milli/
2012-08-02 11:41:28.611+0000: 618: debug : virCgroupRemove:758 : Removing cgroup /sys/fs/cgroup/cpu,cpuacct/system/libvirtd.service/libvirt/qemu/milli/ and all child cgroups
2012-08-02 11:41:28.611+0000: 618: debug : virCgroupRemove:758 : Removing cgroup /sys/fs/cgroup/cpuset/libvirt/qemu/milli/ and all child cgroups
2012-08-02 11:41:28.611+0000: 618: debug : virCgroupRemoveRecursively:713 : Removing cgroup /sys/fs/cgroup/cpuset/libvirt/qemu/milli/
2012-08-02 11:41:28.620+0000: 618: debug : virCgroupRemove:758 : Removing cgroup /sys/fs/cgroup/memory/libvirt/qemu/milli/ and all child cgroups
2012-08-02 11:41:28.620+0000: 618: debug : virCgroupRemoveRecursively:713 : Removing cgroup /sys/fs/cgroup/memory/libvirt/qemu/milli/
2012-08-02 11:41:28.633+0000: 618: debug : virCgroupRemove:758 : Removing cgroup /sys/fs/cgroup/devices/libvirt/qemu/milli/ and all child cgroups
2012-08-02 11:41:28.679+0000: 618: debug : virCgroupRemoveRecursively:713 : Removing cgroup /sys/fs/cgroup/devices/libvirt/qemu/milli/
2012-08-02 11:41:28.684+0000: 618: debug : virCgroupRemove:758 : Removing cgroup /sys/fs/cgroup/freezer/libvirt/qemu/milli/ and all child cgroups
2012-08-02 11:41:28.684+0000: 618: debug : virCgroupRemoveRecursively:713 : Removing cgroup /sys/fs/cgroup/freezer/libvirt/qemu/milli/
2012-08-02 11:41:28.689+0000: 618: debug : virCgroupRemove:758 : Removing cgroup /sys/fs/cgroup/blkio/libvirt/qemu/milli/ and all child cgroups
2012-08-02 11:41:28.689+0000: 618: debug : virCgroupRemoveRecursively:713 : Removing cgroup /sys/fs/cgroup/blkio/libvirt/qemu/milli/
2012-08-02 11:41:28.697+0000: 618: debug : virConnectClose:1496 : conn=0x7f49dc10fac0
2012-08-02 11:41:28.698+0000: 606: debug : virDomainFree:2345 : dom=0xaa0ea0, (VM: name=milli, uuid=2291f0d8-6341-4821-8774-2ed84b8bccc1)
2012-08-02 11:41:28.700+0000: 618: debug : qemuDomainObjEndAsyncJob:887 : Stopping async job: migration out
# rpm -qa | egrep '(vdsm|libvirt|sanlock|json)' | sort -d
json-c-0.9-4.fc17.x86_64
json-glib-0.14.2-2.fc17.x86_64
libvirt-0.9.11.4-3.fc17.x86_64
libvirt-client-0.9.11.4-3.fc17.x86_64
libvirt-daemon-0.9.11.4-3.fc17.x86_64
libvirt-daemon-config-network-0.9.11.4-3.fc17.x86_64
libvirt-daemon-config-nwfilter-0.9.11.4-3.fc17.x86_64
libvirt-lock-sanlock-0.9.11.4-3.fc17.x86_64
libvirt-python-0.9.11.4-3.fc17.x86_64
python-simplejson-2.5.2-1.fc17.x86_64
sanlock-2.3-3.fc17.x86_64
sanlock-lib-2.3-3.fc17.x86_64
sanlock-python-2.3-3.fc17.x86_64
vdsm-4.10.0-5.fc17.x86_64
vdsm-cli-4.10.0-5.fc17.noarch
vdsm-python-4.10.0-5.fc17.x86_64
vdsm-xmlrpc-4.10.0-5.fc17.noarch
Best Regards
-------------------------------------------------------------------------------
Karli Sjöberg
Swedish University of Agricultural Sciences
Box 7079 (Visiting Address Kronåsvägen 8)
S-750 07 Uppsala, Sweden
Phone: +46-(0)18-67 15 66
karli.sjoberg(a)slu.se
[Users] host raiding two LUN from different external storage
by Johan Kragsterman
Hi!
In some setups, like when you have two datacenters that work as failover sites, you would like to have two external RAID controllers (storage devices), one in each datacenter.
You then present two identical LUNs, one from each controller, to a cluster, let's say two hosts for simplicity. What you normally do is host-RAID these LUNs as a mirror (RAID 1), so the hosts write the same data to both LUNs, and therefore to both controllers.
The question for me is whether I can accomplish this through oVirt management. If I can't, it will be a problem, because if I RAID at the host level, the storage would be local storage as far as oVirt is concerned, wouldn't it? And then I suppose it can't be used for live migration, can it?
Regards, Johan
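
A minimal sketch of the host-level mirror described above, assuming the two LUNs show up on a host as /dev/mapper/lun_site_a and /dev/mapper/lun_site_b (hypothetical multipath names):

mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/mapper/lun_site_a /dev/mapper/lun_site_b   # mirror the two LUNs, one from each controller
cat /proc/mdstat                                      # watch the initial resync
# Whether oVirt can consume /dev/md0 as anything other than local storage is exactly the open question above.
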
[Users] Upgrading ovirt-engine from Centos dre repo
by Neil
Hi guys,
I'm running an older version of ovirt-engine on CentOS 6.2 and I'd like to upgrade ovirt-engine and the other corresponding packages:
ovirt-engine-webadmin-portal-3.1.0_0001-1.8.el6.x86_64
ovirt-engine-3.1.0_0001-1.8.el6.x86_64
ovirt-engine-dbscripts-3.1.0_0001-1.8.el6.x86_64
ovirt-engine-tools-common-3.1.0_0001-1.8.el6.x86_64
ovirt-engine-backend-3.1.0_0001-1.8.el6.x86_64
ovirt-engine-log-collector-3.1.0_0001-1.8.el6.x86_64
ovirt-engine-iso-uploader-3.1.0_0001-1.8.el6.x86_64
ovirt-engine-cli-3.1.0.1-1alpha.el6.noarch
ovirt-engine-jbossas-1.2-2.fc16.x86_64
ovirt-engine-restapi-3.1.0_0001-1.8.el6.x86_64
ovirt-engine-jboss-deps-3.1.0_0001-1.8.el6.x86_64
ovirt-engine-config-3.1.0_0001-1.8.el6.x86_64
ovirt-engine-image-uploader-3.1.0_0001-1.8.el6.x86_64
ovirt-engine-sdk-3.1.0.1-1alpha.el6.noarch
ovirt-engine-userportal-3.1.0_0001-1.8.el6.x86_64
ovirt-engine-notification-service-3.1.0_0001-1.8.el6.x86_64
ovirt-engine-genericapi-3.1.0_0001-1.8.el6.x86_64
ovirt-engine-setup-3.1.0_0001-1.8.el6.x86_64
Using yum, even if I do a "yum update ovirt-engine*", the only updates I get are...
ovirt-engine-cli    noarch    3.1.0.6-1.el6    ovirt31-dre    144 k
ovirt-engine-sdk    noarch    3.1.0.4-1.el6    ovirt31-dre    222 k
and yet I can see that there are newer ovirt-engine packages available than the ones I'm running. I've checked that the repo is enabled...
[ovirt-dre]
name=oVirt engine repo
baseurl=http://www.dreyou.org/ovirt/ovirt-engine/
http://www1.dreyou.org/ovirt/ovirt-engine/
enabled=1
gpgcheck=0
[ovirt31-dre]
name=oVirt 3.1 engine repo
baseurl=http://www.dreyou.org/ovirt/ovirt-engine31/
http://www1.dreyou.org/ovirt/ovirt-engine31/
enabled=1
gpgcheck=0
...and browsing the repo manually through a browser I see the
following packages under ovirt-engine31/
ovirt-engine-3.1.0-3.11.el6.noarch.rpm
ovirt-engine-3.1.0-3.11.el6.src.rpm
and under the ovirt-engine/ directory I see the following versions...
ovirt-engine-3.1.0-3.15.el6.noarch.rpm
ovirt-engine-3.1.0-3.15.el6.src.rpm
Anyone have any ideas why yum won't find the newer versions?
Thank you.
Regards.
Neil.
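
A sketch of the usual checks when yum refuses to offer a build that is visibly in the repository (standard yum commands, nothing oVirt-specific):

yum clean all                                    # drop cached repo metadata that may be stale
yum --showduplicates list ovirt-engine           # show every ovirt-engine version yum can see, per repo
yum --disablerepo='*' --enablerepo='ovirt31-dre' list available 'ovirt-engine*'
                                                 # check what this one repo alone offers
yum update 'ovirt-engine*' --skip-broken         # surface dependency conflicts blocking the update
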
[Users] [User] why VM image owner change to root after stop the vm
by T-Sinjon
Dear everyone:
Description
When I create a VM, the VM owner is vdsm:kvm (36:36).
When I start a VM, the VM owner changes to qemu:qemu (107:107):
-rw-rw----. 1 qemu qemu 107374182400 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df
-rw-r--r--. 1 vdsm kvm 269 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df.meta
Then when I stop the VM, the VM owner changes to root:root:
-rw-rw----. 1 root root 107374182400 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df
-rw-r--r--. 1 vdsm kvm 269 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df.meta
After that, I cannot start the VM. The web events log shows:
2012-Jul-25, 16:27:29
VM Git-Server is down. Exit message: 'truesize'.
2012-Jul-25, 16:27:28
VM Git-Server was restarted on Host ovirt-node-sun-1.local
2012-Jul-25, 16:27:28
Failed to run VM Git-Server on Host ovirt-node-sun-4.local.
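
A hedged workaround sketch for the symptom described above: putting the image back to the ownership it had at creation (vdsm:kvm, uid/gid 36:36, as the listing shows). The path here is a placeholder; use the image's real location under /rhev/data-center:

chown 36:36 /rhev/data-center/<pool-uuid>/<domain-uuid>/images/<image-group-uuid>/d1e6b671-6b48-4964-9c56-22847e9b83df
ls -ln /rhev/data-center/<pool-uuid>/<domain-uuid>/images/<image-group-uuid>/   # confirm 36:36 (vdsm:kvm) again
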
and the ovirt-engine.log:
2012-07-25 16:27:24,359 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand] (ajp--0.0.0.0-8009-5) START, IsValidVDSCommand(storagePoolId = 3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d, ignoreFailoverLimit = false, compatabilityVersion = null), log id: 7d8f1a84
2012-07-25 16:27:24,364 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand] (ajp--0.0.0.0-8009-5) FINISH, IsValidVDSCommand, return: true, log id: 7d8f1a84
2012-07-25 16:27:24,441 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (ajp--0.0.0.0-8009-5) START, IsVmDuringInitiatingVDSCommand(vmId = 4f03fc62-a71e-4560-b807-5388526f6968), log id: 6c699650
2012-07-25 16:27:24,443 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (ajp--0.0.0.0-8009-5) FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 6c699650
2012-07-25 16:27:24,491 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-3-thread-48) [36951f95] Lock Acquired to object EngineLock [exclusiveLocks= key: 4f03fc62-a71e-4560-b807-5388526f6968 value: VM
, sharedLocks= ]
2012-07-25 16:27:24,515 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-3-thread-48) [36951f95] Running command: RunVmCommand internal: false. Entities affected : ID: 4f03fc62-a71e-4560-b807-5388526f6968 Type: VM
2012-07-25 16:27:24,691 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IsoPrefixVDSCommand] (pool-3-thread-48) [36951f95] START, IsoPrefixVDSCommand(storagePoolId = 3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d, ignoreFailoverLimit = false, compatabilityVersion = null), log id: 74d1cdf9
2012-07-25 16:27:24,695 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IsoPrefixVDSCommand] (pool-3-thread-48) [36951f95] FINISH, IsoPrefixVDSCommand, return: /rhev/data-center/3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d/bb0c61bf-18d1-41da-8e92-b3da6e6abffb/images/11111111-1111-1111-1111-111111111111, log id: 74d1cdf9
2012-07-25 16:27:24,699 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-3-thread-48) [36951f95] START, CreateVmVDSCommand(vdsId = ebf0aae2-d4a6-11e1-8bef-0f498706821d, vmId=4f03fc62-a71e-4560-b807-5388526f6968, vm=org.ovirt.engine.core.common.businessentities.VM@4fb10fb7), log id: 7a29d259
2012-07-25 16:27:24,708 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-3-thread-48) [36951f95] START, CreateVDSCommand(vdsId = ebf0aae2-d4a6-11e1-8bef-0f498706821d, vmId=4f03fc62-a71e-4560-b807-5388526f6968, vm=org.ovirt.engine.core.common.businessentities.VM@4fb10fb7), log id: 29395031
2012-07-25 16:27:24,789 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-3-thread-48) [36951f95] org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand kvmEnable=true,nicModel=pv,pv,keyboardLayout=en-us,nice=0,timeOffset=-2,transparentHugePages=true,vmId=4f03fc62-a71e-4560-b807-5388526f6968,drives=[Ljava.util.Map;@1869fe1c,acpiEnable=true,custom={},spiceSslCipherSuite=DEFAULT,memSize=4096,boot=cd,smp=2,vmType=kvm,emulatedMachine=pc-0.14,display=vnc,tabletEnable=true,spiceSecureChannels=smain,sinputs,smpCoresPerSocket=1,spiceMonitors=1,cdrom=/rhev/data-center/3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d/bb0c61bf-18d1-41da-8e92-b3da6e6abffb/images/11111111-1111-1111-1111-111111111111/CentOS-6.2-x86_64-LiveDVD.iso,macAddr=00:1a:4a:1e:01:10,00:1a:4a:1e:01:11,bridge=network_10,ovirtmgmt,vmName=Git-Server,cpuType=Conroe
2012-07-25 16:27:24,799 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-3-thread-48) [36951f95] FINISH, CreateVDSCommand, log id: 29395031
2012-07-25 16:27:24,807 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-3-thread-48) [36951f95] IncreasePendingVms::CreateVmIncreasing vds ovirt-node-sun-3.local pending vcpu count, now 2. Vm: Git-Server
2012-07-25 16:27:24,849 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-3-thread-48) [36951f95] FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id: 7a29d259
2012-07-25 16:27:24,858 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-3-thread-48) [36951f95] Lock freed to object EngineLock [exclusiveLocks= key: 4f03fc62-a71e-4560-b807-5388526f6968 value: VM
, sharedLocks= ]
2012-07-25 16:27:26,096 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (QuartzScheduler_Worker-37) [686128dd] START, DestroyVDSCommand(vdsId = ebf0aae2-d4a6-11e1-8bef-0f498706821d, vmId=4f03fc62-a71e-4560-b807-5388526f6968, force=false, secondsToWait=0, gracefully=false), log id: 168e7377
2012-07-25 16:27:26,181 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (QuartzScheduler_Worker-37) [686128dd] FINISH, DestroyVDSCommand, log id: 168e7377
2012-07-25 16:27:26,207 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-37) [686128dd] Running on vds during rerun failed vm: null
2012-07-25 16:27:26,211 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-37) [686128dd] vm Git-Server running in db and not running in vds - add to rerun treatment. vds ovirt-node-sun-3.local
2012-07-25 16:27:26,232 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-37) [686128dd] Rerun vm 4f03fc62-a71e-4560-b807-5388526f6968. Called from vds ovirt-node-sun-3.local
2012-07-25 16:27:26,238 INFO [org.ovirt.engine.core.vdsbroker.UpdateVdsDynamicDataVDSCommand] (pool-3-thread-50) [686128dd] START, UpdateVdsDynamicDataVDSCommand(vdsId = ebf0aae2-d4a6-11e1-8bef-0f498706821d, vdsDynamic=org.ovirt.engine.core.common.businessentities.VdsDynamic@8ce6c23d), log id: 1d991881
2012-07-25 16:27:26,249 INFO [org.ovirt.engine.core.vdsbroker.UpdateVdsDynamicDataVDSCommand] (pool-3-thread-50) [686128dd] FINISH, UpdateVdsDynamicDataVDSCommand, log id: 1d991881
2012-07-25 16:27:26,274 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-3-thread-50) [686128dd] Lock Acquired to object EngineLock [exclusiveLocks= key: 4f03fc62-a71e-4560-b807-5388526f6968 value: VM
, sharedLocks= ]
2012-07-25 16:27:26,374 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand] (pool-3-thread-50) [686128dd] START, IsValidVDSCommand(storagePoolId = 3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d, ignoreFailoverLimit = false, compatabilityVersion = null), log id: 412b8a26
2012-07-25 16:27:26,379 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand] (pool-3-thread-50) [686128dd] FINISH, IsValidVDSCommand, return: true, log id: 412b8a26
2012-07-25 16:27:26,464 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (pool-3-thread-50) [686128dd] START, IsVmDuringInitiatingVDSCommand(vmId = 4f03fc62-a71e-4560-b807-5388526f6968), log id: 557baa43
2012-07-25 16:27:26,467 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (pool-3-thread-50) [686128dd] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 557baa43
2012-07-25 16:27:26,516 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-3-thread-50) [686128dd] Running command: RunVmCommand internal: false. Entities affected : ID: 4f03fc62-a71e-4560-b807-5388526f6968 Type: VM
2012-07-25 16:27:26,636 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IsoPrefixVDSCommand] (pool-3-thread-50) [686128dd] START, IsoPrefixVDSCommand(storagePoolId = 3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d, ignoreFailoverLimit = false, compatabilityVersion = null), log id: 4343f447
2012-07-25 16:27:26,639 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IsoPrefixVDSCommand] (pool-3-thread-50) [686128dd] FINISH, IsoPrefixVDSCommand, return: /rhev/data-center/3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d/bb0c61bf-18d1-41da-8e92-b3da6e6abffb/images/11111111-1111-1111-1111-111111111111, log id: 4343f447
2012-07-25 16:27:26,643 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-3-thread-50) [686128dd] START, CreateVmVDSCommand(vdsId = 07d47b32-d4aa-11e1-b06e-b3ff4a3f9e20, vmId=4f03fc62-a71e-4560-b807-5388526f6968, vm=org.ovirt.engine.core.common.businessentities.VM@10bca4bb), log id: 225a83d1
2012-07-25 16:27:26,652 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-3-thread-50) [686128dd] START, CreateVDSCommand(vdsId = 07d47b32-d4aa-11e1-b06e-b3ff4a3f9e20, vmId=4f03fc62-a71e-4560-b807-5388526f6968, vm=org.ovirt.engine.core.common.businessentities.VM@10bca4bb), log id: 78491589
2012-07-25 16:27:26,735 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-3-thread-50) [686128dd] org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand kvmEnable=true,nicModel=pv,pv,keyboardLayout=en-us,nice=0,timeOffset=-2,transparentHugePages=true,vmId=4f03fc62-a71e-4560-b807-5388526f6968,drives=[Ljava.util.Map;@3d83b1e9,acpiEnable=true,custom={},spiceSslCipherSuite=DEFAULT,memSize=4096,boot=cd,smp=2,vmType=kvm,emulatedMachine=pc-0.14,display=vnc,tabletEnable=true,spiceSecureChannels=smain,sinputs,smpCoresPerSocket=1,spiceMonitors=1,cdrom=/rhev/data-center/3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d/bb0c61bf-18d1-41da-8e92-b3da6e6abffb/images/11111111-1111-1111-1111-111111111111/CentOS-6.2-x86_64-LiveDVD.iso,macAddr=00:1a:4a:1e:01:10,00:1a:4a:1e:01:11,bridge=network_10,ovirtmgmt,vmName=Git-Server,cpuType=Conroe
2012-07-25 16:27:26,745 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-3-thread-50) [686128dd] FINISH, CreateVDSCommand, log id: 78491589
2012-07-25 16:27:26,750 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-3-thread-50) [686128dd] IncreasePendingVms::CreateVmIncreasing vds ovirt-node-sun-4.local pending vcpu count, now 2. Vm: Git-Server
2012-07-25 16:27:26,780 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-3-thread-50) [686128dd] FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id: 225a83d1
2012-07-25 16:27:26,804 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-3-thread-50) [686128dd] Lock freed to object EngineLock [exclusiveLocks= key: 4f03fc62-a71e-4560-b807-5388526f6968 value: VM
, sharedLocks= ]
2012-07-25 16:27:28,023 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (QuartzScheduler_Worker-6) [5be91432] START, DestroyVDSCommand(vdsId = 07d47b32-d4aa-11e1-b06e-b3ff4a3f9e20, vmId=4f03fc62-a71e-4560-b807-5388526f6968, force=false, secondsToWait=0, gracefully=false), log id: 4b8dcf4e
2012-07-25 16:27:28,108 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (QuartzScheduler_Worker-6) [5be91432] FINISH, DestroyVDSCommand, log id: 4b8dcf4e
2012-07-25 16:27:28,147 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-6) [5be91432] Running on vds during rerun failed vm: null
2012-07-25 16:27:28,151 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-6) [5be91432] vm Git-Server running in db and not running in vds - add to rerun treatment. vds ovirt-node-sun-4.local
2012-07-25 16:27:28,174 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-6) [5be91432] Rerun vm 4f03fc62-a71e-4560-b807-5388526f6968. Called from vds ovirt-node-sun-4.local
2012-07-25 16:27:28,180 INFO [org.ovirt.engine.core.vdsbroker.UpdateVdsDynamicDataVDSCommand] (pool-3-thread-43) [5be91432] START, UpdateVdsDynamicDataVDSCommand(vdsId = 07d47b32-d4aa-11e1-b06e-b3ff4a3f9e20, vdsDynamic=org.ovirt.engine.core.common.businessentities.VdsDynamic@54ec3b0a), log id: 60cb5c98
2012-07-25 16:27:28,191 INFO [org.ovirt.engine.core.vdsbroker.UpdateVdsDynamicDataVDSCommand] (pool-3-thread-43) [5be91432] FINISH, UpdateVdsDynamicDataVDSCommand, log id: 60cb5c98
2012-07-25 16:27:28,216 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-3-thread-43) [5be91432] Lock Acquired to object EngineLock [exclusiveLocks= key: 4f03fc62-a71e-4560-b807-5388526f6968 value: VM
, sharedLocks= ]
2012-07-25 16:27:28,301 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand] (pool-3-thread-43) [5be91432] START, IsValidVDSCommand(storagePoolId = 3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d, ignoreFailoverLimit = false, compatabilityVersion = null), log id: 78370bf8
2012-07-25 16:27:28,306 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand] (pool-3-thread-43) [5be91432] FINISH, IsValidVDSCommand, return: true, log id: 78370bf8
2012-07-25 16:27:28,383 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (pool-3-thread-43) [5be91432] START, IsVmDuringInitiatingVDSCommand(vmId = 4f03fc62-a71e-4560-b807-5388526f6968), log id: 289dae44
2012-07-25 16:27:28,386 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (pool-3-thread-43) [5be91432] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 289dae44
2012-07-25 16:27:28,416 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-3-thread-43) [5be91432] Running command: RunVmCommand internal: false. Entities affected : ID: 4f03fc62-a71e-4560-b807-5388526f6968 Type: VM
2012-07-25 16:27:28,542 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IsoPrefixVDSCommand] (pool-3-thread-43) [5be91432] START, IsoPrefixVDSCommand(storagePoolId = 3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d, ignoreFailoverLimit = false, compatabilityVersion = null), log id: 15fd0eb1
2012-07-25 16:27:28,545 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IsoPrefixVDSCommand] (pool-3-thread-43) [5be91432] FINISH, IsoPrefixVDSCommand, return: /rhev/data-center/3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d/bb0c61bf-18d1-41da-8e92-b3da6e6abffb/images/11111111-1111-1111-1111-111111111111, log id: 15fd0eb1
2012-07-25 16:27:28,548 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-3-thread-43) [5be91432] START, CreateVmVDSCommand(vdsId = cbebdd0a-d4a3-11e1-a014-ef58dce093d8, vmId=4f03fc62-a71e-4560-b807-5388526f6968, vm=org.ovirt.engine.core.common.businessentities.VM@1e76519d), log id: 18cf91be
2012-07-25 16:27:28,557 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-3-thread-43) [5be91432] START, CreateVDSCommand(vdsId = cbebdd0a-d4a3-11e1-a014-ef58dce093d8, vmId=4f03fc62-a71e-4560-b807-5388526f6968, vm=org.ovirt.engine.core.common.businessentities.VM@1e76519d), log id: 7b9e926c
2012-07-25 16:27:28,646 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-3-thread-43) [5be91432] org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand kvmEnable=true,nicModel=pv,pv,keyboardLayout=en-us,nice=0,timeOffset=-2,transparentHugePages=true,vmId=4f03fc62-a71e-4560-b807-5388526f6968,drives=[Ljava.util.Map;@1db78df0,acpiEnable=true,custom={},spiceSslCipherSuite=DEFAULT,memSize=4096,boot=cd,smp=2,vmType=kvm,emulatedMachine=pc-0.14,display=vnc,tabletEnable=true,spiceSecureChannels=smain,sinputs,smpCoresPerSocket=1,spiceMonitors=1,cdrom=/rhev/data-center/3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d/bb0c61bf-18d1-41da-8e92-b3da6e6abffb/images/11111111-1111-1111-1111-111111111111/CentOS-6.2-x86_64-LiveDVD.iso,macAddr=00:1a:4a:1e:01:10,00:1a:4a:1e:01:11,bridge=network_10,ovirtmgmt,vmName=Git-Server,cpuType=Conroe
2012-07-25 16:27:28,655 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-3-thread-43) [5be91432] FINISH, CreateVDSCommand, log id: 7b9e926c
2012-07-25 16:27:28,661 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-3-thread-43) [5be91432] IncreasePendingVms::CreateVmIncreasing vds ovirt-node-sun-1.local pending vcpu count, now 2. Vm: Git-Server
2012-07-25 16:27:28,683 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-3-thread-43) [5be91432] FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id: 18cf91be
2012-07-25 16:27:28,691 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-3-thread-43) [5be91432] Lock freed to object EngineLock [exclusiveLocks= key: 4f03fc62-a71e-4560-b807-5388526f6968 value: VM
, sharedLocks= ]
2012-07-25 16:27:29,550 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (QuartzScheduler_Worker-54) START, DestroyVDSCommand(vdsId = cbebdd0a-d4a3-11e1-a014-ef58dce093d8, vmId=4f03fc62-a71e-4560-b807-5388526f6968, force=false, secondsToWait=0, gracefully=false), log id: 2b6ad571
2012-07-25 16:27:29,644 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (QuartzScheduler_Worker-54) FINISH, DestroyVDSCommand, log id: 2b6ad571
2012-07-25 16:27:29,713 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-54) Running on vds during rerun failed vm: null
2012-07-25 16:27:29,716 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-54) vm Git-Server running in db and not running in vds - add to rerun treatment. vds ovirt-node-sun-1.local
2012-07-25 16:27:29,741 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-54) Rerun vm 4f03fc62-a71e-4560-b807-5388526f6968. Called from vds ovirt-node-sun-1.local
2012-07-25 16:27:29,747 INFO [org.ovirt.engine.core.vdsbroker.UpdateVdsDynamicDataVDSCommand] (pool-3-thread-43) START, UpdateVdsDynamicDataVDSCommand(vdsId = cbebdd0a-d4a3-11e1-a014-ef58dce093d8, vdsDynamic=org.ovirt.engine.core.common.businessentities.VdsDynamic@53255766), log id: 6a9130c4
2012-07-25 16:27:29,758 INFO [org.ovirt.engine.core.vdsbroker.UpdateVdsDynamicDataVDSCommand] (pool-3-thread-43) FINISH, UpdateVdsDynamicDataVDSCommand, log id: 6a9130c4
My environment:
engine:
ovirt-engine-sdk-3.1.0.2-gita89f4e.fc17.noarch
ovirt-engine-cli-3.1.0.6-1.fc17.noarch
ovirt-engine-tools-common-3.1.0-0.2.20120704git1df1ba.fc17.noarch
ovirt-engine-genericapi-3.1.0-0.2.20120704git1df1ba.fc17.noarch
ovirt-engine-notification-service-3.1.0-0.2.20120704git1df1ba.fc17.noarch
ovirt-engine-restapi-3.1.0-0.2.20120704git1df1ba.fc17.noarch
ovirt-engine-dbscripts-3.1.0-0.2.20120704git1df1ba.fc17.noarch
ovirt-engine-userportal-3.1.0-0.2.20120704git1df1ba.fc17.noarch
ovirt-engine-3.1.0-0.2.20120704git1df1ba.fc17.noarch
ovirt-engine-setup-3.1.0-1.fc17.noarch
ovirt-engine-webadmin-portal-3.1.0-0.2.20120704git1df1ba.fc17.noarch
ovirt-engine-config-3.1.0-0.2.20120704git1df1ba.fc17.noarch
ovirt-engine-backend-3.1.0-0.2.20120704git1df1ba.fc17.noarch
node:
oVirt Node Hypervisor 2.2.2-2.2.fc16
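For reference, when the engine log only shows the Create/Destroy/rerun cycle seen above, the concrete failure reason typically has to be pulled from the node side. A minimal sketch of how one might do that, assuming SSH access to the host and the default vdsm log location; the context sizes passed to grep are arbitrary:
# grep -B 2 -A 20 '4f03fc62-a71e-4560-b807-5388526f6968' /var/log/vdsm/vdsm.log | less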
[Users] binary directory missing
by Nicholas Kesick
The binary directory appears to have gone missing from http://ovirt.org/releases/, including http://ovirt.org/releases/stable/binary which contains the 2.3.0-1.0 ovirt-node.
- Nick
[Users] oVirt 3.1 Release meeting - 2012-08-06
by Ofer Schreiber
Hey,
We will have a release go/no-go meeting this coming Monday.
Meeting Time and Place:
* Monday, August 6th @ 15:00 UTC
* To see the time in your own timezone, run: date -d 'MONDAY 1000 EDT' (see the example below)
* On IRC: #ovirt on irc.oftc.net
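For reference, a minimal example of what that command prints, assuming GNU coreutils date on a machine whose local timezone happens to be Europe/Paris (CEST), run during the week before the meeting; your own output will show your local time:
# date -d 'MONDAY 1000 EDT'
Mon Aug  6 16:00:00 CEST 2012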
Feel free to join us.
---
Ofer Schreiber
oVirt Release Manager
[Users] oVirt Weekly Meeting Minutes -- 2012-08-01
by Mike Burns
Minutes: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-08-01-14.00.html
Minutes (text): http://ovirt.org/meetings/ovirt/2012/ovirt.2012-08-01-14.00.txt
Log: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-08-01-14.00.log.html
============================
#ovirt: oVirt Weekly Meeting
============================
Meeting started by mburns at 14:00:22 UTC. The full logs are available
at http://ovirt.org/meetings/ovirt/2012/ovirt.2012-08-01-14.00.log.html.
Meeting summary
---------------
* agenda and roll call (mburns, 14:00:27)
* Upcoming Workshops (mburns, 14:02:32)
* LINK: http://events.linuxfoundation.org/events/linuxcon/ (lh,
14:03:35)
* next workshop: LC North America in August -- please register soon,
it's getting full (mburns, 14:05:00)
* CFP for Bangalore workshop is going to be done just over email on
users@ (mburns, 14:05:42)
* KVM Forum committee is working with oVirt team to do CFP for LC
Europe (mburns, 14:06:09)
* LINK: http://wiki.ovirt.org/wiki/OVirt_Global_Workshops (mburns,
14:06:37)
* still looking for sponsors for LCNA and KVM Forum+oVirt Workshop
(mburns, 14:06:58)
* please contact lh if interested (mburns, 14:07:12)
* Release Status (mburns, 14:07:39)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=842948
(oschreib, 14:08:59)
* One blocker left for 3.1 (oschreib, 14:09:15)
* LINK:
https://bugzilla.redhat.com/showdependencytree.cgi?id=822145&hide_resolved=1
(oschreib, 14:09:22)
* ACTION: dougsland to move vdsm bugs to ON_QA/CLOSED (oschreib,
14:10:01)
* oVirt 3.1 - breth0 still available after registration and manual
reboot (VDSM, POST) (oschreib, 14:10:35)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=842948
(oschreib, 14:10:42)
* bug is CLOSED, vdsm-4.10.0-6.fc17 available (oschreib, 14:12:03)
* LINK: http://wiki.ovirt.org/wiki/OVirt_3.0_to_3.1_upgrade
(oschreib, 14:12:58)
* ACTION: to build new ovirt-node with updated vdsm (mburns,
14:13:02)
* upgrade should be working with latest 3.1 build (mburns, 14:16:13)
* LINK: http://wiki.ovirt.org/wiki/OVirt_3.0_to_3.1_upgrade (mburns,
14:16:18)
* LINK: http://wiki.ovirt.org/wiki/OVirt_3.1_release_notes (sgordon,
14:17:03)
* ACTION: fabiand_ to update install instructions for ovirt-node
(mburns, 14:23:57)
* ACTION: oschreib to send the release go/no go meeting (oschreib,
14:29:22)
* LINK: http://openetherpad.org/ovirt-3-1 (oschreib, 14:30:05)
* LINK: http://openetherpad.org/ovirt-3-1 (mburns, 14:30:16)
* announcement email^^ (mburns, 14:30:23)
* sub-project status (mburns, 14:33:39)
* sub-project status (infra) (mburns, 14:33:44)
* rpm sync to nightly releases directory almost done (mburns,
14:38:15)
* new option for adding headless jenkins slaves coming soon (mburns,
14:38:29)
* jenkins backup and staging server setup jenkins.ovirt.info (mburns,
14:38:53)
* finalizing decision on running gerrit patches automatically in
jenkins (security concerns) (mburns, 14:39:29)
* also investigating moving jenkins master out of EC2 (mburns,
14:39:42)
* other topics (mburns, 14:41:37)
Meeting ended at 14:50:15 UTC.
Action Items
------------
* dougsland to move vdsm bugs to ON_QA/CLOSED
* to build new ovirt-node with updated vdsm
* fabiand_ to update install instructions for ovirt-node
* oschreib to send the release go/no go meeting
Action Items, by person
-----------------------
* dougsland
* dougsland to move vdsm bugs to ON_QA/CLOSED
* fabiand_
* fabiand_ to update install instructions for ovirt-node
* oschreib
* oschreib to send the release go/no go meeting
* **UNASSIGNED**
* to build new ovirt-node with updated vdsm
People Present (lines said)
---------------------------
* mburns (73)
* oschreib (47)
* sgordon (16)
* lh (14)
* dougsland (9)
* eedri (9)
* RobertM (6)
* ovirtbot (6)
* ovedo (1)
* jb_netapp (1)
* fabiand_ (1)
* rickyh (1)
* dustins (1)
* fsimonce (1)
* ofrenkel (1)
Generated by `MeetBot`_ 0.1.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot