Re: [Users] Migration issues with ovirt 3.3

No, it doesn't succeed either. I've also noticed that in the Gluster Swift section everything appears to me as Not installed, except memcached, which appears as Down. Could this have something to do with all of this? The private chat was my mistake, I think; I only replied to you directly at some point, sorry! Thanks!

2013/9/25 Dan Kenigsberg <danken@redhat.com>:
On Wed, Sep 25, 2013 at 10:40:30AM -0300, emitor@gmail.com wrote:
I've created a new Debian VM. I restarted vdsm on the source and destination hosts, and after trying the migration I received the following message:
Migration failed due to Error: Fatal error during migration (VM: debian-test2, Source: ovirt1, Destination: ovirt2).
Do you think this could be solved with a clean installation of the host?

No.
We have a real bug on re-connecting to VMs on startup.
But why are we in a private chat? Please report it as a reply to the mailing list thread.
But let me verify: if you start a VM and then migrate (no vdsm restart), does migration succeed?
Dan.
--
Emiliano Tortorella
+598 98941176
emitor@gmail.com

I've just tried to do a migration without restarting vdsm, and it works with the debian-test2 VM. I've also tried with the pfsense1 VM, but that one doesn't work; might it be because this VM has an IDE-type disk? Here go the logs!
--
Emiliano Tortorella
+598 98941176
emitor@gmail.com

So Dan, could I do some other test for you? Live migration is only working with the Debian install. I've just finished installing a Windows Server 2K8 VM and it doesn't work either; it gives the following error: "VM W2K8 is down. Exit message: 'iface'." Regards!
--
Emiliano Tortorella
+598 98941176
emitor@gmail.com

On Fri, Sep 27, 2013 at 03:36:22PM -0300, emitor@gmail.com wrote:
So Dan, could I do some other test for you? Live migration is only working with the Debian install. I've just finished installing a Windows Server 2K8 VM and it doesn't work either; it gives the following error: "VM W2K8 is down. Exit message: 'iface'."
Frankly, I am out of ideas about what could be causing this very important bug :-(

It is unrelated to hotplugging, and probably not to migration - something bad happens when the VM parameters are persisted to disk, or loaded from it.

I do not see how the type of the guest could matter. Could you tell if there is any other difference between your W2K8 VM and your Debian one? Could you post here the first vmCreate command of the two VMs, and their libvirt domxml?

Regards,
Dan.
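(For the domxml part, a minimal libvirt-python sketch like the one below should do, assuming libvirt-python is installed on the host and the read-only libvirt socket is accessible; virsh -r dumpxml <vm-name> works just as well. The VM name is only an example.)

import libvirt

# Read-only connection to the host's libvirt; no write access is needed
# just to dump the domain XML.
conn = libvirt.openReadOnly('qemu:///system')

# Example VM name -- replace with the actual VM (e.g. debian-test2, W2K8).
dom = conn.lookupByName('debian-test2')

# Print the current libvirt domain XML of the VM.
print(dom.XMLDesc(0))

conn.close()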

On Sep 29, 2013, at 14:15 , Dan Kenigsberg <danken@redhat.com> wrote:
On Fri, Sep 27, 2013 at 03:36:22PM -0300, emitor@gmail.com wrote:
So Dan, could I do some other test for you? Live migration is only working with the Debian install. I've just finished installing a Windows Server 2K8 VM and it doesn't work either; it gives the following error: "VM W2K8 is down. Exit message: 'iface'."
Do you have any non-gluster setup to give it a try? We have this bug 1007980 which also seems to show up in gluster setups only… don't know why yet, but perhaps this can be related…

Thanks,
michal

Dan, what are the paths to those files you're asking me for?

Now the Debian VM is not working either. After trying the migration of the Windows VM, live migration no longer works for the Debian VM either. Do you think this issue is related to the hosts or to the manager? I really would like to get this working, and for other users it is working fine. Do you think that if I do a whole new install it might start to work?

I'm going to test what Michal suggested right now with the same hosts, but I think the same thing is going to happen.

Regards!
--
Emiliano Tortorella
+598 98941176
emitor@gmail.com

On Tue, Oct 01, 2013 at 02:35:34PM -0300, emitor@gmail.com wrote:
Dan, what are the paths to those files you're asking me for?
Now the Debian VM is not working either. After trying the migration of the Windows VM, live migration no longer works for the Debian VM either.
The fact that now both VMs behave the same makes comparing them fruitless. Still, it may be interesting to see your vdsm.log (from source and destination) from the first time a faulty VM was started until the 'iface' exception.
Do you think this issue is related to the hosts or to the manager?
It smells very much Vdsm-specific, but I'm only guessing.
I really would like to get this working, and for other users it is working fine. Do you think that if I do a whole new install it might start to work?
I do not suppose this would help, but I'm not in the position to exclude exploration routes.
I'm going to test what Michal suggested right now with the same hosts, but I think the same thing is going to happen.
Regards!

I'm having this issue on my oVirt 3.3 setup (two nodes, one is AIO, GlusterFS storage, both on F19) as well.

Jason

I've configured the hosts as an NFS cluster and imported the VMs. The Debian VM now migrates without problems, pfSense now gives me an error (Migration failed due to Error: Desktop does not exist), and the Windows VM doesn't start in the new cluster after importing it. I've attached the vdsm logs.

The other thing that's not working for me is HA, even for the Debian VM; I don't know if it has something to do with all of this. Now I'm going to reinstall the hosts and the engine using the stable repositories; I hope to get some more stability and get HA working.

Regards!
--
Emiliano Tortorella
+598 98941176
emitor@gmail.com

On Wed, Oct 02, 2013 at 01:36:56PM -0300, emitor@gmail.com wrote:
I've configured the hosts as an NFS cluster and imported the VMs. The Debian VM now migrates without problems, pfSense now gives me an error (Migration failed due to Error: Desktop does not exist), and the Windows VM doesn't start in the new cluster after importing it.
I've attached the vdsm logs.
Thanks for providing your logs, but they begin too late.

The bug is still a mystery to me. Something garbles Vdsm's internal representation of virtual disks. And it's bad.

I would appreciate it if you could dig backward in your logs and find the last successful startup of a VM, say W2K8_cluster1 (86f10137-d917-4508-a07a-3f83e6f24a95). Then browse forward to find the first failure (be it on the next startup of the VM or during migration).

Best regards,
Dan.

Dan, I've made a whole new installation of oVirt 3.3 using the stable repos, updated libvirt to 1.1.2-1, and configured a GlusterFS cluster.

I've just tried the live migration of the Windows host; here I send you the vdsm, libvirt and qemu logs.
--
Emiliano Tortorella
+598 98941176
emitor@gmail.com

On 4 Oct 2013, at 19:55, emitor@gmail.com wrote:

Dan, I've made a whole new installation of oVirt 3.3 using the stable repos, updated libvirt to 1.1.2-1, and configured a GlusterFS cluster. I've just tried the live migration of the Windows host; here I send you the vdsm, libvirt and qemu logs.

hmm… I see both the PCI address for the CDROM, as we've been seeing on gluster setups (so far no update from gluster AFAIK… it seems to be broken on their side somewhere…), and also an error on the migration destination which seems to be another/different disk device mess-up.

The W2K8 create VM call in ovirt1's log is already wrong; can you capture/include the initial create call? In the previous cases it was always the case that the VM was created (from engine) with an IDE address but then after the first run it somehow changed to PCI on the first status update from vdsm.

Thanks,
michal

I've done some further tests and comparisons.

In this F19-based + oVirt stable repo setup, the difference in the qemu command line between the working "run once" with CD enabled of this CentOS 6.4 x86_64 VM and the normal run that doesn't proceed is:

run once
-drive file=/rhev/data-center/mnt/f18engine.mydomain:_var_lib_exports_iso/cc790a86-72e2-4aa2-a0b6-700756c399c3/images/11111111-1111-1111-1111-111111111111/CentOS-6.4-x86_64-bin-DVD1.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=2

run
-drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0

A working normal run of a WinXP 32bit VM in another oVirt 3.2.3 environment is
-drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
so it is the same as in 3.3.

Possible problems:
- 64bit OS: I can try tomorrow to create a CentOS 6.4 32 bit VM and compare results
- qemu default emulated pc version

Some comparisons:

a) f18 host (AIO) with oVirt 3.2.3
qemu-kvm-1.2.2-14.fc18.x86_64

[root@tekkaman polimi]# qemu-kvm -machine ?
Supported machines are:
none     empty machine
pc       Standard PC (alias of pc-1.2)
pc-1.2   Standard PC (default)
pc-1.1   Standard PC
pc-1.0   Standard PC
pc-0.15  Standard PC
pc-0.14  Standard PC
pc-0.13  Standard PC
pc-0.12  Standard PC
pc-0.11  Standard PC, qemu 0.11
pc-0.10  Standard PC, qemu 0.10
isapc    ISA-only PC

When I start WinXP (32bit of course) I see this command line:
/usr/bin/qemu-kvm -name winxp -S -M pc-0.14 -cpu Opteron_G3 ....

b) f19 host in oVirt 3.3
qemu-kvm-1.4.2-11.fc19.x86_64

[root@f18ovn03 ~]# qemu-kvm -machine ?
Supported machines are:
none           empty machine
pc             Standard PC (i440FX + PIIX, 1996) (alias of pc-i440fx-1.4)
pc-i440fx-1.4  Standard PC (i440FX + PIIX, 1996) (default)
pc-1.3         Standard PC
pc-1.2         Standard PC
pc-1.1         Standard PC
pc-1.0         Standard PC
pc-0.15        Standard PC
pc-0.14        Standard PC
pc-0.13        Standard PC
pc-0.12        Standard PC
pc-0.11        Standard PC, qemu 0.11
pc-0.10        Standard PC, qemu 0.10
isapc          ISA-only PC
q35            Standard PC (Q35 + ICH9, 2009) (alias of pc-q35-1.4)
pc-q35-1.4     Standard PC (Q35 + ICH9, 2009)

The qemu command line for a CentOS 6.4 64bit VM is:
/usr/bin/qemu-system-x86_64 -machine accel=kvm -name c6s -S -machine pc-1.0,accel=kvm,usb=off -cpu Opteron_G2

Both /usr/bin/qemu-kvm and /usr/bin/qemu-system-x86_64 are in the qemu-system-x86 rpm package.

Questions:
- given a version of oVirt, is the pc version in the generated qemu command line the same for every defined VM, or is there any logic depending on the OS chosen and/or bitness (32 vs 64)?
- if the answer to the previous question is a fixed pc version, is there any particular reason to use pc-0.14 in oVirt 3.2.3 and pc-1.0 in oVirt 3.3 and not the latest available (so pc-1.2 in 3.2.3 and pc-1.3 or pc-i440fx-1.4 in 3.3)?
- where can I find documentation about the differences between the different pc-x.y versions?
- can I use a wrapper in oVirt 3.3 to start a VM with a different pc-x.y (e.g. pc-0.14) and see if anything changes, helping debug and solve the problem?

Thanks,
Gianluca

On Mon, Oct 07, 2013 at 12:25:30AM +0200, Gianluca Cecchi wrote:
I've done some further tests and comparisons
Is this related to emitor's bug? For emitor, when a VM does not start, it happens because libvirt recognizes that its devices have bad addresses. It never gets to the qemu command line phase.
In this F19 based + ovirt stable repo the difference in qemu command line between the working run once with cd enabled of this CentOS 6.4 x86_64 and the normal run that doesn't proceed is:
run once -drive file=/rhev/data-center/mnt/f18engine.mydomain:_var_lib_exports_iso/cc790a86-72e2-4aa2-a0b6-700756c399c3/images/11111111-1111-1111-1111-111111111111/CentOS-6.4-x86_64-bin-DVD1.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=2
run -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
A working normal run of a WinXP 32bit VM in another oVirt 3.2.3 environment is -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
so it is the same as in 3.3.
Possible problems:
- 64bit OS I can try tomorrow to create a CentOS 6.4 32 bit and compare results
- qemu default emulated pc version
Some comparisons
a) f18 host (AIO) with oVirt 3.2.3
qemu-kvm-1.2.2-14.fc18.x86_64
[root@tekkaman polimi]# qemu-kvm -machine ?
Supported machines are:
none     empty machine
pc       Standard PC (alias of pc-1.2)
pc-1.2   Standard PC (default)
pc-1.1   Standard PC
pc-1.0   Standard PC
pc-0.15  Standard PC
pc-0.14  Standard PC
pc-0.13  Standard PC
pc-0.12  Standard PC
pc-0.11  Standard PC, qemu 0.11
pc-0.10  Standard PC, qemu 0.10
isapc    ISA-only PC

When I start WinXP (32bit of course) I see this command line:
/usr/bin/qemu-kvm -name winxp -S -M pc-0.14 -cpu Opteron_G3 ....
b) f19 host in oVirt 3.3
qemu-kvm-1.4.2-11.fc19.x86_64
[root@f18ovn03 ~]# qemu-kvm -machine ?
Supported machines are:
none           empty machine
pc             Standard PC (i440FX + PIIX, 1996) (alias of pc-i440fx-1.4)
pc-i440fx-1.4  Standard PC (i440FX + PIIX, 1996) (default)
pc-1.3         Standard PC
pc-1.2         Standard PC
pc-1.1         Standard PC
pc-1.0         Standard PC
pc-0.15        Standard PC
pc-0.14        Standard PC
pc-0.13        Standard PC
pc-0.12        Standard PC
pc-0.11        Standard PC, qemu 0.11
pc-0.10        Standard PC, qemu 0.10
isapc          ISA-only PC
q35            Standard PC (Q35 + ICH9, 2009) (alias of pc-q35-1.4)
pc-q35-1.4     Standard PC (Q35 + ICH9, 2009)

The qemu command line for a CentOS 6.4 64bit VM is:
/usr/bin/qemu-system-x86_64 -machine accel=kvm -name c6s -S -machine pc-1.0,accel=kvm,usb=off -cpu Opteron_G2
both /usr/bin/qemu-kvm and /usr/bin/qemu-system-x86_64 are in qemu-system-x86 rpm package.
Questions:
- given a version of oVirt, is the pc version in the generated qemu command line the same for every defined VM, or is there any logic depending on the OS chosen and/or bitness (32 vs 64)?
emulatedMachine is set by the effective clusterLevel.
- if answer to previous question is fixed pc version, is there any particular reason to use pc-0.14 in oVirt 3.2.3 and pc-1.0 in oVirt 3.3 and not the latest available (so pc-1.2 in 3.2.3 and pc-1.3 or pc-i440fx-1.4 in 3.3)?
We promise inter-cluster migration of VMs. Hence, 3.2.3 may use only the newest machine available at 3.2.0.
- where can I find documentation about difference between the different pc-x.y versions?
I am not aware of anything beyond the qemu source code and release notes.
- can I use a wrapper in oVirt 3.3 to start VM with a different pc-x.y (eg pc-0.14) and see if anything changes helping debug and solve the problem?
Yes, you may write a vdsm before_vm_start hook. It should search for the 'machine' attribute and set it to whatever value you please; a rough sketch follows below.

Dan.
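A minimal sketch of such a hook, assuming the standard vdsm hooking module and hook directory; the machine type and the environment variable used to select it are only examples, not an existing vdsm convention:

#!/usr/bin/python
# Minimal before_vm_start hook sketch: override the emulated machine type.
# Place it, executable, in /usr/libexec/vdsm/hooks/before_vm_start/ on the host.
import os

import hooking

# Hypothetical knob: take the machine type from an environment variable
# (custom properties are exposed to hooks as env vars); default to pc-0.14
# for this experiment.
machine = os.environ.get('machine_type', 'pc-0.14')

domxml = hooking.read_domxml()          # libvirt domain XML as a DOM tree
for elem in domxml.getElementsByTagName('type'):
    # matches <os><type arch="x86_64" machine="pc-1.0">hvm</type></os>
    if elem.hasAttribute('machine'):
        elem.setAttribute('machine', machine)
hooking.write_domxml(domxml)            # hand the modified XML back to vdsm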

On Fri, Oct 04, 2013 at 02:55:42PM -0300, emitor@gmail.com wrote:
Dan, I've made a whole new installation of oVirt 3.3 using the stable repos, I've updated libvirt to 1.1.2-1 and configured a glusterFS cluster.
I've just tried the live migration of the Windows host; here I send you the vdsm, libvirt and qemu logs.
Here goes. vm.Vm._getUnderlyingDriveInfo() matches the list of disks provided by Engine with the list of disks reported back by libvirt. After a match has been found, the libvirt-assigned address of each disk is reported to Engine.

_getUnderlyingDriveInfo is awfully naive. If the cdrom is empty, the function matches it with the gluster volume, and overrides the cdrom's address with the gluster volume's; a toy illustration of the pitfall follows below.

Would you please test if http://gerrit.ovirt.org/19906 solves the issue? (I haven't. Too late at night.)

Regards,
Dan.
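Not vdsm's actual code, just a minimal Python sketch of the failure mode under the assumption that disks are paired by a loose path comparison: an empty cdrom path "matches" the gluster disk and inherits its PCI address, while an exact-path comparison keeps the cdrom on its ide/drive address.

# Toy sketch of the naive matching pitfall -- not the real
# _getUnderlyingDriveInfo(), only the general idea.
engine_disks = [
    {'name': 'cdrom', 'path': ''},                          # empty cdrom
    {'name': 'osdisk', 'path': 'gvdata/images/a123801a'},   # gluster volume
]
libvirt_disks = [
    {'path': 'gvdata/images/a123801a',
     'address': {'type': 'pci', 'slot': '0x06'}},
    {'path': '',
     'address': {'type': 'drive', 'bus': '1', 'unit': '0'}},
]

def naive_match(disk, reported):
    # Buggy: '' is a substring of every path, so the empty cdrom "matches"
    # the first reported disk and would inherit its PCI address.
    for r in reported:
        if disk['path'] in r['path']:
            return r

def exact_match(disk, reported):
    # Stricter: require an exact path match, so the empty cdrom only pairs
    # with the empty-path (ide) device.
    for r in reported:
        if disk['path'] == r['path']:
            return r

for d in engine_disks:
    print("%s -> %s" % (d['name'], naive_match(d, libvirt_disks)['address']))
    # cdrom -> {'type': 'pci', ...}  i.e. the cdrom ends up with a PCI address
print(exact_match(engine_disks[0], libvirt_disks)['address'])
# -> {'type': 'drive', ...}          the cdrom keeps its ide/drive address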

On Mon, Oct 7, 2013 at 2:59 AM, Dan Kenigsberg wrote:
Would you please test if http://gerrit.ovirt.org/19906 solves the issue? (I haven't. Too late at night.)
Regards, Dan.
I can confirm that it resolves https://bugzilla.redhat.com/show_bug.cgi?id=1007980 so now I'm able to start VM without having to select run once and attaching a cd iso (note that is only valid for newly created VMs though) But migration still fails On engine: 2013-10-08 01:20:51,459 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-15) START, GlusterV olumesListVDSCommand(HostName = f18ovn03, HostId = b67bcfd4-f868-49d5-8704-4936ee922249), log id: 4425a532 2013-10-08 01:20:51,547 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-15) FINISH, Gluster VolumesListVDSCommand, return: {ed71a4c2-6205-4aad-9aab-85da086d5ba3=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@8defb13f , c8cbcac7-1d40-4cee-837d-bb97467fb2bd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@91be2b8a}, log id: 4425a532 2013-10-08 01:20:51,697 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-39) RefreshVmList vm id d54660a2-45 ed-41ae-ab99-a6f93ebbdbb1 is migrating to vds f18ovn01 ignoring it in the refresh until migration is done 2013-10-08 01:20:54,765 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServicesListVDSCommand] (pool-6-thread-38) START, GlusterServicesListVDSC ommand(HostName = f18ovn01, HostId = 80188ccc-83b2-4bc8-9385-8d07f7458a3c), log id: 41ed75a4 2013-10-08 01:20:54,766 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServicesListVDSCommand] (pool-6-thread-36) START, GlusterServicesListVDSC ommand(HostName = f18ovn03, HostId = b67bcfd4-f868-49d5-8704-4936ee922249), log id: 7eea38bc 2013-10-08 01:20:56,618 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-2) START, GlusterVo lumesListVDSCommand(HostName = f18ovn01, HostId = 80188ccc-83b2-4bc8-9385-8d07f7458a3c), log id: 34d0827b 2013-10-08 01:20:56,722 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ListVDSCommand] (DefaultQuartzScheduler_Worker-84) Command ListVDS execution failed. Exception: VDSNetworkException: java.net.SocketException: Connection reset 2013-10-08 01:20:56,725 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GlusterServicesListVDSCommand] (pool-6-thread-36) Command GlusterServicesListVDS execution failed. Exception: VDSNetworkException: java.net.SocketException: Connection reset 2013-10-08 01:20:56,725 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServicesListVDSCommand] (pool-6-thread-36) FINISH, GlusterServicesListVDSCommand, log id: 7eea38bc 2013-10-08 01:20:56,727 WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (DefaultQuartzScheduler_Worker-84) Failed to refresh VDS , vds = b67bcfd4-f868-49d5-8704-4936ee922249 : f18ovn03, VDS Network Error, continuing. java.net.SocketException: Connection reset 2013-10-08 01:20:58,583 INFO [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (DefaultQuartzScheduler_Worker-48) Running command: SetStoragePoolStatusCommand internal: true. Entities affected : ID: eb679feb-4da2-4fd0-a185-abbe459ffa70 Type: StoragePool 2013-10-08 01:20:58,603 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-48) Correlation ID: 16f779a3, Call Stack: null, Custom Event ID: -1, Message: Invalid status on Data Center Gluster. Setting status to Non Responsive. 
2013-10-08 01:20:58,701 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-48) Command org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand return value TaskStatusListReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=654, mMessage=Not SPM: ()]] 2013-10-08 01:20:58,714 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-48) HostName = f18ovn03 2013-10-08 01:20:58,714 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-48) Command HSMGetAllTasksStatusesVDS execution failed. Exception: IRSNonOperationalException: IRSGenericException: IRSErrorException: IRSNonOperationalException: Not SPM: () 2013-10-08 01:20:59,356 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ListVDSCommand] (DefaultQuartzScheduler_Worker-41) Command ListVDS execution failed. Exception: VDSNetworkException: java.net.SocketException: Connection reset On host where VM was running vdsm.log contains: Thread-4643::DEBUG::2013-10-08 01:20:53,700::BindingXMLRPC::979::vds::(wrapper) client [10.4.4.60]::call vmMigrate with ({'tunneled': 'false', 'dst': '10.4.4.58:54321', 'vmId': 'd54660a2-45ed-41ae-ab99-a6f93ebbdbb1', 'method': 'online', 'src': '10.4.4.59'},) {} flowID [7c55a4a9] Thread-4643::DEBUG::2013-10-08 01:20:53,700::API::462::vds::(migrate) {'tunneled': 'false', 'dst': '10.4.4.58:54321', 'vmId': 'd54660a2-45ed-41ae-ab99-a6f93ebbdbb1', 'method': 'online', 'src': '10.4.4.59'} Thread-4643::DEBUG::2013-10-08 01:20:53,701::BindingXMLRPC::986::vds::(wrapper) return vmMigrate with {'status': {'message': 'Migration in progress', 'code': 0}, 'progress': 0} Thread-4644::DEBUG::2013-10-08 01:20:53,702::vm::182::vm.Vm::(_setupVdsConnection) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::Destination server is: 10.4.4.58:54321 Thread-4644::DEBUG::2013-10-08 01:20:53,703::vm::184::vm.Vm::(_setupVdsConnection) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::Initiating connection with destination Thread-4644::DEBUG::2013-10-08 01:20:53,718::vm::238::vm.Vm::(_prepareGuest) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::Migration started Thread-4644::DEBUG::2013-10-08 01:20:53,739::vm::302::vm.Vm::(run) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::migration semaphore acquired Thread-4644::DEBUG::2013-10-08 01:20:53,933::vm::360::vm.Vm::(_startUnderlyingMigration) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::starting migration to qemu+tls://10.4.4.58/system with miguri tcp://10.4.4.58 Thread-4645::DEBUG::2013-10-08 01:20:53,934::vm::718::vm.Vm::(run) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::migration downtime thread started Thread-4646::DEBUG::2013-10-08 01:20:53,935::vm::756::vm.Vm::(run) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::starting migration monitor thread Thread-4648::DEBUG::2013-10-08 01:20:54,321::BindingXMLRPC::979::vds::(wrapper) client [10.4.4.60]::call volumesList with () {} Thread-4648::DEBUG::2013-10-08 01:20:54,349::BindingXMLRPC::986::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {'gviso': {'transportType': ['TCP'], 'uuid': 'c8cbcac7-1d40-4cee-837d-bb97467fb2bd', 'bricks': ['f18ovn01.mydomain:/gluster/ISO_GLUSTER/brick1', 'f18ovn03.mydomain:/gluster/ISO_GLUSTER/brick1'], 'volumeName': 'gviso', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'options': {'storage.owner-gid': '36', 
'storage.owner-uid': '36', 'server.allow-insecure': 'on'}}, 'gvdata': {'transportType': ['TCP'], 'uuid': 'ed71a4c2-6205-4aad-9aab-85da086d5ba3', 'bricks': ['f18ovn01.mydomain:/gluster/DATA_GLUSTER/brick1', 'f18ovn03.mydomain:/gluster/DATA_GLUSTER/brick1'], 'volumeName': 'gvdata', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'options': {'server.allow-insecure': 'on', 'storage.owner-uid': '36', 'storage.owner-gid': '36'}}}} Thread-4644::ERROR::2013-10-08 01:20:54,873::libvirtconnection::94::libvirtconnection::(wrapper) connection to libvirt broken. ecode: 38 edom: 7 Thread-4644::ERROR::2013-10-08 01:20:54,873::libvirtconnection::96::libvirtconnection::(wrapper) taking calling process down. MainThread::DEBUG::2013-10-08 01:20:54,874::vdsm::45::vds::(sigtermHandler) Received signal 15 Thread-4644::DEBUG::2013-10-08 01:20:54,874::vm::733::vm.Vm::(cancel) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::canceling migration downtime thread Thread-4644::DEBUG::2013-10-08 01:20:54,875::vm::803::vm.Vm::(stop) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::stopping migration monitor thread Thread-4645::DEBUG::2013-10-08 01:20:54,875::vm::730::vm.Vm::(run) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::migration downtime thread exiting Thread-4644::ERROR::2013-10-08 01:20:54,875::vm::244::vm.Vm::(_recover) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::Cannot recv data: Input/output error Thread-4644::ERROR::2013-10-08 01:20:55,008::vm::324::vm.Vm::(run) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::Failed to migrate Traceback (most recent call last): File "/usr/share/vdsm/vm.py", line 311, in run self._startUnderlyingMigration() File "/usr/share/vdsm/vm.py", line 388, in _startUnderlyingMigration None, maxBandwidth) File "/usr/share/vdsm/vm.py", line 826, in f ret = attr(*args, **kwargs) File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py", line 76, in wrapper ret = f(*args, **kwargs) File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1253, in migrateToURI2 if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', dom=self) libvirtError: Cannot recv data: Input/output error MainThread::INFO::2013-10-08 01:20:55,321::vmChannels::183::vds::(stop) VM channels listener was stopped. MainThread::INFO::2013-10-08 01:20:55,322::momIF::81::MOM::(stop) Shutting down MOM MainThread::DEBUG::2013-10-08 01:20:55,322::task::579::TaskManager.Task::(_updateState) Task=`23df8b46-c916-4c52-98a9-3e6540171467`::moving from state init -> state preparing MainThread::INFO::2013-10-08 01:20:55,323::logUtils::44::dispatcher::(wrapper) Run and protect: prepareForShutdown(options=None) Thread-11::DEBUG::2013-10-08 01:20:55,323::storageServer::698::ConnectionMonitor::(_monitorConnections) Monitoring stopped VM Channels Listener::INFO::2013-10-08 01:20:55,672::vmChannels::178::vds::(run) VM channels listener thread has ended. 
Thread-23::DEBUG::2013-10-08 01:20:55,929::domainMonitor::161::Storage.DomainMonitorThread::(_monitorLoop) Stopping domain monitor for cc790a86-72e2-4aa2-a0b6-700756c399c3 Thread-24::DEBUG::2013-10-08 01:20:55,930::domainMonitor::161::Storage.DomainMonitorThread::(_monitorLoop) Stopping domain monitor for d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291 Thread-24::INFO::2013-10-08 01:20:55,930::clusterlock::197::SANLock::(releaseHostId) Releasing host id for domain d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291 (id: 2) Thread-24::DEBUG::2013-10-08 01:20:55,930::domainMonitor::170::Storage.DomainMonitorThread::(_monitorLoop) Unable to release the host id 2 for domain d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291 Traceback (most recent call last): File "/usr/share/vdsm/storage/domainMonitor.py", line 167, in _monitorLoop self.domain.releaseHostId(self.hostId, unused=True) File "/usr/share/vdsm/storage/sd.py", line 461, in releaseHostId self._clusterLock.releaseHostId(hostId, async, unused) File "/usr/share/vdsm/storage/clusterlock.py", line 204, in releaseHostId raise se.ReleaseHostIdFailure(self._sdUUID, e) ReleaseHostIdFailure: Cannot release host id: ('d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291', SanlockException(16, 'Sanlock lockspace remove failure', 'Device or resource busy')) MainThread::DEBUG::2013-10-08 01:20:55,932::taskManager::80::TaskManager::(prepareForShutdown) Request to stop all tasks MainThread::INFO::2013-10-08 01:20:55,932::logUtils::47::dispatcher::(wrapper) Run and protect: prepareForShutdown, Return response: None MainThread::DEBUG::2013-10-08 01:20:55,932::task::1168::TaskManager.Task::(prepare) Task=`23df8b46-c916-4c52-98a9-3e6540171467`::finished: None MainThread::DEBUG::2013-10-08 01:20:55,933::task::579::TaskManager.Task::(_updateState) Task=`23df8b46-c916-4c52-98a9-3e6540171467`::moving from state preparing -> state finished On target host vdsm.log: Thread-3393::DEBUG::2013-10-08 01:20:53,900::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291' (0 active users) Thread-3393::DEBUG::2013-10-08 01:20:53,900::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291' is free, finding out if anyone is waiting for it. Thread-3393::DEBUG::2013-10-08 01:20:53,900::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291', Clearing records. 
Thread-3393::DEBUG::2013-10-08 01:20:53,901::task::974::TaskManager.Task::(_decref) Task=`be76585d-3de3-4520-873e-9eb501e12311`::ref 0 aborting False Thread-3393::INFO::2013-10-08 01:20:53,901::clientIF::325::vds::(prepareVolumePath) prepared volume path: /rhev/data-center/eb679feb-4da2-4fd0-a185-abbe459ffa70/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/15f9ca1c-c435-4892-9eb7-0c84583b2a7d/a123801a-0a4d-4a47-a426-99d8480d2e49 Thread-3393::INFO::2013-10-08 01:20:53,901::clientIF::325::vds::(prepareVolumePath) prepared volume path: gvdata/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/15f9ca1c-c435-4892-9eb7-0c84583b2a7d/a123801a-0a4d-4a47-a426-99d8480d2e49 Thread-3393::DEBUG::2013-10-08 01:20:53,907::vm::2036::vm.Vm::(_startUnderlyingVm) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::_ongoingCreations released Thread-3393::DEBUG::2013-10-08 01:20:53,908::vm::3347::vm.Vm::(_waitForIncomingMigrationFinish) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::Waiting 300 seconds for end of migration Thread-3392::ERROR::2013-10-08 01:20:53,907::vm::4789::vm.Vm::(_updateDevicesDomxmlCache) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::Alias not found for device type disk during migration at destination host Thread-3392::DEBUG::2013-10-08 01:20:53,918::API::510::vds::(migrationCreate) Destination VM creation succeeded Thread-3392::DEBUG::2013-10-08 01:20:53,918::BindingXMLRPC::986::vds::(wrapper) return vmMigrationCreate with {'status': {'message': 'Done', 'code': 0}, 'migrationPort': 0, 'params': {'guestFQDN': '', 'acpiEnable': 'true', 'emulatedMachine': 'pc-1.0', 'afterMigrationStatus': 'Up', 'spiceSecureChannels': 'smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard', 'pid': '0', 'memGuaranteedSize': 1024, 'transparentHugePages': 'true', 'displaySecurePort': '-1', 'timeOffset': '-114', 'cpuType': 'Opteron_G2', 'smp': '1', 'pauseCode': 'NOERR', 'migrationDest': 'libvirt', 'custom': {'device_a5fa5a6d-f191-42d2-be55-d6227499c658device_bd099ee2-faee-41c3-9e01-0a01f458abe9device_db19ff20-8f6d-488c-bad8-19e33b01f19edevice_9c0a4779-62bc-414b-abb5-a2fab36104bedevice_1459cc81-2f88-45dd-ae3c-0c030a084d2e': 'VmDevice {vmId=d54660a2-45ed-41ae-ab99-a6f93ebbdbb1, deviceId=1459cc81-2f88-45dd-ae3c-0c030a084d2e, device=unix, type=CHANNEL, bootOrder=0, specParams={}, address={port=1, bus=0, controller=0, type=virtio-serial}, managed=false, plugged=true, readOnly=false, deviceAlias=channel0, customProperties={}}', 'device_a5fa5a6d-f191-42d2-be55-d6227499c658device_bd099ee2-faee-41c3-9e01-0a01f458abe9device_db19ff20-8f6d-488c-bad8-19e33b01f19edevice_9c0a4779-62bc-414b-abb5-a2fab36104bedevice_1459cc81-2f88-45dd-ae3c-0c030a084d2edevice_5bea8f60-7de6-4c93-99ff-af67635e2af6device_8d086706-28b6-40fa-a86d-3733c9ebe5c1': 'VmDevice {vmId=d54660a2-45ed-41ae-ab99-a6f93ebbdbb1, deviceId=8d086706-28b6-40fa-a86d-3733c9ebe5c1, device=spicevmc, type=CHANNEL, bootOrder=0, specParams={}, address={port=3, bus=0, controller=0, type=virtio-serial}, managed=false, plugged=true, readOnly=false, deviceAlias=channel2, customProperties={}}', 'device_a5fa5a6d-f191-42d2-be55-d6227499c658device_bd099ee2-faee-41c3-9e01-0a01f458abe9device_db19ff20-8f6d-488c-bad8-19e33b01f19e': 'VmDevice {vmId=d54660a2-45ed-41ae-ab99-a6f93ebbdbb1, deviceId=db19ff20-8f6d-488c-bad8-19e33b01f19e, device=ide, type=CONTROLLER, bootOrder=0, specParams={}, address={bus=0x00, domain=0x0000, type=pci, slot=0x01, function=0x1}, managed=false, plugged=true, readOnly=false, deviceAlias=ide0, customProperties={}}', 
'device_a5fa5a6d-f191-42d2-be55-d6227499c658device_bd099ee2-faee-41c3-9e01-0a01f458abe9device_db19ff20-8f6d-488c-bad8-19e33b01f19edevice_9c0a4779-62bc-414b-abb5-a2fab36104be': 'VmDevice {vmId=d54660a2-45ed-41ae-ab99-a6f93ebbdbb1, deviceId=9c0a4779-62bc-414b-abb5-a2fab36104be, device=virtio-serial, type=CONTROLLER, bootOrder=0, specParams={}, address={bus=0x00, domain=0x0000, type=pci, slot=0x05, function=0x0}, managed=false, plugged=true, readOnly=false, deviceAlias=virtio-serial0, customProperties={}}', 'device_a5fa5a6d-f191-42d2-be55-d6227499c658': 'VmDevice {vmId=d54660a2-45ed-41ae-ab99-a6f93ebbdbb1, deviceId=a5fa5a6d-f191-42d2-be55-d6227499c658, device=scsi, type=CONTROLLER, bootOrder=0, specParams={}, address={bus=0x00, domain=0x0000, type=pci, slot=0x04, function=0x0}, managed=false, plugged=true, readOnly=false, deviceAlias=scsi0, customProperties={}}', 'device_a5fa5a6d-f191-42d2-be55-d6227499c658device_bd099ee2-faee-41c3-9e01-0a01f458abe9device_db19ff20-8f6d-488c-bad8-19e33b01f19edevice_9c0a4779-62bc-414b-abb5-a2fab36104bedevice_1459cc81-2f88-45dd-ae3c-0c030a084d2edevice_5bea8f60-7de6-4c93-99ff-af67635e2af6': 'VmDevice {vmId=d54660a2-45ed-41ae-ab99-a6f93ebbdbb1, deviceId=5bea8f60-7de6-4c93-99ff-af67635e2af6, device=unix, type=CHANNEL, bootOrder=0, specParams={}, address={port=2, bus=0, controller=0, type=virtio-serial}, managed=false, plugged=true, readOnly=false, deviceAlias=channel1, customProperties={}}', 'device_a5fa5a6d-f191-42d2-be55-d6227499c658device_bd099ee2-faee-41c3-9e01-0a01f458abe9': 'VmDevice {vmId=d54660a2-45ed-41ae-ab99-a6f93ebbdbb1, deviceId=bd099ee2-faee-41c3-9e01-0a01f458abe9, device=disk, type=DISK, bootOrder=0, specParams={}, address={bus=0x00, domain=0x0000, type=pci, slot=0x06, function=0x0}, managed=false, plugged=true, readOnly=false, deviceAlias=virtio-disk0, customProperties={}}'}, 'vmType': 'kvm', 'spiceSslCipherSuite': 'DEFAULT', 'memSize': 1024, 'vmName': 'c8again32', 'nice': '0', 'username': 'Unknown', 'vmId': 'd54660a2-45ed-41ae-ab99-a6f93ebbdbb1', 'displayIp': '0', 'keyboardLayout': 'en-us', 'displayPort': '-1', 'smartcardEnable': 'false', 'guestIPs': '', 'nicModel': 'rtl8139,pv', 'smpCoresPerSocket': '1', 'kvmEnable': 'true', 'pitReinjection': 'false', 'devices': [{'device': 'unix', 'alias': 'channel0', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '1'}}, {'device': 'unix', 'alias': 'channel1', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '2'}}, {'device': 'spicevmc', 'alias': 'channel2', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '3'}}, {'device': 'scsi', 'alias': 'scsi0', 'model': 'virtio-scsi', 'type': 'controller', 'address': {'slot': '0x04', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}}, {'device': 'usb', 'alias': 'usb0', 'type': 'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x2'}}, {'device': 'ide', 'alias': 'ide0', 'type': 'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x1'}}, {'device': 'virtio-serial', 'alias': 'virtio-serial0', 'type': 'controller', 'address': {'slot': '0x05', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}}, {'specParams': {'vram': '65536'}, 'alias': 'video0', 'deviceId': 'c4763c67-c69b-4b7c-b0cd-ac5beca82669', 'address': {'slot': '0x02', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 
'device': 'qxl', 'type': 'video'}, {'nicModel': 'pv', 'macAddr': '00:1a:4a:8f:04:f7', 'linkActive': True, 'network': 'ovirtmgmt', 'specParams': {}, 'custom': {}, 'filter': 'vdsm-no-mac-spoofing', 'alias': 'net0', 'deviceId': '4478616d-d9d4-4284-8e6b-2cce0d330504', 'address': {'slot': '0x03', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'bridge', 'type': 'interface', 'name': 'vnet0'}, {'index': '2', 'iface': 'ide', 'name': 'hdc', 'alias': 'ide0-1-0', 'shared': 'false', 'specParams': {'path': ''}, 'readonly': 'True', 'deviceId': 'a3d3a41d-7f47-4a64-b3fe-10f33312f929', 'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'device': 'cdrom', 'path': '', 'type': 'disk'}, {'index': 0, 'iface': 'virtio', 'volumeInfo': {'volPort': '0', 'volType': 'network', 'volfileServer': 'f18ovn01.mydomain', 'volTransport': 'tcp', 'protocol': 'gluster', 'path': 'gvdata/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/15f9ca1c-c435-4892-9eb7-0c84583b2a7d/a123801a-0a4d-4a47-a426-99d8480d2e49'}, 'format': 'raw', 'shared': 'false', 'optional': 'false', 'volumeID': 'a123801a-0a4d-4a47-a426-99d8480d2e49', 'propagateErrors': 'off', 'imageID': '15f9ca1c-c435-4892-9eb7-0c84583b2a7d', 'specParams': {}, 'readonly': 'false', 'domainID': 'd0b96d4a-62aa-4e9f-b50e-f7a0cb5be291', 'deviceId': '15f9ca1c-c435-4892-9eb7-0c84583b2a7d', 'truesize': '4337233920', 'poolID': 'eb679feb-4da2-4fd0-a185-abbe459ffa70', 'device': 'disk', 'reqsize': '0', 'path': '/rhev/data-center/eb679feb-4da2-4fd0-a185-abbe459ffa70/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/15f9ca1c-c435-4892-9eb7-0c84583b2a7d/a123801a-0a4d-4a47-a426-99d8480d2e49', 'apparentsize': '10737418240', 'type': 'disk', 'volumeChain': [{'path': '/rhev/data-center/eb679feb-4da2-4fd0-a185-abbe459ffa70/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/15f9ca1c-c435-4892-9eb7-0c84583b2a7d/a123801a-0a4d-4a47-a426-99d8480d2e49', 'domainID': 'd0b96d4a-62aa-4e9f-b50e-f7a0cb5be291', 'vmVolInfo': {'volPort': '0', 'volType': 'network', 'volfileServer': 'f18ovn01.mydomain', 'volTransport': 'tcp', 'protocol': 'gluster', 'path': 'gvdata/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/15f9ca1c-c435-4892-9eb7-0c84583b2a7d/a123801a-0a4d-4a47-a426-99d8480d2e49'}, 'volumeID': 'a123801a-0a4d-4a47-a426-99d8480d2e49', 'imageID': '15f9ca1c-c435-4892-9eb7-0c84583b2a7d'}]}, {'index': '0', 'iface': 'pci', 'name': 'vda', 'alias': 'virtio-disk0', 'readonly': 'False', 'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'disk', 'path': 'gvdata/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/15f9ca1c-c435-4892-9eb7-0c84583b2a7d/a123801a-0a4d-4a47-a426-99d8480d2e49', 'type': 'disk'}, {'target': 1048576, 'specParams': {'model': 'virtio'}, 'alias': 'balloon0', 'deviceId': '216acfca-9672-4390-91b0-feb7aa201484', 'address': {'slot': '0x07', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'memballoon', 'type': 'balloon'}], 'status': 'Migration Destination', 'clientIp': '', 'display': 'qxl'}} Thread-3396::DEBUG::2013-10-08 01:20:54,554::BindingXMLRPC::979::vds::(wrapper) client [10.4.4.60]::call vmGetStats with ('d54660a2-45ed-41ae-ab99-a6f93ebbdbb1',) {} Thread-3396::DEBUG::2013-10-08 01:20:54,555::BindingXMLRPC::986::vds::(wrapper) return vmGetStats with {'status': {'message': 'Done', 'code': 0}, 'statsList': [{'status': 'Migration Destination', 'hash': '0', 'acpiEnable': 'true', 'pid': '0', 'displayIp': '0', 'displayPort': '-1', 'displaySecurePort': '-1', 'timeOffset': '-114', 
'pauseCode': 'NOERR', 'kvmEnable': 'true', 'network': {}, 'vmId': 'd54660a2-45ed-41ae-ab99-a6f93ebbdbb1', 'displayType': 'qxl', 'disks': {}, 'monitorResponse': '0', 'elapsedTime': '178', 'vmType': 'kvm', 'clientIp': ''}]} Thread-3397::DEBUG::2013-10-08 01:20:54,885::BindingXMLRPC::979::vds::(wrapper) client [10.4.4.59]::call vmDestroy with ('d54660a2-45ed-41ae-ab99-a6f93ebbdbb1',) {} Thread-3397::INFO::2013-10-08 01:20:54,886::API::317::vds::(destroy) vmContainerLock acquired by vm d54660a2-45ed-41ae-ab99-a6f93ebbdbb1 Thread-3397::DEBUG::2013-10-08 01:20:54,886::vm::4258::vm.Vm::(destroy) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::destroy Called Thread-3397::INFO::2013-10-08 01:20:54,887::vm::4204::vm.Vm::(releaseVm) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::Release VM resources Thread-3397::WARNING::2013-10-08 01:20:54,887::clientIF::337::vds::(teardownVolumePath) Drive is not a vdsm image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2 VOLWM_FREE_PCT:50 _blockDev:False _checkIoTuneCategories:<bound method Drive._checkIoTuneCategories of <vm.Drive object at 0x7fae4415bc90>> _customize:<bound method Drive._customize of <vm.Drive object at 0x7fae4415bc90>> _deviceXML:<disk device="cdrom" type="file"> <driver name="qemu" type="raw"/> <source startupPolicy="optional"/> <target bus="ide" dev="hdc"/> <readonly/> <serial/> <alias name="ide0-1-0"/> <address bus="1" controller="0" target="0" type="drive" unit="0"/> </disk> _makeName:<bound method Drive._makeName of <vm.Drive object at 0x7fae4415bc90>> _validateIoTuneParams:<bound method Drive._validateIoTuneParams of <vm.Drive object at 0x7fae4415bc90>> address:{'bus': '1', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'} alias:ide0-1-0 apparentsize:0 blockDev:False cache:none conf:{'guestFQDN': '', 'acpiEnable': 'true', 'emulatedMachine': 'pc-1.0', 'afterMigrationStatus': 'Up', 'spiceSecureChannels': 'smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard', 'pid': '0', 'memGuaranteedSize': 1024, 'transparentHugePages': 'true', 'displaySecurePort': '-1', 'timeOffset': '-114', 'cpuType': 'Opteron_G2', 'smp': '1', 'pauseCode': 'NOERR', 'migrationDest': 'libvirt', 'custom': {'device_a5fa5a6d-f191-42d2-be55-d6227499c658device_bd099ee2-faee-41c3-9e01-0a01f458abe9device_db19ff20-8f6d-488c-bad8-19e33b01f19edevice_9c0a4779-62bc-414b-abb5-a2fab36104bedevice_1459cc81-2f88-45dd-ae3c-0c030a084d2e': 'VmDevice {vmId=d54660a2-45ed-41ae-ab99-a6f93ebbdbb1, deviceId=1459cc81-2f88-45dd-ae3c-0c030a084d2e, device=unix, type=CHANNEL, bootOrder=0, specParams={}, address={port=1, bus=0, controller=0, type=virtio-serial}, managed=false, plugged=true, readOnly=false, deviceAlias=channel0, customProperties={}}', 'device_a5fa5a6d-f191-42d2-be55-d6227499c658device_bd099ee2-faee-41c3-9e01-0a01f458abe9device_db19ff20-8f6d-488c-bad8-19e33b01f19edevice_9c0a4779-62bc-414b-abb5-a2fab36104bedevice_1459cc81-2f88-45dd-ae3c-0c030a084d2edevice_5bea8f60-7de6-4c93-99ff-af67635e2af6device_8d086706-28b6-40fa-a86d-3733c9ebe5c1': 'VmDevice {vmId=d54660a2-45ed-41ae-ab99-a6f93ebbdbb1, deviceId=8d086706-28b6-40fa-a86d-3733c9ebe5c1, device=spicevmc, type=CHANNEL, bootOrder=0, specParams={}, address={port=3, bus=0, controller=0, type=virtio-serial}, managed=false, plugged=true, readOnly=false, deviceAlias=channel2, customProperties={}}', 'device_a5fa5a6d-f191-42d2-be55-d6227499c658device_bd099ee2-faee-41c3-9e01-0a01f458abe9device_db19ff20-8f6d-488c-bad8-19e33b01f19e': 'VmDevice {vmId=d54660a2-45ed-41ae-ab99-a6f93ebbdbb1, 
deviceId=db19ff20-8f6d-488c-bad8-19e33b01f19e, device=ide, type=CONTROLLER, bootOrder=0, specParams={}, address={bus=0x00, domain=0x0000, type=pci, slot=0x01, function=0x1}, managed=false, plugged=true, readOnly=false, deviceAlias=ide0, customProperties={}}', 'device_a5fa5a6d-f191-42d2-be55-d6227499c658device_bd099ee2-faee-41c3-9e01-0a01f458abe9device_db19ff20-8f6d-488c-bad8-19e33b01f19edevice_9c0a4779-62bc-414b-abb5-a2fab36104be': 'VmDevice {vmId=d54660a2-45ed-41ae-ab99-a6f93ebbdbb1, deviceId=9c0a4779-62bc-414b-abb5-a2fab36104be, device=virtio-serial, type=CONTROLLER, bootOrder=0, specParams={}, address={bus=0x00, domain=0x0000, type=pci, slot=0x05, function=0x0}, managed=false, plugged=true, readOnly=false, deviceAlias=virtio-serial0, customProperties={}}', 'device_a5fa5a6d-f191-42d2-be55-d6227499c658': 'VmDevice {vmId=d54660a2-45ed-41ae-ab99-a6f93ebbdbb1, deviceId=a5fa5a6d-f191-42d2-be55-d6227499c658, device=scsi, type=CONTROLLER, bootOrder=0, specParams={}, address={bus=0x00, domain=0x0000, type=pci, slot=0x04, function=0x0}, managed=false, plugged=true, readOnly=false, deviceAlias=scsi0, customProperties={}}', 'device_a5fa5a6d-f191-42d2-be55-d6227499c658device_bd099ee2-faee-41c3-9e01-0a01f458abe9device_db19ff20-8f6d-488c-bad8-19e33b01f19edevice_9c0a4779-62bc-414b-abb5-a2fab36104bedevice_1459cc81-2f88-45dd-ae3c-0c030a084d2edevice_5bea8f60-7de6-4c93-99ff-af67635e2af6': 'VmDevice {vmId=d54660a2-45ed-41ae-ab99-a6f93ebbdbb1, deviceId=5bea8f60-7de6-4c93-99ff-af67635e2af6, device=unix, type=CHANNEL, bootOrder=0, specParams={}, address={port=2, bus=0, controller=0, type=virtio-serial}, managed=false, plugged=true, readOnly=false, deviceAlias=channel1, customProperties={}}', 'device_a5fa5a6d-f191-42d2-be55-d6227499c658device_bd099ee2-faee-41c3-9e01-0a01f458abe9': 'VmDevice {vmId=d54660a2-45ed-41ae-ab99-a6f93ebbdbb1, deviceId=bd099ee2-faee-41c3-9e01-0a01f458abe9, device=disk, type=DISK, bootOrder=0, specParams={}, address={bus=0x00, domain=0x0000, type=pci, slot=0x06, function=0x0}, managed=false, plugged=true, readOnly=false, deviceAlias=virtio-disk0, customProperties={}}'}, 'vmType': 'kvm', 'spiceSslCipherSuite': 'DEFAULT', 'memSize': 1024, 'vmName': 'c8again32', 'nice': '0', 'username': 'Unknown', 'vmId': 'd54660a2-45ed-41ae-ab99-a6f93ebbdbb1', 'displayIp': '0', 'keyboardLayout': 'en-us', 'displayPort': '-1', 'smartcardEnable': 'false', 'guestIPs': '', 'nicModel': 'rtl8139,pv', 'smpCoresPerSocket': '1', 'kvmEnable': 'true', 'pitReinjection': 'false', 'devices': [{'device': 'unix', 'alias': 'channel0', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '1'}}, {'device': 'unix', 'alias': 'channel1', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '2'}}, {'device': 'spicevmc', 'alias': 'channel2', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '3'}}, {'device': 'scsi', 'alias': 'scsi0', 'model': 'virtio-scsi', 'type': 'controller', 'address': {'slot': '0x04', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}}, {'device': 'usb', 'alias': 'usb0', 'type': 'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x2'}}, {'device': 'ide', 'alias': 'ide0', 'type': 'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x1'}}, {'device': 'virtio-serial', 'alias': 'virtio-serial0', 'type': 'controller', 'address': {'slot': '0x05', 'bus': '0x00', 
'domain': '0x0000', 'type': 'pci', 'function': '0x0'}}, {'specParams': {'vram': '65536'}, 'alias': 'video0', 'deviceId': 'c4763c67-c69b-4b7c-b0cd-ac5beca82669', 'address': {'slot': '0x02', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'qxl', 'type': 'video'}, {'nicModel': 'pv', 'macAddr': '00:1a:4a:8f:04:f7', 'linkActive': True, 'network': 'ovirtmgmt', 'specParams': {}, 'custom': {}, 'filter': 'vdsm-no-mac-spoofing', 'alias': 'net0', 'deviceId': '4478616d-d9d4-4284-8e6b-2cce0d330504', 'address': {'slot': '0x03', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'bridge', 'type': 'interface', 'name': 'vnet0'}, {'index': '2', 'iface': 'ide', 'name': 'hdc', 'alias': 'ide0-1-0', 'shared': 'false', 'specParams': {'path': ''}, 'readonly': 'True', 'deviceId': 'a3d3a41d-7f47-4a64-b3fe-10f33312f929', 'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'device': 'cdrom', 'path': '', 'type': 'disk'}, {'index': 0, 'iface': 'virtio', 'volumeInfo': {'volPort': '0', 'volType': 'network', 'volfileServer': 'f18ovn01.mydomain', 'volTransport': 'tcp', 'protocol': 'gluster', 'path': 'gvdata/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/15f9ca1c-c435-4892-9eb7-0c84583b2a7d/a123801a-0a4d-4a47-a426-99d8480d2e49'}, 'format': 'raw', 'shared': 'false', 'optional': 'false', 'volumeID': 'a123801a-0a4d-4a47-a426-99d8480d2e49', 'propagateErrors': 'off', 'imageID': '15f9ca1c-c435-4892-9eb7-0c84583b2a7d', 'specParams': {}, 'readonly': 'false', 'domainID': 'd0b96d4a-62aa-4e9f-b50e-f7a0cb5be291', 'deviceId': '15f9ca1c-c435-4892-9eb7-0c84583b2a7d', 'truesize': '4337233920', 'poolID': 'eb679feb-4da2-4fd0-a185-abbe459ffa70', 'device': 'disk', 'reqsize': '0', 'path': '/rhev/data-center/eb679feb-4da2-4fd0-a185-abbe459ffa70/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/15f9ca1c-c435-4892-9eb7-0c84583b2a7d/a123801a-0a4d-4a47-a426-99d8480d2e49', 'apparentsize': '10737418240', 'type': 'disk', 'volumeChain': [{'path': '/rhev/data-center/eb679feb-4da2-4fd0-a185-abbe459ffa70/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/15f9ca1c-c435-4892-9eb7-0c84583b2a7d/a123801a-0a4d-4a47-a426-99d8480d2e49', 'domainID': 'd0b96d4a-62aa-4e9f-b50e-f7a0cb5be291', 'vmVolInfo': {'volPort': '0', 'volType': 'network', 'volfileServer': 'f18ovn01.mydomain', 'volTransport': 'tcp', 'protocol': 'gluster', 'path': 'gvdata/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/15f9ca1c-c435-4892-9eb7-0c84583b2a7d/a123801a-0a4d-4a47-a426-99d8480d2e49'}, 'volumeID': 'a123801a-0a4d-4a47-a426-99d8480d2e49', 'imageID': '15f9ca1c-c435-4892-9eb7-0c84583b2a7d'}]}, {'index': '0', 'iface': 'pci', 'name': 'vda', 'alias': 'virtio-disk0', 'readonly': 'False', 'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'disk', 'path': 'gvdata/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/15f9ca1c-c435-4892-9eb7-0c84583b2a7d/a123801a-0a4d-4a47-a426-99d8480d2e49', 'type': 'disk'}, {'target': 1048576, 'specParams': {'model': 'virtio'}, 'alias': 'balloon0', 'deviceId': '216acfca-9672-4390-91b0-feb7aa201484', 'address': {'slot': '0x07', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'memballoon', 'type': 'balloon'}], 'status': 'Migration Destination', 'clientIp': '', 'display': 'qxl'} createXmlElem:<bound method Drive.createXmlElem of <vm.Drive object at 0x7fae4415bc90>> device:cdrom deviceId:a3d3a41d-7f47-4a64-b3fe-10f33312f929 getNextVolumeSize:<bound method Drive.getNextVolumeSize of <vm.Drive object at 
0x7fae4415bc90>> getXML:<bound method Drive.getXML of <vm.Drive object at 0x7fae4415bc90>> iface:ide index:2 isDiskReplicationInProgress:<bound method Drive.isDiskReplicationInProgress of <vm.Drive object at 0x7fae4415bc90>> isVdsmImage:<bound method Drive.isVdsmImage of <vm.Drive object at 0x7fae4415bc90>> log:<logUtils.SimpleLogAdapter object at 0x7fae2058f1d0> name:hdc networkDev:False path: readonly:True reqsize:0 serial: shared:false specParams:{'path': ''} truesize:0 type:disk volExtensionChunk:1024 watermarkLimit:536870912
Traceback (most recent call last):
  File "/usr/share/vdsm/clientIF.py", line 331, in teardownVolumePath
    res = self.irs.teardownImage(drive['domainID'],
  File "/usr/share/vdsm/vm.py", line 1344, in __getitem__
    raise KeyError(key)
KeyError: 'domainID'
Thread-3393::ERROR::2013-10-08 01:20:54,888::libvirtconnection::94::libvirtconnection::(wrapper) connection to libvirt broken. ecode: 1 edom: 7
Thread-3393::ERROR::2013-10-08 01:20:54,892::libvirtconnection::96::libvirtconnection::(wrapper) taking calling process down.

firewall on destination seems ok:

Chain INPUT (policy ACCEPT)
target     prot opt source         destination
ACCEPT     all  --  192.168.3.3    0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0      0.0.0.0/0     state RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0      0.0.0.0/0
ACCEPT     tcp  --  0.0.0.0/0      0.0.0.0/0     tcp dpt:54321
...

On Tue, Oct 08, 2013 at 01:44:47AM +0200, Gianluca Cecchi wrote:
On Mon, Oct 7, 2013 at 2:59 AM, Dan Kenigsberg wrote:
Would you please test if http://gerrit.ovirt.org/19906 solves the issue? (I haven't. Too late at night.)
Regards, Dan.
I can confirm that it resolves https://bugzilla.redhat.com/show_bug.cgi?id=1007980
so now I'm able to start VM without having to select run once and attaching a cd iso (note that is only valid for newly created VMs though)
Yes. Old VMs are trashed with the bogus address reported by the buggy Vdsm. Can someone from engine supply a script to clear all device addresses from the VM database table?
But migration still fails
It seems like an unrelated failure. I do not know what's blocking migration traffic. Could you see if libvirtd.log and qemu logs at source and destination have clues?
Thread-4644::DEBUG::2013-10-08 01:20:53,933::vm::360::vm.Vm::(_startUnderlyingMigration) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::starting migration to qemu+tls://10.4.4.58/system with miguri tcp://10.4.4.58
Thread-4645::DEBUG::2013-10-08 01:20:53,934::vm::718::vm.Vm::(run) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::migration downtime thread started Thread-4646::DEBUG::2013-10-08 01:20:53,935::vm::756::vm.Vm::(run) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::starting migration monitor thread Thread-4648::DEBUG::2013-10-08 01:20:54,321::BindingXMLRPC::979::vds::(wrapper) client [10.4.4.60]::call volumesList with () {} Thread-4648::DEBUG::2013-10-08 01:20:54,349::BindingXMLRPC::986::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {'gviso': {'transportType': ['TCP'], 'uuid': 'c8cbcac7-1d40-4cee-837d-bb97467fb2bd', 'bricks': ['f18ovn01.mydomain:/gluster/ISO_GLUSTER/brick1', 'f18ovn03.mydomain:/gluster/ISO_GLUSTER/brick1'], 'volumeName': 'gviso', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'options': {'storage.owner-gid': '36', 'storage.owner-uid': '36', 'server.allow-insecure': 'on'}}, 'gvdata': {'transportType': ['TCP'], 'uuid': 'ed71a4c2-6205-4aad-9aab-85da086d5ba3', 'bricks': ['f18ovn01.mydomain:/gluster/DATA_GLUSTER/brick1', 'f18ovn03.mydomain:/gluster/DATA_GLUSTER/brick1'], 'volumeName': 'gvdata', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'options': {'server.allow-insecure': 'on', 'storage.owner-uid': '36', 'storage.owner-gid': '36'}}}} Thread-4644::ERROR::2013-10-08 01:20:54,873::libvirtconnection::94::libvirtconnection::(wrapper) connection to libvirt broken. ecode: 38 edom: 7 Thread-4644::ERROR::2013-10-08 01:20:54,873::libvirtconnection::96::libvirtconnection::(wrapper) taking calling process down. MainThread::DEBUG::2013-10-08 01:20:54,874::vdsm::45::vds::(sigtermHandler) Received signal 15 Thread-4644::DEBUG::2013-10-08 01:20:54,874::vm::733::vm.Vm::(cancel) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::canceling migration downtime thread Thread-4644::DEBUG::2013-10-08 01:20:54,875::vm::803::vm.Vm::(stop) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::stopping migration monitor thread Thread-4645::DEBUG::2013-10-08 01:20:54,875::vm::730::vm.Vm::(run) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::migration downtime thread exiting Thread-4644::ERROR::2013-10-08 01:20:54,875::vm::244::vm.Vm::(_recover) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::Cannot recv data: Input/output error Thread-4644::ERROR::2013-10-08 01:20:55,008::vm::324::vm.Vm::(run) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::Failed to migrate Traceback (most recent call last): File "/usr/share/vdsm/vm.py", line 311, in run self._startUnderlyingMigration() File "/usr/share/vdsm/vm.py", line 388, in _startUnderlyingMigration None, maxBandwidth) File "/usr/share/vdsm/vm.py", line 826, in f ret = attr(*args, **kwargs) File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py", line 76, in wrapper ret = f(*args, **kwargs) File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1253, in migrateToURI2 if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', dom=self) libvirtError: Cannot recv data: Input/output error
Chain INPUT (policy ACCEPT)
target     prot opt source         destination
ACCEPT     all  --  192.168.3.3    0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0      0.0.0.0/0     state RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0      0.0.0.0/0
ACCEPT     tcp  --  0.0.0.0/0      0.0.0.0/0     tcp dpt:54321
...

----- Original Message -----
From: "Dan Kenigsberg" <danken@redhat.com> To: "Gianluca Cecchi" <gianluca.cecchi@gmail.com> Cc: "users" <users@ovirt.org>, "Michal Skrivanek" <mskrivan@redhat.com> Sent: Tuesday, October 8, 2013 11:40:25 AM Subject: Re: [Users] Migration issues with ovirt 3.3
On Tue, Oct 08, 2013 at 01:44:47AM +0200, Gianluca Cecchi wrote:
On Mon, Oct 7, 2013 at 2:59 AM, Dan Kenigsberg wrote:
Would you please test if http://gerrit.ovirt.org/19906 solves the issue? (I haven't. Too late at night.)
Regards, Dan.
I can confirm that it resolves https://bugzilla.redhat.com/show_bug.cgi?id=1007980
so now I'm able to start VM without having to select run once and attaching a cd iso (note that is only valid for newly created VMs though)
Yes. Old VMs are trashed with the bogus address reported by the buggy Vdsm. Can someone from engine supply a script to clear all device addresses from the VM database table?
you can use this line (assuming engine as db and user), make sure only 'bad' vms return:
psql -U engine -d engine -c "select distinct vm_name from vm_static, vm_device where vm_guid=vm_id and device='cdrom' and address ilike '%pci%';"
if so, you can run this to clear the address field for them, so they could run again:
psql -U engine -d engine -c "update vm_device set address='' where device='cdrom' and address ilike '%pci%';"
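For safety, the two can be combined so the UPDATE only runs after the SELECT output has been reviewed. This is an untested sketch, assuming the same local psql access as above:

  psql -U engine -d engine -c "select distinct vm_name from vm_static, vm_device where vm_guid=vm_id and device='cdrom' and address ilike '%pci%';"
  read -p "Clear the address field for these VMs? [y/N] " ans
  [ "$ans" = "y" ] && psql -U engine -d engine -c "update vm_device set address='' where device='cdrom' and address ilike '%pci%';"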
But migration still fails
It seems like an unrelated failure. I do not know what's blocking migration traffic. Could you see if libvirtd.log and qemu logs at source and destination have clues?
Thread-4644::DEBUG::2013-10-08 01:20:53,933::vm::360::vm.Vm::(_startUnderlyingMigration) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::starting migration to qemu+tls://10.4.4.58/system with miguri tcp://10.4.4.58
Thread-4645::DEBUG::2013-10-08 01:20:53,934::vm::718::vm.Vm::(run) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::migration downtime thread started Thread-4646::DEBUG::2013-10-08 01:20:53,935::vm::756::vm.Vm::(run) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::starting migration monitor thread Thread-4648::DEBUG::2013-10-08 01:20:54,321::BindingXMLRPC::979::vds::(wrapper) client [10.4.4.60]::call volumesList with () {} Thread-4648::DEBUG::2013-10-08 01:20:54,349::BindingXMLRPC::986::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {'gviso': {'transportType': ['TCP'], 'uuid': 'c8cbcac7-1d40-4cee-837d-bb97467fb2bd', 'bricks': ['f18ovn01.mydomain:/gluster/ISO_GLUSTER/brick1', 'f18ovn03.mydomain:/gluster/ISO_GLUSTER/brick1'], 'volumeName': 'gviso', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'options': {'storage.owner-gid': '36', 'storage.owner-uid': '36', 'server.allow-insecure': 'on'}}, 'gvdata': {'transportType': ['TCP'], 'uuid': 'ed71a4c2-6205-4aad-9aab-85da086d5ba3', 'bricks': ['f18ovn01.mydomain:/gluster/DATA_GLUSTER/brick1', 'f18ovn03.mydomain:/gluster/DATA_GLUSTER/brick1'], 'volumeName': 'gvdata', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'options': {'server.allow-insecure': 'on', 'storage.owner-uid': '36', 'storage.owner-gid': '36'}}}} Thread-4644::ERROR::2013-10-08 01:20:54,873::libvirtconnection::94::libvirtconnection::(wrapper) connection to libvirt broken. ecode: 38 edom: 7 Thread-4644::ERROR::2013-10-08 01:20:54,873::libvirtconnection::96::libvirtconnection::(wrapper) taking calling process down. MainThread::DEBUG::2013-10-08 01:20:54,874::vdsm::45::vds::(sigtermHandler) Received signal 15 Thread-4644::DEBUG::2013-10-08 01:20:54,874::vm::733::vm.Vm::(cancel) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::canceling migration downtime thread Thread-4644::DEBUG::2013-10-08 01:20:54,875::vm::803::vm.Vm::(stop) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::stopping migration monitor thread Thread-4645::DEBUG::2013-10-08 01:20:54,875::vm::730::vm.Vm::(run) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::migration downtime thread exiting Thread-4644::ERROR::2013-10-08 01:20:54,875::vm::244::vm.Vm::(_recover) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::Cannot recv data: Input/output error Thread-4644::ERROR::2013-10-08 01:20:55,008::vm::324::vm.Vm::(run) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::Failed to migrate Traceback (most recent call last): File "/usr/share/vdsm/vm.py", line 311, in run self._startUnderlyingMigration() File "/usr/share/vdsm/vm.py", line 388, in _startUnderlyingMigration None, maxBandwidth) File "/usr/share/vdsm/vm.py", line 826, in f ret = attr(*args, **kwargs) File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py", line 76, in wrapper ret = f(*args, **kwargs) File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1253, in migrateToURI2 if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', dom=self) libvirtError: Cannot recv data: Input/output error
Chain INPUT (policy ACCEPT)
target     prot opt source         destination
ACCEPT     all  --  192.168.3.3    0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0      0.0.0.0/0     state RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0      0.0.0.0/0
ACCEPT     tcp  --  0.0.0.0/0      0.0.0.0/0     tcp dpt:54321
...

On Tue, Oct 8, 2013 at 12:27 PM, Omer Frenkel wrote:
so now I'm able to start VM without having to select run once and attaching a cd iso (note that is only valid for newly created VMs though)
Yes. Old VMs are trashed with the bogus address reported by the buggy Vdsm. Can someone from engine supply a script to clear all device addresses from the VM database table?
you can use this line (assuming engine as db and user), make sure only 'bad' vms return: psql -U engine -d engine -c "select distinct vm_name from vm_static, vm_device where vm_guid=vm_id and device='cdrom' and address ilike '%pci%';"
if so, you can run this to clear the address field for them, so they could run again: psql -U engine -d engine -c "update vm_device set address='' where device='cdrom' and address ilike '%pci%';"
I wanted to test this but for some reason it seems it actually solved itself. I first ran the query and already had no value:

engine=# select distinct vm_name from vm_static, vm_device where vm_guid=vm_id and device='cdrom' and address ilike '%pci%';
 vm_name
---------
(0 rows)

(overall my VMs are
engine=# select distinct vm_name from vm_static;
  vm_name
-----------
 Blank
 c6s
 c8again32
(3 rows)
)

Then I tried to start the powered-off VM that gave problems and it started ok. Tried to run shutdown inside it and power on again, and it worked too.
As soon as I had updated the bugzilla with my comment, I restarted vdsmd on both nodes but I wasn't able to run it... strange... or something else resolved it by itself.
Things done in sequence related to that VM were:
- modify vm.py on both nodes and restart vdsmd while the VM was powered off
- verify that power on gave the error
- verify that run once worked as before
- shutdown and power off the VM
- after two days (no activity on my side at all) I wanted to try the DB workaround to solve this VM status, but apparently the column was already empty and the VM was able to start normally....
Gianluca

On Wed, Oct 09, 2013 at 02:27:16PM +0200, Gianluca Cecchi wrote:
On Tue, Oct 8, 2013 at 12:27 PM, Omer Frenkel wrote:
so now I'm able to start VM without having to select run once and attaching a cd iso (note that is only valid for newly created VMs though)
Yes. Old VMs are trashed with the bogus address reported by the buggy Vdsm. Can someone from engine supply a script to clear all device addresses from the VM database table?
you can use this line (assuming engine as db and user), make sure only 'bad' vms return: psql -U engine -d engine -c "select distinct vm_name from vm_static, vm_device where vm_guid=vm_id and device='cdrom' and address ilike '%pci%';"
if so, you can run this to clear the address field for them, so they could run again: psql -U engine -d engine -c "update vm_device set address='' where device='cdrom' and address ilike '%pci%';"
I wanted to test this but for some reason it seems it actually solved itself. I first ran the query and already had no value:
engine=# select distinct vm_name from vm_static, vm_device where vm_guid=vm_id and device='cdrom' and address ilike '%pci%';
 vm_name
---------
(0 rows)
(overall my VMs are
engine=# select distinct vm_name from vm_static;
  vm_name
-----------
 Blank
 c6s
 c8again32
(3 rows)
Which of these 3 is the one that was started up with an empty cdrom on a vanilla ovirt-3.3.0 vdsm? The script is expected to show only those.

On 09 Oct 2013 at 15:35, "Dan Kenigsberg" <danken@redhat.com> wrote:
On Wed, Oct 09, 2013 at 02:27:16PM +0200, Gianluca Cecchi wrote:
On Tue, Oct 8, 2013 at 12:27 PM, Omer Frenkel wrote:
so now I'm able to start VM without having to select run once and attaching a cd iso (note that is only valid for newly created VMs though)
Yes. Old VMs are trashed with the bogus address reported by the buggy Vdsm. Can someone from engine supply a script to clear all device addresses from the VM database table?
you can use this line (assuming engine as db and user), make sure
only 'bad' vms return:
psql -U engine -d engine -c "select distinct vm_name from vm_static, vm_device where vm_guid=vm_id and device='cdrom' and address ilike '%pci%';"
if so, you can run this to clear the address field for them, so they could run again: psql -U engine -d engine -c "update vm_device set address='' where device='cdrom' and address ilike '%pci%';"
I wanted to test this but for some reason it seems it actually solved itself. I first ran the query and already had no value:
engine=# select distinct vm_name from vm_static, vm_device where vm_guid=vm_id and device='cdrom' and address ilike '%pci%';
 vm_name
---------
(0 rows)
(overall my VMs are
engine=# select distinct vm_name from vm_static;
  vm_name
-----------
 Blank
 c6s
 c8again32
(3 rows)
Which of these 3 is the one that was started up with an empty cdrom on a vanilla ovirt-3.3.0 vdsm? The script is expected to show only those.
It is c6s

On 09 Oct 2013 at 15:39, "Gianluca Cecchi" <gianluca.cecchi@gmail.com> wrote:
On 09 Oct 2013 at 15:35, "Dan Kenigsberg" <danken@redhat.com> wrote:
On Wed, Oct 09, 2013 at 02:27:16PM +0200, Gianluca Cecchi wrote:
On Tue, Oct 8, 2013 at 12:27 PM, Omer Frenkel wrote:
so now I'm able to start VM without having to select run once and attaching a cd iso (note that is only valid for newly created VMs though)
Yes. Old VMs are trashed with the bogus address reported by the buggy Vdsm. Can someone from engine supply a script to clear all device addresses from the VM database table?
you can use this line (assuming engine as db and user), make sure only 'bad' vms return: psql -U engine -d engine -c "select distinct vm_name from vm_static, vm_device where vm_guid=vm_id and device='cdrom' and address ilike '%pci%';"
if so, you can run this to clear the address field for them, so they could run again:
psql -U engine -d engine -c "update vm_device set address='' where device='cdrom' and address ilike '%pci%';"
I wanted to test this but for some reason it seems it actually solved itself. I first ran the query and already had no value:
engine=# select distinct vm_name from vm_static, vm_device where vm_guid=vm_id and device='cdrom' and address ilike '%pci%';
 vm_name
---------
(0 rows)
(overall my VMs are
engine=# select distinct vm_name from vm_static;
  vm_name
-----------
 Blank
 c6s
 c8again32
(3 rows)
Which of these 3 is the one that was started up with an empty cdrom on a vanilla ovirt-3.3.0 vdsm? The script is expected to show only those.
It is c6s
Note that the query suggested by Omer had 0 rows. My further query with 3 results was to show you all rows from vm_static at the moment

On Wed, Oct 09, 2013 at 03:41:58PM +0200, Gianluca Cecchi wrote:
On 09 Oct 2013 at 15:39, "Gianluca Cecchi" <gianluca.cecchi@gmail.com> wrote:
On 09 Oct 2013 at 15:35, "Dan Kenigsberg" <danken@redhat.com> wrote:
On Wed, Oct 09, 2013 at 02:27:16PM +0200, Gianluca Cecchi wrote:
On Tue, Oct 8, 2013 at 12:27 PM, Omer Frenkel wrote:
so now I'm able to start VM without having to select run once and attaching a cd iso (note that is only valid for newly created VMs though)
Yes. Old VMs are trashed with the bogus address reported by the buggy Vdsm. Can someone from engine supply a script to clear all device addresses from the VM database table?
you can use this line (assuming engine as db and user), make sure only 'bad' vms return: psql -U engine -d engine -c "select distinct vm_name from vm_static, vm_device where vm_guid=vm_id and device='cdrom' and address ilike '%pci%';"
if so, you can run this to clear the address field for them, so they could run again:
psql -U engine -d engine -c "update vm_device set address='' where device='cdrom' and address ilike '%pci%';"
I wanted to test this but for some reason it seems it actually solved itself. I first ran the query and already had no value:
engine=# select distinct vm_name from vm_static, vm_device where vm_guid=vm_id and device='cdrom' and address ilike '%pci%';
 vm_name
---------
(0 rows)
(overall my VMs are
engine=# select distinct vm_name from vm_static;
  vm_name
-----------
 Blank
 c6s
 c8again32
(3 rows)
Which of these 3 is the one that was started up with an empty cdrom on a vanilla ovirt-3.3.0 vdsm? The script is expected to show only those.
It is c6s
Note that the query suggested by Omer had 0 rows. My further query with 3 results was to show you all rows from vm_static at the moment
Yeah, understood. I have no idea how your c6s cleaned itself up.

On Tue, Oct 8, 2013 at 10:40 AM, Dan Kenigsberg wrote:
But migration still fails
It seems like an unrelated failure. I do not know what's blocking migration traffic. Could you see if libvirtd.log and qemu logs at source and destination have clues?
It seems that on VM.log under qemu of desdt host I have: ... -incoming tcp:[::]:49153: Failed to bind socket: Address already in use See all: - In libvirtd.log of source host 2013-10-07 23:20:54.471+0000: 1209: debug : qemuMonitorOpenInternal:751 : QEMU_MONITOR_NEW: mon=0x7fc66412e820 refs=2 fd=30 2013-10-07 23:20:54.472+0000: 1209: warning : qemuDomainObjEnterMonitorInternal:1136 : This thread seems to be the async job owner; entering monitor without asking for a nested job is dangerous 2013-10-07 23:20:54.472+0000: 1209: debug : qemuMonitorSetCapabilities:1145 : mon=0x7fc66412e820 2013-10-07 23:20:54.472+0000: 1209: debug : qemuMonitorSend:887 : QEMU_MONITOR_SEND_MSG: mon=0x7fc66412e820 msg={"execute":"qmp_capabilities","id":"libvirt-1"} fd=-1 2013-10-07 23:20:54.769+0000: 1199: error : qemuMonitorIORead:505 : Unable to read from monitor: Connection reset by peer 2013-10-07 23:20:54.769+0000: 1199: debug : qemuMonitorIO:638 : Error on monitor Unable to read from monitor: Connection reset by peer 2013-10-07 23:20:54.769+0000: 1199: debug : qemuMonitorIO:672 : Triggering error callback 2013-10-07 23:20:54.769+0000: 1199: debug : qemuProcessHandleMonitorError:351 : Received error on 0x7fc664124fb0 'c8again32' 2013-10-07 23:20:54.769+0000: 1209: debug : qemuMonitorSend:899 : Send command resulted in error Unable to read from monitor: Connection reset by peer 2013-10-07 23:20:54.770+0000: 1199: debug : qemuMonitorIO:638 : Error on monitor Unable to read from monitor: Connection reset by peer 2013-10-07 23:20:54.770+0000: 1209: debug : virFileMakePathHelper:1283 : path=/var/run/libvirt/qemu mode=0777 2013-10-07 23:20:54.770+0000: 1199: debug : qemuMonitorIO:661 : Triggering EOF callback 2013-10-07 23:20:54.770+0000: 1199: debug : qemuProcessHandleMonitorEOF:294 : Received EOF on 0x7fc664124fb0 'c8again32' 2013-10-07 23:20:54.770+0000: 1209: debug : qemuProcessStop:3992 : Shutting down VM 'c8again32' pid=18053 flags=0 2013-10-07 23:20:54.771+0000: 1209: error : virNWFilterDHCPSnoopEnd:2135 : internal error ifname "vnet0" not in key map 2013-10-07 23:20:54.782+0000: 1209: debug : virCommandRunAsync:2251 : About to run /bin/sh -c 'IPT="/usr/sbin/iptables" $IPT -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0 $IPT -D libvirt-out -m physdev --physdev-out vnet0 -g FO-vnet0 $IPT -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0 $IPT -D libvirt-host-in -m physdev --physdev-in vnet0 -g HI-vnet0 $IPT -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT $IPT -F FO-vnet0 $IPT -X FO-vnet0 $IPT -F FI-vnet0 $IPT -X FI-vnet0 $IPT -F HI-vnet0 $IPT -X HI-vnet0 IPT="/usr/sbin/ip6tables" $IPT -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0 $IPT -D libvirt-out -m physdev --physdev-out vnet0 -g FO-vnet0 $IPT -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0 $IPT -D libvirt-host-in -m physdev --physdev-in vnet0 -g HI-vnet0 $IPT -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT $IPT -F FO-vnet0 $IPT -X FO-vnet0 $IPT -F FI-vnet0 $IPT -X FI-vnet0 $IPT -F HI-vnet0 $IPT -X HI-vnet0 EBT="/usr/sbin/ebtables" $EBT -t nat -D PREROUTING -i vnet0 -j libvirt-I-vnet0 $EBT -t nat -D POSTROUTING -o vnet0 -j libvirt-O-vnet0 EBT="/usr/sbin/ebtables" collect_chains() { for tmp2 in $*; do for tmp in $($EBT -t nat -L $tmp2 | \ sed -n "/Bridge chain/,\$ s/.*-j \\([IO]-.*\\)/\\1/p"); do echo $tmp collect_chains $tmp done done } rm_chains() { for tmp in $*; do $EBT -t nat -F $tmp; done for tmp in $*; do $EBT -t nat -X $tmp; done } tmp='\'' '\'' 
IFS='\'' '\'''\'' '\''$tmp chains="$(collect_chains libvirt-I-vnet0 libvirt-O-vnet0)" $EBT -t nat -F libvirt-I-vnet0 $EBT -t nat -F libvirt-O-vnet0 rm_chains $chains $EBT -t nat -F libvirt-I-vnet0 $EBT -t nat -X libvirt-I-vnet0 $EBT -t nat -F libvirt-O-vnet0 $EBT -t nat -X libvirt-O-vnet0 ' 2013-10-07 23:20:54.784+0000: 1209: debug : virCommandRunAsync:2256 : Command result 0, with PID 18076 2013-10-07 23:20:54.863+0000: 1209: debug : virCommandRun:2125 : Result exit status 255, stdout: '' stderr: 'iptables v1.4.18: goto 'FO-vnet0' is not a chain Try `iptables -h' or 'iptables --help' for more information. iptables v1.4.18: goto 'FO-vnet0' is not a chain Try `iptables -h' or 'iptables --help' for more information. iptables v1.4.18: goto 'FI-vnet0' is not a chain Try `iptables -h' or 'iptables --help' for more information. iptables v1.4.18: goto 'HI-vnet0' is not a chain Try `iptables -h' or 'iptables --help' for more information. iptables: Bad rule (does a matching rule exist in that chain?). iptables: No chain/target/match by that name. iptables: No chain/target/match by that name. iptables: No chain/target/match by that name. iptables: No chain/target/match by that name. iptables: No chain/target/match by that name. iptables: No chain/target/match by that name. ip6tables v1.4.18: goto 'FO-vnet0' is not a chain Try `ip6tables -h' or 'ip6tables --help' for more information. ip6tables v1.4.18: goto 'FO-vnet0' is not a chain Try `ip6tables -h' or 'ip6tables --help' for more information. ip6tables v1.4.18: goto 'FI-vnet0' is not a chain Try `ip6tables -h' or 'ip6tables --help' for more information. ip6tables v1.4.18: goto 'HI-vnet0' is not a chain Try `ip6tables -h' or 'ip6tables --help' for more information. ip6tables: Bad rule (does a matching rule exist in that chain?). ip6tables: No chain/target/match by that name. ip6tables: No chain/target/match by that name. ip6tables: No chain/target/match by that name. ip6tables: No chain/target/match by that name. ip6tables: No chain/target/match by that name. ip6tables: No chain/target/match by that name. Illegal target name 'libvirt-O-vnet0'. Chain 'libvirt-O-vnet0' doesn't exist. Chain 'libvirt-O-vnet0' doesn't exist. Chain 'libvirt-O-vnet0' doesn't exist. Chain 'libvirt-O-vnet0' doesn't exist. 
' 2013-10-07 23:20:54.863+0000: 1209: debug : qemuMonitorClose:821 : QEMU_MONITOR_CLOSE: mon=0x7fc66412e820 refs=2 2013-10-07 23:20:54.863+0000: 1209: debug : qemuProcessKill:3951 : vm=c8again32 pid=18053 flags=5 2013-10-07 23:20:54.863+0000: 1209: debug : virProcessKillPainfully:269 : vpid=18053 force=1 2013-10-07 23:20:54.863+0000: 1209: debug : qemuDomainCleanupRun:2132 : driver=0x7fc664024cd0, vm=c8again32 2013-10-07 23:20:54.863+0000: 1209: debug : qemuProcessAutoDestroyRemove:4504 : vm=c8again32 2013-10-07 23:20:54.863+0000: 1209: debug : virQEMUCloseCallbacksUnset:744 : vm=c8again32, uuid=d54660a2-45ed-41ae-ab99-a6f93ebbdbb1, cb=0x7fc66b6fe570 2013-10-07 23:20:54.864+0000: 1209: error : virPortAllocatorRelease:174 : port 0 must be in range (5900, 65535) 2013-10-07 23:20:54.865+0000: 1209: debug : qemuDomainObjEndJob:1070 : Stopping job: none (async=migration in) 2013-10-07 23:20:54.865+0000: 1209: debug : qemuDomainObjEndAsyncJob:1088 : Stopping async job: migration in 2013-10-07 23:20:54.865+0000: 1199: debug : qemuProcessHandleMonitorEOF:306 : Domain 0x7fc664124fb0 is not active, ignoring EOF Caught Segmentation violation dumping internal log buffer: ====== start of log ===== - In destitnation host libvirtd.log 2013-10-07 23:20:53.970+0000: 1206: debug : virQEMUCloseCallbacksGet:786 : cb=(nil) 2013-10-07 23:20:53.970+0000: 1206: debug : qemuMigrationEatCookie:1058 : cookielen=0 cookie='(null)' 2013-10-07 23:20:53.971+0000: 1206: debug : qemuMigrationBakeCookie:1035 : cookielen=250 cookie=<qemu-migration> <name>c8again32</name> <uuid>d54660a2-45ed-41ae-ab99-a6f93ebbdbb1</uuid> <hostname>f18ovn03.mydomain</hostname> <hostuuid>4045b69a-5dae-4218-ad2b-7911a2c368d9</hostuuid> <feature name='lockstate'/> </qemu-migration> 2013-10-07 23:20:53.975+0000: 1206: debug : qemuDomainDefFormatBuf:1353 : Removing default USB controller from domain 'c8again32' for migration compatibility 2013-10-07 23:20:53.975+0000: 1206: debug : qemuDomainDefFormatBuf:1374 : Removing default 'pci-root' from domain 'c8again32' for migration compatibility 2013-10-07 23:20:53.975+0000: 1206: debug : doPeer2PeerMigrate3:3321 : Prepare3 0x7f41940015d0 2013-10-07 23:20:54.870+0000: 1206: error : virNetSocketReadWire:1373 : Cannot recv data: Input/output error 2013-10-07 23:20:54.870+0000: 1206: debug : virConnectClose:1494 : conn=0x7f41940015d0 2013-10-07 23:20:54.871+0000: 1206: error : virNetClientSendInternal:1965 : internal error client socket is closed 2013-10-07 23:20:54.871+0000: 1206: debug : qemuDomainObjEndAsyncJob:1088 : Stopping async job: migration out 2013-10-07 23:20:54.872+0000: 1206: debug : virFileMakePathHelper:1283 : path=/var/run/libvirt/qemu mode=0777 2013-10-07 23:20:54.872+0000: 1206: debug : virDomainFree:2326 : dom=0x7f4194190c10, (VM: name=c8again32, uuid=d54660a2-45ed-41ae-ab99-a6f93ebbdbb1) 2013-10-07 23:20:54.876+0000: 1204: debug : virDomainGetJobInfo:17503 : dom=0x7f418c009530, (VM: name=c8again32, uuid=d54660a2-45ed-41ae-ab99-a6f93ebbdbb1), info=0x7f41b0fe4b10 2013-10-07 23:20:54.876+0000: 1204: debug : virDomainFree:2326 : dom=0x7f418c009530, (VM: name=c8again32, uuid=d54660a2-45ed-41ae-ab99-a6f93ebbdbb1) 2013-10-07 23:20:54.992+0000: 1202: debug : virDomainGetXMLDesc:4384 : dom=0x7f41a03a7840, (VM: name=c8again32, uuid=d54660a2-45ed-41ae-ab99-a6f93ebbdbb1), flags=0 2013-10-07 23:20:54.992+0000: 1202: debug : virDomainFree:2326 : dom=0x7f41a03a7840, (VM: name=c8again32, uuid=d54660a2-45ed-41ae-ab99-a6f93ebbdbb1) 2013-10-07 23:20:59.574+0000: 1192: error : virNetSocketReadWire:1381 : 
End of file while reading data: Input/output error 2013-10-07 23:20:59.575+0000: 1192: debug : virConnectClose:1494 : conn=0x7f41a012eba0 2013-10-07 23:20:59.576+0000: 1192: debug : virQEMUCloseCallbacksRun:882 : conn=0x7f41a012eba0 2013-10-07 23:20:59.591+0000: 1192: error : virNetSocketReadWire:1381 : End of file while reading data: Input/output error 2013-10-07 23:20:59.592+0000: 1192: debug : virConnectDomainEventDeregisterAny:17972 : conn=0x7f41a0231a70, callbackID=14 2013-10-07 23:20:59.592+0000: 1192: debug : virConnectDomainEventDeregisterAny:17972 : conn=0x7f41a0231a70, callbackID=15 2013-10-07 23:20:59.592+0000: 1192: debug : virConnectDomainEventDeregisterAny:17972 : conn=0x7f41a0231a70, callbackID=16 2013-10-07 23:20:59.592+0000: 1192: debug : virConnectDomainEventDeregisterAny:17972 : conn=0x7f41a0231a70, callbackID=20 2013-10-07 23:20:59.592+0000: 1192: debug : virConnectDomainEventDeregisterAny:17972 : conn=0x7f41a0231a70, callbackID=18 2013-10-07 23:20:59.592+0000: 1192: debug : virConnectDomainEventDeregisterAny:17972 : conn=0x7f41a0231a70, callbackID=17 2013-10-07 23:20:59.592+0000: 1192: debug : virConnectDomainEventDeregisterAny:17972 : conn=0x7f41a0231a70, callbackID=19 2013-10-07 23:20:59.592+0000: 1192: debug : virConnectClose:1494 : conn=0x7f41a0231a70 2013-10-07 23:20:59.593+0000: 1192: debug : virQEMUCloseCallbacksRun:882 : conn=0x7f41a0231a70 2013-10-07 23:21:00.587+0000: 1203: debug : virIdentitySetAttr:241 : ident=0x7f41980620d0 attribute=0 value=vdsm 2013-10-07 23:21:00.587+0000: 1203: debug : virIdentitySetAttr:241 : ident=0x7f41980620d0 attribute=1 value=kvm 2013-10-07 23:21:00.587+0000: 1203: debug : virIdentitySetAttr:241 : ident=0x7f41980620d0 attribute=2 value=12891 2013-10-07 23:21:00.587+0000: 1203: debug : virIdentitySetAttr:241 : ident=0x7f41980620d0 attribute=3 value=11193333 2013-10-07 23:21:00.587+0000: 1203: debug : virIdentitySetAttr:241 : ident=0x7f41980620d0 attribute=6 value=system_u:system_r:initrc_t:s0 2013-10-07 23:21:00.602+0000: 1207: debug : virConnectOpen:1376 : name=qemu:///system 2013-10-07 23:21:00.602+0000: 1207: debug : virConnectGetConfigFile:991 : Loading config file '/etc/libvirt/libvirt.conf' 2013-10-07 23:21:00.602+0000: 1207: debug : virConfReadFile:767 : filename=/etc/libvirt/libvirt.conf 2013-10-07 23:21:00.602+0000: 1207: debug : do_open:1165 : name "qemu:///system" to URI components: scheme qemu server (null) user (null) port 0 path /system 2013-10-07 23:21:00.602+0000: 1207: debug : do_open:1211 : trying driver 0 (Test) ... 2013-10-07 23:21:00.602+0000: 1207: debug : do_open:1217 : driver 0 Test returned DECLINED 2013-10-07 23:21:00.602+0000: 1207: debug : do_open:1211 : trying driver 1 (OPENVZ) ... 2013-10-07 23:21:00.602+0000: 1207: debug : do_open:1217 : driver 1 OPENVZ returned DECLINED 2013-10-07 23:21:00.602+0000: 1207: debug : do_open:1211 : trying driver 2 (VMWARE) ... 2013-10-07 23:21:00.602+0000: 1207: debug : do_open:1217 : driver 2 VMWARE returned DECLINED 2013-10-07 23:21:00.602+0000: 1207: debug : do_open:1211 : trying driver 3 (PHYP) ... 2013-10-07 23:21:00.602+0000: 1207: debug : do_open:1217 : driver 3 PHYP returned DECLINED 2013-10-07 23:21:00.602+0000: 1207: debug : do_open:1211 : trying driver 4 (VBOX) ... 2013-10-07 23:21:00.602+0000: 1207: debug : do_open:1217 : driver 4 VBOX returned DECLINED 2013-10-07 23:21:00.602+0000: 1207: debug : do_open:1211 : trying driver 5 (ESX) ... 
2013-10-07 23:21:00.602+0000: 1207: debug : do_open:1217 : driver 5 ESX returned DECLINED 2013-10-07 23:21:00.602+0000: 1207: debug : do_open:1211 : trying driver 6 (Hyper-V) ... 2013-10-07 23:21:00.602+0000: 1207: debug : do_open:1217 : driver 6 Hyper-V returned DECLINED 2013-10-07 23:21:00.602+0000: 1207: debug : do_open:1211 : trying driver 7 (remote) ... 2013-10-07 23:21:00.602+0000: 1207: debug : do_open:1217 : driver 7 remote returned DECLINED 2013-10-07 23:21:00.602+0000: 1207: debug : do_open:1211 : trying driver 8 (Xen) ... 2013-10-07 23:21:00.602+0000: 1207: debug : do_open:1217 : driver 8 Xen returned DECLINED 2013-10-07 23:21:00.602+0000: 1207: debug : do_open:1211 : trying driver 9 (xenlight) ... 2013-10-07 23:21:00.602+0000: 1207: debug : do_open:1217 : driver 9 xenlight returned DECLINED 2013-10-07 23:21:00.602+0000: 1207: debug : do_open:1211 : trying driver 10 (QEMU) ... 2013-10-07 23:21:00.602+0000: 1207: debug : do_open:1217 : driver 10 QEMU returned SUCCESS ... - In vm.log under qemu of source host 2013-10-07 23:17:56.249+0000: starting up LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=spice /usr/bin/qemu-kvm -name c8again32 -S -machine pc-1.0,accel=kvm,usb=off -cpu Opteron_G2 -m 1024 -smp 1,sockets=1,cores=1,threads=1 -uuid d54660a2-45ed-41ae-ab99-a6f93ebbdbb1 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=19-4,serial=34353439-3036-435A-4A38-303330393338,uuid=d54660a2-45ed-41ae-ab99-a6f93ebbdbb1 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/c8again32.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2013-10-07T23:16:01,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=gluster://f18ovn01.mydomain/gvdata/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/15f9ca1c-c435-4892-9eb7-0c84583b2a7d/a123801a-0a4d-4a47-a426-99d8480d2e49,if=none,id=drive-virtio-disk0,format=raw,serial=15f9ca1c-c435-4892-9eb7-0c84583b2a7d,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=28 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:8f:04:f7,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/d54660a2-45ed-41ae-ab99-a6f93ebbdbb1.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/d54660a2-45ed-41ae-ab99-a6f93ebbdbb1.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global qxl-vga.vram_size=67108864 -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 main_channel_link: add main channel client main_channel_handle_parsed: net test: latency 2.694000 ms, bitrate 521517697 bps (497.358033 Mbps) red_dispatcher_set_cursor_peer: inputs_connect: inputs channel client create main_channel_handle_parsed: agent start red_channel_client_disconnect: rcc=0x7f2c44243c40 (channel=0x7f2c4421f360 type=2 id=0) red_channel_client_disconnect: rcc=0x7f2cbad97110 (channel=0x7f2cbad13130 type=3 id=0) red_channel_client_disconnect: rcc=0x7f2cbad92d80 (channel=0x7f2cbad07f30 type=1 id=0) main_channel_client_on_disconnect: rcc=0x7f2cbad92d80 red_client_destroy: destroy client 0x7f2cbad8e1c0 with #channels=4 red_dispatcher_disconnect_cursor_peer: red_channel_client_disconnect: rcc=0x7f2c442a0f00 (channel=0x7f2c4421f920 type=4 id=0) red_dispatcher_disconnect_display_peer: 2013-10-07 23:53:29.384+0000: shutting down - In vm.log under qemu of dest host 2013-10-07 23:20:54.260+0000: starting up LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=spice /usr/bin/qemu-kvm -name c8again32 -S -machine pc-1.0,accel=kvm,usb=off -cpu Opteron_G2 -m 1024 -smp 1,sockets=1,cores=1,threads=1 -uuid d54660a2-45ed-41ae-ab99-a6f93ebbdbb1 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=19-4,serial=34353439-3036-435A-4A38-303330393338,uuid=d54660a2-45ed-41ae-ab99-a6f93ebbdbb1 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/c8again32.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2013-10-07T23:18:59,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=gluster://f18ovn01.mydomain/gvdata/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/15f9ca1c-c435-4892-9eb7-0c84583b2a7d/a123801a-0a4d-4a47-a426-99d8480d2e49,if=none,id=drive-virtio-disk0,format=raw,serial=15f9ca1c-c435-4892-9eb7-0c84583b2a7d,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=31 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:8f:04:f7,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/d54660a2-45ed-41ae-ab99-a6f93ebbdbb1.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/d54660a2-45ed-41ae-ab99-a6f93ebbdbb1.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global qxl-vga.vram_size=67108864 -incoming tcp:[::]:49153 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -incoming tcp:[::]:49153: Failed to 
bind socket: Address already in use 2013-10-07 23:20:54.771+0000: shutting down Thanks Gianluca

On Wed, Oct 09, 2013 at 02:42:22PM +0200, Gianluca Cecchi wrote:
On Tue, Oct 8, 2013 at 10:40 AM, Dan Kenigsberg wrote:
But migration still fails
It seems like an unrelated failure. I do not know what's blocking migration traffic. Could you see if libvirtd.log and qemu logs at source and destination have clues?
It seems that in the VM log under qemu of the dest host I have: ... -incoming tcp:[::]:49153: Failed to bind socket: Address already in use
Is that port really taken (`ss -ntp` should tell by whom)?
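For example (the port number is the one from the error above), on the destination host something like:

  ss -ntp | grep ':49153'
  # listening sockets only:
  ss -ntlp | grep ':49153'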
See all: - In libvirtd.log of source host 2013-10-07 23:20:54.471+0000: 1209: debug : qemuMonitorOpenInternal:751 : QEMU_MONITOR_NEW: mon=0x7fc66412e820 refs=2 fd=30 2013-10-07 23:20:54.472+0000: 1209: warning : qemuDomainObjEnterMonitorInternal:1136 : This thread seems to be the async job owner; entering monitor without asking for a nested job is dangerous 2013-10-07 23:20:54.472+0000: 1209: debug : qemuMonitorSetCapabilities:1145 : mon=0x7fc66412e820 2013-10-07 23:20:54.472+0000: 1209: debug : qemuMonitorSend:887 : QEMU_MONITOR_SEND_MSG: mon=0x7fc66412e820 msg={"execute":"qmp_capabilities","id":"libvirt-1"} fd=-1 2013-10-07 23:20:54.769+0000: 1199: error : qemuMonitorIORead:505 : Unable to read from monitor: Connection reset by peer 2013-10-07 23:20:54.769+0000: 1199: debug : qemuMonitorIO:638 : Error on monitor Unable to read from monitor: Connection reset by peer 2013-10-07 23:20:54.769+0000: 1199: debug : qemuMonitorIO:672 : Triggering error callback 2013-10-07 23:20:54.769+0000: 1199: debug : qemuProcessHandleMonitorError:351 : Received error on 0x7fc664124fb0 'c8again32' 2013-10-07 23:20:54.769+0000: 1209: debug : qemuMonitorSend:899 : Send command resulted in error Unable to read from monitor: Connection reset by peer 2013-10-07 23:20:54.770+0000: 1199: debug : qemuMonitorIO:638 : Error on monitor Unable to read from monitor: Connection reset by peer 2013-10-07 23:20:54.770+0000: 1209: debug : virFileMakePathHelper:1283 : path=/var/run/libvirt/qemu mode=0777 2013-10-07 23:20:54.770+0000: 1199: debug : qemuMonitorIO:661 : Triggering EOF callback 2013-10-07 23:20:54.770+0000: 1199: debug : qemuProcessHandleMonitorEOF:294 : Received EOF on 0x7fc664124fb0 'c8again32' 2013-10-07 23:20:54.770+0000: 1209: debug : qemuProcessStop:3992 : Shutting down VM 'c8again32' pid=18053 flags=0 2013-10-07 23:20:54.771+0000: 1209: error : virNWFilterDHCPSnoopEnd:2135 : internal error ifname "vnet0" not in key map 2013-10-07 23:20:54.782+0000: 1209: debug : virCommandRunAsync:2251 : About to run /bin/sh -c 'IPT="/usr/sbin/iptables" $IPT -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0 $IPT -D libvirt-out -m physdev --physdev-out vnet0 -g FO-vnet0 $IPT -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0 $IPT -D libvirt-host-in -m physdev --physdev-in vnet0 -g HI-vnet0 $IPT -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT $IPT -F FO-vnet0 $IPT -X FO-vnet0 $IPT -F FI-vnet0 $IPT -X FI-vnet0 $IPT -F HI-vnet0 $IPT -X HI-vnet0 IPT="/usr/sbin/ip6tables" $IPT -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0 $IPT -D libvirt-out -m physdev --physdev-out vnet0 -g FO-vnet0 $IPT -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0 $IPT -D libvirt-host-in -m physdev --physdev-in vnet0 -g HI-vnet0 $IPT -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT $IPT -F FO-vnet0 $IPT -X FO-vnet0 $IPT -F FI-vnet0 $IPT -X FI-vnet0 $IPT -F HI-vnet0 $IPT -X HI-vnet0 EBT="/usr/sbin/ebtables" $EBT -t nat -D PREROUTING -i vnet0 -j libvirt-I-vnet0 $EBT -t nat -D POSTROUTING -o vnet0 -j libvirt-O-vnet0 EBT="/usr/sbin/ebtables" collect_chains() { for tmp2 in $*; do for tmp in $($EBT -t nat -L $tmp2 | \ sed -n "/Bridge chain/,\$ s/.*-j \\([IO]-.*\\)/\\1/p"); do echo $tmp collect_chains $tmp done done } rm_chains() { for tmp in $*; do $EBT -t nat -F $tmp; done for tmp in $*; do $EBT -t nat -X $tmp; done } tmp='\'' '\'' IFS='\'' '\'''\'' '\''$tmp chains="$(collect_chains libvirt-I-vnet0 libvirt-O-vnet0)" $EBT -t nat -F libvirt-I-vnet0 $EBT -t nat -F 
libvirt-O-vnet0 rm_chains $chains $EBT -t nat -F libvirt-I-vnet0 $EBT -t nat -X libvirt-I-vnet0 $EBT -t nat -F libvirt-O-vnet0 $EBT -t nat -X libvirt-O-vnet0 ' 2013-10-07 23:20:54.784+0000: 1209: debug : virCommandRunAsync:2256 : Command result 0, with PID 18076 2013-10-07 23:20:54.863+0000: 1209: debug : virCommandRun:2125 : Result exit status 255, stdout: '' stderr: 'iptables v1.4.18: goto 'FO-vnet0' is not a chain
Try `iptables -h' or 'iptables --help' for more information. iptables v1.4.18: goto 'FO-vnet0' is not a chain
Try `iptables -h' or 'iptables --help' for more information. iptables v1.4.18: goto 'FI-vnet0' is not a chain Try `iptables -h' or 'iptables --help' for more information. iptables v1.4.18: goto 'HI-vnet0' is not a chain
Try `iptables -h' or 'iptables --help' for more information. iptables: Bad rule (does a matching rule exist in that chain?). iptables: No chain/target/match by that name. iptables: No chain/target/match by that name. iptables: No chain/target/match by that name. iptables: No chain/target/match by that name. iptables: No chain/target/match by that name. iptables: No chain/target/match by that name. ip6tables v1.4.18: goto 'FO-vnet0' is not a chain
Try `ip6tables -h' or 'ip6tables --help' for more information. ip6tables v1.4.18: goto 'FO-vnet0' is not a chain
Try `ip6tables -h' or 'ip6tables --help' for more information. ip6tables v1.4.18: goto 'FI-vnet0' is not a chain
Try `ip6tables -h' or 'ip6tables --help' for more information. ip6tables v1.4.18: goto 'HI-vnet0' is not a chain
Try `ip6tables -h' or 'ip6tables --help' for more information. ip6tables: Bad rule (does a matching rule exist in that chain?). ip6tables: No chain/target/match by that name. ip6tables: No chain/target/match by that name. ip6tables: No chain/target/match by that name. ip6tables: No chain/target/match by that name. ip6tables: No chain/target/match by that name. ip6tables: No chain/target/match by that name. Illegal target name 'libvirt-O-vnet0'. Chain 'libvirt-O-vnet0' doesn't exist. Chain 'libvirt-O-vnet0' doesn't exist. Chain 'libvirt-O-vnet0' doesn't exist. Chain 'libvirt-O-vnet0' doesn't exist. ' 2013-10-07 23:20:54.863+0000: 1209: debug : qemuMonitorClose:821 : QEMU_MONITOR_CLOSE: mon=0x7fc66412e820 refs=2 2013-10-07 23:20:54.863+0000: 1209: debug : qemuProcessKill:3951 : vm=c8again32 pid=18053 flags=5 2013-10-07 23:20:54.863+0000: 1209: debug : virProcessKillPainfully:269 : vpid=18053 force=1 2013-10-07 23:20:54.863+0000: 1209: debug : qemuDomainCleanupRun:2132 : driver=0x7fc664024cd0, vm=c8again32 2013-10-07 23:20:54.863+0000: 1209: debug : qemuProcessAutoDestroyRemove:4504 : vm=c8again32 2013-10-07 23:20:54.863+0000: 1209: debug : virQEMUCloseCallbacksUnset:744 : vm=c8again32, uuid=d54660a2-45ed-41ae-ab99-a6f93ebbdbb1, cb=0x7fc66b6fe570 2013-10-07 23:20:54.864+0000: 1209: error : virPortAllocatorRelease:174 : port 0 must be in range (5900, 65535) 2013-10-07 23:20:54.865+0000: 1209: debug : qemuDomainObjEndJob:1070 : Stopping job: none (async=migration in) 2013-10-07 23:20:54.865+0000: 1209: debug : qemuDomainObjEndAsyncJob:1088 : Stopping async job: migration in 2013-10-07 23:20:54.865+0000: 1199: debug : qemuProcessHandleMonitorEOF:306 : Domain 0x7fc664124fb0 is not active, ignoring EOF Caught Segmentation violation dumping internal log buffer:
This last line seems ominous. Can libvir-list help with it?

On Wed, Oct 9, 2013 at 3:43 PM, Dan Kenigsberg wrote:
On Wed, Oct 09, 2013 at 02:42:22PM +0200, Gianluca Cecchi wrote:
On Tue, Oct 8, 2013 at 10:40 AM, Dan Kenigsberg wrote:
But migration still fails
It seems like an unrelated failure. I do not know what's blocking migration traffic. Could you see if libvirtd.log and qemu logs at source and destination have clues?
It seems that in the VM log under qemu of the dest host I have: ... -incoming tcp:[::]:49153: Failed to bind socket: Address already in use
Is that port really taken (`ss -ntp` should tell by whom)?
yeah ! It seems gluster uses it on both sides.

On destination:
[root@f18ovn01 qemu]# ss -ntp |egrep "State|49153"
State   Recv-Q Send-Q Local Address:Port   Peer Address:Port
ESTAB   0      0      192.168.3.1:975      192.168.3.1:49153   users:(("glusterfs",31166,7))
ESTAB   0      0      192.168.3.1:49153    192.168.3.3:972     users:(("glusterfsd",18615,14))
ESTAB   0      0      192.168.3.1:49153    192.168.3.1:965     users:(("glusterfsd",18615,13))
ESTAB   0      0      192.168.3.1:963      192.168.3.3:49153   users:(("glusterfs",31152,17))
ESTAB   0      0      192.168.3.1:49153    192.168.3.1:975     users:(("glusterfsd",18615,9))
ESTAB   0      0      192.168.3.1:49153    192.168.3.3:966     users:(("glusterfsd",18615,15))
ESTAB   0      0      192.168.3.1:965      192.168.3.1:49153   users:(("glusterfs",31152,7))
ESTAB   0      0      192.168.3.1:960      192.168.3.3:49153   users:(("glusterfs",31166,11))
...

[root@f18ovn01 qemu]# ps -ef|grep 31166
root     14950 10958  0 16:50 pts/0    00:00:00 grep --color=auto 31166
root     31166     1  0 Oct07 ?        00:00:04 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/626066f6d74e376808c27ad679a1e85c.socket --xlator-option *replicate*.node-uuid=ebaf2f1a-65a8-409a-b911-6e631a5f182f

[root@f18ovn01 qemu]# lsof -Pp 31166|grep 49153
glusterfs 31166 root    7u  IPv4 4703891  0t0  TCP f18ovn01.mydomain:975->f18ovn01.mydomain:49153 (ESTABLISHED)
glusterfs 31166 root   11u  IPv4 4780563  0t0  TCP f18ovn01.mydomain:960->f18ovn03.mydomain:49153 (ESTABLISHED)

not so good indeed ;-)
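For reference, the brick ports can also be read directly from the gluster CLI (volume names taken from earlier in this thread; output layout varies by gluster version):

  gluster volume status gvdata
  gluster volume status gviso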

On Wed, Oct 09, 2013 at 04:52:20PM +0200, Gianluca Cecchi wrote:
On Wed, Oct 9, 2013 at 3:43 PM, Dan Kenigsberg wrote:
On Wed, Oct 09, 2013 at 02:42:22PM +0200, Gianluca Cecchi wrote:
On Tue, Oct 8, 2013 at 10:40 AM, Dan Kenigsberg wrote:
But migration still fails
It seems like an unrelated failure. I do not know what's blocking migration traffic. Could you see if libvirtd.log and qemu logs at source and destination have clues?
It seems that in the VM log under qemu of the dest host I have: ... -incoming tcp:[::]:49153: Failed to bind socket: Address already in use
Is that port really taken (`ss -ntp` should tell by whom)?
yeah ! It seems gluster uses it on both sides
Since libvirt has been using this port range first, would you open a bug on gluster to avoid it? Dan. (prays that Vdsm is not expected to edit libvirt's migration port range on installation)

On Wednesday, October 9, 2013 at 16:18 +0100, Dan Kenigsberg wrote:
Since libvirt has been using this port range first, would you open a bug on gluster to avoid it?
Already done: https://bugzilla.redhat.com/show_bug.cgi?id=987555 (not using ovirt, but the problem is easy to trigger as soon as you use GlusterFS and libvirt on the same boxes) Regards, Daniel
Dan. (prays that Vdsm is not expected to edit libvirt's migration port range on installation)
--
Daniel Berteaud
FIREWALL-SERVICES SARL.
Société de Services en Logiciels Libres
Technopôle Montesquieu
33650 MARTILLAC
Tel : 05 56 64 15 32
Fax : 05 56 64 15 32
Web : http://www.firewall-services.com

On Wed, Oct 9, 2013 at 5:52 PM, Daniel Berteaud wrote:
On Wednesday, October 9, 2013 at 16:18 +0100, Dan Kenigsberg wrote:
Since libvirt has been using this port range first, would you open a bug on gluster to avoid it?
Already done: https://bugzilla.redhat.com/show_bug.cgi?id=987555
(not using ovirt, but the problem is easy to trigger as soon as you use GlusterFS and libvirt on the same boxes)
Regards, Daniel
Dan. (prays that Vdsm is not expected to edit libvirt's migration port range on installation)
I added my point to the bugzilla, and everyone on oVirt should do so in my opinion... very frustrating.. ;-(

And it seems not so easy to change the gluster range. I read in past threads that for previous versions the range 24009+ was written inside the code. In 3.4 there should be an option such as --volfile-server-port=PORT from a client point of view... but I tried some things and wasn't able to get anywhere significant: ports 49152 and 49153 are always used for my two bricks.

Furthermore, as a side note, I wanted to retry the migration several times, since each attempt chooses the next port. I have two bricks, so the first two attempts fail; in VM.log I get
... -incoming tcp:[::]:49152: Failed to bind socket: Address already in use
... -incoming tcp:[::]:49153: Failed to bind socket: Address already in use
Unfortunately (tested three times with the same result), after the second attempt the dest node goes into a loop of failing, recovering from crash, non-responsive. I can easily correct the situation with systemctl restart vdsmd on it, but possibly I have run into some sort of bug there. And if I then try to migrate again it restarts from 49152... so I cannot test it further.
See here a file with what was written in vdsmd.log, with the two attempts, the failure, and the loop of type
Thread-61::DEBUG::2013-10-10 01:15:32,161::BindingXMLRPC::986::vds::(wrapper) return getCapabilities with {'status': {'message': 'Recovering from crash or Initializing', 'code': 99}}
https://docs.google.com/file/d/0BwoPbcrMv8mvamR3RmpzLU11OFE/edit?usp=sharing
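One thing that might be worth checking (an untested assumption on my side): newer glusterd builds read a base-port option from /etc/glusterfs/glusterd.vol, and if the 3.4 packages honour it, something like this would move newly started bricks out of libvirt's 49152+ range:

  grep -n port /etc/glusterfs/glusterd.vol
  # if supported, add the following line, then restart glusterd and the bricks:
  #   option base-port 50152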

On Wed, Oct 09, 2013 at 16:18:25 +0100, Dan Kenigsberg wrote:
On Wed, Oct 09, 2013 at 04:52:20PM +0200, Gianluca Cecchi wrote:
On Wed, Oct 9, 2013 at 3:43 PM, Dan Kenigsberg wrote:
On Wed, Oct 09, 2013 at 02:42:22PM +0200, Gianluca Cecchi wrote:
On Tue, Oct 8, 2013 at 10:40 AM, Dan Kenigsberg wrote:
But migration still fails
It seems like an unrelated failure. I do not know what's blocking migration traffic. Could you see if libvirtd.log and qemu logs at source and destination have clues?
It seems that in the VM log under qemu of the dest host I have: ... -incoming tcp:[::]:49153: Failed to bind socket: Address already in use
Is that port really taken (`ss -ntp` should tell by whom)?
yeah ! It seems gluster uses it on both sides
Since libvirt has been using this port range first, would you open a bug on gluster to avoid it?
Dan. (prays that Vdsm is not expected to edit libvirt's migration port range on installation)
If it makes you feel better, you can't even do that :-) Unfortunately, the port range used for migration is hard-coded in libvirt. And since we don't use port allocator yet (patches are in progress), we don't automatically avoid ports that are already taken. All this is going to change, though. Jirka
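To illustrate what a port allocator changes, here is a sketch (illustration only, not libvirt code): today the next port of the fixed range is handed out blindly, whereas an allocator can at least skip ports that something else, such as a gluster brick, has already bound. Note that even the checked variant leaves a window between the check and QEMU's own bind, which is the separate issue discussed further below.

import itertools
import socket

PORT_MIN, PORT_MAX = 49152, 49215          # libvirt's default migration range

_blind = itertools.count(PORT_MIN)

def next_port_blind():
    # Behaviour described above: hand out the next port of the range without
    # checking whether anything (e.g. a gluster brick) already owns it.
    return next(_blind)

def next_port_checked():
    # What a simple port allocator can do instead: probe and skip busy ports.
    for port in range(PORT_MIN, PORT_MAX + 1):
        s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        try:
            s.bind(("::", port))
        except socket.error:
            continue                        # taken -> try the next one
        finally:
            s.close()
        return port
    raise RuntimeError("no free port in the migration range")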

On Thu, Oct 10, 2013 at 08:40:29AM +0200, Jiri Denemark wrote:
On Wed, Oct 09, 2013 at 16:18:25 +0100, Dan Kenigsberg wrote:
On Wed, Oct 09, 2013 at 04:52:20PM +0200, Gianluca Cecchi wrote:
On Wed, Oct 9, 2013 at 3:43 PM, Dan Kenigsberg wrote:
On Wed, Oct 09, 2013 at 02:42:22PM +0200, Gianluca Cecchi wrote:
On Tue, Oct 8, 2013 at 10:40 AM, Dan Kenigsberg wrote:
But migration still fails
It seems like an unrelated failure. I do not know what's blocking migration traffic. Could you see if libvirtd.log and qemu logs at source and destination have clues?
It seems that in VM.log under qemu on the dest host I have: ... -incoming tcp:[::]:49153: Failed to bind socket: Address already in use
Is that port really taken (`ss -ntp` should tell by whom)?
yeah ! It seems gluster uses it on both sides
Since libvirt has been using this port range first, would you open a bug on gluster to avoid it?
Dan. (prays that Vdsm is not expected to edit libvirt's migration port range on installation)
If it makes you feel better, you can't even do that :-) Unfortunately, the port range used for migration is hard-coded in libvirt. And since we don't use port allocator yet (patches are in progress), we don't automatically avoid ports that are already taken. All this is going to change, though.
Are these patches posted? Is there a libvirt BZ that tracks this issue? Dan.

On Fri, Oct 11, 2013 at 11:41 PM, Dan Kenigsberg wrote:
yeah ! It seems gluster uses it on both sides
Since libvirt has been using this port range first, would you open a bug on gluster to avoid it?
Dan. (prays that Vdsm is not expected to edit libvirt's migration port range on installation)
If it makes you feel better, you can't even do that :-) Unfortunately, the port range used for migration is hard-coded in libvirt. And since we don't use port allocator yet (patches are in progress), we don't automatically avoid ports that are already taken. All this is going to change, though.
Are these patches posted? Is there a libvirt BZ that tracks this issue?
Dan.
In the meantime I got the 3.4.1 source rpm for Fedora 19 and rebuilt it with the patches under review in comment#2 of https://bugzilla.redhat.com/show_bug.cgi?id=1018178 Now the range is 50152+ and I'm able to successfully migrate my CentOS 6.4 VM in oVirt. Gianluca

On Fri, Oct 11, 2013 at 22:41:32 +0100, Dan Kenigsberg wrote:
On Thu, Oct 10, 2013 at 08:40:29AM +0200, Jiri Denemark wrote:
On Wed, Oct 09, 2013 at 16:18:25 +0100, Dan Kenigsberg wrote:
On Wed, Oct 09, 2013 at 04:52:20PM +0200, Gianluca Cecchi wrote:
On Wed, Oct 9, 2013 at 3:43 PM, Dan Kenigsberg wrote:
On Wed, Oct 09, 2013 at 02:42:22PM +0200, Gianluca Cecchi wrote:
On Tue, Oct 8, 2013 at 10:40 AM, Dan Kenigsberg wrote:
But migration still fails
It seems like an unrelated failure. I do not know what's blocking migration traffic. Could you see if libvirtd.log and qemu logs at source and destination have clues?
It seems that in VM.log under qemu on the dest host I have: ... -incoming tcp:[::]:49153: Failed to bind socket: Address already in use
Is that port really taken (`ss -ntp` should tell by whom)?
yeah ! It seems gluster uses it on both sides
Since libvirt has been using this port range first, would you open a bug on gluster to avoid it?
Dan. (prays that Vdsm is not expected to edit libvirt's migration port range on installation)
If it makes you feel better, you can't even do that :-) Unfortunately, the port range used for migration is hard-coded in libvirt. And since we don't use port allocator yet (patches are in progress), we don't automatically avoid ports that are already taken. All this is going to change, though.
Are these patches posted? Is there a libvirt BZ that tracks this issue?
https://www.redhat.com/archives/libvir-list/2013-October/msg00513.html is v3 of a patch that makes libvirt skip ports that are already in use. Of course, if something binds to that port just before QEMU is started, it will still fail. But that's a separate issue that may only be solved by giving QEMU a socket which is already bound rather than telling QEMU to bind to it. A patch for making the port range configurable has not been posted yet. BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1018530 Jirka
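The "already bound socket" idea can be illustrated with a small, purely conceptual Python 3 sketch (it is not how libvirt and QEMU actually wire this up): the parent binds and listens first, then hands the open descriptor to the child, so no other process can grab the port between the availability check and the bind.

import socket
import subprocess
import sys

# Parent binds and listens before the child exists; port 0 lets the kernel
# pick a free port for the demo so nothing real is disturbed.
sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
sock.bind(("::", 0))
sock.listen(1)
print("parent bound and listening on port", sock.getsockname()[1])

# The child inherits the already-bound descriptor (pass_fds keeps it open)
# and can use it directly, never racing for the port itself.
child_code = (
    "import socket, sys;"
    "s = socket.fromfd(int(sys.argv[1]), socket.AF_INET6, socket.SOCK_STREAM);"
    "print('child sees port', s.getsockname()[1])"
)
subprocess.check_call([sys.executable, "-c", child_code, str(sock.fileno())],
                      pass_fds=[sock.fileno()])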

On Wed, Oct 2, 2013 at 12:07 AM, Jason Brooks wrote:
I'm having this issue on my ovirt 3.3 setup (two node, one is AIO, GlusterFS storage, both on F19) as well.
Jason
Me too, with an oVirt 3.3 setup and a GlusterFS DataCenter. One dedicated engine + 2 vdsm hosts, all Fedora 19 + ovirt stable. The VM I'm trying to migrate is CentOS 6.4, fully updated.
After trying to migrate in the webadmin GUI I get: VM c6s is down. Exit message: 'iface'.
For a few moments the VM appears as down in the GUI, but an ssh session I had open before is still alive, and the qemu process on the source host is still there. After a little while the VM shows as "Up" again on the source node in the GUI. Actually it never stopped:
[root@c6s ~]# uptime
16:36:56 up 19 min, 1 user, load average: 0.00, 0.00, 0.00
At the target there are many errors such as:
Thread-8609::ERROR::2013-10-03 16:17:13,086::task::850::TaskManager.Task::(_setError) Task=`a102541e-fbe5-46a3-958c-e5f4026cac8c`::Unexpected error Traceback (most recent call last): File "/usr/share/vdsm/storage/task.py", line 857, in _run return fn(*args, **kargs) File "/usr/share/vdsm/logUtils.py", line 45, in wrapper res = f(*args, **kwargs) File "/usr/share/vdsm/storage/hsm.py", line 2123, in getAllTasksStatuses allTasksStatus = sp.getAllTasksStatuses() File "/usr/share/vdsm/storage/securable.py", line 66, in wrapper raise SecureError() SecureError
But perhaps relevant is what happens around the time of migration (16:33-16:34):
Thread-9968::DEBUG::2013-10-03 16:33:38,811::task::1168::TaskManager.Task::(prepare) Task=`6c1d3161-edcd-4344-a32a-4a18f75f5ba3`::finished: {'taskStatus': {'code': 0, 'message': 'Task is initializing', 'taskState': 'running', 'taskResult': '', 'taskID': '0eaac2f3-3d25-4c8c-9738-708aba290404'}}
Thread-9968::DEBUG::2013-10-03 16:33:38,811::task::579::TaskManager.Task::(_updateState) Task=`6c1d3161-edcd-4344-a32a-4a18f75f5ba3`::moving from state preparing -> state finished
Thread-9968::DEBUG::2013-10-03 16:33:38,811::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-9968::DEBUG::2013-10-03 16:33:38,812::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-9968::DEBUG::2013-10-03 16:33:38,812::task::974::TaskManager.Task::(_decref) Task=`6c1d3161-edcd-4344-a32a-4a18f75f5ba3`::ref 0 aborting False
0eaac2f3-3d25-4c8c-9738-708aba290404::ERROR::2013-10-03 16:33:38,847::task::850::TaskManager.Task::(_setError) Task=`0eaac2f3-3d25-4c8c-9738-708aba290404`::Unexpected error Traceback (most recent call last): File "/usr/share/vdsm/storage/task.py", line 857, in _run return fn(*args, **kargs) File "/usr/share/vdsm/storage/task.py", line 318, in run return self.cmd(*self.argslist, **self.argsdict) File "/usr/share/vdsm/storage/sp.py", line 272, in startSpm self.masterDomain.acquireHostId(self.id) File "/usr/share/vdsm/storage/sd.py", line 458, in acquireHostId self._clusterLock.acquireHostId(hostId, async) File "/usr/share/vdsm/storage/clusterlock.py", line 189, in acquireHostId raise se.AcquireHostIdFailure(self._sdUUID, e) AcquireHostIdFailure: Cannot acquire host id: ('d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291', SanlockException(5, 'Sanlock lockspace add failure', 'Input/output error'))
0eaac2f3-3d25-4c8c-9738-708aba290404::DEBUG::2013-10-03 16:33:38,847::task::869::TaskManager.Task::(_run) Task=`0eaac2f3-3d25-4c8c-9738-708aba290404`::Task._run: 0eaac2f3-3d25-4c8c-9738-708aba290404 () {} failed - stopping task
0eaac2f3-3d25-4c8c-9738-708aba290404::DEBUG::2013-10-03 16:33:38,847::task::1194::TaskManager.Task::(stop) Task=`0eaac2f3-3d25-4c8c-9738-708aba290404`::stopping in state running (force False)
Instead at the source:
Thread-10402::ERROR::2013-10-03 16:35:03,713::vm::244::vm.Vm::(_recover) vmId=`4147e0d3-19a7-447b-9d88-2ff19365bec0`::migration destination error: Error creating the requested VM
Thread-10402::ERROR::2013-10-03 16:35:03,740::vm::324::vm.Vm::(run) vmId=`4147e0d3-19a7-447b-9d88-2ff19365bec0`::Failed to migrate Traceback (most recent call last): File "/usr/share/vdsm/vm.py", line 311, in run self._startUnderlyingMigration() File "/usr/share/vdsm/vm.py", line 347, in _startUnderlyingMigration response['status']['message']) RuntimeError: migration destination error: Error creating the requested VM
Thread-1161::DEBUG::2013-10-03 16:35:04,243::fileSD::238::Storage.Misc.excCmd::(getReadDelay) '/usr/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/f18ovn01.mydomain:gvdata/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-1161::DEBUG::2013-10-03 16:35:04,262::fileSD::238::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n512 bytes (512 B) copied, 0.0015976 s, 320 kB/s\n'; <rc> = 0
Thread-1161::INFO::2013-10-03 16:35:04,269::clusterlock::174::SANLock::(acquireHostId) Acquiring host id for domain d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291 (id: 2)
Thread-1161::DEBUG::2013-10-03 16:35:04,270::clusterlock::192::SANLock::(acquireHostId) Host id for domain d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291 successfully acquired (id: 2)
Full logs here: https://docs.google.com/file/d/0BwoPbcrMv8mvNVFBaGhDeUdMOFE/edit?usp=sharing
In the zip file: engine.log, vdsm_source.log, vdsm_target.log, "ovirt gluster 3.3 datacenter status.png" (note that many times, from a GUI point of view, the DC seems to be set to non responsive...?)
One strange thing: in Clusters --> Gluster01 --> Volumes I see my gluster volume (gvdata) as up, while in DC --> Gluster (my DC name) Storage --> gvdata (my storage domain name) it is marked as down, but the related C6 VM is always up and running???
source:
[root@f18ovn03 vdsm]# df -h | grep rhev
f18ovn01.mydomain:gvdata 30G 4.7G 26G 16% /rhev/data-center/mnt/glusterSD/f18ovn01.mydomain:gvdata
f18engine.mydomain:/var/lib/exports/iso 35G 9.4G 24G 29% /rhev/data-center/mnt/f18engine.mydomain:_var_lib_exports_iso
target:
[root@f18ovn01 vdsm]# df -h | grep rhev
f18ovn01.mydomain:gvdata 30G 4.7G 26G 16% /rhev/data-center/mnt/glusterSD/f18ovn01.mydomain:gvdata
f18engine.mydomain:/var/lib/exports/iso 35G 9.4G 24G 29% /rhev/data-center/mnt/f18engine.mydomain:_var_lib_exports_iso
I didn't try to activate it again because I didn't want to make things worse: it seems strange to me that both hosts are marked as up but the only existing Storage Domain (marked as "Data (Master)") in the GUI is down.....
Gianluca
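Since the sanlock failure above reports an Input/output error on the master domain, one quick check is to repeat from each host the same kind of direct-I/O read vdsm itself performs against the domain's dom_md files. The path below is the one from these logs; substitute your own mount point and domain UUID.

import subprocess

# Mirror the dd read from the vdsm log above: direct-I/O read of the dom_md
# files. Path taken from this thread's logs -- adjust to your storage domain.
DOM_MD = ("/rhev/data-center/mnt/glusterSD/f18ovn01.mydomain:gvdata/"
          "d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/dom_md")

for name in ("metadata", "ids"):
    cmd = ["dd", "iflag=direct", "if=%s/%s" % (DOM_MD, name),
           "bs=4096", "count=1", "of=/dev/null"]
    rc = subprocess.call(cmd)
    print(name, "direct read:", "OK" if rc == 0 else "FAILED (rc=%d)" % rc)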

On 10/03/2013 06:12 PM, Gianluca Cecchi wrote:
On Wed, Oct 2, 2013 at 12:07 AM, Jason Brooks wrote:
I'm having this issue on my ovirt 3.3 setup (two node, one is AIO, GlusterFS storage, both on F19) as well.
Jason
Me too, with an oVirt 3.3 setup and a GlusterFS DataCenter. One dedicated engine + 2 vdsm hosts, all Fedora 19 + ovirt stable. The VM I'm trying to migrate is CentOS 6.4, fully updated
CentOS qemu/libvirt versions don't support the GlusterFS storage domain yet; 6.5 would be needed. We're trying to find a solution until then.
After trying to migrate in webadmin gui I get: VM c6s is down. Exit message: 'iface'.
For a few moments the VM appears as down in the GUI, but an ssh session I had open before is still alive. Also, the qemu process on the source host is still there. After a little while, the VM shows as "Up" again on the source node in the GUI
Actually it never stopped: [root@c6s ~]# uptime 16:36:56 up 19 min, 1 user, load average: 0.00, 0.00, 0.00
At target many errors such as Thread-8609::ERROR::2013-10-03 16:17:13,086::task::850::TaskManager.Task::(_setError) Task=`a102541e-fbe5-46a3-958c-e5f4026cac8c`::Unexpected error Traceback (most recent call last): File "/usr/share/vdsm/storage/task.py", line 857, in _run return fn(*args, **kargs) File "/usr/share/vdsm/logUtils.py", line 45, in wrapper res = f(*args, **kwargs) File "/usr/share/vdsm/storage/hsm.py", line 2123, in getAllTasksStatuses allTasksStatus = sp.getAllTasksStatuses() File "/usr/share/vdsm/storage/securable.py", line 66, in wrapper raise SecureError() SecureError
But relevant perhaps is around the time of migration (16:33-16:34)
Thread-9968::DEBUG::2013-10-03 16:33:38,811::task::1168::TaskManager.Task::(prepare) Task=`6c1d3161-edcd-4344-a32a-4a18f75f5ba3`::finished: {'taskStatus': {'code': 0, 'message': 'Task is initializing', 'taskState': 'running', 'taskResult': '', 'taskID': '0eaac2f3-3d25-4c8c-9738-708aba290404'}} Thread-9968::DEBUG::2013-10-03 16:33:38,811::task::579::TaskManager.Task::(_updateState) Task=`6c1d3161-edcd-4344-a32a-4a18f75f5ba3`::moving from state preparing -> state finished Thread-9968::DEBUG::2013-10-03 16:33:38,811::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-9968::DEBUG::2013-10-03 16:33:38,812::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-9968::DEBUG::2013-10-03 16:33:38,812::task::974::TaskManager.Task::(_decref) Task=`6c1d3161-edcd-4344-a32a-4a18f75f5ba3`::ref 0 aborting False 0eaac2f3-3d25-4c8c-9738-708aba290404::ERROR::2013-10-03 16:33:38,847::task::850::TaskManager.Task::(_setError) Task=`0eaac2f3-3d25-4c8c-9738-708aba290404`::Unexpected error Traceback (most recent call last): File "/usr/share/vdsm/storage/task.py", line 857, in _run return fn(*args, **kargs) File "/usr/share/vdsm/storage/task.py", line 318, in run return self.cmd(*self.argslist, **self.argsdict) File "/usr/share/vdsm/storage/sp.py", line 272, in startSpm self.masterDomain.acquireHostId(self.id) File "/usr/share/vdsm/storage/sd.py", line 458, in acquireHostId self._clusterLock.acquireHostId(hostId, async) File "/usr/share/vdsm/storage/clusterlock.py", line 189, in acquireHostId raise se.AcquireHostIdFailure(self._sdUUID, e) AcquireHostIdFailure: Cannot acquire host id: ('d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291', SanlockException(5, 'Sanlock lockspace add failure', 'Input/output error')) 0eaac2f3-3d25-4c8c-9738-708aba290404::DEBUG::2013-10-03 16:33:38,847::task::869::TaskManager.Task::(_run) Task=`0eaac2f3-3d25-4c8c-9738-708aba290404`::Task._run: 0eaac2f3-3d25-4c8c-9738-708aba290404 () {} failed - stopping task 0eaac2f3-3d25-4c8c-9738-708aba290404::DEBUG::2013-10-03 16:33:38,847::task::1194::TaskManager.Task::(stop) Task=`0eaac2f3-3d25-4c8c-9738-708aba290404`::stopping in state running (force False)
Instead at source:
Thread-10402::ERROR::2013-10-03 16:35:03,713::vm::244::vm.Vm::(_recover) vmId=`4147e0d3-19a7-447b-9d88-2ff19365bec0`::migration destination error: Error creating the requested VM Thread-10402::ERROR::2013-10-03 16:35:03,740::vm::324::vm.Vm::(run) vmId=`4147e0d3-19a7-447b-9d88-2ff19365bec0`::Failed to migrate Traceback (most recent call last): File "/usr/share/vdsm/vm.py", line 311, in run self._startUnderlyingMigration() File "/usr/share/vdsm/vm.py", line 347, in _startUnderlyingMigration response['status']['message']) RuntimeError: migration destination error: Error creating the requested VM Thread-1161::DEBUG::2013-10-03 16:35:04,243::fileSD::238::Storage.Misc.excCmd::(getReadDelay) '/usr/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/f18ovn01.mydomain:gvdata/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/dom_md/metadata bs=4096 count=1' (cwd None) Thread-1161::DEBUG::2013-10-03 16:35:04,262::fileSD::238::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n512 bytes (512 B) copied, 0.0015976 s, 320 kB/s\n'; <rc> = 0 Thread-1161::INFO::2013-10-03 16:35:04,269::clusterlock::174::SANLock::(acquireHostId) Acquiring host id for domain d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291 (id: 2) Thread-1161::DEBUG::2013-10-03 16:35:04,270::clusterlock::192::SANLock::(acquireHostId) Host id for domain d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291 successfully acquired (id: 2)
Full logs here: https://docs.google.com/file/d/0BwoPbcrMv8mvNVFBaGhDeUdMOFE/edit?usp=sharing
in the zip file: engine.log vdsm_source.log vdsm_target.log ovirt gluster 3.3 datacenter status.png (note as many times it seems from a gui point of view that DC set to non responsive...?)
One strange thing: In Clusters --> Gluster01 --> Volumes I see my gluster volume (gvdata) as up
while in DC --> Gluster (my DC name) Storage --> gvdata (my storage domain name) it is marked as down, but the related C6 VM is always up and running???
source: [root@f18ovn03 vdsm]# df -h | grep rhev f18ovn01.mydomain:gvdata 30G 4.7G 26G 16% /rhev/data-center/mnt/glusterSD/f18ovn01.mydomain:gvdata f18engine.mydomain:/var/lib/exports/iso 35G 9.4G 24G 29% /rhev/data-center/mnt/f18engine.mydomain:_var_lib_exports_iso
target: [root@f18ovn01 vdsm]# df -h | grep rhev f18ovn01.mydomain:gvdata 30G 4.7G 26G 16% /rhev/data-center/mnt/glusterSD/f18ovn01.mydomain:gvdata f18engine.mydomain:/var/lib/exports/iso 35G 9.4G 24G 29% /rhev/data-center/mnt/f18engine.mydomain:_var_lib_exports_iso
I didn't try to activate it again because I didn't want to make things worse: it seems strange to me that both hosts are marked as up but the only existing Storage Domain (marked as "Data (Master)") in the GUI is down.....
Gianluca

Hi, if I understand Gianluca correctly, he has set up the ManagementNode and 2 ComputeNodes with Fedora 19. CentOS 6.4 is the VM-OS (inside the VM). So this should work? HTH Sven On 04/10/13 11:29, Itamar Heim wrote:
Me too, with an oVirt 3.3 setup and a GlusterFS DataCenter. One dedicated engine + 2 vdsm hosts, all Fedora 19 + ovirt stable. The VM I'm trying to migrate is CentOS 6.4, fully updated
CentOS qemu/libvirt versions don't support the GlusterFS storage domain yet; 6.5 would be needed. We're trying to find a solution until then.
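A quick, non-authoritative way to check whether a given host's qemu build has native GlusterFS support is to look for libgfapi among its linked libraries; the binary paths below are the usual Fedora/CentOS locations and may differ on other builds.

import subprocess

# Heuristic only: a qemu built with gluster:// support links against libgfapi.
CANDIDATES = ["/usr/bin/qemu-kvm", "/usr/libexec/qemu-kvm",
              "/usr/bin/qemu-system-x86_64"]

for path in CANDIDATES:
    try:
        libs = subprocess.check_output(["ldd", path]).decode()
    except (OSError, subprocess.CalledProcessError):
        continue                      # binary not present on this host
    print(path, "links libgfapi:", "libgfapi" in libs)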

On Fri, Oct 4, 2013 at 11:39 AM, Sven Kieske wrote:
Hi,
if I understand Gianluca correctly, he has set up the ManagementNode and 2 ComputeNodes with Fedora 19. CentOS 6.4 is the VM-OS (inside the VM).
So this should work?
Yes, this is my environment, with plain F19 systems installed and updated as of yesterday and the ovirt stable repo enabled. CentOS 6.4 x86_64 was configured as a server in oVirt, but since I wanted to generate some significant I/O during install to test Gluster efficiency, I chose Gnome Desktop in the CentOS anaconda phase. This led to 1089 packages installed. After reboot I ran "yum update" on the VM, enabled EPEL and installed the new ovirt-guest-agent, so that I was correctly able to see the IP and kernel of the VM from the webadmin GUI. But then I ran into these problems during preliminary tests, not sure whether they are correlated or not:
- migration issue
- shutdown (so power off) and power on error, see: https://bugzilla.redhat.com/show_bug.cgi?id=1007980 which could be related to both of them
- gluster split brain
Gianluca

On 10/04/2013 12:39 PM, Sven Kieske wrote:
Hi,
if I understand Gianluca correctly, he has set up the ManagementNode and 2 ComputeNodes with Fedora 19. CentOS 6.4 is the VM-OS (inside the VM).
So this should work?
yes, it should.
HTH
Sven
On 04/10/13 11:29, Itamar Heim wrote:
Me too, with an oVirt 3.3 setup and a GlusterFS DataCenter. One dedicated engine + 2 vdsm hosts, all Fedora 19 + ovirt stable. The VM I'm trying to migrate is CentOS 6.4, fully updated
CentOS qemu/libvirt versions don't support the GlusterFS storage domain yet; 6.5 would be needed. We're trying to find a solution until then.

On Wed, Oct 2, 2013 at 12:07 AM, Jason Brooks wrote:
I'm having this issue on my ovirt 3.3 setup (two node, one is AIO, GlusterFS storage, both on F19) as well.
Jason
I uploaded all of today's logs to bugzilla https://bugzilla.redhat.com/show_bug.cgi?id=1007980
I can reproduce the problem, and the migration phase is apparently what generates files to heal in the gluster volume.
The VM is running on f18ovn03. Before migrating:
[root@f18ovn01 vdsm]# gluster volume heal gvdata info
Gathering Heal info on volume gvdata has been successful
Brick 10.4.4.58:/gluster/DATA_GLUSTER/brick1
Number of entries: 0
Brick 10.4.4.59:/gluster/DATA_GLUSTER/brick1
Number of entries: 0
I start the migration, which fails with the "iface" error, and now you see:
[root@f18ovn01 vdsm]# gluster volume heal gvdata info
Gathering Heal info on volume gvdata has been successful
Brick 10.4.4.58:/gluster/DATA_GLUSTER/brick1
Number of entries: 1
/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/dom_md/ids
Brick 10.4.4.59:/gluster/DATA_GLUSTER/brick1
Number of entries: 1
/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/dom_md/ids
While on f18ovn03:
[root@f18ovn03 vdsm]# gluster volume heal gvdata info
Gathering Heal info on volume gvdata has been successful
Brick 10.4.4.58:/gluster/DATA_GLUSTER/brick1
Number of entries: 0
Brick 10.4.4.59:/gluster/DATA_GLUSTER/brick1
Number of entries: 0
Gianluca
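To make the before/after comparison easier to repeat, a small wrapper can count the heal entries per brick; it assumes the gluster CLI is available on the host and uses the volume name from this thread (gvdata) as the default.

import re
import subprocess

# Count the entries reported by `gluster volume heal <volume> info` per brick,
# so the state before and after a migration attempt can be compared quickly.
def heal_counts(volume="gvdata"):
    out = subprocess.check_output(
        ["gluster", "volume", "heal", volume, "info"]).decode()
    counts, brick = {}, None
    for line in out.splitlines():
        if line.startswith("Brick "):
            brick = line[len("Brick "):].strip()
        match = re.match(r"Number of entries:\s*(\d+)", line)
        if match and brick:
            counts[brick] = int(match.group(1))
    return counts

if __name__ == "__main__":
    for brick, entries in sorted(heal_counts().items()):
        print(brick, "->", entries, "entries to heal")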
participants (11)
- Dan Kenigsberg
- Daniel Berteaud
- emitor@gmail.com
- Gianluca Cecchi
- Itamar Heim
- Jason Brooks
- Jiri Denemark
- Michal Skrivanek
- Michal Skrivanek
- Omer Frenkel
- Sven Kieske