[Users] Migration Failed

Hi guys,

I've tried to migrate a VM from one host (node03) to another (node01); it failed to migrate and the VM (tux) remained on the original host. I've now tried to migrate the same VM again, and it picks up that the previous migration is still in progress and refuses to migrate.

I've checked for the KVM process on each of the hosts and the VM is definitely still running on node03, so there doesn't appear to be any chance of the VM trying to run on both hosts (which I've had before, and which is very scary).

These are my versions, and attached are my engine.log and my vdsm.log:

CentOS 6.5
ovirt-iso-uploader-3.3.1-1.el6.noarch ovirt-host-deploy-1.1.2-1.el6.noarch ovirt-release-el6-9-1.noarch ovirt-engine-setup-3.3.1-2.el6.noarch ovirt-engine-3.3.1-2.el6.noarch ovirt-host-deploy-java-1.1.2-1.el6.noarch ovirt-image-uploader-3.3.1-1.el6.noarch ovirt-engine-dbscripts-3.3.1-2.el6.noarch ovirt-engine-cli-3.3.0.6-1.el6.noarch ovirt-engine-websocket-proxy-3.3.1-2.el6.noarch ovirt-engine-userportal-3.3.1-2.el6.noarch ovirt-log-collector-3.3.1-1.el6.noarch ovirt-engine-tools-3.3.1-2.el6.noarch ovirt-engine-lib-3.3.1-2.el6.noarch ovirt-engine-webadmin-portal-3.3.1-2.el6.noarch ovirt-engine-backend-3.3.1-2.el6.noarch ovirt-engine-sdk-python-3.3.0.8-1.el6.noarch ovirt-engine-restapi-3.3.1-2.el6.noarch
vdsm-python-4.13.0-11.el6.x86_64 vdsm-cli-4.13.0-11.el6.noarch vdsm-xmlrpc-4.13.0-11.el6.noarch vdsm-4.13.0-11.el6.x86_64 vdsm-python-cpopen-4.13.0-11.el6.x86_64

I've had a few issues with this particular installation in the past, as it started on a very old pre-release of oVirt, was then upgraded to the dreyou repo, and finally moved to the official CentOS oVirt repo.

Thanks, any help is greatly appreciated.

Regards.
Neil Wilson.

Is it still in the same condition? If yes, please add the outputs from both hosts for:

# virsh -r list
# pgrep qemu
# vdsClient -s 0 list table (or 'vdsClient 0 list table' if you are working in insecure mode)

Thanks,
Elad Ben Aharon
RHEV-QE storage team

----- Original Message -----
From: "Neil" <nwilson123@gmail.com>
To: users@ovirt.org
Sent: Tuesday, January 7, 2014 4:21:43 PM
Subject: [Users] Migration Failed

Hi guys,

I've tried to migrate a VM from one host (node03) to another (node01); it failed to migrate and the VM (tux) remained on the original host. I've now tried to migrate the same VM again, and it picks up that the previous migration is still in progress and refuses to migrate.

I've checked for the KVM process on each of the hosts and the VM is definitely still running on node03, so there doesn't appear to be any chance of the VM trying to run on both hosts (which I've had before, and which is very scary).

These are my versions, and attached are my engine.log and my vdsm.log:

CentOS 6.5
ovirt-iso-uploader-3.3.1-1.el6.noarch ovirt-host-deploy-1.1.2-1.el6.noarch ovirt-release-el6-9-1.noarch ovirt-engine-setup-3.3.1-2.el6.noarch ovirt-engine-3.3.1-2.el6.noarch ovirt-host-deploy-java-1.1.2-1.el6.noarch ovirt-image-uploader-3.3.1-1.el6.noarch ovirt-engine-dbscripts-3.3.1-2.el6.noarch ovirt-engine-cli-3.3.0.6-1.el6.noarch ovirt-engine-websocket-proxy-3.3.1-2.el6.noarch ovirt-engine-userportal-3.3.1-2.el6.noarch ovirt-log-collector-3.3.1-1.el6.noarch ovirt-engine-tools-3.3.1-2.el6.noarch ovirt-engine-lib-3.3.1-2.el6.noarch ovirt-engine-webadmin-portal-3.3.1-2.el6.noarch ovirt-engine-backend-3.3.1-2.el6.noarch ovirt-engine-sdk-python-3.3.0.8-1.el6.noarch ovirt-engine-restapi-3.3.1-2.el6.noarch
vdsm-python-4.13.0-11.el6.x86_64 vdsm-cli-4.13.0-11.el6.noarch vdsm-xmlrpc-4.13.0-11.el6.noarch vdsm-4.13.0-11.el6.x86_64 vdsm-python-cpopen-4.13.0-11.el6.x86_64

I've had a few issues with this particular installation in the past, as it started on a very old pre-release of oVirt, was then upgraded to the dreyou repo, and finally moved to the official CentOS oVirt repo.

Thanks, any help is greatly appreciated.

Regards.
Neil Wilson.

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

Hi Elad,

Thanks for assisting me. Yes, the same condition exists; if I try to migrate Tux it says "The VM Tux is being migrated". Below are the details requested.

[root@node01 ~]# virsh -r list
 Id    Name                           State
----------------------------------------------------
 1     adam                           running

[root@node01 ~]# pgrep qemu
11232

[root@node01 ~]# vdsClient -s 0 list table
63da7faa-f92a-4652-90f2-b6660a4fb7b3  11232  adam  Up

[root@node03 ~]# virsh -r list
 Id    Name                           State
----------------------------------------------------
 7     tux                            running

[root@node03 ~]# pgrep qemu
32333

[root@node03 ~]# vdsClient -s 0 list table
2736197b-6dc3-4155-9a29-9306ca64881d  32333  tux  Up

Thanks.

Regards.
Neil Wilson.

On Tue, Jan 7, 2014 at 4:43 PM, Elad Ben Aharon <ebenahar@redhat.com> wrote:
Is it still in the same condition? If yes, please add the outputs from both hosts for:
#virsh -r list #pgrep qemu #vdsClient -s 0 list table (or 'vdsClient 0 list table' if you are working in insecure mode)
Thanks,
Elad Ben Aharon RHEV-QE storage team
----- Original Message ----- From: "Neil" <nwilson123@gmail.com> To: users@ovirt.org Sent: Tuesday, January 7, 2014 4:21:43 PM Subject: [Users] Migration Failed
Hi guys,
I've tried to migrate a VM from one host(node03) to another(node01), and it failed to migrate, and the VM(tux) remained on the original host. I've now tried to migrate the same VM again, and it picks up that the previous migration is still in progress and refuses to migrate.
I've checked for the KVM process on each of the hosts and the VM is definitely still running on node03 so there doesn't appear to be any chance of the VM trying to run on both hosts (which I've had before which is very scary).
These are my versions... and attached are my engine.log and my vdsm.log
Centos 6.5 ovirt-iso-uploader-3.3.1-1.el6.noarch ovirt-host-deploy-1.1.2-1.el6.noarch ovirt-release-el6-9-1.noarch ovirt-engine-setup-3.3.1-2.el6.noarch ovirt-engine-3.3.1-2.el6.noarch ovirt-host-deploy-java-1.1.2-1.el6.noarch ovirt-image-uploader-3.3.1-1.el6.noarch ovirt-engine-dbscripts-3.3.1-2.el6.noarch ovirt-engine-cli-3.3.0.6-1.el6.noarch ovirt-engine-websocket-proxy-3.3.1-2.el6.noarch ovirt-engine-userportal-3.3.1-2.el6.noarch ovirt-log-collector-3.3.1-1.el6.noarch ovirt-engine-tools-3.3.1-2.el6.noarch ovirt-engine-lib-3.3.1-2.el6.noarch ovirt-engine-webadmin-portal-3.3.1-2.el6.noarch ovirt-engine-backend-3.3.1-2.el6.noarch ovirt-engine-sdk-python-3.3.0.8-1.el6.noarch ovirt-engine-restapi-3.3.1-2.el6.noarch
vdsm-python-4.13.0-11.el6.x86_64 vdsm-cli-4.13.0-11.el6.noarch vdsm-xmlrpc-4.13.0-11.el6.noarch vdsm-4.13.0-11.el6.x86_64 vdsm-python-cpopen-4.13.0-11.el6.x86_64
I've had a few issues with this particular installation in the past, as it's from a very old pre release of ovirt, then upgrading to the dreyou repo, then finally moving to the official Centos ovirt repo.
Thanks, any help is greatly appreciated.
Regards.
Neil Wilson.
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

Ok... several things :)

1. For migration we need to see vdsm logs from both src and dst.

2. Is it possible that the vm has an iso attached? Because I see that you are having problems with the iso domain:

2014-01-07 14:26:27,714 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (pool-6-thread-48) Domain e9ab725d-69c1-4a59-b225-b995d095c289:bla-iso was reported with error code 358

Thread-1165153::DEBUG::2014-01-07 13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper) Unknown libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with matching uuid '63da7faa-f92a-4652-90f2-b6660a4fb7b3'

Thread-19::ERROR::2014-01-07 13:01:02,621::sdc::143::Storage.StorageDomainCache::(_findDomain) domain e9ab725d-69c1-4a59-b225-b995d095c289 not found
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: (u'e9ab725d-69c1-4a59-b225-b995d095c289',)
Thread-19::ERROR::2014-01-07 13:01:02,622::domainMonitor::225::Storage.DomainMonitorThread::(_monitorDomain) Error while collecting domain e9ab725d-69c1-4a59-b225-b995d095c289 monitoring information
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/domainMonitor.py", line 190, in _monitorDomain
    self.domain = sdCache.produce(self.sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 98, in produce
    domain.getRealDomain()
  File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
    return self._cache._realProduce(self._sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 122, in _realProduce
    domain = self._findDomain(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: (u'e9ab725d-69c1-4a59-b225-b995d095c289',)
Dummy-29013::DEBUG::2014-01-07 13:01:03,507::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail) 'dd if=/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None)

3. The migration fails with a libvirt error, but we need the trace from the second log:

Thread-1165153::DEBUG::2014-01-07 13:39:42,451::sampling::292::vm.Vm::(stop) vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stop statistics collection
Thread-1163583::DEBUG::2014-01-07 13:39:42,452::sampling::323::vm.Vm::(run) vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats thread finished
Thread-1165153::DEBUG::2014-01-07 13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper) Unknown libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with matching uuid '63da7faa-f92a-4652-90f2-b6660a4fb7b3'

4. But I am worried about this and would like more info about this vm...

Thread-247::ERROR::2014-01-07 15:35:14,868::sampling::355::vm.Vm::(collect) vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats function failed: <AdvancedStatsFunction _highWrite at 0x2ce0998>
Traceback (most recent call last):
  File "/usr/share/vdsm/sampling.py", line 351, in collect
    statsFunction()
  File "/usr/share/vdsm/sampling.py", line 226, in __call__
    retValue = self._function(*args, **kwargs)
  File "/usr/share/vdsm/vm.py", line 509, in _highWrite
    if not vmDrive.blockDev or vmDrive.format != 'cow':
AttributeError: 'Drive' object has no attribute 'format'

How did you create this vm? Was it from the UI? Was it from a script? What are the parameters you used?

Thanks,
Dafna
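To make point 4 concrete, here is a minimal, self-contained sketch of that failure. Only the check quoted in the traceback (vm.py line 509) is taken verbatim; the Drive class and the harness around it are illustrative stand-ins, not the real vdsm code. It shows why a drive record that was never given a 'format' attribute blows up the stats collection:

# Illustrative stand-in for the failure in point 4; not the real vdsm code.
# Only the condition from vm.py line 509 in the traceback is taken verbatim.

class Drive(object):
    """Hypothetical minimal drive record; note it has no 'format' attribute."""
    def __init__(self, blockDev):
        self.blockDev = blockDev

def _highWrite(vmDrive):
    # The check below is the line shown in the traceback. It reads
    # vmDrive.format unconditionally, so a drive without that attribute
    # raises AttributeError instead of simply being skipped.
    if not vmDrive.blockDev or vmDrive.format != 'cow':
        return  # raw or non-block drives are not watermark-extended
    print("would check the write watermark and extend the volume here")

if __name__ == '__main__':
    try:
        _highWrite(Drive(blockDev=True))
    except AttributeError as e:
        print(e)  # prints: 'Drive' object has no attribute 'format'

Which device in Tux's configuration ends up in that state is exactly what Dafna's question about how the VM was created is trying to pin down.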
On 01/07/2014 04:34 PM, Neil wrote:

Hi Elad,
Thanks for assisting me, yes the same condition exists, if I try to migrate Tux it says "The VM Tux is being migrated".
Below are the details requested.
[root@node01 ~]# virsh -r list Id Name State ---------------------------------------------------- 1 adam running
[root@node01 ~]# pgrep qemu 11232 [root@node01 ~]# vdsClient -s 0 list table 63da7faa-f92a-4652-90f2-b6660a4fb7b3 11232 adam Up
[root@node03 ~]# virsh -r list Id Name State ---------------------------------------------------- 7 tux running
[root@node03 ~]# pgrep qemu 32333 [root@node03 ~]# vdsClient -s 0 list table 2736197b-6dc3-4155-9a29-9306ca64881d 32333 tux Up
Thanks.
Regards.
Neil Wilson.
On Tue, Jan 7, 2014 at 4:43 PM, Elad Ben Aharon <ebenahar@redhat.com> wrote:
Is it still in the same condition? If yes, please add the outputs from both hosts for:
#virsh -r list #pgrep qemu #vdsClient -s 0 list table (or 'vdsClient 0 list table' if you are working in insecure mode)
Thanks,
Elad Ben Aharon RHEV-QE storage team
----- Original Message ----- From: "Neil" <nwilson123@gmail.com> To: users@ovirt.org Sent: Tuesday, January 7, 2014 4:21:43 PM Subject: [Users] Migration Failed
Hi guys,
I've tried to migrate a VM from one host(node03) to another(node01), and it failed to migrate, and the VM(tux) remained on the original host. I've now tried to migrate the same VM again, and it picks up that the previous migration is still in progress and refuses to migrate.
I've checked for the KVM process on each of the hosts and the VM is definitely still running on node03 so there doesn't appear to be any chance of the VM trying to run on both hosts (which I've had before which is very scary).
These are my versions... and attached are my engine.log and my vdsm.log
Centos 6.5 ovirt-iso-uploader-3.3.1-1.el6.noarch ovirt-host-deploy-1.1.2-1.el6.noarch ovirt-release-el6-9-1.noarch ovirt-engine-setup-3.3.1-2.el6.noarch ovirt-engine-3.3.1-2.el6.noarch ovirt-host-deploy-java-1.1.2-1.el6.noarch ovirt-image-uploader-3.3.1-1.el6.noarch ovirt-engine-dbscripts-3.3.1-2.el6.noarch ovirt-engine-cli-3.3.0.6-1.el6.noarch ovirt-engine-websocket-proxy-3.3.1-2.el6.noarch ovirt-engine-userportal-3.3.1-2.el6.noarch ovirt-log-collector-3.3.1-1.el6.noarch ovirt-engine-tools-3.3.1-2.el6.noarch ovirt-engine-lib-3.3.1-2.el6.noarch ovirt-engine-webadmin-portal-3.3.1-2.el6.noarch ovirt-engine-backend-3.3.1-2.el6.noarch ovirt-engine-sdk-python-3.3.0.8-1.el6.noarch ovirt-engine-restapi-3.3.1-2.el6.noarch
vdsm-python-4.13.0-11.el6.x86_64 vdsm-cli-4.13.0-11.el6.noarch vdsm-xmlrpc-4.13.0-11.el6.noarch vdsm-4.13.0-11.el6.x86_64 vdsm-python-cpopen-4.13.0-11.el6.x86_64
I've had a few issues with this particular installation in the past, as it's from a very old pre release of ovirt, then upgrading to the dreyou repo, then finally moving to the official Centos ovirt repo.
Thanks, any help is greatly appreciated.
Regards.
Neil Wilson.
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
-- Dafna Ron

Hi Dafna,

Thanks for the reply.

Attached is the log from the source server (node03).

I'll reply to your other questions as soon as I'm back in the office this afternoon, have to run off to a meeting.

Regards.
Neil Wilson.

On Tue, Jan 7, 2014 at 8:13 PM, Dafna Ron <dron@redhat.com> wrote:
Ok... several things :)
1. for migration we need to see vdsm logs from both src and dst.
2. Is it possible that the vm has an iso attached? because I see that you are having problems with the iso domain:
2014-01-07 14:26:27,714 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (pool-6-thread-48) Domain e9ab725d-69c1-4a59-b225-b995d095c289:bla-iso was reported with error code 358
Thread-1165153::DEBUG::2014-01-07 13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper) Unknown libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with matching uuid '63da7faa-f92a-4652-90f2-b6660a4fb7b3'
hread-19::ERROR::2014-01-07 13:01:02,621::sdc::143::Storage.StorageDomainCache::(_findDomain) domain e9ab725d-69c1-4a59-b225-b995d095c289 not found Traceback (most recent call last): File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain dom = findMethod(sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain raise se.StorageDomainDoesNotExist(sdUUID) StorageDomainDoesNotExist: Storage domain does not exist: (u'e9ab725d-69c1-4a59-b225-b995d095c289',) Thread-19::ERROR::2014-01-07 13:01:02,622::domainMonitor::225::Storage.DomainMonitorThread::(_monitorDomain) Error while collecting domain e9ab725d-69c1-4a59-b225-b995d095c289 monitoring information Traceback (most recent call last): File "/usr/share/vdsm/storage/domainMonitor.py", line 190, in _monitorDomain self.domain = sdCache.produce(self.sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 98, in produce domain.getRealDomain() File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain return self._cache._realProduce(self._sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 122, in _realProduce domain = self._findDomain(sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain dom = findMethod(sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain raise se.StorageDomainDoesNotExist(sdUUID) StorageDomainDoesNotExist: Storage domain does not exist: (u'e9ab725d-69c1-4a59-b225-b995d095c289',) Dummy-29013::DEBUG::2014-01-07 13:01:03,507::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail) 'dd if=/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd N one)
3. The migration fails with libvirt error but we need the trace from the second log:
Thread-1165153::DEBUG::2014-01-07 13:39:42,451::sampling::292::vm.Vm::(stop) vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stop statistics collection Thread-1163583::DEBUG::2014-01-07 13:39:42,452::sampling::323::vm.Vm::(run) vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats thread finished Thread-1165153::DEBUG::2014-01-07 13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper) Unknown libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with matching uuid '63da7faa-f92a-4652-90f2-b6660 a4fb7b3'
4. But I am worried about this and would like more info about this vm...
Thread-247::ERROR::2014-01-07 15:35:14,868::sampling::355::vm.Vm::(collect) vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats function failed: <AdvancedStatsFunction _highWrite at 0x2ce0998> Traceback (most recent call last): File "/usr/share/vdsm/sampling.py", line 351, in collect statsFunction() File "/usr/share/vdsm/sampling.py", line 226, in __call__ retValue = self._function(*args, **kwargs) File "/usr/share/vdsm/vm.py", line 509, in _highWrite if not vmDrive.blockDev or vmDrive.format != 'cow': AttributeError: 'Drive' object has no attribute 'format'
How did you create this vm? was it from the UI? was it from a script? what are the parameters you used?
Thanks,
Dafna
On 01/07/2014 04:34 PM, Neil wrote:
Hi Elad,
Thanks for assisting me, yes the same condition exists, if I try to migrate Tux it says "The VM Tux is being migrated".
Below are the details requested.
[root@node01 ~]# virsh -r list Id Name State ---------------------------------------------------- 1 adam running
[root@node01 ~]# pgrep qemu 11232 [root@node01 ~]# vdsClient -s 0 list table 63da7faa-f92a-4652-90f2-b6660a4fb7b3 11232 adam Up
[root@node03 ~]# virsh -r list Id Name State ---------------------------------------------------- 7 tux running
[root@node03 ~]# pgrep qemu 32333 [root@node03 ~]# vdsClient -s 0 list table 2736197b-6dc3-4155-9a29-9306ca64881d 32333 tux Up
Thanks.
Regards.
Neil Wilson.
On Tue, Jan 7, 2014 at 4:43 PM, Elad Ben Aharon <ebenahar@redhat.com> wrote:
Is it still in the same condition? If yes, please add the outputs from both hosts for:
#virsh -r list #pgrep qemu #vdsClient -s 0 list table (or 'vdsClient 0 list table' if you are working in insecure mode)
Thanks,
Elad Ben Aharon RHEV-QE storage team
----- Original Message ----- From: "Neil" <nwilson123@gmail.com> To: users@ovirt.org Sent: Tuesday, January 7, 2014 4:21:43 PM Subject: [Users] Migration Failed
Hi guys,
I've tried to migrate a VM from one host(node03) to another(node01), and it failed to migrate, and the VM(tux) remained on the original host. I've now tried to migrate the same VM again, and it picks up that the previous migration is still in progress and refuses to migrate.
I've checked for the KVM process on each of the hosts and the VM is definitely still running on node03 so there doesn't appear to be any chance of the VM trying to run on both hosts (which I've had before which is very scary).
These are my versions... and attached are my engine.log and my vdsm.log
Centos 6.5 ovirt-iso-uploader-3.3.1-1.el6.noarch ovirt-host-deploy-1.1.2-1.el6.noarch ovirt-release-el6-9-1.noarch ovirt-engine-setup-3.3.1-2.el6.noarch ovirt-engine-3.3.1-2.el6.noarch ovirt-host-deploy-java-1.1.2-1.el6.noarch ovirt-image-uploader-3.3.1-1.el6.noarch ovirt-engine-dbscripts-3.3.1-2.el6.noarch ovirt-engine-cli-3.3.0.6-1.el6.noarch ovirt-engine-websocket-proxy-3.3.1-2.el6.noarch ovirt-engine-userportal-3.3.1-2.el6.noarch ovirt-log-collector-3.3.1-1.el6.noarch ovirt-engine-tools-3.3.1-2.el6.noarch ovirt-engine-lib-3.3.1-2.el6.noarch ovirt-engine-webadmin-portal-3.3.1-2.el6.noarch ovirt-engine-backend-3.3.1-2.el6.noarch ovirt-engine-sdk-python-3.3.0.8-1.el6.noarch ovirt-engine-restapi-3.3.1-2.el6.noarch
vdsm-python-4.13.0-11.el6.x86_64 vdsm-cli-4.13.0-11.el6.noarch vdsm-xmlrpc-4.13.0-11.el6.noarch vdsm-4.13.0-11.el6.x86_64 vdsm-python-cpopen-4.13.0-11.el6.x86_64
I've had a few issues with this particular installation in the past, as it's from a very old pre release of ovirt, then upgrading to the dreyou repo, then finally moving to the official Centos ovirt repo.
Thanks, any help is greatly appreciated.
Regards.
Neil Wilson.
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
-- Dafna Ron

Thread-847747::INFO::2014-01-07 14:30:32,353::logUtils::44::dispatcher::(wrapper) Run and protect: inappropriateDevices(thiefId='63da7faa-f92a-4652-90f2-b6660a4fb7b3')
Thread-847747::INFO::2014-01-07 14:30:32,354::logUtils::47::dispatcher::(wrapper) Run and protect: inappropriateDevices, Return response: None

Please check if the vm's were booted with a cd...

bject at 0x7fb1f00cbbd0>> log:<logUtils.SimpleLogAdapter instance at 0x7fb1f00be7e8> name:hdc networkDev:False path: readonly:True reqsize:0 serial: truesize:0 *type:cdrom* volExtensionChunk:1024 watermarkLimit:536870912

Traceback (most recent call last):
  File "/usr/share/vdsm/clientIF.py", line 356, in teardownVolumePath
    res = self.irs.teardownImage(drive['domainID'],
  File "/usr/share/vdsm/vm.py", line 1386, in __getitem__
    raise KeyError(key)
KeyError: 'domainID'

Thread-847747::WARNING::2014-01-07 14:30:32,351::clientIF::362::vds::(teardownVolumePath) Drive is not a vdsm image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2 VOLWM_FREE_PCT:50 _blockDev:True _checkIoTuneCategories:<bound method Drive._checkIoTuneCategories of <vm.Drive object at 0x7fb1f00cbc10>> _customize:<bound method Drive._customize of <vm.Drive object at 0x7fb1f00cbc10>> _deviceXML:<disk device="disk" snapshot="no" type="block"> <driver cache="none" error_policy="stop" io="native" name="qemu" type="raw"/> <source dev="/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/0e6991ae-6238-4c61-96d2-ca8fed35161e/images/9f16f896-1da3-4f9a-a305-ac9c4f51a482/e04c6600-abb9-4ebc-a9b3-77b6c536e258"/> <target bus="ide" dev="hda"/> <serial>9f16f896-1da3-4f9a-a305-ac9c4f51a482</serial> <alias name="ide0-0-0"/> <address bus="0" controller="0" target="0" type="drive" unit="0"/>
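The KeyError is the tell-tale: the drive record being torn down here (presumably the hdc cdrom with the empty path shown above) carries no domainID key, yet the teardown path indexes drive['domainID']. A rough, self-contained sketch of that shape follows; the drive dict, the FakeIrs class and the extra imageID/volumeID arguments are assumptions made for illustration (the traceback is truncated after domainID), and only the failing lookup mirrors the log:

# Illustrative stand-in for the teardownVolumePath failure above; not the real
# vdsm clientIF code. Only the drive['domainID'] lookup mirrors the traceback.

class FakeIrs(object):
    # Hypothetical stand-in for vdsm's image/storage interface.
    def teardownImage(self, sdUUID, imgUUID, volUUID):
        return {'status': {'code': 0}}

def teardown_volume_path(irs, drive):
    # Assumes a regular vdsm-image drive that carries domainID/imageID/volumeID.
    # A cdrom that was never a vdsm image has none of these keys, so the very
    # first lookup raises KeyError('domainID'), as in the log.
    return irs.teardownImage(drive['domainID'], drive['imageID'], drive['volumeID'])

if __name__ == '__main__':
    cdrom_drive = {'device': 'cdrom', 'path': '', 'readonly': True}  # no domainID
    try:
        teardown_volume_path(FakeIrs(), cdrom_drive)
    except KeyError as e:
        print('KeyError: %s' % e)  # KeyError: 'domainID'

This is why the question is whether the VMs were booted with a cd attached.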
On 01/08/2014 06:28 AM, Neil wrote:

Hi Dafna,
Thanks for the reply.
Attached is the log from the source server (node03).
I'll reply to your other questions as soon as I'm back in the office this afternoon, have to run off to a meeting.
Regards.
Neil Wilson.
On Tue, Jan 7, 2014 at 8:13 PM, Dafna Ron <dron@redhat.com> wrote:
Ok... several things :)
1. for migration we need to see vdsm logs from both src and dst.
2. Is it possible that the vm has an iso attached? because I see that you are having problems with the iso domain:
2014-01-07 14:26:27,714 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (pool-6-thread-48) Domain e9ab725d-69c1-4a59-b225-b995d095c289:bla-iso was reported with error code 358
Thread-1165153::DEBUG::2014-01-07 13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper) Unknown libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with matching uuid '63da7faa-f92a-4652-90f2-b6660a4fb7b3'
hread-19::ERROR::2014-01-07 13:01:02,621::sdc::143::Storage.StorageDomainCache::(_findDomain) domain e9ab725d-69c1-4a59-b225-b995d095c289 not found Traceback (most recent call last): File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain dom = findMethod(sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain raise se.StorageDomainDoesNotExist(sdUUID) StorageDomainDoesNotExist: Storage domain does not exist: (u'e9ab725d-69c1-4a59-b225-b995d095c289',) Thread-19::ERROR::2014-01-07 13:01:02,622::domainMonitor::225::Storage.DomainMonitorThread::(_monitorDomain) Error while collecting domain e9ab725d-69c1-4a59-b225-b995d095c289 monitoring information Traceback (most recent call last): File "/usr/share/vdsm/storage/domainMonitor.py", line 190, in _monitorDomain self.domain = sdCache.produce(self.sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 98, in produce domain.getRealDomain() File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain return self._cache._realProduce(self._sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 122, in _realProduce domain = self._findDomain(sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain dom = findMethod(sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain raise se.StorageDomainDoesNotExist(sdUUID) StorageDomainDoesNotExist: Storage domain does not exist: (u'e9ab725d-69c1-4a59-b225-b995d095c289',) Dummy-29013::DEBUG::2014-01-07 13:01:03,507::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail) 'dd if=/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd N one)
3. The migration fails with libvirt error but we need the trace from the second log:
Thread-1165153::DEBUG::2014-01-07 13:39:42,451::sampling::292::vm.Vm::(stop) vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stop statistics collection Thread-1163583::DEBUG::2014-01-07 13:39:42,452::sampling::323::vm.Vm::(run) vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats thread finished Thread-1165153::DEBUG::2014-01-07 13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper) Unknown libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with matching uuid '63da7faa-f92a-4652-90f2-b6660 a4fb7b3'
4. But I am worried about this and would like more info about this vm...
Thread-247::ERROR::2014-01-07 15:35:14,868::sampling::355::vm.Vm::(collect) vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats function failed: <AdvancedStatsFunction _highWrite at 0x2ce0998> Traceback (most recent call last): File "/usr/share/vdsm/sampling.py", line 351, in collect statsFunction() File "/usr/share/vdsm/sampling.py", line 226, in __call__ retValue = self._function(*args, **kwargs) File "/usr/share/vdsm/vm.py", line 509, in _highWrite if not vmDrive.blockDev or vmDrive.format != 'cow': AttributeError: 'Drive' object has no attribute 'format'
How did you create this vm? was it from the UI? was it from a script? what are the parameters you used?
Thanks,
Dafna
On 01/07/2014 04:34 PM, Neil wrote:
Hi Elad,
Thanks for assisting me, yes the same condition exists, if I try to migrate Tux it says "The VM Tux is being migrated".
Below are the details requested.
[root@node01 ~]# virsh -r list Id Name State ---------------------------------------------------- 1 adam running
[root@node01 ~]# pgrep qemu 11232 [root@node01 ~]# vdsClient -s 0 list table 63da7faa-f92a-4652-90f2-b6660a4fb7b3 11232 adam Up
[root@node03 ~]# virsh -r list Id Name State ---------------------------------------------------- 7 tux running
[root@node03 ~]# pgrep qemu 32333 [root@node03 ~]# vdsClient -s 0 list table 2736197b-6dc3-4155-9a29-9306ca64881d 32333 tux Up
Thanks.
Regards.
Neil Wilson.
On Tue, Jan 7, 2014 at 4:43 PM, Elad Ben Aharon <ebenahar@redhat.com> wrote:
Is it still in the same condition? If yes, please add the outputs from both hosts for:
#virsh -r list #pgrep qemu #vdsClient -s 0 list table (or 'vdsClient 0 list table' if you are working in insecure mode)
Thanks,
Elad Ben Aharon RHEV-QE storage team
----- Original Message ----- From: "Neil" <nwilson123@gmail.com> To: users@ovirt.org Sent: Tuesday, January 7, 2014 4:21:43 PM Subject: [Users] Migration Failed
Hi guys,
I've tried to migrate a VM from one host(node03) to another(node01), and it failed to migrate, and the VM(tux) remained on the original host. I've now tried to migrate the same VM again, and it picks up that the previous migration is still in progress and refuses to migrate.
I've checked for the KVM process on each of the hosts and the VM is definitely still running on node03 so there doesn't appear to be any chance of the VM trying to run on both hosts (which I've had before which is very scary).
These are my versions... and attached are my engine.log and my vdsm.log
Centos 6.5 ovirt-iso-uploader-3.3.1-1.el6.noarch ovirt-host-deploy-1.1.2-1.el6.noarch ovirt-release-el6-9-1.noarch ovirt-engine-setup-3.3.1-2.el6.noarch ovirt-engine-3.3.1-2.el6.noarch ovirt-host-deploy-java-1.1.2-1.el6.noarch ovirt-image-uploader-3.3.1-1.el6.noarch ovirt-engine-dbscripts-3.3.1-2.el6.noarch ovirt-engine-cli-3.3.0.6-1.el6.noarch ovirt-engine-websocket-proxy-3.3.1-2.el6.noarch ovirt-engine-userportal-3.3.1-2.el6.noarch ovirt-log-collector-3.3.1-1.el6.noarch ovirt-engine-tools-3.3.1-2.el6.noarch ovirt-engine-lib-3.3.1-2.el6.noarch ovirt-engine-webadmin-portal-3.3.1-2.el6.noarch ovirt-engine-backend-3.3.1-2.el6.noarch ovirt-engine-sdk-python-3.3.0.8-1.el6.noarch ovirt-engine-restapi-3.3.1-2.el6.noarch
vdsm-python-4.13.0-11.el6.x86_64 vdsm-cli-4.13.0-11.el6.noarch vdsm-xmlrpc-4.13.0-11.el6.noarch vdsm-4.13.0-11.el6.x86_64 vdsm-python-cpopen-4.13.0-11.el6.x86_64
I've had a few issues with this particular installation in the past, as it's from a very old pre release of ovirt, then upgrading to the dreyou repo, then finally moving to the official Centos ovirt repo.
Thanks, any help is greatly appreciated.
Regards.
Neil Wilson.
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
-- Dafna Ron
-- Dafna Ron

Hi guys,

Apologies for the late reply.

The VM (Tux) was created about 2 years ago; it was converted from a physical machine using Clonezilla. It's been migrated a number of times in the past; only now, when trying to move it off node03, is it giving this error.

I've looked for any attached images/cd's and found none, unfortunately.

Thank you so much for your assistance so far.

Regards.
Neil Wilson.

On Wed, Jan 8, 2014 at 12:23 PM, Dafna Ron <dron@redhat.com> wrote:
Thread-847747::INFO::2014-01-07 14:30:32,353::logUtils::44::dispatcher::(wrapper) Run and protect: inappropriateDevices(thiefId='63da7faa-f92a-4652-90f2-b6660a4fb7b3') Thread-847747::INFO::2014-01-07 14:30:32,354::logUtils::47::dispatcher::(wrapper) Run and protect: inappropriateDevices, Return response: None
Please check if the vm's were booted with a cd...
bject at 0x7fb1f00cbbd0>> log:<logUtils.SimpleLogAdapter instance at 0x7fb1f00be7e8> name:hdc networkDev:False path: readonly:True reqsize:0 serial: truesize:0 *type:cdrom* volExtensionChunk:1024 watermarkLimit:536870912
Traceback (most recent call last): File "/usr/share/vdsm/clientIF.py", line 356, in teardownVolumePath res = self.irs.teardownImage(drive['domainID'], File "/usr/share/vdsm/vm.py", line 1386, in __getitem__ raise KeyError(key) KeyError: 'domainID' Thread-847747::WARNING::2014-01-07 14:30:32,351::clientIF::362::vds::(teardownVolumePath) Drive is not a vdsm image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2 VOLWM_FREE_PCT:50 _blockDev:True _checkIoTuneCategories:<bound method D rive._checkIoTuneCategories of <vm.Drive object at 0x7fb1f00cbc10>> _customize:<bound method Drive._customize of <vm.Drive object at 0x7fb1f00cbc10>> _deviceXML:<disk device="disk" snapshot="no" type="block"> <driver cache="none" error_policy="stop" io="native" name="qemu" type="raw"/> <source dev="/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/0e6991ae-6238-4c61-96d2-ca8fed35161e/images/9f16f896-1da3-4f9a-a305-ac9c4f51a482/e04c6600-abb9-4ebc-a9b3-77b6c536e258"/> <target bus="ide" dev="hda"/> <serial>9f16f896-1da3-4f9a-a305-ac9c4f51a482</serial> <alias name="ide0-0-0"/> <address bus="0" controller="0" target="0" type="drive" unit="0"/>
On 01/08/2014 06:28 AM, Neil wrote:
Hi Dafna,
Thanks for the reply.
Attached is the log from the source server (node03).
I'll reply to your other questions as soon as I'm back in the office this afternoon, have to run off to a meeting.
Regards.
Neil Wilson.
On Tue, Jan 7, 2014 at 8:13 PM, Dafna Ron <dron@redhat.com> wrote:
Ok... several things :)
1. for migration we need to see vdsm logs from both src and dst.
2. Is it possible that the vm has an iso attached? because I see that you are having problems with the iso domain:
2014-01-07 14:26:27,714 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (pool-6-thread-48) Domain e9ab725d-69c1-4a59-b225-b995d095c289:bla-iso was reported with error code 358
Thread-1165153::DEBUG::2014-01-07 13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper) Unknown libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with matching uuid '63da7faa-f92a-4652-90f2-b6660a4fb7b3'
hread-19::ERROR::2014-01-07 13:01:02,621::sdc::143::Storage.StorageDomainCache::(_findDomain) domain e9ab725d-69c1-4a59-b225-b995d095c289 not found Traceback (most recent call last): File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain dom = findMethod(sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain raise se.StorageDomainDoesNotExist(sdUUID) StorageDomainDoesNotExist: Storage domain does not exist: (u'e9ab725d-69c1-4a59-b225-b995d095c289',) Thread-19::ERROR::2014-01-07
13:01:02,622::domainMonitor::225::Storage.DomainMonitorThread::(_monitorDomain) Error while collecting domain e9ab725d-69c1-4a59-b225-b995d095c289 monitoring information Traceback (most recent call last): File "/usr/share/vdsm/storage/domainMonitor.py", line 190, in _monitorDomain self.domain = sdCache.produce(self.sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 98, in produce domain.getRealDomain() File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain return self._cache._realProduce(self._sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 122, in _realProduce domain = self._findDomain(sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain dom = findMethod(sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain raise se.StorageDomainDoesNotExist(sdUUID) StorageDomainDoesNotExist: Storage domain does not exist: (u'e9ab725d-69c1-4a59-b225-b995d095c289',) Dummy-29013::DEBUG::2014-01-07 13:01:03,507::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail) 'dd
if=/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd N one)
3. The migration fails with libvirt error but we need the trace from the second log:
Thread-1165153::DEBUG::2014-01-07 13:39:42,451::sampling::292::vm.Vm::(stop) vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stop statistics collection Thread-1163583::DEBUG::2014-01-07 13:39:42,452::sampling::323::vm.Vm::(run) vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats thread finished Thread-1165153::DEBUG::2014-01-07 13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper) Unknown libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with matching uuid '63da7faa-f92a-4652-90f2-b6660 a4fb7b3'
4. But I am worried about this and would like more info about this vm...
Thread-247::ERROR::2014-01-07 15:35:14,868::sampling::355::vm.Vm::(collect) vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats function failed: <AdvancedStatsFunction _highWrite at 0x2ce0998> Traceback (most recent call last): File "/usr/share/vdsm/sampling.py", line 351, in collect statsFunction() File "/usr/share/vdsm/sampling.py", line 226, in __call__ retValue = self._function(*args, **kwargs) File "/usr/share/vdsm/vm.py", line 509, in _highWrite if not vmDrive.blockDev or vmDrive.format != 'cow': AttributeError: 'Drive' object has no attribute 'format'
How did you create this vm? was it from the UI? was it from a script? what are the parameters you used?
Thanks,
Dafna
On 01/07/2014 04:34 PM, Neil wrote:
Hi Elad,
Thanks for assisting me, yes the same condition exists, if I try to migrate Tux it says "The VM Tux is being migrated".
Below are the details requested.
[root@node01 ~]# virsh -r list Id Name State ---------------------------------------------------- 1 adam running
[root@node01 ~]# pgrep qemu 11232 [root@node01 ~]# vdsClient -s 0 list table 63da7faa-f92a-4652-90f2-b6660a4fb7b3 11232 adam Up
[root@node03 ~]# virsh -r list Id Name State ---------------------------------------------------- 7 tux running
[root@node03 ~]# pgrep qemu 32333 [root@node03 ~]# vdsClient -s 0 list table 2736197b-6dc3-4155-9a29-9306ca64881d 32333 tux Up
Thanks.
Regards.
Neil Wilson.
On Tue, Jan 7, 2014 at 4:43 PM, Elad Ben Aharon <ebenahar@redhat.com> wrote:
Is it still in the same condition? If yes, please add the outputs from both hosts for:
#virsh -r list #pgrep qemu #vdsClient -s 0 list table (or 'vdsClient 0 list table' if you are working in insecure mode)
Thanks,
Elad Ben Aharon RHEV-QE storage team
----- Original Message ----- From: "Neil" <nwilson123@gmail.com> To: users@ovirt.org Sent: Tuesday, January 7, 2014 4:21:43 PM Subject: [Users] Migration Failed
Hi guys,
I've tried to migrate a VM from one host(node03) to another(node01), and it failed to migrate, and the VM(tux) remained on the original host. I've now tried to migrate the same VM again, and it picks up that the previous migration is still in progress and refuses to migrate.
I've checked for the KVM process on each of the hosts and the VM is definitely still running on node03 so there doesn't appear to be any chance of the VM trying to run on both hosts (which I've had before which is very scary).
These are my versions... and attached are my engine.log and my vdsm.log
Centos 6.5 ovirt-iso-uploader-3.3.1-1.el6.noarch ovirt-host-deploy-1.1.2-1.el6.noarch ovirt-release-el6-9-1.noarch ovirt-engine-setup-3.3.1-2.el6.noarch ovirt-engine-3.3.1-2.el6.noarch ovirt-host-deploy-java-1.1.2-1.el6.noarch ovirt-image-uploader-3.3.1-1.el6.noarch ovirt-engine-dbscripts-3.3.1-2.el6.noarch ovirt-engine-cli-3.3.0.6-1.el6.noarch ovirt-engine-websocket-proxy-3.3.1-2.el6.noarch ovirt-engine-userportal-3.3.1-2.el6.noarch ovirt-log-collector-3.3.1-1.el6.noarch ovirt-engine-tools-3.3.1-2.el6.noarch ovirt-engine-lib-3.3.1-2.el6.noarch ovirt-engine-webadmin-portal-3.3.1-2.el6.noarch ovirt-engine-backend-3.3.1-2.el6.noarch ovirt-engine-sdk-python-3.3.0.8-1.el6.noarch ovirt-engine-restapi-3.3.1-2.el6.noarch
vdsm-python-4.13.0-11.el6.x86_64 vdsm-cli-4.13.0-11.el6.noarch vdsm-xmlrpc-4.13.0-11.el6.noarch vdsm-4.13.0-11.el6.x86_64 vdsm-python-cpopen-4.13.0-11.el6.x86_64
I've had a few issues with this particular installation in the past, as it's from a very old pre release of ovirt, then upgrading to the dreyou repo, then finally moving to the official Centos ovirt repo.
Thanks, any help is greatly appreciated.
Regards.
Neil Wilson.
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
-- Dafna Ron
-- Dafna Ron

Hi Neil,

The error in the log suggests that the vm is missing a disk...

Can you look at the vm dialogue and see what boot devices the vm has?
Can you write to the vm?
Can you please dump the vm xml from libvirt? (it's one of the commands that you have in virsh)

Thanks,
Dafna
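For reference, the dump being asked for can be taken on the host currently running the VM with "virsh -r dumpxml tux", or with the libvirt Python bindings. A small read-only example; the 'tux' name and the qemu:///system URI are the ones from this thread, and it assumes libvirt-python is available on the host (as it is on a vdsm node):

# Read-only dump of the domain XML via the libvirt Python bindings,
# roughly equivalent to running: virsh -r dumpxml tux
import libvirt

conn = libvirt.openReadOnly('qemu:///system')  # read-only connection to the local qemu driver
try:
    dom = conn.lookupByName('tux')
    print(dom.XMLDesc(0))  # flags=0: dump the current (live) definition
finally:
    conn.close()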
On 01/08/2014 02:42 PM, Neil wrote:

Hi guys,
Apologies for the late reply.
The VM (Tux) was created about 2 years ago, it was converted from a physical machine using Clonezilla. It's been migrated a number of times in the past, only now when trying to move it off node03 is it giving this error.
I've looked for any attached images/cd's and found none unfortunately.
Thank you so much for your assistance so far.
Regards.
Neil Wilson.
On Wed, Jan 8, 2014 at 12:23 PM, Dafna Ron <dron@redhat.com> wrote:
Thread-847747::INFO::2014-01-07 14:30:32,353::logUtils::44::dispatcher::(wrapper) Run and protect: inappropriateDevices(thiefId='63da7faa-f92a-4652-90f2-b6660a4fb7b3') Thread-847747::INFO::2014-01-07 14:30:32,354::logUtils::47::dispatcher::(wrapper) Run and protect: inappropriateDevices, Return response: None
Please check if the vm's were booted with a cd...
bject at 0x7fb1f00cbbd0>> log:<logUtils.SimpleLogAdapter instance at 0x7fb1f00be7e8> name:hdc networkDev:False path: readonly:True reqsize:0 serial: truesize:0 *type:cdrom* volExtensionChunk:1024 watermarkLimit:536870912
Traceback (most recent call last): File "/usr/share/vdsm/clientIF.py", line 356, in teardownVolumePath res = self.irs.teardownImage(drive['domainID'], File "/usr/share/vdsm/vm.py", line 1386, in __getitem__ raise KeyError(key) KeyError: 'domainID' Thread-847747::WARNING::2014-01-07 14:30:32,351::clientIF::362::vds::(teardownVolumePath) Drive is not a vdsm image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2 VOLWM_FREE_PCT:50 _blockDev:True _checkIoTuneCategories:<bound method D rive._checkIoTuneCategories of <vm.Drive object at 0x7fb1f00cbc10>> _customize:<bound method Drive._customize of <vm.Drive object at 0x7fb1f00cbc10>> _deviceXML:<disk device="disk" snapshot="no" type="block"> <driver cache="none" error_policy="stop" io="native" name="qemu" type="raw"/> <source dev="/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/0e6991ae-6238-4c61-96d2-ca8fed35161e/images/9f16f896-1da3-4f9a-a305-ac9c4f51a482/e04c6600-abb9-4ebc-a9b3-77b6c536e258"/> <target bus="ide" dev="hda"/> <serial>9f16f896-1da3-4f9a-a305-ac9c4f51a482</serial> <alias name="ide0-0-0"/> <address bus="0" controller="0" target="0" type="drive" unit="0"/>
On 01/08/2014 06:28 AM, Neil wrote:
Hi Dafna,
Thanks for the reply.
Attached is the log from the source server (node03).
I'll reply to your other questions as soon as I'm back in the office this afternoon, have to run off to a meeting.
Regards.
Neil Wilson.
On Tue, Jan 7, 2014 at 8:13 PM, Dafna Ron <dron@redhat.com> wrote:
Ok... several things :)
1. for migration we need to see vdsm logs from both src and dst.
2. Is it possible that the vm has an iso attached? because I see that you are having problems with the iso domain:
2014-01-07 14:26:27,714 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (pool-6-thread-48) Domain e9ab725d-69c1-4a59-b225-b995d095c289:bla-iso was reported with error code 358
Thread-1165153::DEBUG::2014-01-07 13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper) Unknown libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with matching uuid '63da7faa-f92a-4652-90f2-b6660a4fb7b3'
hread-19::ERROR::2014-01-07 13:01:02,621::sdc::143::Storage.StorageDomainCache::(_findDomain) domain e9ab725d-69c1-4a59-b225-b995d095c289 not found Traceback (most recent call last): File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain dom = findMethod(sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain raise se.StorageDomainDoesNotExist(sdUUID) StorageDomainDoesNotExist: Storage domain does not exist: (u'e9ab725d-69c1-4a59-b225-b995d095c289',) Thread-19::ERROR::2014-01-07
13:01:02,622::domainMonitor::225::Storage.DomainMonitorThread::(_monitorDomain) Error while collecting domain e9ab725d-69c1-4a59-b225-b995d095c289 monitoring information Traceback (most recent call last): File "/usr/share/vdsm/storage/domainMonitor.py", line 190, in _monitorDomain self.domain = sdCache.produce(self.sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 98, in produce domain.getRealDomain() File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain return self._cache._realProduce(self._sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 122, in _realProduce domain = self._findDomain(sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain dom = findMethod(sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain raise se.StorageDomainDoesNotExist(sdUUID) StorageDomainDoesNotExist: Storage domain does not exist: (u'e9ab725d-69c1-4a59-b225-b995d095c289',) Dummy-29013::DEBUG::2014-01-07 13:01:03,507::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail) 'dd
if=/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd N one)
3. The migration fails with libvirt error but we need the trace from the second log:
Thread-1165153::DEBUG::2014-01-07 13:39:42,451::sampling::292::vm.Vm::(stop) vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stop statistics collection Thread-1163583::DEBUG::2014-01-07 13:39:42,452::sampling::323::vm.Vm::(run) vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats thread finished Thread-1165153::DEBUG::2014-01-07 13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper) Unknown libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with matching uuid '63da7faa-f92a-4652-90f2-b6660 a4fb7b3'
4. But I am worried about this and would like more info about this vm...
Thread-247::ERROR::2014-01-07 15:35:14,868::sampling::355::vm.Vm::(collect) vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats function failed: <AdvancedStatsFunction _highWrite at 0x2ce0998> Traceback (most recent call last): File "/usr/share/vdsm/sampling.py", line 351, in collect statsFunction() File "/usr/share/vdsm/sampling.py", line 226, in __call__ retValue = self._function(*args, **kwargs) File "/usr/share/vdsm/vm.py", line 509, in _highWrite if not vmDrive.blockDev or vmDrive.format != 'cow': AttributeError: 'Drive' object has no attribute 'format'
How did you create this vm? was it from the UI? was it from a script? what are the parameters you used?
Thanks,
Dafna
On 01/07/2014 04:34 PM, Neil wrote:
Hi Elad,
Thanks for assisting me, yes the same condition exists, if I try to migrate Tux it says "The VM Tux is being migrated".
Below are the details requested.
[root@node01 ~]# virsh -r list Id Name State ---------------------------------------------------- 1 adam running
[root@node01 ~]# pgrep qemu 11232 [root@node01 ~]# vdsClient -s 0 list table 63da7faa-f92a-4652-90f2-b6660a4fb7b3 11232 adam Up
[root@node03 ~]# virsh -r list Id Name State ---------------------------------------------------- 7 tux running
[root@node03 ~]# pgrep qemu 32333 [root@node03 ~]# vdsClient -s 0 list table 2736197b-6dc3-4155-9a29-9306ca64881d 32333 tux Up
Thanks.
Regards.
Neil Wilson.
On Tue, Jan 7, 2014 at 4:43 PM, Elad Ben Aharon <ebenahar@redhat.com> wrote:
Is it still in the same condition? If yes, please add the outputs from both hosts for:
#virsh -r list #pgrep qemu #vdsClient -s 0 list table (or 'vdsClient 0 list table' if you are working in insecure mode)
Thanks,
Elad Ben Aharon RHEV-QE storage team
----- Original Message ----- From: "Neil" <nwilson123@gmail.com> To: users@ovirt.org Sent: Tuesday, January 7, 2014 4:21:43 PM Subject: [Users] Migration Failed
Hi guys,
I've tried to migrate a VM from one host(node03) to another(node01), and it failed to migrate, and the VM(tux) remained on the original host. I've now tried to migrate the same VM again, and it picks up that the previous migration is still in progress and refuses to migrate.
I've checked for the KVM process on each of the hosts and the VM is definitely still running on node03 so there doesn't appear to be any chance of the VM trying to run on both hosts (which I've had before which is very scary).
These are my versions... and attached are my engine.log and my vdsm.log
Centos 6.5 ovirt-iso-uploader-3.3.1-1.el6.noarch ovirt-host-deploy-1.1.2-1.el6.noarch ovirt-release-el6-9-1.noarch ovirt-engine-setup-3.3.1-2.el6.noarch ovirt-engine-3.3.1-2.el6.noarch ovirt-host-deploy-java-1.1.2-1.el6.noarch ovirt-image-uploader-3.3.1-1.el6.noarch ovirt-engine-dbscripts-3.3.1-2.el6.noarch ovirt-engine-cli-3.3.0.6-1.el6.noarch ovirt-engine-websocket-proxy-3.3.1-2.el6.noarch ovirt-engine-userportal-3.3.1-2.el6.noarch ovirt-log-collector-3.3.1-1.el6.noarch ovirt-engine-tools-3.3.1-2.el6.noarch ovirt-engine-lib-3.3.1-2.el6.noarch ovirt-engine-webadmin-portal-3.3.1-2.el6.noarch ovirt-engine-backend-3.3.1-2.el6.noarch ovirt-engine-sdk-python-3.3.0.8-1.el6.noarch ovirt-engine-restapi-3.3.1-2.el6.noarch
vdsm-python-4.13.0-11.el6.x86_64 vdsm-cli-4.13.0-11.el6.noarch vdsm-xmlrpc-4.13.0-11.el6.noarch vdsm-4.13.0-11.el6.x86_64 vdsm-python-cpopen-4.13.0-11.el6.x86_64
I've had a few issues with this particular installation in the past, as it's from a very old pre release of ovirt, then upgrading to the dreyou repo, then finally moving to the official Centos ovirt repo.
Thanks, any help is greatly appreciated.
Regards.
Neil Wilson.
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
-- Dafna Ron
-- Dafna Ron
-- Dafna Ron

Hi Dafna,

Apologies for the late reply, I was out of my office yesterday. Just to get back to you on your questions.

"can you look at the vm dialogue and see what boot devices the vm has?"

Sorry, I'm not sure where you want me to get this info from: inside the oVirt GUI or on the VM itself? The VM has one 2TB LUN assigned. Inside the VM these are the fstab parameters:

[root@tux ~]# cat /etc/fstab
/dev/VolGroup00/LogVol00  /         ext3    defaults                     1 0
/dev/vda1                 /boot     ext3    defaults                     1 0
tmpfs                     /dev/shm  tmpfs   defaults                     0 0
devpts                    /dev/pts  devpts  gid=5,mode=620               0 0
sysfs                     /sys      sysfs   defaults                     0 0
proc                      /proc     proc    defaults                     0 0
/dev/VolGroup00/LogVol01  swap      swap    defaults                     0 0
/dev/VolGroup00/LogVol02  /homes    xfs     defaults,usrquota,grpquota   1 0

"can you write to the vm?"

Yes, the machine is fully functioning; it's their main PDC and hosts all of their files.

"can you please dump the vm xml from libvirt? (it's one of the commands that you have in virsh)"

Below is the xml:

<domain type='kvm' id='7'>
  <name>tux</name>
  <uuid>2736197b-6dc3-4155-9a29-9306ca64881d</uuid>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <shares>1020</shares>
  </cputune>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>oVirt</entry>
      <entry name='product'>oVirt Node</entry>
      <entry name='version'>6-4.el6.centos.10</entry>
      <entry name='serial'>4C4C4544-0038-5310-8050-C6C04F34354A</entry>
      <entry name='uuid'>2736197b-6dc3-4155-9a29-9306ca64881d</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='rhel6.4.0'>hvm</type>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Westmere</model>
    <topology sockets='1' cores='4' threads='1'/>
  </cpu>
  <clock offset='variable' adjustment='0' basis='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source startupPolicy='optional'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <serial></serial>
      <alias name='ide0-1-0'/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/>
      <source dev='/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/0e6991ae-6238-4c61-96d2-ca8fed35161e/images/fd1a562a-3ba5-4ddb-a643-37912a6ae86f/f747ba2b-98e1-47f5-805b-6bb173bfd6ff'/>
      <target dev='vda' bus='virtio'/>
      <serial>fd1a562a-3ba5-4ddb-a643-37912a6ae86f</serial>
      <boot order='1'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <controller type='ide' index='0'>
      <alias name='ide0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='usb' index='0'>
      <alias name='usb0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='00:1a:4a:a8:7a:00'/>
      <source bridge='ovirtmgmt'/>
      <target dev='vnet5'/>
      <model type='virtio'/>
      <filterref filter='vdsm-no-mac-spoofing'/>
      <link state='up'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/tux.com.redhat.rhevm.vdsm'/>
      <target type='virtio' name='com.redhat.rhevm.vdsm'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/tux.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <alias name='channel1'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0'/>
      <alias name='channel2'/>
      <address type='virtio-serial' controller='0' bus='0' port='3'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <graphics type='spice' port='5912' tlsPort='5913' autoport='yes' listen='0' keymap='en-us' passwdValidTo='2013-09-20T07:56:54' connected='disconnect'>
      <listen type='address' address='0'/>
      <channel name='main' mode='secure'/>
      <channel name='display' mode='secure'/>
      <channel name='inputs' mode='secure'/>
      <channel name='cursor' mode='secure'/>
      <channel name='playback' mode='secure'/>
      <channel name='record' mode='secure'/>
      <channel name='smartcard' mode='secure'/>
      <channel name='usbredir' mode='secure'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='65536' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='none'/>
</domain>

Thank you very much for your help.

Regards.
Neil Wilson.
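As a quick way to cross-check the earlier cd question against a dump like the one above, the disk devices can be listed straight from the XML with the standard library. A small sketch; the tux.xml file name is hypothetical (just the dumpxml output saved to a file). On this dump it would show the hdc cdrom with an empty source next to the vda virtio disk:

# List the disk devices (and their sources) from a saved 'virsh dumpxml' file.
# 'tux.xml' is assumed to be the XML above saved to disk; any dumpxml output works.
import xml.etree.ElementTree as ET

tree = ET.parse('tux.xml')
for disk in tree.findall('./devices/disk'):
    device = disk.get('device')            # 'disk' or 'cdrom'
    target = disk.find('target')
    source = disk.find('source')
    dev = target.get('dev') if target is not None else '?'
    # A cdrom whose <source> has no file/dev attribute is an empty (ejected) drive.
    path = (source.get('file') or source.get('dev')) if source is not None else None
    print('%-6s %-4s %s' % (device, dev, path or '<empty>'))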
On Wed, Jan 8, 2014 at 4:55 PM, Dafna Ron <dron@redhat.com> wrote:

Hi Neil,
the error in the log suggests that the vm is missing a disk... can you look at the vm dialogue and see what boot devices the vm has? can you write to the vm? can you please dump the vm xml from libvirt? (it's one of the commands that you have in virsh)
Thanks,
Dafna
On 01/08/2014 02:42 PM, Neil wrote:
Hi guys,
Apologies for the late reply.
The VM (Tux) was created about 2 years ago, it was converted from a physical machine using Clonezilla. It's been migrated a number of times in the past, only now when trying to move it off node03 is it giving this error.
I've looked for any attached images/cd's and found none unfortunately.
Thank you so much for your assistance so far.
Regards.
Neil Wilson.
On Wed, Jan 8, 2014 at 12:23 PM, Dafna Ron <dron@redhat.com> wrote:
Thread-847747::INFO::2014-01-07 14:30:32,353::logUtils::44::dispatcher::(wrapper) Run and protect: inappropriateDevices(thiefId='63da7faa-f92a-4652-90f2-b6660a4fb7b3') Thread-847747::INFO::2014-01-07 14:30:32,354::logUtils::47::dispatcher::(wrapper) Run and protect: inappropriateDevices, Return response: None
Please check if the vm's were booted with a cd...
bject at 0x7fb1f00cbbd0>> log:<logUtils.SimpleLogAdapter instance at 0x7fb1f00be7e8> name:hdc networkDev:False path: readonly:True reqsize:0 serial: truesize:0 *type:cdrom* volExtensionChunk:1024 watermarkLimit:536870912
Traceback (most recent call last): File "/usr/share/vdsm/clientIF.py", line 356, in teardownVolumePath res = self.irs.teardownImage(drive['domainID'], File "/usr/share/vdsm/vm.py", line 1386, in __getitem__ raise KeyError(key) KeyError: 'domainID' Thread-847747::WARNING::2014-01-07 14:30:32,351::clientIF::362::vds::(teardownVolumePath) Drive is not a vdsm image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2 VOLWM_FREE_PCT:50 _blockDev:True _checkIoTuneCategories:<bound method D rive._checkIoTuneCategories of <vm.Drive object at 0x7fb1f00cbc10>> _customize:<bound method Drive._customize of <vm.Drive object at 0x7fb1f00cbc10>> _deviceXML:<disk device="disk" snapshot="no" type="block"> <driver cache="none" error_policy="stop" io="native" name="qemu" type="raw"/> <source
dev="/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/0e6991ae-6238-4c61-96d2-ca8fed35161e/images/9f16f896-1da3-4f9a-a305-ac9c4f51a482/e04c6600-abb9-4ebc-a9b3-77b6c536e258"/> <target bus="ide" dev="hda"/> <serial>9f16f896-1da3-4f9a-a305-ac9c4f51a482</serial> <alias name="ide0-0-0"/> <address bus="0" controller="0" target="0" type="drive" unit="0"/>
On 01/08/2014 06:28 AM, Neil wrote:
Hi Dafna,
Thanks for the reply.
Attached is the log from the source server (node03).
I'll reply to your other questions as soon as I'm back in the office this afternoon, have to run off to a meeting.
Regards.
Neil Wilson.
On Tue, Jan 7, 2014 at 8:13 PM, Dafna Ron <dron@redhat.com> wrote:
Ok... several things :)
1. for migration we need to see vdsm logs from both src and dst.
2. Is it possible that the vm has an iso attached? Because I see that you are having problems with the iso domain (a quick way to check that domain from the host is sketched after the log excerpts below):
2014-01-07 14:26:27,714 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (pool-6-thread-48) Domain e9ab725d-69c1-4a59-b225-b995d095c289:bla-iso was reported with error code 358
Thread-1165153::DEBUG::2014-01-07 13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper) Unknown libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with matching uuid '63da7faa-f92a-4652-90f2-b6660a4fb7b3'
Thread-19::ERROR::2014-01-07 13:01:02,621::sdc::143::Storage.StorageDomainCache::(_findDomain) domain e9ab725d-69c1-4a59-b225-b995d095c289 not found
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: (u'e9ab725d-69c1-4a59-b225-b995d095c289',)
Thread-19::ERROR::2014-01-07 13:01:02,622::domainMonitor::225::Storage.DomainMonitorThread::(_monitorDomain) Error while collecting domain e9ab725d-69c1-4a59-b225-b995d095c289 monitoring information
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/domainMonitor.py", line 190, in _monitorDomain
    self.domain = sdCache.produce(self.sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 98, in produce
    domain.getRealDomain()
  File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
    return self._cache._realProduce(self._sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 122, in _realProduce
    domain = self._findDomain(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: (u'e9ab725d-69c1-4a59-b225-b995d095c289',)
Dummy-29013::DEBUG::2014-01-07 13:01:03,507::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail) 'dd if=/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None)
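As a quick check on that ISO domain from either host, something along these lines should show whether vdsm can still see it at all. This is only a sketch; the verbs are the standard vdsClient storage verbs, and the UUID is the one reported in the log above:

#vdsClient -s 0 getStorageDomainsList
#vdsClient -s 0 getStorageDomainInfo e9ab725d-69c1-4a59-b225-b995d095c289

If the second command fails the same way the monitor thread does, the engine's "bla-iso ... error code 358" message likely just reflects the ISO domain being unreachable rather than the migration itself.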
3. The migration fails with a libvirt error, but we need the trace from the second log:
Thread-1165153::DEBUG::2014-01-07 13:39:42,451::sampling::292::vm.Vm::(stop) vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stop statistics collection
Thread-1163583::DEBUG::2014-01-07 13:39:42,452::sampling::323::vm.Vm::(run) vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats thread finished
Thread-1165153::DEBUG::2014-01-07 13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper) Unknown libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with matching uuid '63da7faa-f92a-4652-90f2-b6660a4fb7b3'
4. But I am worried about this and would like more info about this vm...
Thread-247::ERROR::2014-01-07 15:35:14,868::sampling::355::vm.Vm::(collect) vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats function failed: <AdvancedStatsFunction _highWrite at 0x2ce0998>
Traceback (most recent call last):
  File "/usr/share/vdsm/sampling.py", line 351, in collect
    statsFunction()
  File "/usr/share/vdsm/sampling.py", line 226, in __call__
    retValue = self._function(*args, **kwargs)
  File "/usr/share/vdsm/vm.py", line 509, in _highWrite
    if not vmDrive.blockDev or vmDrive.format != 'cow':
AttributeError: 'Drive' object has no attribute 'format'
How did you create this vm? Was it from the UI? Was it from a script? What are the parameters you used?
Thanks,
Dafna
On 01/07/2014 04:34 PM, Neil wrote:
Hi Elad,
Thanks for assisting me. Yes, the same condition exists; if I try to migrate Tux it says "The VM Tux is being migrated".
Below are the details requested.
[root@node01 ~]# virsh -r list
 Id    Name                           State
----------------------------------------------------
 1     adam                           running

[root@node01 ~]# pgrep qemu
11232
[root@node01 ~]# vdsClient -s 0 list table
63da7faa-f92a-4652-90f2-b6660a4fb7b3  11232  adam  Up

[root@node03 ~]# virsh -r list
 Id    Name                           State
----------------------------------------------------
 7     tux                            running

[root@node03 ~]# pgrep qemu
32333
[root@node03 ~]# vdsClient -s 0 list table
2736197b-6dc3-4155-9a29-9306ca64881d  32333  tux  Up
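For what it's worth, the stuck "being migrated" state can usually be inspected (and, if necessary, cleared) from the source host as well. A sketch, assuming this vdsClient build exposes the migrateStatus and migrateCancel verbs, and using tux's vmId from the table above:

#vdsClient -s 0 migrateStatus 2736197b-6dc3-4155-9a29-9306ca64881d
#vdsClient -s 0 migrateCancel 2736197b-6dc3-4155-9a29-9306ca64881d

migrateStatus is read-only; migrateCancel actually aborts the in-flight migration, so it is only worth running once the logs confirm the migration really is wedged.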
Thanks.
Regards.
Neil Wilson.
On Tue, Jan 7, 2014 at 4:43 PM, Elad Ben Aharon <ebenahar@redhat.com> wrote: > > Is it still in the same condition? > If yes, please add the outputs from both hosts for: > > #virsh -r list > #pgrep qemu > #vdsClient -s 0 list table (or 'vdsClient 0 list table' if you > are > working in insecure mode) > > > Thnaks, > > Elad Ben Aharon > RHEV-QE storage team > > > > > ----- Original Message ----- > From: "Neil" <nwilson123@gmail.com> > To: users@ovirt.org > Sent: Tuesday, January 7, 2014 4:21:43 PM > Subject: [Users] Migration Failed > > Hi guys, > > I've tried to migrate a VM from one host(node03) to another(node01), > and it failed to migrate, and the VM(tux) remained on the original > host. I've now tried to migrate the same VM again, and it picks up > that the previous migration is still in progress and refuses to > migrate. > > I've checked for the KVM process on each of the hosts and the VM is > definitely still running on node03 so there doesn't appear to be any > chance of the VM trying to run on both hosts (which I've had before > which is very scary). > > These are my versions... and attached are my engine.log and my > vdsm.log > > Centos 6.5 > ovirt-iso-uploader-3.3.1-1.el6.noarch > ovirt-host-deploy-1.1.2-1.el6.noarch > ovirt-release-el6-9-1.noarch > ovirt-engine-setup-3.3.1-2.el6.noarch > ovirt-engine-3.3.1-2.el6.noarch > ovirt-host-deploy-java-1.1.2-1.el6.noarch > ovirt-image-uploader-3.3.1-1.el6.noarch > ovirt-engine-dbscripts-3.3.1-2.el6.noarch > ovirt-engine-cli-3.3.0.6-1.el6.noarch > ovirt-engine-websocket-proxy-3.3.1-2.el6.noarch > ovirt-engine-userportal-3.3.1-2.el6.noarch > ovirt-log-collector-3.3.1-1.el6.noarch > ovirt-engine-tools-3.3.1-2.el6.noarch > ovirt-engine-lib-3.3.1-2.el6.noarch > ovirt-engine-webadmin-portal-3.3.1-2.el6.noarch > ovirt-engine-backend-3.3.1-2.el6.noarch > ovirt-engine-sdk-python-3.3.0.8-1.el6.noarch > ovirt-engine-restapi-3.3.1-2.el6.noarch > > > vdsm-python-4.13.0-11.el6.x86_64 > vdsm-cli-4.13.0-11.el6.noarch > vdsm-xmlrpc-4.13.0-11.el6.noarch > vdsm-4.13.0-11.el6.x86_64 > vdsm-python-cpopen-4.13.0-11.el6.x86_64 > > I've had a few issues with this particular installation in the past, > as it's from a very old pre release of ovirt, then upgrading to the > dreyou repo, then finally moving to the official Centos ovirt repo. > > Thanks, any help is greatly appreciated. > > Regards. > > Neil Wilson. > > _______________________________________________ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users
-- Dafna Ron
-- Dafna Ron
-- Dafna Ron

Good morning everyone,

Sorry to trouble you again, anyone have any ideas on what to try next?

Thank you so much,

Regards.

Neil Wilson.

On Fri, Jan 10, 2014 at 8:31 AM, Neil <nwilson123@gmail.com> wrote:
Hi Dafna,
Apologies for the late reply, I was out of my office yesterday.
Just to get back to you on your questions.
> can you look at the vm dialogue and see what boot devices the vm has?
Sorry I'm not sure where you want me to get this info from? Inside the ovirt GUI or on the VM itself. The VM has one 2TB LUN assigned. Then inside the VM this is the fstab parameters..
[root@tux ~]# cat /etc/fstab
/dev/VolGroup00/LogVol00  /         ext3    defaults                    1 0
/dev/vda1                 /boot     ext3    defaults                    1 0
tmpfs                     /dev/shm  tmpfs   defaults                    0 0
devpts                    /dev/pts  devpts  gid=5,mode=620              0 0
sysfs                     /sys      sysfs   defaults                    0 0
proc                      /proc     proc    defaults                    0 0
/dev/VolGroup00/LogVol01  swap      swap    defaults                    0 0
/dev/VolGroup00/LogVol02  /homes    xfs     defaults,usrquota,grpquota  1 0
> can you write to the vm?
Yes the machine is fully functioning, it's their main PDC and hosts all of their files.
> can you please dump the vm xml from libvirt? (it's one of the commands that you have in virsh)
Below is the xml
<domain type='kvm' id='7'> <name>tux</name> <uuid>2736197b-6dc3-4155-9a29-9306ca64881d</uuid> <memory unit='KiB'>8388608</memory> <currentMemory unit='KiB'>8388608</currentMemory> <vcpu placement='static'>4</vcpu> <cputune> <shares>1020</shares> </cputune> <sysinfo type='smbios'> <system> <entry name='manufacturer'>oVirt</entry> <entry name='product'>oVirt Node</entry> <entry name='version'>6-4.el6.centos.10</entry> <entry name='serial'>4C4C4544-0038-5310-8050-C6C04F34354A</entry> <entry name='uuid'>2736197b-6dc3-4155-9a29-9306ca64881d</entry> </system> </sysinfo> <os> <type arch='x86_64' machine='rhel6.4.0'>hvm</type> <smbios mode='sysinfo'/> </os> <features> <acpi/> </features> <cpu mode='custom' match='exact'> <model fallback='allow'>Westmere</model> <topology sockets='1' cores='4' threads='1'/> </cpu> <clock offset='variable' adjustment='0' basis='utc'> <timer name='rtc' tickpolicy='catchup'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>destroy</on_crash> <devices> <emulator>/usr/libexec/qemu-kvm</emulator> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source startupPolicy='optional'/> <target dev='hdc' bus='ide'/> <readonly/> <serial></serial> <alias name='ide0-1-0'/> <address type='drive' controller='0' bus='1' target='0' unit='0'/> </disk> <disk type='block' device='disk' snapshot='no'> <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/> <source dev='/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/0e6991ae-6238-4c61-96d2-ca8fed35161e/images/fd1a562a-3ba5-4ddb-a643-37912a6ae86f/f747ba2b-98e1-47f5-805b-6bb173bfd6ff'/> <target dev='vda' bus='virtio'/> <serial>fd1a562a-3ba5-4ddb-a643-37912a6ae86f</serial> <boot order='1'/> <alias name='virtio-disk0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </disk> <controller type='ide' index='0'> <alias name='ide0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <controller type='virtio-serial' index='0'> <alias name='virtio-serial0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </controller> <controller type='usb' index='0'> <alias name='usb0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/> </controller> <interface type='bridge'> <mac address='00:1a:4a:a8:7a:00'/> <source bridge='ovirtmgmt'/> <target dev='vnet5'/> <model type='virtio'/> <filterref filter='vdsm-no-mac-spoofing'/> <link state='up'/> <alias name='net0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </interface> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channels/tux.com.redhat.rhevm.vdsm'/> <target type='virtio' name='com.redhat.rhevm.vdsm'/> <alias name='channel0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channels/tux.org.qemu.guest_agent.0'/> <target type='virtio' name='org.qemu.guest_agent.0'/> <alias name='channel1'/> <address type='virtio-serial' controller='0' bus='0' port='2'/> </channel> <channel type='spicevmc'> <target type='virtio' name='com.redhat.spice.0'/> <alias name='channel2'/> <address type='virtio-serial' controller='0' bus='0' port='3'/> </channel> <input type='mouse' bus='ps2'/> <graphics type='spice' port='5912' tlsPort='5913' autoport='yes' listen='0' keymap='en-us' passwdValidTo='2013-09-20T07:56:54' connected='disconnect'> <listen type='address' address='0'/> <channel 
name='main' mode='secure'/> <channel name='display' mode='secure'/> <channel name='inputs' mode='secure'/> <channel name='cursor' mode='secure'/> <channel name='playback' mode='secure'/> <channel name='record' mode='secure'/> <channel name='smartcard' mode='secure'/> <channel name='usbredir' mode='secure'/> </graphics> <video> <model type='qxl' ram='65536' vram='65536' heads='1'/> <alias name='video0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </video> <memballoon model='virtio'> <alias name='balloon0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </memballoon> </devices> <seclabel type='none'/> </domain>
Thank you very much for your help.
Regards.
Neil Wilson.
On Wed, Jan 8, 2014 at 4:55 PM, Dafna Ron <dron@redhat.com> wrote:
Hi Neil,
the error in the log suggests that the vm is missing a disk... can you look at the vm dialogue and see what boot devices the vm has? can you write to the vm? can you please dump the vm xml from libvirt? (it's one of the commands that you have in virsh)
Thanks,
Dafna
On 01/08/2014 02:42 PM, Neil wrote:
Hi guys,
Apologies for the late reply.
The VM (Tux) was created about 2 years ago, it was converted from a physical machine using Clonezilla. It's been migrated a number of times in the past, only now when trying to move it off node03 is it giving this error.
I've looked for any attached images/cd's and found none unfortunately.
Thank you so much for your assistance so far.
Regards.
Neil Wilson.
On Wed, Jan 8, 2014 at 12:23 PM, Dafna Ron <dron@redhat.com> wrote:
Thread-847747::INFO::2014-01-07 14:30:32,353::logUtils::44::dispatcher::(wrapper) Run and protect: inappropriateDevices(thiefId='63da7faa-f92a-4652-90f2-b6660a4fb7b3') Thread-847747::INFO::2014-01-07 14:30:32,354::logUtils::47::dispatcher::(wrapper) Run and protect: inappropriateDevices, Return response: None
Please check if the vm's were booted with a cd...
bject at 0x7fb1f00cbbd0>> log:<logUtils.SimpleLogAdapter instance at 0x7fb1f00be7e8> name:hdc networkDev:False path: readonly:True reqsize:0 serial: truesize:0 *type:cdrom* volExtensionChunk:1024 watermarkLimit:536870912
Traceback (most recent call last): File "/usr/share/vdsm/clientIF.py", line 356, in teardownVolumePath res = self.irs.teardownImage(drive['domainID'], File "/usr/share/vdsm/vm.py", line 1386, in __getitem__ raise KeyError(key) KeyError: 'domainID' Thread-847747::WARNING::2014-01-07 14:30:32,351::clientIF::362::vds::(teardownVolumePath) Drive is not a vdsm image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2 VOLWM_FREE_PCT:50 _blockDev:True _checkIoTuneCategories:<bound method D rive._checkIoTuneCategories of <vm.Drive object at 0x7fb1f00cbc10>> _customize:<bound method Drive._customize of <vm.Drive object at 0x7fb1f00cbc10>> _deviceXML:<disk device="disk" snapshot="no" type="block"> <driver cache="none" error_policy="stop" io="native" name="qemu" type="raw"/> <source
dev="/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/0e6991ae-6238-4c61-96d2-ca8fed35161e/images/9f16f896-1da3-4f9a-a305-ac9c4f51a482/e04c6600-abb9-4ebc-a9b3-77b6c536e258"/> <target bus="ide" dev="hda"/> <serial>9f16f896-1da3-4f9a-a305-ac9c4f51a482</serial> <alias name="ide0-0-0"/> <address bus="0" controller="0" target="0" type="drive" unit="0"/>
On 01/08/2014 06:28 AM, Neil wrote:
Hi Dafna,
Thanks for the reply.
Attached is the log from the source server (node03).
I'll reply to your other questions as soon as I'm back in the office this afternoon, have to run off to a meeting.
Regards.
Neil Wilson.
On Tue, Jan 7, 2014 at 8:13 PM, Dafna Ron <dron@redhat.com> wrote:
Ok... several things :)
1. for migration we need to see vdsm logs from both src and dst.
2. Is it possible that the vm has an iso attached? because I see that you are having problems with the iso domain:
2014-01-07 14:26:27,714 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (pool-6-thread-48) Domain e9ab725d-69c1-4a59-b225-b995d095c289:bla-iso was reported with error code 358
Thread-1165153::DEBUG::2014-01-07 13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper) Unknown libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with matching uuid '63da7faa-f92a-4652-90f2-b6660a4fb7b3'
hread-19::ERROR::2014-01-07 13:01:02,621::sdc::143::Storage.StorageDomainCache::(_findDomain) domain e9ab725d-69c1-4a59-b225-b995d095c289 not found Traceback (most recent call last): File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain dom = findMethod(sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain raise se.StorageDomainDoesNotExist(sdUUID) StorageDomainDoesNotExist: Storage domain does not exist: (u'e9ab725d-69c1-4a59-b225-b995d095c289',) Thread-19::ERROR::2014-01-07
13:01:02,622::domainMonitor::225::Storage.DomainMonitorThread::(_monitorDomain) Error while collecting domain e9ab725d-69c1-4a59-b225-b995d095c289 monitoring information Traceback (most recent call last): File "/usr/share/vdsm/storage/domainMonitor.py", line 190, in _monitorDomain self.domain = sdCache.produce(self.sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 98, in produce domain.getRealDomain() File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain return self._cache._realProduce(self._sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 122, in _realProduce domain = self._findDomain(sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain dom = findMethod(sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain raise se.StorageDomainDoesNotExist(sdUUID) StorageDomainDoesNotExist: Storage domain does not exist: (u'e9ab725d-69c1-4a59-b225-b995d095c289',) Dummy-29013::DEBUG::2014-01-07
13:01:03,507::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail) 'dd
if=/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd N one)
3. The migration fails with libvirt error but we need the trace from the second log:
Thread-1165153::DEBUG::2014-01-07 13:39:42,451::sampling::292::vm.Vm::(stop) vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stop statistics collection Thread-1163583::DEBUG::2014-01-07 13:39:42,452::sampling::323::vm.Vm::(run) vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats thread finished Thread-1165153::DEBUG::2014-01-07 13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper) Unknown libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with matching uuid '63da7faa-f92a-4652-90f2-b6660 a4fb7b3'
4. But I am worried about this and would like more info about this vm...
Thread-247::ERROR::2014-01-07 15:35:14,868::sampling::355::vm.Vm::(collect) vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats function failed: <AdvancedStatsFunction _highWrite at 0x2ce0998> Traceback (most recent call last): File "/usr/share/vdsm/sampling.py", line 351, in collect statsFunction() File "/usr/share/vdsm/sampling.py", line 226, in __call__ retValue = self._function(*args, **kwargs) File "/usr/share/vdsm/vm.py", line 509, in _highWrite if not vmDrive.blockDev or vmDrive.format != 'cow': AttributeError: 'Drive' object has no attribute 'format'
How did you create this vm? was it from the UI? was it from a script? what are the parameters you used?
Thanks,
Dafna
On 01/07/2014 04:34 PM, Neil wrote: > > Hi Elad, > > Thanks for assisting me, yes the same condition exists, if I try to > migrate Tux it says "The VM Tux is being migrated". > > > Below are the details requested. > > > [root@node01 ~]# virsh -r list > Id Name State > ---------------------------------------------------- > 1 adam running > > [root@node01 ~]# pgrep qemu > 11232 > [root@node01 ~]# vdsClient -s 0 list table > 63da7faa-f92a-4652-90f2-b6660a4fb7b3 11232 adam Up > > > [root@node03 ~]# virsh -r list > Id Name State > ---------------------------------------------------- > 7 tux running > > [root@node03 ~]# pgrep qemu > 32333 > [root@node03 ~]# vdsClient -s 0 list table > 2736197b-6dc3-4155-9a29-9306ca64881d 32333 tux Up > > Thanks. > > Regards. > > Neil Wilson. > > > On Tue, Jan 7, 2014 at 4:43 PM, Elad Ben Aharon <ebenahar@redhat.com> > wrote: >> >> Is it still in the same condition? >> If yes, please add the outputs from both hosts for: >> >> #virsh -r list >> #pgrep qemu >> #vdsClient -s 0 list table (or 'vdsClient 0 list table' if you >> are >> working in insecure mode) >> >> >> Thnaks, >> >> Elad Ben Aharon >> RHEV-QE storage team >> >> >> >> >> ----- Original Message ----- >> From: "Neil" <nwilson123@gmail.com> >> To: users@ovirt.org >> Sent: Tuesday, January 7, 2014 4:21:43 PM >> Subject: [Users] Migration Failed >> >> Hi guys, >> >> I've tried to migrate a VM from one host(node03) to another(node01), >> and it failed to migrate, and the VM(tux) remained on the original >> host. I've now tried to migrate the same VM again, and it picks up >> that the previous migration is still in progress and refuses to >> migrate. >> >> I've checked for the KVM process on each of the hosts and the VM is >> definitely still running on node03 so there doesn't appear to be any >> chance of the VM trying to run on both hosts (which I've had before >> which is very scary). >> >> These are my versions... and attached are my engine.log and my >> vdsm.log >> >> Centos 6.5 >> ovirt-iso-uploader-3.3.1-1.el6.noarch >> ovirt-host-deploy-1.1.2-1.el6.noarch >> ovirt-release-el6-9-1.noarch >> ovirt-engine-setup-3.3.1-2.el6.noarch >> ovirt-engine-3.3.1-2.el6.noarch >> ovirt-host-deploy-java-1.1.2-1.el6.noarch >> ovirt-image-uploader-3.3.1-1.el6.noarch >> ovirt-engine-dbscripts-3.3.1-2.el6.noarch >> ovirt-engine-cli-3.3.0.6-1.el6.noarch >> ovirt-engine-websocket-proxy-3.3.1-2.el6.noarch >> ovirt-engine-userportal-3.3.1-2.el6.noarch >> ovirt-log-collector-3.3.1-1.el6.noarch >> ovirt-engine-tools-3.3.1-2.el6.noarch >> ovirt-engine-lib-3.3.1-2.el6.noarch >> ovirt-engine-webadmin-portal-3.3.1-2.el6.noarch >> ovirt-engine-backend-3.3.1-2.el6.noarch >> ovirt-engine-sdk-python-3.3.0.8-1.el6.noarch >> ovirt-engine-restapi-3.3.1-2.el6.noarch >> >> >> vdsm-python-4.13.0-11.el6.x86_64 >> vdsm-cli-4.13.0-11.el6.noarch >> vdsm-xmlrpc-4.13.0-11.el6.noarch >> vdsm-4.13.0-11.el6.x86_64 >> vdsm-python-cpopen-4.13.0-11.el6.x86_64 >> >> I've had a few issues with this particular installation in the past, >> as it's from a very old pre release of ovirt, then upgrading to the >> dreyou repo, then finally moving to the official Centos ovirt repo. >> >> Thanks, any help is greatly appreciated. >> >> Regards. >> >> Neil Wilson. >> >> _______________________________________________ >> Users mailing list >> Users@ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users > > _______________________________________________ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users
-- Dafna Ron
-- Dafna Ron
-- Dafna Ron

On Jan 13, 2014, at 11:37 , Neil <nwilson123@gmail.com> wrote:
Good morning everyone,
Sorry to trouble you again, anyone have any ideas on what to try next?
Hi Neil,
Hm, other than noise I don't really see any failures in migration. Can you attach both src and dst vdsm logs with a hint of which VM and at approximately what time it failed for you? There are errors for one volume recurring all the time, but that doesn't look related to the migration.
Thanks,
michal
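A sketch of how the relevant window could be trimmed out of the logs before attaching them, assuming the default log location and, purely as an example, a failure around 13:39 on 2014-01-07:

#grep -n '2014-01-07 13:3[5-9]' /var/log/vdsm/vdsm.log | head
#sed -n '/2014-01-07 13:35/,/2014-01-07 13:45/p' /var/log/vdsm/vdsm.log > /tmp/vdsm-migration-window.log

Running the same on both the source and destination host keeps the attachments small and makes it much easier to line up the two sides of the migration.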
Thank you so much,
Regards.
Neil Wilson.
On Fri, Jan 10, 2014 at 8:31 AM, Neil <nwilson123@gmail.com> wrote:
Hi Dafna,
Apologies for the late reply, I was out of my office yesterday.
Just to get back to you on your questions.
can you look at the vm dialogue and see what boot devices the vm has? Sorry I'm not sure where you want me to get this info from? Inside the ovirt GUI or on the VM itself. The VM has one 2TB LUN assigned. Then inside the VM this is the fstab parameters..
[root@tux ~]# cat /etc/fstab /dev/VolGroup00/LogVol00 / ext3 defaults 1 0 /dev/vda1 /boot ext3 defaults 1 0 tmpfs /dev/shm tmpfs defaults 0 0 devpts /dev/pts devpts gid=5,mode=620 0 0 sysfs /sys sysfs defaults 0 0 proc /proc proc defaults 0 0 /dev/VolGroup00/LogVol01 swap swap defaults 0 0 /dev/VolGroup00/LogVol02 /homes xfs defaults,usrquota,grpquota 1 0
can you write to the vm? Yes the machine is fully functioning, it's their main PDC and hosts all of their files.
can you please dump the vm xml from libvirt? (it's one of the commands that you have in virsh)
Below is the xml
<domain type='kvm' id='7'> <name>tux</name> <uuid>2736197b-6dc3-4155-9a29-9306ca64881d</uuid> <memory unit='KiB'>8388608</memory> <currentMemory unit='KiB'>8388608</currentMemory> <vcpu placement='static'>4</vcpu> <cputune> <shares>1020</shares> </cputune> <sysinfo type='smbios'> <system> <entry name='manufacturer'>oVirt</entry> <entry name='product'>oVirt Node</entry> <entry name='version'>6-4.el6.centos.10</entry> <entry name='serial'>4C4C4544-0038-5310-8050-C6C04F34354A</entry> <entry name='uuid'>2736197b-6dc3-4155-9a29-9306ca64881d</entry> </system> </sysinfo> <os> <type arch='x86_64' machine='rhel6.4.0'>hvm</type> <smbios mode='sysinfo'/> </os> <features> <acpi/> </features> <cpu mode='custom' match='exact'> <model fallback='allow'>Westmere</model> <topology sockets='1' cores='4' threads='1'/> </cpu> <clock offset='variable' adjustment='0' basis='utc'> <timer name='rtc' tickpolicy='catchup'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>destroy</on_crash> <devices> <emulator>/usr/libexec/qemu-kvm</emulator> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source startupPolicy='optional'/> <target dev='hdc' bus='ide'/> <readonly/> <serial></serial> <alias name='ide0-1-0'/> <address type='drive' controller='0' bus='1' target='0' unit='0'/> </disk> <disk type='block' device='disk' snapshot='no'> <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/> <source dev='/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/0e6991ae-6238-4c61-96d2-ca8fed35161e/images/fd1a562a-3ba5-4ddb-a643-37912a6ae86f/f747ba2b-98e1-47f5-805b-6bb173bfd6ff'/> <target dev='vda' bus='virtio'/> <serial>fd1a562a-3ba5-4ddb-a643-37912a6ae86f</serial> <boot order='1'/> <alias name='virtio-disk0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </disk> <controller type='ide' index='0'> <alias name='ide0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <controller type='virtio-serial' index='0'> <alias name='virtio-serial0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </controller> <controller type='usb' index='0'> <alias name='usb0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/> </controller> <interface type='bridge'> <mac address='00:1a:4a:a8:7a:00'/> <source bridge='ovirtmgmt'/> <target dev='vnet5'/> <model type='virtio'/> <filterref filter='vdsm-no-mac-spoofing'/> <link state='up'/> <alias name='net0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </interface> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channels/tux.com.redhat.rhevm.vdsm'/> <target type='virtio' name='com.redhat.rhevm.vdsm'/> <alias name='channel0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channels/tux.org.qemu.guest_agent.0'/> <target type='virtio' name='org.qemu.guest_agent.0'/> <alias name='channel1'/> <address type='virtio-serial' controller='0' bus='0' port='2'/> </channel> <channel type='spicevmc'> <target type='virtio' name='com.redhat.spice.0'/> <alias name='channel2'/> <address type='virtio-serial' controller='0' bus='0' port='3'/> </channel> <input type='mouse' bus='ps2'/> <graphics type='spice' port='5912' tlsPort='5913' autoport='yes' listen='0' keymap='en-us' passwdValidTo='2013-09-20T07:56:54' connected='disconnect'> <listen type='address' address='0'/> <channel 
name='main' mode='secure'/> <channel name='display' mode='secure'/> <channel name='inputs' mode='secure'/> <channel name='cursor' mode='secure'/> <channel name='playback' mode='secure'/> <channel name='record' mode='secure'/> <channel name='smartcard' mode='secure'/> <channel name='usbredir' mode='secure'/> </graphics> <video> <model type='qxl' ram='65536' vram='65536' heads='1'/> <alias name='video0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </video> <memballoon model='virtio'> <alias name='balloon0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </memballoon> </devices> <seclabel type='none'/> </domain>
Thank you very much for your help.
Regards.
Neil Wilson.
On Wed, Jan 8, 2014 at 4:55 PM, Dafna Ron <dron@redhat.com> wrote:
Hi Neil,
the error in the log suggests that the vm is missing a disk... can you look at the vm dialogue and see what boot devices the vm has? can you write to the vm? can you please dump the vm xml from libvirt? (it's one of the commands that you have in virsh)
Thanks,
Dafna
On 01/08/2014 02:42 PM, Neil wrote:
Hi guys,
Apologies for the late reply.
The VM (Tux) was created about 2 years ago, it was converted from a physical machine using Clonezilla. It's been migrated a number of times in the past, only now when trying to move it off node03 is it giving this error.
I've looked for any attached images/cd's and found none unfortunately.
Thank you so much for your assistance so far.
Regards.
Neil Wilson.
On Wed, Jan 8, 2014 at 12:23 PM, Dafna Ron <dron@redhat.com> wrote:
Thread-847747::INFO::2014-01-07 14:30:32,353::logUtils::44::dispatcher::(wrapper) Run and protect: inappropriateDevices(thiefId='63da7faa-f92a-4652-90f2-b6660a4fb7b3') Thread-847747::INFO::2014-01-07 14:30:32,354::logUtils::47::dispatcher::(wrapper) Run and protect: inappropriateDevices, Return response: None
Please check if the vm's were booted with a cd...
bject at 0x7fb1f00cbbd0>> log:<logUtils.SimpleLogAdapter instance at 0x7fb1f00be7e8> name:hdc networkDev:False path: readonly:True reqsize:0 serial: truesize:0 *type:cdrom* volExtensionChunk:1024 watermarkLimit:536870912
Traceback (most recent call last): File "/usr/share/vdsm/clientIF.py", line 356, in teardownVolumePath res = self.irs.teardownImage(drive['domainID'], File "/usr/share/vdsm/vm.py", line 1386, in __getitem__ raise KeyError(key) KeyError: 'domainID' Thread-847747::WARNING::2014-01-07 14:30:32,351::clientIF::362::vds::(teardownVolumePath) Drive is not a vdsm image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2 VOLWM_FREE_PCT:50 _blockDev:True _checkIoTuneCategories:<bound method D rive._checkIoTuneCategories of <vm.Drive object at 0x7fb1f00cbc10>> _customize:<bound method Drive._customize of <vm.Drive object at 0x7fb1f00cbc10>> _deviceXML:<disk device="disk" snapshot="no" type="block"> <driver cache="none" error_policy="stop" io="native" name="qemu" type="raw"/> <source
dev="/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/0e6991ae-6238-4c61-96d2-ca8fed35161e/images/9f16f896-1da3-4f9a-a305-ac9c4f51a482/e04c6600-abb9-4ebc-a9b3-77b6c536e258"/> <target bus="ide" dev="hda"/> <serial>9f16f896-1da3-4f9a-a305-ac9c4f51a482</serial> <alias name="ide0-0-0"/> <address bus="0" controller="0" target="0" type="drive" unit="0"/>
On 01/08/2014 06:28 AM, Neil wrote:
Hi Dafna,
Thanks for the reply.
Attached is the log from the source server (node03).
I'll reply to your other questions as soon as I'm back in the office this afternoon, have to run off to a meeting.
Regards.
Neil Wilson.
On Tue, Jan 7, 2014 at 8:13 PM, Dafna Ron <dron@redhat.com> wrote: > > Ok... several things :) > > 1. for migration we need to see vdsm logs from both src and dst. > > 2. Is it possible that the vm has an iso attached? because I see that > you > are having problems with the iso domain: > > 2014-01-07 14:26:27,714 ERROR > [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] > (pool-6-thread-48) Domain e9ab725d-69c1-4a59-b225-b995d095c289:bla-iso > was > reported with error code 358 > > > Thread-1165153::DEBUG::2014-01-07 > 13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper) > Unknown > libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no > domain with matching uuid '63da7faa-f92a-4652-90f2-b6660a4fb7b3' > > hread-19::ERROR::2014-01-07 > 13:01:02,621::sdc::143::Storage.StorageDomainCache::(_findDomain) > domain > e9ab725d-69c1-4a59-b225-b995d095c289 not found > Traceback (most recent call last): > File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain > dom = findMethod(sdUUID) > File "/usr/share/vdsm/storage/sdc.py", line 171, in > _findUnfetchedDomain > raise se.StorageDomainDoesNotExist(sdUUID) > StorageDomainDoesNotExist: Storage domain does not exist: > (u'e9ab725d-69c1-4a59-b225-b995d095c289',) > Thread-19::ERROR::2014-01-07 > > > 13:01:02,622::domainMonitor::225::Storage.DomainMonitorThread::(_monitorDomain) > Error while collecting domain e9ab725d-69c1-4a59-b225-b995d095c289 > monitoring information > Traceback (most recent call last): > File "/usr/share/vdsm/storage/domainMonitor.py", line 190, in > _monitorDomain > self.domain = sdCache.produce(self.sdUUID) > File "/usr/share/vdsm/storage/sdc.py", line 98, in produce > domain.getRealDomain() > File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain > return self._cache._realProduce(self._sdUUID) > File "/usr/share/vdsm/storage/sdc.py", line 122, in _realProduce > domain = self._findDomain(sdUUID) > File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain > dom = findMethod(sdUUID) > File "/usr/share/vdsm/storage/sdc.py", line 171, in > _findUnfetchedDomain > raise se.StorageDomainDoesNotExist(sdUUID) > StorageDomainDoesNotExist: Storage domain does not exist: > (u'e9ab725d-69c1-4a59-b225-b995d095c289',) > Dummy-29013::DEBUG::2014-01-07 > > 13:01:03,507::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail) > 'dd > > > if=/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/mastersd/dom_md/inbox > iflag=direct,fullblock count=1 bs=1024000' (cwd N > one) > > 3. The migration fails with libvirt error but we need the trace from > the > second log: > > Thread-1165153::DEBUG::2014-01-07 > 13:39:42,451::sampling::292::vm.Vm::(stop) > vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stop statistics collection > Thread-1163583::DEBUG::2014-01-07 > 13:39:42,452::sampling::323::vm.Vm::(run) > vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats thread finished > Thread-1165153::DEBUG::2014-01-07 > 13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper) > Unknown > libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no > domain with matching uuid '63da7faa-f92a-4652-90f2-b6660 > a4fb7b3' > > > 4. But I am worried about this and would more info about this vm... 
> > Thread-247::ERROR::2014-01-07 > 15:35:14,868::sampling::355::vm.Vm::(collect) > vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats function failed: > <AdvancedStatsFunction _highWrite at 0x2ce0998> > Traceback (most recent call last): > File "/usr/share/vdsm/sampling.py", line 351, in collect > statsFunction() > File "/usr/share/vdsm/sampling.py", line 226, in __call__ > retValue = self._function(*args, **kwargs) > File "/usr/share/vdsm/vm.py", line 509, in _highWrite > if not vmDrive.blockDev or vmDrive.format != 'cow': > AttributeError: 'Drive' object has no attribute 'format' > > How did you create this vm? was it from the UI? was it from a script? > what > are the parameters you used? > > Thanks, > > Dafna > > > > On 01/07/2014 04:34 PM, Neil wrote: >> >> Hi Elad, >> >> Thanks for assisting me, yes the same condition exists, if I try to >> migrate Tux it says "The VM Tux is being migrated". >> >> >> Below are the details requested. >> >> >> [root@node01 ~]# virsh -r list >> Id Name State >> ---------------------------------------------------- >> 1 adam running >> >> [root@node01 ~]# pgrep qemu >> 11232 >> [root@node01 ~]# vdsClient -s 0 list table >> 63da7faa-f92a-4652-90f2-b6660a4fb7b3 11232 adam Up >> >> >> [root@node03 ~]# virsh -r list >> Id Name State >> ---------------------------------------------------- >> 7 tux running >> >> [root@node03 ~]# pgrep qemu >> 32333 >> [root@node03 ~]# vdsClient -s 0 list table >> 2736197b-6dc3-4155-9a29-9306ca64881d 32333 tux Up >> >> Thanks. >> >> Regards. >> >> Neil Wilson. >> >> >> On Tue, Jan 7, 2014 at 4:43 PM, Elad Ben Aharon <ebenahar@redhat.com> >> wrote: >>> >>> Is it still in the same condition? >>> If yes, please add the outputs from both hosts for: >>> >>> #virsh -r list >>> #pgrep qemu >>> #vdsClient -s 0 list table (or 'vdsClient 0 list table' if you >>> are >>> working in insecure mode) >>> >>> >>> Thnaks, >>> >>> Elad Ben Aharon >>> RHEV-QE storage team >>> >>> >>> >>> >>> ----- Original Message ----- >>> From: "Neil" <nwilson123@gmail.com> >>> To: users@ovirt.org >>> Sent: Tuesday, January 7, 2014 4:21:43 PM >>> Subject: [Users] Migration Failed >>> >>> Hi guys, >>> >>> I've tried to migrate a VM from one host(node03) to another(node01), >>> and it failed to migrate, and the VM(tux) remained on the original >>> host. I've now tried to migrate the same VM again, and it picks up >>> that the previous migration is still in progress and refuses to >>> migrate. >>> >>> I've checked for the KVM process on each of the hosts and the VM is >>> definitely still running on node03 so there doesn't appear to be any >>> chance of the VM trying to run on both hosts (which I've had before >>> which is very scary). >>> >>> These are my versions... 
and attached are my engine.log and my >>> vdsm.log >>> >>> Centos 6.5 >>> ovirt-iso-uploader-3.3.1-1.el6.noarch >>> ovirt-host-deploy-1.1.2-1.el6.noarch >>> ovirt-release-el6-9-1.noarch >>> ovirt-engine-setup-3.3.1-2.el6.noarch >>> ovirt-engine-3.3.1-2.el6.noarch >>> ovirt-host-deploy-java-1.1.2-1.el6.noarch >>> ovirt-image-uploader-3.3.1-1.el6.noarch >>> ovirt-engine-dbscripts-3.3.1-2.el6.noarch >>> ovirt-engine-cli-3.3.0.6-1.el6.noarch >>> ovirt-engine-websocket-proxy-3.3.1-2.el6.noarch >>> ovirt-engine-userportal-3.3.1-2.el6.noarch >>> ovirt-log-collector-3.3.1-1.el6.noarch >>> ovirt-engine-tools-3.3.1-2.el6.noarch >>> ovirt-engine-lib-3.3.1-2.el6.noarch >>> ovirt-engine-webadmin-portal-3.3.1-2.el6.noarch >>> ovirt-engine-backend-3.3.1-2.el6.noarch >>> ovirt-engine-sdk-python-3.3.0.8-1.el6.noarch >>> ovirt-engine-restapi-3.3.1-2.el6.noarch >>> >>> >>> vdsm-python-4.13.0-11.el6.x86_64 >>> vdsm-cli-4.13.0-11.el6.noarch >>> vdsm-xmlrpc-4.13.0-11.el6.noarch >>> vdsm-4.13.0-11.el6.x86_64 >>> vdsm-python-cpopen-4.13.0-11.el6.x86_64 >>> >>> I've had a few issues with this particular installation in the past, >>> as it's from a very old pre release of ovirt, then upgrading to the >>> dreyou repo, then finally moving to the official Centos ovirt repo. >>> >>> Thanks, any help is greatly appreciated. >>> >>> Regards. >>> >>> Neil Wilson. >>> >>> _______________________________________________ >>> Users mailing list >>> Users@ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >> >> _______________________________________________ >> Users mailing list >> Users@ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users > > > > -- > Dafna Ron
-- Dafna Ron
-- Dafna Ron

selinux issue?

On 13 January 2014 12:48, Michal Skrivanek <michal.skrivanek@redhat.com> wrote:
On Jan 13, 2014, at 11:37 , Neil <nwilson123@gmail.com> wrote:
Good morning everyone,
Sorry to trouble you again, anyone have any ideas on what to try next?
Hi Neil,
Hm, other than noise I don't really see any failures in migration. Can you attach both src and dst vdsm logs with a hint of which VM and at approximately what time it failed for you? There are errors for one volume recurring all the time, but that doesn't look related to the migration.
Thanks, michal
Thank you so much,
Regards.
Neil Wilson.
On Fri, Jan 10, 2014 at 8:31 AM, Neil <nwilson123@gmail.com> wrote:
Hi Dafna,
Apologies for the late reply, I was out of my office yesterday.
Just to get back to you on your questions.
can you look at the vm dialogue and see what boot devices the vm has? Sorry I'm not sure where you want me to get this info from? Inside the ovirt GUI or on the VM itself. The VM has one 2TB LUN assigned. Then inside the VM this is the fstab parameters..
[root@tux ~]# cat /etc/fstab /dev/VolGroup00/LogVol00 / ext3 defaults 1 0
/dev/vda1 /boot ext3 defaults 1 0 tmpfs /dev/shm tmpfs defaults 0 0 devpts /dev/pts devpts gid=5,mode=620 0 0 sysfs /sys sysfs defaults 0 0 proc /proc proc defaults 0 0 /dev/VolGroup00/LogVol01 swap swap defaults 0 0 /dev/VolGroup00/LogVol02 /homes xfs defaults,usrquota,grpquota 1 0
can you write to the vm? Yes the machine is fully functioning, it's their main PDC and hosts all of their files.
can you please dump the vm xml from libvirt? (it's one of the commands that you have in virsh)
Below is the xml
<domain type='kvm' id='7'> <name>tux</name> <uuid>2736197b-6dc3-4155-9a29-9306ca64881d</uuid> <memory unit='KiB'>8388608</memory> <currentMemory unit='KiB'>8388608</currentMemory> <vcpu placement='static'>4</vcpu> <cputune> <shares>1020</shares> </cputune> <sysinfo type='smbios'> <system> <entry name='manufacturer'>oVirt</entry> <entry name='product'>oVirt Node</entry> <entry name='version'>6-4.el6.centos.10</entry> <entry name='serial'>4C4C4544-0038-5310-8050-C6C04F34354A</entry> <entry name='uuid'>2736197b-6dc3-4155-9a29-9306ca64881d</entry> </system> </sysinfo> <os> <type arch='x86_64' machine='rhel6.4.0'>hvm</type> <smbios mode='sysinfo'/> </os> <features> <acpi/> </features> <cpu mode='custom' match='exact'> <model fallback='allow'>Westmere</model> <topology sockets='1' cores='4' threads='1'/> </cpu> <clock offset='variable' adjustment='0' basis='utc'> <timer name='rtc' tickpolicy='catchup'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>destroy</on_crash> <devices> <emulator>/usr/libexec/qemu-kvm</emulator> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source startupPolicy='optional'/> <target dev='hdc' bus='ide'/> <readonly/> <serial></serial> <alias name='ide0-1-0'/> <address type='drive' controller='0' bus='1' target='0' unit='0'/> </disk> <disk type='block' device='disk' snapshot='no'> <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/> <source dev='/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/0e6991ae-6238-4c61-96d2-ca8fed35161e/images/fd1a562a-3ba5-4ddb-a643-37912a6ae86f/f747ba2b-98e1-47f5-805b-6bb173bfd6ff'/> <target dev='vda' bus='virtio'/> <serial>fd1a562a-3ba5-4ddb-a643-37912a6ae86f</serial> <boot order='1'/> <alias name='virtio-disk0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </disk> <controller type='ide' index='0'> <alias name='ide0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <controller type='virtio-serial' index='0'> <alias name='virtio-serial0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </controller> <controller type='usb' index='0'> <alias name='usb0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/> </controller> <interface type='bridge'> <mac address='00:1a:4a:a8:7a:00'/> <source bridge='ovirtmgmt'/> <target dev='vnet5'/> <model type='virtio'/> <filterref filter='vdsm-no-mac-spoofing'/> <link state='up'/> <alias name='net0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </interface> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channels/tux.com.redhat.rhevm.vdsm'/> <target type='virtio' name='com.redhat.rhevm.vdsm'/> <alias name='channel0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channels/tux.org.qemu.guest_agent.0'/> <target type='virtio' name='org.qemu.guest_agent.0'/> <alias name='channel1'/> <address type='virtio-serial' controller='0' bus='0' port='2'/> </channel> <channel type='spicevmc'> <target type='virtio' name='com.redhat.spice.0'/> <alias name='channel2'/> <address type='virtio-serial' controller='0' bus='0' port='3'/> </channel> <input type='mouse' bus='ps2'/> <graphics type='spice' port='5912' tlsPort='5913' autoport='yes' listen='0' keymap='en-us' passwdValidTo='2013-09-20T07:56:54' connected='disconnect'> <listen type='address' address='0'/> <channel 
name='main' mode='secure'/> <channel name='display' mode='secure'/> <channel name='inputs' mode='secure'/> <channel name='cursor' mode='secure'/> <channel name='playback' mode='secure'/> <channel name='record' mode='secure'/> <channel name='smartcard' mode='secure'/> <channel name='usbredir' mode='secure'/> </graphics> <video> <model type='qxl' ram='65536' vram='65536' heads='1'/> <alias name='video0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </video> <memballoon model='virtio'> <alias name='balloon0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </memballoon> </devices> <seclabel type='none'/> </domain>
Thank you very much for your help.
Regards.
Neil Wilson.
On Wed, Jan 8, 2014 at 4:55 PM, Dafna Ron <dron@redhat.com> wrote:
Hi Neil,
the error in the log suggests that the vm is missing a disk... can you look at the vm dialogue and see what boot devices the vm has? can you write to the vm? can you please dump the vm xml from libvirt? (it's one of the commands that you have in virsh)
Thanks,
Dafna
On 01/08/2014 02:42 PM, Neil wrote:
Hi guys,
Apologies for the late reply.
The VM (Tux) was created about 2 years ago, it was converted from a physical machine using Clonezilla. It's been migrated a number of times in the past, only now when trying to move it off node03 is it giving this error.
I've looked for any attached images/cd's and found none unfortunately.
Thank you so much for your assistance so far.
Regards.
Neil Wilson.
On Wed, Jan 8, 2014 at 12:23 PM, Dafna Ron <dron@redhat.com> wrote:
Thread-847747::INFO::2014-01-07 14:30:32,353::logUtils::44::dispatcher::(wrapper) Run and protect: inappropriateDevices(thiefId='63da7faa-f92a-4652-90f2-b6660a4fb7b3') Thread-847747::INFO::2014-01-07 14:30:32,354::logUtils::47::dispatcher::(wrapper) Run and protect: inappropriateDevices, Return response: None
Please check if the vm's were booted with a cd...
bject at 0x7fb1f00cbbd0>> log:<logUtils.SimpleLogAdapter instance at 0x7fb1f00be7e8> name:hdc networkDev:False path: readonly:True reqsize:0 serial: truesize:0 *type:cdrom* volExtensionChunk:1024 watermarkLimit:536870912
Traceback (most recent call last): File "/usr/share/vdsm/clientIF.py", line 356, in teardownVolumePath res = self.irs.teardownImage(drive['domainID'], File "/usr/share/vdsm/vm.py", line 1386, in __getitem__ raise KeyError(key) KeyError: 'domainID' Thread-847747::WARNING::2014-01-07 14:30:32,351::clientIF::362::vds::(teardownVolumePath) Drive is not a vdsm image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2 VOLWM_FREE_PCT:50 _blockDev:True _checkIoTuneCategories:<bound method D rive._checkIoTuneCategories of <vm.Drive object at 0x7fb1f00cbc10>> _customize:<bound method Drive._customize of <vm.Drive object at 0x7fb1f00cbc10>> _deviceXML:<disk device="disk" snapshot="no" type="block"> <driver cache="none" error_policy="stop" io="native" name="qemu" type="raw"/> <source
dev="/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/0e6991ae-6238-4c61-96d2-ca8fed35161e/images/9f16f896-1da3-4f9a-a305-ac9c4f51a482/e04c6600-abb9-4ebc-a9b3-77b6c536e258"/>
<target bus="ide" dev="hda"/> <serial>9f16f896-1da3-4f9a-a305-ac9c4f51a482</serial> <alias name="ide0-0-0"/> <address bus="0" controller="0" target="0" type="drive"
unit="0"/>
On 01/08/2014 06:28 AM, Neil wrote: > > Hi Dafna, > > Thanks for the reply. > > Attached is the log from the source server (node03). > > I'll reply to your other questions as soon as I'm back in the office > this afternoon, have to run off to a meeting. > > Regards. > > Neil Wilson. > > > On Tue, Jan 7, 2014 at 8:13 PM, Dafna Ron <dron@redhat.com> wrote: >> >> Ok... several things :) >> >> 1. for migration we need to see vdsm logs from both src and dst. >> >> 2. Is it possible that the vm has an iso attached? because I see
that
>> you >> are having problems with the iso domain: >> >> 2014-01-07 14:26:27,714 ERROR >> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] >> (pool-6-thread-48) Domain e9ab725d-69c1-4a59-b225-b995d095c289:bla-iso >> was >> reported with error code 358 >> >> >> Thread-1165153::DEBUG::2014-01-07 >> 13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper) >> Unknown >> libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no >> domain with matching uuid '63da7faa-f92a-4652-90f2-b6660a4fb7b3' >> >> hread-19::ERROR::2014-01-07 >> 13:01:02,621::sdc::143::Storage.StorageDomainCache::(_findDomain) >> domain >> e9ab725d-69c1-4a59-b225-b995d095c289 not found >> Traceback (most recent call last): >> File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain >> dom = findMethod(sdUUID) >> File "/usr/share/vdsm/storage/sdc.py", line 171, in >> _findUnfetchedDomain >> raise se.StorageDomainDoesNotExist(sdUUID) >> StorageDomainDoesNotExist: Storage domain does not exist: >> (u'e9ab725d-69c1-4a59-b225-b995d095c289',) >> Thread-19::ERROR::2014-01-07 >> >> >> 13:01:02,622::domainMonitor::225::Storage.DomainMonitorThread::(_monitorDomain) >> Error while collecting domain e9ab725d-69c1-4a59-b225-b995d095c289 >> monitoring information >> Traceback (most recent call last): >> File "/usr/share/vdsm/storage/domainMonitor.py", line 190, in >> _monitorDomain >> self.domain = sdCache.produce(self.sdUUID) >> File "/usr/share/vdsm/storage/sdc.py", line 98, in produce >> domain.getRealDomain() >> File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain >> return self._cache._realProduce(self._sdUUID) >> File "/usr/share/vdsm/storage/sdc.py", line 122, in _realProduce >> domain = self._findDomain(sdUUID) >> File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain >> dom = findMethod(sdUUID) >> File "/usr/share/vdsm/storage/sdc.py", line 171, in >> _findUnfetchedDomain >> raise se.StorageDomainDoesNotExist(sdUUID) >> StorageDomainDoesNotExist: Storage domain does not exist: >> (u'e9ab725d-69c1-4a59-b225-b995d095c289',) >> Dummy-29013::DEBUG::2014-01-07 >> >> 13:01:03,507::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail) >> 'dd >> >> >> if=/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/mastersd/dom_md/inbox >> iflag=direct,fullblock count=1 bs=1024000' (cwd N >> one) >> >> 3. The migration fails with libvirt error but we need the trace from >> the >> second log: >> >> Thread-1165153::DEBUG::2014-01-07 >> 13:39:42,451::sampling::292::vm.Vm::(stop) >> vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stop statistics
collection >>>>>>> Thread-1163583::DEBUG::2014-01-07 >>>>>>> 13:39:42,452::sampling::323::vm.Vm::(run) >>>>>>> vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats thread finished >>>>>>> Thread-1165153::DEBUG::2014-01-07 >>>>>>> 13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper) >>>>>>> Unknown >>>>>>> libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no >>>>>>> domain with matching uuid '63da7faa-f92a-4652-90f2-b6660 >>>>>>> a4fb7b3' >>>>>>> >>>>>>> >>>>>>> 4. But I am worried about this and would more info about this vm... >>>>>>> >>>>>>> Thread-247::ERROR::2014-01-07 >>>>>>> 15:35:14,868::sampling::355::vm.Vm::(collect) >>>>>>> vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats function failed: >>>>>>> <AdvancedStatsFunction _highWrite at 0x2ce0998> >>>>>>> Traceback (most recent call last): >>>>>>> File "/usr/share/vdsm/sampling.py", line 351, in collect >>>>>>> statsFunction() >>>>>>> File "/usr/share/vdsm/sampling.py", line 226, in __call__ >>>>>>> retValue = self._function(*args, **kwargs) >>>>>>> File "/usr/share/vdsm/vm.py", line 509, in _highWrite >>>>>>> if not vmDrive.blockDev or vmDrive.format != 'cow': >>>>>>> AttributeError: 'Drive' object has no attribute 'format' >>>>>>> >>>>>>> How did you create this vm? was it from the UI? was it from a script? >>>>>>> what >>>>>>> are the parameters you used? >>>>>>> >>>>>>> Thanks, >>>>>>> >>>>>>> Dafna >>>>>>> >>>>>>> >>>>>>> >>>>>>> On 01/07/2014 04:34 PM, Neil wrote: >>>>>>>> >>>>>>>> Hi Elad, >>>>>>>> >>>>>>>> Thanks for assisting me, yes the same condition exists, if I try to >>>>>>>> migrate Tux it says "The VM Tux is being migrated". >>>>>>>> >>>>>>>> >>>>>>>> Below are the details requested. >>>>>>>> >>>>>>>> >>>>>>>> [root@node01 ~]# virsh -r list >>>>>>>> Id Name State >>>>>>>> ---------------------------------------------------- >>>>>>>> 1 adam running >>>>>>>> >>>>>>>> [root@node01 ~]# pgrep qemu >>>>>>>> 11232 >>>>>>>> [root@node01 ~]# vdsClient -s 0 list table >>>>>>>> 63da7faa-f92a-4652-90f2-b6660a4fb7b3 11232 adam Up >>>>>>>> >>>>>>>> >>>>>>>> [root@node03 ~]# virsh -r list >>>>>>>> Id Name State >>>>>>>> ---------------------------------------------------- >>>>>>>> 7 tux running >>>>>>>> >>>>>>>> [root@node03 ~]# pgrep qemu >>>>>>>> 32333 >>>>>>>> [root@node03 ~]# vdsClient -s 0 list table >>>>>>>> 2736197b-6dc3-4155-9a29-9306ca64881d 32333 tux Up >>>>>>>> >>>>>>>> Thanks. >>>>>>>> >>>>>>>> Regards. >>>>>>>> >>>>>>>> Neil Wilson. >>>>>>>> >>>>>>>> >>>>>>>> On Tue, Jan 7, 2014 at 4:43 PM, Elad Ben Aharon < ebenahar@redhat.com> >>>>>>>> wrote: >>>>>>>>> >>>>>>>>> Is it still in the same condition? >>>>>>>>> If yes, please add the outputs from both hosts for: >>>>>>>>> >>>>>>>>> #virsh -r list >>>>>>>>> #pgrep qemu >>>>>>>>> #vdsClient -s 0 list table (or 'vdsClient 0 list table' if you >>>>>>>>> are >>>>>>>>> working in insecure mode) >>>>>>>>> >>>>>>>>> >>>>>>>>> Thnaks, >>>>>>>>> >>>>>>>>> Elad Ben Aharon >>>>>>>>> RHEV-QE storage team >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> ----- Original Message ----- >>>>>>>>> From: "Neil" <nwilson123@gmail.com> >>>>>>>>> To: users@ovirt.org >>>>>>>>> Sent: Tuesday, January 7, 2014 4:21:43 PM >>>>>>>>> Subject: [Users] Migration Failed >>>>>>>>> >>>>>>>>> Hi guys, >>>>>>>>> >>>>>>>>> I've tried to migrate a VM from one host(node03) to another(node01), >>>>>>>>> and it failed to migrate, and the VM(tux) remained on the original >>>>>>>>> host. 

Good morning everyone,
Sorry for the late reply.
Tom: unfortunately selinux is disabled on all the machines involved.
Michal: attached are the latest logs. I started the migration at 8:43am and it returned an error/failed at 8:52am. The details of the migration are as follows:
node01 (10.0.2.21) is the destination
node03 (10.0.2.23) is the source
engine01 (10.0.2.31) is the engine
Tux is the VM
Strangely enough, the immediate "pop up" error I received in the GUI previously didn't appear this time and it looked like it might actually work; however, after waiting quite a while it eventually returned an error in the Tasks as follows...
2014-Jan-14, 09:05 Refresh image list failed for domain(s): bla-iso (ISO file type). Please check domain activity.
2014-Jan-14, 09:05 Migration failed due to Error: Internal Engine Error. Trying to migrate to another Host (VM: tux, Source: node03.blabla.com, Destination: node01.blabla.com).
2014-Jan-14, 08:52 Migration failed due to Error: Internal Engine Error (VM: tux, Source: node03.blabla.com, Destination: node01.blabla.com).
I then also received an error in the engine.log, which is why I've attached that as well.
Please shout if you need any further info.
Thank you so much.
Regards.
Neil Wilson.
On Mon, Jan 13, 2014 at 4:07 PM, Tom Brown <tom@ng23.net> wrote:
selinux issue?
On 13 January 2014 12:48, Michal Skrivanek <michal.skrivanek@redhat.com> wrote:
On Jan 13, 2014, at 11:37 , Neil <nwilson123@gmail.com> wrote:
Good morning everyone,
Sorry to trouble you again, anyone have any ideas on what to try next?
Hi Neil, hm, other than noise I don't really see any failures in migration. Can you attach both src and dst vdsm logs with a hint of which VM and at what time (approx) it failed for you? There are errors for one volume recurring all the time, but that doesn't look related to the migration.
Thanks, michal
Thank you so much,
Regards.
Neil Wilson.
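The matched excerpts Michal asks for above are usually easiest to produce with a grep over the same time window on both hosts and on the engine. A minimal sketch, assuming the default oVirt 3.3 log locations; tux's UUID is taken from the vdsClient output earlier in the thread, the time window matches the 08:43-08:52 attempt, and the output file names are arbitrary:
On node03 (source) and node01 (destination):
# grep '2014-01-14 08:' /var/log/vdsm/vdsm.log | grep -i -e tux -e 2736197b-6dc3-4155-9a29-9306ca64881d > /tmp/vdsm-migration-excerpt.log
On engine01:
# grep '2014-01-14 08:' /var/log/ovirt-engine/engine.log | grep -i -e migrat -e tux > /tmp/engine-migration-excerpt.log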
On Fri, Jan 10, 2014 at 8:31 AM, Neil <nwilson123@gmail.com> wrote:
Hi Dafna,
Apologies for the late reply, I was out of my office yesterday.
Just to get back to you on your questions.
can you look at the vm dialogue and see what boot devices the vm has?
Sorry, I'm not sure where you want me to get this info from: inside the oVirt GUI or on the VM itself? The VM has one 2TB LUN assigned. Inside the VM, these are the fstab parameters...
[root@tux ~]# cat /etc/fstab
/dev/VolGroup00/LogVol00  /         ext3    defaults                    1 0
/dev/vda1                 /boot     ext3    defaults                    1 0
tmpfs                     /dev/shm  tmpfs   defaults                    0 0
devpts                    /dev/pts  devpts  gid=5,mode=620              0 0
sysfs                     /sys      sysfs   defaults                    0 0
proc                      /proc     proc    defaults                    0 0
/dev/VolGroup00/LogVol01  swap      swap    defaults                    0 0
/dev/VolGroup00/LogVol02  /homes    xfs     defaults,usrquota,grpquota  1 0
can you write to the vm? Yes the machine is fully functioning, it's their main PDC and hosts all of their files.
can you please dump the vm xml from libvirt? (it's one of the commands that you have in virsh)
Below is the xml
<domain type='kvm' id='7'>
  <name>tux</name>
  <uuid>2736197b-6dc3-4155-9a29-9306ca64881d</uuid>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <shares>1020</shares>
  </cputune>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>oVirt</entry>
      <entry name='product'>oVirt Node</entry>
      <entry name='version'>6-4.el6.centos.10</entry>
      <entry name='serial'>4C4C4544-0038-5310-8050-C6C04F34354A</entry>
      <entry name='uuid'>2736197b-6dc3-4155-9a29-9306ca64881d</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='rhel6.4.0'>hvm</type>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Westmere</model>
    <topology sockets='1' cores='4' threads='1'/>
  </cpu>
  <clock offset='variable' adjustment='0' basis='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source startupPolicy='optional'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <serial></serial>
      <alias name='ide0-1-0'/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/>
      <source dev='/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/0e6991ae-6238-4c61-96d2-ca8fed35161e/images/fd1a562a-3ba5-4ddb-a643-37912a6ae86f/f747ba2b-98e1-47f5-805b-6bb173bfd6ff'/>
      <target dev='vda' bus='virtio'/>
      <serial>fd1a562a-3ba5-4ddb-a643-37912a6ae86f</serial>
      <boot order='1'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <controller type='ide' index='0'>
      <alias name='ide0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='usb' index='0'>
      <alias name='usb0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='00:1a:4a:a8:7a:00'/>
      <source bridge='ovirtmgmt'/>
      <target dev='vnet5'/>
      <model type='virtio'/>
      <filterref filter='vdsm-no-mac-spoofing'/>
      <link state='up'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/tux.com.redhat.rhevm.vdsm'/>
      <target type='virtio' name='com.redhat.rhevm.vdsm'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/tux.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <alias name='channel1'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0'/>
      <alias name='channel2'/>
      <address type='virtio-serial' controller='0' bus='0' port='3'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <graphics type='spice' port='5912' tlsPort='5913' autoport='yes' listen='0' keymap='en-us' passwdValidTo='2013-09-20T07:56:54' connected='disconnect'>
      <listen type='address' address='0'/>
      <channel name='main' mode='secure'/>
      <channel name='display' mode='secure'/>
      <channel name='inputs' mode='secure'/>
      <channel name='cursor' mode='secure'/>
      <channel name='playback' mode='secure'/>
      <channel name='record' mode='secure'/>
      <channel name='smartcard' mode='secure'/>
      <channel name='usbredir' mode='secure'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='65536' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='none'/>
</domain>
Thank you very much for your help.
Regards.
Neil Wilson.
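For reference, a dump like the one above can be taken read-only on whichever host is currently running the VM, and the same output is a quick way to confirm Dafna's point about an attached CD (the cdrom device above has an empty <source/>). A small sketch; the output path is arbitrary:
# virsh -r dumpxml tux > /tmp/tux.xml
# grep -A 3 "device='cdrom'" /tmp/tux.xml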
On Wed, Jan 8, 2014 at 4:55 PM, Dafna Ron <dron@redhat.com> wrote:
Hi Neil,
the error in the log suggests that the vm is missing a disk... can you look at the vm dialogue and see what boot devices the vm has? can you write to the vm? can you please dump the vm xml from libvirt? (it's one of the commands that you have in virsh)
Thanks,
Dafna
On 01/08/2014 02:42 PM, Neil wrote:
Hi guys,
Apologies for the late reply.
The VM (Tux) was created about 2 years ago, it was converted from a physical machine using Clonezilla. It's been migrated a number of times in the past, only now when trying to move it off node03 is it giving this error.
I've looked for any attached images/cd's and found none unfortunately.
Thank you so much for your assistance so far.
Regards.
Neil Wilson.
On Wed, Jan 8, 2014 at 12:23 PM, Dafna Ron <dron@redhat.com> wrote: > > Thread-847747::INFO::2014-01-07 > 14:30:32,353::logUtils::44::dispatcher::(wrapper) Run and protect: > inappropriateDevices(thiefId='63da7faa-f92a-4652-90f2-b6660a4fb7b3') > Thread-847747::INFO::2014-01-07 > 14:30:32,354::logUtils::47::dispatcher::(wrapper) Run and protect: > inappropriateDevices, Return response: None > > Please check if the vm's were booted with a cd... > > > bject at 0x7fb1f00cbbd0>> log:<logUtils.SimpleLogAdapter instance at > 0x7fb1f00be7e8> name:hdc networkDev:False path: readonly:True > reqsize:0 > serial: truesize:0 *type:cdrom* volExtensionChunk:1024 > watermarkLimit:536870912 > > Traceback (most recent call last): > File "/usr/share/vdsm/clientIF.py", line 356, in > teardownVolumePath > res = self.irs.teardownImage(drive['domainID'], > File "/usr/share/vdsm/vm.py", line 1386, in __getitem__ > raise KeyError(key) > KeyError: 'domainID' > Thread-847747::WARNING::2014-01-07 > 14:30:32,351::clientIF::362::vds::(teardownVolumePath) Drive is not > a > vdsm > image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2 > VOLWM_FREE_PCT:50 > _blockDev:True _checkIoTuneCategories:<bound method D > rive._checkIoTuneCategories of <vm.Drive object at 0x7fb1f00cbc10>> > _customize:<bound method Drive._customize of <vm.Drive object at > 0x7fb1f00cbc10>> _deviceXML:<disk device="disk" snapshot="no" > type="block"> > <driver cache="none" error_policy="stop" io="native" > name="qemu" > type="raw"/> > <source > > > dev="/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/0e6991ae-6238-4c61-96d2-ca8fed35161e/images/9f16f896-1da3-4f9a-a305-ac9c4f51a482/e04c6600-abb9-4ebc-a9b3-77b6c536e258"/> > <target bus="ide" dev="hda"/> > <serial>9f16f896-1da3-4f9a-a305-ac9c4f51a482</serial> > <alias name="ide0-0-0"/> > <address bus="0" controller="0" target="0" type="drive" > unit="0"/> > > > > On 01/08/2014 06:28 AM, Neil wrote: >> >> Hi Dafna, >> >> Thanks for the reply. >> >> Attached is the log from the source server (node03). >> >> I'll reply to your other questions as soon as I'm back in the >> office >> this afternoon, have to run off to a meeting. >> >> Regards. >> >> Neil Wilson. >> >> >> On Tue, Jan 7, 2014 at 8:13 PM, Dafna Ron <dron@redhat.com> wrote: >>> >>> Ok... several things :) >>> >>> 1. for migration we need to see vdsm logs from both src and dst. >>> >>> 2. Is it possible that the vm has an iso attached? 
because I see >>> that >>> you >>> are having problems with the iso domain: >>> >>> 2014-01-07 14:26:27,714 ERROR >>> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] >>> (pool-6-thread-48) Domain >>> e9ab725d-69c1-4a59-b225-b995d095c289:bla-iso >>> was >>> reported with error code 358 >>> >>> >>> Thread-1165153::DEBUG::2014-01-07 >>> 13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper) >>> Unknown >>> libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not >>> found: no >>> domain with matching uuid '63da7faa-f92a-4652-90f2-b6660a4fb7b3' >>> >>> hread-19::ERROR::2014-01-07 >>> 13:01:02,621::sdc::143::Storage.StorageDomainCache::(_findDomain) >>> domain >>> e9ab725d-69c1-4a59-b225-b995d095c289 not found >>> Traceback (most recent call last): >>> File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain >>> dom = findMethod(sdUUID) >>> File "/usr/share/vdsm/storage/sdc.py", line 171, in >>> _findUnfetchedDomain >>> raise se.StorageDomainDoesNotExist(sdUUID) >>> StorageDomainDoesNotExist: Storage domain does not exist: >>> (u'e9ab725d-69c1-4a59-b225-b995d095c289',) >>> Thread-19::ERROR::2014-01-07 >>> >>> >>> >>> 13:01:02,622::domainMonitor::225::Storage.DomainMonitorThread::(_monitorDomain) >>> Error while collecting domain e9ab725d-69c1-4a59-b225-b995d095c289 >>> monitoring information >>> Traceback (most recent call last): >>> File "/usr/share/vdsm/storage/domainMonitor.py", line 190, in >>> _monitorDomain >>> self.domain = sdCache.produce(self.sdUUID) >>> File "/usr/share/vdsm/storage/sdc.py", line 98, in produce >>> domain.getRealDomain() >>> File "/usr/share/vdsm/storage/sdc.py", line 52, in >>> getRealDomain >>> return self._cache._realProduce(self._sdUUID) >>> File "/usr/share/vdsm/storage/sdc.py", line 122, in >>> _realProduce >>> domain = self._findDomain(sdUUID) >>> File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain >>> dom = findMethod(sdUUID) >>> File "/usr/share/vdsm/storage/sdc.py", line 171, in >>> _findUnfetchedDomain >>> raise se.StorageDomainDoesNotExist(sdUUID) >>> StorageDomainDoesNotExist: Storage domain does not exist: >>> (u'e9ab725d-69c1-4a59-b225-b995d095c289',) >>> Dummy-29013::DEBUG::2014-01-07 >>> >>> >>> 13:01:03,507::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail) >>> 'dd >>> >>> >>> >>> if=/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/mastersd/dom_md/inbox >>> iflag=direct,fullblock count=1 bs=1024000' (cwd N >>> one) >>> >>> 3. The migration fails with libvirt error but we need the trace >>> from >>> the >>> second log: >>> >>> Thread-1165153::DEBUG::2014-01-07 >>> 13:39:42,451::sampling::292::vm.Vm::(stop) >>> vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stop statistics >>> collection >>> Thread-1163583::DEBUG::2014-01-07 >>> 13:39:42,452::sampling::323::vm.Vm::(run) >>> vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats thread finished >>> Thread-1165153::DEBUG::2014-01-07 >>> 13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper) >>> Unknown >>> libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not >>> found: no >>> domain with matching uuid '63da7faa-f92a-4652-90f2-b6660 >>> a4fb7b3' >>> >>> >>> 4. But I am worried about this and would more info about this >>> vm... 
>>> >>> Thread-247::ERROR::2014-01-07 >>> 15:35:14,868::sampling::355::vm.Vm::(collect) >>> vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats function >>> failed: >>> <AdvancedStatsFunction _highWrite at 0x2ce0998> >>> Traceback (most recent call last): >>> File "/usr/share/vdsm/sampling.py", line 351, in collect >>> statsFunction() >>> File "/usr/share/vdsm/sampling.py", line 226, in __call__ >>> retValue = self._function(*args, **kwargs) >>> File "/usr/share/vdsm/vm.py", line 509, in _highWrite >>> if not vmDrive.blockDev or vmDrive.format != 'cow': >>> AttributeError: 'Drive' object has no attribute 'format' >>> >>> How did you create this vm? was it from the UI? was it from a >>> script? >>> what >>> are the parameters you used? >>> >>> Thanks, >>> >>> Dafna >>> >>> >>> >>> On 01/07/2014 04:34 PM, Neil wrote: >>>> >>>> Hi Elad, >>>> >>>> Thanks for assisting me, yes the same condition exists, if I try >>>> to >>>> migrate Tux it says "The VM Tux is being migrated". >>>> >>>> >>>> Below are the details requested. >>>> >>>> >>>> [root@node01 ~]# virsh -r list >>>> Id Name State >>>> ---------------------------------------------------- >>>> 1 adam running >>>> >>>> [root@node01 ~]# pgrep qemu >>>> 11232 >>>> [root@node01 ~]# vdsClient -s 0 list table >>>> 63da7faa-f92a-4652-90f2-b6660a4fb7b3 11232 adam >>>> Up >>>> >>>> >>>> [root@node03 ~]# virsh -r list >>>> Id Name State >>>> ---------------------------------------------------- >>>> 7 tux running >>>> >>>> [root@node03 ~]# pgrep qemu >>>> 32333 >>>> [root@node03 ~]# vdsClient -s 0 list table >>>> 2736197b-6dc3-4155-9a29-9306ca64881d 32333 tux >>>> Up >>>> >>>> Thanks. >>>> >>>> Regards. >>>> >>>> Neil Wilson. >>>> >>>> >>>> On Tue, Jan 7, 2014 at 4:43 PM, Elad Ben Aharon >>>> <ebenahar@redhat.com> >>>> wrote: >>>>> >>>>> Is it still in the same condition? >>>>> If yes, please add the outputs from both hosts for: >>>>> >>>>> #virsh -r list >>>>> #pgrep qemu >>>>> #vdsClient -s 0 list table (or 'vdsClient 0 list table' if >>>>> you >>>>> are >>>>> working in insecure mode) >>>>> >>>>> >>>>> Thnaks, >>>>> >>>>> Elad Ben Aharon >>>>> RHEV-QE storage team >>>>> >>>>> >>>>> >>>>> >>>>> ----- Original Message ----- >>>>> From: "Neil" <nwilson123@gmail.com> >>>>> To: users@ovirt.org >>>>> Sent: Tuesday, January 7, 2014 4:21:43 PM >>>>> Subject: [Users] Migration Failed >>>>> >>>>> Hi guys, >>>>> >>>>> I've tried to migrate a VM from one host(node03) to >>>>> another(node01), >>>>> and it failed to migrate, and the VM(tux) remained on the >>>>> original >>>>> host. I've now tried to migrate the same VM again, and it picks >>>>> up >>>>> that the previous migration is still in progress and refuses to >>>>> migrate. >>>>> >>>>> I've checked for the KVM process on each of the hosts and the VM >>>>> is >>>>> definitely still running on node03 so there doesn't appear to be >>>>> any >>>>> chance of the VM trying to run on both hosts (which I've had >>>>> before >>>>> which is very scary). >>>>> >>>>> These are my versions... 
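Worth noting: the StorageDomainDoesNotExist and error code 358 lines quoted above all point at the ISO domain (e9ab725d-69c1-4a59-b225-b995d095c289, bla-iso), which is exactly what Neil's follow-up below identifies as the real problem. A quick way to check an NFS ISO domain from a host, as a sketch (vdsClient usage as in Elad's earlier commands; /rhev/data-center/mnt is vdsm's default mount root):
# vdsClient -s 0 getStorageDomainInfo e9ab725d-69c1-4a59-b225-b995d095c289
# ls /rhev/data-center/mnt/
# mount | grep -i nfs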

Sorry, before anyone wastes any more time on this, I found the issue.
My NFS ISO domain was attached to the other host, node02, but the NFS mount wasn't accessible because the iptables service had been re-enabled on boot when I ran all the OS updates a while back.
I've disabled the service again, and the migration has completed successfully.
Thank you all for your assistance, and I'm sorry for wasting your time.
Regards.
Neil Wilson.
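For anyone hitting the same symptom, the checks and the stop-gap Neil describes look roughly like this on an EL6 host (a sketch; the NFS server name is a placeholder):
# service iptables status
# chkconfig --list iptables
# showmount -e <iso-domain-nfs-server>
# mount | grep -i iso
The quick workaround Neil used is simply:
# service iptables stop
# chkconfig iptables off
The cleaner long-term fix is Sven's suggestion below: leave iptables running and open the NFS ports instead.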

Hi,
you may want to consider enabling your firewall for security reasons. The ports which NFS uses are configured under /etc/sysconfig/nfs on EL 6.
There's no reason at all to run oVirt without correct firewalling, not even in test environments, as you will want firewalling in production and you have to test it anyway.
--
Mit freundlichen Grüßen / Regards
Sven Kieske
Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer (Managing Director): Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin (General Partner): Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
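To make that concrete, a minimal sketch for EL6 rather than a tested configuration: on the machine that exports the ISO domain, pin the NFS helper daemons to fixed ports in /etc/sysconfig/nfs (the port numbers below are common example values), restart the NFS services, and open those ports plus rpcbind and nfsd in iptables. The -s 10.0.2.0/24 source restriction is an assumption based on the host addresses mentioned earlier in the thread.
In /etc/sysconfig/nfs:
MOUNTD_PORT=892
STATD_PORT=662
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
Then:
# service nfslock restart
# service nfs restart
# iptables -I INPUT -s 10.0.2.0/24 -p tcp -m multiport --dports 111,662,892,2049,32803 -j ACCEPT
# iptables -I INPUT -s 10.0.2.0/24 -p udp -m multiport --dports 111,662,892,2049,32769 -j ACCEPT
# service iptables save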

On 14 Jan 2014, at 08:43, Neil wrote:
Sorry, before anyone wastes anymore time on this, I found the issue.
My NFS ISO domain was attached to the other host node02, but the NFS mount wasn't accessible due to the iptables service being activated on boot once I had run all the OS updates a while back.
I've disabled the service again, and the migration has completed successfully.
Thank you all for your assistance, and I'm sorry for wasting your time.
Good that it's working for you. I was also checking the internal error and it's http://gerrit.ovirt.org/#/c/21700, which is fixed by now.
Thanks,
michal
participants (6):
- Dafna Ron
- Elad Ben Aharon
- Michal Skrivanek
- Neil
- Sven Kieske
- Tom Brown