[Users] Live storage migration fails on CentOS 6.4 + ovirt3.3 cluster

Moving the storage of a (running) VM to a different (FC) storage domain fails.

Steps to reproduce:
1) Create new VM
2) Start VM
3) Start move of the VM to a different storage domain

When I look at the logs it seems that vdsm/libvirt tries to use an option that is unsupported by the libvirt or qemu-kvm version on CentOS 6.4:

"libvirtError: unsupported configuration: reuse is not supported with this QEMU binary"

Information in the "Events" section of the oVirt engine manager:

2013-Nov-04, 14:45 VM migratest powered off by grendelmans (Host: gnkvm01).
2013-Nov-04, 14:05 User grendelmans moving disk migratest_Disk1 to domain gneva03_vmdisk02.
2013-Nov-04, 14:04 Snapshot 'Auto-generated for Live Storage Migration' creation for VM 'migratest' has been completed.
2013-Nov-04, 14:04 Failed to create live snapshot 'Auto-generated for Live Storage Migration' for VM 'migratest'. VM restart is recommended.
2013-Nov-04, 14:04 Snapshot 'Auto-generated for Live Storage Migration' creation for VM 'migratest' was initiated by grendelmans.
2013-Nov-04, 14:04 VM migratest started on Host gnkvm01
2013-Nov-04, 14:03 VM migratest was started by grendelmans (Host: gnkvm01).

Information from the vdsm log:

Thread-100903::DEBUG::2013-11-04 14:04:56,548::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0
Thread-100903::DEBUG::2013-11-04 14:04:56,615::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex
Thread-100903::DEBUG::2013-11-04 14:04:56,622::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=38 bs=512 if=/dev/dfbbc8dd-bfae-44e1-8876-2bb82921565a/metadata count=1' (cwd None)
Thread-100903::DEBUG::2013-11-04 14:04:56,642::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.000208694 s, 2.5 MB/s\n'; <rc> = 0
Thread-100903::DEBUG::2013-11-04 14:04:56,643::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.000208694 s, 2.5 MB/s'], size: 512
Thread-100903::INFO::2013-11-04 14:04:56,644::logUtils::47::dispatcher::(wrapper) Run and protect: prepareImage, Return response: {'info': {'path': '/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821', 'volType': 'path'}, 'path': '/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821', 'chain': [{'path': '/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/7af63c13-c44b-4418-a1d4-e0e092ee7f04', 'domainID': 'dfbbc8dd-bfae-44e1-8876-2bb82921565a', 'vmVolInfo': {'path': '/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/7af63c13-c44b-4418-a1d4-e0e092ee7f04', 'volType': 'path'}, 'volumeID': '7af63c13-c44b-4418-a1d4-e0e092ee7f04', 'imageID': '57ff3040-0cbd-4659-bd21-f07036d84dd8'}, {'path': '/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821', 'domainID': 'dfbbc8dd-bfae-44e1-8876-2bb82921565a', 'vmVolInfo': {'path': '/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821', 'volType': 'path'}, 'volumeID': '4d05730d-433c-40d9-8600-6fb0eb5af821', 'imageID': '57ff3040-0cbd-4659-bd21-f07036d84dd8'}]}
Thread-100903::DEBUG::2013-11-04 14:04:56,644::task::1168::TaskManager.Task::(prepare) Task=`0f953aa3-e2b9-4008-84ad-f271136d8d23`::finished: {'info': {'path': '/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821', 'volType': 'path'}, 'path': '/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821', 'chain': [{'path': '/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/7af63c13-c44b-4418-a1d4-e0e092ee7f04', 'domainID': 'dfbbc8dd-bfae-44e1-8876-2bb82921565a', 'vmVolInfo': {'path': '/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/7af63c13-c44b-4418-a1d4-e0e092ee7f04', 'volType': 'path'}, 'volumeID': '7af63c13-c44b-4418-a1d4-e0e092ee7f04', 'imageID': '57ff3040-0cbd-4659-bd21-f07036d84dd8'}, {'path': '/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821', 'domainID': 'dfbbc8dd-bfae-44e1-8876-2bb82921565a', 'vmVolInfo': {'path': '/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821', 'volType': 'path'}, 'volumeID': '4d05730d-433c-40d9-8600-6fb0eb5af821', 'imageID': '57ff3040-0cbd-4659-bd21-f07036d84dd8'}]}
Thread-100903::DEBUG::2013-11-04 14:04:56,644::task::579::TaskManager.Task::(_updateState) Task=`0f953aa3-e2b9-4008-84ad-f271136d8d23`::moving from state preparing -> state finished
Thread-100903::DEBUG::2013-11-04 14:04:56,645::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.dfbbc8dd-bfae-44e1-8876-2bb82921565a': < ResourceRef 'Storage.dfbbc8dd-bfae-44e1-8876-2bb82921565a', isValid: 'True' obj: 'None'>}
Thread-100903::DEBUG::2013-11-04 14:04:56,645::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-100903::DEBUG::2013-11-04 14:04:56,646::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.dfbbc8dd-bfae-44e1-8876-2bb82921565a'
Thread-100903::DEBUG::2013-11-04 14:04:56,646::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.dfbbc8dd-bfae-44e1-8876-2bb82921565a' (0 active users)
Thread-100903::DEBUG::2013-11-04 14:04:56,647::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.dfbbc8dd-bfae-44e1-8876-2bb82921565a' is free, finding out if anyone is waiting for it.
Thread-100903::DEBUG::2013-11-04 14:04:56,647::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.dfbbc8dd-bfae-44e1-8876-2bb82921565a', Clearing records.
Thread-100903::DEBUG::2013-11-04 14:04:56,648::task::974::TaskManager.Task::(_decref) Task=`0f953aa3-e2b9-4008-84ad-f271136d8d23`::ref 0 aborting False
Thread-100903::INFO::2013-11-04 14:04:56,648::clientIF::325::vds::(prepareVolumePath) prepared volume path: /rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821
Thread-100903::DEBUG::2013-11-04 14:04:56,649::vm::3619::vm.Vm::(snapshot) vmId=`2147dd59-6794-4be6-98b9-948636a31159`::
<domainsnapshot>
  <disks>
    <disk name="vda" snapshot="external">
      <source file="/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821"/>
    </disk>
  </disks>
</domainsnapshot>
Thread-100903::DEBUG::2013-11-04 14:04:56,659::libvirtconnection::101::libvirtconnection::(wrapper) Unknown libvirterror: ecode: 67 edom: 10 level: 2 message: unsupported configuration: reuse is not supported with this QEMU binary
Thread-100903::DEBUG::2013-11-04 14:04:56,659::vm::3640::vm.Vm::(snapshot) vmId=`2147dd59-6794-4be6-98b9-948636a31159`::Snapshot failed using the quiesce flag, trying again without it (unsupported configuration: reuse is not supported with this QEMU binary)
Thread-100903::DEBUG::2013-11-04 14:04:56,668::libvirtconnection::101::libvirtconnection::(wrapper) Unknown libvirterror: ecode: 67 edom: 10 level: 2 message: unsupported configuration: reuse is not supported with this QEMU binary
Thread-100903::ERROR::2013-11-04 14:04:56,668::vm::3644::vm.Vm::(snapshot) vmId=`2147dd59-6794-4be6-98b9-948636a31159`::Unable to take snapshot
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 3642, in snapshot
    self._dom.snapshotCreateXML(snapxml, snapFlags)
  File "/usr/share/vdsm/vm.py", line 826, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 76, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1623, in snapshotCreateXML
    if ret is None:raise libvirtError('virDomainSnapshotCreateXML() failed', dom=self)
libvirtError: unsupported configuration: reuse is not supported with this QEMU binary
Thread-100903::DEBUG::2013-11-04 14:04:56,670::BindingXMLRPC::986::vds::(wrapper) return vmSnapshot with {'status': {'message': 'Snapshot failed', 'code': 48}}

Version information:

oVirt Engine server (CentOS 6.4 + updates, ovirt 3.3 stable):

[root@gnovirt01 ~]# rpm -qa '*ovirt*' '*vdsm*' '*libvirt*' '*kvm*'
ovirt-host-deploy-1.1.1-1.el6.noarch
ovirt-engine-lib-3.3.0.1-1.el6.noarch
ovirt-engine-webadmin-portal-3.3.0.1-1.el6.noarch
ovirt-engine-dbscripts-3.3.0.1-1.el6.noarch
libvirt-client-0.10.2-18.el6_4.14.x86_64
qemu-kvm-0.12.1.2-2.355.0.1.el6_4.9.x86_64
ovirt-engine-sdk-python-3.3.0.6-1.el6.noarch
ovirt-release-el6-8-1.noarch
ovirt-host-deploy-java-1.1.1-1.el6.noarch
ovirt-engine-websocket-proxy-3.3.0.1-1.el6.noarch
ovirt-image-uploader-3.3.1-1.el6.noarch
ovirt-log-collector-3.3.1-1.el6.noarch
ovirt-engine-userportal-3.3.0.1-1.el6.noarch
ovirt-engine-restapi-3.3.0.1-1.el6.noarch
ovirt-engine-backend-3.3.0.1-1.el6.noarch
ovirt-engine-setup-3.3.0.1-1.el6.noarch
libvirt-0.10.2-18.el6_4.14.x86_64
ovirt-engine-cli-3.3.0.4-1.el6.noarch
ovirt-node-iso-3.0.1-1.0.2.vdsm.el6.noarch
ovirt-iso-uploader-3.3.1-1.el6.noarch
ovirt-engine-tools-3.3.0.1-1.el6.noarch
ovirt-engine-3.3.0.1-1.el6.noarch

oVirt hypervisor servers (CentOS 6.4 + updates, ovirt 3.3 stable):

[root@gnkvm01 vdsm]# rpm -qa '*ovirt*' '*vdsm*' '*libvirt*' '*kvm*'
libvirt-client-0.10.2-18.el6_4.14.x86_64
libvirt-lock-sanlock-0.10.2-18.el6_4.14.x86_64
qemu-kvm-tools-0.12.1.2-2.355.0.1.el6_4.9.x86_64
libvirt-python-0.10.2-18.el6_4.14.x86_64
libvirt-0.10.2-18.el6_4.14.x86_64
vdsm-python-4.12.1-4.el6.x86_64
vdsm-xmlrpc-4.12.1-4.el6.noarch
vdsm-4.12.1-4.el6.x86_64
qemu-kvm-0.12.1.2-2.355.0.1.el6_4.9.x86_64
vdsm-python-cpopen-4.12.1-4.el6.x86_64
vdsm-cli-4.12.1-4.el6.noarch
[root@gnkvm01 vdsm]#
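
Side note for anyone trying to narrow this down: the failing operation can be exercised outside of oVirt, since vdsm is only asking libvirt for a disk-only external snapshot with the "reuse existing file" flag. A rough sketch with virsh (untested here; the VM name, disk target and paths are placeholders):

  # vdsm pre-creates the snapshot volume, and --reuse-external expects
  # the overlay file to exist already; mimic that with qemu-img:
  qemu-img create -f qcow2 -b /path/to/current/leaf/volume /tmp/overlay.qcow2

  # Ask libvirt for a disk-only external snapshot that reuses the file.
  # On the plain qemu-kvm 0.12.1.2 (el6_4) build this should fail with
  # the same "reuse is not supported with this QEMU binary" error:
  virsh snapshot-create-as migratest testsnap \
      --disk-only --reuse-external --no-metadata \
      --diskspec vda,file=/tmp/overlay.qcow2,snapshot=external

If that fails the same way directly on a host, the problem sits in the libvirt/qemu-kvm combination rather than in the engine.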

Can anyone reproduce / comment on this? Can this be caused by http://www.ovirt.org/Vdsm_Developers#Missing_dependencies_on_RHEL_6.4 ?

On 11/06/2013 10:42 AM, Sander Grendelman wrote:
Can anyone reproduce / comment on this?
Can this be caused by http://www.ovirt.org/Vdsm_Developers#Missing_dependencies_on_RHEL_6.4 ?
do you use qemu-kvm or qemu-kvm-rhev rpm?
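
(For anyone following along, a quick way to answer that on a hypervisor is to query rpm for both package names directly:

  rpm -q qemu-kvm qemu-kvm-rhev

whichever one is installed will print its full version.)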

On 6.11.2013 12:06, Itamar Heim wrote:
On 11/06/2013 10:42 AM, Sander Grendelman wrote:
Can anyone reproduce / comment on this?
Can this be caused by http://www.ovirt.org/Vdsm_Developers#Missing_dependencies_on_RHEL_6.4 ?
do you use qemu-kvm or qemu-kvm-rhev rpm?
Hello, I have the same problem here. I think it is related to https://bugzilla.redhat.com/show_bug.cgi?id=1009100, because before the live migration takes place it tries to create a live snapshot, and that fails. So, is there a qemu-kvm-rhev package somewhere for CentOS? An offline migration test is still in progress, but I believe it is going to work, because no live snapshot needs to be created for that. Thank you.
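
(That BZ matches what the logs show: the stock RHEL/CentOS qemu-kvm build ships with the live-snapshot machinery disabled, and qemu-kvm-rhev enables it. If you want to check a host yourself, one way, sketched here with the VM name as a placeholder, is to ask the monitor which QMP commands the running binary actually exposes:

  # List the QMP commands of the qemu process behind a running VM and
  # look for the ones live snapshot / live storage migration rely on:
  virsh qemu-monitor-command migratest --pretty '{"execute":"query-commands"}' \
      | egrep 'snapshot|mirror|transaction'

If nothing snapshot-related comes back on the plain qemu-kvm build, that would explain why libvirt refuses the REUSE_EXT snapshot. Exact command names can differ between builds, so treat this as a hint, not a definitive test.)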

On Wed, Nov 6, 2013 at 12:42 PM, Jakub Bittner <> wrote:
do you use qemu-kvm or qemu-kvm-rhev rpm?
I have the same problem here. I think it is related to https://bugzilla.redhat.com/show_bug.cgi?id=1009100, because before the live migration takes place it tries to create a live snapshot, and that fails. So, is there a qemu-kvm-rhev package somewhere for CentOS?
An offline migration test is still in progress, but I believe it is going to work, because no live snapshot needs to be created for that.
I can confirm that live storage migration works with qemu-kvm-rhev. For my test I built the package from http://ftp.redhat.com/pub/redhat/linux/enterprise/6Server/en/RHEV/SRPMS/qemu...

[root@gnkvm01 ~]# rpm -qa '*kvm*'
qemu-kvm-rhev-tools-0.12.1.2-2.355.el6.9.x86_64
qemu-kvm-rhev-0.12.1.2-2.355.el6.9.x86_64
[root@gnkvm01 ~]#
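
For anyone wanting to do the same: it is a standard SRPM rebuild. Roughly the steps below (a sketch, untested here; the SRPM file name is inferred from the 0.12.1.2-2.355.el6.9 build shown above):

  # Build prerequisites (yum-builddep comes from yum-utils):
  yum install rpm-build yum-utils
  yum-builddep qemu-kvm-rhev-0.12.1.2-2.355.el6.9.src.rpm

  # Rebuild binary RPMs from the source RPM:
  rpmbuild --rebuild qemu-kvm-rhev-0.12.1.2-2.355.el6.9.src.rpm

  # Install the result; if it does not cleanly replace plain qemu-kvm,
  # remove that first (yum remove qemu-kvm) and then install:
  rpm -Uvh ~/rpmbuild/RPMS/x86_64/qemu-kvm-rhev-*.rpm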

do you use qemu-kvm or qemu-kvm-rhev rpm?
qemu-kvm:

[root@gnkvm01 ~]# rpm -qa '*kvm*'
qemu-kvm-tools-0.12.1.2-2.355.0.1.el6_4.9.x86_64
qemu-kvm-0.12.1.2-2.355.0.1.el6_4.9.x86_64
[root@gnkvm01 ~]# yum list |grep kvm
qemu-kvm.x86_64          2:0.12.1.2-2.355.0.1.el6_4.9
qemu-kvm-tools.x86_64    2:0.12.1.2-2.355.0.1.el6_4.9
[root@gnkvm01 ~]#

It seems that qemu-kvm-rhev is not available for CentOS/oVirt?

On Wed, Nov 6, 2013 at 12:06 PM, Itamar Heim <iheim@redhat.com> wrote:
On 11/06/2013 10:42 AM, Sander Grendelman wrote:
Can anyone reproduce / comment on this?
Can this be caused by http://www.ovirt.org/Vdsm_Developers#Missing_dependencies_on_RHEL_6.4 ?
do you use qemu-kvm or qemu-kvm-rhev rpm?
participants (4)
- Itamar Heim
- Jakub Bittner
- Sander Grendelman
- Sander Grendelman