[Users] issues with live snapshot
Ayal Baron
abaron at redhat.com
Thu Feb 13 13:48:23 UTC 2014
----- Original Message -----
> There's a bug on this:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1063979
You can solve this by installing the qemu-kvm-rhev package, as described here:
http://comments.gmane.org/gmane.linux.centos.general/138593
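Before swapping packages, you can confirm you are hitting this capability gap by looking for the tell-tale libvirt message in vdsm.log. A minimal sketch (the log line is copied from Andreas' trace below; the variable names are illustrative):

```shell
# The libvirt error that indicates the installed qemu-kvm build lacks
# the REUSE_EXT live-snapshot capability (line copied from the vdsm log):
log_line='Unknown libvirterror: ecode: 67 edom: 10 level: 2 message: unsupported configuration: reuse is not supported with this QEMU binary'

case "$log_line" in
  *"reuse is not supported with this QEMU binary"*)
    msg="qemu-kvm lacks live-snapshot support; install qemu-kvm-rhev" ;;
  *)
    msg="no capability error found" ;;
esac
echo "$msg"
```

Against a real host you would run the same match over the log itself, e.g. grep 'reuse is not supported' /var/log/vdsm/vdsm.log.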
>
> Thanks,
>
> Dafna
>
>
> On 02/13/2014 09:14 AM, Maor Lipchuk wrote:
> > Hi Andreas,
> >
> > Basically it means that the snapshot was created, but the QEMU
> > process is still writing to the original volume (which is now the
> > snapshot), so any changes made while this VM is running will land
> > in the snapshot.
> >
> > This can be fixed by restarting the VM (as suggested in the event);
> > after the restart the QEMU process should be pointing at the right
> > volumes.
> >
> > Regards,
> > Maor
> >
> > On 02/13/2014 09:56 AM, andreas.ewert at cbc.de wrote:
> >> Hi,
> >>
> >> I want to create a live snapshot, but it fails at the finalizing task.
> >> There are 3 events:
> >>
> >> - Snapshot 'test' creation for VM 'snaptest' was initiated by EwertA
> >> - Failed to create live snapshot 'test' for VM 'snaptest'. VM restart is
> >> recommended.
> >> - Failed to complete snapshot 'test' creation for VM 'snaptest'.
> >>
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,672::BindingXMLRPC::965::vds::(wrapper) client
> >> [10.98.229.5]::call vmSnapshot with
> >> ('31c185ce-cc2e-4246-bf46-fcd96cd30050', [{'baseVolumeID':
> >> 'b9448428-b787-4286-b54e-aa54a8f8bb17', 'domainID':
> >> '54f86ad7-2c12-4322-b2d1-f129f3d20e57', 'volumeID':
> >> 'c677d01e-dc50-486b-a532-f88a71666d2c', 'imageID':
> >> 'db6faf9e-2cc8-4106-954b-fef7e4b1bd1b'}], '') {}
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,672::task::579::TaskManager.Task::(_updateState)
> >> Task=`8675b6b0-3216-46a8-8d9a-d0feb02d5b49`::moving from state init ->
> >> state preparing
> >> Thread-338209::INFO::2014-02-13
> >> 08:40:19,672::logUtils::44::dispatcher::(wrapper) Run and protect:
> >> prepareImage(sdUUID='54f86ad7-2c12-4322-b2d1-f129f3d20e57',
> >> spUUID='5849b030-626e-47cb-ad90-3ce782d831b3',
> >> imgUUID='db6faf9e-2cc8-4106-954b-fef7e4b1bd1b',
> >> leafUUID='c677d01e-dc50-486b-a532-f88a71666d2c')
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,673::resourceManager::197::ResourceManager.Request::(__init__)
> >> ResName=`Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57`ReqID=`630a701e-bd44-49ef-8a14-f657b8653a33`::Request
> >> was made in '/usr/share/vdsm/storage/hsm.py' line '3236' at
> >> 'prepareImage'
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,673::resourceManager::541::ResourceManager::(registerResource)
> >> Trying to register resource
> >> 'Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57' for lock type 'shared'
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,673::resourceManager::600::ResourceManager::(registerResource)
> >> Resource 'Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57' is free. Now
> >> locking as 'shared' (1 active user)
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,673::resourceManager::237::ResourceManager.Request::(grant)
> >> ResName=`Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57`ReqID=`630a701e-bd44-49ef-8a14-f657b8653a33`::Granted
> >> request
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,674::task::811::TaskManager.Task::(resourceAcquired)
> >> Task=`8675b6b0-3216-46a8-8d9a-d0feb02d5b49`::_resourcesAcquired:
> >> Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57 (shared)
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,675::task::974::TaskManager.Task::(_decref)
> >> Task=`8675b6b0-3216-46a8-8d9a-d0feb02d5b49`::ref 1 aborting False
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,675::lvm::440::OperationMutex::(_reloadlvs) Operation 'lvm
> >> reload operation' got the operation mutex
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,675::lvm::309::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n
> >> /sbin/lvm lvs --config " devices { preferred_names =
> >> [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
> >> disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [
> >> \'a|/dev/mapper/36000d7710000ec7c7d5beda78691839c|\', \'r|.*|\' ] }
> >> global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 }
> >> backup { retain_min = 50 retain_days = 0 } " --noheadings --units b
> >> --nosuffix --separator | -o
> >> uuid,name,vg_name,attr,size,seg_start_pe,devices,tags
> >> 54f86ad7-2c12-4322-b2d1-f129f3d20e57' (cwd None)
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,715::lvm::309::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '';
> >> <rc> = 0
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,739::lvm::475::Storage.LVM::(_reloadlvs) lvs reloaded
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,740::lvm::475::OperationMutex::(_reloadlvs) Operation 'lvm
> >> reload operation' released the operation mutex
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,741::lvm::309::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n
> >> /sbin/lvm lvchange --config " devices { preferred_names =
> >> [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
> >> disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [
> >> \'a|/dev/mapper/36000d7710000ec7c7d5beda78691839c|\', \'r|.*|\' ] }
> >> global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 }
> >> backup { retain_min = 50 retain_days = 0 } " --autobackup n
> >> --available y
> >> 54f86ad7-2c12-4322-b2d1-f129f3d20e57/c677d01e-dc50-486b-a532-f88a71666d2c'
> >> (cwd None)
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,800::lvm::309::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '';
> >> <rc> = 0
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,801::lvm::526::OperationMutex::(_invalidatelvs) Operation 'lvm
> >> invalidate operation' got the operation mutex
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,801::lvm::538::OperationMutex::(_invalidatelvs) Operation 'lvm
> >> invalidate operation' released the operation mutex
> >> Thread-338209::WARNING::2014-02-13
> >> 08:40:19,801::fileUtils::167::Storage.fileUtils::(createdir) Dir
> >> /var/run/vdsm/storage/54f86ad7-2c12-4322-b2d1-f129f3d20e57/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b
> >> already exists
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,801::blockSD::1068::Storage.StorageDomain::(createImageLinks)
> >> img run vol already exists:
> >> /var/run/vdsm/storage/54f86ad7-2c12-4322-b2d1-f129f3d20e57/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/b9448428-b787-4286-b54e-aa54a8f8bb17
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,802::blockSD::1068::Storage.StorageDomain::(createImageLinks)
> >> img run vol already exists:
> >> /var/run/vdsm/storage/54f86ad7-2c12-4322-b2d1-f129f3d20e57/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/bac74b7e-94f0-48d2-a5e5-8b2e846411e8
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,802::blockSD::1040::Storage.StorageDomain::(linkBCImage) path to
> >> image directory already exists:
> >> /rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57/images/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,802::lvm::440::OperationMutex::(_reloadlvs) Operation 'lvm
> >> reload operation' got the operation mutex
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,803::lvm::309::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n
> >> /sbin/lvm lvs --config " devices { preferred_names =
> >> [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
> >> disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [
> >> \'a|/dev/mapper/36000d7710000ec7c7d5beda78691839c|\', \'r|.*|\' ] }
> >> global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 }
> >> backup { retain_min = 50 retain_days = 0 } " --noheadings --units b
> >> --nosuffix --separator | -o
> >> uuid,name,vg_name,attr,size,seg_start_pe,devices,tags
> >> 54f86ad7-2c12-4322-b2d1-f129f3d20e57' (cwd None)
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,831::lvm::309::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '';
> >> <rc> = 0
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,849::lvm::475::Storage.LVM::(_reloadlvs) lvs reloaded
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,849::lvm::475::OperationMutex::(_reloadlvs) Operation 'lvm
> >> reload operation' released the operation mutex
> >> Thread-338209::INFO::2014-02-13
> >> 08:40:19,850::logUtils::47::dispatcher::(wrapper) Run and protect:
> >> prepareImage, Return response: {'info': {'domainID':
> >> '54f86ad7-2c12-4322-b2d1-f129f3d20e57', 'volType': 'path', 'leaseOffset':
> >> 128974848, 'path':
> >> '/rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57/images/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/c677d01e-dc50-486b-a532-f88a71666d2c',
> >> 'volumeID': 'c677d01e-dc50-486b-a532-f88a71666d2c', 'leasePath':
> >> '/dev/54f86ad7-2c12-4322-b2d1-f129f3d20e57/leases', 'imageID':
> >> 'db6faf9e-2cc8-4106-954b-fef7e4b1bd1b'}, 'path':
> >> '/rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57/images/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/c677d01e-dc50-486b-a532-f88a71666d2c',
> >> 'imgVolumesInfo': [{'domainID': '54f86ad7-2c12-4322-b2d1-f129f3d20e57',
> >> 'volType': 'path', 'leaseOffset': 128974848, 'path':
> >> '/rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57/images/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/c677d01e-dc50-486b-a532-f8!
> > 8a71666d2c
> > ', 'volumeID': 'c677d01e-dc50-486b-a532-f88a71666d2c', 'leasePath':
> > '/dev/54f86ad7-2c12-4322-b2d1-f129f3d20e57/leases', 'imageID':
> > 'db6faf9e-2cc8-4106-954b-fef7e4b1bd1b'}, {'domainID':
> > '54f86ad7-2c12-4322-b2d1-f129f3d20e57', 'volType': 'path', 'leaseOffset':
> > 127926272, 'path':
> > '/rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57/images/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/b9448428-b787-4286-b54e-aa54a8f8bb17',
> > 'volumeID': 'b9448428-b787-4286-b54e-aa54a8f8bb17', 'leasePath':
> > '/dev/54f86ad7-2c12-4322-b2d1-f129f3d20e57/leases', 'imageID':
> > 'db6faf9e-2cc8-4106-954b-fef7e4b1bd1b'}, {'domainID':
> > '54f86ad7-2c12-4322-b2d1-f129f3d20e57', 'volType': 'path', 'leaseOffset':
> > 111149056, 'path':
> > '/rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57/images/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/bac74b7e-94f0-48d2-a5e5-8b2e846411e8',
> > 'volumeID': 'bac74b7e-94f0-48d2-a5e5-8b2e846411e8', 'leasePath':
> > '/dev/54f86ad7-2c12-4322-b2d1-f129f3d20e57/leases', 'imageID': 'db6fa!
> > f9e-2cc8-4
> > 106-954b-fef7e4b1bd1b'}]}
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,850::task::1168::TaskManager.Task::(prepare)
> >> Task=`8675b6b0-3216-46a8-8d9a-d0feb02d5b49`::finished: {'info':
> >> {'domainID': '54f86ad7-2c12-4322-b2d1-f129f3d20e57', 'volType': 'path',
> >> 'leaseOffset': 128974848, 'path':
> >> '/rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57/images/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/c677d01e-dc50-486b-a532-f88a71666d2c',
> >> 'volumeID': 'c677d01e-dc50-486b-a532-f88a71666d2c', 'leasePath':
> >> '/dev/54f86ad7-2c12-4322-b2d1-f129f3d20e57/leases', 'imageID':
> >> 'db6faf9e-2cc8-4106-954b-fef7e4b1bd1b'}, 'path':
> >> '/rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57/images/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/c677d01e-dc50-486b-a532-f88a71666d2c',
> >> 'imgVolumesInfo': [{'domainID': '54f86ad7-2c12-4322-b2d1-f129f3d20e57',
> >> 'volType': 'path', 'leaseOffset': 128974848, 'path':
> >> '/rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57/images/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/c677d01e-dc50-!
> > 486b-a532-
> > f88a71666d2c', 'volumeID': 'c677d01e-dc50-486b-a532-f88a71666d2c',
> > 'leasePath': '/dev/54f86ad7-2c12-4322-b2d1-f129f3d20e57/leases',
> > 'imageID': 'db6faf9e-2cc8-4106-954b-fef7e4b1bd1b'}, {'domainID':
> > '54f86ad7-2c12-4322-b2d1-f129f3d20e57', 'volType': 'path', 'leaseOffset':
> > 127926272, 'path':
> > '/rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57/images/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/b9448428-b787-4286-b54e-aa54a8f8bb17',
> > 'volumeID': 'b9448428-b787-4286-b54e-aa54a8f8bb17', 'leasePath':
> > '/dev/54f86ad7-2c12-4322-b2d1-f129f3d20e57/leases', 'imageID':
> > 'db6faf9e-2cc8-4106-954b-fef7e4b1bd1b'}, {'domainID':
> > '54f86ad7-2c12-4322-b2d1-f129f3d20e57', 'volType': 'path', 'leaseOffset':
> > 111149056, 'path':
> > '/rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57/images/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/bac74b7e-94f0-48d2-a5e5-8b2e846411e8',
> > 'volumeID': 'bac74b7e-94f0-48d2-a5e5-8b2e846411e8', 'leasePath':
> > '/dev/54f86ad7-2c12-4322-b2d1-f129f3d20e57/leases', 'imag!
> > eID': 'db6
> > faf9e-2cc8-4106-954b-fef7e4b1bd1b'}]}
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,850::task::579::TaskManager.Task::(_updateState)
> >> Task=`8675b6b0-3216-46a8-8d9a-d0feb02d5b49`::moving from state preparing
> >> -> state finished
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,850::resourceManager::939::ResourceManager.Owner::(releaseAll)
> >> Owner.releaseAll requests {} resources
> >> {'Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57': < ResourceRef
> >> 'Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57', isValid: 'True' obj:
> >> 'None'>}
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,851::resourceManager::976::ResourceManager.Owner::(cancelAll)
> >> Owner.cancelAll requests {}
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,851::resourceManager::615::ResourceManager::(releaseResource)
> >> Trying to release resource 'Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57'
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,851::resourceManager::634::ResourceManager::(releaseResource)
> >> Released resource 'Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57' (0
> >> active users)
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,851::resourceManager::640::ResourceManager::(releaseResource)
> >> Resource 'Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57' is free, finding
> >> out if anyone is waiting for it.
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,851::resourceManager::648::ResourceManager::(releaseResource) No
> >> one is waiting for resource
> >> 'Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57', Clearing records.
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,852::task::974::TaskManager.Task::(_decref)
> >> Task=`8675b6b0-3216-46a8-8d9a-d0feb02d5b49`::ref 0 aborting False
> >> Thread-338209::INFO::2014-02-13
> >> 08:40:19,852::clientIF::353::vds::(prepareVolumePath) prepared volume
> >> path:
> >> /rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57/images/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/c677d01e-dc50-486b-a532-f88a71666d2c
> >> Thread-338209::DEBUG::2014-02-13 08:40:19,852::vm::3743::vm.Vm::(snapshot)
> >> vmId=`31c185ce-cc2e-4246-bf46-fcd96cd30050`::<domainsnapshot>
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,865::libvirtconnection::108::libvirtconnection::(wrapper)
> >> Unknown libvirterror: ecode: 67 edom: 10 level: 2 message: unsupported
> >> configuration: reuse is not supported with this QEMU binary
> >> Thread-338209::DEBUG::2014-02-13 08:40:19,865::vm::3764::vm.Vm::(snapshot)
> >> vmId=`31c185ce-cc2e-4246-bf46-fcd96cd30050`::Snapshot failed using the
> >> quiesce flag, trying again without it (unsupported configuration: reuse
> >> is not supported with this QEMU binary)
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,869::libvirtconnection::108::libvirtconnection::(wrapper)
> >> Unknown libvirterror: ecode: 67 edom: 10 level: 2 message: unsupported
> >> configuration: reuse is not supported with this QEMU binary
> >> Thread-338209::ERROR::2014-02-13 08:40:19,869::vm::3768::vm.Vm::(snapshot)
> >> vmId=`31c185ce-cc2e-4246-bf46-fcd96cd30050`::Unable to take snapshot
> >> Thread-338209::DEBUG::2014-02-13
> >> 08:40:19,870::BindingXMLRPC::972::vds::(wrapper) return vmSnapshot with
> >> {'status': {'message': 'Snapshot failed', 'code': 48}}
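The retry visible in the trace above (snapshot attempted with the quiesce flag, then once more without it) can be sketched as follows. take_snapshot is a hypothetical stand-in for the real libvirt call; its stub mimics this log, where the qemu binary rejects REUSE_EXT regardless of quiescing, so dropping the flag cannot help:

```shell
# Hypothetical stand-in for the libvirt snapshot call; here it mimics
# this log: the qemu binary rejects REUSE_EXT whether or not quiesce is used.
take_snapshot() {
  echo "unsupported configuration: reuse is not supported with this QEMU binary" >&2
  return 1
}

# vdsm's fallback: try with the quiesce flag first, then retry without it.
if take_snapshot --quiesce 2>/dev/null; then
  result="snapshot taken (quiesced)"
elif take_snapshot 2>/dev/null; then
  result="snapshot taken (not quiesced)"
else
  result="Snapshot failed"   # matches the code-48 reply in the log
fi
echo "$result"
```

In other words, the fallback only helps when quiescing itself is the problem; here both attempts fail on the same capability error, which is why the thread's fix is the qemu-kvm-rhev package rather than a retry.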
> >>
> >> What can I do to fix this?
> >>
> >> best regards
> >> Andreas
> >> _______________________________________________
> >> Users mailing list
> >> Users at ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/users
> >>
>
>
> --
> Dafna Ron
>
>