[Users] Disk move when VM is running problem

Hello,
oVirt 3.2.1 on an f18 host. I have a running VM with one disk; the storage domain is FC and the disk is thin provisioned. The VM has preexisting snapshots. I select to move the disk, and at the top of the window I note this message in red:

Note: Moving the disk(s) while the VM is running

I don't know whether it is just a general advisory... I proceed and I get:

Snapshot Auto-generated for Live Storage Migration creation for VM zensrv was initiated by admin@internal.
Snapshot Auto-generated for Live Storage Migration creation for VM zensrv has been completed.
User admin@internal moving disk zensrv_Disk1 to domain DS6800_Z1_1181.

During the move, iotop shows:

Total DISK READ: 73398.57 K/s | Total DISK WRITE: 79927.24 K/s
  PID  PRIO USER      DISK READ     DISK WRITE   SWAPIN   IO>     COMMAND
29513  idle vdsm     72400.02 K/s  72535.35 K/s  0.00 %  87.65 %  dd if=/rhev/data-cent~nt=10240 oflag=direct
 2457  be/4 sanlock    336.35 K/s      0.33 K/s  0.00 %   0.12 %  sanlock daemon -U sanlock -G sanlock
 4173  be/3 vdsm         0.33 K/s      0.00 K/s  0.00 %   0.00 %  python /usr/share/vdsm~eFileHandler.pyc 43 40
 8760  be/4 qemu         0.00 K/s   7574.52 K/s  0.00 %   0.00 %  qemu-kvm -name F18 -S ~on0,bus=pci.0,addr=0x8
 2830  be/4 root         0.00 K/s     13.14 K/s  0.00 %   0.00 %  libvirtd --listen
27445  be/4 qemu         0.00 K/s     44.67 K/s  0.00 %   0.00 %  qemu-kvm -name zensrv ~on0,bus=pci.0,addr=0x6
 3141  be/3 vdsm         0.00 K/s      3.94 K/s  0.00 %   0.00 %  python /usr/share/vdsm/vdsm

and the copy process is:

vdsm 29513 3141 14 17:14 ? 00:00:17 /usr/bin/dd if=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/013bcc40-5f3d-4394-bd3b-971b14852654/images/01488698-6420-4a32-9095-cfed1ff8f4bf/2be37d02-b44f-4823-bf26-054d1a1f0c90 of=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/14b5167c-5883-4920-8236-e8905456b01f/images/01488698-6420-4a32-9095-cfed1ff8f4bf/2be37d02-b44f-4823-bf26-054d1a1f0c90 bs=1048576 seek=0 skip=0 conv=notrunc count=10240 oflag=direct

After some minutes I get:

User admin@internal have failed to move disk zensrv_Disk1 to domain DS6800_Z1_1181.

But actually the VM is still running (I had an ssh terminal open on it) and the disk appears to be the target one, as if the move operation actually completed ok... What should I do? Can I safely shut down and restart the VM?

In engine.log around the error time I see the following, which makes me suspect the problem is perhaps in deallocating the old PV (where the volumes actually ended up can be checked with the LVM sketch after the log excerpts below):
2013-04-16 17:17:45,039 WARN [org.ovirt.engine.core.bll.GetConfigurationValueQuery] (ajp--127.0.0.1-8702-2) calling GetConfigurationValueQuery (VdcVersion) with null version, using default general for version 2013-04-16 17:18:02,979 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-50) [4e37d247] Failed in DeleteImageGroupVDS method 2013-04-16 17:18:02,980 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-50) [4e37d247] Error code CannotRemoveLogicalVolume and error message IRSGenericException: IRSErrorException: Failed to DeleteImageGroupVDS, error = Cannot remove Logical Volume: ('013bcc40-5f3d-4394-bd3b-971b14852654', "{'d477fcba-2110-403e-93fe-15565aae5304': ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',), parent='0a6c7300-011a-46f9-9a5d-d22476a7f4c6'), 'f8eb4d4c-9aae-44b8-9123-73f3182dc4dc': ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',), parent='2be37d02-b44f-4823-bf26-054d1a1f0c90'), '2be37d02-b44f-4823-bf26-054d1a1f0c90': ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',), parent='00000000-0000-0000-0000-000000000000'), '0a6c7300-011a-46f9-9a5d-d22476a7f4c6': ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',), parent='f8eb4d4c-9aae-44b8-9123-73f3182dc4dc')}") 2013-04-16 17:18:03,029 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (pool-3-thread-50) [4e37d247] IrsBroker::Failed::DeleteImageGroupVDS due to: IRSErrorException: IRSGenericException: IRSErrorException: Failed to DeleteImageGroupVDS, error = Cannot remove Logical Volume: ('013bcc40-5f3d-4394-bd3b-971b14852654', "{'d477fcba-2110-403e-93fe-15565aae5304': ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',), parent='0a6c7300-011a-46f9-9a5d-d22476a7f4c6'), 'f8eb4d4c-9aae-44b8-9123-73f3182dc4dc': ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',), parent='2be37d02-b44f-4823-bf26-054d1a1f0c90'), '2be37d02-b44f-4823-bf26-054d1a1f0c90': ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',), parent='00000000-0000-0000-0000-000000000000'), '0a6c7300-011a-46f9-9a5d-d22476a7f4c6': ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',), parent='f8eb4d4c-9aae-44b8-9123-73f3182dc4dc')}") 2013-04-16 17:18:03,067 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (pool-3-thread-50) [4e37d247] FINISH, DeleteImageGroupVDSCommand, log id: 54616b28 2013-04-16 17:18:03,067 ERROR [org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand] (pool-3-thread-50) [4e37d247] Command org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand throw Vdc Bll exception. 
With error message VdcBLLException: org.ovirt.engine.core.vdsbroker.irsbroker.IRSErrorException: IRSGenericException: IRSErrorException: Failed to DeleteImageGroupVDS, error = Cannot remove Logical Volume: ('013bcc40-5f3d-4394-bd3b-971b14852654', "{'d477fcba-2110-403e-93fe-15565aae5304': ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',), parent='0a6c7300-011a-46f9-9a5d-d22476a7f4c6'), 'f8eb4d4c-9aae-44b8-9123-73f3182dc4dc': ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',), parent='2be37d02-b44f-4823-bf26-054d1a1f0c90'), '2be37d02-b44f-4823-bf26-054d1a1f0c90': ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',), parent='00000000-0000-0000-0000-000000000000'), '0a6c7300-011a-46f9-9a5d-d22476a7f4c6': ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',), parent='f8eb4d4c-9aae-44b8-9123-73f3182dc4dc')}") 2013-04-16 17:18:03,069 ERROR [org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand] (pool-3-thread-50) [4e37d247] Reverting task unknown, handler: org.ovirt.engine.core.bll.lsm.VmReplicateDiskStartTaskHandler 2013-04-16 17:18:03,088 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand] (pool-3-thread-50) [4e37d247] START, VmReplicateDiskFinishVDSCommand(HostName = f18ovn01, HostId = 0f799290-b29a-49e9-bc1e-85ba5605a535, vmId=c0a43bef-7c9d-4170-bd9c-63497e61d3fc), log id: 6f8ba7ae 2013-04-16 17:18:03,093 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand] (pool-3-thread-50) [4e37d247] FINISH, VmReplicateDiskFinishVDSCommand, log id: 6f8ba7ae 2013-04-16 17:18:03,095 ERROR [org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand] (pool-3-thread-50) [4e37d247] Reverting task deleteImage, handler: org.ovirt.engine.core.bll.lsm.CreateImagePlaceholderTaskHandler 2013-04-16 17:18:03,113 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (pool-3-thread-50) [4e37d247] START, DeleteImageGroupVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false, compatabilityVersion = null, storageDomainId = 14b5167c-5883-4920-8236-e8905456b01f, imageGroupId = 01488698-6420-4a32-9095-cfed1ff8f4bf, postZeros = false, forceDelete = false), log id: 515ac5ae 2013-04-16 17:18:03,694 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (pool-3-thread-50) [4e37d247] FINISH, DeleteImageGroupVDSCommand, log id: 515ac5ae and in vdsm.log: Thread-1929238::ERROR::2013-04-16 17:14:02,928::libvirtvm::2320::vm.Vm::(diskReplicateStart) vmId=`c0a43bef-7c9d-4170-bd9c-63497e61d3fc`::Unable to start the replication for vda to {'domainID': '14b5167c-5883-4920-8236-e8905456b01f', 'poolID': '5849b030-626e-47cb-ad90-3ce782d831b3', 'device': 'disk', 'path': '/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/14b5167c-5883-4920-8236-e8905456b01f/images/01488698-6420-4a32-9095-cfed1ff8f4bf/d477fcba-2110-403e-93fe-15565aae5304', 'volumeID': 'd477fcba-2110-403e-93fe-15565aae5304', 'volumeChain': [{'path': '/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/14b5167c-5883-4920-8236-e8905456b01f/images/01488698-6420-4a32-9095-cfed1ff8f4bf/2be37d02-b44f-4823-bf26-054d1a1f0c90', 'domainID': '14b5167c-5883-4920-8236-e8905456b01f', 'volumeID': '2be37d02-b44f-4823-bf26-054d1a1f0c90', 'imageID': '01488698-6420-4a32-9095-cfed1ff8f4bf'}, {'path': '/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/14b5167c-5883-4920-8236-e8905456b01f/images/01488698-6420-4a32-9095-cfed1ff8f4bf/f8eb4d4c-9aae-44b8-9123-73f3182dc4dc', 'domainID': '14b5167c-5883-4920-8236-e8905456b01f', 'volumeID': 
'f8eb4d4c-9aae-44b8-9123-73f3182dc4dc', 'imageID': '01488698-6420-4a32-9095-cfed1ff8f4bf'}, {'path': '/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/14b5167c-5883-4920-8236-e8905456b01f/images/01488698-6420-4a32-9095-cfed1ff8f4bf/0a6c7300-011a-46f9-9a5d-d22476a7f4c6', 'domainID': '14b5167c-5883-4920-8236-e8905456b01f', 'volumeID': '0a6c7300-011a-46f9-9a5d-d22476a7f4c6', 'imageID': '01488698-6420-4a32-9095-cfed1ff8f4bf'}, {'path': '/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/14b5167c-5883-4920-8236-e8905456b01f/images/01488698-6420-4a32-9095-cfed1ff8f4bf/d477fcba-2110-403e-93fe-15565aae5304', 'domainID': '14b5167c-5883-4920-8236-e8905456b01f', 'volumeID': 'd477fcba-2110-403e-93fe-15565aae5304', 'imageID': '01488698-6420-4a32-9095-cfed1ff8f4bf'}], 'imageID': '01488698-6420-4a32-9095-cfed1ff8f4bf'} Traceback (most recent call last): File "/usr/share/vdsm/libvirtvm.py", line 2316, in diskReplicateStart libvirt.VIR_DOMAIN_BLOCK_REBASE_SHALLOW File "/usr/share/vdsm/libvirtvm.py", line 541, in f ret = attr(*args, **kwargs) File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py", line 111, in wrapper ret = f(*args, **kwargs) File "/usr/lib64/python2.7/site-packages/libvirt.py", line 626, in blockRebase if ret == -1: raise libvirtError ('virDomainBlockRebase() failed', dom=self) libvirtError: unsupported flags (0xb) in function qemuDomainBlockRebase Thread-1929238::DEBUG::2013-04-16 17:14:02,930::task::568::TaskManager.Task::(_updateState) Task=`27736fbc-7054-4df3-9f26-388601621ea9`::moving from state init -> state preparing Thread-1929238::INFO::2013-04-16 17:14:02,930::logUtils::41::dispatcher::(wrapper) Run and protect: teardownImage(sdUUID='14b5167c-5883-4920-8236-e8905456b01f', spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', imgUUID='01488698-6420-4a32-9095-cfed1ff8f4bf', volUUID=None) Thread-1929238::DEBUG::2013-04-16 17:14:02,931::resourceManager::190::ResourceManager.Request::(__init__) ResName=`Storage.14b5167c-5883-4920-8236-e8905456b01f`ReqID=`e1aa534c-45b6-471f-9a98-8413d449b480`::Request was made in '/usr/share/vdsm/storage/resourceManager.py' line '189' at '__init__' Thread-1929238::DEBUG::2013-04-16 17:14:02,931::resourceManager::504::ResourceManager::(registerResource) Trying to register resource 'Storage.14b5167c-5883-4920-8236-e8905456b01f' for lock type 'shared' Thread-1929238::DEBUG::2013-04-16 17:14:02,931::resourceManager::547::ResourceManager::(registerResource) Resource 'Storage.14b5167c-5883-4920-8236-e8905456b01f' is free. Now locking as 'shared' (1 active user) Thread-1929238::DEBUG::2013-04-16 17:14:02,931::resourceManager::227::ResourceManager.Request::(grant) ResName=`Storage.14b5167c-5883-4920-8236-e8905456b01f`ReqID=`e1aa534c-45b6-471f-9a98-8413d449b480`::Granted request Thread-1929238::DEBUG::2013-04-16 17:14:02,932::task::794::TaskManager.Task::(resourceAcquired) Task=`27736fbc-7054-4df3-9f26-388601621ea9`::_resourcesAcquired: Storage.14b5167c-5883-4920-8236-e8905456b01f (shared) Thread-1929238::DEBUG::2013-04-16 17:14:02,932::task::957::TaskManager.Task::(_decref) Task=`27736fbc-7054-4df3-9f26-388601621ea9`::ref 1 aborting False Thread-1929238::DEBUG::2013-04-16 17:14:02,932::lvm::409::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex thanks, Gianluca
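For reference, a quick way to check where the disk's volumes actually ended up after a failure like this is to ask LVM which volume group (i.e. which storage domain) holds the image's logical volumes on the host. This is a read-only sketch using the UUIDs from this thread; it assumes vdsm's usual IU_<imageUUID> tags are present on the block-domain LVs — if they are not, grep for the volume UUIDs from the engine log instead.

# image group of zensrv_Disk1, taken from the dd command above
IMG=01488698-6420-4a32-9095-cfed1ff8f4bf

# list every LV belonging to that image, together with the VG (= storage domain) that owns it
lvs --noheadings -o vg_name,lv_name,lv_size,lv_tags | grep "IU_${IMG}"

# source domain VG: 013bcc40-5f3d-4394-bd3b-971b14852654
# target domain VG: 14b5167c-5883-4920-8236-e8905456b01f

If some of the chain's volumes only show up under the source VG, that would suggest the replication never completed, whatever the admin portal displays.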

On Tue, Apr 16, 2013 at 5:43 PM, Gianluca Cecchi wrote:
After some minutes I get: User admin@internal have failed to move disk zensrv_Disk1 to domain DS6800_Z1_1181.
But actually the vm is still running (I had a ssh terminal open on it) and the disk appears to be the target one, as if the move operation actually completed ok....
what to do? Can I safely shutdown and restart the vm?
engine.log: https://docs.google.com/file/d/0BwoPbcrMv8mvaDMzY0JNcHFwVnc/edit?usp=sharing
vdsm.log: https://docs.google.com/file/d/0BwoPbcrMv8mvaDVIWTdEUzBPeVU/edit?usp=sharing

Gianluca

You have used live storage migration. In order to understand better what happened in your environment I need the following details:
1) What are your vdsm and libvirt versions? You probably need to upgrade libvirt, there is a known bug there.
2) Attach the output of 'tree /rhev/data-center/' run on the vdsm host.
3) Attach the output of 'lvs' run on the vdsm host.
4) Did you check what is on the disk of the VM (the target disk of the replication)? Because the replication failed, it should probably be empty.
5) Does your disk have any snapshots?

--
Yeela

----- Original Message -----
From: "Gianluca Cecchi" <gianluca.cecchi@gmail.com> To: "users" <users@ovirt.org> Sent: Tuesday, April 16, 2013 6:50:03 PM Subject: Re: [Users] Disk move when VM is running problem
On Tue, Apr 16, 2013 at 5:43 PM, Gianluca Cecchi wrote:
After some minutes I get: User admin@internal have failed to move disk zensrv_Disk1 to domain DS6800_Z1_1181.
But actually the vm is still running (I had a ssh terminal open on it) and the disk appears to be the target one, as if the move operation actually completed ok....
what to do? Can I safely shutdown and restart the vm?
engine.log : https://docs.google.com/file/d/0BwoPbcrMv8mvaDMzY0JNcHFwVnc/edit?usp=sharing
vdsm.log: https://docs.google.com/file/d/0BwoPbcrMv8mvaDVIWTdEUzBPeVU/edit?usp=sharing

Gianluca
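For anyone collecting the same diagnostics, the data requested above can be gathered on the vdsm host with a few plain commands; the output file names are just examples, and the qemu package name may differ per distribution.

rpm -q vdsm libvirt qemu-kvm
tree /rhev/data-center/ > /tmp/rhev-tree.txt    # storage domain / image layout as seen by vdsm
lvs > /tmp/lvs.txt                              # logical volumes per storage-domain VG
vgs > /tmp/vgs.txt                              # the VGs themselves, one per block storage domain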

On Thu, Apr 18, 2013 at 2:35 PM, Yeela Kaplan wrote:
You have used live storage migration. In order to understand better what happened in your environment I need the following details: 1) What is your vdsm and libvirt version?

[root@f18ovn01 ~]# rpm -q vdsm libvirt
vdsm-4.10.3-10.fc18.x86_64
libvirt-0.10.2.3-1.fc18.x86_64
You probably need to upgrade libvirt, there is a known bug there.
Can you detail which versions are impacted and the first version where the bug is resolved? Or a bugzilla number?
2) attach the output of: run on the vdsm host: 'tree '/rhev/data-center/' https://docs.google.com/file/d/0BwoPbcrMv8mvZld5Q0I1T0pEbXM/edit?usp=sharing
3) attach the output of: run 'lvs' on the vdsm host https://docs.google.com/file/d/0BwoPbcrMv8mvRmx0R3lEYlY5WG8/edit?usp=sharing
4) Did you check what is on the disk of the vm (the target disk of the replication)? because the replication failed so it should probably be empty.
The VM was running while the disk migration was in progress and didn't have any problem. In the GUI it appears that the VM has the new LUN as its backing storage.
5) Does your disk have any snapshots? Yes. And one more was generated with description: Auto-generated for Live Storage Migration
VM has only one disk. Overall snapshots now:

Date                 Status  Description
Current              Ok      Active VM snapshot
2013-Apr-16, 17:13   Ok      Auto-generated for Live Storage Migration
2013-Mar-22, 08:29   Ok      following
2013-Feb-26, 14:45   Ok      test

Gianluca

As soon as the VM was running, no problem. But after shutdown and try to power on I get this in the gui. 2013-Apr-18, 15:22 Failed to run VM zensrv (User: admin@internal). 2013-Apr-18, 15:22 Failed to run VM zensrv on Host f18ovn01. 2013-Apr-18, 15:22 VM zensrv is down. Exit message: 'truesize'. 2013-Apr-18, 15:22 VM zensrv was started by admin@internal (Host: f18ovn01). And this in vdsm.log: Thread-2061275::DEBUG::2013-04-18 15:22:10,511::task::568::TaskManager.Task::(_updateState) Task=`8cc2d3b9-48f3-43e3-97f3-43055d4865c4`::moving from state init -> state preparing Thread-2061275::INFO::2013-04-18 15:22:10,511::logUtils::41::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-2061275::INFO::2013-04-18 15:22:10,513::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats, Return response: {u'be882de7-6c79-413b-85aa-f0b8b77cb59e': {'delay': '0.0623390674591', 'lastCheck': '3.8', 'code': 0, 'valid': True}, u'3fb66ba1-cfcb-4341-8960-46f0e8cf6e83': {'delay': '0.0525228977203', 'lastCheck': '7.5', 'code': 0, 'valid': True}, u'8573d237-f86f-4b27-be80-479281a53645': {'delay': '0.0551929473877', 'lastCheck': '9.8', 'code': 0, 'valid': True}, u'596a3408-67d7-4b26-b482-e3a7554a5897': {'delay': '0.0536139011383', 'lastCheck': '8.1', 'code': 0, 'valid': True}, u'e3251723-08e1-4b4b-bde4-c10d6372074b': {'delay': '0.012589931488', 'lastCheck': '3.8', 'code': 0, 'valid': True}, u'2aff7dc6-e25b-433b-9681-5541a29bb07c': {'delay': '0.0661220550537', 'lastCheck': '2.7', 'code': 0, 'valid': True}, u'14b5167c-5883-4920-8236-e8905456b01f': {'delay': '0.0578501224518', 'lastCheck': '6.8', 'code': 0, 'valid': True}} Thread-2061275::DEBUG::2013-04-18 15:22:10,513::task::1151::TaskManager.Task::(prepare) Task=`8cc2d3b9-48f3-43e3-97f3-43055d4865c4`::finished: {u'be882de7-6c79-413b-85aa-f0b8b77cb59e': {'delay': '0.0623390674591', 'lastCheck': '3.8', 'code': 0, 'valid': True}, u'3fb66ba1-cfcb-4341-8960-46f0e8cf6e83': {'delay': '0.0525228977203', 'lastCheck': '7.5', 'code': 0, 'valid': True}, u'8573d237-f86f-4b27-be80-479281a53645': {'delay': '0.0551929473877', 'lastCheck': '9.8', 'code': 0, 'valid': True}, u'596a3408-67d7-4b26-b482-e3a7554a5897': {'delay': '0.0536139011383', 'lastCheck': '8.1', 'code': 0, 'valid': True}, u'e3251723-08e1-4b4b-bde4-c10d6372074b': {'delay': '0.012589931488', 'lastCheck': '3.8', 'code': 0, 'valid': True}, u'2aff7dc6-e25b-433b-9681-5541a29bb07c': {'delay': '0.0661220550537', 'lastCheck': '2.7', 'code': 0, 'valid': True}, u'14b5167c-5883-4920-8236-e8905456b01f': {'delay': '0.0578501224518', 'lastCheck': '6.8', 'code': 0, 'valid': True}} Thread-2061275::DEBUG::2013-04-18 15:22:10,514::task::568::TaskManager.Task::(_updateState) Task=`8cc2d3b9-48f3-43e3-97f3-43055d4865c4`::moving from state preparing -> state finished Thread-2061275::DEBUG::2013-04-18 15:22:10,514::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-2061275::DEBUG::2013-04-18 15:22:10,514::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-2061275::DEBUG::2013-04-18 15:22:10,514::task::957::TaskManager.Task::(_decref) Task=`8cc2d3b9-48f3-43e3-97f3-43055 d4865c4`::ref 0 aborting False Thread-2061273::DEBUG::2013-04-18 15:22:10,576::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0 Thread-2061273::DEBUG::2013-04-18 15:22:10,589::lvm::442::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex Thread-2061273::WARNING::2013-04-18 
15:22:10,589::lvm::588::Storage.LVM::(getLv) lv: d477fcba-2110-403e-93fe-15565aae5304 not found in lvs vg: 14b5167c-5883-4920-8236-e8905456b01f response Thread-2061273::ERROR::2013-04-18 15:22:10,590::task::833::TaskManager.Task::(_setError) Task=`77d5f0f2-6223-4303-81d8-32f20cd52acb`::Unexpected error Traceback (most recent call last): File "/usr/share/vdsm/storage/task.py", line 840, in _run return fn(*args, **kargs) File "/usr/share/vdsm/logUtils.py", line 42, in wrapper res = f(*args, **kwargs) File "/usr/share/vdsm/storage/hsm.py", line 2865, in getVolumeSize volUUID, bs=1)) File "/usr/share/vdsm/storage/volume.py", line 322, in getVSize return mysd.getVolumeClass().getVSize(mysd, imgUUID, volUUID, bs) File "/usr/share/vdsm/storage/blockVolume.py", line 105, in getVSize return int(int(lvm.getLV(sdobj.sdUUID, volUUID).size) / bs) File "/usr/share/vdsm/storage/lvm.py", line 845, in getLV raise se.LogicalVolumeDoesNotExistError("%s/%s" % (vgName, lvName)) LogicalVolumeDoesNotExistError: Logical volume does not exist: ('14b5167c-5883-4920-8236-e8905456b01f/d477fcba-2110-403e-93fe-15565aae5304',) Thread-2061273::DEBUG::2013-04-18 15:22:10,591::task::852::TaskManager.Task::(_run) Task=`77d5f0f2-6223-4303-81d8-32f20cd52acb`::Task._run: 77d5f0f2-6223-4303-81d8-32f20cd52acb ('14b5167c-5883-4920-8236-e8905456b01f', '5849b030-626e-47cb-ad90-3ce782d831b3', '01488698-6420-4a32-9095-cfed1ff8f4bf', 'd477fcba-2110-403e-93fe-15565aae5304') {} failed - stopping task Thread-2061273::DEBUG::2013-04-18 15:22:10,591::task::1177::TaskManager.Task::(stop) Task=`77d5f0f2-6223-4303-81d8-32f20cd52acb`::stopping in state preparing (force False) Thread-2061273::DEBUG::2013-04-18 15:22:10,591::task::957::TaskManager.Task::(_decref) Task=`77d5f0f2-6223-4303-81d8-32f20cd52acb`::ref 1 aborting True Thread-2061273::INFO::2013-04-18 15:22:10,591::task::1134::TaskManager.Task::(prepare) Task=`77d5f0f2-6223-4303-81d8-32f20cd52acb`::aborting: Task is aborted: 'Logical volume does not exist' - code 610 Thread-2061273::DEBUG::2013-04-18 15:22:10,591::task::1139::TaskManager.Task::(prepare) Task=`77d5f0f2-6223-4303-81d8-32f20cd52acb`::Prepare: aborted: Logical volume does not exist Thread-2061273::DEBUG::2013-04-18 15:22:10,592::task::957::TaskManager.Task::(_decref) Task=`77d5f0f2-6223-4303-81d8-32f20cd52acb`::ref 0 aborting True Thread-2061273::DEBUG::2013-04-18 15:22:10,592::task::892::TaskManager.Task::(_doAbort) Task=`77d5f0f2-6223-4303-81d8-32f20cd52acb`::Task._doAbort: force False Thread-2061273::DEBUG::2013-04-18 15:22:10,592::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-2061273::DEBUG::2013-04-18 15:22:10,592::task::568::TaskManager.Task::(_updateState) Task=`77d5f0f2-6223-4303-81d8-32f20cd52acb`::moving from state preparing -> state aborting Thread-2061273::DEBUG::2013-04-18 15:22:10,592::task::523::TaskManager.Task::(__state_aborting) Task=`77d5f0f2-6223-4303-81d8-32f20cd52acb`::_aborting: recover policy none Thread-2061273::DEBUG::2013-04-18 15:22:10,592::task::568::TaskManager.Task::(_updateState) Task=`77d5f0f2-6223-4303-81d8-32f20cd52acb`::moving from state aborting -> state failed Thread-2061273::DEBUG::2013-04-18 15:22:10,592::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-2061273::DEBUG::2013-04-18 15:22:10,593::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-2061273::ERROR::2013-04-18 15:22:10,593::dispatcher::67::Storage.Dispatcher.Protect::(run) {'status': 
{'message': "Logical volume does not exist: ('14b5167c-5883-4920-8236-e8905456b01f/d477fcba-2110-403e-93fe-15565aae5304',)", 'code': 610}} Thread-2061273::DEBUG::2013-04-18 15:22:10,593::vm::692::vm.Vm::(_startUnderlyingVm) vmId=`c0a43bef-7c9d-4170-bd9c-63497e61d3fc`::_ongoingCreations released Thread-2061273::ERROR::2013-04-18 15:22:10,593::vm::716::vm.Vm::(_startUnderlyingVm) vmId=`c0a43bef-7c9d-4170-bd9c-63497e61d3fc`::The vm start process failed Traceback (most recent call last): File "/usr/share/vdsm/vm.py", line 678, in _startUnderlyingVm self._run() File "/usr/share/vdsm/libvirtvm.py", line 1467, in _run devices = self.buildConfDevices() File "/usr/share/vdsm/vm.py", line 515, in buildConfDevices self._normalizeVdsmImg(drv) File "/usr/share/vdsm/vm.py", line 408, in _normalizeVdsmImg drv['truesize'] = res['truesize'] KeyError: 'truesize' Thread-2061273::DEBUG::2013-04-18 15:22:10,609::vm::1065::vm.Vm::(setDownStatus) vmId=`c0a43bef-7c9d-4170-bd9c-63497e61d3fc`::Changed state to Down: 'truesize' Thread-24::DEBUG::2013-04-18 15:22:10,705::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/dd iflag=direct if=/dev/8573d237-f86f-4b27-be80-479281a53645/metadata bs=4096 count=1' (cwd None) Thread-24::DEBUG::2013-04-18 15:22:10,753::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.000281474 s, 14.6 MB/s\n'; <rc> = 0 Dummy-1143::DEBUG::2013-04-18 15:22:12,125::misc::84::Storage.Misc.excCmd::(<lambda>) 'dd if=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None) Dummy-1143::DEBUG::2013-04-18 15:22:12,227::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.0310387 s, 33.0 MB/s\n'; <rc> = 0 Thread-2061278::DEBUG::2013-04-18 15:22:12,267::BindingXMLRPC::161::vds::(wrapper) [10.4.4.60] Thread-2061278::DEBUG::2013-04-18 15:22:12,268::task::568::TaskManager.Task::(_updateState) Task=`1c0d6170-1e3c-440f-8333-234c357503e5`::moving from state init -> state preparing Thread-2061278::INFO::2013-04-18 15:22:12,268::logUtils::41::dispatcher::(wrapper) Run and protect: getSpmStatus(spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', options=None) Thread-2061278::INFO::2013-04-18 15:22:12,269::logUtils::44::dispatcher::(wrapper) Run and protect: getSpmStatus, Return response: {'spm_st': {'spmId': 2, 'spmStatus': 'SPM', 'spmLver': 1}} Thread-2061278::DEBUG::2013-04-18 15:22:12,269::task::1151::TaskManager.Task::(prepare) Task=`1c0d6170-1e3c-440f-8333-234c357503e5`::finished: {'spm_st': {'spmId': 2, 'spmStatus': 'SPM', 'spmLver': 1}} Thread-2061278::DEBUG::2013-04-18 15:22:12,269::task::568::TaskManager.Task::(_updateState) Task=`1c0d6170-1e3c-440f-8333-234c357503e5`::moving from state preparing -> state finished Thread-2061278::DEBUG::2013-04-18 15:22:12,269::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-2061278::DEBUG::2013-04-18 15:22:12,269::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-2061278::DEBUG::2013-04-18 15:22:12,269::task::957::TaskManager.Task::(_decref) Task=`1c0d6170-1e3c-440f-8333-234c357503e5`::ref 0 aborting False Thread-2061279::DEBUG::2013-04-18 15:22:12,274::BindingXMLRPC::161::vds::(wrapper) [10.4.4.60] Thread-2061279::DEBUG::2013-04-18 15:22:12,274::task::568::TaskManager.Task::(_updateState) Task=`76c9085f-fc92-4d83-9699-00cc33ec329a`::moving from state init -> 
state preparing Thread-2061279::INFO::2013-04-18 15:22:12,275::logUtils::41::dispatcher::(wrapper) Run and protect: getStoragePoolInfo(spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', options=None) Thread-2061279::DEBUG::2013-04-18 15:22:12,275::resourceManager::190::ResourceManager.Request::(__init__) ResName=`Storage.5849b030-626e-47cb-ad90-3ce782d831b3`ReqID=`5bb8a6a7-7f7c-4d9d-a6d5-5a8c85a02eaf`::Request was made in '/usr/share/vdsm/storage/resourceManager.py' line '189' at '__init__' Thread-2061279::DEBUG::2013-04-18 15:22:12,275::resourceManager::504::ResourceManager::(registerResource) Trying to register resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' for lock type 'shared' Thread-2061279::DEBUG::2013-04-18 15:22:12,276::resourceManager::547::ResourceManager::(registerResource) Resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' is free. Now locking as 'shared' (1 active user) Thread-2061279::DEBUG::2013-04-18 15:22:12,276::resourceManager::227::ResourceManager.Request::(grant) ResName=`Storage.5849b030-626e-47cb-ad90-3ce782d831b3`ReqID=`5bb8a6a7-7f7c-4d9d-a6d5-5a8c85a02eaf`::Granted request Thread-2061279::DEBUG::2013-04-18 15:22:12,276::task::794::TaskManager.Task::(resourceAcquired) Task=`76c9085f-fc92-4d83-9699-00cc33ec329a`::_resourcesAcquired: Storage.5849b030-626e-47cb-ad90-3ce782d831b3 (shared) Thread-2061279::DEBUG::2013-04-18 15:22:12,277::task::957::TaskManager.Task::(_decref) Task=`76c9085f-fc92-4d83-9699-00cc33ec329a`::ref 1 aborting False Thread-2061279::INFO::2013-04-18 15:22:12,279::logUtils::44::dispatcher::(wrapper) Run and protect: getStoragePoolInfo, Return response: {'info': {'spm_id': 2, 'master_uuid': '2aff7dc6-e25b-433b-9681-5541a29bb07c', 'name': 'Default', 'version': '3', 'domains': u'be882de7-6c79-413b-85aa-f0b8b77cb59e:Active,45270bc7-5244-4822-83aa-a9f2fb516e01:Attached,3fb66ba1-cfcb-4341-8960-46f0e8cf6e83:Active,8573d237-f86f-4b27-be80-479281a53645:Active,596a3408-67d7-4b26-b482-e3a7554a5897:Active,e3251723-08e1-4b4b-bde4-c10d6372074b:Active,2aff7dc6-e25b-433b-9681-5541a29bb07c:Active,14b5167c-5883-4920-8236-e8905456b01f:Active', 'pool_status': 'connected', 'isoprefix': u'/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/e3251723-08e1-4b4b-bde4-c10d6372074b/images/11111111-1111-1111-1111-111111111111', 'type': 'FCP', 'master_ver': 2, 'lver': 1}, 'dominfo': {u'be882de7-6c79-413b-85aa-f0b8b77cb59e': {'status': u'Active', 'diskfree': '102810779648', 'alerts': [], 'disktotal': '106971529216'}, u'45270bc7-5244-4822-83aa-a9f2fb516e01': {'status': u'Attached'}, u'3fb66ba1-cfcb-4341-8960-46f0e8cf6e83': {'status': u'Active', 'diskfree': '69122129920', 'alerts': [], 'disktotal': '160658620416'}, u'8573d237-f86f-4b27-be80-479281a53645': {'status': u'Active', 'diskfree': '49123688448', 'alerts': [], 'disktotal': '53284438016'}, u'596a3408-67d7-4b26-b482-e3a7554a5897': {'status': u'Active', 'diskfree': '62008590336', 'alerts': [], 'disktotal': '106971529216'}, u'e3251723-08e1-4b4b-bde4-c10d6372074b': {'status': u'Active', 'diskfree': '10315366400', 'alerts': [], 'disktotal': '38501613568'}, u'2aff7dc6-e25b-433b-9681-5541a29bb07c': {'status': u'Active', 'diskfree': '23353884672', 'alerts': [], 'disktotal': '106971529216'}, u'14b5167c-5883-4920-8236-e8905456b01f': {'status': u'Active', 'diskfree': '62545461248', 'alerts': [], 'disktotal': '106971529216'}}} Thread-2061279::DEBUG::2013-04-18 15:22:12,279::task::1151::TaskManager.Task::(prepare) Task=`76c9085f-fc92-4d83-9699-00cc33ec329a`::finished: {'info': {'spm_id': 2, 'master_uuid': 
'2aff7dc6-e25b-433b-9681-5541a29bb07c', 'name': 'Default', 'version': '3', 'domains': u'be882de7-6c79-413b-85aa-f0b8b77cb59e:Active,45270bc7-5244-4822-83aa-a9f2fb516e01:Attached,3fb66ba1-cfcb-4341-8960-46f0e8cf6e83:Active,8573d237-f86f-4b27-be80-479281a53645:Active,596a3408-67d7-4b26-b482-e3a7554a5897:Active,e3251723-08e1-4b4b-bde4-c10d6372074b:Active,2aff7dc6-e25b-433b-9681-5541a29bb07c:Active,14b5167c-5883-4920-8236-e8905456b01f:Active', 'pool_status': 'connected', 'isoprefix': u'/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/e3251723-08e1-4b4b-bde4-c10d6372074b/images/11111111-1111-1111-1111-111111111111', 'type': 'FCP', 'master_ver': 2, 'lver': 1}, 'dominfo': {u'be882de7-6c79-413b-85aa-f0b8b77cb59e': {'status': u'Active', 'diskfree': '102810779648', 'alerts': [], 'disktotal': '106971529216'}, u'45270bc7-5244-4822-83aa-a9f2fb516e01': {'status': u'Attached'}, u'3fb66ba1-cfcb-4341-8960-46f0e8cf6e83': {'status': u'Active', 'diskfree': '69122129920', 'alerts': [], 'disktotal': '160658620416'}, u'8573d237-f86f-4b27-be80-479281a53645': {'status': u'Active', 'diskfree': '49123688448', 'alerts': [], 'disktotal': '53284438016'}, u'596a3408-67d7-4b26-b482-e3a7554a5897': {'status': u'Active', 'diskfree': '62008590336', 'alerts': [], 'disktotal': '106971529216'}, u'e3251723-08e1-4b4b-bde4-c10d6372074b': {'status': u'Active', 'diskfree': '10315366400', 'alerts': [], 'disktotal': '38501613568'}, u'2aff7dc6-e25b-433b-9681-5541a29bb07c': {'status': u'Active', 'diskfree': '23353884672', 'alerts': [], 'disktotal': '106971529216'}, u'14b5167c-5883-4920-8236-e8905456b01f': {'status': u'Active', 'diskfree': '62545461248', 'alerts': [], 'disktotal': '106971529216'}}} Thread-2061279::DEBUG::2013-04-18 15:22:12,279::task::568::TaskManager.Task::(_updateState) Task=`76c9085f-fc92-4d83-9699-00cc33ec329a`::moving from state preparing -> state finished Thread-2061279::DEBUG::2013-04-18 15:22:12,279::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.5849b030-626e-47cb-ad90-3ce782d831b3': < ResourceRef 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3', isValid: 'True' obj: 'None'>} Thread-2061279::DEBUG::2013-04-18 15:22:12,279::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-2061279::DEBUG::2013-04-18 15:22:12,280::resourceManager::557::ResourceManager::(releaseResource) Trying to release resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' Thread-2061279::DEBUG::2013-04-18 15:22:12,280::resourceManager::573::ResourceManager::(releaseResource) Released resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' (0 active users) Thread-2061279::DEBUG::2013-04-18 15:22:12,280::resourceManager::578::ResourceManager::(releaseResource) Resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' is free, finding out if anyone is waiting for it. Thread-2061279::DEBUG::2013-04-18 15:22:12,280::resourceManager::585::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3', Clearing records. 
Thread-2061279::DEBUG::2013-04-18 15:22:12,280::task::957::TaskManager.Task::(_decref) Task=`76c9085f-fc92-4d83-9699-00cc33ec329a`::ref 0 aborting False Thread-25::DEBUG::2013-04-18 15:22:12,450::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/dd iflag=direct if=/dev/596a3408-67d7-4b26-b482-e3a7554a5897/metadata bs=4096 count=1' (cwd None) The probem is that the lv name for the not found lv (d477fcba-2110-403e-93fe-15565aae5304) is the one part of the old source VG (013bcc40-5f3d-4394-bd3b-971b14852654): [root@f18ovn01 vdsm]# lvs | grep d477fcba-2110-403e-93fe-15565aae5304 d477fcba-2110-403e-93fe-15565aae5304 013bcc40-5f3d-4394-bd3b-971b14852654 -wi-a---- 1.00g While the storage domain where it thinks it is now attached (DS6800_Z1_1181) is the target one (14b5167c-5883-4920-8236-e8905456b01f) And of course they don't match..... On the system the tarhet storage domain VG is : [root@f18ovn01 vdsm]# vgs | grep 14b5167c-5883-4920-8236-e8905456b01f 14b5167c-5883-4920-8236-e8905456b01f 1 11 0 wz--n- 99.62g 58.25g [root@f18ovn01 vdsm]# vgdisplay 14b5167c-5883-4920-8236-e8905456b01f --- Volume group --- VG Name 14b5167c-5883-4920-8236-e8905456b01f System ID Format lvm2 Metadata Areas 2 Metadata Sequence No 104 VG Access read/write VG Status resizable MAX LV 0 Cur LV 11 Open LV 3 Max PV 0 Cur PV 1 Act PV 1 VG Size 99.62 GiB PE Size 128.00 MiB Total PE 797 Alloc PE / Size 331 / 41.38 GiB Free PE / Size 466 / 58.25 GiB VG UUID cVcTG6-zTUP-2p1P-UKkU-tqMc-QygG-KazICT [root@f18ovn01 vdsm]# vgdisplay -v 14b5167c-5883-4920-8236-e8905456b01f Using volume group(s) on command line Finding volume group "14b5167c-5883-4920-8236-e8905456b01f" --- Volume group --- VG Name 14b5167c-5883-4920-8236-e8905456b01f System ID Format lvm2 Metadata Areas 2 Metadata Sequence No 104 VG Access read/write VG Status resizable MAX LV 0 Cur LV 11 Open LV 3 Max PV 0 Cur PV 1 Act PV 1 VG Size 99.62 GiB PE Size 128.00 MiB Total PE 797 Alloc PE / Size 331 / 41.38 GiB Free PE / Size 466 / 58.25 GiB VG UUID cVcTG6-zTUP-2p1P-UKkU-tqMc-QygG-KazICT --- Logical volume --- LV Path /dev/14b5167c-5883-4920-8236-e8905456b01f/metadata LV Name metadata VG Name 14b5167c-5883-4920-8236-e8905456b01f LV UUID hESVgu-5DFu-oxuQ-nfU3-5Qf2-9X3p-CD4PdN LV Write Access read/write LV Creation host, time f18ovn03.ceda.polimi.it, 2013-01-22 17:49:00 +0100 LV Status available # open 0 LV Size 512.00 MiB Current LE 4 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:35 --- Logical volume --- LV Path /dev/14b5167c-5883-4920-8236-e8905456b01f/leases LV Name leases VG Name 14b5167c-5883-4920-8236-e8905456b01f LV UUID za4ZdL-YeRB-ZrZt-15P3-QVmy-K4wt-i03o3a LV Write Access read/write LV Creation host, time f18ovn03.ceda.polimi.it, 2013-01-22 17:49:00 +0100 LV Status available # open 0 LV Size 2.00 GiB Current LE 16 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:36 --- Logical volume --- LV Path /dev/14b5167c-5883-4920-8236-e8905456b01f/ids LV Name ids VG Name 14b5167c-5883-4920-8236-e8905456b01f LV UUID jAJCMi-Oclm-azvj-q8la-BYxP-OoU3-ZE26XY LV Write Access read/write LV Creation host, time f18ovn03.ceda.polimi.it, 2013-01-22 17:49:00 +0100 LV Status available # open 1 LV Size 128.00 MiB Current LE 1 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:37 --- Logical volume --- LV Path /dev/14b5167c-5883-4920-8236-e8905456b01f/inbox LV Name inbox VG Name 14b5167c-5883-4920-8236-e8905456b01f LV UUID 
F99GOU-v7CU-QvY1-PA1V-wrkS-oqoB-0MG9Hx LV Write Access read/write LV Creation host, time f18ovn03.ceda.polimi.it, 2013-01-22 17:49:00 +0100 LV Status available # open 0 LV Size 128.00 MiB Current LE 1 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:38 --- Logical volume --- LV Path /dev/14b5167c-5883-4920-8236-e8905456b01f/outbox LV Name outbox VG Name 14b5167c-5883-4920-8236-e8905456b01f LV UUID ACcVyn-DE49-bLgn-va1g-vKne-14ts-JHGlCa LV Write Access read/write LV Creation host, time f18ovn03.ceda.polimi.it, 2013-01-22 17:49:00 +0100 LV Status available # open 0 LV Size 128.00 MiB Current LE 1 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:39 --- Logical volume --- LV Path /dev/14b5167c-5883-4920-8236-e8905456b01f/master LV Name master VG Name 14b5167c-5883-4920-8236-e8905456b01f LV UUID kP39nT-TUbv-Omc1-UbNY-tUec-13J9-AX9Sh0 LV Write Access read/write LV Creation host, time f18ovn03.ceda.polimi.it, 2013-01-22 17:49:01 +0100 LV Status available # open 0 LV Size 1.00 GiB Current LE 8 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:40 --- Logical volume --- LV Path /dev/14b5167c-5883-4920-8236-e8905456b01f/65bfb5bb-2ff7-4920-ab85-92a7050ca88d LV Name 65bfb5bb-2ff7-4920-ab85-92a7050ca88d VG Name 14b5167c-5883-4920-8236-e8905456b01f LV UUID 1l3nb8-U5PZ-nNYL-kJdE-pTcS-eltL-WN8R1m LV Write Access read/write LV Creation host, time f18ovn03.ceda.polimi.it, 2013-02-28 17:08:28 +0100 LV Status available # open 1 LV Size 22.12 GiB Current LE 177 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:41 --- Logical volume --- LV Path /dev/14b5167c-5883-4920-8236-e8905456b01f/99a7a960-ccf8-4bd7-b375-bbba736e3e91 LV Name 99a7a960-ccf8-4bd7-b375-bbba736e3e91 VG Name 14b5167c-5883-4920-8236-e8905456b01f LV UUID sh4ZCn-dnJn-qDQA-fVK6-grIT-zg65-r6GgV6 LV Write Access read/write LV Creation host, time f18ovn03.ceda.polimi.it, 2013-03-04 15:02:04 +0100 LV Status available # open 1 LV Size 4.00 GiB Current LE 32 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:42 --- Logical volume --- LV Path /dev/14b5167c-5883-4920-8236-e8905456b01f/6fe3e8fb-cb72-4824-b07b-6f7f41406bdf LV Name 6fe3e8fb-cb72-4824-b07b-6f7f41406bdf VG Name 14b5167c-5883-4920-8236-e8905456b01f LV UUID y1m7RS-xeEV-er3w-VVV5-mehd-m1b5-NMhe1u LV Write Access read/write LV Creation host, time f18ovn01.ceda.polimi.it, 2013-04-16 17:20:44 +0200 LV Status NOT available LV Size 2.12 GiB Current LE 17 Segments 1 Allocation inherit Read ahead sectors auto --- Logical volume --- LV Path /dev/14b5167c-5883-4920-8236-e8905456b01f/57fdf720-c7e0-4829-932d-bc0a4a549919 LV Name 57fdf720-c7e0-4829-932d-bc0a4a549919 VG Name 14b5167c-5883-4920-8236-e8905456b01f LV UUID kW4cBQ-r9kx-O8Nm-UISc-REOM-2r33-eq7wk7 LV Write Access read/write LV Creation host, time f18ovn01.ceda.polimi.it, 2013-04-16 17:22:50 +0200 LV Status NOT available LV Size 7.12 GiB Current LE 57 Segments 1 Allocation inherit Read ahead sectors auto --- Logical volume --- LV Path /dev/14b5167c-5883-4920-8236-e8905456b01f/25b489a4-454b-42bb-852d-7869a552198e LV Name 25b489a4-454b-42bb-852d-7869a552198e VG Name 14b5167c-5883-4920-8236-e8905456b01f LV UUID eKzQzn-lqu7-leq1-FDOR-IGk5-zWkI-3b2a9a LV Write Access read/write LV Creation host, time f18ovn01.ceda.polimi.it, 2013-04-16 17:25:28 +0200 LV Status NOT available LV Size 2.12 GiB Current LE 17 Segments 2 Allocation inherit Read 
ahead sectors auto --- Physical volumes --- PV Name /dev/mapper/3600507630efe0b0c0000000000001181 PV UUID rmLGSC-sP60-SrIn-hqIe-mK3i-8F26-brR8AJ PV Status allocatable Total PE / Free PE 797 / 466
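To make the mismatch above explicit, one can compare what LVM reports on the host with what the engine database believes. The lvs call mirrors the commands already used in this thread; the psql query is only a sketch under the assumption that the oVirt 3.2 schema keeps the image-to-domain mapping in image_storage_domain_map (with image_id / storage_domain_id columns) and that the database is named 'engine' — verify with \d image_storage_domain_map before relying on it.

# host side: which VG really owns the volume the engine is looking for?
lvs --noheadings -o vg_name,lv_name | grep d477fcba-2110-403e-93fe-15565aae5304

# engine side (read-only): which storage domain is the image group mapped to?
# table/column names are an assumption for oVirt 3.2 - check the schema first
su - postgres -c "psql engine -c \"
  SELECT i.image_guid, m.storage_domain_id
  FROM images i
  JOIN image_storage_domain_map m ON m.image_id = i.image_guid
  WHERE i.image_group_id = '01488698-6420-4a32-9095-cfed1ff8f4bf';\""

If the two answers disagree, as the lvs output above suggests, the engine metadata and the storage are out of sync, which would match the 'truesize' failure at VM start.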

Can I try to directly modify the db, telling it that the disk / LV is part of vg 14b5167c-5883-4920-8236-e8905456b01f? What should I choose...?

From the email above, 'lvs' shows that "d477fcba-2110-403e-93fe-15565aae5304" was not physically migrated to the target storage domain "14b5167c-5883-4920-8236-e8905456b01f". It means the migration failed. I think you may modify the disk back to the old storage domain in the db to clear the side-effect of the migration failure.

2013-4-19 0:20, Gianluca Cecchi:
Can I try to directly modify the db, telling it that the disk / LV is part of vg 14b5167c-5883-4920-8236-e8905456b01f? What should I choose...?
--
---
舒明 Shu Ming
Open Virtualization Engineering; CSTL, IBM Corp.
Tel: 86-10-82451626  Tieline: 9051626
E-mail: shuming@cn.ibm.com or shuming@linux.vnet.ibm.com
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC
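If one did want to follow the suggestion above and point the disk back at the old storage domain in the database, it would presumably come down to an UPDATE on that same mapping row. This is a hedged sketch only, not a supported procedure: stop the engine, back up the database first, and confirm that image_storage_domain_map / images are really where your 3.2 schema keeps this mapping (the table, column, service and database names here are assumptions to verify). The UUIDs are the source domain and image group from this thread.

systemctl stop ovirt-engine
su - postgres -c "pg_dump engine > /tmp/engine-before-fix.sql"   # backup before touching anything

# sketch only: map the image group back to the source domain 013bcc40-...
su - postgres -c "psql engine -c \"
  UPDATE image_storage_domain_map
  SET storage_domain_id = '013bcc40-5f3d-4394-bd3b-971b14852654'
  WHERE image_id IN (SELECT image_guid FROM images
                     WHERE image_group_id = '01488698-6420-4a32-9095-cfed1ff8f4bf');\""

systemctl start ovirt-engine

Whether this alone is enough depends on whether any of the auto-generated snapshot volumes were already created on the target VG; checking lvs on both VGs before and after seems prudent.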

On 04/18/2013 04:05 PM, Gianluca Cecchi wrote:
On Thu, Apr 18, 2013 at 2:35 PM, Yeela Kaplan wrote:
You have used live storage migration. In order to understand better what happened in your environment I need the following details: 1) What is your vdsm and libvirt version?

[root@f18ovn01 ~]# rpm -q vdsm libvirt
vdsm-4.10.3-10.fc18.x86_64
libvirt-0.10.2.3-1.fc18.x86_64
I think you need a fedora libvirt with this patch series. https://www.redhat.com/archives/libvir-list/2013-February/msg00340.html
You probably need to upgrade libvirt, there is a known bug there.
Can you detail which versions are impacted and the first version where the bug is resolved? Or a bugzilla number?
2) attach the output of: run on the vdsm host: 'tree '/rhev/data-center/' https://docs.google.com/file/d/0BwoPbcrMv8mvZld5Q0I1T0pEbXM/edit?usp=sharing
3) attach the output of: run 'lvs' on the vdsm host https://docs.google.com/file/d/0BwoPbcrMv8mvRmx0R3lEYlY5WG8/edit?usp=sharing
4) Did you check what is on the disk of the vm (the target disk of the replication)? because the replication failed so it should probably be empty.
The VM was running while the disk migration was in progress and didn't have any problem. In the GUI it appears that the VM has the new LUN as its backing storage.
5) Does your disk have any snapshots? Yes. And one more was generated with description: Auto-generated for Live Storage Migration
VM has only one disk. Overall snapshots now:

Date                 Status  Description
Current              Ok      Active VM snapshot
2013-Apr-16, 17:13   Ok      Auto-generated for Live Storage Migration
2013-Mar-22, 08:29   Ok      following
2013-Feb-26, 14:45   Ok      test
Gianluca
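Before retrying live storage migration after the libvirt upgrade suggested above, it seems sensible to confirm what is actually installed and running on the host; a minimal check, with package names as on Fedora 18 (adjust as needed):

rpm -q vdsm libvirt libvirt-daemon-kvm qemu-kvm
virsh --version      # client library version
virsh -r version     # library, daemon and hypervisor versions actually in use

The original failure ("unsupported flags (0xb) in function qemuDomainBlockRebase") came from libvirt rejecting the shallow block-rebase flags vdsm passes in diskReplicateStart, so the key point is that the running libvirtd includes the patch series Itamar references.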
participants (4)
-
Gianluca Cecchi
-
Itamar Heim
-
Shu Ming
-
Yeela Kaplan