[Users] Disk move when VM is running problem

Gianluca Cecchi gianluca.cecchi at gmail.com
Tue Apr 16 15:43:40 UTC 2013


Hello,
oVirt 3.2.1 on f18 host

I have a running VM with one disk.
Storage domain is FC and disk is thin provisioned.
VM has preexisting snapshots.

I select to move the disk.
At the top of the dialog I notice this message in red:

Note: Moving the disk(s) while the VM is running

Dunno if it is just general advice...

I proceed and get:

Snapshot Auto-generated for Live Storage Migration creation for VM
zensrv was initiated by admin at internal.

Snapshot Auto-generated for Live Storage Migration creation for VM
zensrv has been completed.

User admin at internal moving disk zensrv_Disk1 to domain DS6800_Z1_1181.


During the move, iotop on the host shows:
Total DISK READ:   73398.57 K/s | Total DISK WRITE:   79927.24 K/s
  PID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
29513 idle vdsm     72400.02 K/s 72535.35 K/s  0.00 % 87.65 % dd
if=/rhev/data-cent~nt=10240 oflag=direct
 2457 be/4 sanlock   336.35 K/s    0.33 K/s  0.00 %  0.12 % sanlock
daemon -U sanlock -G sanlock
 4173 be/3 vdsm        0.33 K/s    0.00 K/s  0.00 %  0.00 % python
/usr/share/vdsm~eFileHandler.pyc 43 40
 8760 be/4 qemu        0.00 K/s 7574.52 K/s  0.00 %  0.00 % qemu-kvm
-name F18 -S ~on0,bus=pci.0,addr=0x8
 2830 be/4 root        0.00 K/s   13.14 K/s  0.00 %  0.00 % libvirtd --listen
27445 be/4 qemu        0.00 K/s   44.67 K/s  0.00 %  0.00 % qemu-kvm
-name zensrv ~on0,bus=pci.0,addr=0x6
 3141 be/3 vdsm        0.00 K/s    3.94 K/s  0.00 %  0.00 % python
/usr/share/vdsm/vdsm


vdsm     29513  3141 14 17:14 ?        00:00:17 /usr/bin/dd
if=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/013bcc40-5f3d-4394-bd3b-971b14852654/images/01488698-6420-4a32-9095-cfed1ff8f4bf/2be37d02-b44f-4823-bf26-054d1a1f0c90
of=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/14b5167c-5883-4920-8236-e8905456b01f/images/01488698-6420-4a32-9095-cfed1ff8f4bf/2be37d02-b44f-4823-bf26-054d1a1f0c90
bs=1048576 seek=0 skip=0 conv=notrunc count=10240 oflag=direct

After some minutes I get:
User admin at internal have failed to move disk zensrv_Disk1 to domain
DS6800_Z1_1181.

But the VM is actually still running (I had an ssh terminal open on it),
and its disk appears to be on the target domain, as if the move
operation had actually completed OK...
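
For what it's worth, this is roughly how I checked which volume paths the
running VM is using now (a quick read-only libvirt query from the host; the
regex is just my own quick hack):

# Quick check (my own sketch) of which disk paths the running VM uses now.
import libvirt, re

conn = libvirt.openReadOnly('qemu:///system')
dom = conn.lookupByName('zensrv')
for m in re.finditer(r"<source (?:file|dev)='([^']+)'", dom.XMLDesc(0)):
    # paths under .../14b5167c-5883-4920-8236-e8905456b01f/... would mean
    # the VM is reading/writing on the target domain
    print(m.group(1))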

What should I do? Can I safely shut down and restart the VM?

In engine.log around the time of the error I see the following, which
makes me suspect the problem is perhaps in deallocating the old LVs?

2013-04-16 17:17:45,039 WARN
[org.ovirt.engine.core.bll.GetConfigurationValueQuery]
(ajp--127.0.0.1-8702-2) calling GetConfigurationValueQuery
(VdcVersion) with null version, using default general for version
2013-04-16 17:18:02,979 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(pool-3-thread-50) [4e37d247] Failed in DeleteImageGroupVDS method
2013-04-16 17:18:02,980 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(pool-3-thread-50) [4e37d247] Error code CannotRemoveLogicalVolume and
error message IRSGenericException: IRSErrorException: Failed to
DeleteImageGroupVDS, error = Cannot remove Logical Volume:
('013bcc40-5f3d-4394-bd3b-971b14852654',
"{'d477fcba-2110-403e-93fe-15565aae5304':
ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',),
parent='0a6c7300-011a-46f9-9a5d-d22476a7f4c6'),
'f8eb4d4c-9aae-44b8-9123-73f3182dc4dc':
ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',),
parent='2be37d02-b44f-4823-bf26-054d1a1f0c90'),
'2be37d02-b44f-4823-bf26-054d1a1f0c90':
ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',),
parent='00000000-0000-0000-0000-000000000000'),
'0a6c7300-011a-46f9-9a5d-d22476a7f4c6':
ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',),
parent='f8eb4d4c-9aae-44b8-9123-73f3182dc4dc')}")
2013-04-16 17:18:03,029 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(pool-3-thread-50) [4e37d247] IrsBroker::Failed::DeleteImageGroupVDS
due to: IRSErrorException: IRSGenericException: IRSErrorException:
Failed to DeleteImageGroupVDS, error = Cannot remove Logical Volume:
('013bcc40-5f3d-4394-bd3b-971b14852654',
"{'d477fcba-2110-403e-93fe-15565aae5304':
ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',),
parent='0a6c7300-011a-46f9-9a5d-d22476a7f4c6'),
'f8eb4d4c-9aae-44b8-9123-73f3182dc4dc':
ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',),
parent='2be37d02-b44f-4823-bf26-054d1a1f0c90'),
'2be37d02-b44f-4823-bf26-054d1a1f0c90':
ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',),
parent='00000000-0000-0000-0000-000000000000'),
'0a6c7300-011a-46f9-9a5d-d22476a7f4c6':
ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',),
parent='f8eb4d4c-9aae-44b8-9123-73f3182dc4dc')}")
2013-04-16 17:18:03,067 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
(pool-3-thread-50) [4e37d247] FINISH, DeleteImageGroupVDSCommand, log
id: 54616b28
2013-04-16 17:18:03,067 ERROR
[org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand]
(pool-3-thread-50) [4e37d247] Command
org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand throw Vdc Bll
exception. With error message VdcBLLException:
org.ovirt.engine.core.vdsbroker.irsbroker.IRSErrorException:
IRSGenericException: IRSErrorException: Failed to DeleteImageGroupVDS,
error = Cannot remove Logical Volume:
('013bcc40-5f3d-4394-bd3b-971b14852654',
"{'d477fcba-2110-403e-93fe-15565aae5304':
ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',),
parent='0a6c7300-011a-46f9-9a5d-d22476a7f4c6'),
'f8eb4d4c-9aae-44b8-9123-73f3182dc4dc':
ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',),
parent='2be37d02-b44f-4823-bf26-054d1a1f0c90'),
'2be37d02-b44f-4823-bf26-054d1a1f0c90':
ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',),
parent='00000000-0000-0000-0000-000000000000'),
'0a6c7300-011a-46f9-9a5d-d22476a7f4c6':
ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',),
parent='f8eb4d4c-9aae-44b8-9123-73f3182dc4dc')}")
2013-04-16 17:18:03,069 ERROR
[org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand]
(pool-3-thread-50) [4e37d247] Reverting task unknown, handler:
org.ovirt.engine.core.bll.lsm.VmReplicateDiskStartTaskHandler
2013-04-16 17:18:03,088 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand]
(pool-3-thread-50) [4e37d247] START,
VmReplicateDiskFinishVDSCommand(HostName = f18ovn01, HostId =
0f799290-b29a-49e9-bc1e-85ba5605a535,
vmId=c0a43bef-7c9d-4170-bd9c-63497e61d3fc), log id: 6f8ba7ae
2013-04-16 17:18:03,093 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand]
(pool-3-thread-50) [4e37d247] FINISH, VmReplicateDiskFinishVDSCommand,
log id: 6f8ba7ae
2013-04-16 17:18:03,095 ERROR
[org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand]
(pool-3-thread-50) [4e37d247] Reverting task deleteImage, handler:
org.ovirt.engine.core.bll.lsm.CreateImagePlaceholderTaskHandler
2013-04-16 17:18:03,113 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
(pool-3-thread-50) [4e37d247] START, DeleteImageGroupVDSCommand(
storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3,
ignoreFailoverLimit = false, compatabilityVersion = null,
storageDomainId = 14b5167c-5883-4920-8236-e8905456b01f, imageGroupId =
01488698-6420-4a32-9095-cfed1ff8f4bf, postZeros = false, forceDelete =
false), log id: 515ac5ae
2013-04-16 17:18:03,694 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
(pool-3-thread-50) [4e37d247] FINISH, DeleteImageGroupVDSCommand, log
id: 515ac5ae
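
If it helps, this is how I can check from the host whether the old image's
volumes are still present on the source domain (my own quick sketch,
assuming the VG name equals the source storage domain UUID that appears in
the CannotRemoveLogicalVolume error above):

# List the LVs still present in the source storage domain VG (my own sketch);
# the VG name is assumed to be the source storage domain UUID from the error.
import subprocess

SRC_VG = '013bcc40-5f3d-4394-bd3b-971b14852654'
out = subprocess.check_output(
    ['lvs', '--noheadings', '-o', 'lv_name,lv_size,lv_tags', SRC_VG])
for line in out.splitlines():
    print(line.strip())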

and in vdsm.log:
Thread-1929238::ERROR::2013-04-16
17:14:02,928::libvirtvm::2320::vm.Vm::(diskReplicateStart)
vmId=`c0a43bef-7c9d-4170-bd9c-63497e61d3fc`::Unable to start the
replication for vda to {'domainID':
'14b5167c-5883-4920-8236-e8905456b01f', 'poolID':
'5849b030-626e-47cb-ad90-3ce782d831b3', 'device': 'disk', 'path':
'/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/14b5167c-5883-4920-8236-e8905456b01f/images/01488698-6420-4a32-9095-cfed1ff8f4bf/d477fcba-2110-403e-93fe-15565aae5304',
'volumeID': 'd477fcba-2110-403e-93fe-15565aae5304', 'volumeChain':
[{'path': '/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/14b5167c-5883-4920-8236-e8905456b01f/images/01488698-6420-4a32-9095-cfed1ff8f4bf/2be37d02-b44f-4823-bf26-054d1a1f0c90',
'domainID': '14b5167c-5883-4920-8236-e8905456b01f', 'volumeID':
'2be37d02-b44f-4823-bf26-054d1a1f0c90', 'imageID':
'01488698-6420-4a32-9095-cfed1ff8f4bf'}, {'path':
'/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/14b5167c-5883-4920-8236-e8905456b01f/images/01488698-6420-4a32-9095-cfed1ff8f4bf/f8eb4d4c-9aae-44b8-9123-73f3182dc4dc',
'domainID': '14b5167c-5883-4920-8236-e8905456b01f', 'volumeID':
'f8eb4d4c-9aae-44b8-9123-73f3182dc4dc', 'imageID':
'01488698-6420-4a32-9095-cfed1ff8f4bf'}, {'path':
'/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/14b5167c-5883-4920-8236-e8905456b01f/images/01488698-6420-4a32-9095-cfed1ff8f4bf/0a6c7300-011a-46f9-9a5d-d22476a7f4c6',
'domainID': '14b5167c-5883-4920-8236-e8905456b01f', 'volumeID':
'0a6c7300-011a-46f9-9a5d-d22476a7f4c6', 'imageID':
'01488698-6420-4a32-9095-cfed1ff8f4bf'}, {'path':
'/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/14b5167c-5883-4920-8236-e8905456b01f/images/01488698-6420-4a32-9095-cfed1ff8f4bf/d477fcba-2110-403e-93fe-15565aae5304',
'domainID': '14b5167c-5883-4920-8236-e8905456b01f', 'volumeID':
'd477fcba-2110-403e-93fe-15565aae5304', 'imageID':
'01488698-6420-4a32-9095-cfed1ff8f4bf'}], 'imageID':
'01488698-6420-4a32-9095-cfed1ff8f4bf'}
Traceback (most recent call last):
  File "/usr/share/vdsm/libvirtvm.py", line 2316, in diskReplicateStart
    libvirt.VIR_DOMAIN_BLOCK_REBASE_SHALLOW
  File "/usr/share/vdsm/libvirtvm.py", line 541, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py",
line 111, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 626, in blockRebase
    if ret == -1: raise libvirtError ('virDomainBlockRebase() failed', dom=self)
libvirtError: unsupported flags (0xb) in function qemuDomainBlockRebase
Thread-1929238::DEBUG::2013-04-16
17:14:02,930::task::568::TaskManager.Task::(_updateState)
Task=`27736fbc-7054-4df3-9f26-388601621ea9`::moving from state init ->
state preparing
Thread-1929238::INFO::2013-04-16
17:14:02,930::logUtils::41::dispatcher::(wrapper) Run and protect:
teardownImage(sdUUID='14b5167c-5883-4920-8236-e8905456b01f',
spUUID='5849b030-626e-47cb-ad90-3ce782d831b3',
imgUUID='01488698-6420-4a32-9095-cfed1ff8f4bf', volUUID=None)
Thread-1929238::DEBUG::2013-04-16
17:14:02,931::resourceManager::190::ResourceManager.Request::(__init__)
ResName=`Storage.14b5167c-5883-4920-8236-e8905456b01f`ReqID=`e1aa534c-45b6-471f-9a98-8413d449b480`::Request
was made in '/usr/share/vdsm/storage/resourceManager.py' line '189' at
'__init__'
Thread-1929238::DEBUG::2013-04-16
17:14:02,931::resourceManager::504::ResourceManager::(registerResource)
Trying to register resource
'Storage.14b5167c-5883-4920-8236-e8905456b01f' for lock type 'shared'
Thread-1929238::DEBUG::2013-04-16
17:14:02,931::resourceManager::547::ResourceManager::(registerResource)
Resource 'Storage.14b5167c-5883-4920-8236-e8905456b01f' is free. Now
locking as 'shared' (1 active user)
Thread-1929238::DEBUG::2013-04-16
17:14:02,931::resourceManager::227::ResourceManager.Request::(grant)
ResName=`Storage.14b5167c-5883-4920-8236-e8905456b01f`ReqID=`e1aa534c-45b6-471f-9a98-8413d449b480`::Granted
request
Thread-1929238::DEBUG::2013-04-16
17:14:02,932::task::794::TaskManager.Task::(resourceAcquired)
Task=`27736fbc-7054-4df3-9f26-388601621ea9`::_resourcesAcquired:
Storage.14b5167c-5883-4920-8236-e8905456b01f (shared)
Thread-1929238::DEBUG::2013-04-16
17:14:02,932::task::957::TaskManager.Task::(_decref)
Task=`27736fbc-7054-4df3-9f26-388601621ea9`::ref 1 aborting False
Thread-1929238::DEBUG::2013-04-16
17:14:02,932::lvm::409::OperationMutex::(_reloadlvs) Operation 'lvm
reload operation' got the operation mutex
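
For reference, the failing call seems to be the blockRebase that vdsm issues
in libvirtvm.py diskReplicateStart; below is a minimal sketch of my
understanding of that call (the flag constants are from the libvirt Python
bindings, everything else is my own reconstruction from the logs above),
which would also explain the 0xb flags value in the error:

# Minimal sketch (my own) of the call that fails in diskReplicateStart;
# only the flag constants are real libvirt names, the rest is reconstructed.
import libvirt

flags = (libvirt.VIR_DOMAIN_BLOCK_REBASE_SHALLOW |    # 0x1
         libvirt.VIR_DOMAIN_BLOCK_REBASE_REUSE_EXT |  # 0x2
         libvirt.VIR_DOMAIN_BLOCK_REBASE_COPY)        # 0x8
# total 0xb, matching "unsupported flags (0xb)" in the traceback above

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('zensrv')

# destination volume path on the new storage domain (taken from the
# vdsm.log excerpt above)
dst = ('/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/'
       '14b5167c-5883-4920-8236-e8905456b01f/images/'
       '01488698-6420-4a32-9095-cfed1ff8f4bf/'
       'd477fcba-2110-403e-93fe-15565aae5304')

dom.blockRebase('vda', dst, 0, flags)   # this is where libvirtError is raised

So I wonder whether qemu-kvm/libvirt on this host simply does not support
that flag combination for live block copy, and everything after that in
engine.log is the engine rolling the operation back?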

thanks,

Gianluca


