Cloning VM on NFS Leads to Locked Disks

Hi Everyone,

I'm having some issues with my oVirt 4.1 cluster (fully updated to the latest release as of yesterday). When I clone a VM, the disks of both the original and the clone stay in the locked state, and the only way I can resolve it is to go into the database on the engine and run "update images set imagestatus=1 where imagestatus=2;"

I'm using NFS4 as a datastore and the disks seem to copy fine (file sizes match and everything), but the locking worries me. To clone the VM I just shut the source VM down, right-click on it, and select "Clone".

I've attached the full VDSM log from my last attempt, but here is an excerpt of the lines referencing the two disks (d73206ed-89ba-48a9-82ff-c107c1af60f0 is the original VM's disk and 670a7b20-fecd-45c6-af5c-3c7b98258224 is the clone's):

2017-05-10 11:36:20,120-0300 INFO (jsonrpc/2) [dispatcher] Run and protect: copyImage(sdUUID=u'20423d5e-188c-4e10-9893-588ceb81b354', spUUID=u'00000001-0001-0001-0001-000000000311', vmUUID='', srcImgUUID=u'd73206ed-89ba-48a9-82ff-c107c1af60f0', srcVolUUID=u'bba1813d-9e91-4e37-bed7-40beedda8051', dstImgUUID=u'670a7b20-fecd-45c6-af5c-3c7b98258224', dstVolUUID=u'013d8855-4e49-4984-8266-6a5e9437dff7', description=u'', dstSdUUID=u'20423d5e-188c-4e10-9893-588ceb81b354', volType=8, volFormat=5, preallocate=2, postZero=u'false', force=u'false', discard=False) (logUtils:51)
2017-05-10 11:36:20,152-0300 INFO (jsonrpc/2) [storage.Image] image d73206ed-89ba-48a9-82ff-c107c1af60f0 in domain 20423d5e-188c-4e10-9893-588ceb81b354 has vollist [u'bba1813d-9e91-4e37-bed7-40beedda8051'] (image:319)
2017-05-10 11:36:20,169-0300 INFO (jsonrpc/2) [storage.Image] sdUUID=20423d5e-188c-4e10-9893-588ceb81b354 imgUUID=d73206ed-89ba-48a9-82ff-c107c1af60f0 chain=[<storage.fileVolume.FileVolume object at 0x278e450>] (image:249)
2017-05-10 11:36:20,292-0300 INFO (tasks/0) [storage.Image] sdUUID=20423d5e-188c-4e10-9893-588ceb81b354 imgUUID=d73206ed-89ba-48a9-82ff-c107c1af60f0 chain=[<storage.fileVolume.FileVolume object at 0x302c6d0>] (image:249)
2017-05-10 11:36:20,295-0300 INFO (tasks/0) [storage.Image] sdUUID=20423d5e-188c-4e10-9893-588ceb81b354 vmUUID= srcImgUUID=d73206ed-89ba-48a9-82ff-c107c1af60f0 srcVolUUID=bba1813d-9e91-4e37-bed7-40beedda8051 dstImgUUID=670a7b20-fecd-45c6-af5c-3c7b98258224 dstVolUUID=013d8855-4e49-4984-8266-6a5e9437dff7 dstSdUUID=20423d5e-188c-4e10-9893-588ceb81b354 volType=8 volFormat=RAW preallocate=SPARSE force=False postZero=False discard=False (image:765)
2017-05-10 11:36:20,305-0300 INFO (tasks/0) [storage.Image] copy source 20423d5e-188c-4e10-9893-588ceb81b354:d73206ed-89ba-48a9-82ff-c107c1af60f0:bba1813d-9e91-4e37-bed7-40beedda8051 vol size 41943040 destination 20423d5e-188c-4e10-9893-588ceb81b354:670a7b20-fecd-45c6-af5c-3c7b98258224:013d8855-4e49-4984-8266-6a5e9437dff7 apparentsize 41943040 (image:815)
2017-05-10 11:36:20,306-0300 INFO (tasks/0) [storage.Image] image 670a7b20-fecd-45c6-af5c-3c7b98258224 in domain 20423d5e-188c-4e10-9893-588ceb81b354 has vollist [] (image:319)
2017-05-10 11:36:20,306-0300 INFO (tasks/0) [storage.Image] Create placeholder /rhev/data-center/00000001-0001-0001-0001-000000000311/20423d5e-188c-4e10-9893-588ceb81b354/images/670a7b20-fecd-45c6-af5c-3c7b98258224 for image's volumes (image:149)
2017-05-10 11:36:20,392-0300 INFO (tasks/0) [storage.Volume] Request to create RAW volume /rhev/data-center/00000001-0001-0001-0001-000000000311/20423d5e-188c-4e10-9893-588ceb81b354/images/670a7b20-fecd-45c6-af5c-3c7b98258224/013d8855-4e49-4984-8266-6a5e9437dff7 with size = 20480 sectors (fileVolume:439)
2017-05-10 11:37:58,453-0300 INFO (tasks/0) [storage.VolumeManifest] sdUUID=20423d5e-188c-4e10-9893-588ceb81b354 imgUUID=670a7b20-fecd-45c6-af5c-3c7b98258224 volUUID = 013d8855-4e49-4984-8266-6a5e9437dff7 legality = LEGAL (volume:393)
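For reference, here is the sort of thing I run against the engine database (a manual workaround, not a supported procedure; it assumes the default 'engine' database name, and the column names other than imagestatus are from memory of the 4.1 schema, so check with \d images first). Scoping the update to the two affected image groups seems safer than a blanket unlock, since imagestatus=2 can also cover disks with legitimately running operations:

  # Run on the engine host; imagestatus 2 = locked, 1 = OK.
  # First look at which disks are actually stuck:
  sudo -u postgres psql engine -c \
    "SELECT image_guid, image_group_id, imagestatus FROM images WHERE imagestatus = 2;"

  # Then unlock just the two image groups from this clone attempt:
  sudo -u postgres psql engine -c \
    "UPDATE images SET imagestatus = 1 WHERE imagestatus = 2 AND image_group_id IN
     ('d73206ed-89ba-48a9-82ff-c107c1af60f0', '670a7b20-fecd-45c6-af5c-3c7b98258224');"

(I gather there is also an unlock_entity.sh utility shipped under /usr/share/ovirt-engine/setup/dbutils/ that does this in a more supported way, though I haven't tried it.)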

Hi,

Can you provide the engine log?

Thanks,
Freddy

On Wed, May 10, 2017 at 5:57 PM, Charles Tassell <charles@islandadmin.ca> wrote:
Hi Everyone,

I'm having some issues with my oVirt 4.1 (fully updated to latest release as of yesterday) cluster. When I clone a VM the disks of both the original and the clone stay in the locked state, and the only way I can resolve it is to go into the database on the engine and run "update images set imagestatus=1 where imagestatus=2;" [snip]

Hi Freddy,

Sure, thanks for looking into this. Here you go:

2017-05-10 11:35:30,249-03 INFO [org.ovirt.engine.core.bll.aaa.SessionDataContainer] (DefaultQuartzScheduler8) [1c84abac] Not removing session 'vZyqrcCljPC7hQtcILsk4uDug3QsiinZBOyoGDiQKkYYT2znGyWe4fasrPbjYxdjbfyR3DBnp+UZ9/k20dGsMA==', session has running commands for user 'admin@internal-authz'.
2017-05-10 11:35:30,249-03 INFO [org.ovirt.engine.core.bll.aaa.SessionDataContainer] (DefaultQuartzScheduler8) [1c84abac] Not removing session 'nLzKb6P4Th3A9zUFNNZYuZgWaL/DraYRfQjOvjmXisa9c3SiI+Sv8xvLZ+n2+9ptKPHqaiw0oNCOG6kDWOaszA==', session has running commands for user 'admin@internal-authz'.
2017-05-10 11:35:37,793-03 INFO [org.ovirt.engine.core.bll.storage.disk.UpdateVmDiskCommand] (default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] Lock Acquired to object 'EngineLock:{exclusiveLocks='[6d51dd95-ba25-4ae9-8bee-303f90e76e62=<VM_DISK_BOOT, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='[6d51dd95-ba25-4ae9-8bee-303f90e76e62=<VM, ACTION_TYPE_FAILED_VM_IS_LOCKED>]'}'
2017-05-10 11:35:37,843-03 INFO [org.ovirt.engine.core.bll.storage.disk.UpdateVmDiskCommand] (default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] Running command: UpdateVmDiskCommand internal: false. Entities affected : ID: d73206ed-89ba-48a9-82ff-c107c1af60f0 Type: DiskAction group EDIT_DISK_PROPERTIES with role type USER
2017-05-10 11:35:37,846-03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] (default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] START, SetVolumeDescriptionVDSCommand( SetVolumeDescriptionVDSCommandParameters:{runAsync='true', storagePoolId='00000001-0001-0001-0001-000000000311', ignoreFailoverLimit='false', storageDomainId='20423d5e-188c-4e10-9893-588ceb81b354', imageGroupId='d73206ed-89ba-48a9-82ff-c107c1af60f0', imageId='bba1813d-9e91-4e37-bed7-40beedda8051'}), log id: 31e5d891
2017-05-10 11:35:37,846-03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] (default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] -- executeIrsBrokerCommand: calling 'setVolumeDescription', parameters:
2017-05-10 11:35:37,846-03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] (default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] ++ spUUID=00000001-0001-0001-0001-000000000311
2017-05-10 11:35:37,846-03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] (default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] ++ sdUUID=20423d5e-188c-4e10-9893-588ceb81b354
2017-05-10 11:35:37,846-03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] (default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] ++ imageGroupGUID=d73206ed-89ba-48a9-82ff-c107c1af60f0
2017-05-10 11:35:37,846-03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] (default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] ++ volUUID=bba1813d-9e91-4e37-bed7-40beedda8051
2017-05-10 11:35:37,846-03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] (default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] ++ description={"DiskAlias":"Ubu16_Disk1","DiskDescription":""}
2017-05-10 11:35:37,937-03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] (default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] FINISH, SetVolumeDescriptionVDSCommand, log id: 31e5d891
2017-05-10 11:35:37,981-03 INFO [org.ovirt.engine.core.bll.storage.disk.UpdateVmDiskCommand] (default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] Lock freed to object 'EngineLock:{exclusiveLocks='[6d51dd95-ba25-4ae9-8bee-303f90e76e62=<VM_DISK_BOOT, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='[6d51dd95-ba25-4ae9-8bee-303f90e76e62=<VM, ACTION_TYPE_FAILED_VM_IS_LOCKED>]'}'
2017-05-10 11:35:37,988-03 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] EVENT_ID: USER_UPDATE_VM_DISK(88), Correlation ID: 2c63ae40-b59d-4a80-b348-6c9b9fa176b8, Call Stack: null, Custom Event ID: -1, Message: VM Ubu16 Xenial_Disk1 disk was updated by admin@internal-authz.
2017-05-10 11:36:19,931-03 INFO [org.ovirt.engine.core.bll.CloneVmCommand] (default task-15) [6c38dbf9-6c8f-4390-979f-4031b5b46c98] Lock Acquired to object 'EngineLock:{exclusiveLocks='[chUbu16Clone=<VM_NAME, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='[d73206ed-89ba-48a9-82ff-c107c1af60f0=<DISK, ACTION_TYPE_FAILED_DISK_IS_USED_FOR_CREATE_VM$VmName chUbu16Clone>, 6d51dd95-ba25-4ae9-8bee-303f90e76e62=<VM, ACTION_TYPE_FAILED_VM_IS_BEING_CLONED>]'}'
2017-05-10 11:36:19,981-03 INFO [org.ovirt.engine.core.bll.CloneVmCommand] (default task-15) [6c38dbf9-6c8f-4390-979f-4031b5b46c98] Running command: CloneVmCommand internal: false. Entities affected : ID: 6d51dd95-ba25-4ae9-8bee-303f90e76e62 Type: VMAction group CREATE_VM with role type USER
2017-05-10 11:36:20,052-03 INFO [org.ovirt.engine.core.bll.storage.disk.image.CopyImageGroupCommand] (default task-15) [78d33cee] Running command: CopyImageGroupCommand internal: true. Entities affected : ID: 20423d5e-188c-4e10-9893-588ceb81b354 Type: Storage
2017-05-10 11:36:20,126-03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default task-15) [78d33cee] START, CopyImageVDSCommand( CopyImageVDSCommandParameters:{runAsync='true', storagePoolId='00000001-0001-0001-0001-000000000311', ignoreFailoverLimit='false', storageDomainId='20423d5e-188c-4e10-9893-588ceb81b354', imageGroupId='d73206ed-89ba-48a9-82ff-c107c1af60f0', imageId='bba1813d-9e91-4e37-bed7-40beedda8051', dstImageGroupId='670a7b20-fecd-45c6-af5c-3c7b98258224', vmId='e470d257-6e3a-4989-a0ec-3b10c3ca808c', dstImageId='013d8855-4e49-4984-8266-6a5e9437dff7', imageDescription='', dstStorageDomainId='20423d5e-188c-4e10-9893-588ceb81b354', copyVolumeType='LeafVol', volumeFormat='RAW', preallocate='Sparse', postZero='false', discard='false', force='false'}), log id: 3477da84
2017-05-10 11:36:20,126-03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default task-15) [78d33cee] -- executeIrsBrokerCommand: calling 'copyImage' with two new parameters: description and UUID. Parameters:
2017-05-10 11:36:20,126-03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default task-15) [78d33cee] ++ sdUUID=20423d5e-188c-4e10-9893-588ceb81b354
2017-05-10 11:36:20,126-03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default task-15) [78d33cee] ++ spUUID=00000001-0001-0001-0001-000000000311
2017-05-10 11:36:20,126-03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default task-15) [78d33cee] ++ vmGUID=e470d257-6e3a-4989-a0ec-3b10c3ca808c
2017-05-10 11:36:20,126-03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default task-15) [78d33cee] ++ srcImageGUID=d73206ed-89ba-48a9-82ff-c107c1af60f0
2017-05-10 11:36:20,126-03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default task-15) [78d33cee] ++ srcVolUUID=bba1813d-9e91-4e37-bed7-40beedda8051
2017-05-10 11:36:20,126-03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default task-15) [78d33cee] ++ dstImageGUID=670a7b20-fecd-45c6-af5c-3c7b98258224
2017-05-10 11:36:20,126-03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default task-15) [78d33cee] ++ dstVolUUID=013d8855-4e49-4984-8266-6a5e9437dff7
2017-05-10 11:36:20,126-03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default task-15) [78d33cee] ++ descr=
2017-05-10 11:36:20,126-03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default task-15) [78d33cee] ++ dstSdUUID=20423d5e-188c-4e10-9893-588ceb81b354
2017-05-10 11:36:21,014-03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default task-15) [78d33cee] FINISH, CopyImageVDSCommand, return: de21abbb-5ec9-4f03-886d-c853abcc80ec, log id: 3477da84
2017-05-10 11:36:21,034-03 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (default task-15) [78d33cee] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command 'f273e5fe-0d77-47c0-a5b1-8dc8f16dde9f'
2017-05-10 11:36:21,034-03 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (default task-15) [78d33cee] CommandMultiAsyncTasks::attachTask: Attaching task 'de21abbb-5ec9-4f03-886d-c853abcc80ec' to command 'f273e5fe-0d77-47c0-a5b1-8dc8f16dde9f'.
2017-05-10 11:36:21,068-03 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (default task-15) [78d33cee] Adding task 'de21abbb-5ec9-4f03-886d-c853abcc80ec' (Parent Command 'CopyImageGroup', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling hasn't started yet..
2017-05-10 11:36:21,089-03 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (default task-15) [78d33cee] BaseAsyncTask::startPollingTask: Starting to poll task 'de21abbb-5ec9-4f03-886d-c853abcc80ec'.
2017-05-10 11:36:21,164-03 INFO [org.ovirt.engine.core.bll.AddGraphicsDeviceCommand] (default task-15) [1270fe1f] Running command: AddGraphicsDeviceCommand internal: true. Entities affected : ID: e470d257-6e3a-4989-a0ec-3b10c3ca808c Type: VMAction group EDIT_VM_PROPERTIES with role type USER
2017-05-10 11:36:21,241-03 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-15) [1270fe1f] EVENT_ID: USER_ADD_VM(34), Correlation ID: 6c38dbf9-6c8f-4390-979f-4031b5b46c98, Job ID: 3320698b-6507-41e0-bd80-8d3269be0193, Call Stack: null, Custom Event ID: -1, Message: VM chUbu16Clone was created by admin@internal-authz.
2017-05-10 11:36:22,539-03 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (DefaultQuartzScheduler9) [29c890f1] Polling and updating Async Tasks: 1 tasks, 1 tasks to poll now
2017-05-10 11:36:23,059-03 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler9) [29c890f1] SPMAsyncTask::PollTask: Polling task 'de21abbb-5ec9-4f03-886d-c853abcc80ec' (Parent Command 'CopyImageGroup', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned status 'running'.
2017-05-10 11:36:23,059-03 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (DefaultQuartzScheduler9) [29c890f1] Finished polling Tasks, will poll again in 10 seconds.
2017-05-10 11:36:30,269-03 INFO [org.ovirt.engine.core.bll.aaa.SessionDataContainer] (DefaultQuartzScheduler4) [62c29fef] Not removing session 'vZyqrcCljPC7hQtcILsk4uDug3QsiinZBOyoGDiQKkYYT2znGyWe4fasrPbjYxdjbfyR3DBnp+UZ9/k20dGsMA==', session has running commands for user 'admin@internal-authz'.
2017-05-10 11:36:30,269-03 INFO [org.ovirt.engine.core.bll.aaa.SessionDataContainer] (DefaultQuartzScheduler4) [62c29fef] Not removing session 'nLzKb6P4Th3A9zUFNNZYuZgWaL/DraYRfQjOvjmXisa9c3SiI+Sv8xvLZ+n2+9ptKPHqaiw0oNCOG6kDWOaszA==', session has running commands for user 'admin@internal-authz'.
2017-05-10 11:36:35,685-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetFileStatsVDSCommand] (org.ovirt.thread.pool-6-thread-49) [2ee2229a] START, GetFileStatsVDSCommand( GetFileStatsParameters:{runAsync='true', storagePoolId='00000001-0001-0001-0001-000000000311', ignoreFailoverLimit='false'}), log id: 7559896f
2017-05-10 11:36:37,089-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetFileStatsVDSCommand] (org.ovirt.thread.pool-6-thread-49) [2ee2229a] FINISH, GetFileStatsVDSCommand, return: {ubuntu-16.04.2-server-amd64.iso={status=0, ctime=1492046787.0, size=869269504}, systemrescuecd-x86-4.9.5.iso={status=0, ctime=1492048781.0, size=520032256}, ubuntu-14.04.5-server-amd64.iso={status=0, ctime=1492130650.0, size=649068544}}, log id: 7559896f
2017-05-10 11:36:37,146-03 INFO [org.ovirt.engine.core.bll.storage.domain.IsoDomainListSynchronizer] (org.ovirt.thread.pool-6-thread-49) [2ee2229a] Finished automatic refresh process for 'ISO' file type with success, for storage domain id '6154f231-195b-46cd-a108-a0e42d360145'.
2017-05-10 11:37:30,289-03 INFO [org.ovirt.engine.core.bll.aaa.SessionDataContainer] (DefaultQuartzScheduler10) [493b7b3a] Not removing session 'vZyqrcCljPC7hQtcILsk4uDug3QsiinZBOyoGDiQKkYYT2znGyWe4fasrPbjYxdjbfyR3DBnp+UZ9/k20dGsMA==', session has running commands for user 'admin@internal-authz'.
2017-05-10 11:37:30,289-03 INFO [org.ovirt.engine.core.bll.aaa.SessionDataContainer] (DefaultQuartzScheduler10) [493b7b3a] Not removing session 'nLzKb6P4Th3A9zUFNNZYuZgWaL/DraYRfQjOvjmXisa9c3SiI+Sv8xvLZ+n2+9ptKPHqaiw0oNCOG6kDWOaszA==', session has running commands for user 'admin@internal-authz'.
2017-05-10 11:38:09,306-03 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler6) [5a3af47c] SPMAsyncTask::PollTask: Polling task 'de21abbb-5ec9-4f03-886d-c853abcc80ec' (Parent Command 'CopyImageGroup', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned status 'finished', result 'success'.
2017-05-10 11:38:09,310-03 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler6) [5a3af47c] BaseAsyncTask::onTaskEndSuccess: Task 'de21abbb-5ec9-4f03-886d-c853abcc80ec' (Parent Command 'CopyImageGroup', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended successfully.
2017-05-10 11:38:09,310-03 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (DefaultQuartzScheduler6) [5a3af47c] CommandAsyncTask::endActionIfNecessary: All tasks of command 'f273e5fe-0d77-47c0-a5b1-8dc8f16dde9f' has ended -> executing 'endAction'
2017-05-10 11:38:09,310-03 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (DefaultQuartzScheduler6) [5a3af47c] CommandAsyncTask::endAction: Ending action for '1' tasks (command ID: 'f273e5fe-0d77-47c0-a5b1-8dc8f16dde9f'): calling endAction '.
2017-05-10 11:38:09,311-03 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-6-thread-34) [5a3af47c] CommandAsyncTask::endCommandAction [within thread] context: Attempting to endAction 'CopyImageGroup',
2017-05-10 11:38:09,325-03 INFO [org.ovirt.engine.core.bll.storage.disk.image.CopyImageGroupCommand] (org.ovirt.thread.pool-6-thread-34) [78d33cee] Command [id=f273e5fe-0d77-47c0-a5b1-8dc8f16dde9f]: Updating status to 'SUCCEEDED', The command end method logic will be executed by one of its parent commands.
2017-05-10 11:38:09,325-03 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-6-thread-34) [78d33cee] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'CopyImageGroup' completed, handling the result.
2017-05-10 11:38:09,325-03 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-6-thread-34) [78d33cee] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'CopyImageGroup' succeeded, clearing tasks.
2017-05-10 11:38:09,325-03 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (org.ovirt.thread.pool-6-thread-34) [78d33cee] SPMAsyncTask::ClearAsyncTask: Attempting to clear task 'de21abbb-5ec9-4f03-886d-c853abcc80ec'
2017-05-10 11:38:09,326-03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-34) [78d33cee] START, SPMClearTaskVDSCommand( SPMTaskGuidBaseVDSCommandParameters:{runAsync='true', storagePoolId='00000001-0001-0001-0001-000000000311', ignoreFailoverLimit='false', taskId='de21abbb-5ec9-4f03-886d-c853abcc80ec'}), log id: 3ae0372a
2017-05-10 11:38:09,326-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-34) [78d33cee] START, HSMClearTaskVDSCommand(HostName = ovirt730-01, HSMTaskGuidBaseVDSCommandParameters:{runAsync='true', hostId='f9519bb1-7b11-4a7b-98d8-72f606288e7c', taskId='de21abbb-5ec9-4f03-886d-c853abcc80ec'}), log id: 26278f86
2017-05-10 11:38:10,313-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-34) [78d33cee] FINISH, HSMClearTaskVDSCommand, log id: 26278f86
2017-05-10 11:38:10,313-03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-34) [78d33cee] FINISH, SPMClearTaskVDSCommand, log id: 3ae0372a
2017-05-10 11:38:10,316-03 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (org.ovirt.thread.pool-6-thread-34) [78d33cee] BaseAsyncTask::removeTaskFromDB: Removed task 'de21abbb-5ec9-4f03-886d-c853abcc80ec' from DataBase
2017-05-10 11:38:10,316-03 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-6-thread-34) [78d33cee] CommandAsyncTask::HandleEndActionResult [within thread]: Removing CommandMultiAsyncTasks object for entity 'f273e5fe-0d77-47c0-a5b1-8dc8f16dde9f'
2017-05-10 11:38:30,304-03 INFO [org.ovirt.engine.core.bll.aaa.SessionDataContainer] (DefaultQuartzScheduler9) [29c890f1] Not removing session 'vZyqrcCljPC7hQtcILsk4uDug3QsiinZBOyoGDiQKkYYT2znGyWe4fasrPbjYxdjbfyR3DBnp+UZ9/k20dGsMA==', session has running commands for user 'admin@internal-authz'.
2017-05-10 11:38:30,304-03 INFO [org.ovirt.engine.core.bll.aaa.SessionDataContainer] (DefaultQuartzScheduler9) [29c890f1] Not removing session 'nLzKb6P4Th3A9zUFNNZYuZgWaL/DraYRfQjOvjmXisa9c3SiI+Sv8xvLZ+n2+9ptKPHqaiw0oNCOG6kDWOaszA==', session has running commands for user 'admin@internal-authz'.
2017-05-10 11:39:26,248-03 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (DefaultQuartzScheduler9) [29c890f1] Setting new tasks map. The map contains now 0 tasks
2017-05-10 11:39:26,248-03 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (DefaultQuartzScheduler9) [29c890f1] Cleared all tasks of pool '00000001-0001-0001-0001-000000000311'.
2017-05-10 11:39:30,318-03 INFO [org.ovirt.engine.core.bll.aaa.SessionDataContainer] (DefaultQuartzScheduler9) [29c890f1] Not removing session 'vZyqrcCljPC7hQtcILsk4uDug3QsiinZBOyoGDiQKkYYT2znGyWe4fasrPbjYxdjbfyR3DBnp+UZ9/k20dGsMA==', session has running commands for user 'admin@internal-authz'.
2017-05-10 11:39:30,318-03 INFO [org.ovirt.engine.core.bll.aaa.SessionDataContainer] (DefaultQuartzScheduler9) [29c890f1] Not removing session 'nLzKb6P4Th3A9zUFNNZYuZgWaL/DraYRfQjOvjmXisa9c3SiI+Sv8xvLZ+n2+9ptKPHqaiw0oNCOG6kDWOaszA==', session has running commands for user 'admin@internal-authz'.
2017-05-10 11:40:30,333-03 INFO [org.ovirt.engine.core.bll.aaa.SessionDataContainer] (DefaultQuartzScheduler3) [5bdc6552] Not removing session 'vZyqrcCljPC7hQtcILsk4uDug3QsiinZBOyoGDiQKkYYT2znGyWe4fasrPbjYxdjbfyR3DBnp+UZ9/k20dGsMA==', session has running commands for user 'admin@internal-authz'.
2017-05-10 11:40:30,333-03 INFO [org.ovirt.engine.core.bll.aaa.SessionDataContainer] (DefaultQuartzScheduler3) [5bdc6552] Not removing session 'nLzKb6P4Th3A9zUFNNZYuZgWaL/DraYRfQjOvjmXisa9c3SiI+Sv8xvLZ+n2+9ptKPHqaiw0oNCOG6kDWOaszA==', session has running commands for user 'admin@internal-authz'.

On 2017-05-11 04:30 AM, Fred Rolland wrote:
Hi,

Can you provide the engine log?

Thanks,
Freddy

On Wed, May 10, 2017 at 5:57 PM, Charles Tassell <charles@islandadmin.ca> wrote: [snip]

Just as an update, I created a new VM and had the same issue: the disk remains locked. So I then added a new data store (this one iSCSI, not NFS) and created a new VM on that. Again, the disk remains locked. So the problem seems to be that any action that sets out to modify a disk image on my cluster locks the disk and keeps it locked permanently.

I tried restarting the vdsm daemon, but that didn't make a difference. I'm seeing this in my sanlock.log file though, which worries me:

2017-05-07 07:51:41-0300 1738538 [13575]: s2 renewal error -202 delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [11513]: s1 renewal error -202 delta_length 10 last_success 1738508

Here's the last 20 lines:

2017-05-07 07:51:41-0300 1738538 [13580]: s3 renewal error -202 delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [13575]: 20423d5e aio timeout RD 0x7fe1440008c0:0x7fe1440008d0:0x7fe160255000 ioto 10 to_count 67
2017-05-07 07:51:41-0300 1738538 [13575]: s2 delta_renew read timeout 10 sec offset 0 /rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/ids
2017-05-07 07:51:41-0300 1738538 [13575]: s2 renewal error -202 delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [11513]: hosted-e aio timeout RD 0x7fe1480008c0:0x7fe1480008d0:0x7fe14e6fc000 ioto 10 to_count 65
2017-05-07 07:51:41-0300 1738538 [11513]: s1 delta_renew read timeout 10 sec offset 0 /var/run/vdsm/storage/5dccd07d-a923-4d4b-9cb1-3b51ebfdca4d/5a9c284f-0faa-4a25-94ce-c9efdae07484/ab2443f1-95ed-475d-886c-c1653257cf04
2017-05-07 07:51:41-0300 1738538 [11513]: s1 renewal error -202 delta_length 10 last_success 1738508
2017-05-07 07:51:47-0300 1738544 [13575]: 20423d5e aio collect RD 0x7fe1440008c0:0x7fe1440008d0:0x7fe160255000 result 1048576:0 match reap
2017-05-07 07:51:47-0300 1738544 [13580]: 5dccd07d aio collect RD 0x7fe13c0008c0:0x7fe13c0008d0:0x7fe14e5fa000 result 1048576:0 match reap
2017-05-07 07:51:47-0300 1738544 [11513]: hosted-e aio collect RD 0x7fe1480008c0:0x7fe1480008d0:0x7fe14e6fc000 result 1048576:0 match reap
2017-05-07 07:53:57-0300 1738674 [13590]: s2:r15 resource 20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576 for 7,21,78395
2017-05-07 07:59:49-0300 1739027 [13575]: s2 delta_renew long write time 10 sec
2017-05-09 08:38:34-0300 1914151 [13590]: s2:r16 resource 20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576 for 7,21,78395
2017-05-11 15:07:45-0300 2110302 [13590]: s2:r17 resource 20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576 for 7,21,112346
2017-05-11 15:17:24-0300 2110881 [13590]: s4 lockspace b010093e-1924-46e1-bd57-2cf2b2445087:1:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/ids:0
2017-05-11 15:17:45-0300 2110902 [1395]: s4 host 1 1 2110881 44ae07eb-3371-4750-8728-ab3b049dbae2.ovirt730-0
2017-05-11 15:17:45-0300 2110902 [1400]: s4:r18 resource b010093e-1924-46e1-bd57-2cf2b2445087:SDM:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/leases:1048576 for 7,21,112346
2017-05-11 15:17:52-0300 2110909 [1399]: s5 lockspace b010093e-1924-46e1-bd57-2cf2b2445087:1:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/ids:0
2017-05-11 15:18:13-0300 2110930 [1395]: s5 host 1 2 2110909 44ae07eb-3371-4750-8728-ab3b049dbae2.ovirt730-0
2017-05-11 15:18:13-0300 2110930 [1395]: s5 host 2 1 2110065 4d31313f-b2dd-4368-bf31-d39835e10afb.ovirt730-0
Hi Freddy,
Sure, thanks for looking into this. Here you go:
2017-05-10 11:35:30,249-03 INFO [org.ovirt.engine.core.bll.aaa.SessionDataContainer] (DefaultQuartzScheduler8) [1c84abac] Not removing session 'vZyqrcCljPC7hQtcILsk4uDug3QsiinZBOyoGDiQKkYYT2znGyWe4fasrPbjYxdjbfyR3DBnp+UZ9/k20dGsMA==', session has running commands for user 'admin@internal-authz'.[snip]
On 2017-05-11 04:30 AM, Fred Rolland wrote:
Hi,
Can you provide the engine log ?
Thanks,
Freddy
On Wed, May 10, 2017 at 5:57 PM, Charles Tassell <charles@islandadmin.ca> wrote:
Hi Everyone,
I'm having some issues with my oVirt 4.1 (fully updated to latest release as of yesterday) cluster. When I clone a VM the disks of both the original and the clone stay in the locked state, and the only way I can resolve it is to go into the database on the engine and run "update images set imagestatus=1 where imagestatus=2;"
I'm using NFS4 as a datastore and the disks seem to copy fine (file sizes match and everything), but the locking worries me. To clone the VM I just shut the source VM down and then right-click on it and select "Clone".
I've attached the full VDSM log from my last attempt, but here is the excerpt of the lines just referencing the two disks (d73206ed-89ba-48a9-82ff-c107c1af60f0 is the original VM's disk and 670a7b20-fecd-45c6-af5c-3c7b98258224 is the clone.)
[snip] _______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
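A side note on the sanlock excerpt above: the renewal error -202 entries line up with the aio timeout RD lines, i.e., reads of the domain's dom_md/ids lease area that didn't complete within the 10-second ioto, which points at storage latency on the mount rather than the engine-side image lock. A small sketch of checking lease health on a host with sanlock's own client interface (standard sanlock CLI, nothing oVirt-specific assumed):

# show the lockspaces and resources this host currently holds
sanlock client status

# dump sanlock's in-memory log for more context around renewal failures
sanlock client log_dump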

The locking is on the engine side and restarting the vdsm will not affect it. Can you send the whole engine log? Which exact version are you using?
On Thu, May 11, 2017 at 9:30 PM, Charles Tassell <charles@islandadmin.ca> wrote:
Just as an update, I created a new VM and had the same issue: the disk remains locked. So I then added a new data store (this one iSCSI, not NFS) and created a new VM on that. Again, the disk remains locked. So the problem seems to be that any action that tries to modify a disk image on my cluster locks the disk and keeps it locked permanently.
I tried restarting the vdsm daemon, but that didn't make a difference. I'm seeing this in my sanlock.log file though, which worries me:
2017-05-07 07:51:41-0300 1738538 [13575]: s2 renewal error -202 delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [11513]: s1 renewal error -202 delta_length 10 last_success 1738508
Here are the last 20 lines:
2017-05-07 07:51:41-0300 1738538 [13580]: s3 renewal error -202 delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [13575]: 20423d5e aio timeout RD 0x7fe1440008c0:0x7fe1440008d0:0x7fe160255000 ioto 10 to_count 67
2017-05-07 07:51:41-0300 1738538 [13575]: s2 delta_renew read timeout 10 sec offset 0 /rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/ids
2017-05-07 07:51:41-0300 1738538 [13575]: s2 renewal error -202 delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [11513]: hosted-e aio timeout RD 0x7fe1480008c0:0x7fe1480008d0:0x7fe14e6fc000 ioto 10 to_count 65
2017-05-07 07:51:41-0300 1738538 [11513]: s1 delta_renew read timeout 10 sec offset 0 /var/run/vdsm/storage/5dccd07d-a923-4d4b-9cb1-3b51ebfdca4d/5a9c284f-0faa-4a25-94ce-c9efdae07484/ab2443f1-95ed-475d-886c-c1653257cf04
2017-05-07 07:51:41-0300 1738538 [11513]: s1 renewal error -202 delta_length 10 last_success 1738508
2017-05-07 07:51:47-0300 1738544 [13575]: 20423d5e aio collect RD 0x7fe1440008c0:0x7fe1440008d0:0x7fe160255000 result 1048576:0 match reap
2017-05-07 07:51:47-0300 1738544 [13580]: 5dccd07d aio collect RD 0x7fe13c0008c0:0x7fe13c0008d0:0x7fe14e5fa000 result 1048576:0 match reap
2017-05-07 07:51:47-0300 1738544 [11513]: hosted-e aio collect RD 0x7fe1480008c0:0x7fe1480008d0:0x7fe14e6fc000 result 1048576:0 match reap
2017-05-07 07:53:57-0300 1738674 [13590]: s2:r15 resource 20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576 for 7,21,78395
2017-05-07 07:59:49-0300 1739027 [13575]: s2 delta_renew long write time 10 sec
2017-05-09 08:38:34-0300 1914151 [13590]: s2:r16 resource 20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576 for 7,21,78395
2017-05-11 15:07:45-0300 2110302 [13590]: s2:r17 resource 20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576 for 7,21,112346
2017-05-11 15:17:24-0300 2110881 [13590]: s4 lockspace b010093e-1924-46e1-bd57-2cf2b2445087:1:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/ids:0
2017-05-11 15:17:45-0300 2110902 [1395]: s4 host 1 1 2110881 44ae07eb-3371-4750-8728-ab3b049dbae2.ovirt730-0
2017-05-11 15:17:45-0300 2110902 [1400]: s4:r18 resource b010093e-1924-46e1-bd57-2cf2b2445087:SDM:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/leases:1048576 for 7,21,112346
2017-05-11 15:17:52-0300 2110909 [1399]: s5 lockspace b010093e-1924-46e1-bd57-2cf2b2445087:1:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/ids:0
2017-05-11 15:18:13-0300 2110930 [1395]: s5 host 1 2 2110909 44ae07eb-3371-4750-8728-ab3b049dbae2.ovirt730-0
2017-05-11 15:18:13-0300 2110930 [1395]: s5 host 2 1 2110065 4d31313f-b2dd-4368-bf31-d39835e10afb.ovirt730-0
On 2017-05-11 10:09 AM, Charles Tassell wrote:
Hi Freddy,
Sure, thanks for looking into this. Here you go:
2017-05-10 11:35:30,249-03 INFO [org.ovirt.engine.core.bll.aaa.SessionDataContainer] (DefaultQuartzScheduler8) [1c84abac] Not removing session 'vZyqrcCljPC7hQtcILsk4uDug3QsiinZBOyoGDiQKkYYT2znGyWe4fasrPbjYxdjbfyR3DBnp+UZ9/k20dGsMA==', session has running commands for user 'admin@internal-authz'.[snip]
On 2017-05-11 04:30 AM, Fred Rolland wrote:
Hi,
Can you provide the engine log ?
Thanks,
Freddy
On Wed, May 10, 2017 at 5:57 PM, Charles Tassell <charles@islandadmin.ca> wrote:
Hi Everyone,
I'm having some issues with my oVirt 4.1 (fully updated to latest release as of yesterday) cluster. When I clone a VM the disks of both the original and the clone stay in the locked state, and the only way I can resolve it is to go into the database on the engine and run "update images set imagestatus=1 where imagestatus=2;"
I'm using NFS4 as a datastore and the disks seem to copy fine (file sizes match and everything), but the locking worries me. To clone the VM I just shut the source VM down and then right-click on it and select "Clone".
I've attached the full VDSM log from my last attempt, but here is the excerpt of the lines just referencing the two disks (d73206ed-89ba-48a9-82ff-c107c1af60f0 is the original VM's disk and 670a7b20-fecd-45c6-af5c-3c7b98258224 is the clone.)
[snip] _______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
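On the "which exact version" question: the quickest way to answer it is from the package manager on the engine machine and the hosts; a sketch, assuming an rpm-based install:

# on the engine machine
rpm -q ovirt-engine

# on each host, for the storage-side components seen in these logs
rpm -q vdsm sanlock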

Sure, it's pretty big so I've put it online for download at http://krissy.islandadmin.ca/public/engine.log.txt
On 2017-05-11 04:08 PM, Fred Rolland wrote:
The locking is on the engine side and restarting the vdsm will not affect it. Can you send the whole engine log? Which exact version are you using?
On Thu, May 11, 2017 at 9:30 PM, Charles Tassell <charles@islandadmin.ca> wrote:
Just as an update, I created a new VM and had the same issue: the disk remains locked. So I then added a new data store (this one iSCSI, not NFS) and created a new VM on that. Again, the disk remains locked. So the problem seems to be that any action that tries to modify a disk image on my cluster locks the disk and keeps it locked permanently.
I tried restarting the vdsm daemon, but that didn't make a difference. I'm seeing this in my sanlock.log file though, which worries me:
2017-05-07 07:51:41-0300 1738538 [13575]: s2 renewal error -202 delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [11513]: s1 renewal error -202 delta_length 10 last_success 1738508
Here are the last 20 lines:
2017-05-07 07:51:41-0300 1738538 [13580]: s3 renewal error -202 delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [13575]: 20423d5e aio timeout RD 0x7fe1440008c0:0x7fe1440008d0:0x7fe160255000 ioto 10 to_count 67
2017-05-07 07:51:41-0300 1738538 [13575]: s2 delta_renew read timeout 10 sec offset 0 /rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/ids
2017-05-07 07:51:41-0300 1738538 [13575]: s2 renewal error -202 delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [11513]: hosted-e aio timeout RD 0x7fe1480008c0:0x7fe1480008d0:0x7fe14e6fc000 ioto 10 to_count 65
2017-05-07 07:51:41-0300 1738538 [11513]: s1 delta_renew read timeout 10 sec offset 0 /var/run/vdsm/storage/5dccd07d-a923-4d4b-9cb1-3b51ebfdca4d/5a9c284f-0faa-4a25-94ce-c9efdae07484/ab2443f1-95ed-475d-886c-c1653257cf04
2017-05-07 07:51:41-0300 1738538 [11513]: s1 renewal error -202 delta_length 10 last_success 1738508
2017-05-07 07:51:47-0300 1738544 [13575]: 20423d5e aio collect RD 0x7fe1440008c0:0x7fe1440008d0:0x7fe160255000 result 1048576:0 match reap
2017-05-07 07:51:47-0300 1738544 [13580]: 5dccd07d aio collect RD 0x7fe13c0008c0:0x7fe13c0008d0:0x7fe14e5fa000 result 1048576:0 match reap
2017-05-07 07:51:47-0300 1738544 [11513]: hosted-e aio collect RD 0x7fe1480008c0:0x7fe1480008d0:0x7fe14e6fc000 result 1048576:0 match reap
2017-05-07 07:53:57-0300 1738674 [13590]: s2:r15 resource 20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576 for 7,21,78395
2017-05-07 07:59:49-0300 1739027 [13575]: s2 delta_renew long write time 10 sec
2017-05-09 08:38:34-0300 1914151 [13590]: s2:r16 resource 20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576 for 7,21,78395
2017-05-11 15:07:45-0300 2110302 [13590]: s2:r17 resource 20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576 for 7,21,112346
2017-05-11 15:17:24-0300 2110881 [13590]: s4 lockspace b010093e-1924-46e1-bd57-2cf2b2445087:1:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/ids:0
2017-05-11 15:17:45-0300 2110902 [1395]: s4 host 1 1 2110881 44ae07eb-3371-4750-8728-ab3b049dbae2.ovirt730-0
2017-05-11 15:17:45-0300 2110902 [1400]: s4:r18 resource b010093e-1924-46e1-bd57-2cf2b2445087:SDM:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/leases:1048576 for 7,21,112346
2017-05-11 15:17:52-0300 2110909 [1399]: s5 lockspace b010093e-1924-46e1-bd57-2cf2b2445087:1:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/ids:0
2017-05-11 15:18:13-0300 2110930 [1395]: s5 host 1 2 2110909 44ae07eb-3371-4750-8728-ab3b049dbae2.ovirt730-0
2017-05-11 15:18:13-0300 2110930 [1395]: s5 host 2 1 2110065 4d31313f-b2dd-4368-bf31-d39835e10afb.ovirt730-0
On 2017-05-11 10:09 AM, Charles Tassell wrote:
Hi Freddy,
Sure, thanks for looking into this. Here you go:
2017-05-10 11:35:30,249-03 INFO [org.ovirt.engine.core.bll.aaa.SessionDataContainer] (DefaultQuartzScheduler8) [1c84abac] Not removing session 'vZyqrcCljPC7hQtcILsk4uDug3QsiinZBOyoGDiQKkYYT2znGyWe4fasrPbjYxdjbfyR3DBnp+UZ9/k20dGsMA==', session has running commands for user 'admin@internal-authz'.[snip]
On 2017-05-11 04:30 AM, Fred Rolland wrote:
Hi,
Can you provide the engine log ?
Thanks,
Freddy
On Wed, May 10, 2017 at 5:57 PM, Charles Tassell <charles@islandadmin.ca> wrote:
Hi Everyone,
I'm having some issues with my oVirt 4.1 (fully updated to latest release as of yesterday) cluster. When I clone a VM the disks of both the original and the clone stay in the locked state, and the only way I can resolve it is to go into the database on the engine and run "update images set imagestatus=1 where imagestatus=2;"
I'm using NFS4 as a datastore and the disks seem to copy fine (file sizes match and everything), but the locking worries me. To clone the VM I just shut the source VM down and then right-click on it and select "Clone".
I've attached the full VDSM log from my last attempt, but here is the excerpt of the lines just referencing the two disks (d73206ed-89ba-48a9-82ff-c107c1af60f0 is the original VM's disk and 670a7b20-fecd-45c6-af5c-3c7b98258224 is the clone.)
[snip] _______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
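When sharing a large engine.log like this, pre-filtering for the disks in question can make it easier for reviewers; a hedged sketch using the two image UUIDs from earlier in the thread and the default engine log path:

# pull only the lines that mention the source disk or the clone
grep -E 'd73206ed-89ba-48a9-82ff-c107c1af60f0|670a7b20-fecd-45c6-af5c-3c7b98258224' /var/log/ovirt-engine/engine.log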

Seems you hit [1]. Can you tell me what is the exact version of the engine?
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1395793
On Thu, May 11, 2017 at 10:29 PM, Charles Tassell <charles@islandadmin.ca> wrote:
Sure, it's pretty big so I've put it online for download at http://krissy.islandadmin.ca/public/engine.log.txt
On 2017-05-11 04:08 PM, Fred Rolland wrote:
The locking is on the engine side and restarting the vdsm will not affect it. Can you send the whole engine log? Which exact version are you using?
On Thu, May 11, 2017 at 9:30 PM, Charles Tassell <charles@islandadmin.ca> wrote:
Just as an update, I created a new VM and had the same issue: the disk remains locked. So I then added a new data store (this one iSCSI, not NFS) and created a new VM on that. Again, the disk remains locked. So the problem seems to be that any action that tries to modify a disk image on my cluster locks the disk and keeps it locked permanently.
I tried restarting the vdsm daemon, but that didn't make a difference. I'm seeing this in my sanlock.log file though, which worries me:
2017-05-07 07:51:41-0300 1738538 [13575]: s2 renewal error -202 delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [11513]: s1 renewal error -202 delta_length 10 last_success 1738508
Here are the last 20 lines:
2017-05-07 07:51:41-0300 1738538 [13580]: s3 renewal error -202 delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [13575]: 20423d5e aio timeout RD 0x7fe1440008c0:0x7fe1440008d0:0x7fe160255000 ioto 10 to_count 67
2017-05-07 07:51:41-0300 1738538 [13575]: s2 delta_renew read timeout 10 sec offset 0 /rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/ids
2017-05-07 07:51:41-0300 1738538 [13575]: s2 renewal error -202 delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [11513]: hosted-e aio timeout RD 0x7fe1480008c0:0x7fe1480008d0:0x7fe14e6fc000 ioto 10 to_count 65
2017-05-07 07:51:41-0300 1738538 [11513]: s1 delta_renew read timeout 10 sec offset 0 /var/run/vdsm/storage/5dccd07d-a923-4d4b-9cb1-3b51ebfdca4d/5a9c284f-0faa-4a25-94ce-c9efdae07484/ab2443f1-95ed-475d-886c-c1653257cf04
2017-05-07 07:51:41-0300 1738538 [11513]: s1 renewal error -202 delta_length 10 last_success 1738508
2017-05-07 07:51:47-0300 1738544 [13575]: 20423d5e aio collect RD 0x7fe1440008c0:0x7fe1440008d0:0x7fe160255000 result 1048576:0 match reap
2017-05-07 07:51:47-0300 1738544 [13580]: 5dccd07d aio collect RD 0x7fe13c0008c0:0x7fe13c0008d0:0x7fe14e5fa000 result 1048576:0 match reap
2017-05-07 07:51:47-0300 1738544 [11513]: hosted-e aio collect RD 0x7fe1480008c0:0x7fe1480008d0:0x7fe14e6fc000 result 1048576:0 match reap
2017-05-07 07:53:57-0300 1738674 [13590]: s2:r15 resource 20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576 for 7,21,78395
2017-05-07 07:59:49-0300 1739027 [13575]: s2 delta_renew long write time 10 sec
2017-05-09 08:38:34-0300 1914151 [13590]: s2:r16 resource 20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576 for 7,21,78395
2017-05-11 15:07:45-0300 2110302 [13590]: s2:r17 resource 20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576 for 7,21,112346
2017-05-11 15:17:24-0300 2110881 [13590]: s4 lockspace b010093e-1924-46e1-bd57-2cf2b2445087:1:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/ids:0
2017-05-11 15:17:45-0300 2110902 [1395]: s4 host 1 1 2110881 44ae07eb-3371-4750-8728-ab3b049dbae2.ovirt730-0
2017-05-11 15:17:45-0300 2110902 [1400]: s4:r18 resource b010093e-1924-46e1-bd57-2cf2b2445087:SDM:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/leases:1048576 for 7,21,112346
2017-05-11 15:17:52-0300 2110909 [1399]: s5 lockspace b010093e-1924-46e1-bd57-2cf2b2445087:1:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/ids:0
2017-05-11 15:18:13-0300 2110930 [1395]: s5 host 1 2 2110909 44ae07eb-3371-4750-8728-ab3b049dbae2.ovirt730-0
2017-05-11 15:18:13-0300 2110930 [1395]: s5 host 2 1 2110065 4d31313f-b2dd-4368-bf31-d39835e10afb.ovirt730-0
On 2017-05-11 10:09 AM, Charles Tassell wrote:
Hi Freddy,
Sure, thanks for looking into this. Here you go:
2017-05-10 11:35:30,249-03 INFO [org.ovirt.engine.core.bll.aaa.SessionDataContainer] (DefaultQuartzScheduler8) [1c84abac] Not removing session 'vZyqrcCljPC7hQtcILsk4uDug3QsiinZBOyoGDiQKkYYT2znGyWe4fasrPbjYxdjbfyR3DBnp+UZ9/k20dGsMA==', session has running commands for user 'admin@internal-authz'.[snip]
On 2017-05-11 04:30 AM, Fred Rolland wrote:
Hi,
Can you provide the engine log ?
Thanks,
Freddy
On Wed, May 10, 2017 at 5:57 PM, Charles Tassell <charles@islandadmin.ca> wrote:
Hi Everyone,
I'm having some issues with my oVirt 4.1 (fully updated to latest release as of yesterday) cluster. When I clone a VM the disks of both the original and the clone stay in the locked state, and the only way I can resolve it is to go into the database on the engine and run "update images set imagestatus=1 where imagestatus=2;"
I'm using NFS4 as a datastore and the disks seem to copy fine (file sizes match and everything), but the locking worries me. To clone the VM I just shut the source VM down and then right-click on it and select "Clone".
I've attached the full VDSM log from my last attempt, but here is the excerpt of the lines just referencing the two disks (d73206ed-89ba-48a9-82ff-c107c1af60f0 is the original VM's disk and 670a7b20-fecd-45c6-af5c-3c7b98258224 is the clone.)
[snip] _______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
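Whatever the root cause tracked in [1], there is a supported alternative to editing the images table by hand: ovirt-engine ships an unlock helper under its dbutils directory. A sketch only, assuming the default install path; the exact flags vary between versions, so verify against the -h output before relying on them:

# query which entities the engine considers locked (flag names are assumptions; check -h)
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -q -t all

# unlock a specific disk by its image UUID (placeholder, substitute your own)
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t disk <image-uuid>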

Hi Fred,
Unfortunately we reinstalled the cluster a while ago, so I don't know the exact version we were running at the time. It would have been the latest version in the 4.1 production repos though, if that helps.
On 2017-07-02 11:32 AM, Fred Rolland wrote:
Seems you hit [1]. Can you tell me what is the exact version of the engine?
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1395793
On Thu, May 11, 2017 at 10:29 PM, Charles Tassell <charles@islandadmin.ca> wrote:
Sure, it's pretty big so I've put it online for download at http://krissy.islandadmin.ca/public/engine.log.txt
On 2017-05-11 04:08 PM, Fred Rolland wrote:
The locking is on the engine side and restarting the vdsm will not affect it. Can you send the whole engine log? Which exact version are you using?
On Thu, May 11, 2017 at 9:30 PM, Charles Tassell <charles@islandadmin.ca> wrote:
Just as an update, I created a new VM and had the same issue: the disk remains locked. So I then added a new data store (this one iSCSI, not NFS) and created a new VM on that. Again, the disk remains locked. So the problem seems to be that any action that tries to modify a disk image on my cluster locks the disk and keeps it locked permanently.
I tried restarting the vdsm daemon, but that didn't make a difference. I'm seeing this in my sanlock.log file though, which worries me:
2017-05-07 07:51:41-0300 1738538 [13575]: s2 renewal error -202 delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [11513]: s1 renewal error -202 delta_length 10 last_success 1738508
Here are the last 20 lines:
2017-05-07 07:51:41-0300 1738538 [13580]: s3 renewal error -202 delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [13575]: 20423d5e aio timeout RD 0x7fe1440008c0:0x7fe1440008d0:0x7fe160255000 ioto 10 to_count 67
2017-05-07 07:51:41-0300 1738538 [13575]: s2 delta_renew read timeout 10 sec offset 0 /rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/ids
2017-05-07 07:51:41-0300 1738538 [13575]: s2 renewal error -202 delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [11513]: hosted-e aio timeout RD 0x7fe1480008c0:0x7fe1480008d0:0x7fe14e6fc000 ioto 10 to_count 65
2017-05-07 07:51:41-0300 1738538 [11513]: s1 delta_renew read timeout 10 sec offset 0 /var/run/vdsm/storage/5dccd07d-a923-4d4b-9cb1-3b51ebfdca4d/5a9c284f-0faa-4a25-94ce-c9efdae07484/ab2443f1-95ed-475d-886c-c1653257cf04
2017-05-07 07:51:41-0300 1738538 [11513]: s1 renewal error -202 delta_length 10 last_success 1738508
2017-05-07 07:51:47-0300 1738544 [13575]: 20423d5e aio collect RD 0x7fe1440008c0:0x7fe1440008d0:0x7fe160255000 result 1048576:0 match reap
2017-05-07 07:51:47-0300 1738544 [13580]: 5dccd07d aio collect RD 0x7fe13c0008c0:0x7fe13c0008d0:0x7fe14e5fa000 result 1048576:0 match reap
2017-05-07 07:51:47-0300 1738544 [11513]: hosted-e aio collect RD 0x7fe1480008c0:0x7fe1480008d0:0x7fe14e6fc000 result 1048576:0 match reap
2017-05-07 07:53:57-0300 1738674 [13590]: s2:r15 resource 20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576 for 7,21,78395
2017-05-07 07:59:49-0300 1739027 [13575]: s2 delta_renew long write time 10 sec
2017-05-09 08:38:34-0300 1914151 [13590]: s2:r16 resource 20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576 for 7,21,78395
2017-05-11 15:07:45-0300 2110302 [13590]: s2:r17 resource 20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576 for 7,21,112346
2017-05-11 15:17:24-0300 2110881 [13590]: s4 lockspace b010093e-1924-46e1-bd57-2cf2b2445087:1:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/ids:0
2017-05-11 15:17:45-0300 2110902 [1395]: s4 host 1 1 2110881 44ae07eb-3371-4750-8728-ab3b049dbae2.ovirt730-0
2017-05-11 15:17:45-0300 2110902 [1400]: s4:r18 resource b010093e-1924-46e1-bd57-2cf2b2445087:SDM:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/leases:1048576 for 7,21,112346
2017-05-11 15:17:52-0300 2110909 [1399]: s5 lockspace b010093e-1924-46e1-bd57-2cf2b2445087:1:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/ids:0
2017-05-11 15:18:13-0300 2110930 [1395]: s5 host 1 2 2110909 44ae07eb-3371-4750-8728-ab3b049dbae2.ovirt730-0
2017-05-11 15:18:13-0300 2110930 [1395]: s5 host 2 1 2110065 4d31313f-b2dd-4368-bf31-d39835e10afb.ovirt730-0
On 2017-05-11 10:09 AM, Charles Tassell wrote:
Hi Freddy,
Sure, thanks for looking into this. Here you go:
2017-05-10 11:35:30,249-03 INFO [org.ovirt.engine.core.bll.aaa.SessionDataContainer] (DefaultQuartzScheduler8) [1c84abac] Not removing session 'vZyqrcCljPC7hQtcILsk4uDug3QsiinZBOyoGDiQKkYYT2znGyWe4fasrPbjYxdjbfyR3DBnp+UZ9/k20dGsMA==', session has running commands for user 'admin@internal-authz'.[snip]
On 2017-05-11 04:30 AM, Fred Rolland wrote:
Hi,
Can you provide the engine log ?
Thanks,
Freddy
On Wed, May 10, 2017 at 5:57 PM, Charles Tassell <charles@islandadmin.ca> wrote:
Hi Everyone,
I'm having some issues with my oVirt 4.1 (fully updated to latest release as of yesterday) cluster. When I clone a VM the disks of both the original and the clone stay in the locked state, and the only way I can resolve it is to go into the database on the engine and run "update images set imagestatus=1 where imagestatus=2;"
I'm using NFS4 as a datastore and the disks seem to copy fine (file sizes match and everything), but the locking worries me. To clone the VM I just shut the source VM down and then right-click on it and select "Clone".
I've attached the full VDSM log from my last attempt, but here is the excerpt of the lines just referencing the two disks (d73206ed-89ba-48a9-82ff-c107c1af60f0 is the original VM's disk and 670a7b20-fecd-45c6-af5c-3c7b98258224 is the clone.)
[snip] _______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
participants (2)
- Charles Tassell
- Fred Rolland