[Users] Unable to delete a snapshot

Hi,

With our oVirt 3.3, I created a snapshot three weeks ago on a VM I had properly shut down. It has run fine so far. Today, after having shut it down properly again, I am trying to delete the snapshot and I get an error:

"Failed to delete snapshot 'blahblahbla' for VM 'myVM'."

The disk is thin provisioned, accessed via VirtIO, nothing special. The log below comes from the manager. I hope someone can help us, because this server is quite important to us.

Thank you.

2014-01-06 10:10:58,826 INFO [org.ovirt.engine.core.bll.RemoveSnapshotCommand] (ajp--127.0.0.1-8702-8) Lock Acquired to object EngineLock [exclusiveLocks= key: cb953dc1-c796-457a-99a1-0e54f1c0c338 value: VM , sharedLocks= ]
2014-01-06 10:10:58,837 INFO [org.ovirt.engine.core.bll.RemoveSnapshotCommand] (ajp--127.0.0.1-8702-8) Running command: RemoveSnapshotCommand internal: false. Entities affected : ID: cb953dc1-c796-457a-99a1-0e54f1c0c338 Type: VM
2014-01-06 10:10:58,840 INFO [org.ovirt.engine.core.bll.RemoveSnapshotCommand] (ajp--127.0.0.1-8702-8) Lock freed to object EngineLock [exclusiveLocks= key: cb953dc1-c796-457a-99a1-0e54f1c0c338 value: VM , sharedLocks= ]
2014-01-06 10:10:58,844 INFO [org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskCommand] (ajp--127.0.0.1-8702-8) Running command: RemoveSnapshotSingleDiskCommand internal: true. Entities affected : ID: 00000000-0000-0000-0000-000000000000 Type: Storage
2014-01-06 10:10:58,848 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.MergeSnapshotsVDSCommand] (ajp--127.0.0.1-8702-8) START, MergeSnapshotsVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false, storageDomainId = 11a077c7-658b-49bb-8596-a785109c24c9, imageGroupId = 69220da6-eeed-4435-aad0-7aa33f3a0d21, imageId = 506085b6-40e0-4176-a4df-9102857f51f2, imageId2 = c50561d9-c3ba-4366-b2bc-49bbfaa4cd23, vmId = cb953dc1-c796-457a-99a1-0e54f1c0c338, postZero = false), log id: 22d6503b
2014-01-06 10:10:59,511 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.MergeSnapshotsVDSCommand] (ajp--127.0.0.1-8702-8) FINISH, MergeSnapshotsVDSCommand, log id: 22d6503b
2014-01-06 10:10:59,518 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (ajp--127.0.0.1-8702-8) CommandAsyncTask::Adding CommandMultiAsyncTasks object for command b402868f-b7f9-4c0e-a6fd-bdc51ff49952
2014-01-06 10:10:59,519 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (ajp--127.0.0.1-8702-8) CommandMultiAsyncTasks::AttachTask: Attaching task 6caec3bc-fc66-42be-a642-7733fc033103 to command b402868f-b7f9-4c0e-a6fd-bdc51ff49952.
2014-01-06 10:10:59,525 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (ajp--127.0.0.1-8702-8) Adding task 6caec3bc-fc66-42be-a642-7733fc033103 (Parent Command RemoveSnapshot, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters), polling hasn't started yet..
2014-01-06 10:10:59,530 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-8) Correlation ID: 3b3e6fb1, Job ID: 53867ef7-d767-45d2-b446-e5d3f5584a19, Call Stack: null, Custom Event ID: -1, Message: Snapshot 'Maj 47 60 vers 5.2.3' deletion for VM 'uc-674' was initiated by necarnot.
2014-01-06 10:10:59,532 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (ajp--127.0.0.1-8702-8) BaseAsyncTask::StartPollingTask: Starting to poll task 6caec3bc-fc66-42be-a642-7733fc033103.
2014-01-06 10:11:01,811 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-20) Polling and updating Async Tasks: 2 tasks, 1 tasks to poll now
2014-01-06 10:11:01,824 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-20) Failed in HSMGetAllTasksStatusesVDS method
2014-01-06 10:11:01,825 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-20) Error code GeneralException and error message VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = '506085b6-40e0-4176-a4df-9102857f51f2'
2014-01-06 10:11:01,826 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-20) SPMAsyncTask::PollTask: Polling task 6caec3bc-fc66-42be-a642-7733fc033103 (Parent Command RemoveSnapshot, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned status finished, result 'cleanSuccess'.
2014-01-06 10:11:01,829 ERROR [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-20) BaseAsyncTask::LogEndTaskFailure: Task 6caec3bc-fc66-42be-a642-7733fc033103 (Parent Command RemoveSnapshot, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended with failure:
-- Result: cleanSuccess
-- Message: VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = '506085b6-40e0-4176-a4df-9102857f51f2',
-- Exception: VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = '506085b6-40e0-4176-a4df-9102857f51f2'
2014-01-06 10:11:01,832 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (DefaultQuartzScheduler_Worker-20) CommandAsyncTask::EndActionIfNecessary: All tasks of command b402868f-b7f9-4c0e-a6fd-bdc51ff49952 has ended -> executing EndAction
2014-01-06 10:11:01,833 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (DefaultQuartzScheduler_Worker-20) CommandAsyncTask::EndAction: Ending action for 1 tasks (command ID: b402868f-b7f9-4c0e-a6fd-bdc51ff49952): calling EndAction .
2014-01-06 10:11:01,834 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-27) CommandAsyncTask::EndCommandAction [within thread] context: Attempting to EndAction RemoveSnapshot, executionIndex: 0
2014-01-06 10:11:01,839 ERROR [org.ovirt.engine.core.bll.RemoveSnapshotCommand] (pool-6-thread-27) [3b3e6fb1] Ending command with failure: org.ovirt.engine.core.bll.RemoveSnapshotCommand
2014-01-06 10:11:01,844 ERROR [org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskCommand] (pool-6-thread-27) [33fa2a5d] Ending command with failure: org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskCommand
2014-01-06 10:11:01,848 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (pool-6-thread-27) Correlation ID: 3b3e6fb1, Job ID: 53867ef7-d767-45d2-b446-e5d3f5584a19, Call Stack: null, Custom Event ID: -1, Message: Failed to delete snapshot 'Maj 47 60 vers 5.2.3' for VM 'uc-674'.
2014-01-06 10:11:01,850 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-27) CommandAsyncTask::HandleEndActionResult [within thread]: EndAction for action type RemoveSnapshot completed, handling the result.
2014-01-06 10:11:01,851 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-27) CommandAsyncTask::HandleEndActionResult [within thread]: EndAction for action type RemoveSnapshot succeeded, clearing tasks.
2014-01-06 10:11:01,853 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (pool-6-thread-27) SPMAsyncTask::ClearAsyncTask: Attempting to clear task 6caec3bc-fc66-42be-a642-7733fc033103
2014-01-06 10:11:01,853 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (pool-6-thread-27) START, SPMClearTaskVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false, taskId = 6caec3bc-fc66-42be-a642-7733fc033103), log id: 424e7cf
2014-01-06 10:11:01,873 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (pool-6-thread-27) START, HSMClearTaskVDSCommand(HostName = serv-vm-adm9, HostId = ba48edd4-c528-4832-bda4-4ab66245df24, taskId=6caec3bc-fc66-42be-a642-7733fc033103), log id: 12eec929
2014-01-06 10:11:01,884 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (pool-6-thread-27) FINISH, HSMClearTaskVDSCommand, log id: 12eec929
2014-01-06 10:11:01,885 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (pool-6-thread-27) FINISH, SPMClearTaskVDSCommand, log id: 424e7cf
2014-01-06 10:11:01,886 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (pool-6-thread-27) BaseAsyncTask::RemoveTaskFromDB: Removed task 6caec3bc-fc66-42be-a642-7733fc033103 from DataBase
2014-01-06 10:11:01,887 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-27) CommandAsyncTask::HandleEndActionResult [within thread]: Removing CommandMultiAsyncTasks object for entity b402868f-b7f9-4c0e-a6fd-bdc51ff49952
2014-01-06 10:11:07,703 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-9) Setting new tasks map. The map contains now 1 tasks
2014-01-06 10:12:07,703 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-99) Setting new tasks map. The map contains now 0 tasks
2014-01-06 10:12:07,704 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-99) Cleared all tasks of pool 5849b030-626e-47cb-ad90-3ce782d831b3.
--
Nicolas Ecarnot

Hi Nicolas,

Can you please also add the VDSM log?

Thanks,
Maor

On 01/06/2014 11:25 AM, Nicolas Ecarnot wrote:
> Hi,
> With our oVirt 3.3, I created a snapshot 3 weeks ago on a VM I've properly shutdown. It ran so far. Today, after having shut it down properly, I'm trying to delete the snapshot and I get an error :
> "Failed to delete snapshot 'blahblahbla' for VM 'myVM'."
> [...]
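As an aside for anyone digging through a similar failure: the engine tags every line of one action with its correlation ID ("3b3e6fb1" in the log above), so the failing command can be isolated with a simple grep. A minimal sketch, run here against a two-line excerpt of the log (in a real setup you would grep the engine log itself, by default /var/log/ovirt-engine/engine.log, and then look up the matching task ID in /var/log/vdsm/vdsm.log on the SPM host):

```shell
# Sketch: isolate one action's lines from engine.log by correlation ID.
# The ID is the one from this thread; the excerpt stands in for the real log file.
correlation_id="3b3e6fb1"
log_excerpt='2014-01-06 10:11:01,839 ERROR [org.ovirt.engine.core.bll.RemoveSnapshotCommand] (pool-6-thread-27) [3b3e6fb1] Ending command with failure: org.ovirt.engine.core.bll.RemoveSnapshotCommand
2014-01-06 10:11:07,703 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-9) Setting new tasks map.'
# Count the lines carrying this correlation ID
# (in practice: grep "\[$correlation_id\]" /var/log/ovirt-engine/engine.log):
matches=$(printf '%s\n' "$log_excerpt" | grep -c "\[$correlation_id\]")
echo "lines tagged $correlation_id: $matches"
```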

Thank you Maor and Meital for your help.

It took me some time just to find WHERE to get the VDSM logs, as this operation concerns a VM that is shut down, so it is not handled by one specific host. I think I understood I had to look on the SPM. Then, amongst the tons of logs, I think I've found the relevant lines.

I have to add that:
- (dumb as I may be,) I repeated the attempt to delete the snapshot, so there are several deletion attempts in the logs;
- after the last attempt, the snapshot now appears as "BROKEN" in the manager, and this doesn't sound good. I would love to hear from you that there is a way to recover it.

The VDSM log I found relevant is below.

236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:07,626::utils::538::Storage.Misc.excCmd::(watchCmd) SUCCESS: <err> = []; <rc> = 0
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:07,627::volume::1152::Storage.Volume::(qemuConvert) (qemuConvert): COPY /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/11a077c7-658b-49bb-8596-a785109c24c9/images/69220da6-eeed-4435-aad0-7aa33f3a0d21/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 to /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/11a077c7-658b-49bb-8596-a785109c24c9/images/69220da6-eeed-4435-aad0-7aa33f3a0d21/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE DONE
236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:07,628::blockVolume::418::Storage.Volume::(teardown) Tearing down volume 11a077c7-658b-49bb-8596-a785109c24c9/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE justme True
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:07,628::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE'
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:07,629::resourceManager::634::ResourceManager::(releaseResource) Released resource
'11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE' (0 active users) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:07,629::resourceManager::640::ResourceManager::(releaseResource) Resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE' is free, finding out if anyone is waiting for it. 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:07,631::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvchange --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --autobackup n --available n 11a077c7-658b-49bb-8596-a785109c24c9/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,212::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,213::lvm::499::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,214::lvm::511::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,214::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE', Clearing records. 
236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:08,215::blockVolume::418::Storage.Volume::(teardown) Tearing down volume 11a077c7-658b-49bb-8596-a785109c24c9/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 justme False 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,215::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,216::resourceManager::634::ResourceManager::(releaseResource) Released resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (0 active users) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,216::resourceManager::640::ResourceManager::(releaseResource) Resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' is free, finding out if anyone is waiting for it. 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,217::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvchange --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --autobackup n --available n 11a077c7-658b-49bb-8596-a785109c24c9/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,581::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,582::lvm::499::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex 
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,583::lvm::511::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,583::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23', Clearing records. 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,584::lvm::415::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,585::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 11a077c7-658b-49bb-8596-a785109c24c9' (cwd None) Dummy-1808243::DEBUG::2014-01-06 10:06:08,862::storage_mailbox::727::Storage.Misc.excCmd::(_checkForMail) 'dd if=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,901::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,136::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:09,139::blockVolume::418::Storage.Volume::(teardown) 
Tearing down volume 11a077c7-658b-49bb-8596-a785109c24c9/506085b6-40e0-4176-a4df-9102857f51f2 justme False 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,142::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.506085b6-40e0-4176-a4df-9102857f51f2' 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,146::resourceManager::634::ResourceManager::(releaseResource) Released resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.506085b6-40e0-4176-a4df-9102857f51f2' (0 active users) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,149::resourceManager::640::ResourceManager::(releaseResource) Resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.506085b6-40e0-4176-a4df-9102857f51f2' is free, finding out if anyone is waiting for it. 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,164::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvchange --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --autobackup n --available n 11a077c7-658b-49bb-8596-a785109c24c9/506085b6-40e0-4176-a4df-9102857f51f2' (cwd None) Dummy-1808243::DEBUG::2014-01-06 10:06:09,318::storage_mailbox::727::Storage.Misc.excCmd::(_checkForMail) SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.342311 s, 3.0 MB/s\n'; <rc> = 0 Thread-4658::DEBUG::2014-01-06 10:06:09,422::blockSD::594::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata bs=4096 count=1' (cwd None) 
Thread-4658::DEBUG::2014-01-06 10:06:09,454::blockSD::594::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.000705734 s, 5.8 MB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,502::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,503::lvm::499::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,503::lvm::511::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,504::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.506085b6-40e0-4176-a4df-9102857f51f2', Clearing records. 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,504::lvm::415::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,506::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 11a077c7-658b-49bb-8596-a785109c24c9' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,812::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,940::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:09,941::blockVolume::349::Storage.Volume::(rename) Rename volume c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 as _remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,944::task::736::TaskManager.Task::(_save) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_save: orig /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e temp /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e.temp 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,975::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvrename --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --autobackup n 11a077c7-658b-49bb-8596-a785109c24c9 c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 _remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (cwd None) Thread-2280759::DEBUG::2014-01-06 10:06:10,207::blockSD::594::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/dev/83d39199-d4e4-474c-b232-7088c76a2811/metadata bs=4096 count=1' (cwd None) Thread-2280759::DEBUG::2014-01-06 10:06:10,287::blockSD::594::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.0114936 s, 356 kB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 
10:06:10,313::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: This metadata update is NOT backed up\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,317::lvm::415::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,327::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 11a077c7-658b-49bb-8596-a785109c24c9/_remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (cwd None) Thread-281995::DEBUG::2014-01-06 10:06:10,557::fileSD::238::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/serv-fs-adm1.sdis.isere.fr:_home_admin_vmexport/a7c189dd-4980-49ab-ad86-65a14235cbe4/dom_md/metadata bs=4096 count=1' (cwd None) Thread-281995::DEBUG::2014-01-06 10:06:10,585::fileSD::238::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n378 bytes (378 B) copied, 0.000281187 s, 1.3 MB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,635::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,640::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:10,640::blockVolume::349::Storage.Volume::(rename) Rename volume 
c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE as c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,642::task::736::TaskManager.Task::(_save) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_save: orig /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e temp /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e.temp 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,660::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvrename --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --autobackup n 11a077c7-658b-49bb-8596-a785109c24c9 c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,983::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: This metadata update is NOT backed up\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,985::lvm::415::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,986::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { 
locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 11a077c7-658b-49bb-8596-a785109c24c9/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,284::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,289::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,291::task::736::TaskManager.Task::(_save) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_save: orig /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e temp /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e.temp 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:11,304::volume::669::Storage.Volume::(setLegality) sdUUID=11a077c7-658b-49bb-8596-a785109c24c9 imgUUID=69220da6-eeed-4435-aad0-7aa33f3a0d21 volUUID = _remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 legality = ILLEGAL 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,313::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=4 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None) Dummy-1808243::DEBUG::2014-01-06 10:06:11,346::storage_mailbox::727::Storage.Misc.excCmd::(_checkForMail) 'dd if=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,372::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.000671794 s, 762 kB/s\n'; 
<rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,373::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.000671794 s, 762 kB/s'], size: 512 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:11,386::volume::669::Storage.Volume::(setLegality) sdUUID=11a077c7-658b-49bb-8596-a785109c24c9 imgUUID=69220da6-eeed-4435-aad0-7aa33f3a0d21 volUUID = 506085b6-40e0-4176-a4df-9102857f51f2 legality = ILLEGAL 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,438::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=13 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None) Thread-3642604::DEBUG::2014-01-06 10:06:11,474::BindingXMLRPC::177::vds::(wrapper) client [192.168.39.31] Thread-3642604::DEBUG::2014-01-06 10:06:11,484::task::579::TaskManager.Task::(_updateState) Task=`d59a176c-44d8-489d-8ab5-93de64eb1660`::moving from state init -> state preparing Thread-3642604::INFO::2014-01-06 10:06:11,490::logUtils::44::dispatcher::(wrapper) Run and protect: getSpmStatus(spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', options=None) Thread-3642604::INFO::2014-01-06 10:06:11,497::logUtils::47::dispatcher::(wrapper) Run and protect: getSpmStatus, Return response: {'spm_st': {'spmId': 2, 'spmStatus': 'SPM', 'spmLver': 15}} Thread-3642604::DEBUG::2014-01-06 10:06:11,503::task::1168::TaskManager.Task::(prepare) Task=`d59a176c-44d8-489d-8ab5-93de64eb1660`::finished: {'spm_st': {'spmId': 2, 'spmStatus': 'SPM', 'spmLver': 15}} Thread-3642604::DEBUG::2014-01-06 10:06:11,507::task::579::TaskManager.Task::(_updateState) Task=`d59a176c-44d8-489d-8ab5-93de64eb1660`::moving from state preparing -> state finished Thread-3642604::DEBUG::2014-01-06 10:06:11,510::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 
10:06:11,510::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.000835653 s, 613 kB/s\n'; <rc> = 0 Thread-3642604::DEBUG::2014-01-06 10:06:11,514::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,515::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.000835653 s, 613 kB/s'], size: 512 Thread-3642604::DEBUG::2014-01-06 10:06:11,519::task::974::TaskManager.Task::(_decref) Task=`d59a176c-44d8-489d-8ab5-93de64eb1660`::ref 0 aborting False Thread-3642605::DEBUG::2014-01-06 10:06:11,546::BindingXMLRPC::177::vds::(wrapper) client [192.168.39.31] Thread-3642605::DEBUG::2014-01-06 10:06:11,553::task::579::TaskManager.Task::(_updateState) Task=`bfdcfb66-54ce-4f83-a42a-9049bb11ec65`::moving from state init -> state preparing Thread-3642605::INFO::2014-01-06 10:06:11,558::logUtils::44::dispatcher::(wrapper) Run and protect: getStoragePoolInfo(spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', options=None) Thread-3642605::DEBUG::2014-01-06 10:06:11,568::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.5849b030-626e-47cb-ad90-3ce782d831b3`ReqID=`3e5429cc-9cc9-4259-a51c-0dd76cb6b827`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '2560' at 'getStoragePoolInfo' Thread-3642605::DEBUG::2014-01-06 10:06:11,572::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' for lock type 'shared' Thread-3642605::DEBUG::2014-01-06 10:06:11,578::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' is free. 
Now locking as 'shared' (1 active user) Thread-3642605::DEBUG::2014-01-06 10:06:11,583::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.5849b030-626e-47cb-ad90-3ce782d831b3`ReqID=`3e5429cc-9cc9-4259-a51c-0dd76cb6b827`::Granted request Thread-3642605::DEBUG::2014-01-06 10:06:11,593::task::811::TaskManager.Task::(resourceAcquired) Task=`bfdcfb66-54ce-4f83-a42a-9049bb11ec65`::_resourcesAcquired: Storage.5849b030-626e-47cb-ad90-3ce782d831b3 (shared) Thread-3642605::DEBUG::2014-01-06 10:06:11,597::task::974::TaskManager.Task::(_decref) Task=`bfdcfb66-54ce-4f83-a42a-9049bb11ec65`::ref 1 aborting False 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:11,610::image::881::Storage.Image::(removeSubChain) Remove volume _remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 from image 69220da6-eeed-4435-aad0-7aa33f3a0d21 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:11,619::blockVolume::219::Storage.Volume::(delete) Request to delete LV _remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 of image 69220da6-eeed-4435-aad0-7aa33f3a0d21 in VG 11a077c7-658b-49bb-8596-a785109c24c9 Thread-3642605::INFO::2014-01-06 10:06:11,623::logUtils::47::dispatcher::(wrapper) Run and protect: getStoragePoolInfo, Return response: {'info': {'spm_id': 2, 'master_uuid': '11a077c7-658b-49bb-8596-a785109c24c9', 'name': 'Etat-Major2', 'version': '3', 'domains': u'78fe6648-1b2a-438c-a704-eb8ac9fc438b:Active,83d39199-d4e4-474c-b232-7088c76a2811:Active,a7c189dd-4980-49ab-ad86-65a14235cbe4:Active,1429ffe2-4137-416c-bb38-63fd73f4bcc1:Active,11a077c7-658b-49bb-8596-a785109c24c9:Active', 'pool_status': 'connected', 'isoprefix': '/rhev/data-center/mnt/serv-mvm-adm2.sdis.isere.fr:_home_iso/78fe6648-1b2a-438c-a704-eb8ac9fc438b/images/11111111-1111-1111-1111-111111111111', 'type': 'ISCSI', 'master_ver': 2, 'lver': 15}, 'dominfo': {u'78fe6648-1b2a-438c-a704-eb8ac9fc438b': {'status': u'Active', 'diskfree': '545295171584', 'alerts': [], 'version': 0, 
'disktotal': '589474824192'}, u'83d39199-d4e4-474c-b232-7088c76a2811': {'status': u'Active', 'diskfree': '1011196362752', 'alerts': [], 'version': 3, 'disktotal': '1099243192320'}, u'a7c189dd-4980-49ab-ad86-65a14235cbe4': {'status': u'Active', 'diskfree': '1023138594816', 'alerts': [], 'version': 0, 'disktotal': '1098434740224'}, u'1429ffe2-4137-416c-bb38-63fd73f4bcc1': {'status': u'Active', 'diskfree': '223204081664', 'alerts': [], 'version': 3, 'disktotal': '1099243192320'}, u'11a077c7-658b-49bb-8596-a785109c24c9': {'status': u'Active', 'diskfree': '230049185792', 'alerts': [], 'version': 3, 'disktotal': '1099243192320'}}} 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:11,625::volume::669::Storage.Volume::(setLegality) sdUUID=11a077c7-658b-49bb-8596-a785109c24c9 imgUUID=69220da6-eeed-4435-aad0-7aa33f3a0d21 volUUID = _remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 legality = ILLEGAL Thread-3642605::DEBUG::2014-01-06 10:06:11,627::task::1168::TaskManager.Task::(prepare) Task=`bfdcfb66-54ce-4f83-a42a-9049bb11ec65`::finished: {'info': {'spm_id': 2, 'master_uuid': '11a077c7-658b-49bb-8596-a785109c24c9', 'name': 'Etat-Major2', 'version': '3', 'domains': u'78fe6648-1b2a-438c-a704-eb8ac9fc438b:Active,83d39199-d4e4-474c-b232-7088c76a2811:Active,a7c189dd-4980-49ab-ad86-65a14235cbe4:Active,1429ffe2-4137-416c-bb38-63fd73f4bcc1:Active,11a077c7-658b-49bb-8596-a785109c24c9:Active', 'pool_status': 'connected', 'isoprefix': '/rhev/data-center/mnt/serv-mvm-adm2.sdis.isere.fr:_home_iso/78fe6648-1b2a-438c-a704-eb8ac9fc438b/images/11111111-1111-1111-1111-111111111111', 'type': 'ISCSI', 'master_ver': 2, 'lver': 15}, 'dominfo': {u'78fe6648-1b2a-438c-a704-eb8ac9fc438b': {'status': u'Active', 'diskfree': '545295171584', 'alerts': [], 'version': 0, 'disktotal': '589474824192'}, u'83d39199-d4e4-474c-b232-7088c76a2811': {'status': u'Active', 'diskfree': '1011196362752', 'alerts': [], 'version': 3, 'disktotal': '1099243192320'}, u'a7c189dd-4980-49ab-ad86-65a14235cbe4': 
{'status': u'Active', 'diskfree': '1023138594816', 'alerts': [], 'version': 0, 'disktotal': '1098434740224'}, u'1429ffe2-4137-416c-bb38-63fd73f4bcc1': {'status': u'Active', 'diskfree': '223204081664', 'alerts': [], 'version': 3, 'disktotal': '1099243192320'}, u'11a077c7-658b-49bb-8596-a785109c24c9': {'status': u'Active', 'diskfree': '230049185792', 'alerts': [], 'version': 3, 'disktotal': '1099243192320'}}} Thread-3642605::DEBUG::2014-01-06 10:06:11,632::task::579::TaskManager.Task::(_updateState) Task=`bfdcfb66-54ce-4f83-a42a-9049bb11ec65`::moving from state preparing -> state finished Thread-3642605::DEBUG::2014-01-06 10:06:11,636::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.5849b030-626e-47cb-ad90-3ce782d831b3': < ResourceRef 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3', isValid: 'True' obj: 'None'>} Thread-3642605::DEBUG::2014-01-06 10:06:11,640::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-3642605::DEBUG::2014-01-06 10:06:11,647::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' Thread-3642605::DEBUG::2014-01-06 10:06:11,654::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' (0 active users) Thread-3642605::DEBUG::2014-01-06 10:06:11,657::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' is free, finding out if anyone is waiting for it. Thread-3642605::DEBUG::2014-01-06 10:06:11,661::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3', Clearing records. 
Thread-3642605::DEBUG::2014-01-06 10:06:11,669::task::974::TaskManager.Task::(_decref) Task=`bfdcfb66-54ce-4f83-a42a-9049bb11ec65`::ref 0 aborting False Thread-3642606::DEBUG::2014-01-06 10:06:11,725::BindingXMLRPC::177::vds::(wrapper) client [192.168.39.31] Thread-3642606::DEBUG::2014-01-06 10:06:11,730::task::579::TaskManager.Task::(_updateState) Task=`6130a21c-b843-4faa-83a7-2b1f956217f8`::moving from state init -> state preparing 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,731::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=4 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None) Thread-3642606::INFO::2014-01-06 10:06:11,756::logUtils::44::dispatcher::(wrapper) Run and protect: getAllTasksStatuses(spUUID=None, options=None) Thread-3642606::DEBUG::2014-01-06 10:06:11,761::taskManager::102::TaskManager::(getAllTasksStatuses) Entry. Thread-3642606::DEBUG::2014-01-06 10:06:11,765::taskManager::93::TaskManager::(getTaskStatus) Entry. taskID: 236b3c5a-452a-4614-801a-c30cefbce87e Thread-3642606::DEBUG::2014-01-06 10:06:11,768::taskManager::96::TaskManager::(getTaskStatus) Return. 
Response: {'code': 0, 'message': 'Task is initializing', 'taskState': 'running', 'taskResult': '', 'taskID': '236b3c5a-452a-4614-801a-c30cefbce87e'} Thread-3642606::DEBUG::2014-01-06 10:06:11,772::taskManager::113::TaskManager::(getAllTasksStatuses) Return: {'236b3c5a-452a-4614-801a-c30cefbce87e': {'code': 0, 'message': 'Task is initializing', 'taskState': 'running', 'taskResult': '', 'taskID': '236b3c5a-452a-4614-801a-c30cefbce87e'}} Thread-3642606::INFO::2014-01-06 10:06:11,776::logUtils::47::dispatcher::(wrapper) Run and protect: getAllTasksStatuses, Return response: {'allTasksStatus': {'236b3c5a-452a-4614-801a-c30cefbce87e': {'code': 0, 'message': 'Task is initializing', 'taskState': 'running', 'taskResult': '', 'taskID': '236b3c5a-452a-4614-801a-c30cefbce87e'}}} Thread-3642606::DEBUG::2014-01-06 10:06:11,779::task::1168::TaskManager.Task::(prepare) Task=`6130a21c-b843-4faa-83a7-2b1f956217f8`::finished: {'allTasksStatus': {'236b3c5a-452a-4614-801a-c30cefbce87e': {'code': 0, 'message': 'Task is initializing', 'taskState': 'running', 'taskResult': '', 'taskID': '236b3c5a-452a-4614-801a-c30cefbce87e'}}} Thread-3642606::DEBUG::2014-01-06 10:06:11,784::task::579::TaskManager.Task::(_updateState) Task=`6130a21c-b843-4faa-83a7-2b1f956217f8`::moving from state preparing -> state finished Thread-3642606::DEBUG::2014-01-06 10:06:11,787::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-3642606::DEBUG::2014-01-06 10:06:11,791::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-3642606::DEBUG::2014-01-06 10:06:11,796::task::974::TaskManager.Task::(_decref) Task=`6130a21c-b843-4faa-83a7-2b1f956217f8`::ref 0 aborting False 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,803::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.00039345 s, 1.3 MB/s\n'; <rc> = 0 
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,809::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.00039345 s, 1.3 MB/s'], size: 512 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,885::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvchange --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --autobackup n --deltag PU_506085b6-40e0-4176-a4df-9102857f51f2 --addtag PU_00000000-0000-0000-0000-000000000000 11a077c7-658b-49bb-8596-a785109c24c9/_remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (cwd None) Dummy-1808243::DEBUG::2014-01-06 10:06:12,002::storage_mailbox::727::Storage.Misc.excCmd::(_checkForMail) SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.566167 s, 1.8 MB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,204::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: This metadata update is NOT backed up\n WARNING: This metadata update is NOT backed up\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,204::lvm::499::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,205::lvm::511::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,206::lvm::415::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex 
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,207::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 11a077c7-658b-49bb-8596-a785109c24c9' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,511::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,580::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,590::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=4 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,616::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.00049159 s, 1.0 MB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,617::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.00049159 s, 1.0 MB/s'], size: 512 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,632::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=13 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 
10:06:12,652::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.000381403 s, 1.3 MB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,653::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.000381403 s, 1.3 MB/s'], size: 512 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,654::lvm::415::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,655::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 11a077c7-658b-49bb-8596-a785109c24c9' (cwd None) VM Channels Listener::DEBUG::2014-01-06 10:06:12,909::vmChannels::91::vds::(_handle_timeouts) Timeout on fileno 58. 
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,959::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,023::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,031::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=13 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,055::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.000834663 s, 613 kB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,056::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.000834663 s, 613 kB/s'], size: 512 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,065::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvremove --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " -f --autobackup n 11a077c7-658b-49bb-8596-a785109c24c9/_remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,370::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: This metadata update is NOT backed up\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,371::lvm::489::OperationMutex::(_invalidatevgs) Operation 
'lvm invalidate operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,372::lvm::491::OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:13,381::image::881::Storage.Image::(removeSubChain) Remove volume 506085b6-40e0-4176-a4df-9102857f51f2 from image 69220da6-eeed-4435-aad0-7aa33f3a0d21 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:13,381::blockVolume::219::Storage.Volume::(delete) Request to delete LV 506085b6-40e0-4176-a4df-9102857f51f2 of image 69220da6-eeed-4435-aad0-7aa33f3a0d21 in VG 11a077c7-658b-49bb-8596-a785109c24c9 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:13,382::volume::669::Storage.Volume::(setLegality) sdUUID=11a077c7-658b-49bb-8596-a785109c24c9 imgUUID=69220da6-eeed-4435-aad0-7aa33f3a0d21 volUUID = 506085b6-40e0-4176-a4df-9102857f51f2 legality = ILLEGAL 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,389::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=13 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,414::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.000504094 s, 1.0 MB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,414::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.000504094 s, 1.0 MB/s'], size: 512 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,430::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=13 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 
10:06:13,450::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.0003572 s, 1.4 MB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,451::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.0003572 s, 1.4 MB/s'], size: 512 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,460::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvremove --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " -f --autobackup n 11a077c7-658b-49bb-8596-a785109c24c9/506085b6-40e0-4176-a4df-9102857f51f2' (cwd None) Thread-640001::DEBUG::2014-01-06 10:06:13,672::blockSD::594::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/dev/1429ffe2-4137-416c-bb38-63fd73f4bcc1/metadata bs=4096 count=1' (cwd None) Thread-640001::DEBUG::2014-01-06 10:06:13,705::blockSD::594::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.000955526 s, 4.3 MB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,774::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: This metadata update is NOT backed up\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,775::lvm::489::OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,775::lvm::491::OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released 
the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,784::lvm::415::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,785::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 11a077c7-658b-49bb-8596-a785109c24c9' (cwd None) Dummy-1808243::DEBUG::2014-01-06 10:06:14,027::storage_mailbox::727::Storage.Misc.excCmd::(_checkForMail) 'dd if=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,143::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,355::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::WARNING::2014-01-06 10:06:14,358::lvm::594::Storage.LVM::(getLv) lv: _remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 not found in lvs vg: 11a077c7-658b-49bb-8596-a785109c24c9 response 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,362::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource '11a077c7-658b-49bb-8596-a785109c24c9_imageNS.69220da6-eeed-4435-aad0-7aa33f3a0d21' 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 
10:06:14,367::resourceManager::634::ResourceManager::(releaseResource) Released resource '11a077c7-658b-49bb-8596-a785109c24c9_imageNS.69220da6-eeed-4435-aad0-7aa33f3a0d21' (0 active users) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,369::resourceManager::640::ResourceManager::(releaseResource) Resource '11a077c7-658b-49bb-8596-a785109c24c9_imageNS.69220da6-eeed-4435-aad0-7aa33f3a0d21' is free, finding out if anyone is waiting for it. 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,374::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.506085b6-40e0-4176-a4df-9102857f51f2' 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,377::resourceManager::634::ResourceManager::(releaseResource) Released resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.506085b6-40e0-4176-a4df-9102857f51f2' (0 active users) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,380::resourceManager::640::ResourceManager::(releaseResource) Resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.506085b6-40e0-4176-a4df-9102857f51f2' is free, finding out if anyone is waiting for it. 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,383::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.506085b6-40e0-4176-a4df-9102857f51f2', Clearing records. 
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,389::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,392::resourceManager::634::ResourceManager::(releaseResource) Released resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (0 active users) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,395::resourceManager::640::ResourceManager::(releaseResource) Resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' is free, finding out if anyone is waiting for it. 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,398::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23', Clearing records. 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,401::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource '11a077c7-658b-49bb-8596-a785109c24c9_imageNS.69220da6-eeed-4435-aad0-7aa33f3a0d21', Clearing records. 
236b3c5a-452a-4614-801a-c30cefbce87e::ERROR::2014-01-06 10:06:14,407::task::850::TaskManager.Task::(_setError) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 857, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/storage/task.py", line 318, in run
    return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/share/vdsm/storage/securable.py", line 68, in wrapper
    return f(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 1937, in mergeSnapshots
    sdUUID, vmUUID, imgUUID, ancestor, successor, postZero)
  File "/usr/share/vdsm/storage/image.py", line 1162, in merge
    srcVol.shrinkToOptimalSize()
  File "/usr/share/vdsm/storage/blockVolume.py", line 315, in shrinkToOptimalSize
    volParams = self.getVolumeParams()
  File "/usr/share/vdsm/storage/volume.py", line 1008, in getVolumeParams
    volParams['imgUUID'] = self.getImage()
  File "/usr/share/vdsm/storage/blockVolume.py", line 494, in getImage
    return self.getVolumeTag(TAG_PREFIX_IMAGE)
  File "/usr/share/vdsm/storage/blockVolume.py", line 464, in getVolumeTag
    return _getVolumeTag(self.sdUUID, self.volUUID, tagPrefix)
  File "/usr/share/vdsm/storage/blockVolume.py", line 662, in _getVolumeTag
    tags = lvm.getLV(sdUUID, volUUID).tags
  File "/usr/share/vdsm/storage/lvm.py", line 851, in getLV
    raise se.LogicalVolumeDoesNotExistError("%s/%s" % (vgName, lvName))
LogicalVolumeDoesNotExistError: Logical volume does not exist: ('11a077c7-658b-49bb-8596-a785109c24c9/_remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23',)
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,427::task::869::TaskManager.Task::(_run) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::Task._run: 236b3c5a-452a-4614-801a-c30cefbce87e () {} failed - stopping task
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,430::task::1194::TaskManager.Task::(stop) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::stopping in state running (force False)
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,433::task::974::TaskManager.Task::(_decref) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::ref 1 aborting True 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,436::task::900::TaskManager.Task::(_runJobs) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::aborting: Task is aborted: 'Logical volume does not exist' - code 610 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,440::task::974::TaskManager.Task::(_decref) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::ref 0 aborting True 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,443::task::909::TaskManager.Task::(_doAbort) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::Task._doAbort: force False 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,445::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,449::task::579::TaskManager.Task::(_updateState) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::moving from state running -> state aborting 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,457::task::736::TaskManager.Task::(_save) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_save: orig /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e temp /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e.temp 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,509::task::534::TaskManager.Task::(__state_aborting) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_aborting: recover policy auto 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,512::task::579::TaskManager.Task::(_updateState) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::moving from state aborting -> state racquiring 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 
10:06:14,520::task::736::TaskManager.Task::(_save) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_save: orig /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e temp /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e.temp 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,574::task::579::TaskManager.Task::(_updateState) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::moving from state racquiring -> state recovering 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,581::task::736::TaskManager.Task::(_save) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_save: orig /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e temp /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e.temp 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,633::task::782::TaskManager.Task::(_recover) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_recover 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,635::task::789::TaskManager.Task::(_recover) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::running recovery None 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,638::task::770::TaskManager.Task::(_recoverDone) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::Recover Done: state recovering 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,641::task::579::TaskManager.Task::(_updateState) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::moving from state recovering -> state recovered 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,649::task::736::TaskManager.Task::(_save) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_save: orig /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e temp 
/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e.temp
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,697::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.11a077c7-658b-49bb-8596-a785109c24c9': < ResourceRef 'Storage.11a077c7-658b-49bb-8596-a785109c24c9', isValid: 'True' obj: 'None'>}
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,700::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,706::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.11a077c7-658b-49bb-8596-a785109c24c9'
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,709::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.11a077c7-658b-49bb-8596-a785109c24c9' (0 active users)
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,712::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.11a077c7-658b-49bb-8596-a785109c24c9' is free, finding out if anyone is waiting for it.
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,714::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.11a077c7-658b-49bb-8596-a785109c24c9', Clearing records.
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,720::threadPool::57::Misc.ThreadPool::(setRunningTask) Number of running tasks: 0
Dummy-1808243::DEBUG::2014-01-06 10:06:14,788::storage_mailbox::727::Storage.Misc.excCmd::(_checkForMail) SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.471388 s, 2.2 MB/s\n'; <rc> = 0

On 06/01/2014 10:30, Maor Lipchuk wrote:
Hi Nicolas,
Can you please also attach the VDSM log?
Thanks, Maor
On 01/06/2014 11:25 AM, Nicolas Ecarnot wrote:
Hi,
With our oVirt 3.3, I created a snapshot three weeks ago on a VM I had properly shut down. It has been running fine since then. Today, after shutting it down properly again, I tried to delete the snapshot and got an error:
"Failed to delete snapshot 'blahblahbla' for VM 'myVM'."
The disk is thin provisioned and accessed via VirtIO; nothing special.
The log below comes from the manager.
I hope someone can help us, because this server is quite important to us.
Thank you.
2014-01-06 10:10:58,826 INFO [org.ovirt.engine.core.bll.RemoveSnapshotCommand] (ajp--127.0.0.1-8702-8) Lock Acquired to object EngineLock [exclu siveLocks= key: cb953dc1-c796-457a-99a1-0e54f1c0c338 value: VM , sharedLocks= ] 2014-01-06 10:10:58,837 INFO [org.ovirt.engine.core.bll.RemoveSnapshotCommand] (ajp--127.0.0.1-8702-8) Running command: RemoveSnapshotCommand internal: false. Entities affected : ID: cb953dc1-c796-457a-99a1-0e54f1c0c338 Type: VM 2014-01-06 10:10:58,840 INFO [org.ovirt.engine.core.bll.RemoveSnapshotCommand] (ajp--127.0.0.1-8702-8) Lock freed to object EngineLock [exclusiveLocks= key: cb953dc1-c796-457a-99a1-0e54f1c0c338 value: VM , sharedLocks= ] 2014-01-06 10:10:58,844 INFO [org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskCommand] (ajp--127.0.0.1-8702-8) Running command: RemoveSnapshotSingleDiskCommand internal: true. Entities affected : ID: 00000000-0000-0000-0000-000000000000 Type: Storage 2014-01-06 10:10:58,848 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.MergeSnapshotsVDSCommand] (ajp--127.0.0.1-8702-8) START, MergeSnapshotsVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false, storageDomainId = 11a077c7-658b-49bb-8596-a785109c24c9, imageGroupId = 69220da6-eeed-4435-aad0-7aa33f3a0d21, imageId = 506085b6-40e0-4176-a4df-9102857f51f2, imageId2 = c50561d9-c3ba-4366-b2bc-49bbfaa4cd23, vmId = cb953dc1-c796-457a-99a1-0e54f1c0c338, postZero = false), log id: 22d6503b 2014-01-06 10:10:59,511 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.MergeSnapshotsVDSCommand] (ajp--127.0.0.1-8702-8) FINISH, MergeSnapshotsVDSCommand, log id: 22d6503b 2014-01-06 10:10:59,518 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (ajp--127.0.0.1-8702-8) CommandAsyncTask::Adding CommandMultiAsyncTasks object for command b402868f-b7f9-4c0e-a6fd-bdc51ff49952 2014-01-06 10:10:59,519 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (ajp--127.0.0.1-8702-8) CommandMultiAsyncTasks::AttachTask: Attaching task 
6caec3bc-fc66-42be-a642-7733fc033103 to command b402868f-b7f9-4c0e-a6fd-bdc51ff49952. 2014-01-06 10:10:59,525 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (ajp--127.0.0.1-8702-8) Adding task 6caec3bc-fc66-42be-a642-7733fc033103 (Parent Command RemoveSnapshot, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters), polling hasn't started yet.. 2014-01-06 10:10:59,530 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-8) Correlation ID: 3b3e6fb1, Job ID: 53867ef7-d767-45d2-b446-e5d3f5584a19, Call Stack: null, Custom Event ID: -1, Message: Snapshot 'Maj 47 60 vers 5.2.3' deletion for VM 'uc-674' was initiated by necarnot. 2014-01-06 10:10:59,532 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (ajp--127.0.0.1-8702-8) BaseAsyncTask::StartPollingTask: Starting to poll task 6caec3bc-fc66-42be-a642-7733fc033103. 2014-01-06 10:11:01,811 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-20) Polling and updating Async Tasks: 2 tasks, 1 tasks to poll now 2014-01-06 10:11:01,824 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-20) Failed in HSMGetAllTasksStatusesVDS method 2014-01-06 10:11:01,825 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-20) Error code GeneralException and error message VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = '506085b6-40e0-4176-a4df-9102857f51f2' 2014-01-06 10:11:01,826 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-20) SPMAsyncTask::PollTask: Polling task 6caec3bc-fc66-42be-a642-7733fc033103 (Parent Command RemoveSnapshot, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned status finished, result 'cleanSuccess'. 
2014-01-06 10:11:01,829 ERROR [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-20) BaseAsyncTask::LogEndTaskFailure: Task 6caec3bc-fc66-42be-a642-7733fc033103 (Parent Command RemoveSnapshot, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended with failure:
-- Result: cleanSuccess
-- Message: VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = '506085b6-40e0-4176-a4df-9102857f51f2',
-- Exception: VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = '506085b6-40e0-4176-a4df-9102857f51f2'
2014-01-06 10:11:01,832 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (DefaultQuartzScheduler_Worker-20) CommandAsyncTask::EndActionIfNecessary: All tasks of command b402868f-b7f9-4c0e-a6fd-bdc51ff49952 has ended -> executing EndAction
2014-01-06 10:11:01,833 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (DefaultQuartzScheduler_Worker-20) CommandAsyncTask::EndAction: Ending action for 1 tasks (command ID: b402868f-b7f9-4c0e-a6fd-bdc51ff49952): calling EndAction .
2014-01-06 10:11:01,834 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-27) CommandAsyncTask::EndCommandAction [within thread] context: Attempting to EndAction RemoveSnapshot, executionIndex: 0 2014-01-06 10:11:01,839 ERROR [org.ovirt.engine.core.bll.RemoveSnapshotCommand] (pool-6-thread-27) [3b3e6fb1] Ending command with failure: org.ovirt.engine.core.bll.RemoveSnapshotCommand 2014-01-06 10:11:01,844 ERROR [org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskCommand] (pool-6-thread-27) [33fa2a5d] Ending command with failure: org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskCommand 2014-01-06 10:11:01,848 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (pool-6-thread-27) Correlation ID: 3b3e6fb1, Job ID: 53867ef7-d767-45d2-b446-e5d3f5584a19, Call Stack: null, Custom Event ID: -1, Message: Failed to delete snapshot 'Maj 47 60 vers 5.2.3' for VM 'uc-674'. 2014-01-06 10:11:01,850 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-27) CommandAsyncTask::HandleEndActionResult [within thread]: EndAction for action type RemoveSnapshot completed, handling the result. 2014-01-06 10:11:01,851 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-27) CommandAsyncTask::HandleEndActionResult [within thread]: EndAction for action type RemoveSnapshot succeeded, clearing tasks. 
2014-01-06 10:11:01,853 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (pool-6-thread-27) SPMAsyncTask::ClearAsyncTask: Attempting to clear task 6caec3bc-fc66-42be-a642-7733fc033103 2014-01-06 10:11:01,853 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (pool-6-thread-27) START, SPMClearTaskVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false, taskId = 6caec3bc-fc66-42be-a642-7733fc033103), log id: 424e7cf 2014-01-06 10:11:01,873 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (pool-6-thread-27) START, HSMClearTaskVDSCommand(HostName = serv-vm-adm9, HostId = ba48edd4-c528-4832-bda4-4ab66245df24, taskId=6caec3bc-fc66-42be-a642-7733fc033103), log id: 12eec929 2014-01-06 10:11:01,884 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (pool-6-thread-27) FINISH, HSMClearTaskVDSCommand, log id: 12eec929 2014-01-06 10:11:01,885 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (pool-6-thread-27) FINISH, SPMClearTaskVDSCommand, log id: 424e7cf 2014-01-06 10:11:01,886 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (pool-6-thread-27) BaseAsyncTask::RemoveTaskFromDB: Removed task 6caec3bc-fc66-42be-a642-7733fc033103 from DataBase 2014-01-06 10:11:01,887 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-27) CommandAsyncTask::HandleEndActionResult [within thread]: Removing CommandMultiAsyncTasks object for entity b402868f-b7f9-4c0e-a6fd-bdc51ff49952 2014-01-06 10:11:07,703 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-9) Setting new tasks map. The map contains now 1 tasks 2014-01-06 10:12:07,703 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-99) Setting new tasks map. The map contains now 0 tasks 2014-01-06 10:12:07,704 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-99) Cleared all tasks of pool 5849b030-626e-47cb-ad90-3ce782d831b3.
-- Nicolas Ecarnot
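
[For readers skimming the traceback above: vdsm's lvm.getLV raises se.LogicalVolumeDoesNotExistError (aborted with code 610 in this log) when the requested LV cannot be found in the volume group. A rough, simplified sketch of that lookup-and-raise pattern follows; the cache structure and function signature here are illustrative stand-ins, not the actual vdsm internals.]

```python
class LogicalVolumeDoesNotExistError(Exception):
    """Stand-in for vdsm's se.LogicalVolumeDoesNotExistError (code 610)."""

def get_lv(lv_cache, vg_name, lv_name):
    """Illustrative version of lvm.getLV: look an LV up by VG/name,
    raising (rather than returning None) when it is absent."""
    try:
        return lv_cache[(vg_name, lv_name)]
    except KeyError:
        # Matches the message format in the traceback:
        # "Logical volume does not exist: ('<vg>/<lv>',)"
        raise LogicalVolumeDoesNotExistError("%s/%s" % (vg_name, lv_name))
```

In the failure above, the merge reached this point while looking up the already-renamed `_remove_me_aVmPgweS_...` volume, and the lookup found nothing.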

What's the output of `lvs`? Also, please attach the whole vdsm.log; it's hard to read it this way (the lines are broken). ----- Original Message -----
From: "Nicolas Ecarnot" <nicolas@ecarnot.net> To: "users" <users@ovirt.org> Sent: Monday, January 6, 2014 12:16:52 PM Subject: Re: [Users] Unable to delete a snapshot
Thank you Maor and Meital for your help.
It took me some time just to find WHERE to get the vdsm logs, as this concerns a VM that is shut down and thus not directly managed by a specific host. I understood I had to look on the SPM. Then, among the tons of logs, I think I've found the relevant lines.
I have to add that:
- (dumb as I may be,) I repeated the attempt to delete the snapshot, so there are several deletion attempts in the logs.
- after the last attempt, the snapshot now appears as "BROKEN" in the manager, and this doesn't sound good. I would love to hear from you that there is a way to recover it.
The vdsm log I found relevant is below.
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:07,626::utils::538::Storage.Misc.excCmd::(watchCmd) SUCCESS: <err> = []; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:07,627::volume::1152::Storage.Volume::(qemuConvert) (qemuConvert): COPY /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/11a077c7-658b-49bb-8596-a785109c24c9/images/69220da6-eeed-4435-aad0-7aa33f3a0d21/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 to /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/11a077c7-658b-49bb-8596-a785109c24c9/images/69220da6-eeed-4435-aad0-7aa33f3a0d21/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE DONE 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:07,628::blockVolume::418::Storage.Volume::(teardown) Tearing down volume 11a077c7-658b-49bb-8596-a785109c24c9/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE justme True 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:07,628::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE' 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:07,629::resourceManager::634::ResourceManager::(releaseResource) Released resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE' (0 active users) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:07,629::resourceManager::640::ResourceManager::(releaseResource) Resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE' is free, finding out if anyone is waiting for it. 
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:07,631::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvchange --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --autobackup n --available n 11a077c7-658b-49bb-8596-a785109c24c9/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,212::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,213::lvm::499::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,214::lvm::511::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,214::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE', Clearing records. 
236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:08,215::blockVolume::418::Storage.Volume::(teardown) Tearing down volume 11a077c7-658b-49bb-8596-a785109c24c9/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 justme False 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,215::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,216::resourceManager::634::ResourceManager::(releaseResource) Released resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (0 active users) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,216::resourceManager::640::ResourceManager::(releaseResource) Resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' is free, finding out if anyone is waiting for it. 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,217::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvchange --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --autobackup n --available n 11a077c7-658b-49bb-8596-a785109c24c9/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,581::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,582::lvm::499::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex 
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,583::lvm::511::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,583::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23', Clearing records. 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,584::lvm::415::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,585::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 11a077c7-658b-49bb-8596-a785109c24c9' (cwd None) Dummy-1808243::DEBUG::2014-01-06 10:06:08,862::storage_mailbox::727::Storage.Misc.excCmd::(_checkForMail) 'dd if=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,901::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,136::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:09,139::blockVolume::418::Storage.Volume::(teardown) 
Tearing down volume 11a077c7-658b-49bb-8596-a785109c24c9/506085b6-40e0-4176-a4df-9102857f51f2 justme False 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,142::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.506085b6-40e0-4176-a4df-9102857f51f2' 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,146::resourceManager::634::ResourceManager::(releaseResource) Released resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.506085b6-40e0-4176-a4df-9102857f51f2' (0 active users) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,149::resourceManager::640::ResourceManager::(releaseResource) Resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.506085b6-40e0-4176-a4df-9102857f51f2' is free, finding out if anyone is waiting for it. 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,164::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvchange --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --autobackup n --available n 11a077c7-658b-49bb-8596-a785109c24c9/506085b6-40e0-4176-a4df-9102857f51f2' (cwd None) Dummy-1808243::DEBUG::2014-01-06 10:06:09,318::storage_mailbox::727::Storage.Misc.excCmd::(_checkForMail) SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.342311 s, 3.0 MB/s\n'; <rc> = 0 Thread-4658::DEBUG::2014-01-06 10:06:09,422::blockSD::594::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata bs=4096 count=1' (cwd None) 
Thread-4658::DEBUG::2014-01-06 10:06:09,454::blockSD::594::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.000705734 s, 5.8 MB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,502::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,503::lvm::499::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,503::lvm::511::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,504::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.506085b6-40e0-4176-a4df-9102857f51f2', Clearing records. 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,504::lvm::415::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,506::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 11a077c7-658b-49bb-8596-a785109c24c9' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,812::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,940::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:09,941::blockVolume::349::Storage.Volume::(rename) Rename volume c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 as _remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,944::task::736::TaskManager.Task::(_save) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_save: orig /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e temp /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e.temp 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,975::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvrename --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --autobackup n 11a077c7-658b-49bb-8596-a785109c24c9 c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 _remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (cwd None) Thread-2280759::DEBUG::2014-01-06 10:06:10,207::blockSD::594::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/dev/83d39199-d4e4-474c-b232-7088c76a2811/metadata bs=4096 count=1' (cwd None) Thread-2280759::DEBUG::2014-01-06 10:06:10,287::blockSD::594::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.0114936 s, 356 kB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 
10:06:10,313::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: This metadata update is NOT backed up\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,317::lvm::415::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,327::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 11a077c7-658b-49bb-8596-a785109c24c9/_remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (cwd None) Thread-281995::DEBUG::2014-01-06 10:06:10,557::fileSD::238::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/serv-fs-adm1.sdis.isere.fr:_home_admin_vmexport/a7c189dd-4980-49ab-ad86-65a14235cbe4/dom_md/metadata bs=4096 count=1' (cwd None) Thread-281995::DEBUG::2014-01-06 10:06:10,585::fileSD::238::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n378 bytes (378 B) copied, 0.000281187 s, 1.3 MB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,635::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,640::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:10,640::blockVolume::349::Storage.Volume::(rename) Rename volume 
c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE as c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,642::task::736::TaskManager.Task::(_save) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_save: orig /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e temp /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e.temp 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,660::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvrename --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --autobackup n 11a077c7-658b-49bb-8596-a785109c24c9 c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,983::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: This metadata update is NOT backed up\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,985::lvm::415::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,986::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { 
locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 11a077c7-658b-49bb-8596-a785109c24c9/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,284::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,289::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,291::task::736::TaskManager.Task::(_save) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_save: orig /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e temp /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e.temp 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:11,304::volume::669::Storage.Volume::(setLegality) sdUUID=11a077c7-658b-49bb-8596-a785109c24c9 imgUUID=69220da6-eeed-4435-aad0-7aa33f3a0d21 volUUID = _remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 legality = ILLEGAL 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,313::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=4 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None) Dummy-1808243::DEBUG::2014-01-06 10:06:11,346::storage_mailbox::727::Storage.Misc.excCmd::(_checkForMail) 'dd if=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,372::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.000671794 s, 762 kB/s\n'; 
<rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,373::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.000671794 s, 762 kB/s'], size: 512 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:11,386::volume::669::Storage.Volume::(setLegality) sdUUID=11a077c7-658b-49bb-8596-a785109c24c9 imgUUID=69220da6-eeed-4435-aad0-7aa33f3a0d21 volUUID = 506085b6-40e0-4176-a4df-9102857f51f2 legality = ILLEGAL 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,438::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=13 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None) Thread-3642604::DEBUG::2014-01-06 10:06:11,474::BindingXMLRPC::177::vds::(wrapper) client [192.168.39.31] Thread-3642604::DEBUG::2014-01-06 10:06:11,484::task::579::TaskManager.Task::(_updateState) Task=`d59a176c-44d8-489d-8ab5-93de64eb1660`::moving from state init -> state preparing Thread-3642604::INFO::2014-01-06 10:06:11,490::logUtils::44::dispatcher::(wrapper) Run and protect: getSpmStatus(spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', options=None) Thread-3642604::INFO::2014-01-06 10:06:11,497::logUtils::47::dispatcher::(wrapper) Run and protect: getSpmStatus, Return response: {'spm_st': {'spmId': 2, 'spmStatus': 'SPM', 'spmLver': 15}} Thread-3642604::DEBUG::2014-01-06 10:06:11,503::task::1168::TaskManager.Task::(prepare) Task=`d59a176c-44d8-489d-8ab5-93de64eb1660`::finished: {'spm_st': {'spmId': 2, 'spmStatus': 'SPM', 'spmLver': 15}} Thread-3642604::DEBUG::2014-01-06 10:06:11,507::task::579::TaskManager.Task::(_updateState) Task=`d59a176c-44d8-489d-8ab5-93de64eb1660`::moving from state preparing -> state finished Thread-3642604::DEBUG::2014-01-06 10:06:11,510::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 
10:06:11,510::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.000835653 s, 613 kB/s\n'; <rc> = 0 Thread-3642604::DEBUG::2014-01-06 10:06:11,514::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,515::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.000835653 s, 613 kB/s'], size: 512 Thread-3642604::DEBUG::2014-01-06 10:06:11,519::task::974::TaskManager.Task::(_decref) Task=`d59a176c-44d8-489d-8ab5-93de64eb1660`::ref 0 aborting False Thread-3642605::DEBUG::2014-01-06 10:06:11,546::BindingXMLRPC::177::vds::(wrapper) client [192.168.39.31] Thread-3642605::DEBUG::2014-01-06 10:06:11,553::task::579::TaskManager.Task::(_updateState) Task=`bfdcfb66-54ce-4f83-a42a-9049bb11ec65`::moving from state init -> state preparing Thread-3642605::INFO::2014-01-06 10:06:11,558::logUtils::44::dispatcher::(wrapper) Run and protect: getStoragePoolInfo(spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', options=None) Thread-3642605::DEBUG::2014-01-06 10:06:11,568::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.5849b030-626e-47cb-ad90-3ce782d831b3`ReqID=`3e5429cc-9cc9-4259-a51c-0dd76cb6b827`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '2560' at 'getStoragePoolInfo' Thread-3642605::DEBUG::2014-01-06 10:06:11,572::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' for lock type 'shared' Thread-3642605::DEBUG::2014-01-06 10:06:11,578::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' is free. 
Now locking as 'shared' (1 active user) Thread-3642605::DEBUG::2014-01-06 10:06:11,583::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.5849b030-626e-47cb-ad90-3ce782d831b3`ReqID=`3e5429cc-9cc9-4259-a51c-0dd76cb6b827`::Granted request Thread-3642605::DEBUG::2014-01-06 10:06:11,593::task::811::TaskManager.Task::(resourceAcquired) Task=`bfdcfb66-54ce-4f83-a42a-9049bb11ec65`::_resourcesAcquired: Storage.5849b030-626e-47cb-ad90-3ce782d831b3 (shared) Thread-3642605::DEBUG::2014-01-06 10:06:11,597::task::974::TaskManager.Task::(_decref) Task=`bfdcfb66-54ce-4f83-a42a-9049bb11ec65`::ref 1 aborting False 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:11,610::image::881::Storage.Image::(removeSubChain) Remove volume _remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 from image 69220da6-eeed-4435-aad0-7aa33f3a0d21 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:11,619::blockVolume::219::Storage.Volume::(delete) Request to delete LV _remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 of image 69220da6-eeed-4435-aad0-7aa33f3a0d21 in VG 11a077c7-658b-49bb-8596-a785109c24c9 Thread-3642605::INFO::2014-01-06 10:06:11,623::logUtils::47::dispatcher::(wrapper) Run and protect: getStoragePoolInfo, Return response: {'info': {'spm_id': 2, 'master_uuid': '11a077c7-658b-49bb-8596-a785109c24c9', 'name': 'Etat-Major2', 'version': '3', 'domains': u'78fe6648-1b2a-438c-a704-eb8ac9fc438b:Active,83d39199-d4e4-474c-b232-7088c76a2811:Active,a7c189dd-4980-49ab-ad86-65a14235cbe4:Active,1429ffe2-4137-416c-bb38-63fd73f4bcc1:Active,11a077c7-658b-49bb-8596-a785109c24c9:Active', 'pool_status': 'connected', 'isoprefix': '/rhev/data-center/mnt/serv-mvm-adm2.sdis.isere.fr:_home_iso/78fe6648-1b2a-438c-a704-eb8ac9fc438b/images/11111111-1111-1111-1111-111111111111', 'type': 'ISCSI', 'master_ver': 2, 'lver': 15}, 'dominfo': {u'78fe6648-1b2a-438c-a704-eb8ac9fc438b': {'status': u'Active', 'diskfree': '545295171584', 'alerts': [], 'version': 0, 
'disktotal': '589474824192'}, u'83d39199-d4e4-474c-b232-7088c76a2811': {'status': u'Active', 'diskfree': '1011196362752', 'alerts': [], 'version': 3, 'disktotal': '1099243192320'}, u'a7c189dd-4980-49ab-ad86-65a14235cbe4': {'status': u'Active', 'diskfree': '1023138594816', 'alerts': [], 'version': 0, 'disktotal': '1098434740224'}, u'1429ffe2-4137-416c-bb38-63fd73f4bcc1': {'status': u'Active', 'diskfree': '223204081664', 'alerts': [], 'version': 3, 'disktotal': '1099243192320'}, u'11a077c7-658b-49bb-8596-a785109c24c9': {'status': u'Active', 'diskfree': '230049185792', 'alerts': [], 'version': 3, 'disktotal': '1099243192320'}}} 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:11,625::volume::669::Storage.Volume::(setLegality) sdUUID=11a077c7-658b-49bb-8596-a785109c24c9 imgUUID=69220da6-eeed-4435-aad0-7aa33f3a0d21 volUUID = _remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 legality = ILLEGAL Thread-3642605::DEBUG::2014-01-06 10:06:11,627::task::1168::TaskManager.Task::(prepare) Task=`bfdcfb66-54ce-4f83-a42a-9049bb11ec65`::finished: {'info': {'spm_id': 2, 'master_uuid': '11a077c7-658b-49bb-8596-a785109c24c9', 'name': 'Etat-Major2', 'version': '3', 'domains': u'78fe6648-1b2a-438c-a704-eb8ac9fc438b:Active,83d39199-d4e4-474c-b232-7088c76a2811:Active,a7c189dd-4980-49ab-ad86-65a14235cbe4:Active,1429ffe2-4137-416c-bb38-63fd73f4bcc1:Active,11a077c7-658b-49bb-8596-a785109c24c9:Active', 'pool_status': 'connected', 'isoprefix': '/rhev/data-center/mnt/serv-mvm-adm2.sdis.isere.fr:_home_iso/78fe6648-1b2a-438c-a704-eb8ac9fc438b/images/11111111-1111-1111-1111-111111111111', 'type': 'ISCSI', 'master_ver': 2, 'lver': 15}, 'dominfo': {u'78fe6648-1b2a-438c-a704-eb8ac9fc438b': {'status': u'Active', 'diskfree': '545295171584', 'alerts': [], 'version': 0, 'disktotal': '589474824192'}, u'83d39199-d4e4-474c-b232-7088c76a2811': {'status': u'Active', 'diskfree': '1011196362752', 'alerts': [], 'version': 3, 'disktotal': '1099243192320'}, u'a7c189dd-4980-49ab-ad86-65a14235cbe4': 
{'status': u'Active', 'diskfree': '1023138594816', 'alerts': [], 'version': 0, 'disktotal': '1098434740224'}, u'1429ffe2-4137-416c-bb38-63fd73f4bcc1': {'status': u'Active', 'diskfree': '223204081664', 'alerts': [], 'version': 3, 'disktotal': '1099243192320'}, u'11a077c7-658b-49bb-8596-a785109c24c9': {'status': u'Active', 'diskfree': '230049185792', 'alerts': [], 'version': 3, 'disktotal': '1099243192320'}}} Thread-3642605::DEBUG::2014-01-06 10:06:11,632::task::579::TaskManager.Task::(_updateState) Task=`bfdcfb66-54ce-4f83-a42a-9049bb11ec65`::moving from state preparing -> state finished Thread-3642605::DEBUG::2014-01-06 10:06:11,636::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.5849b030-626e-47cb-ad90-3ce782d831b3': < ResourceRef 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3', isValid: 'True' obj: 'None'>} Thread-3642605::DEBUG::2014-01-06 10:06:11,640::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-3642605::DEBUG::2014-01-06 10:06:11,647::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' Thread-3642605::DEBUG::2014-01-06 10:06:11,654::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' (0 active users) Thread-3642605::DEBUG::2014-01-06 10:06:11,657::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' is free, finding out if anyone is waiting for it. Thread-3642605::DEBUG::2014-01-06 10:06:11,661::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3', Clearing records. 
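For what it's worth, the `diskfree`/`disktotal` values in the dominfo block above are byte counts as strings; converting them (my own quick sketch, numbers copied from the response) suggests the master domain still has plenty of room, so this does not look like an out-of-space failure:

```python
# Sketch: convert the dominfo byte-count strings above to GiB.
def gib(byte_string):
    return int(byte_string) / 1024**3

# Values for master domain 11a077c7-... taken from the getStoragePoolInfo response:
free = gib("230049185792")     # ~214.25 GiB free
total = gib("1099243192320")   # ~1023.75 GiB total
```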
Thread-3642605::DEBUG::2014-01-06 10:06:11,669::task::974::TaskManager.Task::(_decref) Task=`bfdcfb66-54ce-4f83-a42a-9049bb11ec65`::ref 0 aborting False Thread-3642606::DEBUG::2014-01-06 10:06:11,725::BindingXMLRPC::177::vds::(wrapper) client [192.168.39.31] Thread-3642606::DEBUG::2014-01-06 10:06:11,730::task::579::TaskManager.Task::(_updateState) Task=`6130a21c-b843-4faa-83a7-2b1f956217f8`::moving from state init -> state preparing 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,731::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=4 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None) Thread-3642606::INFO::2014-01-06 10:06:11,756::logUtils::44::dispatcher::(wrapper) Run and protect: getAllTasksStatuses(spUUID=None, options=None) Thread-3642606::DEBUG::2014-01-06 10:06:11,761::taskManager::102::TaskManager::(getAllTasksStatuses) Entry. Thread-3642606::DEBUG::2014-01-06 10:06:11,765::taskManager::93::TaskManager::(getTaskStatus) Entry. taskID: 236b3c5a-452a-4614-801a-c30cefbce87e Thread-3642606::DEBUG::2014-01-06 10:06:11,768::taskManager::96::TaskManager::(getTaskStatus) Return. 
Response: {'code': 0, 'message': 'Task is initializing', 'taskState': 'running', 'taskResult': '', 'taskID': '236b3c5a-452a-4614-801a-c30cefbce87e'} Thread-3642606::DEBUG::2014-01-06 10:06:11,772::taskManager::113::TaskManager::(getAllTasksStatuses) Return: {'236b3c5a-452a-4614-801a-c30cefbce87e': {'code': 0, 'message': 'Task is initializing', 'taskState': 'running', 'taskResult': '', 'taskID': '236b3c5a-452a-4614-801a-c30cefbce87e'}} Thread-3642606::INFO::2014-01-06 10:06:11,776::logUtils::47::dispatcher::(wrapper) Run and protect: getAllTasksStatuses, Return response: {'allTasksStatus': {'236b3c5a-452a-4614-801a-c30cefbce87e': {'code': 0, 'message': 'Task is initializing', 'taskState': 'running', 'taskResult': '', 'taskID': '236b3c5a-452a-4614-801a-c30cefbce87e'}}} Thread-3642606::DEBUG::2014-01-06 10:06:11,779::task::1168::TaskManager.Task::(prepare) Task=`6130a21c-b843-4faa-83a7-2b1f956217f8`::finished: {'allTasksStatus': {'236b3c5a-452a-4614-801a-c30cefbce87e': {'code': 0, 'message': 'Task is initializing', 'taskState': 'running', 'taskResult': '', 'taskID': '236b3c5a-452a-4614-801a-c30cefbce87e'}}} Thread-3642606::DEBUG::2014-01-06 10:06:11,784::task::579::TaskManager.Task::(_updateState) Task=`6130a21c-b843-4faa-83a7-2b1f956217f8`::moving from state preparing -> state finished Thread-3642606::DEBUG::2014-01-06 10:06:11,787::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-3642606::DEBUG::2014-01-06 10:06:11,791::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-3642606::DEBUG::2014-01-06 10:06:11,796::task::974::TaskManager.Task::(_decref) Task=`6130a21c-b843-4faa-83a7-2b1f956217f8`::ref 0 aborting False 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,803::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.00039345 s, 1.3 MB/s\n'; <rc> = 0 
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,809::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.00039345 s, 1.3 MB/s'], size: 512 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,885::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvchange --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --autobackup n --deltag PU_506085b6-40e0-4176-a4df-9102857f51f2 --addtag PU_00000000-0000-0000-0000-000000000000 11a077c7-658b-49bb-8596-a785109c24c9/_remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (cwd None) Dummy-1808243::DEBUG::2014-01-06 10:06:12,002::storage_mailbox::727::Storage.Misc.excCmd::(_checkForMail) SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.566167 s, 1.8 MB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,204::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: This metadata update is NOT backed up\n WARNING: This metadata update is NOT backed up\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,204::lvm::499::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,205::lvm::511::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,206::lvm::415::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex 
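If I read the `lvchange --deltag PU_506085b6... --addtag PU_00000000...` above correctly, that pair re-points the parent tag of the volume being merged away, making it claim the blank UUID as parent. In effect (my own illustrative sketch, not VDSM code):

```python
# Illustrative sketch of what the --deltag/--addtag pair does to the LV's tags.
def reparent(tags, old_parent, new_parent):
    tags = [t for t in tags if t != "PU_" + old_parent]
    tags.append("PU_" + new_parent)
    return tags

tags = ["IU_69220da6-eeed-4435-aad0-7aa33f3a0d21",
        "PU_506085b6-40e0-4176-a4df-9102857f51f2"]
tags = reparent(tags,
                "506085b6-40e0-4176-a4df-9102857f51f2",
                "00000000-0000-0000-0000-000000000000")
# The volume's parent tag now points at the blank UUID (a base volume).
```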
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,207::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 11a077c7-658b-49bb-8596-a785109c24c9' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,511::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,580::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,590::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=4 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,616::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.00049159 s, 1.0 MB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,617::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.00049159 s, 1.0 MB/s'], size: 512 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,632::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=13 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 
10:06:12,652::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.000381403 s, 1.3 MB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,653::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.000381403 s, 1.3 MB/s'], size: 512 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,654::lvm::415::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,655::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 11a077c7-658b-49bb-8596-a785109c24c9' (cwd None) VM Channels Listener::DEBUG::2014-01-06 10:06:12,909::vmChannels::91::vds::(_handle_timeouts) Timeout on fileno 58. 
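In case it helps anyone check what is left of the image chain on their own setup: the repeated `lvs --noheadings ... --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags` calls return pipe-separated rows, which can be parsed with a small helper like this (hypothetical helper of mine; the field list is just the `-o` option from the log, and the sample row is made up):

```python
# Hypothetical parser for the pipe-separated `lvs` output seen in the log.
# Field names mirror the `-o uuid,name,...,tags` option vdsm passes.
FIELDS = ["uuid", "name", "vg_name", "attr", "size",
          "seg_start_pe", "devices", "tags"]

def parse_lvs(output):
    rows = []
    for line in output.splitlines():
        line = line.strip()
        if not line:
            continue
        rows.append(dict(zip(FIELDS, line.split("|"))))
    return rows

# Made-up sample row in the same shape as the real output:
sample = ("  x9y8z7|506085b6-40e0-4176-a4df-9102857f51f2|"
          "11a077c7-658b-49bb-8596-a785109c24c9|-wi-a----|1073741824|0|"
          "/dev/sda(0)|IU_69220da6,PU_00000000\n")
vols = parse_lvs(sample)
```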
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,959::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,023::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,031::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=13 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,055::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.000834663 s, 613 kB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,056::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.000834663 s, 613 kB/s'], size: 512 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,065::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvremove --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " -f --autobackup n 11a077c7-658b-49bb-8596-a785109c24c9/_remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,370::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: This metadata update is NOT backed up\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,371::lvm::489::OperationMutex::(_invalidatevgs) Operation 
'lvm invalidate operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,372::lvm::491::OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:13,381::image::881::Storage.Image::(removeSubChain) Remove volume 506085b6-40e0-4176-a4df-9102857f51f2 from image 69220da6-eeed-4435-aad0-7aa33f3a0d21 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:13,381::blockVolume::219::Storage.Volume::(delete) Request to delete LV 506085b6-40e0-4176-a4df-9102857f51f2 of image 69220da6-eeed-4435-aad0-7aa33f3a0d21 in VG 11a077c7-658b-49bb-8596-a785109c24c9 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:13,382::volume::669::Storage.Volume::(setLegality) sdUUID=11a077c7-658b-49bb-8596-a785109c24c9 imgUUID=69220da6-eeed-4435-aad0-7aa33f3a0d21 volUUID = 506085b6-40e0-4176-a4df-9102857f51f2 legality = ILLEGAL 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,389::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=13 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,414::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.000504094 s, 1.0 MB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,414::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.000504094 s, 1.0 MB/s'], size: 512 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,430::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=13 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 
10:06:13,450::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.0003572 s, 1.4 MB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,451::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.0003572 s, 1.4 MB/s'], size: 512 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,460::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvremove --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " -f --autobackup n 11a077c7-658b-49bb-8596-a785109c24c9/506085b6-40e0-4176-a4df-9102857f51f2' (cwd None) Thread-640001::DEBUG::2014-01-06 10:06:13,672::blockSD::594::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/dev/1429ffe2-4137-416c-bb38-63fd73f4bcc1/metadata bs=4096 count=1' (cwd None) Thread-640001::DEBUG::2014-01-06 10:06:13,705::blockSD::594::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.000955526 s, 4.3 MB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,774::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: This metadata update is NOT backed up\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,775::lvm::489::OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,775::lvm::491::OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released 
the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,784::lvm::415::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,785::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 11a077c7-658b-49bb-8596-a785109c24c9' (cwd None) Dummy-1808243::DEBUG::2014-01-06 10:06:14,027::storage_mailbox::727::Storage.Misc.excCmd::(_checkForMail) 'dd if=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,143::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,355::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::WARNING::2014-01-06 10:06:14,358::lvm::594::Storage.LVM::(getLv) lv: _remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 not found in lvs vg: 11a077c7-658b-49bb-8596-a785109c24c9 response 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,362::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource '11a077c7-658b-49bb-8596-a785109c24c9_imageNS.69220da6-eeed-4435-aad0-7aa33f3a0d21' 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 
10:06:14,367::resourceManager::634::ResourceManager::(releaseResource) Released resource '11a077c7-658b-49bb-8596-a785109c24c9_imageNS.69220da6-eeed-4435-aad0-7aa33f3a0d21' (0 active users) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,369::resourceManager::640::ResourceManager::(releaseResource) Resource '11a077c7-658b-49bb-8596-a785109c24c9_imageNS.69220da6-eeed-4435-aad0-7aa33f3a0d21' is free, finding out if anyone is waiting for it. 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,374::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.506085b6-40e0-4176-a4df-9102857f51f2' 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,377::resourceManager::634::ResourceManager::(releaseResource) Released resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.506085b6-40e0-4176-a4df-9102857f51f2' (0 active users) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,380::resourceManager::640::ResourceManager::(releaseResource) Resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.506085b6-40e0-4176-a4df-9102857f51f2' is free, finding out if anyone is waiting for it. 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,383::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.506085b6-40e0-4176-a4df-9102857f51f2', Clearing records. 
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,389::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,392::resourceManager::634::ResourceManager::(releaseResource) Released resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (0 active users) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,395::resourceManager::640::ResourceManager::(releaseResource) Resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' is free, finding out if anyone is waiting for it. 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,398::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23', Clearing records. 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,401::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource '11a077c7-658b-49bb-8596-a785109c24c9_imageNS.69220da6-eeed-4435-aad0-7aa33f3a0d21', Clearing records. 
236b3c5a-452a-4614-801a-c30cefbce87e::ERROR::2014-01-06 10:06:14,407::task::850::TaskManager.Task::(_setError) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 857, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/storage/task.py", line 318, in run
    return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/share/vdsm/storage/securable.py", line 68, in wrapper
    return f(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 1937, in mergeSnapshots
    sdUUID, vmUUID, imgUUID, ancestor, successor, postZero)
  File "/usr/share/vdsm/storage/image.py", line 1162, in merge
    srcVol.shrinkToOptimalSize()
  File "/usr/share/vdsm/storage/blockVolume.py", line 315, in shrinkToOptimalSize
    volParams = self.getVolumeParams()
  File "/usr/share/vdsm/storage/volume.py", line 1008, in getVolumeParams
    volParams['imgUUID'] = self.getImage()
  File "/usr/share/vdsm/storage/blockVolume.py", line 494, in getImage
    return self.getVolumeTag(TAG_PREFIX_IMAGE)
  File "/usr/share/vdsm/storage/blockVolume.py", line 464, in getVolumeTag
    return _getVolumeTag(self.sdUUID, self.volUUID, tagPrefix)
  File "/usr/share/vdsm/storage/blockVolume.py", line 662, in _getVolumeTag
    tags = lvm.getLV(sdUUID, volUUID).tags
  File "/usr/share/vdsm/storage/lvm.py", line 851, in getLV
    raise se.LogicalVolumeDoesNotExistError("%s/%s" % (vgName, lvName))
LogicalVolumeDoesNotExistError: Logical volume does not exist: ('11a077c7-658b-49bb-8596-a785109c24c9/_remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23',)
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,427::task::869::TaskManager.Task::(_run) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::Task._run: 236b3c5a-452a-4614-801a-c30cefbce87e () {} failed - stopping task
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,430::task::1194::TaskManager.Task::(stop) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::stopping in state running (force False)
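The whole failure above reduces to one fact: VDSM asked LVM for a volume named `_remove_me_aVmPgweS_c50561d9-...` and LVM answered that no such LV exists in that VG. A minimal sketch of pulling the VG and LV name out of such a log line (the helper name is made up for illustration; it is not part of VDSM):

```python
import re

# Hypothetical helper (not part of VDSM): extract the VG and LV name
# from a LogicalVolumeDoesNotExistError line in vdsm.log.
ERR_RE = re.compile(
    r"LogicalVolumeDoesNotExistError: Logical volume does not exist: "
    r"\('(?P<vg>[^/]+)/(?P<lv>[^']+)',\)"
)

def missing_lv(line):
    """Return (vg, lv) if the line reports a missing LV, else None."""
    m = ERR_RE.search(line)
    return (m.group("vg"), m.group("lv")) if m else None

line = ("LogicalVolumeDoesNotExistError: Logical volume does not exist: "
        "('11a077c7-658b-49bb-8596-a785109c24c9/"
        "_remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23',)")
vg, lv = missing_lv(line)
```

The `_remove_me_` prefix is the name VDSM gives a volume it has renamed for deletion (the rename shows up later in this thread's logs), so the open question for the list is why the LV is not there under that name.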
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,433::task::974::TaskManager.Task::(_decref) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::ref 1 aborting True
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,436::task::900::TaskManager.Task::(_runJobs) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::aborting: Task is aborted: 'Logical volume does not exist' - code 610
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,440::task::974::TaskManager.Task::(_decref) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::ref 0 aborting True
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,443::task::909::TaskManager.Task::(_doAbort) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::Task._doAbort: force False
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,445::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,449::task::579::TaskManager.Task::(_updateState) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::moving from state running -> state aborting
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,457::task::736::TaskManager.Task::(_save) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_save: orig /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e temp /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e.temp
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,509::task::534::TaskManager.Task::(__state_aborting) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_aborting: recover policy auto
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,512::task::579::TaskManager.Task::(_updateState) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::moving from state aborting -> state racquiring
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,520::task::736::TaskManager.Task::(_save) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_save: orig /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e temp /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e.temp
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,574::task::579::TaskManager.Task::(_updateState) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::moving from state racquiring -> state recovering
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,581::task::736::TaskManager.Task::(_save) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_save: orig /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e temp /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e.temp
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,633::task::782::TaskManager.Task::(_recover) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_recover
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,635::task::789::TaskManager.Task::(_recover) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::running recovery None
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,638::task::770::TaskManager.Task::(_recoverDone) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::Recover Done: state recovering
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,641::task::579::TaskManager.Task::(_updateState) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::moving from state recovering -> state recovered
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,649::task::736::TaskManager.Task::(_save) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_save: orig /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e temp /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e.temp
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,697::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.11a077c7-658b-49bb-8596-a785109c24c9': < ResourceRef 'Storage.11a077c7-658b-49bb-8596-a785109c24c9', isValid: 'True' obj: 'None'>}
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,700::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,706::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.11a077c7-658b-49bb-8596-a785109c24c9'
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,709::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.11a077c7-658b-49bb-8596-a785109c24c9' (0 active users)
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,712::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.11a077c7-658b-49bb-8596-a785109c24c9' is free, finding out if anyone is waiting for it.
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,714::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.11a077c7-658b-49bb-8596-a785109c24c9', Clearing records.
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,720::threadPool::57::Misc.ThreadPool::(setRunningTask) Number of running tasks: 0
Dummy-1808243::DEBUG::2014-01-06 10:06:14,788::storage_mailbox::727::Storage.Misc.excCmd::(_checkForMail) SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.471388 s, 2.2 MB/s\n'; <rc> = 0
On 06/01/2014 10:30, Maor Lipchuk wrote:
Hi Nicolas, Can you please also add the VDSM log?
Thanks, Maor
On 01/06/2014 11:25 AM, Nicolas Ecarnot wrote:
Hi,
With our oVirt 3.3, I created a snapshot three weeks ago on a VM I had properly shut down. It has been running fine since. Today, after shutting it down properly again, I am trying to delete the snapshot and I get an error:
"Failed to delete snapshot 'blahblahbla' for VM 'myVM'."
The disk is thin provisioned and accessed via VirtIO; nothing special.
The log below comes from the manager.
I hope someone can help us, because this server is quite important to us.
Thank you.
2014-01-06 10:10:58,826 INFO [org.ovirt.engine.core.bll.RemoveSnapshotCommand] (ajp--127.0.0.1-8702-8) Lock Acquired to object EngineLock [exclusiveLocks= key: cb953dc1-c796-457a-99a1-0e54f1c0c338 value: VM , sharedLocks= ]
2014-01-06 10:10:58,837 INFO [org.ovirt.engine.core.bll.RemoveSnapshotCommand] (ajp--127.0.0.1-8702-8) Running command: RemoveSnapshotCommand internal: false. Entities affected : ID: cb953dc1-c796-457a-99a1-0e54f1c0c338 Type: VM
2014-01-06 10:10:58,840 INFO [org.ovirt.engine.core.bll.RemoveSnapshotCommand] (ajp--127.0.0.1-8702-8) Lock freed to object EngineLock [exclusiveLocks= key: cb953dc1-c796-457a-99a1-0e54f1c0c338 value: VM , sharedLocks= ]
2014-01-06 10:10:58,844 INFO [org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskCommand] (ajp--127.0.0.1-8702-8) Running command: RemoveSnapshotSingleDiskCommand internal: true. Entities affected : ID: 00000000-0000-0000-0000-000000000000 Type: Storage
2014-01-06 10:10:58,848 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.MergeSnapshotsVDSCommand] (ajp--127.0.0.1-8702-8) START, MergeSnapshotsVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false, storageDomainId = 11a077c7-658b-49bb-8596-a785109c24c9, imageGroupId = 69220da6-eeed-4435-aad0-7aa33f3a0d21, imageId = 506085b6-40e0-4176-a4df-9102857f51f2, imageId2 = c50561d9-c3ba-4366-b2bc-49bbfaa4cd23, vmId = cb953dc1-c796-457a-99a1-0e54f1c0c338, postZero = false), log id: 22d6503b
2014-01-06 10:10:59,511 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.MergeSnapshotsVDSCommand] (ajp--127.0.0.1-8702-8) FINISH, MergeSnapshotsVDSCommand, log id: 22d6503b
2014-01-06 10:10:59,518 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (ajp--127.0.0.1-8702-8) CommandAsyncTask::Adding CommandMultiAsyncTasks object for command b402868f-b7f9-4c0e-a6fd-bdc51ff49952
2014-01-06 10:10:59,519 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (ajp--127.0.0.1-8702-8) CommandMultiAsyncTasks::AttachTask: Attaching task 6caec3bc-fc66-42be-a642-7733fc033103 to command b402868f-b7f9-4c0e-a6fd-bdc51ff49952.
2014-01-06 10:10:59,525 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (ajp--127.0.0.1-8702-8) Adding task 6caec3bc-fc66-42be-a642-7733fc033103 (Parent Command RemoveSnapshot, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters), polling hasn't started yet..
2014-01-06 10:10:59,530 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-8) Correlation ID: 3b3e6fb1, Job ID: 53867ef7-d767-45d2-b446-e5d3f5584a19, Call Stack: null, Custom Event ID: -1, Message: Snapshot 'Maj 47 60 vers 5.2.3' deletion for VM 'uc-674' was initiated by necarnot.
2014-01-06 10:10:59,532 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (ajp--127.0.0.1-8702-8) BaseAsyncTask::StartPollingTask: Starting to poll task 6caec3bc-fc66-42be-a642-7733fc033103.
2014-01-06 10:11:01,811 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-20) Polling and updating Async Tasks: 2 tasks, 1 tasks to poll now
2014-01-06 10:11:01,824 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-20) Failed in HSMGetAllTasksStatusesVDS method
2014-01-06 10:11:01,825 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-20) Error code GeneralException and error message VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = '506085b6-40e0-4176-a4df-9102857f51f2'
2014-01-06 10:11:01,826 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-20) SPMAsyncTask::PollTask: Polling task 6caec3bc-fc66-42be-a642-7733fc033103 (Parent Command RemoveSnapshot, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned status finished, result 'cleanSuccess'.
2014-01-06 10:11:01,829 ERROR [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-20) BaseAsyncTask::LogEndTaskFailure: Task 6caec3bc-fc66-42be-a642-7733fc033103 (Parent Command RemoveSnapshot, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended with failure:
-- Result: cleanSuccess
-- Message: VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = '506085b6-40e0-4176-a4df-9102857f51f2',
-- Exception: VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = '506085b6-40e0-4176-a4df-9102857f51f2'
2014-01-06 10:11:01,832 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (DefaultQuartzScheduler_Worker-20) CommandAsyncTask::EndActionIfNecessary: All tasks of command b402868f-b7f9-4c0e-a6fd-bdc51ff49952 has ended -> executing EndAction
2014-01-06 10:11:01,833 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (DefaultQuartzScheduler_Worker-20) CommandAsyncTask::EndAction: Ending action for 1 tasks (command ID: b402868f-b7f9-4c0e-a6fd-bdc51ff49952): calling EndAction .
2014-01-06 10:11:01,834 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-27) CommandAsyncTask::EndCommandAction [within thread] context: Attempting to EndAction RemoveSnapshot, executionIndex: 0
2014-01-06 10:11:01,839 ERROR [org.ovirt.engine.core.bll.RemoveSnapshotCommand] (pool-6-thread-27) [3b3e6fb1] Ending command with failure: org.ovirt.engine.core.bll.RemoveSnapshotCommand
2014-01-06 10:11:01,844 ERROR [org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskCommand] (pool-6-thread-27) [33fa2a5d] Ending command with failure: org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskCommand
2014-01-06 10:11:01,848 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (pool-6-thread-27) Correlation ID: 3b3e6fb1, Job ID: 53867ef7-d767-45d2-b446-e5d3f5584a19, Call Stack: null, Custom Event ID: -1, Message: Failed to delete snapshot 'Maj 47 60 vers 5.2.3' for VM 'uc-674'.
2014-01-06 10:11:01,850 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-27) CommandAsyncTask::HandleEndActionResult [within thread]: EndAction for action type RemoveSnapshot completed, handling the result.
2014-01-06 10:11:01,851 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-27) CommandAsyncTask::HandleEndActionResult [within thread]: EndAction for action type RemoveSnapshot succeeded, clearing tasks.
2014-01-06 10:11:01,853 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (pool-6-thread-27) SPMAsyncTask::ClearAsyncTask: Attempting to clear task 6caec3bc-fc66-42be-a642-7733fc033103
2014-01-06 10:11:01,853 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (pool-6-thread-27) START, SPMClearTaskVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false, taskId = 6caec3bc-fc66-42be-a642-7733fc033103), log id: 424e7cf
2014-01-06 10:11:01,873 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (pool-6-thread-27) START, HSMClearTaskVDSCommand(HostName = serv-vm-adm9, HostId = ba48edd4-c528-4832-bda4-4ab66245df24, taskId=6caec3bc-fc66-42be-a642-7733fc033103), log id: 12eec929
2014-01-06 10:11:01,884 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (pool-6-thread-27) FINISH, HSMClearTaskVDSCommand, log id: 12eec929
2014-01-06 10:11:01,885 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (pool-6-thread-27) FINISH, SPMClearTaskVDSCommand, log id: 424e7cf
2014-01-06 10:11:01,886 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (pool-6-thread-27) BaseAsyncTask::RemoveTaskFromDB: Removed task 6caec3bc-fc66-42be-a642-7733fc033103 from DataBase
2014-01-06 10:11:01,887 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-27) CommandAsyncTask::HandleEndActionResult [within thread]: Removing CommandMultiAsyncTasks object for entity b402868f-b7f9-4c0e-a6fd-bdc51ff49952
2014-01-06 10:11:07,703 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-9) Setting new tasks map. The map contains now 1 tasks
2014-01-06 10:12:07,703 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-99) Setting new tasks map. The map contains now 0 tasks
2014-01-06 10:12:07,704 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-99) Cleared all tasks of pool 5849b030-626e-47cb-ad90-3ce782d831b3.
--
Nicolas Ecarnot
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

Hi Sander, hi everyone,

The oVirt version is 3.3.0-4.el6. For VDSM, it is vdsm-4.12.1-2.el6.

On 06/01/2014 12:35, Meital Bourvine wrote:
What's the output of `lvs`?
[root@serv-vm-adm9 vdsm]# lvs
  LV                                   VG                                   Attr      LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  1beb9910-e18a-4163-b0ac-3aeef5bd9912 11a077c7-658b-49bb-8596-a785109c24c9 -wi------   7,62g
  203a473b-c2c2-4193-bec8-afb00d9b45f0 11a077c7-658b-49bb-8596-a785109c24c9 -wi------   1,00g
  4779068e-96dd-411f-a924-2cdb2b4adf05 11a077c7-658b-49bb-8596-a785109c24c9 -wi------   2,38g
  4a8c17aa-5882-45a1-8a6e-40db39ed06ca 11a077c7-658b-49bb-8596-a785109c24c9 -wi------   1,00g
  55973d46-ceba-40a4-9d89-c69357fa8f2a 11a077c7-658b-49bb-8596-a785109c24c9 -wi------  31,00g
  5c71e53b-21f2-4671-94f8-4603d1b0bf5e 11a077c7-658b-49bb-8596-a785109c24c9 -wi-ao---  12,38g
  6be3ca58-3cfa-478f-92e8-7a6061fbc192 11a077c7-658b-49bb-8596-a785109c24c9 -wi------ 330,00g
  7369a73a-fea5-40d9-ad0a-7d81a43fe931 11a077c7-658b-49bb-8596-a785109c24c9 -wi-ao---  12,00g
  89b6297f-e394-4d78-b1d3-ea6135ea114d 11a077c7-658b-49bb-8596-a785109c24c9 -wi------  12,62g
  b8309060-7acd-43e1-9ae5-0455f0861dce 11a077c7-658b-49bb-8596-a785109c24c9 -wi------  40,62g
  c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 11a077c7-658b-49bb-8596-a785109c24c9 -wi------ 100,00g
  e610a133-1f96-4c6a-8be5-efd1fdc413b6 11a077c7-658b-49bb-8596-a785109c24c9 -wi------ 126,00g
  eea02ca5-7d8c-4841-b161-b56d75b177b8 11a077c7-658b-49bb-8596-a785109c24c9 -wi------  25,00g
  ids                                  11a077c7-658b-49bb-8596-a785109c24c9 -wi-ao--- 128,00m
  inbox                                11a077c7-658b-49bb-8596-a785109c24c9 -wi-a---- 128,00m
  leases                               11a077c7-658b-49bb-8596-a785109c24c9 -wi-a----   2,00g
  master                               11a077c7-658b-49bb-8596-a785109c24c9 -wi-ao---   1,00g
  metadata                             11a077c7-658b-49bb-8596-a785109c24c9 -wi-a---- 512,00m
  outbox                               11a077c7-658b-49bb-8596-a785109c24c9 -wi-a---- 128,00m
  11aa5d75-5c46-4a2d-9d36-785ee3fed483 1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi------  12,12g
  645920f2-458f-46e9-93e8-4ff72f4e7bd0 1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi------ 143,00g
  6ec56633-707d-4ae5-9db0-94834a193ae5 1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi------  10,00g
  7a74ba80-9207-4f21-90a0-f28fc9925fee 1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi------  16,00g
  80bac371-6899-4fbe-a8e1-272037186bfb 1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi------  21,00g
  82fca941-0c88-4bb9-b99f-8e22e56d81b0 1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi------  16,25g
  ad83757e-3720-4b77-bc7e-475e82dccbd4 1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi------  32,00g
  b42ba23e-ed78-4bfc-a047-edd376696a65 1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi------  16,00g
  bca93f68-8e10-4db3-bf46-d20cc4e9a4f2 1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi------  18,00g
  ea9c8f12-4eb6-42de-b6d6-6296555d0ac0 1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi------  62,00g
  ed91505a-a219-42af-9915-e5ffc79918f9 1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi------ 132,00g
  eee05299-4131-4116-87ab-085c6c0afd8d 1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi------ 330,00g
  fe2d2070-6758-467e-8faf-bd7af8f4788b 1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi------   5,62g
  ids                                  1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi-ao--- 128,00m
  inbox                                1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi-a---- 128,00m
  leases                               1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi-a----   2,00g
  master                               1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi-a----   1,00g
  metadata                             1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi-a---- 512,00m
  outbox                               1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi-a---- 128,00m
  0062076a-1b4b-4b3f-8e0d-c1f411049f2b 83d39199-d4e4-474c-b232-7088c76a2811 -wi------   3,00g
  179f6e8e-8b30-4f3a-874e-61f9bcb0dabf 83d39199-d4e4-474c-b232-7088c76a2811 -wi------   4,50g
  38473266-f800-409a-ac41-aa737d5d25ba 83d39199-d4e4-474c-b232-7088c76a2811 -wi------   5,75g
  4250ecb7-3a66-4646-947c-bd65705abe13 83d39199-d4e4-474c-b232-7088c76a2811 -wi------   1,00g
  44421611-8696-4f4c-ad0f-07af8fac5a87 83d39199-d4e4-474c-b232-7088c76a2811 -wi------   1,00g
  476e7e0a-89dd-4668-8fcf-aafa14e0cb19 83d39199-d4e4-474c-b232-7088c76a2811 -wi------   2,00g
  74d53c18-b9bd-494e-84b5-ae99b68ad237 83d39199-d4e4-474c-b232-7088c76a2811 -wi------   4,75g
  7b0c1ea7-83f7-424b-8f4f-3640bc99e758 83d39199-d4e4-474c-b232-7088c76a2811 -wi------   1,00g
  be0849c9-37e7-4b8c-bb11-c532e2b763b8 83d39199-d4e4-474c-b232-7088c76a2811 -wi------  20,88g
  c9491855-eb18-41bf-8b40-4ebe50c0b22e 83d39199-d4e4-474c-b232-7088c76a2811 -wi------  20,00g
  d0148213-c15b-44ca-8c2d-979c3b18b188 83d39199-d4e4-474c-b232-7088c76a2811 -wi------   2,00g
  e825616c-4a97-4efb-a92b-46be2524e522 83d39199-d4e4-474c-b232-7088c76a2811 -wi------   4,00g
  f43cc98c-988b-4107-b95b-c665bcf6fa64 83d39199-d4e4-474c-b232-7088c76a2811 -wi------   8,25g
  ids                                  83d39199-d4e4-474c-b232-7088c76a2811 -wi-ao--- 128,00m
  inbox                                83d39199-d4e4-474c-b232-7088c76a2811 -wi-a---- 128,00m
  leases                               83d39199-d4e4-474c-b232-7088c76a2811 -wi-a----   2,00g
  master                               83d39199-d4e4-474c-b232-7088c76a2811 -wi-a----   1,00g
  metadata                             83d39199-d4e4-474c-b232-7088c76a2811 -wi-a---- 512,00m
  outbox                               83d39199-d4e4-474c-b232-7088c76a2811 -wi-a---- 128,00m
  lv_home                              vg_servvmadm9                        -wi-ao---  77,80g
  lv_root                              vg_servvmadm9                        -wi-ao---  50,00g
  lv_swap                              vg_servvmadm9                        -wi-ao---   7,83g
[root@serv-vm-adm9 vdsm]#
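One detail worth noting in this output: in VG 11a077c7-... the snapshot volume is still listed under its original name c50561d9-..., and there is no `_remove_me_...` entry, i.e. not the name the traceback earlier in the thread complains about. A tiny sanity check over the pasted output (plain string matching; the two sample rows are copied from the listing above):

```python
# Scan the pasted `lvs` output (first column = LV name) for the snapshot
# volume, under either its original UUID name or the "_remove_me_" rename.
lvs_output = """\
c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 11a077c7-658b-49bb-8596-a785109c24c9 -wi------ 100,00g
ids 11a077c7-658b-49bb-8596-a785109c24c9 -wi-ao--- 128,00m
"""
target = "c50561d9-c3ba-4366-b2bc-49bbfaa4cd23"
matches = [line.split()[0] for line in lvs_output.splitlines()
           if target in line.split()[0]]
# matches holds the original name only; there is no "_remove_me_" entry.
```

So the LV VDSM wants to shrink exists, just not under the renamed name it looks it up by.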
Also, please attach the whole vdsm.log; it's hard to read it this way (the lines are broken).
See attachment.

--
Nicolas Ecarnot
----- Original Message -----
From: "Nicolas Ecarnot" <nicolas@ecarnot.net>
To: "users" <users@ovirt.org>
Sent: Monday, January 6, 2014 12:16:52 PM
Subject: Re: [Users] Unable to delete a snapshot
Thank you Maor and Meital for your help.
It took me some time just to find WHERE to get the VDSM logs, as this operation concerns a VM that is shut down, so it is not directly managed by a specific host. I eventually understood I had to look on the SPM. Then, amongst the tons of logs, I think I've found the relevant lines.
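For anyone else wading through a vdsm.log this size: every log entry that belongs to a given SPM task is prefixed with the task UUID, so filtering on that prefix (the equivalent of `grep '^236b3c5a' vdsm.log`) isolates the relevant lines. A small illustration with shortened sample entries:

```python
# Keep only the vdsm.log entries that belong to one task, using the fact
# that task-owned entries start with the task UUID.
task = "236b3c5a-452a-4614-801a-c30cefbce87e"
sample = [
    task + "::ERROR::2014-01-06 10:06:14,407::task::850::TaskManager.Task::(_setError) ...",
    "Thread-4658::DEBUG::2014-01-06 10:06:09,422::blockSD::594::... (unrelated monitor entry)",
    task + "::DEBUG::2014-01-06 10:06:14,427::task::869::TaskManager.Task::(_run) ...",
]
relevant = [entry for entry in sample if entry.startswith(task)]
```

Entries from monitoring threads (Thread-*, Dummy-*) are dropped, which is exactly the noise being complained about here.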
I have to add that:
- (dumb as I may be,) I retried deleting the snapshot, so there are several deletion attempts in the logs.
- after the last attempt, the snapshot now appears as "BROKEN" in the manager, which doesn't sound good. I would love to hear from you that there is a way to recover it.
The vdsm log I found relevant is below.
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:07,626::utils::538::Storage.Misc.excCmd::(watchCmd) SUCCESS: <err> = []; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:07,627::volume::1152::Storage.Volume::(qemuConvert) (qemuConvert): COPY /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/11a077c7-658b-49bb-8596-a785109c24c9/images/69220da6-eeed-4435-aad0-7aa33f3a0d21/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 to /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/11a077c7-658b-49bb-8596-a785109c24c9/images/69220da6-eeed-4435-aad0-7aa33f3a0d21/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE DONE 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:07,628::blockVolume::418::Storage.Volume::(teardown) Tearing down volume 11a077c7-658b-49bb-8596-a785109c24c9/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE justme True 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:07,628::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE' 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:07,629::resourceManager::634::ResourceManager::(releaseResource) Released resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE' (0 active users) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:07,629::resourceManager::640::ResourceManager::(releaseResource) Resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE' is free, finding out if anyone is waiting for it. 
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:07,631::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvchange --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --autobackup n --available n 11a077c7-658b-49bb-8596-a785109c24c9/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,212::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,213::lvm::499::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,214::lvm::511::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,214::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE', Clearing records. 
236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:08,215::blockVolume::418::Storage.Volume::(teardown) Tearing down volume 11a077c7-658b-49bb-8596-a785109c24c9/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 justme False 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,215::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,216::resourceManager::634::ResourceManager::(releaseResource) Released resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (0 active users) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,216::resourceManager::640::ResourceManager::(releaseResource) Resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' is free, finding out if anyone is waiting for it. 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,217::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvchange --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --autobackup n --available n 11a077c7-658b-49bb-8596-a785109c24c9/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,581::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,582::lvm::499::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex 
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,583::lvm::511::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,583::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23', Clearing records. 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,584::lvm::415::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,585::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 11a077c7-658b-49bb-8596-a785109c24c9' (cwd None) Dummy-1808243::DEBUG::2014-01-06 10:06:08,862::storage_mailbox::727::Storage.Misc.excCmd::(_checkForMail) 'dd if=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,901::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,136::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:09,139::blockVolume::418::Storage.Volume::(teardown) 
Tearing down volume 11a077c7-658b-49bb-8596-a785109c24c9/506085b6-40e0-4176-a4df-9102857f51f2 justme False 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,142::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.506085b6-40e0-4176-a4df-9102857f51f2' 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,146::resourceManager::634::ResourceManager::(releaseResource) Released resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.506085b6-40e0-4176-a4df-9102857f51f2' (0 active users) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,149::resourceManager::640::ResourceManager::(releaseResource) Resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.506085b6-40e0-4176-a4df-9102857f51f2' is free, finding out if anyone is waiting for it. 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,164::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvchange --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --autobackup n --available n 11a077c7-658b-49bb-8596-a785109c24c9/506085b6-40e0-4176-a4df-9102857f51f2' (cwd None) Dummy-1808243::DEBUG::2014-01-06 10:06:09,318::storage_mailbox::727::Storage.Misc.excCmd::(_checkForMail) SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.342311 s, 3.0 MB/s\n'; <rc> = 0 Thread-4658::DEBUG::2014-01-06 10:06:09,422::blockSD::594::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata bs=4096 count=1' (cwd None) 
Thread-4658::DEBUG::2014-01-06 10:06:09,454::blockSD::594::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.000705734 s, 5.8 MB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,502::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,503::lvm::499::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,503::lvm::511::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,504::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.506085b6-40e0-4176-a4df-9102857f51f2', Clearing records. 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,504::lvm::415::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,506::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 11a077c7-658b-49bb-8596-a785109c24c9' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,812::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,940::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex
236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:09,941::blockVolume::349::Storage.Volume::(rename) Rename volume c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 as _remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,944::task::736::TaskManager.Task::(_save) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_save: orig /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e temp /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e.temp
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,975::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvrename --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --autobackup n 11a077c7-658b-49bb-8596-a785109c24c9 c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 _remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (cwd None)
Thread-2280759::DEBUG::2014-01-06 10:06:10,207::blockSD::594::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/dev/83d39199-d4e4-474c-b232-7088c76a2811/metadata bs=4096 count=1' (cwd None)
Thread-2280759::DEBUG::2014-01-06 10:06:10,287::blockSD::594::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.0114936 s, 356 kB/s\n'; <rc> = 0
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,313::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: This metadata update is NOT backed up\n'; <rc> = 0
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,317::lvm::415::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,327::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 11a077c7-658b-49bb-8596-a785109c24c9/_remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (cwd None)
Thread-281995::DEBUG::2014-01-06 10:06:10,557::fileSD::238::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/serv-fs-adm1.sdis.isere.fr:_home_admin_vmexport/a7c189dd-4980-49ab-ad86-65a14235cbe4/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-281995::DEBUG::2014-01-06 10:06:10,585::fileSD::238::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n378 bytes (378 B) copied, 0.000281187 s, 1.3 MB/s\n'; <rc> = 0
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,635::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,640::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex
236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:10,640::blockVolume::349::Storage.Volume::(rename) Rename volume c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE as c50561d9-c3ba-4366-b2bc-49bbfaa4cd23
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,642::task::736::TaskManager.Task::(_save) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_save: orig /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e temp /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e.temp
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,660::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvrename --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --autobackup n 11a077c7-658b-49bb-8596-a785109c24c9 c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (cwd None)
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,983::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: This metadata update is NOT backed up\n'; <rc> = 0
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,985::lvm::415::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,986::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 11a077c7-658b-49bb-8596-a785109c24c9/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (cwd None)
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,284::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,289::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,291::task::736::TaskManager.Task::(_save) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_save: orig /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e temp /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e.temp
236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:11,304::volume::669::Storage.Volume::(setLegality) sdUUID=11a077c7-658b-49bb-8596-a785109c24c9 imgUUID=69220da6-eeed-4435-aad0-7aa33f3a0d21 volUUID = _remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 legality = ILLEGAL
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,313::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=4 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None)
Dummy-1808243::DEBUG::2014-01-06 10:06:11,346::storage_mailbox::727::Storage.Misc.excCmd::(_checkForMail) 'dd if=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None)
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,372::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.000671794 s, 762 kB/s\n'; <rc> = 0
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,373::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.000671794 s, 762 kB/s'], size: 512
236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:11,386::volume::669::Storage.Volume::(setLegality) sdUUID=11a077c7-658b-49bb-8596-a785109c24c9 imgUUID=69220da6-eeed-4435-aad0-7aa33f3a0d21 volUUID = 506085b6-40e0-4176-a4df-9102857f51f2 legality = ILLEGAL
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,438::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=13 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None)
Thread-3642604::DEBUG::2014-01-06 10:06:11,474::BindingXMLRPC::177::vds::(wrapper) client [192.168.39.31]
Thread-3642604::DEBUG::2014-01-06 10:06:11,484::task::579::TaskManager.Task::(_updateState) Task=`d59a176c-44d8-489d-8ab5-93de64eb1660`::moving from state init -> state preparing
Thread-3642604::INFO::2014-01-06 10:06:11,490::logUtils::44::dispatcher::(wrapper) Run and protect: getSpmStatus(spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', options=None)
Thread-3642604::INFO::2014-01-06 10:06:11,497::logUtils::47::dispatcher::(wrapper) Run and protect: getSpmStatus, Return response: {'spm_st': {'spmId': 2, 'spmStatus': 'SPM', 'spmLver': 15}}
Thread-3642604::DEBUG::2014-01-06 10:06:11,503::task::1168::TaskManager.Task::(prepare) Task=`d59a176c-44d8-489d-8ab5-93de64eb1660`::finished: {'spm_st': {'spmId': 2, 'spmStatus': 'SPM', 'spmLver': 15}}
Thread-3642604::DEBUG::2014-01-06 10:06:11,507::task::579::TaskManager.Task::(_updateState) Task=`d59a176c-44d8-489d-8ab5-93de64eb1660`::moving from state preparing -> state finished
Thread-3642604::DEBUG::2014-01-06 10:06:11,510::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,510::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.000835653 s, 613 kB/s\n'; <rc> = 0
Thread-3642604::DEBUG::2014-01-06 10:06:11,514::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,515::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.000835653 s, 613 kB/s'], size: 512
Thread-3642604::DEBUG::2014-01-06 10:06:11,519::task::974::TaskManager.Task::(_decref) Task=`d59a176c-44d8-489d-8ab5-93de64eb1660`::ref 0 aborting False
Thread-3642605::DEBUG::2014-01-06 10:06:11,546::BindingXMLRPC::177::vds::(wrapper) client [192.168.39.31]
Thread-3642605::DEBUG::2014-01-06 10:06:11,553::task::579::TaskManager.Task::(_updateState) Task=`bfdcfb66-54ce-4f83-a42a-9049bb11ec65`::moving from state init -> state preparing
Thread-3642605::INFO::2014-01-06 10:06:11,558::logUtils::44::dispatcher::(wrapper) Run and protect: getStoragePoolInfo(spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', options=None)
Thread-3642605::DEBUG::2014-01-06 10:06:11,568::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.5849b030-626e-47cb-ad90-3ce782d831b3`ReqID=`3e5429cc-9cc9-4259-a51c-0dd76cb6b827`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '2560' at 'getStoragePoolInfo'
Thread-3642605::DEBUG::2014-01-06 10:06:11,572::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' for lock type 'shared'
Thread-3642605::DEBUG::2014-01-06 10:06:11,578::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' is free. Now locking as 'shared' (1 active user)
Thread-3642605::DEBUG::2014-01-06 10:06:11,583::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.5849b030-626e-47cb-ad90-3ce782d831b3`ReqID=`3e5429cc-9cc9-4259-a51c-0dd76cb6b827`::Granted request
Thread-3642605::DEBUG::2014-01-06 10:06:11,593::task::811::TaskManager.Task::(resourceAcquired) Task=`bfdcfb66-54ce-4f83-a42a-9049bb11ec65`::_resourcesAcquired: Storage.5849b030-626e-47cb-ad90-3ce782d831b3 (shared)
Thread-3642605::DEBUG::2014-01-06 10:06:11,597::task::974::TaskManager.Task::(_decref) Task=`bfdcfb66-54ce-4f83-a42a-9049bb11ec65`::ref 1 aborting False
236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:11,610::image::881::Storage.Image::(removeSubChain) Remove volume _remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 from image 69220da6-eeed-4435-aad0-7aa33f3a0d21
236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:11,619::blockVolume::219::Storage.Volume::(delete) Request to delete LV _remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 of image 69220da6-eeed-4435-aad0-7aa33f3a0d21 in VG 11a077c7-658b-49bb-8596-a785109c24c9
Thread-3642605::INFO::2014-01-06 10:06:11,623::logUtils::47::dispatcher::(wrapper) Run and protect: getStoragePoolInfo, Return response: {'info': {'spm_id': 2, 'master_uuid': '11a077c7-658b-49bb-8596-a785109c24c9', 'name': 'Etat-Major2', 'version': '3', 'domains': u'78fe6648-1b2a-438c-a704-eb8ac9fc438b:Active,83d39199-d4e4-474c-b232-7088c76a2811:Active,a7c189dd-4980-49ab-ad86-65a14235cbe4:Active,1429ffe2-4137-416c-bb38-63fd73f4bcc1:Active,11a077c7-658b-49bb-8596-a785109c24c9:Active', 'pool_status': 'connected', 'isoprefix': '/rhev/data-center/mnt/serv-mvm-adm2.sdis.isere.fr:_home_iso/78fe6648-1b2a-438c-a704-eb8ac9fc438b/images/11111111-1111-1111-1111-111111111111', 'type': 'ISCSI', 'master_ver': 2, 'lver': 15}, 'dominfo': {u'78fe6648-1b2a-438c-a704-eb8ac9fc438b': {'status': u'Active', 'diskfree': '545295171584', 'alerts': [], 'version': 0, 'disktotal': '589474824192'}, u'83d39199-d4e4-474c-b232-7088c76a2811': {'status': u'Active', 'diskfree': '1011196362752', 'alerts': [], 'version': 3, 'disktotal': '1099243192320'}, u'a7c189dd-4980-49ab-ad86-65a14235cbe4': {'status': u'Active', 'diskfree': '1023138594816', 'alerts': [], 'version': 0, 'disktotal': '1098434740224'}, u'1429ffe2-4137-416c-bb38-63fd73f4bcc1': {'status': u'Active', 'diskfree': '223204081664', 'alerts': [], 'version': 3, 'disktotal': '1099243192320'}, u'11a077c7-658b-49bb-8596-a785109c24c9': {'status': u'Active', 'diskfree': '230049185792', 'alerts': [], 'version': 3, 'disktotal': '1099243192320'}}}
236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:11,625::volume::669::Storage.Volume::(setLegality) sdUUID=11a077c7-658b-49bb-8596-a785109c24c9 imgUUID=69220da6-eeed-4435-aad0-7aa33f3a0d21 volUUID = _remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 legality = ILLEGAL
Thread-3642605::DEBUG::2014-01-06 10:06:11,627::task::1168::TaskManager.Task::(prepare) Task=`bfdcfb66-54ce-4f83-a42a-9049bb11ec65`::finished: {'info': {'spm_id': 2, 'master_uuid': '11a077c7-658b-49bb-8596-a785109c24c9', 'name': 'Etat-Major2', 'version': '3', 'domains': u'78fe6648-1b2a-438c-a704-eb8ac9fc438b:Active,83d39199-d4e4-474c-b232-7088c76a2811:Active,a7c189dd-4980-49ab-ad86-65a14235cbe4:Active,1429ffe2-4137-416c-bb38-63fd73f4bcc1:Active,11a077c7-658b-49bb-8596-a785109c24c9:Active', 'pool_status': 'connected', 'isoprefix': '/rhev/data-center/mnt/serv-mvm-adm2.sdis.isere.fr:_home_iso/78fe6648-1b2a-438c-a704-eb8ac9fc438b/images/11111111-1111-1111-1111-111111111111', 'type': 'ISCSI', 'master_ver': 2, 'lver': 15}, 'dominfo': {u'78fe6648-1b2a-438c-a704-eb8ac9fc438b': {'status': u'Active', 'diskfree': '545295171584', 'alerts': [], 'version': 0, 'disktotal': '589474824192'}, u'83d39199-d4e4-474c-b232-7088c76a2811': {'status': u'Active', 'diskfree': '1011196362752', 'alerts': [], 'version': 3, 'disktotal': '1099243192320'}, u'a7c189dd-4980-49ab-ad86-65a14235cbe4': {'status': u'Active', 'diskfree': '1023138594816', 'alerts': [], 'version': 0, 'disktotal': '1098434740224'}, u'1429ffe2-4137-416c-bb38-63fd73f4bcc1': {'status': u'Active', 'diskfree': '223204081664', 'alerts': [], 'version': 3, 'disktotal': '1099243192320'}, u'11a077c7-658b-49bb-8596-a785109c24c9': {'status': u'Active', 'diskfree': '230049185792', 'alerts': [], 'version': 3, 'disktotal': '1099243192320'}}}
Thread-3642605::DEBUG::2014-01-06 10:06:11,632::task::579::TaskManager.Task::(_updateState) Task=`bfdcfb66-54ce-4f83-a42a-9049bb11ec65`::moving from state preparing -> state finished
Thread-3642605::DEBUG::2014-01-06 10:06:11,636::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.5849b030-626e-47cb-ad90-3ce782d831b3': < ResourceRef 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3', isValid: 'True' obj: 'None'>}
Thread-3642605::DEBUG::2014-01-06 10:06:11,640::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-3642605::DEBUG::2014-01-06 10:06:11,647::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3'
Thread-3642605::DEBUG::2014-01-06 10:06:11,654::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' (0 active users)
Thread-3642605::DEBUG::2014-01-06 10:06:11,657::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' is free, finding out if anyone is waiting for it.
Thread-3642605::DEBUG::2014-01-06 10:06:11,661::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3', Clearing records.
Thread-3642605::DEBUG::2014-01-06 10:06:11,669::task::974::TaskManager.Task::(_decref) Task=`bfdcfb66-54ce-4f83-a42a-9049bb11ec65`::ref 0 aborting False
Thread-3642606::DEBUG::2014-01-06 10:06:11,725::BindingXMLRPC::177::vds::(wrapper) client [192.168.39.31]
Thread-3642606::DEBUG::2014-01-06 10:06:11,730::task::579::TaskManager.Task::(_updateState) Task=`6130a21c-b843-4faa-83a7-2b1f956217f8`::moving from state init -> state preparing
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,731::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=4 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None)
Thread-3642606::INFO::2014-01-06 10:06:11,756::logUtils::44::dispatcher::(wrapper) Run and protect: getAllTasksStatuses(spUUID=None, options=None)
Thread-3642606::DEBUG::2014-01-06 10:06:11,761::taskManager::102::TaskManager::(getAllTasksStatuses) Entry.
Thread-3642606::DEBUG::2014-01-06 10:06:11,765::taskManager::93::TaskManager::(getTaskStatus) Entry. taskID: 236b3c5a-452a-4614-801a-c30cefbce87e
Thread-3642606::DEBUG::2014-01-06 10:06:11,768::taskManager::96::TaskManager::(getTaskStatus) Return. Response: {'code': 0, 'message': 'Task is initializing', 'taskState': 'running', 'taskResult': '', 'taskID': '236b3c5a-452a-4614-801a-c30cefbce87e'}
Thread-3642606::DEBUG::2014-01-06 10:06:11,772::taskManager::113::TaskManager::(getAllTasksStatuses) Return: {'236b3c5a-452a-4614-801a-c30cefbce87e': {'code': 0, 'message': 'Task is initializing', 'taskState': 'running', 'taskResult': '', 'taskID': '236b3c5a-452a-4614-801a-c30cefbce87e'}}
Thread-3642606::INFO::2014-01-06 10:06:11,776::logUtils::47::dispatcher::(wrapper) Run and protect: getAllTasksStatuses, Return response: {'allTasksStatus': {'236b3c5a-452a-4614-801a-c30cefbce87e': {'code': 0, 'message': 'Task is initializing', 'taskState': 'running', 'taskResult': '', 'taskID': '236b3c5a-452a-4614-801a-c30cefbce87e'}}}
Thread-3642606::DEBUG::2014-01-06 10:06:11,779::task::1168::TaskManager.Task::(prepare) Task=`6130a21c-b843-4faa-83a7-2b1f956217f8`::finished: {'allTasksStatus': {'236b3c5a-452a-4614-801a-c30cefbce87e': {'code': 0, 'message': 'Task is initializing', 'taskState': 'running', 'taskResult': '', 'taskID': '236b3c5a-452a-4614-801a-c30cefbce87e'}}}
Thread-3642606::DEBUG::2014-01-06 10:06:11,784::task::579::TaskManager.Task::(_updateState) Task=`6130a21c-b843-4faa-83a7-2b1f956217f8`::moving from state preparing -> state finished
Thread-3642606::DEBUG::2014-01-06 10:06:11,787::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-3642606::DEBUG::2014-01-06 10:06:11,791::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-3642606::DEBUG::2014-01-06 10:06:11,796::task::974::TaskManager.Task::(_decref) Task=`6130a21c-b843-4faa-83a7-2b1f956217f8`::ref 0 aborting False
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,803::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.00039345 s, 1.3 MB/s\n'; <rc> = 0
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,809::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.00039345 s, 1.3 MB/s'], size: 512
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,885::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvchange --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --autobackup n --deltag PU_506085b6-40e0-4176-a4df-9102857f51f2 --addtag PU_00000000-0000-0000-0000-000000000000 11a077c7-658b-49bb-8596-a785109c24c9/_remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (cwd None)
Dummy-1808243::DEBUG::2014-01-06 10:06:12,002::storage_mailbox::727::Storage.Misc.excCmd::(_checkForMail) SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.566167 s, 1.8 MB/s\n'; <rc> = 0
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,204::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: This metadata update is NOT backed up\n WARNING: This metadata update is NOT backed up\n'; <rc> = 0
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,204::lvm::499::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,205::lvm::511::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,206::lvm::415::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,207::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 11a077c7-658b-49bb-8596-a785109c24c9' (cwd None)
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,511::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,580::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,590::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=4 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None)
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,616::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.00049159 s, 1.0 MB/s\n'; <rc> = 0
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,617::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.00049159 s, 1.0 MB/s'], size: 512
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,632::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=13 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None)
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,652::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.000381403 s, 1.3 MB/s\n'; <rc> = 0
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,653::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.000381403 s, 1.3 MB/s'], size: 512
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,654::lvm::415::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,655::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 11a077c7-658b-49bb-8596-a785109c24c9' (cwd None)
VM Channels Listener::DEBUG::2014-01-06 10:06:12,909::vmChannels::91::vds::(_handle_timeouts) Timeout on fileno 58.
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,959::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,023::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,031::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=13 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None)
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,055::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.000834663 s, 613 kB/s\n'; <rc> = 0
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,056::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.000834663 s, 613 kB/s'], size: 512
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,065::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvremove --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " -f --autobackup n 11a077c7-658b-49bb-8596-a785109c24c9/_remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (cwd None)
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,370::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: This metadata update is NOT backed up\n'; <rc> = 0
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,371::lvm::489::OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,372::lvm::491::OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex
236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:13,381::image::881::Storage.Image::(removeSubChain) Remove volume 506085b6-40e0-4176-a4df-9102857f51f2 from image 69220da6-eeed-4435-aad0-7aa33f3a0d21
236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:13,381::blockVolume::219::Storage.Volume::(delete) Request to delete LV 506085b6-40e0-4176-a4df-9102857f51f2 of image 69220da6-eeed-4435-aad0-7aa33f3a0d21 in VG 11a077c7-658b-49bb-8596-a785109c24c9
236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:13,382::volume::669::Storage.Volume::(setLegality) sdUUID=11a077c7-658b-49bb-8596-a785109c24c9 imgUUID=69220da6-eeed-4435-aad0-7aa33f3a0d21 volUUID = 506085b6-40e0-4176-a4df-9102857f51f2 legality = ILLEGAL
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,389::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=13 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None)
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,414::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.000504094 s, 1.0 MB/s\n'; <rc> = 0
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,414::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.000504094 s, 1.0 MB/s'], size: 512
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,430::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=13 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None)
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,450::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.0003572 s, 1.4 MB/s\n'; <rc> = 0
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,451::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.0003572 s, 1.4 MB/s'], size: 512
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,460::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvremove --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " -f --autobackup n 11a077c7-658b-49bb-8596-a785109c24c9/506085b6-40e0-4176-a4df-9102857f51f2' (cwd None)
Thread-640001::DEBUG::2014-01-06 10:06:13,672::blockSD::594::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/dev/1429ffe2-4137-416c-bb38-63fd73f4bcc1/metadata bs=4096 count=1' (cwd None)
Thread-640001::DEBUG::2014-01-06 10:06:13,705::blockSD::594::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.000955526 s, 4.3 MB/s\n'; <rc> = 0
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,774::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: This metadata update is NOT backed up\n'; <rc> = 0
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,775::lvm::489::OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,775::lvm::491::OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,784::lvm::415::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,785::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 11a077c7-658b-49bb-8596-a785109c24c9' (cwd None)
Dummy-1808243::DEBUG::2014-01-06 10:06:14,027::storage_mailbox::727::Storage.Misc.excCmd::(_checkForMail) 'dd if=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None)
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,143::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,355::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex
236b3c5a-452a-4614-801a-c30cefbce87e::WARNING::2014-01-06 10:06:14,358::lvm::594::Storage.LVM::(getLv) lv: _remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 not found in lvs vg: 11a077c7-658b-49bb-8596-a785109c24c9 response
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,362::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource '11a077c7-658b-49bb-8596-a785109c24c9_imageNS.69220da6-eeed-4435-aad0-7aa33f3a0d21'
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,367::resourceManager::634::ResourceManager::(releaseResource) Released resource '11a077c7-658b-49bb-8596-a785109c24c9_imageNS.69220da6-eeed-4435-aad0-7aa33f3a0d21' (0 active users)
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,369::resourceManager::640::ResourceManager::(releaseResource) Resource '11a077c7-658b-49bb-8596-a785109c24c9_imageNS.69220da6-eeed-4435-aad0-7aa33f3a0d21' is free, finding out if anyone is waiting for it.
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,374::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.506085b6-40e0-4176-a4df-9102857f51f2'
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,377::resourceManager::634::ResourceManager::(releaseResource) Released resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.506085b6-40e0-4176-a4df-9102857f51f2' (0 active users)
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,380::resourceManager::640::ResourceManager::(releaseResource) Resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.506085b6-40e0-4176-a4df-9102857f51f2' is free, finding out if anyone is waiting for it.
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,383::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.506085b6-40e0-4176-a4df-9102857f51f2', Clearing records.
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,389::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,392::resourceManager::634::ResourceManager::(releaseResource) Released resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (0 active users) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,395::resourceManager::640::ResourceManager::(releaseResource) Resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' is free, finding out if anyone is waiting for it. 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,398::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23', Clearing records. 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,401::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource '11a077c7-658b-49bb-8596-a785109c24c9_imageNS.69220da6-eeed-4435-aad0-7aa33f3a0d21', Clearing records. 
236b3c5a-452a-4614-801a-c30cefbce87e::ERROR::2014-01-06 10:06:14,407::task::850::TaskManager.Task::(_setError) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 857, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/storage/task.py", line 318, in run
    return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/share/vdsm/storage/securable.py", line 68, in wrapper
    return f(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 1937, in mergeSnapshots
    sdUUID, vmUUID, imgUUID, ancestor, successor, postZero)
  File "/usr/share/vdsm/storage/image.py", line 1162, in merge
    srcVol.shrinkToOptimalSize()
  File "/usr/share/vdsm/storage/blockVolume.py", line 315, in shrinkToOptimalSize
    volParams = self.getVolumeParams()
  File "/usr/share/vdsm/storage/volume.py", line 1008, in getVolumeParams
    volParams['imgUUID'] = self.getImage()
  File "/usr/share/vdsm/storage/blockVolume.py", line 494, in getImage
    return self.getVolumeTag(TAG_PREFIX_IMAGE)
  File "/usr/share/vdsm/storage/blockVolume.py", line 464, in getVolumeTag
    return _getVolumeTag(self.sdUUID, self.volUUID, tagPrefix)
  File "/usr/share/vdsm/storage/blockVolume.py", line 662, in _getVolumeTag
    tags = lvm.getLV(sdUUID, volUUID).tags
  File "/usr/share/vdsm/storage/lvm.py", line 851, in getLV
    raise se.LogicalVolumeDoesNotExistError("%s/%s" % (vgName, lvName))
LogicalVolumeDoesNotExistError: Logical volume does not exist: ('11a077c7-658b-49bb-8596-a785109c24c9/_remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23',)
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,427::task::869::TaskManager.Task::(_run) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::Task._run: 236b3c5a-452a-4614-801a-c30cefbce87e () {} failed - stopping task
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,430::task::1194::TaskManager.Task::(stop) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::stopping in state running (force False)
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,433::task::974::TaskManager.Task::(_decref) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::ref 1 aborting True 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,436::task::900::TaskManager.Task::(_runJobs) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::aborting: Task is aborted: 'Logical volume does not exist' - code 610 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,440::task::974::TaskManager.Task::(_decref) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::ref 0 aborting True 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,443::task::909::TaskManager.Task::(_doAbort) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::Task._doAbort: force False 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,445::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,449::task::579::TaskManager.Task::(_updateState) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::moving from state running -> state aborting 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,457::task::736::TaskManager.Task::(_save) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_save: orig /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e temp /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e.temp 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,509::task::534::TaskManager.Task::(__state_aborting) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_aborting: recover policy auto 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,512::task::579::TaskManager.Task::(_updateState) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::moving from state aborting -> state racquiring 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 
10:06:14,520::task::736::TaskManager.Task::(_save) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_save: orig /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e temp /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e.temp 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,574::task::579::TaskManager.Task::(_updateState) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::moving from state racquiring -> state recovering 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,581::task::736::TaskManager.Task::(_save) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_save: orig /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e temp /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e.temp 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,633::task::782::TaskManager.Task::(_recover) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_recover 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,635::task::789::TaskManager.Task::(_recover) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::running recovery None 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,638::task::770::TaskManager.Task::(_recoverDone) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::Recover Done: state recovering 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,641::task::579::TaskManager.Task::(_updateState) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::moving from state recovering -> state recovered 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,649::task::736::TaskManager.Task::(_save) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_save: orig /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e temp 
/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e.temp 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,697::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.11a077c7-658b-49bb-8596-a785109c24c9': < ResourceRef 'Storage.11a077c7-658b-49bb-8596-a785109c24c9', isValid: 'True' obj: 'None'>} 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,700::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,706::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.11a077c7-658b-49bb-8596-a785109c24c9' 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,709::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.11a077c7-658b-49bb-8596-a785109c24c9' (0 active users) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,712::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.11a077c7-658b-49bb-8596-a785109c24c9' is free, finding out if anyone is waiting for it. 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,714::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.11a077c7-658b-49bb-8596-a785109c24c9', Clearing records. 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,720::threadPool::57::Misc.ThreadPool::(setRunningTask) Number of running tasks: 0 Dummy-1808243::DEBUG::2014-01-06 10:06:14,788::storage_mailbox::727::Storage.Misc.excCmd::(_checkForMail) SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.471388 s, 2.2 MB/s\n'; <rc> = 0
On 06/01/2014 10:30, Maor Lipchuk wrote:
Hi Nicolas, can you please also attach the VDSM log?
Thanks, Maor
On 01/06/2014 11:25 AM, Nicolas Ecarnot wrote:
Hi,
With our oVirt 3.3, I created a snapshot 3 weeks ago on a VM I had properly shut down. It has been running fine since. Today, after shutting it down properly again, I tried to delete the snapshot and I got an error:
"Failed to delete snapshot 'blahblahbla' for VM 'myVM'."
The disk is thin provisioned and accessed via VirtIO, nothing special.
The log below comes from the manager.
I hope someone can help us, because this server is quite critical.
Thank you.
2014-01-06 10:10:58,826 INFO [org.ovirt.engine.core.bll.RemoveSnapshotCommand] (ajp--127.0.0.1-8702-8) Lock Acquired to object EngineLock [exclu siveLocks= key: cb953dc1-c796-457a-99a1-0e54f1c0c338 value: VM , sharedLocks= ] 2014-01-06 10:10:58,837 INFO [org.ovirt.engine.core.bll.RemoveSnapshotCommand] (ajp--127.0.0.1-8702-8) Running command: RemoveSnapshotCommand internal: false. Entities affected : ID: cb953dc1-c796-457a-99a1-0e54f1c0c338 Type: VM 2014-01-06 10:10:58,840 INFO [org.ovirt.engine.core.bll.RemoveSnapshotCommand] (ajp--127.0.0.1-8702-8) Lock freed to object EngineLock [exclusiveLocks= key: cb953dc1-c796-457a-99a1-0e54f1c0c338 value: VM , sharedLocks= ] 2014-01-06 10:10:58,844 INFO [org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskCommand] (ajp--127.0.0.1-8702-8) Running command: RemoveSnapshotSingleDiskCommand internal: true. Entities affected : ID: 00000000-0000-0000-0000-000000000000 Type: Storage 2014-01-06 10:10:58,848 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.MergeSnapshotsVDSCommand] (ajp--127.0.0.1-8702-8) START, MergeSnapshotsVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false, storageDomainId = 11a077c7-658b-49bb-8596-a785109c24c9, imageGroupId = 69220da6-eeed-4435-aad0-7aa33f3a0d21, imageId = 506085b6-40e0-4176-a4df-9102857f51f2, imageId2 = c50561d9-c3ba-4366-b2bc-49bbfaa4cd23, vmId = cb953dc1-c796-457a-99a1-0e54f1c0c338, postZero = false), log id: 22d6503b 2014-01-06 10:10:59,511 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.MergeSnapshotsVDSCommand] (ajp--127.0.0.1-8702-8) FINISH, MergeSnapshotsVDSCommand, log id: 22d6503b 2014-01-06 10:10:59,518 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (ajp--127.0.0.1-8702-8) CommandAsyncTask::Adding CommandMultiAsyncTasks object for command b402868f-b7f9-4c0e-a6fd-bdc51ff49952 2014-01-06 10:10:59,519 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (ajp--127.0.0.1-8702-8) CommandMultiAsyncTasks::AttachTask: Attaching task 
6caec3bc-fc66-42be-a642-7733fc033103 to command b402868f-b7f9-4c0e-a6fd-bdc51ff49952. 2014-01-06 10:10:59,525 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (ajp--127.0.0.1-8702-8) Adding task 6caec3bc-fc66-42be-a642-7733fc033103 (Parent Command RemoveSnapshot, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters), polling hasn't started yet.. 2014-01-06 10:10:59,530 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-8) Correlation ID: 3b3e6fb1, Job ID: 53867ef7-d767-45d2-b446-e5d3f5584a19, Call Stack: null, Custom Event ID: -1, Message: Snapshot 'Maj 47 60 vers 5.2.3' deletion for VM 'uc-674' was initiated by necarnot. 2014-01-06 10:10:59,532 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (ajp--127.0.0.1-8702-8) BaseAsyncTask::StartPollingTask: Starting to poll task 6caec3bc-fc66-42be-a642-7733fc033103. 2014-01-06 10:11:01,811 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-20) Polling and updating Async Tasks: 2 tasks, 1 tasks to poll now 2014-01-06 10:11:01,824 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-20) Failed in HSMGetAllTasksStatusesVDS method 2014-01-06 10:11:01,825 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-20) Error code GeneralException and error message VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = '506085b6-40e0-4176-a4df-9102857f51f2' 2014-01-06 10:11:01,826 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-20) SPMAsyncTask::PollTask: Polling task 6caec3bc-fc66-42be-a642-7733fc033103 (Parent Command RemoveSnapshot, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned status finished, result 'cleanSuccess'. 
2014-01-06 10:11:01,829 ERROR [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-20) BaseAsyncTask::LogEndTaskFailure: Task 6caec3bc-fc66-42be-a642-7733fc033103 (Parent Command RemoveSnapshot, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended with failure:^M -- Result: cleanSuccess^M -- Message: VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = '506085b6-40e0-4176-a4df-9102857f51f2',^M -- Exception: VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = '506085b6-40e0-4176-a4df-9102857f51f2' 2014-01-06 10:11:01,832 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (DefaultQuartzScheduler_Worker-20) CommandAsyncTask::EndActionIfNecessary: All tasks of command b402868f-b7f9-4c0e-a6fd-bdc51ff49952 has ended -> executing EndAction 2014-01-06 10:11:01,833 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (DefaultQuartzScheduler_Worker-20) CommandAsyncTask::EndAction: Ending action for 1 tasks (command ID: b402868f-b7f9-4c0e-a6fd-bdc51ff49952): calling EndAction . 
2014-01-06 10:11:01,834 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-27) CommandAsyncTask::EndCommandAction [within thread] context: Attempting to EndAction RemoveSnapshot, executionIndex: 0 2014-01-06 10:11:01,839 ERROR [org.ovirt.engine.core.bll.RemoveSnapshotCommand] (pool-6-thread-27) [3b3e6fb1] Ending command with failure: org.ovirt.engine.core.bll.RemoveSnapshotCommand 2014-01-06 10:11:01,844 ERROR [org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskCommand] (pool-6-thread-27) [33fa2a5d] Ending command with failure: org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskCommand 2014-01-06 10:11:01,848 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (pool-6-thread-27) Correlation ID: 3b3e6fb1, Job ID: 53867ef7-d767-45d2-b446-e5d3f5584a19, Call Stack: null, Custom Event ID: -1, Message: Failed to delete snapshot 'Maj 47 60 vers 5.2.3' for VM 'uc-674'. 2014-01-06 10:11:01,850 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-27) CommandAsyncTask::HandleEndActionResult [within thread]: EndAction for action type RemoveSnapshot completed, handling the result. 2014-01-06 10:11:01,851 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-27) CommandAsyncTask::HandleEndActionResult [within thread]: EndAction for action type RemoveSnapshot succeeded, clearing tasks. 
2014-01-06 10:11:01,853 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (pool-6-thread-27) SPMAsyncTask::ClearAsyncTask: Attempting to clear task 6caec3bc-fc66-42be-a642-7733fc033103 2014-01-06 10:11:01,853 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (pool-6-thread-27) START, SPMClearTaskVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false, taskId = 6caec3bc-fc66-42be-a642-7733fc033103), log id: 424e7cf 2014-01-06 10:11:01,873 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (pool-6-thread-27) START, HSMClearTaskVDSCommand(HostName = serv-vm-adm9, HostId = ba48edd4-c528-4832-bda4-4ab66245df24, taskId=6caec3bc-fc66-42be-a642-7733fc033103), log id: 12eec929 2014-01-06 10:11:01,884 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (pool-6-thread-27) FINISH, HSMClearTaskVDSCommand, log id: 12eec929 2014-01-06 10:11:01,885 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (pool-6-thread-27) FINISH, SPMClearTaskVDSCommand, log id: 424e7cf 2014-01-06 10:11:01,886 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (pool-6-thread-27) BaseAsyncTask::RemoveTaskFromDB: Removed task 6caec3bc-fc66-42be-a642-7733fc033103 from DataBase 2014-01-06 10:11:01,887 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-27) CommandAsyncTask::HandleEndActionResult [within thread]: Removing CommandMultiAsyncTasks object for entity b402868f-b7f9-4c0e-a6fd-bdc51ff49952 2014-01-06 10:11:07,703 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-9) Setting new tasks map. The map contains now 1 tasks 2014-01-06 10:12:07,703 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-99) Setting new tasks map. The map contains now 0 tasks 2014-01-06 10:12:07,704 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-99) Cleared all tasks of pool 5849b030-626e-47cb-ad90-3ce782d831b3.
--
Nicolas Ecarnot
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
-- Nicolas Ecarnot

I got the attachment. This is the relevant error:

6caec3bc-fc66-42be-a642-7733fc033103::ERROR::2014-01-06 10:13:21,068::task::850::TaskManager.Task::(_setError) Task=`6caec3bc-fc66-42be-a642-7733fc033103`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 857, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/storage/task.py", line 318, in run
    return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/share/vdsm/storage/securable.py", line 68, in wrapper
    return f(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 1937, in mergeSnapshots
    sdUUID, vmUUID, imgUUID, ancestor, successor, postZero)
  File "/usr/share/vdsm/storage/image.py", line 1101, in merge
    dstVol = vols[ancestor]
KeyError: '506085b6-40e0-4176-a4df-9102857f51f2'

I don't know why this happens, so you'll have to wait for someone else to answer.

----- Original Message -----
From: "Nicolas Ecarnot" <nicolas@ecarnot.net> To: "users" <users@ovirt.org> Sent: Monday, January 6, 2014 4:22:57 PM Subject: Re: [Users] Unable to delete a snapshot
On 06/01/2014 12:51, Nicolas Ecarnot wrote:
Also, please attach the whole vdsm.log; it's hard to read it this way (the lines are broken).
See attachment.
Actually, I don't know if this mailing list allows attachments?
--
Nicolas Ecarnot
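The KeyError Meital points at can be reproduced in miniature. The snippet below is an illustrative sketch, not vdsm code: the one-entry volume dict is an assumption pieced together from the two tracebacks in this thread (the earlier failed merge at 10:06 already deleted LV 506085b6-..., so on retry it is no longer in the image's volume dict and the lookup `vols[ancestor]` raises a bare KeyError).

```python
# Illustrative sketch only -- not vdsm code. image.py builds a dict of the
# image's volumes keyed by UUID, then indexes it with the ancestor UUID the
# engine recorded. If an earlier (failed) merge already removed that LV,
# the key is simply absent and a bare KeyError escapes, exactly as logged.
ancestor = "506085b6-40e0-4176-a4df-9102857f51f2"

# State after the first failure: only the renamed successor volume is left.
vols = {"c50561d9-c3ba-4366-b2bc-49bbfaa4cd23": "<volume object>"}

try:
    dst_vol = vols[ancestor]
except KeyError as err:
    print("KeyError:", err)  # -> KeyError: '506085b6-40e0-4176-a4df-9102857f51f2'
```

In other words, the KeyError is a symptom of the earlier half-completed merge, not the root cause.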

Hi Nicolas,

I think the initial problem started at 10:06, when VDSM tried to clear the records of the ancestor volume c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 (see [1]).

Looking at Bugzilla, it could be related to https://bugzilla.redhat.com/1029069 (based on the exception described at https://bugzilla.redhat.com/show_bug.cgi?id=1029069#c1). The issue there was fixed by an upgrade to 3.3.1 (as Sander mentioned earlier on the mailing list). Could you give it a try and check whether that works for you?

Also, it would be great if you could open a bug with the full VDSM and engine logs and the list of LVs.

Regards,
Maor

[1] 236b3c5a-452a-4614-801a-c30cefbce87e::ERROR::2014-01-06 10:06:14,407::task::850::TaskManager.Task::(_setError) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 857, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/storage/task.py", line 318, in run
    return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/share/vdsm/storage/securable.py", line 68, in wrapper
    return f(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 1937, in mergeSnapshots
    sdUUID, vmUUID, imgUUID, ancestor, successor, postZero)
  File "/usr/share/vdsm/storage/image.py", line 1162, in merge
    srcVol.shrinkToOptimalSize()
  File "/usr/share/vdsm/storage/blockVolume.py", line 315, in shrinkToOptimalSize
    volParams = self.getVolumeParams()
  File "/usr/share/vdsm/storage/volume.py", line 1008, in getVolumeParams
    volParams['imgUUID'] = self.getImage()
  File "/usr/share/vdsm/storage/blockVolume.py", line 494, in getImage
    return self.getVolumeTag(TAG_PREFIX_IMAGE)
  File "/usr/share/vdsm/storage/blockVolume.py", line 464, in getVolumeTag
    return _getVolumeTag(self.sdUUID, self.volUUID, tagPrefix)
  File "/usr/share/vdsm/storage/blockVolume.py", line 662, in _getVolumeTag
    tags = lvm.getLV(sdUUID, volUUID).tags
  File "/usr/share/vdsm/storage/lvm.py", line 851, in getLV
    raise se.LogicalVolumeDoesNotExistError("%s/%s" % (vgName, lvName))
LogicalVolumeDoesNotExistError: Logical volume does not exist: ('11a077c7-658b-49bb-8596-a785109c24c9/_remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23',)

On 01/06/2014 04:39 PM, Meital Bourvine wrote:
I got the attachment.
This is the relevant error: 6caec3bc-fc66-42be-a642-7733fc033103::ERROR::2014-01-06 10:13:21,068::task::850::TaskManager.Task::(_setError) Task=`6caec3bc-fc66-42be-a642-7733fc033103`::Unexpected error Traceback (most recent call last): File "/usr/share/vdsm/storage/task.py", line 857, in _run return fn(*args, **kargs) File "/usr/share/vdsm/storage/task.py", line 318, in run return self.cmd(*self.argslist, **self.argsdict) File "/usr/share/vdsm/storage/securable.py", line 68, in wrapper return f(self, *args, **kwargs) File "/usr/share/vdsm/storage/sp.py", line 1937, in mergeSnapshots sdUUID, vmUUID, imgUUID, ancestor, successor, postZero) File "/usr/share/vdsm/storage/image.py", line 1101, in merge dstVol = vols[ancestor] KeyError: '506085b6-40e0-4176-a4df-9102857f51f2'
I don't know why it happens, so you'll have to wait for someone else to answer.
----- Original Message -----
From: "Nicolas Ecarnot" <nicolas@ecarnot.net> To: "users" <users@ovirt.org> Sent: Monday, January 6, 2014 4:22:57 PM Subject: Re: [Users] Unable to delete a snapshot
On 06/01/2014 12:51, Nicolas Ecarnot wrote:
Also, please attach the whole vdsm.log; it's hard to read it this way (the lines are broken).
See attachment.
Actually, I don't know if this mailing list allows attachments?
--
Nicolas Ecarnot

Hi everyone,

Thank you to everyone who answered. I will gladly file a bug once I am done recovering this very important VM, but my main concern right now is to get it running ASAP, or else fall back on the painful route of tape recovery.

I found similarities between some already-filed bugs and my issue, but I think mine is much simpler. In my case:
- the VM has only one disk
- the whole oVirt setup is using an iSCSI SAN
- the VM was shut down; there was no attempt at a live snapshot
- I did not stop the engine during the delete, nor do anything else disruptive
- I did the exact same steps two days ago on a test VM and it ran fine
- in between, I did not upgrade or reset anything

I found many common points with the mail below:
http://list-archives.org/2013/10/25/users-ovirt-org/vm-snapshot-delete-faile...

Reading my logs, some of you jumped to the Python errors, but looking further up one can see earlier (non-Python) errors complaining about a logical volume not being found.

Today, nothing more was being written to engine.log, so I decided to restart the engine:
- Logs came back (...).
- In the faulty VM, I now see NO snapshot at all.
- I still see the disk.
- Trying to start the VM leads to the following error:

VM uc-674 is down. Exit message: internal error process exited while connecting to monitor: qemu-kvm: -drive file=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/11a077c7-658b-49bb-8596-a785109c24c9/images/69220da6-eeed-4435-aad0-7aa33f3a0d21/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23,if=none,id=drive-virtio-disk0,format=qcow2,serial=69220da6-eeed-4435-aad0-7aa33f3a0d21,cache=none,werror=stop,rerror=stop,aio=native: could not open disk image /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/11a077c7-658b-49bb-8596-a785109c24c9/images/69220da6-eeed-4435-aad0-7aa33f3a0d21/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23: Invalid argument.
And indeed, when I look for the device from the SPM, I see nothing:

[root@serv-vm-adm9 11a077c7-658b-49bb-8596-a785109c24c9]# ls -la /dev/11a077c7-658b-49bb-8596-a785109c24c9/
total 0
drwxr-xr-x.  2 root root  200  7 janv. 08:23 .
drwxr-xr-x. 21 root root 4480  7 janv. 08:23 ..
lrwxrwxrwx.  1 root root    8  5 déc. 11:58 5c71e53b-21f2-4671-94f8-4603d1b0bf5e -> ../dm-19
lrwxrwxrwx.  1 root root    8  5 déc. 11:58 7369a73a-fea5-40d9-ad0a-7d81a43fe931 -> ../dm-20
lrwxrwxrwx.  1 root root    7 10 oct. 17:22 ids -> ../dm-5
lrwxrwxrwx.  1 root root    7 10 oct. 17:22 inbox -> ../dm-7
lrwxrwxrwx.  1 root root    7 10 oct. 17:22 leases -> ../dm-6
lrwxrwxrwx.  1 root root    7 10 oct. 17:22 master -> ../dm-9
lrwxrwxrwx.  1 root root    7 10 oct. 17:22 metadata -> ../dm-4
lrwxrwxrwx.  1 root root    7 10 oct. 17:22 outbox -> ../dm-8

There is no trace of the LV it should be using (/dev/11a077c7-658b-49bb-8596-a785109c24c9/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23).

In the URL I provided above, the OP is able to lvchange -aey the device. In my case, although lvmdiskscan and lvs both show me the LV, there is no device node in /dev/{the proper VG}/{my missing LV}.

So the last thing to ask is: is there a way to recover it, i.e. to recreate a device node for this LV and activate it?

--
Nicolas Ecarnot

On 07/01/2014 04:09, Maor Lipchuk wrote:
Hi Nicolas, I think that the initial problem started at 10:06 when VDSM tried to clear records of the ancestor volume c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 (see [1])
Looking at bugzilla, it could be related to https://bugzilla.redhat.com/1029069 (based on the exception described at https://bugzilla.redhat.com/show_bug.cgi?id=1029069#c1)
The issue there was fixed by an upgrade to 3.3.1 (as Sander mentioned earlier on the mailing list).
Could you give it a try and check if that works for you?
Also, it would be great if you could open a bug with the full VDSM and engine logs and the list of LVs.
Regards, Maor
[1] 236b3c5a-452a-4614-801a-c30cefbce87e::ERROR::2014-01-06 10:06:14,407::task::850::TaskManager.Task::(_setError) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::Unexpected error Traceback (most recent call last): File "/usr/share/vdsm/storage/task.py", line 857, in _run return fn(*args, **kargs) File "/usr/share/vdsm/storage/task.py", line 318, in run return self.cmd(*self.argslist, **self.argsdict) File "/usr/share/vdsm/storage/securable.py", line 68, in wrapper return f(self, *args, **kwargs) File "/usr/share/vdsm/storage/sp.py", line 1937, in mergeSnapshots sdUUID, vmUUID, imgUUID, ancestor, successor, postZero) File "/usr/share/vdsm/storage/image.py", line 1162, in merge srcVol.shrinkToOptimalSize() File "/usr/share/vdsm/storage/blockVolume.py", line 315, in shrinkToOptimalSize volParams = self.getVolumeParams() File "/usr/share/vdsm/storage/volume.py", line 1008, in getVolumeParams volParams['imgUUID'] = self.getImage() File "/usr/share/vdsm/storage/blockVolume.py", line 494, in getImage return self.getVolumeTag(TAG_PREFIX_IMAGE) File "/usr/share/vdsm/storage/blockVolume.py", line 464, in getVolumeTag return _getVolumeTag(self.sdUUID, self.volUUID, tagPrefix) File "/usr/share/vdsm/storage/blockVolume.py", line 662, in _getVolumeTag tags = lvm.getLV(sdUUID, volUUID).tags File "/usr/share/vdsm/storage/lvm.py", line 851, in getLV raise se.LogicalVolumeDoesNotExistError("%s/%s" % (vgName, lvName)) LogicalVolumeDoesNotExistError: Logical volume does not exist: ('11a077c7-658b-49bb-8596-a785109c24c9/_remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23',)
On 01/06/2014 04:39 PM, Meital Bourvine wrote:
I got the attachment.
This is the relevant error:
6caec3bc-fc66-42be-a642-7733fc033103::ERROR::2014-01-06 10:13:21,068::task::850::TaskManager.Task::(_setError) Task=`6caec3bc-fc66-42be-a642-7733fc033103`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 857, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/storage/task.py", line 318, in run
    return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/share/vdsm/storage/securable.py", line 68, in wrapper
    return f(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 1937, in mergeSnapshots
    sdUUID, vmUUID, imgUUID, ancestor, successor, postZero)
  File "/usr/share/vdsm/storage/image.py", line 1101, in merge
    dstVol = vols[ancestor]
KeyError: '506085b6-40e0-4176-a4df-9102857f51f2'
I don't know why it happens, so you'll have to wait for someone else to answer.
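For readers puzzling over that KeyError: a minimal model of the failing lookup (a simplified sketch, not vdsm's actual image.merge code) shows the failure mode. The merge collects the image's volumes into a dict keyed by volume UUID, then indexes it with the chain's ancestor UUID; if the ancestor volume is absent from the scanned chain (for instance after an earlier, partially completed merge attempt), the lookup raises exactly this KeyError:

```python
def pick_merge_destination(vols, ancestor):
    # Raises KeyError when the ancestor volume is missing from the
    # image's volume chain, as in the traceback above.
    return vols[ancestor]

# Hypothetical chain state: only the leaf volume was found on storage.
chain = {
    "c50561d9-c3ba-4366-b2bc-49bbfaa4cd23": "leaf volume",
    # "506085b6-40e0-4176-a4df-9102857f51f2" is missing from the chain
}

try:
    pick_merge_destination(chain, "506085b6-40e0-4176-a4df-9102857f51f2")
except KeyError as e:
    print("ancestor not found:", e)
```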
----- Original Message -----
From: "Nicolas Ecarnot" <nicolas@ecarnot.net>
To: "users" <users@ovirt.org>
Sent: Monday, January 6, 2014 4:22:57 PM
Subject: Re: [Users] Unable to delete a snapshot
On 06/01/2014 12:51, Nicolas Ecarnot wrote:
Also, please attach the whole vdsm.log; it's hard to read it this way (the lines are broken).
See attachment.
Actually, I don't know if this mailing list allows attachments?
-- Nicolas Ecarnot
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
-- Nicolas Ecarnot

On 07/01/2014 09:02, Nicolas Ecarnot wrote:
http://list-archives.org/2013/10/25/users-ovirt-org/vm-snapshot-delete-faile... [...] In the URL I provided above, the OP is able to lvchange -aey the device. In my case, though lvmdiskscan + lvs show me the LV, there is no device in /dev/{the proper VG}/{my missing LV}.
Although I couldn't see the device in /dev, activating it with lvchange -aey and providing the complete path led to a successful activation of the LV. Then, like in the URL above, I was able to run the VM manually from the host with a simple qemu-kvm. But when trying to run it from oVirt, I still get the same errors, and the LV gets disabled. What would be the best way now?
-- Nicolas Ecarnot

On 07/01/2014 09:57, Nicolas Ecarnot wrote:
On 07/01/2014 09:02, Nicolas Ecarnot wrote:
http://list-archives.org/2013/10/25/users-ovirt-org/vm-snapshot-delete-faile...
[...] In the URL I provided above, the OP is able to lvchange -aey the device. In my case, though lvmdiskscan + lvs show me the LV, there is no device in /dev/{the proper VG}/{my missing LV}.
Although I couldn't see the device in /dev, activating it with lvchange -aey and providing the complete path led to a successful activation of the LV.
Then, like in the URL above, I was able to run the VM manually from the host with a simple qemu-kvm.
But when trying to run it from oVirt, I still get the same errors, and the LV gets disabled.
What would be the best way now?
I have to add that in this precise case, recovering the VM is far more important than recovering the snapshot, which I can live without.
-- Nicolas Ecarnot
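For anyone checking the same symptom (LV visible in `lvs` but absent under /dev), the device nodes worth looking at can be computed as below. This is an illustrative helper, not part of oVirt; note that device-mapper escapes hyphens in VG/LV names by doubling them in the /dev/mapper form, which matters with UUID-style names like these:

```python
def candidate_device_paths(vg, lv):
    """Return the device nodes an activated LV normally appears under.

    Illustrative only. device-mapper builds /dev/mapper/<vg>-<lv>, with
    every '-' inside the VG or LV name doubled to '--'.
    """
    dm_name = "%s-%s" % (vg.replace("-", "--"), lv.replace("-", "--"))
    return ["/dev/%s/%s" % (vg, lv), "/dev/mapper/%s" % dm_name]

vg = "11a077c7-658b-49bb-8596-a785109c24c9"
lv = "c50561d9-c3ba-4366-b2bc-49bbfaa4cd23"
for path in candidate_device_paths(vg, lv):
    print(path)
```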

On Tue, 2014-01-07 at 10:18 +0100, Nicolas Ecarnot wrote:
On 07/01/2014 09:57, Nicolas Ecarnot wrote:
On 07/01/2014 09:02, Nicolas Ecarnot wrote:
http://list-archives.org/2013/10/25/users-ovirt-org/vm-snapshot-delete-faile...
I've sent two mails altogether - here's the other one: http://lists.ovirt.org/pipermail/users/2013-November/017557.html. In December I upgraded to oVirt 3.3.2. After the upgrade, snapshots work, but I still have a few VMs with broken snapshots. I don't need these VMs, but I'm keeping them to eventually try to repair them (when someone helps and tells me how).
[...] In the URL I provided above, the OP is able to lvchange -aey the device. In my case, though lvmdiskscan + lvs show me the LV, there is no device in /dev/{the proper VG}/{my missing LV}.
Although I couldn't see the device in /dev, activating it with lvchange -aey and providing the complete path led to a successful activation of the LV.
Then, like in the URL above, I was able to run the VM manually from the host with a simple qemu-kvm.
But when trying to run it from oVirt, I still get the same errors, and the LV gets disabled.
What would be the best way now?
I have to add that in this precise case, recovering the VM is far more important than recovering the snapshot, which I can live without.
-- Milan Dlauhy
ICZ a.s.
Na hřebenech II 1718/10, 147 00 Praha 4 Nusle, CZ
Tel.: +420 222 275 239, +420 724 429 878
Fax: +420 222 271 112
mailto:milan.dlauhy@i.cz
http://www.i.cz
midl@jabbim.cz, ICQ#344919380

Hi Maor, hi everyone,
On 07/01/2014 04:09, Maor Lipchuk wrote:
Looking at bugzilla, it could be related to https://bugzilla.redhat.com/1029069 (based on the exception described at https://bugzilla.redhat.com/show_bug.cgi?id=1029069#c1)
In my case, there was nothing "live": the VM was shut down when creating the snapshot as well as when trying to delete it.
The issue there was fixed after an upgrade to 3.3.1 (as Sander mentioned earlier on the mailing list).
Could you give it a try and check if that works for you?
I'm quite hesitant about upgrading my oVirt production setup, but I have begun reading up on how to do it. Maybe you can point me to a precise procedure for upgrading VDSM?
Also, it would be great if you could open a bug on that with the full VDSM and engine logs and the list of LVs.
Done: https://bugzilla.redhat.com/show_bug.cgi?id=1050901
-- Nicolas Ecarnot

Hi Maor, hi everyone,
On 07/01/2014 04:09, Maor Lipchuk wrote:
Looking at bugzilla, it could be related to https://bugzilla.redhat.com/1029069 (based on the exception described at https://bugzilla.redhat.com/show_bug.cgi?id=1029069#c1)
On 01/09/2014 12:29 PM, Nicolas Ecarnot wrote:
In my case, there was nothing "live": the VM was shut down when creating the snapshot as well as when trying to delete it.
I understand; though, since one of the sub-flows of live storage migration is the creation of a live snapshot, it could be related.
The issue there was fixed after an upgrade to 3.3.1 (as Sander mentioned earlier on the mailing list).
Could you give it a try and check if that works for you?
I'm quite hesitant about upgrading my oVirt production setup, but I have begun reading up on how to do it. Maybe you can point me to a precise procedure for upgrading VDSM?
Sandro, are you aware of any documentation that could help with upgrading oVirt, specifically VDSM?
Also, it would be great if you could open a bug on that with the full VDSM and engine logs and the list of LVs.
Done :
Thanks

Adding Sandro to the thread.
On 01/12/2014 11:26 AM, Maor Lipchuk wrote:
Hi Maor, hi everyone,
On 07/01/2014 04:09, Maor Lipchuk wrote:
Looking at bugzilla, it could be related to https://bugzilla.redhat.com/1029069 (based on the exception described at https://bugzilla.redhat.com/show_bug.cgi?id=1029069#c1)
On 01/09/2014 12:29 PM, Nicolas Ecarnot wrote:
In my case, there was nothing "live": the VM was shut down when creating the snapshot as well as when trying to delete it.
I understand; though, since one of the sub-flows of live storage migration is the creation of a live snapshot, it could be related.
The issue there was fixed after an upgrade to 3.3.1 (as Sander mentioned earlier on the mailing list).
Could you give it a try and check if that works for you?
I'm quite hesitant about upgrading my oVirt production setup, but I have begun reading up on how to do it. Maybe you can point me to a precise procedure for upgrading VDSM?
Sandro, are you aware of any documentation that could help with upgrading oVirt, specifically VDSM?
Also, it would be great if you could open a bug on that with the full VDSM and engine logs and the list of LVs.
Done :
Thanks

On 12/01/2014 10:40, Maor Lipchuk wrote:
Adding Sandro to the thread
On 01/12/2014 11:26 AM, Maor Lipchuk wrote:
Hi Maor, hi everyone,
On 07/01/2014 04:09, Maor Lipchuk wrote:
Looking at bugzilla, it could be related to https://bugzilla.redhat.com/1029069 (based on the exception described at https://bugzilla.redhat.com/show_bug.cgi?id=1029069#c1)
On 01/09/2014 12:29 PM, Nicolas Ecarnot wrote:
In my case, there was nothing "live": the VM was shut down when creating the snapshot as well as when trying to delete it.
I understand; though, since one of the sub-flows of live storage migration is the creation of a live snapshot, it could be related.
The issue there was fixed after an upgrade to 3.3.1 (as Sander mentioned earlier on the mailing list).
Could you give it a try and check if that works for you?
I'm quite hesitant about upgrading my oVirt production setup, but I have begun reading up on how to do it. Maybe you can point me to a precise procedure for upgrading VDSM?
Sandro, are you aware of any documentation that could help with upgrading oVirt, specifically VDSM?
No, I'm not aware of any documentation about it. However, since 3.3.1 is in ovirt-stable, if you just need to update vdsm on a system, I think that
yum --enablerepo=ovirt-stable update "vdsm*"
on that system should be enough. CCing Dan and the VDSM list to confirm.
Also, it would be great if you could open a bug on that with the full VDSM and engine logs and the list of LVs.
Done :
Thanks
-- Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com

On 13/01/2014 08:43, Sandro Bonazzola wrote:
On 12/01/2014 10:40, Maor Lipchuk wrote:
Adding Sandro to the thread
On 01/12/2014 11:26 AM, Maor Lipchuk wrote:
Hi Maor, hi everyone,
On 07/01/2014 04:09, Maor Lipchuk wrote:
Looking at bugzilla, it could be related to https://bugzilla.redhat.com/1029069 (based on the exception described at https://bugzilla.redhat.com/show_bug.cgi?id=1029069#c1)
On 01/09/2014 12:29 PM, Nicolas Ecarnot wrote:
In my case, there was nothing "live": the VM was shut down when creating the snapshot as well as when trying to delete it.
I understand; though, since one of the sub-flows of live storage migration is the creation of a live snapshot, it could be related.
The issue there was fixed after an upgrade to 3.3.1 (as Sander mentioned earlier on the mailing list).
Could you give it a try and check if that works for you?
I'm quite hesitant about upgrading my oVirt production setup, but I have begun reading up on how to do it. Maybe you can point me to a precise procedure for upgrading VDSM?
Sandro, are you aware of any documentation that could help with upgrading oVirt, specifically VDSM?
No, I'm not aware of any documentation about it. However, since 3.3.1 is in ovirt-stable, if you just need to update vdsm on a system, I think that
yum --enablerepo=ovirt-stable update "vdsm*"
on that system should be enough.
Done
Also, it would be great if you could open a bug on that with the full VDSM and engine logs and the list of LVs.
Done :
Just to close the bug: I updated to 3.4.1-1.el6, and I cannot reproduce the bug anymore.
-- Nicolas Ecarnot

Hi Sander, hi everyone,
The oVirt version is 3.3.0-4.el6. For vdsm, it's vdsm-4.12.1-2.el6.
On 06/01/2014 12:35, Meital Bourvine wrote:
What's the output of `lvs`?
[root@serv-vm-adm9 vdsm]# lvs
  LV                                   VG                                   Attr      LSize
  1beb9910-e18a-4163-b0ac-3aeef5bd9912 11a077c7-658b-49bb-8596-a785109c24c9 -wi------   7,62g
  203a473b-c2c2-4193-bec8-afb00d9b45f0 11a077c7-658b-49bb-8596-a785109c24c9 -wi------   1,00g
  4779068e-96dd-411f-a924-2cdb2b4adf05 11a077c7-658b-49bb-8596-a785109c24c9 -wi------   2,38g
  4a8c17aa-5882-45a1-8a6e-40db39ed06ca 11a077c7-658b-49bb-8596-a785109c24c9 -wi------   1,00g
  55973d46-ceba-40a4-9d89-c69357fa8f2a 11a077c7-658b-49bb-8596-a785109c24c9 -wi------  31,00g
  5c71e53b-21f2-4671-94f8-4603d1b0bf5e 11a077c7-658b-49bb-8596-a785109c24c9 -wi-ao---  12,38g
  6be3ca58-3cfa-478f-92e8-7a6061fbc192 11a077c7-658b-49bb-8596-a785109c24c9 -wi------ 330,00g
  7369a73a-fea5-40d9-ad0a-7d81a43fe931 11a077c7-658b-49bb-8596-a785109c24c9 -wi-ao---  12,00g
  89b6297f-e394-4d78-b1d3-ea6135ea114d 11a077c7-658b-49bb-8596-a785109c24c9 -wi------  12,62g
  b8309060-7acd-43e1-9ae5-0455f0861dce 11a077c7-658b-49bb-8596-a785109c24c9 -wi------  40,62g
  c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 11a077c7-658b-49bb-8596-a785109c24c9 -wi------ 100,00g
  e610a133-1f96-4c6a-8be5-efd1fdc413b6 11a077c7-658b-49bb-8596-a785109c24c9 -wi------ 126,00g
  eea02ca5-7d8c-4841-b161-b56d75b177b8 11a077c7-658b-49bb-8596-a785109c24c9 -wi------  25,00g
  ids                                  11a077c7-658b-49bb-8596-a785109c24c9 -wi-ao--- 128,00m
  inbox                                11a077c7-658b-49bb-8596-a785109c24c9 -wi-a---- 128,00m
  leases                               11a077c7-658b-49bb-8596-a785109c24c9 -wi-a----   2,00g
  master                               11a077c7-658b-49bb-8596-a785109c24c9 -wi-ao---   1,00g
  metadata                             11a077c7-658b-49bb-8596-a785109c24c9 -wi-a---- 512,00m
  outbox                               11a077c7-658b-49bb-8596-a785109c24c9 -wi-a---- 128,00m
  11aa5d75-5c46-4a2d-9d36-785ee3fed483 1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi------  12,12g
  645920f2-458f-46e9-93e8-4ff72f4e7bd0 1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi------ 143,00g
  6ec56633-707d-4ae5-9db0-94834a193ae5 1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi------  10,00g
  7a74ba80-9207-4f21-90a0-f28fc9925fee 1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi------  16,00g
  80bac371-6899-4fbe-a8e1-272037186bfb 1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi------  21,00g
  82fca941-0c88-4bb9-b99f-8e22e56d81b0 1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi------  16,25g
  ad83757e-3720-4b77-bc7e-475e82dccbd4 1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi------  32,00g
  b42ba23e-ed78-4bfc-a047-edd376696a65 1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi------  16,00g
  bca93f68-8e10-4db3-bf46-d20cc4e9a4f2 1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi------  18,00g
  ea9c8f12-4eb6-42de-b6d6-6296555d0ac0 1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi------  62,00g
  ed91505a-a219-42af-9915-e5ffc79918f9 1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi------ 132,00g
  eee05299-4131-4116-87ab-085c6c0afd8d 1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi------ 330,00g
  fe2d2070-6758-467e-8faf-bd7af8f4788b 1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi------   5,62g
  ids                                  1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi-ao--- 128,00m
  inbox                                1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi-a---- 128,00m
  leases                               1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi-a----   2,00g
  master                               1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi-a----   1,00g
  metadata                             1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi-a---- 512,00m
  outbox                               1429ffe2-4137-416c-bb38-63fd73f4bcc1 -wi-a---- 128,00m
  0062076a-1b4b-4b3f-8e0d-c1f411049f2b 83d39199-d4e4-474c-b232-7088c76a2811 -wi------   3,00g
  179f6e8e-8b30-4f3a-874e-61f9bcb0dabf 83d39199-d4e4-474c-b232-7088c76a2811 -wi------   4,50g
  38473266-f800-409a-ac41-aa737d5d25ba 83d39199-d4e4-474c-b232-7088c76a2811 -wi------   5,75g
  4250ecb7-3a66-4646-947c-bd65705abe13 83d39199-d4e4-474c-b232-7088c76a2811 -wi------   1,00g
  44421611-8696-4f4c-ad0f-07af8fac5a87 83d39199-d4e4-474c-b232-7088c76a2811 -wi------   1,00g
  476e7e0a-89dd-4668-8fcf-aafa14e0cb19 83d39199-d4e4-474c-b232-7088c76a2811 -wi------   2,00g
  74d53c18-b9bd-494e-84b5-ae99b68ad237 83d39199-d4e4-474c-b232-7088c76a2811 -wi------   4,75g
  7b0c1ea7-83f7-424b-8f4f-3640bc99e758 83d39199-d4e4-474c-b232-7088c76a2811 -wi------   1,00g
  be0849c9-37e7-4b8c-bb11-c532e2b763b8 83d39199-d4e4-474c-b232-7088c76a2811 -wi------  20,88g
  c9491855-eb18-41bf-8b40-4ebe50c0b22e 83d39199-d4e4-474c-b232-7088c76a2811 -wi------  20,00g
  d0148213-c15b-44ca-8c2d-979c3b18b188 83d39199-d4e4-474c-b232-7088c76a2811 -wi------   2,00g
  e825616c-4a97-4efb-a92b-46be2524e522 83d39199-d4e4-474c-b232-7088c76a2811 -wi------   4,00g
  f43cc98c-988b-4107-b95b-c665bcf6fa64 83d39199-d4e4-474c-b232-7088c76a2811 -wi------   8,25g
  ids                                  83d39199-d4e4-474c-b232-7088c76a2811 -wi-ao--- 128,00m
  inbox                                83d39199-d4e4-474c-b232-7088c76a2811 -wi-a---- 128,00m
  leases                               83d39199-d4e4-474c-b232-7088c76a2811 -wi-a----   2,00g
  master                               83d39199-d4e4-474c-b232-7088c76a2811 -wi-a----   1,00g
  metadata                             83d39199-d4e4-474c-b232-7088c76a2811 -wi-a---- 512,00m
  outbox                               83d39199-d4e4-474c-b232-7088c76a2811 -wi-a---- 128,00m
  lv_home                              vg_servvmadm9                        -wi-ao---  77,80g
  lv_root                              vg_servvmadm9                        -wi-ao---  50,00g
  lv_swap                              vg_servvmadm9                        -wi-ao---   7,83g
[root@serv-vm-adm9 vdsm]#
Also, please attach the whole vdsm.log; it's hard to read it this way (the lines are broken).
The attachment did not make its way to the list, so here it is: http://www.ecarnot.net/tmp/vdsm.log.1.xz
-- Nicolas Ecarnot
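A note for readers working through the `lvs` listing above: the attr column alone tells which LVs are activated. In lvs attr strings the fifth character is 'a' when the LV is active and the sixth is 'o' when it is open. A small illustrative parser (not an oVirt tool; sample rows taken from the listing):

```python
def parse_lvs_row(row):
    """Parse one 'lvs' output row into (lv, vg, active, open_).

    Illustrative only; assumes the default LV/VG/Attr/LSize column order
    shown in the listing above. In the attr string, index 4 is the state
    bit ('a' = active) and index 5 is the open bit ('o' = open).
    """
    lv, vg, attr, size = row.split()[:4]
    return lv, vg, attr[4] == "a", attr[5] == "o"

rows = [
    "c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 11a077c7-658b-49bb-8596-a785109c24c9 -wi------ 100,00g",
    "ids 11a077c7-658b-49bb-8596-a785109c24c9 -wi-ao--- 128,00m",
]
for row in rows:
    lv, vg, active, open_ = parse_lvs_row(row)
    print(lv, "active" if active else "inactive")
```

This makes it easy to spot that the 100 GB snapshot volume c50561d9-... is inactive (`-wi------`) while the domain's service LVs (ids, master, ...) are active.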
----- Original Message -----
From: "Nicolas Ecarnot" <nicolas@ecarnot.net>
To: "users" <users@ovirt.org>
Sent: Monday, January 6, 2014 12:16:52 PM
Subject: Re: [Users] Unable to delete a snapshot
Thank you Maor and Meital for your help.
It took me some time just to find WHERE to get the vdsm logs, as this concerns a VM that is shut down, and so not directly managed by a specific host. So I think I understood I had to look on the SPM. Then, amongst the tons of logs, I think I've found the relevant lines.
I have to add that:
- (dumb as I may be,) I repeated the attempt to delete the snapshot, so there are several deletion attempts in the logs.
- after the last attempt, the snapshot now appears as "BROKEN" in the manager, which doesn't sound good. I would love to hear from you that there is a way to recover it.
The vdsm log I found relevant is below.
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:07,626::utils::538::Storage.Misc.excCmd::(watchCmd) SUCCESS: <err> = []; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:07,627::volume::1152::Storage.Volume::(qemuConvert) (qemuConvert): COPY /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/11a077c7-658b-49bb-8596-a785109c24c9/images/69220da6-eeed-4435-aad0-7aa33f3a0d21/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 to /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/11a077c7-658b-49bb-8596-a785109c24c9/images/69220da6-eeed-4435-aad0-7aa33f3a0d21/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE DONE 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:07,628::blockVolume::418::Storage.Volume::(teardown) Tearing down volume 11a077c7-658b-49bb-8596-a785109c24c9/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE justme True 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:07,628::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE' 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:07,629::resourceManager::634::ResourceManager::(releaseResource) Released resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE' (0 active users) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:07,629::resourceManager::640::ResourceManager::(releaseResource) Resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE' is free, finding out if anyone is waiting for it. 
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:07,631::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvchange --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --autobackup n --available n 11a077c7-658b-49bb-8596-a785109c24c9/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,212::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,213::lvm::499::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,214::lvm::511::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,214::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE', Clearing records. 
236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:08,215::blockVolume::418::Storage.Volume::(teardown) Tearing down volume 11a077c7-658b-49bb-8596-a785109c24c9/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 justme False 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,215::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,216::resourceManager::634::ResourceManager::(releaseResource) Released resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (0 active users) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,216::resourceManager::640::ResourceManager::(releaseResource) Resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' is free, finding out if anyone is waiting for it. 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,217::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvchange --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --autobackup n --available n 11a077c7-658b-49bb-8596-a785109c24c9/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,581::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,582::lvm::499::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex 
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,583::lvm::511::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,583::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23', Clearing records. 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,584::lvm::415::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,585::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 11a077c7-658b-49bb-8596-a785109c24c9' (cwd None) Dummy-1808243::DEBUG::2014-01-06 10:06:08,862::storage_mailbox::727::Storage.Misc.excCmd::(_checkForMail) 'dd if=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:08,901::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,136::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:09,139::blockVolume::418::Storage.Volume::(teardown) 
Tearing down volume 11a077c7-658b-49bb-8596-a785109c24c9/506085b6-40e0-4176-a4df-9102857f51f2 justme False 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,142::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.506085b6-40e0-4176-a4df-9102857f51f2' 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,146::resourceManager::634::ResourceManager::(releaseResource) Released resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.506085b6-40e0-4176-a4df-9102857f51f2' (0 active users) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,149::resourceManager::640::ResourceManager::(releaseResource) Resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.506085b6-40e0-4176-a4df-9102857f51f2' is free, finding out if anyone is waiting for it. 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,164::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvchange --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --autobackup n --available n 11a077c7-658b-49bb-8596-a785109c24c9/506085b6-40e0-4176-a4df-9102857f51f2' (cwd None) Dummy-1808243::DEBUG::2014-01-06 10:06:09,318::storage_mailbox::727::Storage.Misc.excCmd::(_checkForMail) SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.342311 s, 3.0 MB/s\n'; <rc> = 0 Thread-4658::DEBUG::2014-01-06 10:06:09,422::blockSD::594::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata bs=4096 count=1' (cwd None) 
Thread-4658::DEBUG::2014-01-06 10:06:09,454::blockSD::594::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.000705734 s, 5.8 MB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,502::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,503::lvm::499::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,503::lvm::511::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,504::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource '11a077c7-658b-49bb-8596-a785109c24c9_lvmActivationNS.506085b6-40e0-4176-a4df-9102857f51f2', Clearing records. 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,504::lvm::415::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,506::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 11a077c7-658b-49bb-8596-a785109c24c9' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,812::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,940::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:09,941::blockVolume::349::Storage.Volume::(rename) Rename volume c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 as _remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,944::task::736::TaskManager.Task::(_save) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_save: orig /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e temp /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e.temp 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:09,975::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvrename --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --autobackup n 11a077c7-658b-49bb-8596-a785109c24c9 c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 _remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (cwd None) Thread-2280759::DEBUG::2014-01-06 10:06:10,207::blockSD::594::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/dev/83d39199-d4e4-474c-b232-7088c76a2811/metadata bs=4096 count=1' (cwd None) Thread-2280759::DEBUG::2014-01-06 10:06:10,287::blockSD::594::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.0114936 s, 356 kB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 
10:06:10,313::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: This metadata update is NOT backed up\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,317::lvm::415::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,327::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 11a077c7-658b-49bb-8596-a785109c24c9/_remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (cwd None) Thread-281995::DEBUG::2014-01-06 10:06:10,557::fileSD::238::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/serv-fs-adm1.sdis.isere.fr:_home_admin_vmexport/a7c189dd-4980-49ab-ad86-65a14235cbe4/dom_md/metadata bs=4096 count=1' (cwd None) Thread-281995::DEBUG::2014-01-06 10:06:10,585::fileSD::238::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n378 bytes (378 B) copied, 0.000281187 s, 1.3 MB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,635::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,640::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:10,640::blockVolume::349::Storage.Volume::(rename) Rename volume 
c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE as c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,642::task::736::TaskManager.Task::(_save) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_save: orig /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e temp /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e.temp 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,660::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvrename --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --autobackup n 11a077c7-658b-49bb-8596-a785109c24c9 c50561d9-c3ba-4366-b2bc-49bbfaa4cd23_MERGE c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,983::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: This metadata update is NOT backed up\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,985::lvm::415::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:10,986::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { 
locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 11a077c7-658b-49bb-8596-a785109c24c9/c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,284::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,289::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,291::task::736::TaskManager.Task::(_save) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_save: orig /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e temp /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e.temp 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:11,304::volume::669::Storage.Volume::(setLegality) sdUUID=11a077c7-658b-49bb-8596-a785109c24c9 imgUUID=69220da6-eeed-4435-aad0-7aa33f3a0d21 volUUID = _remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 legality = ILLEGAL 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,313::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=4 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None) Dummy-1808243::DEBUG::2014-01-06 10:06:11,346::storage_mailbox::727::Storage.Misc.excCmd::(_checkForMail) 'dd if=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,372::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.000671794 s, 762 kB/s\n'; 
<rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,373::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.000671794 s, 762 kB/s'], size: 512 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:11,386::volume::669::Storage.Volume::(setLegality) sdUUID=11a077c7-658b-49bb-8596-a785109c24c9 imgUUID=69220da6-eeed-4435-aad0-7aa33f3a0d21 volUUID = 506085b6-40e0-4176-a4df-9102857f51f2 legality = ILLEGAL 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,438::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=13 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None) Thread-3642604::DEBUG::2014-01-06 10:06:11,474::BindingXMLRPC::177::vds::(wrapper) client [192.168.39.31] Thread-3642604::DEBUG::2014-01-06 10:06:11,484::task::579::TaskManager.Task::(_updateState) Task=`d59a176c-44d8-489d-8ab5-93de64eb1660`::moving from state init -> state preparing Thread-3642604::INFO::2014-01-06 10:06:11,490::logUtils::44::dispatcher::(wrapper) Run and protect: getSpmStatus(spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', options=None) Thread-3642604::INFO::2014-01-06 10:06:11,497::logUtils::47::dispatcher::(wrapper) Run and protect: getSpmStatus, Return response: {'spm_st': {'spmId': 2, 'spmStatus': 'SPM', 'spmLver': 15}} Thread-3642604::DEBUG::2014-01-06 10:06:11,503::task::1168::TaskManager.Task::(prepare) Task=`d59a176c-44d8-489d-8ab5-93de64eb1660`::finished: {'spm_st': {'spmId': 2, 'spmStatus': 'SPM', 'spmLver': 15}} Thread-3642604::DEBUG::2014-01-06 10:06:11,507::task::579::TaskManager.Task::(_updateState) Task=`d59a176c-44d8-489d-8ab5-93de64eb1660`::moving from state preparing -> state finished Thread-3642604::DEBUG::2014-01-06 10:06:11,510::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 
10:06:11,510::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.000835653 s, 613 kB/s\n'; <rc> = 0 Thread-3642604::DEBUG::2014-01-06 10:06:11,514::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,515::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.000835653 s, 613 kB/s'], size: 512 Thread-3642604::DEBUG::2014-01-06 10:06:11,519::task::974::TaskManager.Task::(_decref) Task=`d59a176c-44d8-489d-8ab5-93de64eb1660`::ref 0 aborting False Thread-3642605::DEBUG::2014-01-06 10:06:11,546::BindingXMLRPC::177::vds::(wrapper) client [192.168.39.31] Thread-3642605::DEBUG::2014-01-06 10:06:11,553::task::579::TaskManager.Task::(_updateState) Task=`bfdcfb66-54ce-4f83-a42a-9049bb11ec65`::moving from state init -> state preparing Thread-3642605::INFO::2014-01-06 10:06:11,558::logUtils::44::dispatcher::(wrapper) Run and protect: getStoragePoolInfo(spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', options=None) Thread-3642605::DEBUG::2014-01-06 10:06:11,568::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.5849b030-626e-47cb-ad90-3ce782d831b3`ReqID=`3e5429cc-9cc9-4259-a51c-0dd76cb6b827`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '2560' at 'getStoragePoolInfo' Thread-3642605::DEBUG::2014-01-06 10:06:11,572::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' for lock type 'shared' Thread-3642605::DEBUG::2014-01-06 10:06:11,578::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' is free. 
Now locking as 'shared' (1 active user) Thread-3642605::DEBUG::2014-01-06 10:06:11,583::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.5849b030-626e-47cb-ad90-3ce782d831b3`ReqID=`3e5429cc-9cc9-4259-a51c-0dd76cb6b827`::Granted request Thread-3642605::DEBUG::2014-01-06 10:06:11,593::task::811::TaskManager.Task::(resourceAcquired) Task=`bfdcfb66-54ce-4f83-a42a-9049bb11ec65`::_resourcesAcquired: Storage.5849b030-626e-47cb-ad90-3ce782d831b3 (shared) Thread-3642605::DEBUG::2014-01-06 10:06:11,597::task::974::TaskManager.Task::(_decref) Task=`bfdcfb66-54ce-4f83-a42a-9049bb11ec65`::ref 1 aborting False 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:11,610::image::881::Storage.Image::(removeSubChain) Remove volume _remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 from image 69220da6-eeed-4435-aad0-7aa33f3a0d21 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:11,619::blockVolume::219::Storage.Volume::(delete) Request to delete LV _remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 of image 69220da6-eeed-4435-aad0-7aa33f3a0d21 in VG 11a077c7-658b-49bb-8596-a785109c24c9 Thread-3642605::INFO::2014-01-06 10:06:11,623::logUtils::47::dispatcher::(wrapper) Run and protect: getStoragePoolInfo, Return response: {'info': {'spm_id': 2, 'master_uuid': '11a077c7-658b-49bb-8596-a785109c24c9', 'name': 'Etat-Major2', 'version': '3', 'domains': u'78fe6648-1b2a-438c-a704-eb8ac9fc438b:Active,83d39199-d4e4-474c-b232-7088c76a2811:Active,a7c189dd-4980-49ab-ad86-65a14235cbe4:Active,1429ffe2-4137-416c-bb38-63fd73f4bcc1:Active,11a077c7-658b-49bb-8596-a785109c24c9:Active', 'pool_status': 'connected', 'isoprefix': '/rhev/data-center/mnt/serv-mvm-adm2.sdis.isere.fr:_home_iso/78fe6648-1b2a-438c-a704-eb8ac9fc438b/images/11111111-1111-1111-1111-111111111111', 'type': 'ISCSI', 'master_ver': 2, 'lver': 15}, 'dominfo': {u'78fe6648-1b2a-438c-a704-eb8ac9fc438b': {'status': u'Active', 'diskfree': '545295171584', 'alerts': [], 'version': 0, 
'disktotal': '589474824192'}, u'83d39199-d4e4-474c-b232-7088c76a2811': {'status': u'Active', 'diskfree': '1011196362752', 'alerts': [], 'version': 3, 'disktotal': '1099243192320'}, u'a7c189dd-4980-49ab-ad86-65a14235cbe4': {'status': u'Active', 'diskfree': '1023138594816', 'alerts': [], 'version': 0, 'disktotal': '1098434740224'}, u'1429ffe2-4137-416c-bb38-63fd73f4bcc1': {'status': u'Active', 'diskfree': '223204081664', 'alerts': [], 'version': 3, 'disktotal': '1099243192320'}, u'11a077c7-658b-49bb-8596-a785109c24c9': {'status': u'Active', 'diskfree': '230049185792', 'alerts': [], 'version': 3, 'disktotal': '1099243192320'}}} 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:11,625::volume::669::Storage.Volume::(setLegality) sdUUID=11a077c7-658b-49bb-8596-a785109c24c9 imgUUID=69220da6-eeed-4435-aad0-7aa33f3a0d21 volUUID = _remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 legality = ILLEGAL Thread-3642605::DEBUG::2014-01-06 10:06:11,627::task::1168::TaskManager.Task::(prepare) Task=`bfdcfb66-54ce-4f83-a42a-9049bb11ec65`::finished: {'info': {'spm_id': 2, 'master_uuid': '11a077c7-658b-49bb-8596-a785109c24c9', 'name': 'Etat-Major2', 'version': '3', 'domains': u'78fe6648-1b2a-438c-a704-eb8ac9fc438b:Active,83d39199-d4e4-474c-b232-7088c76a2811:Active,a7c189dd-4980-49ab-ad86-65a14235cbe4:Active,1429ffe2-4137-416c-bb38-63fd73f4bcc1:Active,11a077c7-658b-49bb-8596-a785109c24c9:Active', 'pool_status': 'connected', 'isoprefix': '/rhev/data-center/mnt/serv-mvm-adm2.sdis.isere.fr:_home_iso/78fe6648-1b2a-438c-a704-eb8ac9fc438b/images/11111111-1111-1111-1111-111111111111', 'type': 'ISCSI', 'master_ver': 2, 'lver': 15}, 'dominfo': {u'78fe6648-1b2a-438c-a704-eb8ac9fc438b': {'status': u'Active', 'diskfree': '545295171584', 'alerts': [], 'version': 0, 'disktotal': '589474824192'}, u'83d39199-d4e4-474c-b232-7088c76a2811': {'status': u'Active', 'diskfree': '1011196362752', 'alerts': [], 'version': 3, 'disktotal': '1099243192320'}, u'a7c189dd-4980-49ab-ad86-65a14235cbe4': 
{'status': u'Active', 'diskfree': '1023138594816', 'alerts': [], 'version': 0, 'disktotal': '1098434740224'}, u'1429ffe2-4137-416c-bb38-63fd73f4bcc1': {'status': u'Active', 'diskfree': '223204081664', 'alerts': [], 'version': 3, 'disktotal': '1099243192320'}, u'11a077c7-658b-49bb-8596-a785109c24c9': {'status': u'Active', 'diskfree': '230049185792', 'alerts': [], 'version': 3, 'disktotal': '1099243192320'}}} Thread-3642605::DEBUG::2014-01-06 10:06:11,632::task::579::TaskManager.Task::(_updateState) Task=`bfdcfb66-54ce-4f83-a42a-9049bb11ec65`::moving from state preparing -> state finished Thread-3642605::DEBUG::2014-01-06 10:06:11,636::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.5849b030-626e-47cb-ad90-3ce782d831b3': < ResourceRef 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3', isValid: 'True' obj: 'None'>} Thread-3642605::DEBUG::2014-01-06 10:06:11,640::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-3642605::DEBUG::2014-01-06 10:06:11,647::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' Thread-3642605::DEBUG::2014-01-06 10:06:11,654::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' (0 active users) Thread-3642605::DEBUG::2014-01-06 10:06:11,657::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' is free, finding out if anyone is waiting for it. Thread-3642605::DEBUG::2014-01-06 10:06:11,661::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3', Clearing records. 
Thread-3642605::DEBUG::2014-01-06 10:06:11,669::task::974::TaskManager.Task::(_decref) Task=`bfdcfb66-54ce-4f83-a42a-9049bb11ec65`::ref 0 aborting False Thread-3642606::DEBUG::2014-01-06 10:06:11,725::BindingXMLRPC::177::vds::(wrapper) client [192.168.39.31] Thread-3642606::DEBUG::2014-01-06 10:06:11,730::task::579::TaskManager.Task::(_updateState) Task=`6130a21c-b843-4faa-83a7-2b1f956217f8`::moving from state init -> state preparing 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,731::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=4 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None) Thread-3642606::INFO::2014-01-06 10:06:11,756::logUtils::44::dispatcher::(wrapper) Run and protect: getAllTasksStatuses(spUUID=None, options=None) Thread-3642606::DEBUG::2014-01-06 10:06:11,761::taskManager::102::TaskManager::(getAllTasksStatuses) Entry. Thread-3642606::DEBUG::2014-01-06 10:06:11,765::taskManager::93::TaskManager::(getTaskStatus) Entry. taskID: 236b3c5a-452a-4614-801a-c30cefbce87e Thread-3642606::DEBUG::2014-01-06 10:06:11,768::taskManager::96::TaskManager::(getTaskStatus) Return. 
Response: {'code': 0, 'message': 'Task is initializing', 'taskState': 'running', 'taskResult': '', 'taskID': '236b3c5a-452a-4614-801a-c30cefbce87e'} Thread-3642606::DEBUG::2014-01-06 10:06:11,772::taskManager::113::TaskManager::(getAllTasksStatuses) Return: {'236b3c5a-452a-4614-801a-c30cefbce87e': {'code': 0, 'message': 'Task is initializing', 'taskState': 'running', 'taskResult': '', 'taskID': '236b3c5a-452a-4614-801a-c30cefbce87e'}} Thread-3642606::INFO::2014-01-06 10:06:11,776::logUtils::47::dispatcher::(wrapper) Run and protect: getAllTasksStatuses, Return response: {'allTasksStatus': {'236b3c5a-452a-4614-801a-c30cefbce87e': {'code': 0, 'message': 'Task is initializing', 'taskState': 'running', 'taskResult': '', 'taskID': '236b3c5a-452a-4614-801a-c30cefbce87e'}}} Thread-3642606::DEBUG::2014-01-06 10:06:11,779::task::1168::TaskManager.Task::(prepare) Task=`6130a21c-b843-4faa-83a7-2b1f956217f8`::finished: {'allTasksStatus': {'236b3c5a-452a-4614-801a-c30cefbce87e': {'code': 0, 'message': 'Task is initializing', 'taskState': 'running', 'taskResult': '', 'taskID': '236b3c5a-452a-4614-801a-c30cefbce87e'}}} Thread-3642606::DEBUG::2014-01-06 10:06:11,784::task::579::TaskManager.Task::(_updateState) Task=`6130a21c-b843-4faa-83a7-2b1f956217f8`::moving from state preparing -> state finished Thread-3642606::DEBUG::2014-01-06 10:06:11,787::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-3642606::DEBUG::2014-01-06 10:06:11,791::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-3642606::DEBUG::2014-01-06 10:06:11,796::task::974::TaskManager.Task::(_decref) Task=`6130a21c-b843-4faa-83a7-2b1f956217f8`::ref 0 aborting False 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,803::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.00039345 s, 1.3 MB/s\n'; <rc> = 0 
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,809::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.00039345 s, 1.3 MB/s'], size: 512 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:11,885::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvchange --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --autobackup n --deltag PU_506085b6-40e0-4176-a4df-9102857f51f2 --addtag PU_00000000-0000-0000-0000-000000000000 11a077c7-658b-49bb-8596-a785109c24c9/_remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (cwd None) Dummy-1808243::DEBUG::2014-01-06 10:06:12,002::storage_mailbox::727::Storage.Misc.excCmd::(_checkForMail) SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.566167 s, 1.8 MB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,204::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: This metadata update is NOT backed up\n WARNING: This metadata update is NOT backed up\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,204::lvm::499::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,205::lvm::511::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,206::lvm::415::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex 
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,207::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 11a077c7-658b-49bb-8596-a785109c24c9' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,511::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,580::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,590::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=4 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,616::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.00049159 s, 1.0 MB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,617::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.00049159 s, 1.0 MB/s'], size: 512 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,632::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=13 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 
10:06:12,652::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.000381403 s, 1.3 MB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,653::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.000381403 s, 1.3 MB/s'], size: 512 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,654::lvm::415::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,655::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 11a077c7-658b-49bb-8596-a785109c24c9' (cwd None) VM Channels Listener::DEBUG::2014-01-06 10:06:12,909::vmChannels::91::vds::(_handle_timeouts) Timeout on fileno 58. 
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:12,959::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,023::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,031::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=13 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,055::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.000834663 s, 613 kB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,056::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.000834663 s, 613 kB/s'], size: 512 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,065::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvremove --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " -f --autobackup n 11a077c7-658b-49bb-8596-a785109c24c9/_remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,370::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: This metadata update is NOT backed up\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,371::lvm::489::OperationMutex::(_invalidatevgs) Operation 
'lvm invalidate operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,372::lvm::491::OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:13,381::image::881::Storage.Image::(removeSubChain) Remove volume 506085b6-40e0-4176-a4df-9102857f51f2 from image 69220da6-eeed-4435-aad0-7aa33f3a0d21 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:13,381::blockVolume::219::Storage.Volume::(delete) Request to delete LV 506085b6-40e0-4176-a4df-9102857f51f2 of image 69220da6-eeed-4435-aad0-7aa33f3a0d21 in VG 11a077c7-658b-49bb-8596-a785109c24c9 236b3c5a-452a-4614-801a-c30cefbce87e::INFO::2014-01-06 10:06:13,382::volume::669::Storage.Volume::(setLegality) sdUUID=11a077c7-658b-49bb-8596-a785109c24c9 imgUUID=69220da6-eeed-4435-aad0-7aa33f3a0d21 volUUID = 506085b6-40e0-4176-a4df-9102857f51f2 legality = ILLEGAL 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,389::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=13 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,414::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.000504094 s, 1.0 MB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,414::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.000504094 s, 1.0 MB/s'], size: 512 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,430::blockVolume::588::Storage.Misc.excCmd::(getMetadata) '/bin/dd iflag=direct skip=13 bs=512 if=/dev/11a077c7-658b-49bb-8596-a785109c24c9/metadata count=1' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 
10:06:13,450::blockVolume::588::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.0003572 s, 1.4 MB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,451::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.0003572 s, 1.4 MB/s'], size: 512 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,460::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvremove --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " -f --autobackup n 11a077c7-658b-49bb-8596-a785109c24c9/506085b6-40e0-4176-a4df-9102857f51f2' (cwd None) Thread-640001::DEBUG::2014-01-06 10:06:13,672::blockSD::594::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/dev/1429ffe2-4137-416c-bb38-63fd73f4bcc1/metadata bs=4096 count=1' (cwd None) Thread-640001::DEBUG::2014-01-06 10:06:13,705::blockSD::594::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.000955526 s, 4.3 MB/s\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,774::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: This metadata update is NOT backed up\n'; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,775::lvm::489::OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,775::lvm::491::OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released 
the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,784::lvm::415::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:13,785::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/364ed2ad5196b029d2a257530c50480f6|/dev/mapper/364ed2ad5196bb2247a277531c504a022|/dev/mapper/364ed2ad5196bf220842a5532c504a0f7|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 11a077c7-658b-49bb-8596-a785109c24c9' (cwd None) Dummy-1808243::DEBUG::2014-01-06 10:06:14,027::storage_mailbox::727::Storage.Misc.excCmd::(_checkForMail) 'dd if=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None) 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,143::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,355::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex 236b3c5a-452a-4614-801a-c30cefbce87e::WARNING::2014-01-06 10:06:14,358::lvm::594::Storage.LVM::(getLv) lv: _remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 not found in lvs vg: 11a077c7-658b-49bb-8596-a785109c24c9 response 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,362::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource '11a077c7-658b-49bb-8596-a785109c24c9_imageNS.69220da6-eeed-4435-aad0-7aa33f3a0d21' 236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 
10:06:14,367::resourceManager::634::ResourceManager::(releaseResource) Released resource '11a077c7-658b-49bb-8596-a785109c24c9_imageNS.69220da6-eeed-4435-aad0-7aa33f3a0d21' (0 active users)
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,369::resourceManager::640::ResourceManager::(releaseResource) Resource '11a077c7-658b-49bb-8596-a785109c24c9_imageNS.69220da6-eeed-4435-aad0-7aa33f3a0d21' is free, finding out if anyone is waiting for it.
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,374::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.506085b6-40e0-4176-a4df-9102857f51f2'
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,377::resourceManager::634::ResourceManager::(releaseResource) Released resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.506085b6-40e0-4176-a4df-9102857f51f2' (0 active users)
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,380::resourceManager::640::ResourceManager::(releaseResource) Resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.506085b6-40e0-4176-a4df-9102857f51f2' is free, finding out if anyone is waiting for it.
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,383::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.506085b6-40e0-4176-a4df-9102857f51f2', Clearing records.
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,389::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23'
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,392::resourceManager::634::ResourceManager::(releaseResource) Released resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' (0 active users)
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,395::resourceManager::640::ResourceManager::(releaseResource) Resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23' is free, finding out if anyone is waiting for it.
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,398::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource '11a077c7-658b-49bb-8596-a785109c24c9_volumeNS.c50561d9-c3ba-4366-b2bc-49bbfaa4cd23', Clearing records.
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,401::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource '11a077c7-658b-49bb-8596-a785109c24c9_imageNS.69220da6-eeed-4435-aad0-7aa33f3a0d21', Clearing records.
236b3c5a-452a-4614-801a-c30cefbce87e::ERROR::2014-01-06 10:06:14,407::task::850::TaskManager.Task::(_setError) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 857, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/storage/task.py", line 318, in run
    return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/share/vdsm/storage/securable.py", line 68, in wrapper
    return f(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 1937, in mergeSnapshots
    sdUUID, vmUUID, imgUUID, ancestor, successor, postZero)
  File "/usr/share/vdsm/storage/image.py", line 1162, in merge
    srcVol.shrinkToOptimalSize()
  File "/usr/share/vdsm/storage/blockVolume.py", line 315, in shrinkToOptimalSize
    volParams = self.getVolumeParams()
  File "/usr/share/vdsm/storage/volume.py", line 1008, in getVolumeParams
    volParams['imgUUID'] = self.getImage()
  File "/usr/share/vdsm/storage/blockVolume.py", line 494, in getImage
    return self.getVolumeTag(TAG_PREFIX_IMAGE)
  File "/usr/share/vdsm/storage/blockVolume.py", line 464, in getVolumeTag
    return _getVolumeTag(self.sdUUID, self.volUUID, tagPrefix)
  File "/usr/share/vdsm/storage/blockVolume.py", line 662, in _getVolumeTag
    tags = lvm.getLV(sdUUID, volUUID).tags
  File "/usr/share/vdsm/storage/lvm.py", line 851, in getLV
    raise se.LogicalVolumeDoesNotExistError("%s/%s" % (vgName, lvName))
LogicalVolumeDoesNotExistError: Logical volume does not exist: ('11a077c7-658b-49bb-8596-a785109c24c9/_remove_me_aVmPgweS_c50561d9-c3ba-4366-b2bc-49bbfaa4cd23',)
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,427::task::869::TaskManager.Task::(_run) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::Task._run: 236b3c5a-452a-4614-801a-c30cefbce87e () {} failed - stopping task
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,430::task::1194::TaskManager.Task::(stop) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::stopping in state running (force False)
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,433::task::974::TaskManager.Task::(_decref) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::ref 1 aborting True
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,436::task::900::TaskManager.Task::(_runJobs) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::aborting: Task is aborted: 'Logical volume does not exist' - code 610
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,440::task::974::TaskManager.Task::(_decref) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::ref 0 aborting True
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,443::task::909::TaskManager.Task::(_doAbort) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::Task._doAbort: force False
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,445::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,449::task::579::TaskManager.Task::(_updateState) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::moving from state running -> state aborting
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,457::task::736::TaskManager.Task::(_save) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_save: orig /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e temp /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e.temp
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,509::task::534::TaskManager.Task::(__state_aborting) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_aborting: recover policy auto
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,512::task::579::TaskManager.Task::(_updateState) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::moving from state aborting -> state racquiring
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,520::task::736::TaskManager.Task::(_save) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_save: orig /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e temp /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e.temp
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,574::task::579::TaskManager.Task::(_updateState) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::moving from state racquiring -> state recovering
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,581::task::736::TaskManager.Task::(_save) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_save: orig /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e temp /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e.temp
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,633::task::782::TaskManager.Task::(_recover) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_recover
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,635::task::789::TaskManager.Task::(_recover) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::running recovery None
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,638::task::770::TaskManager.Task::(_recoverDone) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::Recover Done: state recovering
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,641::task::579::TaskManager.Task::(_updateState) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::moving from state recovering -> state recovered
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,649::task::736::TaskManager.Task::(_save) Task=`236b3c5a-452a-4614-801a-c30cefbce87e`::_save: orig /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e temp /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/master/tasks/236b3c5a-452a-4614-801a-c30cefbce87e.temp
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,697::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.11a077c7-658b-49bb-8596-a785109c24c9': < ResourceRef 'Storage.11a077c7-658b-49bb-8596-a785109c24c9', isValid: 'True' obj: 'None'>}
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,700::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,706::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.11a077c7-658b-49bb-8596-a785109c24c9'
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,709::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.11a077c7-658b-49bb-8596-a785109c24c9' (0 active users)
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,712::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.11a077c7-658b-49bb-8596-a785109c24c9' is free, finding out if anyone is waiting for it.
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,714::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.11a077c7-658b-49bb-8596-a785109c24c9', Clearing records.
236b3c5a-452a-4614-801a-c30cefbce87e::DEBUG::2014-01-06 10:06:14,720::threadPool::57::Misc.ThreadPool::(setRunningTask) Number of running tasks: 0
Dummy-1808243::DEBUG::2014-01-06 10:06:14,788::storage_mailbox::727::Storage.Misc.excCmd::(_checkForMail) SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.471388 s, 2.2 MB/s\n'; <rc> = 0
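[For anyone triaging a similar failure: the VDSM entries above follow a fixed `::`-separated layout (thread::LEVEL::timestamp::module::line::logger::(func) message). The layout is inferred from this excerpt, not from documented VDSM internals, so treat the sketch below as a hypothetical helper for pulling the ERROR entries out of a log like this one:]

```python
# Minimal sketch for filtering VDSM 3.x-era log lines by level.
# The "::"-separated field layout is inferred from the excerpt above;
# treat it as an assumption, not a documented format.

def parse_vdsm_line(line):
    """Split one VDSM log line into its ::-separated fields."""
    parts = line.split("::", 6)
    if len(parts) < 7:
        return None  # continuation line (e.g. a traceback) or truncated entry
    thread, level, timestamp, module, lineno, logger, message = parts
    return {
        "thread": thread,
        "level": level,
        "timestamp": timestamp,
        "module": module,
        "line": lineno,
        "logger": logger,
        "message": message,
    }

def errors_only(log_text):
    """Yield parsed entries whose level is ERROR, skipping everything else."""
    for line in log_text.splitlines():
        entry = parse_vdsm_line(line)
        if entry and entry["level"] == "ERROR":
            yield entry
```

[Feeding it the log above would surface the single ERROR entry from `task::850` (`_setError`), which is where the `LogicalVolumeDoesNotExistError` first appears.]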
On 06/01/2014 10:30, Maor Lipchuk wrote:
Hi Nicolas, Can you please also add the VDSM log?
Thanks, Maor
On 01/06/2014 11:25 AM, Nicolas Ecarnot wrote:
Hi,
With our oVirt 3.3, I created a snapshot three weeks ago on a VM I had properly shut down. It has been running fine so far. Today, after shutting it down properly again, I am trying to delete the snapshot and I get an error:
"Failed to delete snapshot 'blahblahbla' for VM 'myVM'."
The disk is thin provisioned and accessed via VirtIO, nothing special.
The log below comes from the manager.
I hope someone can help us, because this server is quite important to us.
Thank you.
2014-01-06 10:10:58,826 INFO [org.ovirt.engine.core.bll.RemoveSnapshotCommand] (ajp--127.0.0.1-8702-8) Lock Acquired to object EngineLock [exclusiveLocks= key: cb953dc1-c796-457a-99a1-0e54f1c0c338 value: VM , sharedLocks= ]
2014-01-06 10:10:58,837 INFO [org.ovirt.engine.core.bll.RemoveSnapshotCommand] (ajp--127.0.0.1-8702-8) Running command: RemoveSnapshotCommand internal: false. Entities affected : ID: cb953dc1-c796-457a-99a1-0e54f1c0c338 Type: VM
2014-01-06 10:10:58,840 INFO [org.ovirt.engine.core.bll.RemoveSnapshotCommand] (ajp--127.0.0.1-8702-8) Lock freed to object EngineLock [exclusiveLocks= key: cb953dc1-c796-457a-99a1-0e54f1c0c338 value: VM , sharedLocks= ]
2014-01-06 10:10:58,844 INFO [org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskCommand] (ajp--127.0.0.1-8702-8) Running command: RemoveSnapshotSingleDiskCommand internal: true. Entities affected : ID: 00000000-0000-0000-0000-000000000000 Type: Storage
2014-01-06 10:10:58,848 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.MergeSnapshotsVDSCommand] (ajp--127.0.0.1-8702-8) START, MergeSnapshotsVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false, storageDomainId = 11a077c7-658b-49bb-8596-a785109c24c9, imageGroupId = 69220da6-eeed-4435-aad0-7aa33f3a0d21, imageId = 506085b6-40e0-4176-a4df-9102857f51f2, imageId2 = c50561d9-c3ba-4366-b2bc-49bbfaa4cd23, vmId = cb953dc1-c796-457a-99a1-0e54f1c0c338, postZero = false), log id: 22d6503b
2014-01-06 10:10:59,511 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.MergeSnapshotsVDSCommand] (ajp--127.0.0.1-8702-8) FINISH, MergeSnapshotsVDSCommand, log id: 22d6503b
2014-01-06 10:10:59,518 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (ajp--127.0.0.1-8702-8) CommandAsyncTask::Adding CommandMultiAsyncTasks object for command b402868f-b7f9-4c0e-a6fd-bdc51ff49952
2014-01-06 10:10:59,519 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (ajp--127.0.0.1-8702-8) CommandMultiAsyncTasks::AttachTask: Attaching task 6caec3bc-fc66-42be-a642-7733fc033103 to command b402868f-b7f9-4c0e-a6fd-bdc51ff49952.
2014-01-06 10:10:59,525 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (ajp--127.0.0.1-8702-8) Adding task 6caec3bc-fc66-42be-a642-7733fc033103 (Parent Command RemoveSnapshot, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters), polling hasn't started yet..
2014-01-06 10:10:59,530 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-8) Correlation ID: 3b3e6fb1, Job ID: 53867ef7-d767-45d2-b446-e5d3f5584a19, Call Stack: null, Custom Event ID: -1, Message: Snapshot 'Maj 47 60 vers 5.2.3' deletion for VM 'uc-674' was initiated by necarnot.
2014-01-06 10:10:59,532 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (ajp--127.0.0.1-8702-8) BaseAsyncTask::StartPollingTask: Starting to poll task 6caec3bc-fc66-42be-a642-7733fc033103.
2014-01-06 10:11:01,811 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-20) Polling and updating Async Tasks: 2 tasks, 1 tasks to poll now
2014-01-06 10:11:01,824 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-20) Failed in HSMGetAllTasksStatusesVDS method
2014-01-06 10:11:01,825 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-20) Error code GeneralException and error message VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = '506085b6-40e0-4176-a4df-9102857f51f2'
2014-01-06 10:11:01,826 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-20) SPMAsyncTask::PollTask: Polling task 6caec3bc-fc66-42be-a642-7733fc033103 (Parent Command RemoveSnapshot, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned status finished, result 'cleanSuccess'.
2014-01-06 10:11:01,829 ERROR [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-20) BaseAsyncTask::LogEndTaskFailure: Task 6caec3bc-fc66-42be-a642-7733fc033103 (Parent Command RemoveSnapshot, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended with failure:
-- Result: cleanSuccess
-- Message: VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = '506085b6-40e0-4176-a4df-9102857f51f2',
-- Exception: VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = '506085b6-40e0-4176-a4df-9102857f51f2'
2014-01-06 10:11:01,832 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (DefaultQuartzScheduler_Worker-20) CommandAsyncTask::EndActionIfNecessary: All tasks of command b402868f-b7f9-4c0e-a6fd-bdc51ff49952 has ended -> executing EndAction
2014-01-06 10:11:01,833 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (DefaultQuartzScheduler_Worker-20) CommandAsyncTask::EndAction: Ending action for 1 tasks (command ID: b402868f-b7f9-4c0e-a6fd-bdc51ff49952): calling EndAction .
2014-01-06 10:11:01,834 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-27) CommandAsyncTask::EndCommandAction [within thread] context: Attempting to EndAction RemoveSnapshot, executionIndex: 0
2014-01-06 10:11:01,839 ERROR [org.ovirt.engine.core.bll.RemoveSnapshotCommand] (pool-6-thread-27) [3b3e6fb1] Ending command with failure: org.ovirt.engine.core.bll.RemoveSnapshotCommand
2014-01-06 10:11:01,844 ERROR [org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskCommand] (pool-6-thread-27) [33fa2a5d] Ending command with failure: org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskCommand
2014-01-06 10:11:01,848 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (pool-6-thread-27) Correlation ID: 3b3e6fb1, Job ID: 53867ef7-d767-45d2-b446-e5d3f5584a19, Call Stack: null, Custom Event ID: -1, Message: Failed to delete snapshot 'Maj 47 60 vers 5.2.3' for VM 'uc-674'.
2014-01-06 10:11:01,850 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-27) CommandAsyncTask::HandleEndActionResult [within thread]: EndAction for action type RemoveSnapshot completed, handling the result.
2014-01-06 10:11:01,851 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-27) CommandAsyncTask::HandleEndActionResult [within thread]: EndAction for action type RemoveSnapshot succeeded, clearing tasks.
2014-01-06 10:11:01,853 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (pool-6-thread-27) SPMAsyncTask::ClearAsyncTask: Attempting to clear task 6caec3bc-fc66-42be-a642-7733fc033103
2014-01-06 10:11:01,853 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (pool-6-thread-27) START, SPMClearTaskVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false, taskId = 6caec3bc-fc66-42be-a642-7733fc033103), log id: 424e7cf
2014-01-06 10:11:01,873 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (pool-6-thread-27) START, HSMClearTaskVDSCommand(HostName = serv-vm-adm9, HostId = ba48edd4-c528-4832-bda4-4ab66245df24, taskId=6caec3bc-fc66-42be-a642-7733fc033103), log id: 12eec929
2014-01-06 10:11:01,884 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (pool-6-thread-27) FINISH, HSMClearTaskVDSCommand, log id: 12eec929
2014-01-06 10:11:01,885 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (pool-6-thread-27) FINISH, SPMClearTaskVDSCommand, log id: 424e7cf
2014-01-06 10:11:01,886 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (pool-6-thread-27) BaseAsyncTask::RemoveTaskFromDB: Removed task 6caec3bc-fc66-42be-a642-7733fc033103 from DataBase
2014-01-06 10:11:01,887 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-27) CommandAsyncTask::HandleEndActionResult [within thread]: Removing CommandMultiAsyncTasks object for entity b402868f-b7f9-4c0e-a6fd-bdc51ff49952
2014-01-06 10:11:07,703 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-9) Setting new tasks map. The map contains now 1 tasks
2014-01-06 10:12:07,703 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-99) Setting new tasks map. The map contains now 0 tasks
2014-01-06 10:12:07,704 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-99) Cleared all tasks of pool 5849b030-626e-47cb-ad90-3ce782d831b3.
--
Nicolas Ecarnot
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
-- Nicolas Ecarnot

Can you please attach vdsm.log?

----- Original Message -----
From: "Nicolas Ecarnot" <nicolas@ecarnot.net>
To: "users" <users@ovirt.org>
Sent: Monday, January 6, 2014 11:25:15 AM
Subject: [Users] Unable to delete a snapshot

Which minor version of oVirt was this, and which version of VDSM? I ran into the same problem with 3.3.0; snapshot deletion was fixed in 3.3.1.
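[The traceback in the VDSM log shows the merge re-reading a volume that had already been renamed with VDSM's `_remove_me_` deletion prefix (`_remove_me_aVmPgweS_c50561d9-...`). As a hedged diagnostic idea, not an official procedure: one could check whether such leftovers linger on the storage domain's volume group before retrying. The helper below is a hypothetical sketch that only does the name filtering; it assumes you feed it the output of something like `lvs --noheadings -o lv_name <vg>` run on the SPM host, and that the prefix convention matches what the traceback shows.]

```python
def find_marked_for_removal(lvs_output):
    """Return LV names carrying the '_remove_me_' prefix that VDSM
    gives to volumes scheduled for deletion (naming convention taken
    from the LogicalVolumeDoesNotExistError in the traceback above)."""
    leftovers = []
    for raw in lvs_output.splitlines():
        name = raw.strip()
        if name.startswith("_remove_me_"):
            leftovers.append(name)
    return leftovers
```

[If such LVs turn up after a failed merge, that would support the diagnosis above; do not remove anything by hand without checking with the list first, since the engine normally cleans these up itself.]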
participants (6)
- Maor Lipchuk
- Meital Bourvine
- Milan Dlauhy
- Nicolas Ecarnot
- Sander Grendelman
- Sandro Bonazzola