[ovirt-users] VM disk update failure

Stefano Bovina bovy89 at gmail.com
Sun May 7 19:57:00 UTC 2017


Hi,
OK, ignore what I said regarding step 2: the volume exists.

2017-05-07 20:27 GMT+02:00 Stefano Bovina <bovy89 at gmail.com>:

> Hi,
> 1)
> What do you need exactly? The steps were: shut down the VM, edit the disk,
> "extend size by" ---> error
>
> 2)
> The storage domain is up and running, but according to the lvm commands the
> VG exists while the LV does not.
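For anyone repeating this check, here is a minimal sketch (an assumption on my side, not vdsm's own code) of listing the LVs that a VG actually reports; on an oVirt host the VG name would be the storage-domain UUID, e.g. 2c501858-bf8d-49a5-a42b-bca341b47827:

```python
import subprocess

def parse_lv_names(lvs_output):
    """Parse `lvs --noheadings -o lv_name <vg>` output into a list of LV names."""
    return [line.strip() for line in lvs_output.splitlines() if line.strip()]

def lv_exists(vg_name, lv_name):
    """Return True if lv_name is listed in volume group vg_name.

    Shells out to the system `lvs` binary, so this only works on a host
    with LVM installed and the storage connected.
    """
    out = subprocess.check_output(
        ["lvs", "--noheadings", "-o", "lv_name", vg_name]).decode()
    return lv_name in parse_lv_names(out)
```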
>
> While powering up a machine, I also saw these lines in the vdsm log:
>
>
> Thread-22::DEBUG::2017-05-07 18:59:54,115::blockSD::596::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.000411491 s, 10.0 MB/s\n'; <rc> = 0
> Thread-300::DEBUG::2017-05-07 18:59:55,539::libvirtconnection::151::root::(wrapper)
> Unknown libvirterror: ecode: 80 edom: 20 level: 2 message: metadata not
> found: Requested metadata element is not present
> Thread-1101::DEBUG::2017-05-07 18:59:55,539::libvirtconnection::151::root::(wrapper)
> Unknown libvirterror: ecode: 80 edom: 20 level: 2 message: metadata not
> found: Requested metadata element is not present
> VM Channels Listener::INFO::2017-05-07 18:59:55,560::guestagent::180::vm.Vm::(_handleAPIVersion)
> vmId=`79cd29dc-bb6c-4f1f-ae38-54d6cac05489`::Guest API version changed
> from 3 to 1
> VM Channels Listener::INFO::2017-05-07 18:59:55,865::guestagent::180::vm.Vm::(_handleAPIVersion)
> vmId=`d37b37e9-1dd7-4a90-8d8c-755743c714ad`::Guest API version changed
> from 2 to 1
> VM Channels Listener::INFO::2017-05-07 18:59:57,414::guestagent::180::vm.Vm::(_handleAPIVersion)
> vmId=`8a034ac2-2646-4fe7-8fda-7c33affa8009`::Guest API version changed
> from 2 to 1
> VM Channels Listener::INFO::2017-05-07 18:59:58,178::guestagent::180::vm.Vm::(_handleAPIVersion)
> vmId=`4d1dad66-4ada-445a-b5f6-c695220d6b19`::Guest API version changed
> from 3 to 1
> Thread-272::DEBUG::2017-05-07 18:59:58,187::libvirtconnection::151::root::(wrapper)
> Unknown libvirterror: ecode: 80 edom: 20 level: 2 message: metadata not
> found: Requested metadata element is not present
> Thread-166::DEBUG::2017-05-07 18:59:58,539::libvirtconnection::151::root::(wrapper)
> Unknown libvirterror: ecode: 80 edom: 20 level: 2 message: metadata not
> found: Requested metadata element is not present
> Thread-18::DEBUG::2017-05-07 19:00:33,805::persistentDict::234::Storage.PersistentDict::(refresh) read lines (LvMetadataRW)=[]
> Thread-18::DEBUG::2017-05-07 19:00:33,805::persistentDict::252::Storage.PersistentDict::(refresh) Empty metadata
> Thread-18::DEBUG::2017-05-07 19:00:33,805::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with VGTagMetadataRW backend
> Thread-18::DEBUG::2017-05-07 19:00:33,805::lvm::504::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex
> Thread-18::DEBUG::2017-05-07 19:00:33,806::lvm::506::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex
> Thread-18::DEBUG::2017-05-07 19:00:33,806::lvm::514::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex
> Thread-18::DEBUG::2017-05-07 19:00:33,806::lvm::526::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex
> Thread-18::DEBUG::2017-05-07 19:00:33,806::lvm::371::Storage.OperationMutex::(_reloadvgs)
> Operation 'lvm reload operation' got the operation mutex
> Thread-18::DEBUG::2017-05-07 19:00:33,806::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/360060160a62134002818778f949de411|/dev/mapper/360060160a621340040652b7582f5e511|/dev/mapper/360060160a621340064b1034cbbfce511|/dev/mapper/360060160a6213400c4b39e80949de411|/dev/mapper/360060160a6213400cce46e40949de411|/dev/mapper/360060160a6213400e622de69949de411|/dev/mapper/360060160a6213400fa2d31acbbfce511|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 2c501858-bf8d-49a5-a42b-bca341b47827 (cwd None)
> Thread-18::DEBUG::2017-05-07 19:00:33,962::lvm::291::Storage.Misc.excCmd::(cmd)
> SUCCESS: <err> = '  WARNING: lvmetad is running but disabled. Restart
> lvmetad before enabling it!\n'; <rc> = 0
> Thread-18::DEBUG::2017-05-07 19:00:33,962::lvm::416::Storage.OperationMutex::(_reloadvgs)
> Operation 'lvm reload operation' released the operation mutex
> Thread-18::DEBUG::2017-05-07 19:00:33,963::persistentDict::234::Storage.PersistentDict::(refresh) read lines (VGTagMetadataRW)=['CLASS=Data', 'DESCRIPTION=BKP01', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'LOGBLKSIZE=512', 'PHYBLKSIZE=512', 'POOL_UUID=715d1ba2-eabe-48db-9aea-c28c30359808', u'PV0=pv:360060160a6213400c4b39e80949de411,uuid:Epqopf-zKvd-Um2q-lHO9-6jmE-7IyB-qvib1I,pestart:0,pecount:3997,mapoffset:0', u'PV1=pv:360060160a6213400cce46e40949de411,uuid:GD9AnB-2pin-Jbof-kxHI-Gsa2-308p-zeoZ3B,pestart:0,pecount:4477,mapoffset:3997', 'ROLE=Regular', 'SDUUID=2c501858-bf8d-49a5-a42b-bca341b47827', 'TYPE=FCP', 'VERSION=3', 'VGUUID=p9xQ2y-gOuF-nUQU-fKJP-skMF-9xaY-H7W2ym', '_SHA_CKSUM=237ce5cf1ccddf89565560f570afb6622c58c90e']
> Thread-18::INFO::2017-05-07 19:00:33,963::sd::385::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace 2c501858-bf8d-49a5-a42b-bca341b47827_imageNS already registered
> Thread-18::INFO::2017-05-07 19:00:33,963::sd::393::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace 2c501858-bf8d-49a5-a42b-bca341b47827_volumeNS already registered
> Thread-18::INFO::2017-05-07 19:00:33,964::blockSD::448::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace 2c501858-bf8d-49a5-a42b-bca341b47827_lvmActivationNS already registered
> Thread-18::DEBUG::2017-05-07 19:00:33,964::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm vgck --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/360060160a6213400c4b39e80949de411|/dev/mapper/360060160a6213400cce46e40949de411|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' 2c501858-bf8d-49a5-a42b-bca341b47827 (cwd None)
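The VGTagMetadataRW lines read back above are plain KEY=VALUE entries. As a rough illustration (a hypothetical helper, not vdsm's own parser), they can be turned into a dict for inspection like this:

```python
def parse_vg_metadata(lines):
    """Split 'KEY=VALUE' storage-domain metadata entries into a dict.

    Partition only on the first '=' so that values containing further
    punctuation (e.g. the PV0/PV1 'pv:...,uuid:...' payloads) stay whole;
    empty values such as 'LOCKPOLICY=' map to ''.
    """
    meta = {}
    for entry in lines:
        key, _, value = entry.partition("=")
        meta[key] = value
    return meta

# Sample entries copied from the log excerpt above.
meta = parse_vg_metadata([
    "CLASS=Data",
    "SDUUID=2c501858-bf8d-49a5-a42b-bca341b47827",
    "TYPE=FCP",
    "VERSION=3",
    "LOCKPOLICY=",
])
```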
>
>
> 2017-05-07 14:36 GMT+02:00 Kevin Goldblatt <kgoldbla at redhat.com>:
>
>> Hi Stefano,
>>
>> From the log I see that the image cannot be found. This would indicate
>> an issue with the host's connection to the storage provider:
>>
>> OneImageInfoReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=254, mMessage=Image path does not exist or cannot be accessed/created: (u'/rhev/data-center/mnt/blockSD/6a386652-629d-4045-835b-21d2f5c104aa/images/c5fb9190-d059-4d9b-af23-07618ff660ce',)]]
>>
>> Please provide the following:
>>
>> 1. Describe the exact steps you performed when updating the disk.
>>
>> 2. Check that the storage domain on which the disk resides is up and
>> running.
>>
>> 3. If it is not, check whether the storage is visible from the host.
>>
>> 4. If all of the above is working, please file a bug in Bugzilla and
>> attach the engine, server, and vdsm logs.
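The path the engine complains about is deterministic, so the check in step 2/3 can be scripted. A minimal sketch (my own illustration, using the UUIDs from the error above) that builds the image directory one would then test for on the host:

```python
import os

def image_dir(sd_uuid, image_group_uuid,
              mnt_root="/rhev/data-center/mnt/blockSD"):
    """Build the image directory path that the engine tried to access."""
    return os.path.join(mnt_root, sd_uuid, "images", image_group_uuid)

path = image_dir("6a386652-629d-4045-835b-21d2f5c104aa",
                 "c5fb9190-d059-4d9b-af23-07618ff660ce")
# On the affected host one would then run os.path.isdir(path); here we only
# build the path, since the block storage is not attached to this machine.
```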
>>
>>
>> Regards,
>>
>>
>> Kevin
>>
>> On Tue, May 2, 2017 at 11:10 PM, Stefano Bovina <bovy89 at gmail.com> wrote:
>>
>>> Hi, while trying to update a VM disk, the operation failed (forcing me
>>> to add a new disk instead).
>>>
>>> Any advice on how to resolve this error?
>>>
>>> Thanks
>>>
>>>
>>> Installation info:
>>>
>>> ovirt-release35-006-1.noarch
>>> libgovirt-0.3.3-1.el7_2.1.x86_64
>>> vdsm-4.16.30-0.el7.centos.x86_64
>>> vdsm-xmlrpc-4.16.30-0.el7.centos.noarch
>>> vdsm-yajsonrpc-4.16.30-0.el7.centos.noarch
>>> vdsm-jsonrpc-4.16.30-0.el7.centos.noarch
>>> vdsm-python-zombiereaper-4.16.30-0.el7.centos.noarch
>>> vdsm-python-4.16.30-0.el7.centos.noarch
>>> vdsm-cli-4.16.30-0.el7.centos.noarch
>>> qemu-kvm-ev-2.3.0-29.1.el7.x86_64
>>> qemu-kvm-common-ev-2.3.0-29.1.el7.x86_64
>>> qemu-kvm-tools-ev-2.3.0-29.1.el7.x86_64
>>> libvirt-client-1.2.17-13.el7_2.3.x86_64
>>> libvirt-daemon-driver-storage-1.2.17-13.el7_2.3.x86_64
>>> libvirt-python-1.2.17-2.el7.x86_64
>>> libvirt-daemon-driver-nwfilter-1.2.17-13.el7_2.3.x86_64
>>> libvirt-daemon-driver-nodedev-1.2.17-13.el7_2.3.x86_64
>>> libvirt-lock-sanlock-1.2.17-13.el7_2.3.x86_64
>>> libvirt-glib-0.1.9-1.el7.x86_64
>>> libvirt-daemon-driver-network-1.2.17-13.el7_2.3.x86_64
>>> libvirt-daemon-driver-lxc-1.2.17-13.el7_2.3.x86_64
>>> libvirt-daemon-driver-interface-1.2.17-13.el7_2.3.x86_64
>>> libvirt-1.2.17-13.el7_2.3.x86_64
>>> libvirt-daemon-1.2.17-13.el7_2.3.x86_64
>>> libvirt-daemon-config-network-1.2.17-13.el7_2.3.x86_64
>>> libvirt-daemon-driver-secret-1.2.17-13.el7_2.3.x86_64
>>> libvirt-daemon-config-nwfilter-1.2.17-13.el7_2.3.x86_64
>>> libvirt-daemon-kvm-1.2.17-13.el7_2.3.x86_64
>>> libvirt-daemon-driver-qemu-1.2.17-13.el7_2.3.x86_64
>>>
>>>
>>> -------- engine.log
>>>
>>> 2017-05-02 09:48:26,505 INFO  [org.ovirt.engine.core.bll.UpdateVmDiskCommand]
>>> (ajp--127.0.0.1-8702-6) [c3d7125] Lock Acquired to object EngineLock
>>> [exclusiveLocks= key: 25c0bcc0-0d3d-4ddc-b103-24ed2ac5aa05 value:
>>> VM_DISK_BOOT
>>> key: c5fb9190-d059-4d9b-af23-07618ff660ce value: DISK
>>> , sharedLocks= key: 25c0bcc0-0d3d-4ddc-b103-24ed2ac5aa05 value: VM
>>> ]
>>> 2017-05-02 09:48:26,515 INFO  [org.ovirt.engine.core.bll.UpdateVmDiskCommand]
>>> (ajp--127.0.0.1-8702-6) [c3d7125] Running command: UpdateVmDiskCommand
>>> internal: false. Entities affected :  ID: c5fb9190-d059-4d9b-af23-07618ff660ce
>>> Type: DiskAction group EDIT_DISK_PROPERTIES with role type USER
>>> 2017-05-02 09:48:26,562 INFO  [org.ovirt.engine.core.bll.ExtendImageSizeCommand]
>>> (ajp--127.0.0.1-8702-6) [ae718d8] Running command: ExtendImageSizeCommand
>>> internal: true. Entities affected :  ID: c5fb9190-d059-4d9b-af23-07618ff660ce
>>> Type: DiskAction group EDIT_DISK_PROPERTIES with role type USER
>>> 2017-05-02 09:48:26,565 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.ExtendImageSizeVDSCommand] (ajp--127.0.0.1-8702-6) [ae718d8] START, ExtendImageSizeVDSCommand( storagePoolId = 715d1ba2-eabe-48db-9aea-c28c30359808, ignoreFailoverLimit = false), log id: 52aac743
>>> 2017-05-02 09:48:26,604 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.ExtendImageSizeVDSCommand] (ajp--127.0.0.1-8702-6) [ae718d8] FINISH, ExtendImageSizeVDSCommand, log id: 52aac743
>>> 2017-05-02 09:48:26,650 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (ajp--127.0.0.1-8702-6) [ae718d8] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command cb7958d9-6eae-44a9-891a-7fe088a79df8
>>> 2017-05-02 09:48:26,651 INFO  [org.ovirt.engine.core.bll.CommandMultiAsyncTasks]
>>> (ajp--127.0.0.1-8702-6) [ae718d8] CommandMultiAsyncTasks::AttachTask:
>>> Attaching task 769a4b18-182b-4048-bb34-a276a55ccbff to command
>>> cb7958d9-6eae-44a9-891a-7fe088a79df8.
>>> 2017-05-02 09:48:26,661 INFO  [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (ajp--127.0.0.1-8702-6) [ae718d8] Adding task 769a4b18-182b-4048-bb34-a276a55ccbff (Parent Command UpdateVmDisk, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters), polling hasn't started yet..
>>> 2017-05-02 09:48:26,673 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-6) [ae718d8] Correlation ID: c3d7125, Call Stack: null, Custom Event ID: -1, Message: VM sysinfo-73 sysinfo-73_Disk3 disk was updated by admin at internal.
>>> 2017-05-02 09:48:26,674 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
>>> (ajp--127.0.0.1-8702-6) [ae718d8] BaseAsyncTask::startPollingTask:
>>> Starting to poll task 769a4b18-182b-4048-bb34-a276a55ccbff.
>>> 2017-05-02 09:48:28,430 INFO  [org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
>>> (DefaultQuartzScheduler_Worker-48) [36cd2f7] Polling and updating Async
>>> Tasks: 1 tasks, 1 tasks to poll now
>>> 2017-05-02 09:48:28,435 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-48) [36cd2f7] Failed in HSMGetAllTasksStatusesVDS method
>>> 2017-05-02 09:48:28,436 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
>>> (DefaultQuartzScheduler_Worker-48) [36cd2f7] SPMAsyncTask::PollTask:
>>> Polling task 769a4b18-182b-4048-bb34-a276a55ccbff (Parent Command
>>> UpdateVmDisk, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters)
>>> returned status finished, result 'cleanSuccess'.
>>> 2017-05-02 09:48:28,446 ERROR [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
>>> (DefaultQuartzScheduler_Worker-48) [36cd2f7]
>>> BaseAsyncTask::logEndTaskFailure: Task 769a4b18-182b-4048-bb34-a276a55ccbff
>>> (Parent Command UpdateVmDisk, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters)
>>> ended with failure:
>>> -- Result: cleanSuccess
>>> -- Message: VDSGenericException: VDSErrorException: Failed to
>>> HSMGetAllTasksStatusesVDS, error = Could not acquire resource. Probably
>>> resource factory threw an exception.: (), code = 100,
>>> -- Exception: VDSGenericException: VDSErrorException: Failed to
>>> HSMGetAllTasksStatusesVDS, error = Could not acquire resource. Probably
>>> resource factory threw an exception.: (), code = 100
>>> 2017-05-02 09:48:28,448 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (DefaultQuartzScheduler_Worker-48) [36cd2f7] CommandAsyncTask::endActionIfNecessary: All tasks of command cb7958d9-6eae-44a9-891a-7fe088a79df8 has ended -> executing endAction
>>> 2017-05-02 09:48:28,449 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (DefaultQuartzScheduler_Worker-48) [36cd2f7] CommandAsyncTask::endAction: Ending action for 1 tasks (command ID: cb7958d9-6eae-44a9-891a-7fe088a79df8): calling endAction .
>>> 2017-05-02 09:48:28,450 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-8-thread-6) [36cd2f7] CommandAsyncTask::endCommandAction [within thread] context: Attempting to endAction UpdateVmDisk, executionIndex: 0
>>> 2017-05-02 09:48:28,501 ERROR [org.ovirt.engine.core.bll.UpdateVmDiskCommand] (org.ovirt.thread.pool-8-thread-6) [c3d7125] Ending command with failure: org.ovirt.engine.core.bll.UpdateVmDiskCommand
>>> 2017-05-02 09:48:28,502 ERROR [org.ovirt.engine.core.bll.ExtendImageSizeCommand] (org.ovirt.thread.pool-8-thread-6) Ending command with failure: org.ovirt.engine.core.bll.ExtendImageSizeCommand
>>> 2017-05-02 09:48:28,504 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (org.ovirt.thread.pool-8-thread-6) START, GetImageInfoVDSCommand( storagePoolId = 715d1ba2-eabe-48db-9aea-c28c30359808, ignoreFailoverLimit = false, storageDomainId = 6a386652-629d-4045-835b-21d2f5c104aa, imageGroupId = c5fb9190-d059-4d9b-af23-07618ff660ce, imageId = 562f5b50-1d1a-457c-8b5f-cd8020611550), log id: 674a2581
>>> 2017-05-02 09:48:28,511 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (org.ovirt.thread.pool-8-thread-6) IrsBroker::getImageInfo::Failed getting image info imageId = 562f5b50-1d1a-457c-8b5f-cd8020611550 does not exist on domainName = VM01 , domainId = 6a386652-629d-4045-835b-21d2f5c104aa, error code: ImagePathError, message: Image path does not exist or cannot be accessed/created: (u'/rhev/data-center/mnt/blockSD/6a386652-629d-4045-835b-21d2f5c104aa/images/c5fb9190-d059-4d9b-af23-07618ff660ce',)
>>> 2017-05-02 09:48:28,512 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (org.ovirt.thread.pool-8-thread-6) Command org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand return value
>>>
>>> OneImageInfoReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=254, mMessage=Image path does not exist or cannot be accessed/created: (u'/rhev/data-center/mnt/blockSD/6a386652-629d-4045-835b-21d2f5c104aa/images/c5fb9190-d059-4d9b-af23-07618ff660ce',)]]
>>>
>>> 2017-05-02 09:48:28,512 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (org.ovirt.thread.pool-8-thread-6) FINISH, GetImageInfoVDSCommand, log id: 674a2581
>>> 2017-05-02 09:48:28,533 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-6) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Failed to extend size of the disk 'sysinfo-73_Disk3' to 20 GB, User: admin at internal.
>>> 2017-05-02 09:48:28,534 INFO  [org.ovirt.engine.core.bll.UpdateVmDiskCommand]
>>> (org.ovirt.thread.pool-8-thread-6) Lock freed to object EngineLock
>>> [exclusiveLocks= key: 25c0bcc0-0d3d-4ddc-b103-24ed2ac5aa05 value:
>>> VM_DISK_BOOT
>>> key: c5fb9190-d059-4d9b-af23-07618ff660ce value: DISK
>>> , sharedLocks= key: 25c0bcc0-0d3d-4ddc-b103-24ed2ac5aa05 value: VM
>>> ]
>>> 2017-05-02 09:48:28,541 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-6) Correlation ID: c3d7125, Call Stack: null, Custom Event ID: -1, Message: VM sysinfo-73 sysinfo-73_Disk3 disk was updated by admin at internal.
>>> 2017-05-02 09:48:28,541 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
>>> (org.ovirt.thread.pool-8-thread-6) CommandAsyncTask::HandleEndActionResult
>>> [within thread]: endAction for action type UpdateVmDisk completed, handling
>>> the result.
>>> 2017-05-02 09:48:28,541 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
>>> (org.ovirt.thread.pool-8-thread-6) CommandAsyncTask::HandleEndActionResult
>>> [within thread]: endAction for action type UpdateVmDisk succeeded, clearing
>>> tasks.
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users at ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>