The disks dropped off when I was doing A LOT of disk IO on the VM -
untarring a 2TB archive. I'd taken a snapshot of the VM about 30
minutes earlier.
The storage shows no faults at this time. The switch fabric was online
for all 4 paths.
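
If it's useful I can also pull the path state from the host itself;
the check I'd run (an illustration, not captured output from the
incident) is just:

multipath -ll

which lists per-path status for the FC LUNs.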
Pat
On 02/20/2017 02:08 PM, Adam Litke wrote:
There definitely seems to be a problem with your storage domain
81f19871... The host is unable to join that domain's sanlock
lockspace. Also, it seems that some metadata for the disk with id
e17ebd7c... was corrupted or lost in translation somehow. Can you
provide more details about what happened when "the disk images got
'unregistered' from oVirt"? Were you performing any particular
operations (such as moving disks, snapshot create/delete, etc.)? Was
there a problem with the storage at that time?
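
When you have a chance, could you also check sanlock's view from the
host? A sketch of what I mean (standard sanlock CLI; on a block
domain the lockspace name and the ids LV both use the storage domain
UUID, so <domain-uuid> below is a placeholder):

sanlock client status
sanlock direct dump /dev/<domain-uuid>/ids

That should show whether the host is holding, or failing to join,
the 81f19871... lockspace.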
On Mon, Feb 20, 2017 at 9:51 AM, Pat Riehecky <riehecky@fnal.gov> wrote:
Hi Adam,
Thanks for looking! The storage is fibre-attached, and I've verified
with the SAN folks that nothing went wonky on their side during this
window.
Here is what I've got from vdsm.log during the window (and a bit
surrounding it for context):
libvirtEventLoop::WARNING::2017-02-16
08:35:17,435::utils::140::root::(rmFile) File:
/var/lib/libvirt/qemu/channels/ba806b93-b6fe-4873-99ec-55bb34c12e5f.com.redhat.rhevm.vdsm
already removed
libvirtEventLoop::WARNING::2017-02-16
08:35:17,435::utils::140::root::(rmFile) File:
/var/lib/libvirt/qemu/channels/ba806b93-b6fe-4873-99ec-55bb34c12e5f.org.qemu.guest_agent.0
already removed
periodic/2::WARNING::2017-02-16
08:35:18,144::periodic::295::virt.vm::(__call__)
vmId=`ba806b93-b6fe-4873-99ec-55bb34c12e5f`::could not run on
ba806b93-b6fe-4873-99ec-55bb34c12e5f: domain not connected
periodic/3::WARNING::2017-02-16
08:35:18,305::periodic::261::virt.periodic.VmDispatcher::(__call__)
could not run <class 'virt.periodic.DriveWatermarkMonitor'> on
['ba806b93-b6fe-4873-99ec-55bb34c12e5f']
Thread-23021::ERROR::2017-02-16
09:28:33,096::task::866::Storage.TaskManager.Task::(_setError)
Task=`ecab8086-261f-44b9-8123-eefb9bbf5b05`::Unexpected error
Thread-23021::ERROR::2017-02-16
09:28:33,097::dispatcher::76::Storage.Dispatcher::(wrapper)
{'status': {'message': "Storage domain is member of pool:
'domain=81f19871-4d91-4698-a97d-36452bfae281'", 'code': 900}}
Thread-23783::ERROR::2017-02-16
10:13:32,876::task::866::Storage.TaskManager.Task::(_setError)
Task=`ff628204-6e41-4e5e-b83a-dad6ec94d0d3`::Unexpected error
Thread-23783::ERROR::2017-02-16
10:13:32,877::dispatcher::76::Storage.Dispatcher::(wrapper)
{'status': {'message': "Storage domain is member of pool:
'domain=81f19871-4d91-4698-a97d-36452bfae281'", 'code': 900}}
Thread-24542::ERROR::2017-02-16
10:58:32,578::task::866::Storage.TaskManager.Task::(_setError)
Task=`f5111200-e980-46bb-bbc3-898ae312d556`::Unexpected error
Thread-24542::ERROR::2017-02-16
10:58:32,579::dispatcher::76::Storage.Dispatcher::(wrapper)
{'status': {'message': "Storage domain is member of pool:
'domain=81f19871-4d91-4698-a97d-36452bfae281'", 'code': 900}}
jsonrpc.Executor/4::ERROR::2017-02-16
11:28:24,049::sdc::139::Storage.StorageDomainCache::(_findDomain)
looking for unfetched domain 13127103-3f59-418a-90f1-5b1ade8526b1
jsonrpc.Executor/4::ERROR::2017-02-16
11:28:24,049::sdc::156::Storage.StorageDomainCache::(_findUnfetchedDomain)
looking for domain 13127103-3f59-418a-90f1-5b1ade8526b1
jsonrpc.Executor/4::ERROR::2017-02-16
11:28:24,305::sdc::145::Storage.StorageDomainCache::(_findDomain)
domain 13127103-3f59-418a-90f1-5b1ade8526b1 not found
6e31bf97-458c-4a30-9df5-14f475db3339::ERROR::2017-02-16
11:29:19,402::image::205::Storage.Image::(getChain) There is no
leaf in the image e17ebd7c-0763-42b2-b344-5ad7f9cf448e
6e31bf97-458c-4a30-9df5-14f475db3339::ERROR::2017-02-16
11:29:19,403::task::866::Storage.TaskManager.Task::(_setError)
Task=`6e31bf97-458c-4a30-9df5-14f475db3339`::Unexpected error
79ed31a2-5ac7-4304-ab4d-d05f72694860::ERROR::2017-02-16
11:29:20,649::image::205::Storage.Image::(getChain) There is no
leaf in the image b4c4b53e-3813-4959-a145-16f1dfcf1838
79ed31a2-5ac7-4304-ab4d-d05f72694860::ERROR::2017-02-16
11:29:20,650::task::866::Storage.TaskManager.Task::(_setError)
Task=`79ed31a2-5ac7-4304-ab4d-d05f72694860`::Unexpected error
jsonrpc.Executor/5::ERROR::2017-02-16
11:30:17,063::image::205::Storage.Image::(getChain) There is no
leaf in the image e17ebd7c-0763-42b2-b344-5ad7f9cf448e
jsonrpc.Executor/5::ERROR::2017-02-16
11:30:17,064::task::866::Storage.TaskManager.Task::(_setError)
Task=`62f20e22-e850-44c8-8943-faa4ce71e973`::Unexpected error
jsonrpc.Executor/5::ERROR::2017-02-16
11:30:17,065::dispatcher::76::Storage.Dispatcher::(wrapper)
{'status': {'message': "Image is not a legal chain:
('e17ebd7c-0763-42b2-b344-5ad7f9cf448e',)", 'code': 262}}
jsonrpc.Executor/4::ERROR::2017-02-16
11:33:18,487::image::205::Storage.Image::(getChain) There is no
leaf in the image e17ebd7c-0763-42b2-b344-5ad7f9cf448e
jsonrpc.Executor/4::ERROR::2017-02-16
11:33:18,488::task::866::Storage.TaskManager.Task::(_setError)
Task=`e4d893f2-7be6-4f84-9ac6-58b5a5d1364e`::Unexpected error
jsonrpc.Executor/4::ERROR::2017-02-16
11:33:18,489::dispatcher::76::Storage.Dispatcher::(wrapper)
{'status': {'message': "Image is not a legal chain:
('e17ebd7c-0763-42b2-b344-5ad7f9cf448e',)", 'code': 262}}
3132106a-ce35-4b12-9a72-812e415eff7f::ERROR::2017-02-16
11:34:47,595::image::205::Storage.Image::(getChain) There is no
leaf in the image e17ebd7c-0763-42b2-b344-5ad7f9cf448e
3132106a-ce35-4b12-9a72-812e415eff7f::ERROR::2017-02-16
11:34:47,596::task::866::Storage.TaskManager.Task::(_setError)
Task=`3132106a-ce35-4b12-9a72-812e415eff7f`::Unexpected error
112fb772-a497-4788-829f-190d6d008d95::ERROR::2017-02-16
11:34:48,517::image::205::Storage.Image::(getChain) There is no
leaf in the image b4c4b53e-3813-4959-a145-16f1dfcf1838
112fb772-a497-4788-829f-190d6d008d95::ERROR::2017-02-16
11:34:48,517::task::866::Storage.TaskManager.Task::(_setError)
Task=`112fb772-a497-4788-829f-190d6d008d95`::Unexpected error
Thread-25336::ERROR::2017-02-16
11:43:32,726::task::866::Storage.TaskManager.Task::(_setError)
Task=`fafb120e-e7c6-4d3e-b87a-8116484f1c1a`::Unexpected error
Thread-25336::ERROR::2017-02-16
11:43:32,727::dispatcher::76::Storage.Dispatcher::(wrapper)
{'status': {'message': "Storage domain is member of pool:
'domain=81f19871-4d91-4698-a97d-36452bfae281'", 'code': 900}}
jsonrpc.Executor/0::WARNING::2017-02-16
11:54:05,875::momIF::113::MOM::(getStatus) MOM not available.
jsonrpc.Executor/0::WARNING::2017-02-16
11:54:05,877::momIF::76::MOM::(getKsmStats) MOM not available, KSM
stats will be missing.
ioprocess communication (10025)::ERROR::2017-02-16
11:54:05,890::__init__::176::IOProcessClient::(_communicate)
IOProcess failure
ioprocess communication (10364)::ERROR::2017-02-16
11:54:05,892::__init__::176::IOProcessClient::(_communicate)
IOProcess failure
ioprocess communication (23403)::ERROR::2017-02-16
11:54:05,892::__init__::176::IOProcessClient::(_communicate)
IOProcess failure
ioprocess communication (31710)::ERROR::2017-02-16
11:54:05,999::__init__::176::IOProcessClient::(_communicate)
IOProcess failure
ioprocess communication (31717)::ERROR::2017-02-16
11:54:05,999::__init__::176::IOProcessClient::(_communicate)
IOProcess failure
ioprocess communication (31724)::ERROR::2017-02-16
11:54:06,000::__init__::176::IOProcessClient::(_communicate)
IOProcess failure
Thread-16::ERROR::2017-02-16
11:54:21,657::monitor::387::Storage.Monitor::(_acquireHostId)
Error acquiring host id 2 for domain
81f19871-4d91-4698-a97d-36452bfae281
jsonrpc.Executor/7::ERROR::2017-02-16
11:54:21,885::API::1871::vds::(_getHaInfo) failed to retrieve
Hosted Engine HA info
jsonrpc.Executor/0::ERROR::2017-02-16
11:54:21,890::task::866::Storage.TaskManager.Task::(_setError)
Task=`73ca0c58-3e86-47e8-80f2-31d97346f0a3`::Unexpected error
jsonrpc.Executor/0::ERROR::2017-02-16
11:54:21,892::dispatcher::79::Storage.Dispatcher::(wrapper)
Secured object is not in safe state
Thread-16::ERROR::2017-02-16
11:54:31,673::monitor::387::Storage.Monitor::(_acquireHostId)
Error acquiring host id 2 for domain
81f19871-4d91-4698-a97d-36452bfae281
jsonrpc.Executor/4::ERROR::2017-02-16
11:54:34,309::API::1871::vds::(_getHaInfo) failed to retrieve
Hosted Engine HA info
jsonrpc.Executor/2::ERROR::2017-02-16
11:57:30,796::API::1871::vds::(_getHaInfo) failed to retrieve
Hosted Engine HA info
jsonrpc.Executor/7::ERROR::2017-02-16
11:57:39,847::image::205::Storage.Image::(getChain) There is no
leaf in the image e17ebd7c-0763-42b2-b344-5ad7f9cf448e
jsonrpc.Executor/7::ERROR::2017-02-16
11:57:39,848::task::866::Storage.TaskManager.Task::(_setError)
Task=`e4ae2972-77d4-406a-ac71-b285953b76ae`::Unexpected error
jsonrpc.Executor/7::ERROR::2017-02-16
11:57:39,849::dispatcher::76::Storage.Dispatcher::(wrapper)
{'status': {'message': "Image is not a legal chain:
('e17ebd7c-0763-42b2-b344-5ad7f9cf448e',)", 'code': 262}}
jsonrpc.Executor/0::ERROR::2017-02-16
11:57:45,965::API::1871::vds::(_getHaInfo) failed to retrieve
Hosted Engine HA info
jsonrpc.Executor/5::ERROR::2017-02-16
13:01:26,274::image::205::Storage.Image::(getChain) There is no
leaf in the image e17ebd7c-0763-42b2-b344-5ad7f9cf448e
jsonrpc.Executor/5::ERROR::2017-02-16
13:01:26,275::task::866::Storage.TaskManager.Task::(_setError)
Task=`2a214b3a-a50b-425a-ad99-bf5cc6be13ef`::Unexpected error
jsonrpc.Executor/5::ERROR::2017-02-16
13:01:26,276::dispatcher::76::Storage.Dispatcher::(wrapper)
{'status': {'message': "Image is not a legal chain:
('e17ebd7c-0763-42b2-b344-5ad7f9cf448e',)", 'code': 262}}
periodic/3::WARNING::2017-02-16
13:13:52,268::periodic::261::virt.periodic.VmDispatcher::(__call__)
could not run <class 'virt.periodic.DriveWatermarkMonitor'> on
['ba806b93-b6fe-4873-99ec-55bb34c12e5f']
periodic/2::WARNING::2017-02-16
13:50:15,062::periodic::261::virt.periodic.VmDispatcher::(__call__)
could not run <class 'virt.periodic.DriveWatermarkMonitor'> on
['ba806b93-b6fe-4873-99ec-55bb34c12e5f']
periodic/1::WARNING::2017-02-16
13:51:15,085::periodic::261::virt.periodic.VmDispatcher::(__call__)
could not run <class 'virt.periodic.DriveWatermarkMonitor'> on
['ba806b93-b6fe-4873-99ec-55bb34c12e5f']
periodic/3::WARNING::2017-02-16
13:51:45,081::periodic::261::virt.periodic.VmDispatcher::(__call__)
could not run <class 'virt.periodic.DriveWatermarkMonitor'> on
['ba806b93-b6fe-4873-99ec-55bb34c12e5f']
periodic/0::WARNING::2017-02-16
15:21:45,347::periodic::261::virt.periodic.VmDispatcher::(__call__)
could not run <class 'virt.periodic.DriveWatermarkMonitor'> on
['ba806b93-b6fe-4873-99ec-55bb34c12e5f']
periodic/0::WARNING::2017-02-16
16:21:00,522::periodic::261::virt.periodic.VmDispatcher::(__call__)
could not run <class 'virt.periodic.DriveWatermarkMonitor'> on
['ba806b93-b6fe-4873-99ec-55bb34c12e5f']
periodic/3::WARNING::2017-02-16
17:49:00,858::periodic::261::virt.periodic.VmDispatcher::(__call__)
could not run <class 'virt.periodic.DriveWatermarkMonitor'> on
['ba806b93-b6fe-4873-99ec-55bb34c12e5f']
periodic/3::WARNING::2017-02-16
17:50:00,868::periodic::261::virt.periodic.VmDispatcher::(__call__)
could not run <class 'virt.periodic.DriveWatermarkMonitor'> on
['ba806b93-b6fe-4873-99ec-55bb34c12e5f']
periodic/0::WARNING::2017-02-16
17:51:30,899::periodic::261::virt.periodic.VmDispatcher::(__call__)
could not run <class 'virt.periodic.DriveWatermarkMonitor'> on
['ba806b93-b6fe-4873-99ec-55bb34c12e5f']
periodic/0::WARNING::2017-02-16
17:52:30,907::periodic::261::virt.periodic.VmDispatcher::(__call__)
could not run <class 'virt.periodic.DriveWatermarkMonitor'> on
['ba806b93-b6fe-4873-99ec-55bb34c12e5f']
On 02/20/2017 08:45 AM, Adam Litke wrote:
> Hi Pat. I'd like to help you investigate this issue further.
> Could you send a snippet of the vdsm.log on slam-vmnode-03 that
> covers the time period during this failure? Engine is reporting
> that vdsm has likely thrown an exception while acquiring locks
> associated with the VM disk you are exporting.
>
> On Thu, Feb 16, 2017 at 12:40 PM, Pat Riehecky <riehecky@fnal.gov> wrote:
>
> Any attempts to export my VM error out. Last night the disk
> images got 'unregistered' from oVirt and I had to rescan the
> storage domain to find them again. Now I'm just trying to get
> a backup of the VM.
>
> The snapshots of the old disks are still listed, but I don't
> know whether the LVM slices behind them still exist, or whether
> that is even what is wrong.
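>
> (The check I have in mind, as a sketch and assuming the block
> domain's VG is named after the storage domain UUID the way oVirt
> does it:
>
> lvs -o lv_name,lv_size,lv_tags <storage-domain-uuid>
>
> though I haven't actually run it yet.)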
>
> steps I followed ->
> Halt VM
> Click Export
> leave things unchecked and click OK
>
> oVirt version:
> ovirt-engine-4.0.3-1.el7.centos.noarch
> ovirt-engine-backend-4.0.3-1.el7.centos.noarch
> ovirt-engine-cli-3.6.9.2-1.el7.noarch
> ovirt-engine-dashboard-1.0.3-1.el7.centos.noarch
> ovirt-engine-dbscripts-4.0.3-1.el7.centos.noarch
> ovirt-engine-dwh-4.0.2-1.el7.centos.noarch
> ovirt-engine-dwh-setup-4.0.2-1.el7.centos.noarch
> ovirt-engine-extension-aaa-jdbc-1.1.0-1.el7.noarch
> ovirt-engine-extension-aaa-ldap-1.2.1-1.el7.noarch
> ovirt-engine-extension-aaa-ldap-setup-1.2.1-1.el7.noarch
> ovirt-engine-extensions-api-impl-4.0.3-1.el7.centos.noarch
> ovirt-engine-lib-4.0.3-1.el7.centos.noarch
> ovirt-engine-restapi-4.0.3-1.el7.centos.noarch
> ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
> ovirt-engine-setup-4.0.3-1.el7.centos.noarch
> ovirt-engine-setup-base-4.0.3-1.el7.centos.noarch
> ovirt-engine-setup-plugin-ovirt-engine-4.0.3-1.el7.centos.noarch
> ovirt-engine-setup-plugin-ovirt-engine-common-4.0.3-1.el7.centos.noarch
> ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.0.3-1.el7.centos.noarch
> ovirt-engine-setup-plugin-websocket-proxy-4.0.3-1.el7.centos.noarch
> ovirt-engine-tools-4.0.3-1.el7.centos.noarch
> ovirt-engine-tools-backup-4.0.3-1.el7.centos.noarch
> ovirt-engine-userportal-4.0.3-1.el7.centos.noarch
> ovirt-engine-vmconsole-proxy-helper-4.0.3-1.el7.centos.noarch
> ovirt-engine-webadmin-portal-4.0.3-1.el7.centos.noarch
> ovirt-engine-websocket-proxy-4.0.3-1.el7.centos.noarch
> ovirt-engine-wildfly-10.0.0-1.el7.x86_64
> ovirt-engine-wildfly-overlay-10.0.0-1.el7.noarch
> ovirt-guest-agent-common-1.0.12-4.el7.noarch
> ovirt-host-deploy-1.5.1-1.el7.centos.noarch
> ovirt-host-deploy-java-1.5.1-1.el7.centos.noarch
> ovirt-imageio-common-0.3.0-1.el7.noarch
> ovirt-imageio-proxy-0.3.0-0.201606191345.git9f3d6d4.el7.centos.noarch
> ovirt-imageio-proxy-setup-0.3.0-0.201606191345.git9f3d6d4.el7.centos.noarch
> ovirt-image-uploader-4.0.0-1.el7.centos.noarch
> ovirt-iso-uploader-4.0.0-1.el7.centos.noarch
> ovirt-setup-lib-1.0.2-1.el7.centos.noarch
> ovirt-vmconsole-1.0.4-1.el7.centos.noarch
> ovirt-vmconsole-proxy-1.0.4-1.el7.centos.noarch
>
>
>
>
> log snippet:
> 2017-02-16 11:34:44,959 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand]
> (default task-28) [] START, GetVmsInfoVDSCommand(
> GetVmsInfoVDSCommandParameters:{runAsync='true',
> storagePoolId='00000001-0001-0001-0001-0000000001a5',
> ignoreFailoverLimit='false',
> storageDomainId='13127103-3f59-418a-90f1-5b1ade8526b1',
> vmIdList='null'}), log id: 3c406c84
> 2017-02-16 11:34:45,967 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand]
> (default task-28) [] FINISH, GetVmsInfoVDSCommand, log id:
> 3c406c84
> 2017-02-16 11:34:46,178 INFO
> [org.ovirt.engine.core.bll.exportimport.ExportVmCommand]
> (default task-24) [50b27eef] Lock Acquired to object
> 'EngineLock:{exclusiveLocks='[ba806b93-b6fe-4873-99ec-55bb34c12e5f=<VM,
> ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
> 2017-02-16 11:34:46,221 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand]
> (default task-24) [50b27eef] START, GetVmsInfoVDSCommand(
> GetVmsInfoVDSCommandParameters:{runAsync='true',
> storagePoolId='00000001-0001-0001-0001-0000000001a5',
> ignoreFailoverLimit='false',
> storageDomainId='13127103-3f59-418a-90f1-5b1ade8526b1',
> vmIdList='null'}), log id: 61bfd908
> 2017-02-16 11:34:47,227 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand]
> (default task-24) [50b27eef] FINISH, GetVmsInfoVDSCommand,
> log id: 61bfd908
> 2017-02-16 11:34:47,242 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand]
> (default task-24) [50b27eef] START, GetVmsInfoVDSCommand(
> GetVmsInfoVDSCommandParameters:{runAsync='true',
> storagePoolId='00000001-0001-0001-0001-0000000001a5',
> ignoreFailoverLimit='false',
> storageDomainId='13127103-3f59-418a-90f1-5b1ade8526b1',
> vmIdList='null'}), log id: 7cd19381
> 2017-02-16 11:34:47,276 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand]
> (default task-24) [50b27eef] FINISH, GetVmsInfoVDSCommand,
> log id: 7cd19381
> 2017-02-16 11:34:47,294 INFO
> [org.ovirt.engine.core.bll.exportimport.ExportVmCommand]
> (org.ovirt.thread.pool-8-thread-39) [50b27eef] Running
> command: ExportVmCommand internal: false. Entities affected
> : ID: 13127103-3f59-418a-90f1-5b1ade8526b1 Type:
> StorageAction group IMPORT_EXPORT_VM with role type ADMIN
> 2017-02-16 11:34:47,296 INFO
> [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand]
> (org.ovirt.thread.pool-8-thread-39) [50b27eef] START,
> SetVmStatusVDSCommand(
> SetVmStatusVDSCommandParameters:{runAsync='true',
> vmId='ba806b93-b6fe-4873-99ec-55bb34c12e5f',
> status='ImageLocked', exitStatus='Normal'}), log id: 61f2f832
> 2017-02-16 11:34:47,299 INFO
> [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand]
> (org.ovirt.thread.pool-8-thread-39) [50b27eef] FINISH,
> SetVmStatusVDSCommand, log id: 61f2f832
> 2017-02-16 11:34:47,301 INFO
> [org.ovirt.engine.core.bll.exportimport.ExportVmCommand]
> (org.ovirt.thread.pool-8-thread-39) [50b27eef] Lock freed to
> object
> 'EngineLock:{exclusiveLocks='[ba806b93-b6fe-4873-99ec-55bb34c12e5f=<VM,
> ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
> 2017-02-16 11:34:47,339 INFO
> [org.ovirt.engine.core.bll.storage.disk.image.CopyImageGroupCommand]
> (org.ovirt.thread.pool-8-thread-39) [a12f549] Running
> command: CopyImageGroupCommand internal: true. Entities
> affected : ID: 13127103-3f59-418a-90f1-5b1ade8526b1 Type:
> Storage
> 2017-02-16 11:34:47,356 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.MoveImageGroupVDSCommand]
> (org.ovirt.thread.pool-8-thread-39) [a12f549] START,
> MoveImageGroupVDSCommand(
> MoveImageGroupVDSCommandParameters:{runAsync='true',
> storagePoolId='00000001-0001-0001-0001-0000000001a5',
> ignoreFailoverLimit='false',
> storageDomainId='a89a626f-3f6b-452b-840e-ce0fee6f6461',
> imageGroupId='e17ebd7c-0763-42b2-b344-5ad7f9cf448e',
> dstDomainId='13127103-3f59-418a-90f1-5b1ade8526b1',
> vmId='ba806b93-b6fe-4873-99ec-55bb34c12e5f', op='Copy',
> postZero='false', force='false'}), log id: 1ee1f0ae
> 2017-02-16 11:34:48,211 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.MoveImageGroupVDSCommand]
> (org.ovirt.thread.pool-8-thread-39) [a12f549] FINISH,
> MoveImageGroupVDSCommand, log id: 1ee1f0ae
> 2017-02-16 11:34:48,216 INFO
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> (org.ovirt.thread.pool-8-thread-39) [a12f549]
> CommandAsyncTask::Adding CommandMultiAsyncTasks object for
> command '0b807437-17fe-4773-a539-09ddee3df215'
> 2017-02-16 11:34:48,216 INFO
> [org.ovirt.engine.core.bll.CommandMultiAsyncTasks]
> (org.ovirt.thread.pool-8-thread-39) [a12f549]
> CommandMultiAsyncTasks::attachTask: Attaching task
> '3132106a-ce35-4b12-9a72-812e415eff7f' to command
> '0b807437-17fe-4773-a539-09ddee3df215'.
> 2017-02-16 11:34:48,225 INFO
> [org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
> (org.ovirt.thread.pool-8-thread-39) [a12f549] Adding task
> '3132106a-ce35-4b12-9a72-812e415eff7f' (Parent Command
> 'ExportVm', Parameters Type
> 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'),
> polling hasn't started yet..
> 2017-02-16 11:34:48,256 INFO
> [org.ovirt.engine.core.bll.storage.disk.image.CopyImageGroupCommand]
> (org.ovirt.thread.pool-8-thread-39) [8887fd8] Running
> command: CopyImageGroupCommand internal: true. Entities
> affected : ID: 13127103-3f59-418a-90f1-5b1ade8526b1 Type:
> Storage
> 2017-02-16 11:34:48,271 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.MoveImageGroupVDSCommand]
> (org.ovirt.thread.pool-8-thread-39) [8887fd8] START,
> MoveImageGroupVDSCommand(
> MoveImageGroupVDSCommandParameters:{runAsync='true',
> storagePoolId='00000001-0001-0001-0001-0000000001a5',
> ignoreFailoverLimit='false',
> storageDomainId='a89a626f-3f6b-452b-840e-ce0fee6f6461',
> imageGroupId='b4c4b53e-3813-4959-a145-16f1dfcf1838',
> dstDomainId='13127103-3f59-418a-90f1-5b1ade8526b1',
> vmId='ba806b93-b6fe-4873-99ec-55bb34c12e5f', op='Copy',
> postZero='false', force='false'}), log id: 74c9d14e
> 2017-02-16 11:34:48,354 INFO
> [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher]
> (DefaultQuartzScheduler4) [2004a741] Fetched 1 VMs from VDS
> '627314c4-7861-4ded-8257-22023a6a748d'
> 2017-02-16 11:34:49,369 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.MoveImageGroupVDSCommand]
> (org.ovirt.thread.pool-8-thread-39) [8887fd8] FINISH,
> MoveImageGroupVDSCommand, log id: 74c9d14e
> 2017-02-16 11:34:49,373 INFO
> [org.ovirt.engine.core.bll.CommandMultiAsyncTasks]
> (org.ovirt.thread.pool-8-thread-39) [8887fd8]
> CommandMultiAsyncTasks::attachTask: Attaching task
> '112fb772-a497-4788-829f-190d6d008d95' to command
> '0b807437-17fe-4773-a539-09ddee3df215'.
> 2017-02-16 11:34:49,390 INFO
> [org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
> (org.ovirt.thread.pool-8-thread-39) [8887fd8] Adding task
> '112fb772-a497-4788-829f-190d6d008d95' (Parent Command
> 'ExportVm', Parameters Type
> 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'),
> polling hasn't started yet..
> 2017-02-16 11:34:49,410 INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (org.ovirt.thread.pool-8-thread-39) [8887fd8] Correlation ID:
> 50b27eef, Job ID: 276e8e51-cbed-43bb-bcfa-9842467e978b, Call
> Stack: null, Custom Event ID: -1, Message: Starting export Vm
> ecf-sat6.fnal.gov to RITM0524722
> 2017-02-16 11:34:49,411 INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (org.ovirt.thread.pool-8-thread-39) [8887fd8]
> BaseAsyncTask::startPollingTask: Starting to poll task
> '3132106a-ce35-4b12-9a72-812e415eff7f'.
> 2017-02-16 11:34:49,411 INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (org.ovirt.thread.pool-8-thread-39) [8887fd8]
> BaseAsyncTask::startPollingTask: Starting to poll task
> '112fb772-a497-4788-829f-190d6d008d95'.
> 2017-02-16 11:34:50,302 INFO
> [org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
> (DefaultQuartzScheduler4) [2004a741] Polling and updating
> Async Tasks: 2 tasks, 2 tasks to poll now
> 2017-02-16 11:34:50,361 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand]
> (DefaultQuartzScheduler4) [2004a741] Failed in
> 'HSMGetAllTasksStatusesVDS' method
> 2017-02-16 11:34:50,365 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (DefaultQuartzScheduler4) [2004a741] Correlation ID: null,
> Call Stack: null, Custom Event ID: -1, Message: VDSM
> slam-vmnode-03.fnal.gov
> command failed: Could not acquire resource. Probably resource
> factory threw an exception.: ()
> 2017-02-16 11:34:50,365 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand]
> (DefaultQuartzScheduler4) [2004a741] Failed in
> 'HSMGetAllTasksStatusesVDS' method
> 2017-02-16 11:34:50,368 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (DefaultQuartzScheduler4) [2004a741] Correlation ID: null,
> Call Stack: null, Custom Event ID: -1, Message: VDSM
> slam-vmnode-03.fnal.gov
> command failed: Could not acquire resource. Probably resource
> factory threw an exception.: ()
> 2017-02-16 11:34:50,368 INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (DefaultQuartzScheduler4) [2004a741] SPMAsyncTask::PollTask:
> Polling task '3132106a-ce35-4b12-9a72-812e415eff7f' (Parent
> Command 'ExportVm', Parameters Type
> 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters')
> returned status 'finished', result 'cleanSuccess'.
> 2017-02-16 11:34:50,371 ERROR
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (DefaultQuartzScheduler4) [2004a741]
> BaseAsyncTask::logEndTaskFailure: Task
> '3132106a-ce35-4b12-9a72-812e415eff7f' (Parent Command
> 'ExportVm', Parameters Type
> 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters')
> ended with failure:
> -- Result: 'cleanSuccess'
> -- Message: 'VDSGenericException: VDSErrorException: Failed
> to HSMGetAllTasksStatusesVDS, error = Could not acquire
> resource. Probably resource factory threw an exception.: (),
> code = 100',
> -- Exception: 'VDSGenericException: VDSErrorException: Failed
> to HSMGetAllTasksStatusesVDS, error = Could not acquire
> resource. Probably resource factory threw an exception.: (),
> code = 100'
> 2017-02-16 11:34:50,374 INFO
> [org.ovirt.engine.core.bll.CommandMultiAsyncTasks]
> (DefaultQuartzScheduler4) [2004a741] Task with DB Task ID
> 'ae5bd098-51e8-4415-b0f7-0f3ef010ec7b' and VDSM Task ID
> '112fb772-a497-4788-829f-190d6d008d95' is in state Polling.
> End action for command 0b807437-17fe-4773-a539-09ddee3df215
> will proceed when all the entity's tasks are completed.
> 2017-02-16 11:34:50,374 INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (DefaultQuartzScheduler4) [2004a741] SPMAsyncTask::PollTask:
> Polling task '112fb772-a497-4788-829f-190d6d008d95' (Parent
> Command 'ExportVm', Parameters Type
> 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters')
> returned status 'finished', result 'cleanSuccess'.
> 2017-02-16 11:34:50,377 ERROR
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (DefaultQuartzScheduler4) [2004a741]
> BaseAsyncTask::logEndTaskFailure: Task
> '112fb772-a497-4788-829f-190d6d008d95' (Parent Command
> 'ExportVm', Parameters Type
> 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters')
> ended with failure:
> -- Result: 'cleanSuccess'
> -- Message: 'VDSGenericException: VDSErrorException: Failed
> to HSMGetAllTasksStatusesVDS, error = Could not acquire
> resource. Probably resource factory threw an exception.: (),
> code = 100',
> -- Exception: 'VDSGenericException: VDSErrorException: Failed
> to HSMGetAllTasksStatusesVDS, error = Could not acquire
> resource. Probably resource factory threw an exception.: (),
> code = 100'
> 2017-02-16 11:34:50,379 INFO
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> (DefaultQuartzScheduler4) [2004a741]
> CommandAsyncTask::endActionIfNecessary: All tasks of command
> '0b807437-17fe-4773-a539-09ddee3df215' has ended -> executing
> 'endAction'
> 2017-02-16 11:34:50,379 INFO
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> (DefaultQuartzScheduler4) [2004a741]
> CommandAsyncTask::endAction: Ending action for '2' tasks
> (command ID: '0b807437-17fe-4773-a539-09ddee3df215'): calling
> endAction '.
> 2017-02-16 11:34:50,380 INFO
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> (org.ovirt.thread.pool-8-thread-41) [2004a741]
> CommandAsyncTask::endCommandAction [within thread] context:
> Attempting to endAction 'ExportVm', executionIndex: '0'
> 2017-02-16 11:34:50,495 ERROR
> [org.ovirt.engine.core.bll.exportimport.ExportVmCommand]
> (org.ovirt.thread.pool-8-thread-41) [2004a741] Ending command
> 'org.ovirt.engine.core.bll.exportimport.ExportVmCommand' with
> failure.
> 2017-02-16 11:34:50,507 ERROR
> [org.ovirt.engine.core.bll.storage.disk.image.CopyImageGroupCommand]
> (org.ovirt.thread.pool-8-thread-41) [a12f549] Ending command
> 'org.ovirt.engine.core.bll.storage.disk.image.CopyImageGroupCommand'
> with failure.
> 2017-02-16 11:34:50,529 ERROR
> [org.ovirt.engine.core.bll.storage.disk.image.CopyImageGroupCommand]
> (org.ovirt.thread.pool-8-thread-41) [8887fd8] Ending command
> 'org.ovirt.engine.core.bll.storage.disk.image.CopyImageGroupCommand'
> with failure.
> 2017-02-16 11:34:50,534 INFO
> [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand]
> (org.ovirt.thread.pool-8-thread-41) [] START,
> SetVmStatusVDSCommand(
> SetVmStatusVDSCommandParameters:{runAsync='true',
> vmId='ba806b93-b6fe-4873-99ec-55bb34c12e5f', status='Down',
> exitStatus='Normal'}), log id: 28c5e88c
> 2017-02-16 11:34:50,536 INFO
> [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand]
> (org.ovirt.thread.pool-8-thread-41) [] FINISH,
> SetVmStatusVDSCommand, log id: 28c5e88c
> 2017-02-16 11:34:50,549 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (org.ovirt.thread.pool-8-thread-41) [] Correlation ID:
> 50b27eef, Job ID: 276e8e51-cbed-43bb-bcfa-9842467e978b, Call
> Stack: null, Custom Event ID: -1, Message: Failed to export
> Vm ecf-sat6.fnal.gov to RITM0524722
> 2017-02-16 11:34:50,549 INFO
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> (org.ovirt.thread.pool-8-thread-41) []
> CommandAsyncTask::HandleEndActionResult [within thread]:
> endAction for action type 'ExportVm' completed, handling the
> result.
> 2017-02-16 11:34:50,549 INFO
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> (org.ovirt.thread.pool-8-thread-41) []
> CommandAsyncTask::HandleEndActionResult [within thread]:
> endAction for action type 'ExportVm' succeeded, clearing tasks.
> 2017-02-16 11:34:50,549 INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (org.ovirt.thread.pool-8-thread-41) []
> SPMAsyncTask::ClearAsyncTask: Attempting to clear task
> '3132106a-ce35-4b12-9a72-812e415eff7f'
> 2017-02-16 11:34:50,551 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
> (org.ovirt.thread.pool-8-thread-41) [] START,
> SPMClearTaskVDSCommand(
> SPMTaskGuidBaseVDSCommandParameters:{runAsync='true',
> storagePoolId='00000001-0001-0001-0001-0000000001a5',
> ignoreFailoverLimit='false',
> taskId='3132106a-ce35-4b12-9a72-812e415eff7f'}), log id: 675799a2
> 2017-02-16 11:34:50,552 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
> (org.ovirt.thread.pool-8-thread-41) [] START,
> HSMClearTaskVDSCommand(HostName = slam-vmnode-03.fnal.gov,
> HSMTaskGuidBaseVDSCommandParameters:{runAsync='true',
> hostId='eacb0ca6-794a-4c94-8dc8-00a8a6d88042',
> taskId='3132106a-ce35-4b12-9a72-812e415eff7f'}), log id: 2d8fe4d0
> 2017-02-16 11:34:50,554 INFO
> [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher]
> (DefaultQuartzScheduler6) [77b1baeb] Fetched 0 VMs from VDS
> 'eacb0ca6-794a-4c94-8dc8-00a8a6d88042'
> 2017-02-16 11:34:51,560 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
> (org.ovirt.thread.pool-8-thread-41) [] FINISH,
> HSMClearTaskVDSCommand, log id: 2d8fe4d0
> 2017-02-16 11:34:51,560 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
> (org.ovirt.thread.pool-8-thread-41) [] FINISH,
> SPMClearTaskVDSCommand, log id: 675799a2
> 2017-02-16 11:34:51,564 INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (org.ovirt.thread.pool-8-thread-41) []
> BaseAsyncTask::removeTaskFromDB: Removed task
> '3132106a-ce35-4b12-9a72-812e415eff7f' from DataBase
> 2017-02-16 11:34:51,564 INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (org.ovirt.thread.pool-8-thread-41) []
> SPMAsyncTask::ClearAsyncTask: Attempting to clear task
> '112fb772-a497-4788-829f-190d6d008d95'
> 2017-02-16 11:34:51,566 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
> (org.ovirt.thread.pool-8-thread-41) [] START,
> SPMClearTaskVDSCommand(
> SPMTaskGuidBaseVDSCommandParameters:{runAsync='true',
> storagePoolId='00000001-0001-0001-0001-0000000001a5',
> ignoreFailoverLimit='false',
> taskId='112fb772-a497-4788-829f-190d6d008d95'}), log id: 6b4cf8ff
> 2017-02-16 11:34:51,567 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
> (org.ovirt.thread.pool-8-thread-41) [] START,
> HSMClearTaskVDSCommand(HostName = slam-vmnode-03.fnal.gov,
> HSMTaskGuidBaseVDSCommandParameters:{runAsync='true',
> hostId='eacb0ca6-794a-4c94-8dc8-00a8a6d88042',
> taskId='112fb772-a497-4788-829f-190d6d008d95'}), log id: 6f2df357
> 2017-02-16 11:34:51,608 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
> (org.ovirt.thread.pool-8-thread-41) [] FINISH,
> HSMClearTaskVDSCommand, log id: 6f2df357
> 2017-02-16 11:34:51,608 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
> (org.ovirt.thread.pool-8-thread-41) [] FINISH,
> SPMClearTaskVDSCommand, log id: 6b4cf8ff
> 2017-02-16 11:34:51,611 INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (org.ovirt.thread.pool-8-thread-41) []
> BaseAsyncTask::removeTaskFromDB: Removed task
> '112fb772-a497-4788-829f-190d6d008d95' from DataBase
> 2017-02-16 11:34:51,611 INFO
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> (org.ovirt.thread.pool-8-thread-41) []
> CommandAsyncTask::HandleEndActionResult [within thread]:
> Removing CommandMultiAsyncTasks object for entity
> '0b807437-17fe-4773-a539-09ddee3df215'
>
>
> --
> Pat Riehecky
>
> Fermi National Accelerator Laboratory
> www.fnal.gov
> www.scientificlinux.org
>
> _______________________________________________
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
>
> --
> Adam Litke
--
Adam Litke
--------------2B60783E056656DBE53B6977
Content-Type: text/html; charset="utf-8"
Content-Transfer-Encoding: 8bit
<html>
<head>
<meta content="text/html; charset=utf-8"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
The disks dropped off when I was doing A LOT of disk IO on the VM -
untaring a 2TB archive. I'd done a snapshot earlier (about 30m) of
the VM.<br>
<br>
The storage shows no faults at this time. The switch fabric was
online for all 4 paths.<br>
<br>
Pat<br>
<br>
<div class="moz-cite-prefix">On 02/20/2017 02:08 PM, Adam Litke
wrote:<br>
</div>
<blockquote
cite="mid:CAG8F9UGRoUCRafxi94JFFKodvn3Qmtr96wg0O+a9rbGn0iuadg@mail.gmail.com"
type="cite">
<meta http-equiv="Content-Type" content="text/html;
charset=utf-8">
<div dir="ltr">There definitely seems to be a problem with your
storage domain 81f19871... The host is unable to join that
domain's sanlock lockspace. Also, it seems that some metadata
for the disk with id e17ebd7c... was corrupted or lost in
translation somehow. Can you provide more details about what
happened when "the disk images got 'unregistered' from oVirt"?
Were you performing any particular operations (such as moving
disks, snapshot create/delete, etc)? Was there a problem with
the storage at that time? <br>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Mon, Feb 20, 2017 at 9:51 AM, Pat
Riehecky <span dir="ltr"><<a
moz-do-not-send="true"
href="mailto:riehecky@fnal.gov"
target="_blank">riehecky(a)fnal.gov</a>&gt;</span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000"> Hi
Adam,<br>
<br>
Thanks for looking! The storage is fibre attached and
I've verified with the SAN folks nothing went wonky during
this window on their side.<br>
<br>
Here is what I've got from vdsm.log during the window (and
a bit surrounding it for context):<br>
<br>
libvirtEventLoop::WARNING::<wbr>2017-02-16
08:35:17,435::utils::140::<wbr>root::(rmFile) File:
/var/lib/libvirt/qemu/<wbr>channels/ba806b93-b6fe-4873-<wbr>99ec-55bb34c12e5f.com.redhat.<wbr>rhevm.vdsm
already removed<br>
libvirtEventLoop::WARNING::<wbr>2017-02-16
08:35:17,435::utils::140::<wbr>root::(rmFile) File:
/var/lib/libvirt/qemu/<wbr>channels/ba806b93-b6fe-4873-<wbr>99ec-55bb34c12e5f.org.qemu.<wbr>guest_agent.0
already removed<br>
periodic/2::WARNING::2017-02-<wbr>16
08:35:18,144::periodic::295::<wbr>virt.vm::(__call__)
vmId=`ba806b93-b6fe-4873-99ec-<wbr>55bb34c12e5f`::could
not run on ba806b93-b6fe-4873-99ec-<wbr>55bb34c12e5f:
domain not connected<br>
periodic/3::WARNING::2017-02-<wbr>16
08:35:18,305::periodic::261::<wbr>virt.periodic.VmDispatcher::(_<wbr>_call__)
could not run <class
'virt.periodic.<wbr>DriveWatermarkMonitor'>
on ['ba806b93-b6fe-4873-99ec-<wbr>55bb34c12e5f']<br>
Thread-23021::ERROR::2017-02-<wbr>16
09:28:33,096::task::866::<wbr>Storage.TaskManager.Task::(_<wbr>setError)
Task=`ecab8086-261f-44b9-8123-<wbr>eefb9bbf5b05`::Unexpected
error<br>
Thread-23021::ERROR::2017-02-<wbr>16
09:28:33,097::dispatcher::76::<wbr>Storage.Dispatcher::(wrapper)
{'status': {'message': "Storage domain is member of
pool:
'domain=81f19871-4d91-4698-<wbr>a97d-36452bfae281'",
'code': 900}}<br>
Thread-23783::ERROR::2017-02-<wbr>16
10:13:32,876::task::866::<wbr>Storage.TaskManager.Task::(_<wbr>setError)
Task=`ff628204-6e41-4e5e-b83a-<wbr>dad6ec94d0d3`::Unexpected
error<br>
Thread-23783::ERROR::2017-02-<wbr>16
10:13:32,877::dispatcher::76::<wbr>Storage.Dispatcher::(wrapper)
{'status': {'message': "Storage domain is member of
pool:
'domain=81f19871-4d91-4698-<wbr>a97d-36452bfae281'",
'code': 900}}<br>
Thread-24542::ERROR::2017-02-<wbr>16
10:58:32,578::task::866::<wbr>Storage.TaskManager.Task::(_<wbr>setError)
Task=`f5111200-e980-46bb-bbc3-<wbr>898ae312d556`::Unexpected
error<br>
Thread-24542::ERROR::2017-02-<wbr>16
10:58:32,579::dispatcher::76::<wbr>Storage.Dispatcher::(wrapper)
{'status': {'message': "Storage domain is member of
pool:
'domain=81f19871-4d91-4698-<wbr>a97d-36452bfae281'",
'code': 900}}<br>
jsonrpc.Executor/4::ERROR::<wbr>2017-02-16
11:28:24,049::sdc::139::<wbr>Storage.StorageDomainCache::(_<wbr>findDomain)
looking for unfetched domain
13127103-3f59-418a-90f1-<wbr>5b1ade8526b1<br>
jsonrpc.Executor/4::ERROR::<wbr>2017-02-16
11:28:24,049::sdc::156::<wbr>Storage.StorageDomainCache::(_<wbr>findUnfetchedDomain)
looking for domain
13127103-3f59-418a-90f1-<wbr>5b1ade8526b1<br>
jsonrpc.Executor/4::ERROR::<wbr>2017-02-16
11:28:24,305::sdc::145::<wbr>Storage.StorageDomainCache::(_<wbr>findDomain)
domain 13127103-3f59-418a-90f1-<wbr>5b1ade8526b1 not found<br>
6e31bf97-458c-4a30-9df5-<wbr>14f475db3339::ERROR::2017-02-<wbr>16
11:29:19,402::image::205::<wbr>Storage.Image::(getChain)
There is no leaf in the image
e17ebd7c-0763-42b2-b344-<wbr>5ad7f9cf448e<br>
6e31bf97-458c-4a30-9df5-<wbr>14f475db3339::ERROR::2017-02-<wbr>16
11:29:19,403::task::866::<wbr>Storage.TaskManager.Task::(_<wbr>setError)
Task=`6e31bf97-458c-4a30-9df5-<wbr>14f475db3339`::Unexpected
error<br>
79ed31a2-5ac7-4304-ab4d-<wbr>d05f72694860::ERROR::2017-02-<wbr>16
11:29:20,649::image::205::<wbr>Storage.Image::(getChain)
There is no leaf in the image
b4c4b53e-3813-4959-a145-<wbr>16f1dfcf1838<br>
79ed31a2-5ac7-4304-ab4d-<wbr>d05f72694860::ERROR::2017-02-<wbr>16
11:29:20,650::task::866::<wbr>Storage.TaskManager.Task::(_<wbr>setError)
Task=`79ed31a2-5ac7-4304-ab4d-<wbr>d05f72694860`::Unexpected
error<br>
jsonrpc.Executor/5::ERROR::<wbr>2017-02-16
11:30:17,063::image::205::<wbr>Storage.Image::(getChain)
There is no leaf in the image
e17ebd7c-0763-42b2-b344-<wbr>5ad7f9cf448e<br>
jsonrpc.Executor/5::ERROR::<wbr>2017-02-16
11:30:17,064::task::866::<wbr>Storage.TaskManager.Task::(_<wbr>setError)
Task=`62f20e22-e850-44c8-8943-<wbr>faa4ce71e973`::Unexpected
error<br>
jsonrpc.Executor/5::ERROR::<wbr>2017-02-16
11:30:17,065::dispatcher::76::<wbr>Storage.Dispatcher::(wrapper)
{'status': {'message': "Image is not a legal chain:
('e17ebd7c-0763-42b2-b344-<wbr>5ad7f9cf448e',)",
'code':
262}}<br>
jsonrpc.Executor/4::ERROR::<wbr>2017-02-16
11:33:18,487::image::205::<wbr>Storage.Image::(getChain)
There is no leaf in the image
e17ebd7c-0763-42b2-b344-<wbr>5ad7f9cf448e<br>
jsonrpc.Executor/4::ERROR::<wbr>2017-02-16
11:33:18,488::task::866::<wbr>Storage.TaskManager.Task::(_<wbr>setError)
Task=`e4d893f2-7be6-4f84-9ac6-<wbr>58b5a5d1364e`::Unexpected
error<br>
jsonrpc.Executor/4::ERROR::<wbr>2017-02-16
11:33:18,489::dispatcher::76::<wbr>Storage.Dispatcher::(wrapper)
{'status': {'message': "Image is not a legal chain:
('e17ebd7c-0763-42b2-b344-<wbr>5ad7f9cf448e',)",
'code':
262}}<br>
3132106a-ce35-4b12-9a72-<wbr>812e415eff7f::ERROR::2017-02-<wbr>16
11:34:47,595::image::205::<wbr>Storage.Image::(getChain)
There is no leaf in the image
e17ebd7c-0763-42b2-b344-<wbr>5ad7f9cf448e<br>
3132106a-ce35-4b12-9a72-<wbr>812e415eff7f::ERROR::2017-02-<wbr>16
11:34:47,596::task::866::<wbr>Storage.TaskManager.Task::(_<wbr>setError)
Task=`3132106a-ce35-4b12-9a72-<wbr>812e415eff7f`::Unexpected
error<br>
112fb772-a497-4788-829f-<wbr>190d6d008d95::ERROR::2017-02-<wbr>16
11:34:48,517::image::205::<wbr>Storage.Image::(getChain)
There is no leaf in the image
b4c4b53e-3813-4959-a145-<wbr>16f1dfcf1838<br>
112fb772-a497-4788-829f-<wbr>190d6d008d95::ERROR::2017-02-<wbr>16
11:34:48,517::task::866::<wbr>Storage.TaskManager.Task::(_<wbr>setError)
Task=`112fb772-a497-4788-829f-<wbr>190d6d008d95`::Unexpected
error<br>
Thread-25336::ERROR::2017-02-<wbr>16
11:43:32,726::task::866::<wbr>Storage.TaskManager.Task::(_<wbr>setError)
Task=`fafb120e-e7c6-4d3e-b87a-<wbr>8116484f1c1a`::Unexpected
error<br>
Thread-25336::ERROR::2017-02-<wbr>16
11:43:32,727::dispatcher::76::<wbr>Storage.Dispatcher::(wrapper)
{'status': {'message': "Storage domain is member of
pool:
'domain=81f19871-4d91-4698-<wbr>a97d-36452bfae281'",
'code': 900}}<br>
jsonrpc.Executor/0::WARNING::<wbr>2017-02-16
11:54:05,875::momIF::113::MOM:<wbr>:(getStatus) MOM not
available.<br>
jsonrpc.Executor/0::WARNING::<wbr>2017-02-16
11:54:05,877::momIF::76::MOM::<wbr>(getKsmStats) MOM not
available, KSM stats will be missing.<br>
ioprocess communication (10025)::ERROR::2017-02-16
11:54:05,890::__init__::176::<wbr>IOProcessClient::(_<wbr>communicate)
IOProcess failure<br>
ioprocess communication (10364)::ERROR::2017-02-16
11:54:05,892::__init__::176::<wbr>IOProcessClient::(_<wbr>communicate)
IOProcess failure<br>
ioprocess communication (23403)::ERROR::2017-02-16
11:54:05,892::__init__::176::<wbr>IOProcessClient::(_<wbr>communicate)
IOProcess failure<br>
ioprocess communication (31710)::ERROR::2017-02-16
11:54:05,999::__init__::176::<wbr>IOProcessClient::(_<wbr>communicate)
IOProcess failure<br>
ioprocess communication (31717)::ERROR::2017-02-16
11:54:05,999::__init__::176::<wbr>IOProcessClient::(_<wbr>communicate)
IOProcess failure<br>
ioprocess communication (31724)::ERROR::2017-02-16
11:54:06,000::__init__::176::<wbr>IOProcessClient::(_<wbr>communicate)
IOProcess failure<br>
Thread-16::ERROR::2017-02-16
11:54:21,657::monitor::387::<wbr>Storage.Monitor::(_<wbr>acquireHostId)
Error acquiring host id 2 for domain
81f19871-4d91-4698-a97d-<wbr>36452bfae281<br>
jsonrpc.Executor/7::ERROR::<wbr>2017-02-16
11:54:21,885::API::1871::vds::<wbr>(_getHaInfo) failed to
retrieve Hosted Engine HA info<br>
jsonrpc.Executor/0::ERROR::<wbr>2017-02-16
11:54:21,890::task::866::<wbr>Storage.TaskManager.Task::(_<wbr>setError)
Task=`73ca0c58-3e86-47e8-80f2-<wbr>31d97346f0a3`::Unexpected
error<br>
jsonrpc.Executor/0::ERROR::<wbr>2017-02-16
11:54:21,892::dispatcher::79::<wbr>Storage.Dispatcher::(wrapper)
Secured object is not in safe state<br>
Thread-16::ERROR::2017-02-16
11:54:31,673::monitor::387::<wbr>Storage.Monitor::(_<wbr>acquireHostId)
Error acquiring host id 2 for domain
81f19871-4d91-4698-a97d-<wbr>36452bfae281<br>
jsonrpc.Executor/4::ERROR::<wbr>2017-02-16
11:54:34,309::API::1871::vds::<wbr>(_getHaInfo) failed to
retrieve Hosted Engine HA info<br>
jsonrpc.Executor/2::ERROR::<wbr>2017-02-16
11:57:30,796::API::1871::vds::<wbr>(_getHaInfo) failed to
retrieve Hosted Engine HA info<br>
jsonrpc.Executor/7::ERROR::<wbr>2017-02-16
11:57:39,847::image::205::<wbr>Storage.Image::(getChain)
There is no leaf in the image
e17ebd7c-0763-42b2-b344-<wbr>5ad7f9cf448e<br>
jsonrpc.Executor/7::ERROR::<wbr>2017-02-16
11:57:39,848::task::866::<wbr>Storage.TaskManager.Task::(_<wbr>setError)
Task=`e4ae2972-77d4-406a-ac71-<wbr>b285953b76ae`::Unexpected
error<br>
jsonrpc.Executor/7::ERROR::<wbr>2017-02-16
11:57:39,849::dispatcher::76::<wbr>Storage.Dispatcher::(wrapper)
{'status': {'message': "Image is not a legal chain:
('e17ebd7c-0763-42b2-b344-<wbr>5ad7f9cf448e',)",
'code':
262}}<br>
jsonrpc.Executor/0::ERROR::<wbr>2017-02-16
11:57:45,965::API::1871::vds::<wbr>(_getHaInfo) failed to
retrieve Hosted Engine HA info<br>
jsonrpc.Executor/5::ERROR::<wbr>2017-02-16
13:01:26,274::image::205::<wbr>Storage.Image::(getChain)
There is no leaf in the image
e17ebd7c-0763-42b2-b344-<wbr>5ad7f9cf448e<br>
jsonrpc.Executor/5::ERROR::<wbr>2017-02-16
13:01:26,275::task::866::<wbr>Storage.TaskManager.Task::(_<wbr>setError)
Task=`2a214b3a-a50b-425a-ad99-<wbr>bf5cc6be13ef`::Unexpected
error<br>
jsonrpc.Executor/5::ERROR::<wbr>2017-02-16
13:01:26,276::dispatcher::76::<wbr>Storage.Dispatcher::(wrapper)
{'status': {'message': "Image is not a legal chain:
('e17ebd7c-0763-42b2-b344-<wbr>5ad7f9cf448e',)",
'code':
262}}<br>
periodic/3::WARNING::2017-02-<wbr>16
13:13:52,268::periodic::261::<wbr>virt.periodic.VmDispatcher::(_<wbr>_call__)
could not run <class
'virt.periodic.<wbr>DriveWatermarkMonitor'>
on ['ba806b93-b6fe-4873-99ec-<wbr>55bb34c12e5f']<br>
periodic/2::WARNING::2017-02-<wbr>16
13:50:15,062::periodic::261::<wbr>virt.periodic.VmDispatcher::(_<wbr>_call__)
could not run <class
'virt.periodic.<wbr>DriveWatermarkMonitor'>
on ['ba806b93-b6fe-4873-99ec-<wbr>55bb34c12e5f']<br>
periodic/1::WARNING::2017-02-<wbr>16
13:51:15,085::periodic::261::<wbr>virt.periodic.VmDispatcher::(_<wbr>_call__)
could not run <class
'virt.periodic.<wbr>DriveWatermarkMonitor'>
on ['ba806b93-b6fe-4873-99ec-<wbr>55bb34c12e5f']<br>
periodic/3::WARNING::2017-02-<wbr>16
13:51:45,081::periodic::261::<wbr>virt.periodic.VmDispatcher::(_<wbr>_call__)
could not run <class
'virt.periodic.<wbr>DriveWatermarkMonitor'>
on ['ba806b93-b6fe-4873-99ec-<wbr>55bb34c12e5f']<br>
periodic/0::WARNING::2017-02-<wbr>16
15:21:45,347::periodic::261::<wbr>virt.periodic.VmDispatcher::(_<wbr>_call__)
could not run <class
'virt.periodic.<wbr>DriveWatermarkMonitor'>
on ['ba806b93-b6fe-4873-99ec-<wbr>55bb34c12e5f']<br>
periodic/0::WARNING::2017-02-<wbr>16
16:21:00,522::periodic::261::<wbr>virt.periodic.VmDispatcher::(_<wbr>_call__)
could not run <class
'virt.periodic.<wbr>DriveWatermarkMonitor'>
on ['ba806b93-b6fe-4873-99ec-<wbr>55bb34c12e5f']<br>
periodic/3::WARNING::2017-02-<wbr>16
17:49:00,858::periodic::261::<wbr>virt.periodic.VmDispatcher::(_<wbr>_call__)
could not run <class
'virt.periodic.<wbr>DriveWatermarkMonitor'>
on ['ba806b93-b6fe-4873-99ec-<wbr>55bb34c12e5f']<br>
periodic/3::WARNING::2017-02-<wbr>16
17:50:00,868::periodic::261::<wbr>virt.periodic.VmDispatcher::(_<wbr>_call__)
could not run <class
'virt.periodic.<wbr>DriveWatermarkMonitor'>
on ['ba806b93-b6fe-4873-99ec-<wbr>55bb34c12e5f']<br>
periodic/0::WARNING::2017-02-<wbr>16
17:51:30,899::periodic::261::<wbr>virt.periodic.VmDispatcher::(_<wbr>_call__)
could not run <class
'virt.periodic.<wbr>DriveWatermarkMonitor'>
on ['ba806b93-b6fe-4873-99ec-<wbr>55bb34c12e5f']<br>
periodic/0::WARNING::2017-02-<wbr>16
17:52:30,907::periodic::261::<wbr>virt.periodic.VmDispatcher::(_<wbr>_call__)
could not run <class
'virt.periodic.<wbr>DriveWatermarkMonitor'>
on ['ba806b93-b6fe-4873-99ec-<wbr>55bb34c12e5f']
<div>
<div class="h5"><br>
<br>
<br>
<div class="m_-4327628729759137140moz-cite-prefix">On
02/20/2017 08:45 AM, Adam Litke wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Hi Pat. I'd like to help you
investigate this issue further. Could you send a
snippet of the vdsm.log on slam-vmnode-03 that
covers the time period during this failure?
Engine is reporting that vdsm has likely thrown an
exception while acquiring locks associated with
the VM disk you are exporting.<br>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Thu, Feb 16, 2017 at
12:40 PM, Pat Riehecky <span
dir="ltr"><<a
moz-do-not-send="true"
href="mailto:riehecky@fnal.gov"
target="_blank">riehecky(a)fnal.gov</a>&gt;</span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0
0 0 .8ex;border-left:1px #ccc
solid;padding-left:1ex">Any attempts to export
my VM error out. Last night the disk images
got 'unregistered' from oVirt and I had to
rescan the storage domain to find them again.
Now I'm just trying to get a backup of the VM.<br>
<br>
The snapshots off of the old disks are still
listed, but I don't know if the lvm slices are
still real or if that is even what is wrong.<br>
<br>
steps I followed -><br>
Halt VM<br>
Click Export<br>
leave things unchecked and click OK<br>
<br>
oVirt version:<br>
ovirt-engine-4.0.3-1.el7.cento<wbr>s.noarch<br>
ovirt-engine-backend-4.0.3-1.e<wbr>l7.centos.noarch<br>
ovirt-engine-cli-3.6.9.2-1.el7<wbr>.noarch<br>
ovirt-engine-dashboard-1.0.3-1<wbr>.el7.centos.noarch<br>
ovirt-engine-dbscripts-4.0.3-1<wbr>.el7.centos.noarch<br>
ovirt-engine-dwh-4.0.2-1.el7.c<wbr>entos.noarch<br>
ovirt-engine-dwh-setup-4.0.2-1<wbr>.el7.centos.noarch<br>
ovirt-engine-extension-aaa-jdb<wbr>c-1.1.0-1.el7.noarch<br>
ovirt-engine-extension-aaa-lda<wbr>p-1.2.1-1.el7.noarch<br>
ovirt-engine-extension-aaa-lda<wbr>p-setup-1.2.1-1.el7.noarch<br>
ovirt-engine-extensions-api-im<wbr>pl-4.0.3-1.el7.centos.noarch<br>
ovirt-engine-lib-4.0.3-1.el7.c<wbr>entos.noarch<br>
ovirt-engine-restapi-4.0.3-1.e<wbr>l7.centos.noarch<br>
ovirt-engine-sdk-python-3.6.9.<wbr>1-1.el7.noarch<br>
ovirt-engine-setup-4.0.3-1.el7<wbr>.centos.noarch<br>
ovirt-engine-setup-base-4.0.3-<wbr>1.el7.centos.noarch<br>
ovirt-engine-setup-plugin-ovir<wbr><a
moz-do-not-send="true"
href="http://t-engine-4.0.3-1.el7.centos.no">t-engine-4.0.3-1.el7.centos.no</a><wbr>arch<br>
ovirt-engine-setup-plugin-ovir<wbr>t-engine-common-4.0.3-1.el7.ce<wbr>ntos.noarch<br>
ovirt-engine-setup-plugin-vmco<wbr>nsole-proxy-helper-4.0.3-1.el7<wbr>.centos.noarch<br>
ovirt-engine-setup-plugin-webs<wbr>ocket-proxy-4.0.3-1.el7.centos<wbr>.noarch<br>
ovirt-engine-tools-4.0.3-1.el7<wbr>.centos.noarch<br>
ovirt-engine-tools-backup-4.0.<wbr>3-1.el7.centos.noarch<br>
ovirt-engine-userportal-4.0.3-<wbr>1.el7.centos.noarch<br>
ovirt-engine-vmconsole-proxy-h<wbr>elper-4.0.3-1.el7.centos.noarc<wbr>h<br>
ovirt-engine-webadmin-portal-4<wbr>.0.3-1.el7.centos.noarch<br>
ovirt-engine-websocket-proxy-4<wbr>.0.3-1.el7.centos.noarch<br>
ovirt-engine-wildfly-10.0.0-1.<wbr>el7.x86_64<br>
ovirt-engine-wildfly-overlay-1<wbr>0.0.0-1.el7.noarch<br>
ovirt-guest-agent-common-1.0.1<wbr>2-4.el7.noarch<br>
ovirt-host-deploy-1.5.1-1.el7.<wbr>centos.noarch<br>
ovirt-host-deploy-java-1.5.1-1<wbr>.el7.centos.noarch<br>
ovirt-imageio-common-0.3.0-1.e<wbr>l7.noarch<br>
ovirt-imageio-proxy-0.3.0-0.20<wbr>1606191345.git9f3d6d4.el7.cent<wbr>os.noarch<br>
ovirt-imageio-proxy-setup-0.3.<wbr>0-0.201606191345.git9f3d6d4.el<wbr>7.centos.noarch<br>
ovirt-image-uploader-4.0.0-1.e<wbr>l7.centos.noarch<br>
ovirt-iso-uploader-4.0.0-1.el7<wbr>.centos.noarch<br>
ovirt-setup-lib-1.0.2-1.el7.ce<wbr>ntos.noarch<br>
ovirt-vmconsole-1.0.4-1.el7.ce<wbr>ntos.noarch<br>
ovirt-vmconsole-proxy-1.0.4-1.<wbr>el7.centos.noarch<br>
<br>
<br>
<br>
<br>
log snippet:<br>
2017-02-16 11:34:44,959 INFO
[org.ovirt.engine.core.vdsbrok<wbr>er.irsbroker.GetVmsInfoVDSComm<wbr>and]
(default task-28) [] START,
GetVmsInfoVDSCommand(
GetVmsInfoVDSCommandParameters<wbr>:{runAsync='true',
storagePoolId='00000001-0001-0<wbr>001-0001-0000000001a5',
ignoreFailoverLimit='false',
storageDomainId='13127103-3f59<wbr>-418a-90f1-5b1ade8526b1',
vmIdList='null'}), log id: 3c406c84<br>
2017-02-16 11:34:45,967 INFO
[org.ovirt.engine.core.vdsbrok<wbr>er.irsbroker.GetVmsInfoVDSComm<wbr>and]
(default task-28) [] FINISH,
GetVmsInfoVDSCommand, log id: 3c406c84<br>
2017-02-16 11:34:46,178 INFO
[org.ovirt.engine.core.bll.exp<wbr>ortimport.ExportVmCommand]
(default task-24) [50b27eef] Lock Acquired to
object
'EngineLock:{exclusiveLocks='[<wbr>ba806b93-b6fe-4873-99ec-55bb34<wbr>c12e5f=<VM,
ACTION_TYPE_FAILED_OBJECT_LOCK<wbr>ED>]',
sharedLocks='null'}'<br>
2017-02-16 11:34:46,221 INFO
[org.ovirt.engine.core.vdsbrok<wbr>er.irsbroker.GetVmsInfoVDSComm<wbr>and]
(default task-24) [50b27eef] START,
GetVmsInfoVDSCommand(
GetVmsInfoVDSCommandParameters<wbr>:{runAsync='true',
storagePoolId='00000001-0001-0<wbr>001-0001-0000000001a5',
ignoreFailoverLimit='false',
storageDomainId='13127103-3f59<wbr>-418a-90f1-5b1ade8526b1',
vmIdList='null'}), log id: 61bfd908<br>
2017-02-16 11:34:47,227 INFO
[org.ovirt.engine.core.vdsbrok<wbr>er.irsbroker.GetVmsInfoVDSComm<wbr>and]
(default task-24) [50b27eef] FINISH,
GetVmsInfoVDSCommand, log id: 61bfd908<br>
2017-02-16 11:34:47,242 INFO
[org.ovirt.engine.core.vdsbrok<wbr>er.irsbroker.GetVmsInfoVDSComm<wbr>and]
(default task-24) [50b27eef] START,
GetVmsInfoVDSCommand(
GetVmsInfoVDSCommandParameters<wbr>:{runAsync='true',
storagePoolId='00000001-0001-0<wbr>001-0001-0000000001a5',
ignoreFailoverLimit='false',
storageDomainId='13127103-3f59<wbr>-418a-90f1-5b1ade8526b1',
vmIdList='null'}), log id: 7cd19381<br>
2017-02-16 11:34:47,276 INFO
[org.ovirt.engine.core.vdsbrok<wbr>er.irsbroker.GetVmsInfoVDSComm<wbr>and]
(default task-24) [50b27eef] FINISH,
GetVmsInfoVDSCommand, log id: 7cd19381<br>
2017-02-16 11:34:47,294 INFO
[org.ovirt.engine.core.bll.exportimport.ExportVmCommand] (org.ovirt.thread.pool-8-thread-39) [50b27eef] Running command: ExportVmCommand internal: false. Entities affected : ID: 13127103-3f59-418a-90f1-5b1ade8526b1 Type: StorageAction group IMPORT_EXPORT_VM with role type ADMIN
2017-02-16 11:34:47,296 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (org.ovirt.thread.pool-8-thread-39) [50b27eef] START, SetVmStatusVDSCommand( SetVmStatusVDSCommandParameters:{runAsync='true', vmId='ba806b93-b6fe-4873-99ec-55bb34c12e5f', status='ImageLocked', exitStatus='Normal'}), log id: 61f2f832
2017-02-16 11:34:47,299 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (org.ovirt.thread.pool-8-thread-39) [50b27eef] FINISH, SetVmStatusVDSCommand, log id: 61f2f832
2017-02-16 11:34:47,301 INFO [org.ovirt.engine.core.bll.exportimport.ExportVmCommand] (org.ovirt.thread.pool-8-thread-39) [50b27eef] Lock freed to object 'EngineLock:{exclusiveLocks='[ba806b93-b6fe-4873-99ec-55bb34c12e5f=<VM, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2017-02-16 11:34:47,339 INFO [org.ovirt.engine.core.bll.storage.disk.image.CopyImageGroupCommand] (org.ovirt.thread.pool-8-thread-39) [a12f549] Running command: CopyImageGroupCommand internal: true. Entities affected : ID: 13127103-3f59-418a-90f1-5b1ade8526b1 Type: Storage
2017-02-16 11:34:47,356 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.MoveImageGroupVDSCommand] (org.ovirt.thread.pool-8-thread-39) [a12f549] START, MoveImageGroupVDSCommand( MoveImageGroupVDSCommandParameters:{runAsync='true', storagePoolId='00000001-0001-0001-0001-0000000001a5', ignoreFailoverLimit='false', storageDomainId='a89a626f-3f6b-452b-840e-ce0fee6f6461', imageGroupId='e17ebd7c-0763-42b2-b344-5ad7f9cf448e', dstDomainId='13127103-3f59-418a-90f1-5b1ade8526b1', vmId='ba806b93-b6fe-4873-99ec-55bb34c12e5f', op='Copy', postZero='false', force='false'}), log id: 1ee1f0ae
2017-02-16 11:34:48,211 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.MoveImageGroupVDSCommand] (org.ovirt.thread.pool-8-thread-39) [a12f549] FINISH, MoveImageGroupVDSCommand, log id: 1ee1f0ae
2017-02-16 11:34:48,216 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-8-thread-39) [a12f549] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command '0b807437-17fe-4773-a539-09ddee3df215'
2017-02-16 11:34:48,216 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (org.ovirt.thread.pool-8-thread-39) [a12f549] CommandMultiAsyncTasks::attachTask: Attaching task '3132106a-ce35-4b12-9a72-812e415eff7f' to command '0b807437-17fe-4773-a539-09ddee3df215'.
2017-02-16 11:34:48,225 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (org.ovirt.thread.pool-8-thread-39) [a12f549] Adding task '3132106a-ce35-4b12-9a72-812e415eff7f' (Parent Command 'ExportVm', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling hasn't started yet..
2017-02-16 11:34:48,256 INFO [org.ovirt.engine.core.bll.storage.disk.image.CopyImageGroupCommand] (org.ovirt.thread.pool-8-thread-39) [8887fd8] Running command: CopyImageGroupCommand internal: true. Entities affected : ID: 13127103-3f59-418a-90f1-5b1ade8526b1 Type: Storage
2017-02-16 11:34:48,271 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.MoveImageGroupVDSCommand] (org.ovirt.thread.pool-8-thread-39) [8887fd8] START, MoveImageGroupVDSCommand( MoveImageGroupVDSCommandParameters:{runAsync='true', storagePoolId='00000001-0001-0001-0001-0000000001a5', ignoreFailoverLimit='false', storageDomainId='a89a626f-3f6b-452b-840e-ce0fee6f6461', imageGroupId='b4c4b53e-3813-4959-a145-16f1dfcf1838', dstDomainId='13127103-3f59-418a-90f1-5b1ade8526b1', vmId='ba806b93-b6fe-4873-99ec-55bb34c12e5f', op='Copy', postZero='false', force='false'}), log id: 74c9d14e
2017-02-16 11:34:48,354 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4) [2004a741] Fetched 1 VMs from VDS '627314c4-7861-4ded-8257-22023a6a748d'
2017-02-16 11:34:49,369 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.MoveImageGroupVDSCommand] (org.ovirt.thread.pool-8-thread-39) [8887fd8] FINISH, MoveImageGroupVDSCommand, log id: 74c9d14e
2017-02-16 11:34:49,373 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (org.ovirt.thread.pool-8-thread-39) [8887fd8] CommandMultiAsyncTasks::attachTask: Attaching task '112fb772-a497-4788-829f-190d6d008d95' to command '0b807437-17fe-4773-a539-09ddee3df215'.
2017-02-16 11:34:49,390 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (org.ovirt.thread.pool-8-thread-39) [8887fd8] Adding task '112fb772-a497-4788-829f-190d6d008d95' (Parent Command 'ExportVm', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling hasn't started yet..
2017-02-16 11:34:49,410 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-39) [8887fd8] Correlation ID: 50b27eef, Job ID: 276e8e51-cbed-43bb-bcfa-9842467e978b, Call Stack: null, Custom Event ID: -1, Message: Starting export Vm ecf-sat6.fnal.gov to RITM0524722
2017-02-16 11:34:49,411 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (org.ovirt.thread.pool-8-thread-39) [8887fd8] BaseAsyncTask::startPollingTask: Starting to poll task '3132106a-ce35-4b12-9a72-812e415eff7f'.
2017-02-16 11:34:49,411 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (org.ovirt.thread.pool-8-thread-39) [8887fd8] BaseAsyncTask::startPollingTask: Starting to poll task '112fb772-a497-4788-829f-190d6d008d95'.
2017-02-16 11:34:50,302 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (DefaultQuartzScheduler4) [2004a741] Polling and updating Async Tasks: 2 tasks, 2 tasks to poll now
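
(Aside: with one command fanning out to two async tasks, the interleaved records above are easier to follow if every engine.log line mentioning the correlation ID or either VDSM task ID is pulled into one stream. A minimal Python sketch; the engine.log path below is the usual default and is an assumption, so adjust it for your install:)

    #!/usr/bin/env python
    # Collect every engine.log record tied to this export flow:
    # the correlation ID plus the two VDSM task IDs seen above.
    IDS = (
        '50b27eef',                              # correlation ID
        '3132106a-ce35-4b12-9a72-812e415eff7f',  # first copy task
        '112fb772-a497-4788-829f-190d6d008d95',  # second copy task
    )

    with open('/var/log/ovirt-engine/engine.log') as log:  # assumed default path
        for line in log:
            if any(i in line for i in IDS):
                print(line.rstrip())
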
2017-02-16 11:34:50,361 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler4) [2004a741] Failed in 'HSMGetAllTasksStatusesVDS' method
2017-02-16 11:34:50,365 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler4) [2004a741] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VDSM slam-vmnode-03.fnal.gov command failed: Could not acquire resource. Probably resource factory threw an exception.: ()
2017-02-16 11:34:50,365 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler4) [2004a741] Failed in 'HSMGetAllTasksStatusesVDS' method
2017-02-16 11:34:50,368 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler4) [2004a741] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VDSM slam-vmnode-03.fnal.gov command failed: Could not acquire resource. Probably resource factory threw an exception.: ()
2017-02-16 11:34:50,368 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler4) [2004a741] SPMAsyncTask::PollTask: Polling task '3132106a-ce35-4b12-9a72-812e415eff7f' (Parent Command 'ExportVm', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned status 'finished', result 'cleanSuccess'.
2017-02-16 11:34:50,371 ERROR [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler4) [2004a741] BaseAsyncTask::logEndTaskFailure: Task '3132106a-ce35-4b12-9a72-812e415eff7f' (Parent Command 'ExportVm', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended with failure:
-- Result: 'cleanSuccess'
-- Message: 'VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = Could not acquire resource. Probably resource factory threw an exception.: (), code = 100',
-- Exception: 'VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = Could not acquire resource. Probably resource factory threw an exception.: (), code = 100'
2017-02-16 11:34:50,374 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (DefaultQuartzScheduler4) [2004a741] Task with DB Task ID 'ae5bd098-51e8-4415-b0f7-0f3ef010ec7b' and VDSM Task ID '112fb772-a497-4788-829f-190d6d008d95' is in state Polling. End action for command 0b807437-17fe-4773-a539-09ddee3df215 will proceed when all the entity's tasks are completed.
2017-02-16 11:34:50,374 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler4) [2004a741] SPMAsyncTask::PollTask: Polling task '112fb772-a497-4788-829f-190d6d008d95' (Parent Command 'ExportVm', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned status 'finished', result 'cleanSuccess'.
2017-02-16 11:34:50,377 ERROR [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler4) [2004a741] BaseAsyncTask::logEndTaskFailure: Task '112fb772-a497-4788-829f-190d6d008d95' (Parent Command 'ExportVm', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended with failure:
-- Result: 'cleanSuccess'
-- Message: 'VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = Could not acquire resource. Probably resource factory threw an exception.: (), code = 100',
-- Exception: 'VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = Could not acquire resource. Probably resource factory threw an exception.: (), code = 100'
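
(Note on the result value: 'cleanSuccess' is still a failure from the engine's point of view; both tasks reporting it are logged as 'ended with failure'. My reading is that the task itself failed and only the cleanup/rollback on the vdsm side went through. A small sketch that summarizes poll results from engine.log records on stdin; the regex assumes the single-line record format of the actual log file, not the wrapped quoting here:)

    #!/usr/bin/env python
    # Summarize SPMAsyncTask poll results; treat anything other than
    # a plain 'success' result (e.g. the 'cleanSuccess' above) as a
    # failed task whose rollback completed.
    import re
    import sys

    POLL = re.compile(r"Polling task '([0-9a-f-]+)'.*"
                      r"returned status '(\w+)', result '(\w+)'")

    for line in sys.stdin:
        m = POLL.search(line)
        if m:
            task, status, result = m.groups()
            verdict = 'ok' if result == 'success' else 'failed (rolled back)'
            print('%s status=%s result=%s -> %s' % (task, status, result, verdict))

Feeding it the extract from the earlier sketch prints one verdict line per poll record.
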
2017-02-16 11:34:50,379 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (DefaultQuartzScheduler4) [2004a741] CommandAsyncTask::endActionIfNecessary: All tasks of command '0b807437-17fe-4773-a539-09ddee3df215' has ended -> executing 'endAction'
2017-02-16 11:34:50,379 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (DefaultQuartzScheduler4) [2004a741] CommandAsyncTask::endAction: Ending action for '2' tasks (command ID: '0b807437-17fe-4773-a539-09ddee3df215'): calling endAction '.
2017-02-16 11:34:50,380 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-8-thread-41) [2004a741] CommandAsyncTask::endCommandAction [within thread] context: Attempting to endAction 'ExportVm', executionIndex: '0'
2017-02-16 11:34:50,495 ERROR [org.ovirt.engine.core.bll.exportimport.ExportVmCommand] (org.ovirt.thread.pool-8-thread-41) [2004a741] Ending command 'org.ovirt.engine.core.bll.exportimport.ExportVmCommand' with failure.
2017-02-16 11:34:50,507 ERROR [org.ovirt.engine.core.bll.storage.disk.image.CopyImageGroupCommand] (org.ovirt.thread.pool-8-thread-41) [a12f549] Ending command 'org.ovirt.engine.core.bll.storage.disk.image.CopyImageGroupCommand' with failure.
2017-02-16 11:34:50,529 ERROR [org.ovirt.engine.core.bll.storage.disk.image.CopyImageGroupCommand] (org.ovirt.thread.pool-8-thread-41) [8887fd8] Ending command 'org.ovirt.engine.core.bll.storage.disk.image.CopyImageGroupCommand' with failure.
2017-02-16 11:34:50,534 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (org.ovirt.thread.pool-8-thread-41) [] START, SetVmStatusVDSCommand( SetVmStatusVDSCommandParameters:{runAsync='true', vmId='ba806b93-b6fe-4873-99ec-55bb34c12e5f', status='Down', exitStatus='Normal'}), log id: 28c5e88c
2017-02-16 11:34:50,536 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (org.ovirt.thread.pool-8-thread-41) [] FINISH, SetVmStatusVDSCommand, log id: 28c5e88c
2017-02-16 11:34:50,549 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-41) [] Correlation ID: 50b27eef, Job ID: 276e8e51-cbed-43bb-bcfa-9842467e978b, Call Stack: null, Custom Event ID: -1, Message: Failed to export Vm ecf-sat6.fnal.gov to RITM0524722
2017-02-16 11:34:50,549 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-8-thread-41) [] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'ExportVm' completed, handling the result.
2017-02-16 11:34:50,549 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-8-thread-41) [] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'ExportVm' succeeded, clearing tasks.
2017-02-16 11:34:50,549 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (org.ovirt.thread.pool-8-thread-41) [] SPMAsyncTask::ClearAsyncTask: Attempting to clear task '3132106a-ce35-4b12-9a72-812e415eff7f'
2017-02-16 11:34:50,551 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (org.ovirt.thread.pool-8-thread-41) [] START, SPMClearTaskVDSCommand( SPMTaskGuidBaseVDSCommandParameters:{runAsync='true', storagePoolId='00000001-0001-0001-0001-0000000001a5', ignoreFailoverLimit='false', taskId='3132106a-ce35-4b12-9a72-812e415eff7f'}), log id: 675799a2
2017-02-16 11:34:50,552 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-8-thread-41) [] START, HSMClearTaskVDSCommand(HostName = slam-vmnode-03.fnal.gov, HSMTaskGuidBaseVDSCommandParameters:{runAsync='true', hostId='eacb0ca6-794a-4c94-8dc8-00a8a6d88042', taskId='3132106a-ce35-4b12-9a72-812e415eff7f'}), log id: 2d8fe4d0
2017-02-16 11:34:50,554 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler6) [77b1baeb] Fetched 0 VMs from VDS 'eacb0ca6-794a-4c94-8dc8-00a8a6d88042'
2017-02-16 11:34:51,560 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-8-thread-41) [] FINISH, HSMClearTaskVDSCommand, log id: 2d8fe4d0
2017-02-16 11:34:51,560 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (org.ovirt.thread.pool-8-thread-41) [] FINISH, SPMClearTaskVDSCommand, log id: 675799a2
2017-02-16 11:34:51,564 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (org.ovirt.thread.pool-8-thread-41) [] BaseAsyncTask::removeTaskFromDB: Removed task '3132106a-ce35-4b12-9a72-812e415eff7f' from DataBase
2017-02-16 11:34:51,564 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (org.ovirt.thread.pool-8-thread-41) [] SPMAsyncTask::ClearAsyncTask: Attempting to clear task '112fb772-a497-4788-829f-190d6d008d95'
2017-02-16 11:34:51,566 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (org.ovirt.thread.pool-8-thread-41) [] START, SPMClearTaskVDSCommand( SPMTaskGuidBaseVDSCommandParameters:{runAsync='true', storagePoolId='00000001-0001-0001-0001-0000000001a5', ignoreFailoverLimit='false', taskId='112fb772-a497-4788-829f-190d6d008d95'}), log id: 6b4cf8ff
2017-02-16 11:34:51,567 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-8-thread-41) [] START, HSMClearTaskVDSCommand(HostName = slam-vmnode-03.fnal.gov, HSMTaskGuidBaseVDSCommandParameters:{runAsync='true', hostId='eacb0ca6-794a-4c94-8dc8-00a8a6d88042', taskId='112fb772-a497-4788-829f-190d6d008d95'}), log id: 6f2df357
2017-02-16 11:34:51,608 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-8-thread-41) [] FINISH, HSMClearTaskVDSCommand, log id: 6f2df357
2017-02-16 11:34:51,608 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (org.ovirt.thread.pool-8-thread-41) [] FINISH, SPMClearTaskVDSCommand, log id: 6b4cf8ff
2017-02-16 11:34:51,611 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (org.ovirt.thread.pool-8-thread-41) [] BaseAsyncTask::removeTaskFromDB: Removed task '112fb772-a497-4788-829f-190d6d008d95' from DataBase
2017-02-16 11:34:51,611 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-8-thread-41) [] CommandAsyncTask::HandleEndActionResult [within thread]: Removing CommandMultiAsyncTasks object for entity '0b807437-17fe-4773-a539-09ddee3df215'
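
(One more aside: after a cleanup sequence like this it can be worth confirming that the SPM host really has no leftover tasks. On vdsm of this vintage the vdsClient CLI should answer that; the exact verb below is my assumption for this host's version, and newer releases use 'vdsm-client Host getAllTasksStatuses' instead:)

    #!/usr/bin/env python
    # Ask the local vdsm for all storage task statuses to confirm the
    # cleared tasks are gone. Assumes the 4.x-era vdsClient CLI with
    # SSL enabled ('-s 0' means localhost over SSL).
    import subprocess

    output = subprocess.check_output(
        ['vdsClient', '-s', '0', 'getAllTasksStatuses'])
    print(output.decode('utf-8', 'replace'))
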

--
Pat Riehecky

Fermi National Accelerator Laboratory
www.fnal.gov
www.scientificlinux.org

--
Adam Litke