[ovirt-users] Unable to export VM
Pat Riehecky
riehecky at fnal.gov
Mon Feb 20 20:16:41 UTC 2017
The disks dropped off while I was doing a lot of disk I/O on the VM -
untarring a 2TB archive. I'd taken a snapshot of the VM about 30 minutes earlier.
The storage shows no faults at this time. The switch fabric was online
for all 4 paths.
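
In case it is useful, here is a minimal sketch of how the path state could
be re-checked from the host side. It just wraps `multipath -ll` and flags
anything reported as failed/faulty/offline; it assumes device-mapper-multipath
is installed and that it is run as root, so treat it as a sketch rather than
a definitive check.

#!/usr/bin/env python
# Minimal sketch (assumptions: device-mapper-multipath installed, run as root):
# flag any lines that `multipath -ll` reports as failed/faulty/offline.
import subprocess

def suspect_paths():
    out = subprocess.check_output(['multipath', '-ll']).decode('utf-8', 'replace')
    bad = ('failed', 'faulty', 'offline')
    return [l.strip() for l in out.splitlines() if any(w in l for w in bad)]

if __name__ == '__main__':
    lines = suspect_paths()
    if lines:
        print('Suspect multipath paths:')
        for l in lines:
            print('  ' + l)
    else:
        print('multipath -ll reports no failed/faulty/offline paths')
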
Pat
On 02/20/2017 02:08 PM, Adam Litke wrote:
> There definitely seems to be a problem with your storage domain
> 81f19871... The host is unable to join that domain's sanlock
> lockspace. Also, it seems that some metadata for the disk with id
> e17ebd7c... was corrupted or lost in translation somehow. Can you
> provide more details about what happened when "the disk images got
> 'unregistered' from oVirt"? Were you performing any particular
> operations (such as moving disks, snapshot create/delete, etc)? Was
> there a problem with the storage at that time?
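
(As an aside on the lockspace question: one minimal way to check from the
host side is sketched below. It simply looks for the domain UUID among the
lockspace lines of `sanlock client status`, and assumes sanlock is running
and that the script is run as root on the host in question.)

#!/usr/bin/env python
# Minimal sketch (assumptions: sanlock running, run as root on the host):
# report whether this host currently holds a sanlock lockspace for the
# given storage domain UUID, based on `sanlock client status` output.
import subprocess
import sys

DOMAIN = '81f19871-4d91-4698-a97d-36452bfae281'

def host_in_lockspace(domain_uuid):
    out = subprocess.check_output(
        ['sanlock', 'client', 'status']).decode('utf-8', 'replace')
    # Lockspace entries in the status output start with "s ".
    for line in out.splitlines():
        if line.startswith('s ') and domain_uuid in line:
            return True
    return False

if __name__ == '__main__':
    if host_in_lockspace(DOMAIN):
        print('host is in the lockspace for %s' % DOMAIN)
    else:
        print('host has NOT joined the lockspace for %s' % DOMAIN)
        sys.exit(1)
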
>
> On Mon, Feb 20, 2017 at 9:51 AM, Pat Riehecky <riehecky at fnal.gov> wrote:
>
> Hi Adam,
>
>     Thanks for looking! The storage is fibre attached, and I've
>     verified with the SAN folks that nothing went wonky on their side
>     during this window.
>
> Here is what I've got from vdsm.log during the window (and a bit
> surrounding it for context):
>
> libvirtEventLoop::WARNING::2017-02-16
> 08:35:17,435::utils::140::root::(rmFile) File:
> /var/lib/libvirt/qemu/channels/ba806b93-b6fe-4873-99ec-55bb34c12e5f.com.redhat.rhevm.vdsm
> already removed
> libvirtEventLoop::WARNING::2017-02-16
> 08:35:17,435::utils::140::root::(rmFile) File:
> /var/lib/libvirt/qemu/channels/ba806b93-b6fe-4873-99ec-55bb34c12e5f.org.qemu.guest_agent.0
> already removed
> periodic/2::WARNING::2017-02-16
> 08:35:18,144::periodic::295::virt.vm::(__call__)
> vmId=`ba806b93-b6fe-4873-99ec-55bb34c12e5f`::could not run on
> ba806b93-b6fe-4873-99ec-55bb34c12e5f: domain not connected
> periodic/3::WARNING::2017-02-16
> 08:35:18,305::periodic::261::virt.periodic.VmDispatcher::(__call__)
> could not run <class 'virt.periodic.DriveWatermarkMonitor'> on
> ['ba806b93-b6fe-4873-99ec-55bb34c12e5f']
> Thread-23021::ERROR::2017-02-16
> 09:28:33,096::task::866::Storage.TaskManager.Task::(_setError)
> Task=`ecab8086-261f-44b9-8123-eefb9bbf5b05`::Unexpected error
> Thread-23021::ERROR::2017-02-16
> 09:28:33,097::dispatcher::76::Storage.Dispatcher::(wrapper)
> {'status': {'message': "Storage domain is member of pool:
> 'domain=81f19871-4d91-4698-a97d-36452bfae281'", 'code': 900}}
> Thread-23783::ERROR::2017-02-16
> 10:13:32,876::task::866::Storage.TaskManager.Task::(_setError)
> Task=`ff628204-6e41-4e5e-b83a-dad6ec94d0d3`::Unexpected error
> Thread-23783::ERROR::2017-02-16
> 10:13:32,877::dispatcher::76::Storage.Dispatcher::(wrapper)
> {'status': {'message': "Storage domain is member of pool:
> 'domain=81f19871-4d91-4698-a97d-36452bfae281'", 'code': 900}}
> Thread-24542::ERROR::2017-02-16
> 10:58:32,578::task::866::Storage.TaskManager.Task::(_setError)
> Task=`f5111200-e980-46bb-bbc3-898ae312d556`::Unexpected error
> Thread-24542::ERROR::2017-02-16
> 10:58:32,579::dispatcher::76::Storage.Dispatcher::(wrapper)
> {'status': {'message': "Storage domain is member of pool:
> 'domain=81f19871-4d91-4698-a97d-36452bfae281'", 'code': 900}}
> jsonrpc.Executor/4::ERROR::2017-02-16
> 11:28:24,049::sdc::139::Storage.StorageDomainCache::(_findDomain)
> looking for unfetched domain 13127103-3f59-418a-90f1-5b1ade8526b1
> jsonrpc.Executor/4::ERROR::2017-02-16
> 11:28:24,049::sdc::156::Storage.StorageDomainCache::(_findUnfetchedDomain)
> looking for domain 13127103-3f59-418a-90f1-5b1ade8526b1
> jsonrpc.Executor/4::ERROR::2017-02-16
> 11:28:24,305::sdc::145::Storage.StorageDomainCache::(_findDomain)
> domain 13127103-3f59-418a-90f1-5b1ade8526b1 not found
> 6e31bf97-458c-4a30-9df5-14f475db3339::ERROR::2017-02-16
> 11:29:19,402::image::205::Storage.Image::(getChain) There is no
> leaf in the image e17ebd7c-0763-42b2-b344-5ad7f9cf448e
> 6e31bf97-458c-4a30-9df5-14f475db3339::ERROR::2017-02-16
> 11:29:19,403::task::866::Storage.TaskManager.Task::(_setError)
> Task=`6e31bf97-458c-4a30-9df5-14f475db3339`::Unexpected error
> 79ed31a2-5ac7-4304-ab4d-d05f72694860::ERROR::2017-02-16
> 11:29:20,649::image::205::Storage.Image::(getChain) There is no
> leaf in the image b4c4b53e-3813-4959-a145-16f1dfcf1838
> 79ed31a2-5ac7-4304-ab4d-d05f72694860::ERROR::2017-02-16
> 11:29:20,650::task::866::Storage.TaskManager.Task::(_setError)
> Task=`79ed31a2-5ac7-4304-ab4d-d05f72694860`::Unexpected error
> jsonrpc.Executor/5::ERROR::2017-02-16
> 11:30:17,063::image::205::Storage.Image::(getChain) There is no
> leaf in the image e17ebd7c-0763-42b2-b344-5ad7f9cf448e
> jsonrpc.Executor/5::ERROR::2017-02-16
> 11:30:17,064::task::866::Storage.TaskManager.Task::(_setError)
> Task=`62f20e22-e850-44c8-8943-faa4ce71e973`::Unexpected error
> jsonrpc.Executor/5::ERROR::2017-02-16
> 11:30:17,065::dispatcher::76::Storage.Dispatcher::(wrapper)
> {'status': {'message': "Image is not a legal chain:
> ('e17ebd7c-0763-42b2-b344-5ad7f9cf448e',)", 'code': 262}}
> jsonrpc.Executor/4::ERROR::2017-02-16
> 11:33:18,487::image::205::Storage.Image::(getChain) There is no
> leaf in the image e17ebd7c-0763-42b2-b344-5ad7f9cf448e
> jsonrpc.Executor/4::ERROR::2017-02-16
> 11:33:18,488::task::866::Storage.TaskManager.Task::(_setError)
> Task=`e4d893f2-7be6-4f84-9ac6-58b5a5d1364e`::Unexpected error
> jsonrpc.Executor/4::ERROR::2017-02-16
> 11:33:18,489::dispatcher::76::Storage.Dispatcher::(wrapper)
> {'status': {'message': "Image is not a legal chain:
> ('e17ebd7c-0763-42b2-b344-5ad7f9cf448e',)", 'code': 262}}
> 3132106a-ce35-4b12-9a72-812e415eff7f::ERROR::2017-02-16
> 11:34:47,595::image::205::Storage.Image::(getChain) There is no
> leaf in the image e17ebd7c-0763-42b2-b344-5ad7f9cf448e
> 3132106a-ce35-4b12-9a72-812e415eff7f::ERROR::2017-02-16
> 11:34:47,596::task::866::Storage.TaskManager.Task::(_setError)
> Task=`3132106a-ce35-4b12-9a72-812e415eff7f`::Unexpected error
> 112fb772-a497-4788-829f-190d6d008d95::ERROR::2017-02-16
> 11:34:48,517::image::205::Storage.Image::(getChain) There is no
> leaf in the image b4c4b53e-3813-4959-a145-16f1dfcf1838
> 112fb772-a497-4788-829f-190d6d008d95::ERROR::2017-02-16
> 11:34:48,517::task::866::Storage.TaskManager.Task::(_setError)
> Task=`112fb772-a497-4788-829f-190d6d008d95`::Unexpected error
> Thread-25336::ERROR::2017-02-16
> 11:43:32,726::task::866::Storage.TaskManager.Task::(_setError)
> Task=`fafb120e-e7c6-4d3e-b87a-8116484f1c1a`::Unexpected error
> Thread-25336::ERROR::2017-02-16
> 11:43:32,727::dispatcher::76::Storage.Dispatcher::(wrapper)
> {'status': {'message': "Storage domain is member of pool:
> 'domain=81f19871-4d91-4698-a97d-36452bfae281'", 'code': 900}}
> jsonrpc.Executor/0::WARNING::2017-02-16
> 11:54:05,875::momIF::113::MOM::(getStatus) MOM not available.
> jsonrpc.Executor/0::WARNING::2017-02-16
> 11:54:05,877::momIF::76::MOM::(getKsmStats) MOM not available, KSM
> stats will be missing.
> ioprocess communication (10025)::ERROR::2017-02-16
> 11:54:05,890::__init__::176::IOProcessClient::(_communicate)
> IOProcess failure
> ioprocess communication (10364)::ERROR::2017-02-16
> 11:54:05,892::__init__::176::IOProcessClient::(_communicate)
> IOProcess failure
> ioprocess communication (23403)::ERROR::2017-02-16
> 11:54:05,892::__init__::176::IOProcessClient::(_communicate)
> IOProcess failure
> ioprocess communication (31710)::ERROR::2017-02-16
> 11:54:05,999::__init__::176::IOProcessClient::(_communicate)
> IOProcess failure
> ioprocess communication (31717)::ERROR::2017-02-16
> 11:54:05,999::__init__::176::IOProcessClient::(_communicate)
> IOProcess failure
> ioprocess communication (31724)::ERROR::2017-02-16
> 11:54:06,000::__init__::176::IOProcessClient::(_communicate)
> IOProcess failure
> Thread-16::ERROR::2017-02-16
> 11:54:21,657::monitor::387::Storage.Monitor::(_acquireHostId)
> Error acquiring host id 2 for domain
> 81f19871-4d91-4698-a97d-36452bfae281
> jsonrpc.Executor/7::ERROR::2017-02-16
> 11:54:21,885::API::1871::vds::(_getHaInfo) failed to retrieve
> Hosted Engine HA info
> jsonrpc.Executor/0::ERROR::2017-02-16
> 11:54:21,890::task::866::Storage.TaskManager.Task::(_setError)
> Task=`73ca0c58-3e86-47e8-80f2-31d97346f0a3`::Unexpected error
> jsonrpc.Executor/0::ERROR::2017-02-16
> 11:54:21,892::dispatcher::79::Storage.Dispatcher::(wrapper)
> Secured object is not in safe state
> Thread-16::ERROR::2017-02-16
> 11:54:31,673::monitor::387::Storage.Monitor::(_acquireHostId)
> Error acquiring host id 2 for domain
> 81f19871-4d91-4698-a97d-36452bfae281
> jsonrpc.Executor/4::ERROR::2017-02-16
> 11:54:34,309::API::1871::vds::(_getHaInfo) failed to retrieve
> Hosted Engine HA info
> jsonrpc.Executor/2::ERROR::2017-02-16
> 11:57:30,796::API::1871::vds::(_getHaInfo) failed to retrieve
> Hosted Engine HA info
> jsonrpc.Executor/7::ERROR::2017-02-16
> 11:57:39,847::image::205::Storage.Image::(getChain) There is no
> leaf in the image e17ebd7c-0763-42b2-b344-5ad7f9cf448e
> jsonrpc.Executor/7::ERROR::2017-02-16
> 11:57:39,848::task::866::Storage.TaskManager.Task::(_setError)
> Task=`e4ae2972-77d4-406a-ac71-b285953b76ae`::Unexpected error
> jsonrpc.Executor/7::ERROR::2017-02-16
> 11:57:39,849::dispatcher::76::Storage.Dispatcher::(wrapper)
> {'status': {'message': "Image is not a legal chain:
> ('e17ebd7c-0763-42b2-b344-5ad7f9cf448e',)", 'code': 262}}
> jsonrpc.Executor/0::ERROR::2017-02-16
> 11:57:45,965::API::1871::vds::(_getHaInfo) failed to retrieve
> Hosted Engine HA info
> jsonrpc.Executor/5::ERROR::2017-02-16
> 13:01:26,274::image::205::Storage.Image::(getChain) There is no
> leaf in the image e17ebd7c-0763-42b2-b344-5ad7f9cf448e
> jsonrpc.Executor/5::ERROR::2017-02-16
> 13:01:26,275::task::866::Storage.TaskManager.Task::(_setError)
> Task=`2a214b3a-a50b-425a-ad99-bf5cc6be13ef`::Unexpected error
> jsonrpc.Executor/5::ERROR::2017-02-16
> 13:01:26,276::dispatcher::76::Storage.Dispatcher::(wrapper)
> {'status': {'message': "Image is not a legal chain:
> ('e17ebd7c-0763-42b2-b344-5ad7f9cf448e',)", 'code': 262}}
> periodic/3::WARNING::2017-02-16
> 13:13:52,268::periodic::261::virt.periodic.VmDispatcher::(__call__)
> could not run <class 'virt.periodic.DriveWatermarkMonitor'> on
> ['ba806b93-b6fe-4873-99ec-55bb34c12e5f']
> periodic/2::WARNING::2017-02-16
> 13:50:15,062::periodic::261::virt.periodic.VmDispatcher::(__call__)
> could not run <class 'virt.periodic.DriveWatermarkMonitor'> on
> ['ba806b93-b6fe-4873-99ec-55bb34c12e5f']
> periodic/1::WARNING::2017-02-16
> 13:51:15,085::periodic::261::virt.periodic.VmDispatcher::(__call__)
> could not run <class 'virt.periodic.DriveWatermarkMonitor'> on
> ['ba806b93-b6fe-4873-99ec-55bb34c12e5f']
> periodic/3::WARNING::2017-02-16
> 13:51:45,081::periodic::261::virt.periodic.VmDispatcher::(__call__)
> could not run <class 'virt.periodic.DriveWatermarkMonitor'> on
> ['ba806b93-b6fe-4873-99ec-55bb34c12e5f']
> periodic/0::WARNING::2017-02-16
> 15:21:45,347::periodic::261::virt.periodic.VmDispatcher::(__call__)
> could not run <class 'virt.periodic.DriveWatermarkMonitor'> on
> ['ba806b93-b6fe-4873-99ec-55bb34c12e5f']
> periodic/0::WARNING::2017-02-16
> 16:21:00,522::periodic::261::virt.periodic.VmDispatcher::(__call__)
> could not run <class 'virt.periodic.DriveWatermarkMonitor'> on
> ['ba806b93-b6fe-4873-99ec-55bb34c12e5f']
> periodic/3::WARNING::2017-02-16
> 17:49:00,858::periodic::261::virt.periodic.VmDispatcher::(__call__)
> could not run <class 'virt.periodic.DriveWatermarkMonitor'> on
> ['ba806b93-b6fe-4873-99ec-55bb34c12e5f']
> periodic/3::WARNING::2017-02-16
> 17:50:00,868::periodic::261::virt.periodic.VmDispatcher::(__call__)
> could not run <class 'virt.periodic.DriveWatermarkMonitor'> on
> ['ba806b93-b6fe-4873-99ec-55bb34c12e5f']
> periodic/0::WARNING::2017-02-16
> 17:51:30,899::periodic::261::virt.periodic.VmDispatcher::(__call__)
> could not run <class 'virt.periodic.DriveWatermarkMonitor'> on
> ['ba806b93-b6fe-4873-99ec-55bb34c12e5f']
> periodic/0::WARNING::2017-02-16
> 17:52:30,907::periodic::261::virt.periodic.VmDispatcher::(__call__)
> could not run <class 'virt.periodic.DriveWatermarkMonitor'> on
> ['ba806b93-b6fe-4873-99ec-55bb34c12e5f']
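
(Regarding the repeated "There is no leaf in the image e17ebd7c..." errors
above: the sketch below is one way the volume chain could be inspected on a
block storage domain. It assumes the vdsm convention of tagging each LV with
IU_<image-uuid> and PU_<parent-volume-uuid>, and that the domain's VG is
named after the domain UUID; treat it as an illustration, not a fix.)

#!/usr/bin/env python
# Minimal sketch (assumptions: block storage domain whose VG is named after
# the domain UUID; vdsm tags each LV with IU_<image-uuid> and
# PU_<parent-volume-uuid>).  Lists the volumes of one image and reports
# which, if any, is the leaf (a volume no other volume names as its parent).
import subprocess

VG = '81f19871-4d91-4698-a97d-36452bfae281'        # storage domain / VG uuid
IMAGE = 'e17ebd7c-0763-42b2-b344-5ad7f9cf448e'     # image (disk) uuid

def image_volumes(vg, image_uuid):
    out = subprocess.check_output(
        ['lvs', '--noheadings', '-o', 'lv_name,lv_tags', vg]
    ).decode('utf-8', 'replace')
    vols = {}
    for line in out.splitlines():
        parts = line.split()
        if not parts:
            continue
        name = parts[0]
        tags = parts[1] if len(parts) > 1 else ''
        if 'IU_' + image_uuid in tags:
            parent = None
            for tag in tags.split(','):
                if tag.startswith('PU_'):
                    parent = tag[len('PU_'):]
            vols[name] = parent
    return vols

if __name__ == '__main__':
    vols = image_volumes(VG, IMAGE)
    parents = set(p for p in vols.values() if p)
    leaves = [v for v in vols if v not in parents]
    print('volumes in image %s: %s' % (IMAGE, sorted(vols)))
    print('leaf candidate(s): %s' % (leaves or 'none found'))

A volume that no other volume names as its parent should be the leaf; if
nothing qualifies, that would match the getChain() errors above.
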
>
>
>
> On 02/20/2017 08:45 AM, Adam Litke wrote:
>> Hi Pat. I'd like to help you investigate this issue further.
>> Could you send a snippet of the vdsm.log on slam-vmnode-03 that
>> covers the time period during this failure? Engine is reporting
>> that vdsm has likely thrown an exception while acquiring locks
>> associated with the VM disk you are exporting.
>>
>> On Thu, Feb 16, 2017 at 12:40 PM, Pat Riehecky <riehecky at fnal.gov> wrote:
>>
>>     Any attempt to export my VM errors out. Last night the disk
>>     images got 'unregistered' from oVirt and I had to rescan the
>>     storage domain to find them again. Now I'm just trying to get
>>     a backup of the VM.
>>
>>     The snapshots of the old disks are still listed, but I don't
>>     know whether the LVM volumes behind them still exist, or whether
>>     that is even what is wrong.
>>
>>     steps I followed (a scripted equivalent is sketched after the list) ->
>> Halt VM
>> Click Export
>> leave things unchecked and click OK
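
(For reference, a minimal sketch of the same export driven through the
Python SDK listed in the package set below, ovirt-engine-sdk-python 3.6.x.
The engine URL and credentials are placeholders; the VM and export domain
names are taken from the log snippet further down, and error handling is
omitted.)

#!/usr/bin/env python
# Minimal sketch using the oVirt 3.6 Python SDK (ovirt-engine-sdk-python).
# Engine URL and credentials are placeholders; VM and export domain names
# come from the thread.  The VM should already be shut down.
from ovirtsdk.api import API
from ovirtsdk.xml import params

api = API(url='https://engine.example.com/ovirt-engine/api',
          username='admin@internal',
          password='secret',
          insecure=True)   # assumption: self-signed engine certificate
try:
    vm = api.vms.get(name='ecf-sat6.fnal.gov')
    # Export to the export storage domain.
    vm.export(params.Action(
        storage_domain=params.StorageDomain(name='RITM0524722')))
finally:
    api.disconnect()
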
>>
>> oVirt version:
>> ovirt-engine-4.0.3-1.el7.centos.noarch
>> ovirt-engine-backend-4.0.3-1.el7.centos.noarch
>> ovirt-engine-cli-3.6.9.2-1.el7.noarch
>> ovirt-engine-dashboard-1.0.3-1.el7.centos.noarch
>> ovirt-engine-dbscripts-4.0.3-1.el7.centos.noarch
>> ovirt-engine-dwh-4.0.2-1.el7.centos.noarch
>> ovirt-engine-dwh-setup-4.0.2-1.el7.centos.noarch
>> ovirt-engine-extension-aaa-jdbc-1.1.0-1.el7.noarch
>> ovirt-engine-extension-aaa-ldap-1.2.1-1.el7.noarch
>> ovirt-engine-extension-aaa-ldap-setup-1.2.1-1.el7.noarch
>> ovirt-engine-extensions-api-impl-4.0.3-1.el7.centos.noarch
>> ovirt-engine-lib-4.0.3-1.el7.centos.noarch
>> ovirt-engine-restapi-4.0.3-1.el7.centos.noarch
>> ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
>> ovirt-engine-setup-4.0.3-1.el7.centos.noarch
>> ovirt-engine-setup-base-4.0.3-1.el7.centos.noarch
>> ovirt-engine-setup-plugin-ovirt-engine-4.0.3-1.el7.centos.noarch
>> ovirt-engine-setup-plugin-ovirt-engine-common-4.0.3-1.el7.centos.noarch
>> ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.0.3-1.el7.centos.noarch
>> ovirt-engine-setup-plugin-websocket-proxy-4.0.3-1.el7.centos.noarch
>> ovirt-engine-tools-4.0.3-1.el7.centos.noarch
>> ovirt-engine-tools-backup-4.0.3-1.el7.centos.noarch
>> ovirt-engine-userportal-4.0.3-1.el7.centos.noarch
>> ovirt-engine-vmconsole-proxy-helper-4.0.3-1.el7.centos.noarch
>> ovirt-engine-webadmin-portal-4.0.3-1.el7.centos.noarch
>> ovirt-engine-websocket-proxy-4.0.3-1.el7.centos.noarch
>> ovirt-engine-wildfly-10.0.0-1.el7.x86_64
>> ovirt-engine-wildfly-overlay-10.0.0-1.el7.noarch
>> ovirt-guest-agent-common-1.0.12-4.el7.noarch
>> ovirt-host-deploy-1.5.1-1.el7.centos.noarch
>> ovirt-host-deploy-java-1.5.1-1.el7.centos.noarch
>> ovirt-imageio-common-0.3.0-1.el7.noarch
>> ovirt-imageio-proxy-0.3.0-0.201606191345.git9f3d6d4.el7.centos.noarch
>> ovirt-imageio-proxy-setup-0.3.0-0.201606191345.git9f3d6d4.el7.centos.noarch
>> ovirt-image-uploader-4.0.0-1.el7.centos.noarch
>> ovirt-iso-uploader-4.0.0-1.el7.centos.noarch
>> ovirt-setup-lib-1.0.2-1.el7.centos.noarch
>> ovirt-vmconsole-1.0.4-1.el7.centos.noarch
>> ovirt-vmconsole-proxy-1.0.4-1.el7.centos.noarch
>>
>>
>>
>>
>> log snippet:
>> 2017-02-16 11:34:44,959 INFO
>> [org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand]
>> (default task-28) [] START, GetVmsInfoVDSCommand(
>> GetVmsInfoVDSCommandParameters:{runAsync='true',
>> storagePoolId='00000001-0001-0001-0001-0000000001a5',
>> ignoreFailoverLimit='false',
>> storageDomainId='13127103-3f59-418a-90f1-5b1ade8526b1',
>> vmIdList='null'}), log id: 3c406c84
>> 2017-02-16 11:34:45,967 INFO
>> [org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand]
>> (default task-28) [] FINISH, GetVmsInfoVDSCommand, log id:
>> 3c406c84
>> 2017-02-16 11:34:46,178 INFO
>> [org.ovirt.engine.core.bll.exportimport.ExportVmCommand]
>> (default task-24) [50b27eef] Lock Acquired to object
>> 'EngineLock:{exclusiveLocks='[ba806b93-b6fe-4873-99ec-55bb34c12e5f=<VM,
>> ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
>> 2017-02-16 11:34:46,221 INFO
>> [org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand]
>> (default task-24) [50b27eef] START, GetVmsInfoVDSCommand(
>> GetVmsInfoVDSCommandParameters:{runAsync='true',
>> storagePoolId='00000001-0001-0001-0001-0000000001a5',
>> ignoreFailoverLimit='false',
>> storageDomainId='13127103-3f59-418a-90f1-5b1ade8526b1',
>> vmIdList='null'}), log id: 61bfd908
>> 2017-02-16 11:34:47,227 INFO
>> [org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand]
>> (default task-24) [50b27eef] FINISH, GetVmsInfoVDSCommand,
>> log id: 61bfd908
>> 2017-02-16 11:34:47,242 INFO
>> [org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand]
>> (default task-24) [50b27eef] START, GetVmsInfoVDSCommand(
>> GetVmsInfoVDSCommandParameters:{runAsync='true',
>> storagePoolId='00000001-0001-0001-0001-0000000001a5',
>> ignoreFailoverLimit='false',
>> storageDomainId='13127103-3f59-418a-90f1-5b1ade8526b1',
>> vmIdList='null'}), log id: 7cd19381
>> 2017-02-16 11:34:47,276 INFO
>> [org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand]
>> (default task-24) [50b27eef] FINISH, GetVmsInfoVDSCommand,
>> log id: 7cd19381
>> 2017-02-16 11:34:47,294 INFO
>> [org.ovirt.engine.core.bll.exportimport.ExportVmCommand]
>> (org.ovirt.thread.pool-8-thread-39) [50b27eef] Running
>> command: ExportVmCommand internal: false. Entities affected
>> : ID: 13127103-3f59-418a-90f1-5b1ade8526b1 Type:
>> StorageAction group IMPORT_EXPORT_VM with role type ADMIN
>> 2017-02-16 11:34:47,296 INFO
>> [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand]
>> (org.ovirt.thread.pool-8-thread-39) [50b27eef] START,
>> SetVmStatusVDSCommand(
>> SetVmStatusVDSCommandParameters:{runAsync='true',
>> vmId='ba806b93-b6fe-4873-99ec-55bb34c12e5f',
>> status='ImageLocked', exitStatus='Normal'}), log id: 61f2f832
>> 2017-02-16 11:34:47,299 INFO
>> [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand]
>> (org.ovirt.thread.pool-8-thread-39) [50b27eef] FINISH,
>> SetVmStatusVDSCommand, log id: 61f2f832
>> 2017-02-16 11:34:47,301 INFO
>> [org.ovirt.engine.core.bll.exportimport.ExportVmCommand]
>> (org.ovirt.thread.pool-8-thread-39) [50b27eef] Lock freed to
>> object
>> 'EngineLock:{exclusiveLocks='[ba806b93-b6fe-4873-99ec-55bb34c12e5f=<VM,
>> ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
>> 2017-02-16 11:34:47,339 INFO
>> [org.ovirt.engine.core.bll.storage.disk.image.CopyImageGroupCommand]
>> (org.ovirt.thread.pool-8-thread-39) [a12f549] Running
>> command: CopyImageGroupCommand internal: true. Entities
>> affected : ID: 13127103-3f59-418a-90f1-5b1ade8526b1 Type:
>> Storage
>> 2017-02-16 11:34:47,356 INFO
>> [org.ovirt.engine.core.vdsbroker.irsbroker.MoveImageGroupVDSCommand]
>> (org.ovirt.thread.pool-8-thread-39) [a12f549] START,
>> MoveImageGroupVDSCommand(
>> MoveImageGroupVDSCommandParameters:{runAsync='true',
>> storagePoolId='00000001-0001-0001-0001-0000000001a5',
>> ignoreFailoverLimit='false',
>> storageDomainId='a89a626f-3f6b-452b-840e-ce0fee6f6461',
>> imageGroupId='e17ebd7c-0763-42b2-b344-5ad7f9cf448e',
>> dstDomainId='13127103-3f59-418a-90f1-5b1ade8526b1',
>> vmId='ba806b93-b6fe-4873-99ec-55bb34c12e5f', op='Copy',
>> postZero='false', force='false'}), log id: 1ee1f0ae
>> 2017-02-16 11:34:48,211 INFO
>> [org.ovirt.engine.core.vdsbroker.irsbroker.MoveImageGroupVDSCommand]
>> (org.ovirt.thread.pool-8-thread-39) [a12f549] FINISH,
>> MoveImageGroupVDSCommand, log id: 1ee1f0ae
>> 2017-02-16 11:34:48,216 INFO
>> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
>> (org.ovirt.thread.pool-8-thread-39) [a12f549]
>> CommandAsyncTask::Adding CommandMultiAsyncTasks object for
>> command '0b807437-17fe-4773-a539-09ddee3df215'
>> 2017-02-16 11:34:48,216 INFO
>> [org.ovirt.engine.core.bll.CommandMultiAsyncTasks]
>> (org.ovirt.thread.pool-8-thread-39) [a12f549]
>> CommandMultiAsyncTasks::attachTask: Attaching task
>> '3132106a-ce35-4b12-9a72-812e415eff7f' to command
>> '0b807437-17fe-4773-a539-09ddee3df215'.
>> 2017-02-16 11:34:48,225 INFO
>> [org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
>> (org.ovirt.thread.pool-8-thread-39) [a12f549] Adding task
>> '3132106a-ce35-4b12-9a72-812e415eff7f' (Parent Command
>> 'ExportVm', Parameters Type
>> 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'),
>> polling hasn't started yet..
>> 2017-02-16 11:34:48,256 INFO
>> [org.ovirt.engine.core.bll.storage.disk.image.CopyImageGroupCommand]
>> (org.ovirt.thread.pool-8-thread-39) [8887fd8] Running
>> command: CopyImageGroupCommand internal: true. Entities
>> affected : ID: 13127103-3f59-418a-90f1-5b1ade8526b1 Type:
>> Storage
>> 2017-02-16 11:34:48,271 INFO
>> [org.ovirt.engine.core.vdsbroker.irsbroker.MoveImageGroupVDSCommand]
>> (org.ovirt.thread.pool-8-thread-39) [8887fd8] START,
>> MoveImageGroupVDSCommand(
>> MoveImageGroupVDSCommandParameters:{runAsync='true',
>> storagePoolId='00000001-0001-0001-0001-0000000001a5',
>> ignoreFailoverLimit='false',
>> storageDomainId='a89a626f-3f6b-452b-840e-ce0fee6f6461',
>> imageGroupId='b4c4b53e-3813-4959-a145-16f1dfcf1838',
>> dstDomainId='13127103-3f59-418a-90f1-5b1ade8526b1',
>> vmId='ba806b93-b6fe-4873-99ec-55bb34c12e5f', op='Copy',
>> postZero='false', force='false'}), log id: 74c9d14e
>> 2017-02-16 11:34:48,354 INFO
>> [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher]
>> (DefaultQuartzScheduler4) [2004a741] Fetched 1 VMs from VDS
>> '627314c4-7861-4ded-8257-22023a6a748d'
>> 2017-02-16 11:34:49,369 INFO
>> [org.ovirt.engine.core.vdsbroker.irsbroker.MoveImageGroupVDSCommand]
>> (org.ovirt.thread.pool-8-thread-39) [8887fd8] FINISH,
>> MoveImageGroupVDSCommand, log id: 74c9d14e
>> 2017-02-16 11:34:49,373 INFO
>> [org.ovirt.engine.core.bll.CommandMultiAsyncTasks]
>> (org.ovirt.thread.pool-8-thread-39) [8887fd8]
>> CommandMultiAsyncTasks::attachTask: Attaching task
>> '112fb772-a497-4788-829f-190d6d008d95' to command
>> '0b807437-17fe-4773-a539-09ddee3df215'.
>> 2017-02-16 11:34:49,390 INFO
>> [org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
>> (org.ovirt.thread.pool-8-thread-39) [8887fd8] Adding task
>> '112fb772-a497-4788-829f-190d6d008d95' (Parent Command
>> 'ExportVm', Parameters Type
>> 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'),
>> polling hasn't started yet..
>> 2017-02-16 11:34:49,410 INFO
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (org.ovirt.thread.pool-8-thread-39) [8887fd8] Correlation ID:
>> 50b27eef, Job ID: 276e8e51-cbed-43bb-bcfa-9842467e978b, Call
>> Stack: null, Custom Event ID: -1, Message: Starting export Vm
>> ecf-sat6.fnal.gov to RITM0524722
>> 2017-02-16 11:34:49,411 INFO
>> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
>> (org.ovirt.thread.pool-8-thread-39) [8887fd8]
>> BaseAsyncTask::startPollingTask: Starting to poll task
>> '3132106a-ce35-4b12-9a72-812e415eff7f'.
>> 2017-02-16 11:34:49,411 INFO
>> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
>> (org.ovirt.thread.pool-8-thread-39) [8887fd8]
>> BaseAsyncTask::startPollingTask: Starting to poll task
>> '112fb772-a497-4788-829f-190d6d008d95'.
>> 2017-02-16 11:34:50,302 INFO
>> [org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
>> (DefaultQuartzScheduler4) [2004a741] Polling and updating
>> Async Tasks: 2 tasks, 2 tasks to poll now
>> 2017-02-16 11:34:50,361 ERROR
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand]
>> (DefaultQuartzScheduler4) [2004a741] Failed in
>> 'HSMGetAllTasksStatusesVDS' method
>> 2017-02-16 11:34:50,365 ERROR
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (DefaultQuartzScheduler4) [2004a741] Correlation ID: null,
>> Call Stack: null, Custom Event ID: -1, Message: VDSM
>> slam-vmnode-03.fnal.gov
>> command failed: Could not acquire resource. Probably resource
>> factory threw an exception.: ()
>> 2017-02-16 11:34:50,365 ERROR
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand]
>> (DefaultQuartzScheduler4) [2004a741] Failed in
>> 'HSMGetAllTasksStatusesVDS' method
>> 2017-02-16 11:34:50,368 ERROR
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (DefaultQuartzScheduler4) [2004a741] Correlation ID: null,
>> Call Stack: null, Custom Event ID: -1, Message: VDSM
>> slam-vmnode-03.fnal.gov
>> command failed: Could not acquire resource. Probably resource
>> factory threw an exception.: ()
>> 2017-02-16 11:34:50,368 INFO
>> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
>> (DefaultQuartzScheduler4) [2004a741] SPMAsyncTask::PollTask:
>> Polling task '3132106a-ce35-4b12-9a72-812e415eff7f' (Parent
>> Command 'ExportVm', Parameters Type
>> 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters')
>> returned status 'finished', result 'cleanSuccess'.
>> 2017-02-16 11:34:50,371 ERROR
>> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
>> (DefaultQuartzScheduler4) [2004a741]
>> BaseAsyncTask::logEndTaskFailure: Task
>> '3132106a-ce35-4b12-9a72-812e415eff7f' (Parent Command
>> 'ExportVm', Parameters Type
>> 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters')
>> ended with failure:
>> -- Result: 'cleanSuccess'
>> -- Message: 'VDSGenericException: VDSErrorException: Failed
>> to HSMGetAllTasksStatusesVDS, error = Could not acquire
>> resource. Probably resource factory threw an exception.: (),
>> code = 100',
>> -- Exception: 'VDSGenericException: VDSErrorException: Failed
>> to HSMGetAllTasksStatusesVDS, error = Could not acquire
>> resource. Probably resource factory threw an exception.: (),
>> code = 100'
>> 2017-02-16 11:34:50,374 INFO
>> [org.ovirt.engine.core.bll.CommandMultiAsyncTasks]
>> (DefaultQuartzScheduler4) [2004a741] Task with DB Task ID
>> 'ae5bd098-51e8-4415-b0f7-0f3ef010ec7b' and VDSM Task ID
>> '112fb772-a497-4788-829f-190d6d008d95' is in state Polling.
>> End action for command 0b807437-17fe-4773-a539-09ddee3df215
>> will proceed when all the entity's tasks are completed.
>> 2017-02-16 11:34:50,374 INFO
>> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
>> (DefaultQuartzScheduler4) [2004a741] SPMAsyncTask::PollTask:
>> Polling task '112fb772-a497-4788-829f-190d6d008d95' (Parent
>> Command 'ExportVm', Parameters Type
>> 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters')
>> returned status 'finished', result 'cleanSuccess'.
>> 2017-02-16 11:34:50,377 ERROR
>> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
>> (DefaultQuartzScheduler4) [2004a741]
>> BaseAsyncTask::logEndTaskFailure: Task
>> '112fb772-a497-4788-829f-190d6d008d95' (Parent Command
>> 'ExportVm', Parameters Type
>> 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters')
>> ended with failure:
>> -- Result: 'cleanSuccess'
>> -- Message: 'VDSGenericException: VDSErrorException: Failed
>> to HSMGetAllTasksStatusesVDS, error = Could not acquire
>> resource. Probably resource factory threw an exception.: (),
>> code = 100',
>> -- Exception: 'VDSGenericException: VDSErrorException: Failed
>> to HSMGetAllTasksStatusesVDS, error = Could not acquire
>> resource. Probably resource factory threw an exception.: (),
>> code = 100'
>> 2017-02-16 11:34:50,379 INFO
>> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
>> (DefaultQuartzScheduler4) [2004a741]
>> CommandAsyncTask::endActionIfNecessary: All tasks of command
>> '0b807437-17fe-4773-a539-09ddee3df215' has ended -> executing
>> 'endAction'
>> 2017-02-16 11:34:50,379 INFO
>> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
>> (DefaultQuartzScheduler4) [2004a741]
>> CommandAsyncTask::endAction: Ending action for '2' tasks
>> (command ID: '0b807437-17fe-4773-a539-09ddee3df215'): calling
>> endAction '.
>> 2017-02-16 11:34:50,380 INFO
>> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
>> (org.ovirt.thread.pool-8-thread-41) [2004a741]
>> CommandAsyncTask::endCommandAction [within thread] context:
>> Attempting to endAction 'ExportVm', executionIndex: '0'
>> 2017-02-16 11:34:50,495 ERROR
>> [org.ovirt.engine.core.bll.exportimport.ExportVmCommand]
>> (org.ovirt.thread.pool-8-thread-41) [2004a741] Ending command
>> 'org.ovirt.engine.core.bll.exportimport.ExportVmCommand' with
>> failure.
>> 2017-02-16 11:34:50,507 ERROR
>> [org.ovirt.engine.core.bll.storage.disk.image.CopyImageGroupCommand]
>> (org.ovirt.thread.pool-8-thread-41) [a12f549] Ending command
>> 'org.ovirt.engine.core.bll.storage.disk.image.CopyImageGroupCommand'
>> with failure.
>> 2017-02-16 11:34:50,529 ERROR
>> [org.ovirt.engine.core.bll.storage.disk.image.CopyImageGroupCommand]
>> (org.ovirt.thread.pool-8-thread-41) [8887fd8] Ending command
>> 'org.ovirt.engine.core.bll.storage.disk.image.CopyImageGroupCommand'
>> with failure.
>> 2017-02-16 11:34:50,534 INFO
>> [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand]
>> (org.ovirt.thread.pool-8-thread-41) [] START,
>> SetVmStatusVDSCommand(
>> SetVmStatusVDSCommandParameters:{runAsync='true',
>> vmId='ba806b93-b6fe-4873-99ec-55bb34c12e5f', status='Down',
>> exitStatus='Normal'}), log id: 28c5e88c
>> 2017-02-16 11:34:50,536 INFO
>> [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand]
>> (org.ovirt.thread.pool-8-thread-41) [] FINISH,
>> SetVmStatusVDSCommand, log id: 28c5e88c
>> 2017-02-16 11:34:50,549 ERROR
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (org.ovirt.thread.pool-8-thread-41) [] Correlation ID:
>> 50b27eef, Job ID: 276e8e51-cbed-43bb-bcfa-9842467e978b, Call
>> Stack: null, Custom Event ID: -1, Message: Failed to export
>> Vm ecf-sat6.fnal.gov to RITM0524722
>> 2017-02-16 11:34:50,549 INFO
>> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
>> (org.ovirt.thread.pool-8-thread-41) []
>> CommandAsyncTask::HandleEndActionResult [within thread]:
>> endAction for action type 'ExportVm' completed, handling the
>> result.
>> 2017-02-16 11:34:50,549 INFO
>> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
>> (org.ovirt.thread.pool-8-thread-41) []
>> CommandAsyncTask::HandleEndActionResult [within thread]:
>> endAction for action type 'ExportVm' succeeded, clearing tasks.
>> 2017-02-16 11:34:50,549 INFO
>> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
>> (org.ovirt.thread.pool-8-thread-41) []
>> SPMAsyncTask::ClearAsyncTask: Attempting to clear task
>> '3132106a-ce35-4b12-9a72-812e415eff7f'
>> 2017-02-16 11:34:50,551 INFO
>> [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
>> (org.ovirt.thread.pool-8-thread-41) [] START,
>> SPMClearTaskVDSCommand(
>> SPMTaskGuidBaseVDSCommandParameters:{runAsync='true',
>> storagePoolId='00000001-0001-0001-0001-0000000001a5',
>> ignoreFailoverLimit='false',
>> taskId='3132106a-ce35-4b12-9a72-812e415eff7f'}), log id: 675799a2
>> 2017-02-16 11:34:50,552 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
>> (org.ovirt.thread.pool-8-thread-41) [] START,
>> HSMClearTaskVDSCommand(HostName = slam-vmnode-03.fnal.gov,
>> HSMTaskGuidBaseVDSCommandParameters:{runAsync='true',
>> hostId='eacb0ca6-794a-4c94-8dc8-00a8a6d88042',
>> taskId='3132106a-ce35-4b12-9a72-812e415eff7f'}), log id: 2d8fe4d0
>> 2017-02-16 11:34:50,554 INFO
>> [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher]
>> (DefaultQuartzScheduler6) [77b1baeb] Fetched 0 VMs from VDS
>> 'eacb0ca6-794a-4c94-8dc8-00a8a6d88042'
>> 2017-02-16 11:34:51,560 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
>> (org.ovirt.thread.pool-8-thread-41) [] FINISH,
>> HSMClearTaskVDSCommand, log id: 2d8fe4d0
>> 2017-02-16 11:34:51,560 INFO
>> [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
>> (org.ovirt.thread.pool-8-thread-41) [] FINISH,
>> SPMClearTaskVDSCommand, log id: 675799a2
>> 2017-02-16 11:34:51,564 INFO
>> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
>> (org.ovirt.thread.pool-8-thread-41) []
>> BaseAsyncTask::removeTaskFromDB: Removed task
>> '3132106a-ce35-4b12-9a72-812e415eff7f' from DataBase
>> 2017-02-16 11:34:51,564 INFO
>> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
>> (org.ovirt.thread.pool-8-thread-41) []
>> SPMAsyncTask::ClearAsyncTask: Attempting to clear task
>> '112fb772-a497-4788-829f-190d6d008d95'
>> 2017-02-16 11:34:51,566 INFO
>> [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
>> (org.ovirt.thread.pool-8-thread-41) [] START,
>> SPMClearTaskVDSCommand(
>> SPMTaskGuidBaseVDSCommandParameters:{runAsync='true',
>> storagePoolId='00000001-0001-0001-0001-0000000001a5',
>> ignoreFailoverLimit='false',
>> taskId='112fb772-a497-4788-829f-190d6d008d95'}), log id: 6b4cf8ff
>> 2017-02-16 11:34:51,567 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
>> (org.ovirt.thread.pool-8-thread-41) [] START,
>> HSMClearTaskVDSCommand(HostName = slam-vmnode-03.fnal.gov,
>> HSMTaskGuidBaseVDSCommandParameters:{runAsync='true',
>> hostId='eacb0ca6-794a-4c94-8dc8-00a8a6d88042',
>> taskId='112fb772-a497-4788-829f-190d6d008d95'}), log id: 6f2df357
>> 2017-02-16 11:34:51,608 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
>> (org.ovirt.thread.pool-8-thread-41) [] FINISH,
>> HSMClearTaskVDSCommand, log id: 6f2df357
>> 2017-02-16 11:34:51,608 INFO
>> [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
>> (org.ovirt.thread.pool-8-thread-41) [] FINISH,
>> SPMClearTaskVDSCommand, log id: 6b4cf8ff
>> 2017-02-16 11:34:51,611 INFO
>> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
>> (org.ovirt.thread.pool-8-thread-41) []
>> BaseAsyncTask::removeTaskFromDB: Removed task
>> '112fb772-a497-4788-829f-190d6d008d95' from DataBase
>> 2017-02-16 11:34:51,611 INFO
>> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
>> (org.ovirt.thread.pool-8-thread-41) []
>> CommandAsyncTask::HandleEndActionResult [within thread]:
>> Removing CommandMultiAsyncTasks object for entity
>> '0b807437-17fe-4773-a539-09ddee3df215'
>>
>>
>> --
>> Pat Riehecky
>>
>> Fermi National Accelerator Laboratory
>> www.fnal.gov
>> www.scientificlinux.org
>>
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>>
>>
>> --
>> Adam Litke
>
>
>
>
> --
> Adam Litke