[Users] oVirt 3.3 -- Failed to run VM: internal error unexpected address type for ide disk

SULLIVAN, Chris (WGK) Chris.Sullivan at woodgroupkenny.com
Fri Sep 13 06:00:30 UTC 2013


Hi,

I am getting the exact same issue with a non-AIO oVirt 3.3.0-2.fc19 setup. The only workaround I've found so far is to delete the offending VM, recreate it, and reattach the disks. The recreated VM will work normally until it is shut down, after which it will fail to start with the same error.
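
From the domain XML in the vdsm.log excerpt below, the trigger appears to be the IDE CD-ROM device: it is targeted at the IDE bus but carries a PCI address, and libvirt only accepts drive-type addresses on IDE disks. Side by side (the first element is abridged from the XML below; the drive address values in the second are illustrative, not taken from my logs):

As generated (rejected by libvirt):

<disk device="cdrom" snapshot="no" type="file">
        <address bus="0x00" domain="0x0000" function="0x0" slot="0x06" type="pci"/>
        <source file="" startupPolicy="optional"/>
        <target bus="ide" dev="hdc"/>
        <readonly/>
</disk>

What libvirt would accept for hdc (drive address values assumed):

<disk device="cdrom" snapshot="no" type="file">
        <address bus="1" controller="0" target="0" type="drive" unit="0"/>
        <source file="" startupPolicy="optional"/>
        <target bus="ide" dev="hdc"/>
        <readonly/>
</disk>

Dropping the <address> element entirely should also work, since libvirt will then assign a valid one itself.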

Engine and VDSM log excerpts below. Versions:
- Fedora 19 (3.10.10-200)
- oVirt 3.3.0-2
- VDSM 4.12.1
- libvirt 1.1.2-1
- gluster 3.4.0.8

I'll upgrade to the latest oVirt 3.3 RC to see if the issue persists.
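
For what it's worth, this would also explain why a freshly recreated VM boots exactly once: a new device has no address stored with it yet, so the cdrom presumably goes to libvirt without any <address> element and libvirt assigns a valid drive address on first boot. A sketch of that initial element (assumed, not taken from my logs):

<disk device="cdrom" snapshot="no" type="file">
        <source file="" startupPolicy="optional"/>
        <target bus="ide" dev="hdc"/>
        <readonly/>
</disk>

Once the VM has run, the engine appears to persist an address for the device and hand it back on the next start -- in the logs below, that stored address comes back as PCI, which libvirt rejects for an IDE disk.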

Kind regards,

Chris

ovirt-engine.log

2013-09-12 15:01:21,746 INFO  [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-6-thread-41) [4b57b27f] START, CreateVmVDSCommand(HostName = r410-05, HostId = 88811ea8-b030-47fd-ae3d-23cb2c24f6f6, vmId=980cb3c8-8af8-4795-9c21-85582d37e042, vm=VM [rhev-compute-01]), log id: 1ea52d74
2013-09-12 15:01:21,749 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-6-thread-41) [4b57b27f] START, CreateVDSCommand(HostName = r410-05, HostId = 88811ea8-b030-47fd-ae3d-23cb2c24f6f6, vmId=980cb3c8-8af8-4795-9c21-85582d37e042, vm=VM [rhev-compute-01]), log id: 735950cf
2013-09-12 15:01:21,801 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-6-thread-41) [4b57b27f] org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand spiceSslCipherSuite=DEFAULT,memSize=8144,kvmEnable=true,smp=4,vmType=kvm,emulatedMachine=pc-1.0,keyboardLayout=en-us,memGuaranteedSize=8144,pitReinjection=false,nice=0,display=vnc,smartcardEnable=false,tabletEnable=true,smpCoresPerSocket=4,spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard,displayNetwork=ovirtmgmt,timeOffset=-61,transparentHugePages=true,vmId=980cb3c8-8af8-4795-9c21-85582d37e042,devices=[Ljava.util.HashMap;@12177fe2,acpiEnable=true,vmName=rhev-compute-01,cpuType=hostPassthrough,custom={}
2013-09-12 15:01:21,802 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-6-thread-41) [4b57b27f] FINISH, CreateVDSCommand, log id: 735950cf
2013-09-12 15:01:21,812 INFO  [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-6-thread-41) [4b57b27f] FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id: 1ea52d74
2013-09-12 15:01:21,812 INFO  [org.ovirt.engine.core.bll.RunVmCommand] (pool-6-thread-41) [4b57b27f] Lock freed to object EngineLock [exclusiveLocks= key: 980cb3c8-8af8-4795-9c21-85582d37e042 value: VM
, sharedLocks= ]
2013-09-12 15:01:21,820 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (pool-6-thread-41) [4b57b27f] Correlation ID: 4b57b27f, Job ID: 6be840c8-68cb-4c07-a365-c979c3c7e8ae, Call Stack: null, Custom Event ID: -1, Message: VM rhev-compute-01 was started by admin at internal (Host: r410-05).
2013-09-12 15:01:22,157 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (DefaultQuartzScheduler_Worker-53) START, DestroyVDSCommand(HostName = r410-05, HostId = 88811ea8-b030-47fd-ae3d-23cb2c24f6f6, vmId=980cb3c8-8af8-4795-9c21-85582d37e042, force=false, secondsToWait=0, gracefully=false), log id: 45ed2104
2013-09-12 15:01:22,301 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (DefaultQuartzScheduler_Worker-53) FINISH, DestroyVDSCommand, log id: 45ed2104
2013-09-12 15:01:22,317 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-53) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM rhev-compute-01 is down. Exit message: internal error: unexpected address type for ide disk.
2013-09-12 15:01:22,317 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-53) Running on vds during rerun failed vm: null
2013-09-12 15:01:22,318 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-53) vm rhev-compute-01 running in db and not running in vds - add to rerun treatment. vds r410-05
2013-09-12 15:01:22,318 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand] (DefaultQuartzScheduler_Worker-53) START, FullListVdsCommand(HostName = r410-05, HostId = 88811ea8-b030-47fd-ae3d-23cb2c24f6f6, vds=Host[r410-05], vmIds=[980cb3c8-8af8-4795-9c21-85582d37e042]), log id: 20beb10f
2013-09-12 15:01:22,321 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand] (DefaultQuartzScheduler_Worker-53) FINISH, FullListVdsCommand, return: [Ljava.util.HashMap;@475a6094, log id: 20beb10f
2013-09-12 15:01:22,334 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-53) Rerun vm 980cb3c8-8af8-4795-9c21-85582d37e042. Called from vds r410-05
2013-09-12 15:01:22,346 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (pool-6-thread-41) Correlation ID: 4b57b27f, Job ID: 6be840c8-68cb-4c07-a365-c979c3c7e8ae, Call Stack: null, Custom Event ID: -1, Message: Failed to run VM rhev-compute-01 on Host r410-05.
2013-09-12 15:01:22,359 INFO  [org.ovirt.engine.core.bll.RunVmCommand] (pool-6-thread-41) Lock Acquired to object EngineLock [exclusiveLocks= key: 980cb3c8-8af8-4795-9c21-85582d37e042 value: VM
, sharedLocks= ]
2013-09-12 15:01:22,378 INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (pool-6-thread-41) START, IsVmDuringInitiatingVDSCommand( vmId = 980cb3c8-8af8-4795-9c21-85582d37e042), log id: 485ed444
2013-09-12 15:01:22,378 INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (pool-6-thread-41) FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 485ed444
2013-09-12 15:01:22,380 WARN  [org.ovirt.engine.core.bll.RunVmCommand] (pool-6-thread-41) CanDoAction of action RunVm failed. Reasons:VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM
2013-09-12 15:01:22,380 INFO  [org.ovirt.engine.core.bll.RunVmCommand] (pool-6-thread-41) Lock freed to object EngineLock [exclusiveLocks= key: 980cb3c8-8af8-4795-9c21-85582d37e042 value: VM
, sharedLocks= ]
2013-09-12 15:01:22,390 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (pool-6-thread-41) Correlation ID: 4b57b27f, Job ID: 6be840c8-68cb-4c07-a365-c979c3c7e8ae, Call Stack: null, Custom Event ID: -1, Message: Failed to run VM rhev-compute-01 (User: admin at internal).

vdsm.log

Thread-143925::DEBUG::2013-09-12 15:01:21,777::BindingXMLRPC::979::vds::(wrapper) client [172.30.18.242]::call vmCreate with ({'acpiEnable': 'true', 'emulatedMachine': 'pc-1.0', 'tabletEnable': 'true', 'vmId': '980cb3c8-8af8-4795-9c21-85582d37e042', 'memGuaranteedSize': 8144, 'spiceSslCipherSuite': 'DEFAULT', 'timeOffset': '-61', 'cpuType': 'hostPassthrough', 'custom': {}, 'smp': '4', 'vmType': 'kvm', 'memSize': 8144, 'smpCoresPerSocket': '4', 'vmName': 'rhev-compute-01', 'nice': '0', 'smartcardEnable': 'false', 'keyboardLayout': 'en-us', 'kvmEnable': 'true', 'pitReinjection': 'false', 'transparentHugePages': 'true', 'displayNetwork': 'ovirtmgmt', 'devices': [{'device': 'cirrus', 'specParams': {'vram': '65536'}, 'type': 'video', 'deviceId': '87df9d21-bf47-45f9-ab45-7f2f950fd788', 'address': {'bus': '0x00', ' slot': '0x02', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}}, {'index': '2', 'iface': 'ide', 'address': {'bus': '0x00', ' slot': '0x06', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}, 'specParams': {'path': ''}, 'readonly': 'true', 'deviceId': 'ef25939b-a5ff-456e-978f-53e7600b83ce', 'path': '', 'device': 'cdrom', 'shared': 'false', 'type': 'disk'}, {'index': 0, 'iface': 'virtio', 'format': 'raw', 'bootOrder': '1', 'volumeID': '1abdc967-c32c-4862-a36b-b93441c4a7d5', 'imageID': '8863c4d0-0ff3-4590-8f37-e6bb6c9d195e', 'specParams': {}, 'readonly': 'false', 'domainID': 'e281bd49-bc11-4acb-8634-624eac6d3358', 'optional': 'false', 'deviceId': '8863c4d0-0ff3-4590-8f37-e6bb6c9d195e', 'poolID': 'accbd988-31c6-4803-9204-a584067fa157', 'device': 'disk', 'shared': 'false', 'propagateErrors': 'off', 'type': 'disk'}, {'device': 'scsi', 'model': 'virtio-scsi', 'type': 'controller'}, {'nicModel': 'pv', 'macAddr': '00:1a:4a:ab:9c:6a', 'linkActive': 'true', 'network': 'ovirtmgmt', 'custom': {}, 'filter': 'vdsm-no-mac-spoofing', 'specParams': {}, 'deviceId': 'e9f8e70f-8cb9-496b-b44e-d75e56515c27', 'address': {'bus': '0x00', ' slot': '0x03', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}, 'device': 'bridge', 'type': 'interface'}, {'device': 'memballoon', 'specParams': {'model': 'virtio'}, 'type': 'balloon', 'deviceId': '1c4aef1b-f0eb-47c9-83a8-f983ad3e47bf'}], 'spiceSecureChannels': 'smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard', 'display': 'vnc'},) {} flowID [4b57b27f]
Thread-143925::INFO::2013-09-12 15:01:21,784::clientIF::366::vds::(createVm) vmContainerLock acquired by vm 980cb3c8-8af8-4795-9c21-85582d37e042
Thread-143925::DEBUG::2013-09-12 15:01:21,790::clientIF::380::vds::(createVm) Total desktops after creation of 980cb3c8-8af8-4795-9c21-85582d37e042 is 1
Thread-143926::DEBUG::2013-09-12 15:01:21,790::vm::2015::vm.Vm::(_startUnderlyingVm) vmId=`980cb3c8-8af8-4795-9c21-85582d37e042`::Start
Thread-143925::DEBUG::2013-09-12 15:01:21,791::BindingXMLRPC::986::vds::(wrapper) return vmCreate with {'status': {'message': 'Done', 'code': 0}, 'vmList': {'status': 'WaitForLaunch', 'acpiEnable': 'true', 'emulatedMachine': 'pc-1.0', 'tabletEnable': 'true', 'pid': '0', 'memGuaranteedSize': 8144, 'timeOffset': '-61', 'keyboardLayout': 'en-us', 'displayPort': '-1', 'displaySecurePort': '-1', 'spiceSslCipherSuite': 'DEFAULT', 'cpuType': 'hostPassthrough', 'smp': '4', 'clientIp': '', 'nicModel': 'rtl8139,pv', 'smartcardEnable': 'false', 'kvmEnable': 'true', 'pitReinjection': 'false', 'vmId': '980cb3c8-8af8-4795-9c21-85582d37e042', 'transparentHugePages': 'true', 'displayNetwork': 'ovirtmgmt', 'devices': [{'device': 'cirrus', 'specParams': {'vram': '65536'}, 'type': 'video', 'deviceId': '87df9d21-bf47-45f9-ab45-7f2f950fd788', 'address': {'bus': '0x00', ' slot': '0x02', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}}, {'index': '2', 'iface': 'ide', 'address': {'bus': '0x00', ' slot': '0x06', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}, 'specParams': {'path': ''}, 'readonly': 'true', 'deviceId': 'ef25939b-a5ff-456e-978f-53e7600b83ce', 'path': '', 'device': 'cdrom', 'shared': 'false', 'type': 'disk'}, {'index': 0, 'iface': 'virtio', 'format': 'raw', 'bootOrder': '1', 'volumeID': '1abdc967-c32c-4862-a36b-b93441c4a7d5', 'imageID': '8863c4d0-0ff3-4590-8f37-e6bb6c9d195e', 'specParams': {}, 'readonly': 'false', 'domainID': 'e281bd49-bc11-4acb-8634-624eac6d3358', 'optional': 'false', 'deviceId': '8863c4d0-0ff3-4590-8f37-e6bb6c9d195e', 'poolID': 'accbd988-31c6-4803-9204-a584067fa157', 'device': 'disk', 'shared': 'false', 'propagateErrors': 'off', 'type': 'disk'}, {'device': 'scsi', 'model': 'virtio-scsi', 'type': 'controller'}, {'nicModel': 'pv', 'macAddr': '00:1a:4a:ab:9c:6a', 'linkActive': 'true', 'network': 'ovirtmgmt', 'custom': {}, 'filter': 'vdsm-no-mac-spoofing', 'specParams': {}, 'deviceId': 'e9f8e70f-8cb9-496b-b44e-d75e56515c27', 'address': {'bus': '0x00', ' slot': '0x03', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}, 'device': 'bridge', 'type': 'interface'}, {'device': 'memballoon', 'specParams': {'model': 'virtio'}, 'type': 'balloon', 'deviceId': '1c4aef1b-f0eb-47c9-83a8-f983ad3e47bf'}], 'custom': {}, 'vmType': 'kvm', 'memSize': 8144, 'displayIp': '172.30.18.247', 'spiceSecureChannels': 'smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard', 'smpCoresPerSocket': '4', 'vmName': 'rhev-compute-01', 'display': 'vnc', 'nice': '0'}}
Thread-143926::DEBUG::2013-09-12 15:01:21,792::vm::2019::vm.Vm::(_startUnderlyingVm) vmId=`980cb3c8-8af8-4795-9c21-85582d37e042`::_ongoingCreations acquired
Thread-143926::INFO::2013-09-12 15:01:21,794::vm::2815::vm.Vm::(_run) vmId=`980cb3c8-8af8-4795-9c21-85582d37e042`::VM wrapper has started
Thread-143926::DEBUG::2013-09-12 15:01:21,798::task::579::TaskManager.Task::(_updateState) Task=`f5a3b7b8-3ac9-4b57-b184-64580530aed2`::moving from state init -> state preparing
Thread-143926::INFO::2013-09-12 15:01:21,800::logUtils::44::dispatcher::(wrapper) Run and protect: getVolumeSize(sdUUID='e281bd49-bc11-4acb-8634-624eac6d3358', spUUID='accbd988-31c6-4803-9204-a584067fa157', imgUUID='8863c4d0-0ff3-4590-8f37-e6bb6c9d195e', volUUID='1abdc967-c32c-4862-a36b-b93441c4a7d5', options=None)
Thread-143926::DEBUG::2013-09-12 15:01:21,815::fileVolume::520::Storage.Volume::(validateVolumePath) validate path for 1abdc967-c32c-4862-a36b-b93441c4a7d5
Thread-143926::INFO::2013-09-12 15:01:21,818::logUtils::47::dispatcher::(wrapper) Run and protect: getVolumeSize, Return response: {'truesize': '10737418240', 'apparentsize': '10737418240'}
Thread-143926::DEBUG::2013-09-12 15:01:21,818::task::1168::TaskManager.Task::(prepare) Task=`f5a3b7b8-3ac9-4b57-b184-64580530aed2`::finished: {'truesize': '10737418240', 'apparentsize': '10737418240'}
Thread-143926::DEBUG::2013-09-12 15:01:21,818::task::579::TaskManager.Task::(_updateState) Task=`f5a3b7b8-3ac9-4b57-b184-64580530aed2`::moving from state preparing -> state finished
Thread-143926::DEBUG::2013-09-12 15:01:21,818::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-143926::DEBUG::2013-09-12 15:01:21,819::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-143926::DEBUG::2013-09-12 15:01:21,819::task::974::TaskManager.Task::(_decref) Task=`f5a3b7b8-3ac9-4b57-b184-64580530aed2`::ref 0 aborting False
Thread-143926::INFO::2013-09-12 15:01:21,819::clientIF::325::vds::(prepareVolumePath) prepared volume path:
Thread-143926::DEBUG::2013-09-12 15:01:21,820::task::579::TaskManager.Task::(_updateState) Task=`80aa83f4-5f90-4a9e-97da-1f7edab49894`::moving from state init -> state preparing
Thread-143926::INFO::2013-09-12 15:01:21,820::logUtils::44::dispatcher::(wrapper) Run and protect: prepareImage(sdUUID='e281bd49-bc11-4acb-8634-624eac6d3358', spUUID='accbd988-31c6-4803-9204-a584067fa157', imgUUID='8863c4d0-0ff3-4590-8f37-e6bb6c9d195e', volUUID='1abdc967-c32c-4862-a36b-b93441c4a7d5')
Thread-143926::DEBUG::2013-09-12 15:01:21,821::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.e281bd49-bc11-4acb-8634-624eac6d3358`ReqID=`4b30196c-7b93-41b9-92c4-b632161a94a0`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3240' at 'prepareImage'
Thread-143926::DEBUG::2013-09-12 15:01:21,821::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.e281bd49-bc11-4acb-8634-624eac6d3358' for lock type 'shared'
Thread-143926::DEBUG::2013-09-12 15:01:21,821::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.e281bd49-bc11-4acb-8634-624eac6d3358' is free. Now locking as 'shared' (1 active user)
Thread-143926::DEBUG::2013-09-12 15:01:21,822::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.e281bd49-bc11-4acb-8634-624eac6d3358`ReqID=`4b30196c-7b93-41b9-92c4-b632161a94a0`::Granted request
Thread-143926::DEBUG::2013-09-12 15:01:21,822::task::811::TaskManager.Task::(resourceAcquired) Task=`80aa83f4-5f90-4a9e-97da-1f7edab49894`::_resourcesAcquired: Storage.e281bd49-bc11-4acb-8634-624eac6d3358 (shared)
Thread-143926::DEBUG::2013-09-12 15:01:21,822::task::974::TaskManager.Task::(_decref) Task=`80aa83f4-5f90-4a9e-97da-1f7edab49894`::ref 1 aborting False
Thread-143926::DEBUG::2013-09-12 15:01:21,824::fileVolume::520::Storage.Volume::(validateVolumePath) validate path for 1abdc967-c32c-4862-a36b-b93441c4a7d5
Thread-143926::INFO::2013-09-12 15:01:21,877::image::215::Storage.Image::(getChain) sdUUID=e281bd49-bc11-4acb-8634-624eac6d3358 imgUUID=8863c4d0-0ff3-4590-8f37-e6bb6c9d195e chain=[<storage.glusterVolume.GlusterVolume object at 0x2500d10>]
Thread-143926::DEBUG::2013-09-12 15:01:21,904::fileVolume::520::Storage.Volume::(validateVolumePath) validate path for 1abdc967-c32c-4862-a36b-b93441c4a7d5
Thread-143926::INFO::2013-09-12 15:01:21,954::logUtils::47::dispatcher::(wrapper) Run and protect: prepareImage, Return response: {'info': {'volPort': '0', 'volType': 'network', 'volfileServer': 'r410-02', 'volTransport': 'tcp', 'protocol': 'gluster', 'path': 'hades/e281bd49-bc11-4acb-8634-624eac6d3358/images/8863c4d0-0ff3-4590-8f37-e6bb6c9d195e/1abdc967-c32c-4862-a36b-b93441c4a7d5'}, 'path': '/rhev/data-center/accbd988-31c6-4803-9204-a584067fa157/e281bd49-bc11-4acb-8634-624eac6d3358/images/8863c4d0-0ff3-4590-8f37-e6bb6c9d195e/1abdc967-c32c-4862-a36b-b93441c4a7d5', 'chain': [{'path': '/rhev/data-center/accbd988-31c6-4803-9204-a584067fa157/e281bd49-bc11-4acb-8634-624eac6d3358/images/8863c4d0-0ff3-4590-8f37-e6bb6c9d195e/1abdc967-c32c-4862-a36b-b93441c4a7d5', 'domainID': 'e281bd49-bc11-4acb-8634-624eac6d3358', 'vmVolInfo': {'volPort': '0', 'volType': 'network', 'volfileServer': 'r410-02', 'volTransport': 'tcp', 'protocol': 'gluster', 'path': 'hades/e281bd49-bc11-4acb-8634-624eac6d3358/images/8863c4d0-0ff3-4590-8f37-e6bb6c9d195e/1abdc967-c32c-4862-a36b-b93441c4a7d5'}, 'volumeID': '1abdc967-c32c-4862-a36b-b93441c4a7d5', 'imageID': '8863c4d0-0ff3-4590-8f37-e6bb6c9d195e'}]}
Thread-143926::DEBUG::2013-09-12 15:01:21,954::task::1168::TaskManager.Task::(prepare) Task=`80aa83f4-5f90-4a9e-97da-1f7edab49894`::finished: {'info': {'volPort': '0', 'volType': 'network', 'volfileServer': 'r410-02', 'volTransport': 'tcp', 'protocol': 'gluster', 'path': 'hades/e281bd49-bc11-4acb-8634-624eac6d3358/images/8863c4d0-0ff3-4590-8f37-e6bb6c9d195e/1abdc967-c32c-4862-a36b-b93441c4a7d5'}, 'path': '/rhev/data-center/accbd988-31c6-4803-9204-a584067fa157/e281bd49-bc11-4acb-8634-624eac6d3358/images/8863c4d0-0ff3-4590-8f37-e6bb6c9d195e/1abdc967-c32c-4862-a36b-b93441c4a7d5', 'chain': [{'path': '/rhev/data-center/accbd988-31c6-4803-9204-a584067fa157/e281bd49-bc11-4acb-8634-624eac6d3358/images/8863c4d0-0ff3-4590-8f37-e6bb6c9d195e/1abdc967-c32c-4862-a36b-b93441c4a7d5', 'domainID': 'e281bd49-bc11-4acb-8634-624eac6d3358', 'vmVolInfo': {'volPort': '0', 'volType': 'network', 'volfileServer': 'r410-02', 'volTransport': 'tcp', 'protocol': 'gluster', 'path': 'hades/e281bd49-bc11-4acb-8634-624eac6d3358/images/8863c4d0-0ff3-4590-8f37-e6bb6c9d195e/1abdc967-c32c-4862-a36b-b93441c4a7d5'}, 'volumeID': '1abdc967-c32c-4862-a36b-b93441c4a7d5', 'imageID': '8863c4d0-0ff3-4590-8f37-e6bb6c9d195e'}]}
Thread-143926::DEBUG::2013-09-12 15:01:21,954::task::579::TaskManager.Task::(_updateState) Task=`80aa83f4-5f90-4a9e-97da-1f7edab49894`::moving from state preparing -> state finished
Thread-143926::DEBUG::2013-09-12 15:01:21,955::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.e281bd49-bc11-4acb-8634-624eac6d3358': < ResourceRef 'Storage.e281bd49-bc11-4acb-8634-624eac6d3358', isValid: 'True' obj: 'None'>}
Thread-143926::DEBUG::2013-09-12 15:01:21,955::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-143926::DEBUG::2013-09-12 15:01:21,955::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.e281bd49-bc11-4acb-8634-624eac6d3358'
Thread-143926::DEBUG::2013-09-12 15:01:21,956::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.e281bd49-bc11-4acb-8634-624eac6d3358' (0 active users)
Thread-143926::DEBUG::2013-09-12 15:01:21,956::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.e281bd49-bc11-4acb-8634-624eac6d3358' is free, finding out if anyone is waiting for it.
Thread-143926::DEBUG::2013-09-12 15:01:21,956::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.e281bd49-bc11-4acb-8634-624eac6d3358', Clearing records.
Thread-143926::DEBUG::2013-09-12 15:01:21,957::task::974::TaskManager.Task::(_decref) Task=`80aa83f4-5f90-4a9e-97da-1f7edab49894`::ref 0 aborting False
Thread-143926::INFO::2013-09-12 15:01:21,957::clientIF::325::vds::(prepareVolumePath) prepared volume path: /rhev/data-center/accbd988-31c6-4803-9204-a584067fa157/e281bd49-bc11-4acb-8634-624eac6d3358/images/8863c4d0-0ff3-4590-8f37-e6bb6c9d195e/1abdc967-c32c-4862-a36b-b93441c4a7d5
Thread-143926::DEBUG::2013-09-12 15:01:21,974::vm::2872::vm.Vm::(_run) vmId=`980cb3c8-8af8-4795-9c21-85582d37e042`::<?xml version="1.0" encoding="utf-8"?>
<domain type="kvm">
        <name>rhev-compute-01</name>
        <uuid>980cb3c8-8af8-4795-9c21-85582d37e042</uuid>
        <memory>8339456</memory>
        <currentMemory>8339456</currentMemory>
        <vcpu>4</vcpu>
        <memtune>
                <min_guarantee>8339456</min_guarantee>
        </memtune>
        <devices>
                <channel type="unix">
                        <target name="com.redhat.rhevm.vdsm" type="virtio"/>
                        <source mode="bind" path="/var/lib/libvirt/qemu/channels/980cb3c8-8af8-4795-9c21-85582d37e042.com.redhat.rhevm.vdsm"/>
                </channel>
                <channel type="unix">
                        <target name="org.qemu.guest_agent.0" type="virtio"/>
                        <source mode="bind" path="/var/lib/libvirt/qemu/channels/980cb3c8-8af8-4795-9c21-85582d37e042.org.qemu.guest_agent.0"/>
                </channel>
                <input bus="usb" type="tablet"/>
                <graphics autoport="yes" keymap="en-us" passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" type="vnc">
                        <listen network="vdsm-ovirtmgmt" type="network"/>
                </graphics>
                <controller model="virtio-scsi" type="scsi"/>
                <video>
                        <address bus="0x00" domain="0x0000" function="0x0" slot="0x02" type="pci"/>
                        <model heads="1" type="cirrus" vram="65536"/>
                </video>
                <interface type="bridge">
                        <address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci"/>
                        <mac address="00:1a:4a:ab:9c:6a"/>
                        <model type="virtio"/>
                        <source bridge="ovirtmgmt"/>
                        <filterref filter="vdsm-no-mac-spoofing"/>
                        <link state="up"/>
                </interface>
                <disk device="cdrom" snapshot="no" type="file">
                        <address bus="0x00" domain="0x0000" function="0x0" slot="0x06" type="pci"/>
                        <source file="" startupPolicy="optional"/>
                        <target bus="ide" dev="hdc"/>
                        <readonly/>
                        <serial/>
                </disk>
                <disk device="disk" snapshot="no" type="network">
                        <source name="hades/e281bd49-bc11-4acb-8634-624eac6d3358/images/8863c4d0-0ff3-4590-8f37-e6bb6c9d195e/1abdc967-c32c-4862-a36b-b93441c4a7d5" protocol="gluster">
                                <host name="r410-02" port="0" transport="tcp"/>
                        </source>
                        <target bus="virtio" dev="vda"/>
                        <serial>8863c4d0-0ff3-4590-8f37-e6bb6c9d195e</serial>
                        <boot order="1"/>
                        <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/>
                </disk>
                <memballoon model="virtio"/>
        </devices>
        <os>
                <type arch="x86_64" machine="pc-1.0">hvm</type>
                <smbios mode="sysinfo"/>
        </os>
        <sysinfo type="smbios">
                <system>
                        <entry name="manufacturer">oVirt</entry>
                        <entry name="product">oVirt Node</entry>
                        <entry name="version">19-3</entry>
                        <entry name="serial">4C4C4544-0031-4810-8042-B4C04F353253</entry>
                        <entry name="uuid">980cb3c8-8af8-4795-9c21-85582d37e042</entry>
                </system>
        </sysinfo>
        <clock adjustment="-61" offset="variable">
                <timer name="rtc" tickpolicy="catchup"/>
        </clock>
        <features>
                <acpi/>
        </features>
        <cpu match="exact" mode="host-passthrough">
                <topology cores="4" sockets="1" threads="1"/>
        </cpu>
</domain>

Thread-143926::DEBUG::2013-09-12 15:01:21,987::libvirtconnection::101::libvirtconnection::(wrapper) Unknown libvirterror: ecode: 1 edom: 10 level: 2 message: internal error: unexpected address type for ide disk
Thread-143926::DEBUG::2013-09-12 15:01:21,987::vm::2036::vm.Vm::(_startUnderlyingVm) vmId=`980cb3c8-8af8-4795-9c21-85582d37e042`::_ongoingCreations released
Thread-143926::ERROR::2013-09-12 15:01:21,987::vm::2062::vm.Vm::(_startUnderlyingVm) vmId=`980cb3c8-8af8-4795-9c21-85582d37e042`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 2022, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/vm.py", line 2906, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py", line 76, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2909, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error: unexpected address type for ide disk
Thread-143926::DEBUG::2013-09-12 15:01:21,989::vm::2448::vm.Vm::(setDownStatus) vmId=`980cb3c8-8af8-4795-9c21-85582d37e042`::Changed state to Down: internal error: unexpected address type for ide disk
Thread-143929::DEBUG::2013-09-12 15:01:22,162::BindingXMLRPC::979::vds::(wrapper) client [172.30.18.242]::call vmGetStats with ('980cb3c8-8af8-4795-9c21-85582d37e042',) {}
Thread-143929::DEBUG::2013-09-12 15:01:22,162::BindingXMLRPC::986::vds::(wrapper) return vmGetStats with {'status': {'message': 'Done', 'code': 0}, 'statsList': [{'status': 'Down', 'hash': '0', 'exitMessage': 'internal error: unexpected address type for ide disk', 'vmId': '980cb3c8-8af8-4795-9c21-85582d37e042', 'timeOffset': '-61', 'exitCode': 1}]}
Thread-143930::DEBUG::2013-09-12 15:01:22,166::BindingXMLRPC::979::vds::(wrapper) client [172.30.18.242]::call vmDestroy with ('980cb3c8-8af8-4795-9c21-85582d37e042',) {}
Thread-143930::INFO::2013-09-12 15:01:22,167::API::317::vds::(destroy) vmContainerLock acquired by vm 980cb3c8-8af8-4795-9c21-85582d37e042
Thread-143930::DEBUG::2013-09-12 15:01:22,167::vm::4258::vm.Vm::(destroy) vmId=`980cb3c8-8af8-4795-9c21-85582d37e042`::destroy Called
Thread-143930::INFO::2013-09-12 15:01:22,167::vm::4204::vm.Vm::(releaseVm) vmId=`980cb3c8-8af8-4795-9c21-85582d37e042`::Release VM resources
Thread-143930::WARNING::2013-09-12 15:01:22,168::vm::1717::vm.Vm::(_set_lastStatus) vmId=`980cb3c8-8af8-4795-9c21-85582d37e042`::trying to set state to Powering down when already Down
Thread-143930::WARNING::2013-09-12 15:01:22,168::clientIF::337::vds::(teardownVolumePath) Drive is not a vdsm image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2 VOLWM_FREE_PCT:50 _blockDev:False _checkIoTuneCategories:<bound method Drive._checkIoTuneCategories of <vm.Drive object at 0x7f0fb8a7d610>> _customize:<bound method Drive._customize of <vm.Drive object at 0x7f0fb8a7d610>> _deviceXML:<disk device="cdrom" snapshot="no" type="file"><address  domain="0x0000"  function="0x0"  slot="0x06"  type="pci" bus="0x00"/><source file="" startupPolicy="optional"/><target bus="ide" dev="hdc"/><readonly/><serial></serial></disk> _makeName:<bound method Drive._makeName of <vm.Drive object at 0x7f0fb8a7d610>> _validateIoTuneParams:<bound method Drive._validateIoTuneParams of <vm.Drive object at 0x7f0fb8a7d610>> address:{'bus': '0x00', ' slot': '0x06', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'} apparentsize:0 blockDev:False cache:none conf:{'status': 'Down', 'acpiEnable': 'true', 'emulatedMachine': 'pc-1.0', 'tabletEnable': 'true', 'pid': '0', 'memGuaranteedSize': 8144, 'timeOffset': '-61', 'keyboardLayout': 'en-us', 'displayPort': '-1', 'displaySecurePort': '-1', 'spiceSslCipherSuite': 'DEFAULT', 'cpuType': 'hostPassthrough', 'smp': '4', 'clientIp': '', 'exitCode': 1, 'nicModel': 'rtl8139,pv', 'smartcardEnable': 'false', 'kvmEnable': 'true', 'pitReinjection': 'false', 'vmId': '980cb3c8-8af8-4795-9c21-85582d37e042', 'transparentHugePages': 'true', 'displayNetwork': 'ovirtmgmt', 'devices': [{'device': 'scsi', 'model': 'virtio-scsi', 'type': 'controller'}, {'device': 'cirrus', 'specParams': {'vram': '65536'}, 'type': 'video', 'deviceId': '87df9d21-bf47-45f9-ab45-7f2f950fd788', 'address': {'bus': '0x00', ' slot': '0x02', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}}, {'nicModel': 'pv', 'macAddr': '00:1a:4a:ab:9c:6a', 'linkActive': 'true', 'network': 'ovirtmgmt', 'custom': {}, 'filter': 'vdsm-no-mac-spoofing', 'specParams': {}, 'deviceId': 'e9f8e70f-8cb9-496b-b44e-d75e56515c27', 'address': {'bus': '0x00', ' slot': '0x03', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}, 'device': 'bridge', 'type': 'interface'}, {'index': '2', 'iface': 'ide', 'address': {'bus': '0x00', ' slot': '0x06', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}, 'specParams': {'path': ''}, 'readonly': 'true', 'deviceId': 'ef25939b-a5ff-456e-978f-53e7600b83ce', 'path': '', 'device': 'cdrom', 'shared': 'false', 'type': 'disk'}, {'volumeInfo': {'volPort': '0', 'volType': 'network', 'volfileServer': 'r410-02', 'volTransport': 'tcp', 'protocol': 'gluster', 'path': 'hades/e281bd49-bc11-4acb-8634-624eac6d3358/images/8863c4d0-0ff3-4590-8f37-e6bb6c9d195e/1abdc967-c32c-4862-a36b-b93441c4a7d5'}, 'index': 0, 'iface': 'virtio', 'apparentsize': '10737418240', 'imageID': '8863c4d0-0ff3-4590-8f37-e6bb6c9d195e', 'readonly': 'false', 'shared': 'false', 'truesize': '10737418240', 'type': 'disk', 'domainID': 'e281bd49-bc11-4acb-8634-624eac6d3358', 'reqsize': '0', 'format': 'raw', 'deviceId': '8863c4d0-0ff3-4590-8f37-e6bb6c9d195e', 'poolID': 'accbd988-31c6-4803-9204-a584067fa157', 'device': 'disk', 'path': '/rhev/data-center/accbd988-31c6-4803-9204-a584067fa157/e281bd49-bc11-4acb-8634-624eac6d3358/images/8863c4d0-0ff3-4590-8f37-e6bb6c9d195e/1abdc967-c32c-4862-a36b-b93441c4a7d5', 'propagateErrors': 'off', 'optional': 'false', 'bootOrder': '1', 'volumeID': '1abdc967-c32c-4862-a36b-b93441c4a7d5', 'specParams': {}, 'volumeChain': [{'path': 
'/rhev/data-center/accbd988-31c6-4803-9204-a584067fa157/e281bd49-bc11-4acb-8634-624eac6d3358/images/8863c4d0-0ff3-4590-8f37-e6bb6c9d195e/1abdc967-c32c-4862-a36b-b93441c4a7d5', 'domainID': 'e281bd49-bc11-4acb-8634-624eac6d3358', 'vmVolInfo': {'volPort': '0', 'volType': 'network', 'volfileServer': 'r410-02', 'volTransport': 'tcp', 'protocol': 'gluster', 'path': 'hades/e281bd49-bc11-4acb-8634-624eac6d3358/images/8863c4d0-0ff3-4590-8f37-e6bb6c9d195e/1abdc967-c32c-4862-a36b-b93441c4a7d5'}, 'volumeID': '1abdc967-c32c-4862-a36b-b93441c4a7d5', 'imageID': '8863c4d0-0ff3-4590-8f37-e6bb6c9d195e'}]}, {'device': 'memballoon', 'specParams': {'model': 'virtio'}, 'type': 'balloon', 'deviceId': '1c4aef1b-f0eb-47c9-83a8-f983ad3e47bf', 'target': 8339456}], 'custom': {}, 'vmType': 'kvm', 'exitMessage': 'internal error: unexpected address type for ide disk', 'memSize': 8144, 'displayIp': '172.30.18.247', 'spiceSecureChannels': 'smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard', 'smpCoresPerSocket': '4', 'vmName': 'rhev-compute-01', 'display': 'vnc', 'nice': '0'} createXmlElem:<bound method Drive.createXmlElem of <vm.Drive object at 0x7f0fb8a7d610>> device:cdrom deviceId:ef25939b-a5ff-456e-978f-53e7600b83ce getNextVolumeSize:<bound method Drive.getNextVolumeSize of <vm.Drive object at 0x7f0fb8a7d610>> getXML:<bound method Drive.getXML of <vm.Drive object at 0x7f0fb8a7d610>> iface:ide index:2 isDiskReplicationInProgress:<bound method Drive.isDiskReplicationInProgress of <vm.Drive object at 0x7f0fb8a7d610>> isVdsmImage:<bound method Drive.isVdsmImage of <vm.Drive object at 0x7f0fb8a7d610>> log:<logUtils.SimpleLogAdapter object at 0x7f0fb8a9ea90> name:hdc networkDev:False path: readonly:true reqsize:0 serial: shared:false specParams:{'path': ''} truesize:0 type:disk volExtensionChunk:1024 watermarkLimit:536870912
Traceback (most recent call last):
  File "/usr/share/vdsm/clientIF.py", line 331, in teardownVolumePath
    res = self.irs.teardownImage(drive['domainID'],
  File "/usr/share/vdsm/vm.py", line 1344, in __getitem__
    raise KeyError(key)
KeyError: 'domainID'
Thread-143930::DEBUG::2013-09-12 15:01:22,171::task::579::TaskManager.Task::(_updateState) Task=`19501aff-60ce-46f4-b3c6-63cb8b6d8598`::moving from state init -> state preparing
Thread-143930::INFO::2013-09-12 15:01:22,172::logUtils::44::dispatcher::(wrapper) Run and protect: teardownImage(sdUUID='e281bd49-bc11-4acb-8634-624eac6d3358', spUUID='accbd988-31c6-4803-9204-a584067fa157', imgUUID='8863c4d0-0ff3-4590-8f37-e6bb6c9d195e', volUUID=None)
Thread-143930::DEBUG::2013-09-12 15:01:22,172::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.e281bd49-bc11-4acb-8634-624eac6d3358`ReqID=`3d2eb551-2767-44b0-958c-e2bc26b650ca`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3282' at 'teardownImage'
Thread-143930::DEBUG::2013-09-12 15:01:22,173::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.e281bd49-bc11-4acb-8634-624eac6d3358' for lock type 'shared'
Thread-143930::DEBUG::2013-09-12 15:01:22,173::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.e281bd49-bc11-4acb-8634-624eac6d3358' is free. Now locking as 'shared' (1 active user)
Thread-143930::DEBUG::2013-09-12 15:01:22,173::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.e281bd49-bc11-4acb-8634-624eac6d3358`ReqID=`3d2eb551-2767-44b0-958c-e2bc26b650ca`::Granted request
Thread-143930::DEBUG::2013-09-12 15:01:22,174::task::811::TaskManager.Task::(resourceAcquired) Task=`19501aff-60ce-46f4-b3c6-63cb8b6d8598`::_resourcesAcquired: Storage.e281bd49-bc11-4acb-8634-624eac6d3358 (shared)
Thread-143930::DEBUG::2013-09-12 15:01:22,174::task::974::TaskManager.Task::(_decref) Task=`19501aff-60ce-46f4-b3c6-63cb8b6d8598`::ref 1 aborting False
Thread-143930::DEBUG::2013-09-12 15:01:22,188::fileVolume::520::Storage.Volume::(validateVolumePath) validate path for 1abdc967-c32c-4862-a36b-b93441c4a7d5
Thread-143930::DEBUG::2013-09-12 15:01:22,217::fileVolume::520::Storage.Volume::(validateVolumePath) validate path for 1abdc967-c32c-4862-a36b-b93441c4a7d5
Thread-143930::DEBUG::2013-09-12 15:01:22,246::fileVolume::520::Storage.Volume::(validateVolumePath) validate path for 1abdc967-c32c-4862-a36b-b93441c4a7d5
Thread-143930::INFO::2013-09-12 15:01:22,300::image::215::Storage.Image::(getChain) sdUUID=e281bd49-bc11-4acb-8634-624eac6d3358 imgUUID=8863c4d0-0ff3-4590-8f37-e6bb6c9d195e chain=[<storage.glusterVolume.GlusterVolume object at 0x7f0fb8760d90>]

Thread-143930::INFO::2013-09-12 15:01:22,300::logUtils::47::dispatcher::(wrapper) Run and protect: teardownImage, Return response: None
Thread-143930::DEBUG::2013-09-12 15:01:22,300::task::1168::TaskManager.Task::(prepare) Task=`19501aff-60ce-46f4-b3c6-63cb8b6d8598`::finished: None
Thread-143930::DEBUG::2013-09-12 15:01:22,301::task::579::TaskManager.Task::(_updateState) Task=`19501aff-60ce-46f4-b3c6-63cb8b6d8598`::moving from state preparing -> state finished
Thread-143930::DEBUG::2013-09-12 15:01:22,301::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.e281bd49-bc11-4acb-8634-624eac6d3358': < ResourceRef 'Storage.e281bd49-bc11-4acb-8634-624eac6d3358', isValid: 'True' obj: 'None'>}
Thread-143930::DEBUG::2013-09-12 15:01:22,301::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-143930::DEBUG::2013-09-12 15:01:22,302::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.e281bd49-bc11-4acb-8634-624eac6d3358'
Thread-143930::DEBUG::2013-09-12 15:01:22,302::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.e281bd49-bc11-4acb-8634-624eac6d3358' (0 active users)
Thread-143930::DEBUG::2013-09-12 15:01:22,302::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.e281bd49-bc11-4acb-8634-624eac6d3358' is free, finding out if anyone is waiting for it.
Thread-143930::DEBUG::2013-09-12 15:01:22,302::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.e281bd49-bc11-4acb-8634-624eac6d3358', Clearing records.
Thread-143930::DEBUG::2013-09-12 15:01:22,303::task::974::TaskManager.Task::(_decref) Task=`19501aff-60ce-46f4-b3c6-63cb8b6d8598`::ref 0 aborting False
Thread-143930::WARNING::2013-09-12 15:01:22,303::utils::113::root::(rmFile) File: /var/lib/libvirt/qemu/channels/980cb3c8-8af8-4795-9c21-85582d37e042.com.redhat.rhevm.vdsm already removed
Thread-143930::WARNING::2013-09-12 15:01:22,303::utils::113::root::(rmFile) File: /var/lib/libvirt/qemu/channels/980cb3c8-8af8-4795-9c21-85582d37e042.org.qemu.guest_agent.0 already removed
Thread-143930::DEBUG::2013-09-12 15:01:22,304::task::579::TaskManager.Task::(_updateState) Task=`277b0c74-d3f2-4a8a-aa18-3084bbd591cf`::moving from state init -> state preparing
Thread-143930::INFO::2013-09-12 15:01:22,304::logUtils::44::dispatcher::(wrapper) Run and protect: inappropriateDevices(thiefId='980cb3c8-8af8-4795-9c21-85582d37e042')
Thread-143930::INFO::2013-09-12 15:01:22,306::logUtils::47::dispatcher::(wrapper) Run and protect: inappropriateDevices, Return response: None
Thread-143930::DEBUG::2013-09-12 15:01:22,306::task::1168::TaskManager.Task::(prepare) Task=`277b0c74-d3f2-4a8a-aa18-3084bbd591cf`::finished: None
Thread-143930::DEBUG::2013-09-12 15:01:22,307::task::579::TaskManager.Task::(_updateState) Task=`277b0c74-d3f2-4a8a-aa18-3084bbd591cf`::moving from state preparing -> state finished
Thread-143930::DEBUG::2013-09-12 15:01:22,307::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-143930::DEBUG::2013-09-12 15:01:22,307::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-143930::DEBUG::2013-09-12 15:01:22,307::task::974::TaskManager.Task::(_decref) Task=`277b0c74-d3f2-4a8a-aa18-3084bbd591cf`::ref 0 aborting False
Thread-143930::DEBUG::2013-09-12 15:01:22,307::vm::4252::vm.Vm::(deleteVm) vmId=`980cb3c8-8af8-4795-9c21-85582d37e042`::Total desktops after destroy of 980cb3c8-8af8-4795-9c21-85582d37e042 is 0
Thread-143930::DEBUG::2013-09-12 15:01:22,307::BindingXMLRPC::986::vds::(wrapper) return vmDestroy with {'status': {'message': 'Machine destroyed', 'code': 0}}




-----Original Message-----
From: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] On Behalf Of users-request at ovirt.org
Sent: Friday, September 13, 2013 7:03 AM
To: users at ovirt.org
Subject: Users Digest, Vol 24, Issue 54

Date: Thu, 12 Sep 2013 16:45:49 -0400 (EDT)
From: Jason Brooks <jbrooks at redhat.com>
To: users <users at ovirt.org>
Subject: [Users] oVirt 3.3 -- Failed to run VM: internal error unexpected address type for ide disk

I'm experiencing an issue today on my oVirt 3.3 test setup -- it's an AIO
engine+host setup, with a second node on a separate machine. Both machines
are running F19, both have all current F19 updates and all current
ovirt-beta repo updates.

This is on a GlusterFS domain, hosted from a volume on the AIO machine.

Also, I have the neutron external network provider configured, but these
VMs aren't using any of those networks.

SELinux is permissive on both machines, and the firewall is down on both as well
(firewall rules for gluster don't appear to be set by the engine).

1. Create a new VM w/ virtio disk
2. VM runs normally
3. Power down VM
4. VM won't start, w/ error msg:

internal error unexpected address type for ide disk

5. Changing the disk to IDE, or removing and re-adding it, doesn't help; the VM still won't start

6. If created w/ IDE disk from the beginning, VM runs and restarts as
expected.

Is anyone else experiencing something like this?  It appears to render the
GlusterFS domain type totally unusable. I wasn't having this problem last
week...

Here's a chunk from the VDSM log:

Thread-4526::ERROR::2013-09-12 16:02:53,199::vm::2062::vm.Vm::(_startUnderlyingVm) vmId=`cc86596b-0a69-4f5e-a4c2-e8d8ca18067e`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 2022, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/vm.py", line 2906, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py",
line 76, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2805, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error unexpected address type for ide disk



Regards,

Jason

---

Jason Brooks
Red Hat Open Source and Standards

@jasonbrooks | @redhatopen
http://community.redhat.com


