[Users] virt-io SCSI duplicate disk ID

Itamar Heim iheim at redhat.com
Sun Jan 12 11:29:39 UTC 2014


On 01/09/2014 04:49 AM, Blaster wrote:
>
> Hi Daniel,
>
> Both times were on the same hypervisor which was a fresh 3.3.2 install,
> not an upgrade.  One time was using disk images and the other time was
> using direct LUN.
>
> I will send log files to you directly.

I'd expect the bug to originate from the engine holding the duplicate 
address, so the host being a fresh 3.3.2 install is less important.
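
Looking at the domain XML quoted below, one SCSI disk (sda) has no 
<address/> element while the other (sdb) is pinned to controller=0, 
bus=0, target=0, unit=0, and both end up on the same drive address. As 
a rough way to spot that kind of collision in whatever XML the engine 
hands to vdsm, here is a minimal sketch (plain ElementTree; the function 
name and the default-to-0/0/0/0 assumption are mine, not anything oVirt 
ships):

    # Illustrative only: group scsi-bus disks in a libvirt domain XML by
    # their drive address and report addresses claimed more than once.
    import xml.etree.ElementTree as ET
    from collections import defaultdict

    def find_scsi_address_collisions(domxml):
        seen = defaultdict(list)
        root = ET.fromstring(domxml)
        for disk in root.findall("./devices/disk"):
            target = disk.find("target")
            if target is None or target.get("bus") != "scsi":
                continue
            addr = disk.find("address")
            if addr is None:
                # No explicit <address/>: assume it lands on the first
                # slot, which is where the collision shows up below.
                key = ("0", "0", "0", "0")
            else:
                key = (addr.get("controller", "0"), addr.get("bus", "0"),
                       addr.get("target", "0"), addr.get("unit", "0"))
            seen[key].append(target.get("dev"))
        return {k: devs for k, devs in seen.items() if len(devs) > 1}

For the XML below this returns {('0', '0', '0', '0'): ['sda', 'sdb']}, 
which matches the duplicate drive id QEMU complains about.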

>
>
> On 1/8/2014 3:15 PM, Daniel Erez wrote:
>> Hi Blaster,
>>
>> Have you added the second disk after upgrading oVirt version?
>> An explicit address setting mechanism has been introduced recently,
>> which might cause such problems between minor versions.
>> Can you please attach the full engine/vdsm logs?
>>
>> Thanks,
>> Daniel
>>
>> ----- Original Message -----
>>> From: "Blaster" <blaster at 556nato.com>
>>> To: users at ovirt.org
>>> Sent: Wednesday, January 8, 2014 8:53:57 PM
>>> Subject: [Users] virt-io SCSI duplicate disk ID
>>>
>>> So twice now under oVirt 3.3.2 I have added 2 virtio-scsi devices to
>>> a single
>>> virtual host.
>>>
>>> After doing so, the VM would fail to boot due to a duplicate disk ID.
>>> The first time I thought it was a fluke; the second time it's a bug?
>>>
>>> Fortunately they were empty data disks and I was able to get around the
>>> problem by deleting one and recreating it.
>>>
>>> VDSM log:
>>>
>>> Thread-32154::INFO::2014-01-08
>>> 11:54:39,717::clientIF::350::vds::(prepareVolumePath) prepared volume
>>> path:
>>> /rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2
>>>
>>> Thread-32154::DEBUG::2014-01-08 11:54:39,740::vm::2984::vm.Vm::(_run)
>>> vmId=`c2aff4cc-0de6-4342-a565-669b1825838c`::<?xml version="1.0"
>>> encoding="utf-8"?>
>>> <domain type="kvm">
>>> <name>cobra</name>
>>> <uuid>c2aff4cc-0de6-4342-a565-669b1825838c</uuid>
>>> <memory>4194304</memory>
>>> <currentMemory>4194304</currentMemory>
>>> <vcpu>3</vcpu>
>>> <memtune>
>>> <min_guarantee>4194304</min_guarantee>
>>> </memtune>
>>> <devices>
>>> <channel type="unix">
>>> <target name="com.redhat.rhevm.vdsm" type="virtio"/>
>>> <source mode="bind"
>>> path="/var/lib/libvirt/qemu/channels/c2aff4cc-0de6-4342-a565-669b1825838c.com.redhat.rhevm.vdsm"/>
>>>
>>> </channel>
>>> <channel type="unix">
>>> <target name="org.qemu.guest_agent.0" type="virtio"/>
>>> <source mode="bind"
>>> path="/var/lib/libvirt/qemu/channels/c2aff4cc-0de6-4342-a565-669b1825838c.org.qemu.guest_agent.0"/>
>>>
>>> </channel>
>>> <input bus="ps2" type="mouse"/>
>>> <channel type="spicevmc">
>>> <target name="com.redhat.spice.0" type="virtio"/>
>>> </channel>
>>> <graphics autoport="yes" keymap="en-us" listen="0" passwd="*****"
>>> passwdValidTo="1970-01-01T00:00:01" port="-1" tlsPort="-1" type="spice">
>>> <channel mode="secure" name="main"/>
>>> <channel mode="secure" name="inputs"/>
>>> <channel mode="secure" name="cursor"/>
>>> <channel mode="secure" name="playback"/>
>>> <channel mode="secure" name="record"/>
>>> <channel mode="secure" name="display"/>
>>> <channel mode="secure" name="usbredir"/>
>>> <channel mode="secure" name="smartcard"/>
>>> </graphics>
>>> <controller model="virtio-scsi" type="scsi">
>>> <address bus="0x00" domain="0x0000" function="0x0" slot="0x05"
>>> type="pci"/>
>>> </controller>
>>> <video>
>>> <address bus="0x00" domain="0x0000" function="0x0" slot="0x02"
>>> type="pci"/>
>>> <model heads="1" type="qxl" vram="32768"/>
>>> </video>
>>> <interface type="bridge">
>>> <address bus="0x00" domain="0x0000" function="0x0" slot="0x03"
>>> type="pci"/>
>>> <mac address="00:1a:4a:5b:9f:02"/>
>>> <model type="virtio"/>
>>> <source bridge="ovirtmgmt"/>
>>> <filterref filter="vdsm-no-mac-spoofing"/>
>>> <link state="up"/>
>>> </interface>
>>> <disk device="cdrom" snapshot="no" type="file">
>>> <address bus="1" controller="0" target="0" type="drive" unit="0"/>
>>> <source file="" startupPolicy="optional"/>
>>> <target bus="ide" dev="hdc"/>
>>> <readonly/>
>>> <serial/>
>>> </disk>
>>> <disk device="disk" snapshot="no" type="file">
>>> <source
>>> file="/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/94a298cb-87a5-48cd-b78f-770582b50639/c4864816-7384-4d0e-927e-29ea36d5fa4a"/>
>>>
>>> <target bus="scsi" dev="sda"/>
>>> <serial>94a298cb-87a5-48cd-b78f-770582b50639</serial>
>>> <boot order="1"/>
>>> <driver cache="none" error_policy="stop" io="threads" name="qemu"
>>> type="raw"/>
>>> </disk>
>>> <disk device="disk" snapshot="no" type="file">
>>> <address bus="0x00" domain="0x0000" function="0x0" slot="0x07"
>>> type="pci"/>
>>> <source
>>> file="/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/8df43d38-c4c7-4711-bc87-55f35d1550e5/42786233-04c0-4a9f-b731-6eaa2ca4d40d"/>
>>>
>>> <target bus="virtio" dev="vda"/>
>>> <serial>8df43d38-c4c7-4711-bc87-55f35d1550e5</serial>
>>> <driver cache="none" error_policy="stop" io="threads" name="qemu"
>>> type="raw"/>
>>> </disk>
>>> <disk device="disk" snapshot="no" type="file">
>>>
>>> <address bus="0" controller="0" target="0" type="drive" unit="0"/>
>>> <source
>>> file="/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2"/>
>>>
>>> <target bus="scsi" dev="sdb"/>
>>> <serial>e1886286-3d81-47d0-ae8d-77454e464078</serial>
>>> <driver cache="none" error_policy="stop" io="threads" name="qemu"
>>> type="raw"/>
>>> </disk>
>>> <sound model="ich6">
>>> <address bus="0x00" domain="0x0000" function="0x0" slot="0x04"
>>> type="pci"/>
>>> </sound>
>>> <memballoon model="virtio"/>
>>> </devices>
>>> <os>
>>> <type arch="x86_64" machine="pc-1.0">hvm</type>
>>> <smbios mode="sysinfo"/>
>>> </os>
>>> <sysinfo type="smbios">
>>> <system>
>>> <entry name="manufacturer">oVirt</entry>
>>> <entry name="product">oVirt Node</entry>
>>> <entry name="version">19-5</entry>
>>> <entry name="serial">2061001F-C600-0006-E1BC-BCAEC518BA45</entry>
>>> <entry name="uuid">c2aff4cc-0de6-4342-a565-669b1825838c</entry>
>>> </system>
>>> </sysinfo>
>>> <clock adjustment="-21600" offset="variable">
>>> <timer name="rtc" tickpolicy="catchup"/>
>>> </clock>
>>> <features>
>>> <acpi/>
>>> </features>
>>> <cpu match="exact">
>>> <model>Nehalem</model>
>>> <topology cores="1" sockets="3" threads="1"/>
>>> </cpu>
>>> </domain>
>>> Thread-32154::DEBUG::2014-01-08
>>> 11:54:40,218::libvirtconnection::108::libvirtconnection::(wrapper)
>>> Unknown
>>> libvirterror: ecode: 1 edom: 10 level: 2 message: internal error process
>>> exited while connecting to monitor: qemu-system-x86_64: -drive
>>> file=/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2,if=none,id=drive-scsi0-0-0-0,format=raw,serial=e1886286-3d81-47d0-ae8d-77454e464078,cache=none,werror=stop,rerror=stop,aio=threads:
>>>
>>> Duplicate ID 'drive-scsi0-0-0-0' for drive
>>>
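
The id in that message is built from the SCSI drive address 
(controller-bus-target-unit), which is why the unaddressed sda and the 
explicitly addressed sdb collide on the very first slot. A tiny 
illustration of the naming, inferred from the error text rather than 
taken from libvirt's source:

    # Illustrative: reconstruct the drive id pattern seen in the error.
    def qemu_scsi_drive_id(controller=0, bus=0, target=0, unit=0):
        # "drive-scsiC-B-T-U", matching 'drive-scsi0-0-0-0' above
        return "drive-scsi%s-%s-%s-%s" % (controller, bus, target, unit)

    # sdb is explicitly at 0/0/0/0; sda, having no <address/>, appears to
    # be auto-placed on the same first slot, so both -drive options get
    # qemu_scsi_drive_id(0, 0, 0, 0) == 'drive-scsi0-0-0-0'.
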
>>> Thread-32154::DEBUG::2014-01-08
>>> 11:54:40,218::vm::2109::vm.Vm::(_startUnderlyingVm)
>>> vmId=`c2aff4cc-0de6-4342-a565-669b1825838c`::_ongoingCreations released
>>> Thread-32154::ERROR::2014-01-08
>>> 11:54:40,218::vm::2135::vm.Vm::(_startUnderlyingVm)
>>> vmId=`c2aff4cc-0de6-4342-a565-669b1825838c`::The vm start process failed
>>> Traceback (most recent call last):
>>> File "/usr/share/vdsm/vm.py", line 2095, in _startUnderlyingVm
>>> self._run()
>>> File "/usr/share/vdsm/vm.py", line 3018, in _run
>>> self._connection.createXML(domxml, flags),
>>> File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py",
>>> line 76,
>>> in wrapper
>>> ret = f(*args, **kwargs)
>>> File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2805, in
>>> createXML
>>> if ret is None:raise libvirtError('virDomainCreateXML() failed',
>>> conn=self)
>>> libvirtError: internal error process exited while connecting to monitor:
>>> qemu-system-x86_64: -drive
>>> file=/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2,if=none,id=drive-scsi0-0-0-0,format=raw,serial=e1886286-3d81-47d0-ae8d-77454e464078,cache=none,werror=stop,rerror=stop,aio=threads:
>>>
>>> Duplicate ID 'drive-scsi0-0-0-0' for drive
>>>
>>> Thread-32154::DEBUG::2014-01-08
>>> 11:54:40,223::vm::2577::vm.Vm::(setDownStatus)
>>> vmId=`c2aff4cc-0de6-4342-a565-669b1825838c`::Changed state to Down:
>>> internal
>>> error process exited while connecting to monitor: qemu-system-x86_64:
>>> -drive
>>> file=/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2,if=none,id=drive-scsi0-0-0-0,format=raw,serial=e1886286-3d81-47d0-ae8d-77454e464078,cache=none,werror=stop,rerror=stop,aio=threads:
>>>
>>> Duplicate ID 'drive-scsi0-0-0-0' for drive
>>>
>>> Thread-32158::WARNING::2014-01-08
>>> 11:54:42,185::clientIF::362::vds::(teardownVolumePath) Drive is not a
>>> vdsm
>>> image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2
>>> VOLWM_FREE_PCT:50
>>> _blockDev:False _checkIoTuneCategories:<bound method
>>> Drive._checkIoTuneCategories of <vm.Drive object at 0x7f1150113a90>>
>>> _customize:<bound method Drive._customize of <vm.Drive object at
>>> 0x7f1150113a90>> _deviceXML:<disk device="cdrom" snapshot="no"
>>> type="file"><address bus="1" controller="0" target="0" type="drive"
>>> unit="0"/><source file="" startupPolicy="optional"/><target bus="ide"
>>> dev="hdc"/><readonly/><serial></serial></disk> _makeName:<bound method
>>> Drive._makeName of <vm.Drive object at 0x7f1150113a90>>
>>> _setExtSharedState:<bound method Drive._setExtSharedState of <vm.Drive
>>> object at 0x7f1150113a90>> _validateIoTuneParams:<bound method
>>> Drive._validateIoTuneParams of <vm.Drive object at 0x7f1150113a90>>
>>> address:{' controller': '0', ' target': '0', 'unit': '0', ' bus': '1', '
>>> type': 'drive'} apparentsize:0 blockDev:False cache:none conf:{'status':
>>> 'Down', 'acpiEnable': 'true', 'emulatedMachine': 'pc-1.0', 'vmId':
>>> 'c2aff4cc-0de6-4342-a565-669b1825838c', 'pid': '0', 'memGuaranteedSize':
>>> 4096, 'timeOffset': '-21600', 'keyboardLayout': 'en-us', 'displayPort':
>>> '-1', 'displaySecurePort': '-1', 'spiceSslCipherSuite': 'DEFAULT',
>>> 'cpuType': 'Nehalem', 'custom':
>>> {'device_142f948d-f916-4f42-bd28-cb4f0b8ebb40device_615c1466-850e-4362-a4fb-60df1aaee1e8device_861eb290-19bc-4ace-b2cb-85cbb2e0eb5adevice_94db7fa0-071d-4181-bac6-826ecfca3dd8device_a2e6354f-4ad6-475f-bd18-754fcedf505f':
>>>
>>> 'VmDevice {vmId=c2aff4cc-0de6-4342-a565-669b1825838c,
>>> deviceId=a2e6354f-4ad6-475f-bd18-754fcedf505f, device=unix,
>>> type=CHANNEL,
>>> bootOrder=0, specParams={}, address={port=2, bus=0, controller=0,
>>> type=virtio-serial}, managed=false, plugged=true, readOnly=false,
>>> deviceAlias=channel1, customProperties={}, snapshotId=null}',
>>> 'device_142f948d-f916-4f42-bd28-cb4f0b8ebb40': 'VmDevice
>>> {vmId=c2aff4cc-0de6-4342-a565-669b1825838c,
>>> deviceId=142f948d-f916-4f42-bd28-cb4f0b8ebb40, device=virtio-serial,
>>> type=CONTROLLER, bootOrder=0, specParams={}, address={bus=0x00,
>>> domain=0x0000, type=pci, slot=0x06, function=0x0}, managed=false,
>>> plugged=true, readOnly=false, deviceAlias=virtio-serial0,
>>> customProperties={}, snapshotId=null}',
>>> 'device_142f948d-f916-4f42-bd28-cb4f0b8ebb40device_615c1466-850e-4362-a4fb-60df1aaee1e8device_861eb290-19bc-4ace-b2cb-85cbb2e0eb5a':
>>>
>>> 'VmDevice {vmId=c2aff4cc-0de6-4342-a565-669b1825838c,
>>> deviceId=861eb290-19bc-4ace-b2cb-85cbb2e0eb5a, device=ide,
>>> type=CONTROLLER,
>>> bootOrder=0, specParams={}, address={bus=0x00, domain=0x0000, type=pci,
>>> slot=0x01, function=0x1}, managed=false, plugged=true, readOnly=false,
>>> deviceAlias=ide0, customProperties={}, snapshotId=null}',
>>> 'device_142f948d-f916-4f42-bd28-cb4f0b8ebb40device_615c1466-850e-4362-a4fb-60df1aaee1e8device_861eb290-19bc-4ace-b2cb-85cbb2e0eb5adevice_94db7fa0-071d-4181-bac6-826ecfca3dd8':
>>>
>>> 'VmDevice {vmId=c2aff4cc-0de6-4342-a565-669b1825838c,
>>> deviceId=94db7fa0-071d-4181-bac6-826ecfca3dd8, device=unix,
>>> type=CHANNEL,
>>> bootOrder=0, specParams={}, address={port=1, bus=0, controller=0,
>>> type=virtio-serial}, managed=false, plugged=true, readOnly=false,
>>> deviceAlias=channel0, customProperties={}, snapshotId=null}',
>>> 'device_142f948d-f916-4f42-bd28-cb4f0b8ebb40device_615c1466-850e-4362-a4fb-60df1aaee1e8':
>>>
>>> 'VmDevice {vmId=c2aff4cc-0de6-4342-a565-669b1825838c,
>>> deviceId=615c1466-850e-4362-a4fb-60df1aaee1e8, device=spicevmc,
>>> type=CHANNEL, bootOrder=0, specParams={}, address={port=3, bus=0,
>>> controller=0, type=virtio-serial}, managed=false, plugged=true,
>>> readOnly=false, deviceAlias=channel2, customProperties={},
>>> snapshotId=null}'}, 'clientIp': '', 'exitCode': 1, 'nicModel':
>>> 'rtl8139,pv',
>>> 'smartcardEnable': 'false', 'kvmEnable': 'true', 'exitMessage':
>>> "internal
>>> error process exited while connecting to monitor: qemu-system-x86_64:
>>> -drive
>>> file=/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2,if=none,id=drive-scsi0-0-0-0,format=raw,serial=e1886286-3d81-47d0-ae8d-77454e464078,cache=none,werror=stop,rerror=stop,aio=threads:
>>>
>>> Duplicate ID 'drive-scsi0-0-0-0' for drive\n", 'transparentHugePages':
>>> 'true', 'devices': [{'specParams': {}, 'deviceId':
>>> 'db6166cb-e977-485e-8c82-fa48ca75e709', 'address': {'bus': '0x00', '
>>> slot':
>>> '0x05', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'},
>>> 'device':
>>> 'scsi', 'model': 'virtio-scsi', 'type': 'controller'}, {'device': 'qxl',
>>> 'specParams': {'vram': '32768', 'heads': '1'}, 'type': 'video',
>>> 'deviceId':
>>> '8b0e3dbc-27c6-4eae-ba6b-201c3e1736aa', 'address': {'bus': '0x00', '
>>> slot':
>>> '0x02', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}},
>>> {'nicModel': 'pv', 'macAddr': '00:1a:4a:5b:9f:02', 'linkActive': 'true',
>>> 'network': 'ovirtmgmt', 'filter': 'vdsm-no-mac-spoofing',
>>> 'specParams': {},
>>> 'deviceId': '738c8ebe-b014-4d65-8c78-942aaf12bfb5', 'address': {'bus':
>>> '0x00', ' slot': '0x03', ' domain': '0x0000', ' type': 'pci', '
>>> function':
>>> '0x0'}, 'device': 'bridge', 'type': 'interface'}, {'index': '2',
>>> 'iface':
>>> 'ide', 'address': {' controller': '0', ' target': '0', 'unit': '0', '
>>> bus':
>>> '1', ' type': 'drive'}, 'specParams': {'path': ''}, 'readonly': 'true',
>>> 'deviceId': '5611019a-948e-41b3-8ffd-75790ebcdf84', 'path': '',
>>> 'device':
>>> 'cdrom', 'shared': 'false', 'type': 'disk'}, {'volumeInfo': {'domainID':
>>> 'f14f471e-0cce-414d-af57-779eeb88c97a', 'volType': 'path',
>>> 'leaseOffset': 0,
>>> 'path':
>>> '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/94a298cb-87a5-48cd-b78f-770582b50639/c4864816-7384-4d0e-927e-29ea36d5fa4a',
>>>
>>> 'volumeID': 'c4864816-7384-4d0e-927e-29ea36d5fa4a', 'leasePath':
>>> '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/94a298cb-87a5-48cd-b78f-770582b50639/c4864816-7384-4d0e-927e-29ea36d5fa4a.lease',
>>>
>>> 'imageID': '94a298cb-87a5-48cd-b78f-770582b50639'}, 'index': 0, 'iface':
>>> 'scsi', 'apparentsize': '162135015424', 'imageID':
>>> '94a298cb-87a5-48cd-b78f-770582b50639', 'readonly': 'false', 'shared':
>>> 'false', 'truesize': '107119386624', 'type': 'disk', 'domainID':
>>> 'f14f471e-0cce-414d-af57-779eeb88c97a', 'reqsize': '0', 'format': 'raw',
>>> 'deviceId': '94a298cb-87a5-48cd-b78f-770582b50639', 'poolID':
>>> '18f6234c-a9de-4fdf-bd9a-2bd90b9f33f9', 'device': 'disk', 'path':
>>> '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/94a298cb-87a5-48cd-b78f-770582b50639/c4864816-7384-4d0e-927e-29ea36d5fa4a',
>>>
>>> 'propagateErrors': 'off', 'optional': 'false', 'bootOrder': '1',
>>> 'volumeID':
>>> 'c4864816-7384-4d0e-927e-29ea36d5fa4a', 'specParams': {}, 'volumeChain':
>>> [{'domainID': 'f14f471e-0cce-414d-af57-779eeb88c97a', 'volType': 'path',
>>> 'leaseOffset': 0, 'path':
>>> '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/94a298cb-87a5-48cd-b78f-770582b50639/c4864816-7384-4d0e-927e-29ea36d5fa4a',
>>>
>>> 'volumeID': 'c4864816-7384-4d0e-927e-29ea36d5fa4a', 'leasePath':
>>> '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/94a298cb-87a5-48cd-b78f-770582b50639/c4864816-7384-4d0e-927e-29ea36d5fa4a.lease',
>>>
>>> 'imageID': '94a298cb-87a5-48cd-b78f-770582b50639'}]}, {'address':
>>> {'bus':
>>> '0x00', ' slot': '0x07', ' domain': '0x0000', ' type': 'pci', '
>>> function':
>>> '0x0'}, 'volumeInfo': {'domainID':
>>> 'f14f471e-0cce-414d-af57-779eeb88c97a',
>>> 'volType': 'path', 'leaseOffset': 0, 'path':
>>> '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/8df43d38-c4c7-4711-bc87-55f35d1550e5/42786233-04c0-4a9f-b731-6eaa2ca4d40d',
>>>
>>> 'volumeID': '42786233-04c0-4a9f-b731-6eaa2ca4d40d', 'leasePath':
>>> '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/8df43d38-c4c7-4711-bc87-55f35d1550e5/42786233-04c0-4a9f-b731-6eaa2ca4d40d.lease',
>>>
>>> 'imageID': '8df43d38-c4c7-4711-bc87-55f35d1550e5'}, 'index': '0',
>>> 'iface':
>>> 'virtio', 'apparentsize': '1073741824', 'imageID':
>>> '8df43d38-c4c7-4711-bc87-55f35d1550e5', 'readonly': 'false', 'shared':
>>> 'false', 'truesize': '0', 'type': 'disk', 'domainID':
>>> 'f14f471e-0cce-414d-af57-779eeb88c97a', 'reqsize': '0', 'format': 'raw',
>>> 'deviceId': '8df43d38-c4c7-4711-bc87-55f35d1550e5', 'poolID':
>>> '18f6234c-a9de-4fdf-bd9a-2bd90b9f33f9', 'device': 'disk', 'path':
>>> '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/8df43d38-c4c7-4711-bc87-55f35d1550e5/42786233-04c0-4a9f-b731-6eaa2ca4d40d',
>>>
>>> 'propagateErrors': 'off', 'optional': 'false', 'volumeID':
>>> '42786233-04c0-4a9f-b731-6eaa2ca4d40d', 'specParams': {}, 'volumeChain':
>>> [{'domainID': 'f14f471e-0cce-414d-af57-779eeb88c97a', 'volType': 'path',
>>> 'leaseOffset': 0, 'path':
>>> '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/8df43d38-c4c7-4711-bc87-55f35d1550e5/42786233-04c0-4a9f-b731-6eaa2ca4d40d',
>>>
>>> 'volumeID': '42786233-04c0-4a9f-b731-6eaa2ca4d40d', 'leasePath':
>>> '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/8df43d38-c4c7-4711-bc87-55f35d1550e5/42786233-04c0-4a9f-b731-6eaa2ca4d40d.lease',
>>>
>>> 'imageID': '8df43d38-c4c7-4711-bc87-55f35d1550e5'}]}, {'address': {'
>>> controller': '0', ' target': '0', 'unit': '0', ' bus': '0', ' type':
>>> 'drive'}, 'volumeInfo': {'domainID':
>>> 'f14f471e-0cce-414d-af57-779eeb88c97a',
>>> 'volType': 'path', 'leaseOffset': 0, 'path':
>>> '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2',
>>>
>>> 'volumeID': '1ef64ebd-6415-43d1-a34f-28f05cf6dea2', 'leasePath':
>>> '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2.lease',
>>>
>>> 'imageID': 'e1886286-3d81-47d0-ae8d-77454e464078'}, 'index': '1',
>>> 'iface':
>>> 'scsi', 'apparentsize': '1073741824', 'imageID':
>>> 'e1886286-3d81-47d0-ae8d-77454e464078', 'readonly': 'false', 'shared':
>>> 'false', 'truesize': '0', 'type': 'disk', 'domainID':
>>> 'f14f471e-0cce-414d-af57-779eeb88c97a', 'reqsize': '0', 'format': 'raw',
>>> 'deviceId': 'e1886286-3d81-47d0-ae8d-77454e464078', 'poolID':
>>> '18f6234c-a9de-4fdf-bd9a-2bd90b9f33f9', 'device': 'disk', 'path':
>>> '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2',
>>>
>>> 'propagateErrors': 'off', 'optional': 'false', 'volumeID':
>>> '1ef64ebd-6415-43d1-a34f-28f05cf6dea2', 'specParams': {}, 'volumeChain':
>>> [{'domainID': 'f14f471e-0cce-414d-af57-779eeb88c97a', 'volType': 'path',
>>> 'leaseOffset': 0, 'path':
>>> '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2',
>>>
>>> 'volumeID': '1ef64ebd-6415-43d1-a34f-28f05cf6dea2', 'leasePath':
>>> '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2.lease',
>>>
>>> 'imageID': 'e1886286-3d81-47d0-ae8d-77454e464078'}]}, {'device': 'ich6',
>>> 'specParams': {}, 'type': 'sound', 'deviceId':
>>> 'a1e596e9-218f-46ba-9f32-b9c966e11d73', 'address': {'bus': '0x00', '
>>> slot':
>>> '0x04', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}},
>>> {'device': 'memballoon', 'specParams': {'model': 'virtio'}, 'type':
>>> 'balloon', 'deviceId': '5c04fd7e-7249-4e3a-b8eb-91cce72d5b60', 'target':
>>> 4194304}], 'smp': '3', 'vmType': 'kvm', 'memSize': 4096, 'displayIp':
>>> '0',
>>> 'spiceSecureChannels':
>>> 'smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard',
>>> 'smpCoresPerSocket': '1', 'vmName': 'cobra', 'display': 'qxl',
>>> 'nice': '0'}
>>> createXmlElem:<bound method Drive.createXmlElem of <vm.Drive object at
>>> 0x7f1150113a90>> device:cdrom
>>> deviceId:5611019a-948e-41b3-8ffd-75790ebcdf84
>>> extSharedState:none getLeasesXML:<bound method Drive.getLeasesXML of
>>> <vm.Drive object at 0x7f1150113a90>> getNextVolumeSize:<bound method
>>> Drive.getNextVolumeSize of <vm.Drive object at 0x7f1150113a90>>
>>> getXML:<bound method Drive.getXML of <vm.Drive object at
>>> 0x7f1150113a90>>
>>> hasVolumeLeases:False iface:ide index:2
>>> isDiskReplicationInProgress:<bound
>>> method Drive.isDiskReplicationInProgress of <vm.Drive object at
>>> 0x7f1150113a90>> isVdsmImage:<bound method Drive.isVdsmImage of
>>> <vm.Drive
>>> object at 0x7f1150113a90>> log:<logUtils.SimpleLogAdapter object at
>>> 0x7f111838af90> name:hdc networkDev:False path: readonly:true reqsize:0
>>> serial: shared:false specParams:{'path': ''} truesize:0 type:disk
>>> volExtensionChunk:1024 watermarkLimit:536870912
>>> Traceback (most recent call last):
>>> File "/usr/share/vdsm/clientIF.py", line 356, in teardownVolumePath
>>> res = self.irs.teardownImage(drive['domainID'],
>>> File "/usr/share/vdsm/vm.py", line 1389, in __getitem__
>>> raise KeyError(key)
>>> KeyError: 'domainID'
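
That KeyError is only fallout from the failed start: teardownVolumePath 
walks every drive, and the CD-ROM dumped in the warning above has no 
'domainID' because it is not a vdsm-managed image (the dump even shows 
the Drive object exposing isVdsmImage). A minimal sketch of the kind of 
guard involved; the function below is illustrative, not the real 
clientIF code, and the teardownImage argument order simply mirrors the 
sdUUID/spUUID/imgUUID call logged further down:

    # Illustrative sketch only -- not vdsm's actual teardownVolumePath.
    # Drives that are not vdsm-managed images (here, the empty CD-ROM)
    # carry no 'domainID'/'poolID'/'imageID', so skip them instead of
    # letting the lookup raise KeyError.
    def teardown_volume_paths(irs, drives, log):
        for drive in drives:
            if not (hasattr(drive, "isVdsmImage") and drive.isVdsmImage()):
                log.warning("Drive is not a vdsm image, skipping: %s",
                            getattr(drive, "name", drive))
                continue
            res = irs.teardownImage(drive.domainID, drive.poolID,
                                    drive.imageID)
            if res["status"]["code"]:
                log.error("teardownImage failed for %s: %s",
                          drive.path, res["status"]["message"])
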
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,186::task::579::TaskManager.Task::(_updateState)
>>> Task=`30d76cca-4645-4893-8d68-5cc68ba42dc3`::moving from state init
>>> -> state
>>> preparing
>>> Thread-32158::INFO::2014-01-08
>>> 11:54:42,187::logUtils::44::dispatcher::(wrapper) Run and protect:
>>> teardownImage(sdUUID='f14f471e-0cce-414d-af57-779eeb88c97a',
>>> spUUID='18f6234c-a9de-4fdf-bd9a-2bd90b9f33f9',
>>> imgUUID='94a298cb-87a5-48cd-b78f-770582b50639', volUUID=None)
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,187::resourceManager::197::ResourceManager.Request::(__init__)
>>> ResName=`Storage.f14f471e-0cce-414d-af57-779eeb88c97a`ReqID=`533f2699-0684-4247-9d5f-a858ffe96fe9`::Request
>>>
>>> was made in '/usr/share/vdsm/storage/hsm.py' line '3283' at
>>> 'teardownImage'
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,187::resourceManager::541::ResourceManager::(registerResource)
>>> Trying to register resource
>>> 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a'
>>> for lock type 'shared'
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,187::resourceManager::600::ResourceManager::(registerResource)
>>> Resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' is free. Now
>>> locking
>>> as 'shared' (1 active user)
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,187::resourceManager::237::ResourceManager.Request::(grant)
>>> ResName=`Storage.f14f471e-0cce-414d-af57-779eeb88c97a`ReqID=`533f2699-0684-4247-9d5f-a858ffe96fe9`::Granted
>>>
>>> request
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,188::task::811::TaskManager.Task::(resourceAcquired)
>>> Task=`30d76cca-4645-4893-8d68-5cc68ba42dc3`::_resourcesAcquired:
>>> Storage.f14f471e-0cce-414d-af57-779eeb88c97a (shared)
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,188::task::974::TaskManager.Task::(_decref)
>>> Task=`30d76cca-4645-4893-8d68-5cc68ba42dc3`::ref 1 aborting False
>>> Thread-32158::INFO::2014-01-08
>>> 11:54:42,188::logUtils::47::dispatcher::(wrapper) Run and protect:
>>> teardownImage, Return response: None
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,188::task::1168::TaskManager.Task::(prepare)
>>> Task=`30d76cca-4645-4893-8d68-5cc68ba42dc3`::finished: None
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,188::task::579::TaskManager.Task::(_updateState)
>>> Task=`30d76cca-4645-4893-8d68-5cc68ba42dc3`::moving from state
>>> preparing ->
>>> state finished
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,188::resourceManager::939::ResourceManager.Owner::(releaseAll)
>>> Owner.releaseAll requests {} resources
>>> {'Storage.f14f471e-0cce-414d-af57-779eeb88c97a': < ResourceRef
>>> 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a', isValid: 'True' obj:
>>> 'None'>}
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,188::resourceManager::976::ResourceManager.Owner::(cancelAll)
>>> Owner.cancelAll requests {}
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,189::resourceManager::615::ResourceManager::(releaseResource)
>>> Trying to release resource
>>> 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a'
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,189::resourceManager::634::ResourceManager::(releaseResource)
>>> Released resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' (0
>>> active
>>> users)
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,189::resourceManager::640::ResourceManager::(releaseResource)
>>> Resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' is free,
>>> finding out
>>> if anyone is waiting for it.
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,189::resourceManager::648::ResourceManager::(releaseResource) No
>>>
>>> one is waiting for resource
>>> 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a',
>>> Clearing records.
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,189::task::974::TaskManager.Task::(_decref)
>>> Task=`30d76cca-4645-4893-8d68-5cc68ba42dc3`::ref 0 aborting False
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,189::task::579::TaskManager.Task::(_updateState)
>>> Task=`4b2bde05-78f2-42b2-a5be-171377c6905e`::moving from state init
>>> -> state
>>> preparing
>>> Thread-32158::INFO::2014-01-08
>>> 11:54:42,190::logUtils::44::dispatcher::(wrapper) Run and protect:
>>> teardownImage(sdUUID='f14f471e-0cce-414d-af57-779eeb88c97a',
>>> spUUID='18f6234c-a9de-4fdf-bd9a-2bd90b9f33f9',
>>> imgUUID='8df43d38-c4c7-4711-bc87-55f35d1550e5', volUUID=None)
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,190::resourceManager::197::ResourceManager.Request::(__init__)
>>> ResName=`Storage.f14f471e-0cce-414d-af57-779eeb88c97a`ReqID=`0907e117-ba60-4c9f-a190-ebb7e027b4c2`::Request
>>>
>>> was made in '/usr/share/vdsm/storage/hsm.py' line '3283' at
>>> 'teardownImage'
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,190::resourceManager::541::ResourceManager::(registerResource)
>>> Trying to register resource
>>> 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a'
>>> for lock type 'shared'
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,190::resourceManager::600::ResourceManager::(registerResource)
>>> Resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' is free. Now
>>> locking
>>> as 'shared' (1 active user)
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,190::resourceManager::237::ResourceManager.Request::(grant)
>>> ResName=`Storage.f14f471e-0cce-414d-af57-779eeb88c97a`ReqID=`0907e117-ba60-4c9f-a190-ebb7e027b4c2`::Granted
>>>
>>> request
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,190::task::811::TaskManager.Task::(resourceAcquired)
>>> Task=`4b2bde05-78f2-42b2-a5be-171377c6905e`::_resourcesAcquired:
>>> Storage.f14f471e-0cce-414d-af57-779eeb88c97a (shared)
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,191::task::974::TaskManager.Task::(_decref)
>>> Task=`4b2bde05-78f2-42b2-a5be-171377c6905e`::ref 1 aborting False
>>> Thread-32158::INFO::2014-01-08
>>> 11:54:42,191::logUtils::47::dispatcher::(wrapper) Run and protect:
>>> teardownImage, Return response: None
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,191::task::1168::TaskManager.Task::(prepare)
>>> Task=`4b2bde05-78f2-42b2-a5be-171377c6905e`::finished: None
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,191::task::579::TaskManager.Task::(_updateState)
>>> Task=`4b2bde05-78f2-42b2-a5be-171377c6905e`::moving from state
>>> preparing ->
>>> state finished
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,191::resourceManager::939::ResourceManager.Owner::(releaseAll)
>>> Owner.releaseAll requests {} resources
>>> {'Storage.f14f471e-0cce-414d-af57-779eeb88c97a': < ResourceRef
>>> 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a', isValid: 'True' obj:
>>> 'None'>}
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,191::resourceManager::976::ResourceManager.Owner::(cancelAll)
>>> Owner.cancelAll requests {}
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,191::resourceManager::615::ResourceManager::(releaseResource)
>>> Trying to release resource
>>> 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a'
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,192::resourceManager::634::ResourceManager::(releaseResource)
>>> Released resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' (0
>>> active
>>> users)
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,192::resourceManager::640::ResourceManager::(releaseResource)
>>> Resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' is free,
>>> finding out
>>> if anyone is waiting for it.
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,192::resourceManager::648::ResourceManager::(releaseResource) No
>>>
>>> one is waiting for resource
>>> 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a',
>>> Clearing records.
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,192::task::974::TaskManager.Task::(_decref)
>>> Task=`4b2bde05-78f2-42b2-a5be-171377c6905e`::ref 0 aborting False
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,192::task::579::TaskManager.Task::(_updateState)
>>> Task=`75d1051f-b118-4af8-b9f1-504fcd1802c2`::moving from state init
>>> -> state
>>> preparing
>>> Thread-32158::INFO::2014-01-08
>>> 11:54:42,192::logUtils::44::dispatcher::(wrapper) Run and protect:
>>> teardownImage(sdUUID='f14f471e-0cce-414d-af57-779eeb88c97a',
>>> spUUID='18f6234c-a9de-4fdf-bd9a-2bd90b9f33f9',
>>> imgUUID='e1886286-3d81-47d0-ae8d-77454e464078', volUUID=None)
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,193::resourceManager::197::ResourceManager.Request::(__init__)
>>> ResName=`Storage.f14f471e-0cce-414d-af57-779eeb88c97a`ReqID=`8908ad39-2555-4ef5-a6ae-49e8504de015`::Request
>>>
>>> was made in '/usr/share/vdsm/storage/hsm.py' line '3283' at
>>> 'teardownImage'
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,193::resourceManager::541::ResourceManager::(registerResource)
>>> Trying to register resource
>>> 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a'
>>> for lock type 'shared'
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,196::resourceManager::600::ResourceManager::(registerResource)
>>> Resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' is free. Now
>>> locking
>>> as 'shared' (1 active user)
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,196::resourceManager::237::ResourceManager.Request::(grant)
>>> ResName=`Storage.f14f471e-0cce-414d-af57-779eeb88c97a`ReqID=`8908ad39-2555-4ef5-a6ae-49e8504de015`::Granted
>>>
>>> request
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,196::task::811::TaskManager.Task::(resourceAcquired)
>>> Task=`75d1051f-b118-4af8-b9f1-504fcd1802c2`::_resourcesAcquired:
>>> Storage.f14f471e-0cce-414d-af57-779eeb88c97a (shared)
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,196::task::974::TaskManager.Task::(_decref)
>>> Task=`75d1051f-b118-4af8-b9f1-504fcd1802c2`::ref 1 aborting False
>>> Thread-32158::INFO::2014-01-08
>>> 11:54:42,196::logUtils::47::dispatcher::(wrapper) Run and protect:
>>> teardownImage, Return response: None
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,196::task::1168::TaskManager.Task::(prepare)
>>> Task=`75d1051f-b118-4af8-b9f1-504fcd1802c2`::finished: None
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,197::task::579::TaskManager.Task::(_updateState)
>>> Task=`75d1051f-b118-4af8-b9f1-504fcd1802c2`::moving from state
>>> preparing ->
>>> state finished
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,197::resourceManager::939::ResourceManager.Owner::(releaseAll)
>>> Owner.releaseAll requests {} resources
>>> {'Storage.f14f471e-0cce-414d-af57-779eeb88c97a': < ResourceRef
>>> 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a', isValid: 'True' obj:
>>> 'None'>}
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,197::resourceManager::976::ResourceManager.Owner::(cancelAll)
>>> Owner.cancelAll requests {}
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,197::resourceManager::615::ResourceManager::(releaseResource)
>>> Trying to release resource
>>> 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a'
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,197::resourceManager::634::ResourceManager::(releaseResource)
>>> Released resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' (0
>>> active
>>> users)
>>> Thread-32158::DEBUG::2014-01-08
>>> 11:54:42,197::resourceManager::640::ResourceManager::(releaseResource)
>>> Resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' is free,
>>> finding out
>>> if anyone is waiting for it.
>>>
>>>
>>>
>>>
>>>
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users



