Users
Dear all,
Because of our Puppet integration at work, I would like to know if there
is a way in oVirt to find out, from WITHIN a virtual machine, the hypervisor
on which the VM is running.
Kind regards,
Koen
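For what it's worth, oVirt stamps "oVirt" into the guest's SMBIOS system manufacturer entry (visible in the domain XML quoted later in this digest), so a guest can at least detect that it is running on an oVirt platform by reading the DMI strings the kernel exposes. The hypervisor's own hostname is not exposed this way. A minimal sketch, assuming a Linux guest; the `classify_platform` helper and the "ovirt-guest" label are my own, hypothetical names:

```python
def classify_platform(sys_vendor):
    """Map a DMI system-vendor string to a rough platform guess.

    oVirt populates the guest's SMBIOS 'manufacturer' entry with "oVirt",
    which Linux exposes under /sys/class/dmi/id/sys_vendor.
    """
    if sys_vendor.strip() == "oVirt":
        return "ovirt-guest"
    return "unknown"


def read_sys_vendor(path="/sys/class/dmi/id/sys_vendor"):
    """Read the SMBIOS system vendor from sysfs (Linux guests only)."""
    with open(path) as f:
        return f.read().strip()
```

Note this only identifies the platform; to learn which host the VM is currently running on you would normally ask the engine (or a guest agent), not the guest itself.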
Hello All,
A little while ago I wrote an email about network issues I was having.
I found the problem...
On the VM host, I had a bond set up between two network interfaces. The
bond mode was set to mode 1 (active/passive).
However when I look at the bond on the box, I get this:
[root@node02 bonding]# cat bond4
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: em2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: d4:ae:52:6d:c8:cc
Slave queue ID: 0
Slave Interface: em3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: d4:ae:52:6d:c8:ce
Slave queue ID: 0
Somehow, the OS is not setting the bonding mode correctly. I verified that
it was set to mode 1 in /etc/sysconfig/network-scripts/ifcfg-bond4.
When I take the bond away, the host network works perfectly on both of
the formerly bonded interfaces.
So again, if anyone has any ideas, I'm open to suggestions.
Thanks,
Dan
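One way to catch this kind of mismatch early is to compare the mode the kernel actually reports in /proc/net/bonding/<bond> against the mode you configured. A small sketch; the `parse_bond_mode` helper is hypothetical, not part of any oVirt tooling:

```python
def parse_bond_mode(bond_status):
    """Extract the 'Bonding Mode' value from /proc/net/bonding/<bond> text."""
    for line in bond_status.splitlines():
        if line.startswith("Bonding Mode:"):
            return line.split(":", 1)[1].strip()
    raise ValueError("no 'Bonding Mode' line found")


# Sample taken from the output quoted above.
sample = """\
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: load balancing (round-robin)
MII Status: up
"""

mode = parse_bond_mode(sample)
# Mode 1 should report as 'fault-tolerance (active-backup)'; anything else
# means the configured mode was not applied.
misconfigured = mode != "fault-tolerance (active-backup)"
```

Also worth checking: the output quoted above shows "MII Polling Interval (ms): 0", i.e. no miimon is set, and the bonding driver needs miimon to detect link failures in active-backup mode.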
Hello,
after configuring the noVNC websocket proxy I would like to load
an officially signed certificate into it. Otherwise I would always
have to accept the self-signed certificate on port 6100. See here:
http://lists.ovirt.org/pipermail/users/2013-October/017108.html
From the configuration file I know where to place the signed
certificate, but our generated certificates depend on intermediate
certificates. At the moment I'm missing the option to load/advertise
that intermediate certificate.
# cat /ovirt-engine/ovirt-websocket-proxy.conf.d/10-setup.conf
PROXY_PORT=6100
SSL_CERTIFICATE=/etc/pki/ovirt-engine/certs/websocket-proxy.cer
SSL_KEY=/etc/pki/ovirt-engine/keys/websocket-proxy.key.nopass
FORCE_DATA_VERIFICATION=True
CERT_FOR_DATA_VERIFICATION=/etc/pki/ovirt-engine/certs/engine.cer
SSL_ONLY=True
In Apache I usually go with:
SSLCertificateFile /etc/pki/ovirt-engine/certs/apache.cer
SSLCertificateKeyFile /etc/pki/ovirt-engine/keys/apache.key.nopass
SSLCertificateChainFile /etc/pki/ovirt-engine/certs/server-chain.crt
Any tips?
Markus
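A common workaround when a service takes a certificate file but has no separate chain option is to append the intermediate CA to the PEM file the service loads, since a PEM file is just a concatenation of certificate blocks (leaf first, then intermediates). Whether the websocket proxy accepts such a bundle I can't confirm; the helper below is a hypothetical sketch of building one:

```python
def build_chain_pem(leaf_pem, *intermediate_pems):
    """Concatenate a leaf certificate and its intermediates into one
    PEM bundle, leaf first, then intermediates up the chain."""
    parts = [leaf_pem.strip()] + [p.strip() for p in intermediate_pems]
    return "\n".join(parts) + "\n"


# Placeholder PEM blocks standing in for the real certificates.
leaf = "-----BEGIN CERTIFICATE-----\nLEAF\n-----END CERTIFICATE-----"
intermediate = "-----BEGIN CERTIFICATE-----\nINTERMEDIATE\n-----END CERTIFICATE-----"
bundle = build_chain_pem(leaf, intermediate)
```

This mirrors what Apache's SSLCertificateChainFile does, folded into a single file.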
Hello,
I am planning a small meet-up event for the oVirt community in Korea.
Is there any way to get some oVirt stickers? I would like to get some
stickers for our first event in Korea.
If anyone knows how to get some, please let me know.
--
Best Regards,
Kyung Huh
So twice now under oVirt 3.3.2 I have added 2 virtio-scsi devices to a
single virtual host.
After doing so, the VM would fail to boot due to a duplicate disk ID. The
first time I thought it a fluke, second time's a bug?
Fortunately they were empty data disks and I was able to get around the
problem by deleting one and recreating it.
VDSM log:
Thread-32154::INFO::2014-01-08 11:54:39,717::clientIF::350::vds::(prepareVolumePath) prepared volume path: /rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2
Thread-32154::DEBUG::2014-01-08 11:54:39,740::vm::2984::vm.Vm::(_run) vmId=`c2aff4cc-0de6-4342-a565-669b1825838c`::<?xml version="1.0" encoding="utf-8"?>
<domain type="kvm">
    <name>cobra</name>
    <uuid>c2aff4cc-0de6-4342-a565-669b1825838c</uuid>
    <memory>4194304</memory>
    <currentMemory>4194304</currentMemory>
    <vcpu>3</vcpu>
    <memtune>
        <min_guarantee>4194304</min_guarantee>
    </memtune>
    <devices>
        <channel type="unix">
            <target name="com.redhat.rhevm.vdsm" type="virtio"/>
            <source mode="bind" path="/var/lib/libvirt/qemu/channels/c2aff4cc-0de6-4342-a565-669b1825838c.com.redhat.rhevm.vdsm"/>
        </channel>
        <channel type="unix">
            <target name="org.qemu.guest_agent.0" type="virtio"/>
            <source mode="bind" path="/var/lib/libvirt/qemu/channels/c2aff4cc-0de6-4342-a565-669b1825838c.org.qemu.guest_agent.0"/>
        </channel>
        <input bus="ps2" type="mouse"/>
        <channel type="spicevmc">
            <target name="com.redhat.spice.0" type="virtio"/>
        </channel>
        <graphics autoport="yes" keymap="en-us" listen="0" passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" tlsPort="-1" type="spice">
            <channel mode="secure" name="main"/>
            <channel mode="secure" name="inputs"/>
            <channel mode="secure" name="cursor"/>
            <channel mode="secure" name="playback"/>
            <channel mode="secure" name="record"/>
            <channel mode="secure" name="display"/>
            <channel mode="secure" name="usbredir"/>
            <channel mode="secure" name="display"/>
            <channel mode="secure" name="usbredir"/>
            <channel mode="secure" name="smartcard"/>
        </graphics>
        <controller model="virtio-scsi" type="scsi">
            <address bus="0x00" domain="0x0000" function="0x0" slot="0x05" type="pci"/>
        </controller>
        <video>
            <address bus="0x00" domain="0x0000" function="0x0" slot="0x02" type="pci"/>
            <model heads="1" type="qxl" vram="32768"/>
        </video>
        <interface type="bridge">
            <address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci"/>
            <mac address="00:1a:4a:5b:9f:02"/>
            <model type="virtio"/>
            <source bridge="ovirtmgmt"/>
            <filterref filter="vdsm-no-mac-spoofing"/>
            <link state="up"/>
        </interface>
        <disk device="cdrom" snapshot="no" type="file">
            <address bus="1" controller="0" target="0" type="drive" unit="0"/>
            <source file="" startupPolicy="optional"/>
            <target bus="ide" dev="hdc"/>
            <readonly/>
            <serial/>
        </disk>
        <disk device="disk" snapshot="no" type="file">
            <source file="/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/94a298cb-87a5-48cd-b78f-770582b50639/c4864816-7384-4d0e-927e-29ea36d5fa4a"/>
            <target bus="scsi" dev="sda"/>
            <serial>94a298cb-87a5-48cd-b78f-770582b50639</serial>
            <boot order="1"/>
            <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/>
        </disk>
        <disk device="disk" snapshot="no" type="file">
            <address bus="0x00" domain="0x0000" function="0x0" slot="0x07" type="pci"/>
            <source file="/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/8df43d38-c4c7-4711-bc87-55f35d1550e5/42786233-04c0-4a9f-b731-6eaa2ca4d40d"/>
            <target bus="virtio" dev="vda"/>
            <serial>8df43d38-c4c7-4711-bc87-55f35d1550e5</serial>
            <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/>
        </disk>
        <disk device="disk" snapshot="no" type="file">
            <address bus="0" controller="0" target="0" type="drive" unit="0"/>
            <source file="/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2"/>
            <target bus="scsi" dev="sdb"/>
            <serial>e1886286-3d81-47d0-ae8d-77454e464078</serial>
            <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/>
        </disk>
        <sound model="ich6">
            <address bus="0x00" domain="0x0000" function="0x0" slot="0x04" type="pci"/>
        </sound>
        <memballoon model="virtio"/>
    </devices>
    <os>
        <type arch="x86_64" machine="pc-1.0">hvm</type>
        <smbios mode="sysinfo"/>
    </os>
    <sysinfo type="smbios">
        <system>
            <entry name="manufacturer">oVirt</entry>
            <entry name="product">oVirt Node</entry>
            <entry name="version">19-5</entry>
            <entry name="serial">2061001F-C600-0006-E1BC-BCAEC518BA45</entry>
            <entry name="uuid">c2aff4cc-0de6-4342-a565-669b1825838c</entry>
        </system>
    </sysinfo>
    <clock adjustment="-21600" offset="variable">
        <timer name="rtc" tickpolicy="catchup"/>
    </clock>
    <features>
        <acpi/>
    </features>
    <cpu match="exact">
        <model>Nehalem</model>
        <topology cores="1" sockets="3" threads="1"/>
    </cpu>
</domain>
Thread-32154::DEBUG::2014-01-08 11:54:40,218::libvirtconnection::108::libvirtconnection::(wrapper) Unknown libvirterror: ecode: 1 edom: 10 level: 2 message: internal error process exited while connecting to monitor: qemu-system-x86_64: -drive file=/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2,if=none,id=drive-scsi0-0-0-0,format=raw,serial=e1886286-3d81-47d0-ae8d-77454e464078,cache=none,werror=stop,rerror=stop,aio=threads: Duplicate ID 'drive-scsi0-0-0-0' for drive
Thread-32154::DEBUG::2014-01-08 11:54:40,218::vm::2109::vm.Vm::(_startUnderlyingVm) vmId=`c2aff4cc-0de6-4342-a565-669b1825838c`::_ongoingCreations released
Thread-32154::ERROR::2014-01-08 11:54:40,218::vm::2135::vm.Vm::(_startUnderlyingVm) vmId=`c2aff4cc-0de6-4342-a565-669b1825838c`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 2095, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/vm.py", line 3018, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py", line 76, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2805, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error process exited while connecting to monitor: qemu-system-x86_64: -drive file=/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2,if=none,id=drive-scsi0-0-0-0,format=raw,serial=e1886286-3d81-47d0-ae8d-77454e464078,cache=none,werror=stop,rerror=stop,aio=threads: Duplicate ID 'drive-scsi0-0-0-0' for drive
Thread-32154::DEBUG::2014-01-08 11:54:40,223::vm::2577::vm.Vm::(setDownStatus) vmId=`c2aff4cc-0de6-4342-a565-669b1825838c`::Changed state to Down: internal error process exited while connecting to monitor: qemu-system-x86_64: -drive file=/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2,if=none,id=drive-scsi0-0-0-0,format=raw,serial=e1886286-3d81-47d0-ae8d-77454e464078,cache=none,werror=stop,rerror=stop,aio=threads: Duplicate ID 'drive-scsi0-0-0-0' for drive
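The failing -drive lines both carry id=drive-scsi0-0-0-0. In the domain XML, the sdb disk has an explicit drive address (bus 0, controller 0, target 0, unit 0) while sda has no address element at all, so it presumably gets auto-assigned the same first free slot. A toy sketch of how such an id collision arises; the exact id format is my reading of the error message, not a documented contract:

```python
def qemu_drive_id(addr):
    """Render a SCSI drive address as the id seen in the log, e.g.
    'drive-scsi0-0-0-0' for controller 0, bus 0, target 0, unit 0.
    (Field order inferred from the error message, not from qemu docs.)"""
    return "drive-scsi{controller}-{bus}-{target}-{unit}".format(**addr)


def find_duplicate_ids(addresses):
    """Return drive ids that occur more than once, in first-collision order."""
    seen, dups = set(), []
    for addr in addresses:
        drive_id = qemu_drive_id(addr)
        if drive_id in seen and drive_id not in dups:
            dups.append(drive_id)
        seen.add(drive_id)
    return dups


# sdb's explicit address, plus the address sda would get if the
# auto-assignment also starts at the first free slot:
addrs = [
    {"controller": 0, "bus": 0, "target": 0, "unit": 0},  # sdb (explicit)
    {"controller": 0, "bus": 0, "target": 0, "unit": 0},  # sda (auto)
]
collisions = find_duplicate_ids(addrs)
```

If that reading is right, giving each SCSI disk a distinct explicit unit (or letting neither have an explicit address) would avoid the clash.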
Thread-32158::WARNING::2014-01-08 11:54:42,185::clientIF::362::vds::(teardownVolumePath) Drive is not a vdsm image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2 VOLWM_FREE_PCT:50 _blockDev:False _checkIoTuneCategories:<bound method Drive._checkIoTuneCategories of <vm.Drive object at 0x7f1150113a90>> _customize:<bound method Drive._customize of <vm.Drive object at 0x7f1150113a90>> _deviceXML:<disk device="cdrom" snapshot="no" type="file"><address bus="1" controller="0" target="0" type="drive" unit="0"/><source file="" startupPolicy="optional"/><target bus="ide" dev="hdc"/><readonly/><serial></serial></disk> _makeName:<bound method Drive._makeName of <vm.Drive object at 0x7f1150113a90>> _setExtSharedState:<bound method Drive._setExtSharedState of <vm.Drive object at 0x7f1150113a90>> _validateIoTuneParams:<bound method Drive._validateIoTuneParams of <vm.Drive object at 0x7f1150113a90>> address:{' controller': '0', ' target': '0', 'unit': '0', ' bus': '1', ' type': 'drive'} apparentsize:0 blockDev:False cache:none conf:{'status': 'Down', 'acpiEnable': 'true', 'emulatedMachine': 'pc-1.0', 'vmId': 'c2aff4cc-0de6-4342-a565-669b1825838c', 'pid': '0', 'memGuaranteedSize': 4096, 'timeOffset': '-21600', 'keyboardLayout': 'en-us', 'displayPort': '-1', 'displaySecurePort': '-1', 'spiceSslCipherSuite': 'DEFAULT', 'cpuType': 'Nehalem', 'custom': {'device_142f948d-f916-4f42-bd28-cb4f0b8ebb40device_615c1466-850e-4362-a4fb-60df1aaee1e8device_861eb290-19bc-4ace-b2cb-85cbb2e0eb5adevice_94db7fa0-071d-4181-bac6-826ecfca3dd8device_a2e6354f-4ad6-475f-bd18-754fcedf505f': 'VmDevice {vmId=c2aff4cc-0de6-4342-a565-669b1825838c, deviceId=a2e6354f-4ad6-475f-bd18-754fcedf505f, device=unix, type=CHANNEL, bootOrder=0, specParams={}, address={port=2, bus=0, controller=0, type=virtio-serial}, managed=false, plugged=true, readOnly=false, deviceAlias=channel1, customProperties={}, snapshotId=null}', 'device_142f948d-f916-4f42-bd28-cb4f0b8ebb40': 'VmDevice {vmId=c2aff4cc-0de6-4342-a565-669b1825838c, deviceId=142f948d-f916-4f42-bd28-cb4f0b8ebb40, device=virtio-serial, type=CONTROLLER, bootOrder=0, specParams={}, address={bus=0x00, domain=0x0000, type=pci, slot=0x06, function=0x0}, managed=false, plugged=true, readOnly=false, deviceAlias=virtio-serial0, customProperties={}, snapshotId=null}', 'device_142f948d-f916-4f42-bd28-cb4f0b8ebb40device_615c1466-850e-4362-a4fb-60df1aaee1e8device_861eb290-19bc-4ace-b2cb-85cbb2e0eb5a': 'VmDevice {vmId=c2aff4cc-0de6-4342-a565-669b1825838c, deviceId=861eb290-19bc-4ace-b2cb-85cbb2e0eb5a, device=ide, type=CONTROLLER, bootOrder=0, specParams={}, address={bus=0x00, domain=0x0000, type=pci, slot=0x01, function=0x1}, managed=false, plugged=true, readOnly=false, deviceAlias=ide0, customProperties={}, snapshotId=null}', 'device_142f948d-f916-4f42-bd28-cb4f0b8ebb40device_615c1466-850e-4362-a4fb-60df1aaee1e8device_861eb290-19bc-4ace-b2cb-85cbb2e0eb5adevice_94db7fa0-071d-4181-bac6-826ecfca3dd8': 'VmDevice {vmId=c2aff4cc-0de6-4342-a565-669b1825838c, deviceId=94db7fa0-071d-4181-bac6-826ecfca3dd8, device=unix, type=CHANNEL, bootOrder=0, specParams={}, address={port=1, bus=0, controller=0, type=virtio-serial}, managed=false, plugged=true, readOnly=false, deviceAlias=channel0, customProperties={}, snapshotId=null}', 'device_142f948d-f916-4f42-bd28-cb4f0b8ebb40device_615c1466-850e-4362-a4fb-60df1aaee1e8': 'VmDevice {vmId=c2aff4cc-0de6-4342-a565-669b1825838c, deviceId=615c1466-850e-4362-a4fb-60df1aaee1e8, device=spicevmc, type=CHANNEL, bootOrder=0, specParams={}, address={port=3, bus=0, controller=0, type=virtio-serial}, managed=false, plugged=true, readOnly=false, deviceAlias=channel2, customProperties={}, snapshotId=null}'}, 'clientIp': '', 'exitCode': 1, 'nicModel': 'rtl8139,pv', 'smartcardEnable': 'false', 'kvmEnable': 'true', 'exitMessage': "internal error process exited while connecting to monitor: qemu-system-x86_64: -drive file=/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2,if=none,id=drive-scsi0-0-0-0,format=raw,serial=e1886286-3d81-47d0-ae8d-77454e464078,cache=none,werror=stop,rerror=stop,aio=threads: Duplicate ID 'drive-scsi0-0-0-0' for drive\n", 'transparentHugePages': 'true', 'devices': [{'specParams': {}, 'deviceId': 'db6166cb-e977-485e-8c82-fa48ca75e709', 'address': {'bus': '0x00', ' slot': '0x05', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}, 'device': 'scsi', 'model': 'virtio-scsi', 'type': 'controller'}, {'device': 'qxl', 'specParams': {'vram': '32768', 'heads': '1'}, 'type': 'video', 'deviceId': '8b0e3dbc-27c6-4eae-ba6b-201c3e1736aa', 'address': {'bus': '0x00', ' slot': '0x02', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}}, {'nicModel': 'pv', 'macAddr': '00:1a:4a:5b:9f:02', 'linkActive': 'true', 'network': 'ovirtmgmt', 'filter': 'vdsm-no-mac-spoofing', 'specParams': {}, 'deviceId': '738c8ebe-b014-4d65-8c78-942aaf12bfb5', 'address': {'bus': '0x00', ' slot': '0x03', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}, 'device': 'bridge', 'type': 'interface'}, {'index': '2', 'iface': 'ide', 'address': {' controller': '0', ' target': '0', 'unit': '0', ' bus': '1', ' type': 'drive'}, 'specParams': {'path': ''}, 'readonly': 'true', 'deviceId': '5611019a-948e-41b3-8ffd-75790ebcdf84', 'path': '', 'device': 'cdrom', 'shared': 'false', 'type': 'disk'}, {'volumeInfo': {'domainID': 'f14f471e-0cce-414d-af57-779eeb88c97a', 'volType': 'path', 'leaseOffset': 0, 'path': '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/94a298cb-87a5-48cd-b78f-770582b50639/c4864816-7384-4d0e-927e-29ea36d5fa4a', 'volumeID': 'c4864816-7384-4d0e-927e-29ea36d5fa4a', 'leasePath': '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/94a298cb-87a5-48cd-b78f-770582b50639/c4864816-7384-4d0e-927e-29ea36d5fa4a.lease', 'imageID': '94a298cb-87a5-48cd-b78f-770582b50639'}, 'index': 0, 'iface': 'scsi', 'apparentsize': '162135015424', 'imageID': '94a298cb-87a5-48cd-b78f-770582b50639', 'readonly': 'false', 'shared': 'false', 'truesize': '107119386624', 'type': 'disk', 'domainID': 'f14f471e-0cce-414d-af57-779eeb88c97a', 'reqsize': '0', 'format': 'raw', 'deviceId': '94a298cb-87a5-48cd-b78f-770582b50639', 'poolID': '18f6234c-a9de-4fdf-bd9a-2bd90b9f33f9', 'device': 'disk', 'path': '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/94a298cb-87a5-48cd-b78f-770582b50639/c4864816-7384-4d0e-927e-29ea36d5fa4a', 'propagateErrors': 'off', 'optional': 'false', 'bootOrder': '1', 'volumeID': 'c4864816-7384-4d0e-927e-29ea36d5fa4a', 'specParams': {}, 'volumeChain': [{'domainID': 'f14f471e-0cce-414d-af57-779eeb88c97a', 'volType': 'path', 'leaseOffset': 0, 'path': '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/94a298cb-87a5-48cd-b78f-770582b50639/c4864816-7384-4d0e-927e-29ea36d5fa4a', 'volumeID': 'c4864816-7384-4d0e-927e-29ea36d5fa4a', 'leasePath': '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/94a298cb-87a5-48cd-b78f-770582b50639/c4864816-7384-4d0e-927e-29ea36d5fa4a.lease', 'imageID': '94a298cb-87a5-48cd-b78f-770582b50639'}]}, {'address': {'bus': '0x00', ' slot': '0x07', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}, 'volumeInfo': {'domainID': 'f14f471e-0cce-414d-af57-779eeb88c97a', 'volType': 'path', 'leaseOffset': 0, 'path': '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/8df43d38-c4c7-4711-bc87-55f35d1550e5/42786233-04c0-4a9f-b731-6eaa2ca4d40d', 'volumeID': '42786233-04c0-4a9f-b731-6eaa2ca4d40d', 'leasePath': '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/8df43d38-c4c7-4711-bc87-55f35d1550e5/42786233-04c0-4a9f-b731-6eaa2ca4d40d.lease', 'imageID': '8df43d38-c4c7-4711-bc87-55f35d1550e5'}, 'index': '0', 'iface': 'virtio', 'apparentsize': '1073741824', 'imageID': '8df43d38-c4c7-4711-bc87-55f35d1550e5', 'readonly': 'false', 'shared': 'false', 'truesize': '0', 'type': 'disk', 'domainID': 'f14f471e-0cce-414d-af57-779eeb88c97a', 'reqsize': '0', 'format': 'raw', 'deviceId': '8df43d38-c4c7-4711-bc87-55f35d1550e5', 'poolID': '18f6234c-a9de-4fdf-bd9a-2bd90b9f33f9', 'device': 'disk', 'path': '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/8df43d38-c4c7-4711-bc87-55f35d1550e5/42786233-04c0-4a9f-b731-6eaa2ca4d40d', 'propagateErrors': 'off', 'optional': 'false', 'volumeID': '42786233-04c0-4a9f-b731-6eaa2ca4d40d', 'specParams': {}, 'volumeChain': [{'domainID': 'f14f471e-0cce-414d-af57-779eeb88c97a', 'volType': 'path', 'leaseOffset': 0, 'path': '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/8df43d38-c4c7-4711-bc87-55f35d1550e5/42786233-04c0-4a9f-b731-6eaa2ca4d40d', 'volumeID': '42786233-04c0-4a9f-b731-6eaa2ca4d40d', 'leasePath': '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/8df43d38-c4c7-4711-bc87-55f35d1550e5/42786233-04c0-4a9f-b731-6eaa2ca4d40d.lease', 'imageID': '8df43d38-c4c7-4711-bc87-55f35d1550e5'}]}, {'address': {' controller': '0', ' target': '0', 'unit': '0', ' bus': '0', ' type': 'drive'}, 'volumeInfo': {'domainID': 'f14f471e-0cce-414d-af57-779eeb88c97a', 'volType': 'path', 'leaseOffset': 0, 'path': '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2', 'volumeID': '1ef64ebd-6415-43d1-a34f-28f05cf6dea2', 'leasePath': '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2.lease', 'imageID': 'e1886286-3d81-47d0-ae8d-77454e464078'}, 'index': '1', 'iface': 'scsi', 'apparentsize': '1073741824', 'imageID': 'e1886286-3d81-47d0-ae8d-77454e464078', 'readonly': 'false', 'shared': 'false', 'truesize': '0', 'type': 'disk', 'domainID': 'f14f471e-0cce-414d-af57-779eeb88c97a', 'reqsize': '0', 'format': 'raw', 'deviceId': 'e1886286-3d81-47d0-ae8d-77454e464078', 'poolID': '18f6234c-a9de-4fdf-bd9a-2bd90b9f33f9', 'device': 'disk', 'path': '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2', 'propagateErrors': 'off', 'optional': 'false', 'volumeID': '1ef64ebd-6415-43d1-a34f-28f05cf6dea2', 'specParams': {}, 'volumeChain': [{'domainID': 'f14f471e-0cce-414d-af57-779eeb88c97a', 'volType': 'path', 'leaseOffset': 0, 'path': '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2', 'volumeID': '1ef64ebd-6415-43d1-a34f-28f05cf6dea2', 'leasePath': '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2.lease', 'imageID': 'e1886286-3d81-47d0-ae8d-77454e464078'}]}, {'device': 'ich6', 'specParams': {}, 'type': 'sound', 'deviceId': 'a1e596e9-218f-46ba-9f32-b9c966e11d73', 'address': {'bus': '0x00', ' slot': '0x04', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}}, {'device': 'memballoon', 'specParams': {'model': 'virtio'}, 'type': 'balloon', 'deviceId': '5c04fd7e-7249-4e3a-b8eb-91cce72d5b60', 'target': 4194304}], 'smp': '3', 'vmType': 'kvm', 'memSize': 4096, 'displayIp': '0', 'spiceSecureChannels': 'smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard', 'smpCoresPerSocket': '1', 'vmName': 'cobra', 'display': 'qxl', 'nice': '0'} createXmlElem:<bound method Drive.createXmlElem of <vm.Drive object at 0x7f1150113a90>> device:cdrom deviceId:5611019a-948e-41b3-8ffd-75790ebcdf84 extSharedState:none getLeasesXML:<bound method Drive.getLeasesXML of <vm.Drive object at 0x7f1150113a90>> getNextVolumeSize:<bound method Drive.getNextVolumeSize of <vm.Drive object at 0x7f1150113a90>> getXML:<bound method Drive.getXML of <vm.Drive object at 0x7f1150113a90>> hasVolumeLeases:False iface:ide index:2 isDiskReplicationInProgress:<bound method Drive.isDiskReplicationInProgress of <vm.Drive object at 0x7f1150113a90>> isVdsmImage:<bound method Drive.isVdsmImage of <vm.Drive object at 0x7f1150113a90>> log:<logUtils.SimpleLogAdapter object at 0x7f111838af90> name:hdc networkDev:False path: readonly:true reqsize:0 serial: shared:false specParams:{'path': ''} truesize:0 type:disk volExtensionChunk:1024 watermarkLimit:536870912
Traceback (most recent call last):
  File "/usr/share/vdsm/clientIF.py", line 356, in teardownVolumePath
    res = self.irs.teardownImage(drive['domainID'],
  File "/usr/share/vdsm/vm.py", line 1389, in __getitem__
    raise KeyError(key)
KeyError: 'domainID'
Thread-32158::DEBUG::2014-01-08 11:54:42,186::task::579::TaskManager.Task::(_updateState) Task=`30d76cca-4645-4893-8d68-5cc68ba42dc3`::moving from state init -> state preparing
Thread-32158::INFO::2014-01-08 11:54:42,187::logUtils::44::dispatcher::(wrapper) Run and protect: teardownImage(sdUUID='f14f471e-0cce-414d-af57-779eeb88c97a', spUUID='18f6234c-a9de-4fdf-bd9a-2bd90b9f33f9', imgUUID='94a298cb-87a5-48cd-b78f-770582b50639', volUUID=None)
Thread-32158::DEBUG::2014-01-08 11:54:42,187::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.f14f471e-0cce-414d-af57-779eeb88c97a`ReqID=`533f2699-0684-4247-9d5f-a858ffe96fe9`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3283' at 'teardownImage'
Thread-32158::DEBUG::2014-01-08 11:54:42,187::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' for lock type 'shared'
Thread-32158::DEBUG::2014-01-08 11:54:42,187::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' is free. Now locking as 'shared' (1 active user)
Thread-32158::DEBUG::2014-01-08 11:54:42,187::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.f14f471e-0cce-414d-af57-779eeb88c97a`ReqID=`533f2699-0684-4247-9d5f-a858ffe96fe9`::Granted request
Thread-32158::DEBUG::2014-01-08 11:54:42,188::task::811::TaskManager.Task::(resourceAcquired) Task=`30d76cca-4645-4893-8d68-5cc68ba42dc3`::_resourcesAcquired: Storage.f14f471e-0cce-414d-af57-779eeb88c97a (shared)
Thread-32158::DEBUG::2014-01-08 11:54:42,188::task::974::TaskManager.Task::(_decref) Task=`30d76cca-4645-4893-8d68-5cc68ba42dc3`::ref 1 aborting False
Thread-32158::INFO::2014-01-08 11:54:42,188::logUtils::47::dispatcher::(wrapper) Run and protect: teardownImage, Return response: None
Thread-32158::DEBUG::2014-01-08 11:54:42,188::task::1168::TaskManager.Task::(prepare) Task=`30d76cca-4645-4893-8d68-5cc68ba42dc3`::finished: None
Thread-32158::DEBUG::2014-01-08 11:54:42,188::task::579::TaskManager.Task::(_updateState) Task=`30d76cca-4645-4893-8d68-5cc68ba42dc3`::moving from state preparing -> state finished
Thread-32158::DEBUG::2014-01-08 11:54:42,188::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.f14f471e-0cce-414d-af57-779eeb88c97a': < ResourceRef 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a', isValid: 'True' obj: 'None'>}
Thread-32158::DEBUG::2014-01-08 11:54:42,188::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-32158::DEBUG::2014-01-08 11:54:42,189::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a'
Thread-32158::DEBUG::2014-01-08 11:54:42,189::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' (0 active users)
Thread-32158::DEBUG::2014-01-08 11:54:42,189::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' is free, finding out if anyone is waiting for it.
Thread-32158::DEBUG::2014-01-08 11:54:42,189::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a', Clearing records.
Thread-32158::DEBUG::2014-01-08 11:54:42,189::task::974::TaskManager.Task::(_decref) Task=`30d76cca-4645-4893-8d68-5cc68ba42dc3`::ref 0 aborting False
Thread-32158::DEBUG::2014-01-08 11:54:42,189::task::579::TaskManager.Task::(_updateState) Task=`4b2bde05-78f2-42b2-a5be-171377c6905e`::moving from state init -> state preparing
Thread-32158::INFO::2014-01-08 11:54:42,190::logUtils::44::dispatcher::(wrapper) Run and protect: teardownImage(sdUUID='f14f471e-0cce-414d-af57-779eeb88c97a', spUUID='18f6234c-a9de-4fdf-bd9a-2bd90b9f33f9', imgUUID='8df43d38-c4c7-4711-bc87-55f35d1550e5', volUUID=None)
Thread-32158::DEBUG::2014-01-08 11:54:42,190::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.f14f471e-0cce-414d-af57-779eeb88c97a`ReqID=`0907e117-
ba60-4c9f-a190-ebb7e027b4c2`::Request was made in =
'/usr/share/vdsm/storage/hsm.py' line '3283' at 'teardownImage'
Thread-32158::DEBUG::2014-01-08 =
11:54:42,190::resourceManager::541::ResourceManager::(registerResource) =
Trying to register resource =
'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' for lock type 'shared'
Thread-32158::DEBUG::2014-01-08 =
11:54:42,190::resourceManager::600::ResourceManager::(registerResource) =
Resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' is free. Now =
locking as 'shared' (1 active user)
Thread-32158::DEBUG::2014-01-08 =
11:54:42,190::resourceManager::237::ResourceManager.Request::(grant) =
ResName=3D`Storage.f14f471e-0cce-414d-af57-779eeb88c97a`ReqID=3D`0907e117-=
ba60-4c9f-a190-ebb7e027b4c2`::Granted request
Thread-32158::DEBUG::2014-01-08 =
11:54:42,190::task::811::TaskManager.Task::(resourceAcquired) =
Task=3D`4b2bde05-78f2-42b2-a5be-171377c6905e`::_resourcesAcquired: =
Storage.f14f471e-0cce-414d-af57-779eeb88c97a (shared)
Thread-32158::DEBUG::2014-01-08 =
11:54:42,191::task::974::TaskManager.Task::(_decref) =
Task=3D`4b2bde05-78f2-42b2-a5be-171377c6905e`::ref 1 aborting False
Thread-32158::INFO::2014-01-08 =
11:54:42,191::logUtils::47::dispatcher::(wrapper) Run and protect: =
teardownImage, Return response: None
Thread-32158::DEBUG::2014-01-08 =
11:54:42,191::task::1168::TaskManager.Task::(prepare) =
Task=3D`4b2bde05-78f2-42b2-a5be-171377c6905e`::finished: None
Thread-32158::DEBUG::2014-01-08 =
11:54:42,191::task::579::TaskManager.Task::(_updateState) =
Task=3D`4b2bde05-78f2-42b2-a5be-171377c6905e`::moving from state =
preparing -> state finished
Thread-32158::DEBUG::2014-01-08 =
11:54:42,191::resourceManager::939::ResourceManager.Owner::(releaseAll) =
Owner.releaseAll requests {} resources =
{'Storage.f14f471e-0cce-414d-af57-779eeb88c97a': < ResourceRef =
'Storage.f14f471e-0cce-414d-af57-779eeb88c97a', isValid: 'True' obj: =
'None'>}
Thread-32158::DEBUG::2014-01-08 =
11:54:42,191::resourceManager::976::ResourceManager.Owner::(cancelAll) =
Owner.cancelAll requests {}
Thread-32158::DEBUG::2014-01-08 =
11:54:42,191::resourceManager::615::ResourceManager::(releaseResource) =
Trying to release resource =
'Storage.f14f471e-0cce-414d-af57-779eeb88c97a'
Thread-32158::DEBUG::2014-01-08 =
11:54:42,192::resourceManager::634::ResourceManager::(releaseResource) =
Released resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' (0 =
active users)
Thread-32158::DEBUG::2014-01-08 =
11:54:42,192::resourceManager::640::ResourceManager::(releaseResource) =
Resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' is free, finding =
out if anyone is waiting for it.
Thread-32158::DEBUG::2014-01-08 =
11:54:42,192::resourceManager::648::ResourceManager::(releaseResource) =
No one is waiting for resource =
'Storage.f14f471e-0cce-414d-af57-779eeb88c97a', Clearing records.
Thread-32158::DEBUG::2014-01-08 =
11:54:42,192::task::974::TaskManager.Task::(_decref) =
Task=3D`4b2bde05-78f2-42b2-a5be-171377c6905e`::ref 0 aborting False
Thread-32158::DEBUG::2014-01-08 =
11:54:42,192::task::579::TaskManager.Task::(_updateState) =
Task=3D`75d1051f-b118-4af8-b9f1-504fcd1802c2`::moving from state init -> =
state preparing
Thread-32158::INFO::2014-01-08 =
11:54:42,192::logUtils::44::dispatcher::(wrapper) Run and protect: =
teardownImage(sdUUID=3D'f14f471e-0cce-414d-af57-779eeb88c97a', =
spUUID=3D'18f6234c-a9de-4fdf-bd9a-2bd90b9f33f9', =
imgUUID=3D'e1886286-3d81-47d0-ae8d-77454e464078', volUUID=3DNone)
Thread-32158::DEBUG::2014-01-08 =
11:54:42,193::resourceManager::197::ResourceManager.Request::(__init__) =
ResName=3D`Storage.f14f471e-0cce-414d-af57-779eeb88c97a`ReqID=3D`8908ad39-=
2555-4ef5-a6ae-49e8504de015`::Request was made in =
'/usr/share/vdsm/storage/hsm.py' line '3283' at 'teardownImage'
Thread-32158::DEBUG::2014-01-08 =
11:54:42,193::resourceManager::541::ResourceManager::(registerResource) =
Trying to register resource =
'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' for lock type 'shared'
Thread-32158::DEBUG::2014-01-08 =
11:54:42,196::resourceManager::600::ResourceManager::(registerResource) =
Resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' is free. Now =
locking as 'shared' (1 active user)
Thread-32158::DEBUG::2014-01-08 =
11:54:42,196::resourceManager::237::ResourceManager.Request::(grant) =
ResName=3D`Storage.f14f471e-0cce-414d-af57-779eeb88c97a`ReqID=3D`8908ad39-=
2555-4ef5-a6ae-49e8504de015`::Granted request
Thread-32158::DEBUG::2014-01-08 =
11:54:42,196::task::811::TaskManager.Task::(resourceAcquired) =
Task=3D`75d1051f-b118-4af8-b9f1-504fcd1802c2`::_resourcesAcquired: =
Storage.f14f471e-0cce-414d-af57-779eeb88c97a (shared)
Thread-32158::DEBUG::2014-01-08 =
11:54:42,196::task::974::TaskManager.Task::(_decref) =
Task=3D`75d1051f-b118-4af8-b9f1-504fcd1802c2`::ref 1 aborting False
Thread-32158::INFO::2014-01-08 =
11:54:42,196::logUtils::47::dispatcher::(wrapper) Run and protect: =
teardownImage, Return response: None
Thread-32158::DEBUG::2014-01-08 =
11:54:42,196::task::1168::TaskManager.Task::(prepare) =
Task=3D`75d1051f-b118-4af8-b9f1-504fcd1802c2`::finished: None
Thread-32158::DEBUG::2014-01-08 =
11:54:42,197::task::579::TaskManager.Task::(_updateState) =
Task=3D`75d1051f-b118-4af8-b9f1-504fcd1802c2`::moving from state =
preparing -> state finished
Thread-32158::DEBUG::2014-01-08 =
11:54:42,197::resourceManager::939::ResourceManager.Owner::(releaseAll) =
Owner.releaseAll requests {} resources =
{'Storage.f14f471e-0cce-414d-af57-779eeb88c97a': < ResourceRef =
'Storage.f14f471e-0cce-414d-af57-779eeb88c97a', isValid: 'True' obj: =
'None'>}
Thread-32158::DEBUG::2014-01-08 =
11:54:42,197::resourceManager::976::ResourceManager.Owner::(cancelAll) =
Owner.cancelAll requests {}
Thread-32158::DEBUG::2014-01-08 =
11:54:42,197::resourceManager::615::ResourceManager::(releaseResource) =
Trying to release resource =
'Storage.f14f471e-0cce-414d-af57-779eeb88c97a'
Thread-32158::DEBUG::2014-01-08 =
11:54:42,197::resourceManager::634::ResourceManager::(releaseResource) =
Released resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' (0 =
active users)
Thread-32158::DEBUG::2014-01-08 =
11:54:42,197::resourceManager::640::ResourceManager::(releaseResource) =
Resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' is free, finding =
out if anyone is waiting for it.
So twice now under oVirt 3.3.2 I have added 2 virtio-scsi devices to a single virtual host.

After doing so, the VM would fail to boot due to a duplicate disk ID. The first time I thought it a fluke, second time's a bug?

Fortunately they were empty data disks and I was able to get around the problem by deleting one and recreating it.

VDSM log:

Thread-32154::INFO::2014-01-08 11:54:39,717::clientIF::350::vds::(prepareVolumePath) prepared volume path: /rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2
Thread-32154::DEBUG::2014-01-08 11:54:39,740::vm::2984::vm.Vm::(_run) vmId=`c2aff4cc-0de6-4342-a565-669b1825838c`::<?xml version="1.0" encoding="utf-8"?>
<domain type="kvm">
    <name>cobra</name>
    <uuid>c2aff4cc-0de6-4342-a565-669b1825838c</uuid>
    <memory>4194304</memory>
    <currentMemory>4194304</currentMemory>
    <vcpu>3</vcpu>
    <memtune>
        <min_guarantee>4194304</min_guarantee>
    </memtune>
    <devices>
        <channel type="unix">
            <target name="com.redhat.rhevm.vdsm" type="virtio"/>
            <source mode="bind" path="/var/lib/libvirt/qemu/channels/c2aff4cc-0de6-4342-a565-669b1825838c.com.redhat.rhevm.vdsm"/>
        </channel>
        <channel type="unix">
            <target name="org.qemu.guest_agent.0" type="virtio"/>
            <source mode="bind" path="/var/lib/libvirt/qemu/channels/c2aff4cc-0de6-4342-a565-669b1825838c.org.qemu.guest_agent.0"/>
        </channel>
        <input bus="ps2" type="mouse"/>
        <channel type="spicevmc">
            <target name="com.redhat.spice.0" type="virtio"/>
        </channel>
        <graphics autoport="yes" keymap="en-us" listen="0" passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" tlsPort="-1" type="spice">
            <channel mode="secure" name="main"/>
            <channel mode="secure" name="inputs"/>
            <channel mode="secure" name="cursor"/>
            <channel mode="secure" name="playback"/>
            <channel mode="secure" name="record"/>
            <channel mode="secure" name="display"/>
            <channel mode="secure" name="usbredir"/>
            <channel mode="secure" name="smartcard"/>
        </graphics>
        <controller model="virtio-scsi" type="scsi">
            <address bus="0x00" domain="0x0000" function="0x0" slot="0x05" type="pci"/>
        </controller>
        <video>
            <address bus="0x00" domain="0x0000" function="0x0" slot="0x02" type="pci"/>
            <model heads="1" type="qxl" vram="32768"/>
        </video>
        <interface type="bridge">
            <address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci"/>
            <mac address="00:1a:4a:5b:9f:02"/>
            <model type="virtio"/>
            <source bridge="ovirtmgmt"/>
            <filterref filter="vdsm-no-mac-spoofing"/>
            <link state="up"/>
        </interface>
        <disk device="cdrom" snapshot="no" type="file">
            <address bus="1" controller="0" target="0" type="drive" unit="0"/>
            <source file="" startupPolicy="optional"/>
            <target bus="ide" dev="hdc"/>
            <readonly/>
            <serial/>
        </disk>
        <disk device="disk" snapshot="no" type="file">
            <source file="/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/94a298cb-87a5-48cd-b78f-770582b50639/c4864816-7384-4d0e-927e-29ea36d5fa4a"/>
            <target bus="scsi" dev="sda"/>
            <serial>94a298cb-87a5-48cd-b78f-770582b50639</serial>
            <boot order="1"/>
            <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/>
        </disk>
        <disk device="disk" snapshot="no" type="file">
            <address bus="0x00" domain="0x0000" function="0x0" slot="0x07" type="pci"/>
            <source file="/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/8df43d38-c4c7-4711-bc87-55f35d1550e5/42786233-04c0-4a9f-b731-6eaa2ca4d40d"/>
            <target bus="virtio" dev="vda"/>
            <serial>8df43d38-c4c7-4711-bc87-55f35d1550e5</serial>
            <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/>
        </disk>
        <disk device="disk" snapshot="no" type="file">
            <address bus="0" controller="0" target="0" type="drive" unit="0"/>
            <source file="/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2"/>
            <target bus="scsi" dev="sdb"/>
            <serial>e1886286-3d81-47d0-ae8d-77454e464078</serial>
            <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/>
        </disk>
        <sound model="ich6">
            <address bus="0x00" domain="0x0000" function="0x0" slot="0x04" type="pci"/>
        </sound>
        <memballoon model="virtio"/>
    </devices>
    <os>
        <type arch="x86_64" machine="pc-1.0">hvm</type>
        <smbios mode="sysinfo"/>
    </os>
    <sysinfo type="smbios">
        <system>
            <entry name="manufacturer">oVirt</entry>
            <entry name="product">oVirt Node</entry>
            <entry name="version">19-5</entry>
            <entry name="serial">2061001F-C600-0006-E1BC-BCAEC518BA45</entry>
            <entry name="uuid">c2aff4cc-0de6-4342-a565-669b1825838c</entry>
        </system>
    </sysinfo>
    <clock adjustment="-21600" offset="variable">
        <timer name="rtc" tickpolicy="catchup"/>
    </clock>
    <features>
        <acpi/>
    </features>
    <cpu match="exact">
        <model>Nehalem</model>
        <topology cores="1" sockets="3" threads="1"/>
    </cpu>
</domain>
Thread-32154::DEBUG::2014-01-08 11:54:40,218::libvirtconnection::108::libvirtconnection::(wrapper) Unknown libvirterror: ecode: 1 edom: 10 level: 2 message: internal error process exited while connecting to monitor: qemu-system-x86_64: -drive file=/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2,if=none,id=drive-scsi0-0-0-0,format=raw,serial=e1886286-3d81-47d0-ae8d-77454e464078,cache=none,werror=stop,rerror=stop,aio=threads: Duplicate ID 'drive-scsi0-0-0-0' for drive

Thread-32154::DEBUG::2014-01-08 11:54:40,218::vm::2109::vm.Vm::(_startUnderlyingVm) vmId=`c2aff4cc-0de6-4342-a565-669b1825838c`::_ongoingCreations released
Thread-32154::ERROR::2014-01-08 11:54:40,218::vm::2135::vm.Vm::(_startUnderlyingVm) vmId=`c2aff4cc-0de6-4342-a565-669b1825838c`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 2095, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/vm.py", line 3018, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py", line 76, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2805, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error process exited while connecting to monitor: qemu-system-x86_64: -drive file=/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2,if=none,id=drive-scsi0-0-0-0,format=raw,serial=e1886286-3d81-47d0-ae8d-77454e464078,cache=none,werror=stop,rerror=stop,aio=threads: Duplicate ID 'drive-scsi0-0-0-0' for drive

Thread-32154::DEBUG::2014-01-08 11:54:40,223::vm::2577::vm.Vm::(setDownStatus) vmId=`c2aff4cc-0de6-4342-a565-669b1825838c`::Changed state to Down: internal error process exited while connecting to monitor: qemu-system-x86_64: -drive file=/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2,if=none,id=drive-scsi0-0-0-0,format=raw,serial=e1886286-3d81-47d0-ae8d-77454e464078,cache=none,werror=stop,rerror=stop,aio=threads: Duplicate ID 'drive-scsi0-0-0-0' for drive
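For what it's worth, qemu derives the drive id 'drive-scsiC-B-T-U' from the SCSI disk's controller/bus/target/unit drive address, so the failure above means two SCSI disks resolved to address 0:0:0:0 (one disk in the domain XML carries that address explicitly, the other has no address element and appears to have been auto-assigned into the same slot). A rough way to pre-check a domain XML for explicit collisions is sketched below; this is not vdsm code, and the helper names are made up for the illustration:

```python
import xml.etree.ElementTree as ET
from collections import Counter

def scsi_drive_addresses(domain_xml):
    """Return the explicit <address type="drive"> tuples of all SCSI disks.

    Each tuple is (controller, bus, target, unit); two identical tuples
    mean qemu will refuse the second drive with a 'Duplicate ID' error.
    """
    addrs = []
    for disk in ET.fromstring(domain_xml).iter("disk"):
        target = disk.find("target")
        if target is None or target.get("bus") != "scsi":
            continue  # only SCSI disks share the drive-scsi* id namespace
        addr = disk.find("address")
        if addr is not None and addr.get("type") == "drive":
            addrs.append((addr.get("controller", "0"), addr.get("bus", "0"),
                          addr.get("target", "0"), addr.get("unit", "0")))
    return addrs

def duplicate_scsi_addresses(domain_xml):
    """List every drive address claimed by more than one SCSI disk."""
    counts = Counter(scsi_drive_addresses(domain_xml))
    return [addr for addr, n in counts.items() if n > 1]
```

Note the limitation: this only catches disks that both carry an explicit address. A disk with no address element (like sda above) that libvirt auto-assigns into an already-claimed slot has to be inferred from the qemu error itself.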
Thread-32158::WARNING::2014-01-08 11:54:42,185::clientIF::362::vds::(teardownVolumePath) Drive is not a vdsm image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2 VOLWM_FREE_PCT:50 _blockDev:False _checkIoTuneCategories:<bound method Drive._checkIoTuneCategories of <vm.Drive object at 0x7f1150113a90>> _customize:<bound method Drive._customize of <vm.Drive object at 0x7f1150113a90>> _deviceXML:<disk device="cdrom" snapshot="no" type="file"><address bus="1" controller="0" target="0" type="drive" unit="0"/><source file="" startupPolicy="optional"/><target bus="ide" dev="hdc"/><readonly/><serial></serial></disk> _makeName:<bound method Drive._makeName of <vm.Drive object at 0x7f1150113a90>> _setExtSharedState:<bound method Drive._setExtSharedState of <vm.Drive object at 0x7f1150113a90>> _validateIoTuneParams:<bound method Drive._validateIoTuneParams of <vm.Drive object at 0x7f1150113a90>> address:{' controller': '0', ' target': '0', 'unit': '0', ' bus': '1', ' type': 'drive'} apparentsize:0 blockDev:False cache:none conf:{'status': 'Down', 'acpiEnable': 'true', 'emulatedMachine': 'pc-1.0', 'vmId': 'c2aff4cc-0de6-4342-a565-669b1825838c', 'pid': '0', 'memGuaranteedSize': 4096, 'timeOffset': '-21600', 'keyboardLayout': 'en-us', 'displayPort': '-1', 'displaySecurePort': '-1', 'spiceSslCipherSuite': 'DEFAULT', 'cpuType': 'Nehalem', 'custom': {'device_142f948d-f916-4f42-bd28-cb4f0b8ebb40device_615c1466-850e-4362-a4fb-60df1aaee1e8device_861eb290-19bc-4ace-b2cb-85cbb2e0eb5adevice_94db7fa0-071d-4181-bac6-826ecfca3dd8device_a2e6354f-4ad6-475f-bd18-754fcedf505f': 'VmDevice {vmId=c2aff4cc-0de6-4342-a565-669b1825838c, deviceId=a2e6354f-4ad6-475f-bd18-754fcedf505f, device=unix, type=CHANNEL, bootOrder=0, specParams={}, address={port=2, bus=0, controller=0, type=virtio-serial}, managed=false, plugged=true, readOnly=false, deviceAlias=channel1, customProperties={}, snapshotId=null}', 'device_142f948d-f916-4f42-bd28-cb4f0b8ebb40': 'VmDevice {vmId=c2aff4cc-0de6-4342-a565-669b1825838c, deviceId=142f948d-f916-4f42-bd28-cb4f0b8ebb40, device=virtio-serial, type=CONTROLLER, bootOrder=0, specParams={}, address={bus=0x00, domain=0x0000, type=pci, slot=0x06, function=0x0}, managed=false, plugged=true, readOnly=false, deviceAlias=virtio-serial0, customProperties={}, snapshotId=null}', 'device_142f948d-f916-4f42-bd28-cb4f0b8ebb40device_615c1466-850e-4362-a4fb-60df1aaee1e8device_861eb290-19bc-4ace-b2cb-85cbb2e0eb5a': 'VmDevice {vmId=c2aff4cc-0de6-4342-a565-669b1825838c, deviceId=861eb290-19bc-4ace-b2cb-85cbb2e0eb5a, device=ide, type=CONTROLLER, bootOrder=0, specParams={}, address={bus=0x00, domain=0x0000, type=pci, slot=0x01, function=0x1}, managed=false, plugged=true, readOnly=false, deviceAlias=ide0, customProperties={}, snapshotId=null}', 'device_142f948d-f916-4f42-bd28-cb4f0b8ebb40device_615c1466-850e-4362-a4fb-60df1aaee1e8device_861eb290-19bc-4ace-b2cb-85cbb2e0eb5adevice_94db7fa0-071d-4181-bac6-826ecfca3dd8': 'VmDevice {vmId=c2aff4cc-0de6-4342-a565-669b1825838c, deviceId=94db7fa0-071d-4181-bac6-826ecfca3dd8, device=unix, type=CHANNEL, bootOrder=0, specParams={}, address={port=1, bus=0, controller=0, type=virtio-serial}, managed=false, plugged=true, readOnly=false, deviceAlias=channel0, customProperties={}, snapshotId=null}', 'device_142f948d-f916-4f42-bd28-cb4f0b8ebb40device_615c1466-850e-4362-a4fb-60df1aaee1e8': 'VmDevice {vmId=c2aff4cc-0de6-4342-a565-669b1825838c, deviceId=615c1466-850e-4362-a4fb-60df1aaee1e8, device=spicevmc, type=CHANNEL, bootOrder=0, specParams={}, address={port=3, bus=0, controller=0, type=virtio-serial}, managed=false, plugged=true, readOnly=false, deviceAlias=channel2, customProperties={}, snapshotId=null}'}, 'clientIp': '', 'exitCode': 1, 'nicModel': 'rtl8139,pv', 'smartcardEnable': 'false', 'kvmEnable': 'true', 'exitMessage': "internal error process exited while connecting to monitor: qemu-system-x86_64: -drive file=/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2,if=none,id=drive-scsi0-0-0-0,format=raw,serial=e1886286-3d81-47d0-ae8d-77454e464078,cache=none,werror=stop,rerror=stop,aio=threads: Duplicate ID 'drive-scsi0-0-0-0' for drive\n", 'transparentHugePages': 'true', 'devices': [{'specParams': {}, 'deviceId': 'db6166cb-e977-485e-8c82-fa48ca75e709', 'address': {'bus': '0x00', ' slot': '0x05', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}, 'device': 'scsi', 'model': 'virtio-scsi', 'type': 'controller'}, {'device': 'qxl', 'specParams': {'vram': '32768', 'heads': '1'}, 'type': 'video', 'deviceId': '8b0e3dbc-27c6-4eae-ba6b-201c3e1736aa', 'address': {'bus': '0x00', ' slot': '0x02', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}}, {'nicModel': 'pv', 'macAddr': '00:1a:4a:5b:9f:02', 'linkActive': 'true', 'network': 'ovirtmgmt', 'filter': 'vdsm-no-mac-spoofing', 'specParams': {}, 'deviceId': '738c8ebe-b014-4d65-8c78-942aaf12bfb5', 'address': {'bus': '0x00', ' slot': '0x03', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}, 'device': 'bridge', 'type': 'interface'}, {'index': '2', 'iface': 'ide', 'address': {' controller': '0', ' target': '0', 'unit': '0', ' bus': '1', ' type': 'drive'}, 'specParams': {'path': ''}, 'readonly': 'true', 'deviceId': '5611019a-948e-41b3-8ffd-75790ebcdf84', 'path': '', 'device': 'cdrom', 'shared': 'false', 'type': 'disk'}, {'volumeInfo': {'domainID': 'f14f471e-0cce-414d-af57-779eeb88c97a', 'volType': 'path', 'leaseOffset': 0, 'path': '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/94a298cb-87a5-48cd-b78f-770582b50639/c4864816-7384-4d0e-927e-29ea36d5fa4a', 'volumeID': 'c4864816-7384-4d0e-927e-29ea36d5fa4a', 'leasePath': '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/94a298cb-87a5-48cd-b78f-770582b50639/c4864816-7384-4d0e-927e-29ea36d5fa4a.lease', 'imageID': '94a298cb-87a5-48cd-b78f-770582b50639'}, 'index': 0, 'iface': 'scsi', 'apparentsize': '162135015424', 'imageID': '94a298cb-87a5-48cd-b78f-770582b50639', 'readonly': 'false', 'shared': 'false', 'truesize': '107119386624', 'type': 'disk', 'domainID': 'f14f471e-0cce-414d-af57-779eeb88c97a', 'reqsize': '0', 'format': 'raw', 'deviceId': '94a298cb-87a5-48cd-b78f-770582b50639', 'poolID': '18f6234c-a9de-4fdf-bd9a-2bd90b9f33f9', 'device': 'disk', 'path': '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/94a298cb-87a5-48cd-b78f-770582b50639/c4864816-7384-4d0e-927e-29ea36d5fa4a', 'propagateErrors': 'off', 'optional': 'false', 'bootOrder': '1', 'volumeID': 'c4864816-7384-4d0e-927e-29ea36d5fa4a', 'specParams': {}, 'volumeChain': [{'domainID': 'f14f471e-0cce-414d-af57-779eeb88c97a', 'volType': 'path', 'leaseOffset': 0, 'path': '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/94a298cb-87a5-48cd-b78f-770582b50639/c4864816-7384-4d0e-927e-29ea36d5fa4a', 'volumeID': 'c4864816-7384-4d0e-927e-29ea36d5fa4a', 'leasePath': '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/94a298cb-87a5-48cd-b78f-770582b50639/c4864816-7384-4d0e-927e-29ea36d5fa4a.lease', 'imageID': '94a298cb-87a5-48cd-b78f-770582b50639'}]}, {'address': {'bus': '0x00', ' slot': '0x07', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}, 'volumeInfo': {'domainID': 'f14f471e-0cce-414d-af57-779eeb88c97a', 'volType': 'path', 'leaseOffset': 0, 'path': '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/8df43d38-c4c7-4711-bc87-55f35d1550e5/42786233-04c0-4a9f-b731-6eaa2ca4d40d', 'volumeID': '42786233-04c0-4a9f-b731-6eaa2ca4d40d', 'leasePath': '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/8df43d38-c4c7-4711-bc87-55f35d1550e5/42786233-04c0-4a9f-b731-6eaa2ca4d40d.lease', 'imageID': '8df43d38-c4c7-4711-bc87-55f35d1550e5'}, 'index': '0', 'iface': 'virtio', 'apparentsize': '1073741824', 'imageID': '8df43d38-c4c7-4711-bc87-55f35d1550e5', 'readonly': 'false', 'shared': 'false', 'truesize': '0', 'type': 'disk', 'domainID': 'f14f471e-0cce-414d-af57-779eeb88c97a', 'reqsize': '0', 'format': 'raw', 'deviceId': '8df43d38-c4c7-4711-bc87-55f35d1550e5', 'poolID': '18f6234c-a9de-4fdf-bd9a-2bd90b9f33f9', 'device': 'disk', 'path': '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/8df43d38-c4c7-4711-bc87-55f35d1550e5/42786233-04c0-4a9f-b731-6eaa2ca4d40d', 'propagateErrors': 'off', 'optional': 'false', 'volumeID': '42786233-04c0-4a9f-b731-6eaa2ca4d40d', 'specParams': {}, 'volumeChain': [{'domainID': 'f14f471e-0cce-414d-af57-779eeb88c97a', 'volType': 'path', 'leaseOffset': 0, 'path': '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/8df43d38-c4c7-4711-bc87-55f35d1550e5/42786233-04c0-4a9f-b731-6eaa2ca4d40d', 'volumeID': '42786233-04c0-4a9f-b731-6eaa2ca4d40d', 'leasePath': '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/8df43d38-c4c7-4711-bc87-55f35d1550e5/42786233-04c0-4a9f-b731-6eaa2ca4d40d.lease', 'imageID': '8df43d38-c4c7-4711-bc87-55f35d1550e5'}]}, {'address': {' controller': '0', ' target': '0', 'unit': '0', ' bus': '0', ' type': 'drive'}, 'volumeInfo': {'domainID': 'f14f471e-0cce-414d-af57-779eeb88c97a', 'volType': 'path', 'leaseOffset': 0, 'path': '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2', 'volumeID': '1ef64ebd-6415-43d1-a34f-28f05cf6dea2', 'leasePath': '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2.lease', 'imageID': 'e1886286-3d81-47d0-ae8d-77454e464078'}, 'index': '1', 'iface': 'scsi', 'apparentsize': '1073741824', 'imageID': 'e1886286-3d81-47d0-ae8d-77454e464078', 'readonly': 'false', 'shared': 'false', 'truesize': '0', 'type': 'disk', 'domainID': 'f14f471e-0cce-414d-af57-779eeb88c97a', 'reqsize': '0', 'format': 'raw', 'deviceId': 'e1886286-3d81-47d0-ae8d-77454e464078', 'poolID': '18f6234c-a9de-4fdf-bd9a-2bd90b9f33f9', 'device': 'disk', 'path': '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2', 'propagateErrors': 'off', 'optional': 'false', 'volumeID': '1ef64ebd-6415-43d1-a34f-28f05cf6dea2', 'specParams': {}, 'volumeChain': [{'domainID': 'f14f471e-0cce-414d-af57-779eeb88c97a', 'volType': 'path', 'leaseOffset': 0, 'path': '/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/images/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6dea2', 'volumeID': '1ef64ebd-6415-43d1-a34f-28f05cf6dea2', 'leasePath':
'/rhev/data-center/mnt/_disk01_vm/f14f471e-0cce-414d-af57-779eeb88c97a/ima=
ges/e1886286-3d81-47d0-ae8d-77454e464078/1ef64ebd-6415-43d1-a34f-28f05cf6d=
ea2.lease', 'imageID': 'e1886286-3d81-47d0-ae8d-77454e464078'}]}, =
{'device': 'ich6', 'specParams': {}, 'type': 'sound', 'deviceId': =
'a1e596e9-218f-46ba-9f32-b9c966e11d73', 'address': {'bus': '0x00', ' =
slot': '0x04', ' domain': '0x0000', ' type': 'pci', ' function': =
'0x0'}}, {'device': 'memballoon', 'specParams': {'model': 'virtio'}, =
'type': 'balloon', 'deviceId': '5c04fd7e-7249-4e3a-b8eb-91cce72d5b60', =
'target': 4194304}], 'smp': '3', 'vmType': 'kvm', 'memSize': 4096, =
'displayIp': '0', 'spiceSecureChannels': =
'smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard', =
'smpCoresPerSocket': '1', 'vmName': 'cobra', 'display': 'qxl', 'nice': =
'0'} createXmlElem:<bound method Drive.createXmlElem of <vm.Drive =
object at 0x7f1150113a90>> device:cdrom =
deviceId:5611019a-948e-41b3-8ffd-75790ebcdf84 extSharedState:none =
getLeasesXML:<bound method Drive.getLeasesXML of <vm.Drive object =
at 0x7f1150113a90>> getNextVolumeSize:<bound method =
Drive.getNextVolumeSize of <vm.Drive object at 0x7f1150113a90>> =
getXML:<bound method Drive.getXML of <vm.Drive object at =
0x7f1150113a90>> hasVolumeLeases:False iface:ide index:2 =
isDiskReplicationInProgress:<bound method =
Drive.isDiskReplicationInProgress of <vm.Drive object at =
0x7f1150113a90>> isVdsmImage:<bound method Drive.isVdsmImage of =
<vm.Drive object at 0x7f1150113a90>> =
log:<logUtils.SimpleLogAdapter object at 0x7f111838af90> name:hdc =
networkDev:False path: readonly:true reqsize:0 serial: shared:false =
specParams:{'path': ''} truesize:0 type:disk volExtensionChunk:1024 =
watermarkLimit:536870912</div><div style=3D"margin: 0px;">Traceback =
(most recent call last):</div><div style=3D"margin: 0px;"> File =
"/usr/share/vdsm/clientIF.py", line 356, in teardownVolumePath</div><div =
style=3D"margin: 0px;"> res =3D =
self.irs.teardownImage(drive['domainID'],</div><div style=3D"margin: =
0px;"> File "/usr/share/vdsm/vm.py", line 1389, in =
__getitem__</div><div style=3D"margin: 0px;"> raise =
KeyError(key)</div><div style=3D"margin: 0px;">KeyError: =
'domainID'</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,186::task::579::TaskManager.Task::(_updateState) =
Task=3D`30d76cca-4645-4893-8d68-5cc68ba42dc3`::moving from state init =
-> state preparing</div><div><div style=3D"margin: =
0px;">Thread-32158::INFO::2014-01-08 =
11:54:42,187::logUtils::44::dispatcher::(wrapper) Run and protect: =
teardownImage(sdUUID=3D'f14f471e-0cce-414d-af57-779eeb88c97a', =
spUUID=3D'18f6234c-a9de-4fdf-bd9a-2bd90b9f33f9', =
imgUUID=3D'94a298cb-87a5-48cd-b78f-770582b50639', =
volUUID=3DNone)</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,187::resourceManager::197::ResourceManager.Request::(__init__) =
ResName=3D`Storage.f14f471e-0cce-414d-af57-779eeb88c97a`ReqID=3D`533f2699-=
0684-4247-9d5f-a858ffe96fe9`::Request was made in =
'/usr/share/vdsm/storage/hsm.py' line '3283' at =
'teardownImage'</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,187::resourceManager::541::ResourceManager::(registerResource) =
Trying to register resource =
'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' for lock type =
'shared'</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,187::resourceManager::600::ResourceManager::(registerResource) =
Resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' is free. Now =
locking as 'shared' (1 active user)</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,187::resourceManager::237::ResourceManager.Request::(grant) =
ResName=3D`Storage.f14f471e-0cce-414d-af57-779eeb88c97a`ReqID=3D`533f2699-=
0684-4247-9d5f-a858ffe96fe9`::Granted request</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,188::task::811::TaskManager.Task::(resourceAcquired) =
Task=3D`30d76cca-4645-4893-8d68-5cc68ba42dc3`::_resourcesAcquired: =
Storage.f14f471e-0cce-414d-af57-779eeb88c97a (shared)</div><div =
style=3D"margin: 0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,188::task::974::TaskManager.Task::(_decref) =
Task=3D`30d76cca-4645-4893-8d68-5cc68ba42dc3`::ref 1 aborting =
False</div><div style=3D"margin: 0px;">Thread-32158::INFO::2014-01-08 =
11:54:42,188::logUtils::47::dispatcher::(wrapper) Run and protect: =
teardownImage, Return response: None</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,188::task::1168::TaskManager.Task::(prepare) =
Task=3D`30d76cca-4645-4893-8d68-5cc68ba42dc3`::finished: None</div><div =
style=3D"margin: 0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,188::task::579::TaskManager.Task::(_updateState) =
Task=3D`30d76cca-4645-4893-8d68-5cc68ba42dc3`::moving from state =
preparing -> state finished</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,188::resourceManager::939::ResourceManager.Owner::(releaseAll) =
Owner.releaseAll requests {} resources =
{'Storage.f14f471e-0cce-414d-af57-779eeb88c97a': < ResourceRef =
'Storage.f14f471e-0cce-414d-af57-779eeb88c97a', isValid: 'True' obj: =
'None'>}</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,188::resourceManager::976::ResourceManager.Owner::(cancelAll) =
Owner.cancelAll requests {}</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,189::resourceManager::615::ResourceManager::(releaseResource) =
Trying to release resource =
'Storage.f14f471e-0cce-414d-af57-779eeb88c97a'</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,189::resourceManager::634::ResourceManager::(releaseResource) =
Released resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' (0 =
active users)</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,189::resourceManager::640::ResourceManager::(releaseResource) =
Resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' is free, finding =
out if anyone is waiting for it.</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,189::resourceManager::648::ResourceManager::(releaseResource) =
No one is waiting for resource =
'Storage.f14f471e-0cce-414d-af57-779eeb88c97a', Clearing =
records.</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,189::task::974::TaskManager.Task::(_decref) =
Task=3D`30d76cca-4645-4893-8d68-5cc68ba42dc3`::ref 0 aborting =
False</div><div style=3D"margin: 0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,189::task::579::TaskManager.Task::(_updateState) =
Task=3D`4b2bde05-78f2-42b2-a5be-171377c6905e`::moving from state init =
-> state preparing</div><div style=3D"margin: =
0px;">Thread-32158::INFO::2014-01-08 =
11:54:42,190::logUtils::44::dispatcher::(wrapper) Run and protect: =
teardownImage(sdUUID=3D'f14f471e-0cce-414d-af57-779eeb88c97a', =
spUUID=3D'18f6234c-a9de-4fdf-bd9a-2bd90b9f33f9', =
imgUUID=3D'8df43d38-c4c7-4711-bc87-55f35d1550e5', =
volUUID=3DNone)</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,190::resourceManager::197::ResourceManager.Request::(__init__) =
ResName=3D`Storage.f14f471e-0cce-414d-af57-779eeb88c97a`ReqID=3D`0907e117-=
ba60-4c9f-a190-ebb7e027b4c2`::Request was made in =
'/usr/share/vdsm/storage/hsm.py' line '3283' at =
'teardownImage'</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,190::resourceManager::541::ResourceManager::(registerResource) =
Trying to register resource =
'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' for lock type =
'shared'</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,190::resourceManager::600::ResourceManager::(registerResource) =
Resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' is free. Now =
locking as 'shared' (1 active user)</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,190::resourceManager::237::ResourceManager.Request::(grant) =
ResName=3D`Storage.f14f471e-0cce-414d-af57-779eeb88c97a`ReqID=3D`0907e117-=
ba60-4c9f-a190-ebb7e027b4c2`::Granted request</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,190::task::811::TaskManager.Task::(resourceAcquired) =
Task=3D`4b2bde05-78f2-42b2-a5be-171377c6905e`::_resourcesAcquired: =
Storage.f14f471e-0cce-414d-af57-779eeb88c97a (shared)</div><div =
style=3D"margin: 0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,191::task::974::TaskManager.Task::(_decref) =
Task=3D`4b2bde05-78f2-42b2-a5be-171377c6905e`::ref 1 aborting =
False</div><div style=3D"margin: 0px;">Thread-32158::INFO::2014-01-08 =
11:54:42,191::logUtils::47::dispatcher::(wrapper) Run and protect: =
teardownImage, Return response: None</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,191::task::1168::TaskManager.Task::(prepare) =
Task=3D`4b2bde05-78f2-42b2-a5be-171377c6905e`::finished: None</div><div =
style=3D"margin: 0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,191::task::579::TaskManager.Task::(_updateState) =
Task=3D`4b2bde05-78f2-42b2-a5be-171377c6905e`::moving from state =
preparing -> state finished</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,191::resourceManager::939::ResourceManager.Owner::(releaseAll) =
Owner.releaseAll requests {} resources =
{'Storage.f14f471e-0cce-414d-af57-779eeb88c97a': < ResourceRef =
'Storage.f14f471e-0cce-414d-af57-779eeb88c97a', isValid: 'True' obj: =
'None'>}</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,191::resourceManager::976::ResourceManager.Owner::(cancelAll) =
Owner.cancelAll requests {}</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,191::resourceManager::615::ResourceManager::(releaseResource) =
Trying to release resource =
'Storage.f14f471e-0cce-414d-af57-779eeb88c97a'</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,192::resourceManager::634::ResourceManager::(releaseResource) =
Released resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' (0 =
active users)</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,192::resourceManager::640::ResourceManager::(releaseResource) =
Resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' is free, finding =
out if anyone is waiting for it.</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,192::resourceManager::648::ResourceManager::(releaseResource) =
No one is waiting for resource =
'Storage.f14f471e-0cce-414d-af57-779eeb88c97a', Clearing =
records.</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,192::task::974::TaskManager.Task::(_decref) =
Task=3D`4b2bde05-78f2-42b2-a5be-171377c6905e`::ref 0 aborting =
False</div><div style=3D"margin: 0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,192::task::579::TaskManager.Task::(_updateState) =
Task=3D`75d1051f-b118-4af8-b9f1-504fcd1802c2`::moving from state init =
-> state preparing</div><div style=3D"margin: =
0px;">Thread-32158::INFO::2014-01-08 =
11:54:42,192::logUtils::44::dispatcher::(wrapper) Run and protect: =
teardownImage(sdUUID=3D'f14f471e-0cce-414d-af57-779eeb88c97a', =
spUUID=3D'18f6234c-a9de-4fdf-bd9a-2bd90b9f33f9', =
imgUUID=3D'e1886286-3d81-47d0-ae8d-77454e464078', =
volUUID=3DNone)</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,193::resourceManager::197::ResourceManager.Request::(__init__) =
ResName=3D`Storage.f14f471e-0cce-414d-af57-779eeb88c97a`ReqID=3D`8908ad39-=
2555-4ef5-a6ae-49e8504de015`::Request was made in =
'/usr/share/vdsm/storage/hsm.py' line '3283' at =
'teardownImage'</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,193::resourceManager::541::ResourceManager::(registerResource) =
Trying to register resource =
'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' for lock type =
'shared'</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,196::resourceManager::600::ResourceManager::(registerResource) =
Resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' is free. Now =
locking as 'shared' (1 active user)</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,196::resourceManager::237::ResourceManager.Request::(grant) =
ResName=3D`Storage.f14f471e-0cce-414d-af57-779eeb88c97a`ReqID=3D`8908ad39-=
2555-4ef5-a6ae-49e8504de015`::Granted request</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,196::task::811::TaskManager.Task::(resourceAcquired) =
Task=3D`75d1051f-b118-4af8-b9f1-504fcd1802c2`::_resourcesAcquired: =
Storage.f14f471e-0cce-414d-af57-779eeb88c97a (shared)</div><div =
style=3D"margin: 0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,196::task::974::TaskManager.Task::(_decref) =
Task=3D`75d1051f-b118-4af8-b9f1-504fcd1802c2`::ref 1 aborting =
False</div><div style=3D"margin: 0px;">Thread-32158::INFO::2014-01-08 =
11:54:42,196::logUtils::47::dispatcher::(wrapper) Run and protect: =
teardownImage, Return response: None</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,196::task::1168::TaskManager.Task::(prepare) =
Task=3D`75d1051f-b118-4af8-b9f1-504fcd1802c2`::finished: None</div><div =
style=3D"margin: 0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,197::task::579::TaskManager.Task::(_updateState) =
Task=3D`75d1051f-b118-4af8-b9f1-504fcd1802c2`::moving from state =
preparing -> state finished</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,197::resourceManager::939::ResourceManager.Owner::(releaseAll) =
Owner.releaseAll requests {} resources =
{'Storage.f14f471e-0cce-414d-af57-779eeb88c97a': < ResourceRef =
'Storage.f14f471e-0cce-414d-af57-779eeb88c97a', isValid: 'True' obj: =
'None'>}</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,197::resourceManager::976::ResourceManager.Owner::(cancelAll) =
Owner.cancelAll requests {}</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,197::resourceManager::615::ResourceManager::(releaseResource) =
Trying to release resource =
'Storage.f14f471e-0cce-414d-af57-779eeb88c97a'</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,197::resourceManager::634::ResourceManager::(releaseResource) =
Released resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' (0 =
active users)</div><div style=3D"margin: =
0px;">Thread-32158::DEBUG::2014-01-08 =
11:54:42,197::resourceManager::640::ResourceManager::(releaseResource) =
Resource 'Storage.f14f471e-0cce-414d-af57-779eeb88c97a' is free, finding =
out if anyone is waiting for it.</div></div><div style=3D"margin: =
0px;"><br></div></div><div style=3D"margin: 0px;"><div style=3D"margin: =
0px;"><br></div></div></div><div><br></div></div></body></html>=
--Apple-Mail=_877DC8BA-BA64-4899-AC8F-C9588B685A8B--
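The KeyError in the traceback above is raised because teardownVolumePath ends up indexing drive['domainID'] on the CDROM drive dumped just before it (name hdc, empty path), which carries none of the storage-domain keys. A minimal sketch of the kind of guard that avoids this (helper and parameter names here are hypothetical, not vdsm's actual code; the key set mirrors what a check like Drive.isVdsmImage would need):

```python
# Sketch: skip devices that are not vdsm-managed images before teardown.
# 'teardown_image' stands in for the irs.teardownImage() call in the log.

VDSM_IMAGE_KEYS = ('domainID', 'poolID', 'imageID', 'volumeID')


def is_vdsm_image(drive):
    """True only when every storage-domain key is present and non-empty."""
    return all(drive.get(key) for key in VDSM_IMAGE_KEYS)


def teardown_volume_paths(drives, teardown_image):
    """Tear down each image-backed drive, collecting errors instead of
    aborting the whole loop on the first bad device."""
    errors = []
    for drive in drives:
        if not is_vdsm_image(drive):
            # e.g. a CDROM with path '' has none of the image keys;
            # indexing drive['domainID'] here would raise KeyError.
            continue
        try:
            teardown_image(drive['domainID'], drive['poolID'],
                           drive['imageID'], drive['volumeID'])
        except Exception as exc:
            errors.append((drive.get('deviceId'), exc))
    return errors
```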
Hi,
Out of the blue, some of the hosts got some logical networks (LNs) out of
sync (not the same ones on all of them). When I try to resync a LN, no
resync option is shown, so I took one host, detached the LNs and
reconfigured them.
After that, ovirtmgmt is not created correctly: the VLAN interface is not
created on top of the bond, and the bond is added to the bridge instead of
the VLAN interface.
I got it partially running, but it is still non-operational because the
engine complains that the host can't mount the Gluster domain (it can
reach the network of the other brick):
[root@ovirt4 network-scripts]# gluster volume status
Status of volume: glusterfs
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick 192.168.128.82:/glusterfs                 49154   Y       13288
Brick 192.168.128.83:/glusterfs                 49154   Y       5961
NFS Server on localhost                         2049    Y       4557
Self-heal Daemon on localhost                   N/A     Y       4558
NFS Server on 192.168.128.82                    2049    Y       15815
Self-heal Daemon on 192.168.128.82              N/A     Y       15819
I'm not resynchronizing any other host, in case this happens to the
other hosts too.
Regards,
Hi

Is it possible to have oVirt engine installed on node1, while having both node1 and node2 as the virtualization nodes?

Yes, it doesn't sound right at all for redundancy, etc. However, I have a situation where I have two nodes in a small remote location that I can turn into virtualization nodes. I don't have another system I can leverage for the oVirt engine. Another option is to use the oVirt engine at the main site to access these two remote systems over a dedicated line.

Thanks
W
Hi!
I just noticed my Hypervisor nodes starting to complain about disks
almost being full. I started investigation and noticed that:
# du -h /var/log/libvirtd.log
100G /var/log/libvirtd.log
And many hosts' system partitions had indeed become full. :S
Why wasn't the file rotated? Well:
# ls -lah /var/log/libvirtd.log.* | wc -l
100
And the rotate policy says:
/etc/logrotate.d/libvirtd
## beginning of configuration section by vdsm
/var/log/libvirt/libvirtd.log {
rotate 100
missingok
copytruncate
size 15M
compress
compresscmd /usr/bin/xz
uncompresscmd /usr/bin/unxz
compressext .xz
}
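For context on why a partition can still fill up under this policy, a quick back-of-the-envelope sketch (figures are the ones from the config above; the assumption that logrotate runs once a day from cron.daily is the EL6 default):

```python
# What the 'rotate 100' / 'size 15M' policy bounds, and what it does not.

ROTATE = 100   # 'rotate 100' -> keep at most 100 rotated files
SIZE_MB = 15   # 'size 15M'   -> rotation trigger threshold

# If the log grows slowly, each rotated file is close to the trigger size,
# so the rotated set stays near this bound (xz shrinks it further):
rotated_cap_gib = ROTATE * SIZE_MB / 1024
print(f"slow-growth cap for rotated files: ~{rotated_cap_gib:.1f} GiB")

# But 'size' is only a trigger, evaluated when cron actually runs
# logrotate; it does not cap the live file. A runaway logger can write
# tens of GiB between daily runs, which matches the 100G
# /var/log/libvirtd.log observed above.
```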
Now, I have handled it by changing "100" to "1000", but I think a better
default should be shipped in general, don't you?
# rpm -qa | grep vdsm
vdsm-python-4.13.0-11.el6.x86_64
vdsm-python-cpopen-4.13.0-11.el6.x86_64
vdsm-4.13.0-11.el6.x86_64
vdsm-xmlrpc-4.13.0-11.el6.noarch
vdsm-cli-4.13.0-11.el6.noarch
--
Best regards
-------------------------------------------------------------------------------
Karli Sjöberg
Swedish University of Agricultural Sciences Box 7079 (Visiting Address
Kronåsvägen 8)
S-750 07 Uppsala, Sweden
Phone: +46-(0)18-67 15 66
karli.sjoberg(a)slu.se
Hi all,

Need some pointers here. I created one VM successfully with a network connection. New and cloned VMs can't seem to get network connections.

I have oVirt 3.3.2-1.el6 on CentOS 6.5. Two virtualization nodes are on 6.5 with GlusterFS.
    Both nodes have bond0 on the management subnet and bond1 on subnet1.
    bond0 has an IP assigned and bond1 has no IP assigned (just link up).

I created a VM with two NICs attached to the two logical networks. This VM1 works fine with both network connections. This VM can be started on either node fine with network connections.

When I tried to build a new VM, or build one from the template of VM1, I can't seem to get a network connection working on these new VMs. Yes, they have a logical network attached to them.

Thanks in advance,

Will
Hi,
oVirt 3.4.0 alpha will be delayed due to package dependency issues.
Maintainers are already working on this; the alpha will be released as soon as all dependency issues have been fixed.
Details on the missing dependencies:
On CentOS 6.5:
[ovirt-3.4.0-alpha]
name=Alpha builds of the oVirt 3.4 project
baseurl=http://resources.ovirt.org/releases/3.4.0-alpha/rpm/EL/$releasever/
enabled=1
skip_if_unavailable=1
gpgcheck=0
# repoclosure -r ovirt-3.4.0-alpha -l ovirt-3.3.2 -l base -l epel -l glusterfs-epel -l updates -l extra -l glusterfs-noarch-epel -l ovirt-stable -n
Reading in repository metadata - please wait....
Checking Dependencies
Repos looked at: 8
base
epel
glusterfs-epel
glusterfs-noarch-epel
ovirt-3.3.2
ovirt-3.4.0-alpha
ovirt-stable
updates
Num Packages in Repos: 16581
package: mom-0.3.2-20140101.git2691f25.el6.noarch from ovirt-3.4.0-alpha
unresolved deps:
procps-ng
package: otopi-devel-1.2.0-0.0.master.20130910.git4387efb.el6.noarch from ovirt-3.4.0-alpha
unresolved deps:
otopi-java = 0:1.2.0-0.0.master.20130910.git4387efb.el6
package: ovirt-engine-sdk-java-3.4.0.1-1.el6.noarch from ovirt-3.4.0-alpha
unresolved deps:
httpcomponents-client >= 0:4.2
apache-commons-logging
apache-commons-beanutils
package: vdsm-hook-vhostmd-4.14.0-1.git6fdd55f.el6.noarch from ovirt-3.4.0-alpha
unresolved deps:
vhostmd
On Fedora 19:
[ovirt-3.4.0-alpha]
name=Alpha builds of the oVirt 3.4 project
baseurl=http://resources.ovirt.org/releases/3.4.0-alpha/rpm/Fedora/$release…
enabled=1
skip_if_unavailable=1
gpgcheck=0
# repoclosure -n -r ovirt-3.4.0-alpha -l fedora -l updates -l ovirt-stable
Reading in repository metadata - please wait....
Checking Dependencies
Repos looked at: 4
fedora
ovirt-3.4.0-alpha
ovirt-stable
updates
Num Packages in Repos: 38832
package: otopi-devel-1.2.0-0.0.master.20130910.git4387efb.fc19.noarch from ovirt-3.4.0-alpha
unresolved deps:
otopi-java = 0:1.2.0-0.0.master.20130910.git4387efb.fc19
package: ovirt-engine-3.4.0-0.2.master.20140109103311.git6524789.fc19.noarch from ovirt-3.4.0-alpha
unresolved deps:
openstack-java-resteasy-connector >= 0:3.0.2
openstack-java-quantum-model >= 0:3.0.2
openstack-java-quantum-client >= 0:3.0.2
openstack-java-keystone-model >= 0:3.0.2
openstack-java-keystone-client >= 0:3.0.2
openstack-java-glance-model >= 0:3.0.2
openstack-java-glance-client >= 0:3.0.2
openstack-java-client >= 0:3.0.2
package: ovirt-engine-reports-3.4.0-0.2.master.20140109102135.fc19.noarch from ovirt-3.4.0-alpha
unresolved deps:
ovirt-engine-dwh >= 0:3.4.0
On Fedora 20 (ovirt-stable doesn't support Fedora 20):
[ovirt-3.4.0-alpha]
name=Alpha builds of the oVirt 3.4 project
baseurl=http://resources.ovirt.org/releases/3.4.0-alpha/rpm/Fedora/$release…
enabled=1
skip_if_unavailable=1
gpgcheck=0
# repoclosure -n -r ovirt-3.4.0-alpha -l fedora -l updates
Reading in repository metadata - please wait....
Checking Dependencies
Repos looked at: 3
fedora
ovirt-3.4.0-alpha
updates
Num Packages in Repos: 38822
package: otopi-devel-1.2.0-0.0.master.20130910.git4387efb.fc19.noarch from ovirt-3.4.0-alpha
unresolved deps:
otopi-java = 0:1.2.0-0.0.master.20130910.git4387efb.fc19
package: ovirt-engine-3.4.0-0.2.master.20140109103311.git6524789.fc19.noarch from ovirt-3.4.0-alpha
unresolved deps:
openstack-java-resteasy-connector >= 0:3.0.2
openstack-java-quantum-model >= 0:3.0.2
openstack-java-quantum-client >= 0:3.0.2
openstack-java-keystone-model >= 0:3.0.2
openstack-java-keystone-client >= 0:3.0.2
openstack-java-glance-model >= 0:3.0.2
openstack-java-glance-client >= 0:3.0.2
openstack-java-client >= 0:3.0.2
package: ovirt-engine-reports-3.4.0-0.2.master.20140109102135.fc19.noarch from ovirt-3.4.0-alpha
unresolved deps:
ovirt-engine-dwh >= 0:3.4.0
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com