Hi,

although we have already upgraded several hypervisor nodes to oVirt 3.5.1, the newest upgrade has left the host in a very strange state. We did:

- Host was removed from the cluster
- oVirt 3.5 repo was activated on the host
- Host was "reinstalled" from the engine

And we got:
- A host that is active and looks fine in the engine
- We can start/stop VMs on the host
- But we cannot live migrate machines to (or even away from) the host

Attached are the vdsm/libvirt/engine logs. Timestamps do not match, as we created them individually during different runs.

Somehow lost ...

Markus

*****************************
libvirt on target host:

2015-03-18 16:18:48.691+0000: 2093: debug : qemuMonitorJSONCommandWithFd:286 : Send command '{"execute":"qmp_capabilities","id":"libvirt-1"}' for write with FD -1
2015-03-18 16:18:48.691+0000: 2092: debug : qemuMonitorJSONIOProcessLine:179 : Line [{"QMP": {"version": {"qemu": {"micro": 2, "minor": 1, "major": 2}, "package": ""}, "capabilities": []}}]
2015-03-18 16:18:48.691+0000: 2092: debug : qemuMonitorJSONIOProcess:248 : Total used 105 bytes out of 105 available in buffer
2015-03-18 16:18:48.692+0000: 2092: debug : qemuMonitorJSONIOProcessLine:179 : Line [{"return": {}, "id": "libvirt-1"}]
2015-03-18 16:18:48.692+0000: 2092: debug : qemuMonitorJSONIOProcessLine:199 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb40c017670 reply={"return": {}, "id": "libvirt-1"}
2015-03-18 16:18:48.692+0000: 2092: debug : qemuMonitorJSONIOProcess:248 : Total used 35 bytes out of 35 available in buffer
2015-03-18 16:18:48.692+0000: 2093: debug : qemuMonitorJSONCommandWithFd:291 : Receive command reply ret=0 rxObject=0x7fb445fbdb10
2015-03-18 16:18:48.692+0000: 2093: debug : qemuMonitorJSONCommandWithFd:286 : Send command '{"execute":"query-chardev","id":"libvirt-2"}' for write with FD -1
2015-03-18 16:18:48.693+0000: 2092: debug : qemuMonitorJSONIOProcessLine:179 : Line [{"return": [{"frontend-open": false, "filename": "spicevmc", "label": "charchannel2"}, {"frontend-open": false, "filename": "unix:/var/lib/libvirt/qemu/channels/d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b.org.qemu.guest_agent.0,server", "label": "charchannel1"}, {"frontend-open": false, "filename": "unix:/var/lib/libvirt/qemu/channels/d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b.com.redhat.rhevm.vdsm,server", "label": "charchannel0"}, {"frontend-open": true, "filename": "unix:/var/lib/libvirt/qemu/colvm60.monitor,server", "label": "charmonitor"}], "id": "libvirt-2"}]
2015-03-18 16:18:48.693+0000: 2092: debug : qemuMonitorJSONIOProcessLine:199 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb40c017670 reply={"return": [{"frontend-open": false, "filename": "spicevmc", "label": "charchannel2"}, {"frontend-open": false, "filename": "unix:/var/lib/libvirt/qemu/channels/d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b.org.qemu.guest_agent.0,server", "label": "charchannel1"}, {"frontend-open": false, "filename": "unix:/var/lib/libvirt/qemu/channels/d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b.com.redhat.rhevm.vdsm,server", "label": "charchannel0"}, {"frontend-open": true, "filename": "unix:/var/lib/libvirt/qemu/colvm60.monitor,server", "label": "charmonitor"}], "id": "libvirt-2"}
2015-03-18 16:18:48.693+0000: 2092: debug : qemuMonitorJSONIOProcess:248 : Total used 559 bytes out of 559 available in buffer
2015-03-18 16:18:48.693+0000: 2093: debug : qemuMonitorJSONCommandWithFd:291 : Receive command reply ret=0 rxObject=0x7fb445ffe110
2015-03-18 16:18:48.694+0000: 2093: debug : qemuMonitorJSONCommandWithFd:286 : Send command '{"execute":"qom-list","arguments":{"path":"/machine/unattached/device[0]"},"id":"libvirt-3"}' for write with FD -1
2015-03-18 16:18:48.694+0000: 2092: debug : qemuMonitorJSONIOProcess:248 : Total used 0 bytes out of 1023 available in buffer
2015-03-18 16:18:48.695+0000: 2092: debug : qemuMonitorJSONIOProcessLine:179 : Line [{"return": [{"name": "apic", "type": "child<kvm-apic>"}, {"name": "filtered-features", "type": "X86CPUFeatureWordInfo"}, {"name": "feature-words", "type": "X86CPUFeatureWordInfo"}, {"name": "apic-id", "type": "int"}, {"name": "tsc-frequency", "type": "int"}, {"name": "model-id", "type": "string"}, {"name": "vendor", "type": "string"}, {"name": "xlevel", "type": "int"}, {"name": "level", "type": "int"}, {"name": "stepping", "type": "int"}, {"name": "model", "type": "int"}, {"name": "family", "type": "int"}, {"name": "parent_bus", "type": "link<bus>"}, {"name": "kvm", "type": "bool"}, {"name": "enforce", "type": "bool"}, {"name": "check", "type": "bool"}, {"name": "hv-time", "type": "bool"}, {"name": "hv-vapic", "type": "bool"}, {"name": "hv-relaxed", "type": "bool"}, {"name": "hv-spinlocks", "type": "int"}, {"name": "pmu", "type": "bool"}, {"name": "hotplugged", "type": "bool"}, {"name": "hotpluggable", "type": "bool"}, {"name": "realized", "type": "bool"}, {"name": "type", "type": "string"}], "id": "libvirt-3"}]
2015-03-18 16:18:48.695+0000: 2092: debug : qemuMonitorJSONIOProcessLine:199 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb40c017670 reply={"return": [{"name": "apic", "type": "child<kvm-apic>"}, {"name": "filtered-features", "type": "X86CPUFeatureWordInfo"}, {"name": "feature-words", "type": "X86CPUFeatureWordInfo"}, {"name": "apic-id", "type": "int"}, {"name": "tsc-frequency", "type": "int"}, {"name": "model-id", "type": "string"}, {"name": "vendor", "type": "string"}, {"name": "xlevel", "type": "int"}, {"name": "level", "type": "int"}, {"name": "stepping", "type": "int"}, {"name": "model", "type": "int"}, {"name": "family", "type": "int"}, {"name": "parent_bus", "type": "link<bus>"}, {"name": "kvm", "type": "bool"}, {"name": "enforce", "type": "bool"}, {"name": "check", "type": "bool"}, {"name": "hv-time", "type": "bool"}, {"name": "hv-vapic", "type": "bool"}, {"name": "hv-relaxed", "type": "bool"}, {"name": "hv-spinlocks", "type": "int"}, {"name": "pmu", "type": "bool"}, {"name": "hotplugged", "type": "bool"}, {"name": "hotpluggable", "type": "bool"}, {"name": "realized", "type": "bool"}, {"name": "type", "type": "string"}], "id": "libvirt-3"}
2015-03-18 16:18:48.695+0000: 2092: debug : qemuMonitorJSONIOProcess:248 : Total used 1028 bytes out of 1028 available in buffer
2015-03-18 16:18:48.695+0000: 2093: debug : qemuMonitorJSONCommandWithFd:291 : Receive command reply ret=0 rxObject=0x7fb445fbe980
2015-03-18 16:18:48.695+0000: 2093: debug : qemuMonitorJSONCommandWithFd:286 : Send command '{"execute":"qom-get","arguments":{"path":"/machine/unattached/device[0]","property":"feature-words"},"id":"libvirt-4"}' for write with FD -1
2015-03-18 16:18:48.696+0000: 2092: debug : qemuMonitorJSONIOProcessLine:179 : Line [{"return": [{"cpuid-register": "EDX", "cpuid-input-eax": 2147483658, "features": 0}, {"cpuid-register": "EAX", "cpuid-input-eax": 1073741825, "features": 16777337}, {"cpuid-register": "EDX", "cpuid-input-eax": 3221225473, "features": 0}, {"cpuid-register": "EDX", "cpuid-input-eax": 2147483655, "features": 0}, {"cpuid-register": "ECX", "cpuid-input-eax": 2147483649, "features": 1}, {"cpuid-register": "EDX", "cpuid-input-eax": 2147483649, "features": 537921536}, {"cpuid-register": "EBX", "cpuid-input-ecx": 0, "cpuid-input-eax": 7, "features": 0}, {"cpuid-register": "ECX", "cpuid-input-eax": 1, "features": 2157453825}, {"cpuid-register": "EDX", "cpuid-input-eax": 1, "features": 126614525}], "id": "libvirt-4"}]
2015-03-18 16:18:48.696+0000: 2092: debug : qemuMonitorJSONIOProcessLine:199 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb40c017670 reply={"return": [{"cpuid-register": "EDX", "cpuid-input-eax": 2147483658, "features": 0}, {"cpuid-register": "EAX", "cpuid-input-eax": 1073741825, "features": 16777337}, {"cpuid-register": "EDX", "cpuid-input-eax": 3221225473, "features": 0}, {"cpuid-register": "EDX", "cpuid-input-eax": 2147483655, "features": 0}, {"cpuid-register": "ECX", "cpuid-input-eax": 2147483649, "features": 1}, {"cpuid-register": "EDX", "cpuid-input-eax": 2147483649, "features": 537921536}, {"cpuid-register": "EBX", "cpuid-input-ecx": 0, "cpuid-input-eax": 7, "features": 0}, {"cpuid-register": "ECX", "cpuid-input-eax": 1, "features": 2157453825}, {"cpuid-register": "EDX", "cpuid-input-eax": 1, "features": 126614525}], "id": "libvirt-4"}
2015-03-18 16:18:48.696+0000: 2092: debug : qemuMonitorJSONIOProcess:248 : Total used 717 bytes out of 717 available in buffer
2015-03-18 16:18:48.696+0000: 2093: debug : qemuMonitorJSONCommandWithFd:291 : Receive command reply ret=0 rxObject=0x7fb445ffe550
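An aside for anyone comparing this host against the migration source: the qom-get feature-words reply above encodes the guest CPU feature set as decimal bitmasks, which are hard to diff by eye. A minimal Python sketch (the values are copied from the reply above; the hex rendering is purely illustrative, not part of the logs) that prints them as CPUID-style hex masks:

```python
# Feature-word entries as logged in the qom-get reply above (a subset).
# Decimal masks are awkward to compare between two hosts; printing them
# as zero-padded hex makes a side-by-side CPUID diff readable.
feature_words = [
    {"cpuid-register": "EAX", "cpuid-input-eax": 1073741825, "features": 16777337},
    {"cpuid-register": "ECX", "cpuid-input-eax": 1, "features": 2157453825},
    {"cpuid-register": "EDX", "cpuid-input-eax": 1, "features": 126614525},
]

for fw in feature_words:
    # e.g. leaf 0x00000001 / EDX -> mask 0x078bfbfd
    print("leaf=%#010x reg=%-3s mask=%#010x"
          % (fw["cpuid-input-eax"], fw["cpuid-register"], fw["features"]))
```

Running the same rendering on both hosts and diffing the output would show immediately whether the target exposes a smaller feature set than the source expects.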
2015-03-18 16:18:48.696+0000: 2093: debug : qemuMonitorJSONCommandWithFd:286 : Send command '{"execute":"query-cpus","id":"libvirt-5"}' for write with FD -1
2015-03-18 16:18:48.697+0000: 2092: debug : qemuMonitorJSONIOProcessLine:179 : Line [{"return": [{"current": true, "CPU": 0, "pc": 4294967280, "halted": false, "thread_id": 5826}, {"current": false, "CPU": 1, "pc": 4294967280, "halted": false, "thread_id": 5827}], "id": "libvirt-5"}]
2015-03-18 16:18:48.697+0000: 2092: debug : qemuMonitorJSONIOProcessLine:199 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb40c017670 reply={"return": [{"current": true, "CPU": 0, "pc": 4294967280, "halted": false, "thread_id": 5826}, {"current": false, "CPU": 1, "pc": 4294967280, "halted": false, "thread_id": 5827}], "id": "libvirt-5"}
2015-03-18 16:18:48.698+0000: 2092: debug : qemuMonitorJSONIOProcess:248 : Total used 200 bytes out of 200 available in buffer
2015-03-18 16:18:48.698+0000: 2093: debug : qemuMonitorJSONCommandWithFd:291 : Receive command reply ret=0 rxObject=0x7fb445faf560
2015-03-18 16:18:48.698+0000: 2093: debug : qemuMonitorJSONCommandWithFd:286 : Send command '{"execute":"query-iothreads","id":"libvirt-6"}' for write with FD -1
2015-03-18 16:18:48.698+0000: 2092: debug : qemuMonitorJSONIOProcessLine:179 : Line [{"return": [], "id": "libvirt-6"}]
2015-03-18 16:18:48.699+0000: 2092: debug : qemuMonitorJSONIOProcessLine:199 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb40c017670 reply={"return": [], "id": "libvirt-6"}
2015-03-18 16:18:48.699+0000: 2092: debug : qemuMonitorJSONIOProcess:248 : Total used 35 bytes out of 35 available in buffer
2015-03-18 16:18:48.699+0000: 2093: debug : qemuMonitorJSONCommandWithFd:291 : Receive command reply ret=0 rxObject=0x7fb445fbb970
2015-03-18 16:18:48.701+0000: 2093: debug : qemuMonitorJSONCommandWithFd:286 : Send command '{"execute":"set_password","arguments":{"protocol":"spice","password":"*****","connected":"keep"},"id":"libvirt-7"}' for write with FD -1
2015-03-18 16:18:48.703+0000: 2092: debug : qemuMonitorJSONIOProcessLine:179 : Line [{"return": {}, "id": "libvirt-7"}]
2015-03-18 16:18:48.703+0000: 2092: debug : qemuMonitorJSONIOProcessLine:199 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb40c017670 reply={"return": {}, "id": "libvirt-7"}
2015-03-18 16:18:48.703+0000: 2092: debug : qemuMonitorJSONIOProcess:248 : Total used 35 bytes out of 35 available in buffer
2015-03-18 16:18:48.703+0000: 2093: debug : qemuMonitorJSONCommandWithFd:291 : Receive command reply ret=0 rxObject=0x7fb445fafe90
2015-03-18 16:18:48.703+0000: 2093: debug : qemuMonitorJSONCommandWithFd:286 : Send command '{"execute":"expire_password","arguments":{"protocol":"spice","time":"now"},"id":"libvirt-8"}' for write with FD -1
2015-03-18 16:18:48.704+0000: 2092: debug : qemuMonitorJSONIOProcessLine:179 : Line [{"return": {}, "id": "libvirt-8"}]
2015-03-18 16:18:48.704+0000: 2092: debug : qemuMonitorJSONIOProcessLine:199 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb40c017670 reply={"return": {}, "id": "libvirt-8"}
2015-03-18 16:18:48.704+0000: 2092: debug : qemuMonitorJSONIOProcess:248 : Total used 35 bytes out of 35 available in buffer
2015-03-18 16:18:48.704+0000: 2093: debug : qemuMonitorJSONCommandWithFd:291 : Receive command reply ret=0 rxObject=0x7fb445fbe350
2015-03-18 16:18:48.704+0000: 2093: debug : qemuMonitorJSONCommandWithFd:286 : Send command '{"execute":"qom-list","arguments":{"path":"/machine/peripheral"},"id":"libvirt-9"}' for write with FD -1
2015-03-18 16:18:48.705+0000: 2092: debug : qemuMonitorJSONIOProcessLine:179 : Line [{"return": [{"name": "sound0-codec0", "type": "child<hda-duplex>"}, {"name": "sound0", "type": "child<intel-hda>"}, {"name": "video0", "type": "child<qxl-vga>"}, {"name": "channel2", "type": "child<virtserialport>"}, {"name": "channel1", "type": "child<virtserialport>"}, {"name": "channel0", "type": "child<virtserialport>"}, {"name": "net0", "type": "child<virtio-net-pci>"}, {"name": "virtio-disk0", "type": "child<virtio-blk-pci>"}, {"name": "ide0-1-0", "type": "child<ide-cd>"}, {"name": "virtio-serial0", "type": "child<virtio-serial-pci>"}, {"name": "scsi0", "type": "child<virtio-scsi-pci>"}, {"name": "usb", "type": "child<piix3-usb-uhci>"}, {"name": "type", "type": "string"}], "id": "libvirt-9"}]
2015-03-18 16:18:48.705+0000: 2092: debug : qemuMonitorJSONIOProcessLine:199 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb40c017670 reply={"return": [{"name": "sound0-codec0", "type": "child<hda-duplex>"}, {"name": "sound0", "type": "child<intel-hda>"}, {"name": "video0", "type": "child<qxl-vga>"}, {"name": "channel2", "type": "child<virtserialport>"}, {"name": "channel1", "type": "child<virtserialport>"}, {"name": "channel0", "type": "child<virtserialport>"}, {"name": "net0", "type": "child<virtio-net-pci>"}, {"name": "virtio-disk0", "type": "child<virtio-blk-pci>"}, {"name": "ide0-1-0", "type": "child<ide-cd>"}, {"name": "virtio-serial0", "type": "child<virtio-serial-pci>"}, {"name": "scsi0", "type": "child<virtio-scsi-pci>"}, {"name": "usb", "type": "child<piix3-usb-uhci>"}, {"name": "type", "type": "string"}], "id": "libvirt-9"}
2015-03-18 16:18:48.705+0000: 2092: debug : qemuMonitorJSONIOProcess:248 : Total used 708 bytes out of 708 available in buffer
2015-03-18 16:18:48.705+0000: 2093: debug : qemuMonitorJSONCommandWithFd:291 : Receive command reply ret=0 rxObject=0x7fb445fbcc60
2015-03-18 16:18:48.706+0000: 2093: debug : qemuMonitorJSONCommandWithFd:286 : Send command '{"execute":"balloon","arguments":{"value":4294967296},"id":"libvirt-10"}' for write with FD -1
2015-03-18 16:18:48.706+0000: 2092: debug : qemuMonitorJSONIOProcessLine:179 : Line [{"id": "libvirt-10", "error": {"class": "DeviceNotActive", "desc": "No balloon device has been activated"}}]
2015-03-18 16:18:48.706+0000: 2092: debug : qemuMonitorJSONIOProcessLine:199 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb40c017670 reply={"id": "libvirt-10", "error": {"class": "DeviceNotActive", "desc": "No balloon device has been activated"}}
2015-03-18 16:18:48.706+0000: 2092: debug : qemuMonitorJSONIOProcess:248 : Total used 109 bytes out of 109 available in buffer
2015-03-18 16:18:48.706+0000: 2093: debug : qemuMonitorJSONCommandWithFd:291 : Receive command reply ret=0 rxObject=0x7fb445fae850
2015-03-18 16:18:48.710+0000: 2093: debug : qemuMonitorJSONCommandWithFd:286 : Send command '{"execute":"query-migrate-capabilities","id":"libvirt-11"}' for write with FD -1
2015-03-18 16:18:48.711+0000: 2092: debug : qemuMonitorJSONIOProcessLine:179 : Line [{"return": [{"state": false, "capability": "xbzrle"}, {"state": false, "capability": "rdma-pin-all"}, {"state": false, "capability": "auto-converge"}, {"state": false, "capability": "zero-blocks"}], "id": "libvirt-11"}]
2015-03-18 16:18:48.711+0000: 2092: debug : qemuMonitorJSONIOProcessLine:199 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb40c017670 reply={"return": [{"state": false, "capability": "xbzrle"}, {"state": false, "capability": "rdma-pin-all"}, {"state": false, "capability": "auto-converge"}, {"state": false, "capability": "zero-blocks"}], "id": "libvirt-11"}
2015-03-18 16:18:48.711+0000: 2092: debug : qemuMonitorJSONIOProcess:248 : Total used 220 bytes out of 220 available in buffer
2015-03-18 16:18:48.711+0000: 2093: debug : qemuMonitorJSONCommandWithFd:291 : Receive command reply ret=0 rxObject=0x7fb446051a80
2015-03-18 16:18:48.711+0000: 2093: debug : qemuMonitorJSONCommandWithFd:286 : Send command '{"execute":"migrate-set-capabilities","arguments":{"capabilities":[{"capability":"xbzrle","state":false}]},"id":"libvirt-12"}' for write with FD -1
2015-03-18 16:18:48.713+0000: 2092: debug : qemuMonitorJSONIOProcessLine:179 : Line [{"return": {}, "id": "libvirt-12"}]
2015-03-18 16:18:48.713+0000: 2092: debug : qemuMonitorJSONIOProcessLine:199 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb40c017670 reply={"return": {}, "id": "libvirt-12"}
2015-03-18 16:18:48.713+0000: 2092: debug : qemuMonitorJSONIOProcess:248 : Total used 36 bytes out of 36 available in buffer
2015-03-18 16:18:48.713+0000: 2093: debug : qemuMonitorJSONCommandWithFd:291 : Receive command reply ret=0 rxObject=0x7fb445fae7a0
2015-03-18 16:18:48.713+0000: 2093: debug : qemuMonitorJSONCommandWithFd:286 : Send command '{"execute":"query-migrate-capabilities","id":"libvirt-13"}' for write with FD -1
2015-03-18 16:18:48.714+0000: 2092: debug : qemuMonitorJSONIOProcessLine:179 : Line [{"return": [{"state": false, "capability": "xbzrle"}, {"state": false, "capability": "rdma-pin-all"}, {"state": false, "capability": "auto-converge"}, {"state": false, "capability": "zero-blocks"}], "id": "libvirt-13"}]
2015-03-18 16:18:48.714+0000: 2092: debug : qemuMonitorJSONIOProcessLine:199 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb40c017670 reply={"return": [{"state": false, "capability": "xbzrle"}, {"state": false, "capability": "rdma-pin-all"}, {"state": false, "capability": "auto-converge"}, {"state": false, "capability": "zero-blocks"}], "id": "libvirt-13"}
2015-03-18 16:18:48.714+0000: 2092: debug : qemuMonitorJSONIOProcess:248 : Total used 220 bytes out of 220 available in buffer
2015-03-18 16:18:48.714+0000: 2093: debug : qemuMonitorJSONCommandWithFd:291 : Receive command reply ret=0 rxObject=0x7fb446051a80
2015-03-18 16:18:48.714+0000: 2093: debug : qemuMonitorJSONCommandWithFd:286 : Send command '{"execute":"migrate-set-capabilities","arguments":{"capabilities":[{"capability":"rdma-pin-all","state":false}]},"id":"libvirt-14"}' for write with FD -1
2015-03-18 16:18:48.715+0000: 2092: debug : qemuMonitorJSONIOProcessLine:179 : Line [{"return": {}, "id": "libvirt-14"}]
2015-03-18 16:18:48.715+0000: 2092: debug : qemuMonitorJSONIOProcessLine:199 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb40c017670 reply={"return": {}, "id": "libvirt-14"}
2015-03-18 16:18:48.715+0000: 2092: debug : qemuMonitorJSONIOProcess:248 : Total used 36 bytes out of 36 available in buffer
2015-03-18 16:18:48.715+0000: 2093: debug : qemuMonitorJSONCommandWithFd:291 : Receive command reply ret=0 rxObject=0x7fb445fbd2b0
2015-03-18 16:18:52.109+0000: 2093: error : virDBusCall:1537 : Error from service: TerminateMachine: No such file or directory

**************************************
vdsm log on target:

Thread-580::DEBUG::2015-03-18 17:49:06,252::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
Thread-46::DEBUG::2015-03-18 17:49:06,294::fileSD::261::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd if=/rhev/data-center/mnt/10.10.30.252:_var_nas2_OVirtIB/272ec473-6041-42ee-bd1a-732789dd18d4/dom_md/metadata iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-46::DEBUG::2015-03-18 17:49:06,301::fileSD::261::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n557 bytes (557 B) copied, 0.000317456 s, 1.8 MB/s\n'; <rc> = 0
Thread-38::DEBUG::2015-03-18 17:49:06,919::fileSD::261::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd if=/rhev/data-center/mnt/10.10.30.253:_var_nas3_OVirtIB/2c51d320-88ce-4f23-8215-e15f55f66906/dom_md/metadata iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-33::DEBUG::2015-03-18 17:49:06,920::fileSD::261::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd if=/rhev/data-center/mnt/10.10.30.251:_var_nas1_OVirtIB/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545/dom_md/metadata iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-38::DEBUG::2015-03-18 17:49:06,932::fileSD::261::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n644 bytes (644 B) copied, 0.000132903 s, 4.8 MB/s\n'; <rc> = 0
Thread-33::DEBUG::2015-03-18 17:49:06,933::fileSD::261::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n644 bytes (644 B) copied, 0.000209828 s, 3.1 MB/s\n'; <rc> = 0
JsonRpc (StompReactor)::DEBUG::2015-03-18 17:49:09,272::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-03-18 17:49:09,274::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-581::DEBUG::2015-03-18 17:49:09,276::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-03-18 17:49:12,293::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-03-18 17:49:12,294::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-582::DEBUG::2015-03-18 17:49:12,296::task::595::Storage.TaskManager.Task::(_updateState) Task=`96a50a35-2ac9-44eb-b616-a4d7e76f5db8`::moving from state init -> state preparing
Thread-582::INFO::2015-03-18 17:49:12,302::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-582::INFO::2015-03-18 17:49:12,302::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'bff3a2be-fdd9-4e37-b416-fa4ef7fafba2': {'code': 0, 'version': 0, 'acquired': True, 'delay': '0.00030443', 'lastCheck': '6.1', 'valid': True}, '965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000209828', 'lastCheck': '5.4', 'valid': True}, '2c51d320-88ce-4f23-8215-e15f55f66906': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000132903', 'lastCheck': '5.4', 'valid': True}, '63041fa9-e093-4b44-b36f-f39f16d3974f': {'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000308898', 'lastCheck': '6.1', 'valid': True}, '272ec473-6041-42ee-bd1a-732789dd18d4': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000317456', 'lastCheck': '6.0', 'valid': True}}
Thread-582::DEBUG::2015-03-18 17:49:12,302::task::1191::Storage.TaskManager.Task::(prepare) Task=`96a50a35-2ac9-44eb-b616-a4d7e76f5db8`::finished: {'bff3a2be-fdd9-4e37-b416-fa4ef7fafba2': {'code': 0, 'version': 0, 'acquired': True, 'delay': '0.00030443', 'lastCheck': '6.1', 'valid': True}, '965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000209828', 'lastCheck': '5.4', 'valid': True}, '2c51d320-88ce-4f23-8215-e15f55f66906': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000132903', 'lastCheck': '5.4', 'valid': True}, '63041fa9-e093-4b44-b36f-f39f16d3974f': {'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000308898', 'lastCheck': '6.1', 'valid': True}, '272ec473-6041-42ee-bd1a-732789dd18d4': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000317456', 'lastCheck': '6.0', 'valid': True}}
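The repoStats payload logged above is a plain Python literal, so it can be pulled out of the log and checked mechanically rather than by eye. A minimal sketch (the dict is copied from the response above, trimmed to two of the five domains; the 30-second staleness threshold is an arbitrary illustrative choice, not a vdsm default):

```python
import ast

# repoStats payload as logged above (trimmed to two of the five domains)
logged = ("{'965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545': {'code': 0, 'version': 3, "
          "'acquired': True, 'delay': '0.000209828', 'lastCheck': '5.4', 'valid': True}, "
          "'272ec473-6041-42ee-bd1a-732789dd18d4': {'code': 0, 'version': 3, "
          "'acquired': True, 'delay': '0.000317456', 'lastCheck': '6.0', 'valid': True}}")

stats = ast.literal_eval(logged)

# Flag any storage domain that is invalid, errored, or has a stale lastCheck
bad = [sd for sd, st in stats.items()
       if not st['valid'] or st['code'] != 0 or float(st['lastCheck']) > 30.0]
print(bad)  # -> [] : the listed domains look healthy in this log
```

The empty result matches what the log suggests: storage on the target looks fine, so the migration failure is unlikely to be a storage-domain problem.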
Thread-582::DEBUG::2015-03-18 17:49:12,302::task::595::Storage.TaskManager.Task::(_updateState) Task=`96a50a35-2ac9-44eb-b616-a4d7e76f5db8`::moving from state preparing -> state finished
Thread-582::DEBUG::2015-03-18 17:49:12,303::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-582::DEBUG::2015-03-18 17:49:12,303::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-582::DEBUG::2015-03-18 17:49:12,303::task::993::Storage.TaskManager.Task::(_decref) Task=`96a50a35-2ac9-44eb-b616-a4d7e76f5db8`::ref 0 aborting False
Thread-582::DEBUG::2015-03-18 17:49:12,305::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-03-18 17:49:12,345::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-03-18 17:49:12,347::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-583::DEBUG::2015-03-18 17:49:12,348::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
Detector thread::DEBUG::2015-03-18 17:49:13,490::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 192.168.11.43:37057
Detector thread::DEBUG::2015-03-18 17:49:13,499::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 192.168.11.43:37057
Detector thread::DEBUG::2015-03-18 17:49:13,500::protocoldetector::246::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 192.168.11.43:37057
Detector thread::DEBUG::2015-03-18 17:49:13,500::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('192.168.11.43', 37057)
Thread-584::DEBUG::2015-03-18 17:49:13,502::BindingXMLRPC::1133::vds::(wrapper) client [192.168.11.43]::call vmGetStats with ('d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b',) {}
Thread-584::DEBUG::2015-03-18 17:49:13,503::BindingXMLRPC::1140::vds::(wrapper) return vmGetStats with {'status': {'message': 'Virtual machine does not exist', 'code': 1}}
Detector thread::DEBUG::2015-03-18 17:49:13,534::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 192.168.11.43:37058
Detector thread::DEBUG::2015-03-18 17:49:13,543::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 192.168.11.43:37058
Detector thread::DEBUG::2015-03-18 17:49:13,544::protocoldetector::246::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 192.168.11.43:37058
Detector thread::DEBUG::2015-03-18 17:49:13,544::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('192.168.11.43', 37058)
Thread-585::DEBUG::2015-03-18 17:49:13,557::BindingXMLRPC::1133::vds::(wrapper) client [192.168.11.43]::call vmMigrationCreate with ({'guestFQDN': 'colvm60.collogia.de', 'acpiEnable': 'true', 'emulatedMachine': 'pc-1.0', 'afterMigrationStatus': '', 'vmId': 'd2d8bdfd-99a6-41c0-84e7-26e1d6a6057b', 'memGuaranteedSize': 4096, 'transparentHugePages': 'true', 'displaySecurePort': '5902', 'spiceSslCipherSuite': 'DEFAULT', 'cpuType': 'Nehalem', 'smp': '2', 'pauseCode': 'NOERR', 'migrationDest': 'libvirt', 'smartcardEnable': 'false', 'hypervEnable': 'true', 'custom': {'device_86204f23-f278-4937-85c7-81696db7c90cdevice_e8a3825b-d5b7-4007-a4d7-0e4eec94771bdevice_622098c3-ebed-4f48-b33f-125fcb757ed6device_7cd98734-54cb-4963-96a3-ab85e48bcc8a': 'VmDevice {vmId=d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b, deviceId=7cd98734-54cb-4963-96a3-ab85e48bcc8a, device=spicevmc, type=CHANNEL, bootOrder=0, specParams={}, address={port=3, bus=0, controller=0, type=virtio-serial}, managed=false, plugged=true, readOnly=false, deviceAlias=channel2, customProperties={}, snapshotId=null, logicalName=null}', 'device_86204f23-f278-4937-85c7-81696db7c90cdevice_e8a3825b-d5b7-4007-a4d7-0e4eec94771bdevice_622098c3-ebed-4f48-b33f-125fcb757ed6': 'VmDevice {vmId=d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b, deviceId=622098c3-ebed-4f48-b33f-125fcb757ed6, device=unix, type=CHANNEL, bootOrder=0, specParams={}, address={port=2, bus=0, controller=0, type=virtio-serial}, managed=false, plugged=true, readOnly=false, deviceAlias=channel1, customProperties={}, snapshotId=null, logicalName=null}', 'device_86204f23-f278-4937-85c7-81696db7c90c': 'VmDevice {vmId=d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b, deviceId=86204f23-f278-4937-85c7-81696db7c90c, device=ide, type=CONTROLLER, bootOrder=0, specParams={}, address={bus=0x00, domain=0x0000, type=pci, slot=0x01, function=0x1}, managed=false, plugged=true, readOnly=false, deviceAlias=ide0, customProperties={}, snapshotId=null, logicalName=null}', 'device_86204f23-f278-4937-85c7-81696db7c90cdevice_e8a3825b-d5b7-4007-a4d7-0e4eec94771b': 'VmDevice {vmId=d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b, deviceId=e8a3825b-d5b7-4007-a4d7-0e4eec94771b, device=unix, type=CHANNEL, bootOrder=0, specParams={}, address={port=1, bus=0, controller=0, type=virtio-serial}, managed=false, plugged=true, readOnly=false, deviceAlias=channel0, customProperties={}, snapshotId=null, logicalName=null}'}, 'vmType': 'kvm', '_srcDomXML': "<domain type='kvm' id='29' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>\n <name>colvm60</name>\n <uuid>d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b</uuid>\n <memory unit='KiB'>4194304</memory>\n <currentMemory unit='KiB'>4194304</currentMemory>\n <vcpu placement='static' current='2'>16</vcpu>\n <cputune>\n <shares>1020</shares>\n </cputune>\n <resource>\n <partition>/machine</partition>\n </resource>\n <sysinfo type='smbios'>\n <system>\n <entry name='manufacturer'>oVirt</entry>\n <entry name='product'>oVirt Node</entry>\n <entry name='version'>20-3</entry>\n <entry name='serial'>49434D53-0200-48D6-3000-D6483000EEC8</entry>\n <entry name='uuid'>d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b</entry>\n </system>\n </sysinfo>\n <os>\n <type arch='x86_64' machine='pc-1.0'>hvm</type>\n <smbios mode='sysinfo'/>\n </os>\n <features>\n <acpi/>\n <hyperv>\n <relaxed state='on'/>\n </hyperv>\n </features>\n <cpu mode='custom' match='exact'>\n <model fallback='allow'>Nehalem</model>\n <topology sockets='16' cores='1' threads='1'/>\n </cpu>\n <clock offset='variable' adjustment='3600' basis='utc'>\n <timer name='rtc' tickpolicy='catchup' track='guest'/>\n <timer name='pit' tickpolicy='delay'/>\n </clock>\n <on_poweroff>destroy</on_poweroff>\n <on_reboot>restart</on_reboot>\n <on_crash>destroy</on_crash>\n <devices>\n <emulator>/usr/bin/qemu-kvm</emulator>\n <disk type='file' device='cdrom'>\n <driver name='qemu' type='raw'/>\n <source startupPolicy='optional'/>\n <backingStore/>\n <target dev='hdc' bus='ide'/>\n <readonly/>\n <serial></serial>\n <alias name='ide0-1-0'/>\n <address type='drive' controller='0' bus='1' target='0' unit='0'/>\n </disk>\n <disk type='file' device='disk' snapshot='no'>\n <driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>\n <source file='/rhev/data-center/94ed7a19-fade-4bd6-83f2-2cbb2f730b95/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545/images/422a4486-6642-41ae-bb1d-b6a955550689/26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2'>\n <seclabel model='selinux' labelskip='yes'/>\n </source>\n <backingStore/>\n <target dev='vda' bus='virtio'/>\n <serial>422a4486-6642-41ae-bb1d-b6a955550689</serial>\n <boot order='1'/>\n <alias name='virtio-disk0'/>\n <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>\n </disk>\n <controller type='scsi' index='0' model='vi
rtio-scsi'>\n <alias name=3D'scsi0'/>\n <address type=3D'pci' dom=
ain=3D'0x0000' bus=3D'0x00' slot=3D'0x05' function=3D'0x0'/>\n </control=
ler>\n <controller type=3D'virtio-serial' index=3D'0' ports=3D'16'>\n =
<alias name=3D'virtio-serial0'/>\n <address type=3D'pci' domain=3D'=
0x0000' bus=3D'0x00' slot=3D'0x06' function=3D'0x0'/>\n </controller>\n =
<controller type=3D'usb' index=3D'0'>\n <alias name=3D'usb0'/>\n =
<address type=3D'pci' domain=3D'0x0000' bus=3D'0x00' slot=3D'0x01' funct=
ion=3D'0x2'/>\n </controller>\n <controller type=3D'pci' index=3D'0' =
model=3D'pci-root'>\n <alias name=3D'pci.0'/>\n </controller>\n =
<controller type=3D'ide' index=3D'0'>\n <alias name=3D'ide0'/>\n =
<address type=3D'pci' domain=3D'0x0000' bus=3D'0x00' slot=3D'0x01' function=
=3D'0x1'/>\n </controller>\n <interface type=3D'bridge'>\n <mac =
address=3D'00:0c:29:7a:94:f1'/>\n <source bridge=3D'ovirtmgmt'/>\n =
<bandwidth>\n </bandwidth>\n <target dev=3D'vnet2'/>\n <mo=
del type=3D'virtio'/>\n <filterref filter=3D'vdsm-no-mac-spoofing'/>\n=
<link state=3D'up'/>\n <boot order=3D'2'/>\n <alias name=3D=
'net0'/>\n <address type=3D'pci' domain=3D'0x0000' bus=3D'0x00' slot=
=3D'0x03' function=3D'0x0'/>\n </interface>\n <channel type=3D'unix'>=
\n <source mode=3D'bind' path=3D'/var/lib/libvirt/qemu/channels/d2d8bd=
fd-99a6-41c0-84e7-26e1d6a6057b.com.redhat.rhevm.vdsm'/>\n <target type=
=3D'virtio' name=3D'com.redhat.rhevm.vdsm'/>\n <alias name=3D'channel0=
'/>\n <address type=3D'virtio-serial' controller=3D'0' bus=3D'0' port=
=3D'1'/>\n </channel>\n <channel type=3D'unix'>\n <source mode=
=3D'bind' path=3D'/var/lib/libvirt/qemu/channels/d2d8bdfd-99a6-41c0-84e7-26=
e1d6a6057b.org.qemu.guest_agent.0'/>\n <target type=3D'virtio' name=3D=
'org.qemu.guest_agent.0'/>\n <alias name=3D'channel1'/>\n <addres=
s type=3D'virtio-serial' controller=3D'0' bus=3D'0' port=3D'2'/>\n </cha=
nnel>\n <channel type=3D'spicevmc'>\n <target type=3D'virtio' name=
=3D'com.redhat.spice.0'/>\n <alias name=3D'channel2'/>\n <address=
type=3D'virtio-serial' controller=3D'0' bus=3D'0' port=3D'3'/>\n </chan=
nel>\n <input type=3D'mouse' bus=3D'ps2'/>\n <input type=3D'keyboard'=
bus=3D'ps2'/>\n <graphics type=3D'spice' tlsPort=3D'5902' autoport=3D'y=
es' keymap=3D'de' passwdValidTo=3D'1970-01-01T00:00:01' connected=3D'keep'>=
\n <listen type=3D'network' address=3D'192.168.11.43' network=3D'vdsm-=
ovirtmgmt'/>\n <channel name=3D'main' mode=3D'secure'/>\n <channe=
l name=3D'display' mode=3D'secure'/>\n <channel name=3D'inputs' mode=
=3D'secure'/>\n <channel name=3D'cursor' mode=3D'secure'/>\n <cha=
nnel name=3D'playback' mode=3D'secure'/>\n <channel name=3D'record' mo=
de=3D'secure'/>\n <channel name=3D'smartcard' mode=3D'secure'/>\n =
<channel name=3D'usbredir' mode=3D'secure'/>\n </graphics>\n <sound =
model=3D'ich6'>\n <alias name=3D'sound0'/>\n <address type=3D'pci=
' domain=3D'0x0000' bus=3D'0x00' slot=3D'0x04' function=3D'0x0'/>\n </so=
und>\n <video>\n <model type=3D'qxl' ram=3D'65536' vram=3D'32768' h=
eads=3D'1'/>\n <alias name=3D'video0'/>\n <address type=3D'pci' d=
omain=3D'0x0000' bus=3D'0x00' slot=3D'0x02' function=3D'0x0'/>\n </video=
>\n <memballoon model=3D'none'>\n <alias name=3D'balloon0'/>\n <=
/memballoon>\n </devices>\n <seclabel type=3D'dynamic' model=3D'selinux' =
relabel=3D'yes'>\n <label>system_u:system_r:svirt_t:s0:c356,c730</label>=
\n <imagelabel>system_u:object_r:svirt_image_t:s0:c356,c730</imagelabel>=
\n </seclabel>\n <qemu:commandline>\n <qemu:env name=3D'SPICE_DEBUG_AL=
LOW_MC' value=3D'1'/>\n </qemu:commandline>\n</domain>\n", 'memSize': 4096=
, 'smpCoresPerSocket': '1', 'vmName': 'colvm60', 'nice': '0', 'username': '=
', 'clientIp': '', 'bootMenuEnable': 'false', 'copyPasteEnable': 'true', 'd=
isplayIp': '192.168.11.43', 'keyboardLayout': 'de', 'displayPort': '-1', 'g=
uestDiskMapping': {}, 'spiceSecureChannels': 'smain,sinputs,scursor,splayba=
ck,srecord,sdisplay,susbredir,ssmartcard', 'fileTransferEnable': 'true', 'n=
icModel': 'rtl8139,pv', 'elapsedTimeOffset': 4043.6238191127777, 'kvmEnable=
': 'true', 'displayNetwork': 'ovirtmgmt', 'devices': [{'alias': 'sound0', '=
specParams': {}, 'deviceId': 'c0611af3-b676-4b7b-a794-1cea5f54e1db', 'addre=
ss': {'slot': '0x04', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'fu=
nction': '0x0'}, 'device': 'ich6', 'type': 'sound'}, {'device': 'memballoon=
', 'specParams': {'model': 'none'}, 'type': 'balloon', 'alias': 'balloon0'}=
, {'index': '0', 'alias': 'scsi0', 'specParams': {}, 'deviceId': 'b0b3dbb0-=
dc4e-423c-8a56-9f06136db4d7', 'address': {'slot': '0x05', 'bus': '0x00', 'd=
omain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'scsi', 'mod=
el': 'virtio-scsi', 'type': 'controller'}, {'alias': 'virtio-serial0', 'spe=
cParams': {}, 'deviceId': '7ea156ea-94bf-4bc3-bbd6-302b646cd596', 'address'=
: {'slot': '0x06', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'funct=
ion': '0x0'}, 'device': 'virtio-serial', 'type': 'controller'}, {'alias': '=
video0', 'specParams': {'vram': '32768', 'heads': '1'}, 'deviceId': '417a36=
be-98c1-4af7-950a-df13388c9e4d', 'address': {'slot': '0x02', 'bus': '0x00',=
'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'qxl', 't=
ype': 'video'}, {'device': 'spice', 'specParams': {'copyPasteEnable': 'true=
', 'displayNetwork': 'ovirtmgmt', 'keyMap': 'de', 'displayIp': '192.168.11.=
43', 'spiceSecureChannels': 'smain,sinputs,scursor,splayback,srecord,sdispl=
ay,susbredir,ssmartcard'}, 'type': 'graphics', 'tlsPort': '5902'}, {'nicMod=
el': 'pv', 'macAddr': '00:0c:29:7a:94:f1', 'linkActive': True, 'network': '=
ovirtmgmt', 'alias': 'net0', 'bootOrder': '2', 'filter': 'vdsm-no-mac-spoof=
ing', 'specParams': {'inbound': {}, 'outbound': {}}, 'deviceId': 'd8694952-=
f28d-4e97-a1d5-506daf4cc282', 'address': {'slot': '0x03', 'bus': '0x00', 'd=
omain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'bridge', 't=
ype': 'interface', 'name': 'vnet2'}, {'index': '2', 'iface': 'ide', 'name':=
'hdc', 'alias': 'ide0-1-0', 'specParams': {'path': ''}, 'readonly': 'True'=
, 'deviceId': '855a5a5b-7e9a-4e94-b1f0-7d8c26552503', 'address': {'bus': '1=
', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'device=
': 'cdrom', 'shared': 'false', 'path': '', 'type': 'disk'}, {'poolID': '94e=
d7a19-fade-4bd6-83f2-2cbb2f730b95', 'reqsize': '0', 'index': 0, 'iface': 'v=
irtio', 'apparentsize': '48318382080', 'alias': 'virtio-disk0', 'imageID': =
'422a4486-6642-41ae-bb1d-b6a955550689', 'readonly': 'False', 'shared': 'fal=
se', 'truesize': '38062170112', 'type': 'disk', 'domainID': '965ca3b6-4f9c-=
4e81-b6e8-5ed4a9e58545', 'volumeInfo': {'domainID': '965ca3b6-4f9c-4e81-b6e=
8-5ed4a9e58545', 'volType': 'path', 'leaseOffset': 0, 'volumeID': '26b4c1e3=
-faf8-4fb3-b662-da6a55a3d8f2', 'leasePath': '/rhev/data-center/mnt/10.10.30=
.251:_var_nas1_OVirtIB/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545/images/422a4486=
-6642-41ae-bb1d-b6a955550689/26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2.lease', '=
imageID': '422a4486-6642-41ae-bb1d-b6a955550689', 'path': '/rhev/data-cente=
r/mnt/10.10.30.251:_var_nas1_OVirtIB/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545/i=
mages/422a4486-6642-41ae-bb1d-b6a955550689/26b4c1e3-faf8-4fb3-b662-da6a55a3=
d8f2'}, 'format': 'raw', 'deviceId': '422a4486-6642-41ae-bb1d-b6a955550689'=
, 'address': {'slot': '0x07', 'bus': '0x00', 'domain': '0x0000', 'type': 'p=
ci', 'function': '0x0'}, 'device': 'disk', 'path': '/rhev/data-center/94ed7=
a19-fade-4bd6-83f2-2cbb2f730b95/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545/images=
/422a4486-6642-41ae-bb1d-b6a955550689/26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2'=
, 'propagateErrors': 'off', 'optional': 'false', 'name': 'vda', 'bootOrder'=
: '1', 'volumeID': '26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2', 'specParams': {}=
, 'volumeChain': [{'domainID': '965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545', 'vol=
Type': 'path', 'leaseOffset': 0, 'volumeID': '26b4c1e3-faf8-4fb3-b662-da6a5=
5a3d8f2', 'leasePath': '/rhev/data-center/mnt/10.10.30.251:_var_nas1_OVirtI=
B/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545/images/422a4486-6642-41ae-bb1d-b6a95=
5550689/26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2.lease', 'imageID': '422a4486-6=
642-41ae-bb1d-b6a955550689', 'path': '/rhev/data-center/mnt/10.10.30.251:_v=
ar_nas1_OVirtIB/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545/images/422a4486-6642-4=
1ae-bb1d-b6a955550689/26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2'}]}, {'device': =
'usb', 'alias': 'usb0', 'type': 'controller', 'address': {'slot': '0x01', '=
bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x2'}}, {'dev=
ice': 'ide', 'alias': 'ide0', 'type': 'controller', 'address': {'slot': '0x=
01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x1'}},=
{'device': 'unix', 'alias': 'channel0', 'type': 'channel', 'address': {'bu=
s': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '1'}}, {'devic=
e': 'unix', 'alias': 'channel1', 'type': 'channel', 'address': {'bus': '0',=
'controller': '0', 'type': 'virtio-serial', 'port': '2'}}, {'device': 'spi=
cevmc', 'alias': 'channel2', 'type': 'channel', 'address': {'bus': '0', 'co=
ntroller': '0', 'type': 'virtio-serial', 'port': '3'}}], 'status': 'Up', 't=
imeOffset': '3600', 'maxVCpus': '16', 'guestIPs': '192.168.10.70', 'display=
': 'qxl'},) {}=0A=
Thread-585::DEBUG::2015-03-18 17:49:13,557::API::545::vds::(migrationCreate) Migration create
Thread-585::INFO::2015-03-18 17:49:13,558::clientIF::394::vds::(createVm) vmContainerLock acquired by vm d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b
Thread-585::DEBUG::2015-03-18 17:49:13,570::clientIF::407::vds::(createVm) Total desktops after creation of d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b is 1
Thread-586::DEBUG::2015-03-18 17:49:13,570::vm::2264::vm.Vm::(_startUnderlyingVm) vmId=`d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b`::Start
Thread-586::DEBUG::2015-03-18 17:49:13,572::vm::2268::vm.Vm::(_startUnderlyingVm) vmId=`d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b`::_ongoingCreations acquired
Thread-585::DEBUG::2015-03-18 17:49:13,572::vm::5658::vm.Vm::(waitForMigrationDestinationPrepare) vmId=`d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b`::migration destination: waiting for VM creation
Thread-586::INFO::2015-03-18 17:49:13,573::vm::3261::vm.Vm::(_run) vmId=`d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b`::VM wrapper has started
Thread-585::DEBUG::2015-03-18 17:49:13,574::vm::5663::vm.Vm::(waitForMigrationDestinationPrepare) vmId=`d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b`::migration destination: waiting 36s for path preparation
Thread-586::WARNING::2015-03-18 17:49:13,575::vm::2056::vm.Vm::(buildConfDevices) vmId=`d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b`::Unknown type found, device: '{'device': 'unix', 'alias': 'channel0', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '1'}}' found
Thread-586::WARNING::2015-03-18 17:49:13,577::vm::2056::vm.Vm::(buildConfDevices) vmId=`d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b`::Unknown type found, device: '{'device': 'unix', 'alias': 'channel1', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '2'}}' found
Thread-586::WARNING::2015-03-18 17:49:13,577::vm::2056::vm.Vm::(buildConfDevices) vmId=`d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b`::Unknown type found, device: '{'device': 'spicevmc', 'alias': 'channel2', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '3'}}' found
Thread-586::DEBUG::2015-03-18 17:49:13,578::task::595::Storage.TaskManager.Task::(_updateState) Task=`44202a8b-c821-4571-8aab-0f8615914544`::moving from state init -> state preparing
Thread-586::INFO::2015-03-18 17:49:13,578::logUtils::44::dispatcher::(wrapper) Run and protect: getVolumeSize(sdUUID='965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545', spUUID='94ed7a19-fade-4bd6-83f2-2cbb2f730b95', imgUUID='422a4486-6642-41ae-bb1d-b6a955550689', volUUID='26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2', options=None)
Thread-586::INFO::2015-03-18 17:49:13,582::logUtils::47::dispatcher::(wrapper) Run and protect: getVolumeSize, Return response: {'truesize': '38062174208', 'apparentsize': '48318382080'}
Thread-586::DEBUG::2015-03-18 17:49:13,582::task::1191::Storage.TaskManager.Task::(prepare) Task=`44202a8b-c821-4571-8aab-0f8615914544`::finished: {'truesize': '38062174208', 'apparentsize': '48318382080'}
Thread-586::DEBUG::2015-03-18 17:49:13,582::task::595::Storage.TaskManager.Task::(_updateState) Task=`44202a8b-c821-4571-8aab-0f8615914544`::moving from state preparing -> state finished
Thread-586::DEBUG::2015-03-18 17:49:13,582::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-586::DEBUG::2015-03-18 17:49:13,582::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-586::DEBUG::2015-03-18 17:49:13,583::task::993::Storage.TaskManager.Task::(_decref) Task=`44202a8b-c821-4571-8aab-0f8615914544`::ref 0 aborting False
Thread-586::INFO::2015-03-18 17:49:13,583::clientIF::335::vds::(prepareVolumePath) prepared volume path:
Thread-586::DEBUG::2015-03-18 17:49:13,583::task::595::Storage.TaskManager.Task::(_updateState) Task=`b4c42055-c38b-4427-90ad-a00120f4d5ee`::moving from state init -> state preparing
Thread-586::INFO::2015-03-18 17:49:13,583::logUtils::44::dispatcher::(wrapper) Run and protect: prepareImage(sdUUID='965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545', spUUID='94ed7a19-fade-4bd6-83f2-2cbb2f730b95', imgUUID='422a4486-6642-41ae-bb1d-b6a955550689', leafUUID='26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2')
Thread-586::DEBUG::2015-03-18 17:49:13,584::resourceManager::198::Storage.ResourceManager.Request::(__init__) ResName=`Storage.965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545`ReqID=`bc4409f2-3b01-4467-998e-a29bee2de906`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3201' at 'prepareImage'
Thread-586::DEBUG::2015-03-18 17:49:13,584::resourceManager::542::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545' for lock type 'shared'
Thread-586::DEBUG::2015-03-18 17:49:13,585::resourceManager::601::Storage.ResourceManager::(registerResource) Resource 'Storage.965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545' is free. Now locking as 'shared' (1 active user)
Thread-586::DEBUG::2015-03-18 17:49:13,585::resourceManager::238::Storage.ResourceManager.Request::(grant) ResName=`Storage.965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545`ReqID=`bc4409f2-3b01-4467-998e-a29bee2de906`::Granted request
Thread-586::DEBUG::2015-03-18 17:49:13,585::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`b4c42055-c38b-4427-90ad-a00120f4d5ee`::_resourcesAcquired: Storage.965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545 (shared)
Thread-586::DEBUG::2015-03-18 17:49:13,585::task::993::Storage.TaskManager.Task::(_decref) Task=`b4c42055-c38b-4427-90ad-a00120f4d5ee`::ref 1 aborting False
Thread-586::DEBUG::2015-03-18 17:49:13,613::fileSD::499::Storage.StorageDomain::(activateVolumes) Fixing permissions on /rhev/data-center/mnt/10.10.30.251:_var_nas1_OVirtIB/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545/images/422a4486-6642-41ae-bb1d-b6a955550689/26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2
Thread-586::DEBUG::2015-03-18 17:49:13,615::fileUtils::142::Storage.fileUtils::(createdir) Creating directory: /var/run/vdsm/storage/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545
Thread-586::WARNING::2015-03-18 17:49:13,615::fileUtils::149::Storage.fileUtils::(createdir) Dir /var/run/vdsm/storage/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545 already exists
Thread-586::DEBUG::2015-03-18 17:49:13,615::fileSD::474::Storage.StorageDomain::(createImageLinks) Creating symlink from /rhev/data-center/mnt/10.10.30.251:_var_nas1_OVirtIB/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545/images/422a4486-6642-41ae-bb1d-b6a955550689 to /var/run/vdsm/storage/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545/422a4486-6642-41ae-bb1d-b6a955550689
Thread-586::DEBUG::2015-03-18 17:49:13,615::fileSD::479::Storage.StorageDomain::(createImageLinks) img run dir already exists: /var/run/vdsm/storage/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545/422a4486-6642-41ae-bb1d-b6a955550689
Thread-586::DEBUG::2015-03-18 17:49:13,618::fileVolume::529::Storage.Volume::(validateVolumePath) validate path for 26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2
Thread-586::INFO::2015-03-18 17:49:13,621::logUtils::47::dispatcher::(wrapper) Run and protect: prepareImage, Return response: {'info': {'domainID': '965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/10.10.30.251:_var_nas1_OVirtIB/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545/images/422a4486-6642-41ae-bb1d-b6a955550689/26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2', 'volumeID': u'26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2', 'leasePath': u'/rhev/data-center/mnt/10.10.30.251:_var_nas1_OVirtIB/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545/images/422a4486-6642-41ae-bb1d-b6a955550689/26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2.lease', 'imageID': '422a4486-6642-41ae-bb1d-b6a955550689'}, 'path': u'/rhev/data-center/94ed7a19-fade-4bd6-83f2-2cbb2f730b95/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545/images/422a4486-6642-41ae-bb1d-b6a955550689/26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2', 'imgVolumesInfo': [{'domainID': '965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/10.10.30.251:_var_nas1_OVirtIB/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545/images/422a4486-6642-41ae-bb1d-b6a955550689/26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2', 'volumeID': u'26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2', 'leasePath': u'/rhev/data-center/mnt/10.10.30.251:_var_nas1_OVirtIB/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545/images/422a4486-6642-41ae-bb1d-b6a955550689/26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2.lease', 'imageID': '422a4486-6642-41ae-bb1d-b6a955550689'}]}
Thread-586::DEBUG::2015-03-18 17:49:13,621::task::1191::Storage.TaskManager.Task::(prepare) Task=`b4c42055-c38b-4427-90ad-a00120f4d5ee`::finished: {'info': {'domainID': '965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/10.10.30.251:_var_nas1_OVirtIB/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545/images/422a4486-6642-41ae-bb1d-b6a955550689/26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2', 'volumeID': u'26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2', 'leasePath': u'/rhev/data-center/mnt/10.10.30.251:_var_nas1_OVirtIB/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545/images/422a4486-6642-41ae-bb1d-b6a955550689/26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2.lease', 'imageID': '422a4486-6642-41ae-bb1d-b6a955550689'}, 'path': u'/rhev/data-center/94ed7a19-fade-4bd6-83f2-2cbb2f730b95/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545/images/422a4486-6642-41ae-bb1d-b6a955550689/26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2', 'imgVolumesInfo': [{'domainID': '965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/10.10.30.251:_var_nas1_OVirtIB/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545/images/422a4486-6642-41ae-bb1d-b6a955550689/26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2', 'volumeID': u'26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2', 'leasePath': u'/rhev/data-center/mnt/10.10.30.251:_var_nas1_OVirtIB/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545/images/422a4486-6642-41ae-bb1d-b6a955550689/26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2.lease', 'imageID': '422a4486-6642-41ae-bb1d-b6a955550689'}]}
Thread-586::DEBUG::2015-03-18 17:49:13,622::task::595::Storage.TaskManager.Task::(_updateState) Task=`b4c42055-c38b-4427-90ad-a00120f4d5ee`::moving from state preparing -> state finished
Thread-586::DEBUG::2015-03-18 17:49:13,622::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545': < ResourceRef 'Storage.965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545', isValid: 'True' obj: 'None'>}
Thread-586::DEBUG::2015-03-18 17:49:13,622::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-586::DEBUG::2015-03-18 17:49:13,622::resourceManager::616::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545'
Thread-586::DEBUG::2015-03-18 17:49:13,622::resourceManager::635::Storage.ResourceManager::(releaseResource) Released resource 'Storage.965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545' (0 active users)
Thread-586::DEBUG::2015-03-18 17:49:13,623::resourceManager::641::Storage.ResourceManager::(releaseResource) Resource 'Storage.965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545' is free, finding out if anyone is waiting for it.
Thread-586::DEBUG::2015-03-18 17:49:13,623::resourceManager::649::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545', Clearing records.
Thread-586::DEBUG::2015-03-18 17:49:13,623::task::993::Storage.TaskManager.Task::(_decref) Task=`b4c42055-c38b-4427-90ad-a00120f4d5ee`::ref 0 aborting False
Thread-586::INFO::2015-03-18 17:49:13,623::clientIF::335::vds::(prepareVolumePath) prepared volume path: /rhev/data-center/94ed7a19-fade-4bd6-83f2-2cbb2f730b95/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545/images/422a4486-6642-41ae-bb1d-b6a955550689/26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2
Thread-586::DEBUG::2015-03-18 17:49:13,633::vm::1058::vm.Vm::(__init__) vmId=`d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b`::Ignoring param (target, 4194304) in BalloonDevice
Thread-586::DEBUG::2015-03-18 17:49:13,633::vm::2294::vm.Vm::(_startUnderlyingVm) vmId=`d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b`::_ongoingCreations released
Thread-586::DEBUG::2015-03-18 17:49:13,634::vm::4128::vm.Vm::(_waitForUnderlyingMigration) vmId=`d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b`::Waiting 21600 seconds for end of migration
Thread-585::ERROR::2015-03-18 17:49:13,634::vm::5638::vm.Vm::(_updateDevicesDomxmlCache) vmId=`d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b`::Alias not found for device type graphics during migration at destination host
Thread-585::DEBUG::2015-03-18 17:49:13,649::API::558::vds::(migrationCreate) Destination VM creation succeeded
Thread-585::DEBUG::2015-03-18 17:49:13,649::BindingXMLRPC::1140::vds::(wrapper) return vmMigrationCreate with {'status': {'message': 'Done', 'code': 0}, 'migrationPort': 0, 'params': {'username': '', 'acpiEnable': 'true', 'emulatedMachine': 'pc-1.0', 'afterMigrationStatus': '', 'pid': '0', 'memGuaranteedSize': 4096, 'transparentHugePages': 'true', 'displaySecurePort': '-1', 'spiceSslCipherSuite': 'DEFAULT', 'cpuType': 'Nehalem', 'custom': {'device_86204f23-f278-4937-85c7-81696db7c90cdevice_e8a3825b-d5b7-4007-a4d7-0e4eec94771bdevice_622098c3-ebed-4f48-b33f-125fcb757ed6': 'VmDevice {vmId=d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b, deviceId=622098c3-ebed-4f48-b33f-125fcb757ed6, device=unix, type=CHANNEL, bootOrder=0, specParams={}, address={port=2, bus=0, controller=0, type=virtio-serial}, managed=false, plugged=true, readOnly=false, deviceAlias=channel1, customProperties={}, snapshotId=null, logicalName=null}', 'device_86204f23-f278-4937-85c7-81696db7c90cdevice_e8a3825b-d5b7-4007-a4d7-0e4eec94771b': 'VmDevice {vmId=d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b, deviceId=e8a3825b-d5b7-4007-a4d7-0e4eec94771b, device=unix, type=CHANNEL, bootOrder=0, specParams={}, address={port=1, bus=0, controller=0, type=virtio-serial}, managed=false, plugged=true, readOnly=false, deviceAlias=channel0, customProperties={}, snapshotId=null, logicalName=null}', 'device_86204f23-f278-4937-85c7-81696db7c90c': 'VmDevice {vmId=d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b, deviceId=86204f23-f278-4937-85c7-81696db7c90c, device=ide, type=CONTROLLER, bootOrder=0, specParams={}, address={bus=0x00, domain=0x0000, type=pci, slot=0x01, function=0x1}, managed=false, plugged=true, readOnly=false, deviceAlias=ide0, customProperties={}, snapshotId=null, logicalName=null}', 'device_86204f23-f278-4937-85c7-81696db7c90cdevice_e8a3825b-d5b7-4007-a4d7-0e4eec94771bdevice_622098c3-ebed-4f48-b33f-125fcb757ed6device_7cd98734-54cb-4963-96a3-ab85e48bcc8a': 'VmDevice {vmId=d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b, deviceId=7cd98734-54cb-4963-96a3-ab85e48bcc8a, device=spicevmc, type=CHANNEL, bootOrder=0, specParams={}, address={port=3, bus=0, controller=0, type=virtio-serial}, managed=false, plugged=true, readOnly=false, deviceAlias=channel2, customProperties={}, snapshotId=null, logicalName=null}'}, 'pauseCode': 'NOERR', 'migrationDest': 'libvirt', 'hypervEnable': 'true', 'smp': '2', 'vmType': 'kvm', 'memSize': 4096, 'smpCoresPerSocket': '1', 'vmName': 'colvm60', 'nice': '0', 'guestFQDN': 'colvm60.collogia.de', 'bootMenuEnable': 'false', 'guestDiskMapping': {}, 'vmId': 'd2d8bdfd-99a6-41c0-84e7-26e1d6a6057b', 'copyPasteEnable': 'true', 'displayIp': '192.168.11.46', 'displayPort': '-1', 'guestIPs': '192.168.10.70', 'smartcardEnable': 'false', 'spiceSecureChannels': 'smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard', 'fileTransferEnable': 'true', 'nicModel': 'rtl8139,pv', 'keyboardLayout': 'de', 'kvmEnable': 'true', 'displayNetwork': 'ovirtmgmt', 'devices': [{'specParams': {}, 'alias': 'sound0', 'deviceId': 'c0611af3-b676-4b7b-a794-1cea5f54e1db', 'address': {'slot': '0x04', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'ich6', 'type': 'sound'}, {'device': 'memballoon', 'specParams': {'model': 'none'}, 'type': 'balloon', 'alias': 'balloon0'}, {'index': '0', 'alias': 'scsi0', 'specParams': {}, 'deviceId': 'b0b3dbb0-dc4e-423c-8a56-9f06136db4d7', 'address': {'slot': '0x05', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'scsi', 'model': 'virtio-scsi', 'type': 'controller'}, {'specParams': {}, 'alias': 'virtio-serial0', 'deviceId': '7ea156ea-94bf-4bc3-bbd6-302b646cd596', 'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'virtio-serial', 'type': 'controller'}, {'specParams': {'vram': '32768', 'heads': '1'}, 'alias': 'video0', 'deviceId': '417a36be-98c1-4af7-950a-df13388c9e4d', 'address': {'slot': '0x02', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'qxl', 'type': 'video'}, {'device': 'spice', 'specParams': {'spiceSecureChannels': 'smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard', 'copyPasteEnable': 'true', 'keyMap': 'de', 'displayIp': '192.168.11.43', 'displayNetwork': 'ovirtmgmt'}, 'type': 'graphics', 'tlsPort': '5902'}, {'nicModel': 'pv', 'macAddr': '00:0c:29:7a:94:f1', 'linkActive': True, 'network': 'ovirtmgmt', 'specParams': {'inbound': {}, 'outbound': {}}, 'bootOrder': '2', 'filter': 'vdsm-no-mac-spoofing', 'alias': 'net0', 'deviceId': 'd8694952-f28d-4e97-a1d5-506daf4cc282', 'address': {'slot': '0x03', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'bridge', 'type': 'interface', 'name': 'vnet2'}, {'index': '2', 'iface': 'ide', 'name': 'hdc', 'alias': 'ide0-1-0', 'specParams': {'path': ''}, 'readonly': 'True', 'deviceId': '855a5a5b-7e9a-4e94-b1f0-7d8c26552503', 'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'device': 'cdrom', 'shared': 'false', 'path': '', 'type': 'disk'}, {'poolID': '94ed7a19-fade-4bd6-83f2-2cbb2f730b95', 'volumeInfo': {'domainID': '965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545', 'volType': 'path', 'leaseOffset': 0, 'volumeID': '26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2', 'leasePath': '/rhev/data-center/mnt/10.10.30.251:_var_nas1_OVirtIB/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545/images/422a4486-6642-41ae-bb1d-b6a955550689/26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2.lease', 'imageID': '422a4486-6642-41ae-bb1d-b6a955550689', 'path': '/rhev/data-center/mnt/10.10.30.251:_var_nas1_OVirtIB/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545/images/422a4486-6642-41ae-bb1d-b6a955550689/26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2'}, 'index': 0, 'iface': 'virtio', 'apparentsize': '48318382080', 'specParams': {}, 'imageID': '422a4486-6642-41ae-bb1d-b6a955550689', 'readonly': 'False', 'shared': 'false', 'truesize': '38062170112', 'type': 'disk', 'domainID': '965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545', 'reqsize': '0', 'format': 'raw', 'deviceId': '422a4486-6642-41ae-bb1d-b6a955550689', 'address': {'slot': '0x07', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'disk', 'path': '/rhev/data-center/94ed7a19-fade-4bd6-83f2-2cbb2f730b95/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545/images/422a4486-6642-41ae-bb1d-b6a955550689/26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2', 'propagateErrors': 'off', 'optional': 'false', 'name': 'vda', 'bootOrder': '1', 'volumeID': '26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2', 'alias': 'virtio-disk0', 'volumeChain': [{'domainID': '965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545', 'volType': 'path', 'leaseOffset': 0, 'volumeID': '26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2', 'leasePath': '/rhev/data-center/mnt/10.10.30.251:_var_nas1_OVirtIB/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545/images/422a4486-6642-41ae-bb1d-b6a955550689/26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2.lease', 'imageID': '422a4486-6642-41ae-bb1d-b6a955550689', 'path': '/rhev/data-center/mnt/10.10.30.251:_var_nas1_OVirtIB/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545/images/422a4486-6642-41ae-bb1d-b6a955550689/26b4c1e3-faf8-4fb3-b662-da6a55a3d8f2'}]}, {'device': 'usb', 'alias': 'usb0', 'type': 'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x2'}}, {'device': 'ide', 'alias': 'ide0', 'type': 'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x1'}}, {'device': 'unix', 'alias': 'channel0', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '1'}}, {'device': 'unix', 'alias': 'channel1', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '2'}}, {'device': 'spicevmc', 'alias': 'channel2', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '3'}}], 'status': 'Migration Destination', 'timeOffset': '3600', 'maxVCpus': '16', 'clientIp': '', 'display': 'qxl'}}
libvirtEventLoop::DEBUG::2015-03-18 17:49:14,221::vm::5571::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b`::event Started detail 1 opaque None
JsonRpc (StompReactor)::DEBUG::2015-03-18 17:49:15,424::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-03-18 17:49:15,425::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-588::DEBUG::2015-03-18 17:49:15,427::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-03-18 17:49:15,432::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-03-18 17:49:15,434::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-589::DEBUG::2015-03-18 17:49:15,436::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
Thread-32::DEBUG::2015-03-18 17:49:16,182::fileSD::261::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd if=/rhev/data-center/mnt/10.10.30.253:_var_nas3_OVirtISO/bff3a2be-fdd9-4e37-b416-fa4ef7fafba2/dom_md/metadata iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-32::DEBUG::2015-03-18 17:49:16,189::fileSD::261::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n357 bytes (357 B) copied, 0.000303553 s, 1.2 MB/s\n'; <rc> = 0
Thread-41::DEBUG::2015-03-18 17:49:16,193::fileSD::261::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd if=/rhev/data-center/mnt/10.10.30.253:_var_nas3_OVirtEXP/63041fa9-e093-4b44-b36f-f39f16d3974f/dom_md/metadata iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-41::DEBUG::2015-03-18 17:49:16,200::fileSD::261::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n363 bytes (363 B) copied, 0.000328831 s, 1.1 MB/s\n'; <rc> = 0
Thread-46::DEBUG::2015-03-18 17:49:16,305::fileSD::261::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd if=/rhev/data-center/mnt/10.10.30.252:_var_nas2_OVirtIB/272ec473-6041-42ee-bd1a-732789dd18d4/dom_md/metadata iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-46::DEBUG::2015-03-18 17:49:16,311::fileSD::261::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n557 bytes (557 B) copied, 0.000287691 s, 1.9 MB/s\n'; <rc> = 0
Thread-38::DEBUG::2015-03-18 17:49:16,946::fileSD::261::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd if=/rhev/data-center/mnt/10.10.30.253:_var_nas3_OVirtIB/2c51d320-88ce-4f23-8215-e15f55f66906/dom_md/metadata iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-33::DEBUG::2015-03-18 17:49:16,946::fileSD::261::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd if=/rhev/data-center/mnt/10.10.30.251:_var_nas1_OVirtIB/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545/dom_md/metadata iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-38::DEBUG::2015-03-18 17:49:16,956::fileSD::261::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n644 bytes (644 B) copied, 0.00027901 s, 2.3 MB/s\n'; <rc> = 0
Thread-33::DEBUG::2015-03-18 17:49:16,958::fileSD::261::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n644 bytes (644 B) copied, 0.000852178 s, 756 kB/s\n'; <rc> = 0
libvirtEventLoop::DEBUG::2015-03-18 17:49:17,619::vm::5571::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b`::event Stopped detail 5 opaque None
libvirtEventLoop::INFO::2015-03-18 17:49:17,619::vm::2366::vm.Vm::(_onQemuDeath) vmId=`d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b`::underlying process disconnected
libvirtEventLoop::INFO::2015-03-18 17:49:17,620::vm::4952::vm.Vm::(releaseVm) vmId=`d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b`::Release VM resources
libvirtEventLoop::DEBUG::2015-03-18 17:49:17,623::task::595::Storage.TaskManager.Task::(_updateState) Task=`e0d01cc9-0a30-4b30-a163-0bb123aa9be2`::moving from state init -> state preparing
Thread-586::DEBUG::2015-03-18 17:49:17,624::libvirtconnection::143::root::(wrapper) Unknown libvirterror: ecode: 42 edom: 10 level: 2 message: Domain nicht gefunden: Keine Domain mit übereinstimmender UUID 'd2d8bdfd-99a6-41c0-84e7-26e1d6a6057b' [German for: "Domain not found: no domain with matching UUID 'd2d8bdfd-99a6-41c0-84e7-26e1d6a6057b'"]
libvirtEventLoop::INFO::2015-03-18 17:49:17,624::logUtils::44::dispatcher::(wrapper) Run and protect: teardownImage(sdUUID='965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545', spUUID='94ed7a19-fade-4bd6-83f2-2cbb2f730b95', imgUUID='422a4486-6642-41ae-bb1d-b6a955550689', volUUID=None)
Thread-586::ERROR::2015-03-18 17:49:17,625::vm::2325::vm.Vm::(_startUnderlyingVm) vmId=`d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b`::Failed to start a migration destination vm
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2298, in _startUnderlyingVm
    self._completeIncomingMigration()
  File "/usr/share/vdsm/virt/vm.py", line 4107, in _completeIncomingMigration
    self._incomingMigrationFinished.isSet(), usedTimeout)
  File "/usr/share/vdsm/virt/vm.py", line 4160, in _attachLibvirtDomainAfterMigration
    raise MigrationError(e.get_error_message())
MigrationError: Domain nicht gefunden: Keine Domain mit übereinstimmender UUID 'd2d8bdfd-99a6-41c0-84e7-26e1d6a6057b'
libvirtEventLoop::DEBUG::2015-03-18 17:49:17,627::resourceManager::198::Storage.ResourceManager.Request::(__init__) ResName=`Storage.965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545`ReqID=`2ec4cc19-966e-4269-a6fc-bb3cf7a519b2`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3253' at 'teardownImage'
libvirtEventLoop::DEBUG::2015-03-18 17:49:17,629::resourceManager::542::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545' for lock type 'shared'
Detector thread::DEBUG::2015-03-18 17:49:17,629::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from 192.168.11.43:37061
Thread-586::DEBUG::2015-03-18 17:49:17,632::vm::2786::vm.Vm::(setDownStatus) vmId=`d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b`::Changed state to Down: VM failed to migrate (code=8)
libvirtEventLoop::DEBUG::2015-03-18 17:49:17,632::resourceManager::601::Storage.ResourceManager::(registerResource) Resource 'Storage.965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545' is free. Now locking as 'shared' (1 active user)
libvirtEventLoop::DEBUG::2015-03-18 17:49:17,634::resourceManager::238::Storage.ResourceManager.Request::(grant) ResName=`Storage.965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545`ReqID=`2ec4cc19-966e-4269-a6fc-bb3cf7a519b2`::Granted request
libvirtEventLoop::DEBUG::2015-03-18 17:49:17,638::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`e0d01cc9-0a30-4b30-a163-0bb123aa9be2`::_resourcesAcquired: Storage.965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545 (shared)
libvirtEventLoop::DEBUG::2015-03-18 17:49:17,638::task::993::Storage.TaskManager.Task::(_decref) Task=`e0d01cc9-0a30-4b30-a163-0bb123aa9be2`::ref 1 aborting False
libvirtEventLoop::INFO::2015-03-18 17:49:17,638::logUtils::47::dispatcher::(wrapper) Run and protect: teardownImage, Return response: None
libvirtEventLoop::DEBUG::2015-03-18 17:49:17,638::task::1191::Storage.TaskManager.Task::(prepare) Task=`e0d01cc9-0a30-4b30-a163-0bb123aa9be2`::finished: None
libvirtEventLoop::DEBUG::2015-03-18 17:49:17,638::task::595::Storage.TaskManager.Task::(_updateState) Task=`e0d01cc9-0a30-4b30-a163-0bb123aa9be2`::moving from state preparing -> state finished
libvirtEventLoop::DEBUG::2015-03-18 17:49:17,639::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545': < ResourceRef 'Storage.965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545', isValid: 'True' obj: 'None'>}
libvirtEventLoop::DEBUG::2015-03-18 17:49:17,639::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
libvirtEventLoop::DEBUG::2015-03-18 17:49:17,639::resourceManager::616::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545'
libvirtEventLoop::DEBUG::2015-03-18 17:49:17,639::resourceManager::635::Storage.ResourceManager::(releaseResource) Released resource 'Storage.965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545' (0 active users)
libvirtEventLoop::DEBUG::2015-03-18 17:49:17,640::resourceManager::641::Storage.ResourceManager::(releaseResource) Resource 'Storage.965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545' is free, finding out if anyone is waiting for it.
libvirtEventLoop::DEBUG::2015-03-18 17:49:17,640::resourceManager::649::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545', Clearing records.
libvirtEventLoop::DEBUG::2015-03-18 17:49:17,640::task::993::Storage.TaskManager.Task::(_decref) Task=`e0d01cc9-0a30-4b30-a163-0bb123aa9be2`::ref 0 aborting False
libvirtEventLoop::WARNING::2015-03-18 17:49:17,640::utils::129::root::(rmFile) File: /var/lib/libvirt/qemu/channels/d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b.com.redhat.rhevm.vdsm already removed
libvirtEventLoop::WARNING::2015-03-18 17:49:17,641::utils::129::root::(rmFile) File: /var/lib/libvirt/qemu/channels/d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b.org.qemu.guest_agent.0 already removed
libvirtEventLoop::DEBUG::2015-03-18 17:49:17,641::task::595::Storage.TaskManager.Task::(_updateState) Task=`d02609f6-313a-45b7-b6fc-97c296f7456d`::moving from state init -> state preparing
libvirtEventLoop::INFO::2015-03-18 17:49:17,641::logUtils::44::dispatcher::(wrapper) Run and protect: inappropriateDevices(thiefId='d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b')
Detector thread::DEBUG::2015-03-18 17:49:17,642::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from 192.168.11.43:37061
Detector thread::DEBUG::2015-03-18 17:49:17,642::protocoldetector::246::vds.MultiProtocolAcceptor::(_handle_connection_read) Detected protocol xml from 192.168.11.43:37061
libvirtEventLoop::INFO::2015-03-18 17:49:17,642::logUtils::47::dispatcher::(wrapper) Run and protect: inappropriateDevices, Return response: None
Detector thread::DEBUG::2015-03-18 17:49:17,643::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http detected from ('192.168.11.43', 37061)
libvirtEventLoop::DEBUG::2015-03-18 17:49:17,644::task::1191::Storage.TaskManager.Task::(prepare) Task=`d02609f6-313a-45b7-b6fc-97c296f7456d`::finished: None
libvirtEventLoop::DEBUG::2015-03-18 17:49:17,646::task::595::Storage.TaskManager.Task::(_updateState) Task=`d02609f6-313a-45b7-b6fc-97c296f7456d`::moving from state preparing -> state finished
libvirtEventLoop::DEBUG::2015-03-18 17:49:17,647::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-590::DEBUG::2015-03-18 17:49:17,648::BindingXMLRPC::1133::vds::(wrapper) client [192.168.11.43]::call vmDestroy with ('d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b',) {}
libvirtEventLoop::DEBUG::2015-03-18 17:49:17,648::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-590::INFO::2015-03-18 17:49:17,649::API::332::vds::(destroy) vmContainerLock acquired by vm d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b
libvirtEventLoop::DEBUG::2015-03-18 17:49:17,650::task::993::Storage.TaskManager.Task::(_decref) Task=`d02609f6-313a-45b7-b6fc-97c296f7456d`::ref 0 aborting False
Thread-590::DEBUG::2015-03-18 17:49:17,651::vm::5026::vm.Vm::(destroy) vmId=`d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b`::destroy Called
libvirtEventLoop::WARNING::2015-03-18 17:49:17,651::vm::1953::vm.Vm::(_set_lastStatus) vmId=`d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b`::trying to set state to Down when already Down
Thread-590::DEBUG::2015-03-18 17:49:17,653::vm::5020::vm.Vm::(deleteVm) vmId=`d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b`::Total desktops after destroy of d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b is 0
Thread-590::DEBUG::2015-03-18 17:49:17,653::BindingXMLRPC::1140::vds::(wrapper) return vmDestroy with {'status': {'message': 'Machine destroyed', 'code': 0}}
JsonRpc (StompReactor)::DEBUG::2015-03-18 17:49:18,463::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-03-18 17:49:18,465::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-591::DEBUG::2015-03-18 17:49:18,466::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-03-18 17:49:21,487::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-03-18 17:49:21,489::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-592::DEBUG::2015-03-18 17:49:21,491::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-03-18 17:49:24,669::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-03-18 17:49:24,671::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-593::DEBUG::2015-03-18 17:49:24,673::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
Thread-32::DEBUG::2015-03-18 17:49:26,195::fileSD::261::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd if=/rhev/data-center/mnt/10.10.30.253:_var_nas3_OVirtISO/bff3a2be-fdd9-4e37-b416-fa4ef7fafba2/dom_md/metadata iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-32::DEBUG::2015-03-18 17:49:26,202::fileSD::261::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n357 bytes (357 B) copied, 0.000315366 s, 1.1 MB/s\n'; <rc> = 0
Thread-41::DEBUG::2015-03-18 17:49:26,206::fileSD::261::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd if=/rhev/data-center/mnt/10.10.30.253:_var_nas3_OVirtEXP/63041fa9-e093-4b44-b36f-f39f16d3974f/dom_md/metadata iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-41::DEBUG::2015-03-18 17:49:26,213::fileSD::261::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n363 bytes (363 B) copied, 0.000316359 s, 1.1 MB/s\n'; <rc> = 0
Thread-46::DEBUG::2015-03-18 17:49:26,315::fileSD::261::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd if=/rhev/data-center/mnt/10.10.30.252:_var_nas2_OVirtIB/272ec473-6041-42ee-bd1a-732789dd18d4/dom_md/metadata iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-46::DEBUG::2015-03-18 17:49:26,322::fileSD::261::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n557 bytes (557 B) copied, 0.00033809 s, 1.6 MB/s\n'; <rc> = 0
Thread-38::DEBUG::2015-03-18 17:49:26,968::fileSD::261::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd if=/rhev/data-center/mnt/10.10.30.253:_var_nas3_OVirtIB/2c51d320-88ce-4f23-8215-e15f55f66906/dom_md/metadata iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-33::DEBUG::2015-03-18 17:49:26,974::fileSD::261::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd if=/rhev/data-center/mnt/10.10.30.251:_var_nas1_OVirtIB/965ca3b6-4f9c-4e81-b6e8-5ed4a9e58545/dom_md/metadata iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-38::DEBUG::2015-03-18 17:49:26,975::fileSD::261::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n644 bytes (644 B) copied, 0.000260734 s, 2.5 MB/s\n'; <rc> = 0
Thread-33::DEBUG::2015-03-18 17:49:26,983::fileSD::261::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n644 bytes (644 B) copied, 0.000209731 s, 3.1 MB/s\n'; <rc> = 0
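The getReadDelay probes above are plain dd reads of each storage domain's dom_md/metadata file, timed via dd's own stderr report. A minimal sketch of the same check that can be run by hand (the METADATA path below is a placeholder, not one of this cluster's mounts):

```shell
# Sketch of vdsm's read-delay probe: time a single 4 KiB direct-I/O read
# of a storage domain's dom_md/metadata file, as in the log lines above.
# METADATA is a placeholder; point it at a real dom_md/metadata path.
METADATA=${METADATA:-/tmp/metadata.sample}
[ -f "$METADATA" ] || echo "sample metadata" > "$METADATA"   # stand-in file
# iflag=direct bypasses the page cache so the timing reflects the storage
# backend; it can fail on filesystems without O_DIRECT support (e.g. tmpfs),
# so fall back to a plain buffered read in that case.
dd if="$METADATA" of=/dev/null bs=4096 count=1 iflag=direct 2>/dev/null \
  || dd if="$METADATA" of=/dev/null bs=4096 count=1
```

dd prints the bytes copied and the elapsed time on stderr, which is exactly what the SUCCESS lines above capture.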


******************************************
engine log:

2015-03-18 17:15:28,051 INFO [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand] (org.ovirt.thread.pool-8-thread-20) [6a89c3bd] Before acquiring lock in order to prevent monitoring for host colovn06 from data-center Collogia
2015-03-18 17:15:28,052 INFO [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand] (org.ovirt.thread.pool-8-thread-20) [6a89c3bd] Lock acquired, from now a monitoring of host will be skipped for host colovn06 from data-center Collogia
2015-03-18 17:15:28,174 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] (org.ovirt.thread.pool-8-thread-20) [6a89c3bd] START, GetHardwareInfoVDSCommand(HostName = colovn06, HostId = 68963fb8-b386-4f5c-a88f-a7952bac78a3, vds=Host[colovn06,68963fb8-b386-4f5c-a88f-a7952bac78a3]), log id: 6e4d23dc
2015-03-18 17:15:28,180 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] (org.ovirt.thread.pool-8-thread-20) [6a89c3bd] FINISH, GetHardwareInfoVDSCommand, log id: 6e4d23dc
2015-03-18 17:15:28,273 INFO [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] (org.ovirt.thread.pool-8-thread-20) [2e2d467b] Running command: HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities affected : ID: 68963fb8-b386-4f5c-a88f-a7952bac78a3 Type: VDS
2015-03-18 17:15:28,300 INFO [org.ovirt.engine.core.bll.HandleVdsVersionCommand] (org.ovirt.thread.pool-8-thread-20) [57ded262] Running command: HandleVdsVersionCommand internal: true. Entities affected : ID: 68963fb8-b386-4f5c-a88f-a7952bac78a3 Type: VDS
2015-03-18 17:15:28,303 INFO [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand] (org.ovirt.thread.pool-8-thread-20) [57ded262] Refresh host capabilities finished. Lock released. Monitoring can run now for host colovn06 from data-center Collogia
2015-03-18 17:15:28,313 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-20) [57ded262] Correlation ID: 6a89c3bd, Call Stack: null, Custom Event ID: -1, Message: Successfully refreshed the capabilities of host colovn06.
2015-03-18 17:15:28,314 INFO [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand] (org.ovirt.thread.pool-8-thread-20) [57ded262] Lock freed to object EngineLock [exclusiveLocks= key: 68963fb8-b386-4f5c-a88f-a7952bac78a3 value: VDS
, sharedLocks= ]
2015-03-18 17:18:47,844 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (ajp--127.0.0.1-8702-8) [2dd847cf] Lock Acquired to object EngineLock [exclusiveLocks= key: d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b value: VM
, sharedLocks= ]
2015-03-18 17:18:48,125 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-8-thread-30) [2dd847cf] Running command: MigrateVmToServerCommand internal: false. Entities affected : ID: d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b Type: VMAction group MIGRATE_VM with role type USER, ID: d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b Type: VMAction group EDIT_VM_PROPERTIES with role type USER, ID: 77ec9f6d-a0be-46f5-a2d0-6002f097e587 Type: VdsGroupsAction group CREATE_VM with role type USER
2015-03-18 17:18:48,261 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-8-thread-30) [2dd847cf] START, MigrateVDSCommand(HostName = colovn03, HostId = 708845a9-8b82-4d22-97e7-3a907f3005d5, vmId=d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b, srcHost=192.168.11.43, dstVdsId=68963fb8-b386-4f5c-a88f-a7952bac78a3, dstHost=192.168.11.46:54321, migrationMethod=ONLINE, tunnelMigration=false, migrationDowntime=0), log id: 71fe79b2
2015-03-18 17:18:48,275 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-8-thread-30) [2dd847cf] START, MigrateBrokerVDSCommand(HostName = colovn03, HostId = 708845a9-8b82-4d22-97e7-3a907f3005d5, vmId=d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b, srcHost=192.168.11.43, dstVdsId=68963fb8-b386-4f5c-a88f-a7952bac78a3, dstHost=192.168.11.46:54321, migrationMethod=ONLINE, tunnelMigration=false, migrationDowntime=0), log id: 3496db06
2015-03-18 17:18:48,294 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-8-thread-30) [2dd847cf] FINISH, MigrateBrokerVDSCommand, log id: 3496db06
2015-03-18 17:18:48,321 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-8-thread-30) [2dd847cf] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 71fe79b2
2015-03-18 17:18:48,339 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-30) [2dd847cf] Correlation ID: 2dd847cf, Job ID: 89ee5f05-f210-4700-9cc3-ae1efdc54a8d, Call Stack: null, Custom Event ID: -1, Message: Migration started (VM: colvm60, Source: colovn03, Destination: colovn06, User: stockhausen(a)collogia.de).
2015-03-18 17:18:51,160 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-60) RefreshVmList vm id d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b is migrating to vds colovn06 ignoring it in the refresh until migration is done
2015-03-18 17:18:52,591 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-28) [5059f179] VM colvm60 d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b moved from MigratingFrom --> Up
2015-03-18 17:18:52,594 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-28) [5059f179] Adding VM d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b to re-run list
2015-03-18 17:18:52,613 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-28) [5059f179] Rerun vm d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b. Called from vds colovn03
2015-03-18 17:18:52,677 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (org.ovirt.thread.pool-8-thread-27) [5059f179] START, MigrateStatusVDSCommand(HostName = colovn03, HostId = 708845a9-8b82-4d22-97e7-3a907f3005d5, vmId=d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b), log id: 20cb664b
2015-03-18 17:18:52,683 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (org.ovirt.thread.pool-8-thread-27) [5059f179] Failed in MigrateStatusVDS method
2015-03-18 17:18:52,684 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (org.ovirt.thread.pool-8-thread-27) [5059f179] Command org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand return value
StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=12, mMessage=Fatal error during migration]]
2015-03-18 17:18:52,686 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (org.ovirt.thread.pool-8-thread-27) [5059f179] HostName = colovn03
2015-03-18 17:18:52,687 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (org.ovirt.thread.pool-8-thread-27) [5059f179] Command MigrateStatusVDSCommand(HostName = colovn03, HostId = 708845a9-8b82-4d22-97e7-3a907f3005d5, vmId=d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b) execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error = Fatal error during migration, code = 12
2015-03-18 17:18:52,690 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (org.ovirt.thread.pool-8-thread-27) [5059f179] FINISH, MigrateStatusVDSCommand, log id: 20cb664b
2015-03-18 17:18:52,705 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-27) [5059f179] Correlation ID: 2dd847cf, Job ID: 89ee5f05-f210-4700-9cc3-ae1efdc54a8d, Call Stack: null, Custom Event ID: -1, Message: Migration failed due to Error: Fatal error during migration (VM: colvm60, Source: colovn03, Destination: colovn06).
2015-03-18 17:18:52,729 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-8-thread-27) [5059f179] Lock freed to object EngineLock [exclusiveLocks= key: d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b value: VM
, sharedLocks= ]
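The engine-side failure mirrors the vdsm traceback: the destination host's libvirt reports no domain with the VM's UUID, so the engine sees "Fatal error during migration" (code 12). A hedged way to confirm that symptom directly on the destination host (this assumes virsh is available there; the UUID is copied from the logs above):

```shell
# Hypothetical check on the destination host (colovn06 in the logs):
# ask libvirt directly whether it knows the migrated VM's UUID.
UUID=d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b
if virsh domstate "$UUID" >/dev/null 2>&1; then
    echo "libvirt knows domain $UUID"
else
    echo "libvirt has no domain $UUID (matches the 'Domain nicht gefunden' error)"
fi
```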
Minutes: http://ovirt.org/meetings/ovirt/2015/ovirt.2015-03-18-14.01.html
Minutes (text): http://ovirt.org/meetings/ovirt/2015/ovirt.2015-03-18-14.01.txt
Log: http://ovirt.org/meetings/ovirt/2015/ovirt.2015-03-18-14.01.log.html
Meeting summary
---------------
* Agenda and Roll Call (bkp, 14:02:30)
* infra update (bkp, 14:02:30)
* 3.5.z updates (bkp, 14:02:30)
* 3.6 status (bkp, 14:02:30)
* Conferences and Workshops (bkp, 14:02:30)
* other topics (bkp, 14:02:32)
* infra update (bkp, 14:05:00)
* infra update Upstream is mostly stable afaik, other than some gerrit
issues dcaro is looking into (bkp, 14:13:33)
* infra update We're waiting for a memory upgrade for hosts in PHX that
  will allow us to add more slaves to Jenkins (bkp, 14:13:36)
* infra update There are a few slaves out of the pool currently
upstream, waiting to be checked up, but no updates (bkp, 14:13:39)
* infra update Some duplicated gerrit accounts were also fixed (bkp,
14:13:42)
* 3.5.z updates (bkp, 14:13:50)
* 3.5.z updates Full status at
http://lists.ovirt.org/pipermail/users/2015-March/031881.html (bkp,
14:18:13)
* 3.5.z updates 3.5.2 RC2 build just completed, testing in progress.
Released today, if tests passed (bkp, 14:18:16)
* 3.5.z updates Some blockers open, some of them should be fixed by
end of week (bkp, 14:18:19)
* 3.5.z updates RC2 won't become GA due to existing blockers but at
least we can test fixed bugs (bkp, 14:18:22)
* 3.5.z updates Another RC will likely be available next week (bkp,
14:18:25)
* 3.5.z updates We'll need to test it on CentOS 7.1, too (bkp,
  14:18:29)
* 3.6 status (bkp, 14:18:29)
* 3.6 status integration Full status at:
http://lists.ovirt.org/pipermail/users/2015-March/031878.html (bkp,
14:22:06)
* 3.6 status integration Bug count on rise, no blockers yet (bkp,
14:22:09)
* 3.6 status integration There is about one month before feature
submission is closed (bkp, 14:22:12)
* 3.6 status integration We had some progress with hosted engine
features, moving the conf to shared storage and provisioning
additional hosts by UX. But nothing testable yet (bkp, 14:22:15)
* status UX Not many updates, we are making GOOD progress on the
tooltips, stuff is being merged, hopefully soon we can mark it
complete. (bkp, 14:24:16)
* 3.6 status network No important milestones this week, nor last week.
Progress proceeding apace. (bkp, 14:26:49)
* 3.6 status network Some testing started on our host networking API
feature even though it's not even merged! Some issues discovered.
(bkp, 14:26:52)
* 3.6 status network And the SR-IOV feature is going well, should
start being reviewed/merged soon enough. (bkp, 14:26:55)
* 3.6 status Gluster No updates this week. (bkp, 14:29:38)
* 3.6 status storage Good progress all round, starting to see first
drops of features to QA stakeholders (draft builds) (bkp, 14:32:01)
* 3.6 status virt (bkp, 14:34:53)
* 3.6 status virt No updates this week (bkp, 14:35:10)
* 3.6 status Node No updates this week (bkp, 14:37:18)
* 3.6 status SLA No updates this week (bkp, 14:40:37)
* conferences and workshops (bkp, 14:40:52)
* conferences and workshops FOSSAsia went very well. Talks were
well-received, and we had great attendance at the oVirt workshop.
(bkp, 14:41:27)
* conferences and workshops James Jiang from Cloud-Times came in from
Beijing and spoke on VDI. He outlined use cases for their commercial
version of oVirt that deploy up to 10,000 VMs! (bkp, 14:41:30)
* conferences and workshops Reminder: KVM Forum registration is now
open
http://events.linuxfoundation.org/events/kvm-forum/attend/register
(bkp, 14:41:34)
* conferences and workshops KVM Forum CfP is open, too, at:
http://events.linuxfoundation.org/events/kvm-forum/program/cfp
(bkp, 14:41:37)
* conferences and workshops Please note, again: there *will* be an
official oVirt track in KVM Forum this year that will serve as the
oVirt Workshop, so keep that in mind when submitting proposals.
(bkp, 14:41:40)
* conferences and workshops CfP is open for 10th Workshop on
Virtualization in High-Performance Cloud Computing (VHPC '15), in
conjunction with Euro-Par 2015, August 24-28, Vienna, Austria (bkp,
14:41:44)
* conferences and workshops oss2015 in Florence, Italy is coming up on
May 16-17: we'll have an oVirt session on May 16 at 17:00 (bkp,
14:41:48)
* Other Topics (bkp, 14:42:37)
* ACTION: Everyone make a note of the sync time next week: it's 1000
EDT. (bkp, 14:45:23)
Meeting ended at 14:45:46 UTC.
Action Items
------------
* Everyone make a note of the sync time next week: it's 1000 EDT.
Action Items, by person
-----------------------
* **UNASSIGNED**
* Everyone make a note of the sync time next week: it's 1000 EDT.
People Present (lines said)
---------------------------
* bkp (99)
* lvernia (19)
* sbonazzo (18)
* eedri (5)
* awels (4)
* amureini (3)
* ovirtbot (3)
* infra_owner|dcaro (2)
* tal (1)
* bkorren (1)
* mlipchuk (1)
--
Brian Proffitt
Community Liaison
oVirt
Open Source and Standards, Red Hat - http://community.redhat.com
Phone: +1 574 383 9BKP
IRC: bkp @ OFTC
I have an oVirt cluster with 6 VM hosts and 4 Gluster nodes. There are two
virtualisation clusters: one with two Nehalem nodes and one with four
Sandy Bridge nodes. My master storage domain is a GlusterFS domain backed by
a replica 3 Gluster volume from 3 of the Gluster nodes. The engine is a
hosted engine 3.5.1 on 3 of the Sandy Bridge nodes, with storage provided by
NFS from a different Gluster volume. All the hosts are CentOS 6.6.
vdsm-4.16.10-8.gitc937927.el6
> glusterfs-3.6.2-1.el6
> kernel 2.6.32-504.8.1.el6.x86_64
Problems happen when I try to add a new brick or replace a brick: eventually
the self-heal will kill the VMs. In the VMs' logs I see kernel hung task
messages.
Mar 12 23:05:16 static1 kernel: INFO: task nginx:1736 blocked for more than
> 120 seconds.
> Mar 12 23:05:16 static1 kernel: Not tainted 2.6.32-504.3.3.el6.x86_64
> #1
> Mar 12 23:05:16 static1 kernel: "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Mar 12 23:05:16 static1 kernel: nginx D 0000000000000001 0
> 1736 1735 0x00000080
> Mar 12 23:05:16 static1 kernel: ffff8800778b17a8 0000000000000082
> 0000000000000000 00000000000126c0
> Mar 12 23:05:16 static1 kernel: ffff88007e5c6500 ffff880037170080
> 0006ce5c85bd9185 ffff88007e5c64d0
> Mar 12 23:05:16 static1 kernel: ffff88007a614ae0 00000001722b64ba
> ffff88007a615098 ffff8800778b1fd8
> Mar 12 23:05:16 static1 kernel: Call Trace:
> Mar 12 23:05:16 static1 kernel: [<ffffffff8152a885>]
> schedule_timeout+0x215/0x2e0
> Mar 12 23:05:16 static1 kernel: [<ffffffff8152a503>]
> wait_for_common+0x123/0x180
> Mar 12 23:05:16 static1 kernel: [<ffffffff81064b90>] ?
> default_wake_function+0x0/0x20
> Mar 12 23:05:16 static1 kernel: [<ffffffffa0210a76>] ?
> _xfs_buf_read+0x46/0x60 [xfs]
> Mar 12 23:05:16 static1 kernel: [<ffffffffa02063c7>] ?
> xfs_trans_read_buf+0x197/0x410 [xfs]
> Mar 12 23:05:16 static1 kernel: [<ffffffff8152a61d>]
> wait_for_completion+0x1d/0x20
> Mar 12 23:05:16 static1 kernel: [<ffffffffa020ff5b>]
> xfs_buf_iowait+0x9b/0x100 [xfs]
> Mar 12 23:05:16 static1 kernel: [<ffffffffa02063c7>] ?
> xfs_trans_read_buf+0x197/0x410 [xfs]
> Mar 12 23:05:16 static1 kernel: [<ffffffffa0210a76>]
> _xfs_buf_read+0x46/0x60 [xfs]
> Mar 12 23:05:16 static1 kernel: [<ffffffffa0210b3b>]
> xfs_buf_read+0xab/0x100 [xfs]
> Mar 12 23:05:16 static1 kernel: [<ffffffffa02063c7>]
> xfs_trans_read_buf+0x197/0x410 [xfs]
> Mar 12 23:05:16 static1 kernel: [<ffffffffa01ee6a4>]
> xfs_imap_to_bp+0x54/0x130 [xfs]
> Mar 12 23:05:16 static1 kernel: [<ffffffffa01f077b>] xfs_iread+0x7b/0x1b0
> [xfs]
> Mar 12 23:05:16 static1 kernel: [<ffffffff811ab77e>] ?
> inode_init_always+0x11e/0x1c0
> Mar 12 23:05:16 static1 kernel: [<ffffffffa01eb5ee>] xfs_iget+0x27e/0x6e0
> [xfs]
> Mar 12 23:05:16 static1 kernel: [<ffffffffa01eae1d>] ?
> xfs_iunlock+0x5d/0xd0 [xfs]
> Mar 12 23:05:16 static1 kernel: [<ffffffffa0209366>] xfs_lookup+0xc6/0x110
> [xfs]
> Mar 12 23:05:16 static1 kernel: [<ffffffffa0216024>]
> xfs_vn_lookup+0x54/0xa0 [xfs]
> Mar 12 23:05:16 static1 kernel: [<ffffffff8119dc65>] do_lookup+0x1a5/0x230
> Mar 12 23:05:16 static1 kernel: [<ffffffff8119e8f4>]
> __link_path_walk+0x7a4/0x1000
> Mar 12 23:05:16 static1 kernel: [<ffffffff811738e7>] ?
> cache_grow+0x217/0x320
> Mar 12 23:05:16 static1 kernel: [<ffffffff8119f40a>] path_walk+0x6a/0xe0
> Mar 12 23:05:16 static1 kernel: [<ffffffff8119f61b>]
> filename_lookup+0x6b/0xc0
> Mar 12 23:05:16 static1 kernel: [<ffffffff811a0747>] user_path_at+0x57/0xa0
> Mar 12 23:05:16 static1 kernel: [<ffffffffa0204e74>] ?
> _xfs_trans_commit+0x214/0x2a0 [xfs]
> Mar 12 23:05:16 static1 kernel: [<ffffffffa01eae3e>] ?
> xfs_iunlock+0x7e/0xd0 [xfs]
> Mar 12 23:05:16 static1 kernel: [<ffffffff81193bc0>] vfs_fstatat+0x50/0xa0
> Mar 12 23:05:16 static1 kernel: [<ffffffff811aaf5d>] ?
> touch_atime+0x14d/0x1a0
> Mar 12 23:05:16 static1 kernel: [<ffffffff81193d3b>] vfs_stat+0x1b/0x20
> Mar 12 23:05:16 static1 kernel: [<ffffffff81193d64>] sys_newstat+0x24/0x50
> Mar 12 23:05:16 static1 kernel: [<ffffffff810e5c87>] ?
> audit_syscall_entry+0x1d7/0x200
> Mar 12 23:05:16 static1 kernel: [<ffffffff810e5a7e>] ?
> __audit_syscall_exit+0x25e/0x290
> Mar 12 23:05:16 static1 kernel: [<ffffffff8100b072>]
> system_call_fastpath+0x16/0x1b
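A guest-side mitigation can soften these hung-task warnings while the underlying storage stalls are investigated. The sysctl names below are standard Linux; the values are illustrative assumptions, not tested recommendations, and they only hide the symptom:

```shell
# Sketch only: soften guest symptoms of storage stalls; values are guesses.
DRY_RUN=1                      # set to 0 to apply for real
run() { if [ "$DRY_RUN" -eq 1 ]; then echo "$*"; else "$@"; fi; }

# Flush dirty pages earlier, so a stall has less buffered data to drain
run sysctl -w vm.dirty_ratio=10
run sysctl -w vm.dirty_background_ratio=5
# Give slow I/O longer before the "blocked for more than 120 seconds" warning
run sysctl -w kernel.hung_task_timeout_secs=240
```

With DRY_RUN=1 the script only prints the commands, so it can be reviewed before being applied inside a VM.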
I am wondering if my volume settings are causing this. Can anyone with
more knowledge take a look and let me know:
network.remote-dio: on
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> nfs.export-volumes: on
> network.ping-timeout: 20
> cluster.self-heal-readdir-size: 64KB
> cluster.quorum-type: auto
> cluster.data-self-heal-algorithm: diff
> cluster.self-heal-window-size: 8
> cluster.heal-timeout: 500
> cluster.self-heal-daemon: on
> cluster.entry-self-heal: on
> cluster.data-self-heal: on
> cluster.metadata-self-heal: on
> cluster.readdir-optimize: on
> cluster.background-self-heal-count: 20
> cluster.rebalance-stats: on
> cluster.min-free-disk: 5%
> cluster.eager-lock: enable
> storage.owner-uid: 36
> storage.owner-gid: 36
> auth.allow:*
> user.cifs: disable
> cluster.server-quorum-ratio: 51%
Many Thanks, Alastair
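One set of knobs worth examining in the settings above is how aggressive self-heal is allowed to be. A hedged sketch of throttling it follows; the option names exist in glusterfs 3.6, but the volume name "myvol" and the values are placeholders, not verified recommendations:

```shell
# Sketch: throttle self-heal so it competes less with VM I/O.
VOLUME="myvol"                 # placeholder volume name
DRY_RUN=1                      # set to 0 to actually change the volume

set_opt() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "gluster volume set $VOLUME $1 $2"
    else
        gluster volume set "$VOLUME" "$1" "$2"
    fi
}

set_opt cluster.background-self-heal-count 8   # fewer concurrent heals (was 20)
set_opt cluster.self-heal-window-size 2        # smaller bursts per file (was 8)
set_opt network.ping-timeout 30                # 20s can be tight under heal load
```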
2
1
18 Mar '15
Hi,
we still have 5 open blockers for 3.5.2[1]:
Bug ID Whiteboard Status Summary
1161012 infra POST task cleaning utility should erase commands that have running tasks
1187244 network POST [RHEL 7.0 + 7.1] Host configure with DHCP is losing connectivity after some time - dhclient is not running
1177220 storage ASSIGNED [BLOCKED] Failed to Delete First snapshot with live merge
1196327 virt ASSIGNED [performance] bad getVMList output creates unnecessary calls from Engine
1202360 virt POST [performance] bad getVMList output creates unnecessary calls from Engine
And 2 dependencies on libvirt not yet fixed:
Bug ID Status Summary
1199182 POST 2nd active commit after snapshot triggers qemu failure
1199036 POST Libvirtd was restarted when do active blockcommit while there is a blockpull job running
ACTION: Assignees to provide ETAs for the blocker bugs.
Despite the blocker bug count, we're going to build RC2 today, 2015-03-18 at 12:00 UTC, to allow verification of fixed bugs and testing on
CentOS 7.1.
If you're going to test this release candidate on CentOS, please be sure to have the CR[2] repository enabled and the system fully updated to CentOS 7.1.
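Enabling the CR repository can be sketched as follows. This assumes yum-utils is available (it provides yum-config-manager) and that the repo id is "cr", as in the stock CentOS-CR.repo file:

```shell
# Sketch: enable the CentOS Continuous Release repo and fully update to 7.1.
DRY_RUN=1                      # set to 0 to run for real
run() { if [ "$DRY_RUN" -eq 1 ]; then echo "$*"; else "$@"; fi; }

run yum -y install yum-utils        # provides yum-config-manager
run yum-config-manager --enable cr  # repo id "cr" from CentOS-CR.repo
run yum -y update                   # bring the host fully up to CentOS 7.1
```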
We still have 7 bugs in MODIFIED and 31 in ON_QA[3]:
MODIFIED ON_QA Total
infra 2 10 12
integration 0 2 2
network 0 2 2
node 0 1 1
sla 2 1 3
storage 3 11 14
virt 0 4 4
Total 7 31 38
ACTION: Testers: you're welcome to verify bugs currently ON_QA.
All remaining bugs not marked as blockers have been moved to 3.5.3.
A release management entry has been added for tracking the schedule of 3.5.3[4]
A bug tracker [5] has been created for 3.5.3.
We have 32 bugs currently targeted to 3.5.3[6]:
Whiteboard NEW ASSIGNED POST Total
docs 2 0 0 2
external 1 0 0 1
gluster 4 0 1 5
infra 2 2 0 4
node 2 0 1 3
ppc 0 0 1 1
sla 4 0 0 4
storage 8 0 0 8
ux 1 0 1 2
virt 1 0 1 2
Total 25 2 5 32
ACTION: Maintainers / Assignee: to review the bugs targeted to 3.5.3 ensuring they're correctly targeted.
ACTION: Maintainers: to fill release notes for 3.5.2, the page has been created and updated here [7]
ACTION: Testers: please add yourself to the test page [8]
7 patches have been merged for 3.5.3 but not backported to the 3.5.2 branch, according to Change-Id:
commit 6b5a8169093357656d3e638c7018ee516d1f44bd
Author: Maor Lipchuk <mlipchuk(a)redhat.com>
Date: Thu Feb 19 14:40:23 2015 +0200
core: Add validation when Storage Domain is blocked.
Change-Id: I9a7c12609b3780c74396dab6edf26e4deaff490f
commit 7fd4dca0a7fb15d3e9179457f1f2aea6c727d325
Author: Maor Lipchuk <mlipchuk(a)redhat.com>
Date: Sun Mar 1 17:17:16 2015 +0200
restapi: reconfigure values on import data Storage Domain.
Change-Id: I2ef7baa850bd6da08ae27d41ebe9e4ad525fbe9b
commit 4283f755e6b77995247ecb9ddd904139bc8c322c
Author: Maor Lipchuk <mlipchuk(a)redhat.com>
Date: Tue Mar 10 12:05:05 2015 +0200
restapi: Quering FCP unregistered Storage Domains
Change-Id: Iafe2f2afcd0e6e68adbbbb2054c857388acc30a7
commit a3d8b687620817b38a64a3917f4440274831bca3
Author: Maor Lipchuk <mlipchuk(a)redhat.com>
Date: Wed Feb 25 17:00:47 2015 +0200
core: Add fk constraint on vm_interface_statistics
Change-Id: I53cf2737ef91cf967c93990fcb237f6c4e12a8f8
commit c8caaceb6b1678c702961d298b3d6c48183d9390
Author: emesika <emesika(a)redhat.com>
Date: Mon Mar 9 18:01:58 2015 +0200
core: do not use distinct if sort expr have func
Change-Id: I7c036b2b9ee94266b6e3df54f2c50167e454ed6a
commit 4332194e55ad40eee423e8611eceb95fd59dac7e
Author: Vered Volansky <vvolansk(a)redhat.com>
Date: Thu Mar 12 17:38:35 2015 +0200
webadmin: Fix punctuation in threshold warnings
Change-Id: If30f094e52f42b78537e215a2699cf74c248bd83
commit 773f2a108ce18e0029f864c8748d7068b71f8ff3
Author: Maor Lipchuk <mlipchuk(a)redhat.com>
Date: Sat Feb 28 11:37:26 2015 +0200
core: Add managed devices to OVF
Change-Id: Ie0e912c9b2950f1461ae95f4704f18b818b83a3b
ACTION: Authors: please verify these patches are not meant to be targeted to 3.5.2.
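The Change-Id comparison above can be reproduced roughly like this. The branch names are assumptions about the repository layout, and the dry-run ids ("Iaaa", "Ibbb") are stand-ins so the sketch runs without a checkout:

```shell
# Sketch: list Change-Ids merged on the 3.5 branch but missing from 3.5.2.
DRY_RUN=1

change_ids() {   # print the sorted Change-Ids reachable from branch $1
    if [ "$DRY_RUN" -eq 1 ]; then
        # stand-in data, not real Change-Ids
        case "$1" in
            ovirt-engine-3.5)   printf 'Iaaa\nIbbb\n' ;;
            ovirt-engine-3.5.2) printf 'Iaaa\n' ;;
        esac
    else
        git log "$1" --format=%b | sed -n 's/^Change-Id: //p' | sort -u
    fi
}

# Ids on 3.5 but not on 3.5.2 => merged for 3.5.3, candidates to backport
comm -23 <(change_ids ovirt-engine-3.5) <(change_ids ovirt-engine-3.5.2)
```

In the dry run this prints the one stand-in id unique to the 3.5 branch; against a real checkout the same pipeline lists actual backport candidates.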
[1] https://bugzilla.redhat.com/1186161
[2] http://mirror.centos.org/centos/7/cr/x86_64/
[3] http://goo.gl/UEVTCf
[4] http://www.ovirt.org/OVirt_3.5.z_Release_Management#oVirt_3.5.3
[5] https://bugzilla.redhat.com/1198142
[6] https://bugzilla.redhat.com/buglist.cgi?quicksearch=product%3Aovirt%20targe…
[7] http://www.ovirt.org/OVirt_3.5.2_Release_Notes
[8] http://www.ovirt.org/Testing/oVirt_3.5.2_Testing
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
3
2
Hi Darrell,
Sorry for my late reply.
I've been able to test the 2 different scenarios :
- Host not responding => Host fenced => HA VMs restarted on another
Host.
- Host not operational => Host not fenced, resilience policy configured
to "Migrate Virtual Machines" => All VMs migrated to another Host.
Thank you very much for your answer.
Kind regards,
Le 2015-03-17 14:59, Darrell Budic a écrit :
> Resilience policy refers to migration behavior only. If VDSM on a host
> node detects a storage or network problem, for instance, it will
> migrate All, HA, or no VMs to a new node.
>
> Sounds like you’re thinking in terms of “I want oVirt to restart these
> VMs if the host dies”. For that, set HA on the VMs you want it to
> restart if the VM dies for whatever reason.
>
>
>> On Mar 16, 2015, at 3:34 PM, Guillaume Penin
>> <guillaume(a)onlineacid.com> wrote:
>>
>> Hi all,
>>
>> I'm building a test ovirt (3.5.1) infrastructure, based on 3 ovirt
>> nodes and 1 ovirt engine.
>>
>> Everything runs (almost) fine, but i don't exactly understand the
>> interaction between resilience policy (Cluster) and HA (VM).
>>
>> => What I understand, in case of host failure :
>>
>> - Setting resilience policy to :
>>
>> - Migrate Virtual Machines => All VMs (HA and non HA) will be
>> started on another host.
>> - Migrate only Highly Available Virtual Machines => HA VMs only
>> will be started on another host.
>> - Do Not Migrate Virtual Machines => HA and non HA VMs won't be
>> started on another host.
>>
>> => In practice :
>>
>> - No matter what parameter i use in resilience policy, HA VMs only
>> will be started on another host in case of a host failure.
>>
>> Is this the expected behaviour ? Am I misunderstanding the way it
>> works ?
>>
>> Kind regards,
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
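Darrell's distinction above (resilience policy governs migration on host errors; the per-VM HA flag governs restart after a host dies) can be acted on per VM, for instance through the engine REST API. This is a sketch only: the engine URL, credentials, and VM id are placeholders, and the `high_availability` XML element should be double-checked against your engine's /api schema:

```shell
# Sketch: mark one VM highly available via the engine REST API.
# ENGINE, the credentials, and VM_ID are placeholders.
ENGINE="https://engine.example.com/api"
VM_ID="00000000-0000-0000-0000-000000000000"
BODY='<vm><high_availability><enabled>true</enabled></high_availability></vm>'

DRY_RUN=1
if [ "$DRY_RUN" -eq 1 ]; then
    # print the request instead of sending it
    echo "PUT $ENGINE/vms/$VM_ID"
    echo "$BODY"
else
    curl -k -u 'admin@internal:password' -X PUT \
         -H 'Content-Type: application/xml' \
         -d "$BODY" "$ENGINE/vms/$VM_ID"
fi
```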
Hi all,
I'm building a test oVirt (3.5.1) infrastructure, based on 3 oVirt nodes
and 1 oVirt engine.
Everything runs (almost) fine, but I don't exactly understand the
interaction between resilience policy (Cluster) and HA (VM).
=> What I understand, in case of host failure:
- Setting resilience policy to:
- Migrate Virtual Machines => all VMs (HA and non-HA) will be
started on another host.
- Migrate only Highly Available Virtual Machines => only HA VMs will
be started on another host.
- Do Not Migrate Virtual Machines => neither HA nor non-HA VMs will be
started on another host.
=> In practice:
- No matter what parameter I use in resilience policy, only HA VMs
will be started on another host in case of a host failure.
Is this the expected behaviour? Am I misunderstanding the way it works?
Kind regards,
Hi all,
There is no delay when I attach an existing v1 export domain to any
datacenter. However, detaching the same export domain takes a while (more
than 30 min), staying in "Preparing for Maintenance". Is this regular
behaviour? If so, what is done at this step that makes it so long?
Hi,
A VM failed to create and failed to reboot, throwing these errors:
2015-02-23 17:27:11,879 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-64) [1546215a] VM
8325d21f97888adff6bd6b70bfd6c13b (b74d945c-c9f8-4336-a91b-390a11f07650) is
running in db and not running in VDS compute11
And for deleting the VM:
2015-02-23 17:21:44,625 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-57) [4534b0a2] Correlation ID: 6bbbb769, Job ID:
76d135de-ef8d-41dc-9cc0-855134ededce, Call Stack: null, Custom Event ID:
-1, Message: VM 1feac62b8cd19fcc2ff296957adc8a4a has been removed, but the
following disks could not be removed: Disk1. These disks will appear in the
main disks tab in illegal state, please remove manually when possible.
Thanks,
Punit
Re: [ovirt-users] [ovirt-devel] [ACTION REQUIRED] oVirt 3.5.2 and 3.5.3 status (building RC2 today!)
by Oved Ourfali 18 Mar '15
On Mar 18, 2015 9:57 AM, Sandro Bonazzola <sbonazzo(a)redhat.com> wrote:
>
> Hi,
> we still have 5 open blockers for 3.5.2[1]:
>
> Bug ID Whiteboard Status Summary
> 1161012 infra POST task cleaning utility should erase commands that have running tasks
Simone, your latest comment implies that it is working now with the latest patches, and all the patches appear in Bugzilla as merged. Should it move to MODIFIED?
> 1187244 network POST [RHEL 7.0 + 7.1] Host configure with DHCP is losing connectivity after some time - dhclient is not running
> 1177220 storage ASSIGNED [BLOCKED] Failed to Delete First snapshot with live merge
> 1196327 virt ASSIGNED [performance] bad getVMList output creates unnecessary calls from Engine
> 1202360 virt POST [performance] bad getVMList output creates unnecessary calls from Engine
>
> [...]