Dear all,
We are experiencing this problem over and over again with different VMs; the situation is as follows:
* We are backing up all VMs by iterating through them (see the attached Python file and the condensed sketch after this list), which basically follows the recommendations.
* This process runs well for a while, but at some point we get a problem with a random VM (it is always a different machine) while the backup process tries to remove all snapshots; the errors below are what ends up in the log files.
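For reference, the per-VM flow in the attached script boils down to the following condensed sketch (oVirt 3.x Python SDK; threading and most error handling are omitted, and the engine credentials are placeholders). The snapshot removal in the last step is where the errors below show up:

import datetime
import time

from ovirtsdk.api import API
from ovirtsdk.xml import params

# Placeholders -- fill these in for a real engine.
ENGINE_SERVER = ENGINE_USER = ENGINE_PASSWORD = ENGINE_CERT = ''
SNAPSHOT_NAME = 'BACKUP_' + datetime.datetime.now().strftime("%Y-%m-%d-%H%M")

api = API(url=ENGINE_SERVER, username=ENGINE_USER,
          password=ENGINE_PASSWORD, ca_file=ENGINE_CERT)

def backup_vm(vm):
    clone_name = vm.name + "_" + SNAPSHOT_NAME

    # Locate the export domain of the VM's data center.
    cluster = api.clusters.get(id=vm.cluster.id)
    dc = api.datacenters.get(id=cluster.data_center.id)
    export = [sd for sd in dc.storagedomains.list() if sd.type_ == "export"][0]

    # 1. Create a live snapshot and wait until it leaves the "locked" state.
    vm.snapshots.add(params.Snapshot(description=SNAPSHOT_NAME, vm=vm))
    snap = vm.snapshots.list(description=SNAPSHOT_NAME)[0]
    while vm.snapshots.get(id=snap.id).snapshot_status == "locked":
        time.sleep(10)

    # 2. Clone a new VM from that snapshot and wait for the image to unlock.
    snapshots = params.Snapshots(snapshot=[params.Snapshot(id=snap.id)])
    api.vms.add(params.VM(name=clone_name, snapshots=snapshots, cluster=cluster,
                          template=api.templates.get(name="Blank")))
    while api.vms.get(name=clone_name).status.state == "image_locked":
        time.sleep(60)

    # 3. Export the clone to the export domain, then delete the clone again.
    api.vms.get(name=clone_name).export(params.Action(storage_domain=export))
    while api.vms.get(name=clone_name).status.state == "image_locked":
        time.sleep(60)
    api.vms.get(name=clone_name).delete()

    # 4. Remove the backup snapshot from the original VM -- this is the step
    #    during which the errors below appear.
    for snapshot in vm.snapshots.list():
        if snapshot.description != "Active VM":
            snapshot.delete()
            while (api.vms.get(name=vm.name)
                      .snapshots.get(id=snapshot.id)
                      .snapshot_status == "locked"):
                time.sleep(10)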
Vdsm.log
<-- snip -->
Thread-8246::DEBUG::2015-05-27 16:56:00,003::libvirtconnection::143::root::(wrapper) Unknown libvirterror: ecode: 68 edom: 10 level: 2 message: Timed out during operation: cannot acquire state change lock
Thread-8246::ERROR::2015-05-27 16:56:00,016::vm::5761::vm.Vm::(queryBlockJobs) vmId=`84da8d5e-4a9d-4272-861a-a706ebce3160`::Error getting block job info
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 5759, in queryBlockJobs
    liveInfo = self._dom.blockJobInfo(drive.name, 0)
  File "/usr/share/vdsm/virt/vm.py", line 697, in f
    raise toe
TimeoutError: Timed out during operation: cannot acquire state change lock
VM Channels Listener::DEBUG::2015-05-27 16:56:00,561::vmchannels::96::vds::(_handle_timeouts) Timeout on fileno 55.
<-- snip -->
Syslog / journalctl
<-- snip -->
May 27 16:55:15 mc-dc3ham-compute-04-live.mc.mcon.net libvirtd[1751]: Cannot start job (modify, none) for domain fab-cms-app-01-live-fab-mcon-net; current job is (modify, none) owned by (1780, 0)
May 27 16:55:15 mc-dc3ham-compute-04-live.mc.mcon.net libvirtd[1751]: Timed out during operation: cannot acquire state change lock
May 27 16:55:15 mc-dc3ham-compute-04-live.mc.mcon.net vdsm[10478]: vdsm vm.Vm ERROR vmId=`84da8d5e-4a9d-4272-861a-a706ebce3160`::Error getting block job info
  Traceback (most recent call last):
    File "/usr/share/vdsm/virt/vm.py", line 5759, in queryBlockJobs
      liveInfo = self._dom.blockJobInfo(drive.name, 0)
    File "/usr/share/vdsm/virt/vm.py", line 697, in f
      raise toe
  TimeoutError: Timed out during operation: cannot acquire state change lock
<-- snip -->
The result is that the VM is non-operational: the qemu process is still running and oVirt shows the VM with a "?". That by itself would not be so bad if we could recover from the problem, but the only thing we have found so far to resolve it is to put the hypervisor host into maintenance and then, with the VM showing the "?" as the last one left on it, reboot the host; we found no other way that allowed us to reboot this VM.
Even after actually killing the qemu process, there is no way to do anything with this VM.
I think I understand that the problem arises when two threads are making requests against the same VM; however, in the last case the VM was not doing anything else as far as we can see.
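For completeness: the script hands each VM to exactly one worker thread, roughly as sketched below (reusing api and backup_vm from the sketch above; the real script wraps this in a small Thread subclass), so the script itself should not be issuing two concurrent requests against the same VM:

import Queue
import threading
import time

exitFlag = 0
queueLock = threading.Lock()
workQueue = Queue.Queue(0)

def process_data(threadName, q):
    # Each worker pulls one VM at a time off the shared queue and runs the
    # complete snapshot -> clone -> export -> delete-snapshot flow for it.
    while not exitFlag:
        queueLock.acquire()
        if not workQueue.empty():
            vm = q.get()
            queueLock.release()
            backup_vm(vm)
        else:
            queueLock.release()
        time.sleep(1)

threads = []
for name in ["Backup-Thread-1", "Backup-Thread-2", "Backup-Thread-3"]:
    t = threading.Thread(name=name, target=process_data, args=(name, workQueue))
    t.start()
    threads.append(t)

for vm in api.vms.list():        # queue every VM once
    workQueue.put(vm)
while not workQueue.empty():     # wait until the workers have drained the queue
    pass
exitFlag = 1                     # tell the workers to exit
for t in threads:
    t.join()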
I found a bug report on Launchpad that seems to be very similar (a little older though), albeit libvirt-related:
https://bugs.launchpad.net/nova/+bug/1254872
These are the libvirt versions on the server
libvirt-daemon-driver-nwfilter-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-interface-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-storage-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-1.2.8-16.el7_1.3.x86_64
libvirt-lock-sanlock-1.2.8-16.el7_1.3.x86_64
libvirt-client-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-config-nwfilter-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-secret-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-qemu-1.2.8-16.el7_1.3.x86_64
libvirt-python-1.2.8-7.el7_1.1.x86_64
libvirt-daemon-driver-nodedev-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-network-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-kvm-1.2.8-16.el7_1.3.x86_64
VDSM version
vdsm-python-4.16.14-0.el7.noarch
vdsm-jsonrpc-4.16.14-0.el7.noarch
vdsm-yajsonrpc-4.16.14-0.el7.noarch
vdsm-4.16.14-0.el7.x86_64
vdsm-python-zombiereaper-4.16.14-0.el7.noarch
vdsm-cli-4.16.14-0.el7.noarch
vdsm-xmlrpc-4.16.14-0.el7.noarch
Kernel
3.10.0-229.4.2.el7.x86_64
Any idea where to go?
Regards
Soeren
From: Soeren Malchow <soeren.malchow@mcon.net>
Date: Monday 25 May 2015 22:27
To: "users@ovirt.org" <users@ovirt.org>
Subject: [ovirt-users] VM crashes during snapshot/clone/export and show only "?"
Dear all,
In version 3.5.2 on CentOS 7.1 we now have the problem that the backup script seems to trigger a crash of VMs. This is the second time; the first time I could only solve the problem by rebooting the hypervisor host and acknowledging "host has been rebooted".
This problem happens while removing snapshots after the snapshot -> clone -> export procedure.
The actual qemu process is still running after the following log output, but the VM is not responsive anymore; I can kill the process without a problem.
Two questions for this:
How can I avoid this problem?
Is there a way to tell oVirt that the qemu process is gone and that the VM can be started again?
<-- snip -->
May 25 22:03:47 mc-dc3ham-compute-02-live.mc.mcon.net libvirtd[1386]: metadata not found: Requested metadata element is not present
May 25 22:03:47 mc-dc3ham-compute-02-live.mc.mcon.net libvirtd[1386]: metadata not found: Requested metadata element is not present
May 25 22:03:48 mc-dc3ham-compute-02-live.mc.mcon.net libvirtd[1386]: internal error: End of file from monitor
May 25 22:03:48 mc-dc3ham-compute-02-live.mc.mcon.net kernel: IDMZ_MC_PUBLIC: port 3(vnet3) entered disabled state
May 25 22:03:48 mc-dc3ham-compute-02-live.mc.mcon.net kernel: device vnet3 left promiscuous mode
May 25 22:03:48 mc-dc3ham-compute-02-live.mc.mcon.net kernel: IDMZ_MC_PUBLIC: port 3(vnet3) entered disabled state
May 25 22:03:48 mc-dc3ham-compute-02-live.mc.mcon.net kvm[22973]: 15 guests now active
May 25 22:03:48 mc-dc3ham-compute-02-live.mc.mcon.net systemd-machined[14412]: Machine qemu-mc-glpi-app-01-live.mc.mcon.net terminated.
May 25 22:04:11 mc-dc3ham-compute-02-live.mc.mcon.net libvirtd[1386]: Cannot start job (modify, none) for domain mc-glpi-app-01-live.mc.mcon.net; current job is (modify, none) owned by (1534, 0)
May 25 22:04:11 mc-dc3ham-compute-02-live.mc.mcon.net libvirtd[1386]: Timed out during operation: cannot acquire state change lock
May 25 22:04:18 mc-dc3ham-compute-02-live.mc.mcon.net libvirtd[1386]: Cannot start job (modify, none) for domain mc-glpi-app-01-live.mc.mcon.net; current job is (modify, none) owned by (1534, 0)
May 25 22:04:18 mc-dc3ham-compute-02-live.mc.mcon.net libvirtd[1386]: Timed out during operation: cannot acquire state change lock
May 25 22:04:18 mc-dc3ham-compute-02-live.mc.mcon.net libvirtd[1386]: metadata not found: Requested metadata element is not present
May 25 22:04:18 mc-dc3ham-compute-02-live.mc.mcon.net vdsm[3230]: vdsm vm.Vm ERROR vmId=`598bdf61-2f2c-4569-9513-93043890f676`::Error getting block job info
  Traceback (most recent call last):
    File "/usr/share/vdsm/virt/vm.py", line 5759, in queryBlockJobs
      liveInfo = self._dom.blockJobInfo(drive.name, 0)
    File "/usr/share/vdsm/virt/vm.py", line 697, in f
      raise toe
  TimeoutError: Timed out during operation: cannot acquire state change lock
<-- snip -->
<-- snip -->
May 25 22:04:18 mc-dc3ham-compute-02-live.mc.mcon.net vdsm[3230]: vdsm vm.Vm ERROR vmId=`598bdf61-2f2c-4569-9513-93043890f676`::Stats function failed: <AdvancedStatsFunction _samp
  Traceback (most recent call last):
    File "/usr/share/vdsm/virt/sampling.py", line 484, in collect
      statsFunction()
    File "/usr/share/vdsm/virt/sampling.py", line 359, in __call__
      retValue = self._function(*args, **kwargs)
    File "/usr/share/vdsm/virt/vm.py", line 346, in _sampleVmJobs
      return self._vm.queryBlockJobs()
    File "/usr/share/vdsm/virt/vm.py", line 5759, in queryBlockJobs
      liveInfo = self._dom.blockJobInfo(drive.name, 0)
  AttributeError: 'NoneType' object has no attribute 'blockJobInfo'
May 25 22:04:18 mc-dc3ham-compute-02-live.mc.mcon.net libvirtd[1386]: metadata not found: Requested metadata element is not present
May 25 22:04:18 mc-dc3ham-compute-02-live.mc.mcon.net libvirtd[1386]: metadata not found: Requested metadata element is not present
May 25 22:04:18 mc-dc3ham-compute-02-live.mc.mcon.net libvirtd[1386]: metadata not found: Requested metadata element is not present
May 25 22:04:18 mc-dc3ham-compute-02-live.mc.mcon.net vdsm[3230]: vdsm vm.Vm ERROR vmId=`598bdf61-2f2c-4569-9513-93043890f676`::Stats function failed: <AdvancedStatsFunction _samp
  Traceback (most recent call last):
    File "/usr/share/vdsm/virt/sampling.py", line 484, in collect
      statsFunction()
    File "/usr/share/vdsm/virt/sampling.py", line 359, in __call__
      retValue = self._function(*args, **kwargs)
    File "/usr/share/vdsm/virt/vm.py", line 338, in _sampleVcpuPinning
      vCpuInfos = self._vm._dom.vcpus()
  AttributeError: 'NoneType' object has no attribute 'vcpus'
May 25 22:04:18 mc-dc3ham-compute-02-live.mc.mcon.net vdsm[3230]: vdsm vm.Vm ERROR vmId=`598bdf61-2f2c-4569-9513-93043890f676`::Stats function failed: <AdvancedStatsFunction _samp
  Traceback (most recent call last):
    File "/usr/share/vdsm/virt/sampling.py", line 484, in collect
      statsFunction()
    File "/usr/share/vdsm/virt/sampling.py", line 359, in __call__
      retValue = self._function(*args, **kwargs)
    File "/usr/share/vdsm/virt/vm.py", line 349, in _sampleCpuTune
      infos = self._vm._dom.schedulerParameters()
  AttributeError: 'NoneType' object has no attribute 'schedulerParameters'
<-- snip -->
Attachment: backup-queue.py (text/x-python-script, 5313 bytes, base64-encoded below)
IyEvdXNyL2Jpbi9weXRob24KCmltcG9ydCBRdWV1ZQppbXBvcnQgdGhyZWFkaW5nCmltcG9ydCB0
aW1lCmZyb20gb3ZpcnRzZGsuYXBpIGltcG9ydCBBUEkKZnJvbSBvdmlydHNkay54bWwgaW1wb3J0
IHBhcmFtcwppbXBvcnQgc3lzCmltcG9ydCBkYXRldGltZQppbXBvcnQgc210cGxpYgpmcm9tIGVt
YWlsLm1pbWUudGV4dCBpbXBvcnQgTUlNRVRleHQKCgpnbG9iYWwgU05BUFNIT1RfTkFNRQoKVkVS
U0lPTiAgICAgICAgICAgICA9IHBhcmFtcy5WZXJzaW9uKG1ham9yPSczJywgbWlub3I9JzAnKQpF
TkdJTkVfU0VSVkVSICAgICAgID0gJycKRU5HSU5FX1VTRVIgICAgICAgICA9ICcnCkVOR0lORV9Q
QVNTV09SRCAgICAgPSAnJwpFTkdJTkVfQ0VSVCAgICAgICAgID0gJycKTk9XICAgICAgICAgICAg
ICAgICA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpClNOQVBTSE9UX05BTUUgICAgICAgPSAnQkFD
S1VQXycgKyBOT1cuc3RyZnRpbWUoIiVZLSVtLSVkLSVIJU0iKQpEQVlfT0ZfV0VFSyAgICAgICAg
ID0gTk9XLnN0cmZ0aW1lKCIldyIpCkJBQ0tVUCAgICAgICAgICAgICAgPSAiRlVMTCIKCmV4aXRG
bGFnID0gMAoKY2xhc3MgbXlUaHJlYWQgKHRocmVhZGluZy5UaHJlYWQpOgogICAgZGVmIF9faW5p
dF9fKHNlbGYsIHRocmVhZElELCBuYW1lLCBxKToKICAgICAgICB0aHJlYWRpbmcuVGhyZWFkLl9f
aW5pdF9fKHNlbGYpCiAgICAgICAgc2VsZi50aHJlYWRJRCA9IHRocmVhZElECiAgICAgICAgc2Vs
Zi5uYW1lID0gbmFtZQogICAgICAgIHNlbGYucSA9IHEKICAgICAgICBzZWxmLmFwaSA9IGFwaQog
ICAgICAgIGdsb2JhbCBtZXNzYWdlCiAgICBkZWYgcnVuKHNlbGYpOgogICAgICAgIHByaW50ICJT
dGFydGluZyAiICsgc2VsZi5uYW1lCiAgICAgICAgcHJvY2Vzc19kYXRhKHNlbGYubmFtZSwgc2Vs
Zi5xKQogICAgICAgIHByaW50ICJFeGl0aW5nICIgKyBzZWxmLm5hbWUKCmRlZiBwcm9jZXNzX2Rh
dGEodGhyZWFkTmFtZSwgcSk6CiAgICB3aGlsZSBub3QgZXhpdEZsYWc6CiAgICAgICAgcXVldWVM
b2NrLmFjcXVpcmUoKQogICAgICAgIGlmIG5vdCB3b3JrUXVldWUuZW1wdHkoKToKICAgICAgICAg
ICAgZGF0YSA9IHEuZ2V0KCkKICAgICAgICAgICAgcXVldWVMb2NrLnJlbGVhc2UoKQogICAgICAg
ICAgICBwcmludCAiJXMgcHJvY2Vzc2luZyAlcyIgJSAodGhyZWFkTmFtZSwgZGF0YS5uYW1lKQog
ICAgICAgICAgICB2bSA9IGFwaS52bXMuZ2V0KG5hbWU9ZGF0YS5uYW1lKQogICAgICAgICAgICB2
bW5hbWUgPSBkYXRhLm5hbWUgKyJfIgogICAgICAgICAgICBuZXd2bW5hbWUgPSB2bW5hbWUgKyBT
TkFQU0hPVF9OQU1FCiAgICAgICAgICAgIGNsdXN0ZXIgPSBhcGkuY2x1c3RlcnMuZ2V0KGlkPXZt
LmNsdXN0ZXIuaWQpCiAgICAgICAgICAgIGRjID0gYXBpLmRhdGFjZW50ZXJzLmdldChpZD1jbHVz
dGVyLmRhdGFfY2VudGVyLmlkKQogICAgICAgICAgICBleHBvcnQgPSBOb25lCiAgICAgICAgICAg
IGZvciBzZCBpbiBkYy5zdG9yYWdlZG9tYWlucy5saXN0KCk6CiAgICAgICAgICAgICAgICBpZiBz
ZC50eXBlXyA9PSAiZXhwb3J0IjoKICAgICAgICAgICAgICAgICAgICBleHBvcnQgPSBzZAogICAg
ICAgICAgICBpZiBub3QgZXhwb3J0OgogICAgICAgICAgICAgICAgcHJpbnQoIkV4cG9ydCBkb21h
aW4gcmVxdWlyZWQsIGFuZCBub25lIGZvdW5kLCBleGl0dGluZy4uLlxuIikKICAgICAgICAgICAg
ICAgIHN5cy5leGl0KDEpCiAgICAgICAgICAgIHByaW50ICJDbHVzdGVyOiAlcyIgJSBjbHVzdGVy
Lm5hbWUKICAgICAgICAgICAgaWYgKGRhdGEubmFtZSAhPSAnSG9zdGVkRW5naW5lJyBhbmQgY2x1
c3Rlci5uYW1lID09ICdDQy0wMScpOgogICAgICAgICAgICAgICAgdm0uc25hcHNob3RzLmFkZChw
YXJhbXMuU25hcHNob3QoZGVzY3JpcHRpb249U05BUFNIT1RfTkFNRSwgdm09dm0gKSkKICAgICAg
ICAgICAgICAgIHNuYXAgPSB2bS5zbmFwc2hvdHMubGlzdChkZXNjcmlwdGlvbj1TTkFQU0hPVF9O
QU1FKVswXQogICAgICAgICAgICAgICAgd2hpbGUgdm0uc25hcHNob3RzLmdldChpZD1zbmFwLmlk
KS5zbmFwc2hvdF9zdGF0dXMgPT0gImxvY2tlZCI6CiAgICAgICAgICAgICAgICAgICAgcHJpbnQo
IiVzIFdhaXRpbmcgZm9yIHNuYXBzaG90IG9mICVzIHRvIGZpbmlzaCIpICUgKHRocmVhZE5hbWUs
IHZtLm5hbWUpCiAgICAgICAgICAgICAgICAgICAgdGltZS5zbGVlcCgxMCkKICAgICAgICAgICAg
ICAgIHByaW50KCIlcyBTbmFwc2hvdHRpbmcgJXMgaXMgZG9uZSIpICUgKHRocmVhZE5hbWUsdm0u
bmFtZSkKICAgICAgICAgICAgICAgIHRyeToKICAgICAgICAgICAgICAgICAgICBzbmFwc2hvdHMg
PSBwYXJhbXMuU25hcHNob3RzKHNuYXBzaG90PVtwYXJhbXMuU25hcHNob3QoaWQ9c25hcC5pZCld
KQogICAgICAgICAgICAgICAgICAgIGFwaS52bXMuYWRkKHBhcmFtcy5WTShuYW1lPW5ld3ZtbmFt
ZSwgc25hcHNob3RzPXNuYXBzaG90cywgY2x1c3Rlcj1jbHVzdGVyLCB0ZW1wbGF0ZT1hcGkudGVt
cGxhdGVzLmdldChuYW1lPSJCbGFuayIpKSkKICAgICAgICAgICAgICAgICAgICB3aGlsZSBhcGku
dm1zLmdldChuYW1lPW5ld3ZtbmFtZSkuc3RhdHVzLnN0YXRlID09ICJpbWFnZV9sb2NrZWQiOgog
ICAgICAgICAgICAgICAgICAgICAgICBwcmludCgiJXMgV2FpdGluZyBmb3IgY2xvbmUgb2YgJXMg
dG8gZmluaXNoIikgJSAodGhyZWFkTmFtZSwgdm0ubmFtZSkKICAgICAgICAgICAgICAgICAgICAg
ICAgdGltZS5zbGVlcCg2MCkKICAgICAgICAgICAgICAgICAgICBwcmludCgiJXMgQ2xvbmluZyBv
ZiAlcyAgZG9uZSIpICUgKHRocmVhZE5hbWUsIHZtLm5hbWUpCiAgICAgICAgICAgICAgICAgICAg
YXBpLnZtcy5nZXQobmFtZT1uZXd2bW5hbWUpLmV4cG9ydChwYXJhbXMuQWN0aW9uKHN0b3JhZ2Vf
ZG9tYWluPWV4cG9ydCkpCiAgICAgICAgICAgICAgICAgICAgd2hpbGUgYXBpLnZtcy5nZXQobmFt
ZT1uZXd2bW5hbWUpLnN0YXR1cy5zdGF0ZSA9PSAiaW1hZ2VfbG9ja2VkIjoKICAgICAgICAgICAg
ICAgICAgICAgICAgcHJpbnQoIiVzIFdhaXRpbmcgZm9yIGV4cG9ydCBvZiAlcyBmaW5pc2giKSAl
ICh0aHJlYWROYW1lLCB2bS5uYW1lKQogICAgICAgICAgICAgICAgICAgICAgICB0aW1lLnNsZWVw
KDYwKQogICAgICAgICAgICAgICAgICAgIHByaW50KCIlcyBFeHBvcnRpbmcgJXMgZG9uZSIpICUg
KHRocmVhZE5hbWUsIHZtLm5hbWUpCiAgICAgICAgICAgICAgICAgICAgYXBpLnZtcy5nZXQobmFt
ZT1uZXd2bW5hbWUpLmRlbGV0ZSgpCiAgICAgICAgICAgICAgICBleGNlcHQgRXhjZXB0aW9uIGFz
IGU6CiAgICAgICAgICAgICAgICAgICAgcHJpbnQgKCJTb21ldGhpbmcgd2VudCB3cm9uZyB3aXRo
IHRoZSBjb2xpbmcgb3IgZXhwb3J0aW5nXG4lcyIpICUgc3RyKGUpCiAgICAgICAgICAgICAgICBz
bmFwc2hvdGxpc3QgPSB2bS5zbmFwc2hvdHMubGlzdCgpCiAgICAgICAgICAgICAgICBmb3Igc25h
cHNob3QgaW4gc25hcHNob3RsaXN0OgogICAgICAgICAgICAgICAgICAgIGlmIHNuYXBzaG90LmRl
c2NyaXB0aW9uICE9ICJBY3RpdmUgVk0iOgogICAgICAgICAgICAgICAgICAgICAgICBzbmFwc2hv
dC5kZWxldGUoKQogICAgICAgICAgICAgICAgICAgICAgICB0aW1lLnNsZWVwKDMpCiAgICAgICAg
ICAgICAgICAgICAgICAgIHRyeToKICAgICAgICAgICAgICAgICAgICAgICAgICAgIHdoaWxlIGFw
aS52bXMuZ2V0KG5hbWU9dm0ubmFtZSkuc25hcHNob3RzLmdldChpZD1zbmFwc2hvdC5pZCkuc25h
cHNob3Rfc3RhdHVzID09ICJsb2NrZWQiOgogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
IHByaW50KCIlcyBXYWl0aW5nIGZvciBzbmFwc2hvdCAlcyBvbiAlcyBkZWxldGlvbiB0byBmaW5p
c2giKSAlICh0aHJlYWROYW1lLCBzbmFwc2hvdC5uYW1lLCB2bS5uYW1lKQogICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgIHRpbWUuc2xlZXAoMTApCiAgICAgICAgICAgICAgICAgICAgICAg
IGV4Y2VwdCBFeGNlcHRpb24gYXMgZToKICAgICAgICAgICAgICAgICAgICAgICAgICAgIHByaW50
ICgiJXMgU25hcHNob3Qgc3RhdHVzIHJlcXVlc3QgbWlnaHQgaGF2ZSBmYWlsZWQsIHRoaXMgdXN1
YWxseSBtZWFucyB0aGF0IHRoZSBzbnBhc2hvdCB3YXMgZGVsZXRlZCBwcm9wZXJseSIpICUgdGhy
ZWFkTmFtZQogICAgICAgICAgICAgICAgICAgICAgICBwcmludCgiJXMgRGVsZXRpbmcgc25hcHNo
b3QgJXMgb24gJXMgZG9uZSIpICUgKHRocmVhZE5hbWUsIHNuYXBzaG90Lm5hbWUsIHZtLm5hbWUp
CgogICAgICAgIGVsc2U6CiAgICAgICAgICAgIHF1ZXVlTG9jay5yZWxlYXNlKCkKICAgICAgICB0
aW1lLnNsZWVwKDEpCgp0aHJlYWRMaXN0ID0gWyJCYWNrdXAtVGhyZWFkLTEiLCAiQmFja3VwLVRo
cmVhZC0yIiwgIkJhY2t1cC1UaHJlYWQtMyJdCgpkZWYgQ29ubmVjdCgpOgogICAgZ2xvYmFsIGFw
aQogICAgYXBpID0gQVBJKHVybD1FTkdJTkVfU0VSVkVSLCB1c2VybmFtZT1FTkdJTkVfVVNFUiwg
cGFzc3dvcmQ9RU5HSU5FX1BBU1NXT1JELCBjYV9maWxlPUVOR0lORV9DRVJUKQoKZGVmIERpc2Nv
bm5lY3QoZXhpdGNvZGUpOgogICAgYXBpLmRpc2Nvbm5lY3QoKQogICAgc3lzLmV4aXQoZXhpdGNv
ZGUpCgp0cnk6CiAgICBDb25uZWN0KCkKICAgIHZtcyA9IGFwaS52bXMubGlzdCgpCgpleGNlcHQg
RXhjZXB0aW9uIGFzIGU6CiAgICBwcmludCAnRmFpbGVkOlxuJXMnICUgc3RyKGUpCgpuYW1lTGlz
dCA9IHZtcyAKcXVldWVMb2NrID0gdGhyZWFkaW5nLkxvY2soKQp3b3JrUXVldWUgPSBRdWV1ZS5R
dWV1ZSgwKQp0aHJlYWRzID0gW10KdGhyZWFkSUQgPSAxCgojIENyZWF0ZSBuZXcgdGhyZWFkcwpm
b3IgdE5hbWUgaW4gdGhyZWFkTGlzdDoKICAgIHRocmVhZCA9IG15VGhyZWFkKHRocmVhZElELCB0
TmFtZSwgd29ya1F1ZXVlKQogICAgdGhyZWFkLnN0YXJ0KCkKICAgIHRocmVhZHMuYXBwZW5kKHRo
cmVhZCkKICAgIHRocmVhZElEICs9IDEKCiMgRmlsbCB0aGUgcXVldWUKcXVldWVMb2NrLmFjcXVp
cmUoKQpmb3Igd29yZCBpbiBuYW1lTGlzdDoKICAgIHdvcmtRdWV1ZS5wdXQod29yZCkKcXVldWVM
b2NrLnJlbGVhc2UoKQoKIyBXYWl0IGZvciBxdWV1ZSB0byBlbXB0eQp3aGlsZSBub3Qgd29ya1F1
ZXVlLmVtcHR5KCk6CiAgICBwYXNzCgojIE5vdGlmeSB0aHJlYWRzIGl0J3MgdGltZSB0byBleGl0
CmV4aXRGbGFnID0gMQoKIyBXYWl0IGZvciBhbGwgdGhyZWFkcyB0byBjb21wbGV0ZQpmb3IgdCBp
biB0aHJlYWRzOgogICAgdC5qb2luKCkKcHJpbnQgIkV4aXRpbmcgTWFpbiBUaHJlYWQiCmFwaS5k
aXNjb25uZWN0KCkK