The fix in 4.17.35 is backported from oVirt 4.0. You will not hit it again.
Technically, vdsm 4.17.35 has been released as part of RHEV 3.6.9, so it's more or less the recommended version if you run 3.6.
________________________________
From: Beau Sapach <bsapach@ualberta.ca>
Sent: Jan 19, 2017 10:58 PM
To: Michael Watters
Cc: Pavel Gashev; users@ovirt.org
Subject: Re: [ovirt-users] Select As SPM Fails
Hmmm, makes sense, thanks for the info! I'm not enthusiastic about installing packages outside of the oVirt repos, so I will probably look into an upgrade regardless. I noticed that oVirt 4 only lists support for RHEL/CentOS 7.2; will a situation such as this crop up again eventually as incremental updates for the OS continue to push it past the supported version? I've been running oVirt for less than a year now, so I'm curious what to expect.
On Thu, Jan 19, 2017 at 10:42 AM, Michael Watters <Michael.Watters@dart.biz> wrote:
You can upgrade vdsm without upgrading to ovirt 4. I went through the
same issue on our cluster a few weeks ago and the process was pretty
simple.
You'll need to do this on each of your hosts.
yum --enablerepo=extras install -y epel-release git
git clone https://github.com/oVirt/vdsm.git
cd vdsm
git checkout v4.17.35
yum install -y `cat ./automation/build-artifacts.packages`
./automation/build-artifacts.sh
cd /root/rpmbuild/RPMS/noarch
yum --enablerepo=extras install centos-release-qemu-ev
yum localinstall vdsm-4.17.35-1.el7.centos.noarch.rpm \
  vdsm-hook-vmfex-dev-4.17.35-1.el7.centos.noarch.rpm \
  vdsm-infra-4.17.35-1.el7.centos.noarch.rpm \
  vdsm-jsonrpc-4.17.35-1.el7.centos.noarch.rpm \
  vdsm-python-4.17.35-1.el7.centos.noarch.rpm \
  vdsm-xmlrpc-4.17.35-1.el7.centos.noarch.rpm \
  vdsm-yajsonrpc-4.17.35-1.el7.centos.noarch.rpm \
  vdsm-cli-4.17.35-1.el7.centos.noarch.rpm
systemctl restart vdsmd
The qemu-ev repo is needed to avoid dependency errors.
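[Editor's note: a slightly less error-prone variant of the localinstall step above, as a sketch; the glob assumes the default rpmbuild output path used in the build steps.]

```shell
# Install every vdsm RPM that build-artifacts.sh produced, instead of
# typing each filename by hand (path assumed from the steps above).
cd /root/rpmbuild/RPMS/noarch
yum localinstall -y vdsm-*.el7.centos.noarch.rpm
systemctl restart vdsmd
```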
On Thu, 2017-01-19 at 09:16 -0700, Beau Sapach wrote:
Uh oh, looks like an upgrade to version 4 is the only option then....
unless I'm missing something.
On Thu, Jan 19, 2017 at 1:36 AM, Pavel Gashev <Pax@acronis.com> wrote:
> Beau,
> Looks like you have upgraded to CentOS 7.3. Now you have to update
> the vdsm package to 4.17.35.
> From: <users-bounces@ovirt.org> on behalf of Beau Sapach <bsapach@ualberta.ca>
> Date: Wednesday 18 January 2017 at 23:56
> To: "users@ovirt.org" <users@ovirt.org>
> Subject: [ovirt-users] Select As SPM Fails
> Hello everyone,
> I'm about to start digging through the mailing list archives in
> search of a solution but thought I would post to the list as well.
> I'm running oVirt 3.6 on a 2 node CentOS7 cluster backed by fiber
> channel storage and with a separate engine VM running outside of
> the cluster (NOT hosted-engine).
> When I try to move the SPM role from one node to the other I get
> the following in the web interface:
>
> When I look into /var/log/ovirt-engine/engine.log I see the
> following:
> 2017-01-18 13:35:09,332 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand]
> (default task-26) [6990cfca] Failed in 'HSMGetAllTasksStatusesVDS' method
> 2017-01-18 13:35:09,340 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-26) [6990cfca] Correlation ID: null, Call Stack:
> null, Custom Event ID: -1, Message: VDSM v6 command failed: Logical
> Volume extend failed
> When I look at the task list on the host currently holding the SPM
> role (in this case 'v6'), using: vdsClient -s 0 getAllTasks, I see
> a long list like this:
> dc75d3e7-cea7-449b-9a04-76fd8ef0f82b :
> verb = downloadImageFromStream
> code = 554
> state = recovered
> tag = spm
> result =
> message = Logical Volume extend failed
> id = dc75d3e7-cea7-449b-9a04-76fd8ef0f82b
> When I look at /var/log/vdsm/vdsm.log on the host in question (v6)
> I see messages like this:
> '531dd533-22b1-47a0-aae8-76c1dd7d9a56': {'code': 554, 'tag':
> u'spm', 'state': 'recovered', 'verb': 'downloadImageFromStream',
> 'result': '', 'message': 'Logical Volume extend failed', 'id':
> '531dd533-22b1-47a0-aae8-76c1dd7d9a56'}
> As well as the error from the attempted extend of the logical
> volume:
> e980df5f-d068-4c84-8aa7-9ce792690562::ERROR::2017-01-18
> 13:24:50,710::task::866::Storage.TaskManager.Task::(_setError)
> Task=`e980df5f-d068-4c84-8aa7-9ce792690562`::Unexpected error
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/task.py", line 873, in _run
>     return fn(*args, **kargs)
>   File "/usr/share/vdsm/storage/task.py", line 332, in run
>     return self.cmd(*self.argslist, **self.argsdict)
>   File "/usr/share/vdsm/storage/securable.py", line 77, in wrapper
>     return method(self, *args, **kwargs)
>   File "/usr/share/vdsm/storage/sp.py", line 1776, in downloadImageFromStream
>     .copyToImage(methodArgs, sdUUID, imgUUID, volUUID)
>   File "/usr/share/vdsm/storage/image.py", line 1373, in copyToImage
>     / volume.BLOCK_SIZE)
>   File "/usr/share/vdsm/storage/blockVolume.py", line 310, in extend
>     lvm.extendLV(self.sdUUID, self.volUUID, sizemb)
>   File "/usr/share/vdsm/storage/lvm.py", line 1179, in extendLV
>     _resizeLV("lvextend", vgName, lvName, size)
>   File "/usr/share/vdsm/storage/lvm.py", line 1175, in _resizeLV
>     raise se.LogicalVolumeExtendError(vgName, lvName, "%sM" %
>     (size, ))
> LogicalVolumeExtendError:
> Logical Volume extend failed: 'vgname=ae05947f-875c-4507-ad51-
> 62b0d35ef567 lvname=caaef597-eddd-4c24-8df2-a61f35f744f8
> newsize=1M'
> e980df5f-d068-4c84-8aa7-9ce792690562::DEBUG::2017-01-18
> 13:24:50,711::task::885::Storage.TaskManager.Task::(_run)
> Task=`e980df5f-d068-4c84-8aa7-9ce792690562`::Task._run: e980df5f-
> d068-4c84-8aa7-9ce792690562 () {} failed - stopping task
> The logical volume in question is an OVF_STORE disk that lives on
> one of the fiber channel backed LUNs. If I run:
> vdsClient -s 0 ClearTask TASK-UUID-HERE
> for each task that appears in the:
> vdsClient -s 0 getAllTasks
> output then they disappear and I'm able to move the SPM role to the
> other host.
> This problem then crops up again on the new host once the SPM role
> is moved. What's going on here? Does anyone have any insight as
> to how to prevent this task from re-appearing? Or why it's failing
> in the first place?
> Beau
> --
> Beau Sapach
> System Administrator | Information Technology Services | University
> of Alberta Libraries
> Phone: 780.492.4181 | Email: Beau.Sapach@ualberta.ca
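[Editor's note: the per-task ClearTask workflow quoted above can be scripted. A sketch, untested against a live cluster, that pulls the task UUIDs out of the getAllTasks output shown earlier:]

```shell
# Clear every task reported by getAllTasks. In the output quoted
# above, each task begins with a line like
# "dc75d3e7-cea7-449b-9a04-76fd8ef0f82b :", so grab those UUIDs.
vdsClient -s 0 getAllTasks \
  | awk '/^[0-9a-f-]+ :/ {print $1}' \
  | while read -r task; do
      vdsClient -s 0 ClearTask "$task"
    done
```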
--
Beau Sapach
System Administrator | Information Technology Services | University
of Alberta Libraries
Phone: 780.492.4181 | Email: Beau.Sapach@ualberta.ca