<div dir="ltr">Hmmm, makes sense, thanks for the info! I'm not enthusiastic about installing packages outside of the ovirt repos so will probably look into an upgrade regardless. I noticed that ovirt 4 only lists support for RHEL/CentOS 7.2, will a situation such as this crop up again eventually as incremental updates for the OS continue to push it past the supported version? I've been running oVirt for less than a year now so I'm curious what to expect.</div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Jan 19, 2017 at 10:42 AM, Michael Watters <span dir="ltr"><<a href="mailto:Michael.Watters@dart.biz" target="_blank">Michael.Watters@dart.biz</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">You can upgrade vdsm without upgrading to ovirt 4. I went through the<br>
same issue on our cluster a few weeks ago and the process was pretty
simple.

You'll need to do this on each of your hosts:

# Install build dependencies and fetch the vdsm source
yum --enablerepo=extras install -y epel-release git
git clone https://github.com/oVirt/vdsm.git
cd vdsm
git checkout v4.17.35

# Pull in the build requirements, then build the RPMs
yum install -y `cat ./automation/build-artifacts.packages`
./automation/build-artifacts.sh

# Install the freshly built packages and restart vdsm
cd /root/rpmbuild/RPMS/noarch
yum --enablerepo=extras install centos-release-qemu-ev
yum localinstall vdsm-4.17.35-1.el7.centos.noarch.rpm vdsm-hook-vmfex-dev-4.17.35-1.el7.centos.noarch.rpm vdsm-infra-4.17.35-1.el7.centos.noarch.rpm vdsm-jsonrpc-4.17.35-1.el7.centos.noarch.rpm vdsm-python-4.17.35-1.el7.centos.noarch.rpm vdsm-xmlrpc-4.17.35-1.el7.centos.noarch.rpm vdsm-yajsonrpc-4.17.35-1.el7.centos.noarch.rpm vdsm-cli-4.17.35-1.el7.centos.noarch.rpm
systemctl restart vdsmd

The qemu-ev repo is needed to avoid dependency errors.
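
To sanity-check the upgrade afterwards (a quick verification, not part of
the build steps above):

rpm -q vdsm                # should now report vdsm-4.17.35-1.el7.centos
systemctl status vdsmd     # confirm the daemon came back up cleanly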
<span class="im HOEnZb"><br>
<br>
On Thu, 2017-01-19 at 09:16 -0700, Beau Sapach wrote:<br>
> Uh oh, looks like an upgrade to version 4 is the only option then...
> unless I'm missing something.
>
> On Thu, Jan 19, 2017 at 1:36 AM, Pavel Gashev <Pax@acronis.com>
> wrote:
> > Beau,
> >
> > Looks like you have upgraded to CentOS 7.3. Now you have to update
> > the vdsm package to 4.17.35.
> >
> >
> > From: <users-bounces@ovirt.org> on behalf of Beau Sapach
> > <bsapach@ualberta.ca>
> > Date: Wednesday 18 January 2017 at 23:56
> > To: "users@ovirt.org" <users@ovirt.org>
> > Subject: [ovirt-users] Select As SPM Fails
> >
> > Hello everyone,
> >
> > I'm about to start digging through the mailing list archives in
> > search of a solution, but thought I would post to the list as well.
> > I'm running oVirt 3.6 on a 2-node CentOS 7 cluster backed by Fibre
> > Channel storage, with a separate engine VM running outside of
> > the cluster (NOT hosted-engine).
> >
> > When I try to move the SPM role from one node to the other I get
> > the following in the web interface:
> >
> >
</span><div class="HOEnZb"><div class="h5">> > When I look into /var/log/ovirt-engine/engine.<wbr>log I see the<br>
> > following:<br>
> > <br>
> > 2017-01-18 13:35:09,332 ERROR<br>
> > [org.ovirt.engine.core.<wbr>vdsbroker.vdsbroker.<wbr>HSMGetAllTasksStatusesVD<br>
> > SCommand] (default task-26) [6990cfca] Failed in<br>
> > 'HSMGetAllTasksStatusesVDS' method<br>
> > 2017-01-18 13:35:09,340 ERROR<br>
> > [org.ovirt.engine.core.dal.<wbr>dbbroker.auditloghandling.<wbr>AuditLogDirect<br>
> > or] (default task-26) [6990cfca] Correlation ID: null, Call Stack:<br>
> > null, Custom Event ID: -1, Message: VDSM v6 command failed: Logical<br>
> > Volume extend failed<br>
> >
> > When I look at the task list on the host currently holding the SPM
> > role (in this case 'v6'), using: vdsClient -s 0 getAllTasks, I see
> > a long list like this:
> >
> > dc75d3e7-cea7-449b-9a04-76fd8ef0f82b :
> >     verb = downloadImageFromStream
> >     code = 554
> >     state = recovered
> >     tag = spm
> >     result =
> >     message = Logical Volume extend failed
> >     id = dc75d3e7-cea7-449b-9a04-76fd8ef0f82b
> >
> > When I look at /var/log/vdsm/vdsm.log on the host in question (v6)
> > I see messages like this:
> >
> > '531dd533-22b1-47a0-aae8-76c1dd7d9a56': {'code': 554, 'tag':
> > u'spm', 'state': 'recovered', 'verb': 'downloadImageFromStream',
> > 'result': '', 'message': 'Logical Volume extend failed', 'id':
> > '531dd533-22b1-47a0-aae8-76c1dd7d9a56'}
> >
> > As well as the error from the attempted extend of the logical
> > volume:
> >
> > e980df5f-d068-4c84-8aa7-9ce792690562::ERROR::2017-01-18
> > 13:24:50,710::task::866::Storage.TaskManager.Task::(_setError)
> > Task=`e980df5f-d068-4c84-8aa7-9ce792690562`::Unexpected error
> > Traceback (most recent call last):
> >   File "/usr/share/vdsm/storage/task.py", line 873, in _run
> >     return fn(*args, **kargs)
> >   File "/usr/share/vdsm/storage/task.py", line 332, in run
> >     return self.cmd(*self.argslist, **self.argsdict)
> >   File "/usr/share/vdsm/storage/securable.py", line 77, in wrapper
> >     return method(self, *args, **kwargs)
> >   File "/usr/share/vdsm/storage/sp.py", line 1776, in downloadImageFromStream
> >     .copyToImage(methodArgs, sdUUID, imgUUID, volUUID)
> >   File "/usr/share/vdsm/storage/image.py", line 1373, in copyToImage
> >     / volume.BLOCK_SIZE)
> >   File "/usr/share/vdsm/storage/blockVolume.py", line 310, in extend
> >     lvm.extendLV(self.sdUUID, self.volUUID, sizemb)
> >   File "/usr/share/vdsm/storage/lvm.py", line 1179, in extendLV
> >     _resizeLV("lvextend", vgName, lvName, size)
> >   File "/usr/share/vdsm/storage/lvm.py", line 1175, in _resizeLV
> >     raise se.LogicalVolumeExtendError(vgName, lvName, "%sM" % (size, ))
> > LogicalVolumeExtendError: Logical Volume extend failed:
> > 'vgname=ae05947f-875c-4507-ad51-62b0d35ef567
> > lvname=caaef597-eddd-4c24-8df2-a61f35f744f8 newsize=1M'
> > e980df5f-d068-4c84-8aa7-9ce792690562::DEBUG::2017-01-18
> > 13:24:50,711::task::885::Storage.TaskManager.Task::(_run)
> > Task=`e980df5f-d068-4c84-8aa7-9ce792690562`::Task._run:
> > e980df5f-d068-4c84-8aa7-9ce792690562 () {} failed - stopping task
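> >
> > (Side note: the newsize=1M in that error looks suspicious - if the
> > OVF_STORE LV is already larger than 1M, lvextend would refuse to
> > "extend" it. Its current size can be checked using the VG name from
> > the traceback, e.g.:
> >
> > lvs --units m -o lv_name,lv_size ae05947f-875c-4507-ad51-62b0d35ef567)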
> >
> > The logical volume in question is an OVF_STORE disk that lives on
> > one of the Fibre Channel-backed LUNs. If I run:
> >
> > vdsClient -s 0 ClearTask TASK-UUID-HERE
> >
> > for each task that appears in the output of:
> >
> > vdsClient -s 0 getAllTasks
> >
> > then the tasks disappear and I'm able to move the SPM role to the
> > other host.
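> >
> > (To clear them all in one pass - a rough sketch, assuming every task
> > header line in the getAllTasks output ends in " :" as shown above:
> >
> > for t in $(vdsClient -s 0 getAllTasks | awk '/ :$/ {print $1}'); do
> >     vdsClient -s 0 ClearTask "$t"
> > done)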
> >
> > This problem then crops up again on the new host once the SPM role
> > is moved. What's going on here? Does anyone have any insight into
> > how to prevent these tasks from re-appearing, or why they're failing
> > in the first place?
> >
> > Beau
> >
>
> --
> Beau Sapach
> System Administrator | Information Technology Services | University
> of Alberta Libraries
> Phone: 780.492.4181 | Email: Beau.Sapach@ualberta.ca

--
Beau Sapach
System Administrator | Information Technology Services | University of Alberta Libraries
Phone: 780.492.4181 | Email: Beau.Sapach@ualberta.ca