
Anything is possible; however, I haven't had any issues since upgrading to vdsm 4.17.35.

On 01/19/2017 02:58 PM, Beau Sapach wrote:
Hmmm, makes sense, thanks for the info! I'm not enthusiastic about installing packages from outside the oVirt repos, so I will probably look into an upgrade regardless. I noticed that oVirt 4 only lists support for RHEL/CentOS 7.2. Will a situation like this crop up again eventually, as incremental OS updates push the hosts past the supported version? I've been running oVirt for less than a year, so I'm curious what to expect.
On Thu, Jan 19, 2017 at 10:42 AM, Michael Watters <Michael.Watters@dart.biz> wrote:
You can upgrade vdsm without upgrading to oVirt 4. I went through the same issue on our cluster a few weeks ago, and the process was pretty simple.
You'll need to do this on each of your hosts.
yum --enablerepo=extras install -y epel-release git
git clone https://github.com/oVirt/vdsm.git
cd vdsm
git checkout v4.17.35
yum install -y `cat ./automation/build-artifacts.packages`
./automation/build-artifacts.sh
cd /root/rpmbuild/RPMS/noarch
yum --enablerepo=extras install centos-release-qemu-ev
yum localinstall vdsm-4.17.35-1.el7.centos.noarch.rpm \
    vdsm-hook-vmfex-dev-4.17.35-1.el7.centos.noarch.rpm \
    vdsm-infra-4.17.35-1.el7.centos.noarch.rpm \
    vdsm-jsonrpc-4.17.35-1.el7.centos.noarch.rpm \
    vdsm-python-4.17.35-1.el7.centos.noarch.rpm \
    vdsm-xmlrpc-4.17.35-1.el7.centos.noarch.rpm \
    vdsm-yajsonrpc-4.17.35-1.el7.centos.noarch.rpm \
    vdsm-cli-4.17.35-1.el7.centos.noarch.rpm
systemctl restart vdsmd
The qemu-ev repo is needed to avoid dependency errors.
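After restarting vdsmd, it may be worth confirming the host is actually running the new version before retrying the SPM move. A minimal sketch (the helper name `version_at_least` and the commented `rpm` query are my own, not from this thread):

```shell
# Returns success when $2 is >= $1, using GNU sort's version ordering.
version_at_least() {
    required="$1"; actual="$2"
    # sort -V puts the smaller version first; if the required version
    # sorts first (or they are equal), the actual version is new enough.
    [ "$(printf '%s\n%s\n' "$required" "$actual" | sort -V | head -n1)" = "$required" ]
}

# On a host (commented out; requires the vdsm RPM to be installed):
# version_at_least 4.17.35 "$(rpm -q --qf '%{VERSION}' vdsm)" && echo "vdsm OK"
```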
On Thu, 2017-01-19 at 09:16 -0700, Beau Sapach wrote:
> Uh oh, looks like an upgrade to version 4 is the only option then....
> unless I'm missing something.
>
> On Thu, Jan 19, 2017 at 1:36 AM, Pavel Gashev <Pax@acronis.com> wrote:
> > Beau,
> >
> > Looks like you have upgraded to CentOS 7.3. Now you have to update
> > the vdsm package to 4.17.35.
> >
> > From: <users-bounces@ovirt.org> on behalf of Beau Sapach <bsapach@ualberta.ca>
> > Date: Wednesday 18 January 2017 at 23:56
> > To: "users@ovirt.org" <users@ovirt.org>
> > Subject: [ovirt-users] Select As SPM Fails
> >
> > Hello everyone,
> >
> > I'm about to start digging through the mailing list archives in
> > search of a solution, but thought I would post to the list as well.
> > I'm running oVirt 3.6 on a 2-node CentOS 7 cluster backed by fiber
> > channel storage, with a separate engine VM running outside of
> > the cluster (NOT hosted-engine).
> >
> > When I try to move the SPM role from one node to the other I get
> > the following in the web interface:
> >
> > [error dialog screenshot not included in the plain-text message]
> >
> > When I look into /var/log/ovirt-engine/engine.log I see the
> > following:
> >
> > 2017-01-18 13:35:09,332 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (default task-26) [6990cfca] Failed in 'HSMGetAllTasksStatusesVDS' method
> > 2017-01-18 13:35:09,340 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-26) [6990cfca] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VDSM v6 command failed: Logical Volume extend failed
> >
> > When I look at the task list on the host currently holding the SPM
> > role (in this case 'v6'), using `vdsClient -s 0 getAllTasks`, I see
> > a long list like this:
> >
> > dc75d3e7-cea7-449b-9a04-76fd8ef0f82b :
> >          verb = downloadImageFromStream
> >          code = 554
> >          state = recovered
> >          tag = spm
> >          result =
> >          message = Logical Volume extend failed
> >          id = dc75d3e7-cea7-449b-9a04-76fd8ef0f82b
> >
> > When I look at /var/log/vdsm/vdsm.log on the host in question (v6)
> > I see messages like this:
> >
> > '531dd533-22b1-47a0-aae8-76c1dd7d9a56': {'code': 554, 'tag': u'spm', 'state': 'recovered', 'verb': 'downloadImageFromStreaam', 'result': '', 'message': 'Logical Volume extend failed', 'id': '531dd533-22b1-47a0-aae8-76c1dd7d9a56'}
> >
> > As well as the error from the attempted extend of the logical
> > volume:
> >
> > e980df5f-d068-4c84-8aa7-9ce792690562::ERROR::2017-01-18 13:24:50,710::task::866::Storage.TaskManager.Task::(_setError) Task=`e980df5f-d068-4c84-8aa7-9ce792690562`::Unexpected error
> > Traceback (most recent call last):
> >   File "/usr/share/vdsm/storage/task.py", line 873, in _run
> >     return fn(*args, **kargs)
> >   File "/usr/share/vdsm/storage/task.py", line 332, in run
> >     return self.cmd(*self.argslist, **self.argsdict)
> >   File "/usr/share/vdsm/storage/securable.py", line 77, in wrapper
> >     return method(self, *args, **kwargs)
> >   File "/usr/share/vdsm/storage/sp.py", line 1776, in downloadImageFromStream
> >     .copyToImage(methodArgs, sdUUID, imgUUID, volUUID)
> >   File "/usr/share/vdsm/storage/image.py", line 1373, in copyToImage
> >     / volume.BLOCK_SIZE)
> >   File "/usr/share/vdsm/storage/blockVolume.py", line 310, in extend
> >     lvm.extendLV(self.sdUUID, self.volUUID, sizemb)
> >   File "/usr/share/vdsm/storage/lvm.py", line 1179, in extendLV
> >     _resizeLV("lvextend", vgName, lvName, size)
> >   File "/usr/share/vdsm/storage/lvm.py", line 1175, in _resizeLV
> >     raise se.LogicalVolumeExtendError(vgName, lvName, "%sM" % (size, ))
> > LogicalVolumeExtendError: Logical Volume extend failed: 'vgname=ae05947f-875c-4507-ad51-62b0d35ef567 lvname=caaef597-eddd-4c24-8df2-a61f35f744f8 newsize=1M'
> > e980df5f-d068-4c84-8aa7-9ce792690562::DEBUG::2017-01-18 13:24:50,711::task::885::Storage.TaskManager.Task::(_run) Task=`e980df5f-d068-4c84-8aa7-9ce792690562`::Task._run: e980df5f-d068-4c84-8aa7-9ce792690562 () {} failed - stopping task
> >
> > The logical volume in question is an OVF_STORE disk that lives on
> > one of the fiber channel backed LUNs. If I run:
> >
> > vdsClient -s 0 ClearTask TASK-UUID-HERE
> >
> > for each task that appears in the output of:
> >
> > vdsClient -s 0 getAllTasks
> >
> > then they disappear and I'm able to move the SPM role to the
> > other host.
> >
> > This problem then crops up again on the new host once the SPM role
> > is moved. What's going on here? Does anyone have any insight as
> > to how to prevent this task from re-appearing? Or why it's failing
> > in the first place?
> >
> > Beau
>
> --
> Beau Sapach
> System Administrator | Information Technology Services | University of Alberta Libraries
> Phone: 780.492.4181 | Email: Beau.Sapach@ualberta.ca
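Clearing each stale task by hand gets tedious; the per-task ClearTask workaround quoted above can be scripted. A sketch (the helper name `extract_task_uuids` is mine, and the UUID pattern assumes each task entry in `getAllTasks` output begins with a line like "dc75d3e7-... :" as in the excerpts; review the task list before clearing anything):

```shell
# Pull task UUIDs out of `vdsClient -s 0 getAllTasks` output.
# Matches lines that start with a 36-character UUID followed by " :".
extract_task_uuids() {
    grep -E '^[0-9a-f-]{36} :' | cut -d' ' -f1
}

# On the SPM host (destructive, so left commented out):
# vdsClient -s 0 getAllTasks | extract_task_uuids |
#     while read -r uuid; do vdsClient -s 0 ClearTask "$uuid"; done
```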
--
Beau Sapach
System Administrator | Information Technology Services | University of Alberta Libraries
Phone: 780.492.4181 | Email: Beau.Sapach@ualberta.ca
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users