Hosted Engine Setup Fails - Failed to execute stage 'Misc configuration': [Errno 1] Operation not permitted
by Jeremy Tourville
The full on-screen message is:
Failed to execute stage 'Misc configuration': [Errno 1] Operation not permitted: '/var/run/vdsm/storage/a3964421-dc4a-45d9-ac24-3ce4e5972d1e/50178ece-273e-47fe-8083-be10923c0c74'
Hosted Engine deployment failed: this system is not reliable, please check the issue, fix and redeploy
I have tried deploying the self-hosted engine using both the UI and the CLI (even though the latter is not recommended) to see if there was any difference. There was not; both methods fail with the same error message.
I was, however, able to deploy the engine on a separate host and complete my setup. This proved to me that, as far as I know, there is nothing wrong with my NFS storage setup.
My goal is an all-in-one setup, so the self-hosted method would be my preferred solution. I have attached a log for review:
https://pastebin.com/CfZzAm4P
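For reference, this is the kind of check I have been running on the path (a sketch; the vdsm:kvm 36:36 ownership and the export options are my assumptions of the usual oVirt conventions, not verified facts):
$ ls -ln /var/run/vdsm/storage/a3964421-dc4a-45d9-ac24-3ce4e5972d1e
# expecting uid/gid 36:36 (vdsm:kvm); on the NFS server my export line
# follows the commonly suggested pattern:
# /exports/hosted_engine *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)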
Many thanks in advance for your assistance!
Convert local storage domain to shared
by Demeter Tibor
Dear Users,
We have an old oVirt 3.5 installation with a local and a shared cluster. Meanwhile we have created a new data center, based on 4.1, that uses only shared infrastructure.
I would like to migrate a big VM from the old local data center to the new one, but I don't have enough downtime.
Is it possible to convert the old local storage to shared (by exporting it via NFS) and attach it as a new storage domain to the new cluster?
I just want to import the VM and copy it (while running) with the live storage migration function.
I know the official way to move VMs between oVirt clusters is the export domain, but the VM has very big disks.
What can I do?
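A rough sketch of what I have in mind (the path and export options are illustrative assumptions, not a tested recipe):
# on the old host, export the local storage directory over NFS:
$ echo '/data/images *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)' >> /etc/exports
$ exportfs -ra
# then, in the 4.1 engine: Storage -> Import Domain with the path
# old-host:/data/images, import the VM from it, and live-migrate its
# disks to the target shared domain while it runs.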
Thanks
Tibor
Efficient networks API for external OVN provider
by Roman Bolshakov
Hi,
We've been using the OVN provider for half a year and have noticed a few
performance issues when networks are created programmatically:
1. Network listing on the external provider takes around a second with ~1000
OVN networks on a bare oVirt cluster (engine + 2 hosts) with no VMs.
2. Network import from the external provider takes around 1.6 seconds on the
cluster.
3. The external network provider API doesn't allow querying a specific
network by name.
4. System networks don't have an external ID field. Without it we need to
list all external networks to delete one, which is slow due to #3.
In the end, it takes a few seconds to delete/create an external network
end-to-end.
We're using oVirt 4.1.7. Which of the issues are going to be fixed in 4.2?
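A stopgap we have considered for #3 (a sketch, assuming direct access to the OVN central host; the network name is illustrative) is querying the northbound DB by name directly, bypassing the provider API:
$ ovn-nbctl find Logical_Switch name=my-network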
Thanks,
Roman
Strange drive performance over 10Gb
by Wesley Stewart
I was curious whether anyone else has seen this or has any suggestions.
I have recently begun playing around with my two servers (a FreeNAS box and
an oVirt box) and their 10Gb Ethernet ports.
I can access the FreeNAS SMB share over the 10Gb port without issue and I
have been playing around with its capabilities. After finding out that my
Linux RAID (mdadm) mirror has horrible write performance, I decided to plug
in an NVMe drive that I had lying around and check out its performance.
*For my first test*, I added the NVMe drive as a passthrough device to a
Windows guest and was able to transfer to and from the FreeNAS box without
issue. Speeds were typically ~350-400 MB/s but could drop down to 250 MB/s
or so, and would top out around 525 MB/s. Pretty slick!
*For my second test*, I decided to mount the NVMe drive on the CentOS oVirt
host and make it a local datastore. I migrated my Windows guest to it and
tested what sort of transfer speeds I got, and saw some weird results...
Writing TO the NAS worked about the same. Perhaps a little slower, but it at
least held a steady 250-300 MB/s.
Writing to the Windows guest had a very "fast and then slow, fast and then
slow" type of throughput. I took a few screenshots:
(Writing TO the NAS was fairly consistent)
https://i.imgur.com/jWNNvfp.png
(Writing TO the Windows Guest on NVMe storage)
Sometimes these hit the *low 10-20 MB/s* during the transfer.
https://i.imgur.com/aizG6n0.png
https://i.imgur.com/AjRpR0K.png
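To separate caching and guest effects from the NVMe datastore itself, a direct-I/O benchmark on the CentOS host might help (a sketch; the file name and sizes are arbitrary):
$ fio --name=seqwrite --filename=/path/on/nvme/fio.test --size=4G --bs=1M \
      --rw=write --direct=1 --ioengine=libaio --iodepth=16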
Migration from Proxmox
by Gabriel Stein
Hi again!
Well, I'm trying to migrate all my VMs from Proxmox to oVirt. Proxmox
doesn't have libvirt, and I can dump a VM using vzdump <vm-id> <directory>;
the output is a *.vma file, which I think is Proxmox-specific. I can't even
find the raw disk files; Proxmox creates logical volumes for every VM.
I converted the dump to *.qcow2 using qemu-img convert. The conversion
worked (at least there were no errors), but I can't import the result,
either with a script that I found on the web* or via the export storage
domain (oVirt didn't find it).
I would like to know if there is a way to do that. I read a lot about it
and found that one could build a conversion server and use virt-v2v to
import. But that would require a RHEL subscription, right? I don't have a
subscription and would like to know if it is possible without one.
And sorry for the off-topic question, in case a Red Hatter would like to
answer me privately: if I buy a subscription for one server and have NN
CentOS servers, is it possible to use all the benefits as long as I have a
valid subscription (with support, of course, just for the RHEL server)?
* https://rwmj.wordpress.com/2015/09/18/importing-kvm-guests-to-ovirt-or-rhev/
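Based on that post, the invocation I would expect to try (a sketch; the disk path and the export-domain address are placeholders):
$ yum install -y virt-v2v
$ virt-v2v -i disk /var/tmp/vm-disk.qcow2 -o rhev -os nfs-server:/export/domain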
Thanks in Advance!
All the best
Gabriel
Gabriel Stein
------------------------------
Gabriel Ferraz Stein
Tel.: +49 (0) 170 2881531
Re: [ovirt-users] Method to easily verify version of host
by Sandro Bonazzola
On 1 Dec 2017 at 10:41, "Gianluca Cecchi" <gianluca.cecchi(a)gmail.com>
wrote:
Hello,
currently, in the web admin GUI, one can easily verify the exact version of
the engine.
In my case I see
oVirt Engine Version: 4.1.7.6-1.el7.centos
Supposing you fully update your hosts with yum update at every upgrade, the
ovirt-release41 package gives you the exact release installed on your host.
But the same doesn't seem to be possible for the hypervisors.
E.g., in my case I can see that they are not aligned with my engine level
because of the "Update available" icon beside them in the Hosts tab and the
related event of type
"
Check for available updates on host ov300 was completed successfully with
message 'found updates for packages ... qemu-img-ev-2.9.0-16.el7_4.8.1 ...
vdsm-4.19.37-1.el7.centos ...
"
One can cross-check the vdsm version on each hypervisor, but that is
suboptimal with many hosts, and it is not immediately obvious how far each
host has drifted.
Is there anything for this, or do you think it's worthwhile to open an RFE
for it?
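In the meantime I cross-check via the REST API, which reports each host's component versions (a sketch; the engine URL and credentials are placeholders):
$ curl -s -k -u 'admin@internal:password' \
      'https://engine.example.com/ovirt-engine/api/hosts' \
      | grep -E '<name>|<full_version>'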
Gianluca
Re: [ovirt-users] oVirt Node ng upgrade failed
by Yuval Turgeman
Great, thanks! We already have a patch in POST ready here:
https://gerrit.ovirt.org/#/c/84957/
Thanks,
Yuval
On Dec 1, 2017 15:04, "Kilian Ries" <mail(a)kilian-ries.de> wrote:
The bug has been opened:
https://bugzilla.redhat.com/show_bug.cgi?id=1519784
OK, I'll try to fix my host next week. Thanks for your help ;)
------------------------------
*From:* Yuval Turgeman <yuvalt(a)redhat.com>
*Sent:* Thursday, November 30, 2017 09:22:39
*To:* Kilian Ries
*Cc:* users
*Subject:* Re: [ovirt-users] oVirt Node ng upgrade failed
Looks like it, yes - we try to add setfiles_t to permissive because we
assume SELinux is on, and if it's disabled, semanage fails with the error
you mentioned. Can you open a bug on this?
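As a workaround on the host, something like this sketch should get SELinux back to a state semanage can work with (assuming permissive mode is acceptable; a relabel on the next boot is usually needed after SELinux has been disabled):
$ sed -i 's/^SELINUX=disabled/SELINUX=permissive/' /etc/selinux/config
$ touch /.autorelabel
$ reboot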
If you would like to fix the system, you will need to clean the unused LVs,
remove the relevant boot entries from grub (if they exist) and
/boot/ovirt-node-ng-4.1.7-0.20171108.0+1 (if it exists), then reinstall the
rpm.
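Concretely, that could look like this sketch (the LV names are taken from the `imgbase layout` output further down in this thread; verify with lvs before removing anything, and double-check the grub entries):
$ lvremove onn/ovirt-node-ng-4.1.7-0.20171108.0+1
$ lvremove onn/ovirt-node-ng-4.1.7-0.20171108.0
$ rm -rf /boot/ovirt-node-ng-4.1.7-0.20171108.0+1
$ rpm -e ovirt-node-ng-image-update
$ yum install ovirt-node-ng-image-update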
On Thu, Nov 30, 2017 at 10:16 AM, Kilian Ries <mail(a)kilian-ries.de> wrote:
> Yes, SELinux is disabled via /etc/selinux/config; is that the problem? :/
> ------------------------------
> *From:* Yuval Turgeman <yuvalt(a)redhat.com>
> *Sent:* Thursday, November 30, 2017 09:13:34
> *To:* Kilian Ries
> *Cc:* users
>
> *Subject:* Re: [ovirt-users] oVirt Node ng upgrade failed
>
> Kilian, did you disable SELinux by any chance? (selinux=0 on boot)
>
> On Thu, Nov 30, 2017 at 9:57 AM, Yuval Turgeman <yuvalt(a)redhat.com> wrote:
>
>> Looks like SELinux is broken on your machine for some reason. Can you
>> share /etc/selinux?
>>
>> Thanks,
>> Yuval.
>>
>> On Tue, Nov 28, 2017 at 6:31 PM, Kilian Ries <mail(a)kilian-ries.de> wrote:
>>
>>> @Yuval Turgeman
>>>
>>>
>>> ###
>>>
>>>
>>> [17:27:10][root@vm5:~]$semanage permissive -a setfiles_t
>>>
>>> SELinux: Could not downgrade policy file /etc/selinux/targeted/policy/policy.30,
>>> searching for an older version.
>>>
>>> SELinux: Could not open policy file <= /etc/selinux/targeted/policy/policy.30:
>>> No such file or directory
>>>
>>> /sbin/load_policy: Can't load policy: No such file or directory
>>>
>>> libsemanage.semanage_reload_policy: load_policy returned error code 2.
>>> (No such file or directory).
>>>
>>> SELinux: Could not downgrade policy file /etc/selinux/targeted/policy/policy.30,
>>> searching for an older version.
>>>
>>> SELinux: Could not open policy file <= /etc/selinux/targeted/policy/policy.30:
>>> No such file or directory
>>>
>>> /sbin/load_policy: Can't load policy: No such file or directory
>>>
>>> libsemanage.semanage_reload_policy: load_policy returned error code 2.
>>> (No such file or directory).
>>>
>>> OSError: No such file or directory
>>>
>>>
>>> ###
>>>
>>>
>>> @Ryan Barry
>>>
>>>
>>> Manual yum upgrade finished without any error but imgbased.log still
>>> shows me the following:
>>>
>>>
>>> ###
>>>
>>>
>>> 2017-11-28 17:25:28,372 [DEBUG] (MainThread) Returned:
>>>
>>> 2017-11-28 17:25:28,434 [DEBUG] (MainThread) Creating /home as
>>> {'attach': True, 'size': '1G'}
>>>
>>> 2017-11-28 17:25:28,434 [DEBUG] (MainThread) Calling binary: (['vgs',
>>> '--noheadings', '@imgbased:volume', '-o', 'lv_full_name'],) {'stderr':
>>> <open file '/dev/null', mode 'w' at 0x7fa2d1ad8ed0>}
>>>
>>> 2017-11-28 17:25:28,434 [DEBUG] (MainThread) Calling: (['vgs',
>>> '--noheadings', '@imgbased:volume', '-o', 'lv_full_name'],) {'close_fds':
>>> True, 'stderr': <open file '/dev/null', mode 'w' at 0x7fa2d1ad8ed0>}
>>>
>>> 2017-11-28 17:25:28,533 [DEBUG] (MainThread) Returned: onn/home
>>>
>>> onn/tmp
>>>
>>> onn/var_log
>>>
>>> onn/var_log_audit
>>>
>>> 2017-11-28 17:25:28,533 [DEBUG] (MainThread) Calling binary: (['umount',
>>> '-l', '/etc'],) {}
>>>
>>> 2017-11-28 17:25:28,534 [DEBUG] (MainThread) Calling: (['umount', '-l',
>>> '/etc'],) {'close_fds': True, 'stderr': -2}
>>>
>>> 2017-11-28 17:25:28,539 [DEBUG] (MainThread) Returned:
>>>
>>> 2017-11-28 17:25:28,540 [DEBUG] (MainThread) Calling binary: (['umount',
>>> '-l', u'/tmp/mnt.tuHU8'],) {}
>>>
>>> 2017-11-28 17:25:28,540 [DEBUG] (MainThread) Calling: (['umount', '-l',
>>> u'/tmp/mnt.tuHU8'],) {'close_fds': True, 'stderr': -2}
>>>
>>> 2017-11-28 17:25:28,635 [DEBUG] (MainThread) Returned:
>>>
>>> 2017-11-28 17:25:28,635 [DEBUG] (MainThread) Calling binary: (['rmdir',
>>> u'/tmp/mnt.tuHU8'],) {}
>>>
>>> 2017-11-28 17:25:28,635 [DEBUG] (MainThread) Calling: (['rmdir',
>>> u'/tmp/mnt.tuHU8'],) {'close_fds': True, 'stderr': -2}
>>>
>>> 2017-11-28 17:25:28,640 [DEBUG] (MainThread) Returned:
>>>
>>> 2017-11-28 17:25:28,641 [ERROR] (MainThread) Failed to migrate etc
>>>
>>> Traceback (most recent call last):
>>>
>>> File "/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/imgbased/plugins/osupdater.py",
>>> line 109, in on_new_layer
>>>
>>> check_nist_layout(imgbase, new_lv)
>>>
>>> File "/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/imgbased/plugins/osupdater.py",
>>> line 179, in check_nist_layout
>>>
>>> v.create(t, paths[t]["size"], paths[t]["attach"])
>>>
>>> File "/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/imgbased/volume.py",
>>> line 48, in create
>>>
>>> "Path is already a volume: %s" % where
>>>
>>> AssertionError: Path is already a volume: /home
>>>
>>> 2017-11-28 17:25:28,642 [DEBUG] (MainThread) Calling binary: (['umount',
>>> '-l', u'/tmp/mnt.bEW2k'],) {}
>>>
>>> 2017-11-28 17:25:28,642 [DEBUG] (MainThread) Calling: (['umount', '-l',
>>> u'/tmp/mnt.bEW2k'],) {'close_fds': True, 'stderr': -2}
>>>
>>> 2017-11-28 17:25:29,061 [DEBUG] (MainThread) Returned:
>>>
>>> 2017-11-28 17:25:29,061 [DEBUG] (MainThread) Calling binary: (['rmdir',
>>> u'/tmp/mnt.bEW2k'],) {}
>>>
>>> 2017-11-28 17:25:29,061 [DEBUG] (MainThread) Calling: (['rmdir',
>>> u'/tmp/mnt.bEW2k'],) {'close_fds': True, 'stderr': -2}
>>>
>>> 2017-11-28 17:25:29,067 [DEBUG] (MainThread) Returned:
>>>
>>> 2017-11-28 17:25:29,067 [DEBUG] (MainThread) Calling binary: (['umount',
>>> '-l', u'/tmp/mnt.UB5Yg'],) {}
>>>
>>> 2017-11-28 17:25:29,067 [DEBUG] (MainThread) Calling: (['umount', '-l',
>>> u'/tmp/mnt.UB5Yg'],) {'close_fds': True, 'stderr': -2}
>>>
>>> 2017-11-28 17:25:29,625 [DEBUG] (MainThread) Returned:
>>>
>>> 2017-11-28 17:25:29,625 [DEBUG] (MainThread) Calling binary: (['rmdir',
>>> u'/tmp/mnt.UB5Yg'],) {}
>>>
>>> 2017-11-28 17:25:29,626 [DEBUG] (MainThread) Calling: (['rmdir',
>>> u'/tmp/mnt.UB5Yg'],) {'close_fds': True, 'stderr': -2}
>>>
>>> 2017-11-28 17:25:29,631 [DEBUG] (MainThread) Returned:
>>>
>>> Traceback (most recent call last):
>>>
>>> File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
>>>
>>> "__main__", fname, loader, pkg_name)
>>>
>>> File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
>>>
>>> exec code in run_globals
>>>
>>> File "/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/imgbased/__main__.py",
>>> line 53, in <module>
>>>
>>> CliApplication()
>>>
>>> File "/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/imgbased/__init__.py",
>>> line 82, in CliApplication
>>>
>>> app.hooks.emit("post-arg-parse", args)
>>>
>>> File "/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/imgbased/hooks.py",
>>> line 120, in emit
>>>
>>> cb(self.context, *args)
>>>
>>> File "/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/imgbased/plugins/update.py",
>>> line 56, in post_argparse
>>>
>>> base_lv, _ = LiveimgExtractor(app.imgbase).extract(args.FILENAME)
>>>
>>> File "/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/imgbased/plugins/update.py",
>>> line 118, in extract
>>>
>>> "%s" % size, nvr)
>>>
>>> File "/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/imgbased/plugins/update.py",
>>> line 99, in add_base_with_tree
>>>
>>> new_layer_lv = self.imgbase.add_layer(new_base)
>>>
>>> File "/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/imgbased/imgbase.py",
>>> line 191, in add_layer
>>>
>>> self.hooks.emit("new-layer-added", prev_lv, new_lv)
>>>
>>> File "/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/imgbased/hooks.py",
>>> line 120, in emit
>>>
>>> cb(self.context, *args)
>>>
>>> File "/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/imgbased/plugins/osupdater.py",
>>> line 123, in on_new_layer
>>>
>>> raise ConfigMigrationError()
>>>
>>> imgbased.plugins.osupdater.ConfigMigrationError
>>>
>>>
>>> ###
>>> ------------------------------
>>> *From:* Yuval Turgeman <yuvalt(a)redhat.com>
>>> *Sent:* Sunday, November 26, 2017 17:23:55
>>> *To:* Kilian Ries
>>> *Cc:* Ryan Barry; Lev Veyde; users
>>>
>>> *Subject:* Re: [ovirt-users] oVirt Node ng upgrade failed
>>>
>>> Hi,
>>>
>>> Can you try to run `semanage permissive -a setfiles_t` on your 4.1.1 and
>>> share your output?
>>>
>>> Thanks,
>>> Yuval
>>>
>>> On Fri, Nov 24, 2017 at 11:01 AM, Kilian Ries <mail(a)kilian-ries.de>
>>> wrote:
>>>
>>>> This is the imgbased.log:
>>>>
>>>>
>>>> https://www.dropbox.com/s/v9dmgz14cpzfcsn/imgbased.log.tar.gz?dl=0
>>>>
>>>> Ok, i'll try your steps and come back later ...
>>>>
>>>>
>>>> ------------------------------
>>>> *From:* Ryan Barry <rbarry(a)redhat.com>
>>>> *Sent:* Thursday, November 23, 2017 23:33:34
>>>> *To:* Kilian Ries; Lev Veyde; users
>>>> *Subject:* Re: [ovirt-users] oVirt Node ng upgrade failed
>>>>
>>>> Can you grab imgbased.log?
>>>>
>>>> To retry, "rpm -e ovirt-node-ng-image-update" and remove the new LVs.
>>>> "yum install ovirt-node-ng-image-update" from the CLI instead of engine so
>>>> we can get full logs would be useful
>>>>
>>>> On Thu, Nov 23, 2017 at 16:01 Lev Veyde <lveyde(a)redhat.com> wrote:
>>>>
>>>>>
>>>>> ---------- Forwarded message ----------
>>>>> From: Kilian Ries <mail(a)kilian-ries.de>
>>>>> Date: Thu, Nov 23, 2017 at 5:16 PM
>>>>> Subject: [ovirt-users] oVirt Node ng upgrade failed
>>>>> To: "Users(a)ovirt.org" <Users(a)ovirt.org>
>>>>>
>>>>>
>>>>> Hi,
>>>>>
>>>>>
>>>>> just tried to upgrade from
>>>>>
>>>>>
>>>>> ovirt-node-ng-4.1.1.1-0.20170504.0+1
>>>>>
>>>>>
>>>>> to
>>>>>
>>>>>
>>>>> ovirt-node-ng-4.1.7-0.20171108.0+1
>>>>>
>>>>>
>>>>> but it failed:
>>>>>
>>>>>
>>>>> ###
>>>>>
>>>>>
>>>>> 2017-11-23 10:19:21 INFO otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.info:80 Yum Verify: 1/4: ovirt-node-ng-image-update.noarch
>>>>> 0:4.1.7-1.el7.centos - u
>>>>>
>>>>> 2017-11-23 10:19:21 INFO otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.info:80 Yum Verify: 2/4: ovirt-node-ng-image-update-placeholder.noarch
>>>>> 0:4.1.1.1-1.el7.centos - od
>>>>>
>>>>> 2017-11-23 10:19:21 INFO otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.info:80 Yum Verify: 3/4: ovirt-node-ng-image.noarch
>>>>> 0:4.1.1.1-1.el7.centos - od
>>>>>
>>>>> 2017-11-23 10:19:21 INFO otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.info:80 Yum Verify: 4/4: ovirt-node-ng-image-update.noarch
>>>>> 0:4.1.1.1-1.el7.centos - ud
>>>>>
>>>>> 2017-11-23 10:19:21 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Transaction processed
>>>>>
>>>>> 2017-11-23 10:19:21 DEBUG otopi.context context._executeMethod:142
>>>>> method exception
>>>>>
>>>>> Traceback (most recent call last):
>>>>>
>>>>> File "/tmp/ovirt-3JI9q14aGS/pythonlib/otopi/context.py", line 132,
>>>>> in _executeMethod
>>>>>
>>>>> method['method']()
>>>>>
>>>>> File "/tmp/ovirt-3JI9q14aGS/otopi-plugins/otopi/packagers/yumpackager.py",
>>>>> line 261, in _packages
>>>>>
>>>>> self._miniyum.processTransaction()
>>>>>
>>>>> File "/tmp/ovirt-3JI9q14aGS/pythonlib/otopi/miniyum.py", line 1049,
>>>>> in processTransaction
>>>>>
>>>>> _('One or more elements within Yum transaction failed')
>>>>>
>>>>> RuntimeError: One or more elements within Yum transaction failed
>>>>>
>>>>> 2017-11-23 10:19:21 ERROR otopi.context context._executeMethod:151
>>>>> Failed to execute stage 'Package installation': One or more elements within
>>>>> Yum transaction failed
>>>>>
>>>>> 2017-11-23 10:19:21 DEBUG otopi.transaction transaction.abort:119
>>>>> aborting 'Yum Transaction'
>>>>>
>>>>> 2017-11-23 10:19:21 INFO otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.info:80 Yum Performing yum transaction rollback
>>>>>
>>>>> 2017-11-23 10:19:21 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: centos-opstools-release/7/x86_64/filelists_db
>>>>> (0%)
>>>>>
>>>>> 2017-11-23 10:19:21 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: centos-opstools-release/7/x86_64/filelists_db
>>>>> 374 k(100%)
>>>>>
>>>>> 2017-11-23 10:19:22 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: centos-opstools-release/7/x86_64/other_db
>>>>> (0%)
>>>>>
>>>>> 2017-11-23 10:19:22 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: centos-opstools-release/7/x86_64/other_db
>>>>> 53 k(100%)
>>>>>
>>>>> 2017-11-23 10:19:22 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1/7/filelists_db (0%)
>>>>>
>>>>> 2017-11-23 10:19:22 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1/7/filelists_db 55 k(4%)
>>>>>
>>>>> 2017-11-23 10:19:23 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1/7/filelists_db 201 k(17%)
>>>>>
>>>>> 2017-11-23 10:19:23 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1/7/filelists_db 648 k(56%)
>>>>>
>>>>> 2017-11-23 10:19:23 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1/7/filelists_db 1.1 M(99%)
>>>>>
>>>>> 2017-11-23 10:19:23 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1/7/filelists_db 1.1 M(100%)
>>>>>
>>>>> 2017-11-23 10:19:25 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1/7/other_db (0%)
>>>>>
>>>>> 2017-11-23 10:19:25 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1/7/other_db 45 k(14%)
>>>>>
>>>>> 2017-11-23 10:19:26 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1/7/other_db 207 k(66%)
>>>>>
>>>>> 2017-11-23 10:19:26 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1/7/other_db 311 k(100%)
>>>>>
>>>>> 2017-11-23 10:19:26 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1-centos-gluster38/x86_64/filelists_db
>>>>> (0%)
>>>>>
>>>>> 2017-11-23 10:19:26 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1-centos-gluster38/x86_64/filelists_db
>>>>> 18 k(100%)
>>>>>
>>>>> 2017-11-23 10:19:26 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1-centos-gluster38/x86_64/other_db
>>>>> (0%)
>>>>>
>>>>> 2017-11-23 10:19:26 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1-centos-gluster38/x86_64/other_db
>>>>> 7.6 k(100%)
>>>>>
>>>>> 2017-11-23 10:19:26 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1-epel/x86_64/filelists_db
>>>>> (0%)
>>>>>
>>>>> 2017-11-23 10:19:27 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1-epel/x86_64/filelists_db
>>>>> 7.5 M(76%)
>>>>>
>>>>> 2017-11-23 10:19:27 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1-epel/x86_64/filelists_db
>>>>> 9.9 M(100%)
>>>>>
>>>>> 2017-11-23 10:19:29 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1-epel/x86_64/other_db (0%)
>>>>>
>>>>> 2017-11-23 10:19:29 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1-epel/x86_64/other_db 2.9
>>>>> M(100%)
>>>>>
>>>>> 2017-11-23 10:19:30 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1-patternfly1-noarch-epel/x86_64/filelists_db
>>>>> (0%)
>>>>>
>>>>> 2017-11-23 10:19:30 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1-patternfly1-noarch-epel/x86_64/filelists_db
>>>>> 6.5 k(100%)
>>>>>
>>>>> 2017-11-23 10:19:31 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1-patternfly1-noarch-epel/x86_64/other_db
>>>>> (0%)
>>>>>
>>>>> 2017-11-23 10:19:31 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1-patternfly1-noarch-epel/x86_64/other_db
>>>>> 851 (100%)
>>>>>
>>>>> 2017-11-23 10:19:31 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: ovirt-centos-ovirt41/7/x86_64/filelists_db
>>>>> (0%)
>>>>>
>>>>> 2017-11-23 10:19:31 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: ovirt-centos-ovirt41/7/x86_64/filelists_db
>>>>> 312 k(100%)
>>>>>
>>>>> 2017-11-23 10:19:31 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: ovirt-centos-ovirt41/7/x86_64/other_db
>>>>> (0%)
>>>>>
>>>>> 2017-11-23 10:19:31 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: ovirt-centos-ovirt41/7/x86_64/other_db
>>>>> 84 k(100%)
>>>>>
>>>>> 2017-11-23 10:19:32 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: rnachimu-gdeploy/x86_64/filelists_db
>>>>> (0%)
>>>>>
>>>>> 2017-11-23 10:19:32 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: rnachimu-gdeploy/x86_64/filelists_db
>>>>> 4.5 k(100%)
>>>>>
>>>>> 2017-11-23 10:19:32 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: rnachimu-gdeploy/x86_64/other_db
>>>>> (0%)
>>>>>
>>>>> 2017-11-23 10:19:32 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: rnachimu-gdeploy/x86_64/other_db
>>>>> 1.4 k(100%)
>>>>>
>>>>> 2017-11-23 10:19:32 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: virtio-win-stable/filelists_db (0%)
>>>>>
>>>>> 2017-11-23 10:19:32 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: virtio-win-stable/filelists_db 3.9
>>>>> k(100%)
>>>>>
>>>>> 2017-11-23 10:19:33 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: virtio-win-stable/other_db (0%)
>>>>>
>>>>> 2017-11-23 10:19:33 DEBUG otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.verbose:76 Yum Downloading: virtio-win-stable/other_db 4.3
>>>>> k(100%)
>>>>>
>>>>> 2017-11-23 10:19:33 ERROR otopi.plugins.otopi.packagers.yumpackager
>>>>> yumpackager.error:85 Yum Transaction close failed: Traceback (most recent
>>>>> call last):
>>>>>
>>>>> File "/tmp/ovirt-3JI9q14aGS/pythonlib/otopi/miniyum.py", line 761,
>>>>> in endTransaction
>>>>>
>>>>> if self._yb.history_undo(transactionCurrent):
>>>>>
>>>>> File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 6086,
>>>>> in history_undo
>>>>>
>>>>> if self.install(pkgtup=pkg.pkgtup):
>>>>>
>>>>> File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 4910,
>>>>> in install
>>>>>
>>>>> raise Errors.InstallError, _('No package(s) available to install')
>>>>>
>>>>> InstallError: No package(s) available to install.
>>>>>
>>>>>
>>>>> ###
>>>>>
>>>>>
>>>>>
>>>>> Some more information on my system:
>>>>>
>>>>>
>>>>> ###
>>>>>
>>>>>
>>>>> $ mount
>>>>>
>>>>> ...
>>>>>
>>>>> /dev/mapper/onn-ovirt--node--ng--4.1.1.1--0.20170504.0+1 on / type
>>>>> ext4 (rw,relatime,discard,stripe=128,data=ordered)
>>>>>
>>>>>
>>>>>
>>>>> $ imgbase layout
>>>>>
>>>>> ovirt-node-ng-4.1.1.1-0.20170406.0
>>>>>
>>>>> ovirt-node-ng-4.1.1.1-0.20170504.0
>>>>>
>>>>> +- ovirt-node-ng-4.1.1.1-0.20170504.0+1
>>>>>
>>>>> ovirt-node-ng-4.1.7-0.20171108.0
>>>>>
>>>>> +- ovirt-node-ng-4.1.7-0.20171108.0+1
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> $ rpm -q ovirt-node-ng-image
>>>>>
>>>>> Package ovirt-node-ng-image is not installed
>>>>>
>>>>>
>>>>>
>>>>> $ nodectl check
>>>>>
>>>>> Status: OK
>>>>>
>>>>> Bootloader ... OK
>>>>>
>>>>> Layer boot entries ... OK
>>>>>
>>>>> Valid boot entries ... OK
>>>>>
>>>>> Mount points ... OK
>>>>>
>>>>> Separate /var ... OK
>>>>>
>>>>> Discard is used ... OK
>>>>>
>>>>> Basic storage ... OK
>>>>>
>>>>> Initialized VG ... OK
>>>>>
>>>>> Initialized Thin Pool ... OK
>>>>>
>>>>> Initialized LVs ... OK
>>>>>
>>>>> Thin storage ... OK
>>>>>
>>>>> Checking available space in thinpool ... OK
>>>>>
>>>>> Checking thinpool auto-extend ... OK
>>>>>
>>>>> vdsmd ... OK
>>>>>
>>>>>
>>>>> ###
>>>>>
>>>>>
>>>>> I can restart my Node and VMs are running, but oVirt Engine tells me
>>>>> no update is available. It seems 4.1.7 is installed, but Node still boots
>>>>> the old 4.1.1 image.
>>>>>
>>>>>
>>>>> Can I force the upgrade to run again, or is there another way to fix this?
>>>>>
>>>>>
>>>>> Thanks
>>>>>
>>>>> Greets
>>>>>
>>>>> Kilian
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>> Lev Veyde
>>>>>
>>>>> Software Engineer, RHCE | RHCVA | MCITP
>>>>>
>>>>> Red Hat Israel
>>>>>
>>>>> <https://www.redhat.com>
>>>>>
>>>>> lev(a)redhat.com | lveyde(a)redhat.com
>>>>> <https://red.ht/sig>
>>>>> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
>>>>>
>>>> --
>>>>
>>>> RYAN BARRY
>>>>
>>>> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHEV HYPERVISOR
>>>>
>>>> Red Hat NA <https://www.redhat.com/>
>>>>
>>>> rbarry(a)redhat.com M: +1-651-815-9306 IM: rbarry
>>>> <https://red.ht/sig>
>>>>