Hi Yuval,
as you can see in my last attachment, after the LV metadata restore I was unable to modify
LVs in pool00.
The thin pool complained about queued transactions: transaction_id 23 found where 16 or so was expected.
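For reference, the mismatch can be inspected, and if needed repaired, roughly like this (a sketch, not what I actually ran; lvconvert needs the pool inactive):

    # show the transaction_id stored in the pool's metadata
    lvs -o lv_name,transaction_id onn_ovn-monster/pool00
    # rebuild the thin pool metadata into fresh space
    lvconvert --repair onn_ovn-monster/pool00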
I rebooted and tried repairing from a CentOS 7 USB stick, but I could not access or remove
the LVs because they held a read lock, so taking a write lock was prohibited.
The system booted only into the dracut emergency console, so for reliability I decided
to reinstall with a fresh 4.2.4 node after cleaning the disk. :-)
Now it is running ovirt-node-ng-4.2.4.
-
Noticeable on this issue:
- ng-node should not be installed on previously used CentOS disks without cleaning them first
(leftover var_crash LV).
- Upgrades, e.g. 4.2.4, should be easily reinstallable.
- What about old versions in the LV thin pool, how can they be removed safely? (See the sketch below.)
- fstrim -av also trims LV thin pool volumes, nice :-)
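Based on Yuval's steps further down, removing an old version safely looks roughly like this (a sketch: always the writable +1 layer together with its read-only base, and never the layer you are currently booted into):

    nodectl info    # check current_layer first
    lvremove onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1
    lvremove onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0
    nodectl info    # verify the layout is consistent again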
Many thanks to you, I have learned a lot about LVM.
Oliver
On 03.07.2018 at 22:58, Yuval Turgeman <yturgema(a)redhat.com> wrote:
OK, good, this is much better now, but ovirt-node-ng-4.2.4-0.20180626.0+1 still exists without its base - try this:
1. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1
2. nodectl info
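To double-check the pairing first, something like this should show which +1 layer belongs to which base (a read-only sketch; a base carries attr Vri-...-k and appears in the Origin column of its +1 layer):

    lvs -o lv_name,lv_attr,origin onn_ovn-monster | grep ovirt-node-ng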
On Tue, Jul 3, 2018 at 11:52 PM, Oliver Riesener <Oliver.Riesener(a)hs-bremen.de> wrote:
I did it, with issues, see attachment.
> On 03.07.2018 at 22:25, Yuval Turgeman <yturgema(a)redhat.com> wrote:
>
> Hi Oliver,
>
> I would try the following, but please notice it is *very* dangerous, so a backup is probably a good idea (man vgcfgrestore) - a backup sketch follows the steps:
>
> 1. vgcfgrestore --list onn_ovn-monster
> 2. Search for a .vg file that was created before deleting those 2 LVs (ovirt-node-ng-4.2.3-0.20180524.0 and ovirt-node-ng-4.2.3.1-0.20180530.0)
> 3. vgcfgrestore -f path-to-the-file-from-step2.vg onn_ovn-monster --force
> 4. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0
> 5. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1
> 6. lvremove the LVs from the thin pool that are not mounted/used (var_crash?)
> 7. nodectl info to make sure everything is OK
> 8. Reinstall the image-update rpm
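> A minimal safety net before step 3 (a sketch; the backup path is just an example - vgcfgbackup writes the current metadata to a file you can restore from later):
>
>     # take an explicit metadata backup before the forced restore
>     vgcfgbackup -f /root/onn_before_restore.vg onn_ovn-monster
>     # roll back if something goes wrong:
>     # vgcfgrestore -f /root/onn_before_restore.vg onn_ovn-monster --force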
>
> Thanks,
> Yuval.
>
>
>
> On Tue, Jul 3, 2018 at 10:57 PM, Yuval Turgeman <yturgema(a)redhat.com> wrote:
> Hi Oliver,
>
> The KeyError happens because there are no bases for the layers. For each LV that ends with +1, there should be a base read-only LV without the +1. So for 3 ovirt-node-ng images, you are supposed to have 6 LVs. This is the reason nodectl info fails, and the upgrade will fail as well. In your original email it looks OK - I have never seen this happen; was this a manual lvremove? I need to reproduce this and check what can be done.
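> A quick way to spot an orphaned layer is to check that every +1 LV still has its base (a sketch, assuming the naming convention above):
>
>     lvs --noheadings -o lv_name onn_ovn-monster | grep 'ovirt-node-ng.*+1' | sed 's/+1$//' |
>         while read base; do
>             lvs onn_ovn-monster/"$base" >/dev/null 2>&1 || echo "missing base: $base"
>         done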
>
> You can find me on #ovirt (irc.oftc.net) too :)
>
>
> On Tue, Jul 3, 2018 at 10:41 PM, Oliver Riesener <Oliver.Riesener(a)hs-bremen.de> wrote:
> Yuval, here comes the lvs output.
>
> The IO errors are because the node is in maintenance.
> The LV root is from the previously installed CentOS 7.5.
> Then I installed node-ng 4.2.1 and got this mix.
> The LV turbo is an SSD in its own VG named ovirt.
>
> I removed the LVs ovirt-node-ng-4.2.1-0.20180223.0 and (+1)
> because of this nodectl info error:
>
> KeyError: <NVR ovirt-node-ng-4.2.1.1-0.20180223.0 />
>
> Now I get the error @4.2.3:
> [root@ovn-monster ~]# nodectl info
> Traceback (most recent call last):
>   File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
>     "__main__", fname, loader, pkg_name)
>   File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
>     exec code in run_globals
>   File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42, in <module>
>     CliApplication()
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200, in CliApplication
>     return cmdmap.command(args)
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118, in command
>     return self.commands[command](**kwargs)
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76, in info
>     Info(self.imgbased, self.machine).write()
>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 45, in __init__
>     self._fetch_information()
>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in _fetch_information
>     self._get_layout()
>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 66, in _get_layout
>     layout = LayoutParser(self.app.imgbase.layout()).parse()
>   File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 155, in layout
>     return self.naming.layout()
>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 109, in layout
>     tree = self.tree(lvs)
>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 224, in tree
>     bases[img.base.nvr].layers.append(img)
> KeyError: <NVR ovirt-node-ng-4.2.3-0.20180524.0 />
>
> lvs -a
>
> [root@ovn-monster ~]# lvs -a
> /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of 4096 at 0: Input/output error
> /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of 4096 at 0: Input/output error
> /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of 4096 at 0: Input/output error
> [... the same "read failed after 0 of 4096 ... Input/output error" repeats at several offsets for these three multipath devices and for the metadata, ids, leases, outbox, xleases, inbox and master LVs of the storage domains 675cb45d-3746-4f3b-b9ee-516612da50e5, c91974bf-fd64-4630-8005-e785b73acbef and bcdbb66e-6196-4366-be25-a3e9ab948839; trimmed, the node is in maintenance, see above ...]
> LV                                   VG              Attr       LSize    Pool   Origin                           Data%  Meta%  Move Log Cpy%Sync Convert
> home                                 onn_ovn-monster Vwi-aotz--    1,00g pool00                                   4,79
> [lvol0_pmspare]                      onn_ovn-monster ewi-------  144,00m
> ovirt-node-ng-4.2.3-0.20180524.0+1   onn_ovn-monster Vwi-aotz-- <252,38g pool00                                   2,88
> ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_ovn-monster Vwi-a-tz-- <252,38g pool00                                   0,86
> ovirt-node-ng-4.2.4-0.20180626.0     onn_ovn-monster Vri-a-tz-k <252,38g pool00                                   0,85
> ovirt-node-ng-4.2.4-0.20180626.0+1   onn_ovn-monster Vwi-a-tz-- <252,38g pool00 ovirt-node-ng-4.2.4-0.20180626.0  0,85
> pool00                               onn_ovn-monster twi-aotz-- <279,38g                                          6,76   1,01
> [pool00_tdata]                       onn_ovn-monster Twi-ao---- <279,38g
> [pool00_tmeta]                       onn_ovn-monster ewi-ao----    1,00g
> root                                 onn_ovn-monster Vwi-a-tz-- <252,38g pool00                                   1,24
> swap                                 onn_ovn-monster -wi-ao----    4,00g
> tmp                                  onn_ovn-monster Vwi-aotz--    1,00g pool00                                   5,01
> var                                  onn_ovn-monster Vwi-aotz--   15,00g pool00                                   3,56
> var_crash                            onn_ovn-monster Vwi-aotz--   10,00g pool00                                   2,86
> var_log                              onn_ovn-monster Vwi-aotz--    8,00g pool00                                  38,48
> var_log_audit                        onn_ovn-monster Vwi-aotz--    2,00g pool00                                   6,77
> turbo                                ovirt           -wi-ao----  894,25g
>
>> On 03.07.2018 at 12:58, Yuval Turgeman <yturgema(a)redhat.com> wrote:
>>
>> Oliver, can you share the output from lvs ?
>>
>> On Tue, Jul 3, 2018 at 12:06 AM, Oliver Riesener <Oliver.Riesener(a)hs-bremen.de> wrote:
>> Hi Yuval,
>>
>> * Reinstallation failed because the LV already exists:
>>   ovirt-node-ng-4.2.4-0.20180626.0   onn_ovn-monster Vri-a-tz-k <252,38g pool00                                  0,85
>>   ovirt-node-ng-4.2.4-0.20180626.0+1 onn_ovn-monster Vwi-a-tz-- <252,38g pool00 ovirt-node-ng-4.2.4-0.20180626.0 0,85
>> See attachment imgbased.reinstall.log
>>
>> * I removed them and reinstalled again, without luck.
>>
>> I got KeyError: <NVR ovirt-node-ng-4.2.1.1-0.20180223.0 />
>>
>> See attachment imgbased.rereinstall.log
>>
>> Also a new problem with nodectl info
>> [root@ovn-monster tmp]# nodectl info
>> Traceback (most recent call last):
>>   File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
>>     "__main__", fname, loader, pkg_name)
>>   File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
>>     exec code in run_globals
>>   File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42, in <module>
>>     CliApplication()
>>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200, in CliApplication
>>     return cmdmap.command(args)
>>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118, in command
>>     return self.commands[command](**kwargs)
>>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76, in info
>>     Info(self.imgbased, self.machine).write()
>>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 45, in __init__
>>     self._fetch_information()
>>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in _fetch_information
>>     self._get_layout()
>>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 66, in _get_layout
>>     layout = LayoutParser(self.app.imgbase.layout()).parse()
>>   File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 155, in layout
>>     return self.naming.layout()
>>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 109, in layout
>>     tree = self.tree(lvs)
>>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 224, in tree
>>     bases[img.base.nvr].layers.append(img)
>> KeyError: <NVR ovirt-node-ng-4.2.3-0.20180524.0 />
>>
>>
>>
>>
>>
>>
>>> On 02.07.2018 at 22:22, Oliver Riesener <Oliver.Riesener(a)hs-bremen.de> wrote:
>>>
>>> Hi Yuval,
>>>
>>> Yes, you are right, there was an unused and deactivated var_crash LV.
>>>
>>> * I activated it and mounted it to /var/crash via /etc/fstab (sketched below).
>>> * /var/crash was empty, and the LV already had an ext4 fs.
>>>   var_crash onn_ovn-monster Vwi-aotz-- 10,00g pool00 2,86
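>>> The steps were roughly (a sketch; the fstab options are just what I chose):
>>>
>>>     lvchange -ay onn_ovn-monster/var_crash
>>>     echo '/dev/onn_ovn-monster/var_crash /var/crash ext4 defaults,discard 0 0' >> /etc/fstab
>>>     mount /var/crash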
>>>
>>> * Now I will try to upgrade again.
>>> * yum reinstall ovirt-node-ng-image-update.noarch
>>>
>>> BTW, no more imgbased.log files found.
>>>
>>>> On 02.07.2018 at 20:57, Yuval Turgeman <yturgema(a)redhat.com> wrote:
>>>>
>>>> From your log:
>>>>
>>>> AssertionError: Path is already a volume: /var/crash
>>>>
>>>> Basically, it means that you already have an LV for /var/crash but it's not mounted for some reason, so either mount it (if the data is good) or remove it and then reinstall the image-update rpm. Before that, check that you don't have any other LVs in the same state - or you can post the output of lvs... btw, do you have any more imgbased.log files lying around?
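>>>> For example, roughly (a sketch - adjust the VG/LV names to the system at hand):
>>>>
>>>>     # if the data is good, activate and mount it:
>>>>     lvchange -ay onn_ovn-monster/var_crash
>>>>     mount /dev/onn_ovn-monster/var_crash /var/crash
>>>>     # if it is disposable, remove it instead and let the reinstall recreate it:
>>>>     # lvremove onn_ovn-monster/var_crash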
>>>>
>>>> You can find more details about this here:
>>>>
>>>> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/...
>>>>
>>>> On Mon, Jul 2, 2018 at 8:12 PM, Oliver Riesener <Oliver.Riesener(a)hs-bremen.de> wrote:
>>>> Hi,
>>>>
>>>> i attached my /tmp/imgbased.log
>>>>
>>>> Cheers
>>>>
>>>> Oliver
>>>>
>>>>
>>>>
>>>>> On 02.07.2018 at 13:58, Yuval Turgeman <yuvalt(a)redhat.com> wrote:
>>>>>
>>>>> Looks like the upgrade script failed - can you please attach /var/log/imgbased.log or /tmp/imgbased.log?
>>>>>
>>>>> Thanks,
>>>>> Yuval.
>>>>>
>>>>> On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola <sbonazzo(a)redhat.com> wrote:
>>>>> Yuval, can you please have a look?
>>>>>
>>>>> 2018-06-30 7:48 GMT+02:00 Oliver Riesener <Oliver.Riesener(a)hs-bremen.de>:
>>>>> Yes, it is the same here.
>>>>>
>>>>> It seems the bootloader isn't configured right?
>>>>>
>>>>> I did the upgrade and rebooted to 4.2.4 from the UI and got:
>>>>>
>>>>> [root@ovn-monster ~]# nodectl info
>>>>> layers:
>>>>>   ovirt-node-ng-4.2.4-0.20180626.0:
>>>>>     ovirt-node-ng-4.2.4-0.20180626.0+1
>>>>>   ovirt-node-ng-4.2.3.1-0.20180530.0:
>>>>>     ovirt-node-ng-4.2.3.1-0.20180530.0+1
>>>>>   ovirt-node-ng-4.2.3-0.20180524.0:
>>>>>     ovirt-node-ng-4.2.3-0.20180524.0+1
>>>>>   ovirt-node-ng-4.2.1.1-0.20180223.0:
>>>>>     ovirt-node-ng-4.2.1.1-0.20180223.0+1
>>>>> bootloader:
>>>>>   default: ovirt-node-ng-4.2.3-0.20180524.0+1
>>>>>   entries:
>>>>>     ovirt-node-ng-4.2.3-0.20180524.0+1:
>>>>>       index: 0
>>>>>       title: ovirt-node-ng-4.2.3-0.20180524.0
>>>>>       kernel: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
>>>>>       args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1 rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.3-0.20180524.0+1"
>>>>>       initrd: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img
>>>>>       root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
>>>>>     ovirt-node-ng-4.2.1.1-0.20180223.0+1:
>>>>>       index: 1
>>>>>       title: ovirt-node-ng-4.2.1.1-0.20180223.0
>>>>>       kernel: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/vmlinuz-3.10.0-693.17.1.el7.x86_64
>>>>>       args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1 rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.1.1-0.20180223.0+1"
>>>>>       initrd: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/initramfs-3.10.0-693.17.1.el7.x86_64.img
>>>>>       root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1
>>>>> current_layer: ovirt-node-ng-4.2.3-0.20180524.0+1
>>>>> [root@ovn-monster ~]# uptime
>>>>> 07:35:27 up 2 days, 15:42, 1 user, load average: 1,07, 1,00, 0,95
>>>>>
>>>>>> On 29.06.2018 at 23:53, Matt Simonsen <matt(a)khoza.com> wrote:
>>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> I did yum updates on 2 of my oVirt 4.2.3 nodes running the prebuilt node platform and it doesn't appear the updates worked.
>>>>>>
>>>>>>
>>>>>> [root@node6-g8-h4 ~]# yum update
>>>>>> Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist, package_upload, product-id, search-disabled-repos, subscription-manager
>>>>>> This system is not registered with an entitlement server. You can use subscription-manager to register.
>>>>>> Loading mirror speeds from cached hostfile
>>>>>>  * ovirt-4.2-epel: linux.mirrors.es.net
>>>>>> Resolving Dependencies
>>>>>> --> Running transaction check
>>>>>> ---> Package ovirt-node-ng-image-update.noarch 0:4.2.3.1-1.el7 will be updated
>>>>>> ---> Package ovirt-node-ng-image-update.noarch 0:4.2.4-1.el7 will be obsoleting
>>>>>> ---> Package ovirt-node-ng-image-update-placeholder.noarch 0:4.2.3.1-1.el7 will be obsoleted
>>>>>> --> Finished Dependency Resolution
>>>>>>
>>>>>> Dependencies Resolved
>>>>>>
>>>>>> =========================================================================================================================
>>>>>>  Package                      Arch      Version          Repository      Size
>>>>>> =========================================================================================================================
>>>>>> Installing:
>>>>>>  ovirt-node-ng-image-update   noarch    4.2.4-1.el7      ovirt-4.2      647 M
>>>>>>      replacing  ovirt-node-ng-image-update-placeholder.noarch 4.2.3.1-1.el7
>>>>>>
>>>>>> Transaction Summary
>>>>>> =========================================================================================================================
>>>>>> Install  1 Package
>>>>>>
>>>>>> Total download size: 647 M
>>>>>> Is this ok [y/d/N]: y
>>>>>> Downloading packages:
>>>>>> warning: /var/cache/yum/x86_64/7/ovirt-4.2/packages/ovirt-node-ng-image-update-4.2.4-1.el7.noarch.rpm: Header V4 RSA/SHA1 Signature, key ID fe590cb7: NOKEY
>>>>>> Public key for ovirt-node-ng-image-update-4.2.4-1.el7.noarch.rpm is not installed
>>>>>> ovirt-node-ng-image-update-4.2.4-1.el7.noarch.rpm                          | 647 MB  00:02:07
>>>>>> Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-ovirt-4.2
>>>>>> Importing GPG key 0xFE590CB7:
>>>>>> Userid : "oVirt <infra(a)ovirt.org>"
>>>>>> Fingerprint: 31a5 d783 7fad 7cb2 86cd 3469 ab8c 4f9d fe59 0cb7
>>>>>> Package : ovirt-release42-4.2.3.1-1.el7.noarch (installed)
>>>>>> From : /etc/pki/rpm-gpg/RPM-GPG-ovirt-4.2
>>>>>> Is this ok [y/N]: y
>>>>>> Running transaction check
>>>>>> Running transaction test
>>>>>> Transaction test succeeded
>>>>>> Running transaction
>>>>>>   Installing : ovirt-node-ng-image-update-4.2.4-1.el7.noarch                1/3
>>>>>> warning: %post(ovirt-node-ng-image-update-4.2.4-1.el7.noarch) scriptlet failed, exit status 1
>>>>>> Non-fatal POSTIN scriptlet failure in rpm package ovirt-node-ng-image-update-4.2.4-1.el7.noarch
>>>>>>   Erasing    : ovirt-node-ng-image-update-placeholder-4.2.3.1-1.el7.noarch  2/3
>>>>>>   Cleanup    : ovirt-node-ng-image-update-4.2.3.1-1.el7.noarch              3/3
>>>>>> warning: file /usr/share/ovirt-node-ng/image/ovirt-node-ng-4.2.0-0.20180530.0.el7.squashfs.img: remove failed: No such file or directory
>>>>>> Uploading Package Profile
>>>>>> Unable to upload Package Profile
>>>>>>   Verifying  : ovirt-node-ng-image-update-4.2.4-1.el7.noarch                1/3
>>>>>>   Verifying  : ovirt-node-ng-image-update-4.2.3.1-1.el7.noarch              2/3
>>>>>>   Verifying  : ovirt-node-ng-image-update-placeholder-4.2.3.1-1.el7.noarch  3/3
>>>>>>
>>>>>> Installed:
>>>>>> ovirt-node-ng-image-update.noarch 0:4.2.4-1.el7
>>>>>>
>>>>>> Replaced:
>>>>>> ovirt-node-ng-image-update-placeholder.noarch 0:4.2.3.1-1.el7
>>>>>>
>>>>>> Complete!
>>>>>> Uploading Enabled Repositories Report
>>>>>> Loaded plugins: fastestmirror, product-id, subscription-manager
>>>>>> This system is not registered with an entitlement server. You can use subscription-manager to register.
>>>>>> Cannot upload enabled repos report, is this client registered?
>>>>>>
>>>>>>
>>>>>> My engine shows the nodes as having no updates; however, the major components, including the kernel version and the port 9090 admin GUI, show 4.2.3.
>>>>>>
>>>>>> Is there anything I can provide to help diagnose the issue?
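>>>>>> For a start, I can collect (a sketch of what seems relevant, given the failed %post scriptlet above):
>>>>>>
>>>>>>     nodectl info
>>>>>>     ls -l /var/log/imgbased.log /tmp/imgbased.log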
>>>>>>
>>>>>>
>>>>>> [root@node6-g8-h4 ~]# rpm -qa | grep ovirt
>>>>>>
>>>>>> ovirt-imageio-common-1.3.1.2-0.el7.centos.noarch
>>>>>> ovirt-host-deploy-1.7.3-1.el7.centos.noarch
>>>>>> ovirt-vmconsole-host-1.0.5-4.el7.centos.noarch
>>>>>> ovirt-provider-ovn-driver-1.2.10-1.el7.centos.noarch
>>>>>> ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
>>>>>> ovirt-setup-lib-1.1.4-1.el7.centos.noarch
>>>>>> ovirt-release42-4.2.3.1-1.el7.noarch
>>>>>> ovirt-imageio-daemon-1.3.1.2-0.el7.centos.noarch
>>>>>> ovirt-hosted-engine-setup-2.2.20-1.el7.centos.noarch
>>>>>> ovirt-host-dependencies-4.2.2-2.el7.centos.x86_64
>>>>>> ovirt-hosted-engine-ha-2.2.11-1.el7.centos.noarch
>>>>>> ovirt-host-4.2.2-2.el7.centos.x86_64
>>>>>> ovirt-node-ng-image-update-4.2.4-1.el7.noarch
>>>>>> ovirt-vmconsole-1.0.5-4.el7.centos.noarch
>>>>>> ovirt-release-host-node-4.2.3.1-1.el7.noarch
>>>>>> cockpit-ovirt-dashboard-0.11.24-1.el7.centos.noarch
>>>>>> ovirt-node-ng-nodectl-4.2.0-0.20180524.0.el7.noarch
>>>>>> python-ovirt-engine-sdk4-4.2.6-2.el7.centos.x86_64
>>>>>>
>>>>>> [root@node6-g8-h4 ~]# yum update
>>>>>> Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist, package_upload, product-id, search-disabled-repos, subscription-manager
>>>>>> This system is not registered with an entitlement server. You can use subscription-manager to register.
>>>>>> Loading mirror speeds from cached hostfile
>>>>>>  * ovirt-4.2-epel: linux.mirrors.es.net
>>>>>> No packages marked for update
>>>>>> Uploading Enabled Repositories Report
>>>>>> Loaded plugins: fastestmirror, product-id, subscription-manager
>>>>>> This system is not registered with an entitlement server. You can use subscription-manager to register.
>>>>>> Cannot upload enabled repos report, is this client registered?
>>>>>> _______________________________________________
>>>>>> Users mailing list -- users(a)ovirt.org
>>>>>> To unsubscribe send an email to users-leave(a)ovirt.org
>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>>>>>> List Archives: https://lists.ovirt.org/archive ...
[Message clipped]