Re: Ovirt 4.2.7 won't start and drops to emergency console

Trust Red Hat :) At least their approach should be safer. Of course, you can raise a documentation bug, but RHEL7 is at a stage in its lifecycle where it might not get fixed unless the same issue is found in v8. Best Regards, Strahil Nikolov

On Oct 2, 2019 05:43, jeremy_tourville@hotmail.com wrote:
http://man7.org/linux/man-pages/man7/lvmthin.7.html

Command to repair a thin pool:
lvconvert --repair VG/ThinPoolLV
Repair performs the following steps:
1. Creates a new, repaired copy of the metadata. lvconvert runs the thin_repair command to read damaged metadata from the existing pool metadata LV, and writes a new repaired copy to the VG's pmspare LV.
2. Replaces the thin pool metadata LV. If step 1 is successful, the thin pool metadata LV is replaced with the pmspare LV containing the corrected metadata. The previous thin pool metadata LV, containing the damaged metadata, becomes visible with the new name ThinPoolLV_tmetaN (where N is 0,1,...).
If the repair works, the thin pool LV and its thin LVs can be activated, and the LV containing the damaged thin pool metadata can be removed. It may be useful to move the new metadata LV (previously pmspare) to a better PV.
If the repair does not work, the thin pool LV and its thin LVs are lost.
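
To make sure I understand that sequence for my setup, here is a minimal sketch of what I think it looks like for my pool (gluster_vg1/lvthinpool, assuming the VG is visible again and that a pmspare LV or enough free space exists for the repaired copy):

# vgchange -an gluster_vg1                   (the pool and its thin LVs must be inactive)
# lvconvert --repair gluster_vg1/lvthinpool  (runs thin_repair and swaps in the repaired metadata)
# vgchange -ay gluster_vg1
# lvs -a gluster_vg1
# lvremove gluster_vg1/lvthinpool_tmeta0     (drop the old damaged copy, using whatever _tmetaN name lvs -a actually shows)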
This info seems to conflict with Red Hat's advice. Red Hat says that if the metadata volume is full, you should not run an lvconvert --repair operation. Now I am confused. I am familiar with LVM and comfortable with it, but this is my first time trying to repair a thin LVM pool; the concept of a metadata volume is new to me.

Sorry for the multiple posts. I had so many thoughts rolling around in my head. I'll try to consolidate my questions here and rephrase the last three responses.
"Try to get all data in advance (before deactivating the VG)".
Can you clarify? What do you mean by this?
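
If it means capturing the current LVM state before anything is deactivated, this is what I would plan to collect (just my guess at what is wanted; all standard tools):

# vgcfgbackup                (writes the current metadata of each visible VG to /etc/lvm/backup)
# pvs -o +pv_uuid
# vgs -o +vg_uuid
# lvs -a -o +devices
# dmsetup table
# lsblk ; blkid              (record the device layout and UUIDs somewhere safe)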
"I still can't imagine why the VG will disappear. Try with 'pvscan --cache' to redetect the PV. Afrer all , all VG info is in the PVs' headers and should be visible no matter the VG is deactivated or not."
I'll try that and see what happens.
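
My plan, assuming lvmetad is running (the default on this node) and that the PV has simply dropped out of the cache:

# pvscan --cache                    (rescan devices and repopulate lvmetad)
# pvs ; vgs                         (check whether gluster_vg1 shows up again)
# pvscan --cache --activate ay      (only if the VG reappears and I want it activated)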
"Is this an oVirt Node or a regular CentOS/RHEL"
oVirt Node
"You got 15GiB of metadata, so create your new metadata LV at least 30 GiB."
My presumption is that 15 GB is the maximum size that was specified when the pool was initially built, not the actual size right now, but yes, making it larger does make sense. I was curious to know what the present size is. Also, the lvconvert man page specifies a supported metadata size between 2M and 16G. I will create a larger metadata volume, assuming I can get the procedure to work properly. I am having some difficulty with the procedure, see below.

My most important question was here: I am trying to follow the procedure from https://access.redhat.com/solutions/3251681 and I am on step #2. Steps 2a and 2b work without issue. Step 2c gives me an error. Here are the values I am using:

# lvcreate -L 2G gluster_vg3 --name tmpLV
(created with no issues)

# lvchange -ay gluster_vg3/tmpLV
(command completed with no issues)

# lvconvert --thinpool gluster_vg1/lvthinpool --poolmetadata gluster_vg3/tmpLV
Error: VG name mismatch from position arg (gluster_vg1) and option arg (gluster_vg3)

Do I need to create the LV on the same disk that failed (gluster_vg1)? i.e. perhaps my command is wrong and should be:

# lvconvert --thinpool gluster_vg3/lvthinpool --poolmetadata gluster_vg3/tmpLV

** Perhaps this command assumes you already have a VG=gluster_vg3 and LV=lvthinpool?
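If the temporary metadata LV has to live in the same VG as the pool, which is what the "VG name mismatch" error suggests, then I assume step 2 should look like this once gluster_vg1 is visible again and has ~2G free (my guess, not the verified procedure):

# lvcreate -L 2G -n tmpLV gluster_vg1
# lvchange -ay gluster_vg1/tmpLV
# lvconvert --thinpool gluster_vg1/lvthinpool --poolmetadata gluster_vg1/tmpLV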

I did some checking and my disk is not in a state I expected. (The system doesn't even know the VG exists in its present state.) See the results:

# pvs
  PV          VG          Fmt  Attr PSize   PFree
  /dev/md127  onn_vmh     lvm2 a--  222.44g 43.66g
  /dev/sdd1   gluster_vg3 lvm2 a--   <4.00g <2.00g

# pvs -a
  PV                                                VG          Fmt  Attr PSize   PFree
  /dev/md127                                        onn_vmh     lvm2 a--  222.44g 43.66g
  /dev/onn_vmh/home                                                  ---       0      0
  /dev/onn_vmh/ovirt-node-ng-4.2.7.1-0.20181216.0+1                  ---       0      0
  /dev/onn_vmh/root                                                  ---       0      0
  /dev/onn_vmh/swap                                                  ---       0      0
  /dev/onn_vmh/tmp                                                   ---       0      0
  /dev/onn_vmh/var                                                   ---       0      0
  /dev/onn_vmh/var_crash                                             ---       0      0
  /dev/onn_vmh/var_log                                               ---       0      0
  /dev/onn_vmh/var_log_audit                                         ---       0      0
  /dev/sda1                                                          ---       0      0
  /dev/sdb1                                                          ---       0      0
  /dev/sdd1                                         gluster_vg3 lvm2 a--   <4.00g <2.00g
  /dev/sde1                                                          ---       0      0

# vgs
  VG          #PV #LV #SN Attr   VSize   VFree
  gluster_vg3   1   1   0 wz--n-  <4.00g <2.00g
  onn_vmh       1  11   0 wz--n- 222.44g 43.66g

# vgs -a
  VG          #PV #LV #SN Attr   VSize   VFree
  gluster_vg3   1   1   0 wz--n-  <4.00g <2.00g
  onn_vmh       1  11   0 wz--n- 222.44g 43.66g

# lvs
  LV                                   VG          Attr       LSize   Pool   Origin                             Data%  Meta%  Move Log Cpy%Sync Convert
  tmpLV                                gluster_vg3 -wi-------   2.00g
  home                                 onn_vmh     Vwi-aotz--   1.00g pool00                                     4.79
  ovirt-node-ng-4.2.7.1-0.20181216.0   onn_vmh     Vwi---tz-k 146.60g pool00 root
  ovirt-node-ng-4.2.7.1-0.20181216.0+1 onn_vmh     Vwi-aotz-- 146.60g pool00 ovirt-node-ng-4.2.7.1-0.20181216.0  4.81
  pool00                               onn_vmh     twi-aotz-- 173.60g                                            7.21   2.30
  root                                 onn_vmh     Vwi-a-tz-- 146.60g pool00                                     2.92
  swap                                 onn_vmh     -wi-ao----   4.00g
  tmp                                  onn_vmh     Vwi-aotz--   1.00g pool00                                    53.66
  var                                  onn_vmh     Vwi-aotz--  15.00g pool00                                    15.75
  var_crash                            onn_vmh     Vwi-aotz--  10.00g pool00                                     2.86
  var_log                              onn_vmh     Vwi-aotz--   8.00g pool00                                    14.73
  var_log_audit                        onn_vmh     Vwi-aotz--   2.00g pool00                                     6.91

# lvs -a
  LV                                   VG          Attr       LSize   Pool   Origin                             Data%  Meta%  Move Log Cpy%Sync Convert
  tmpLV                                gluster_vg3 -wi-------   2.00g
  home                                 onn_vmh     Vwi-aotz--   1.00g pool00                                     4.79
  [lvol0_pmspare]                      onn_vmh     ewi------- 180.00m
  ovirt-node-ng-4.2.7.1-0.20181216.0   onn_vmh     Vwi---tz-k 146.60g pool00 root
  ovirt-node-ng-4.2.7.1-0.20181216.0+1 onn_vmh     Vwi-aotz-- 146.60g pool00 ovirt-node-ng-4.2.7.1-0.20181216.0  4.81
  pool00                               onn_vmh     twi-aotz-- 173.60g                                            7.21   2.30
  [pool00_tdata]                       onn_vmh     Twi-ao---- 173.60g
  [pool00_tmeta]                       onn_vmh     ewi-ao----   1.00g
  root                                 onn_vmh     Vwi-a-tz-- 146.60g pool00                                     2.92
  swap                                 onn_vmh     -wi-ao----   4.00g
  tmp                                  onn_vmh     Vwi-aotz--   1.00g pool00                                    53.66
  var                                  onn_vmh     Vwi-aotz--  15.00g pool00                                    15.75
  var_crash                            onn_vmh     Vwi-aotz--  10.00g pool00                                     2.86
  var_log                              onn_vmh     Vwi-aotz--   8.00g pool00                                    14.73
  var_log_audit                        onn_vmh     Vwi-aotz--   2.00g pool00                                     6.91

# pvscan
  PV /dev/md127   VG onn_vmh       lvm2 [222.44 GiB / 43.66 GiB free]
  PV /dev/sdd1    VG gluster_vg3   lvm2 [<4.00 GiB / <2.00 GiB free]
  Total: 2 [<226.44 GiB] / in use: 2 [<226.44 GiB] / in no VG: 0 [0   ]

# lvscan
  ACTIVE   '/dev/onn_vmh/pool00' [173.60 GiB] inherit
  ACTIVE   '/dev/onn_vmh/root' [146.60 GiB] inherit
  ACTIVE   '/dev/onn_vmh/home' [1.00 GiB] inherit
  ACTIVE   '/dev/onn_vmh/tmp' [1.00 GiB] inherit
  ACTIVE   '/dev/onn_vmh/var' [15.00 GiB] inherit
  ACTIVE   '/dev/onn_vmh/var_log' [8.00 GiB] inherit
  ACTIVE   '/dev/onn_vmh/var_log_audit' [2.00 GiB] inherit
  ACTIVE   '/dev/onn_vmh/swap' [4.00 GiB] inherit
  inactive '/dev/onn_vmh/ovirt-node-ng-4.2.7.1-0.20181216.0' [146.60 GiB] inherit
  ACTIVE   '/dev/onn_vmh/ovirt-node-ng-4.2.7.1-0.20181216.0+1' [146.60 GiB] inherit
  ACTIVE   '/dev/onn_vmh/var_crash' [10.00 GiB] inherit
  inactive '/dev/gluster_vg3/tmpLV' [2.00 GiB] inherit

# vgscan
  Reading all physical volumes. This may take a while...
  Found volume group "onn_vmh" using metadata type lvm2
  Found volume group "gluster_vg3" using metadata type lvm2
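Since /dev/sda1, /dev/sdb1 and /dev/sde1 show up in pvs -a with no VG, my next step would be to check whether any of them still carries an LVM label and metadata header (device names are just the ones from the output above):

# blkid /dev/sda1 /dev/sdb1 /dev/sde1   (TYPE="LVM2_member" would mean the LVM label is still there)
# pvck -v /dev/sda1                     (checks the label and metadata area on the device)
# pvscan --cache                        (re-read the devices and update lvmetad)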
I sort of expect I may need to do a restore/rebuild of the disk using the data from the LVM backup folder. I found some interesting articles: https://www3.unixrealm.com/repair-a-thin-pool/ and https://chappie800.wordpress.com/2017/06/13/lvm-repair-metadata/. I have read through them and I'm trying to slowly digest the information in them. Working with thin volumes to do a repair is definitely new to me; my past experience is that they just work as they should. I am only now learning the hard way that thin volumes have metadata and a very specific structure. At least I'm not learning LVM and thin provisioning at the same time. Anyway, I digress.

Based on what I read, I was able to download and install the device-mapper-persistent-data rpm on the node. I then checked the tools location to see what tools are available:

# cd /usr/bin
(checking for the existence of available tools)

# ls | grep pv
fipvlan  pvchange  pvck  pvcreate  pvdisplay  pvmove  pvremove  pvresize  pvs  pvscan

# ls | grep vg
vgcfgbackup  vgcfgrestore  vgchange  vgck  vgconvert  vgcreate  vgdisplay  vgexport  vgextend  vgimport  vgimportclone  vgmerge  vgmknodes  vgreduce  vgremove  vgrename  vgs  vgscan  vgsplit

# ls | grep lv
lvchange  lvconvert  lvcreate  lvdisplay  lvextend  lvm  lvmconf  lvmconfig  lvmdiskscan  lvmdump  lvmetad  lvmpolld  lvmsadc  lvmsar  lvreduce  lvremove  lvrename  lvresize  lvs  lvscan

Based on the scenario here, how do I get my disks into a state where I can both find and rebuild the data for the VG and VG metadata / LV and LV metadata?
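
In case it does come to restoring from the backup folder, my rough understanding of the procedure is something like the following (only a sketch: it assumes a backup of gluster_vg1 exists under /etc/lvm/backup or /etc/lvm/archive, and the device name and UUID are placeholders taken from that backup file):

# vgcfgrestore --list gluster_vg1
# pvcreate --uuid <PV-UUID-from-backup-file> --restorefile /etc/lvm/backup/gluster_vg1 /dev/sdX1
# vgcfgrestore -f /etc/lvm/backup/gluster_vg1 gluster_vg1
# vgchange -ay gluster_vg1

I realize this only restores the VG metadata (and vgcfgrestore may insist on --force when thin pools are involved); the thin pool's own metadata would presumably still need the lvconvert --repair / metadata swap discussed above.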