Adding Sachi
This only started happening with oVirt Node 4.3; 4.2 didn't have this issue. Since I updated to 4.3, the host goes into emergency mode on every reboot. The first few times this happened I re-installed the OS from scratch, but after some digging I found that the drives mounted in /etc/fstab cause the problem, specifically these mounts. All three are single drives: one is an SSD and the other two are individual NVMe drives.
UUID=732f939c-f133-4e48-8dc8-c9d21dbc0853 /gluster_bricks/storage_nvme1 auto defaults 0 0
UUID=5bb67f61-9d14-4d0b-8aa4-ae3905276797 /gluster_bricks/storage_ssd auto defaults 0 0
UUID=f55082ca-1269-4477-9bf8-7190f1add9ef /gluster_bricks/storage_nvme2 auto defaults 0 0
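For reference, a quick way to check after a reboot whether any device still carries these UUIDs (a sketch using the UUIDs from the lines above; blkid -U prints the device with a given UUID, or nothing and a non-zero exit if none does):

# blkid -U 732f939c-f133-4e48-8dc8-c9d21dbc0853
# blkid -U 5bb67f61-9d14-4d0b-8aa4-ae3905276797
# blkid -U f55082ca-1269-4477-9bf8-7190f1add9ef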
In order to get the host to actually boot, I have to go to the console, delete those mounts, reboot, and then re-add them, and they end up with new UUIDs. All of these hosts rebooted reliably on 4.2 and earlier, but every 4.3 version has this same problem (I keep updating in the hope that the issue is fixed).
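The recovery each time is roughly this sequence (a sketch; run from the emergency-mode shell, and /dev/<vg>/<lv> is a placeholder for each brick's LV path):

# vi /etc/fstab            (comment out the three gluster_bricks lines)
# systemctl reboot

After the host comes back up:

# blkid /dev/<vg>/<lv>     (read the new UUID for each brick)
# vi /etc/fstab            (re-add the three lines with the new UUIDs)
# mount -a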
Hello Michael,
I need your help in resolving this. I would like to understand whether the environment is affecting something.
What is the output of:
# blkid /dev/vgname/lvname
For the three bricks you have.
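For example, the output should look something like this (the VG/LV names and UUID here are placeholders):

# blkid /dev/gluster_vg_nvme1/gluster_lv_nvme1
/dev/gluster_vg_nvme1/gluster_lv_nvme1: UUID="..." TYPE="xfs"

What I want to see is whether the UUID printed there still matches the entry in /etc/fstab.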
Also, what error do you see when you run these commands:
# mount /gluster_bricks/storage_nvme1
# mount /gluster_bricks/storage_ssd
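If the mounts fail only during boot, the journal from that boot should have the exact error as well (a sketch; the unit name is just the mount path with dashes, per systemd-escape, so adjust it for the other bricks):

# journalctl -b -u gluster_bricks-storage_nvme1.mount
# journalctl -b -p err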
Also, can you please attach your variable file and playbook?
In my setup things work fine, which makes it difficult for me to reproduce and fix this.
-sac