I'm attempting to update our oVirt cluster from 4.5.4 to 4.5.5; the hosts run oVirt Node NG.
When I update a host through the oVirt Manager GUI, the host reboots but then fails to come
back up and drops into emergency recovery mode:
[ 4.534872] localhost systemd[1]: Reached target Local File Systems.
[ 4.535119] localhost systemd[1]: Reached target System Initialization.
[ 4.535343] localhost systemd[1]: Reached target Basic System.
[ 4.536759] localhost systemd[1]: Started Hardware RNG Entropy Gatherer Daemon.
[ 4.541801] localhost rngd[1512]: Disabling 7: PKCS11 Entropy generator (pkcs11)
[ 4.541801] localhost rngd[1512]: Disabling 5: NIST Network Entropy Beacon (nist)
[ 4.541801] localhost rngd[1512]: Disabling 9: Qrypt quantum entropy beacon (qrypt)
[ 4.541801] localhost rngd[1512]: Initializing available sources
[ 4.542073] localhost rngd[1512]: [hwrng ]: Initialization Failed
[ 4.542073] localhost rngd[1512]: [rdrand]: Enabling RDSEED rng support
[ 4.542073] localhost rngd[1512]: [rdrand]: Initialized
[ 4.542073] localhost rngd[1512]: [jitter]: JITTER timeout set to 5 sec
[ 4.582381] localhost rngd[1512]: [jitter]: Initializing AES buffer
[ 8.309063] localhost rngd[1512]: [jitter]: Enabling JITTER rng support
[ 8.309063] localhost rngd[1512]: [jitter]: Initialized
[ 133.884355] localhost dracut-initqueue[1095]: Warning: dracut-initqueue: timeout, still waiting for following initqueue hooks:
[ 133.885349] localhost dracut-initqueue[1095]: Warning: /lib/dracut/hooks/initqueue/finished/devexists-\x2fdev\x2fdisk\x2fby-id\x2fmd-uuid-3f47cad8:fecb96ea:0ea37615:4e5dec4e.sh: "[ -e "/dev/disk/by-id/md-uuid-3f47cad8:fecb96ea:0ea37615:4e5dec4e" ]"
[ 133.886485] localhost dracut-initqueue[1095]: Warning: /lib/dracut/hooks/initqueue/finished/devexists-\x2fdev\x2fdisk\x2fby-id\x2fmd-uuid-d446b801:d515c112:116ff07f:9ae52466.sh: "[ -e "/dev/disk/by-id/md-uuid-d446b801:d515c112:116ff07f:9ae52466" ]"
[ 133.887619] localhost dracut-initqueue[1095]: Warning: /lib/dracut/hooks/initqueue/finished/devexists-\x2fdev\x2fonn\x2fovirt-node-ng-4.5.5-0.20231130.0+1.sh: "if ! grep -q After=remote-fs-pre.target /run/systemd/generator/systemd-cryptsetup@*.service 2>/dev/null; then
[ 133.887619] localhost dracut-initqueue[1095]: [ -e "/dev/onn/ovirt-node-ng-4.5.5-0.20231130.0+1" ]
[ 133.887619] localhost dracut-initqueue[1095]: fi"
[ 133.888667] localhost dracut-initqueue[1095]: Warning: /lib/dracut/hooks/initqueue/finished/devexists-\x2fdev\x2fonn\x2fswap.sh: "[ -e "/dev/onn/swap" ]"
[ 133.890050] localhost dracut-initqueue[1095]: Warning: dracut-initqueue: starting timeout scripts
[ 133.969228] localhost dracut-initqueue[7366]: Scanning devices md126p2 for LVM logical volumes onn/ovirt-node-ng-4.5.5-0.20231130.0+1 onn/swap
[ 134.001560] localhost dracut-initqueue[7366]: onn/ovirt-node-ng-4.5.5-0.20231130.0+1 thin
[ 134.001560] localhost dracut-initqueue[7366]: onn/swap linear
[ 134.014259] localhost dracut-initqueue[7381]: /etc/lvm/profile/imgbased-pool.profile: stat failed: No such file or directory
[ 134.532608] localhost dracut-initqueue[7381]: Check of pool onn/pool00 failed (status:64). Manual repair required!
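For what it's worth, this is what I've pieced together for attempting the "manual repair" the last message asks for, from the dracut emergency shell. I haven't confirmed it works on this box: the profile contents are my guess at what imgbased normally writes (it's the file the stat complaint refers to), and lvconvert --repair is just the stock LVM thin-pool repair path, so treat it as a sketch:

# From the dracut emergency shell (LVM tools sit behind the `lvm` wrapper here).
# First check whether the RAID mirror assembled at all:
mdadm --detail /dev/md126

# Recreate the profile the pool references so LVM stops complaining.
# NOTE: these contents are an assumption, not copied from a working node.
mkdir -p /etc/lvm/profile
cat > /etc/lvm/profile/imgbased-pool.profile <<'EOF'
activation {
    thin_pool_autoextend_threshold = 80
    thin_pool_autoextend_percent = 20
}
EOF

# Attempt the repair: rebuild the thin pool metadata, then try to
# activate the VG so boot can continue:
lvm lvconvert --repair onn/pool00
lvm vgchange -ay onn

If someone knows the canonical contents of imgbased-pool.profile, I'd appreciate a copy.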
I then wrote the oVirt Node NG 4.5.5 ISO to a USB stick and tried a clean install instead.
However, after going through the installer GUI and configuring storage, network, hostname,
etc., the install fails shortly after clicking "Begin":
22:11:32,671 WARNING org.fedoraproject.Anaconda.Modules.Storage:INFO:blivet:executing action: [468] destroy device lvmthinlv onn-var_log_audit (id 216)
22:11:32,672 WARNING org.fedoraproject.Anaconda.Modules.Storage:DEBUG:blivet: LVMLogicalVolumeDevice.destroy: onn-var_log_audit ; status: False ;
22:11:32,673 WARNING org.fedoraproject.Anaconda.Modules.Storage:DEBUG:blivet: LVMLogicalVolumeDevice.teardown: onn-var_log_audit ; status: False ; controllable: False ;
22:11:32,674 WARNING org.fedoraproject.Anaconda.Modules.Storage:DEBUG:blivet: LVMVolumeGroupDevice.setup_parents: name: onn ; orig: True ;
22:11:32,674 WARNING org.fedoraproject.Anaconda.Modules.Storage:DEBUG:blivet: PartitionDevice.setup: Volume0_0p2 ; orig: True ; status: True ; controllable: True ;
22:11:32,675 WARNING org.fedoraproject.Anaconda.Modules.Storage:DEBUG:blivet: LVMPhysicalVolume.setup: device: /dev/md/Volume0_0p2 ; type: lvmpv ; status: False ;
22:11:32,676 WARNING org.fedoraproject.Anaconda.Modules.Storage:DEBUG:blivet: LVMLogicalVolumeDevice._destroy: onn-var_log_audit ; status: False ;
22:11:32,676 WARNING org.fedoraproject.Anaconda.Modules.Storage:INFO:program:Running [97] lvm lvremove --yes onn/var_log_audit --config= log {level=7 file=/tmp/lvm.log syslog=0} --devices=/dev/md/Volume0_0p2 ...
22:11:33,104 ERR rsyslogd:imjournal: open() failed for path: '/var/lib/rsyslog/imjournal.state.tmp': Operation not permitted [v8.2310.0-3.el9 try https://www.rsyslog.com/e/2433 ]
(the rsyslogd message above repeats five more times)
22:11:33,309 WARNING org.fedoraproject.Anaconda.Modules.Storage:INFO:program:stdout[97]:
22:11:33,310 WARNING org.fedoraproject.Anaconda.Modules.Storage:INFO:program:stderr[97]: /etc/lvm/profile/imgbased-pool.profile: stat failed: No such file or directory
22:11:33,310 WARNING org.fedoraproject.Anaconda.Modules.Storage: Check of pool onn/pool00 failed (status:64). Manual repair required!
I'm wondering whether this has to do with installing oVirt Node on a RAID mirror (the
md devices in the logs above)? Has anyone else run into this?
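If I end up reinstalling from the USB stick again, my plan is to switch to the Anaconda shell on tty2 first and clear the old volume group and RAID metadata by hand, so blivet never has to tear down the broken pool itself. Something like the below; device names are taken from the logs above, except sda/sdb which are a guess at the mirror members, and it's obviously destructive:

# From the Anaconda shell (Ctrl+Alt+F2), before clicking "Begin":
vgchange -an onn                # deactivate the old onn volume group
vgremove -ff onn                # force-remove it so the installer skips the broken pool
wipefs -a /dev/md/Volume0_0p2   # wipe the PV signature (the PV from the blivet log)

# Only if the RAID volume itself has to be rebuilt: stop it and zero the
# member superblocks (sda/sdb is my guess at the member disks):
mdadm --stop /dev/md126
mdadm --zero-superblock /dev/sda /dev/sdb

Does that sound sane, or is there a cleaner way to hand a previously imgbased-managed disk back to Anaconda?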