Hello everyone,

The hosted engine VM switched to a paused state after I attempted to place a host into maintenance, specifically when I selected the option to stop the Gluster service.

The engine GlusterFS volume has 3 bricks, and heal info reports no pending entries:
[root@ovirt10 glusterfs]# gluster v heal engine info
Brick 10.64.7.10:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0
Brick 10.64.7.11:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0
Brick 10.64.7.12:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0
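All three bricks report Connected with zero heal entries. In case more gluster detail is useful, I can also send the output of these commands from one of the hosts:

  gluster volume status engine
  gluster volume info engine

Here is the relevant hosted-engine --vm-status section for the host currently running the engine VM: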
--== Host ovirt11 (id: 3) status ==--
Host ID : 3
Host timestamp : 34188104
Score : 3400
Engine status : {"vm": "up", "health": "bad", "detail": "Paused", "reason": "bad vm status"}
The cluster is in global maintenance.
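(Global maintenance was set from one of the hosts with the usual command, as far as I recall:

  hosted-engine --set-maintenance --mode=global

so the HA agents should not be attempting to restart the engine VM on their own.)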
The oVirt node I attempted to place into maintenance (ovirt21) had no VMs running on it.
Attempting to restart the engine VM while gluster is not running on ovirt21 results in the following error from the hosted-engine broker:
MainThread::WARNING::2025-07-01
13:05:25,988::storage_broker::100::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(__init__)
Can't connect vdsm storage: path to storage domain
4d31ba9e-5167-4af8-bc2a-361cce9b8278 not found in /rhev/data-center/mnt/glusterSD
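If a directory listing from ovirt21 would help, I can paste the output of:

  ls -l /rhev/data-center/mnt/glusterSD/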
If I restart glusterd, ovirt-ha-agent and ovirt-ha-broker on the host I put into maintenance (ovirt21), roughly with the commands below, and then attempt to restart the engine VM, it starts up, but it is paused again shortly afterwards when the glusterd service is automatically stopped on ovirt21.
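The sequence is approximately this (service names from memory, so they may be slightly off):

  # on ovirt21
  systemctl restart glusterd ovirt-ha-agent ovirt-ha-broker

  # then, from one of the hosted-engine hosts
  hosted-engine --vm-start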
It seems the engine is stuck trying to put that host into maintenance: as soon as the engine VM starts, glusterd is stopped on ovirt21 again and the VM is paused.
Is there a way to safely unmount the engine gluster volume from the empty host without affecting the rest of the cluster? And/or, is there a way to stop the engine from trying to put that host into maintenance?
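For the unmount part: assuming nothing else on ovirt21 uses that mount, would something like the following be safe to run there (with the real mount point substituted for the placeholder), or will the HA broker simply remount it?

  umount /rhev/data-center/mnt/glusterSD/<server>:_engine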
Thanks!
Mario M