Hello
Maybe this is more GlusterFS- than oVirt-related, but since oVirt integrates Gluster management and I'm experiencing the problem in an oVirt cluster, I'm writing here.
The problem is simple: I have a data domain mapped onto a replica 3 arbiter 1 Gluster volume with 6 bricks, like this:
Status of volume: data_ssd
Gluster process                                           TCP Port  RDMA Port  Online  Pid
-------------------------------------------------------------------------------------------
Brick vm01.storage.billy:/gluster/ssd/data/brick          49153     0          Y       19298
Brick vm02.storage.billy:/gluster/ssd/data/brick          49153     0          Y       6146
Brick vm03.storage.billy:/gluster/ssd/data/arbiter_brick  49153     0          Y       6552
Brick vm03.storage.billy:/gluster/ssd/data/brick          49154     0          Y       6559
Brick vm04.storage.billy:/gluster/ssd/data/brick          49152     0          Y       6077
Brick vm02.storage.billy:/gluster/ssd/data/arbiter_brick  49154     0          Y       6153
Self-heal Daemon on localhost                             N/A       N/A        Y       30746
Self-heal Daemon on vm01.storage.billy                    N/A       N/A        Y       196058
Self-heal Daemon on vm03.storage.billy                    N/A       N/A        Y       23205
Self-heal Daemon on vm04.storage.billy                    N/A       N/A        Y       8246

Now, I put the vm04 host into maintenance from oVirt, ticking the "Stop gluster" checkbox, and oVirt didn't complain about anything. But when I then tried to run a new VM, it failed with a "storage I/O problem" error, while the data storage domain status stayed UP the whole time.
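For reference, the layout above should correspond to a create command roughly like the one below. I'm reconstructing it from the status output, so treat the brick order and paths as my best guess rather than the exact command originally used; bricks are taken in groups of three, with the third of each group acting as the arbiter, so vm01 and vm02 replicate with the arbiter on vm03, and vm03 and vm04 replicate with the arbiter on vm02:

# bricks are grouped in threes; the third brick of each group is the arbiter
gluster volume create data_ssd replica 3 arbiter 1 \
    vm01.storage.billy:/gluster/ssd/data/brick \
    vm02.storage.billy:/gluster/ssd/data/brick \
    vm03.storage.billy:/gluster/ssd/data/arbiter_brick \
    vm03.storage.billy:/gluster/ssd/data/brick \
    vm04.storage.billy:/gluster/ssd/data/brick \
    vm02.storage.billy:/gluster/ssd/data/arbiter_brick

If that reading is right, taking vm04 down leaves the second replica set with only one data brick plus its arbiter.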