[ovirt-users] oVirt split brain resolution

Denis Chaplygin dchaplyg at redhat.com
Fri Jun 23 15:05:00 UTC 2017


Hello Abi,

On Fri, Jun 23, 2017 at 4:47 PM, Abi Askushi <rightkicktech at gmail.com>
wrote:

> Hi All,
>
> I have a 3-node oVirt 4.1 setup. I lost one node due to RAID controller
> issues. Upon restoration I have the following split-brain, although the
> hosts have mounted the storage domains:
>
> gluster volume heal engine info split-brain
> Brick gluster0:/gluster/engine/brick
> /e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
> Status: Connected
> Number of entries in split-brain: 1
>
> Brick gluster1:/gluster/engine/brick
> /e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
> Status: Connected
> Number of entries in split-brain: 1
>
> Brick gluster2:/gluster/engine/brick
> /e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
> Status: Connected
> Number of entries in split-brain: 1
>
>
>
It is definitely on the gluster side. You could try:

gluster volume heal engine split-brain latest-mtime /e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
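
If latest-mtime happens to pick the wrong copy, gluster also lets you name a
source brick explicitly. A possible follow-up, assuming the copy on gluster0
is the good one (substitute whichever brick you trust):

gluster volume heal engine split-brain source-brick gluster0:/gluster/engine/brick /e1c80750-b880-495e-9609-b8bc7760d101/ha_agent

Either way, re-running the info command from your mail should then report
zero entries on each brick:

gluster volume heal engine info split-brain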


I have also added the gluster developers to this thread, so they may be able
to provide you with better advice.

