Hi Joop,
There is currently no split-brain on my gluster file systems. The
virtualization setup has two hypervisor nodes (ysmha01 and ysmha02),
but the gluster cluster has three nodes, one of which carries no
bricks (10.0.1.6 -> ysmha01, 10.0.1.7 -> ysmha02, and 10.0.1.5 with no
bricks); that brickless node only helps maintain quorum (see the note
on the quorum options right after the status output). Volume status
and split-brain checks below:
[root@ysmha01 ~]# gluster volume status engine
Status of volume: engine
Gluster process                                  Port    Online  Pid
------------------------------------------------------------------------------
Brick 10.0.1.6:/bricks/she/brick                 49152   Y       4620
NFS Server on localhost                          2049    Y       4637
Self-heal Daemon on localhost                    N/A     Y       4648
NFS Server on 10.0.1.5                           N/A     N       N/A
Self-heal Daemon on 10.0.1.5                     N/A     Y       14563

Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks
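
On the quorum point: the third peer only counts toward server-side
quorum. I'm quoting the relevant volume options from memory rather
than from my actual config, so take the exact values as an assumption,
but they are along these lines:

gluster volume set all cluster.server-quorum-ratio 51%
gluster volume set engine cluster.server-quorum-type server
gluster volume set vmstorage cluster.server-quorum-type server
gluster volume set export cluster.server-quorum-type server

With server quorum enforced, bricks on a peer are taken down when too
few peers are reachable, which is why the brickless 10.0.1.5 node is
still useful.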
[root@ysmha01 ~]# gluster volume heal engine info split-brain
Gathering list of split brain entries on volume engine has been successful
Brick 10.0.1.7:/bricks/she/brick
Number of entries: 0
Brick 10.0.1.6:/bricks/she/brick
Number of entries: 0
[root@ysmha01 ~]# gluster volume heal vmstorage info split-brain
Gathering list of split brain entries on volume vmstorage has been successful
Brick 10.0.1.7:/bricks/vmstorage/brick
Number of entries: 0
Brick 10.0.1.6:/bricks/vmstorage/brick
Number of entries: 0
[root@ysmha01 ~]# gluster volume heal export info split-brain
Gathering list of split brain entries on volume export has been successful
Brick 10.0.1.7:/bricks/hdds/brick
Number of entries: 0
Brick 10.0.1.6:/bricks/hdds/brick
Number of entries: 0
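
In case it is useful to anyone following along, the same split-brain
check can be run across every volume with a quick bash loop (a sketch,
not copied from my shell history):

for vol in $(gluster volume list); do
    echo "== $vol =="
    gluster volume heal "$vol" info split-brain
done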
Diego
On Thu, Jul 16, 2015 at 8:25 AM, Joop <jvdwege(a)xs4all.nl> wrote:
On 16-7-2015 14:20, Diego Remolina wrote:
> I have two virtualization/storage servers, ysmha01 and ysmha02 running
> Ovirt hosted engine on top of glusterfs storage. I have two Windows
> server vms called ysmad01 and ysmad02. The current problem is that
> ysmad02 will *not* start on ysmha02 any more.
>
I might have missed it, but did you check for a split-brain situation,
since you're using a 2-node gluster?
Regards,
Joop
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users