[ovirt-users] 4-node oVirt with replica-3 gluster

Davide Ferrari davide at billymob.com
Fri Sep 23 11:32:29 UTC 2016


2016-09-23 13:17 GMT+02:00 Sahina Bose <sabose at redhat.com>:

>
> What are the stability issues you're facing? The data volume, if used as
> a data storage domain, should be a replica 3 volume as well.
>

Basically, after the first host installation+deploy (from the CLI), once I
enable Gluster management in the cluster I have to manually restart vdsmd
on host1 before I can install the other hosts. But maybe I should just
wait longer for vdsmd to catch up with everything, I don't know.
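For reference, this is the manual workaround I'm using (assuming the
standard systemd unit names on an oVirt EL7 host):

    # on host1, after enabling Gluster management in the cluster
    systemctl restart vdsmd
    systemctl status vdsmd    # confirm it came back up before adding the other hosts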

Then I have another problem: a ghost VM stuck on one host after moving
that host to maintenance, even though the VM (the hosted engine, the only
one running in the whole cluster) had been correctly migrated to another
host. It was solved only by a manual reboot of the whole host (and the
consequent HE fencing of that host). I must say that that particular host
is reporting ECC correction errors on one DIMM, so it could just be a
hardware-related problem.
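In case it helps, this is roughly what I check before resorting to a full
reboot (standard hosted-engine HA services, nothing specific to my setup):

    hosted-engine --vm-status                        # where the engine VM really is
    systemctl status ovirt-ha-agent ovirt-ha-broker  # HA agent/broker health
    journalctl -u vdsmd --since "1 hour ago"         # look for the stale VM entry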


>

>> Deploy the hosted-engine on the first host (with the engine volume) from
>> the CLI, then log in to the oVirt admin portal, enable Gluster support,
>> install *and deploy* host2 and host3 from the GUI (where the engine
>> bricks are), and then install host4 without deploying. This should get
>> you all 4 hosts online, but the engine will run only on the first 3.
>>
>
> Right. You can add the 4th node to the cluster without any bricks on
> this volume, in which case VMs will run on this node but will access
> data from the other 3 nodes.
>

Well, actually I *do* have data bricks on the 4th host; it's just the
engine volume that's not present there (but that host is not HE-eligible
anyway). Am I doing something wrong?
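For completeness, this is roughly the layout I mean; hostnames and brick
paths are placeholders, and the 6-brick data layout is just one valid way
to put replica-3 bricks on all four hosts (brick counts must be a multiple
of 3):

    # engine volume: replica 3 across the first three hosts only
    gluster volume create engine replica 3 \
        host1:/gluster/engine/brick \
        host2:/gluster/engine/brick \
        host3:/gluster/engine/brick

    # data volume: distributed-replicate, bricks on all four hosts
    gluster volume create data replica 3 \
        host1:/gluster/data/brick1 host2:/gluster/data/brick1 host3:/gluster/data/brick1 \
        host2:/gluster/data/brick2 host3:/gluster/data/brick2 host4:/gluster/data/brick2

    gluster volume info data    # verify the replica sets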