On Fri, Sep 23, 2016 at 5:02 PM, Davide Ferrari <davide(a)billymob.com> wrote:
2016-09-23 13:17 GMT+02:00 Sahina Bose <sabose(a)redhat.com>:
>
> What are the stability issues you're facing? Data volume if used as a
> data storage domain should be a replica 3 volume as well.
>
Basically, after the first host installation+deploy (from the CLI), once I
enable gluster management in the cluster, I have to manually restart vdsmd
on host1 before I can install the other hosts. But maybe I should just wait
longer for vdsmd to catch up with everything, I don't know.
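For reference, a minimal sketch of that manual workaround, assuming the standard vdsmd systemd unit (the status check is just to see whether the service is still settling):

```shell
# On host1, after enabling gluster management in the cluster:
systemctl status vdsmd     # check whether vdsmd is still settling
systemctl restart vdsmd    # manual restart that unblocks installing the other hosts
```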
Once gluster management is enabled on the cluster, we have not noticed the
need to restart vdsm in order to install other hosts. There was an issue
where hosts would not be identified as gluster hosts unless they were
activated again; this will be fixed in the next 4.0 release, as the patches
have already been merged.
If you encounter the issue again, could you post the hosted-engine deploy
logs from the 2nd host?
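Assuming the default paths, the deploy logs live under the hosted-engine setup log directory:

```shell
# On the 2nd host, assuming default log locations:
ls /var/log/ovirt-hosted-engine-setup/
# the deploy run writes a timestamped ovirt-hosted-engine-setup-*.log there
```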
Then I have another problem: a ghost VM stuck on one host after moving that
host to maintenance, even though the VM (the hosted-engine, the only one
running in the whole cluster) was correctly migrated to another host. It was
solved only by a manual reboot of the whole host (and the consequent HE
fencing of the host). I must say that this particular host is reporting ECC
correction errors on one DIMM, so it could just be a hardware problem.
vdsm and engine logs would help here
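Assuming the default log locations, those would be:

```shell
# vdsm log on the affected host:
tail -n 500 /var/log/vdsm/vdsm.log
# engine log on the hosted-engine VM:
tail -n 500 /var/log/ovirt-engine/engine.log
```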
>
>> Deploy the hosted-engine on the first VM (with the engine volume) from
>> the CLI, then log in to the oVirt admin portal, enable gluster support,
>> install *and deploy* host2 and host3 from the GUI (where the engine
>> bricks are), and then install host4 without deploying. This should get
>> you the 4 hosts online, but the engine will run only on the first 3
>>
>
> Right. You can add the 4th node to the cluster, but not have any bricks
> on this volume in which case VMs will be run on this node but will access
> data from the other 3 nodes.
>
Well, actually I *do* have data bricks on the 4th host; it's just the
engine volume that's not present there (but that host is not HE eligible
anyway). Am I doing something wrong?
If you have additional capacity on the other 3 hosts, then yes, you can
create a new gluster volume with a brick on the newly added 4th node and
bricks from the other nodes - this volume can be used as another storage
domain. You are not doing anything wrong :) Keep in mind that all gluster
volumes used as data storage domains should be replica 3, or replica 3 with
arbiter, to avoid split-brain and data loss issues.
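To sketch what such a volume could look like (hostnames, volume names and brick paths below are hypothetical - adjust them to your layout):

```shell
# Plain replica 3 across the new 4th node and two existing nodes:
gluster volume create data2 replica 3 \
    host4:/gluster/bricks/data2 \
    host2:/gluster/bricks/data2 \
    host3:/gluster/bricks/data2
gluster volume start data2

# Or replica 3 with an arbiter (the arbiter brick holds only metadata,
# so it needs far less space but still prevents split-brain):
gluster volume create data3 replica 3 arbiter 1 \
    host4:/gluster/bricks/data3 \
    host2:/gluster/bricks/data3 \
    host3:/gluster/bricks/data3
gluster volume start data3
```

With `arbiter 1`, the last brick in each replica set of three is the arbiter.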