[ovirt-users] hyperconverged question

Kasturi Narra knarra at redhat.com
Thu Aug 31 07:03:03 UTC 2017


Hi,

   During Hosted Engine setup, the question about the glusterfs volume is
asked because you set up the volumes yourself. If the cockpit+gdeploy
plugin had been used, it would have automatically detected the glusterfs
replica 3 volume created during Hosted Engine deployment, and this
question would not have been asked.
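
   For reference, a manually built replica 3 volume like yours is
typically created with something along these lines (a minimal sketch;
the volume name, hostnames and brick paths are placeholders, not taken
from your setup):

    # create and start a replica 3 volume spanning the three nodes
    gluster volume create engine replica 3 \
        h1:/gluster/bricks/engine \
        h2:/gluster/bricks/engine \
        h3:/gluster/bricks/engine
    gluster volume start engine

   The cockpit+gdeploy plugin runs equivalent steps for you, which is
why the deployment can then detect the volume without asking.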

   When creating a new storage domain with glusterfs selected, there is a
feature called 'Use managed gluster volumes'. Checking it lists all the
gluster volumes managed by the engine, and you can choose the volume you
want from the dropdown list.

    There is a conf file, /etc/ovirt-hosted-engine/hosted-engine.conf,
which contains a parameter of the form backup-volfile-servers="h1:h2". If
one of the gluster nodes goes down, the engine uses this parameter to
reach the volume through the backup servers, providing HA / failover.
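
   As an illustration (h1/h2/h3 are placeholder hostnames, not taken
from your setup; in recent oVirt releases the option is typically
carried on the mnt_options line):

    # excerpt from /etc/ovirt-hosted-engine/hosted-engine.conf
    storage=h1:/engine
    mnt_options=backup-volfile-servers=h2:h3

   The same option works for a plain glusterfs fuse mount, e.g.:

    mount -t glusterfs -o backup-volfile-servers=h2:h3 h1:/engine /mnt/engine

   Note that backup-volfile-servers only affects fetching the volume
file at mount time; once mounted, the fuse client talks to all bricks
directly, so a single node going down does not take the mount down.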

 Hope this helps!

Thanks
kasturi



On Wed, Aug 30, 2017 at 8:09 PM, Charles Kozler <ckozleriii at gmail.com>
wrote:

> Hello -
>
> I have successfully created a hyperconverged hosted engine setup
> consisting of 3 nodes - 2 for VMs and the third purely for storage. I
> configured it all manually, without oVirt Node or anything, and built
> the gluster volumes myself.
>
> However, I noticed that when setting up the hosted engine, and even when
> adding a new storage domain of glusterfs type, it still asks for a single
> hostname:/volumename
>
> This leads me to believe that if that one node goes down (e.g.
> node1:/data), then the oVirt engine won't be able to communicate with
> the volume, because it is trying to reach it on node1, and the storage
> will thus go down.
>
> I know the glusterfs fuse client can connect to all nodes to provide
> failover/HA, but how does the engine handle this?
>