Hi,
From what I've learned about the way glusterfs works, the host you specify is
only used to grab the initial volume information; after that the client
connects directly to the other hosts in the volume to reach the datastore -
this avoids the bottleneck issue that NFS has.
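You can see this in the way a plain glusterfs client mount behaves - something
like the line below (the mount point is just an example, and the exact option
name can vary a little between gluster versions):

  # host1 only serves the volume layout here; the client then talks to all bricks directly
  mount -t glusterfs -o backup-volfile-servers=host2 host1:/gluster /mnt/gluster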
Knowing this, the workaround I used was to set up keepalived on the gluster
hosts (make sure you set it up on an interface other than your ovirtmgmt one,
or you'll clash with the live migration components). Now if one of my hosts
drops from the cluster, storage access is not lost. I haven't fully tested the
whole infrastructure yet, but my only fear is that the VMs may drop into
"PAUSE" mode during the keepalived failover period.
Also - you may need to change your glusterfs ports so they don't conflict with
vdsm. My post here is a little outdated, but it still has my findings on
keepalived etc.
http://www.andrewklau.com/returning-to-glusterized-ovirt-3-3/
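For the ports, what it boiled down to for me was shifting the gluster brick
ports above the range libvirt/vdsm uses for live migration (49152 and up).
Roughly, in /etc/glusterfs/glusterd.vol (the value is only an example, and
glusterd needs a restart afterwards):

  # inside the "volume management" block
  option base-port 50152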
The other thing to note is that you've only got two gluster hosts. I believe
oVirt now sets the quorum options by default, which enforce that at least 2
nodes must be alive in your configuration. This means that when only 1 gluster
server is up, you'll be able to read but not write - this is to avoid
split-brain.
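If you want to see or adjust what's been applied on your volume, it's along
these lines (the volume name is yours - and I'd think twice before relaxing
quorum on a 2 node setup):

  gluster volume info storage_gluster                                    # shows any quorum options that were set
  gluster volume set storage_gluster cluster.quorum-type auto            # client-side quorum
  gluster volume set storage_gluster cluster.server-quorum-type server   # server-side quorum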
Thanks,
Andrew
On Thu, Dec 19, 2013 at 5:12 AM, <gregoire.leroy(a)retenodus.net> wrote:
Hello,
As I said in a previous email, I have this configuration with Ovirt 3.3:
1 Ovirt Engine
2 Hosts Centos 6.5
I successfully set up GlusterFS. I created a distributed replicated volume
with 2 bricks: host1:/gluster and host2:/gluster.
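For reference, the volume was created with something like this (the volume
name here is illustrative):

  gluster volume create gluster_vol replica 2 host1:/gluster host2:/gluster
  gluster volume start gluster_vol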
Then I created a POSIXFS storage domain, storage_gluster, with the option
glusterfs, and I gave the path "host1:/gluster".
First, I'm rather surprised that I have to specify a single host for the
storage, as I want distributed replicated storage. I expected to specify both
hosts.
Then I created a VM on this storage. The expected behaviour if I shut down
host1 should be that my VM keeps running on the second brick. Yet not only do
I lose my VM, but host2 goes into a non-operational status because one of its
data storage domains is not reachable.
Did I miss something in the configuration? How can I get the desired
behaviour?
Thanks a lot,
Regards,
Grégoire Leroy