[ovirt-users] ovirt and glusterfs setup

Darrell Budic budic at onholyground.com
Wed Feb 18 16:51:51 EST 2015


For a somewhat dissenting opinion: I’m running a (currently) 2-node combined gluster/compute cluster with a self-hosted engine in production, plus a second cluster of 3 hosts that shares the gluster storage from the first just fine, and a 3-node dev system. On top of ZFS, even :)

That said, I’ve broken it several times and had interesting experiences fixing it. There are also 2 bugs out there that I’d consider blockers for production use of combined gluster server/host nodes with oVirt. In particular, Bug 1172905 <https://bugzilla.redhat.com/show_bug.cgi?id=1172905> (gluster VMs pause when vdsmd is restarted) in combination with Bug 1158108 <https://bugzilla.redhat.com/show_bug.cgi?id=1158108> (vdsm leaks small amounts of memory) means you have to be careful not to run out of RAM on a gluster server/host node, or you pause some VMs and have to restart them to recover. I was already running this config when the problem surfaced in 3.5, so I’m limping along, but I wouldn’t set up this config or use gluster in a new deployment right now. Side note: afaict this doesn’t freeze VMs mounted via NFS from a gluster server, so you can do that instead (and I’m working on migrating to it). I also currently have fencing disabled because it can be ugly on a gluster system. The new arguments to prevent fencing when part of the cluster is down should work around this; I’m just waiting until my 3rd gluster node is online before turning it back on.
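
If you do run converged nodes in the meantime, it’s worth watching vdsm’s memory so the leak doesn’t catch you by surprise. A rough sketch of the kind of watchdog I mean (the pgrep pattern and the 2 GB threshold are placeholders of mine, not anything oVirt ships):

    #!/bin/sh
    # Log a warning once vdsm's resident memory passes an arbitrary 2 GB,
    # so the restart can happen in a maintenance window rather than via OOM.
    # Assumes the daemon matches 'pgrep -of vdsm'; adjust for your install.
    RSS_KB=$(ps -o rss= -p "$(pgrep -of vdsm)" 2>/dev/null)
    if [ "${RSS_KB:-0}" -gt 2097152 ]; then
        logger -t vdsm-watch "vdsm RSS is ${RSS_KB} kB, plan a restart soon"
    fi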

oVirt won’t help you understand the best way to deploy your gluster bricks either. For instance, your hosted engine should be on a volume of its own, with n-way replication across all of your hosted engine servers. Your VMs should probably be on a distributed-replicate volume, or the newly available dispersed (erasure-coded) mode, to get the benefit of multiple servers without having to write to every brick all the time (unless your use case really demands 4 copies of your data for redundancy). Allocate extra RAM for caching, too; it helps a lot. Proper setup of the server name, and the use of localhost mounts, CTDB, or keepalived (and an understanding of why you want them), is important too.
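
To make that concrete, here’s roughly what such a layout could look like from the gluster CLI. Hostnames, brick paths, and volume names below are made up, and the ‘virt’ option group and the backup-volfile-servers mount option depend on your gluster version, so treat this as a sketch, not a recipe:

    # Dedicated replica-3 volume for the hosted engine:
    gluster volume create engine replica 3 \
        gs1:/bricks/engine gs2:/bricks/engine gs3:/bricks/engine
    gluster volume start engine

    # Distributed-replicate volume for VM images (2x3 bricks), so each write
    # only touches one replica set instead of every brick in the volume:
    gluster volume create vmstore replica 3 \
        gs1:/bricks/vm1 gs2:/bricks/vm1 gs3:/bricks/vm1 \
        gs1:/bricks/vm2 gs2:/bricks/vm2 gs3:/bricks/vm2
    gluster volume set vmstore group virt    # virt tuning group, if your packages ship it
    gluster volume start vmstore

    # Client mount that can still fetch the volfile when gs1 is down:
    mount -t glusterfs -o backup-volfile-servers=gs2:gs3 gs1:/vmstore /mnt/vmstore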

Bottom line: Gluster is complex no matter how easy the oVirt interface makes it look. If you aren’t prepared to get down and dirty with your network file system, I wouldn’t recommend this setup. If you are, it’s good stuff, and I’m really looking forward to libgfapi integration in oVirt beyond the dev builds.
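
For a feel for what “down and dirty” looks like day to day, these are the kinds of read-only health checks you end up living in (the volume name is a placeholder):

    gluster peer status                    # are all pool members connected?
    gluster volume status vmstore          # brick processes, ports, self-heal daemon
    gluster volume heal vmstore info       # files pending heal; anything listed here
                                           #   means think twice before fencing a node
    gluster volume heal vmstore info split-brain   # entries here need manual repair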

  -Darrell

> On Feb 18, 2015, at 3:06 PM, Donny D <donny at cloudspin.me> wrote:
> 
> 
> What he said
> 
> 
> Happy Connecting. Sent from my Sprint Samsung Galaxy S® 5
> 
> 
> -------- Original message --------
> From: Scott Worthington <scott.c.worthington at gmail.com> 
> Date: 02/18/2015 2:03 PM (GMT-07:00) 
> To: Donny D <donny at cloudspin.me> 
> Subject: Re: [ovirt-users] ovirt and glusterfs setup 
> 
> > I did not have a good experience putting both gluster and virt on the same
> > node. I was doing hosted engine with replication across two nodes, and one
> > day it went into split-brain hell... I was never able to track down why.
> > However, I do have a gluster setup with distribute and replica on its own
> > with a couple of nodes, and it has given me zero problems in the last 60
> > days. It seems to me that gluster and virt need to stay separate for now.
> > Both are great products and both work as described, just not on the same
> > node at the same time.
> >
> 
> The issue, as I perceive it, is that newbies find Jason Brooks' blog post:
>   http://community.redhat.com/blog/2014/11/up-and-running-with-ovirt-3-5-part-two/
> 
> And then newbies assume this Red Hat blog post is production quality.  In my
> opinion, the how-to is okay for a lab (barely, IMHO), but not for
> production.
> 
> Since fencing is important in oVirt, having gluster on the hosts is a no-no:
> a non-responsive host could be fenced at any time -- and the engine could
> fence multiple hosts, borking a locally hosted gluster file system and then
> taking down the entire gluster cluster.
> 
> --ScottW
