[Users] Asking for advice on hosted engine

Ted Miller tmiller at hcjb.org
Tue Feb 18 22:25:11 UTC 2014


On 2/17/2014 4:20 AM, Giorgio Bersano wrote:
> Hello everybody,
> I discovered oVirt a couple of months ago when I was looking for the
> best way to manage our small infrastructure. I have read every document
> I considered useful, but I would like to receive advice from the many
> experts on this list.
>
> I think it is worth an introduction (I hope it doesn't bore you).
>
> I work in a small local government entity and I try to manage our
> limited resources effectively.
> We have many years of experience with Linux, and especially with CentOS,
> which we have deployed on PCs (e.g. as firewalls in remote locations)
> and, above all, on servers.
>
> We have been using Xen virtualization since the early days of CentOS 5,
> and we have built up positive experience with KVM too.
> I have to say that in a small environment like ours libvirt is really
> a nice tool.
> So, nothing to regret.
>
> Trying to go a little further, as I already said, I stumbled upon oVirt
> and found the project intriguing.
>
> At the moment we are thinking of deploying it in a small environment
> of four very similar servers, each having:
> - two Xeon E5504 CPUs
> - 6 x 1Gb Ethernet interfaces
> - 40 GB of RAM
> Two of them have 72 GB of disk (mirrored);
> two of them have almost 500 GB of usable RAID capacity.
>
> Moreover, we have an HP iSCSI storage array that should easily satisfy
> our current storage requirements.
>
> So, given our small server pool, dedicating yet another host just to
> run the manager (engine) seems too steep a requirement.
>
> Enter "hosted engine" and the picture takes brighter colors. Well, I'm
> usually not the adventurous guy but after experimenting a little with
> oVirt 3.4 I developed better confidence.
> We would want to install the engine over the two hosts with smaller disks.
>
> As far as I know, installing the hosted engine mandates NFS storage. But we
> want this to be highly available too, and if possible to have it on the
> very same hosts.
>
> Here is my solution: make a Gluster replicated volume across the two
> hosts and take advantage of Gluster's built-in NFS server.
> Then I give 127.0.0.1 as the address of the NFS server in
> hosted-engine-setup, so the host is always able to reach the storage
> server (itself).
> The GlusterFS configuration is done outside of oVirt, which, as far as
> the engine's storage is concerned, doesn't even know it is a Gluster volume.
>
> Relax, we've finally reached the point where I'm asking for advice :-)
>
> Storage and virtualization experts, do you see in this configuration
> any pitfalls that I've overlooked, given my inexperience with oVirt,
> Gluster, NFS and clustered filesystems?
> Do you think it is not only feasible (I know it is; I built it and
> it's working now) but also reliable and dependable, so that I'm not
> risking my neck on this setup?
>
> I've obviously run some tests, but I'm not yet confident enough to say
> that the design is entirely sound.
>
> OK, I think I've already written too much; better to stop here and humbly
> wait for your opinions. I'm of course available if any clarification on
> my part is needed.
>
> Thank you very much for reading until this point.
> Best Regards,
> Giorgio.
Giorgio,

Gluster on only two hosts is not a good idea.  When installed for high 
reliability (quorum enabled), Gluster requires that more than 50% of the nodes 
be working before anything can be written.  When you have only two nodes, that 
means both nodes must be up before anything can happen.
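
For reference, these are roughly the volume options involved (the volume name 
"engine-vol" and the 51% ratio are just examples, not taken from your setup):

    # client-side quorum: writes fail when too few replica bricks are reachable
    gluster volume set engine-vol cluster.quorum-type auto
    # server-side quorum: bricks are stopped when the pool falls below the ratio
    gluster volume set engine-vol cluster.server-quorum-type server
    # the ratio counts peers in the pool; 51% of a two-node pool means both nodes
    gluster volume set all cluster.server-quorum-ratio 51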

You can turn quorum off, but then you are almost guaranteeing yourself a 
split-brain headache the first time communication between the two hosts is 
interrupted, even briefly (been there, done that).  oVirt is constantly 
writing to the storage, so if the hosts are not communicating you WILL get 
different things written to the same files on the two servers, especially the 
sanlock files.  This is called split-brain, and it will give you a splitting 
headache.
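
If you do end up there, Gluster can at least show you which files have 
diverged (same example volume name as above):

    # list files whose replicas disagree and need manual resolution
    gluster volume heal engine-vol info split-brain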

For replicated Gluster to work well, you need a minimum of three Gluster 
nodes in replica mode.  Two nodes are a recipe for unhappiness: either 
low availability (quorum on) or a split-brain waiting to spring on you 
(quorum off).  You don't want either one.

Figure out how to use some storage on a third computer to provide a third 
Gluster node.  That way only two of the three nodes have to be up for the 
volume to keep working.
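
As a rough sketch, with hypothetical hostnames and brick paths, the third node 
could be folded into your existing volume something like this:

    # add the third box to the trusted pool
    gluster peer probe third-host
    # grow the volume from replica 2 to replica 3 with a brick on the new node
    gluster volume add-brick engine-vol replica 3 third-host:/export/engine-brick
    # let self-heal copy the existing data onto the new brick
    gluster volume heal engine-vol full

After that, losing any single node still leaves a majority, so the engine 
storage stays writable.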

Ted Miller
Elkhart, IN


