[ovirt-users] oVirt/gluster storage questions for 2-3 node datacenter

David King david at rexden.us
Fri Aug 29 14:04:14 UTC 2014


Paul,

Thanks for the response.

You mention that the issue is orphaned files during updates when one node
is down.  However, I am less concerned about adding and removing files,
because the storage will hold predominantly VM disks, so the file structure
is fairly static.  Those VM disk files will be quite active, however - will
Gluster be able to keep track of partial updates to a large file when one
out of two bricks is down?
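
For what it's worth, my assumption is that after a failed brick comes back
I would be watching the pending self-heal state with something along these
lines, where "vmstore" is just a placeholder volume name:

    gluster volume heal vmstore info
    gluster volume heal vmstore info split-brain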

Right now I am leaning towards using the SSDs for "host local" disk
(single-brick Gluster volumes intended for VMs that are node-specific), and
3-way replicas for the higher-availability storage, which tends to be more
read-oriented.   I presume that read-only access only needs to fetch data
from one of the 3 replicas, so that should be reasonably performant.
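
Roughly what I have in mind, with hostnames and brick paths as placeholders
(and I may be off on the exact syntax):

    # single-brick volume pinned to one host's local SSD
    gluster volume create vm-local-intel1 intel1:/gluster/ssd/brick1

    # 3-way replica across all three hosts for the more available VMs
    gluster volume create vm-ha replica 3 intel1:/gluster/sata/brick1 \
        intel2:/gluster/sata/brick1 amd1:/gluster/sata/brick1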

Thanks,
David



On Thu, Aug 28, 2014 at 6:13 PM, Paul Robert Marino <prmarino1 at gmail.com>
wrote:

> I'll try to answer some of these.
> 1) It's not a serious problem per se. The issue is that if one node goes
> down and you delete a file while that node is down, the file will be
> restored when it comes back, which may leave orphaned files. With 3
> servers, they will use quorum to figure out what needs to be restored or
> deleted. Furthermore, your read and write performance may suffer,
> especially in comparison to having 1 replica of the file with striping.
>
> 2) See answer 1, and just create the volume with 1 replica, including
> only the URIs for bricks on two of the hosts when you create it.
>
> 3) I think so, but I have never tried it; you just have to define it as a
> local storage domain.
>
> 4) Well, that's a philosophical question. You can in theory have two
> hosted engines on separate VMs on two separate physical boxes, but if for
> any reason they both go down you will "be living in interesting times" (as
> in the Chinese curse).
>
> 5) YES! And have more than one.
>
> -- Sent from my HP Pre3
>
> ------------------------------
> On Aug 28, 2014 9:39 AM, David King <david at rexden.us> wrote:
>
> Hi,
>
> I am currently testing oVirt 3.4.3 + gluster 3.5.2 for use in my
> relatively small home office environment, currently on a single host.  I
> have 2 Intel hosts with SSD and magnetic disk, and one AMD host with only
> magnetic disk.  I have been trying to figure out the best way to configure
> my environment, given that my previous attempt with oVirt 3.3 ran into
> storage issues.
>
> I will be hosting two types of VMs: VMs that can be tied to a particular
> system (such as a 3-node FreeIPA domain or some test VMs), and VMs that
> could migrate between systems for improved uptime.
>
> The processor issue seems straightforward: have a single datacenter with
> two clusters, one for the Intel systems and one for the AMD systems.  Put
> VMs that need to live-migrate on the Intel cluster.  If necessary, VMs can
> be manually switched between the Intel and AMD clusters with some downtime.
>
> The Gluster side of the storage seems less clear.  The bulk of the
> Gluster-with-oVirt issues I have experienced and seen on the list seem to
> involve two-node setups with 2 bricks in the Gluster volume.
>
> So here are my questions:
>
> 1) Should I avoid 2 brick Gluster volumes?
>
> 2) What is the risk in having the SSD volumes with only 2 bricks given
> that there would be 3 gluster servers?  How should I configure them?
>
> 3) Is there a way to use local storage for a host-locked VM other than
> creating a gluster volume with one brick?
>
> 4) Should I avoid using the hosted engine configuration?  I do have an
> external VMware ESXi system to host the engine for now but would like to
> phase it out eventually.
>
> 5) If I do the hosted engine, should I make the underlying gluster volume
> a 3-brick replica?
>
> Thanks in advance for any help you can provide.
>
> -David
>