[ovirt-users] moving storage away from a single point of failure

Donny Davis donny at cloudspin.me
Thu Sep 24 23:57:40 UTC 2015


Gluster is pretty stable; you shouldn't have any issues. It works best
when there are more than 2 or 3 nodes, though.

What hardware do you have?
On Sep 24, 2015 3:44 PM, "Michael Kleinpaste" <
michael.kleinpaste at sharperlending.com> wrote:

> I thought I had read that Gluster had corrected this behavior.  That's
> disappointing.
>
> On Tue, Sep 22, 2015 at 4:18 AM Alastair Neil <ajneil.tech at gmail.com>
> wrote:
>
>> My own experience with gluster for VMs is that it is just fine until you
>> need to bring down a node and need the VMs to stay live.  I have a
>> replica 3 gluster server and, while the VMs are fine while the node is
>> down, when it is brought back up gluster attempts to heal the files on
>> the downed node, and the ensuing i/o freezes the VMs until the heal is
>> complete; with many VMs on a storage volume that can take hours.  I have
>> migrated all my critical VMs back onto NFS.  There are changes coming
>> soon in gluster that will hopefully mitigate this (better granularity in
>> the data heals, i/o throttling during heals, etc.), but for now I am
>> keeping most of my VMs on NFS.
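>>
>> (If you want to watch a heal in progress, something along these lines
>> works; <VOLNAME> is a placeholder:
>>
>>   gluster volume heal <VOLNAME> info
>>   gluster volume heal <VOLNAME> statistics heal-count
>>
>> The granularity work I mentioned is, as far as I know, the sharding
>> feature (features.shard) coming in gluster 3.7, which heals fixed-size
>> shards instead of whole image files.)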
>>
>> The alternative is to set the quorum so that the VM volume goes read-only
>> when a node goes down.  This may seem mad, but at least your VMs are
>> frozen only while a node is down, and not for hours afterwards.
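>>
>> Roughly, the quorum knobs I mean are these (gluster CLI; <VOLNAME> is a
>> placeholder):
>>
>>   # client-side quorum: writes require a majority of the replicas;
>>   # with replica 2, the volume goes read-only when one node is down
>>   gluster volume set <VOLNAME> cluster.quorum-type auto
>>
>>   # server-side quorum: glusterd kills the bricks on a node that
>>   # falls out of quorum with the rest of the pool
>>   gluster volume set <VOLNAME> cluster.server-quorum-type server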
>>
>>
>>
>> On 22 September 2015 at 05:32, Daniel Helgenberger <
>> daniel.helgenberger at m-box.de> wrote:
>>
>>>
>>>
>>> On 18.09.2015 23:04, Robert Story wrote:
>>> > Hi,
>>>
>>> Hello Robert,
>>>
>>> >
>>> > I'm running oVirt 3.5 in our lab, and currently I'm using NFS to a
>>> > single server. I'd like to move away from having a single point of
>>> > failure.
>>>
>>> In this case, have a look at iSCSI or FC storage. If you have redundant
>>> controllers and switches, the setup should be reliable enough.
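>>>
>>> By redundant I mean multipathed iSCSI/FC on every host (dm-multipath).
>>> As a sketch only, a minimal /etc/multipath.conf could look like the
>>> following; note that vdsm generates and manages that file on oVirt
>>> hosts, so do not hand-edit it there:
>>>
>>>   defaults {
>>>       user_friendly_names yes
>>>       find_multipaths yes
>>>   }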
>>>
>>> > Watching the mailing list, all the issues with gluster getting out
>>> > of sync and replica issues have me nervous about gluster, plus I
>>> > just have 2 machines with lots of drive bays for storage.
>>>
>>> Still, I would stick with gluster if you want replicated storage:
>>>  - It is supported out of the box, and you get active support from
>>> lots of users here
>>>  - Replica 3 will solve most out-of-sync cases (sketched below)
>>>  - I dare say other replicated storage backends suffer from the same
>>> issues; this is by design.
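>>>
>>> As a sketch (host and brick paths made up), a replica 3 volume for VM
>>> images is just:
>>>
>>>   gluster volume create vmstore replica 3 \
>>>       gl01:/bricks/vmstore gl02:/bricks/vmstore gl03:/bricks/vmstore
>>>   gluster volume start vmstore
>>>   # apply the VM-store tuning profile that ships with gluster
>>>   gluster volume set vmstore group virt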
>>>
>>> Two things you should keep in mind when running gluster in production:
>>>  - Do not run compute and storage on the same hosts
>>>  - Do not (yet) use Gluster as storage for Hosted Engine
>>>
>>> > I've been reading about GFS2 and DRBD, and wanted opinions on
>>> > whether either is a good/bad idea, or to see if there are other
>>> > alternatives.
>>> >
>>> > My oVirt setup is currently 5 nodes and about 25 VMs; it might
>>> > double in size eventually, but probably won't get much bigger than
>>> > that.
>>>
>>> In the end, it is quite easy to migrate storage domains. If you are
>>> satisfied with your lab setup, put it into production, add new storage
>>> later, and move the disks. Afterwards, remove the old storage domains.
>>>
>>> My two cents on gluster: it has been running quite stably for some
>>> time now, as long as you do not touch it. I have never had issues when
>>> adding bricks, though removing and replacing them can be very tricky.
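>>>
>>> (For reference, the operations I mean are roughly these; exact syntax
>>> depends on your gluster version and replica count:
>>>
>>>   gluster volume remove-brick <VOLNAME> <BRICK> start
>>>   gluster volume remove-brick <VOLNAME> <BRICK> status
>>>   gluster volume remove-brick <VOLNAME> <BRICK> commit
>>>   gluster volume replace-brick <VOLNAME> <OLD> <NEW> commit force
>>>
>>> and the tricky part is making sure heals finish in between.)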
>>>
>>> HTH,
>>>
>>> >
>>> >
>>> > Thanks,
>>> >
>>> > Robert
>>> >
>>>
>>> --
>>> Daniel Helgenberger
>>> m box bewegtbild GmbH
>>>
>>> P: +49/30/2408781-22
>>> F: +49/30/2408781-10
>>>
>>> ACKERSTR. 19
>>> D-10115 BERLIN
>>>
>>>
>>> www.m-box.de  www.monkeymen.tv
>>>
>>> Geschäftsführer: Martin Retschitzegger / Michaela Göllner
>>> Handelsregister: Amtsgericht Charlottenburg / HRB 112767
>>>
>>
> --
> *Michael Kleinpaste*
> Senior Systems Administrator
> SharperLending, LLC.
> www.SharperLending.com
> Michael.Kleinpaste at SharperLending.com
> (509) 324-1230   Fax: (509) 324-1234
>
>
>

