<div dir="ltr">I thought I had read that Gluster had corrected this behavior. That's disappointing.<div><div><br><div class="gmail_quote"><div dir="ltr">On Tue, Sep 22, 2015 at 4:18 AM Alastair Neil <<a href="mailto:ajneil.tech@gmail.com">ajneil.tech@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">My own experience with gluster for VMs is that it is just fine until you need to bring down a node and need the VMs to stay live. I have a replica 3 gluster setup and, while the VMs are fine while a node is down, once it is brought back up gluster attempts to heal the files on the downed node, and the ensuing I/O freezes the VMs until the heal is complete; with many VMs on a storage volume that can take hours. I have migrated all my critical VMs back onto NFS. There are changes coming soon in gluster that will hopefully mitigate this (better granularity in the data heals, I/O throttling during heals, etc.), but for now I am keeping most of my VMs on NFS.<div><br></div><div>The alternative is to set the quorum so that the VM volume goes read-only when a node goes down. This may seem mad, but at least your VMs are frozen only while a node is down and not for hours afterwards.</div><div><br></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On 22 September 2015 at 05:32, Daniel Helgenberger <span dir="ltr"><<a href="mailto:daniel.helgenberger@m-box.de" target="_blank">daniel.helgenberger@m-box.de</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
<br>
On 18.09.2015 23:04, Robert Story wrote:<br>
> Hi,<br>
<br>
Hello Robert,<br>
<span><br>
><br>
> I'm running oVirt 3.5 in our lab, and currently I'm using NFS to a single<br>
> server. I'd like to move away from having a single point of failure.<br>
<br>
</span>In this case, have a look at iSCSI or FC storage. If you have redundant controllers and switches,<br>
the setup should be reliable enough.<br>
<span><br>
> Watching the mailing list, all the issues with gluster getting out of sync<br>
> and replica issues has me nervous about gluster, plus I just have 2<br>
> machines with lots of drive bays for storage.<br>
<br>
</span>Still, I would stick with gluster if you want replicated storage:<br>
- It is supported out of the box, and you get active support from lots of users here<br>
- Replica 3 will solve most out-of-sync cases<br>
- I dare say other replicated storage backends suffer from the same issues; this is by design.<br>
<br>
Two things you should keep in mind when running gluster in production:<br>
- Do not run compute and storage on the same hosts<br>
- Do not (yet) use Gluster as storage for Hosted Engine<br>
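As a rough sketch of the replica 3 setup recommended above (not taken from this thread; the volume name, hosts, and brick paths are placeholders), the gluster CLI side might look like this:

```shell
# Create a replica 3 volume across three hypothetical storage hosts.
gluster volume create vmstore replica 3 \
    gfs1:/bricks/vmstore gfs2:/bricks/vmstore gfs3:/bricks/vmstore

# Client-side quorum: writes need a majority of bricks, so the volume
# degrades rather than split-braining when one node drops out.
gluster volume set vmstore cluster.quorum-type auto

# Server-side quorum over the trusted pool.
gluster volume set vmstore cluster.server-quorum-type server

gluster volume start vmstore
```

With quorum enforced this way, losing one node out of three keeps the volume writable, while losing two makes it unavailable instead of silently diverging.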
<span><br>
> I've been reading about GFS2<br>
> and DRBD, and wanted opinions on if either is a good/bad idea, or to see if<br>
> there are other alternatives.<br>
><br>
> My oVirt setup is currently 5 nodes and about 25 VMs, might double in size<br>
> eventually, but probably won't get much bigger than that.<br>
<br>
</span>In the end, it is quite easy to migrate storage domains. If you are satisfied with your lab<br>
setup, put it into production, add the new storage later, and move the disks. Afterwards, remove the old<br>
storage domains.<br>
<br>
My two cents on gluster: it has been running quite stably for some time now, as long as you do not touch it.<br>
I have never had issues when adding bricks, though removing and replacing them can be very tricky.<br>
<br>
HTH,<br>
<br>
><br>
><br>
> Thanks,<br>
><br>
> Robert<br>
><br>
<br>
--<br>
Daniel Helgenberger<br>
m box bewegtbild GmbH<br>
<br>
P: +49/30/2408781-22<br>
F: +49/30/2408781-10<br>
<br>
ACKERSTR. 19<br>
D-10115 BERLIN<br>
<br>
<br>
<a href="http://www.m-box.de" rel="noreferrer" target="_blank">www.m-box.de</a> <a href="http://www.monkeymen.tv" rel="noreferrer" target="_blank">www.monkeymen.tv</a><br>
<br>
Geschäftsführer: Martin Retschitzegger / Michaela Göllner<br>
Handelsregister: Amtsgericht Charlottenburg / HRB 112767<br>
_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a><br>
<a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a><br>
</blockquote></div><br></div>
</blockquote></div></div></div></div><div dir="ltr">-- <br></div><div dir="ltr"><b>Michael Kleinpaste</b><br><span>Senior Systems Administrator</span><br><span>SharperLending, LLC.</span><br><a>www.SharperLending.com</a><br><span>Michael.Kleinpaste@SharperLending.com</span><br><span>(509) 324-1230 Fax: (509) 324-1234</span></div>