[Users] Opinions needed: 3 node gluster replica 3 | NFS async | snapshots for consistency

Steve Dainard sdainard at miovision.com
Sun Feb 23 18:50:10 UTC 2014


On Sun, Feb 23, 2014 at 4:27 AM, Ayal Baron <abaron at redhat.com> wrote:

>
>
> ----- Original Message -----
> > I'm looking for some opinions on this configuration in an effort to
> increase
> > write performance:
> >
> > 3 storage nodes using glusterfs in replica 3, quorum.
>
> gluster doesn't support replica 3 yet, so I'm not sure how heavily I'd
> rely on this.
>

Is it GlusterFS or RHSS that doesn't support replica 3? How would I create a
quorum without 3+ hosts?
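
For context, the quorum setup I have in mind looks roughly like this (the
volume name 'datastore' and the brick paths are just placeholders, and I'm
assuming the stock GlusterFS 3.4 option names):

    gluster volume create datastore replica 3 \
        node1:/bricks/datastore node2:/bricks/datastore node3:/bricks/datastore
    # client-side quorum: writes fail unless a majority of bricks are reachable
    gluster volume set datastore cluster.quorum-type auto
    # server-side quorum: bricks go offline if the node loses peer quorum
    gluster volume set datastore cluster.server-quorum-type server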


>
> > Ovirt storage domain via NFS
>
> why NFS and not gluster?
>

Gluster via a POSIX SD doesn't show any performance gain over NFS; if
anything, the opposite.

Gluster 'native' SDs are broken on EL6.5, so I have been unable to test their
performance. I have heard raw write performance can be upwards of 3x NFS.

Gluster doesn't have an async write option, so it's doubtful it will ever come
close to NFS async speeds.


>
> > Volume set nfs.trusted-sync on
> > On Ovirt, taking snapshots often enough to recover from a storage crash
>
> Note that this would have negative write performance impact
>

The difference between NFS sync (<50MB/s) and async (>300MB/s on 10GbE) write
speeds should more than compensate for the performance hit of taking
snapshots more often. And that's just raw throughput. If we take IOPS into
consideration (guest small writes), async is leaps and bounds ahead.
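
For anyone following along, the async behaviour comes from the per-volume
Gluster NFS option quoted above, and the raw numbers I'm quoting are from a
simple large sequential write; something along these lines (volume name,
mount point and sizes are placeholders, not my exact test):

    gluster volume set datastore nfs.trusted-sync on
    # crude sequential write test against the mounted storage domain
    dd if=/dev/zero of=/mnt/datastore/test.img bs=1M count=4096 conv=fdatasync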


If we assume the site has backup UPS and generator power and we can build a
highly available storage system with 3 nodes in quorum, are there any
potential issues other than a write performance hit?

The issue I thought would be most likely: if an oVirt host goes down and its
VMs are automatically brought back up on another host, they could incur disk
corruption and would need to be brought back down and restored to the last
snapshot state. This basically means the HA feature should be disabled.

Even worse, if the gluster node holding the CTDB NFS IP goes down, it may not
have written out its cached writes and replicated them to its peers.  <-- I
think I may have just answered my own question.
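
(For what it's worth, my plan after such a failover would be to check the
pending self-heal state before trusting the data, something like:

    gluster volume heal datastore info

again with 'datastore' as a placeholder volume name.)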


Thanks,
Steve

