On Wed, Apr 19, 2017 at 5:07 PM, Bryan Sockel <Bryan.Sockel(a)altn.com> wrote:
Thank you for the information. I did check my servers this morning. In
total I have four servers configured as part of my oVirt deployment: two
virtualization servers and two Gluster servers, with one of the
virtualization hosts acting as the arbiter for my Gluster replicated storage.
From what I can see on my two dedicated Gluster boxes, traffic goes
out over multiple links. On both of my virtualization hosts, all traffic
goes out via em1, with no traffic over the other interfaces. All four
interfaces are configured in a single 802.3ad bond on both hosts, with my
logical networks attached to the bond.
The balancing is based on a hash of either L2+L3 or L3+L4. It may well be
that both flows end up with the same hash and therefore go through the same link.
Y.
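To illustrate the point, here is a simplified model of an L3+L4 transmit-hash policy (the real kernel `xmit_hash_policy` differs in detail; the addresses and ports below are hypothetical):

```python
# Simplified sketch of an L3+L4 bond transmit-hash policy:
# XOR the source/destination IPs and ports, then take the result
# modulo the number of slave links. The Linux kernel's actual hash
# differs in detail, but the flow-stickiness behavior is the same.
import ipaddress

def xmit_hash_l3l4(src_ip, dst_ip, src_port, dst_port, n_links):
    h = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    h ^= src_port ^ dst_port
    return h % n_links

# Two distinct flows between the same pair of hosts can still collide
# on the same physical link:
a = xmit_hash_l3l4("10.0.0.5", "10.0.0.20", 1000, 2049, 4)
b = xmit_hash_l3l4("10.0.0.5", "10.0.0.20", 1004, 2049, 4)
print(a, b)  # source ports differing by a multiple of 4 -> same link
```

A single long-lived connection always produces the same tuple, hence the same hash, hence the same link — which is exactly why one NFS mount sees no benefit from the extra bond members.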
-----Original Message-----
From: Yaniv Kaul <ykaul(a)redhat.com>
To: Bryan Sockel <Bryan.Sockel(a)altn.com>
Cc: users <users(a)ovirt.org>
Date: Wed, 19 Apr 2017 10:41:40 +0300
Subject: Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?
On Tue, Apr 18, 2017 at 9:57 PM, Bryan Sockel <Bryan.Sockel(a)altn.com>
wrote:
>
> Was reading over this post to the group about storage options. I am more
> of a Windows guy as opposed to a Linux guy, but am learning quickly and had
> a question. You said that LACP will not provide extra bandwidth
> (especially with NFS). Does the same hold true for GlusterFS? We are
> currently using GlusterFS for the file replication piece. Does GlusterFS
> take advantage of any multipathing?
>
> Thanks
>
>
I'd expect Gluster to take advantage of LACP, as it replicates to
multiple peers (as opposed to NFS). See [1].
Y.
[1]
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Network%20Configurations%20Techniques/
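The reason Gluster can spread load where NFS cannot: replication opens a separate TCP connection to each brick, so each flow has a distinct destination IP and usually lands on a different bond member under an L3+L4 policy. A rough sketch, using the same simplified hash model as above (peer addresses are hypothetical):

```python
# Sketch: each Gluster replica peer is a distinct (dst IP, port) tuple,
# so an L3+L4 bond hash can spread replication flows across slave links.
# Peer addresses are hypothetical; the hash is a simplified model of
# the kernel policy, not the exact implementation.
import ipaddress

def link_for_flow(src_ip, dst_ip, src_port, dst_port, n_links):
    h = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    h ^= src_port ^ dst_port
    return h % n_links

peers = ["192.168.1.11", "192.168.1.12", "192.168.1.13"]
links = {p: link_for_flow("192.168.1.10", p, 49152, 24007, 4) for p in peers}
print(links)  # distinct peers tend to map to distinct links
```

With a single NFS server there is only one destination tuple, so every byte rides one link; with three replica peers the flows can fan out.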
>
>
> -----Original Message-----
> From: Yaniv Kaul <ykaul(a)redhat.com>
> To: Charles Tassell <ctassell(a)gmail.com>
> Cc: users <users(a)ovirt.org>
> Date: Sun, 26 Mar 2017 10:40:00 +0300
> Subject: Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?
>
>
>
> On Sat, Mar 25, 2017 at 9:20 AM, Charles Tassell <ctassell(a)gmail.com>
> wrote:
>>
>> Hi Everyone,
>>
>> I'm about to setup an oVirt cluster with two hosts hitting a Linux
>> storage server. Since the Linux box can provide the storage in pretty much
>> any form, I'm wondering which option is "best." Our primary focus is on
>> reliability, with performance being a close second. Since we will only be
>> using a single storage server I was thinking NFS would probably beat out
>> GlusterFS, and that NFSv4 would be a better choice than NFSv3. I had
>> assumed that iSCSI would be better performance-wise, but from what I'm
>> seeing online that might not be the case.
>
>
> NFS 4.2 is better than NFS 3 in the sense that you'll get DISCARD
> support, which is nice.
> Gluster probably requires 3 servers.
> In most cases, I don't think people see a difference in performance
> between NFS and iSCSI. The theory is that block storage is faster, but in
> practice, most don't reach the limits where it really matters.
>
>
>>
>> Our servers will be using a 1G network backbone for regular traffic
>> and a dedicated 10G backbone with LACP for redundancy and extra bandwidth
>> for storage traffic if that makes a difference.
>
>
> LACP often (especially with NFS) does not provide extra bandwidth, as
> the (single) NFS connection tends to be sticky to a single physical link.
> It's one of the reasons I personally prefer iSCSI with multipathing.
>
>
>>
>> I'll probably try to do some performance benchmarks with 2-3 options,
>> but the reliability issue is a little harder to test for. Has anyone had
>> any particularly bad experiences with a particular storage option? We have
>> been using iSCSI with a Dell MD3x00 SAN and have run into a bunch of issues
>> with the multipath setup, but that won't be a problem with the new SAN
>> since it's only got a single controller interface.
>
>
> A single controller is not very reliable. If reliability is your primary
> concern, I suggest ensuring there is no single point of failure - or at
> least being aware of all of them (does the storage server have a redundant
> power supply? Connected to two power sources? Of course, in some scenarios
> it's overkill and perhaps not practical, but you should be aware of your
> weak spots).
>
> I'd stick with what you are most comfortable managing - creating, backing
> up, extending, verifying health, etc.
> Y.
>
>
>>
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>>
http://lists.ovirt.org/mailman/listinfo/users
>
>