[ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?

Marcin Kruk askifyouneed at gmail.com
Sun Mar 26 20:42:09 UTC 2017


But on the Dell MD32x00 you have two controllers. The trick is that you
have to sustain a link to both controllers, so the best option is to use
multipath, as Yaniv said. Otherwise you get error notifications from the
array.
The problem is with the iSCSI target: after a server reboot, VDSM tries to
connect to the target that was previously set, but that target could be
inactive. So in that case you have to remember to edit the configuration
in vdsm.conf, because vdsm.conf does not accept a target with multiple IP
addresses.
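For example, to make the initiator log back in to its known targets automatically after a reboot, and to fail dead paths quickly so multipath can switch over, something like this in /etc/iscsi/iscsid.conf helps (a sketch of commonly recommended values; check them against your array's documentation):

```ini
# /etc/iscsi/iscsid.conf -- illustrative sketch, adjust to your environment
node.startup = automatic                      # re-login to known targets at boot
node.session.timeo.replacement_timeout = 15   # fail a dead path faster so multipath can take over
```

Then discover and log in against each controller's portal IP separately (e.g. `iscsiadm -m discovery -t sendtargets -p <portal-ip>` once per controller), so that a session, and therefore a path, exists through both controllers.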

2017-03-26 9:40 GMT+02:00 Yaniv Kaul <ykaul at redhat.com>:

>
>
> On Sat, Mar 25, 2017 at 9:20 AM, Charles Tassell <ctassell at gmail.com>
> wrote:
>
>> Hi Everyone,
>>
>>   I'm about to set up an oVirt cluster with two hosts hitting a Linux
>> storage server.  Since the Linux box can provide the storage in pretty much
>> any form, I'm wondering which option is "best." Our primary focus is on
>> reliability, with performance being a close second.  Since we will only be
>> using a single storage server, I was thinking NFS would probably beat out
>> GlusterFS, and that NFSv4 would be a better choice than NFSv3.  I had
>> assumed that iSCSI would be better performance-wise, but from what I'm
>> seeing online that might not be the case.
>>
>
> NFS 4.2 is better than NFS 3 in the sense that you'll get DISCARD support,
> which is nice.
> Gluster probably requires 3 servers.
> In most cases, I don't think people see a difference in performance
> between NFS and iSCSI. The theory is that block storage is faster, but in
> practice most workloads never reach the limits where it really matters.
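> Before pointing oVirt at the export, it's worth verifying that the server
> actually negotiates 4.2. A quick way is a manual test mount with the
> version pinned (server name and export path below are placeholders):

```ini
# /etc/fstab -- illustrative test-mount line only; oVirt mounts its storage
# domains itself, so this is just for checking what the server negotiates.
# Server and export path are placeholders.
storage.example.com:/export/data  /mnt/test  nfs  vers=4.2,soft,timeo=600  0  0
```

> After mounting, `grep /mnt/test /proc/mounts` should show `vers=4.2`; if it
> falls back to a lower version, the server side needs attention first.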
>
>
>>
>>   Our servers will be using a 1G network backbone for regular traffic and
>> a dedicated 10G backbone with LACP for redundancy and extra bandwidth for
>> storage traffic if that makes a difference.
>>
>
> LACP often does not provide extra bandwidth (especially with NFS), as
> the single NFS connection tends to be sticky to a single physical link.
> It's one of the reasons I personally prefer iSCSI with multipathing.
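> For a dual-controller Dell MD32xx like the one mentioned above, the
> multipath side usually comes down to a device stanza along these lines
> (a sketch based on Dell's commonly published settings for the RDAC
> handler; verify the values against your array's own documentation):

```ini
# /etc/multipath.conf -- illustrative device stanza for a Dell MD32xx-class
# dual-controller array; values are typical RDAC settings, not verified
# against any specific firmware release.
devices {
    device {
        vendor                 "DELL"
        product                "MD32xx"
        path_grouping_policy   group_by_prio   # group paths per controller
        prio                   rdac            # prefer the owning controller
        path_checker           rdac
        hardware_handler       "1 rdac"
        failback               immediate       # return to preferred paths when they recover
        no_path_retry          30              # queue I/O briefly during controller failover
    }
}
```

> With both controller portals logged in and a stanza like this in place,
> `multipath -ll` should show two path groups, one active and one enabled,
> and controller failover stops producing the error notifications from the
> array.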
>
>
>>
>>   I'll probably try to do some performance benchmarks with 2-3 options,
>> but the reliability issue is a little harder to test for.  Has anyone had
>> any particularly bad experiences with a particular storage option?  We have
>> been using iSCSI with a Dell MD3x00 SAN and have run into a bunch of issues
>> with the multipath setup, but that won't be a problem with the new SAN
>> since it's only got a single controller interface.
>>
>
> A single controller is not very reliable. If reliability is your primary
> concern, I suggest ensuring there is no single point of failure - or at
> least being aware of all of them (does the storage server have redundant
> power supplies, connected to two power sources?). In some scenarios that's
> overkill and perhaps not practical, but you should know where your weak
> spots are.
>
> I'd stick with what you are most comfortable managing - creating, backing
> up, extending, verifying health, etc.
> Y.
>
>
>>
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
>