Best Storage Option: iSCSI/NFS/GlusterFS?

Charles Tassell <ctassell@gmail.com> wrote on Sat, Mar 25, 2017:

Hi Everyone,

I'm about to set up an oVirt cluster with two hosts hitting a Linux storage server. Since the Linux box can provide the storage in pretty much any form, I'm wondering which option is "best." Our primary focus is on reliability, with performance being a close second. Since we will only be using a single storage server, I was thinking NFS would probably beat out GlusterFS, and that NFSv4 would be a better choice than NFSv3. I had assumed that iSCSI would be better performance-wise, but from what I'm seeing online that might not be the case.

Our servers will be using a 1G network backbone for regular traffic and a dedicated 10G backbone with LACP for redundancy and extra bandwidth for storage traffic, if that makes a difference.

I'll probably try to do some performance benchmarks with 2-3 options, but the reliability issue is a little harder to test for. Has anyone had any particularly bad experiences with a particular storage option? We have been using iSCSI with a Dell MD3x00 SAN and have run into a bunch of issues with the multipath setup, but that won't be a problem with the new SAN since it's only got a single controller interface.

Yaniv Kaul <ykaul@redhat.com> wrote on Sun, Mar 26, 2017:

On Sat, Mar 25, 2017 at 9:20 AM, Charles Tassell <ctassell@gmail.com> wrote:
> Since we will only be using a single storage server, I was thinking NFS would probably beat out GlusterFS, and that NFSv4 would be a better choice than NFSv3. I had assumed that iSCSI would be better performance-wise, but from what I'm seeing online that might not be the case.
NFS 4.2 is better than NFS 3 in the sense that you'll get DISCARD support, which is nice. Gluster probably requires 3 servers. In most cases, I don't think people see a difference in performance between NFS and iSCSI. The theory is that block storage is faster, but in practice most workloads never reach the limits where it really matters.
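A minimal sketch of what that looks like in practice, assuming a RHEL/CentOS-era storage server (the export path and options below are illustrative placeholders, not from this thread):

    # /etc/sysconfig/nfs on the storage server: make sure nfsd offers 4.2
    RPCNFSDARGS="-V 4.2"

    # /etc/exports: the domain directory should be owned by vdsm:kvm (36:36)
    /exports/ovirt-data *(rw,sync,no_subtree_check)

    # On a host, check the negotiated version after attaching the domain
    nfsstat -m    # look for vers=4.2 in the mount flags

With vers=4.2 negotiated, fstrim/DISCARD from guests can actually punch holes in sparse images on the NFS domain.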
> Our servers will be using a 1G network backbone for regular traffic and a dedicated 10G backbone with LACP for redundancy and extra bandwidth for storage traffic, if that makes a difference.
LACP often (especially with NFS) does not provide extra bandwidth, as the (single) NFS connection tends to be sticky to a single physical link. It's one of the reasons I personally prefer iSCSI with multipathing.
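A rough sketch of that multipath approach on a host (the two portal IPs are hypothetical; the point is that each portal becomes an independent path):

    # Discover and log in to the target via each storage portal
    iscsiadm -m discovery -t sendtargets -p 10.10.10.1
    iscsiadm -m discovery -t sendtargets -p 10.10.10.2
    iscsiadm -m node -L all

    # Each LUN should now appear once, with two active paths
    multipath -ll

Unlike an LACP hash, multipathd schedules I/O across the paths itself, so a single initiator can drive both links at once.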
> We have been using iSCSI with a Dell MD3x00 SAN and have run into a bunch of issues with the multipath setup, but that won't be a problem with the new SAN since it's only got a single controller interface.
A single controller is not very reliable. If reliability is your primary concern, I suggest ensuring there is no single point of failure - or at least being aware of all of them (does the storage server have a redundant power supply? Is it connected to two power sources? Of course in some scenarios that's overkill and perhaps not practical, but you should be aware of your weak spots). I'd stick with what you are most comfortable managing - creating, backing up, extending, verifying health, etc.
Y.

Marcin Kruk wrote on Sun, Mar 26, 2017, 5:42 PM:

But on the Dell MD32x00 you have got two controllers. The trick is that you have to sustain a link to both controllers, so the best option is to use multipath as Yaniv said; otherwise you get error notifications from the array.

The problem is with the iSCSI target. After a server reboot, VDSM tries to connect to the target that was previously set, but it could be inactive. So in that case you have to remember to edit the configuration in vdsm.conf, because vdsm.conf does not accept a target with multiple IP addresses.

Charles Tassell <ctassell@gmail.com> replied:

Hi Marcin,

Hmm, so if you are using multipath with VDSM you have to manually edit the vdsm.conf file to put the right IP in every time the active controller switches? That sort of defeats the purpose of multipath... That was the issue we were having: we'd spin up another host, it would connect to the SAN, which would then rebalance the disks among controllers, and all our other hosts would lose their connection to the active controller and pause all of their VMs. It's the "Device is not on preferred path" issue that is common on the MD3x00 line. We had the same errors with VMware, but VMware was able to automatically switch to the active path.

Marcin Kruk replied:

No. You have to edit vdsm.conf when:
1) a link is broken and it points to the iSCSI target IP, and
2) you want to reboot your host or restart VDSM.

I don't know why, but VDSM during startup tries to connect to the IP target; in my opinion it should use the /var/lib/iscsi configuration which was set previously.

I also had the "Device is not on preferred path" problem, but I edited the multipath.conf file and set the round-robin algorithm, because during installation multipath.conf was changed. If you want to get the right configuration for your array, execute:
1) multipath -k     # enter the interactive multipathd console
2) show config      # find the proper configuration for your array
3) modify multipath.conf and put the above configuration in it.
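For an MD3xx0-class array, the stanza that "show config" prints typically looks something like the sketch below; values vary by model and firmware, so treat this as illustrative and verify it against your own "show config" output rather than copying it:

    devices {
        device {
            vendor                "DELL"
            product               "MD3"
            path_grouping_policy  group_by_prio
            prio                  rdac
            path_checker          rdac
            hardware_handler      "1 rdac"
            path_selector         "round-robin 0"
            failback              immediate
            no_path_retry         30
        }
    }

The rdac checker/prio plus group_by_prio is what keeps I/O on the controller that currently owns the LUN, which is what silences the "Device is not on preferred path" warnings.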

Bryan Sockel <Bryan.Sockel@altn.com> wrote on Tue, Apr 18, 2017:

Was reading over this post to the group about storage options. I am more of a Windows guy as opposed to a Linux guy, but am learning quickly and had a question. You said that LACP will not provide extra bandwidth (especially with NFS). Does the same hold true with GlusterFS? We are currently using GlusterFS for the file replication piece. Does GlusterFS take advantage of any multipathing?

Thanks

Yaniv Kaul <ykaul@redhat.com> wrote on Wed, Apr 19, 2017:

On Tue, Apr 18, 2017 at 9:57 PM, Bryan Sockel <Bryan.Sockel@altn.com> wrote:
> You said that LACP will not provide extra bandwidth (especially with NFS). Does the same hold true with GlusterFS? We are currently using GlusterFS for the file replication piece. Does GlusterFS take advantage of any multipathing?
I'd expect Gluster to take advantage of LACP, as it has replication to multiple peers (as opposed to NFS). See [1].
Y.

[1] https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Network%20Configurations%20Techniques/
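One way to sanity-check this from a host (a sketch; the volume name "myvol" is a placeholder, and brick ports normally start at 49152):

    # Each brick is a separate TCP connection, so an LACP hash has
    # multiple flows it can spread across the slaves
    gluster volume status myvol clients

    # Or just look at established connections to the brick port range
    ss -tn | awk '$5 ~ /:4915[0-9]$/'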

Bryan Sockel wrote on Wed, Apr 19, 2017:

Thank you for the information. I did check my servers this morning. In total I have 4 servers configured as part of my oVirt deployment: two virtualization servers and 2 gluster servers, with one of the virtualization servers being the arbiter for my gluster replicated storage.

From what I can see on my 2 dedicated gluster boxes, I see traffic going out over multiple links. On both of my virtualization hosts I am seeing all traffic go out via em1, and no traffic going out over the other interfaces. All four interfaces are configured in a single bond as 802.3ad on both hosts, with my logical networks attached to the bond.

Yaniv Kaul replied:

On Wed, Apr 19, 2017 at 5:07 PM, Bryan Sockel <Bryan.Sockel@altn.com> wrote:
> From what I can see on my 2 dedicated gluster boxes, I see traffic going out over multiple links. On both of my virtualization hosts I am seeing all traffic go out via em1, and no traffic going out over the other interfaces. All four interfaces are configured in a single bond as 802.3ad on both hosts.
The balancing is based on a hash of either the L2+L3 or the L3+L4 headers. It may well be that both connections end up with the same hash and therefore go through the same link.
Y.
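A sketch of what changing the hash policy looks like on a RHEL/CentOS-style host (the bond name and file path are illustrative; note the switch hashes return traffic independently and may need an equivalent setting on its side):

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"

    # Verify which policy the bond is actually using
    grep -i "hash policy" /proc/net/bonding/bond0

With layer3+4 the hash includes ports, so separate TCP connections between the same pair of hosts can land on different slaves.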
Participants (4):
- Bryan Sockel
- Charles Tassell
- Marcin Kruk
- Yaniv Kaul