[ovirt-users] Best setup for nfs domain and 1gbs adapters on hosts

Karli Sjöberg karli.sjoberg at slu.se
Sat Feb 18 12:35:34 UTC 2017


Den 18 feb. 2017 8:56 fm skrev Gianluca Cecchi <gianluca.cecchi at gmail.com>:
>
>
>
> On Feb 17, 2017 7:22 PM, "Karli Sjöberg" <karli.sjoberg at slu.se> wrote:
>>
>>
>>
>> Den 17 feb. 2017 6:30 em skrev Gianluca Cecchi <gianluca.cecchi at gmail.com>:
>>>
>>> Hello,
>>> I'm going to setup an environment where I will have 2 hosts and each with 2 adapters to connect to storage domain(s). This will be a test environment, not a production one.
>>> The storage domain(s) will be NFS, provided by a Netapp system.
>>> The hosts have 4 x 1Gb/s adapters and I think to use 2 for ovirtmgmt and VMs (through bonding and VLANs) and to dedicate the other 2 adapters to the NFS domain connectivity.
>>> What would be the best setup to have both HA on the connection and also using the whole 2Gb/s in normal load scenario?
>>> Is it better to make more storage domains (and more svm on Netapp side) or only one?
>>> What would be the suitable bonding mode to put on adapters? I normally use 802.3ad provided by the switches, but I'm not sure if in this configuration I can use both the network adapters for the overall load of the different VMs that I would have in place...
>>>
>>> Thanks in advance for every suggestion,
>>>
>>> Gianluca
>>
>>
>> Hey G!
>>
>> If it were me doing this, I would make one 4x1Gb/s 802.3ad bond on both the filer and the hosts, to KISS. Then, if bandwidth is a concern, I would set up two VLANs for the storage interfaces, with addresses on separate subnets (10.0.0.1 and 10.0.1.1 on the filer; 10.0.0.(2,3) and 10.0.1.(2,3) on the hosts), and on the filer set up just two NFS exports, across which you provision your VMs as evenly as possible. This way the network load spreads evenly over all interfaces, giving the simplest config and the best fault tolerance, while keeping storage traffic at a maximum of 2Gb/s. You only need one SVM with several addresses to achieve this. We have our VMware environment set up similarly towards our NetApp. We also have our oVirt environment set up like this, but towards a different NFS storage, with great success.
>>
>> /K
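
[Editor's note: the layout K describes could look roughly like this on an EL7-era host, as a hedged sketch. The bond name, slave NICs, VLAN IDs 100/101 and prefixes are assumptions for illustration; only the subnets and the 802.3ad mode come from the mail.]

```ini
# /etc/sysconfig/network-scripts/ifcfg-bond0
# (one ifcfg-ethX slave file per NIC, with MASTER=bond0 SLAVE=yes, not shown)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-bond0.100 -- first storage VLAN
DEVICE=bond0.100
VLAN=yes
IPADDR=10.0.0.2
PREFIX=24
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-bond0.101 -- second storage VLAN
DEVICE=bond0.101
VLAN=yes
IPADDR=10.0.1.2
PREFIX=24
ONBOOT=yes
BOOTPROTO=none
```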
>
>
> Thanks for your answer, K!
> So you mean to make a single bond composed of all 4 network adapters and put all the networks on it, including ovirtmgmt and such, through VLANs?
> How do you configure 802.3ad on 4 adapters? How many switches do you have to connect to, from these 4 adapters? Or do you use round-robin bonding (but I presume that bond mode is not supported in oVirt)?
> Thanks!

Well, in our case, we have two clustered switches from C-company, so two NICs go to each switch. And then, yeah, a different VLAN for every network on top of the same bond. Works like a charm :)

/K
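
[Editor's note: Gianluca's doubt about whether 802.3ad can use both adapters is worth spelling out: LACP balances per flow, not per packet, by hashing packet headers onto one slave link. Below is a simplified, illustrative sketch of the idea behind the `xmit_hash_policy=layer3+4` transmit hash; the real bonding driver's arithmetic differs in detail, and all IPs/ports here are examples.]

```python
import ipaddress

def xmit_slave(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
               n_slaves: int) -> int:
    """Pick a bond slave for a flow, in the spirit of xmit_hash_policy=layer3+4:
    XOR the L4 ports with the L3 addresses, fold the high bits down, and
    reduce modulo the slave count. The same 5-tuple always lands on the same
    slave (so a single NFS connection never exceeds one link's bandwidth),
    while distinct flows spread across all slaves."""
    h = src_port ^ dst_port
    h ^= int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    h ^= h >> 16  # fold so high address bits influence the low bits
    h ^= h >> 8
    return h % n_slaves

# Many NFS client connections (different source ports) toward one filer
# address spread across the slaves of a 4-NIC bond:
slaves = {xmit_slave("10.0.0.2", "10.0.0.1", p, 2049, 4)
          for p in range(1024, 1124)}
print(sorted(slaves))  # all four slaves get used across the 100 flows
```

This is why two storage subnets plus several client connections, as suggested above, can fill more than one 1Gb/s link even though any single TCP flow sticks to one slave.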