[ovirt-users] Best setup for nfs domain and 1gbs adapters on hosts
Karli Sjöberg
karli.sjoberg at slu.se
Fri Feb 17 18:22:17 UTC 2017
On 17 Feb 2017 at 6:30 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com> wrote:
Hello,
I'm going to set up an environment with 2 hosts, each with 2 adapters dedicated to connecting to the storage domain(s). This will be a test environment, not a production one.
The storage domain(s) will be NFS, provided by a NetApp system.
The hosts have 4 x 1Gb/s adapters; I'm thinking of using 2 for ovirtmgmt and VM traffic (through bonding and VLANs) and dedicating the other 2 to NFS domain connectivity.
What would be the best setup to get both HA on the connection and use of the full 2Gb/s under normal load?
Is it better to create several storage domains (and several SVMs on the NetApp side) or only one?
What would be the most suitable bonding mode for the adapters? I normally use 802.3ad provided by the switches, but I'm not sure whether, in this configuration, both network adapters would actually carry the combined load of the different VMs I would have in place...
Thanks in advance for any suggestions,
Gianluca
Hey G!
If it were me doing this, I would make one 4x1Gb/s 802.3ad bond on the filer and on the hosts, to KISS. Then, if bandwidth is a concern, I would set up two VLANs for the storage interfaces, with addresses on separate subnets (10.0.0.1 and 10.0.1.1 on the filer; 10.0.0.(2,3) and 10.0.1.(2,3) on the hosts), and set up only two NFS exports on the filer, provisioning your VMs across them as evenly as possible.

This way the network load spreads evenly over all interfaces with the simplest config and the best fault tolerance, while keeping storage traffic at a maximum of 2Gb/s. You only need one SVM with several addresses to achieve this.

We have our VMware environment set up similarly to this towards our NetApp. We also have our oVirt environment set up like this, but towards a different NFS storage, with great success.
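To illustrate why the two subnets/exports matter with 802.3ad: the bond pins each TCP connection to one slave via a transmit hash, so a single NFS mount can never exceed one 1Gb/s link on its own. Below is a rough Python sketch of a layer3+4-style hash; the formula, addresses and ports are simplified assumptions for illustration, not the exact kernel policy or your actual config.

# Rough model of how a per-flow transmit hash (e.g. 802.3ad with a
# layer3+4-style xmit_hash_policy) pins each TCP connection to one bond
# slave. The formula and the addresses/ports are illustrative assumptions,
# not the exact kernel implementation or anyone's real setup.
import ipaddress

N_SLAVES = 2  # two 1Gb/s links in the storage bond

def pick_slave(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    """XOR the ports with the low 16 bits of the XORed IPs, then reduce."""
    ip_xor = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    return ((src_port ^ dst_port) ^ (ip_xor & 0xFFFF)) % N_SLAVES

# One NFS export -> one TCP connection -> always the same slave, ~1Gb/s cap.
one_export = pick_slave("10.0.0.2", "10.0.0.1", 683, 2049)

# Two exports on the two storage subnets -> two connections (different
# IPs and source ports) that can hash to different slaves, so a busy
# host can push up to ~2Gb/s in total.
two_exports = [
    pick_slave("10.0.0.2", "10.0.0.1", 683, 2049),
    pick_slave("10.0.1.2", "10.0.1.1", 684, 2049),
]
print(one_export)    # 1 with this toy hash
print(two_exports)   # [1, 0] -> the flows spread over both links

In other words, the second export/subnet doesn't make any single flow faster; it just gives the hash a second flow to spread across the links.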
/K