[ovirt-users] Storage network question

Alan Murrell lists at murrell.ca
Fri Jul 31 06:00:53 UTC 2015


Actually, I have to make a correction to my earlier statement: the 
article I referred to was using bond mode 0 (balance-rr), not mode 1 as 
I had indicated.

I know mode 0 is not one of the officially supported options in the 
oVirt interface (though it can be specified under "Custom"), and it is 
probably not typically recommended, but if set up correctly it seems it 
would be perfect for the storage (and migration?) networks/bonds.
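For anyone who wants to try it, my understanding is that in the "Setup 
Host Networks" dialog you can pick "Custom" as the bonding mode and type 
the option string yourself, something like this (the miimon value is 
just my assumption, not from the article):

    mode=0 miimon=100

mode=0 being the balance-rr mode the article used.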

-Alan


On 30/07/2015 10:41 PM, Patrick Russell wrote:
> We just changed this up a little this week. We split our traffic into two bonds of 10Gb links, mode 1, as follows:
>
> Guest vlans, management vlan (including some NFS storage) -> bond0
> Migration layer 2 only vlan -> bond1
>
> This allowed us to tweak the vdsm.conf to speed up migrations without impacting management and guest traffic. As a result we’re currently pushing about 5Gb/s on bond1 when we do live migrations between hosts.
>
> -Patrick
>
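Patrick -- if you don't mind my asking, which vdsm.conf settings did you 
change? I'm guessing the migration knobs in the [vars] section of 
/etc/vdsm/vdsm.conf, something like the following (the values here are 
made up, just to show the shape):

    [vars]
    # cap per live migration, in MiB/s
    migration_max_bandwidth = 500
    # number of concurrent outgoing live migrations per host
    max_outgoing_migrations = 5

followed by a restart of vdsmd on each host?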
>> On Jul 28, 2015, at 1:34 AM, Alan Murrell <lists at murrell.ca> wrote:
>>
>> Hi Patrick,
>>
>> On 27/07/2015 7:25 AM, Patrick Russell wrote:
>>> We currently have all our NICs in the same bond. So we have guest
>>> traffic, management, and storage running over the same physical
>>> NICs, but different vlans.
>>
>> Which bond mode do you use, out of curiosity?  Not sure I would go to this extreme, though; I would still want the physical isolation of management vs. network/VM traffic vs. storage.  I'm just curious which bonding mode you use.
>>
>> Modes 1 and 5 would seem to be the best ones, as far as maximising throughput.  I read an article just the other day where a guy detailed how he bonded four 1Gbit NICs in mode 1 (with each on a different VLAN) and was able to achieve 320MB/s throughput to NFS storage.
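(Per my correction at the top of this message, that article was actually 
using mode 0. On a plain CentOS host, outside of oVirt, my understanding 
is that such a bond would look roughly like this; the device name and 
miimon value are just examples:

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    TYPE=Bond
    BONDING_MASTER=yes
    BONDING_OPTS="mode=balance-rr miimon=100"
    BOOTPROTO=none
    ONBOOT=yes

with each slave NIC's ifcfg file pointing at MASTER=bond0 and SLAVE=yes.)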
>>
>> As far as the storage question, I like to put other storage on the network (smaller NAS devices, maybe SANs for other storage) and would want the VMs to be able to get at those.  Being able to use a NIC to carry VM traffic to storage as well as host access to storage would cut down on the number of NICs I would need to have in each node.
>>
>> -Alan
>>
>



