[ovirt-users] oVirt Infiniband Support - Ethernet vs Infiniband

Markus Stockhausen stockhausen at collogia.de
Sun Jun 1 09:13:10 UTC 2014


> From: users-bounces at ovirt.org [users-bounces at ovirt.org] on behalf of René Koch [rkoch at linuxland.at]
> Sent: Friday, 30 May 2014 21:08
> To: ml ml
> Cc: users at ovirt.org
> Subject: Re: [ovirt-users] oVirt Infiniband Support - Ethernet vs Infiniband
> 
> On 05/30/2014 04:55 PM, ml ml wrote:
> > Hello List,
> >
> > I am very new to oVirt. I am setting up a 2-node cluster with GlusterFS
> > sync replication.
> >
> > I was about to buy a 2-port Intel GBit NIC for each host, at about
> > 150Eur each.
> >
> > However, for a little more, 180Eur, you get a 10GBit Infiniband card, the
> > MCX311A-XCAT.
> >
> > I would like to have quick opinions on this.
> >
> > Should I pay a little more and get 10GBit instead of 2GBit bonded?
> 
> Infiniband is good for GlusterFS / storage connections, but never, never,
> never, never ever use it as a VM network - believe me, it's a nightmare.
> 
> So you would direct-connect your 2 nodes, as otherwise you would have to
> buy an Infiniband switch as well (btw I'm not sure if direct connect is
> possible, but I guess it is). Keep in mind that Infiniband cables aren't
> cheap, either.
> Why don't you go for 10Gbit/s Ethernet?

Welcome to oVirt.

We have been on Infiniband IPoIB for about 3 years now (we started with VMware).
So we jumped on the 10GBit path long before 10GBit Ethernet had reasonable prices.
We are fine with it, but you should consider the following aspects.

- Go for ConnectX-2 cards or higher. They allow for IPoIB TCP offloading. Do not
believe the conflicting information you will find on the internet. A helpful
statement here:
http://permalink.gmane.org/gmane.linux.drivers.rdma/18032
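
If you want to double check what the driver actually offloads, a quick Python
sketch like the following prints the relevant ethtool features (ib0 is just an
assumed example interface name; adjust it and make sure ethtool is installed):

# Print the offload-related features ethtool reports for an IPoIB interface.
# "ib0" is an assumed example name - adjust to your setup.
import subprocess

IFACE = "ib0"

output = subprocess.check_output(["ethtool", "-k", IFACE])
for line in output.decode().splitlines():
    if "offload" in line:
        print(line.strip())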

- With IPoIB you should already be happy if you achieve more than 1GBit of
throughput. I hear a lot of people complaining that they do not get wire speed.
The stack is quite complex; from our experience we can easily reach 5-7GBit
on a DDR backbone (16 Gbps payload). Better CPUs will increase the
throughput: a jump from Xeon 5400 to 5600 gave a nice boost.
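
For reference, the raw math behind that 16 Gbps payload figure - just a
back-of-the-envelope sketch in Python, nothing authoritative:

# DDR InfiniBand, 4x link: 4 lanes at 5 Gbps signalling each,
# 8b/10b encoded, so only 80% of the raw rate carries payload.
lanes = 4
signalling_gbps = 5.0                   # per lane, DDR
raw_gbps = lanes * signalling_gbps      # 20 Gbps on the wire
payload_gbps = raw_gbps * 0.8           # 16 Gbps usable
print("raw: %.0f Gbps, payload: %.0f Gbps" % (raw_gbps, payload_gbps))
for observed in (5.0, 7.0):             # what we typically see with IPoIB
    print("%.0f GBit = %.0f%% of payload" % (observed, observed / payload_gbps * 100))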

- Everything around NFS over RDMA is currently a mess. 

- Cable prices are a real pain (at least in Europe). 

- Older CX4 switches are very cheap and rock solid. We have a Cisco 7000D.
Though limited to an MTU of 2044 bytes, they are fine.

- Infiniband requires a subnet manager (SM) on the network. Cards will only get a
link if that process is available. Either you need a switch with a built-in SM,
or in the case of a direct connection it must run on one of the hosts.
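
To check whether an SM (typically opensm) actually reached your ports, you can
look at the port state files under sysfs. A minimal Python sketch, assuming the
standard Linux sysfs layout for InfiniBand HCAs:

# Print the state of every InfiniBand port on this host.
# A port stuck in INIT usually means no subnet manager is running;
# ACTIVE means the SM has configured the port.
import glob

for path in sorted(glob.glob("/sys/class/infiniband/*/ports/*/state")):
    with open(path) as f:
        state = f.read().strip()        # e.g. "4: ACTIVE" or "2: INIT"
    parts = path.split("/")             # .../infiniband/<hca>/ports/<port>/state
    print("%s port %s: %s" % (parts[4], parts[6], state))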

- Always remember that GBit bonding will not improve the throughput of a single
connection. So you can easily migrate a single VM or its disks over Infiniband at
much more than 1GBit, while an Ethernet bond will be limited to 1GBit per stream.
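
Just to put rough numbers on that, a quick illustrative calculation (100 GB is
an arbitrary example image size, protocol overhead ignored):

# Time to move a single 100 GB disk image in one stream at different
# link speeds.
disk_gb = 100
for gbit in (1.0, 5.0, 7.0):            # 1 GBit bond limit vs. typical IPoIB speeds
    seconds = disk_gb * 8 / gbit        # GB -> GBit, divided by link speed
    print("%.0f GBit/s: about %.0f minutes" % (gbit, seconds / 60.0))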

Maybe you should have a look at the 10GBit Mellanox Ethernet adapters. They are
floating around at low prices (~$60), have an SFP+ port, and are built from the
same silicon as the Infiniband cards. For direct connections they should be fine,
but the switching hardware is expensive.

Markus