After a bit of playing with the system, I found a nice workaround:
I added a host route on each host:
# route add -host 10.100.101.101 gw 172.22.22.1
and on the other one:
# route add -host 10.100.101.100 gw 172.22.22.2
where 10.100.101.0/24 is the slow network and 172.22.22.0/24 is the fast
network.
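To double-check which path the kernel picks for the peer's address once
the route is in place (assuming the iproute2 tools are available), something
like

# ip route get 10.100.101.101

should now report the route via 172.22.22.1 on the Infiniband interface.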
Migration now works through the fast link, with nothing changed in oVirt's
configuration.
Yuval
On Sun, Jan 6, 2013 at 11:49 PM, Dan Kenigsberg <danken(a)redhat.com> wrote:
On Sun, Jan 06, 2013 at 10:08:48PM +0200, Yuval M wrote:
> Hi,
> I'm running the following setup:
> 2 hosts,
> each has 2 physical NICs,
> the first NIC (which is bridged to ovirtmgmt) is a 100 Mbps Ethernet card
> connected to a switch (and to the internet)
> the 2nd NIC is a fast Infiniband card which is connected back-to-back to
> the other host.
>
> Both links are running fine, and I managed to have the 2nd host mount the
> storage via the fast link.
> The problem is that VM migration takes place over the slow link.
>
> How do I configure the cluster so that the migration uses the fast link?
> I've already created a network using the web interface. The migration
> still uses the slow link.
Currently, oVirt always uses the ovirtmgmt network for migration data,
even though libvirt already supports choosing a non-management IP
address for that. So I do not see how you could use the Infiniband card
for migration without defining ovirtmgmt there, or with serious vdsm
hacking.
When I say "serious", I mean something along the lines of
(the untested!)
http://gerrit.ovirt.org/10696.
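(Just to illustrate the libvirt capability mentioned above, not how vdsm
drives it: with plain virsh one can already steer the migration data to a
specific address by passing a separate migration URI, e.g.

# virsh migrate --live guest1 qemu+tcp://peer-mgmt-host/system tcp://172.22.22.2

where the last argument carries the actual migration traffic over the fast
network; the guest name and hostnames here are only placeholders.)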
To use this patch, you would first need to define a non-VM network over
the Infiniband connection on both hosts, and give each host an IP address
on it. Then, the name of this network should be passed as the value of
a VM custom property called 'mignet'.
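(Exposing such a custom property on the engine side would presumably go
through engine-config's UserDefinedVMProperties key, along the lines of
the sketch below; the exact value and regex are only a guess for this
untested patch, and note that setting the key replaces any previously
defined VM properties:

# engine-config -s "UserDefinedVMProperties=mignet=^[a-zA-Z0-9_-]+$"
# service ovirt-engine restart

After that, the 'mignet' property should show up under the VM's Custom
Properties in the web admin UI.)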
It would be great if you could try it out, debug it, and/or comment on
it.
Dan