On Wed, Nov 16, 2016 at 10:01 PM, Clint Boggio <clint(a)theboggios.com> wrote:
That is correct. The ib0 interfaces on all of the HV nodes access iSCSI and
NFS over that IB link successfully.
What we are trying to do now is create a network that uses the second
IB port (ib1) on the cards for some of the virtual machines that live
inside the environment.
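
One detail that may matter here (a quick sketch, assuming the port shows up
as ib1 on the nodes): IPoIB interfaces report link type 32
(ARPHRD_INFINIBAND) in sysfs rather than Ethernet (type 1), and a plain
Linux bridge will only enslave Ethernet-type ports, so it is worth checking
what ib1 reports:

#!/usr/bin/env python
# Quick sketch: report the link type of ib1 so we can see whether it is an
# IPoIB device (ARPHRD_INFINIBAND, 32) or Ethernet (ARPHRD_ETHER, 1).
# A standard Linux bridge will only enslave Ethernet-type ports.
# The interface name "ib1" is assumed from the description above.
IFACE = "ib1"
ARPHRD_ETHER = 1
ARPHRD_INFINIBAND = 32

with open("/sys/class/net/%s/type" % IFACE) as f:
    link_type = int(f.read().strip())

if link_type == ARPHRD_INFINIBAND:
    print("%s reports IPoIB (type 32); a regular bridge cannot enslave it" % IFACE)
elif link_type == ARPHRD_ETHER:
    print("%s reports Ethernet (type 1); bridging should be possible" % IFACE)
else:
    print("%s reports link type %d" % (IFACE, link_type))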
> On Nov 16, 2016, at 1:40 PM, Markus Stockhausen <stockhausen(a)collogia.de>
wrote:
>
> Hi,
>
> we are running Infiniband on the NFS storage network only. Did I get
> it right that this works, or do you already have issues there?
>
> Best regards.
>
> Markus
>
> Web: www.collogia.de
>
> ________________________________________
> From: users-bounces(a)ovirt.org [users-bounces(a)ovirt.org] on behalf of clint(a)theboggios.com [clint(a)theboggios.com]
> Sent: Wednesday, 16 November 2016 20:10
> To: users(a)ovirt.org
> Subject: [ovirt-users] Adding Infiniband VM Network Fails
>
> Good Day;
>
> I am trying to add an Infiniband VM network to the hosts in my oVirt
> deployment, and the network configuration on the hosts fails to save.
> The network bridge is added successfully, but applying the bridge to the
> ib1 NIC fails with little information beyond the fact that it failed.
>
> My system:
>
> 6 HV nodes running CentOS 7 and oVirt 4
> 1 dedicated engine running CentOS 7 and engine version 4 in 3.6 compatibility mode.
>
> The HV nodes all have dual-port Mellanox IB cards. Port 0 (ib0) is for iSCSI
> and NFS connectivity and runs fine. Port 1 (ib1) is for VM usage of the 10Gb
> network.
>
> Have any of you had any dealings with this?
>
Could you please share the Engine and node vdsm logs?
(On the node, look for vdsm.log and supervdsm.log.)
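
If it helps, here is a minimal sketch for pulling the error lines out of the
node logs (assuming the default locations /var/log/vdsm/vdsm.log and
/var/log/vdsm/supervdsm.log; adjust the paths if your installation differs):

#!/usr/bin/env python
# Minimal sketch: print ERROR/Traceback lines from the vdsm logs so the
# relevant failure can be pasted into the thread. The log paths below are
# the usual defaults and may differ on your installation.
LOGS = ["/var/log/vdsm/vdsm.log", "/var/log/vdsm/supervdsm.log"]
MARKERS = ("ERROR", "Traceback")

for path in LOGS:
    try:
        with open(path) as f:
            for lineno, line in enumerate(f, 1):
                if any(m in line for m in MARKERS):
                    print("%s:%d: %s" % (path, lineno, line.rstrip()))
    except IOError as e:
        print("could not read %s: %s" % (path, e))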
Thanks,
Edy.
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users