Adding Infiniband VM Network Fails

Good Day;

I am trying to add an InfiniBand VM network to the hosts on my oVirt deployment, and the network configuration on the hosts fails to save. The network bridge is added successfully, but applying the bridge to the ib1 NIC fails with little information other than that it failed.

My system:

6 HV nodes running CentOS 7 and OV version 4
1 dedicated engine running CentOS 7 and engine version 4 in 3.6 mode.

The HV nodes all have dual-port Mellanox IB cards. Port 0 is for iSCSI and NFS connectivity and runs fine. Port 1 is for VM usage of the 10Gb network.

Have any of you had any dealings with this?

Hi,

we are running Infiniband on the NFS storage network only. Did I get it right that this works or do you already have issues there?

Best regards.

Markus

Web: www.collogia.de

That is correct. The ib0 interfaces in all of the HV nodes are accessing iSCSI and NFS over that IB link successfully. What we are trying to do now is create a network that utilizes the second IB port (ib1) on the cards for some of the virtual machines that live inside the environment.
On Nov 16, 2016, at 1:40 PM, Markus Stockhausen <stockhausen@collogia.de> wrote:
Hi,
we are running Infiniband on the NFS storage network only. Did I get it right that this works or do you already have issues there?
Best regards.
Markus
Web: www.collogia.de

On Wed, Nov 16, 2016 at 10:01 PM, Clint Boggio <clint@theboggios.com> wrote:
That is correct. The ib0 in all of the HV nodes are accessing iSCSI and NFS over that IB link successfully.
What we are trying to do now is create a network that utilizes the second IB port (ib1) on the cards for some of the virtual machines that live inside the environment.
Could you please share Engine and node vdsm logs? (On the node, look for the vdsm.log and supervdsm.log.)
Thanks,
Edy.
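For reference, VDSM normally writes its logs to /var/log/vdsm/vdsm.log and /var/log/vdsm/supervdsm.log on each host, and the engine logs to /var/log/ovirt-engine/engine.log. Below is a minimal collection sketch, assuming those default locations; the "setupNetworks" and "ERROR" markers are only illustrative search terms, not a guaranteed format of the failure message.

from pathlib import Path

# Default oVirt/VDSM log locations are assumed; adjust if your deployment
# relocates them. Host logs must be gathered on each hypervisor node.
LOGS = [
    Path("/var/log/vdsm/vdsm.log"),            # on each HV node
    Path("/var/log/vdsm/supervdsm.log"),       # on each HV node (privileged operations)
    Path("/var/log/ovirt-engine/engine.log"),  # on the engine machine
]

for log in LOGS:
    if not log.exists():
        continue
    lines = log.read_text(errors="replace").splitlines()
    # Keep only lines mentioning the network setup call or errors.
    hits = [l for l in lines if "setupNetworks" in l or "ERROR" in l]
    print("--- %s (last 20 of %d matching lines) ---" % (log, len(hits)))
    print("\n".join(hits[-20:]))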

I believe IP-over-Infiniband is an OSI level 3 transport, so it can’t be used in a level 2 (Ethernet) bridge.
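Whatever the exact layering, the practical constraint on the host is that a Linux bridge will only enslave Ethernet-type devices (ARPHRD_ETHER, value 1), while IPoIB interfaces report link type ARPHRD_INFINIBAND (value 32) in /sys/class/net/<name>/type. A minimal check of this, assuming the ports are named ib0/ib1 as in the original post:

from pathlib import Path

ARPHRD_ETHER = 1        # Ethernet link layer -- accepted by a Linux bridge
ARPHRD_INFINIBAND = 32  # IPoIB link layer -- rejected by a Linux bridge

def link_type(ifname):
    """Return the kernel's link-layer (ARPHRD) type for a network interface."""
    return int(Path("/sys/class/net/%s/type" % ifname).read_text())

# Interface names ib0/ib1 are assumed from the original post; adjust as needed.
for ifname in ("ib0", "ib1"):
    t = link_type(ifname)
    verdict = "can" if t == ARPHRD_ETHER else "cannot"
    print("%s: link type %d -> %s be enslaved to an Ethernet bridge" % (ifname, t, verdict))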
Participants (5): Clint Boggio, clint@theboggios.com, Edward Haas, Markus Stockhausen, Pavel Gashev