VM access to infiniband network

Hi all, I am facing a problem while trying to associate a Mellanox InfiniBand interface with a network and use it for VM traffic. The vdsm log shows the following message: "The bridge <bridge name> cannot use IP over InfiniBand interface <interface name> as port. Please use RoCE interface instead." Did anybody face the same problem and solve it? The ib interface is currently configured with an IP address, and we are mounting NFS filesystems on the cluster nodes through the InfiniBand network.

The error says that you can't use IPoIB, and you're using it: an IP address on InfiniBand is IPoIB. I don't know the exact reasoning behind the restriction, but in my experience with IB, IPoIB is a resource hog when you run NFS on top of it. There's no hardware offload, so a 1GbE Ethernet card will perform better in some cases. Regards,
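For readers hitting the same check: whether an interface is IPoIB or an Ethernet/RoCE-capable link can be read from the kernel link type in sysfs. A minimal sketch (the `link_kind` helper name is made up for illustration):

```shell
# Map the numeric link type from /sys/class/net/<iface>/type to a name.
# 1 is ARPHRD_ETHER (Ethernet, what RoCE runs over); 32 is ARPHRD_INFINIBAND
# (IPoIB, which vdsm refuses to enslave to a bridge).
link_kind() {
  case "$1" in
    1)  echo ethernet ;;
    32) echo infiniband ;;
    *)  echo unknown ;;
  esac
}

# On a real host you would run, e.g.:
#   link_kind "$(cat /sys/class/net/ib0/type)"
```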
On 26 May 2022, at 10:14, Roberto Bertucci <rob.bertucci@gmail.com> wrote:
Hi all, i am facing a problem while trying to associate a Mellanox infiniband interface to a network and using it for VM traffic.
vdsm log shows the following message: The bridge <bridge name> cannot use IP over InfiniBand interface <interface name> as port. Please use RoCE interface instead.
Did anybody face the same problem and solve it? Actually ib interface is configured with an ip address and we are mounting NFS filesystems on cluster nodes through infiniband network.
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/S4B554ANIYXAFE...

Thank you Vinícius. The customer has BeeGFS over InfiniBand and wants the VMs to have access to the BeeGFS storage. I will try to convince him to buy a new Mellanox card dedicated to BeeGFS.

Hmm, BeeGFS natively supports RDMA, which is good. So the customer wants to access BeeGFS storage inside the VM? Is that the purpose? If yes, you should use the IOMMU to provide ib interfaces directly to the VMs. If it's a single VM, just pass the ConnectX card through to it. If it's more than one VM, you may need to look at SR-IOV and enable it. Which case are you looking at? Regards,
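As a sketch of the SR-IOV route (not oVirt-specific; the helper name is invented, and the interface name and VF count below are placeholders): virtual functions on a Mellanox PF are typically created by writing a count into the device's sysfs node.

```shell
# Build the sysfs path used to create SR-IOV virtual functions on a PF.
# Writing N to this file asks the driver to spawn N VFs; SR-IOV must
# already be enabled in the card firmware / host BIOS.
sriov_numvfs_path() {
  printf '/sys/class/net/%s/device/sriov_numvfs' "$1"
}

# Example on a real host (ib0 and the count 8 are placeholders):
#   echo 8 > "$(sriov_numvfs_path ib0)"
```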
On 1 Jun 2022, at 14:47, Roberto Bertucci <rob.bertucci@gmail.com> wrote:
Thank you Vinicious, customer has beegfs through infiniband and he wants vms having access to beegfs storage. I will try to convince him to buy a new mellanox card dedicated to beegfs.

Hi Vinícius, sorry for the delay, but the project stopped for some weeks. The goal is to have a couple of VMs with access to the InfiniBand network. I have managed to enable virtual functions for the Mellanox card and I am trying to configure passthrough of the virtual cards (ib0 to ib8 devices). Still, I am unable to create a passthrough vNIC, because I cannot associate the ib* ports with a network bridge due to this error: "The bridge <bridge name> cannot use IP over InfiniBand interface <interface name> as port. Please use RoCE interface instead". Therefore I can't have a defined network profile, and vNIC creation is not possible. Any ideas? I looked at the documentation and found nothing. Thank you
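To sanity-check that the VFs actually exist on the PCI bus before attempting passthrough, `lspci` filtered by Mellanox's PCI vendor ID (15b3) is handy. A small sketch, with the filter pulled out as a function so it can be exercised on canned input (the device strings in the comments are illustrative only):

```shell
# Keep only the "Virtual Function" lines from lspci-style output.
filter_vfs() {
  grep -i 'virtual function'
}

# On a real host:
#   lspci -d 15b3: | filter_vfs
```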
participants (2)
- Roberto Bertucci
- Vinícius Ferrão