I don't know if you can just remove the glusterfs-rdma rpm.
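If you do try that, it's probably worth checking first what still depends on it (your dnf output further down suggests ovirt-host requires it). A quick sketch:

```shell
# Check what (if anything) still requires glusterfs-rdma before removing it.
# Falls back to a note if rpm isn't available or nothing depends on it.
deps=$(rpm -q --whatrequires glusterfs-rdma 2>/dev/null || true)
echo "requires glusterfs-rdma: ${deps:-none found (or rpm not present)}"
```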

I'm using Mellanox OFED on some oVirt Node 4.4 hosts. I install it from the Mellanox tar/iso by running the Mellanox install script, after adding the required dependencies with --enable-repo, which is not the same as adding a repository and running 'dnf install'. So I would try that approach on a test host first.

I use it for the 'virtual InfiniBand' interfaces that get attached to VMs via host device passthrough.

I'll note that the Node versions of Gluster are 7.8 (Node 4.4.4.0/CentOS 8.3) and 7.9 (Node 4.4.4.1/CentOS 8.3), unlike your glusterfs 6.0.x.

I'll be trying to install Mellanox OFED on Node 4.4.7.1 (CentOS Stream 8) soon to see how that works out.

On Wed, Aug 4, 2021 at 10:04 PM Vinícius Ferrão via Users <users@ovirt.org> wrote:
Hello,

Is there a way to keep Mellanox OFED and oVirt/RHV playing nice with each other?

The real issue is regarding GlusterFS. It seems to be a Mellanox issue, but I would like to know if there's something we can do to make both play nice on the same machine:

[root@rhvepyc2 ~]# dnf update --nobest
Updating Subscription Management repositories.
Last metadata expiration check: 0:14:25 ago on Wed 04 Aug 2021 02:01:11 AM -03.
Dependencies resolved.

 Problem: both package mlnx-ofed-all-user-only-5.4-1.0.3.0.rhel8.4.noarch and mlnx-ofed-all-5.4-1.0.3.0.rhel8.4.noarch obsolete glusterfs-rdma
  - cannot install the best update candidate for package glusterfs-rdma-6.0-49.1.el8.x86_64
  - package ovirt-host-4.4.7-1.el8ev.x86_64 requires glusterfs-rdma, but none of the providers can be installed
  - package mlnx-ofed-all-5.4-1.0.3.0.rhel8.4.noarch obsoletes glusterfs-rdma provided by glusterfs-rdma-6.0-49.1.el8.x86_64
  - package glusterfs-rdma-3.12.2-40.2.el8.x86_64 requires glusterfs(x86-64) = 3.12.2-40.2.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-15.el8.x86_64 requires glusterfs(x86-64) = 6.0-15.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-20.el8.x86_64 requires glusterfs(x86-64) = 6.0-20.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-37.el8.x86_64 requires glusterfs(x86-64) = 6.0-37.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-37.2.el8.x86_64 requires glusterfs(x86-64) = 6.0-37.2.el8, but none of the providers can be installed
  - cannot install both glusterfs-3.12.2-40.2.el8.x86_64 and glusterfs-6.0-49.1.el8.x86_64
  - cannot install both glusterfs-6.0-15.el8.x86_64 and glusterfs-6.0-49.1.el8.x86_64
  - cannot install both glusterfs-6.0-20.el8.x86_64 and glusterfs-6.0-49.1.el8.x86_64
  - cannot install both glusterfs-6.0-37.el8.x86_64 and glusterfs-6.0-49.1.el8.x86_64
  - cannot install both glusterfs-6.0-37.2.el8.x86_64 and glusterfs-6.0-49.1.el8.x86_64
  - cannot install the best update candidate for package ovirt-host-4.4.7-1.el8ev.x86_64
  - cannot install the best update candidate for package glusterfs-6.0-49.1.el8.x86_64
=============================================================================================================================================================
 Package                            Architecture            Version                           Repository                                                Size
=============================================================================================================================================================
Installing dependencies:
 openvswitch                        x86_64                  2.14.1-1.54103                    mlnx_ofed_5.4-1.0.3.0_base                                17 M
 ovirt-openvswitch                  noarch                  2.11-1.el8ev                      rhv-4-mgmt-agent-for-rhel-8-x86_64-rpms                  8.7 k
     replacing  rhv-openvswitch.noarch 1:2.11-7.el8ev
 unbound                            x86_64                  1.7.3-15.el8                      rhel-8-for-x86_64-appstream-rpms                         895 k
Skipping packages with conflicts:
(add '--best --allowerasing' to command line to force their upgrade):
 glusterfs                          x86_64                  3.12.2-40.2.el8                   rhel-8-for-x86_64-baseos-rpms                            558 k
 glusterfs                          x86_64                  6.0-15.el8                        rhel-8-for-x86_64-baseos-rpms                            658 k
 glusterfs                          x86_64                  6.0-20.el8                        rhel-8-for-x86_64-baseos-rpms                            659 k
 glusterfs                          x86_64                  6.0-37.el8                        rhel-8-for-x86_64-baseos-rpms                            663 k
 glusterfs                          x86_64                  6.0-37.2.el8                      rhel-8-for-x86_64-baseos-rpms                            662 k
Skipping packages with broken dependencies:
 glusterfs-rdma                     x86_64                  3.12.2-40.2.el8                   rhel-8-for-x86_64-baseos-rpms                             49 k
 glusterfs-rdma                     x86_64                  6.0-15.el8                        rhel-8-for-x86_64-baseos-rpms                             46 k
 glusterfs-rdma                     x86_64                  6.0-20.el8                        rhel-8-for-x86_64-baseos-rpms                             46 k
 glusterfs-rdma                     x86_64                  6.0-37.2.el8                      rhel-8-for-x86_64-baseos-rpms                             48 k
 glusterfs-rdma                     x86_64                  6.0-37.el8                        rhel-8-for-x86_64-baseos-rpms                             48 k

Transaction Summary
=============================================================================================================================================================
Install   3 Packages
Skip     10 Packages

Total size: 18 M
Is this ok [y/N]:

I really don't care about GlusterFS on this cluster, but Mellanox OFED is much more relevant to me.

Thank you all,
Vinícius.
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/MGQBIBM4BCHBBMLCY2QDKAR3Q6OE5LCX/