Yes, it is deprecated on RHGS 3.5, but I really don't care about Gluster and I don't use it. What I would like to use are features like NFS over RDMA, which only Mellanox OFED provides; the host also has other users, and we need MLNX OFED installed to get support from Mellanox.

That's why I'm trying to install the MLNX OFED distribution. This is a development machine, not a production one, so we don't care if things break. But even when I try to force the installation of the MLNX OFED packages, things do not work as expected.

Thank you.

On 5 Aug 2021, at 06:55, Strahil Nikolov <hunter86_bg@yahoo.com> wrote:

As far as I know, RDMA is deprecated on GlusterFS, but it most probably still works.

Best Regards,
Strahil Nikolov

On Thu, Aug 5, 2021 at 5:05, Vinícius Ferrão via Users
Hello,

Is there a way to keep Mellanox OFED and oVirt/RHV playing nice with each other?

The real issue is with GlusterFS. It seems to be a Mellanox issue, but I would like to know if there's something we can do to make both play nicely on the same machine:

[root@rhvepyc2 ~]# dnf update --nobest
Updating Subscription Management repositories.
Last metadata expiration check: 0:14:25 ago on Wed 04 Aug 2021 02:01:11 AM -03.
Dependencies resolved.

Problem: both package mlnx-ofed-all-user-only-5.4-1.0.3.0.rhel8.4.noarch and mlnx-ofed-all-5.4-1.0.3.0.rhel8.4.noarch obsolete glusterfs-rdma
  - cannot install the best update candidate for package glusterfs-rdma-6.0-49.1.el8.x86_64
  - package ovirt-host-4.4.7-1.el8ev.x86_64 requires glusterfs-rdma, but none of the providers can be installed
  - package mlnx-ofed-all-5.4-1.0.3.0.rhel8.4.noarch obsoletes glusterfs-rdma provided by glusterfs-rdma-6.0-49.1.el8.x86_64
  - package glusterfs-rdma-3.12.2-40.2.el8.x86_64 requires glusterfs(x86-64) = 3.12.2-40.2.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-15.el8.x86_64 requires glusterfs(x86-64) = 6.0-15.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-20.el8.x86_64 requires glusterfs(x86-64) = 6.0-20.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-37.el8.x86_64 requires glusterfs(x86-64) = 6.0-37.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-37.2.el8.x86_64 requires glusterfs(x86-64) = 6.0-37.2.el8, but none of the providers can be installed
  - cannot install both glusterfs-3.12.2-40.2.el8.x86_64 and glusterfs-6.0-49.1.el8.x86_64
  - cannot install both glusterfs-6.0-15.el8.x86_64 and glusterfs-6.0-49.1.el8.x86_64
  - cannot install both glusterfs-6.0-20.el8.x86_64 and glusterfs-6.0-49.1.el8.x86_64
  - cannot install both glusterfs-6.0-37.el8.x86_64 and glusterfs-6.0-49.1.el8.x86_64
  - cannot install both glusterfs-6.0-37.2.el8.x86_64 and glusterfs-6.0-49.1.el8.x86_64
  - cannot install the best update candidate for package ovirt-host-4.4.7-1.el8ev.x86_64
  - cannot install the best update candidate for package glusterfs-6.0-49.1.el8.x86_64
=============================================================================================================================================================
Package                            Architecture            Version                          Repository                                                Size
=============================================================================================================================================================
Installing dependencies:
openvswitch                        x86_64                  2.14.1-1.54103                    mlnx_ofed_5.4-1.0.3.0_base                                17 M
ovirt-openvswitch                  noarch                  2.11-1.el8ev                      rhv-4-mgmt-agent-for-rhel-8-x86_64-rpms                  8.7 k
    replacing  rhv-openvswitch.noarch 1:2.11-7.el8ev
unbound                            x86_64                  1.7.3-15.el8                      rhel-8-for-x86_64-appstream-rpms                        895 k
Skipping packages with conflicts:
(add '--best --allowerasing' to command line to force their upgrade):
glusterfs                          x86_64                  3.12.2-40.2.el8                  rhel-8-for-x86_64-baseos-rpms                            558 k
glusterfs                          x86_64                  6.0-15.el8                        rhel-8-for-x86_64-baseos-rpms                            658 k
glusterfs                          x86_64                  6.0-20.el8                        rhel-8-for-x86_64-baseos-rpms                            659 k
glusterfs                          x86_64                  6.0-37.el8                        rhel-8-for-x86_64-baseos-rpms                            663 k
glusterfs                          x86_64                  6.0-37.2.el8                      rhel-8-for-x86_64-baseos-rpms                            662 k
Skipping packages with broken dependencies:
glusterfs-rdma                    x86_64                  3.12.2-40.2.el8                  rhel-8-for-x86_64-baseos-rpms                            49 k
glusterfs-rdma                    x86_64                  6.0-15.el8                        rhel-8-for-x86_64-baseos-rpms                            46 k
glusterfs-rdma                    x86_64                  6.0-20.el8                        rhel-8-for-x86_64-baseos-rpms                            46 k
glusterfs-rdma                    x86_64                  6.0-37.2.el8                      rhel-8-for-x86_64-baseos-rpms                            48 k
glusterfs-rdma                    x86_64                  6.0-37.el8                        rhel-8-for-x86_64-baseos-rpms                            48 k

Transaction Summary
=============================================================================================================================================================
Install  3 Packages
Skip    10 Packages

Total size: 18 M
Is this ok [y/N]:
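For what it's worth, one possible workaround (a sketch, assuming GlusterFS really is unused on this host): keep dnf from ever touching the glusterfs packages, so the already-installed glusterfs-rdma keeps satisfying ovirt-host's dependency. Something along these lines in dnf's configuration:

```ini
# /etc/dnf/dnf.conf -- hedged sketch, untested on this setup.
# Excluding the glusterfs packages means dnf never tries to upgrade or
# remove them, so the installed glusterfs-rdma continues to satisfy the
# ovirt-host dependency while other packages update normally.
[main]
excludepkgs=glusterfs*
```

Note that this alone won't let the mlnx-ofed-all metapackage install, because it carries Obsoletes: glusterfs-rdma. Installing only the individual MLNX OFED component packages actually needed (the exact package names should be verified with `dnf repoquery` against the mlnx_ofed repository) instead of the all-in-one metapackage may avoid tripping that Obsoletes entirely.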

I really don't care about GlusterFS on this cluster, but Mellanox OFED is much more relevant to me.

Thank you all,
Vinícius.
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org