When I deployed my Gluster hyperconverged setup using NVMe drives, I had to disable
multipath for all of my drives. I'm not sure if this is your issue, but here are the
instructions I followed to disable it.
Create a custom multipath configuration file.
# mkdir /etc/multipath/conf.d
# touch /etc/multipath/conf.d/99-custom-multipath.conf
Add the following content to the file, replacing <device> with the name of the
device to blacklist:
blacklist {
    devnode "<device>"
}
For example, to blacklist the /dev/sdb device, add the following:
blacklist {
    devnode "sdb"
}
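Since your devices are NVMe rather than sdX, note that the devnode value is a regular
expression; a blacklist like the following (my sketch, assuming you want every NVMe
device excluded from multipath) should cover all five drives at once:

blacklist {
    # assumption: exclude all NVMe devices from multipath; tighten the regex if needed
    devnode "^nvme.*"
}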
Restart multipathd.
# systemctl restart multipathd
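To confirm the blacklist took effect (my own verification step, not part of the
instructions I followed), re-run multipath -ll; the eui.* maps for the blacklisted
drives should no longer appear. If stale maps linger, multipath -F flushes unused maps:

# multipath -F    # flush any remaining unused multipath maps (if needed)
# multipath -ll   # should now list nothing for the blacklisted NVMe drives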
-----Original Message-----
From: Charles Lam <clam2718@gmail.com>
Sent: Friday, December 18, 2020 11:51 AM
To: users@ovirt.org
Subject: [EXT] [ovirt-users] Re: v4.4.3 Node Cockpit Gluster deploy fails
I have been asked if multipath has been disabled for the cluster's NVMe drives.
I have not enabled or disabled multipath for the NVMe drives. In Gluster deploy Step 4 -
Bricks, I checked "Multipath Configuration: Blacklist Gluster Devices." I have not
performed any custom setup of the NVMe drives other than wiping them between
deployment attempts. Below is the output of lsscsi and multipath -ll on the first host
after the failed Gluster deployment and before cleanup.
Thanks! Should I set up multipath? If so, could you point me to documentation on
setting it up for oVirt? I still have a lot to learn and appreciate any direction.
[root@Host1 conf.d]# lsscsi
[15:0:0:0] disk ATA DELLBOSS VD 00-0 /dev/sda
[17:0:0:0] process Marvell Console 1.01 -
[N:0:33:1] disk Dell Express Flash PM1725b 1.6TB SFF__1 /dev/nvme0n1
[N:1:33:1] disk Dell Express Flash PM1725b 1.6TB SFF__1 /dev/nvme1n1
[N:2:33:1] disk Dell Express Flash PM1725b 1.6TB SFF__1 /dev/nvme2n1
[N:3:33:1] disk Dell Express Flash PM1725b 1.6TB SFF__1 /dev/nvme3n1
[N:4:33:1] disk Dell Express Flash PM1725b 1.6TB SFF__1 /dev/nvme4n1
[root@Host1 conf.d]# multipath -ll
eui.343756304d7020220025385800000004 dm-0 NVME,Dell Express Flash PM1725b 1.6TB SFF
size=1.5T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 0:33:1:1 nvme0n1 259:1 active ready running
eui.343756304d7020540025385800000004 dm-1 NVME,Dell Express Flash PM1725b 1.6TB SFF
size=1.5T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 1:33:1:1 nvme1n1 259:0 active ready running
eui.343756304d7007630025385800000004 dm-2 NVME,Dell Express Flash PM1725b 1.6TB SFF
size=1.5T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 2:33:1:1 nvme2n1 259:3 active ready running
eui.343756304d7020470025385800000004 dm-4 NVME,Dell Express Flash PM1725b 1.6TB SFF
size=1.5T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 4:33:1:1 nvme4n1 259:4 active ready running
eui.343756304d7020460025385800000004 dm-3 NVME,Dell Express Flash PM1725b 1.6TB SFF
size=1.5T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 3:33:1:1 nvme3n1 259:2 active ready running