
On Tue, Sep 22, 2020 at 11:23 PM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
In my setup, I got no filter at all (but then, I'm on 4.3.10):

[root@ovirt ~]# lvmconfig | grep -i filter
[root@ovirt ~]#

We create the lvm filter automatically since 4.4.1. If you don't use block storage (FC, iSCSI) you don't need an lvm filter. If you do, you can create it manually using vdsm-tool.
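For reference, a minimal sketch of the manual route (the tool analyzes the mounted LVs and proposes a matching filter; -y applies it without prompting):

  vdsm-tool config-lvm-filter -y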
P.S.: Don't forget to run 'dracut -f', since the initramfs has a local copy of lvm.conf.
Good point
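For the running kernel that would be, roughly (a minimal sketch; a plain 'dracut -f' also rebuilds the initramfs for the current kernel):

  dracut -f /boot/initramfs-$(uname -r).img $(uname -r)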
Best Regards, Strahil Nikolov
On Tuesday, September 22, 2020, 23:05:29 GMT+3, Jeremey Wise <jeremey.wise@gmail.com> wrote:
Correct, on the wwid.
I do want to make clear here that to get around the error you must ADD (not remove) drives to /etc/lvm/lvm.conf so oVirt Gluster can complete setup of the drives.
[root@thor log]# cat /etc/lvm/lvm.conf | grep filter
# Broken for gluster in oVirt
#filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-AAHPao-R62q-8aac-410x-ZdA7-UL4i-Bh2bwJ$|", "a|^/dev/disk/by-id/lvm-pv-uuid-bSnFU3-jtUj-AGds-07sw-zdYC-52fM-mujuvC$|", "r|.*|"]
# Working for gluster wizard in oVirt
filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-AAHPao-R62q-8aac-410x-ZdA7-UL4i-Bh2bwJ$|", "a|^/dev/disk/by-id/lvm-pv-uuid-bSnFU3-jtUj-AGds-07sw-zdYC-52fM-mujuvC$|", "a|^/dev/disk/by-id/wwn-0x5001b448b847be41$|", "r|.*|"]
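To sanity-check a change like this, something along these lines should work (standard LVM commands; the device is the wwn path added above):

  lvmconfig devices/filter    # show the filter LVM is actually using
  pvs -a                      # the newly admitted device should now show up in the scan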
On Tue, Sep 22, 2020 at 3:57 PM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
Obtaining the wwid that way is not exactly correct. You can identify the wwids multipath actually uses via:
multipath -v4 | grep 'got wwid of'
Short example:

[root@ovirt conf.d]# multipath -v4 | grep 'got wwid of'
Sep 22 22:55:58 | nvme0n1: got wwid of 'nvme.1cc1-324a31313230303131343036-414441544120535838323030504e50-00000001'
Sep 22 22:55:58 | sda: got wwid of 'TOSHIBA-TR200_Z7KB600SK46S'
Sep 22 22:55:58 | sdb: got wwid of 'ST500NM0011_Z1M00LM7'
Sep 22 22:55:58 | sdc: got wwid of 'WDC_WD5003ABYX-01WERA0_WD-WMAYP2303189'
Sep 22 22:55:58 | sdd: got wwid of 'WDC_WD15EADS-00P8B0_WD-WMAVU0115133'
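If you only want the wwids themselves, a quick sketch:

  multipath -v4 2>/dev/null | awk -F"'" '/got wwid of/ {print $2}'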
Of course, if you are planning to use only gluster, it could be far easier to set:
[root@ovirt conf.d]# cat /etc/multipath/conf.d/blacklist.conf
blacklist {
    devnode "*"
}
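After dropping that file in place, reload multipath and confirm no maps remain (a sketch):

  multipathd reconfigure
  multipath -ll    # should print nothing once every local disk is blacklisted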
Best Regards, Strahil Nikolov
On Tuesday, September 22, 2020, 22:12:21 GMT+3, Nir Soffer <nsoffer@redhat.com> wrote:
On Tue, Sep 22, 2020 at 1:50 AM Jeremey Wise <jeremey.wise@gmail.com> wrote:
Agreed about an NVMe card being put under mpath control.
NVMe can be used via multipath; this is a new feature added in RHEL 8.1: https://bugzilla.redhat.com/1498546
Of course, when the NVMe device is local there is no point in using it via multipath. To avoid this, you need to blacklist the device like this:
1. Find the device wwid
For NVMe, you need the device ID_WWN:
$ udevadm info -q property /dev/nvme0n1 | grep ID_WWN
ID_WWN=eui.5cd2e42a81a11f69
2. Add local blacklist file:
$ mkdir /etc/multipath/conf.d
$ cat /etc/multipath/conf.d/local.conf
blacklist {
    wwid "eui.5cd2e42a81a11f69"
}
3. Reconfigure multipath
$ multipathd reconfigure
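To verify the result (a sketch, using the device names from the example above):

$ multipath -ll | grep -i nvme    # should return nothing for the blacklisted wwid
$ lsblk /dev/nvme0n1              # the device should no longer sit under an mpath map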
Gluster should do this for you automatically during installation, but if it does not, you can do it manually.
I have not even gotten to that volume / issue. My guess is it is something weird in the CentOS 4.18.0-193.19.1.el8_2.x86_64 kernel's handling of NVMe block devices.
I will post once I cross the bridge of getting the standard SSD volumes working.
On Mon, Sep 21, 2020 at 4:12 PM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
Why is your NVMe under multipath? That doesn't make sense at all. I have modified my multipath.conf to block all local disks. Also, don't forget the '# VDSM PRIVATE' line near the top of the file.
Best Regards, Strahil Nikolov
On Monday, September 21, 2020, 09:04:28 GMT+3, Jeremey Wise <jeremey.wise@gmail.com> wrote:
vdo: ERROR - Device /dev/sdc excluded by a filter
On the other server:

vdo: ERROR - Device /dev/mapper/nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001p1 excluded by a filter.
On all systems, when I go to create a VDO volume on blank drives, I get this filter error. All disks outside of the HCI wizard setup are now blocked from being used for a new Gluster volume group.
Here is what I see in /etc/lvm/lvm.conf:

[root@odin ~]# cat /etc/lvm/lvm.conf | grep filter
filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC$|", "a|^/dev/disk/by-id/lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1$|", "r|.*|"]
[root@odin ~]# ls -al /dev/disk/by-id/
total 0
drwxr-xr-x. 2 root root 1220 Sep 18 14:32 .
drwxr-xr-x. 6 root root 120 Sep 18 14:32 ..
lrwxrwxrwx. 1 root root 9 Sep 18 22:40 ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN -> ../../sda
lrwxrwxrwx. 1 root root 10 Sep 18 22:40 ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Sep 18 22:40 ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part2 -> ../../sda2
lrwxrwxrwx. 1 root root 9 Sep 18 14:32 ata-Micron_1100_MTFDDAV512TBN_17401F699137 -> ../../sdb
lrwxrwxrwx. 1 root root 9 Sep 18 22:40 ata-WDC_WDS100T2B0B-00YS70_183533804564 -> ../../sdc
lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-name-cl-home -> ../../dm-2
lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-name-cl-root -> ../../dm-0
lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-name-cl-swap -> ../../dm-1
lrwxrwxrwx. 1 root root 11 Sep 18 16:40 dm-name-gluster_vg_sdb-gluster_lv_data -> ../../dm-11
lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-name-gluster_vg_sdb-gluster_lv_engine -> ../../dm-6
lrwxrwxrwx. 1 root root 11 Sep 18 16:40 dm-name-gluster_vg_sdb-gluster_lv_vmstore -> ../../dm-12
lrwxrwxrwx. 1 root root 10 Sep 18 23:35 dm-name-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-3
lrwxrwxrwx. 1 root root 10 Sep 18 23:49 dm-name-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001p1 -> ../../dm-4
lrwxrwxrwx. 1 root root 10 Sep 18 14:32 dm-name-vdo_sdb -> ../../dm-5
lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADc49gc6PWLRBCoJ2B3JC9tDJejyx5eDPT -> ../../dm-1
lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADOMNJfgcat9ZLOpcNO7FyG8ixcl5s93TU -> ../../dm-2
lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADzqPGk0yTQ19FIqgoAfsCxWg7cDMtl71r -> ../../dm-0
lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOq6Om5comvRFWJDbtVZAKtE5YGl4jciP9 -> ../../dm-6
lrwxrwxrwx. 1 root root 11 Sep 18 16:40 dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOqVheASEgerWSEIkjM1BR3us3D9ekHt0L -> ../../dm-11
lrwxrwxrwx. 1 root root 11 Sep 18 16:40 dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOQz6vXuivIfup6cquKAjPof8wIGOSe4Vz -> ../../dm-12
lrwxrwxrwx. 1 root root 10 Sep 18 23:35 dm-uuid-mpath-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-3
lrwxrwxrwx. 1 root root 10 Sep 18 23:49 dm-uuid-part1-mpath-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-4
lrwxrwxrwx. 1 root root 10 Sep 18 14:32 dm-uuid-VDO-472035cc-8d2b-40ac-afe9-fa60b62a887f -> ../../dm-5
lrwxrwxrwx. 1 root root 10 Sep 18 14:32 lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC -> ../../dm-5
lrwxrwxrwx. 1 root root 10 Sep 18 22:40 lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1 -> ../../sda2
lrwxrwxrwx. 1 root root 13 Sep 18 14:32 nvme-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../nvme0n1
lrwxrwxrwx. 1 root root 15 Sep 18 14:32 nvme-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001-part1 -> ../../nvme0n1p1
lrwxrwxrwx. 1 root root 13 Sep 18 14:32 nvme-SPCC_M.2_PCIe_SSD_AA000000000000002458 -> ../../nvme0n1
lrwxrwxrwx. 1 root root 15 Sep 18 14:32 nvme-SPCC_M.2_PCIe_SSD_AA000000000000002458-part1 -> ../../nvme0n1p1
lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-0ATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN -> ../../sda
lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-0ATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-0ATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part2 -> ../../sda2
lrwxrwxrwx. 1 root root 9 Sep 18 14:32 scsi-0ATA_Micron_1100_MTFD_17401F699137 -> ../../sdb
lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-0ATA_WDC_WDS100T2B0B-_183533804564 -> ../../sdc
lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-1ATA_INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN -> ../../sda
lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-1ATA_INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-1ATA_INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part2 -> ../../sda2
lrwxrwxrwx. 1 root root 9 Sep 18 14:32 scsi-1ATA_Micron_1100_MTFDDAV512TBN_17401F699137 -> ../../sdb
lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-1ATA_WDC_WDS100T2B0B-00YS70_183533804564 -> ../../sdc
lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-35001b448b9608d90 -> ../../sdc
lrwxrwxrwx. 1 root root 9 Sep 18 14:32 scsi-3500a07511f699137 -> ../../sdb
lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-355cd2e404b581cc0 -> ../../sda
lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-355cd2e404b581cc0-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-355cd2e404b581cc0-part2 -> ../../sda2
lrwxrwxrwx. 1 root root 10 Sep 18 23:35 scsi-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-3
lrwxrwxrwx. 1 root root 10 Sep 18 23:49 scsi-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001-part1 -> ../../dm-4
lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-SATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN -> ../../sda
lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-SATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-SATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part2 -> ../../sda2
lrwxrwxrwx. 1 root root 9 Sep 18 14:32 scsi-SATA_Micron_1100_MTFD_17401F699137 -> ../../sdb
lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-SATA_WDC_WDS100T2B0B-_183533804564 -> ../../sdc
lrwxrwxrwx. 1 root root 9 Sep 18 22:40 wwn-0x5001b448b9608d90 -> ../../sdc
lrwxrwxrwx. 1 root root 9 Sep 18 14:32 wwn-0x500a07511f699137 -> ../../sdb
lrwxrwxrwx. 1 root root 9 Sep 18 22:40 wwn-0x55cd2e404b581cc0 -> ../../sda
lrwxrwxrwx. 1 root root 10 Sep 18 22:40 wwn-0x55cd2e404b581cc0-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Sep 18 22:40 wwn-0x55cd2e404b581cc0-part2 -> ../../sda2
lrwxrwxrwx. 1 root root 10 Sep 18 23:35 wwn-0xvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-3
lrwxrwxrwx. 1 root root 10 Sep 18 23:49 wwn-0xvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001-part1 -> ../../dm-4
lrwxrwxrwx. 1 root root 15 Sep 18 14:32 wwn-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001-part1 -> ../../nvme0n1p1
So the filter admits two devices:

lvm-pv-uuid-e1fvwo... -> dm-5 -> vdo_sdb (used by HCI for all three gluster base volumes)
lvm-pv-uuid-mr9awW... -> sda2 -> boot volume
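Those symlinks can be resolved directly; per the listing above, this prints /dev/dm-5 and /dev/sda2:

  readlink -f /dev/disk/by-id/lvm-pv-uuid-*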
[root@odin ~]# lsblk
NAME                                                       MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                                          8:0    0  74.5G  0 disk
├─sda1                                                       8:1    0     1G  0 part  /boot
└─sda2                                                       8:2    0  73.5G  0 part
  ├─cl-root                                                253:0    0  44.4G  0 lvm   /
  ├─cl-swap                                                253:1    0   7.5G  0 lvm   [SWAP]
  └─cl-home                                                253:2    0  21.7G  0 lvm   /home
sdb                                                          8:16   0   477G  0 disk
└─vdo_sdb                                                  253:5    0   2.1T  0 vdo
  ├─gluster_vg_sdb-gluster_lv_engine                       253:6    0   100G  0 lvm   /gluster_bricks/engine
  ├─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb_tmeta   253:7    0     1G  0 lvm
  │ └─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb-tpool 253:9    0     2T  0 lvm
  │   ├─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb     253:10   0     2T  1 lvm
  │   ├─gluster_vg_sdb-gluster_lv_data                     253:11   0  1000G  0 lvm   /gluster_bricks/data
  │   └─gluster_vg_sdb-gluster_lv_vmstore                  253:12   0  1000G  0 lvm   /gluster_bricks/vmstore
  └─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb_tdata   253:8    0     2T  0 lvm
    └─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb-tpool 253:9    0     2T  0 lvm
      ├─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb     253:10   0     2T  1 lvm
      ├─gluster_vg_sdb-gluster_lv_data                     253:11   0  1000G  0 lvm   /gluster_bricks/data
      └─gluster_vg_sdb-gluster_lv_vmstore                  253:12   0  1000G  0 lvm   /gluster_bricks/vmstore
sdc                                                          8:32   0 931.5G  0 disk
nvme0n1                                                    259:0    0 953.9G  0 disk
├─nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 253:3 0 953.9G 0 mpath
│ └─nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001p1 253:4 0 953.9G 0 part
└─nvme0n1p1
So I don't think this is LVM filtering things out.
Multipath is showing weird treatment of the NVMe drive, but that is outside this conversation:

[root@odin ~]# multipath -l
nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 dm-3 NVME,SPCC M.2 PCIe SSD
size=954G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
  `- 0:1:1:1 nvme0n1 259:0 active undef running
[root@odin ~]#
Where is it getting this filter? I have run gdisk on /dev/sdc (a new 1 TB drive) and it shows no partitions. I even did a full dd if=/dev/zero over the disk, and there was no change.
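As an aside, a quicker way than a full dd pass to clear any stale on-disk signatures would be (untried here):

  wipefs -a /dev/sdc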
I reloaded the OS on the system to get through the wizard setup. Now that all three nodes are in the HCI cluster, all six drives (2 x 1 TB in each server) are locked from any use due to this filter error.
Ideas?
-- jeremey.wise@gmail.com