oVirt - vdo: ERROR - Device /dev/sd excluded by a filter

vdo: ERROR - Device /dev/sdc excluded by a filter

Other server:

vdo: ERROR - Device /dev/mapper/nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001p1 excluded by a filter.

On all systems, when I go to create a VDO volume on blank drives, I get this filter error. All disks outside of the HCI wizard setup are now blocked from being used for a new Gluster volume group.

Here is what I see in /etc/lvm/lvm.conf:

[root@odin ~]# cat /etc/lvm/lvm.conf |grep filter
filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC$|", "a|^/dev/disk/by-id/lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1$|", "r|.*|"]

[root@odin ~]# ls -al /dev/disk/by-id/
total 0
drwxr-xr-x. 2 root root 1220 Sep 18 14:32 .
drwxr-xr-x. 6 root root  120 Sep 18 14:32 ..
lrwxrwxrwx. 1 root root  9 Sep 18 22:40 ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN -> ../../sda
lrwxrwxrwx. 1 root root 10 Sep 18 22:40 ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Sep 18 22:40 ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part2 -> ../../sda2
lrwxrwxrwx. 1 root root  9 Sep 18 14:32 ata-Micron_1100_MTFDDAV512TBN_17401F699137 -> ../../sdb
lrwxrwxrwx. 1 root root  9 Sep 18 22:40 ata-WDC_WDS100T2B0B-00YS70_183533804564 -> ../../sdc
lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-name-cl-home -> ../../dm-2
lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-name-cl-root -> ../../dm-0
lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-name-cl-swap -> ../../dm-1
lrwxrwxrwx. 1 root root 11 Sep 18 16:40 dm-name-gluster_vg_sdb-gluster_lv_data -> ../../dm-11
lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-name-gluster_vg_sdb-gluster_lv_engine -> ../../dm-6
lrwxrwxrwx. 1 root root 11 Sep 18 16:40 dm-name-gluster_vg_sdb-gluster_lv_vmstore -> ../../dm-12
lrwxrwxrwx. 1 root root 10 Sep 18 23:35 dm-name-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-3
lrwxrwxrwx. 1 root root 10 Sep 18 23:49 dm-name-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001p1 -> ../../dm-4
lrwxrwxrwx. 1 root root 10 Sep 18 14:32 dm-name-vdo_sdb -> ../../dm-5
lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADc49gc6PWLRBCoJ2B3JC9tDJejyx5eDPT -> ../../dm-1
lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADOMNJfgcat9ZLOpcNO7FyG8ixcl5s93TU -> ../../dm-2
lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADzqPGk0yTQ19FIqgoAfsCxWg7cDMtl71r -> ../../dm-0
lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOq6Om5comvRFWJDbtVZAKtE5YGl4jciP9 -> ../../dm-6
lrwxrwxrwx. 1 root root 11 Sep 18 16:40 dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOqVheASEgerWSEIkjM1BR3us3D9ekHt0L -> ../../dm-11
lrwxrwxrwx. 1 root root 11 Sep 18 16:40 dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOQz6vXuivIfup6cquKAjPof8wIGOSe4Vz -> ../../dm-12
lrwxrwxrwx. 1 root root 10 Sep 18 23:35 dm-uuid-mpath-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-3
lrwxrwxrwx. 1 root root 10 Sep 18 23:49 dm-uuid-part1-mpath-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-4
lrwxrwxrwx. 1 root root 10 Sep 18 14:32 dm-uuid-VDO-472035cc-8d2b-40ac-afe9-fa60b62a887f -> ../../dm-5
lrwxrwxrwx. 1 root root 10 Sep 18 14:32 lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC -> ../../dm-5
lrwxrwxrwx. 1 root root 10 Sep 18 22:40 lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1 -> ../../sda2
lrwxrwxrwx. 1 root root 13 Sep 18 14:32 nvme-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../nvme0n1
lrwxrwxrwx. 1 root root 15 Sep 18 14:32 nvme-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001-part1 -> ../../nvme0n1p1
lrwxrwxrwx. 1 root root 13 Sep 18 14:32 nvme-SPCC_M.2_PCIe_SSD_AA000000000000002458 -> ../../nvme0n1
lrwxrwxrwx. 1 root root 15 Sep 18 14:32 nvme-SPCC_M.2_PCIe_SSD_AA000000000000002458-part1 -> ../../nvme0n1p1
lrwxrwxrwx. 1 root root  9 Sep 18 22:40 scsi-0ATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN -> ../../sda
lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-0ATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-0ATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part2 -> ../../sda2
lrwxrwxrwx. 1 root root  9 Sep 18 14:32 scsi-0ATA_Micron_1100_MTFD_17401F699137 -> ../../sdb
lrwxrwxrwx. 1 root root  9 Sep 18 22:40 scsi-0ATA_WDC_WDS100T2B0B-_183533804564 -> ../../sdc
lrwxrwxrwx. 1 root root  9 Sep 18 22:40 scsi-1ATA_INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN -> ../../sda
lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-1ATA_INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-1ATA_INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part2 -> ../../sda2
lrwxrwxrwx. 1 root root  9 Sep 18 14:32 scsi-1ATA_Micron_1100_MTFDDAV512TBN_17401F699137 -> ../../sdb
lrwxrwxrwx. 1 root root  9 Sep 18 22:40 scsi-1ATA_WDC_WDS100T2B0B-00YS70_183533804564 -> ../../sdc
lrwxrwxrwx. 1 root root  9 Sep 18 22:40 scsi-35001b448b9608d90 -> ../../sdc
lrwxrwxrwx. 1 root root  9 Sep 18 14:32 scsi-3500a07511f699137 -> ../../sdb
lrwxrwxrwx. 1 root root  9 Sep 18 22:40 scsi-355cd2e404b581cc0 -> ../../sda
lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-355cd2e404b581cc0-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-355cd2e404b581cc0-part2 -> ../../sda2
lrwxrwxrwx. 1 root root 10 Sep 18 23:35 scsi-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-3
lrwxrwxrwx. 1 root root 10 Sep 18 23:49 scsi-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001-part1 -> ../../dm-4
lrwxrwxrwx. 1 root root  9 Sep 18 22:40 scsi-SATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN -> ../../sda
lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-SATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-SATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part2 -> ../../sda2
lrwxrwxrwx. 1 root root  9 Sep 18 14:32 scsi-SATA_Micron_1100_MTFD_17401F699137 -> ../../sdb
lrwxrwxrwx. 1 root root  9 Sep 18 22:40 scsi-SATA_WDC_WDS100T2B0B-_183533804564 -> ../../sdc
lrwxrwxrwx. 1 root root  9 Sep 18 22:40 wwn-0x5001b448b9608d90 -> ../../sdc
lrwxrwxrwx. 1 root root  9 Sep 18 14:32 wwn-0x500a07511f699137 -> ../../sdb
lrwxrwxrwx. 1 root root  9 Sep 18 22:40 wwn-0x55cd2e404b581cc0 -> ../../sda
lrwxrwxrwx. 1 root root 10 Sep 18 22:40 wwn-0x55cd2e404b581cc0-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Sep 18 22:40 wwn-0x55cd2e404b581cc0-part2 -> ../../sda2
lrwxrwxrwx. 1 root root 10 Sep 18 23:35 wwn-0xvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-3
lrwxrwxrwx. 1 root root 10 Sep 18 23:49 wwn-0xvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001-part1 -> ../../dm-4
lrwxrwxrwx. 1 root root 15 Sep 18 14:32 wwn-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001-part1 -> ../../nvme0n1p1

So the filter has three entries: two accepted PVs and a reject-everything rule. The two accepted PVs map to:

lvm-pv-uuid-e1fvwo... -> dm-5 -> vdo_sdb (used by HCI for all three Gluster base volumes)
lvm-pv-uuid-mr9awW... -> sda2 -> boot volume

[root@odin ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 74.5G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 73.5G 0 part
  ├─cl-root 253:0 0 44.4G 0 lvm /
  ├─cl-swap 253:1 0 7.5G 0 lvm [SWAP]
  └─cl-home 253:2 0 21.7G 0 lvm /home
sdb 8:16 0 477G 0 disk
└─vdo_sdb 253:5 0 2.1T 0 vdo
  ├─gluster_vg_sdb-gluster_lv_engine 253:6 0 100G 0 lvm /gluster_bricks/engine
  ├─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb_tmeta 253:7 0 1G 0 lvm
  │ └─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb-tpool 253:9 0 2T 0 lvm
  │   ├─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb 253:10 0 2T 1 lvm
  │   ├─gluster_vg_sdb-gluster_lv_data 253:11 0 1000G 0 lvm /gluster_bricks/data
  │   └─gluster_vg_sdb-gluster_lv_vmstore 253:12 0 1000G 0 lvm /gluster_bricks/vmstore
  └─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb_tdata 253:8 0 2T 0 lvm
    └─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb-tpool 253:9 0 2T 0 lvm
      ├─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb 253:10 0 2T 1 lvm
      ├─gluster_vg_sdb-gluster_lv_data 253:11 0 1000G 0 lvm /gluster_bricks/data
      └─gluster_vg_sdb-gluster_lv_vmstore 253:12 0 1000G 0 lvm /gluster_bricks/vmstore
sdc 8:32 0 931.5G 0 disk
nvme0n1 259:0 0 953.9G 0 disk
├─nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 253:3 0 953.9G 0 mpath
│ └─nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001p1 253:4 0 953.9G 0 part
└─nvme0n1p1

So I don't think it is LVM itself that is filtering things.

Multipath is showing weird treatment of the NVMe drive, but that is outside this conversation:

[root@odin ~]# multipath -l
nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 dm-3 NVME,SPCC M.2 PCIe SSD
size=954G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
  `- 0:1:1:1 nvme0n1 259:0 active undef running
[root@odin ~]#

Where is it getting this filter? I have run gdisk /dev/sdc (a new 1 TB drive) and it shows no partitions. I even did a full dd if=/dev/zero of the disk, and no change.

I reloaded the OS on the system to get through the wizard setup. Now that all three nodes are in the HCI cluster, all six drives (2 x 1 TB in each server) are locked from any use by this filter error.

Ideas?

-- jeremey.wise@gmail.com
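A quicker way to see what the two accepted by-id entries in that filter point at, assuming the same paths as in the listing above, is to resolve the symlinks directly (expected targets taken from the ls output):

readlink -f /dev/disk/by-id/lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC   # -> /dev/dm-5 (vdo_sdb)
readlink -f /dev/disk/by-id/lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1   # -> /dev/sda2 (boot PV)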

Hey! Can you try editing the LVM filter and including sdc (and, on the other server, its multipath device) in it? I see that it is missing, hence the error that sdc is excluded. Add "a|^/dev/sdc$|" to the LVM filter and try again; a sketch of the resulting filter line follows the quoted message below. Thanks

On Mon, Sep 21, 2020 at 11:34 AM Jeremey Wise <jeremey.wise@gmail.com> wrote:
vdo: ERROR - Device /dev/sdc excluded by a filter
Other server vdo: ERROR - Device /dev/mapper/nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001p1 excluded by a filter.
On all systems, when I go to create a VDO volume on blank drives, I get this filter error. All disks outside of the HCI wizard setup are now blocked from being used for a new Gluster volume group.

Here is what I see in /etc/lvm/lvm.conf:

[root@odin ~]# cat /etc/lvm/lvm.conf |grep filter
filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC$|", "a|^/dev/disk/by-id/lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1$|", "r|.*|"]
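For illustration, a sketch of what the amended filter line in /etc/lvm/lvm.conf could look like on this host. The accept ("a") rules must come before the final reject-all rule, because LVM uses the first pattern that matches; the other server would need a matching "a|...|" entry for its /dev/mapper/nvme... device instead of /dev/sdc:

filter = ["a|^/dev/sdc$|", "a|^/dev/disk/by-id/lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC$|", "a|^/dev/disk/by-id/lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1$|", "r|.*|"]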

On Saturday, 19 September 2020 at 5:58:43 CEST, Jeremey Wise wrote:
vdo: ERROR - Device /dev/sdc excluded by a filter
When does this error happen? When you install oVirt HCI?
Where is it getting this filter? I have run gdisk /dev/sdc (a new 1 TB drive) and it shows no partitions. I even did a full dd if=/dev/zero and no change.
It's installed by vdsm to exclude oVirt-managed devices from common use.

On Mon, Sep 21, 2020 at 9:02 AM Jeremey Wise <jeremey.wise@gmail.com> wrote:
vdo: ERROR - Device /dev/sdc excluded by a filter
Other server
vdo: ERROR - Device /dev/mapper/nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001p1 excluded by a filter.
On all systems, when I go to create a VDO volume on blank drives, I get this filter error. All disks outside of the HCI wizard setup are now blocked from being used for a new Gluster volume group.

Here is what I see in /etc/lvm/lvm.conf:

[root@odin ~]# cat /etc/lvm/lvm.conf |grep filter
filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC$|", "a|^/dev/disk/by-id/lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1$|", "r|.*|"]
This filter is correct for a normal oVirt host. But gluster wants to use more local disks, so you should:

1. remove the lvm filter
2. configure gluster
3. create the lvm filter

This will create a filter including all the mounted logical volumes created by gluster.

Can you explain how you reproduce this? The lvm filter is created when you add a host to engine. Did you add the host to engine before configuring gluster? Or maybe you are trying to add a host that was used previously by oVirt?

In the latter case, removing the filter before installing gluster will fix the issue.

Nir
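A minimal sketch of those three steps on one host, assuming the filter line is the vdsm-generated one quoted above and that vdsm-tool is available to rebuild it afterwards:

# 1. remove (or comment out) the oVirt lvm filter
sed -i.bak 's/^filter = /# filter = /' /etc/lvm/lvm.conf

# 2. configure gluster (VDO volumes, bricks, gluster volumes) while no filter is in place

# 3. recreate the lvm filter; it now includes the mounted gluster logical volumes
vdsm-tool config-lvm-filter -y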

Old system: three servers, CentOS 7 -> lay down VDO (dedup / compression), add those VDO volumes as bricks to gluster.

New cluster (removed boot drives and wiped all data drives).

Goal: use the first 512 GB drives to ignite the cluster, get things on their feet, and stage infrastructure services. Then use one of the 1 TB drives in each server for my "production" volume, and the second 1 TB drive in each server as staging. I want to be able to "learn" and not lose days / weeks of data, so at the disk level I would rather give up capacity than end up with "oh well, that messed up, rebuild."

After a minimal install and network setup, I ran the HCI wizard. It failed various times along the build: SELinux not set to permissive, and I had not wiped the 1 TB drives, hoping to import the old Gluster file system / VDO volumes and with them my five or six custom and important VMs (OCP cluster bootstrap environment, Plex servers, DNS / DHCP / proxy HA cluster nodes, etc.).

I gave up after too many HCI failures about the disks, so I wiped the drives (I will use an external NAS to repopulate the important VMs, or so is the plan; see my other posting about not being able to import qcow2 images / XML :P ).

I ran into the next batch of issues using the true device IDs: the names are too long. But /dev/sd? makes me nervous, as I have seen many systems with issues when they use this old and should-be-deprecated way of addressing disks; better to use a UUID or raw ID like "/dev/disk/by-id/ata-Samsung_SSD_850_PRO_512GB_S250NXAGA15787L".

Then I started getting errors about HCI failing with "excluded by filter" errors. I wiped the drives (gdisk /dev/sd? => x => z => y => y). I could not figure out what the filter errors were; an error about a filter, to me, meant "you have one, remove it so I can use the drive." I did a full dd if=/dev/zero of=/dev/sd?, still the same issue. I blacklisted in multipath just for grins, still the same issue.

I posted to forums, nobody had ideas: https://forums.centos.org/viewtopic.php?f=54&t=75687 I posted to the gluster Slack channel; they looked at it and could not figure it out.

I wiped the systems and started over. This time the HCI wizard deployed. My guess is that once I polished the setup so the wizard did not run before SELinux was set to permissive (vs disabled) and all drives were wiped, it worked (even though the other drives SHOULD just have been ignored; I think VDO scanned the drives, saw a VDO definition on them, and freaked some Ansible wizard script out).

Now the cluster is up, but I then went to add "production" gluster + VDO and "staging" gluster + VDO volumes, and am having these issues. Sorry for the long back story, but I think it adds color to the issues.

My thoughts as to the root issues:
1) The HCI wizard has issues using only the drives it is told to and ignoring other data drives in the system; VDO is an example. I saw notes about a failed attempt, but it should not have touched that volume, just used the one it needed and ignored the rest.
2) HCI wizard bug of ignoring the user-set /dev/sd? for each server; again, another failed attempt where cleanup may not have run (noted this in my posting about the manual edit and apply button :P ).
3) HCI wizard bug with the names I was using (device ID vs /dev/sd?, which is IMO bad form, but the names were too long); again, another cleanup where things may not have fully cleaned up, or I forgot to click clean, and the system was left in a non-pristine state.
4) The HCI wizard does NOT clean itself up properly if it fails; or when I ran cleanup, maybe it did not complete and I closed the wizard, which created this orphaned state.
5) HCI setup and post-setup need to add the filtering.

With a perfect and pristine process it ran, but only once all the other learning and requirements to get it just right were set up first. oVirt HCI is soooo very close to being a great platform: well thought out and production class. It just needs some more nerds beating on it to find these cracks and get the GUI and setup polished.

My $0.002

On Mon, Sep 21, 2020 at 8:06 AM Nir Soffer <nsoffer@redhat.com> wrote:
This filter is correct for a normal oVirt host. But gluster wants to use more local disks, so you should:
1. remove the lvm filter 2. configure gluster 3. create the lvm filter
This will create a filter including all the mounted logical volumes created by gluster.

On Mon, Sep 21, 2020 at 3:30 PM Jeremey Wise <jeremey.wise@gmail.com> wrote:
Started getting errors about HCI failing with "excluded by filter" errors.
I'm not sure I follow your long story, but this error is caused by too strict lvm filter in /etc/lvm/lvm.conf. Edit this file and remove the line that looks like this:

filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-80ovnb-mZIO-J65Y-rl9n-YAY7-h0Q9-Aezk8D$|", "r|.*|"]

Then install gluster, it will stop complaining about the filter.

At the end of the installation, you are going to add the hosts to engine. At this point a new lvm filter will be created, considering all the mounted logical volumes.

Maybe gluster setup should warn about lvm filter or remove it before the installation.
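A rough sketch of how to check the result before and after editing the file (the pvcreate dry run only tests the filter, it does not write anything to the disk):

# is an lvm filter still set?
grep -n '^filter' /etc/lvm/lvm.conf

# dry run: once the filter is removed, this should no longer report
# "Device /dev/sdc excluded by a filter"
pvcreate --test /dev/sdc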
On Mon, Sep 21, 2020 at 8:06 AM Nir Soffer <nsoffer@redhat.com> wrote:
On Mon, Sep 21, 2020 at 9:02 AM Jeremey Wise <jeremey.wise@gmail.com> wrote:
vdo: ERROR - Device /dev/sdc excluded by a filter
Other server
vdo: ERROR - Device /dev/mapper/nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001p1 excluded by a filter.
All systems when I go to create VDO volume on blank drives.. I get this filter error. All disk outside of the HCI wizard setup are now blocked from creating new Gluster volume group.
Here is what I see in /dev/lvm/lvm.conf |grep filter [root@odin ~]# cat /etc/lvm/lvm.conf |grep filter filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC$|", "a|^/dev/disk/by-id/lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1$|", "r|.*|"]
This filter is correct for a normal oVirt host. But gluster wants to use more local disks, so you should:
1. remove the lvm filter 2. configure gluster 3. create the lvm filter
This will create a filter including all the mounted logical volumes created by gluster.
Can you explain how do you reproduce this?
The lvm filter is created when you add a host to engine. Did you add the host to engine before configuring gluster? Or maybe you are trying to add a host that was used previously by oVirt?
In the last case, removing the filter before installing gluster will fix the issue.
Nir
[root@odin ~]# ls -al /dev/disk/by-id/ total 0 drwxr-xr-x. 2 root root 1220 Sep 18 14:32 . drwxr-xr-x. 6 root root 120 Sep 18 14:32 .. lrwxrwxrwx. 1 root root 9 Sep 18 22:40 ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN -> ../../sda lrwxrwxrwx. 1 root root 10 Sep 18 22:40 ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part1 -> ../../sda1 lrwxrwxrwx. 1 root root 10 Sep 18 22:40 ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part2 -> ../../sda2 lrwxrwxrwx. 1 root root 9 Sep 18 14:32 ata-Micron_1100_MTFDDAV512TBN_17401F699137 -> ../../sdb lrwxrwxrwx. 1 root root 9 Sep 18 22:40 ata-WDC_WDS100T2B0B-00YS70_183533804564 -> ../../sdc lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-name-cl-home -> ../../dm-2 lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-name-cl-root -> ../../dm-0 lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-name-cl-swap -> ../../dm-1 lrwxrwxrwx. 1 root root 11 Sep 18 16:40 dm-name-gluster_vg_sdb-gluster_lv_data -> ../../dm-11 lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-name-gluster_vg_sdb-gluster_lv_engine -> ../../dm-6 lrwxrwxrwx. 1 root root 11 Sep 18 16:40 dm-name-gluster_vg_sdb-gluster_lv_vmstore -> ../../dm-12 lrwxrwxrwx. 1 root root 10 Sep 18 23:35 dm-name-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-3 lrwxrwxrwx. 1 root root 10 Sep 18 23:49 dm-name-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001p1 -> ../../dm-4 lrwxrwxrwx. 1 root root 10 Sep 18 14:32 dm-name-vdo_sdb -> ../../dm-5 lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADc49gc6PWLRBCoJ2B3JC9tDJejyx5eDPT -> ../../dm-1 lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADOMNJfgcat9ZLOpcNO7FyG8ixcl5s93TU -> ../../dm-2 lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADzqPGk0yTQ19FIqgoAfsCxWg7cDMtl71r -> ../../dm-0 lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOq6Om5comvRFWJDbtVZAKtE5YGl4jciP9 -> ../../dm-6 lrwxrwxrwx. 1 root root 11 Sep 18 16:40 dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOqVheASEgerWSEIkjM1BR3us3D9ekHt0L -> ../../dm-11 lrwxrwxrwx. 1 root root 11 Sep 18 16:40 dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOQz6vXuivIfup6cquKAjPof8wIGOSe4Vz -> ../../dm-12 lrwxrwxrwx. 1 root root 10 Sep 18 23:35 dm-uuid-mpath-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-3 lrwxrwxrwx. 1 root root 10 Sep 18 23:49 dm-uuid-part1-mpath-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-4 lrwxrwxrwx. 1 root root 10 Sep 18 14:32 dm-uuid-VDO-472035cc-8d2b-40ac-afe9-fa60b62a887f -> ../../dm-5 lrwxrwxrwx. 1 root root 10 Sep 18 14:32 lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC -> ../../dm-5 lrwxrwxrwx. 1 root root 10 Sep 18 22:40 lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1 -> ../../sda2 lrwxrwxrwx. 1 root root 13 Sep 18 14:32 nvme-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../nvme0n1 lrwxrwxrwx. 1 root root 15 Sep 18 14:32 nvme-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001-part1 -> ../../nvme0n1p1 lrwxrwxrwx. 1 root root 13 Sep 18 14:32 nvme-SPCC_M.2_PCIe_SSD_AA000000000000002458 -> ../../nvme0n1 lrwxrwxrwx. 1 root root 15 Sep 18 14:32 nvme-SPCC_M.2_PCIe_SSD_AA000000000000002458-part1 -> ../../nvme0n1p1 lrwxrwxrwx. 
1 root root 9 Sep 18 22:40 scsi-0ATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN -> ../../sda lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-0ATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part1 -> ../../sda1 lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-0ATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part2 -> ../../sda2 lrwxrwxrwx. 1 root root 9 Sep 18 14:32 scsi-0ATA_Micron_1100_MTFD_17401F699137 -> ../../sdb lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-0ATA_WDC_WDS100T2B0B-_183533804564 -> ../../sdc lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-1ATA_INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN -> ../../sda lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-1ATA_INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part1 -> ../../sda1 lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-1ATA_INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part2 -> ../../sda2 lrwxrwxrwx. 1 root root 9 Sep 18 14:32 scsi-1ATA_Micron_1100_MTFDDAV512TBN_17401F699137 -> ../../sdb lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-1ATA_WDC_WDS100T2B0B-00YS70_183533804564 -> ../../sdc lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-35001b448b9608d90 -> ../../sdc lrwxrwxrwx. 1 root root 9 Sep 18 14:32 scsi-3500a07511f699137 -> ../../sdb lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-355cd2e404b581cc0 -> ../../sda lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-355cd2e404b581cc0-part1 -> ../../sda1 lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-355cd2e404b581cc0-part2 -> ../../sda2 lrwxrwxrwx. 1 root root 10 Sep 18 23:35 scsi-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-3 lrwxrwxrwx. 1 root root 10 Sep 18 23:49 scsi-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001-part1 -> ../../dm-4 lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-SATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN -> ../../sda lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-SATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part1 -> ../../sda1 lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-SATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part2 -> ../../sda2 lrwxrwxrwx. 1 root root 9 Sep 18 14:32 scsi-SATA_Micron_1100_MTFD_17401F699137 -> ../../sdb lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-SATA_WDC_WDS100T2B0B-_183533804564 -> ../../sdc lrwxrwxrwx. 1 root root 9 Sep 18 22:40 wwn-0x5001b448b9608d90 -> ../../sdc lrwxrwxrwx. 1 root root 9 Sep 18 14:32 wwn-0x500a07511f699137 -> ../../sdb lrwxrwxrwx. 1 root root 9 Sep 18 22:40 wwn-0x55cd2e404b581cc0 -> ../../sda lrwxrwxrwx. 1 root root 10 Sep 18 22:40 wwn-0x55cd2e404b581cc0-part1 -> ../../sda1 lrwxrwxrwx. 1 root root 10 Sep 18 22:40 wwn-0x55cd2e404b581cc0-part2 -> ../../sda2 lrwxrwxrwx. 1 root root 10 Sep 18 23:35 wwn-0xvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-3 lrwxrwxrwx. 1 root root 10 Sep 18 23:49 wwn-0xvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001-part1 -> ../../dm-4 lrwxrwxrwx. 1 root root 15 Sep 18 14:32 wwn-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001-part1 -> ../../nvme0n1p1 [root@odin ~]# ls -al /dev/disk/by-id/
So the filter accepts two devices:
lvm-pv-uuid-e1fvwo.... -> dm-5 -> vdo_sdb (used by HCI for all three gluster base volumes)
lvm-pv-uuid-mr9awW... -> sda2 -> boot volume
[root@odin ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 74.5G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 73.5G 0 part
  ├─cl-root 253:0 0 44.4G 0 lvm /
  ├─cl-swap 253:1 0 7.5G 0 lvm [SWAP]
  └─cl-home 253:2 0 21.7G 0 lvm /home
sdb 8:16 0 477G 0 disk
└─vdo_sdb 253:5 0 2.1T 0 vdo
  ├─gluster_vg_sdb-gluster_lv_engine 253:6 0 100G 0 lvm /gluster_bricks/engine
  ├─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb_tmeta 253:7 0 1G 0 lvm
  │ └─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb-tpool 253:9 0 2T 0 lvm
  │   ├─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb 253:10 0 2T 1 lvm
  │   ├─gluster_vg_sdb-gluster_lv_data 253:11 0 1000G 0 lvm /gluster_bricks/data
  │   └─gluster_vg_sdb-gluster_lv_vmstore 253:12 0 1000G 0 lvm /gluster_bricks/vmstore
  └─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb_tdata 253:8 0 2T 0 lvm
    └─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb-tpool 253:9 0 2T 0 lvm
      ├─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb 253:10 0 2T 1 lvm
      ├─gluster_vg_sdb-gluster_lv_data 253:11 0 1000G 0 lvm /gluster_bricks/data
      └─gluster_vg_sdb-gluster_lv_vmstore 253:12 0 1000G 0 lvm /gluster_bricks/vmstore
sdc 8:32 0 931.5G 0 disk
nvme0n1 259:0 0 953.9G 0 disk
├─nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 253:3 0 953.9G 0 mpath
│ └─nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001p1 253:4 0 953.9G 0 part
└─nvme0n1p1
So I don't think this is LVM filtering things out.
Multipath is showing weird treatment of the NVMe drive, but that is outside this conversation:
[root@odin ~]# multipath -l
nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 dm-3 NVME,SPCC M.2 PCIe SSD
size=954G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
  `- 0:1:1:1 nvme0n1 259:0 active undef running
[root@odin ~]#
Where is it getting this filter? I have run gdisk against /dev/sdc (a new 1TB drive) and it shows no partitions. I even did a full dd if=/dev/zero over the disk and there was no change.
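If it helps to pin down where the rejection comes from, LVM can be asked directly: dump the effective filter and dry-run a PV create against the disk. These are illustrative commands reusing the /dev/sdc from above; if the lvm.conf filter is the cause, the dry run should fail with the same "excluded by a filter" message without touching the disk.

lvmconfig devices/filter        # the filter LVM actually applies, after all config merging
pvcreate --test /dev/sdc        # dry run only; nothing is written to the disk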
I reloaded the OS on the system to get through the wizard setup. Now that all three nodes are in the HCI cluster, all six drives (2 x 1TB in each server) are locked from any use by this filter error.
Ideas?
--
jeremey.wise@gmail.com
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/JIG3DCS72QCYYY...

Why is your NVMe under multipath? That doesn't make sense at all. I have modified my multipath.conf to block all local disks. Also, don't forget the '# VDSM PRIVATE' line somewhere near the top of the file.

Best Regards,
Strahil Nikolov
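To make the "block all local disks" idea concrete, here is a minimal sketch of such a multipath.conf edit. The wwid values are placeholders (not from any host in this thread) and the header lines the vdsm-generated file carries vary by version, so treat this as an assumption-laden illustration, not the exact file referred to above.

# /etc/multipath.conf (sketch)
# VDSM PRIVATE
# keeping the '# VDSM PRIVATE' marker near the top tells vdsm not to
# regenerate the file and wipe local edits
blacklist {
    wwid "eui.0123456789abcdef"      # placeholder: local NVMe disk
    wwid "5000000000000000"          # placeholder: local SATA SSD
}

Run 'multipathd reconfigure' afterwards for the blacklist to take effect.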

Agreed about an NVMe card being put under mpath control. I have not even gotten to that volume / issue. My guess is something weird in the CentOS 4.18.0-193.19.1.el8_2.x86_64 kernel with NVMe block devices. I will post once I cross the bridge of getting the standard SSD volumes working.
-- jeremey.wise@gmail.com

On Tue, Sep 22, 2020 at 1:50 AM Jeremey Wise <jeremey.wise@gmail.com> wrote:
Agree about an NVMe Card being put under mpath control.
NVMe can be used via multipath; this is a new feature added in RHEL 8.1: https://bugzilla.redhat.com/1498546

Of course, when the NVMe device is local there is no point in using it via multipath. To avoid this, you need to blacklist the device like this:

1. Find the device wwid. For NVMe, you need the device ID_WWN:

   $ udevadm info -q property /dev/nvme0n1 | grep ID_WWN
   ID_WWN=eui.5cd2e42a81a11f69

2. Add a local blacklist file:

   $ mkdir /etc/multipath/conf.d
   $ cat /etc/multipath/conf.d/local.conf
   blacklist {
       wwid "eui.5cd2e42a81a11f69"
   }

3. Reconfigure multipath:

   $ multipathd reconfigure

Gluster should do this for you automatically during installation, but if it does not, you can do it manually.
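A quick sanity check after step 3 (not part of the steps above, just a suggested verification):

$ multipath -ll | grep -i nvme || echo "no NVMe multipath maps"
$ lsblk /dev/nvme0n1      # expect nvme0n1/nvme0n1p1 with no mpath device stacked on top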

Obtaining the wwid that way is not exactly correct. You can identify them via:

multipath -v4 | grep 'got wwid of'

Short example:

[root@ovirt conf.d]# multipath -v4 | grep 'got wwid of'
Sep 22 22:55:58 | nvme0n1: got wwid of 'nvme.1cc1-324a31313230303131343036-414441544120535838323030504e50-00000001'
Sep 22 22:55:58 | sda: got wwid of 'TOSHIBA-TR200_Z7KB600SK46S'
Sep 22 22:55:58 | sdb: got wwid of 'ST500NM0011_Z1M00LM7'
Sep 22 22:55:58 | sdc: got wwid of 'WDC_WD5003ABYX-01WERA0_WD-WMAYP2303189'
Sep 22 22:55:58 | sdd: got wwid of 'WDC_WD15EADS-00P8B0_WD-WMAVU0115133'

Of course, if you are planning to use only gluster it could be far easier to set:

[root@ovirt conf.d]# cat /etc/multipath/conf.d/blacklist.conf
blacklist {
    devnode "*"
}

Best Regards,
Strahil Nikolov
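Tying the two suggestions together, a per-host drop-in that blacklists those local disks by wwid might look like the sketch below. The wwids are copied from the example host above, so on odin the values would differ; the file name is arbitrary.

# /etc/multipath/conf.d/local-disks.conf (sketch)
blacklist {
    wwid "nvme.1cc1-324a31313230303131343036-414441544120535838323030504e50-00000001"
    wwid "TOSHIBA-TR200_Z7KB600SK46S"
    wwid "ST500NM0011_Z1M00LM7"
    wwid "WDC_WD5003ABYX-01WERA0_WD-WMAYP2303189"
    wwid "WDC_WD15EADS-00P8B0_WD-WMAVU0115133"
}
# then: multipathd reconfigure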

Correct on the wwid. I do want to make clear here that to get around the error you must ADD (not remove) drives in the /etc/lvm/lvm.conf filter so the oVirt Gluster wizard can complete setup of the drives.

[root@thor log]# cat /etc/lvm/lvm.conf | grep filter
# Broken for gluster in oVirt
#filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-AAHPao-R62q-8aac-410x-ZdA7-UL4i-Bh2bwJ$|", "a|^/dev/disk/by-id/lvm-pv-uuid-bSnFU3-jtUj-AGds-07sw-zdYC-52fM-mujuvC$|", "r|.*|"]
# working for gluster wizard in oVirt
filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-AAHPao-R62q-8aac-410x-ZdA7-UL4i-Bh2bwJ$|", "a|^/dev/disk/by-id/lvm-pv-uuid-bSnFU3-jtUj-AGds-07sw-zdYC-52fM-mujuvC$|", "a|^/dev/disk/by-id/wwn-0x5001b448b847be41$|", "r|.*|"]
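For the blank 1TB disk on odin, the equivalent edit would presumably use one of its stable /dev/disk/by-id names from the listing earlier in this thread. A sketch, where wwn-0x5001b448b9608d90 is simply sdc's wwn from that listing and is shown only as an example:

ls -l /dev/disk/by-id/ | grep -w sdc      # e.g. wwn-0x5001b448b9608d90
# add one more accept entry before the final reject rule in /etc/lvm/lvm.conf:
#   "a|^/dev/disk/by-id/wwn-0x5001b448b9608d90$|"
# then retry the VDO / Gluster volume creation. Recent vdsm releases also ship
# 'vdsm-tool config-lvm-filter' to recompute this filter; whether a given build
# has it is an assumption worth checking with 'vdsm-tool --help'.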
Obtaining the wwid is not exactly correct. You can identify them via:
multipath -v4 | grep 'got wwid of'
Short example: [root@ovirt conf.d]# multipath -v4 | grep 'got wwid of' Sep 22 22:55:58 | nvme0n1: got wwid of 'nvme.1cc1-324a31313230303131343036-414441544120535838323030504e50-00000001' Sep 22 22:55:58 | sda: got wwid of 'TOSHIBA-TR200_Z7KB600SK46S' Sep 22 22:55:58 | sdb: got wwid of 'ST500NM0011_Z1M00LM7' Sep 22 22:55:58 | sdc: got wwid of 'WDC_WD5003ABYX-01WERA0_WD-WMAYP2303189' Sep 22 22:55:58 | sdd: got wwid of 'WDC_WD15EADS-00P8B0_WD-WMAVU0115133'
Of course if you are planing to use only gluster it could be far easier to set:
[root@ovirt conf.d]# cat /etc/multipath/conf.d/blacklist.conf blacklist { devnode "*" }
Best Regards, Strahil Nikolov
В вторник, 22 септември 2020 г., 22:12:21 Гринуич+3, Nir Soffer < nsoffer@redhat.com> написа:
On Tue, Sep 22, 2020 at 1:50 AM Jeremey Wise <jeremey.wise@gmail.com> wrote:
Agree about an NVMe Card being put under mpath control.
NVMe can be used via multipath, this is a new feature added in RHEL 8.1: https://bugzilla.redhat.com/1498546
Of course when the NVMe device is local there is no point to use it via multipath. To avoid this, you need to blacklist the devices like this:
1. Find the device wwid
For NVMe, you need the device ID_WWN:
$ udevadm info -q property /dev/nvme0n1 | grep ID_WWN ID_WWN=eui.5cd2e42a81a11f69
2. Add local blacklist file:
$ mkdir /etc/multipath/conf.d $ cat /etc/multipath/conf.d/local.conf blacklist { wwid "eui.5cd2e42a81a11f69" }
3. Reconfigure multipath
$ multipathd reconfigure
Gluster should do this for you automatically during installation, but it does not you can do this manually.
I have not even gotten to that volume / issue. My guess is something weird in CentOS / 4.18.0-193.19.1.el8_2.x86_64 kernel with NVMe block devices.
I will post once I cross bridge of getting standard SSD volumes working
On Mon, Sep 21, 2020 at 4:12 PM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
Why is your NVME under multipath ? That doesn't make sense at all . I have modified my multipath.conf to block all local disks . Also
,don't forget the '# VDSM PRIVATE' line somewhere in the top of the file.
Best Regards, Strahil Nikolov
В понеделник, 21 септември 2020 г., 09:04:28 Гринуич+3, Jeremey Wise <
jeremey.wise@gmail.com> написа:
vdo: ERROR - Device /dev/sdc excluded by a filter
Other server vdo: ERROR - Device
/dev/mapper/nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001p1 excluded by a filter.
All systems when I go to create VDO volume on blank drives.. I get this
filter error. All disk outside of the HCI wizard setup are now blocked from creating new Gluster volume group.
Here is what I see in /dev/lvm/lvm.conf |grep filter [root@odin ~]# cat /etc/lvm/lvm.conf |grep filter filter =
["a|^/dev/disk/by-id/lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC$|", "a|^/dev/disk/by-id/lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1$|", "r|.*|"]
[root@odin ~]# ls -al /dev/disk/by-id/ total 0 drwxr-xr-x. 2 root root 1220 Sep 18 14:32 . drwxr-xr-x. 6 root root 120 Sep 18 14:32 .. lrwxrwxrwx. 1 root root 9 Sep 18 22:40
lrwxrwxrwx. 1 root root 10 Sep 18 22:40 ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part1 -> ../../sda1 lrwxrwxrwx. 1 root root 10 Sep 18 22:40 ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part2 -> ../../sda2 lrwxrwxrwx. 1 root root 9 Sep 18 14:32 ata-Micron_1100_MTFDDAV512TBN_17401F699137 -> ../../sdb lrwxrwxrwx. 1 root root 9 Sep 18 22:40 ata-WDC_WDS100T2B0B-00YS70_183533804564 -> ../../sdc lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-name-cl-home -> ../../dm-2 lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-name-cl-root -> ../../dm-0 lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-name-cl-swap -> ../../dm-1 lrwxrwxrwx. 1 root root 11 Sep 18 16:40 dm-name-gluster_vg_sdb-gluster_lv_data -> ../../dm-11 lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-name-gluster_vg_sdb-gluster_lv_engine -> ../../dm-6 lrwxrwxrwx. 1 root root 11 Sep 18 16:40 dm-name-gluster_vg_sdb-gluster_lv_vmstore -> ../../dm-12 lrwxrwxrwx. 1 root root 10 Sep 18 23:35 dm-name-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-3 lrwxrwxrwx. 1 root root 10 Sep 18 23:49 dm-name-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001p1 -> ../../dm-4 lrwxrwxrwx. 1 root root 10 Sep 18 14:32 dm-name-vdo_sdb -> ../../dm-5 lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADc49gc6PWLRBCoJ2B3JC9tDJejyx5eDPT -> ../../dm-1 lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADOMNJfgcat9ZLOpcNO7FyG8ixcl5s93TU -> ../../dm-2 lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADzqPGk0yTQ19FIqgoAfsCxWg7cDMtl71r -> ../../dm-0 lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOq6Om5comvRFWJDbtVZAKtE5YGl4jciP9 -> ../../dm-6 lrwxrwxrwx. 1 root root 11 Sep 18 16:40 dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOqVheASEgerWSEIkjM1BR3us3D9ekHt0L -> ../../dm-11 lrwxrwxrwx. 1 root root 11 Sep 18 16:40 dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOQz6vXuivIfup6cquKAjPof8wIGOSe4Vz -> ../../dm-12 lrwxrwxrwx. 1 root root 10 Sep 18 23:35 dm-uuid-mpath-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-3 lrwxrwxrwx. 1 root root 10 Sep 18 23:49 dm-uuid-part1-mpath-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-4 lrwxrwxrwx. 1 root root 10 Sep 18 14:32 dm-uuid-VDO-472035cc-8d2b-40ac-afe9-fa60b62a887f -> ../../dm-5 lrwxrwxrwx. 1 root root 10 Sep 18 14:32 lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC -> ../../dm-5 lrwxrwxrwx. 1 root root 10 Sep 18 22:40 lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1 -> ../../sda2 lrwxrwxrwx. 1 root root 13 Sep 18 14:32 nvme-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../nvme0n1 lrwxrwxrwx. 1 root root 15 Sep 18 14:32 nvme-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001-part1 -> ../../nvme0n1p1 lrwxrwxrwx. 1 root root 13 Sep 18 14:32 nvme-SPCC_M.2_PCIe_SSD_AA000000000000002458 -> ../../nvme0n1 lrwxrwxrwx. 1 root root 15 Sep 18 14:32 nvme-SPCC_M.2_PCIe_SSD_AA000000000000002458-part1 -> ../../nvme0n1p1 lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-0ATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN -> ../../sda lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-0ATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part1 -> ../../sda1 lrwxrwxrwx. 
1 root root 10 Sep 18 22:40 scsi-0ATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part2 -> ../../sda2 lrwxrwxrwx. 1 root root 9 Sep 18 14:32 scsi-0ATA_Micron_1100_MTFD_17401F699137 -> ../../sdb lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-0ATA_WDC_WDS100T2B0B-_183533804564 -> ../../sdc lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-1ATA_INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN -> ../../sda lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-1ATA_INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part1 -> ../../sda1 lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-1ATA_INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part2 -> ../../sda2 lrwxrwxrwx. 1 root root 9 Sep 18 14:32 scsi-1ATA_Micron_1100_MTFDDAV512TBN_17401F699137 -> ../../sdb lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-1ATA_WDC_WDS100T2B0B-00YS70_183533804564 -> ../../sdc lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-35001b448b9608d90 -> ../../sdc lrwxrwxrwx. 1 root root 9 Sep 18 14:32 scsi-3500a07511f699137 -> ../../sdb lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-355cd2e404b581cc0 -> ../../sda lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-355cd2e404b581cc0-part1 -> ../../sda1 lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-355cd2e404b581cc0-part2 -> ../../sda2 lrwxrwxrwx. 1 root root 10 Sep 18 23:35 scsi-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-3 lrwxrwxrwx. 1 root root 10 Sep 18 23:49 scsi-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001-part1 -> ../../dm-4 lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-SATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN -> ../../sda lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-SATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part1 -> ../../sda1 lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-SATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part2 -> ../../sda2 lrwxrwxrwx. 1 root root 9 Sep 18 14:32 scsi-SATA_Micron_1100_MTFD_17401F699137 -> ../../sdb lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-SATA_WDC_WDS100T2B0B-_183533804564 -> ../../sdc lrwxrwxrwx. 1 root root 9 Sep 18 22:40 wwn-0x5001b448b9608d90 -> ../../sdc lrwxrwxrwx. 1 root root 9 Sep 18 14:32 wwn-0x500a07511f699137 -> ../../sdb lrwxrwxrwx. 1 root root 9 Sep 18 22:40 wwn-0x55cd2e404b581cc0 -> ../../sda lrwxrwxrwx. 1 root root 10 Sep 18 22:40 wwn-0x55cd2e404b581cc0-part1 -> ../../sda1 lrwxrwxrwx. 1 root root 10 Sep 18 22:40 wwn-0x55cd2e404b581cc0-part2 -> ../../sda2 lrwxrwxrwx. 1 root root 10 Sep 18 23:35 wwn-0xvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-3 lrwxrwxrwx. 1 root root 10 Sep 18 23:49 wwn-0xvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001-part1 -> ../../dm-4 lrwxrwxrwx. 1 root root 15 Sep 18 14:32 wwn-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001-part1 -> ../../nvme0n1p1 [root@odin ~]# ls -al /dev/disk/by-id/
So the filter notes these objects:
lvm-pv-uuid-e1fvwo.... -> dm-5 -> vdo_sdb (used by HCI for all three gluster base volumes)
lvm-pv-uuid-mr9awW... -> sda2 -> boot volume
[root@odin ~]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 74.5G 0 disk ├─sda1 8:1 0 1G 0 part /boot └─sda2 8:2 0 73.5G 0 part ├─cl-root 253:0 0 44.4G 0 lvm / ├─cl-swap 253:1 0 7.5G 0 lvm [SWAP] └─cl-home 253:2 0 21.7G 0 lvm /home sdb 8:16 0 477G 0 disk └─vdo_sdb 253:5 0 2.1T 0 vdo ├─gluster_vg_sdb-gluster_lv_engine 253:6 0 100G 0 lvm /gluster_bricks/engine ├─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb_tmeta 253:7 0 1G 0 lvm │ └─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb-tpool 253:9 0 2T 0 lvm │ ├─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb 253:10 0 2T 1 lvm │ ├─gluster_vg_sdb-gluster_lv_data 253:11 0 1000G 0 lvm /gluster_bricks/data │ └─gluster_vg_sdb-gluster_lv_vmstore 253:12 0 1000G 0 lvm /gluster_bricks/vmstore └─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb_tdata 253:8 0 2T 0 lvm └─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb-tpool 253:9 0 2T 0 lvm ├─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb 253:10 0 2T 1 lvm ├─gluster_vg_sdb-gluster_lv_data 253:11 0 1000G 0 lvm /gluster_bricks/data └─gluster_vg_sdb-gluster_lv_vmstore 253:12 0 1000G 0 lvm /gluster_bricks/vmstore sdc 8:32 0 931.5G 0 disk nvme0n1 259:0 0 953.9G 0 disk
├─nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 253:3 0 953.9G 0 mpath
│ └─nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001p1 253:4 0 953.9G 0 part └─nvme0n1p1
So I don't think this is LVM filtering things
Multipath is showing weird treatment of the NVMe drive, but that is outside this conversation.
[root@odin ~]# multipath -l
nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 dm-3 NVME,SPCC M.2 PCIe SSD
size=954G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
  `- 0:1:1:1 nvme0n1 259:0 active undef running
[root@odin ~]#
Where is it getting this filter? I have done gdisk /dev/sdc (new 1TB drive) and it shows no partition. I even did a full dd if=/dev/zero and no change.
I reloaded the OS on the system to get through the wizard setup. Now that all three nodes are in the HCI cluster, all six drives (2 x 1TB in each server) are now locked from any use due to this error about the filter.
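(For reference, the checks described above as concrete commands, using /dev/sdc from this host; this is only a sketch, and the wipefs line is an extra check I am assuming is available on CentOS 8 rather than something run in this thread:)

# gdisk shows no partition table on the new drive
gdisk -l /dev/sdc
# full zero of the disk, as attempted above (destructive)
dd if=/dev/zero of=/dev/sdc bs=1M status=progress
# assumed extra check: erase any leftover filesystem/LVM/VDO signatures
wipefs -a /dev/sdc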
Ideas?
-- jeremey.wise@gmail.com

On Tue, Sep 22, 2020 at 11:05 PM Jeremey Wise <jeremey.wise@gmail.com> wrote:
Correct.. on wwid
I do want to make clear here that to get around the error you must ADD (not remove) drives to /etc/lvm/lvm.conf so the oVirt Gluster wizard can complete setup of the drives.
[root@thor log]# cat /etc/lvm/lvm.conf |grep filter
# Broken for gluster in oVirt
#filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-AAHPao-R62q-8aac-410x-ZdA7-UL4i-Bh2bwJ$|", "a|^/dev/disk/by-id/lvm-pv-uuid-bSnFU3-jtUj-AGds-07sw-zdYC-52fM-mujuvC$|", "r|.*|"]
# working for gluster wizard in oVirt
filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-AAHPao-R62q-8aac-410x-ZdA7-UL4i-Bh2bwJ$|", "a|^/dev/disk/by-id/lvm-pv-uuid-bSnFU3-jtUj-AGds-07sw-zdYC-52fM-mujuvC$|", "a|^/dev/disk/by-id/wwn-0x5001b448b847be41$|", "r|.*|"]
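(As an aside, a hedged sketch of checking that a newly added by-id entry is actually accepted, using /dev/sdc from earlier in the thread as the example; not commands run in this message:)

# find the stable by-id path of the blank disk
ls -l /dev/disk/by-id/ | grep sdc
# show the filter LVM is actually using
lvmconfig devices/filter
# dry run - should no longer report "excluded by a filter"
pvcreate --test /dev/sdc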
Yes, you need to add the devices gluster is going to use to the filter. The easiest way is to remove the filter before you install gluster, and then create the filter using vdsm-tool config-lvm-filter. It should add all the devices needed for the mounted logical volumes automatically. Please file a bug if it does not do this.
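(A minimal sketch of that workflow; the tool is typically interactive and asks for confirmation before changing anything:)

# analyze the host and propose/apply the needed lvm filter
vdsm-tool config-lvm-filter
# afterwards, verify what ended up in the config
grep filter /etc/lvm/lvm.conf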
On Tue, Sep 22, 2020 at 3:57 PM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
Obtaining the wwid is not exactly correct. You can identify them via:
multipath -v4 | grep 'got wwid of'
Short example:
[root@ovirt conf.d]# multipath -v4 | grep 'got wwid of'
Sep 22 22:55:58 | nvme0n1: got wwid of 'nvme.1cc1-324a31313230303131343036-414441544120535838323030504e50-00000001'
Sep 22 22:55:58 | sda: got wwid of 'TOSHIBA-TR200_Z7KB600SK46S'
Sep 22 22:55:58 | sdb: got wwid of 'ST500NM0011_Z1M00LM7'
Sep 22 22:55:58 | sdc: got wwid of 'WDC_WD5003ABYX-01WERA0_WD-WMAYP2303189'
Sep 22 22:55:58 | sdd: got wwid of 'WDC_WD15EADS-00P8B0_WD-WMAVU0115133'
Of course if you are planning to use only gluster it could be far easier to set:
[root@ovirt conf.d]# cat /etc/multipath/conf.d/blacklist.conf
blacklist {
    devnode "*"
}
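(If you go that route, a hedged sketch of making multipath pick the blacklist up and dropping any maps it already created; these are standard multipath-tools commands, not taken from this message:)

# reload the configuration
multipathd reconfigure
# flush existing, now-blacklisted maps that are not in use
multipath -F
# confirm no local disks are mapped any more
multipath -ll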
Best Regards, Strahil Nikolov
On Tuesday, September 22, 2020, 22:12:21 GMT+3, Nir Soffer <nsoffer@redhat.com> wrote:
On Tue, Sep 22, 2020 at 1:50 AM Jeremey Wise <jeremey.wise@gmail.com> wrote:
Agree about an NVMe Card being put under mpath control.
NVMe can be used via multipath, this is a new feature added in RHEL 8.1: https://bugzilla.redhat.com/1498546
Of course when the NVMe device is local there is no point to use it via multipath. To avoid this, you need to blacklist the devices like this:
1. Find the device wwid
For NVMe, you need the device ID_WWN:
$ udevadm info -q property /dev/nvme0n1 | grep ID_WWN
ID_WWN=eui.5cd2e42a81a11f69
2. Add local blacklist file:
$ mkdir /etc/multipath/conf.d
$ cat /etc/multipath/conf.d/local.conf
blacklist {
    wwid "eui.5cd2e42a81a11f69"
}
3. Reconfigure multipath
$ multipathd reconfigure
Gluster should do this for you automatically during installation, but if it does not, you can do this manually.
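(A small verification sketch for the three steps above, assuming the wwid from step 1; an already-created map may also need to be flushed with multipath -f <map> first:)

$ multipathd reconfigure
$ multipath -ll                # the local NVMe map should no longer be listed
$ lsblk /dev/nvme0n1           # partitions should now appear directly under nvme0n1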
I have not even gotten to that volume / issue. My guess is something weird in CentOS / 4.18.0-193.19.1.el8_2.x86_64 kernel with NVMe block devices.
I will post once I cross the bridge of getting standard SSD volumes working.
On Mon, Sep 21, 2020 at 4:12 PM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
Why is your NVME under multipath? That doesn't make sense at all. I have modified my multipath.conf to block all local disks. Also, don't forget the '# VDSM PRIVATE' line somewhere at the top of the file.
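(Sketch of what is meant, combining that marker with the blacklist shown earlier; the marker tells vdsm the file is locally managed so it will not overwrite it. The exact layout of the rest of /etc/multipath.conf is assumed, not copied from this thread:)

# VDSM PRIVATE
blacklist {
    devnode "*"
}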
Best Regards, Strahil Nikolov

In my setup, I got no filter at all (yet, I'm on 4.3.10):
[root@ovirt ~]# lvmconfig | grep -i filter
[root@ovirt ~]#
P.S.: Don't forget to 'dracut -f' due to the fact that the initramfs has a local copy of the lvm.conf
Best Regards, Strahil Nikolov

On Tue, Sep 22, 2020 at 11:23 PM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
In my setup, I got no filter at all (yet, I'm on 4.3.10):
[root@ovirt ~]# lvmconfig | grep -i filter
We create lvm filter automatically since 4.4.1. If you don't use block storage (FC, iSCSI) you don't need lvm filter. If you do, you can create it manually using vdsm-tool.
[root@ovirt ~]#
P.S.: Don't forget to 'dracut -f' due to the fact that the initramfs has a local copy of the lvm.conf
Good point
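(For completeness, a sketch of the standard dracut call for the running kernel so the updated lvm.conf lands in the initramfs; not copied from this thread:)

# rebuild the initramfs for the currently booted kernel
dracut -f /boot/initramfs-$(uname -r).img $(uname -r)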
Best Regards, Strahil Nikolov

On Tue, Sep 22, 2020 at 10:57 PM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
Obtaining the wwid is not exactly correct.
It is correct - for nvme devices, see: https://github.com/oVirt/vdsm/blob/353e7b1e322aa02d4767b6617ed094be0643b094/... This matches the way that multipath looks up device wwids.
You can identify them via:
multipath -v4 | grep 'got wwid of'
There are 2 issues with this:
- It detects and sets up maps for all devices in the system, unwanted when you want to blacklist devices
- It depends on debug output that may change, not on a public documented API
You can use these commands:
Show devices that multipath does not use yet, without setting up maps:
$ sudo multipath -d
Show devices that multipath is already using:
$ sudo multipath -ll
But I'm not sure if these commands work if the dm_multipath kernel module is not loaded or multipathd is not running. Getting the device wwid using udevadm works regardless of the multipathd/dm_multipath module.
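(A convenience sketch of the udevadm approach for every NVMe namespace on a host; the /dev/nvme*n1 glob is an assumption about device naming:)

$ for d in /dev/nvme*n1; do echo "$d"; udevadm info -q property "$d" | grep '^ID_WWN='; done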
Of course if you are planing to use only gluster it could be far easier to set:
[root@ovirt conf.d]# cat /etc/multipath/conf.d/blacklist.conf blacklist { devnode "*" }
Best Regards, Strahil Nikolov
В вторник, 22 септември 2020 г., 22:12:21 Гринуич+3, Nir Soffer <nsoffer@redhat.com> написа:
On Tue, Sep 22, 2020 at 1:50 AM Jeremey Wise <jeremey.wise@gmail.com> wrote:
Agree about an NVMe Card being put under mpath control.
NVMe can be used via multipath, this is a new feature added in RHEL 8.1: https://bugzilla.redhat.com/1498546
Of course when the NVMe device is local there is no point to use it via multipath. To avoid this, you need to blacklist the devices like this:
1. Find the device wwid
For NVMe, you need the device ID_WWN:
$ udevadm info -q property /dev/nvme0n1 | grep ID_WWN ID_WWN=eui.5cd2e42a81a11f69
2. Add local blacklist file:
$ mkdir /etc/multipath/conf.d $ cat /etc/multipath/conf.d/local.conf blacklist { wwid "eui.5cd2e42a81a11f69" }
3. Reconfigure multipath
$ multipathd reconfigure
Gluster should do this for you automatically during installation, but it does not you can do this manually.
I have not even gotten to that volume / issue. My guess is something weird in CentOS / 4.18.0-193.19.1.el8_2.x86_64 kernel with NVMe block devices.
I will post once I cross bridge of getting standard SSD volumes working
On Mon, Sep 21, 2020 at 4:12 PM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
Why is your NVME under multipath ? That doesn't make sense at all . I have modified my multipath.conf to block all local disks . Also ,don't forget the '# VDSM PRIVATE' line somewhere in the top of the file.
Best Regards, Strahil Nikolov
В понеделник, 21 септември 2020 г., 09:04:28 Гринуич+3, Jeremey Wise <jeremey.wise@gmail.com> написа:
vdo: ERROR - Device /dev/sdc excluded by a filter
Other server vdo: ERROR - Device /dev/mapper/nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001p1 excluded by a filter.
All systems when I go to create VDO volume on blank drives.. I get this filter error. All disk outside of the HCI wizard setup are now blocked from creating new Gluster volume group.
Here is what I see in /dev/lvm/lvm.conf |grep filter [root@odin ~]# cat /etc/lvm/lvm.conf |grep filter filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC$|", "a|^/dev/disk/by-id/lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1$|", "r|.*|"]
[root@odin ~]# ls -al /dev/disk/by-id/ total 0 drwxr-xr-x. 2 root root 1220 Sep 18 14:32 . drwxr-xr-x. 6 root root 120 Sep 18 14:32 .. lrwxrwxrwx. 1 root root 9 Sep 18 22:40 ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN -> ../../sda lrwxrwxrwx. 1 root root 10 Sep 18 22:40 ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part1 -> ../../sda1 lrwxrwxrwx. 1 root root 10 Sep 18 22:40 ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part2 -> ../../sda2 lrwxrwxrwx. 1 root root 9 Sep 18 14:32 ata-Micron_1100_MTFDDAV512TBN_17401F699137 -> ../../sdb lrwxrwxrwx. 1 root root 9 Sep 18 22:40 ata-WDC_WDS100T2B0B-00YS70_183533804564 -> ../../sdc lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-name-cl-home -> ../../dm-2 lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-name-cl-root -> ../../dm-0 lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-name-cl-swap -> ../../dm-1 lrwxrwxrwx. 1 root root 11 Sep 18 16:40 dm-name-gluster_vg_sdb-gluster_lv_data -> ../../dm-11 lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-name-gluster_vg_sdb-gluster_lv_engine -> ../../dm-6 lrwxrwxrwx. 1 root root 11 Sep 18 16:40 dm-name-gluster_vg_sdb-gluster_lv_vmstore -> ../../dm-12 lrwxrwxrwx. 1 root root 10 Sep 18 23:35 dm-name-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-3 lrwxrwxrwx. 1 root root 10 Sep 18 23:49 dm-name-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001p1 -> ../../dm-4 lrwxrwxrwx. 1 root root 10 Sep 18 14:32 dm-name-vdo_sdb -> ../../dm-5 lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADc49gc6PWLRBCoJ2B3JC9tDJejyx5eDPT -> ../../dm-1 lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADOMNJfgcat9ZLOpcNO7FyG8ixcl5s93TU -> ../../dm-2 lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADzqPGk0yTQ19FIqgoAfsCxWg7cDMtl71r -> ../../dm-0 lrwxrwxrwx. 1 root root 10 Sep 18 16:40 dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOq6Om5comvRFWJDbtVZAKtE5YGl4jciP9 -> ../../dm-6 lrwxrwxrwx. 1 root root 11 Sep 18 16:40 dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOqVheASEgerWSEIkjM1BR3us3D9ekHt0L -> ../../dm-11 lrwxrwxrwx. 1 root root 11 Sep 18 16:40 dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOQz6vXuivIfup6cquKAjPof8wIGOSe4Vz -> ../../dm-12 lrwxrwxrwx. 1 root root 10 Sep 18 23:35 dm-uuid-mpath-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-3 lrwxrwxrwx. 1 root root 10 Sep 18 23:49 dm-uuid-part1-mpath-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-4 lrwxrwxrwx. 1 root root 10 Sep 18 14:32 dm-uuid-VDO-472035cc-8d2b-40ac-afe9-fa60b62a887f -> ../../dm-5 lrwxrwxrwx. 1 root root 10 Sep 18 14:32 lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC -> ../../dm-5 lrwxrwxrwx. 1 root root 10 Sep 18 22:40 lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1 -> ../../sda2 lrwxrwxrwx. 1 root root 13 Sep 18 14:32 nvme-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../nvme0n1 lrwxrwxrwx. 1 root root 15 Sep 18 14:32 nvme-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001-part1 -> ../../nvme0n1p1 lrwxrwxrwx. 1 root root 13 Sep 18 14:32 nvme-SPCC_M.2_PCIe_SSD_AA000000000000002458 -> ../../nvme0n1 lrwxrwxrwx. 1 root root 15 Sep 18 14:32 nvme-SPCC_M.2_PCIe_SSD_AA000000000000002458-part1 -> ../../nvme0n1p1 lrwxrwxrwx. 
1 root root 9 Sep 18 22:40 scsi-0ATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN -> ../../sda lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-0ATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part1 -> ../../sda1 lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-0ATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part2 -> ../../sda2 lrwxrwxrwx. 1 root root 9 Sep 18 14:32 scsi-0ATA_Micron_1100_MTFD_17401F699137 -> ../../sdb lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-0ATA_WDC_WDS100T2B0B-_183533804564 -> ../../sdc lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-1ATA_INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN -> ../../sda lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-1ATA_INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part1 -> ../../sda1 lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-1ATA_INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part2 -> ../../sda2 lrwxrwxrwx. 1 root root 9 Sep 18 14:32 scsi-1ATA_Micron_1100_MTFDDAV512TBN_17401F699137 -> ../../sdb lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-1ATA_WDC_WDS100T2B0B-00YS70_183533804564 -> ../../sdc lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-35001b448b9608d90 -> ../../sdc lrwxrwxrwx. 1 root root 9 Sep 18 14:32 scsi-3500a07511f699137 -> ../../sdb lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-355cd2e404b581cc0 -> ../../sda lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-355cd2e404b581cc0-part1 -> ../../sda1 lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-355cd2e404b581cc0-part2 -> ../../sda2 lrwxrwxrwx. 1 root root 10 Sep 18 23:35 scsi-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-3 lrwxrwxrwx. 1 root root 10 Sep 18 23:49 scsi-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001-part1 -> ../../dm-4 lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-SATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN -> ../../sda lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-SATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part1 -> ../../sda1 lrwxrwxrwx. 1 root root 10 Sep 18 22:40 scsi-SATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part2 -> ../../sda2 lrwxrwxrwx. 1 root root 9 Sep 18 14:32 scsi-SATA_Micron_1100_MTFD_17401F699137 -> ../../sdb lrwxrwxrwx. 1 root root 9 Sep 18 22:40 scsi-SATA_WDC_WDS100T2B0B-_183533804564 -> ../../sdc lrwxrwxrwx. 1 root root 9 Sep 18 22:40 wwn-0x5001b448b9608d90 -> ../../sdc lrwxrwxrwx. 1 root root 9 Sep 18 14:32 wwn-0x500a07511f699137 -> ../../sdb lrwxrwxrwx. 1 root root 9 Sep 18 22:40 wwn-0x55cd2e404b581cc0 -> ../../sda lrwxrwxrwx. 1 root root 10 Sep 18 22:40 wwn-0x55cd2e404b581cc0-part1 -> ../../sda1 lrwxrwxrwx. 1 root root 10 Sep 18 22:40 wwn-0x55cd2e404b581cc0-part2 -> ../../sda2 lrwxrwxrwx. 1 root root 10 Sep 18 23:35 wwn-0xvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-3 lrwxrwxrwx. 1 root root 10 Sep 18 23:49 wwn-0xvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001-part1 -> ../../dm-4 lrwxrwxrwx. 1 root root 15 Sep 18 14:32 wwn-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001-part1 -> ../../nvme0n1p1 [root@odin ~]# ls -al /dev/disk/by-id/
So the filter accepts two objects and rejects everything else: lvm-pv-uuid-e1fvwo... -> dm-5 -> vdo_sdb (used by HCI for all three Gluster base volumes), and lvm-pv-uuid-mr9awW... -> sda2 -> the boot volume.
[root@odin ~]# lsblk
NAME                                                        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                                           8:0    0  74.5G  0 disk
├─sda1                                                        8:1    0     1G  0 part  /boot
└─sda2                                                        8:2    0  73.5G  0 part
  ├─cl-root                                                 253:0    0  44.4G  0 lvm   /
  ├─cl-swap                                                 253:1    0   7.5G  0 lvm   [SWAP]
  └─cl-home                                                 253:2    0  21.7G  0 lvm   /home
sdb                                                           8:16   0   477G  0 disk
└─vdo_sdb                                                   253:5    0   2.1T  0 vdo
  ├─gluster_vg_sdb-gluster_lv_engine                        253:6    0   100G  0 lvm   /gluster_bricks/engine
  ├─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb_tmeta   253:7    0     1G  0 lvm
  │ └─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb-tpool 253:9    0     2T  0 lvm
  │   ├─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb     253:10   0     2T  1 lvm
  │   ├─gluster_vg_sdb-gluster_lv_data                     253:11   0  1000G  0 lvm   /gluster_bricks/data
  │   └─gluster_vg_sdb-gluster_lv_vmstore                  253:12   0  1000G  0 lvm   /gluster_bricks/vmstore
  └─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb_tdata   253:8    0     2T  0 lvm
    └─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb-tpool 253:9    0     2T  0 lvm
      ├─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb     253:10   0     2T  1 lvm
      ├─gluster_vg_sdb-gluster_lv_data                     253:11   0  1000G  0 lvm   /gluster_bricks/data
      └─gluster_vg_sdb-gluster_lv_vmstore                  253:12   0  1000G  0 lvm   /gluster_bricks/vmstore
sdc                                                           8:32   0 931.5G  0 disk
nvme0n1                                                     259:0    0 953.9G  0 disk
├─nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001   253:3  0 953.9G  0 mpath
│ └─nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001p1  253:4  0 953.9G  0 part
└─nvme0n1p1
So I don't think this is LVM itself filtering these devices out.
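One way to double-check that, I think, is to ask LVM directly what filter it is applying and then do a dry-run PV creation on the blocked disk; if the lvm.conf filter is the cause, the dry run should fail with the same "excluded by a filter" message without touching the disk (a minimal sketch, assuming the lvm2 tools on the host):

# show the filter LVM is actually applying
lvmconfig devices/filter

# dry run: --test makes no changes on disk, but should reproduce the
# "excluded by a filter" error if the filter is what blocks /dev/sdc
pvcreate --test /dev/sdc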
Multipath is showing some odd treatment of the NVMe drive, but that is outside this conversation.
[root@odin ~]# multipath -l
nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 dm-3 NVME,SPCC M.2 PCIe SSD
size=954G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
  `- 0:1:1:1 nvme0n1 259:0 active undef running
[root@odin ~]#
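If that ever does matter, a minimal sketch of keeping multipathd off a local NVMe disk would be to blacklist its WWID in a drop-in file. The file name below is illustrative, and I believe /etc/multipath.conf itself is managed by vdsm on these hosts, so a conf.d drop-in (the default config_dir is /etc/multipath/conf.d) seems the safer place:

# /etc/multipath/conf.d/local-nvme.conf   (illustrative file name)
blacklist {
    wwid "nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001"
}

# then reload multipathd and flush the stale map
systemctl reload multipathd
multipath -f nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001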
Where is it getting this filter from? I have run gdisk against /dev/sdc (a new 1 TB drive) and it shows no partitions. I even did a full dd if=/dev/zero over it and nothing changed.
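For completeness, since gdisk only looks at the partition table while LVM, VDO and multipath react to signatures elsewhere on the device, here is a quick sketch of confirming nothing is left on the disk (device name as above; the last line is destructive):

# list any remaining filesystem/LVM/VDO signatures without touching them
wipefs /dev/sdc
blkid -p /dev/sdc

# destructive: clear all known signatures from the disk
wipefs -a /dev/sdc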
I reloaded the OS on the system to get through the wizard setup. Now that all three nodes are in the HCI cluster, all six drives (2 x 1 TB in each server) are locked from any use by this filter error.
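If I read the filter right, the only ways past it would be regenerating it or adding accept rules by hand. Below is a minimal sketch of what I assume a hand-edited /etc/lvm/lvm.conf would look like to also accept the new WD disk by its stable by-id symlink from the listing above. I have not applied it, and my understanding is that this filter is normally generated by vdsm-tool config-lvm-filter, which only whitelists devices the host already has mounted logical volumes on, so a brand-new blank disk would not be added automatically:

# sketch: accept the new disk ahead of the final reject-all rule
# (symlink name taken from /dev/disk/by-id above)
filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC$|",
          "a|^/dev/disk/by-id/lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1$|",
          "a|^/dev/disk/by-id/ata-WDC_WDS100T2B0B-00YS70_183533804564$|",
          "r|.*|"]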
Ideas?
--
jeremey.wise@gmail.com
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/JIG3DCS72QCYYY...
participants (5)
- Jeremey Wise
- Nir Soffer
- Parth Dhanjal
- Strahil Nikolov
- Vojtech Juranek