Sac,
To answer some of your questions:
fdisk -l:
[root@host1 ~]# fdisk -l /dev/sde

Disk /dev/sde: 480.1 GB, 480070426624 bytes, 937637552 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 262144 bytes / 262144 bytes

[root@host1 ~]# fdisk -l /dev/sdc

Disk /dev/sdc: 3000.6 GB, 3000559427584 bytes, 5860467632 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 262144 bytes / 262144 bytes

[root@host1 ~]# fdisk -l /dev/sdd

Disk /dev/sdd: 3000.6 GB, 3000559427584 bytes, 5860467632 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 262144 bytes / 262144 bytes



1) I ran wipefs on all of /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde.
2) I did not zero out the disks from the OS, as I had already done it through the controller.

3) cat /proc/partitions:
[root@host1 ~]# cat /proc/partitions
major minor  #blocks  name

   8        0  586029016 sda
   8        1    1048576 sda1
   8        2  584978432 sda2
   8       16 2930233816 sdb
   8       32 2930233816 sdc
   8       48 2930233816 sdd
   8       64  468818776 sde

4) grep "filter =" /etc/lvm/lvm.conf (I did not modify the lvm.conf file; see the sketch after the output):
[root@host1 ~]# grep "filter =" /etc/lvm/lvm.conf
# filter = [ "a|.*/|" ]
# filter = [ "r|/dev/cdrom|" ]
# filter = [ "a|loop|", "r|.*|" ]
# filter = [ "a|loop|", "r|/dev/hdc|", "a|/dev/ide|", "r|.*|" ]
# filter = [ "a|^/dev/hda8$|", "r|.*/|" ]
# filter = [ "a|.*/|" ]
# global_filter = [ "a|.*/|" ]
# mlock_filter = [ "locale/locale-archive", "gconv/gconv-modules.cache" ]
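(Note: all of the filter lines above are commented out, so LVM is still using its default of accepting every device. If I ever do need to pin LVM to specific devices, my understanding is that an explicit filter in the devices section of /etc/lvm/lvm.conf would look roughly like the lines below; the by-id paths are placeholders rather than my real WWNs, and since oVirt/vdsm can also manage its own LVM filter (vdsm-tool config-lvm-filter), this is only a sketch, not something I have applied:)

devices {
    # accept only the OS disk and the Gluster disks, reject everything else;
    # stable /dev/disk/by-id/ paths are safer here than sdX names
    filter = [ "a|^/dev/disk/by-id/wwn-0xEXAMPLE_OS$|", "a|^/dev/disk/by-id/wwn-0xEXAMPLE_GLUSTER$|", "r|.*|" ]
}

(and then a dracut -f afterwards, so the copy of lvm.conf inside the initramfs matches)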

What I did to get it working:

I re-installed my first 3 hosts using "ovirt-node-ng-installer-4.3.3-2019041712.el7.iso" and made sure I zeroed the disks from within the controller, then I performed the following steps:

1.- Modified the blacklist section in /etc/multipath.conf to this (the full top of the file is sketched after this list):
blacklist {
 #       protocol "(scsi:adt|scsi:sbp)"
    devnode "*"
}
2.- Made sure the second line of /etc/multipath.conf contains:
    # VDSM PRIVATE
3.- Increased /var/log to 15GB
4.- Rebuilt initramfs, rebooted
5.- wipefs -a /dev/sdb /dev/sdc /dev/sdd /dev/sde
6.- Started the hyperconverged setup wizard and added "gluster_features_force_varlogsizecheck: false" to the "vars:" section of the generated Ansible inventory, /etc/ansible/hc_wizard_inventory.yml, since the check was complaining about the /var/log LV (placement sketched right after this list).
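For completeness, the top of /etc/multipath.conf now looks roughly like this (the first line is whatever "# VDSM REVISION" header vdsm originally wrote and I left it alone; the "# VDSM PRIVATE" marker on the second line tells vdsm not to overwrite the file, and blacklisting devnode "*" keeps multipath away from the local JBOD disks):

# VDSM REVISION <left exactly as generated>
# VDSM PRIVATE

blacklist {
        devnode "*"
}

(for reference, a plain dracut -f followed by a reboot covers the rebuild in step 4)

And the only thing I touched in /etc/ansible/hc_wizard_inventory.yml was adding that one key under the existing vars: section; everything else is exactly what the wizard generated, roughly:

    vars:
      gluster_features_force_varlogsizecheck: false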

EUREKA: After doing the above I was able to get past the filter issues. However, I am still concerned that the disks might come up with different names after a reboot, for example /dev/sdb coming up as /dev/sdx...
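On that note, the sdX letters are indeed not guaranteed to be stable across reboots, but each disk also carries stable identifiers that survive renaming, so my plan is to at least record those per host, roughly:

# lsblk -o NAME,SIZE,SERIAL,WWN
# ls -l /dev/disk/by-id/ | grep -w sdb

The /dev/disk/by-id/ symlinks (and the WWNs) follow the physical disk whatever letter it ends up with, and LVM finds its PVs by UUID rather than by device name, so my assumption is that once the volume groups and bricks are created they should reassemble correctly even if the letters move, and that the letters really only matter at deployment time. Please correct me if that assumption is wrong.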


I am trying to make sure this setup is always reproducible, as we want to move it to production; however, it seems I still don't have the full hang of it, and the RHV 4.1 course is way too old :)

Thanks again for helping out with this.



-AQ

On Tue, May 21, 2019 at 3:29 AM Sachidananda URS <surs@redhat.com> wrote:


On Tue, May 21, 2019 at 12:16 PM Sahina Bose <sabose@redhat.com> wrote:


On Mon, May 20, 2019 at 9:55 PM Adrian Quintero <adrianquintero@gmail.com> wrote:
Sahina,
Yesterday I started with a fresh install: I completely wiped all the disks and recreated the arrays from within the controller on our DL380 Gen 9s.

OS: RAID 1 (2x600GB HDDs): /dev/sda    // Using the oVirt Node 4.3.3.1 ISO.
engine and VMSTORE1: JBOD (1x3TB HDD): /dev/sdb
DATA1: JBOD (1x3TB HDD): /dev/sdc
DATA2: JBOD (1x3TB HDD): /dev/sdd
Caching disk: JBOD (1x440GB SSD): /dev/sde

After the OS install on the first 3 servers and setting up SSH keys, I started the hyperconverged deploy process:
1.- Logged in to the first server: http://host1.example.com:9090
2.- Selected Hyperconverged, clicked on "Run Gluster Wizard"
3.- Followed the wizard steps (Hosts, FQDNs, Packages, Volumes, Bricks, Review)
Hosts/FQDNs:
Packages:
Volumes:
engine:replicate:/gluster_bricks/engine/engine
vmstore1:replicate:/gluster_bricks/vmstore1/vmstore1
data1:replicate:/gluster_bricks/data1/data1
data2:replicate:/gluster_bricks/data2/data2
Bricks:
engine:/dev/sdb:100GB:/gluster_bricks/engine
vmstore1:/dev/sdb:2600GB:/gluster_bricks/vmstore1
data1:/dev/sdc:2700GB:/gluster_bricks/data1
data2:/dev/sdd:2700GB:/gluster_bricks/data2
LV Cache:
/dev/sde:400GB:writethrough
4.- After I hit deploy on the last step of the wizard, I get the disk filter error below (a manual reproduction is sketched after the task output).
TASK [gluster.infra/roles/backend_setup : Create volume groups] ****************
failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}) => {"changed": false, "err": "  Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}) => {"changed": false, "err": "  Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
failed: [vmm11.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}) => {"changed": false, "err": "  Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc', u'pvname': u'/dev/sdc'}) => {"changed": false, "err": "  Device /dev/sdc excluded by a filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"}, "msg": "Creating physical volume '/dev/sdc' failed", "rc": 5}
failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc', u'pvname': u'/dev/sdc'}) => {"changed": false, "err": "  Device /dev/sdc excluded by a filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"}, "msg": "Creating physical volume '/dev/sdc' failed", "rc": 5}
failed: [vmm11.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc', u'pvname': u'/dev/sdc'}) => {"changed": false, "err": "  Device /dev/sdc excluded by a filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"}, "msg": "Creating physical volume '/dev/sdc' failed", "rc": 5}
failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname': u'/dev/sdd'}) => {"changed": false, "err": "  Device /dev/sdd excluded by a filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluster_vg_sdd"}, "msg": "Creating physical volume '/dev/sdd' failed", "rc": 5}
failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname': u'/dev/sdd'}) => {"changed": false, "err": "  Device /dev/sdd excluded by a filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluster_vg_sdd"}, "msg": "Creating physical volume '/dev/sdd' failed", "rc": 5}
failed: [vmm11.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname': u'/dev/sdd'}) => {"changed": false, "err": "  Device /dev/sdd excluded by a filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluster_vg_sdd"}, "msg": "Creating physical volume '/dev/sdd' failed", "rc": 5}
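(Side note: the "excluded by a filter" message comes straight from LVM, and the task is essentially doing a pvcreate before creating the VG, so the failure can be reproduced by hand on any of the hosts without re-running the whole wizard, roughly:)

# pvcreate /dev/sdb
  Device /dev/sdb excluded by a filter.
# wipefs /dev/sdb     (with no options wipefs only lists signatures, it does not erase anything)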

Attached are the generated inventory file (/etc/ansible/hc_wizard_inventory.yml) and the "Deployment Failed" file.

Also wondering if I hit this bug?
https://bugzilla.redhat.com/show_bug.cgi?id=1635614


+Sachidananda URS +Gobinda Das to review the inventory file and failures

Hello Adrian,

Can you please provide the output of:
# fdisk -l /dev/sdd
# fdisk -l /dev/sdb

I think there could be a stale signature on the disk causing this error.
Some of the possible solutions to try:
1)
# wipefs -a /dev/sdb
# wipefs -a /dev/sdd

2)
You can zero out the first few sectors of the disk with:

# dd if=/dev/zero of=/dev/sdb bs=1M count=10

3)
Check if the partition is visible in /proc/partitions.
If not:
# partprobe /dev/sdb

4)
Check if filtering is configured incorrectly in /etc/lvm/lvm.conf:
grep for 'filter =' (a quick check is sketched below).
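For example (just a sketch):

# grep -E '^[[:space:]]*(global_)?filter' /etc/lvm/lvm.conf
# lvmconfig --type full devices/filter
# lvmconfig --type full devices/global_filter

The grep shows only uncommented filter lines (the commented examples in the stock file are ignored by LVM), and lvmconfig --type full shows the value LVM actually resolves, defaults included.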

-sac


--
Adrian Quintero