Under Compute → Hosts, select the host that holds the locks on /dev/sdb, /dev/sdc, etc., then open Storage Devices; that is where you see a small column with a lock icon on each row.


However, as a workaround on the newly added hosts (3 total), I had to manually modify /etc/multipath.conf and append the following, as this is what I noticed on the original 3-node setup:

-------------------------------------------------------------
# VDSM REVISION 1.3
# VDSM PRIVATE
# BEGIN Added by gluster_hci role

blacklist {
        devnode "*"
}
# END Added by gluster_hci role
-------------------------------------------------------------
After this I restarted multipath and the lock went away, and I was able to configure the new bricks through the UI. However, my concern is what will happen if I reboot the server: will the disks be read the same way by the OS?
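On the reboot concern: the "# VDSM PRIVATE" line in the header is what tells VDSM not to regenerate /etc/multipath.conf, so the blacklist should survive a reboot. A minimal sketch of checking for that marker (the sample file below just reproduces the header from this thread in a scratch location, so nothing under /etc is touched):

```shell
# Reproduce the protected multipath.conf header from the thread in a
# scratch file, so the check can be demonstrated without touching /etc.
cat > /tmp/multipath.conf.sample <<'EOF'
# VDSM REVISION 1.3
# VDSM PRIVATE
blacklist {
        devnode "*"
}
EOF

# "# VDSM PRIVATE" marks the file as hand-maintained; VDSM will not
# overwrite it, so the blacklist persists across reboots.
if grep -q '^# VDSM PRIVATE' /tmp/multipath.conf.sample; then
    echo "protected: VDSM will not regenerate this file"
fi
```

On the real host, the same grep against /etc/multipath.conf, plus a `multipath -ll` after a reboot, should confirm that /dev/sdb, /dev/sdc, and /dev/sdd are no longer claimed.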

I am also now able to expand Gluster with a new replica 3 volume if needed, using http://host4.mydomain.com:9090.


thanks again

On Thu, Apr 25, 2019 at 8:00 AM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
In which menu do you see it this way ?

Best Regards,
Strahil Nikolov

On Wednesday, April 24, 2019, 8:55:22 AM GMT-4, Adrian Quintero <adrianquintero@gmail.com> wrote:


Strahil,
this is the issue I am seeing now

[attached screenshot: image.png]

This is through the UI when I try to create a new brick.

So my concern is: if I modify the filters on the OS, what impact will that have after the server reboots?

thanks,



On Mon, Apr 22, 2019 at 11:39 PM Strahil <hunter86_bg@yahoo.com> wrote:
I have edited my multipath.conf to exclude local disks, but you need to set '# VDSM PRIVATE' as per the comments in the header of the file.
Otherwise, use the /dev/mapper/multipath-device notation, as you would on any Linux system.
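A narrower variant than blacklisting every devnode is to blacklist only the local disks by WWID; a sketch (the WWID below is a placeholder; on the host it can be read with `multipath -ll` or `/usr/lib/udev/scsi_id -g -u -d /dev/sdb`):

```
# /etc/multipath.conf -- fragment (sketch; WWID is a placeholder)
# VDSM PRIVATE

blacklist {
    wwid "3600508b1001c5d4e"
}
```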

Best Regards,
Strahil Nikolov

On Apr 23, 2019 01:07, adrianquintero@gmail.com wrote:
>
> Thanks Alex, that makes more sense now. While trying to follow the instructions provided, I see that all my disks (/dev/sdb, /dev/sdc, /dev/sdd) are locked and indicating "multipath_member", hence not letting me create new bricks. In the logs I see:
>
> Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
> The same thing happens for sdc and sdd.
>
> Should I manually edit the filters inside the OS? What will be the impact?
>
> thanks again.
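On the "excluded by a filter" error itself: that message comes from LVM's device filter in /etc/lvm/lvm.conf. A sketch of a filter that explicitly accepts the data disks is below; this is an assumption, not the host's actual configuration (it assumes the OS disk is /dev/sda, which must stay accepted; verify and back up the file before editing):

```
# /etc/lvm/lvm.conf -- fragment (sketch; verify the OS disk path first)
devices {
    # "a|...|" = accept, "r|...|" = reject; first matching pattern wins.
    # Accept the OS disk and the Gluster data disks, reject the rest.
    filter = [ "a|^/dev/sda|", "a|^/dev/sdb$|", "a|^/dev/sdc$|", "a|^/dev/sdd$|", "r|.*|" ]
}
```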
> _______________________________________________
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-leave@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/FW3IR3NMQTYZLXBT2VLOCLBKOYJS3MYF/


--
Adrian Quintero
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/EW7NKT76JR3TLPP63M7DTDF2TLSMX556/


--
Adrian Quintero