adding hosts problems with GlusterFS

Hi, I set up my 3-host HCI cluster and things are going pretty well. I have a few issues though... One of them is that I added two more hosts to the three-node cluster to make it a 5-node cluster. This seemed to work fine until I went to create bricks. I have 6 x 2 TB disks for use as JBOD in each node.

After I added the nodes, I went to the host and storage devices view. This looks significantly different than the view on my first 3 hosts. On the two new hosts all my drives sdb - sdg have locks next to them and the file system type is multipath_member. Further up the page I then see a UID under the name, and the description is PERC H710P dm-multipath - no lock next to it. I created a brick on this and it created the lvmpv file system, however when I go to the bricks view, there are no bricks.

So, did I make a mistake while adding the hosts? Is there some way to prevent this dm-multipath configuration?

Thanks,
Bill
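
For reference, a quick way to confirm what the storage devices view is showing is to look at the disks directly on one of the new hosts (assuming the stock device-mapper-multipath and util-linux tools are installed):

    multipath -ll
    lsblk -o NAME,TYPE,FSTYPE,SIZE,MOUNTPOINT

If sdb - sdg show up as paths under a dm-multipath map (the PERC H710P device here), multipath has claimed them, which is why the UI locks the individual disks.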

On Fri, Jul 27, 2018 at 8:12 PM, <william.dossett@gmail.com> wrote:
> Hi, I set up my 3-host HCI cluster and things are going pretty well. I have a few issues though...
> One of them is that I added two more hosts to the three-node cluster to make it a 5-node cluster. This seemed to work fine until I went to create bricks. I have 6 x 2 TB disks for use as JBOD in each node.
> After I added the nodes, I went to the host and storage devices view. This looks significantly different than the view on my first 3 hosts. On the two new hosts all my drives sdb - sdg have locks next to them and the file system type is multipath_member. Further up the page I then see a UID under the name, and the description is PERC H710P dm-multipath - no lock next to it.
> I created a brick on this and it created the lvmpv file system, however when I go to the bricks view, there are no bricks.
Did you create the brick from the UI or using the CLI on the hosts? Can you check where the bricks are mounted? If they are mounted at /gluster_bricks, the bricks should be listed in the New Volume screen.
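
For example, on the host itself (assuming the standard gluster_bricks layout the deployment uses), something like:

    mount | grep gluster_bricks
    lvs

should show whether the brick's logical volume exists and where it is mounted.
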
> So, did I make a mistake while adding the hosts? Is there some way to prevent this dm-multipath configuration?
There's a bug to track this - https://bugzilla.redhat.com/show_bug.cgi?id=1016535
> Thanks, Bill

I did it from the UI. I've just removed and re-added the host and I can't see anything different I could do while adding it. All the disks are locked. The brick is mounted in /gluster_bricks, but does not show in the bricks tab... actually I think it may have been in the New Volume view, though. I didn't actually want that volume now that I realize I don't need an ISO domain anymore. I am trying to clean this up now and try again.

I found the problem... With the cockpit deploy, all disks are blacklisted from multipath by a script called blacklist_all_disks.sh. This does not happen when a host is added manually. To work around the problem, edit /etc/multipath.conf and add the "# VDSM PRIVATE" line as the second line of the file; this makes sure that VDSM will never modify the file again:

    # VDSM REVISION 1.5
    # VDSM PRIVATE

Then add the following to the end of the file, which blacklists all disks from multipath:

    blacklist {
        devnode "*"
    }

Had to dig fairly deep for this, but now it's working.
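
To make the change take effect without a reboot, something along these lines should work (assuming the existing multipath maps are not in use, e.g. no active LVs on them):

    systemctl restart multipathd
    multipath -F
    multipath -ll

After that, multipath -ll should report nothing and sdb - sdg should appear as plain disks in the storage devices view.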

Unfortunately I can't access that bug with any of my Red Hat accounts. I tried to create a Bugzilla account and it says access to that bug is restricted, internal only.
participants (2)
- Sahina Bose
- william.dossett@gmail.com