Hi,
I'm trying to set up a 3-node Gluster-based oVirt cluster, following this guide:
https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4-1-and-glus...
The oVirt nodes were installed with all disks present in the system; the installer was
limited to using only /dev/sda (both sda and sdb are HPE logical volumes on a P410 RAID controller).
The GlusterFS deployment fails in the last step before the engine setup:
PLAY RECAP *********************************************************************
hv1.iw : ok=1 changed=1 unreachable=0 failed=0
hv2.iw : ok=1 changed=1 unreachable=0 failed=0
hv3.iw : ok=1 changed=1 unreachable=0 failed=0
PLAY [gluster_servers] *********************************************************
TASK [Clean up filesystem signature] *******************************************
skipping: [hv1.iw] => (item=/dev/sdb)
skipping: [hv2.iw] => (item=/dev/sdb)
skipping: [hv3.iw] => (item=/dev/sdb)
TASK [Create Physical Volume] **************************************************
failed: [hv3.iw] (item=/dev/sdb) => {"failed": true,
"failed_when_result": true, "item": "/dev/sdb",
"msg": " Device /dev/sdb not found (or ignored by filtering).\n",
"rc": 5}
failed: [hv1.iw] (item=/dev/sdb) => {"failed": true,
"failed_when_result": true, "item": "/dev/sdb",
"msg": " Device /dev/sdb not found (or ignored by filtering).\n",
"rc": 5}
failed: [hv2.iw] (item=/dev/sdb) => {"failed": true,
"failed_when_result": true, "item": "/dev/sdb",
"msg": " Device /dev/sdb not found (or ignored by filtering).\n",
"rc": 5}
But: /dev/sdb exists on all hosts
[root@hv1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 136,7G 0 disk
...
sdb 8:16 0 558,9G 0 disk
└─3600508b1001c350a2c1748b0a0ff3860 253:5 0 558,9G 0 mpath
What can I do to make this work?
___________________________________________________________
Oliver Dietzel
Hi Oliver,
I see that multipath is enabled on your system and has claimed the device
sdb: once the mpath map is created, the system identifies sdb only as
"3600508b1001c350a2c1748b0a0ff3860", so LVM ignores /dev/sdb itself. To
make this work, perform the steps below.
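(A quick check first, not from your original output: running
'multipath -ll 3600508b1001c350a2c1748b0a0ff3860' should list sdb as the
only path under that map, which is why LVM's multipath component
detection filters /dev/sdb out.)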
1) Run 'multipath -l' to list all multipath devices.
2) Blacklist the devices in /etc/multipath.conf by adding the lines below
(a fuller sketch of the resulting file follows after step 4). If you do
not see this file, run 'vdsm-tool configure --force', which will create
it for you.
blacklist {
    devnode "*"
}
3) Run 'multipath -F', which flushes all the mpath devices.
4) Restart multipathd by running 'systemctl restart multipathd'.
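Putting steps 2)-4) together, /etc/multipath.conf would end up looking
roughly like the sketch below (not taken from your hosts; leave whatever
content vdsm-tool already generated in place and only add the blacklist
section, plus - as far as I know - a '# VDSM PRIVATE' marker near the top
so 'vdsm-tool configure' does not regenerate the file later):

# VDSM PRIVATE

blacklist {
    devnode "*"
}

The full sequence on each of the three hosts would then be something like:

multipath -l                   # list the current mpath maps
vi /etc/multipath.conf         # add the blacklist section above
multipath -F                   # flush the (now unused) mpath maps
systemctl restart multipathd
lsblk                          # sdb should now show up without the mpath child

After that, re-running the deployment should let the 'Create Physical
Volume' task use /dev/sdb directly.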
This should solve the issue.
Thanks
kasturi.