Ovirt 4.4.3 Hyper-converged Deployment with GlusterFS

Trying to deploy a 3-node hyperconverged oVirt cluster with Gluster as the backend storage. I have tried this against the three nodes that I have, as well as with just a single node to get a working baseline. The failure I keep getting stuck on is:

    TASK [gluster.infra/roles/backend_setup : Create volume groups] ****************
    task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml:59
    failed: [ovirt01-storage.poling.local] (item={'key': 'gluster_vg_sdb', 'value': [{'vgname': 'gluster_vg_sdb', 'pvname': '/dev/sdb'}]}) => {"ansible_loop_var": "item", "changed": false, "err": " Device /dev/sdb excluded by a filter.\n", "item": {"key": "gluster_vg_sdb", "value": [{"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}]}, "msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}

I have verified my DNS records and have reverse DNS set up. The front-end and storage networks are physically separated and are 10Gb connections. From the reading I have done, this seems to point to a possible multipath issue, but I do see multipath configs being set in the Gluster wizard, and when I check after the wizard fails out, the mpath setup does look correct.

    [root@ovirt01 ~]# lsblk
    NAME                                             MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    sda                                                8:0    0 446.1G  0 disk
    ├─sda1                                             8:1    0     1G  0 part /boot
    └─sda2                                             8:2    0 445.1G  0 part
      ├─onn-pool00_tmeta                             253:0    0     1G  0 lvm
      │ └─onn-pool00-tpool                           253:2    0 351.7G  0 lvm
      │   ├─onn-ovirt--node--ng--4.4.3--0.20201110.0+1 253:3  0 314.7G  0 lvm  /
      │   ├─onn-pool00                               253:5    0 351.7G  1 lvm
      │   ├─onn-var_log_audit                        253:6    0     2G  0 lvm  /var/log/audit
      │   ├─onn-var_log                              253:7    0     8G  0 lvm  /var/log
      │   ├─onn-var_crash                            253:8    0    10G  0 lvm  /var/crash
      │   ├─onn-var                                  253:9    0    15G  0 lvm  /var
      │   ├─onn-tmp                                  253:10   0     1G  0 lvm  /tmp
      │   ├─onn-home                                 253:11   0     1G  0 lvm  /home
      │   └─onn-ovirt--node--ng--4.4.2--0.20200918.0+1 253:12 0 314.7G  0 lvm
      ├─onn-pool00_tdata                             253:1    0 351.7G  0 lvm
      │ └─onn-pool00-tpool                           253:2    0 351.7G  0 lvm
      │   ├─onn-ovirt--node--ng--4.4.3--0.20201110.0+1 253:3  0 314.7G  0 lvm  /
      │   ├─onn-pool00                               253:5    0 351.7G  1 lvm
      │   ├─onn-var_log_audit                        253:6    0     2G  0 lvm  /var/log/audit
      │   ├─onn-var_log                              253:7    0     8G  0 lvm  /var/log
      │   ├─onn-var_crash                            253:8    0    10G  0 lvm  /var/crash
      │   ├─onn-var                                  253:9    0    15G  0 lvm  /var
      │   ├─onn-tmp                                  253:10   0     1G  0 lvm  /tmp
      │   ├─onn-home                                 253:11   0     1G  0 lvm  /home
      │   └─onn-ovirt--node--ng--4.4.2--0.20200918.0+1 253:12 0 314.7G  0 lvm
      └─onn-swap                                     253:4    0     4G  0 lvm  [SWAP]
    sdb                                                8:16   0   5.5T  0 disk
    └─sdb1                                             8:17   0   5.5T  0 part /sdb

Looking for any pointers on what else I should be looking at to get Gluster to deploy successfully.

Thanks
~ R
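For reference, this is how I have been checking whether it is multipath or the LVM filter that rejects the disk. These are just standard LVM/multipath commands, nothing generated by the wizard, and /dev/sdb is simply the brick device from the output above:

    multipath -ll                                       # is a multipath map sitting on top of sdb?
    lsblk /dev/sdb                                      # does an mpath holder show up under the disk?
    grep -E '^\s*(global_)?filter' /etc/lvm/lvm.conf    # which filter is currently configured?
    pvcreate --test /dev/sdb                            # dry run; should report the same filter rejection without touching the disk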

Hey! Are you running over CentOS? Either you have to uncheck the "Blacklist Gluster devices" option on the bricks page and try again, or you can add a filter to /etc/lvm/lvm.conf, something like this: a|^/dev/sda2$|
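To be concrete, the devices section of /etc/lvm/lvm.conf would end up with something along these lines. The exact regexes here are only an example for the disk layout shown above (sda2 carrying the oVirt Node root VG, sdb being the intended Gluster brick), so adjust them to your own devices:

    devices {
        # accept the root PV and the Gluster brick explicitly;
        # the trailing reject-all entry is optional and only needed
        # if you want to keep every other device out of LVM
        filter = [ "a|^/dev/sda2$|", "a|^/dev/sdb$|", "r|.*|" ]
    }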

Yes - CentOS.

    [root@ovirt01 ~]# cat /etc/os-release
    NAME="CentOS Linux"
    VERSION="8 (Core)"
    ID="centos"
    ID_LIKE="rhel fedora"
    VERSION_ID="8"
    VARIANT="oVirt Node 4.4.3"
    VARIANT_ID="ovirt-node"
    PRETTY_NAME="oVirt Node 4.4.3"
    ANSI_COLOR="0;31"
    CPE_NAME="cpe:/o:centos:centos:8"
    HOME_URL="https://www.ovirt.org/"
    BUG_REPORT_URL="https://bugzilla.redhat.com/"
    PLATFORM_ID="platform:el8"
    [root@ovirt01 ~]#

I thought that the Gluster storage devices had to be blacklisted? Or is it that you only want to blacklist the locally attached disks that are not going to be used for Gluster, and that the checkbox actually blacklists all locally attached disks? Just looking for some clarification.
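For example, is the end result of that checkbox a multipath blacklist drop-in along these lines? This is just my guess at the shape of such a file; the path under /etc/multipath/conf.d/ and the use of a WWID are my assumptions, not something I have verified against what the wizard actually writes:

    # /etc/multipath/conf.d/gluster-blacklist.conf  (illustrative only)
    blacklist {
        # WWID of the local brick disk, as reported by `multipath -ll`
        wwid "<WWID-of-sdb>"
    }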

Figured it out. In /etc/lvm/lvm.conf:

    filter = [ "a|^/dev/sda2$|" ]

then:

    wipefs -af /dev/sdb
    multipath -F
    pvcreate /dev/sdb

Those steps got me past the filter error - Gluster deployed.
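If anyone else hits this, a quick way to confirm the device is no longer being filtered before re-running the wizard (standard LVM/multipath commands, nothing oVirt-specific):

    grep -E '^\s*filter' /etc/lvm/lvm.conf   # confirm the new filter line is the one in effect
    multipath -ll                            # should no longer show a map claiming sdb after multipath -F
    pvs /dev/sdb                             # should list the new PV once pvcreate succeeds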

Hi,

Sorry if this goes against convention here, and maybe best practises have moved on, but I always create a partition on a disk before using it:

    ...
    multipath -F
    parted /dev/sdb mklabel gpt
    parted /dev/sdb mkpart primary 0% 100%
    parted /dev/sdb set 1 lvm on
    pvcreate /dev/sdb1

It's good practise to get into, especially when a disk is moved from one OS to another, so as to avoid data loss.

Regards
Angus
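If you do go the partition route, remember that the brick device you hand to the deployment then needs to be the partition rather than the whole disk, e.g. something like the below in the generated inventory. The variable name here is from memory of the gluster.infra role, so double-check it against whatever your wizard run actually produced:

    gluster_infra_volume_groups:
      - vgname: gluster_vg_sdb
        pvname: /dev/sdb1   # the partition, not /dev/sdb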
participants (3)
- Angus Clarke
- Parth Dhanjal
- rcpoling@gmail.com