
Hi,

Gluster will not set up and fails... can anyone see why? /etc/hosts is set up for both the backend Gluster network and the front end, and LAN DNS is set up on the subnet for the front end.

TASK [gluster.infra/roles/backend_setup : Set PV data alignment for JBOD] ******
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml:17
skipping: [gfs2.gluster.private] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [gfs1.gluster.private] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [gfs3.gluster.private] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [gluster.infra/roles/backend_setup : Set PV data alignment for RAID] ******
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml:24
ok: [gfs2.gluster.private] => {"ansible_facts": {"pv_dataalign": "3072K\n"}, "changed": false}
ok: [gfs1.gluster.private] => {"ansible_facts": {"pv_dataalign": "3072K\n"}, "changed": false}
ok: [gfs3.gluster.private] => {"ansible_facts": {"pv_dataalign": "3072K\n"}, "changed": false}

TASK [gluster.infra/roles/backend_setup : Set VG physical extent size for RAID] ***
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml:34
ok: [gfs2.gluster.private] => {"ansible_facts": {"vg_pesize": "3072K\n"}, "changed": false}
ok: [gfs1.gluster.private] => {"ansible_facts": {"vg_pesize": "3072K\n"}, "changed": false}
ok: [gfs3.gluster.private] => {"ansible_facts": {"vg_pesize": "3072K\n"}, "changed": false}

TASK [gluster.infra/roles/backend_setup : Create volume groups] ****************
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml:46
failed: [gfs1.gluster.private] (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Device /dev/sdb not found."}
failed: [gfs3.gluster.private] (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Device /dev/sdb not found."}
failed: [gfs2.gluster.private] (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Device /dev/sdb not found."}

NO MORE HOSTS LEFT *************************************************************

PLAY RECAP *********************************************************************
gfs1.gluster.private : ok=10 changed=0 unreachable=0 failed=1 skipped=16 rescued=0 ignored=0
gfs2.gluster.private : ok=11 changed=1 unreachable=0 failed=1 skipped=16 rescued=0 ignored=0
gfs3.gluster.private : ok=10 changed=0 unreachable=0 failed=1 skipped=16 rescued=0 ignored=0

It's looking for a storage device at /dev/sdb to use for the Gluster bricks and is not finding one. Do you have a secondary storage device aside from the OS disk? The device path comes from the deployment configuration, as sketched after the quote below.

On Mon, Nov 18, 2019 at 6:42 AM <rob.downer@orbitalsystems.co.uk> wrote:
Hi,
Gluster will not set up and fails... can anyone see why ?
/etc/hosts set up for both backend Gluster network and front end, also LAN DNS set up on the subnet for the front end.
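The failing loop items suggest the deployment configuration contains something like the following (an inferred sketch: the variable name is taken from the role's with_items loop and the values from the task output above, not from the actual config file):

gluster_infra_volume_groups:
  - vgname: gluster_vg_sdb
    pvname: /dev/sdb   # must be an existing block device on gfs1, gfs2 and gfs3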

Logical Volumes
Create new Logical Volume
1.35 TiB  Pool for Thin Volumes  pool00
1 GiB     ext4 File System       /dev/onn_ovirt1/home
1.32 TiB  Inactive volume        ovirt-node-ng-4.3.6-0.20190926.0

You will need to edit the configuration to provide the correct device during installation. Check the output of lsblk.

On Mon, Nov 18, 2019 at 5:19 PM <rob.downer@orbitalsystems.co.uk> wrote:

[root@ovirt1 ~]# lsblk
NAME                                                    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                       8:0    0  1.5T  0 disk
├─sda1                                                    8:1    0    1G  0 part /boot
└─sda2                                                    8:2    0  1.5T  0 part
  ├─onn_ovirt1-swap                                     253:0    0    4G  0 lvm  [SWAP]
  ├─onn_ovirt1-pool00_tmeta                             253:1    0    1G  0 lvm
  │ └─onn_ovirt1-pool00-tpool                           253:3    0  1.4T  0 lvm
  │   ├─onn_ovirt1-ovirt--node--ng--4.3.6--0.20190926.0+1 253:4  0  1.3T  0 lvm  /
  │   ├─onn_ovirt1-pool00                               253:5    0  1.4T  0 lvm
  │   ├─onn_ovirt1-var_log_audit                        253:6    0    2G  0 lvm  /var/log/audit
  │   ├─onn_ovirt1-var_log                              253:7    0    8G  0 lvm  /var/log
  │   ├─onn_ovirt1-var                                  253:8    0   15G  0 lvm  /var
  │   ├─onn_ovirt1-tmp                                  253:9    0    1G  0 lvm  /tmp
  │   ├─onn_ovirt1-home                                 253:10   0    1G  0 lvm  /home
  │   └─onn_ovirt1-var_crash                            253:11   0   10G  0 lvm  /var/crash
  └─onn_ovirt1-pool00_tdata                             253:2    0  1.4T  0 lvm
    └─onn_ovirt1-pool00-tpool                           253:3    0  1.4T  0 lvm
      ├─onn_ovirt1-ovirt--node--ng--4.3.6--0.20190926.0+1 253:4  0  1.3T  0 lvm  /
      ├─onn_ovirt1-pool00                               253:5    0  1.4T  0 lvm
      ├─onn_ovirt1-var_log_audit                        253:6    0    2G  0 lvm  /var/log/audit
      ├─onn_ovirt1-var_log                              253:7    0    8G  0 lvm  /var/log
      ├─onn_ovirt1-var                                  253:8    0   15G  0 lvm  /var
      ├─onn_ovirt1-tmp                                  253:9    0    1G  0 lvm  /tmp
      ├─onn_ovirt1-home                                 253:10   0    1G  0 lvm  /home
      └─onn_ovirt1-var_crash                            253:11   0   10G  0 lvm  /var/crash
[root@ovirt1 ~]#

You need a separate block device for Gluster storage; the whole of sda is already consumed by the OS volume group. A pre-flight check along the lines of the sketch below the quote would catch this earlier.

On Mon, Nov 18, 2019 at 7:49 AM <rob.downer@orbitalsystems.co.uk> wrote:
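As an untested sketch (not part of the shipped role), a pre-flight task like this could be run before the VG creation to fail early with a clearer message when a brick device is missing; it assumes the same gluster_infra_volume_groups variable the role iterates over:

- name: Verify that each brick device exists before touching LVM
  stat:
    path: "{{ item.pvname }}"
  register: brick_dev
  # fail with a per-host, per-device result instead of lvg's "Device not found"
  failed_when: not brick_dev.stat.exists
  with_items: "{{ gluster_infra_volume_groups }}"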

Can you provide the contents of /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml, as it seems that I do not have it (maybe it's only available during deployment)?

Best Regards,
Strahil Nikolov

On Monday, 18 November 2019 at 12:42:26 GMT+2, rob.downer@orbitalsystems.co.uk <rob.downer@orbitalsystems.co.uk> wrote:

Hi,

I believe I need to create a storage block device, which I was unaware of, as I thought one would be able to use part of the free space on the disks automatically provisioned by the node installer. I believe this requires either a reinstall and creation of a new volume, or reducing the size of the current volume and creating a new one on the live system.

On another note, how do I remove my email address from showing on posts? It is not great to have it visible.

The file you wanted is below...

[root@ovirt1 ~]# cat /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml
---
# We have to set the dataalignment for physical volumes, and physicalextentsize
# for volume groups. For JBODs we use a constant alignment value of 256K
# however, for RAID we calculate it by multiplying the RAID stripe unit size
# with the number of data disks. Hence in case of RAID stripe_unit_size and data
# disks are mandatory parameters.
- name: Check if valid disktype is provided
  fail:
    msg: "Unknown disktype. Allowed disktypes: JBOD, RAID6, RAID10, RAID5."
  when: gluster_infra_disktype not in [ 'JBOD', 'RAID6', 'RAID10', 'RAID5' ]

# Set data alignment for JBODs, by default it is 256K. This set_fact is not
# needed if we can always assume 256K for JBOD, however we provide this extra
# variable to override it.
- name: Set PV data alignment for JBOD
  set_fact:
    pv_dataalign: "{{ gluster_infra_dalign | default('256K') }}"
  when: gluster_infra_disktype == 'JBOD'

# Set data alignment for RAID
# We need KiB: ensure to keep the trailing `K' in the pv_dataalign calculation.
- name: Set PV data alignment for RAID
  set_fact:
    pv_dataalign: >
      {{ gluster_infra_diskcount|int * gluster_infra_stripe_unit_size|int }}K
  when: >
    gluster_infra_disktype == 'RAID6' or
    gluster_infra_disktype == 'RAID10' or
    gluster_infra_disktype == 'RAID5'

- name: Set VG physical extent size for RAID
  set_fact:
    vg_pesize: >
      {{ gluster_infra_diskcount|int * gluster_infra_stripe_unit_size|int }}K
  when: >
    gluster_infra_disktype == 'RAID6' or
    gluster_infra_disktype == 'RAID10' or
    gluster_infra_disktype == 'RAID5'

# Tasks to create a volume group
# The devices in `pvs' can be a regular device or a VDO device
- name: Create volume groups
  lvg:
    state: present
    vg: "{{ item.vgname }}"
    pvs: "{{ item.pvname }}"
    pv_options: "--dataalignment {{ pv_dataalign }}"
    # pesize is 4m by default for JBODs
    pesize: "{{ vg_pesize | default(4) }}"
  with_items: "{{ gluster_infra_volume_groups }}"
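To illustrate the RAID arithmetic in the comments above (these values are assumed for the example, not taken from this deployment): with a 256 KiB stripe unit and 12 data disks, the role computes 12 * 256 = 3072K for both pv_dataalign and vg_pesize, which matches the values in the playbook output earlier in the thread:

gluster_infra_disktype: RAID6
gluster_infra_diskcount: 12            # data disks only, excluding parity
gluster_infra_stripe_unit_size: 256    # KiB; 12 * 256 gives the 3072K alignment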
participants (4)
- Jayme
- rob.downer@orbitalsystems.co.uk
- Sahina Bose
- Strahil Nikolov