Greetings:

3 x servers; each server has 1 x 512GB SSD and 2 x 1TB SSDs, presented as JBOD

Goal: use the HCI disk setup wizard to deploy the initial structure

Each server's disks enumerate as different /dev/sd# names, so I am trying to use the clearer /dev/mapper/<disk ID> aliases instead.

As such, I set this per the table below (a quick way to verify the mapping on each host is sketched after the table):

 

# Select each server and set each drive  <<<<< Double-check the drive device IDs, as they do NOT match up between hosts

# I switched to the /dev/mapper aliases to avoid the ambiguity of /dev/sd#

Host      Device      /dev/mapper alias
thor      /dev/sdc    /dev/mapper/Samsung_SSD_850_PRO_512GB_S250NXAGA15787L
odin      /dev/sdb    /dev/mapper/Micron_1100_MTFDDAV512TBN_17401F699137
medusa    /dev/sdb    /dev/mapper/SAMSUNG_SSD_PM851_mSATA_512GB_S1EWNYAF609306
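For reference, this is how I cross-check that mapping on each host before plugging it into the wizard (just standard lsblk/multipath commands, shown on thor; same on odin and medusa):

# Cross-check which /dev/sdX sits behind each /dev/mapper alias
[root@thor /]# lsblk -o NAME,KNAME,SIZE,MODEL,SERIAL   # model/serial should match the mapper alias name
[root@thor /]# ls -l /dev/mapper/                      # device-mapper aliases built from the disk model/serial
[root@thor /]# multipath -ll                           # shows the sdX path(s) sitting under each alias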

# Note that drives need to be completely clear of any partition or file system
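A quick way to confirm a disk really is clean (a sketch, using thor's /dev/sdc as the example):

[root@thor /]# wipefs /dev/sdc      # with no options, just lists any leftover partition-table/filesystem signatures
[root@thor /]# lsblk -f /dev/sdc    # should show no FSTYPE and no child partitions

On thor, /dev/sdc still had an old GPT on it, so I zapped it with gdisk: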

[root@thor /]# gdisk /dev/sdc
GPT fdisk (gdisk) version 1.0.3

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): x

Expert command (? for help): z
About to wipe out GPT on /dev/sdc. Proceed? (Y/N): y
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Blank out MBR? (Y/N): y
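
The same wipe can be scripted non-interactively across all three nodes; something like this should be equivalent (a sketch, not what the wizard runs):

[root@thor /]# sgdisk --zap-all /dev/sdc   # destroys the GPT and the protective MBR in one step
[root@thor /]# wipefs -a /dev/sdc          # erases any remaining filesystem/LVM signatures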

(screenshots attached: image.png)


But the deployment fails with this error:
<snip>

TASK [gluster.infra/roles/backend_setup : Filter none-existing devices] ********
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/get_vg_groupings.yml:38
ok: [thorst.penguinpages.local] => {"ansible_facts": {"gluster_volumes_by_groupname": {}}, "changed": false}
ok: [odinst.penguinpages.local] => {"ansible_facts": {"gluster_volumes_by_groupname": {}}, "changed": false}
ok: [medusast.penguinpages.local] => {"ansible_facts": {"gluster_volumes_by_groupname": {}}, "changed": false}

TASK [gluster.infra/roles/backend_setup : Make sure thick pvs exists in volume group] ***
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thick_lv_create.yml:37

TASK [gluster.infra/roles/backend_setup : update LVM fact's] *******************
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thick_lv_create.yml:83
skipping: [thorst.penguinpages.local] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [odinst.penguinpages.local] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [medusast.penguinpages.local] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [gluster.infra/roles/backend_setup : Create thick logical volume] *********
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thick_lv_create.yml:90
failed: [medusast.penguinpages.local] (item={'vgname': 'gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L', 'lvname': 'gluster_lv_engine', 'size': '1000G'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": "  Volume group \"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L\" not found.\n  Cannot process volume group gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L\n", "index": 0, "item": {"lvname": "gluster_lv_engine", "size": "1000G", "vgname": "gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L"}, "msg": "Volume group gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L does not exist.", "rc": 5}
changed: [thorst.penguinpages.local] => (item={'vgname': 'gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L', 'lvname': 'gluster_lv_engine', 'size': '1000G'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": true, "index": 0, "item": {"lvname": "gluster_lv_engine", "size": "1000G", "vgname": "gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L"}, "msg": ""}
failed: [odinst.penguinpages.local] (item={'vgname': 'gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L', 'lvname': 'gluster_lv_engine', 'size': '1000G'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": "  Volume group \"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L\" not found.\n  Cannot process volume group gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L\n", "index": 0, "item": {"lvname": "gluster_lv_engine", "size": "1000G", "vgname": "gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L"}, "msg": "Volume group gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L does not exist.", "rc": 5}

NO MORE HOSTS LEFT *************************************************************

NO MORE HOSTS LEFT *************************************************************

PLAY RECAP *********************************************************************
medusast.penguinpages.local : ok=23   changed=5    unreachable=0    failed=1    skipped=34   rescued=0    ignored=0  
odinst.penguinpages.local  : ok=23   changed=5    unreachable=0    failed=1    skipped=34   rescued=0    ignored=0  
thorst.penguinpages.local  : ok=30   changed=9    unreachable=0    failed=0    skipped=29   rescued=0    ignored=0  

Please check /var/log/cockpit/ovirt-dashboard/gluster-deployment.log for more informations.

############

Why is oVirt ignoring the device I explicitly set (and double-checked) for the deployment?
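
From the log above, all three hosts were handed the same VG name, built from thor's Samsung serial, even though odin and medusa point at different disks. To confirm what LVM actually created on each node I run something like this (a sketch; the mapper path is odin's from the table above):

[root@odin ~]# pvs -o pv_name,vg_name   # which device (if any) became a PV and which VG it landed in
[root@odin ~]# vgs                      # whether any gluster_vg_* volume group exists on this host at all
[root@odin ~]# lsblk /dev/mapper/Micron_1100_MTFDDAV512TBN_17401F699137   # the device odin was supposed to use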

Attached are the ansible file the wizard creates and the one I had to edit to correct it to what the wizard should have built.



--
penguinpages