[ovirt-users] Hyperconverged oVirt installation gluster problems

knarra knarra at redhat.com
Sat Jun 17 03:39:07 UTC 2017


Hi,

     grafton-sanity-check.sh checks whether the disk has any labels or 
partitions present on it. Since your disk already carries a partition 
table and you are using that same disk to create the gluster brick, the 
check fails. Commenting out this script in the gdeploy conf file and 
running the deployment again should resolve your issue.
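
For illustration, the section to comment out would look roughly like the 
sketch below. This assumes the stock layout that the "Up and Running with 
oVirt 4.1" guide generates; the section name ([script1]) and the exact 
arguments may differ in your generated conf file, so match them against 
what gdeploy wrote for you:

    # [script1]
    # action=execute
    # ignore_script_errors=no
    # file=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h host01,host02,host03

Note that this only skips the sanity check; the stale DOS label on 
/dev/sdb is still there, so make sure the disk really holds no data you 
need before letting the rest of the deployment partition it.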

Thanks
kasturi.

On 06/16/2017 06:56 PM, jesper andersson wrote:
> Hi.
>
> I'm trying to set up a 3 node ovirt cluster with gluster as this guide 
> describes:
> https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
> I've installed oVirt node 4.1.2 in one partition and left a partition 
> to hold the gluster volumes on all three nodes. The problem is that I 
> can't get through gdeploy for gluster install. I only get the error:
> Error: Unsupported disk type!
>
>
>
> PLAY [gluster_servers] 
> *********************************************************
>
> TASK [Run a shell script] 
> ******************************************************
> changed: [host03] => 
> (item=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d 
> sdb -h host01,host02,host03)
> changed: [host02] => 
> (item=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d 
> sdb -h host01,host02,host03)
> changed: [host01] => 
> (item=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d 
> sdb -h host01,host02,host03)
>
> TASK [debug] 
> *******************************************************************
> ok: [host01] => {
>     "changed": false,
>     "msg": "All items completed"
> }
> ok: [host02] => {
>     "changed": false,
>     "msg": "All items completed"
> }
> ok: [host03] => {
>     "changed": false,
>     "msg": "All items completed"
> }
>
> PLAY RECAP 
> *********************************************************************
> host01                     : ok=2    changed=1 unreachable=0    failed=0
> host02                     : ok=2    changed=1 unreachable=0    failed=0
> host03                     : ok=2    changed=1 unreachable=0    failed=0
>
>
> PLAY [gluster_servers] 
> *********************************************************
>
> TASK [Enable or disable services] 
> **********************************************
> ok: [host01] => (item=chronyd)
> ok: [host03] => (item=chronyd)
> ok: [host02] => (item=chronyd)
>
> PLAY RECAP 
> *********************************************************************
> host01                     : ok=1    changed=0 unreachable=0    failed=0
> host02                     : ok=1    changed=0 unreachable=0    failed=0
> host03                     : ok=1    changed=0 unreachable=0    failed=0
>
>
> PLAY [gluster_servers] 
> *********************************************************
>
> TASK [start/stop/restart/reload services] 
> **************************************
> changed: [host03] => (item=chronyd)
> changed: [host01] => (item=chronyd)
> changed: [host02] => (item=chronyd)
>
> PLAY RECAP 
> *********************************************************************
> host01                     : ok=1    changed=1 unreachable=0    failed=0
> host02                     : ok=1    changed=1 unreachable=0    failed=0
> host03                     : ok=1    changed=1 unreachable=0    failed=0
>
>
> Error: Unsupported disk type!
>
>
>
>
>
> [root at host01 scripts]# fdisk -l
>
> Disk /dev/sdb: 898.3 GB, 898319253504 bytes, 1754529792 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk label type: dos
> Disk identifier: 0x0629cdcf
>
>    Device Boot      Start         End      Blocks   Id System
>
> Disk /dev/sda: 299.4 GB, 299439751168 bytes, 584843264 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk label type: dos
> Disk identifier: 0x00007c39
>
>    Device Boot      Start         End      Blocks   Id System
> /dev/sda1   *        2048     2099199     1048576   83 Linux
> /dev/sda2         2099200   584843263   291372032   8e Linux LVM
>
> Disk /dev/mapper/onn_host01-swap: 16.9 GB, 16911433728 bytes, 33030144 
> sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
>
>
> Disk /dev/mapper/onn_host01-pool00_tmeta: 1073 MB, 1073741824 bytes, 
> 2097152 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
>
>
> Disk /dev/mapper/onn_host01-pool00_tdata: 264.3 GB, 264266317824 
> bytes, 516145152 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
>
>
> Disk /dev/mapper/onn_host01-pool00-tpool: 264.3 GB, 264266317824 
> bytes, 516145152 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 131072 bytes / 131072 bytes
>
>
> Disk /dev/mapper/onn_host01-ovirt--node--ng--4.1.2--0.20170613.0+1: 
> 248.2 GB, 248160190464 bytes, 484687872 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 131072 bytes / 131072 bytes
>
>
> Disk /dev/mapper/onn_host01-pool00: 264.3 GB, 264266317824 bytes, 
> 516145152 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 131072 bytes / 131072 bytes
>
>
> Disk /dev/mapper/onn_host01-var: 16.1 GB, 16106127360 bytes, 31457280 
> sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 131072 bytes / 131072 bytes
>
>
> Disk /dev/mapper/onn_host01-root: 248.2 GB, 248160190464 bytes, 
> 484687872 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 131072 bytes / 131072 bytes
>
> Any input is appreciated
>
> Best regards
> Jesper
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


