<html>
  <head>
    <meta content="text/html; charset=windows-1252"
      http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <div class="moz-cite-prefix">Hi,<br>
      <br>
          grafton-sanity-check.sh checks whether the disk has any labels
      or partitions present on it. Since your /dev/sdb already carries a
      dos disk label (visible in the fdisk output below) and you are
      using the same disk to create the Gluster brick, the check fails.
      Commenting out this script in the conf file and running gdeploy
      again should resolve your issue.<br>
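      <br>
      For reference, a minimal sketch of what that looks like, assuming
      your conf file has a [script1] section like the one from the blog
      post (the section name and keys in your generated conf may
      differ; the file= line matches the invocation in your log):<br>
      <pre wrap=""># Sanity check disabled: it rejects disks that already carry a label or partition
# [script1]
# action=execute
# file=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h host01,host02,host03
</pre>
      Then rerun gdeploy against the same conf file.<br>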
      <br>
      Thanks<br>
      kasturi.<br>
       <br>
      On 06/16/2017 06:56 PM, jesper andersson wrote:<br>
    </div>
    <blockquote
cite="mid:CABmxLUMtcrdV+2Wk_=raCPU1kz2AzM2e-XsPBmEx9FkRh8ZmGA@mail.gmail.com"
      type="cite">
      <div dir="ltr">
        <div>
          <div>
            <div>
              <div>
                <div>Hi.<br>
                  <br>
                </div>
                I'm trying to set up a 3-node oVirt cluster with Gluster,
                as this guide describes: <br>
                <a moz-do-not-send="true"
href="https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/">https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/</a><br>
              </div>
              I've installed oVirt Node 4.1.2 on one partition and left
              a partition to hold the Gluster volumes on all three
              nodes. The problem is that I can't get through the gdeploy
              Gluster install. I only get this error: <br>
              Error: Unsupported disk type!<br>
              <br>
              <br>
              <br>
              PLAY [gluster_servers]
              *********************************************************<br>
              <br>
              TASK [Run a shell script]
              ******************************************************<br>
              changed: [host03] =&gt;
              (item=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh
              -d sdb -h host01,host02,host03)<br>
              changed: [host02] =&gt;
              (item=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh
              -d sdb -h host01,host02,host03)<br>
              changed: [host01] =&gt;
              (item=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh
              -d sdb -h host01,host02,host03)<br>
              <br>
              TASK [debug]
              *******************************************************************<br>
              ok: [host01] =&gt; {<br>
                  "changed": false, <br>
                  "msg": "All items completed"<br>
              }<br>
              ok: [host02] =&gt; {<br>
                  "changed": false, <br>
                  "msg": "All items completed"<br>
              }<br>
              ok: [host03] =&gt; {<br>
                  "changed": false, <br>
                  "msg": "All items completed"<br>
              }<br>
              <br>
              PLAY RECAP
              *********************************************************************<br>
              host01                     : ok=2    changed=1   
              unreachable=0    failed=0   <br>
              host02                     : ok=2    changed=1   
              unreachable=0    failed=0   <br>
              host03                     : ok=2    changed=1   
              unreachable=0    failed=0   <br>
              <br>
              <br>
              PLAY [gluster_servers]
              *********************************************************<br>
              <br>
              TASK [Enable or disable services]
              **********************************************<br>
              ok: [host01] =&gt; (item=chronyd)<br>
              ok: [host03] =&gt; (item=chronyd)<br>
              ok: [host02] =&gt; (item=chronyd)<br>
              <br>
              PLAY RECAP
              *********************************************************************<br>
              host01                     : ok=1    changed=0   
              unreachable=0    failed=0   <br>
              host02                     : ok=1    changed=0   
              unreachable=0    failed=0   <br>
              host03                     : ok=1    changed=0   
              unreachable=0    failed=0   <br>
              <br>
              <br>
              PLAY [gluster_servers]
              *********************************************************<br>
              <br>
              TASK [start/stop/restart/reload services]
              **************************************<br>
              changed: [host03] =&gt; (item=chronyd)<br>
              changed: [host01] =&gt; (item=chronyd)<br>
              changed: [host02] =&gt; (item=chronyd)<br>
              <br>
              PLAY RECAP
              *********************************************************************<br>
              host01                     : ok=1    changed=1   
              unreachable=0    failed=0   <br>
              host02                     : ok=1    changed=1   
              unreachable=0    failed=0   <br>
              host03                     : ok=1    changed=1   
              unreachable=0    failed=0   <br>
              <br>
              <br>
              Error: Unsupported disk type!<br>
              <br>
              <br>
              <br>
              <br>
              <br>
              [root@host01 scripts]# fdisk -l<br>
              <br>
              Disk /dev/sdb: 898.3 GB, 898319253504 bytes, 1754529792
              sectors<br>
              Units = sectors of 1 * 512 = 512 bytes<br>
              Sector size (logical/physical): 512 bytes / 512 bytes<br>
              I/O size (minimum/optimal): 512 bytes / 512 bytes<br>
              Disk label type: dos<br>
              Disk identifier: 0x0629cdcf<br>
              <br>
                 Device Boot      Start         End      Blocks   Id 
              System<br>
              <br>
              Disk /dev/sda: 299.4 GB, 299439751168 bytes, 584843264
              sectors<br>
              Units = sectors of 1 * 512 = 512 bytes<br>
              Sector size (logical/physical): 512 bytes / 512 bytes<br>
              I/O size (minimum/optimal): 512 bytes / 512 bytes<br>
              Disk label type: dos<br>
              Disk identifier: 0x00007c39<br>
              <br>
                 Device Boot      Start         End      Blocks   Id 
              System<br>
              /dev/sda1   *        2048     2099199     1048576   83 
              Linux<br>
              /dev/sda2         2099200   584843263   291372032   8e 
              Linux LVM<br>
              <br>
              Disk /dev/mapper/onn_host01-swap: 16.9 GB, 16911433728
              bytes, 33030144 sectors<br>
              Units = sectors of 1 * 512 = 512 bytes<br>
              Sector size (logical/physical): 512 bytes / 512 bytes<br>
              I/O size (minimum/optimal): 512 bytes / 512 bytes<br>
              <br>
              <br>
              Disk /dev/mapper/onn_host01-pool00_tmeta: 1073 MB,
              1073741824 bytes, 2097152 sectors<br>
              Units = sectors of 1 * 512 = 512 bytes<br>
              Sector size (logical/physical): 512 bytes / 512 bytes<br>
              I/O size (minimum/optimal): 512 bytes / 512 bytes<br>
              <br>
              <br>
              Disk /dev/mapper/onn_host01-pool00_tdata: 264.3 GB,
              264266317824 bytes, 516145152 sectors<br>
              Units = sectors of 1 * 512 = 512 bytes<br>
              Sector size (logical/physical): 512 bytes / 512 bytes<br>
              I/O size (minimum/optimal): 512 bytes / 512 bytes<br>
              <br>
              <br>
              Disk /dev/mapper/onn_host01-pool00-tpool: 264.3 GB,
              264266317824 bytes, 516145152 sectors<br>
              Units = sectors of 1 * 512 = 512 bytes<br>
              Sector size (logical/physical): 512 bytes / 512 bytes<br>
              I/O size (minimum/optimal): 131072 bytes / 131072 bytes<br>
              <br>
              <br>
              Disk
              /dev/mapper/onn_host01-ovirt--node--ng--4.1.2--0.20170613.0+1:
              248.2 GB, 248160190464 bytes, 484687872 sectors<br>
              Units = sectors of 1 * 512 = 512 bytes<br>
              Sector size (logical/physical): 512 bytes / 512 bytes<br>
              I/O size (minimum/optimal): 131072 bytes / 131072 bytes<br>
              <br>
              <br>
              Disk /dev/mapper/onn_host01-pool00: 264.3 GB, 264266317824
              bytes, 516145152 sectors<br>
              Units = sectors of 1 * 512 = 512 bytes<br>
              Sector size (logical/physical): 512 bytes / 512 bytes<br>
              I/O size (minimum/optimal): 131072 bytes / 131072 bytes<br>
              <br>
              <br>
              Disk /dev/mapper/onn_host01-var: 16.1 GB, 16106127360
              bytes, 31457280 sectors<br>
              Units = sectors of 1 * 512 = 512 bytes<br>
              Sector size (logical/physical): 512 bytes / 512 bytes<br>
              I/O size (minimum/optimal): 131072 bytes / 131072 bytes<br>
              <br>
              <br>
              Disk /dev/mapper/onn_host01-root: 248.2 GB, 248160190464
              bytes, 484687872 sectors<br>
              Units = sectors of 1 * 512 = 512 bytes<br>
              Sector size (logical/physical): 512 bytes / 512 bytes<br>
              I/O size (minimum/optimal): 131072 bytes / 131072 bytes<br>
              <br>
            </div>
            Any input is appreciated<br>
            <br>
          </div>
          Best regards<br>
        </div>
        Jesper<br>
      </div>
      <br>
      <br>
      <pre wrap="">_______________________________________________
Users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Users@ovirt.org">Users@ovirt.org</a>
<a class="moz-txt-link-freetext" href="http://lists.ovirt.org/mailman/listinfo/users">http://lists.ovirt.org/mailman/listinfo/users</a>
</pre>
    </blockquote>
  </body>
</html>