<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Jun 19, 2017 at 7:32 PM, Ralf Schenk <span dir="ltr">&lt;<a href="mailto:rs@databay.de" target="_blank">rs@databay.de</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
  
    
  
  <div text="#000000" bgcolor="#FFFFFF">
    <p><font face="Helvetica, Arial, sans-serif">Hello,</font></p>
    <p><font face="Helvetica, Arial, sans-serif">Gluster-Performance is
        bad. Thats why I asked for native qemu-libgfapi access for
        Ovirt-VM&#39;s to gluster volumes which I thought to be possible
        since 3.6.x. Documentation is misleading and still in 4.1.2
        Ovirt is using fuse to mount gluster-based VM-Disks.</font></p></div></blockquote><div><br></div><div>Can you please open a bug to fix documentation? We are working on libgfapi, but it&#39;s indeed not in yet.</div><div>Y.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div text="#000000" bgcolor="#FFFFFF">
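
A quick way to check which access path a running VM is actually using on a
host (illustrative commands only; the exact mount point layout depends on
the setup):

  # FUSE-mounted Gluster storage domains show up as fuse.glusterfs mounts,
  # typically under /rhev/data-center/mnt/glusterSD/...
  mount -t fuse.glusterfs

  # with libgfapi, the qemu process would reference gluster:// URLs instead
  # of plain file paths on a FUSE mount
  ps -ef | grep qemu | grep -o 'gluster://[^, ]*'
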
> Bye
>
> On 19.06.2017 at 17:23, Darrell Budic wrote:
>> Chris-
>>
>> You probably need to head over to gluster-users@gluster.org for help with
>> performance issues.
>>
>> That said, what kind of performance are you getting, via some form of
>> testing like bonnie++ or even dd runs? Comparing raw brick performance
>> against Gluster performance is useful for determining what you're actually
>> getting.
>>
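>> Something like the following gives a rough sequential comparison (a sketch
>> only: the mount point and scratch paths are assumptions based on the brick
>> layout quoted below, and oflag=direct is used to bypass the page cache):
>>
>>   # on one host, against the filesystem backing a brick (outside the brick dir)
>>   dd if=/dev/zero of=/gluster/ssd0_vmssd/ddtest bs=1M count=1024 oflag=direct
>>
>>   # on the same host, through a plain GlusterFS client mount of the volume
>>   mkdir -p /mnt/vmssd-test
>>   mount -t glusterfs ovirt1:/vmssd /mnt/vmssd-test
>>   dd if=/dev/zero of=/mnt/vmssd-test/ddtest bs=1M count=1024 oflag=direct
>>
>>   rm -f /gluster/ssd0_vmssd/ddtest /mnt/vmssd-test/ddtest
>>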
>> Beyond that, I'd recommend dropping the arbiter bricks and re-adding them
>> as full replicas; they can't serve distributed data in this configuration
>> and may be slowing things down on you. If you've got a storage network set
>> up, make sure it's using the largest MTU it can, and consider adding/testing
>> these settings that I use on my main storage volume:
>>
>> performance.io-thread-count: 32
>> client.event-threads: 8
>> server.event-threads: 3
>> performance.stat-prefetch: on
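>>
>> As a sketch, assuming the volume is named vmssd as in the output quoted
>> below, these would be applied with gluster volume set, and the interface
>> MTU can be checked with ip link:
>>
>>   gluster volume set vmssd performance.io-thread-count 32
>>   gluster volume set vmssd client.event-threads 8
>>   gluster volume set vmssd server.event-threads 3
>>   gluster volume set vmssd performance.stat-prefetch on
>>
>>   # MTU per interface; 9000 is typical for a jumbo-frame storage network
>>   ip link | grep mtu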
>>
>> Good luck,
>>
>>   -Darrell
>>
>> On Jun 19, 2017, at 9:46 AM, Chris Boot <bootc@bootc.net> wrote:
>>
>>> Hi folks,
>>>
>>> I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10
>>> configuration. My VMs run off a replica 3 arbiter 1 volume comprised of
>>> 6 bricks, which themselves live on two SSDs in each of the servers (one
>>> brick per SSD). The bricks are XFS on LVM thin volumes straight onto the
>>> SSDs. Connectivity is 10G Ethernet.
>>>
>>> Performance within the VMs is pretty terrible. I experience very low
>>> throughput and random IO is really bad: it feels like a latency issue.
>>> On my oVirt nodes the SSDs are not generally very busy. The 10G network
>>> seems to run without errors (iperf3 gives bandwidth measurements of >=
>>> 9.20 Gbits/sec between the three servers).
>>>
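>>> (For reference, a basic iperf3 bandwidth test between two of the hosts
>>> looks roughly like this; host name and duration are only examples:)
>>>
>>>   iperf3 -s                # on one server
>>>   iperf3 -c ovirt1 -t 30   # from another server, 30-second TCP test
>>>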
>>> To put this into perspective: I was getting better behaviour from NFS4
>>> on a gigabit connection than I am with GlusterFS on 10G: that doesn't
>>> feel right at all.
>>>
>>> My volume configuration looks like this:
>>>
>>> Volume Name: vmssd
>>> Type: Distributed-Replicate
>>> Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 2 x (2 + 1) = 6
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: ovirt3:/gluster/ssd0_vmssd/brick
>>> Brick2: ovirt1:/gluster/ssd0_vmssd/brick
>>> Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
>>> Brick4: ovirt3:/gluster/ssd1_vmssd/brick
>>> Brick5: ovirt1:/gluster/ssd1_vmssd/brick
>>> Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
>>> Options Reconfigured:
>>> nfs.disable: on
>>> transport.address-family: inet6
>>> performance.quick-read: off
>>> performance.read-ahead: off
>>> performance.io-cache: off
>>> performance.stat-prefetch: off
>>> performance.low-prio-threads: 32
>>> network.remote-dio: off
>>> cluster.eager-lock: enable
>>> cluster.quorum-type: auto
>>> cluster.server-quorum-type: server
>>> cluster.data-self-heal-algorithm: full
>>> cluster.locking-scheme: granular
>>> cluster.shd-max-threads: 8
>>> cluster.shd-wait-qlength: 10000
>>> features.shard: on
>>> user.cifs: off
>>> storage.owner-uid: 36
>>> storage.owner-gid: 36
>>> features.shard-block-size: 128MB
>>> performance.strict-o-direct: on
>>> network.ping-timeout: 30
>>> cluster.granular-entry-heal: enable
>>>
>>> I would really appreciate some guidance on this to try to improve things
>>> because at this rate I will need to reconsider using GlusterFS altogether.
>>>
>>> Cheers,
>>> Chris
>>>
>>> --
>>> Chris Boot
>>> bootc@bootc.net
>
> --
> Ralf Schenk
> fon +49 (0) 24 05 / 40 83 70
> fax +49 (0) 24 05 / 40 83 759
> mail rs@databay.de
>
> Databay AG
> Jens-Otto-Krag-Straße 11
> D-52146 Würselen
> www.databay.de

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users