Hello,
<p><font face="Helvetica, Arial, sans-serif">Gluster-Performance is
bad. Thats why I asked for native qemu-libgfapi access for
Ovirt-VM's to gluster volumes which I thought to be possible
since 3.6.x. Documentation is misleading and still in 4.1.2
Ovirt is using fuse to mount gluster-based VM-Disks.</font></p>
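
To illustrate what I mean (the host, volume and image names here are
only examples): with FUSE, qemu opens the disk image through a mounted
filesystem, while with libgfapi qemu talks to the Gluster servers
directly and skips the kernel round-trip:

  # what oVirt 4.1.2 actually does: image on a FUSE mount
  -drive file=/rhev/data-center/mnt/glusterSD/ovirt1:_vmssd/<image>
  # native libgfapi access, which I expected to get:
  -drive file=gluster://ovirt1/vmssd/<image>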

Bye
<div class="moz-cite-prefix">Am 19.06.2017 um 17:23 schrieb Darrell
Budic:<br>
</div>
<blockquote type="cite"
cite="mid:D8AAF9DB-02FB-4B38-9E33-174134F5377C@onholyground.com">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
Chris-
<div class=""><br class="">
</div>
<div class="">You probably need to head over to <a
href="mailto:gluster-users@gluster.org" class=""
moz-do-not-send="true">gluster-users@gluster.org</a> for help
with performance issues.</div>
<div class=""><br class="">
</div>
<div class="">That said, what kind of performance are you getting,
via some form or testing like bonnie++ or even dd runs? Raw
bricks vs gluster performance is useful to determine what kind
of performance you’re actually getting.</div>
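>
> For instance, a rough sequential comparison might look like this (the
> paths are placeholders; oflag=direct keeps the page cache from
> inflating the numbers):
>
>   # directly on the brick filesystem:
>   dd if=/dev/zero of=/path/on/brick/ddtest bs=1M count=1024 oflag=direct
>   # same thing through a FUSE mount of the volume:
>   dd if=/dev/zero of=/path/on/fuse-mount/ddtest bs=1M count=1024 oflag=direct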
<div class=""><br class="">
</div>
<div class="">Beyond that, I’d recommend dropping the arbiter
bricks and re-adding them as full replicas, they can’t serve
distributed data in this configuration and may be slowing things
down on you. If you’ve got a storage network setup, make sure
it’s using the largest MTU it can, and consider adding/testing
these settings that I use on my main storage volume:</div>
<div class=""><br class="">
</div>
<div class="">
<div style="margin: 0px; line-height: normal;" class=""><a
href="http://performance.io" class="" moz-do-not-send="true">performance.io</a>-thread-count:
32</div>
<div style="margin: 0px; line-height: normal;" class=""><span
style="font-variant-ligatures: no-common-ligatures" class="">client.event-threads:
8</span></div>
<div style="margin: 0px; line-height: normal;" class=""><span
style="font-variant-ligatures: no-common-ligatures" class="">server.event-threads:
3</span></div>
<div style="margin: 0px; line-height: normal;" class="">performance.stat-prefetch:
on</div>
</div>
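>
> (If you want to try these, each one is applied with "gluster volume
> set", e.g. for the volume below:
>
>   gluster volume set vmssd client.event-threads 8
>
> and likewise for the rest. A jumbo-frame MTU can be checked end-to-end
> with something like "ping -M do -s 8972 <storage-host>"; 8972 is a
> 9000-byte MTU minus the 28 bytes of IP/ICMP headers.)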
<div class=""><span style="font-variant-ligatures:
no-common-ligatures" class=""><br class="">
</span></div>
<div class=""><span style="font-variant-ligatures:
no-common-ligatures" class="">Good luck,</span></div>
<div class=""><span style="font-variant-ligatures:
no-common-ligatures" class=""><br class="">
</span></div>
<div class=""><span style="font-variant-ligatures:
no-common-ligatures" class=""> -Darrell</span></div>
<div class=""><br class="">
</div>
<div class=""><br class="">
<div>
<blockquote type="cite" class="">
<div class="">On Jun 19, 2017, at 9:46 AM, Chris Boot <<a
href="mailto:bootc@bootc.net" class=""
moz-do-not-send="true">bootc@bootc.net</a>> wrote:</div>
<br class="Apple-interchange-newline">
<div class="">
<div class="">Hi folks,<br class="">
<br class="">
>>
>> I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10
>> configuration. My VMs run off a replica 3 arbiter 1 volume comprised
>> of 6 bricks, which themselves live on two SSDs in each of the servers
>> (one brick per SSD). The bricks are XFS on LVM thin volumes straight
>> onto the SSDs. Connectivity is 10G Ethernet.
<br class="">
Performance within the VMs is pretty terrible. I
experience very low<br class="">
throughput and random IO is really bad: it feels like a
latency issue.<br class="">
On my oVirt nodes the SSDs are not generally very busy.
The 10G network<br class="">
seems to run without errors (iperf3 gives bandwidth
measurements of >=<br class="">
9.20 Gbits/sec between the three servers).<br class="">
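>>
>> (iperf3 run in the usual client/server way: iperf3 -s on one node and
>> iperf3 -c <node> from another.)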
<br class="">
To put this into perspective: I was getting better
behaviour from NFS4<br class="">
on a gigabit connection than I am with GlusterFS on 10G:
that doesn't<br class="">
feel right at all.<br class="">
<br class="">
>>
>> My volume configuration looks like this:
>>
>> Volume Name: vmssd
>> Type: Distributed-Replicate
>> Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 2 x (2 + 1) = 6
>> Transport-type: tcp
>> Bricks:
>> Brick1: ovirt3:/gluster/ssd0_vmssd/brick
>> Brick2: ovirt1:/gluster/ssd0_vmssd/brick
>> Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
>> Brick4: ovirt3:/gluster/ssd1_vmssd/brick
>> Brick5: ovirt1:/gluster/ssd1_vmssd/brick
>> Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
>> Options Reconfigured:
>> nfs.disable: on
>> transport.address-family: inet6
>> performance.quick-read: off
>> performance.read-ahead: off
>> performance.io-cache: off
>> performance.stat-prefetch: off
>> performance.low-prio-threads: 32
>> network.remote-dio: off
>> cluster.eager-lock: enable
>> cluster.quorum-type: auto
>> cluster.server-quorum-type: server
>> cluster.data-self-heal-algorithm: full
>> cluster.locking-scheme: granular
>> cluster.shd-max-threads: 8
>> cluster.shd-wait-qlength: 10000
>> features.shard: on
>> user.cifs: off
>> storage.owner-uid: 36
>> storage.owner-gid: 36
>> features.shard-block-size: 128MB
>> performance.strict-o-direct: on
>> network.ping-timeout: 30
>> cluster.granular-entry-heal: enable
<br class="">
I would really appreciate some guidance on this to try
to improve things<br class="">
because at this rate I will need to reconsider using
GlusterFS altogether.<br class="">
<br class="">
Cheers,<br class="">
Chris<br class="">
<br class="">
-- <br class="">
Chris Boot<br class="">
<a href="mailto:bootc@bootc.net" class=""
moz-do-not-send="true">bootc@bootc.net</a><br class="">
<div class="moz-signature">-- <br>
<p>
</p>
<table cellspacing="0" cellpadding="0" border="0">
<tbody>
<tr>
<td colspan="3"><img
src="cid:part6.276D40AB.8385CD25@databay.de" height="30"
width="151" border="0"></td>
</tr>
<tr>
<td valign="top"> <font size="-1" face="Verdana, Arial,
sans-serif"><br>
<b>Ralf Schenk</b><br>
fon +49 (0) 24 05 / 40 83 70<br>
fax +49 (0) 24 05 / 40 83 759<br>
mail <a href="mailto:rs@databay.de"><font
color="#FF0000"><b>rs@databay.de</b></font></a><br>
</font> </td>
<td width="30"> </td>
<td valign="top"> <font size="-1" face="Verdana, Arial,
sans-serif"><br>
<b>Databay AG</b><br>
Jens-Otto-Krag-Straße 11<br>
D-52146 Würselen<br>
<a href="http://www.databay.de"><font color="#FF0000"><b>www.databay.de</b></font></a>
</font> </td>
</tr>
<tr>
<td colspan="3" valign="top"> <font size="1" face="Verdana,
Arial, sans-serif"><br>
Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE
210844202<br>
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch
Yavari, Dipl.-Kfm. Philipp Hermanns<br>
Aufsichtsratsvorsitzender: Wilhelm Dohmen </font> </td>
</tr>
</tbody>
</table>
<hr noshade="noshade" size="1" color="#000000" width="100%">
</div>