Chris-

You probably need to head over to gluster-users@gluster.org for help with performance issues.

That said, what kind of numbers are you actually getting, from some form of testing like bonnie++ or even a few dd runs (there's a quick dd sketch further down)? Comparing raw brick performance against performance through the gluster mount is the best way to see what each layer is really giving you.

Beyond that, I'd recommend dropping the arbiter bricks and re-adding them as full replicas (rough commands below); they can't serve data in this configuration and may be slowing things down for you. If you've got a dedicated storage network set up, make sure it's using the largest MTU it can, and consider adding/testing these settings that I use on my main storage volume:

performance.io-thread-count: 32
client.event-threads: 8
server.event-threads: 3
performance.stat-prefetch: on
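
To try those on your existing volume, it would be something along these lines (a sketch only, using your volume name vmssd; adjust to taste):

  # set the suggested tuning options on the vmssd volume
  gluster volume set vmssd performance.io-thread-count 32
  gluster volume set vmssd client.event-threads 8
  gluster volume set vmssd server.event-threads 3
  gluster volume set vmssd performance.stat-prefetch on

You can confirm what took effect afterwards with "gluster volume info vmssd".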
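
For the raw-vs-gluster comparison, even a couple of dd runs tell you a lot. Something like the following, run first against a scratch directory on one of the SSDs (not inside a brick directory) and then against the fuse mount of the volume; the paths here are just placeholders for whatever you have:

  # sequential write straight to the SSD-backed filesystem (placeholder path)
  dd if=/dev/zero of=/mnt/ssd0_scratch/ddtest bs=1M count=4096 oflag=direct
  # the same write through the gluster fuse mount (placeholder path)
  dd if=/dev/zero of=/mnt/vmssd/ddtest bs=1M count=4096 oflag=direct

Repeating those with a small block size (say bs=4k and a correspondingly larger count) gives you a feel for the latency side of it.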
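
If you do go the route of converting the arbiters to full replicas, it would look roughly like this. The new brick paths on ovirt2 are placeholders, and you want fresh (or wiped) directories rather than re-using the old arbiter bricks:

  # drop the two arbiter bricks, leaving a 2 x 2 distributed-replicate volume
  gluster volume remove-brick vmssd replica 2 \
      ovirt2:/gluster/ssd0_vmssd/brick ovirt2:/gluster/ssd1_vmssd/brick force
  # add full-sized bricks on ovirt2 to get back to 2 x 3 (placeholder paths)
  gluster volume add-brick vmssd replica 3 \
      ovirt2:/gluster/ssd0_vmssd_full/brick ovirt2:/gluster/ssd1_vmssd_full/brick
  # let self-heal copy the data onto the new bricks
  gluster volume heal vmssd full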

Good luck,

  -Darrell


On Jun 19, 2017, at 9:46 AM, Chris Boot <bootc@bootc.net> wrote:

> Hi folks,
>
> I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10
> configuration. My VMs run off a replica 3 arbiter 1 volume comprised of
> 6 bricks, which themselves live on two SSDs in each of the servers (one
> brick per SSD). The bricks are XFS on LVM thin volumes straight onto the
> SSDs. Connectivity is 10G Ethernet.
>
> Performance within the VMs is pretty terrible. I experience very low
> throughput and random IO is really bad: it feels like a latency issue.
> On my oVirt nodes the SSDs are not generally very busy. The 10G network
> seems to run without errors (iperf3 gives bandwidth measurements of >=
> 9.20 Gbits/sec between the three servers).
>
> To put this into perspective: I was getting better behaviour from NFS4
> on a gigabit connection than I am with GlusterFS on 10G: that doesn't
> feel right at all.
>
> My volume configuration looks like this:
>
> Volume Name: vmssd
> Type: Distributed-Replicate
> Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2 x (2 + 1) = 6
> Transport-type: tcp
> Bricks:
> Brick1: ovirt3:/gluster/ssd0_vmssd/brick
> Brick2: ovirt1:/gluster/ssd0_vmssd/brick
> Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
> Brick4: ovirt3:/gluster/ssd1_vmssd/brick
> Brick5: ovirt1:/gluster/ssd1_vmssd/brick
> Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet6
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> performance.low-prio-threads: 32
> network.remote-dio: off
> cluster.eager-lock: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-max-threads: 8
> cluster.shd-wait-qlength: 10000
> features.shard: on
> user.cifs: off
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard-block-size: 128MB
> performance.strict-o-direct: on
> network.ping-timeout: 30
> cluster.granular-entry-heal: enable
>
> I would really appreciate some guidance on this to try to improve things
> because at this rate I will need to reconsider using GlusterFS altogether.
>
> Cheers,
> Chris
>
> --
> Chris Boot
> bootc@bootc.net
> _______________________________________________
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users