
Hi folks,

I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10 configuration. My VMs run off a replica 3 arbiter 1 volume comprised of 6 bricks, which themselves live on two SSDs in each of the servers (one brick per SSD). The bricks are XFS on LVM thin volumes straight onto the SSDs. Connectivity is 10G Ethernet.

Performance within the VMs is pretty terrible. I experience very low throughput and random IO is really bad: it feels like a latency issue. On my oVirt nodes the SSDs are not generally very busy. The 10G network seems to run without errors (iperf3 gives bandwidth measurements of >= 9.20 Gbits/sec between the three servers).

To put this into perspective: I was getting better behaviour from NFS4 on a gigabit connection than I am with GlusterFS on 10G: that doesn't feel right at all.

My volume configuration looks like this:

Volume Name: vmssd
Type: Distributed-Replicate
Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: ovirt3:/gluster/ssd0_vmssd/brick
Brick2: ovirt1:/gluster/ssd0_vmssd/brick
Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
Brick4: ovirt3:/gluster/ssd1_vmssd/brick
Brick5: ovirt1:/gluster/ssd1_vmssd/brick
Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
Options Reconfigured:
nfs.disable: on
transport.address-family: inet6
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
features.shard-block-size: 128MB
performance.strict-o-direct: on
network.ping-timeout: 30
cluster.granular-entry-heal: enable

I would really appreciate some guidance on this to try to improve things because at this rate I will need to reconsider using GlusterFS altogether.

Cheers,
Chris

-- 
Chris Boot
bootc@bootc.net
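Since the symptoms point at per-operation latency rather than raw bandwidth, a queue-depth-1 direct-I/O test run inside one of the affected guests is one way to put a number on it. The following fio invocations are only an illustrative sketch; the scratch file path, size and runtime are assumptions, not details taken from this thread:

# Run inside an affected VM; the "clat" percentiles in the output show
# per-I/O completion latency, which bandwidth tests such as iperf3 cannot.
fio --name=lat-test --filename=/var/tmp/fio.test --size=1G \
    --ioengine=libaio --direct=1 --rw=randwrite --bs=4k --iodepth=1 \
    --runtime=60 --time_based --group_reporting

# The same file with large sequential writes and a deeper queue gives a
# throughput figure to compare against the raw SSD and network numbers.
fio --name=bw-test --filename=/var/tmp/fio.test --size=1G \
    --ioengine=libaio --direct=1 --rw=write --bs=1M --iodepth=16 \
    --runtime=60 --time_based --group_reporting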

Chris-

You probably need to head over to gluster-users@gluster.org for help with performance issues.

That said, what kind of performance are you getting, via some form of testing like bonnie++ or even dd runs? Raw brick vs. Gluster performance is useful to determine what kind of performance you're actually getting.

Beyond that, I'd recommend dropping the arbiter bricks and re-adding them as full replicas; they can't serve distributed data in this configuration and may be slowing things down on you. If you've got a storage network set up, make sure it's using the largest MTU it can, and consider adding/testing these settings that I use on my main storage volume:

performance.io-thread-count: 32
client.event-threads: 8
server.event-threads: 3
performance.stat-prefetch: on

Good luck,

 -Darrell
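To make these suggestions concrete, below is a hedged sketch of the checks Darrell describes: comparing a raw brick against the FUSE-mounted volume with dd, verifying the MTU on the storage network, and applying the proposed options with gluster volume set. The FUSE mount path, interface name and scratch file locations are assumptions for illustration, and the option values are Darrell's, not verified tuning advice:

# Sequential write on the brick filesystem (outside the brick data
# directory, so no stray files end up inside the brick itself).
dd if=/dev/zero of=/gluster/ssd0_vmssd/ddtest bs=1M count=1024 oflag=direct

# The same write through the Gluster FUSE mount used by oVirt
# (the mount path below is an assumption about the storage domain layout).
dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/ovirt1:_vmssd/ddtest bs=1M count=1024 oflag=direct

# Check the MTU on the storage interface and that jumbo frames actually
# pass end to end (interface name is an assumption; 8972 = 9000 - 28).
ip link show dev ens1f0 | grep -o 'mtu [0-9]*'
ping -M do -s 8972 -c 3 ovirt2

# Darrell's suggested options, applied to the vmssd volume.
gluster volume set vmssd performance.io-thread-count 32
gluster volume set vmssd client.event-threads 8
gluster volume set vmssd server.event-threads 3
gluster volume set vmssd performance.stat-prefetch on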

Hello,

Gluster performance is bad. That's why I asked for native qemu-libgfapi access to Gluster volumes for oVirt VMs, which I thought had been possible since 3.6.x. The documentation is misleading: even in 4.1.2, oVirt still uses FUSE to mount Gluster-based VM disks.

Bye
-- 

Ralf Schenk
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail rs@databay.de

Databay AG
Jens-Otto-Krag-Straße 11
D-52146 Würselen
www.databay.de

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm. Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen
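One way to confirm what Ralf describes on a running host is to look at how QEMU was handed the VM disk: with libgfapi the drive argument is a gluster:// URI, while with a FUSE mount it is a path under the glusterSD mount point. A minimal sketch of that check follows; the grep patterns and the /rhev/data-center mount layout are assumptions:

# FUSE-mounted Gluster storage domains show up with filesystem type
# fuse.glusterfs on the hypervisor.
mount | grep fuse.glusterfs

# Inspect the drive arguments of running VMs: file=gluster://... would
# indicate libgfapi, file=/rhev/data-center/mnt/glusterSD/... means FUSE.
pgrep -af qemu | grep -o 'file=[^, ]*' | sort -u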

On Mon, Jun 19, 2017 at 7:32 PM, Ralf Schenk <rs@databay.de> wrote:
Hello,
Gluster performance is bad. That's why I asked for native qemu-libgfapi access to Gluster volumes for oVirt VMs, which I thought had been possible since 3.6.x. The documentation is misleading: even in 4.1.2, oVirt still uses FUSE to mount Gluster-based VM disks.
Can you please open a bug to fix the documentation? We are working on libgfapi, but it's indeed not in yet. Y.

So oVirt accesses Gluster via FUSE? I thought it was using libgfapi.

When can we expect it to work with libgfapi?

And what about the changelog of 4.1.3? BZ 1022961 "Gluster: running a VM from a gluster domain should use gluster URI instead of a fuse mount"

-- 

Respectfully,
Mahdi A. Mahdi

Hi All,

it's the same for me. I've updated all my hosts to the latest release and thought it would now use libgfapi, since BZ 1022961 <https://bugzilla.redhat.com/1022961> is listed in the release notes under enhancements. Are there any steps that need to be taken after upgrading for this to work?

Thank you,
Sven

From: users-bounces@ovirt.org [mailto:users-bounces@ovirt.org] on behalf of Mahdi Adnan
Sent: Saturday, 8 July 2017 09:35
To: Ralf Schenk <rs@databay.de>; users@ovirt.org; ykaul@redhat.com
Subject: Re: [ovirt-users] Very poor GlusterFS performance
To put this into perspective: I was getting better behaviour from NFS4 on a gigabit connection than I am with GlusterFS on 10G: that doesn't feel right at all. My volume configuration looks like this: Volume Name: vmssd Type: Distributed-Replicate Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853 Status: Started Snapshot Count: 0 Number of Bricks: 2 x (2 + 1) =3D 6 Transport-type: tcp Bricks: Brick1: ovirt3:/gluster/ssd0_vmssd/brick Brick2: ovirt1:/gluster/ssd0_vmssd/brick Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter) Brick4: ovirt3:/gluster/ssd1_vmssd/brick Brick5: ovirt1:/gluster/ssd1_vmssd/brick Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter) Options Reconfigured: nfs.disable: on transport.address-family: inet6 performance.quick-read: off performance.read-ahead: off performance.io<http://performance.io>-cache: off performance.stat-prefetch: off performance.low-prio-threads: 32 network.remote-dio: off cluster.eager-lock: enable cluster.quorum-type: auto cluster.server-quorum-type: server cluster.data-self-heal-algorithm: full cluster.locking-scheme: granular cluster.shd-max-threads: 8 cluster.shd-wait-qlength: 10000 features.shard: on user.cifs: off storage.owner-uid: 36 storage.owner-gid: 36 features.shard-block-size: 128MB performance.strict-o-direct: on network.ping-timeout: 30 cluster.granular-entry-heal: enable I would really appreciate some guidance on this to try to improve things because at this rate I will need to reconsider using GlusterFS altogether. Cheers, Chris -- Chris Boot bootc@bootc.net<mailto:bootc@bootc.net> _______________________________________________ Users mailing list Users@ovirt.org<mailto:Users@ovirt.org> http://lists.ovirt.org/mailman/listinfo/users _______________________________________________ Users mailing list Users@ovirt.org<mailto:Users@ovirt.org> http://lists.ovirt.org/mailman/listinfo/users -- [cid:image001.gif@01D2F8B7.C9A43050] Ralf Schenk fon +49 (0) 24 05 / 40 83 70 fax +49 (0) 24 05 / 40 83 759 mail rs@databay.de<mailto:rs@databay.de> Databay AG Jens-Otto-Krag-Stra=DFe 11 D-52146 W=FCrselen www.databay.de<http://www.databay.de> Sitz/Amtsgericht Aachen * HRB:8437 * USt-IdNr.: DE 210844202 Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm. 
Phi= lipp Hermanns Aufsichtsratsvorsitzender: Wilhelm Dohmen ________________________________ --_000_BFAB40933B3367488CE6299BAF8592D1014E52E492F3SOCRATESasl_ Content-Type: text/html; charset="iso-8859-1" Content-Transfer-Encoding: quoted-printable <html xmlns:v=3D"urn:schemas-microsoft-com:vml" xmlns:o=3D"urn:schemas-micr= osoft-com:office:office" xmlns:w=3D"urn:schemas-microsoft-com:office:word" = xmlns:m=3D"http://schemas.microsoft.com/office/2004/12/omml" xmlns=3D"http:= //www.w3.org/TR/REC-html40"><head><meta http-equiv=3DContent-Type content= =3D"text/html; charset=3Diso-8859-1"><meta name=3DGenerator content=3D"Micr= osoft Word 15 (filtered medium)"><!--[if !mso]><style>v\:* {behavior:url(#d= efault#VML);} o\:* {behavior:url(#default#VML);} w\:* {behavior:url(#default#VML);} .shape {behavior:url(#default#VML);} </style><![endif]--><style><!-- /* Font Definitions */ @font-face {font-family:Helvetica; panose-1:2 11 6 4 2 2 2 2 2 4;} @font-face {font-family:"Cambria Math"; panose-1:2 4 5 3 5 4 6 3 2 4;} @font-face {font-family:Calibri; panose-1:2 15 5 2 2 2 4 3 2 4;} @font-face {font-family:Consolas; panose-1:2 11 6 9 2 2 4 3 2 4;} @font-face {font-family:Verdana; panose-1:2 11 6 4 3 5 4 4 2 4;} /* Style Definitions */ p.MsoNormal, li.MsoNormal, div.MsoNormal {margin:0cm; margin-bottom:.0001pt; font-size:12.0pt; font-family:"Times New Roman",serif; color:black;} a:link, span.MsoHyperlink {mso-style-priority:99; color:blue; text-decoration:underline;} a:visited, span.MsoHyperlinkFollowed {mso-style-priority:99; color:purple; text-decoration:underline;} p {mso-style-priority:99; margin:0cm; margin-bottom:.0001pt; font-size:12.0pt; font-family:"Times New Roman",serif; color:black;} pre {mso-style-priority:99; mso-style-link:"HTML Vorformatiert Zchn"; margin:0cm; margin-bottom:.0001pt; font-size:10.0pt; font-family:"Courier New"; color:black;} span.HTMLVorformatiertZchn {mso-style-name:"HTML Vorformatiert Zchn"; mso-style-priority:99; mso-style-link:"HTML Vorformatiert"; font-family:"Consolas",serif; color:black;} span.E-MailFormatvorlage20 {mso-style-type:personal-reply; font-family:"Calibri",sans-serif; color:#1F497D;} .MsoChpDefault {mso-style-type:export-only; font-size:10.0pt;} @page WordSection1 {size:612.0pt 792.0pt; margin:70.85pt 70.85pt 2.0cm 70.85pt;} div.WordSection1 {page:WordSection1;} --></style><!--[if gte mso 9]><xml> <o:shapedefaults v:ext=3D"edit" spidmax=3D"1026" /> </xml><![endif]--><!--[if gte mso 9]><xml> <o:shapelayout v:ext=3D"edit"> <o:idmap v:ext=3D"edit" data=3D"1" /> </o:shapelayout></xml><![endif]--></head><body bgcolor=3Dwhite lang=3DDE li= nk=3Dblue vlink=3Dpurple><div class=3DWordSection1><p class=3DMsoNormal><sp= an style=3D'font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D= ;mso-fareast-language:EN-US'>Hi All, <o:p></o:p></span></p><p class=3DMsoNo= rmal><span style=3D'font-size:11.0pt;font-family:"Calibri",sans-serif;color= :#1F497D;mso-fareast-language:EN-US'><o:p> </o:p></span></p><p class= =3DMsoNormal><span lang=3DEN-US style=3D'font-size:11.0pt;font-family:"Cali= bri",sans-serif;color:#1F497D;mso-fareast-language:EN-US'>it’s the sa= me for me. I’ve update all my hosts to the latest release and thought= it would now use libgfapi since <a href=3D"https://bugzilla.redhat.com/102= 2961"><span style=3D'color:#1F497D;text-decoration:none'>BZ 1022961</span><= /a> is listed in the release notes under enhancements. =A0Are there any ste= ps that need to be taken after upgrading for this to work ? 
<o:p></o:p></sp= an></p><p class=3DMsoNormal><span lang=3DEN-US style=3D'font-size:11.0pt;fo= nt-family:"Calibri",sans-serif;color:#1F497D;mso-fareast-language:EN-US'><o= :p> </o:p></span></p><p class=3DMsoNormal><span lang=3DEN-US style=3D'= font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D;mso-fareast= -language:EN-US'>Thank you, <o:p></o:p></span></p><p class=3DMsoNormal><spa= n lang=3DEN-US style=3D'font-size:11.0pt;font-family:"Calibri",sans-serif;c= olor:#1F497D;mso-fareast-language:EN-US'>Sven <o:p></o:p></span></p><p clas= s=3DMsoNormal><a name=3D"_MailEndCompose"><span lang=3DEN-US style=3D'font-= size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D;mso-fareast-lang= uage:EN-US'><o:p> </o:p></span></a></p><div><div style=3D'border:none;= border-top:solid #E1E1E1 1.0pt;padding:3.0pt 0cm 0cm 0cm'><p class=3DMsoNor= mal><b><span style=3D'font-size:11.0pt;font-family:"Calibri",sans-serif;col= or:windowtext'>Von:</span></b><span style=3D'font-size:11.0pt;font-family:"= Calibri",sans-serif;color:windowtext'> users-bounces@ovirt.org [mailto:user= s-bounces@ovirt.org] <b>Im Auftrag von </b>Mahdi Adnan<br><b>Gesendet:</b> = Samstag, 8. Juli 2017 09:35<br><b>An:</b> Ralf Schenk <rs@databay.de>= ; users@ovirt.org; ykaul@redhat.com<br><b>Betreff:</b> Re: [ovirt-users] Ve= ry poor GlusterFS performance<o:p></o:p></span></p></div></div><p class=3DM= soNormal><o:p> </o:p></p><div id=3Ddivtagdefaultwrapper><p><span style= =3D'font-family:"Calibri",sans-serif'>So ovirt access gluster vai FUSE ? i = thought its using libgfapi.<o:p></o:p></span></p><p><span style=3D'font-fam= ily:"Calibri",sans-serif'>When can we expect it to work with libgfapi ?&nbs= p;<o:p></o:p></span></p><p><span style=3D'font-family:"Calibri",sans-serif'= p;</o:p></p><div><p class=3DMsoNormal>Am 19.06.2017 um 17:23 schrieb Darrel= l Budic:<o:p></o:p></p></div><blockquote style=3D'margin-top:5.0pt;margin-b= ottom:5.0pt'><p class=3DMsoNormal>Chris- <o:p></o:p></p><div><p class=3DMso= Normal><o:p> </o:p></p></div><div><p class=3DMsoNormal>You probably ne= ed to head over to <a href=3D"mailto:gluster-users@gluster.org">gluster-use= rs@gluster.org</a> for help with performance issues.<o:p></o:p></p></d= iv><div><p class=3DMsoNormal><o:p> </o:p></p></div><div><p class=3DMso= Normal>That said, what kind of performance are you getting, via some form o= r testing like bonnie++ or even dd runs? Raw bricks vs gluster performance = is useful to determine what kind of performance you’re actually getti= ng.<o:p></o:p></p></div><div><p class=3DMsoNormal><o:p> </o:p></p></di= v><div><p class=3DMsoNormal>Beyond that, I’d recommend dropping the a= rbiter bricks and re-adding them as full replicas, they can’t serve d= istributed data in this configuration and may be slowing things down on you= . 
If you’ve got a storage network setup, make sure it’s using t= he largest MTU it can, and consider adding/testing these settings that I us= e on my main storage volume:<o:p></o:p></p></div><div><p class=3DMsoNormal>= <o:p> </o:p></p></div><div><div><p class=3DMsoNormal><a href=3D"http:/= /performance.io">performance.io</a>-thread-count: 32<o:p></o:p></p></div><d= iv><p class=3DMsoNormal>client.event-threads: 8<o:p></o:p></p></div><div><p= class=3DMsoNormal>server.event-threads: 3<o:p></o:p></p></div><div><p clas= s=3DMsoNormal>performance.stat-prefetch: on<o:p></o:p></p></div></div><div>= <p class=3DMsoNormal><br><br><o:p></o:p></p></div><div><p class=3DMsoNormal= p></p></div><div><p class=3DMsoNormal> -Darrell<o:p></o:p></p></div><= div><p class=3DMsoNormal><o:p> </o:p></p></div><div><p class=3DMsoNorm= al><o:p> </o:p></p><div><blockquote style=3D'margin-top:5.0pt;margin-b= ottom:5.0pt'><div><p class=3DMsoNormal>On Jun 19, 2017, at 9:46 AM, Chris B= oot <<a href=3D"mailto:bootc@bootc.net">bootc@bootc.net</a>> wrote:<o= :p></o:p></p></div><p class=3DMsoNormal><o:p> </o:p></p><div><div><p c= lass=3DMsoNormal>Hi folks,<br><br>I have 3x servers in a "hyper-conver= ged" oVirt 4.1.2 + GlusterFS 3.10<br>configuration. My VMs run off a r= eplica 3 arbiter 1 volume comprised of<br>6 bricks, which themselves live o= n two SSDs in each of the servers (one<br>brick per SSD). The bricks are XF= S on LVM thin volumes straight onto the<br>SSDs. Connectivity is 10G Ethern= et.<br><br>Performance within the VMs is pretty terrible. I experience very= low<br>throughput and random IO is really bad: it feels like a latency iss= ue.<br>On my oVirt nodes the SSDs are not generally very busy. The 10G netw= ork<br>seems to run without errors (iperf3 gives bandwidth measurements of = >=3D<br>9.20 Gbits/sec between the three servers).<br><br>To put this in= to perspective: I was getting better behaviour from NFS4<br>on a gigabit co= nnection than I am with GlusterFS on 10G: that doesn't<br>feel right at all= .<br><br>My volume configuration looks like this:<br><br>Volume Name: vmssd= <br>Type: Distributed-Replicate<br>Volume ID: d5a5ddd1-a140-4e0d-b514-701cf= e464853<br>Status: Started<br>Snapshot Count: 0<br>Number of Bricks: 2 x (2= + 1) =3D 6<br>Transport-type: tcp<br>Bricks:<br>Brick1: ovirt3:/gluster/ss= d0_vmssd/brick<br>Brick2: ovirt1:/gluster/ssd0_vmssd/brick<br>Brick3: ovirt= 2:/gluster/ssd0_vmssd/brick (arbiter)<br>Brick4: ovirt3:/gluster/ssd1_vmssd= /brick<br>Brick5: ovirt1:/gluster/ssd1_vmssd/brick<br>Brick6: ovirt2:/glust= er/ssd1_vmssd/brick (arbiter)<br>Options Reconfigured:<br>nfs.disable: on<b= r>transport.address-family: inet6<br>performance.quick-read: off<br>perform= ance.read-ahead: off<br><a href=3D"http://performance.io">performance.io</a= html>= --_000_BFAB40933B3367488CE6299BAF8592D1014E52E492F3SOCRATESasl_-- --_004_BFAB40933B3367488CE6299BAF8592D1014E52E492F3SOCRATESasl_ Content-Type: image/gif; name="image001.gif" Content-Description: image001.gif Content-Disposition: inline; filename="image001.gif"; size=1250; creation-date="Sun, 09 Jul 2017 06:34:25 GMT"; modification-date="Sun, 09 Jul 2017 06:34:25 GMT" Content-ID: <image001.gif@01D2F8B7.C9A43050> Content-Transfer-Encoding: base64 R0lGODlhlwAeAMQAAObm5v9QVf/R0oKBgfDw8NfX105MTLi3t/r6+sfHx/+rrf98gC0sLP8LEhIQ EKalpf/g4ZmYmHd2dmppaf8uNP/y8v8cIv+Ym//AwkE/P46NjRwbG11cXP8ABwUDA////yH5BAAA AAAALAAAAACXAB4AAAX/4CeOYnUJZKqubOu+cCzPNA0tVnfVfO//wGAKk+t0Ap+KQMFUYCDCqHRK JVUWDaPRUsFktZ1G4AKtms9o1gKsFVS+7I5ll67bpd647hPQawNld4KDMQJFbA07F35aFBiEkJEp 
fXEBjx8KjI0Vkp2DEIdaCySgFBShbEgrCQOtrq+uEQcALQewrQUjEbe8rgkkD7y5KhMZB3drqSoV FQhdlHGXKQYe1dbX2BvHKwzY1RMiAN7j1xEjBeTmKeIeD3cYCxRfFigvChRxFJwkBBvk5A7cpZhA jgGCDwn+kfslgto4CSoSehh2BwEEBQvowDAUR0EKdArHZTg44oDCXBFC/3qj9SEluZEpHnjYQFIG gpo1KgSasYjNKBImrzF4NaFbNgIjCGRQeIyVKwneOLzScLCAg38OWI4Y4GECgQcSOEwYcADnh6/F NjAwoGFYAQ0atI4AAFeEFwsLFLiJUQEfGH0kNGADx8+oNQdIRQg+7NCaOhIgD8sVgYADNsPVGI5Y WjRqzQTdHDDIYHRDLokaUhCglkFEJi0NKJhl0RP2TsvXUg88KiLBVWsZrF6DmMKlNYMqglqTik1g uN8OBgAgkGCpB+L9ugK4iSCBvwEfECw1kILrBpa1jVCQIQBRvbP+rlEcQVAoSevWyv6uhpwE12uE kQAAZucpVw1xIsjkgf8B863mQVYteQATCZYJZJ5WBfij2wfpHcEeHGG8Z+BMszVWDXkfKLhceJhB SAJ+1ThH32AfRFZNayNAtUFiwFSTSwEHJIYAAQU84IADwyjIEALU9MchG+vFgIF7W2GDI2T7HfjB gNcgKQKMHmwjgnCSpeCbULRkdxhF1CDY40RjgmUAA/v1J5FAKW2gGSZscBFDMraNgJs1AYpAAGYP 5jJoNQ4Y4Gh8jpFgHH9mgbmWo1l6oA4C3Ygp6UwEIFBfNRtkMIBlKMLnAXgAXLWhXXH85EIFqMhG GZgDEKArABGAed0HI4bk5qgnprCYSt88B6dqS0FEEAMPJDCdCJYViur/B1BlwGMJqDTwnhqxJgUp o0ceOQ4D0yEakpMm/jqCRMgWm2I1j824Y6vLvuuPjHnqOJkIgP6xzwp5sCFNsCFp88Gxh11lrjfD cNrcCEx64/CD3iAHlQcMUEQXvcA+qBkBB4Q2X1CusjBlJdKMYAKI6g28MbKN5hJsBAXknHOwutn4 oFYqkpqAzjnPbE0u1PxmwAQGXLWBbvhuIIEGEnRjlAHO4SvhbCNAkwoGzEBwgV9U0lfu2WiXOkDE GaCdKgl0nk2YkWdPOCDabvaGdkAftL1LlgwCM+7Tq11V71IO7LkM2XE0YAHMYMhqqK6UV165CpaH ukLmiXFO8XSVzzakX+UH6TrmAajPNxfqByTQec41AeBPvSwIALkmAnuiexCsca3CBajgfsROuxcP A8kHQJX4DAIwjnsAvhsvfXHWKEwDAljg7sj03L9wwAQTxOWD2AE0YP75eCkwcPfs+xACADs= --_004_BFAB40933B3367488CE6299BAF8592D1014E52E492F3SOCRATESasl_--

Hi Sven,

libgfapi is not fully operational yet. There's some additional work which just got merged [1] in order to enable it. Hopefully it'll be included in one of the next releases.

Doron

[1] https://gerrit.ovirt.org/#/c/78938/

On 9 July 2017 at 14:34, Sven Achtelik <Sven.Achtelik@eps.aero> wrote:
Hi All,
It’s the same for me. I’ve updated all my hosts to the latest release and thought it would now use libgfapi since BZ 1022961 <https://bugzilla.redhat.com/1022961> is listed in the release notes under enhancements. Are there any steps that need to be taken after upgrading for this to work?
Thank you,
Sven
*From:* users-bounces@ovirt.org [mailto:users-bounces@ovirt.org] *On behalf of* Mahdi Adnan *Sent:* Saturday, 8 July 2017 09:35 *To:* Ralf Schenk <rs@databay.de>; users@ovirt.org; ykaul@redhat.com *Subject:* Re: [ovirt-users] Very poor GlusterFS performance
So oVirt accesses gluster via FUSE? I thought it was using libgfapi.

When can we expect it to work with libgfapi?

And what about the changelog of 4.1.3?

BZ 1022961: "Gluster: running a VM from a gluster domain should use gluster URI instead of a fuse mount"
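For what it's worth, one way to see which access path a given running VM is actually using is to look at how qemu was started on the host. This is only a rough sketch and assumes a standard libvirt/qemu oVirt node; <vm-name> is a placeholder:

# ps -ef | grep [q]emu | grep -o 'file=[^,]*'
# virsh -r dumpxml <vm-name> | grep -A 3 '<disk'
# grep fuse.glusterfs /proc/mounts

With the FUSE data path the disk shows up as a plain file path on the mounted volume (on oVirt hosts typically under /rhev/data-center/mnt/glusterSD/...), whereas a libgfapi disk should appear as a network-type disk with a gluster protocol source rather than a local file.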
--
Respectfully *Mahdi A. Mahdi* ------------------------------
*From:* users-bounces@ovirt.org <users-bounces@ovirt.org> on behalf of Ralf Schenk <rs@databay.de> *Sent:* Monday, June 19, 2017 7:32:45 PM *To:* users@ovirt.org *Subject:* Re: [ovirt-users] Very poor GlusterFS performance
Hello,
Gluster performance is bad. That's why I asked for native qemu-libgfapi access to gluster volumes for oVirt VMs, which I thought had been possible since 3.6.x. The documentation is misleading: even in 4.1.2, oVirt is still using FUSE to mount gluster-based VM disks.
Bye
On 19.06.2017 at 17:23, Darrell Budic wrote:
Chris-
You probably need to head over to gluster-users@gluster.org for help with performance issues.
That said, what kind of performance are you getting, via some form or testing like bonnie++ or even dd runs? Raw bricks vs gluster performance is useful to determine what kind of performance you’re actually getting.
Beyond that, I’d recommend dropping the arbiter bricks and re-adding them as full replicas, they can’t serve distributed data in this configuration and may be slowing things down on you. If you’ve got a storage network setup, make sure it’s using the largest MTU it can, and consider adding/testing these settings that I use on my main storage volume:
performance.io-thread-count: 32
client.event-threads: 8
server.event-threads: 3
performance.stat-prefetch: on
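As a rough way to check the MTU point above, the following sketch (with <storage-nic> as a placeholder interface name and ovirt2 standing in for a peer on the storage network) confirms the configured MTU and that jumbo frames actually pass end to end; 8972 is 9000 bytes minus the 28 bytes of IP/ICMP headers:

# ip link show <storage-nic> | grep -o 'mtu [0-9]*'
# ping -M do -s 8972 -c 3 ovirt2

If the ping reports that the message is too long or needs to be fragmented, some hop on the storage path is not really running at MTU 9000.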
Good luck,
-Darrell
On Jun 19, 2017, at 9:46 AM, Chris Boot <bootc@bootc.net> wrote:
Hi folks,
I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10 configuration. My VMs run off a replica 3 arbiter 1 volume comprised of 6 bricks, which themselves live on two SSDs in each of the servers (one brick per SSD). The bricks are XFS on LVM thin volumes straight onto the SSDs. Connectivity is 10G Ethernet.
Performance within the VMs is pretty terrible. I experience very low throughput and random IO is really bad: it feels like a latency issue. On my oVirt nodes the SSDs are not generally very busy. The 10G network seems to run without errors (iperf3 gives bandwidth measurements of >= 9.20 Gbits/sec between the three servers).
To put this into perspective: I was getting better behaviour from NFS4 on a gigabit connection than I am with GlusterFS on 10G: that doesn't feel right at all.
My volume configuration looks like this:
Volume Name: vmssd Type: Distributed-Replicate Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853 Status: Started Snapshot Count: 0 Number of Bricks: 2 x (2 + 1) = 6 Transport-type: tcp Bricks: Brick1: ovirt3:/gluster/ssd0_vmssd/brick Brick2: ovirt1:/gluster/ssd0_vmssd/brick Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter) Brick4: ovirt3:/gluster/ssd1_vmssd/brick Brick5: ovirt1:/gluster/ssd1_vmssd/brick Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter) Options Reconfigured: nfs.disable: on transport.address-family: inet6 performance.quick-read: off performance.read-ahead: off performance.io-cache: off performance.stat-prefetch: off performance.low-prio-threads: 32 network.remote-dio: off cluster.eager-lock: enable cluster.quorum-type: auto cluster.server-quorum-type: server cluster.data-self-heal-algorithm: full cluster.locking-scheme: granular cluster.shd-max-threads: 8 cluster.shd-wait-qlength: 10000 features.shard: on user.cifs: off storage.owner-uid: 36 storage.owner-gid: 36 features.shard-block-size: 128MB performance.strict-o-direct: on network.ping-timeout: 30 cluster.granular-entry-heal: enable
I would really appreciate some guidance on this to try to improve things because at this rate I will need to reconsider using GlusterFS altogether.
Cheers, Chris
-- Chris Boot bootc@bootc.net _______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
--
*Ralf Schenk* fon +49 (0) 24 05 / 40 83 70 fax +49 (0) 24 05 / 40 83 759 mail *rs@databay.de* <rs@databay.de>
*Databay AG* Jens-Otto-Krag-Straße 11 D-52146 Würselen *www.databay.de* <http://www.databay.de>
Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202 Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm. Philipp Hermanns Aufsichtsratsvorsitzender: Wilhelm Dohmen ------------------------------
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

Hi,

Can you put some numbers? What tests are you doing?

I'm running oVirt with Gluster without performance issues, but I'm running replica 2, all SSDs.

Gluster logs might help too.

--
Respectfully
Mahdi A. Mahdi
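Numbers of the kind being asked for can be gathered with something like the following minimal sketch (paths are placeholders; the same commands can be run inside a VM and directly on a brick filesystem to compare, and the test file should be removed afterwards):

# dd if=/dev/zero of=/path/to/testfile bs=1M count=4096 oflag=direct conv=fsync
# fio --name=randrw --filename=/path/to/testfile --rw=randrw --bs=4k --size=1g --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based

The dd run gives a rough sequential write figure, while the fio run reports 4k random IOPS and latency, which is where the latency suspicion in the original post would show up.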

[Adding gluster-users]

On Mon, Jun 19, 2017 at 8:16 PM, Chris Boot <bootc@bootc.net> wrote:
Hi folks,
I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10 configuration. My VMs run off a replica 3 arbiter 1 volume comprised of 6 bricks, which themselves live on two SSDs in each of the servers (one brick per SSD). The bricks are XFS on LVM thin volumes straight onto the SSDs. Connectivity is 10G Ethernet.
Performance within the VMs is pretty terrible. I experience very low throughput and random IO is really bad: it feels like a latency issue. On my oVirt nodes the SSDs are not generally very busy. The 10G network seems to run without errors (iperf3 gives bandwidth measurements of >= 9.20 Gbits/sec between the three servers).
To put this into perspective: I was getting better behaviour from NFS4 on a gigabit connection than I am with GlusterFS on 10G: that doesn't feel right at all.
My volume configuration looks like this:
Volume Name: vmssd Type: Distributed-Replicate Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853 Status: Started Snapshot Count: 0 Number of Bricks: 2 x (2 + 1) = 6 Transport-type: tcp Bricks: Brick1: ovirt3:/gluster/ssd0_vmssd/brick Brick2: ovirt1:/gluster/ssd0_vmssd/brick Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter) Brick4: ovirt3:/gluster/ssd1_vmssd/brick Brick5: ovirt1:/gluster/ssd1_vmssd/brick Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter) Options Reconfigured: nfs.disable: on transport.address-family: inet6 performance.quick-read: off performance.read-ahead: off performance.io-cache: off performance.stat-prefetch: off performance.low-prio-threads: 32 network.remote-dio: off cluster.eager-lock: enable cluster.quorum-type: auto cluster.server-quorum-type: server cluster.data-self-heal-algorithm: full cluster.locking-scheme: granular cluster.shd-max-threads: 8 cluster.shd-wait-qlength: 10000 features.shard: on user.cifs: off storage.owner-uid: 36 storage.owner-gid: 36 features.shard-block-size: 128MB performance.strict-o-direct: on network.ping-timeout: 30 cluster.granular-entry-heal: enable
I would really appreciate some guidance on this to try to improve things because at this rate I will need to reconsider using GlusterFS altogether.
Could you provide the gluster volume profile output while you're running your I/O tests.

# gluster volume profile <volname> start
to start profiling

# gluster volume profile <volname> info
for the profile output.
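A full profiling pass, using vmssd as the volume name, would look roughly like this; the info output lists per-brick FOP counts and latencies, which is what is being asked for:

# gluster volume profile vmssd start
(run the I/O test from inside a VM while profiling is on)
# gluster volume profile vmssd info > vmssd-profile.txt
# gluster volume profile vmssd stop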
Cheers, Chris
-- Chris Boot bootc@bootc.net _______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

Have you tried with:

performance.strict-o-direct : off
performance.strict-write-ordering : off

They can be changed dynamically.

On 20 June 2017 at 17:21, Sahina Bose <sabose@redhat.com> wrote:
[Adding gluster-users]
On Mon, Jun 19, 2017 at 8:16 PM, Chris Boot <bootc@bootc.net> wrote:
Hi folks,
I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10 configuration. My VMs run off a replica 3 arbiter 1 volume comprised of 6 bricks, which themselves live on two SSDs in each of the servers (one brick per SSD). The bricks are XFS on LVM thin volumes straight onto the SSDs. Connectivity is 10G Ethernet.
Performance within the VMs is pretty terrible. I experience very low throughput and random IO is really bad: it feels like a latency issue. On my oVirt nodes the SSDs are not generally very busy. The 10G network seems to run without errors (iperf3 gives bandwidth measurements of >= 9.20 Gbits/sec between the three servers).
To put this into perspective: I was getting better behaviour from NFS4 on a gigabit connection than I am with GlusterFS on 10G: that doesn't feel right at all.
My volume configuration looks like this:
Volume Name: vmssd Type: Distributed-Replicate Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853 Status: Started Snapshot Count: 0 Number of Bricks: 2 x (2 + 1) = 6 Transport-type: tcp Bricks: Brick1: ovirt3:/gluster/ssd0_vmssd/brick Brick2: ovirt1:/gluster/ssd0_vmssd/brick Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter) Brick4: ovirt3:/gluster/ssd1_vmssd/brick Brick5: ovirt1:/gluster/ssd1_vmssd/brick Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter) Options Reconfigured: nfs.disable: on transport.address-family: inet6 performance.quick-read: off performance.read-ahead: off performance.io-cache: off performance.stat-prefetch: off performance.low-prio-threads: 32 network.remote-dio: off cluster.eager-lock: enable cluster.quorum-type: auto cluster.server-quorum-type: server cluster.data-self-heal-algorithm: full cluster.locking-scheme: granular cluster.shd-max-threads: 8 cluster.shd-wait-qlength: 10000 features.shard: on user.cifs: off storage.owner-uid: 36 storage.owner-gid: 36 features.shard-block-size: 128MB performance.strict-o-direct: on network.ping-timeout: 30 cluster.granular-entry-heal: enable
I would really appreciate some guidance on this to try to improve things because at this rate I will need to reconsider using GlusterFS altogether.
Could you provide the gluster volume profile output while you're running your I/O tests.
# gluster volume profile <volname> start to start profiling
# gluster volume profile <volname> info
for the profile output.
Cheers, Chris
-- Chris Boot bootc@bootc.net _______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
_______________________________________________ Gluster-users mailing list Gluster-users@gluster.org http://lists.gluster.org/mailman/listinfo/gluster-users
-- Lindsay

Couple of things:

1. Like Darrell suggested, you should enable stat-prefetch and increase client and server event threads to 4.
# gluster volume set <VOL> performance.stat-prefetch on
# gluster volume set <VOL> client.event-threads 4
# gluster volume set <VOL> server.event-threads 4

2. Also glusterfs-3.10.1 and above has a shard performance bug fix - https://review.gluster.org/#/c/16966/

With these two changes, we saw great improvement in performance in our internal testing.

Do you mind trying these two options above?

-Krutika

On Tue, Jun 20, 2017 at 1:00 PM, Lindsay Mathieson <lindsay.mathieson@gmail.com> wrote:
Have you tried with:
performance.strict-o-direct : off
performance.strict-write-ordering : off
They can be changed dynamically.
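Spelled out as commands (using vmssd as the volume name), and noting that both can be put back to their defaults with a volume reset if they make no difference:

# gluster volume set vmssd performance.strict-o-direct off
# gluster volume set vmssd performance.strict-write-ordering off
# gluster volume reset vmssd performance.strict-o-direct
# gluster volume reset vmssd performance.strict-write-ordering

The last two lines simply restore the defaults.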
On 20 June 2017 at 17:21, Sahina Bose <sabose@redhat.com> wrote:
[Adding gluster-users]
On Mon, Jun 19, 2017 at 8:16 PM, Chris Boot <bootc@bootc.net> wrote:
Hi folks,
I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10 configuration. My VMs run off a replica 3 arbiter 1 volume comprised of 6 bricks, which themselves live on two SSDs in each of the servers (one brick per SSD). The bricks are XFS on LVM thin volumes straight onto the SSDs. Connectivity is 10G Ethernet.
Performance within the VMs is pretty terrible. I experience very low throughput and random IO is really bad: it feels like a latency issue. On my oVirt nodes the SSDs are not generally very busy. The 10G network seems to run without errors (iperf3 gives bandwidth measurements of >= 9.20 Gbits/sec between the three servers).
To put this into perspective: I was getting better behaviour from NFS4 on a gigabit connection than I am with GlusterFS on 10G: that doesn't feel right at all.
My volume configuration looks like this:
Volume Name: vmssd Type: Distributed-Replicate Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853 Status: Started Snapshot Count: 0 Number of Bricks: 2 x (2 + 1) = 6 Transport-type: tcp Bricks: Brick1: ovirt3:/gluster/ssd0_vmssd/brick Brick2: ovirt1:/gluster/ssd0_vmssd/brick Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter) Brick4: ovirt3:/gluster/ssd1_vmssd/brick Brick5: ovirt1:/gluster/ssd1_vmssd/brick Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter) Options Reconfigured: nfs.disable: on transport.address-family: inet6 performance.quick-read: off performance.read-ahead: off performance.io-cache: off performance.stat-prefetch: off performance.low-prio-threads: 32 network.remote-dio: off cluster.eager-lock: enable cluster.quorum-type: auto cluster.server-quorum-type: server cluster.data-self-heal-algorithm: full cluster.locking-scheme: granular cluster.shd-max-threads: 8 cluster.shd-wait-qlength: 10000 features.shard: on user.cifs: off storage.owner-uid: 36 storage.owner-gid: 36 features.shard-block-size: 128MB performance.strict-o-direct: on network.ping-timeout: 30 cluster.granular-entry-heal: enable
I would really appreciate some guidance on this to try to improve things because at this rate I will need to reconsider using GlusterFS altogether.
Could you provide the gluster volume profile output while you're running your I/O tests.
# gluster volume profile <volname> start to start profiling
# gluster volume profile <volname> info
for the profile output.
Cheers, Chris
-- Chris Boot bootc@bootc.net _______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
_______________________________________________ Gluster-users mailing list Gluster-users@gluster.org http://lists.gluster.org/mailman/listinfo/gluster-users
-- Lindsay
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users


No. It's just that in the internal testing that was done here, increasing the thread count beyond 4 did not improve the performance any further.

-Krutika

On Tue, Jun 20, 2017 at 11:30 PM, mabi <mabi@protonmail.ch> wrote:
Dear Krutika,
Sorry for asking so naively, but can you tell me on what basis you recommend that the client and server event-threads parameters for a volume should be set to 4?
Is this metric for example based on the number of cores a GlusterFS server has?
I am asking because I saw my GlusterFS volumes are set to 2 and would like to set these parameters to something meaningful for performance tuning. My setup is a two node replica with GlusterFS 3.8.11.
Best regards, M.
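One way to sanity-check this on a node before changing anything is to compare the core count with the values currently in effect; a sketch, with <VOL> as a placeholder (gluster volume get should be available on 3.8.x):

# nproc
# gluster volume get <VOL> client.event-threads
# gluster volume get <VOL> server.event-threads

nproc prints the number of CPU cores the node sees, and the two volume get calls show the effective thread counts, including defaults that were never set explicitly.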
-------- Original Message --------
Subject: Re: [Gluster-users] [ovirt-users] Very poor GlusterFS performance
Local Time: June 20, 2017 12:23 PM
UTC Time: June 20, 2017 10:23 AM
From: kdhananj@redhat.com
To: Lindsay Mathieson <lindsay.mathieson@gmail.com>
gluster-users <gluster-users@gluster.org>, oVirt users <users@ovirt.org>
Couple of things:

1. Like Darrell suggested, you should enable stat-prefetch and increase client and server event threads to 4.
# gluster volume set <VOL> performance.stat-prefetch on
# gluster volume set <VOL> client.event-threads 4
# gluster volume set <VOL> server.event-threads 4
2. Also glusterfs-3.10.1 and above has a shard performance bug fix - https://review.gluster.org/#/c/16966/
With these two changes, we saw great improvement in performance in our internal testing.
Do you mind trying these two options above? -Krutika
On Tue, Jun 20, 2017 at 1:00 PM, Lindsay Mathieson <lindsay.mathieson@gmail.com> wrote:
Have you tried with:
performance.strict-o-direct : off
performance.strict-write-ordering : off

They can be changed dynamically.
On 20 June 2017 at 17:21, Sahina Bose <sabose@redhat.com> wrote:
[Adding gluster-users]
On Mon, Jun 19, 2017 at 8:16 PM, Chris Boot <bootc@bootc.net> wrote:
Hi folks,
I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10 configuration. My VMs run off a replica 3 arbiter 1 volume comprised of 6 bricks, which themselves live on two SSDs in each of the servers (one brick per SSD). The bricks are XFS on LVM thin volumes straight onto the SSDs. Connectivity is 10G Ethernet.
Performance within the VMs is pretty terrible. I experience very low throughput and random IO is really bad: it feels like a latency issue. On my oVirt nodes the SSDs are not generally very busy. The 10G network seems to run without errors (iperf3 gives bandwidth measurements of >= 9.20 Gbits/sec between the three servers).
To put this into perspective: I was getting better behaviour from NFS4 on a gigabit connection than I am with GlusterFS on 10G: that doesn't feel right at all.
My volume configuration looks like this:
Volume Name: vmssd
Type: Distributed-Replicate
Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: ovirt3:/gluster/ssd0_vmssd/brick
Brick2: ovirt1:/gluster/ssd0_vmssd/brick
Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
Brick4: ovirt3:/gluster/ssd1_vmssd/brick
Brick5: ovirt1:/gluster/ssd1_vmssd/brick
Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
Options Reconfigured:
nfs.disable: on
transport.address-family: inet6
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
features.shard-block-size: 128MB
performance.strict-o-direct: on
network.ping-timeout: 30
cluster.granular-entry-heal: enable
I would really appreciate some guidance on this to try to improve things because at this rate I will need to reconsider using GlusterFS altogether.
Could you provide the gluster volume profile output while you're running your I/O tests?

# gluster volume profile <volname> start
to start profiling, and

# gluster volume profile <volname> info
for the profile output.
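A complete profiling pass, roughly (using the vmssd volume name from this thread; the stop subcommand belongs to the same profile command family):

# gluster volume profile vmssd start
(run the I/O workload inside a VM while profiling is active)
# gluster volume profile vmssd info
# gluster volume profile vmssd stop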
Cheers, Chris
-- Chris Boot bootc@bootc.net
-- Lindsay

[replying to lists this time]

On 20/06/17 11:23, Krutika Dhananjay wrote:
Couple of things:
1. Like Darrell suggested, you should enable stat-prefetch and increase client and server event threads to 4.

# gluster volume set <VOL> performance.stat-prefetch on
# gluster volume set <VOL> client.event-threads 4
# gluster volume set <VOL> server.event-threads 4
2. Also glusterfs-3.10.1 and above has a shard performance bug fix - https://review.gluster.org/#/c/16966/
With these two changes, we saw great improvement in performance in our internal testing.
Hi Krutika,

Thanks for your input. I have yet to run any benchmarks, but I'll do that once I have a bit more time to work on this.

I've tweaked the options as you suggest, but that doesn't seem to have made an appreciable difference. I admit that without benchmarks it's a bit like sticking your finger in the air, though. Do I need to restart my bricks and/or remount the volumes for these to take effect?

I'm actually running GlusterFS 3.10.2-1. This is all coming from the CentOS Storage SIG's centos-release-gluster310 repository.

Thanks again.

Chris

-- Chris Boot bootc@bootc.net

No, you don't need to do any of that. Just executing volume-set commands is sufficient for the changes to take effect.

-Krutika

On Wed, Jun 21, 2017 at 3:48 PM, Chris Boot <bootc@bootc.net> wrote:
[replying to lists this time]
On 20/06/17 11:23, Krutika Dhananjay wrote:
Couple of things:
1. Like Darrell suggested, you should enable stat-prefetch and increase client and server event threads to 4.

# gluster volume set <VOL> performance.stat-prefetch on
# gluster volume set <VOL> client.event-threads 4
# gluster volume set <VOL> server.event-threads 4
2. Also glusterfs-3.10.1 and above has a shard performance bug fix - https://review.gluster.org/#/c/16966/
With these two changes, we saw great improvement in performance in our internal testing.
Hi Krutika,
Thanks for your input. I have yet to run any benchmarks, but I'll do that once I have a bit more time to work on this.
I've tweaked the options as you suggest, but that doesn't seem to have made an appreciable difference. I admit that without benchmarks it's a bit like sticking your finger in the air, though. Do I need to restart my bricks and/or remount the volumes for these to take effect?
I'm actually running GlusterFS 3.10.2-1. This is all coming from the CentOS Storage SIG's centos-release-gluster310 repository.
Thanks again.
Chris
-- Chris Boot bootc@bootc.net
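If you want to double-check without restarting anything, one simple sanity check (volume name as used earlier in the thread) is that the new values appear under "Options Reconfigured" in the volume info output:

# gluster volume info vmssd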

On 21/06/17 11:18, Chris Boot wrote:
Thanks for your input. I have yet to run any benchmarks, but I'll do that once I have a bit more time to work on this.
Is there a particular benchmark test that I should run to gather some stats for this? Would certain tests be more useful than others?

Thanks,
Chris

-- Chris Boot bootc@bootc.net
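One possible starting point, as a sketch rather than a definitive test plan: if fio is available inside a VM, a small random-read job and a sequential-write job give both latency and throughput figures (all job parameters below are only illustrative):

# fio --name=randread --ioengine=libaio --direct=1 --rw=randread --bs=4k --size=1G --numjobs=4 --iodepth=16 --runtime=60 --time_based --group_reporting
# fio --name=seqwrite --ioengine=libaio --direct=1 --rw=write --bs=1M --size=4G --numjobs=1 --iodepth=8 --runtime=60 --time_based --group_reporting

A quick-and-dirty sequential check is also possible with dd:

# dd if=/dev/zero of=ddtest.bin bs=1M count=2048 oflag=direct

Run the same jobs before and after any option changes so the numbers stay comparable.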
participants (11)
- Chris Boot
- Darrell Budic
- Doron Fediuck
- Krutika Dhananjay
- Lindsay Mathieson
- mabi
- Mahdi Adnan
- Ralf Schenk
- Sahina Bose
- Sven Achtelik
- Yaniv Kaul