
So oVirt accesses gluster via FUSE? I thought it was using libgfapi.

When can we expect it to work with libgfapi? And what about the changelog of 4.1.3?

BZ 1022961 "Gluster: running a VM from a gluster domain should use gluster URI instead of a fuse mount"

--
Respectfully,
Mahdi A. Mahdi

________________________________
From: users-bounces@ovirt.org <users-bounces@ovirt.org> on behalf of Ralf Schenk <rs@databay.de>
Sent: Monday, June 19, 2017 7:32:45 PM
To: users@ovirt.org
Subject: Re: [ovirt-users] Very poor GlusterFS performance

Hello,

Gluster performance is bad. That's why I asked for native qemu-libgfapi access to gluster volumes for oVirt VMs, which I thought had been possible since 3.6.x. The documentation is misleading, and even in 4.1.2 oVirt still uses FUSE to mount gluster-based VM disks.

Bye

Am 19.06.2017 um 17:23 schrieb Darrell Budic:

Chris-

You probably need to head over to gluster-users@gluster.org for help with performance issues.

That said, what kind of performance are you getting, via some form of testing like bonnie++ or even dd runs? Comparing raw bricks against gluster performance is useful to determine what you're actually getting.

Beyond that, I'd recommend dropping the arbiter bricks and re-adding them as full replicas; they can't serve distributed data in this configuration and may be slowing things down on you. If you've got a storage network set up, make sure it's using the largest MTU it can, and consider adding/testing these settings that I use on my main storage volume (a way to apply them is sketched just below):

performance.io-thread-count: 32
client.event-threads: 8
server.event-threads: 3
performance.stat-prefetch: on

Good luck,

-Darrell
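A minimal sketch of applying Darrell's settings above to Chris's volume, and of checking the storage-network MTU. The volume name comes from the configuration quoted below; the interface name and peer address are placeholders:

    # Apply the suggested options to the vmssd volume (run on any gluster node).
    gluster volume set vmssd performance.io-thread-count 32
    gluster volume set vmssd client.event-threads 8
    gluster volume set vmssd server.event-threads 3
    gluster volume set vmssd performance.stat-prefetch on

    # Check and, if the switches support it, raise the MTU on the storage
    # interface ("eth1" is a placeholder for whatever carries gluster traffic).
    ip link show eth1 | grep -o 'mtu [0-9]*'
    ip link set eth1 mtu 9000

    # Verify jumbo frames actually pass end to end (8972 = 9000 minus 28 bytes
    # of IPv4/ICMP headers); use a peer host's storage-network address.
    ping -M do -s 8972 -c 3 <peer-storage-ip>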
On Jun 19, 2017, at 9:46 AM, Chris Boot <bootc@bootc.net> wrote:

Hi folks,

I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10
configuration. My VMs run off a replica 3 arbiter 1 volume comprised of
6 bricks, which themselves live on two SSDs in each of the servers (one
brick per SSD). The bricks are XFS on LVM thin volumes straight onto the
SSDs. Connectivity is 10G Ethernet.

Performance within the VMs is pretty terrible. I experience very low
throughput and random IO is really bad: it feels like a latency issue.
On my oVirt nodes the SSDs are not generally very busy. The 10G network
seems to run without errors (iperf3 gives bandwidth measurements of
>= 9.20 Gbits/sec between the three servers).

To put this into perspective: I was getting better behaviour from NFS4
on a gigabit connection than I am with GlusterFS on 10G: that doesn't
feel right at all.

My volume configuration looks like this:

Volume Name: vmssd
Type: Distributed-Replicate
Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: ovirt3:/gluster/ssd0_vmssd/brick
Brick2: ovirt1:/gluster/ssd0_vmssd/brick
Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
Brick4: ovirt3:/gluster/ssd1_vmssd/brick
Brick5: ovirt1:/gluster/ssd1_vmssd/brick
Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
Options Reconfigured:
nfs.disable: on
transport.address-family: inet6
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
features.shard-block-size: 128MB
performance.strict-o-direct: on
network.ping-timeout: 30
cluster.granular-entry-heal: enable

I would really appreciate some guidance on this to try to improve things
because at this rate I will need to reconsider using GlusterFS altogether.

Cheers,
Chris

--
Chris Boot
bootc@bootc.net
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

--
Ralf Schenk
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail rs@databay.de

Databay AG
Jens-Otto-Krag-Straße 11
D-52146 Würselen
www.databay.de

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm. Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen
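For reference, a minimal sketch of the raw-brick vs. FUSE-mount dd comparison Darrell suggests above. The brick path comes from Chris's brick list; writing next to (not inside) the brick directory and the oVirt mountpoint under /rhev/data-center/mnt/glusterSD/ are assumptions about the layout, so adjust to what "mount" reports on the host:

    # Raw write to the filesystem backing a brick (assumes /gluster/ssd0_vmssd
    # is the XFS mountpoint holding the "brick" subdirectory; avoid writing
    # inside the brick directory itself on a live volume).
    dd if=/dev/zero of=/gluster/ssd0_vmssd/ddtest.bin bs=1M count=1024 oflag=direct

    # Same write through the gluster FUSE mount oVirt uses for the storage
    # domain (mountpoint is an assumption; check "mount | grep glusterfs").
    dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/ovirt1:_vmssd/ddtest.bin bs=1M count=1024 oflag=direct

    # Clean up the test files afterwards.
    rm -f /gluster/ssd0_vmssd/ddtest.bin /rhev/data-center/mnt/glusterSD/ovirt1:_vmssd/ddtest.bin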
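Finally, a hedged way to answer the libgfapi question at the top of the thread for a given host: look at the disk arguments of a running qemu process. A disk accessed through the FUSE mount shows up as a plain file path under the glusterSD mountpoint, whereas libgfapi access would appear as a gluster:// URI (the paths below are illustrative):

    # On an oVirt host with a running VM, extract the disk "file=" arguments.
    ps -ef | grep '[q]emu-kvm' | grep -o 'file=[^,]*'

    # FUSE-mounted access (what 4.1.x does) looks like:
    #   file=/rhev/data-center/mnt/glusterSD/ovirt1:_vmssd/<sd-uuid>/images/...
    # libgfapi access would instead look like:
    #   file=gluster://ovirt1/vmssd/<sd-uuid>/images/...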