Hi,
Can you post some numbers? What tests are you running?
I'm running oVirt with Gluster without performance issues, but I'm running replica 2, all SSDs.
Gluster logs might help too.
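For example, a short fio run inside one of the VMs would give comparable throughput and latency numbers (just a sketch; adjust the job parameters to match your workload), and the client-side Gluster logs are usually under /var/log/glusterfs/ on the hypervisors:

  # Random mixed read/write test inside a guest (hypothetical job parameters):
  fio --name=randrw --filename=/var/tmp/fio.test --size=1G \
      --ioengine=libaio --direct=1 --bs=4k --iodepth=32 \
      --rw=randrw --rwmixread=70 --runtime=60 --time_based --group_reporting

  # Client-side Gluster logs on each hypervisor:
  ls /var/log/glusterfs/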
--
Respectfully
Mahdi A. Mahdi
________________________________
From: users-bounces(a)ovirt.org <users-bounces(a)ovirt.org> on behalf of Chris Boot <bootc(a)bootc.net>
Sent: Monday, June 19, 2017 5:46:08 PM
To: oVirt users
Subject: [ovirt-users] Very poor GlusterFS performance
Hi folks,
I have three servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10
configuration. My VMs run off a replica 3 arbiter 1 volume comprising
6 bricks, which themselves live on two SSDs in each of the servers (one
brick per SSD). The bricks are XFS on LVM thin volumes straight onto the
SSDs. Connectivity is 10G Ethernet.
Performance within the VMs is pretty terrible. I experience very low
throughput and random IO is really bad: it feels like a latency issue.
On my oVirt nodes the SSDs are not generally very busy. The 10G network
seems to run without errors (iperf3 gives bandwidth measurements of >=
9.20 Gbits/sec between the three servers).
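A plain point-to-point run of that sort, assuming default ports, looks roughly like this (the hostname is just one from the volume layout below):

  # On one node:
  iperf3 -s
  # From another node, a 30-second test:
  iperf3 -c ovirt1 -t 30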
To put this into perspective: I was getting better behaviour from NFS4
over a gigabit connection than I am with GlusterFS on 10G, which doesn't
feel right at all.
My volume configuration looks like this:
Volume Name: vmssd
Type: Distributed-Replicate
Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: ovirt3:/gluster/ssd0_vmssd/brick
Brick2: ovirt1:/gluster/ssd0_vmssd/brick
Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
Brick4: ovirt3:/gluster/ssd1_vmssd/brick
Brick5: ovirt1:/gluster/ssd1_vmssd/brick
Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
Options Reconfigured:
nfs.disable: on
transport.address-family: inet6
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
features.shard-block-size: 128MB
performance.strict-o-direct: on
network.ping-timeout: 30
cluster.granular-entry-heal: enable
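For per-brick latency and FOP statistics, Gluster's built-in profiler can be run against this volume while a test workload is active; a rough sketch:

  gluster volume profile vmssd start
  # ... run the test workload inside a VM ...
  gluster volume profile vmssd info
  gluster volume profile vmssd stop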
I would really appreciate some guidance on this to try to improve things
because at this rate I will need to reconsider using GlusterFS altogether.
Cheers,
Chris
--
Chris Boot
bootc(a)bootc.net
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users