[Users] Very bad write performance in VM (ovirt 3.3.3)

Hello List,

I set up a cluster with 2 nodes and GlusterFS.

gluster> volume info all

Volume Name: Repl2
Type: Replicate
Volume ID: 8af9b282-8b60-4d71-a0fd-9116b8fdcca7
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node1.local:/data
Brick2: node2.local:/data
Options Reconfigured:
auth.allow: *
user.cifs: enable
nfs.disable: off

I turned node2 off, just to make sure there is no network bottleneck and that nothing gets replicated during my first benchmarks.

My first test with bonnie++ on the local raw disk of node1 gave me 130 MB/sec write speed. Then I ran the same test on my Gluster directory /data: 130 MB/sec. Then I ran the write test in a freshly installed Debian 7 VM: 10 MB/sec.

This is terrible and I wonder why?!

My tests were made with: bonnie++ -u root -s <double mem> -d <dir>

Here are my bonnie results: http://oi62.tinypic.com/20aara0.jpg

Since node2 is turned off, this can't be a network bottleneck.

Any ideas?

Thanks,
Mario
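[Editorial sketch, not from the original mail] A minimal outline of the three benchmark runs described above. The mount points and sizes are illustrative assumptions, not paths from the thread; the -s value should be roughly twice the RAM of the machine running the test so the page cache cannot hide the real write speed:

# 1) On node1, against the local raw disk (illustrative mount point)
bonnie++ -u root -s 16384 -d /mnt/localdisk

# 2) On node1, against the replicated Gluster directory
bonnie++ -u root -s 16384 -d /data

# 3) Inside the freshly installed Debian 7 guest (illustrative directory;
#    the size is twice the guest's RAM, not the host's)
bonnie++ -u root -s 4096 -d /root/bench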

Anyone?

On Friday, February 7, 2014, ml ml <mliebherr99@googlemail.com> wrote:
Hello List,
i set up a Cluster with 2 Nodes and Glusterfs.
gluster> volume info all
Volume Name: Repl2
Type: Replicate
Volume ID: 8af9b282-8b60-4d71-a0fd-9116b8fdcca7
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node1.local:/data
Brick2: node2.local:/data
Options Reconfigured:
auth.allow: *
user.cifs: enable
nfs.disable: off
I turned node2 off, just to make sure there is no network bottleneck and that nothing gets replicated during my first benchmarks.
My first test with bonnie++ on the local raw disk of node1 gave me 130 MB/sec write speed. Then I ran the same test on my Gluster directory /data: 130 MB/sec. Then I ran the write test in a freshly installed Debian 7 VM: 10 MB/sec.
This is terrible and I wonder why?!
My tests were made with: bonnie++ -u root -s <double mem> -d <dir>
Here are my bonnie results: http://oi62.tinypic.com/20aara0.jpg
Since node2 is turned off, this can't be a network bottleneck.
Any ideas?
Thanks, Mario

Hello,

What version of GlusterFS are you using?

I am on CentOS 6.5 and I am using:

[root@node1 ~]# rpm -qa | grep gluster
glusterfs-rdma-3.4.2-1.el6.x86_64
glusterfs-server-3.4.2-1.el6.x86_64
glusterfs-fuse-3.4.2-1.el6.x86_64
glusterfs-libs-3.4.2-1.el6.x86_64
glusterfs-3.4.2-1.el6.x86_64
glusterfs-api-3.4.2-1.el6.x86_64
glusterfs-cli-3.4.2-1.el6.x86_64
vdsm-gluster-4.13.3-3.el6.noarch

[root@node1 ~]# uname -a
Linux node1.hq.imos.net 2.6.32-431.el6.x86_64 #1 SMP Fri Nov 22 03:15:09 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

Thanks,
Mario

On Sat, Feb 8, 2014 at 10:06 PM, Samuli Heinonen <samppah@neutraali.net> wrote:
Hello,
What version of GlusterFS are you using?
ml ml <mliebherr99@googlemail.com> wrote on 8.2.2014 at 21:24:
Anyone?
On Friday, February 7, 2014, ml ml <mliebherr99@googlemail.com> wrote:
Hello List,
i set up a Cluster with 2 Nodes and Glusterfs.
gluster> volume info all
Volume Name: Repl2
Type: Replicate
Volume ID: 8af9b282-8b60-4d71-a0fd-9116b8fdcca7
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node1.local:/data
Brick2: node2.local:/data
Options Reconfigured:
auth.allow: *
user.cifs: enable
nfs.disable: off
I turned node2 off, just to make sure there is no network bottleneck and that nothing gets replicated during my first benchmarks.
My first test with bonnie++ on the local raw disk of node1 gave me 130 MB/sec write speed. Then I ran the same test on my Gluster directory /data: 130 MB/sec. Then I ran the write test in a freshly installed Debian 7 VM: 10 MB/sec.
This is terrible and I wonder why?!
My tests were made with: bonnie++ -u root -s <double mem> -d <dir>
Here are my bonnie results: http://oi62.tinypic.com/20aara0.jpg
Since node2 is turned off, this can't be a network bottleneck.
Any ideas?
Thanks, Mario

On 02/09/2014 09:11 PM, ml ml wrote:
I am on CentOS 6.5 and I am using:

[root@node1 ~]# rpm -qa | grep gluster
glusterfs-rdma-3.4.2-1.el6.x86_64
glusterfs-server-3.4.2-1.el6.x86_64
glusterfs-fuse-3.4.2-1.el6.x86_64
glusterfs-libs-3.4.2-1.el6.x86_64
glusterfs-3.4.2-1.el6.x86_64
glusterfs-api-3.4.2-1.el6.x86_64
glusterfs-cli-3.4.2-1.el6.x86_64
vdsm-gluster-4.13.3-3.el6.noarch
Have you turned on "Optimize for Virt Store" for the gluster volume?

-Vijay
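[Editorial sketch, not from the original mail] As a quick sanity check, the options actually applied to the volume can be listed from the gluster CLI; whatever "Optimize for Virt Store" sets should show up under "Options Reconfigured":

# On either node; non-default settings appear under "Options Reconfigured"
gluster volume info Repl2
# Brick, NFS and self-heal daemon status for the volume
gluster volume status Repl2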

Yes, the only thing which brings the write I/O almost up to host level is enabling viodiskcache = writeback. As far as I can tell, this enables caching for the guest and the host, which is critical if a sudden power loss happens. Can I turn this on if I have a BBU in my host system? (A sketch of what this corresponds to at the qemu level follows after this message.)

On Sun, Feb 9, 2014 at 6:25 PM, Vijay Bellur <vbellur@redhat.com> wrote:
On 02/09/2014 09:11 PM, ml ml wrote:
I am on CentOS 6.5 and I am using:

[root@node1 ~]# rpm -qa | grep gluster
glusterfs-rdma-3.4.2-1.el6.x86_64
glusterfs-server-3.4.2-1.el6.x86_64
glusterfs-fuse-3.4.2-1.el6.x86_64
glusterfs-libs-3.4.2-1.el6.x86_64
glusterfs-3.4.2-1.el6.x86_64
glusterfs-api-3.4.2-1.el6.x86_64
glusterfs-cli-3.4.2-1.el6.x86_64
vdsm-gluster-4.13.3-3.el6.noarch
Have you turned on "Optimize for Virt Store" for the gluster volume?
-Vijay
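[Editorial sketch, not from the original mail] For orientation, viodiskcache = writeback as described above roughly corresponds to attaching the guest's virtio disk with the host page cache enabled. oVirt actually starts guests through vdsm/libvirt, so the hand-written qemu invocation below is only illustrative, and the image path and memory size are placeholders:

# Illustrative only: cache=writeback acknowledges guest writes once they reach
# host RAM (page cache), not stable storage; oVirt's usual default is cache=none.
qemu-system-x86_64 -m 2048 \
  -drive file=/path/to/vm-disk.img,if=virtio,cache=writeback

Note that a RAID controller BBU protects the controller's own write cache, not the host's page cache, so data that only sits in host RAM can still be lost on power failure even with a BBU.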

On 02/09/2014 11:08 PM, ml ml wrote:
Yes, the only thing which brings the write I/O almost up to host level is enabling viodiskcache = writeback. As far as I can tell, this enables caching for the guest and the host, which is critical if a sudden power loss happens. Can I turn this on if I have a BBU in my host system?
I was referring to the set of gluster volume tunables in [1]. These options can be enabled through the "volume set" interface in the gluster CLI. The quorum options provide tolerance against split-brain, and the remaining ones are normally recommended for performance.

-Vijay

[1] https://github.com/gluster/glusterfs/blob/master/extras/group-virt.example
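[Editorial sketch, not from the original mail] To make that concrete, applying those tunables to the Repl2 volume could look roughly as follows; the option names are taken from the 3.4-era group-virt.example and should be verified against the linked file for the exact version in use:

# Run once on any gluster node; each setting applies to the whole volume.
gluster volume set Repl2 performance.quick-read off
gluster volume set Repl2 performance.read-ahead off
gluster volume set Repl2 performance.io-cache off
gluster volume set Repl2 performance.stat-prefetch off
gluster volume set Repl2 cluster.eager-lock enable
gluster volume set Repl2 network.remote-dio enable
# The quorum options guard against split-brain rather than improving speed:
gluster volume set Repl2 cluster.quorum-type auto
gluster volume set Repl2 cluster.server-quorum-type server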

Hi Vijay,

I deleted the cluster/datacenter and set it up with two new (physical) hosts, and now the performance looks great. I don't know what I did wrong. Thanks a lot...

On Mon, Feb 10, 2014 at 6:10 PM, Vijay Bellur <vbellur@redhat.com> wrote:
On 02/09/2014 11:08 PM, ml ml wrote:
Yes, the only thing which brings the write I/O almost up to host level is enabling viodiskcache = writeback. As far as I can tell, this enables caching for the guest and the host, which is critical if a sudden power loss happens. Can I turn this on if I have a BBU in my host system?
I was referring to the set of gluster volume tunables in [1]. These options can be enabled through the "volume set" interface in the gluster CLI.
The quorum options provide tolerance against split-brain, and the remaining ones are normally recommended for performance.
-Vijay
[1] https://github.com/gluster/glusterfs/blob/master/extras/group-virt.example
participants (3):
- ml ml
- Samuli Heinonen
- Vijay Bellur