
I am on CentOS 6.5 and I am using:

[root@node1 ~]# rpm -qa | grep gluster
glusterfs-rdma-3.4.2-1.el6.x86_64
glusterfs-server-3.4.2-1.el6.x86_64
glusterfs-fuse-3.4.2-1.el6.x86_64
glusterfs-libs-3.4.2-1.el6.x86_64
glusterfs-3.4.2-1.el6.x86_64
glusterfs-api-3.4.2-1.el6.x86_64
glusterfs-cli-3.4.2-1.el6.x86_64
vdsm-gluster-4.13.3-3.el6.noarch

[root@node1 ~]# uname -a
Linux node1.hq.imos.net 2.6.32-431.el6.x86_64 #1 SMP Fri Nov 22 03:15:09 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

Thanks,
Mario

On Sat, Feb 8, 2014 at 10:06 PM, Samuli Heinonen <samppah@neutraali.net> wrote:
Hello,
What version of GlusterFS are you using?
ml ml <mliebherr99@googlemail.com> wrote on 8.2.2014 at 21:24:
anyone?
On Friday, February 7, 2014, ml ml <mliebherr99@googlemail.com> wrote:
Hello List,
I set up a cluster with 2 nodes and GlusterFS.
gluster> volume info all
Volume Name: Repl2
Type: Replicate
Volume ID: 8af9b282-8b60-4d71-a0fd-9116b8fdcca7
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node1.local:/data
Brick2: node2.local:/data
Options Reconfigured:
auth.allow: *
user.cifs: enable
nfs.disable: off
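(Editor's aside, not from the original thread: slow writes from VMs on a replicated FUSE mount are often helped by the option profile oVirt applies to virt-store volumes. A hedged sketch, assuming the volume name Repl2 from above and the option names documented for GlusterFS 3.4; verify them on your install with `gluster volume set help` before applying:)

```shell
# Sketch only: the "virt" tuning profile commonly used for VM image
# storage on GlusterFS 3.4. Each command sets one volume option on Repl2.
gluster volume set Repl2 performance.quick-read off
gluster volume set Repl2 performance.read-ahead off
gluster volume set Repl2 performance.io-cache off
gluster volume set Repl2 performance.stat-prefetch off
gluster volume set Repl2 cluster.eager-lock enable
gluster volume set Repl2 network.remote-dio enable
```

Afterwards, `gluster volume info Repl2` should list the new values under "Options Reconfigured".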
I turned node2 off, just to make sure there is no network bottleneck and that it will not replicate during my first benchmarks.
My first test with bonnie++ on the local raw disk of node1 gave me 130 MB/sec write speed. Then I ran the same test on my cluster directory /data: 130 MB/sec. Then I ran the write test in a freshly installed Debian 7 VM: 10 MB/sec.
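(Editor's aside, not from the original thread: a quick cross-check that separates cache-inflated numbers from real write throughput is a dd run with fdatasync, once on the Gluster mount and once inside the VM. The path and sizes below are examples, not taken from the thread:)

```shell
# Hypothetical cross-check: sequential write speed without page-cache
# inflation. conv=fdatasync makes dd flush the data to stable storage
# before it reports a transfer rate.
TESTFILE=${TESTFILE:-/tmp/gluster-write-test.bin}   # substitute your Gluster mount, e.g. /data/...
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync
rm -f "$TESTFILE"
```

If the dd numbers match bonnie++, the gap really is in the VM's I/O path (FUSE overhead, O_DIRECT handling) rather than a benchmarking artifact.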
This is terrible, and I wonder why!
My tests were made with: bonnie++ -u root -s <double mem> -d <dir>
Here are my bonnie results: http://oi62.tinypic.com/20aara0.jpg
Since node2 is turned off, this can't be a network bottleneck.
Any ideas?
Thanks, Mario
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users