Dear Darrell,
I found the issue and now I can reach the maximum of the network with a fuse client. Here
is a short overview:
1. I noticed that a freshly created gluster volume reaches my full network speed - I was quite excited.
2. Then I destroyed the volume, created a new one and started adding the options that oVirt uses.
Once I added 'features.shard on' -> I hit the same poor performance as before.
Increasing the shard size to 16MB didn't help at all.
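(For reference, the sharding options can be toggled and checked roughly like this - a sketch against the 'data' volume described below, not necessarily the exact commands used:)
gluster volume set data features.shard on
gluster volume set data features.shard-block-size 16MB
gluster volume get data features.shard-block-size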
In my case - 2 virtualization hosts with a single data gluster volume - sharding is not
necessary, but for larger setups it will be a problem.
As this looks like a bug to me - can someone tell me where I can report it?
Thanks to all who guided me on this GlusterFS journey! I have learned so much, as my
prior knowledge was only in Ceph.
Best Regards,
Strahil Nikolov
On Thursday, 24 January 2019, 17:53:50 GMT+2, Darrell Budic
<budic(a)onholyground.com> wrote:
Strahil-
The fuse client is what it is: it's limited by operating in user space and by waiting for the
gluster servers to acknowledge all the writes. I noted you're using oVirt; you should
look into enabling the libgfapi engine setting to run your VMs with libgfapi natively. You
can't test directly from the host with that, but you can run your tests inside the VMs. I
saw significant throughput and latency improvements that way. It's still somewhat beta, so
you'll probably need to search the ovirt-users mailing list to find info on enabling it.
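(If memory serves, it boils down to one engine-config key on the engine host - a sketch, so double-check the exact option name for your oVirt version on the ovirt-users list:)
engine-config -s LibgfApiSupported=true
systemctl restart ovirt-engine
# VMs started after that should attach their gluster disks via libgfapi instead of the fuse mount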
Good luck!
On Jan 24, 2019, at 4:32 AM, Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
Dear Amar, Community,
it seems the issue is in the fuse client itself.
Here is the latest update:
1. I have added the following:
server.event-threads: 4
client.event-threads: 4
performance.stat-prefetch: on
performance.strict-o-direct: off
Results: no change
2. Allowed NFS and connected ovirt1 to the gluster volume:
nfs.disable: off
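(Roughly what was done - gluster's built-in NFS serves NFSv3, and the mount point below is just illustrative:)
gluster volume set data nfs.disable off
mount -t nfs -o vers=3 ovirt1.localdomain:/data /mnt/nfs_test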
Results: Drastic improvement in performance as follows:
[root@ovirt1 data]# dd if=/dev/zero of=largeio bs=1M count=5000 status=progress
5000+0 records in
5000+0 records out
5242880000 bytes (5.2 GB) copied, 53.0443 s, 98.8 MB/s
So I would be happy if anyone could guide me towards fixing this, as the fuse client is
the preferred way to use glusterfs, and it seems the glusterfs server is not the guilty one.
Thanks in advance for your guidance. I have learned so much.
Best Regards,
Strahil Nikolov
From: Strahil <hunter86_bg(a)yahoo.com>
To: Amar Tumballi Suryanarayan <atumball(a)redhat.com>
Cc: Gluster-users <gluster-users(a)gluster.org>
Sent: Wednesday, 23 January 2019, 18:44
Subject: Re: [Gluster-users] Gluster performance issues - need advise
Dear Amar,
Thanks for your email.
Actually my concerns were about both topics. Would you recommend any perf options that would be
suitable?
After you mentioned the network usage, I just checked it, and it seems that during the test session
ovirt1 (both client and host) uses no more than 455 Mbit/s, which is half the network
bandwidth.
I'm still in the middle of nowhere, so any ideas are welcome.
Best Regards,
Strahil Nikolov
On Jan 23, 2019 17:49, Amar Tumballi Suryanarayan <atumball(a)redhat.com> wrote:
I didn't understand the issue properly; most likely I missed something.
Are you concerned that the performance is 49 MB/s with and without perf options? Or are you
expecting it to be 123 MB/s, since that is the speed you get over the n/w?
If it is the first, then note that you effectively have 'performance.write-behind
on' in both cases, and it is the only perf xlator that comes into play during the
test you ran.
If it is the second, then please be aware that gluster does client-side replication,
which means the n/w bandwidth is split in half for write operations (write(), creat(), etc.),
so the number you are getting is almost the maximum achievable with 1GbE.
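As a rough worked example: your nc test shows ~123 MB/s over the wire, and with replica 2 plus arbiter every write leaves the client twice (the arbiter only receives tiny metadata-sized writes), so the expected ceiling is roughly 123 / 2 ≈ 61 MB/s - right around the ~60 MB/s you measured through the fuse mount.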
Regards,
Amar
On Wed, Jan 23, 2019 at 8:38 PM Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
Hello Community,
recently I built a new lab based on oVirt and CentOS 7.
During deployment I had some hiccups, but now the engine is up and running - however, gluster is
causing me trouble.
Symptoms: Slow VM install from DVD, poor write performance. The latter has been tested
via:
dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_data bs=1M
count=1000 status=progress
The reported speed is 60MB/s which is way too low for my setup.
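(A direct-I/O variant of the same test, which bypasses the client page cache, would look roughly like this - the file name is purely illustrative:)
dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_data/ddtest.img bs=1M count=1000 oflag=direct status=progress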
My lab design:
https://drive.google.com/file/d/1SiW21ASPXHRAEuE_jZ50R3FoO-NcnFqT/view?us...
Gluster version is 3.12.15
So far I have done:
1. Added 'server.allow-insecure on' (with 'option rpc-auth-allow-insecure
on' in glusterd.vol)
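(For reference, the two pieces look roughly like this - the glusterd option goes into the 'volume management' block of /etc/glusterfs/glusterd.vol on each node, followed by a glusterd restart:)
# /etc/glusterfs/glusterd.vol, inside 'volume management ... end-volume':
    option rpc-auth-allow-insecure on
# and on the volume itself:
gluster volume set data server.allow-insecure on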
Volume info after that change:
Volume Name: data
Type: Replicate
Volume ID: 9b06a1e9-8102-4cd7-bc56-84960a1efaa2
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.localdomain:/gluster_bricks/data/data
Brick2: ovirt2.localdomain:/gluster_bricks/data/data
Brick3: ovirt3.localdomain:/gluster_bricks/data/data (arbiter)
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable
server.allow-insecure: on
It seems to have had no positive or negative effect so far.
2. Tested with tmpfs on all bricks -> ovirt1 mounted gluster volume -> max 60MB/s
(bs=1M without 'oflag=direct')
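(One way to run such a RAM-backed comparison - a sketch using a throwaway test volume on tmpfs, not necessarily exactly what was done here:)
# on each node:
mount -t tmpfs -o size=4g tmpfs /gluster_bricks/tmpbrick
# from one node:
gluster volume create tmptest replica 3 arbiter 1 ovirt1.localdomain:/gluster_bricks/tmpbrick/brick ovirt2.localdomain:/gluster_bricks/tmpbrick/brick ovirt3.localdomain:/gluster_bricks/tmpbrick/brick
gluster volume start tmptest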
[root@ovirt1 data]# dd if=/dev/zero of=large_io bs=1M count=4000 status=progress
4177526784 bytes (4.2 GB) copied, 70.843409 s, 59.0 MB/s
4000+0 records in
4000+0 records out
4194304000 bytes (4.2 GB) copied, 71.1407 s, 59.0 MB/s
[root@ovirt1 data]# rm -f large_io
[root@ovirt1 data]# gluster volume profile data info
Brick: ovirt1.localdomain:/gluster_bricks/data/data
---------------------------------------------------
Cumulative Stats:
Block Size: 131072b+
No. of Reads: 8
No. of Writes: 44968
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 3 FORGET
0.00 0.00 us 0.00 us 0.00 us 35 RELEASE
0.00 0.00 us 0.00 us 0.00 us 28 RELEASEDIR
0.00 78.00 us 78.00 us 78.00 us 1 FSTAT
0.00 35.67 us 26.00 us 73.00 us 6 FLUSH
0.00 324.00 us 324.00 us 324.00 us 1 XATTROP
0.00 45.80 us 38.00 us 54.00 us 10 STAT
0.00 227.67 us 216.00 us 242.00 us 3 CREATE
0.00 113.38 us 68.00 us 381.00 us 8 READ
0.00 39.82 us 1.00 us 148.00 us 28 OPENDIR
0.00 67.54 us 10.00 us 283.00 us 24 GETXATTR
0.00 59.97 us 45.00 us 113.00 us 32 OPEN
0.00 24.41 us 13.00 us 89.00 us 161 INODELK
0.00 43.43 us 28.00 us 214.00 us 93 STATFS
0.00 246.35 us 11.00 us 1155.00 us 20 READDIR
0.00 283.00 us 233.00 us 353.00 us 18 READDIRP
0.00 153.23 us 122.00 us 259.00 us 87 MKNOD
0.01 99.77 us 10.00 us 258.00 us 442 LOOKUP
0.31 49.22 us 27.00 us 540.00 us 45620 FXATTROP
0.77 124.24 us 87.00 us 604.00 us 44968 WRITE
0.93 15767.71 us 15.00 us 305833.00 us 431 ENTRYLK
1.99 160711.39 us 3332.00 us 406037.00 us 90 UNLINK
96.00 5167.82 us 18.00 us 55972.00 us 135349 FINODELK
Duration: 380 seconds
Data Read: 1048576 bytes
Data Written: 5894045696 bytes
Interval 0 Stats:
Block Size: 131072b+
No. of Reads: 8
No. of Writes: 44968
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 3 FORGET
0.00 0.00 us 0.00 us 0.00 us 35 RELEASE
0.00 0.00 us 0.00 us 0.00 us 28 RELEASEDIR
0.00 78.00 us 78.00 us 78.00 us 1 FSTAT
0.00 35.67 us 26.00 us 73.00 us 6 FLUSH
0.00 324.00 us 324.00 us 324.00 us 1 XATTROP
0.00 45.80 us 38.00 us 54.00 us 10 STAT
0.00 227.67 us 216.00 us 242.00 us 3 CREATE
0.00 113.38 us 68.00 us 381.00 us 8 READ
0.00 39.82 us 1.00 us 148.00 us 28 OPENDIR
0.00 67.54 us 10.00 us 283.00 us 24 GETXATTR
0.00 59.97 us 45.00 us 113.00 us 32 OPEN
0.00 24.41 us 13.00 us 89.00 us 161 INODELK
0.00 43.43 us 28.00 us 214.00 us 93 STATFS
0.00 246.35 us 11.00 us 1155.00 us 20 READDIR
0.00 283.00 us 233.00 us 353.00 us 18 READDIRP
0.00 153.23 us 122.00 us 259.00 us 87 MKNOD
0.01 99.77 us 10.00 us 258.00 us 442 LOOKUP
0.31 49.22 us 27.00 us 540.00 us 45620 FXATTROP
0.77 124.24 us 87.00 us 604.00 us 44968 WRITE
0.93 15767.71 us 15.00 us 305833.00 us 431 ENTRYLK
1.99 160711.39 us 3332.00 us 406037.00 us 90 UNLINK
96.00 5167.82 us 18.00 us 55972.00 us 135349 FINODELK
Duration: 380 seconds
Data Read: 1048576 bytes
Data Written: 5894045696 bytes
Brick: ovirt3.localdomain:/gluster_bricks/data/data
---------------------------------------------------
Cumulative Stats:
Block Size: 1b+
No. of Reads: 0
No. of Writes: 39328
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 2 FORGET
0.00 0.00 us 0.00 us 0.00 us 12 RELEASE
0.00 0.00 us 0.00 us 0.00 us 17 RELEASEDIR
0.00 101.00 us 101.00 us 101.00 us 1 FSTAT
0.00 51.50 us 20.00 us 81.00 us 4 FLUSH
0.01 219.50 us 188.00 us 251.00 us 2 CREATE
0.01 43.45 us 11.00 us 90.00 us 11 GETXATTR
0.01 62.30 us 38.00 us 119.00 us 10 OPEN
0.01 50.59 us 1.00 us 102.00 us 17 OPENDIR
0.01 24.60 us 12.00 us 64.00 us 40 INODELK
0.02 176.30 us 10.00 us 765.00 us 10 READDIR
0.07 63.08 us 39.00 us 133.00 us 78 UNLINK
0.13 27.35 us 10.00 us 91.00 us 333 ENTRYLK
0.13 126.89 us 99.00 us 179.00 us 76 MKNOD
0.42 116.70 us 8.00 us 8661.00 us 261 LOOKUP
28.73 51.79 us 22.00 us 2574.00 us 39822 FXATTROP
29.52 53.87 us 16.00 us 3290.00 us 39328 WRITE
40.92 24.71 us 10.00 us 3224.00 us 118864 FINODELK
Duration: 189 seconds
Data Read: 0 bytes
Data Written: 39328 bytes
Interval 0 Stats:
Block Size: 1b+
No. of Reads: 0
No. of Writes: 39328
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 2 FORGET
0.00 0.00 us 0.00 us 0.00 us 12 RELEASE
0.00 0.00 us 0.00 us 0.00 us 17 RELEASEDIR
0.00 101.00 us 101.00 us 101.00 us 1 FSTAT
0.00 51.50 us 20.00 us 81.00 us 4 FLUSH
0.01 219.50 us 188.00 us 251.00 us 2 CREATE
0.01 43.45 us 11.00 us 90.00 us 11 GETXATTR
0.01 62.30 us 38.00 us 119.00 us 10 OPEN
0.01 50.59 us 1.00 us 102.00 us 17 OPENDIR
0.01 24.60 us 12.00 us 64.00 us 40 INODELK
0.02 176.30 us 10.00 us 765.00 us 10 READDIR
0.07 63.08 us 39.00 us 133.00 us 78 UNLINK
0.13 27.35 us 10.00 us 91.00 us 333 ENTRYLK
0.13 126.89 us 99.00 us 179.00 us 76 MKNOD
0.42 116.70 us 8.00 us 8661.00 us 261 LOOKUP
28.73 51.79 us 22.00 us 2574.00 us 39822 FXATTROP
29.52 53.87 us 16.00 us 3290.00 us 39328 WRITE
40.92 24.71 us 10.00 us 3224.00 us 118864 FINODELK
Duration: 189 seconds
Data Read: 0 bytes
Data Written: 39328 bytes
Brick: ovirt2.localdomain:/gluster_bricks/data/data
---------------------------------------------------
Cumulative Stats:
Block Size: 512b+ 131072b+
No. of Reads: 0 0
No. of Writes: 36 76758
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 6 FORGET
0.00 0.00 us 0.00 us 0.00 us 87 RELEASE
0.00 0.00 us 0.00 us 0.00 us 96 RELEASEDIR
0.00 100.50 us 80.00 us 121.00 us 2 REMOVEXATTR
0.00 101.00 us 101.00 us 101.00 us 2 SETXATTR
0.00 36.18 us 22.00 us 62.00 us 11 FLUSH
0.00 57.44 us 42.00 us 77.00 us 9 FTRUNCATE
0.00 82.56 us 59.00 us 138.00 us 9 FSTAT
0.00 89.42 us 67.00 us 161.00 us 12 SETATTR
0.00 272.40 us 235.00 us 296.00 us 5 CREATE
0.01 154.28 us 88.00 us 320.00 us 18 XATTROP
0.01 45.29 us 1.00 us 319.00 us 96 OPENDIR
0.01 86.69 us 30.00 us 379.00 us 62 STAT
0.01 64.30 us 47.00 us 169.00 us 84 OPEN
0.02 107.34 us 23.00 us 273.00 us 73 READDIRP
0.02 4688.00 us 86.00 us 9290.00 us 2 TRUNCATE
0.02 59.29 us 13.00 us 394.00 us 165 GETXATTR
0.03 128.51 us 27.00 us 338.00 us 96 FSYNC
0.03 240.75 us 14.00 us 1943.00 us 52 READDIR
0.04 65.59 us 26.00 us 293.00 us 279 STATFS
0.06 180.77 us 118.00 us 306.00 us 148 MKNOD
0.14 37.98 us 17.00 us 192.00 us 1598 INODELK
0.67 91.68 us 12.00 us 1141.00 us 3186 LOOKUP
10.10 55.92 us 28.00 us 1658.00 us 78608 FXATTROP
11.89 6814.76 us 18.00 us 301246.00 us 760 ENTRYLK
19.44 36.55 us 14.00 us 2353.00 us 231535 FINODELK
25.21 142.92 us 62.00 us 593.00 us 76794 WRITE
32.28 91283.68 us 28.00 us 316658.00 us 154 UNLINK
Duration: 1206 seconds
Data Read: 0 bytes
Data Written: 10060843008 bytes
Interval 0 Stats:
Block Size: 512b+ 131072b+
No. of Reads: 0 0
No. of Writes: 36 76758
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 6 FORGET
0.00 0.00 us 0.00 us 0.00 us 87 RELEASE
0.00 0.00 us 0.00 us 0.00 us 96 RELEASEDIR
0.00 100.50 us 80.00 us 121.00 us 2 REMOVEXATTR
0.00 101.00 us 101.00 us 101.00 us 2 SETXATTR
0.00 36.18 us 22.00 us 62.00 us 11 FLUSH
0.00 57.44 us 42.00 us 77.00 us 9 FTRUNCATE
0.00 82.56 us 59.00 us 138.00 us 9 FSTAT
0.00 89.42 us 67.00 us 161.00 us 12 SETATTR
0.00 272.40 us 235.00 us 296.00 us 5 CREATE
0.01 154.28 us 88.00 us 320.00 us 18 XATTROP
0.01 45.29 us 1.00 us 319.00 us 96 OPENDIR
0.01 86.69 us 30.00 us 379.00 us 62 STAT
0.01 64.30 us 47.00 us 169.00 us 84 OPEN
0.02 107.34 us 23.00 us 273.00 us 73 READDIRP
0.02 4688.00 us 86.00 us 9290.00 us 2 TRUNCATE
0.02 59.29 us 13.00 us 394.00 us 165 GETXATTR
0.03 128.51 us 27.00 us 338.00 us 96 FSYNC
0.03 240.75 us 14.00 us 1943.00 us 52 READDIR
0.04 65.59 us 26.00 us 293.00 us 279 STATFS
0.06 180.77 us 118.00 us 306.00 us 148 MKNOD
0.14 37.98 us 17.00 us 192.00 us 1598 INODELK
0.67 91.66 us 12.00 us 1141.00 us 3186 LOOKUP
10.10 55.92 us 28.00 us 1658.00 us 78608 FXATTROP
11.89 6814.76 us 18.00 us 301246.00 us 760 ENTRYLK
19.44 36.55 us 14.00 us 2353.00 us 231535 FINODELK
25.21 142.92 us 62.00 us 593.00 us 76794 WRITE
32.28 91283.68 us 28.00 us 316658.00 us 154 UNLINK
Duration: 1206 seconds
Data Read: 0 bytes
Data Written: 10060843008 bytes
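(The counters above come from gluster's built-in profiling; the workflow is roughly:)
gluster volume profile data start
# ... run the dd workload ...
gluster volume profile data info
gluster volume profile data stop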
This indicates to me that the problem is not in the Disk/LVM/filesystem layout.
Most probably I haven't created the volume properly, or some option/feature is
disabled?!
Network shows OK for a gigabit:
[root@ovirt1 data]# dd if=/dev/zero status=progress | nc ovirt2 9999
3569227264 bytes (3.6 GB) copied, 29.001052 s, 123 MB/s^C
7180980+0 records in
7180979+0 records out
3676661248 bytes (3.7 GB) copied, 29.8739 s, 123 MB/s
I'm looking for any help... feel free to share your volume info as well.
Thanks in advance.
Best Regards,
Strahil Nikolov
_______________________________________________
Gluster-users mailing list
Gluster-users(a)gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
--
Amar Tumballi (amarts)
_______________________________________________
Gluster-users mailing list
Gluster-users(a)gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users