I don't know if it helps, but I ran a few more tests, all from the same
hardware node.
The VM:
[root@billjov1 ~]# time dd if=/dev/zero of=/root/testfile bs=1M
count=1000 oflag=direct
1048576000 bytes (1.0 GB) copied, 62.5535 s, 16.8 MB/s
Writing directly to a gluster brick:
[root@ovirt2 test ~]# time dd if=/dev/zero
of=/gluster-store/brick1/gv1/testfile bs=1M count=1000 oflag=direct
1048576000 bytes (1.0 GB) copied, 9.92048 s, 106 MB/s
Writing to NFS volume:
[root@ovirt2 test ~]# time dd if=/dev/zero of=/mnt/storage/qa/testfile
bs=1M count=1000 oflag=direct
1048576000 bytes (1.0 GB) copied, 10.5776 s, 99.1 MB/s
NFS & Gluster are using the same interface. The tests were not run at the same time.
This would suggest my problem isn't glusterfs, but the VM performance.
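For comparison, the gap can be recomputed from the three runs above (just a sketch; dd's MB/s figure is bytes divided by 10^6, per second):

```shell
# Recompute throughput from the dd summaries above (bytes / seconds / 1e6,
# which is how dd derives its MB/s number).
awk 'BEGIN {
    bytes = 1048576000
    printf "VM:     %.1f MB/s\n", bytes / 62.5535 / 1e6
    printf "brick:  %.1f MB/s\n", bytes / 9.92048 / 1e6
    printf "NFS:    %.1f MB/s\n", bytes / 10.5776 / 1e6
    printf "the VM path is %.1fx slower than the brick\n", 62.5535 / 9.92048
}'
```

So the brick and NFS are within a few percent of each other, while the VM path is roughly 6x slower.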
On 02/11/2016 03:13 PM, Bill James wrote:
xml attached.
On 02/11/2016 12:28 PM, Nir Soffer wrote:
> On Thu, Feb 11, 2016 at 8:27 PM, Bill James <bill.james(a)j2.com> wrote:
>> thank you for the reply.
>>
>> We setup gluster using the names associated with NIC 2 IP.
>> Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
>> Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
>> Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1
>>
>> That's NIC 2's IP.
>> Using 'iftop -i eno2 -L 5 -t':
>>
>> dd if=/dev/zero of=/root/testfile bs=1M count=1000 oflag=direct
>> 1048576000 bytes (1.0 GB) copied, 68.0714 s, 15.4 MB/s
> Can you share the xml of this vm? You can find it in vdsm log,
> at the time you start the vm.
>
> Or you can do (on the host):
>
> # virsh
> virsh # list
> (username: vdsm@ovirt password: shibboleth)
> virsh # dumpxml vm-id
>
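The same dump can also be scripted non-interactively; a sketch, assuming libvirt's read-only mode (which prompts for no credentials) is enough for dumpxml, and using the VM name from this thread:

```shell
# Non-interactive sketch: -r opens a read-only libvirt connection, so the
# vdsm credentials above are not needed. Guarded so it degrades gracefully
# on machines without virsh.
if command -v virsh >/dev/null 2>&1; then
    virsh -r list --all                        # find the domain name or id
    virsh -r dumpxml billjov1 > /tmp/vm.xml    # "billjov1" is the VM from this thread
else
    echo "virsh not installed on this machine"
fi
```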
>> Peak rate (sent/received/total):    281Mb   5.36Mb  282Mb
>> Cumulative (sent/received/total):   1.96GB  14.6MB  1.97GB
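One way to read those counters (my interpretation, not from the thread): with a replica-3 volume mounted via FUSE, the client sends each write to every brick, so a 1000 x 1 MiB test run on a host that holds one of the bricks should push roughly two copies out of this NIC:

```shell
# Sanity check on the iftop counters above, assuming replica 3 with one
# brick local to the writing host: two remote copies of the 1000 MiB write.
awk 'BEGIN {
    written_mib = 1000
    sent_gib = 2 * written_mib / 1024
    printf "expected ~%.2f GiB sent; iftop reported 1.96GB cumulative\n", sent_gib
}'
```

The close match suggests the gluster replication traffic really is going over this interface.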
>>
>> gluster volume info gv1:
>> Options Reconfigured:
>> performance.write-behind-window-size: 4MB
>> performance.readdir-ahead: on
>> performance.cache-size: 1GB
>> performance.write-behind: off
>>
>> performance.write-behind: off didn't help.
>> Neither did any other changes I've tried.
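For anyone reproducing this, the options above are set and cleared per volume; a sketch using the volume name from this thread (guarded, since the gluster CLI only exists on the storage nodes):

```shell
# Toggle/inspect the options discussed above; gv1 is the volume from this
# thread. The guard makes this a no-op where the gluster CLI is absent.
if command -v gluster >/dev/null 2>&1; then
    gluster volume get gv1 performance.write-behind                # show current value
    gluster volume set gv1 performance.write-behind on
    gluster volume reset gv1 performance.write-behind-window-size  # back to the default
else
    echo "gluster CLI not available on this machine"
fi
```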
>>
>>
>> There is no VM traffic on this VM right now except my test.
>>
>>
>>
>> On 02/10/2016 11:55 PM, Nir Soffer wrote:
>>> On Thu, Feb 11, 2016 at 2:42 AM, Ravishankar N
>>> <ravishankar(a)redhat.com>
>>> wrote:
>>>> +gluster-users
>>>>
>>>> Does disabling 'performance.write-behind' give a better throughput?
>>>>
>>>>
>>>>
>>>> On 02/10/2016 11:06 PM, Bill James wrote:
>>>>> I'm setting up an ovirt cluster using glusterfs and noticing
>>>>> not-so-stellar performance.
>>>>> Maybe my setup could use some adjustments?
>>>>>
>>>>> 3 hardware nodes running centos7.2, glusterfs 3.7.6.1, ovirt
>>>>> 3.6.2.6-1.
>>>>> Each node has 8 spindles configured in 1 array which is split
>>>>> using LVM
>>>>> with one logical volume for system and one for gluster.
>>>>> They each have 4 NICs,
>>>>> NIC1 = ovirtmgmt
>>>>> NIC2 = gluster (1GbE)
>>> How do you ensure that gluster traffic is using this NIC?
>>>
>>>>> NIC3 = VM traffic
>>> How do you ensure that vm traffic is using this NIC?
>>>
>>>>> I tried with default glusterfs settings
>>> And did you find any difference?
>>>
>>>>> and also with:
>>>>> performance.cache-size: 1GB
>>>>> performance.readdir-ahead: on
>>>>> performance.write-behind-window-size: 4MB
>>>>>
>>>>> [root@ovirt3 test scripts]# gluster volume info gv1
>>>>>
>>>>> Volume Name: gv1
>>>>> Type: Replicate
>>>>> Volume ID: 71afc35b-09d7-4384-ab22-57d032a0f1a2
>>>>> Status: Started
>>>>> Number of Bricks: 1 x 3 = 3
>>>>> Transport-type: tcp
>>>>> Bricks:
>>>>> Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
>>>>> Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
>>>>> Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1
>>>>> Options Reconfigured:
>>>>> performance.cache-size: 1GB
>>>>> performance.readdir-ahead: on
>>>>> performance.write-behind-window-size: 4MB
>>>>>
>>>>>
>>>>> Using simple dd test on VM in ovirt:
>>>>> dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
>>> block size of 1G?!
>>>
>>> Try 1M (our default for storage operations)
>>>
>>>>> 1073741824 bytes (1.1 GB) copied, 65.9337 s, 16.3 MB/s
>>>>>
>>>>> Another VM not in ovirt using nfs:
>>>>> dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
>>>>> 1073741824 bytes (1.1 GB) copied, 27.0079 s, 39.8 MB/s
>>>>>
>>>>>
>>>>> Is that expected or is there a better way to set it up to get better
>>>>> performance?
>>> Adding Niels for advice.
>>>
>>>>> This email, its contents and ....
>>> Please avoid this, this is a public mailing list, everything you write
>>> here is public.
>>>
>>> Nir
>> I'll have to look into how to remove this sig for this mailing list....
>>
>> Cloud Services for Business www.j2.com
>> j2 | eFax | eVoice | FuseMail | Campaigner | KeepItSafe | Onebox
>>
>>
>> This email, its contents and attachments contain information from j2
>> Global,
>> Inc. and/or its affiliates which may be privileged, confidential or
>> otherwise protected from disclosure. The information is intended to
>> be for
>> the addressee(s) only. If you are not an addressee, any disclosure,
>> copy,
>> distribution, or use of the contents of this message is prohibited.
>> If you
>> have received this email in error please notify the sender by reply
>> e-mail
>> and delete the original message and any copies. (c) 2015 j2 Global,
>> Inc. All
>> rights reserved. eFax, eVoice, Campaigner, FuseMail, KeepItSafe, and
>> Onebox
>> are registered trademarks of j2 Global, Inc. and its affiliates.
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users