[ovirt-users] ovirt glusterfs performance
Roderick Mooi
rmooi at csir.co.za
Tue Apr 12 09:11:54 UTC 2016
Hi
> It is not removed. Can you try 'gluster volume set volname cluster.eager-lock enable'?
This works. BTW by default this setting is “on”. What’s the difference between “on” and “enable”?
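For reference, what I ran (with <volname> as a placeholder for the real volume name) was roughly:

#gluster volume set <volname> cluster.eager-lock enable

and, if I recall correctly, 'gluster volume get <volname> cluster.eager-lock' will report the value currently in effect.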
Thanks for the clarification.
Regards,
Roderick
> On 06 Apr 2016, at 10:56 AM, Ravishankar N <ravishankar at redhat.com> wrote:
>
> On 04/06/2016 02:08 PM, Roderick Mooi wrote:
>> Hi Ravi and colleagues
>>
>> (apologies for hijacking this thread, but I’m not sure where else to report this, and it is related.)
>>
>> With gluster 3.7.10, running
>> #gluster volume set <volname> group virt
>> fails with:
>> volume set: failed: option : eager-lock does not exist
>> Did you mean eager-lock?
>>
>> I had to remove the eager-lock setting from /var/lib/glusterd/groups/virt to get this to work. It seems like setting eager-lock has been removed from the latest gluster. Is this correct? Either way, is there anything else I should do?
>
> It is not removed. Can you try 'gluster volume set volname cluster.eager-lock enable'?
> I think the disperse (EC) translator introduced a 'disperse.eager-lock' option, which is why you would need to use the entire volume option name to avoid the ambiguity.
> We probably need to fix the virt profile setting to include the entire name. By the way, 'gluster volume set help' should give you the list of all options.
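> Until that's fixed, an (untested) workaround could be to qualify the name in the group file itself instead of deleting the line, something like:
>
> # sed -i 's/^eager-lock=/cluster.eager-lock=/' /var/lib/glusterd/groups/virt
>
> and 'gluster volume set help | grep -i eager' should list the eager-lock variants the CLI knows about.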
>
> -Ravi
>
>>
>> Cheers,
>>
>> Roderick
>>
>>> On 12 Feb 2016, at 6:18 AM, Ravishankar N <ravishankar at redhat.com> wrote:
>>>
>>> Hi Bill,
>>> Can you enable the virt-profile setting for your volume and see if that helps? You need to enable this optimization when you create the volume using oVirt, or use the following command for an existing volume:
>>>
>>> #gluster volume set <volname> group virt
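>>> If the profile applies cleanly, the options it sets should then show up under "Options Reconfigured" in, for example:
>>>
>>> #gluster volume info <volname>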
>>>
>>> -Ravi
>>>
>>>
>>> On 02/12/2016 05:22 AM, Bill James wrote:
>>>> My apologies, I'm showing how much of a noob I am.
>>>> Ignore the last direct-to-gluster numbers, as that wasn't really glusterfs.
>>>>
>>>>
>>>> [root at ovirt2 test ~]# mount -t glusterfs ovirt2-ks.test.j2noc.com:/gv1 /mnt/tmp/
>>>> [root at ovirt2 test ~]# time dd if=/dev/zero of=/mnt/tmp/testfile2 bs=1M count=1000 oflag=direct
>>>> 1048576000 bytes (1.0 GB) copied, 65.8596 s, 15.9 MB/s
>>>>
>>>> That's more what I expected; it points to glusterfs performance.
>>>>
>>>>
>>>>
>>>> On 02/11/2016 03:27 PM, Bill James wrote:
>>>>> Don't know if it helps, but I ran a few more tests, all from the same hardware node.
>>>>>
>>>>> The VM:
>>>>> [root at billjov1 ~]# time dd if=/dev/zero of=/root/testfile bs=1M count=1000 oflag=direct
>>>>> 1048576000 bytes (1.0 GB) copied, 62.5535 s, 16.8 MB/s
>>>>>
>>>>> Writing directly to gluster volume:
>>>>> [root at ovirt2 test ~]# time dd if=/dev/zero of=/gluster-store/brick1/gv1/testfile bs=1M count=1000 oflag=direct
>>>>> 1048576000 bytes (1.0 GB) copied, 9.92048 s, 106 MB/s
>>>>>
>>>>>
>>>>> Writing to NFS volume:
>>>>> [root at ovirt2 test ~]# time dd if=/dev/zero of=/mnt/storage/qa/testfile bs=1M count=1000 oflag=direct
>>>>> 1048576000 bytes (1.0 GB) copied, 10.5776 s, 99.1 MB/s
>>>>>
>>>>> NFS & Gluster are using the same interface. The tests were not run at the same time.
>>>>>
>>>>> This would suggest my problem isn't glusterfs, but the VM performance.
>>>>>
>>>>>
>>>>>
>>>>> On 02/11/2016 03:13 PM, Bill James wrote:
>>>>>> xml attached.
>>>>>>
>>>>>>
>>>>>> On 02/11/2016 12:28 PM, Nir Soffer wrote:
>>>>>>> On Thu, Feb 11, 2016 at 8:27 PM, Bill James <bill.james at j2.com> wrote:
>>>>>>>> thank you for the reply.
>>>>>>>>
>>>>>>>> We set up gluster using the names associated with NIC 2's IP.
>>>>>>>> Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
>>>>>>>> Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
>>>>>>>> Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1
>>>>>>>>
>>>>>>>> That's NIC 2's IP.
>>>>>>>> Using 'iftop -i eno2 -L 5 -t':
>>>>>>>>
>>>>>>>> dd if=/dev/zero of=/root/testfile bs=1M count=1000 oflag=direct
>>>>>>>> 1048576000 bytes (1.0 GB) copied, 68.0714 s, 15.4 MB/s
>>>>>>> Can you share the xml of this vm? You can find it in vdsm log,
>>>>>>> at the time you start the vm.
>>>>>>>
>>>>>>> Or you can do (on the host):
>>>>>>>
>>>>>>> # virsh
>>>>>>> virsh # list
>>>>>>> (username: vdsm at ovirt password: shibboleth)
>>>>>>> virsh # dumpxml vm-id
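>>>>>>> (If you only need the disk configuration, something like the following from the host should also work, assuming a read-only libvirt connection is allowed:
>>>>>>>
>>>>>>> # virsh -r dumpxml vm-id | grep -i -A3 '<driver'
>>>>>>>
>>>>>>> which should show the cache= and io= attributes on the disk, the part most relevant for storage performance.)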
>>>>>>>
>>>>>>>> Peak rate (sent/received/total):   281Mb   5.36Mb   282Mb
>>>>>>>> Cumulative (sent/received/total):  1.96GB  14.6MB   1.97GB
>>>>>>>>
>>>>>>>> gluster volume info gv1:
>>>>>>>> Options Reconfigured:
>>>>>>>> performance.write-behind-window-size: 4MB
>>>>>>>> performance.readdir-ahead: on
>>>>>>>> performance.cache-size: 1GB
>>>>>>>> performance.write-behind: off
>>>>>>>>
>>>>>>>> performance.write-behind: off didn't help.
>>>>>>>> Neither did any of the other changes I've tried.
>>>>>>>>
>>>>>>>>
>>>>>>>> There is no VM traffic on this VM right now except my test.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On 02/10/2016 11:55 PM, Nir Soffer wrote:
>>>>>>>>> On Thu, Feb 11, 2016 at 2:42 AM, Ravishankar N <ravishankar at redhat.com> wrote:
>>>>>>>>>> +gluster-users
>>>>>>>>>>
>>>>>>>>>> Does disabling 'performance.write-behind' give a better throughput?
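>>>>>>>>>> (i.e. something along the lines of 'gluster volume set <volname> performance.write-behind off', if I have the option name right; <volname> is a placeholder for your volume.)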
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 02/10/2016 11:06 PM, Bill James wrote:
>>>>>>>>>>> I'm setting up an oVirt cluster using glusterfs and noticing less-than-stellar performance.
>>>>>>>>>>> Maybe my setup could use some adjustments?
>>>>>>>>>>>
>>>>>>>>>>> 3 hardware nodes running centos7.2, glusterfs 3.7.6.1, ovirt 3.6.2.6-1.
>>>>>>>>>>> Each node has 8 spindles configured in 1 array which is split using LVM
>>>>>>>>>>> with one logical volume for system and one for gluster.
>>>>>>>>>>> They each have 4 NICs,
>>>>>>>>>>> NIC1 = ovirtmgmt
>>>>>>>>>>> NIC2 = gluster (1GbE)
>>>>>>>>> How do you ensure that gluster traffic is using this nic?
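>>>>>>>>> (One rough way to check could be to resolve a brick hostname and ask the kernel which interface it would use to reach it, e.g.:
>>>>>>>>>
>>>>>>>>> # ip route get $(getent hosts ovirt1-ks.test.j2noc.com | awk '{print $1}')
>>>>>>>>>
>>>>>>>>> the hostname here is just the first brick from your volume info; the output should name the device that carries the gluster traffic.)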
>>>>>>>>>
>>>>>>>>>>> NIC3 = VM traffic
>>>>>>>>> How do you ensure that VM traffic is using this nic?
>>>>>>>>>
>>>>>>>>>>> I tried with default glusterfs settings
>>>>>>>>> And did you find any difference?
>>>>>>>>>
>>>>>>>>>>> and also with:
>>>>>>>>>>> performance.cache-size: 1GB
>>>>>>>>>>> performance.readdir-ahead: on
>>>>>>>>>>> performance.write-behind-window-size: 4MB
>>>>>>>>>>>
>>>>>>>>>>> [root at ovirt3 test scripts]# gluster volume info gv1
>>>>>>>>>>>
>>>>>>>>>>> Volume Name: gv1
>>>>>>>>>>> Type: Replicate
>>>>>>>>>>> Volume ID: 71afc35b-09d7-4384-ab22-57d032a0f1a2
>>>>>>>>>>> Status: Started
>>>>>>>>>>> Number of Bricks: 1 x 3 = 3
>>>>>>>>>>> Transport-type: tcp
>>>>>>>>>>> Bricks:
>>>>>>>>>>> Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
>>>>>>>>>>> Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
>>>>>>>>>>> Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1
>>>>>>>>>>> Options Reconfigured:
>>>>>>>>>>> performance.cache-size: 1GB
>>>>>>>>>>> performance.readdir-ahead: on
>>>>>>>>>>> performance.write-behind-window-size: 4MB
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Using simple dd test on VM in ovirt:
>>>>>>>>>>> dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
>>>>>>>>> block size of 1G?!
>>>>>>>>>
>>>>>>>>> Try 1M (our default for storage operations)
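>>>>>>>>> Something along the lines of:
>>>>>>>>>
>>>>>>>>> # dd if=/dev/zero of=/root/testfile bs=1M count=1000 oflag=direct
>>>>>>>>>
>>>>>>>>> (same test, just with a 1M block size and a higher count to keep the total size comparable.)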
>>>>>>>>>
>>>>>>>>>>> 1073741824 bytes (1.1 GB) copied, 65.9337 s, 16.3 MB/s
>>>>>>>>>>>
>>>>>>>>>>> Another VM not in ovirt using nfs:
>>>>>>>>>>> dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
>>>>>>>>>>> 1073741824 bytes (1.1 GB) copied, 27.0079 s, 39.8 MB/s
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Is that expected or is there a better way to set it up to get better
>>>>>>>>>>> performance?
>>>>>>>>> Adding Niels for advice.
>>>>>>>>>
>>>>>>>>>>> This email, its contents and ....
>>>>>>>>> Please avoid this; this is a public mailing list, and everything you write here is public.
>>>>>>>>>
>>>>>>>>> Nir
>>>>>>>> I'll have to look into how to remove this sig for this mailing list....
>>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
>>
>
>