On Sat, Jan 25, 2014 at 7:01 AM, Steve Dainard <sdainard(a)miovision.com> wrote:
Not sure what a good method to bench this would be, but:
An NFS mount point on the virt host:
[root@ovirt001 iso-store]# dd if=/dev/zero of=test1 bs=4k count=100000
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 3.95399 s, 104 MB/s
Raw brick performance on the gluster server (yes, I know I shouldn't write
directly to the brick):
[root@gluster1 iso-store]# dd if=/dev/zero of=test bs=4k count=100000
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 3.06743 s, 134 MB/s
Gluster mount point on the gluster server:
[root@gluster1 iso-store]# dd if=/dev/zero of=test bs=4k count=100000
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 19.5766 s, 20.9 MB/s
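(Note: without a sync or direct flag, dd figures like these largely measure
the page cache rather than the disks. With GNU coreutils dd, the tests could
be rerun as, for example:

dd if=/dev/zero of=test bs=4k count=100000 conv=fdatasync
dd if=/dev/zero of=test bs=4k count=100000 oflag=direct

conv=fdatasync flushes data to disk before dd reports its rate, and
oflag=direct bypasses the page cache entirely.)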
The storage servers are a bit older, but both are dual-socket quad-core
Opterons with 4x 7200 RPM drives.
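It might also be worth running gluster's built-in profiler to see where the
time goes on the slow gluster mount. These are standard gluster CLI
subcommands; the volume name is assumed here to match the brick directory:

gluster volume profile iso-store start
gluster volume profile iso-store info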
I'm in the process of setting up a share from my desktop and I'll see if I
can bench between the two systems. I'm not sure if my SSD will impact the
tests; I've heard there isn't an advantage to using SSD storage for GlusterFS.
Does anyone have a hardware reference design for GlusterFS as a backend
for virt? Or is there a benchmark utility?
Check this thread out:
http://raobharata.wordpress.com/2012/10/29/qemu-glusterfs-native-integrat...
It's quite dated, but I remember seeing similar figures.
In fact, when I used fio on a libgfapi-mounted VM I got slightly faster
read/write speeds than on the physical box itself (I assume because of some
level of caching). On NFS it was close to half. You'll probably get more
interesting results using fio than dd.
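For example, something along these lines would exercise random 4k I/O
against the mount point (a sketch; the file path, size, and job parameters
are illustrative and should be adjusted to your setup):

fio --name=randrw-test --filename=/mnt/iso-store/fio.test \
    --rw=randrw --bs=4k --size=1g --ioengine=libaio \
    --direct=1 --numjobs=4 --group_reporting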
Steve Dainard
IT Infrastructure Manager
Miovision <http://miovision.com/> | Rethink Traffic
519-513-2407 ex. 250
877-646-8476 (toll-free)
Blog <http://miovision.com/blog> | LinkedIn <https://www.linkedin.com/company/miovision-technologies> | Twitter <https://twitter.com/miovision> | Facebook <https://www.facebook.com/miovision>
On Thu, Jan 23, 2014 at 7:18 PM, Andrew Cathrow <acathrow(a)redhat.com> wrote:
> Are we sure that the issue is the guest I/O - what's the raw performance
> on the host accessing the gluster storage?
>
> ------------------------------
>
> From: "Steve Dainard" <sdainard(a)miovision.com>
> To: "Itamar Heim" <iheim(a)redhat.com>
> Cc: "Ronen Hod" <rhod(a)redhat.com>, "users" <users(a)ovirt.org>, "Sanjay Rao" <srao(a)redhat.com>
> Sent: Thursday, January 23, 2014 4:56:58 PM
> Subject: Re: [Users] Extremely poor disk access speeds in Windows guest
>
>
> I have two options, virtio and virtio-scsi.
>
> I was using virtio, and have also attempted virtio-scsi on another
> Windows guest with the same results.
>
> Using the newest drivers, virtio-win-0.1-74.iso.
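>
> For reference, the two device models look roughly like this at the qemu
> level (a sketch with illustrative paths and IDs, not the exact command
> line oVirt generates):
>
> # virtio (virtio-blk): each disk is its own PCI device
> -drive file=disk.img,if=none,id=d0,cache=none -device virtio-blk-pci,drive=d0
>
> # virtio-scsi: one PCI controller, disks attach to it as SCSI LUNs
> -device virtio-scsi-pci,id=scsi0
> -drive file=disk.img,if=none,id=d0,cache=none -device scsi-hd,drive=d0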
>
>
>
> On Thu, Jan 23, 2014 at 4:24 PM, Itamar Heim <iheim(a)redhat.com> wrote:
>
>> On 01/23/2014 07:46 PM, Steve Dainard wrote:
>>
>>> Backing Storage: Gluster Replica
>>> Storage Domain: NFS
>>> Ovirt Hosts: CentOS 6.5
>>> Ovirt version: 3.3.2
>>> Network: GigE
>>> # of VMs: 3 - two Linux guests are idle, one Windows guest is
>>> installing updates.
>>>
>>> I've installed a Windows 2008 R2 guest with a virtio disk, and all the
>>> drivers from the latest virtio ISO. I've also installed the spice agent
>>> drivers.
>>>
>>> Guest disk access is horribly slow. Resource Monitor during Windows
>>> updates shows Disk peaking at 1 MB/sec (the scale never increases) and
>>> Disk Queue Length peaking at 5, where it sits 99% of the time. A batch
>>> of 113 Windows updates has been running solidly for about 2.5 hours and
>>> is at 89/113 complete.
>>>
>>
>> virtio-block or virtio-scsi?
>> Which Windows guest driver version for that?
>>
>>
>>> I can't say my Linux guests are blisteringly fast, but updating a guest
>>> from a fresh RHEL 6.3 install to 6.5 took about 25 minutes.
>>>
>>> If anyone has any ideas, please let me know - I haven't found any tuning
>>> docs for Windows guests that could explain this issue.
>>>
>>> Thanks,
>>>
>>>
>>> Steve Dainard
>>>
>>>
>>>
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users