[Users] Extremely poor disk access speeds in Windows guest

Vadim Rozenfeld vrozenfe at redhat.com
Sat Feb 1 01:20:42 UTC 2014


On Fri, 2014-01-31 at 11:37 -0500, Steve Dainard wrote:
> I've reconfigured my setup (good success below, but I need clarity on
> a gluster option):
> 
> 
> Two nodes total, both running virt and glusterfs storage (2-node
> replica with quorum).
> 
> 
> I've created an NFS storage domain pointed at the first node's IP
> address. I've launched a 2008 R2 SP1 install with a virtio-scsi disk
> (using the SCSI pass-through driver), on the same node the NFS domain
> points at.
> 
> 
> The Windows guest install has been running for roughly 1.5 hours,
> still at "Expanding Windows files (55%) ...".

[VR]
Does it work faster with IDE?
Do you have KVM enabled?
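
A quick way to verify (a minimal sketch, assuming a qemu-kvm host;
exact paths differ slightly between distributions):

  lsmod | grep kvm      # kvm_intel or kvm_amd should be loaded
  ls -l /dev/kvm        # the KVM device node must exist
  ps -ef | grep qemu    # the guest should run as a qemu-kvm process
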
Thanks,
Vadim.
 
> 
> 
> top is showing:
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>  3609 root      20   0 1380m  33m 2604 S 35.4  0.1 231:39.75 glusterfsd
> 21444 qemu      20   0 6362m 4.1g 6592 S 10.3  8.7  10:11.53 qemu-kvm
> 
> 
> This is a 2-socket, 6-core Xeon machine with 48 GB of RAM and 6x
> 7200 RPM enterprise SATA disks in RAID 5, so I don't think we're
> hitting hardware limitations.
> 
> 
> dd on xfs (no gluster)
> 
> 
> time dd if=/dev/zero of=test bs=1M count=2048
> 2048+0 records in
> 2048+0 records out
> 2147483648 bytes (2.1 GB) copied, 4.15787 s, 516 MB/s
> 
> 
> real 0m4.351s
> user 0m0.000s
> sys 0m1.661s
> 
> 
> 
> 
> time dd if=/dev/zero of=test bs=1k count=2000000
> 2000000+0 records in
> 2000000+0 records out
> 2048000000 bytes (2.0 GB) copied, 4.06949 s, 503 MB/s
> 
> 
> real 0m4.260s
> user 0m0.176s
> sys 0m3.991s
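
[VR]
A side note on the dd numbers: without a sync or direct-I/O flag, dd
from /dev/zero largely measures the page cache rather than the disks,
so 500+ MB/s may overstate sustained throughput. A rough sketch of a
more disk-bound run (assuming GNU dd; the output path is only an
example):

  time dd if=/dev/zero of=test bs=1M count=2048 oflag=direct
  # or add conv=fdatasync to include the final flush in the timing
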
> 
> 
> 
> 
> I've enabled nfs.trusted-sync
> (http://gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options#nfs.trusted-sync)
> on the gluster volume, and the speed difference is immeasurable. Can
> anyone explain what this option does, and what the risks are with a
> 2-node gluster replica volume with quorum enabled?
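
[VR]
For what it's worth, the page linked above describes nfs.trusted-sync
as treating all writes and COMMIT requests as async, i.e. the gluster
NFS server acknowledges writes before they reach stable storage. The
risk is that a crash of the serving node can lose writes the guest
believes are committed; replication and quorum do not protect against
that. A minimal example of toggling the option (the volume name is a
placeholder):

  gluster volume set <VOLNAME> nfs.trusted-sync on
  gluster volume reset <VOLNAME> nfs.trusted-sync
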
> 
> 
> Thanks,




