Adding the virtio-scsi developers.
Anyhow, virtio-scsi is newer and less established than viostor (the block device), so you
might want to try it out.
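(For anyone wondering what the difference looks like at the hypervisor level, here is a rough, simplified sketch of the two disk models on a qemu command line - real oVirt-generated command lines carry many more options, and the image path below is just a placeholder:)

# viostor / virtio-blk: the disk itself is a virtio PCI device
qemu-kvm ... -drive file=/path/to/disk.img,if=none,id=d0 -device virtio-blk-pci,drive=d0

# virtio-scsi: one controller, disks attach to it as SCSI LUNs
# (the Windows guest needs the vioscsi driver for this model)
qemu-kvm ... -drive file=/path/to/disk.img,if=none,id=d0 \
             -device virtio-scsi-pci,id=scsi0 -device scsi-hd,drive=d0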
A disclaimer: there are time and patch gaps between RHEL and other versions.
Ronen.
On 01/28/2014 10:39 PM, Steve Dainard wrote:
I've had a bit of luck here.
Overall IO performance is very poor during Windows updates, but a contributing factor
seems to be the "SCSI Controller" device in the guest. For this last install I
didn't install a driver for that device, and my performance is much better. Updates
still chug along quite slowly, but I seem to be getting better than the < 100KB/s write
speeds I was seeing previously.
Does anyone know what this device is for? I have the "Red Hat VirtIO SCSI
Controller" listed under storage controllers.
*Steve Dainard*
IT Infrastructure Manager
Miovision <http://miovision.com/> | Rethink Traffic
519-513-2407 ex.250
877-646-8476 (toll-free)
Blog <http://miovision.com/blog> | LinkedIn <https://www.linkedin.com/company/miovision-technologies> | Twitter <https://twitter.com/miovision> | Facebook <https://www.facebook.com/miovision>
------------------------------------------------------------------------
Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, ON, Canada | N2C 1L3
This e-mail may contain information that is privileged or confidential. If you are not
the intended recipient, please delete the e-mail and any attachments and notify us
immediately.
On Sun, Jan 26, 2014 at 2:33 AM, Itamar Heim <iheim@redhat.com> wrote:
On 01/26/2014 02:37 AM, Steve Dainard wrote:
Thanks for the responses everyone, really appreciate it.
I've condensed the other questions into this reply.
Steve,
What is the CPU load of the GlusterFS host when comparing the raw
brick test to the gluster mount point test? Give it 30 seconds and
see what top reports. You'll probably have to significantly increase
the count on the test so that it runs that long.
- Nick
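(As a rough illustration of capturing both at once - the mount path and count below are just placeholders:)

# run the write in the background, then sample top in batch mode for ~30s
dd if=/dev/zero of=/mnt/rep2/test1 bs=4k count=2000000 &
top -b -d 5 -n 6 | grep -E 'gluster|dd'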
Gluster mount point:
*4K* on GLUSTER host
[root@gluster1 rep2]# dd if=/dev/zero of=/mnt/rep2/test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 100.076 s, 20.5 MB/s
Top reported this right away:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1826 root 20 0 294m 33m 2540 S 27.2 0.4 0:04.31 glusterfs
2126 root 20 0 1391m 31m 2336 S 22.6 0.4 11:25.48 glusterfsd
Then at about 20+ seconds top reports this:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1826 root 20 0 294m 35m 2660 R 141.7 0.5 1:14.94 glusterfs
2126 root 20 0 1392m 31m 2344 S 33.7 0.4 11:46.56 glusterfsd
*4K* Directly on the brick:
dd if=/dev/zero of=test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 4.99367 s, 410 MB/s
7750 root 20 0 102m 648 544 R 50.3 0.0 0:01.52 dd
7719 root 20 0 0 0 0 D 1.0 0.0 0:01.50 flush-253:2
Same test, gluster mount point on OVIRT host:
dd if=/dev/zero of=/mnt/rep2/test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 42.4518 s, 48.2 MB/s
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2126 root 20 0 1396m 31m 2360 S 40.5 0.4 13:28.89 glusterfsd
Same test, on OVIRT host but against NFS mount point:
dd if=/dev/zero of=/mnt/rep2-nfs/test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 18.8911 s, 108 MB/s
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2141 root 20 0 550m 184m 2840 R 84.6 2.3 16:43.10 glusterfs
2126 root 20 0 1407m 30m 2368 S 49.8 0.4 13:49.07 glusterfsd
Interesting - it looks like if I use an NFS mount point, I incur a CPU
hit on two processes instead of just the daemon. I also get much better
performance if I'm not running dd (FUSE) on the GLUSTER host.
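(For reference, the two access paths being compared above are roughly set up like this - the server and volume names are placeholders, and gluster's built-in NFS server speaks NFSv3:)

mount -t glusterfs gluster1:/rep2 /mnt/rep2               # FUSE client
mount -t nfs -o vers=3,tcp gluster1:/rep2 /mnt/rep2-nfs   # gluster NFS server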
The storage servers are a bit older, but are both dual-socket quad-core
Opterons with 4x 7200rpm drives.
A block size of 4k is quite small so that the context switch
overhead involved with fuse would be more perceivable.
Would it be possible to increase the block size for dd and test?
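(Something along these lines, as an illustration - same test file path as above, with conv=fdatasync added so the page cache doesn't flatter the result the way the 410 MB/s on-brick figure suggests it might:)

dd if=/dev/zero of=/mnt/rep2/test1 bs=1M count=2000 conv=fdatasync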
I'm in the process of setting up a share from my desktop and I'll see if
I can bench between the two systems. Not sure if my SSD will impact the
tests; I've heard there isn't an advantage using SSD storage for glusterfs.
Do you have any pointers to this source of information? Typically
glusterfs performance for virtualization workloads is bound by the
slowest element in the entire stack. Usually storage/disks happen to
be the bottleneck, and SSD storage does benefit glusterfs.
-Vijay
I had a couple of technical calls with RH (re: RHSS), and when I asked if
SSDs could add any benefit I was told no. The context may have been a
product comparison to other storage vendors, where they use SSDs for
read/write caching, versus having an all-SSD storage domain (which I'm
not proposing, but which is effectively what my desktop would provide).
Increasing bs against NFS mount point (gluster backend):
dd if=/dev/zero of=/mnt/rep2-nfs/test1 bs=128k count=16000
16000+0 records in
16000+0 records out
2097152000 bytes (2.1 GB) copied, 19.1089 s, 110 MB/s
GLUSTER host top reports:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2141 root 20 0 550m 183m 2844 R 88.9 2.3 17:30.82 glusterfs
2126 root 20 0 1414m 31m 2408 S 46.1 0.4 14:18.18 glusterfsd
So roughly the same performance as 4k writes remotely. I'm guessing if I
could randomize these writes we'd see a large difference.
Check this thread out:
http://raobharata.wordpress.com/2012/10/29/qemu-glusterfs-native-integrat...
It's quite dated but I remember seeing similar figures.
In fact, when I used FIO on a libgfapi-mounted VM I got slightly
faster read/write speeds than on the physical box itself (I assume
because of some level of caching). On NFS it was close to half.
You'll probably get more interesting results using FIO as opposed to dd.
( -Andrew)
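(A minimal random-write job against the same NFS mount might look like this - the parameters are illustrative, not tuned:)

fio --name=randwrite --filename=/mnt/rep2-nfs/fio.test --rw=randwrite \
    --bs=4k --size=1G --ioengine=libaio --iodepth=16 --direct=1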
Sorry Andrew, I meant to reply to your other message - it looks like
CentOS 6.5 can't use libgfapi right now; I stumbled across this info in
a couple of threads. Something about how the CentOS build has different
flags set at build time for RHEV snapshot support than RHEL, so native
gluster storage domains are disabled because snapshot support is assumed
and would break otherwise. I'm assuming this is still valid, as I cannot
get a storage lock when I attempt a gluster storage domain.
------------------------------------------------------------------------
I've set up an NFS storage domain on my desktop's SSD. I've re-installed
Windows 2008 R2 and initially it was running smoother.
Disk performance peaks at 100MB/s.
If I copy a 250MB file from a share into the Windows VM, it writes out
quickly, in less than 5 seconds.
If I copy 20 files, ranging in size from 4k to 200MB and totaling 650MB,
from the share, Windows becomes unresponsive; in top the desktop's NFS
daemon is barely being touched at all, and then eventually isn't hit. I
can still interact with the VM's windows through the spice console.
Eventually the file transfer will start and rocket through the transfer.
I've opened a 271MB zip file with 4454 files and started the extract
process, but the progress window sits on 'calculating...'; after a
significant period of time the decompression starts and runs at
<200KB/second. Windows is guesstimating a 1-hour completion time.
Eventually even this freezes up, and my spice console mouse won't grab.
I can still see the resource monitor in the Windows VM doing its thing,
but I have to poweroff the VM as it's no longer usable.
The Windows update process is the same. It seems like when the guest
needs quick large writes it's fine, but lots of IO causes serious
hanging, unresponsiveness, spice mouse cursor freezes, and eventually
poweroff/reboot is the only way to get it back.
Also, during the Windows 2008 R2 install the 'expanding windows files' task
is quite slow, roughly 1% progress every 20 seconds (~30 mins to
complete). The GLUSTER host shows these stats pretty consistently:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
8139 root 20 0 1380m 28m 2476 R 83.1 0.4 8:35.78 glusterfsd
8295 root 20 0 550m 186m 2980 S 4.3 2.4 1:52.56 glusterfs
bwm-ng v0.6 (probing every 2.000s), press 'h' for help
input: /proc/net/dev type: rate
  iface        Rx              Tx              Total
==============================================================================
  lo:          3719.31 KB/s    3719.31 KB/s    7438.62 KB/s
  eth0:        3405.12 KB/s    3903.28 KB/s    7308.40 KB/s
I've copied the same zip file to an NFS mount point on the OVIRT host
(gluster backend) and get about 25 - 600 KB/s during unzip. The same
test on an NFS mount point (desktop SSD ext4 backend) averaged a network
transfer speed of 5MB/s and completed in about 40 seconds.
I have a RHEL 6.5 guest running on the NFS/gluster backend storage
domain, and just did the same test. Extracting the file took 22.3
seconds (faster than the fuse mount point on the host !?!?).
GLUSTER host top reported this while the RHEL guest was decompressing
the zip file:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2141 root 20 0 555m 187m 2844 S 4.0 2.4 18:17.00 glusterfs
2122 root 20 0 1380m 31m 2396 S 2.3 0.4 83:19.40 glusterfsd
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Please note that currently (>3.3.1) we don't use libgfapi on Fedora either, as we
found some gaps in functionality in the libvirt libgfapi support for snapshots. Once these
are resolved, we can re-enable libgfapi on a glusterfs storage domain.
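(If you want to confirm which path a running guest is actually using, one rough check on the host - the names and paths below are placeholders - is to look at the drive spec of the qemu process:)

ps -ef | grep qemu-kvm | grep -o 'file=[^, ]*'
# file=gluster://server/volume/image  -> native libgfapi access
# file=/rhev/data-center/mnt/...      -> image on a fuse/NFS-mounted storage domain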