
Hello,

I am running oVirt 3.5 on a single server (hosted engine). I have two Western Digital WD20EZRX drives in a hardware RAID1 configuration. My storage is actually on the single server, but I am attaching to it via NFS.

I created a Windows 7 guest, and I am finding its write speeds to be horrible. It is a VirtIO-SCSI drive and the guest additions are installed.

The installation of the OS took way longer than on bare metal or even on VMware. When I ran Windows updates, it again took a *lot* longer than on bare metal or on VMware.

The read speeds seem to be fine. The guest is responsive when I click on programs, and they open about as fast as on bare metal or VMware.

I downloaded the "Parkdale" HDD tester and ran a test with the following settings:

- File size: 4000
- Block Size: 1 MByte

The results are as follows:

- Seq. Write Speed: 10.7 MByte/sec
- Seq. Read Speed: 237.3 MByte/sec

I ran another test, this time changing the "Block Size" to "64 kByte [Windows Default]". Results are as follows:

- Seq. Write Speed: 10.7 MByte/sec
- Seq. Read Speed: 237.3 MByte/sec

On the host, running 'dd bs=1M count=256 if=/dev/zero of=test conv=fdatasync' on my data mount via NFS resulted in the following:

256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 3.59431 s, 74.7 MB/s

I got this test from <https://romanrm.net/dd-benchmark>; it measures the write speed of a disk. As you can see, it is significantly higher than what I am getting in the Windows guest VM.

Running that same "dd" test on an Ubuntu guest VM gives me 24 MB/s.

Any ideas why I have such poor write performance? Is this normal with oVirt guests? Any ideas on what I might be able to do to improve them? I don't expect to get close to the "bare metal" results, but something in the 40-60 MB/s range would be nice.

Thanks, in advance, for your help and advice.

-Alan

Hi again,

After doing some searching around, I enabled "viodiskcache = writeback" in the "Custom Properties" of my Windows 7 VM, and running the same tests as earlier now results in about 53 MByte/sec write speed and 3.2 GByte/sec read speed.

Is this a recommended setting? Is there any downside to having it enabled?

-Alan
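P.S. In case anyone else wants to try this: as far as I understand it, the property has to be defined as a user-defined VM property on the engine before it shows up as usable in Custom Properties. Roughly like the following (syntax is from memory, so please double-check against the oVirt docs before running it):

  # run on the engine machine: define the property and the values it may take
  # note: this sets the whole key, so append to any existing value rather than replacing it
  engine-config -s "UserDefinedVMProperties=viodiskcache=^(none|writeback|writethrough)$"
  # restart the engine so it picks up the new property
  service ovirt-engine restart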

OK, so disk write performance is back to being poor on my Win7 guest. It is very plausible that the bottleneck is my HDD hardware and/or the built-in RAID controller on the server, though the difference I am getting in write speeds between the bare-metal test and the guest VM is still pretty steep.

This is a home/lab setup with not much on it, so just to determine whether it is my HDD/RAID hardware, I am going to blow this setup away, install VMware 5.5 on this exact same hardware, and install a Win7 guest VM. Once I have done that, I will run the exact same HDD read/write test with the Parkdale application and report back on the results. I should be able to get that done this week.

Based on my results, if it turns out to be an oVirt issue, we can figure out whether there is anything further I can do to get better performance.

Regards,

Alan

Hello,

So, a bit of an update, though I still have some additional testing to do.

I installed ESXi 5.5 on the same hardware (blew away my oVirt install) and installed a Windows 7 VM with the same settings (2 GB RAM, 1 single-core vCPU, 60 GB thin-provisioned HDD).

The install of Windows itself was definitely *way* faster. I don't have actual timings for a real comparison, but I can say with 100% certainty that the install was faster. I would say it took at *least* half the time it took under oVirt, and to be honest, it was maybe one third of the time.

Once installed, I installed the VMware Guest Tools, then downloaded and ran the "Parkdale" app with the same settings I used in the oVirt Windows 7 VM. The preliminary results are interesting.

The "Seq. Write" test comes in at around 65 MByte/s, which compares well to the bare-metal results I got previously. What is interesting (and disappointing) is that the "Seq. Read" test indicates about 65 MByte/s, which is a *huge* decrease from what I was getting in the oVirt Win7 guest.

As I mentioned, I am still going to do some additional testing, but I wanted to let you know that -- initially, anyway -- the problem under oVirt does not seem to be hardware-related, but possibly something with virtio-SCSI?

For those who are running Windows VMs in production, what sort of performance do you see? What type of virtual HDD are you running?

I will post back either later today or some time tomorrow (Tue) with more results.

-Alan

Hello again.

So I have been doing some more testing, just to be sure. I rebooted my Win7 ESXi guest VM between each run of the Parkdale HDD speed test, just to be absolutely sure the results were not an artifact of caching, and each time I get pretty consistent results: no less than 60 MByte/s Seq. Write and around the same for Seq. Read.

In oVirt, is there some tuning I am maybe missing? Is there a different HDD type I should be selecting (or converting to)?

-Alan

Add this on to your dd command:

conv=sync

And then report the result. It's no surprise that your writes are slow in RAID1.

On 27/07/2015 6:54 PM, Donny Davis wrote:
Add this on to your dd command
conv=sync
On the console of my host/server, the 'dd' command I used was:

dd bs=1M count=256 if=/dev/zero of=test conv=fdatasync

Would I be replacing the 'conv=fdatasync' with 'conv=sync', or appending it (i.e., 'conv=fdatasync,sync')?
And then report the result. It's no surprise that your writes are slow in RAID1.
I will give it a try and report the results when I re-install oVirt + hosted engine on my host. However, it actually wasn't/isn't the write speed on the bare-metal host that I was concerned with, but rather the poor write speed within the guest VMs, particularly my Windows 7 guest.

As you can see from my original results, within the Win7 guest on oVirt I was only getting about 10 MByte/s, while on the host I was able to get about 74 MByte/s. That just seems like one heck of a performance drop-off to me, which means either I don't have something configured/tuned properly, or virtio-SCSI is not that good (though everything I have read suggests that virtio-SCSI is what I should be using for best performance, as long as the guest drivers are installed).

-Alan
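P.S. For my own reference while I wait for a reply, this is how I read the dd man page (plain GNU dd, nothing oVirt-specific): 'conv=sync' only pads short input blocks with zeros and does not force synchronous writes, whereas 'oflag=dsync' is what forces each write to hit stable storage.

  # flush the file to disk once, at the end (what I have been running)
  dd if=/dev/zero of=test bs=1M count=256 conv=fdatasync
  # force every 1 MiB write to be committed before the next one starts
  dd if=/dev/zero of=test bs=1M count=256 oflag=dsync
  # appending rather than replacing would look like this
  dd if=/dev/zero of=test bs=1M count=256 conv=fdatasync,sync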

For my latest test, I installed CentOS 7 on my server and then installed the libvirt/KVM virtualization group. I created a Win7 guest VM, giving it 2 GB RAM, 1 vCPU, and a 60 GB HDD. I did not specify anything for the HDD type; just whatever the default is for libvirt.

The install of Win7 went *way* faster than any bare-metal install I have ever done of Win7, and also faster than the ESXi install I had done earlier.

Once installed, I downloaded the same "Parkdale" HDD testing application I have been using and ran it with the same settings. The write test results were about 100 MByte/s. I ran the test several times, including rebooting between tests, and the results came back consistent.

As a baseline, I ran the following modified 'dd' write test on the server itself:

dd bs=1M count=4000 if=/dev/zero of=test conv=sync

with the following results:

4000+0 records in
4000+0 records out
4194304000 bytes (4.2 GB) copied, 39.349 s, 107 MB/s

So now the question becomes: why would I be getting such a huge difference in oVirt?

Thanks.

-Alan
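P.S. To see what cache mode plain libvirt actually gave that default disk, I believe something like the following is enough (the domain name is just whatever the VM was called in virt-manager; if no cache= attribute shows up, the hypervisor default is in effect):

  # list defined guests, then dump the disk definition for the Win7 one
  virsh list --all
  virsh dumpxml Win7-test | grep -A4 "<disk"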

Are you using virtio-scsi in oVirt?

On 28/07/15 03:06 PM, Donny Davis wrote:
Are you using virtio-scsi in oVirt?
Initially I thought I was, but realised it was just virtio (no '-scsi'), though I did have the guest tools installed (as well as having had to install the virtio drivers to get the Win7 installer to see the HDD). When I realised this, I added a second virtual HDD, making sure it was of type virtio-scsi, and then formatted it.

I ran the same tests from "Parkdale" on the second HDD, but unfortunately got the same results.

I am willing to install oVirt again and try another Win7 VM, making sure the first disk is virtio-scsi, if you think that may make a difference?

-Alan
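P.S. For anyone wanting to double-check the same thing, my understanding is that on an oVirt host you can inspect the running definition read-only, without vdsm credentials (the VM name below is just an example):

  # virtio-blk disks show bus='virtio'; virtio-scsi shows bus='scsi' plus a virtio-scsi controller
  virsh -r list
  virsh -r dumpxml Win7-test | grep -E "bus=|virtio-scsi"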

I was asking because in oVirt you may have had that option turned on, and in plain libvirtd/KVM you didn't. It could be making a difference, but it's really hard to tell.

It's very strange that you're having write performance issues. This is one of the reasons I use oVirt: performance has always been top notch compared to any other comprehensive solution I have tried.

Can you give this a try: fire up a manager that's not hosted engine, maybe a VM on a laptop or something, and then add that CentOS install you already have as a node in a datacenter that uses local storage. ... I have a feeling you might have had some NFS-related issues with your write performance. If that is the case, then NFS tuning will be required to get what you are looking for. To be honest, this has never been a problem for me in any of my deployments, so if anyone else out there has something, please feel free to chime in.
-- Donny Davis

Hi Donny,

On 28/07/15 06:30 PM, Donny Davis wrote:
Can you give this a try: fire up a manager that's not hosted engine, maybe a VM on a laptop or something, and then add that CentOS install you already have as a node in a datacenter that uses local storage. ... I have a feeling you might have had some NFS-related issues with your write performance. If that is the case, then NFS tuning will be required to get what you are looking for.
I only have the one server. I could do an oVirt All-In-One install and use local storage on it. You may be right about the NFS side of things; it did occur to me too. All the other tests I did (ESXi, "vanilla" KVM) used true local storage.

If it works fine with the AIO/local-storage install, then at least that gives me something to look at (NFS storage).

I may be able to get to this tomorrow, but it may not be until Thu or Fri that I can report on results, so please be patient :-)

-Alan
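P.S. Before I blow this away, I am going to note down the options the data domain is actually mounted with on the host, so I have something to compare against later (this assumes the nfs-utils tools are installed; the grep pattern just matches my storage path):

  # show the effective NFS mount options (vers, proto, rsize/wsize, etc.)
  nfsstat -m
  grep storage1 /proc/mounts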

I'm curious to hear how it works out. Keep us posted.

OK, so an update... it looks like the issue was indeed my NFS settings.

I ended up doing a self-hosted engine again. Here is the export entry I had been using for my (NFS) data domain (based on what I had seen in the oVirt documentation):

/storage1/data *(rw,all_squash,anonuid=36,anongid=36)

However, I had come across some articles on NFS tuning and performance (nothing from oVirt, though) indicating that by default, current versions of NFS export with "sync", meaning the server commits all data changes to disk before replying. Indeed, my new test VM was getting the same disk write performance as before (about 10-15 MB/s).

In my new install, I added my NFS data store as I had before, but I also added a second data store exported like this:

/storage1/data *(rw,all_squash,async,anonuid=36,anongid=36)

and migrated my VM's vHDD to this second data store. Once it was migrated, I rebooted and ran the HDD test again. The results are *much* better: about 130 MB/s sequential write speed (averaged over a half dozen or so runs), and almost 2 GB/s sequential read speed. If it means anything to anyone, Random QD32 speeds are about 30 MB/s for write and 40 MB/s for read.

Hopefully this can help someone else out there. Would it be appropriate to add this to the "Troubleshooting NFS" documentation page? As long as people are aware of the possible consequences of the 'async' option (possible data loss if the server shuts down suddenly), it seems to be a viable solution.

@Donny: Thanks for pointing me in the right direction. I was actually starting to get a bit frustrated, as it felt like I was talking to myself there... :-(

-Alan
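P.S. For completeness, applying the export change did not need an NFS restart, just a re-export (the path below is my data-store export; adjust to your own):

  # re-read /etc/exports, then confirm the export now lists 'async'
  exportfs -ra
  exportfs -v | grep storage1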

I also wonder: if I had set up the storage as Gluster instead of "straight" NFS (and thus used Gluster's own NFS services), would this have been an issue? Would Gluster possibly have built-in options that avoid this sort of performance issue?

-Alan

Hi Donny (and everyone else who uses NFS-backed storage),

You mentioned that you get really good performance in oVirt, and I am curious what you use for your NFS options in exportfs? The 'async' option fixed my performance issues, but of course this would not be a recommended option in a production environment, at least unless the NFS server was running with a BBU on the RAID card.

Just curious. Thanks! :-)

-Alan

You can run without async so long as your NFS server can handle sync writes quickly.
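(A rough way to gauge that on the storage box itself, if it helps: time small synchronous writes directly on the backing filesystem; the path below is just an example. On plain spinning disks with no write-back cache this usually comes out to only a couple of hundred IOPS at best.)

  # each 4 KiB write must reach stable storage before the next one starts;
  # the reported MB/s divided by 4 KiB gives a rough sync-IOPS figure
  dd if=/dev/zero of=/storage1/synctest bs=4k count=2000 oflag=dsync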

Thank you for reporting back.

I don't have to use async with my NFS share. It's not the same comparison, but just for metrics: I am running an HP DL380 with 2x 6-core CPUs and 24 GB of RAM, with 16 disks in raidz2 across 4 vdevs. The backend is ZFS. Connections are 10GbE.

I get great performance from my setup.
-- Donny Davis

With ZFS you can use sync=disabled to enable async on the storage side as well, just FYI.

On one of ours we have 18 x 2 TB drives across 6 x 3-way mirrors, with 3 Intel 7310 SSDs in a 3-way mirror for the ZIL.

With about 100 VMs running, I get roughly 1400 IOPS write and 2230 read, with about 600 MBps read and 220 write, running the tests from a Windows 2008 R2 VM.
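(If anyone wants to try it, it is a per-dataset property; the pool/dataset name below is only a placeholder, and the same caveat applies as with NFS async: acknowledged writes can be lost if the box goes down uncleanly.)

  # check, then change, the sync behaviour of the dataset backing the export
  zfs get sync tank/ovirt-data
  zfs set sync=disabled tank/ovirt-data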

While searching around a bit more, something made me think. When I had the default "sync" option enabled, I was getting about 15 MB/s, sometimes a bit more, sometimes a bit less. The NICs on my server are connected to a 100 Mbit switch for the moment. Even though my NFS share is on the same server, because the NIC reports being connected at 100 Mbit, could that be limiting my write speeds?

I have a 1 Gbit switch I could borrow to test with, but if it really wouldn't be a factor in my current setup, then I probably won't bother. I am just trying to figure out how one would get decent performance using 1 Gbit NICs in a production environment (the SMB clients we service wouldn't necessarily be able to afford 10 Gbit NICs).

Of course, another option would be to run NFS with async enabled, but make sure the NFS storage is using a RAID card that has a battery backup, and of course that the storage itself is connected to a UPS (which we always do).

Anyway, any insight into the 100 Mbit vs 1 Gbit question, with regard to it all being on one server, would be appreciated.

-Alan
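P.S. My working assumption, which I have not verified, is that traffic to an IP owned by the same host stays on the loopback path and never touches the switch. An easy sanity check would be to watch the physical NIC counters while re-running the write test (the interface name below is just an example):

  # confirm what the NIC negotiated with the switch
  ethtool em1 | grep -i Speed
  # the byte counters should barely move during the test if the NFS traffic is staying local
  ip -s link show em1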
participants (4)
- Alan Murrell
- Alan Murrell
- Donny Davis
- Matthew Lagoe