From: Alan Murrell (lists at murrell.ca)
Subject: [ovirt-users] Poor guest write speeds
Date: Sun, 26 Jul 2015 01:48:59 -0700

Hello,

I am running oVirt 3.5 on a single server (hosted engine). I have two
Western Digital WD20EZRX drives in a hardware RAID1 configuration. My
storage is actually on the same server, but I am attaching to it via NFS.

I created a Windows 7 guest, and I am finding its write speeds to be
horrible. It is a VirtIO-SCSI drive and the guest additions are installed.

The installation of the OS took way longer than on bare metal or even
VMware. When I ran Windows updates, it again took a *lot* longer than on
bare metal or on VMware.

The read speeds seem to be fine. The guest is responsive when I click on
programs, and they open about as fast as on bare metal or VMware.

I downloaded the "Parkdale" HDD tester and ran a test with the following
settings:

  - File size: 4000
  - Block size: 1 MByte

The results are as follows:

  - Seq. Write Speed: 10.7 MByte/sec
  - Seq. Read Speed: 237.3 MByte/sec

I ran another test, this time changing the "Block Size" to "64 kByte
[Windows Default]". Results are as follows:

  - Seq. Write Speed: 10.7 MByte/sec
  - Seq. Read Speed: 237.3 MByte/sec

On the host, running 'dd bs=1M count=256 if=/dev/zero of=test
conv=fdatasync' on my data mount via NFS resulted in the following:

256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 3.59431 s, 74.7 MB/s

I got this from <https://romanrm.net/dd-benchmark>; it measures the write
speed of a disk. As you can see, it is significantly higher than what I am
getting in the Windows guest VM.

Running that same "dd" test on an Ubuntu guest VM gives me 24 MB/s.

Any ideas why I have such poor write performance? Is this normal with
oVirt guests? Any ideas on what I might be able to do to improve it? I
don't expect to get close to the "bare metal" results, but something in
the 40-60 MB/s range would be nice.

Thanks, in advance, for your help and advice.

-Alan

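Since the storage here is physically local but consumed over an NFS
loopback, one quick way to see how much of the gap is the NFS layer rather
than the RAID1 pair itself is to run the same flush-included write twice:
once against the local filesystem backing the export, and once through the
NFS mount of it. A minimal sketch; both paths are placeholders for wherever
the data domain actually lives and is mounted:

  # Same 256 MiB write-and-flush, first on the local filesystem behind the
  # export, then through the NFS mount of that same storage (placeholder paths):
  dd bs=1M count=256 if=/dev/zero of=/data/ovirt/ddtest conv=fdatasync
  dd bs=1M count=256 if=/dev/zero of=/mnt/nfs-data/ddtest conv=fdatasync
  rm -f /data/ovirt/ddtest /mnt/nfs-data/ddtest

If the two numbers are close, the disks/controller are the ceiling; if the
NFS path is much slower, the export and mount options are the place to look.
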

From: Alan Murrell (alan at murrell.ca)
Subject: Re: [ovirt-users] Poor guest write speeds
Date: Sun, 26 Jul 2015 02:08:16 -0700

Hi again,

After doing some searching around, I enabled "viodiscache = writeback" in
the "Custom Properties" of my Windows 7 VM. Running the same tests as
earlier now gives me about 53 MByte/sec write speed and 3.2 GByte/sec read
speed.

Is this a recommended setting? Is there any downside to having it enabled?

-Alan

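For context on why that one setting changes the numbers so much: if the
custom property does put the disk into QEMU's writeback cache mode, guest
writes are acknowledged as soon as they reach the host page cache. That is
fast, but the usual downside is that a host crash or power loss can lose
data the guest already believes is on disk. One way to confirm what cache
mode the running QEMU process actually got (a general libvirt/QEMU check,
nothing oVirt-specific) is to look at its command line on the host:

  # Split QEMU's comma-separated -drive options onto their own lines and
  # look for cache=:
  ps -ef | grep '[q]emu-kvm' | tr ',' '\n' | grep 'cache='
  #   cache=none       -> guest writes bypass the host page cache
  #   cache=writeback  -> guest writes are acknowledged from host RAM
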
From: Alan Murrell (lists at murrell.ca)
Subject: Re: [ovirt-users] Poor guest write speeds
Date: Sun, 26 Jul 2015 17:26:22 -0700

OK, so disk write performance is back to being poor on my Win7 guest.

It is very plausible that the bottleneck is my HDD hardware and/or the
built-in RAID controller on the server, though the difference I am getting
in write speeds between the bare-metal test and the guest VM is still
pretty steep.

This is a home/lab setup with not much on it, so just to determine whether
it is my HDD/RAID hardware, I am going to blow this setup away, install
VMware 5.5 on this exact same hardware and install a Win7 guest VM. Once I
have done that, I will run the exact same HDD read/write test with the
Parkdale application and report back on the results. I should be able to
get that done this week.

Based on my results, if it turns out to be an oVirt issue, we can figure
out whether there is anything further I can do to get better performance.

Regards, Alan


From: Alan Murrell (lists at murrell.ca)
Subject: Re: [ovirt-users] Poor guest write speeds
Date: Mon, 27 Jul 2015 18:34:36 -0700

Hello,

So, a bit of an update, though I still have some additional testing to do.
I installed ESXi 5.5 on the same hardware (blew away my oVirt install) and
installed a Windows 7 VM with the same settings (2 GB RAM, 1 single-core
vCPU, 60 GB thin-provisioned HDD).

The install of Windows itself was definitely *way* faster. I don't have
actual timings for a real comparison, but I can say with 100% certainty
that the install was faster. It took at *least* half the time it took
under oVirt; to be honest, it was probably closer to a third of the time.

Once installed, I installed the VMware Guest Tools, then downloaded and
ran the "Parkdale" app with the same settings I used in the oVirt Windows
7 VM. The preliminary results are interesting.

The "Seq. Write" test comes in at around 65 MByte/s, which compares well
to the bare-metal results I got previously. What is interesting (and
disappointing) is that the "Seq. Read" test indicates about 65 MByte/s,
which is a *huge* decrease from what I was getting in the oVirt Win7
guest.

As I mentioned, I am still going to do some additional testing, but I
wanted to let you know that -- initially, anyway -- the problem under
oVirt does not seem to be hardware-related, but possibly something with
virtio-SCSI?

For those who are running Windows VMs in production, what sort of
performance do you see? What type of virtual HDD are you running?

I will post back either later or some time tomorrow (Tue) with more
results.

-Alan


From: Donny Davis (donny at cloudspin.me)
Subject: Re: [ovirt-users] Poor guest write speeds
Date: Mon, 27 Jul 2015 21:54:31 -0400

Add this on to your dd command:

conv=sync

And then report the result. It's no surprise that your writes are slow in
RAID1.

From: Alan Murrell (lists at murrell.ca)
Subject: Re: [ovirt-users] Poor guest write speeds
Date: Mon, 27 Jul 2015 23:27:13 -0700

On 27/07/2015 6:54 PM, Donny Davis wrote:
> Add this on to your dd command
>
> conv=sync

On the console login of my host/server, the 'dd' command I used was:

dd bs=1M count=256 if=/dev/zero of=test conv=fdatasync

Would I be replacing the 'conv=fdatasync' with 'conv=sync', or am I
appending it (i.e., 'conv=fdatasync,sync')?

> And then report result. Its no surprise that your writes are slow in raid1

I will give it a try and report the results when I re-install oVirt with
the hosted engine on my host. However, it actually wasn't/isn't the write
speed on the bare-metal host that I was too concerned with, but rather the
poor write speed within the guest VMs, particularly my Windows 7 guest.

As you can see from my original results, within the Win7 guest on oVirt I
was only getting about 10 MByte/s, while on the host I was able to get
about 74 MByte/s. That just seems like one heck of a performance drop-off
to me, which means either I don't have something configured/tuned
properly, or virtio-SCSI is not that good (though everything I have read
suggests that virtio-SCSI is what I should be using for best performance,
as long as the guest drivers are installed).

-Alan

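To answer the appending-versus-replacing question from dd's own semantics
(a general GNU coreutils point, nothing oVirt-specific): conv=fdatasync
forces one flush of the output file before dd reports its numbers,
conv=sync only pads short input blocks with NULs and does not flush
anything, and oflag=dsync makes every block a synchronous write. A short
sketch of the three variants on the same 256 MiB test:

  # 1) What was used above: write through the page cache, then fdatasync()
  #    once at the end, so the flush time is included in the reported speed.
  dd bs=1M count=256 if=/dev/zero of=test conv=fdatasync

  # 2) Appending sync keeps the flush; conv=sync on its own only pads short
  #    *input* blocks to the block size with NULs and is not a disk flush.
  dd bs=1M count=256 if=/dev/zero of=test conv=fdatasync,sync

  # 3) Synchronous write of every block; closer to a worst-case latency test.
  dd bs=1M count=256 if=/dev/zero of=test oflag=dsync
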
From: Alan Murrell (alan at murrell.ca)
Subject: Re: [ovirt-users] Poor guest write speeds
Date: Mon, 27 Jul 2015 23:39:39 -0700

Hello again.

So I have been doing some more testing, just to be sure. I rebooted my
Win7 ESXi guest VM between each run of the Parkdale HDD speed test, just
to be absolutely sure the results were not an artifact of caching, and
each time I get pretty consistent results: no less than 60 MByte/s Seq.
Write, and around the same for Seq. Read.

In oVirt, is there some tuning I am maybe missing? Is there a different
HDD type I should be selecting (or converting to)?

-Alan


From: Alan Murrell (lists at murrell.ca)
Subject: Re: [ovirt-users] Poor guest write speeds
Date: Tue, 28 Jul 2015 11:47:02 -0700

For my latest test, I installed CentOS 7 on my server and then installed
the libvirt/KVM virtualization group. I created a Win7 guest VM, giving it
2 GB RAM, 1 vCPU, and a 60 GB HDD. I did not specify anything for the HDD
type; just whatever the default is for libvirt.

The install of Win7 went *way* faster than any bare-metal install I have
ever done of Win7, and also faster than the ESXi install I had done
earlier.

Once installed, I downloaded the same "Parkdale" HDD testing application I
have been using, and used the same settings. The write test results were
about 100 MByte/s. I ran the test several times, including rebooting
between tests, and the results came back consistent.

As a baseline, I ran the following modified 'dd' write test on the server
itself:

dd bs=1M count=4000 if=/dev/zero of=test conv=sync

with the following results:

4000+0 records in
4000+0 records out
4194304000 bytes (4.2 GB) copied, 39.349 s, 107 MB/s

So now the question becomes: why would I be getting such a huge difference
in oVirt?

Thanks.

-Alan

From: Donny Davis (donny at cloudspin.me)
Subject: Re: [ovirt-users] Poor guest write speeds
Date: Tue, 28 Jul 2015 18:06:36 -0400

Are you using virtio-scsi in oVirt?


From: Alan Murrell (lists at murrell.ca)
Subject: Re: [ovirt-users] Poor guest write speeds
Date: Tue, 28 Jul 2015 18:19:43 -0700

On 28/07/15 03:06 PM, Donny Davis wrote:
> Are you using virtio-scsi in oVirt?

Initially I thought I was, but realised it was just virtio (no '-scsi'),
though I did have the guest tools installed (as well as having to install
the virtio drivers to get the Win7 installer to see the HDD). When I
realised this, I added a second virtual HDD, making sure it was of type
virtio-scsi, and then I formatted it.

I ran the same tests from "Parkdale" on the second HDD, but unfortunately
with the same results.

I am willing to install oVirt again and try another Win7 VM, making sure
the first disk is virtio-scsi, if you think that may make a difference?

-Alan

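If it helps the next round of testing: the bus type and the effective
cache mode can both be read straight out of the libvirt domain XML on the
host, so there is no need to guess from the admin portal. A minimal
sketch, assuming shell access to the host; "Win7" is a placeholder VM
name, and the -r flag queries libvirt read-only (which is normally how a
vdsm-managed host is inspected):

  virsh -r dumpxml Win7 | grep -E '<target dev|<driver|virtio-scsi'
  # <target dev='vda' bus='virtio'/>               -> plain virtio (virtio-blk) disk
  # <target dev='sda' bus='scsi'/> together with
  # <controller type='scsi' model='virtio-scsi'/>  -> virtio-scsi disk
  # <driver name='qemu' type='raw' cache='none'/>  -> effective cache mode
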

From: Donny Davis (donny at cloudspin.me)
Subject: Re: [ovirt-users] Poor guest write speeds
Date: Tue, 28 Jul 2015 20:30:33 -0500

I was asking because in oVirt you may have had that option turned on,
while in plain libvirtd/KVM you didn't. It could be making a difference,
but it's really hard to tell. It's very strange that you're having write
performance issues. This is one of the reasons I use oVirt: performance
has always been top notch compared to any other comprehensive solution I
have tried.

Can you give this a try: fire up a manager that's not hosted engine, maybe
a VM on a laptop or something, and then add that CentOS install you
already have as a node in a datacenter that uses local storage. I have a
feeling you might have had some NFS-related issues with your write
performance. If that is the case, then NFS tuning will be required to get
what you are looking for.

To be honest this has never been a problem for me in any of my
deployments, so if anyone else out there has something, please feel free
to chime in.

-- 
Donny Davis

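Before changing anything on the NFS side, it is worth recording what the
export and the client mount actually negotiated, since in this setup the
host is both the NFS server and the NFS client. A couple of read-only
commands from the standard nfs-utils tooling (nothing oVirt-specific):

  exportfs -v    # server side: per-export options such as sync/async, wdelay, root_squash
  nfsstat -m     # client side: NFS version and mount options (rsize, wsize, hard/soft, etc.)
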
From: Alan Murrell (lists at murrell.ca)
Subject: Re: [ovirt-users] Poor guest write speeds
Date: Tue, 28 Jul 2015 22:22:29 -0700

Hi Donny,

On 28/07/15 06:30 PM, Donny Davis wrote:
> Can you give this a try: fire up a manager that's not hosted engine,
> maybe a VM on a laptop or something, and then add that CentOS install
> you already have as a node in a datacenter that uses local storage.

I only have the one server. I could do an oVirt All-In-One install and use
local storage on it. You may be right about the NFS side of things; it did
occur to me too. All the other tests I did (ESXi, "vanilla" KVM) used true
local storage.

If it works fine with the AIO/local storage install, then at least that
gives me something to look at (NFS storage).

I may be able to get to this tomorrow, but it may not be until Thu or Fri
that I can report on results, so please be patient :-)

-Alan


From donny at cloudspin.me Wed Jul 29 01:30:46 2015
From: Donny Davis
To: users at ovirt.org
Subject: Re: [ovirt-users] Poor guest write speeds
Date: Wed, 29 Jul 2015 00:30:44 -0500

I'm curious to hear how it works out. Keep us posted.

On Jul 29, 2015 12:22 AM, "Alan Murrell" wrote:
> On 28/07/15 06:30 PM, Donny Davis wrote:
>> Can you give this a try. Fire up a manager that's not hosted engine,
>> maybe a VM on a laptop or something. And then add that CentOS install
>> you already have as a node in a datacenter that uses local storage.
>> ... I have a feeling you might have had some NFS related issues with
>> your write performance. If that is the case, then NFS tuning will be
>> required to get what you are looking for.
>
> I only have the one server. I could do an oVirt All-In-One install and
> use local storage on it. You may be right about the NFS side of things;
> it did occur to me too. All the other tests I did (ESXi, "vanilla" KVM)
> use true local storage.
>
> If it works fine with the AIO/local storage install, then at least that
> gives me something to look at (NFS storage).
>
> I may be able to get to this tomorrow, but it may not be until Thu or
> Fri that I can report on results, so please be patient :-)
>
> -Alan

From lists at murrell.ca Fri Jul 31 01:57:38 2015
From: Alan Murrell
To: users at ovirt.org
Subject: [ovirt-users] [SOLVED] Re: Poor guest write speeds
Date: Thu, 30 Jul 2015 22:57:32 -0700

OK, so an update... it looks like the issue was indeed my NFS settings.
I ended up doing a self-hosted engine again. Here is the export entry I
had been using for my (NFS) data domain (based on what I had seen in the
oVirt documentation):

/storage1/data *(rw,all_squash,anonuid=36,anongid=36)

However, I had come across some articles on NFS tuning and performance
(nothing from oVirt, though) indicating that current versions of NFS
export with "sync" by default, meaning every data change is committed to
disk before the write is acknowledged. Indeed, my new test VM was
getting the same disk write performance as before (about 10-15 MB/s).

In my new install, I added my NFS data store as before, but I also added
a second data store exported like this:

/storage1/data *(rw,all_squash,async,anonuid=36,anongid=36)

and migrated my VM's vHDD to this second data store. Once it was
migrated, I rebooted and ran the HDD test again. Results are *much*
better: about 130 MB/s sequential write speed (averaged over a half
dozen or so runs), and almost 2 GB/s sequential read speed. If it means
anything to anyone, Random QD32 speeds are about 30 MB/s for write and
40 MB/s for read.

Hopefully this can help someone else out there. Would it be appropriate
to add this to the "Troubleshooting NFS" documentation page? As long as
people are aware of the possible consequence of the 'async' option
(possible data loss if the server shuts down suddenly), it seems to be a
viable solution.

@Donny: Thanks for pointing me in the right direction. I was actually
starting to get a bit frustrated, as it felt like I was talking to
myself there... :-(

-Alan
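
For anyone wanting to reproduce this, the whole change is one word in
/etc/exports on the NFS host. A minimal sketch, using the same export
path and options quoted above (the two lines show the before and after
forms of the entry, not two simultaneous exports):

  # /etc/exports -- original entry; NFS exports default to "sync"
  /storage1/data  *(rw,all_squash,anonuid=36,anongid=36)

  # /etc/exports -- the 'async' variant described above
  /storage1/data  *(rw,all_squash,async,anonuid=36,anongid=36)

After editing the file, running exportfs -ra re-reads it and exportfs -v
shows the options actually in effect, so it is easy to confirm whether
sync or async is being applied. The data-loss caveat Alan mentions still
applies to the async form.
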
From alan at murrell.ca Fri Jul 31 02:03:55 2015
From: Alan Murrell
To: users at ovirt.org
Subject: Re: [ovirt-users] Poor guest write speeds
Date: Thu, 30 Jul 2015 23:03:51 -0700

I also wonder: if I had set up the storage as Gluster instead of
"straight" NFS (and thus utilised Gluster's own NFS services), would
this have been an issue? Would Gluster possibly have built-in options
that avoid this sort of performance problem?

-Alan

On 30/07/2015 10:57 PM, Alan Murrell wrote:
> OK, so an update... it looks like the issue was indeed my NFS settings.

From lists at murrell.ca Sat Aug 1 20:37:14 2015
From: Alan Murrell
To: users at ovirt.org
Subject: Re: [ovirt-users] Poor guest write speeds
Date: Sat, 01 Aug 2015 17:37:07 -0700

Hi Donny (and everyone else who uses NFS-backed storage),

You mentioned that you get really good performance in oVirt, and I am
curious what you use for your NFS options in exportfs. The 'async'
option fixed my performance issues, but of course it would not be a
recommended option in a production environment, at least not unless the
NFS server was running with a BBU on the RAID card.

Just curious. Thanks! :-)

-Alan

From matthew.lagoe at subrigo.net Sat Aug 1 21:01:05 2015
From: Matthew Lagoe
To: users at ovirt.org
Subject: Re: [ovirt-users] Poor guest write speeds
Date: Sat, 01 Aug 2015 18:00:39 -0700

You can run without async as long as your NFS server can handle sync
writes quickly.
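
A rough way to check whether an NFS server "handles sync writes quickly"
is to compare an fdatasync'ed dd run on the export directory itself with
the same run through an NFS mount of that export. A minimal sketch,
reusing the dd test from earlier in the thread; the loopback mount point
is a made-up example, and on Alan's single-server setup both commands
would run on the same host:

  # Raw speed of the backing filesystem on the NFS server
  dd if=/dev/zero of=/storage1/data/ddtest bs=1M count=256 conv=fdatasync

  # Same test through an NFS mount of the export; with a "sync" export
  # this is the number the guests will actually see
  mkdir -p /mnt/nfstest
  mount -t nfs localhost:/storage1/data /mnt/nfstest
  dd if=/dev/zero of=/mnt/nfstest/ddtest bs=1M count=256 conv=fdatasync
  umount /mnt/nfstest

A large gap between the two numbers points at NFS sync overhead rather
than at the disks themselves.
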
From donny at cloudspin.me Sun Aug 2 20:49:23 2015
From: Donny Davis
To: users at ovirt.org
Subject: Re: [ovirt-users] Poor guest write speeds
Date: Sun, 02 Aug 2015 20:49:21 -0400

Thank you for reporting back.

I don't have to use async with my NFS share. It's not the same
comparison, but just for metrics, I am running an HP DL380 with 2x 6
cores and 24GB of RAM, with 16 disks in raidz2 across 4 vdevs. The
backend is ZFS. Connections are 10GbE.

I get great performance from my setup.

On Sat, Aug 1, 2015 at 9:00 PM, Matthew Lagoe wrote:
> You can run without async as long as your NFS server can handle sync
> writes quickly.

-- 
Donny Davis
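
For readers unfamiliar with the ZFS terms above, a pool like the one
Donny describes (several raidz2 vdevs striped together) is built with a
single zpool command. A minimal sketch only; the device names are
placeholders and the 4x4 layout is illustrative, not Donny's exact
configuration:

  # Four raidz2 vdevs of four disks each; ZFS stripes writes across vdevs
  zpool create tank \
    raidz2 sda sdb sdc sdd \
    raidz2 sde sdf sdg sdh \
    raidz2 sdi sdj sdk sdl \
    raidz2 sdm sdn sdo sdp

  # Verify the resulting layout
  zpool status tank
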
From matthew.lagoe at subrigo.net Mon Aug 3 03:55:05 2015
From: Matthew Lagoe
To: users at ovirt.org
Subject: Re: [ovirt-users] Poor guest write speeds
Date: Mon, 03 Aug 2015 00:54:47 -0700

With ZFS you can use sync=disabled to enable async on the storage side
as well, just FYI.

On one of ours we have 18 x 2TB drives across 6 x 3-way mirrors, with 3
Intel 7310 SSDs 3-way mirrored for the ZIL.

With about 100 VMs running, I get roughly 1400 IOPS write and 2230 read,
with about 600 MB/s read and 220 MB/s write, running the tests from a
Windows 2008 R2 VM.

On Sunday, August 02, 2015 05:49 PM, Donny Davis wrote:
> I don't have to use async with my NFS share.
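
The ZFS-side equivalent of the NFS 'async' export option is set per
dataset. A minimal sketch, assuming a hypothetical dataset named
tank/nfs backing the export; the same caveat applies as with NFS async
(a few seconds of acknowledged writes can be lost on sudden power loss):

  # Check the current setting (the default is "standard":
  # honour synchronous write requests)
  zfs get sync tank/nfs

  # Disable synchronous semantics for this dataset only
  zfs set sync=disabled tank/nfs

  # Revert when done testing
  zfs set sync=standard tank/nfs
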

From lists at murrell.ca Tue Aug 4 03:17:05 2015
From: Alan Murrell
To: users at ovirt.org
Subject: Re: [ovirt-users] Poor guest write speeds
Date: Tue, 04 Aug 2015 00:17:01 -0700

While searching around a bit more, something made me think.

When I had the default "sync" option enabled, I was getting about
15 MB/s, sometimes a bit more, sometimes a bit less.

The NICs on my server are connected to a 100 Mbit switch for the moment.
Even though my NFS share is on the same server, because the NIC reports
being connected at 100 Mbit, could that be limiting my write speeds?

I have a 1 Gbit switch I could borrow to test with, but if it really
would not be a factor in my current setup, then I probably won't bother.

I am mainly trying to figure out how one would get decent performance
using 1 Gbit NICs in a production environment (the SMB clients we
service would not necessarily be able to afford 10 Gbit NICs).

Of course, another option would be to run NFS with async enabled, but
make sure the NFS storage uses a RAID card with battery backup, and of
course that the storage itself is connected to a UPS (which we always
do).

Anyway, any insight into the 100 Mbit vs 1 Gbit question, with regards
to it all being on one server, would be appreciated.

-Alan
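
One way to answer the 100 Mbit question without borrowing a switch is to
check which path the NFS traffic actually takes. On Linux, traffic
addressed to an IP that belongs to the host itself is normally delivered
over the loopback path and never crosses the physical switch, so the
100 Mbit link should not cap a same-host NFS mount. A minimal sketch to
confirm this; the address and interface name are placeholders:

  # See which server address the NFS data domain is mounted from
  mount | grep nfs

  # If that address belongs to this host, the route shown is "local"
  # (via lo), i.e. it never touches the switch
  ip route get 192.168.1.10

  # Negotiated speed of the physical NIC, which matters once other
  # hosts or clients reach the storage over the network
  ethtool eth0 | grep -i speed
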