On Jul 28, 2018 22:17, Denys Oryshchenko <denys.oryshchenko@hotmail.com> wrote:

Wesley,

Try disabling compression on the ZFS filesystem


Nooo! Quite the contrary, compression actually improves write performance, since you won't have to write as much data as without it!
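If you want to see whether compression is even in play before touching it, you can check the dataset properties (just a sketch, <dataset> is a placeholder for your own dataset):

# zfs get compression,compressratio <dataset>

A compressratio well above 1.00x means it's actively saving you writes.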


On 07/28/2018 11:01 AM, Wesley Stewart wrote:
Windows reports about 500-600 MB/s over a 4 GB file.

However, I believe I found the issue.  My NFS backend is ZFS, which is apparently notorious for horrible sync writes.


I will try an iSCSI target and see how that goes.

You don't say! Well, ZFS just happens to be a love of mine, so you're in luck! :)

In that case, and since you're clearly disregarding my previous warning about how fsck'ed up you'll be with asynchronous writes when the shit hits the fan: the article is referring to the old 'istgt' daemon, which never even listened to sync calls, something you'd normally want if you care about consistency. Nowadays that has been replaced by 'ctld', which is far superior and does listen to them, so no, you won't see any difference there. What you can do to force ZFS not to care is:

# zfs set sync=disabled <dataset>

After that it won't matter what the services want any more...
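If you ever want to check where you stand or go back to the safe default, the same <dataset> placeholder applies:

# zfs get sync <dataset>
# zfs set sync=standard <dataset>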

Now that I've spent my time telling you what you shouldn't do (because it'll get you in trouble sooner or later), I feel obligated to tell you what you should do instead :)

The right thing to do is to add an SSD to your pool as a SLOG (Separate LOG device) that takes care of all those pesky sync calls that normally slow you down. A pro tip is to use just a small partition of a bigger disk; the spare capacity helps the drive relocate sectors that get damaged over time. Say, a 15 GB partition of a 256 GB SSD will last a lifetime, and even if it doesn't, it's OK if it dies: the pool will just be as slow as without it, so you'll surely notice and know to replace it :) As a big bonus, it also helps prevent fragmentation by moving the ZIL (ZFS Intent Log) out of the main pool onto the SLOG.
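Attaching it is a one-liner; just as a sketch, with <pool> and the partition path as placeholders for your setup:

# zpool add <pool> log /dev/<ssd-partition>

Afterwards it shows up as a separate 'logs' vdev in 'zpool status <pool>'.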

Good luck with it!

/K


On Sat, Jul 28, 2018, 1:49 PM Karli Sjöberg <karli@inparadise.se> wrote:


On Jul 28, 2018 19:30, Wesley Stewart <wstewart3@gmail.com> wrote:
I have added the "async" option to the "additional mount options" section.

Transferring from my raided SSD mirrors is QUITE fast now.

However, and this might be confusing, if I read and write to the NFS share at the same time, performance plummets.

So if I grab something from the SMB share over the 10 Gb network and write it to my VM hosted on the NFS share: bad performance.

Or if I copy and paste an ISO from the VM to itself: horrible performance.

If I grab a file from a VM on my RAIDed SSD storage and copy it to my VM on the NFS share, performance is great (400+ MB/s).

Not quite sure what's going on!  However, if I grab something over the 1 Gbps network, everything seems okay.  Or if I change the NFS mount to the 1 Gbps link, everything is okay.

Any ideas?

Let's start with expectations, shall we? 10 Gb/s is roughly 1 GB/s of usable throughput. You read a file over SMB and write it to a VM hosted on NFS, so remote to remote over the same link; reading and writing at the same time splits that in half, so the absolute max for that transfer would be ~500 MB/s. What do you get?

/K


On Sat, Jul 28, 2018 at 3:02 AM Karli Sjöberg <karli@inparadise.se> wrote:


On Jul 28, 2018 01:01, Wesley Stewart <wstewart3@gmail.com> wrote:
I currently have an NFS server with decent speeds.  I get about 200 MB/s write and 400+ MB/s read.

My single node oVirt host has a mirrored SSD store for my Windows 10 VM, and I have about 3-5 VMs running on the NFS data store.

However, VMs on the NFS datastore are SLOW.  They can write to their own disk at around 10-50 MB/s, whereas a VM on the mirrored SSD drives can get 150-250 MB/s transfer speed from the same NFS storage (through NFS mounts).

Does anyone have any suggestions on what I could try to speed up the NFS storage for the VMs?  My single-node oVirt box has a 1 ft Cat6 crossover cable plugged directly into my NFS server's 10 Gb port.

Thanks!
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/OFTRL5QWQS5SVL23YFZ5KBFBN3F6THQ3/

It may be because of how oVirt mounts them, much more carefully than Linux does by default. If I am not mistaken, oVirt mounts the NFS shares with 'sync', whereas a standard 'mount' with no options gets you 'async'. The difference in performance is huge, but for a reason: async is unsafe in case of a power failure. That may explain things.
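For comparison, this is roughly the difference in question; <server>:/<export> and the mount point are just example placeholders:

# mount -t nfs <server>:/<export> /mnt/test
# mount -t nfs -o sync <server>:/<export> /mnt/test

The first is the ordinary default (effectively async), the second forces synchronous behavior like oVirt reportedly does. You can check what options an existing mount actually ended up with via 'grep nfs /proc/mounts'.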

/K




_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/VEZTV4SVQMWOTTYFUU3EGPE6EUD5G4F4/