Hi Drew,
What is the host RAM size, and what are the settings for vm.dirty_ratio and
vm.dirty_background_ratio on those hosts?
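You can check the current values on each host with sysctl:

sysctl vm.dirty_ratio vm.dirty_background_ratio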
What about your iSCSI target?
Best Regards,
Strahil Nikolov
On Mar 11, 2019 23:51, Drew Rash <drew.rash(a)gmail.com> wrote:
Added the nfs.disable=false setting, removed the gluster storage domain, and re-added it
using NFS. Performance is still in the low 10s of MB/s, plus or minus 5.
Ran showmount -e "" and it displayed the mount.
Right now I'm trying to re-mount using gluster with a negative-timeout=1 option.
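For reference, the mount I'm attempting looks roughly like this (the volume name and mount
point are placeholders):

mount -t glusterfs -o negative-timeout=1 gluster-node-fqdn:/volname /mnt/test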
We converted one of our 4 boxes to FreeNAS, took four 6TB drives, made a RAID-backed iSCSI
target, and connected it to oVirt. Booted Windows (times 2: two boxes, with a 7GB file on
each), copied the file from one to the other, and it copied at 600 MB/s average. But then it
has weird pauses... I think it's doing some kind of caching: it'll go like 2GB and then
choke to zero B/s. Then it speeds up and chokes, speeds up and chokes, averaging or topping
out around 10 MB/s. Then at 99% it waits 15 seconds with 0 bytes left...
Small files, are instant basically. No complaint there.
So... WAY faster. But it suffers from the same thing... it just requires writing some more
to trigger it: a few gigs, and then it crawls.
It seems to be related to whether I JUST finished running a test. If I wait a while, I can
get it to copy almost 4GB or so before it chokes.
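That pattern smells like page-cache writeback to me, though that's just a guess. One way to
check would be to watch the dirty-page counters on the host while a copy runs:

grep -E 'Dirty|Writeback' /proc/meminfo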
I made a 3rd Windows 10 VM and copied the same file from the 1st to the 2nd (via a
Windows share, from the 3rd box), and it didn't choke or do any funny
business... oddly. Maybe a fluke; I only did that once.
So... switching to FreeNAS appears to have increased the window size before it runs
horribly. But it will still run horrifically if the disk is busy.
And since we're planning on doing actual work on this... idle disks catching up on
some hidden cache feature of oVirt isn't gonna work. We won't be writing gigs of
data all over the place... but knowing that this chokes a VM to near death... is scary.
It looks like, for a Windows 10 install to operate correctly, it expects at least 15 MB/s
with less than 1s latency. Otherwise services don't start, weird stuff happens, and
it runs slower than my dog while pooping out that extra little stringy bit near the end.
So we gotta avoid that.
On Sat, Mar 9, 2019 at 12:44 AM Strahil <hunter86_bg(a)yahoo.com> wrote:
>
> Hi Drew,
>
> For the test, change the gluster parameter nfs.disable to false.
> Something like: gluster volume set volname nfs.disable false
>
> Then use showmount -e gluster-node-fqdn
> Note: NFS might not be allowed in the firewall.
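> On firewalld-based hosts you can open it with something like this (using the stock
> firewalld service definitions):
>
> firewall-cmd --permanent --add-service=nfs --add-service=rpc-bind --add-service=mountd
> firewall-cmd --reload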
>
> Then add this NFS domain (don't forget to remove the gluster storage domain
> before that) and do your tests.
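> You can verify the export mounts before adding it in oVirt, for example (volname is a
> placeholder; gluster's built-in NFS server speaks NFSv3, hence vers=3):
>
> mount -t nfs -o vers=3 gluster-node-fqdn:/volname /mnt/test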
>
> If it works well, you will have to disable the built-in NFS again and deploy NFS Ganesha:
>
> gluster volume reset volname nfs.disable
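> (The reset restores nfs.disable to its default, which turns the built-in gNFS
> server back off.)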
>
> Best Regards,
> Strahil Nikolov