Hi,
but isn't performance.strict-o-direct one of the options enabled by
gdeploy during installation, because it's supposed to give some sort
of benefit?
Paolo
On 14/09/2018 11:34, Leo David wrote:
performance.strict-o-direct: on
This was the bloody option that created the bottleneck! It was ON.
So now I get an average of 17k random writes, which is not bad at
all. Below, the volume options that worked for me (a CLI sketch for
applying them follows the list):
performance.strict-write-ordering: off
performance.strict-o-direct: off
server.event-threads: 4
client.event-threads: 4
performance.read-ahead: off
network.ping-timeout: 30
performance.quick-read: off
cluster.eager-lock: enable
performance.stat-prefetch: on
performance.low-prio-threads: 32
network.remote-dio: off
user.cifs: off
performance.io-cache: off
server.allow-insecure: on
features.shard: on
transport.address-family: inet
storage.owner-uid: 36
storage.owner-gid: 36
nfs.disable: on
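For reference, a minimal sketch of how such a set can be applied from
the shell; the volume name "data" is a placeholder for the actual
volume:

    # "data" is a placeholder volume name
    gluster volume set data performance.strict-o-direct off
    gluster volume set data performance.strict-write-ordering off
    gluster volume set data server.event-threads 4
    gluster volume set data client.event-threads 4
    # ...and so on for the remaining options, then verify with:
    gluster volume info data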
If any other tweaks can be done, please let me know.
Thank you!
Leo
On Fri, Sep 14, 2018 at 12:01 PM, Leo David <leoalex@gmail.com> wrote:
Hi Everyone,
So I have decided to take out all of the gluster volume custom
options and add them back one by one, activating/deactivating the
storage domain and rebooting one VM after each added option :(
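As a sketch, each step amounts to something like the following (the
volume name "data" and the chosen option are placeholders; the fio
line is the same test quoted further down):

    # add back one custom option ("data" is a placeholder volume name)
    gluster volume set data performance.stat-prefetch on
    # reactivate the storage domain, reboot the test VM, then re-run
    # the benchmark inside the VM:
    fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
        --name=test --filename=test --bs=4k --iodepth=64 --size=4G \
        --readwrite=randwrite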
The default options that give bad IOPS performance (~1-2k) are:
performance.stat-prefetch on
cluster.eager-lock enable
performance.io-cache off
performance.read-ahead off
performance.quick-read off
user.cifs off
network.ping-timeout 30
network.remote-dio off
performance.strict-o-direct on
performance.low-prio-threads 32
After adding only:
server.allow-insecure on
features.shard on
storage.owner-gid 36
storage.owner-uid 36
transport.address-family inet
nfs.disable on
The performance increased to 7k-10k IOPS.
The problem is that I don't know whether that's sufficient (maybe it
can be improved further), or, even worse, whether I might run into
different volume issues by taking out options the volume really
needs...
If I had at hand the default options that are applied to volumes as
optimization in a 3-way replica, I think that might help.
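For what it's worth, as far as I know the optimized defaults come
from the "virt" group profile shipped with gluster, which can be
applied in one go ("data" again being a placeholder volume name):

    # apply gluster's stock virtualization tuning profile
    gluster volume set data group virt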
Any thoughts?
Thank you very much!
Leo
On Fri, Sep 14, 2018 at 8:54 AM, Leo David <leoalex@gmail.com> wrote:
Any thoughts on these? Is that UI optimization just a gluster
volume custom configuration? If so, I guess it can be done
from the CLI, but I am not aware of the correct optimized
parameters for the volume...
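As a sketch, assuming a standard gluster install, the parameter set
behind that UI action should be visible on any node as a plain
key=value file:

    # list the options that the "virt" tuning profile applies
    cat /var/lib/glusterd/groups/virt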
On Thu, Sep 13, 2018, 18:25 Leo David <leoalex@gmail.com> wrote:
Thank you Jayme. I am trying to do this, but I am getting
an error, since the volume is replica 1 distribute, and it
seems that oVirt expects a replica 3 volume.
Would there be another way to optimize the volume in this
situation?
On Thu, Sep 13, 2018, 17:49 Jayme <jaymef@gmail.com> wrote:
I had similar problems until I clicked "optimize
volume for vmstore" in the admin GUI for each data
volume. I'm not sure if this is what is causing your
problem here, but I'd recommend trying that first. It
is supposed to be optimized by default, but for some
reason my oVirt 4.2 cockpit deploy did not apply those
settings automatically.
On Thu, Sep 13, 2018 at 10:21 AM Leo David <leoalex@gmail.com> wrote:
Hi Everyone,
I am encountering the following issue on a single-instance
hyper-converged 4.2 setup.
The following fio test was done:
fio --randrepeat=1 --ioengine=libaio --direct=1
--gtod_reduce=1 --name=test --filename=test
--bs=4k --iodepth=64 --size=4G --readwrite=randwrite
The results are very poor when running the test inside a
VM with a preallocated disk on the SSD store:
~2k IOPS
The same test done on the oVirt node, directly on the
mounted ssd_lvm: ~30k IOPS
The same test done, this time on the gluster mount
path: ~20k IOPS
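Note that --direct=1 makes fio open the file with O_DIRECT; as a
comparison point, a buffered run of the same workload can help
separate page-cache effects from the gluster path. A sketch:

    # same workload without O_DIRECT, to compare buffered vs direct I/O
    fio --randrepeat=1 --ioengine=libaio --direct=0 --gtod_reduce=1 \
        --name=test --filename=test --bs=4k --iodepth=64 --size=4G \
        --readwrite=randwrite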
What could be the issue that gives the VMs such
slow disk performance (2k on SSD!)?
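One way to narrow down where the latency comes from might be
gluster's built-in profiler ("data" being a placeholder volume name):

    # collect per-brick fop/latency statistics while the fio test runs
    gluster volume profile data start
    gluster volume profile data info
    gluster volume profile data stop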
Thank you very much!
--
Best regards, Leo David