
Hi Everyone,
I am encountering the following issue on a single-instance hyper-converged 4.2 setup. The following fio test was done:

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randwrite

The results are very poor when running the test inside a VM with a preallocated disk on the SSD store: ~2k IOPS. The same test run on the oVirt node, directly on the mounted ssd_lvm, gives ~30k IOPS, and run on the gluster mount path it gives ~20k IOPS.

What could be the issue that the VMs get such slow disk performance (2k on SSD!)? Thank you very much!

-- Best regards, Leo David
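For anyone reproducing the comparison, the same job can be pointed at each storage layer in turn to localize the bottleneck. A rough sketch follows; all paths are placeholders and need to be adjusted to the actual setup:

```sh
# Identical 4k random-write job, pointed at each storage layer in turn.
# All paths below are examples only -- adjust to the local environment.
JOB="--randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --bs=4k --iodepth=64 --size=4G --readwrite=randwrite"

# 1) On the oVirt node, directly on the SSD LV mount
fio $JOB --filename=/mnt/ssd_lvm/fio.test

# 2) On the oVirt node, on the FUSE-mounted gluster volume
fio $JOB --filename=/rhev/data-center/mnt/glusterSD/node1:_ssd__store/fio.test

# 3) Inside a guest, on a file living on the preallocated virtual disk
fio $JOB --filename=/root/fio.test
```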

I had similar problems until I clicked "optimize volume for vmstore" in the admin GUI for each data volume. I'm not sure if this is what is causing your problem here, but I'd recommend trying that first. It is supposed to be optimized by default, but for some reason my oVirt 4.2 cockpit deploy did not apply those settings automatically.

Thank you Jayme. I am trying to do this, but I am getting an error, since the volume is replica 1 distribute and it seems that oVirt expects a replica 3 volume. Is there another way to optimize the volume in this situation?

Any thoughts on these? Is that UI optimization only a set of gluster volume custom options? If so, I guess it can be done from the CLI, but I am not aware of the correct optimized parameters for the volume....
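For the CLI route, the tuning applied by the "optimize volume for vmstore" button corresponds, as far as I know, to gluster's predefined "virt" option group, so something like the following should work even where the UI refuses a replica-1 volume (the volume name is an example):

```sh
# Apply the predefined virt/vm-store option group to the volume (name is an example)
gluster volume set ssd_store group virt

# The list of options the group applies usually lives in this file on the nodes
cat /var/lib/glusterd/groups/virt

# Verify the resulting volume options
gluster volume info ssd_store
```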

Hi Everyone,
So I have decided to take out all of the gluster volume custom options and add them back one by one, activating/deactivating the storage domain and rebooting one VM after each added option :(
The default options that were giving bad IOPS performance (~1-2k) are:

performance.stat-prefetch on
cluster.eager-lock enable
performance.io-cache off
performance.read-ahead off
performance.quick-read off
user.cifs off
network.ping-timeout 30
network.remote-dio off
performance.strict-o-direct on
performance.low-prio-threads 32

After adding only:

server.allow-insecure on
features.shard on
storage.owner-gid 36
storage.owner-uid 36
transport.address-family inet
nfs.disable on

the performance increased to 7k-10k IOPS. The problem is that I don't know whether that is sufficient (maybe it can be improved further), or, worse, whether there is a chance of running into other volume issues by leaving out options the volume really needs... If I had handy the default options that are applied to volumes by the optimization on a 3-way replica, I think that might help.
Any thoughts? Thank you very much!
Leo
-- Best regards, Leo David
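As a side note, the add-one-option-and-re-test cycle can be driven from the gluster CLI; a rough sketch, where the volume name and the option list are only examples:

```sh
#!/bin/sh
# Apply candidate options one at a time and verify each before re-running the benchmark.
VOLUME=ssd_store
while read -r opt value; do
    gluster volume set "$VOLUME" "$opt" "$value"
    gluster volume get "$VOLUME" "$opt"   # confirm the value actually took effect
    # ...re-activate the storage domain, start a test VM and repeat the fio run here...
done <<'EOF'
server.allow-insecure on
features.shard on
storage.owner-gid 36
storage.owner-uid 36
transport.address-family inet
nfs.disable on
EOF
```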

performance.strict-o-direct: on
This was the bloody option that created the bottleneck! It was ON. So now I get an average of 17k random writes, which is not bad at all. Below are the volume options that worked for me:

performance.strict-write-ordering: off
performance.strict-o-direct: off
server.event-threads: 4
client.event-threads: 4
performance.read-ahead: off
network.ping-timeout: 30
performance.quick-read: off
cluster.eager-lock: enable
performance.stat-prefetch: on
performance.low-prio-threads: 32
network.remote-dio: off
user.cifs: off
performance.io-cache: off
server.allow-insecure: on
features.shard: on
transport.address-family: inet
storage.owner-uid: 36
storage.owner-gid: 36
nfs.disable: on

If any other tweaks can be done, please let me know. Thank you!
Leo
-- Best regards, Leo David
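To see at a glance which settings a volume currently carries before experimenting further, the effective and the explicitly reconfigured options can be dumped from the CLI (the volume name is an example):

```sh
# Every option with its current effective value (gluster >= 3.8)
gluster volume get ssd_store all

# Only the options explicitly set on this volume
gluster volume info ssd_store
```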

Hi,
But isn't performance.strict-o-direct one of the options enabled by gdeploy during installation because it's supposed to give some sort of benefit?

Paolo

Hi,
Is performance.strict-o-direct=on a mandatory option to avoid data inconsistency, even though it has a pretty big impact on volume IOPS performance? Thank you!
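If it helps to quantify the trade-off before deciding, the option can be toggled on a test volume and the same fio job repeated after each change; a rough sketch, with the volume name as a placeholder:

```sh
# Measure with strict O_DIRECT semantics enforced
gluster volume set ssd_store performance.strict-o-direct on
# ...re-run the fio job inside a test VM and note the IOPS...

# Measure again with it relaxed
gluster volume set ssd_store performance.strict-o-direct off
# ...re-run the same fio job and compare...
```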

On Fri, Sep 14, 2018 at 3:35 PM Paolo Margara <paolo.margara@polito.it> wrote:
> But isn't performance.strict-o-direct one of the options enabled by gdeploy during installation because it's supposed to give some sort of benefit?

See https://lists.ovirt.org/archives/list/users@ovirt.org/message/VS764WDBR2PLGG... on why the option is set.

Thank you Sahina, I'm in that conversation too :). On the other hand... in this case, setting this option on would only make sense in multi-node setups, and not in single-instance ones, where we only have one hypervisor accessing the volume. Please correct me if this is wrong. Have a nice day,
Leo

On Wed, Feb 27, 2019 at 11:21 AM Leo David <leoalex@gmail.com> wrote:
> In this case, setting this option on would only make sense in multi-node setups, and not in single-instance ones, where we only have one hypervisor accessing the volume. Please correct me if this is wrong.

In single-instance deployments too, the option ensures that all writes issued with the O_DIRECT flag are flushed to disk and not cached.
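A quick way to feel the difference the cache makes is to compare a buffered write with a direct one on the mounted volume. A rough sketch follows; the mount path is an example, and the note about cache=none reflects the usual oVirt guest configuration rather than anything stated in this thread:

```sh
# Buffered writes: client-side caches can absorb them, so throughput looks high
dd if=/dev/zero of=/mnt/gluster-vmstore/dd.test bs=4k count=100000 conv=fsync

# Direct writes: each 4k block bypasses the caches and must reach the bricks,
# which is the behaviour strict-o-direct preserves for O_DIRECT writers
# (guests running with cache=none typically issue such writes themselves)
dd if=/dev/zero of=/mnt/gluster-vmstore/dd.test bs=4k count=100000 oflag=direct
```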

Yes, in the end that makes perfect sense. Thank you very much, Sahina!
Participants (4):
- Jayme
- Leo David
- Paolo Margara
- Sahina Bose