
Are there currently any known issues with using libgfapi in the latest stable version of oVirt in HCI deployments? I recently enabled it and have noticed a significant (over 4x) increase in I/O performance on my VMs. I'm concerned, however, since it does not seem to be an oVirt default setting. Is libgfapi considered safe and stable to use in oVirt 4.3 HCI?
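For context, a minimal sketch of how libgfapi is typically enabled on the engine host (this assumes a 4.3 cluster compatibility level; adjust `--cver` to match yours):

```shell
# Hedged sketch: turn on libgfapi disk access for cluster level 4.3.
# Run on the engine host; the setting only takes effect after an engine restart,
# and only for VMs started (or migrated) afterwards.
engine-config -s LibgfApiSupported=true --cver=4.3
systemctl restart ovirt-engine
```

You can confirm the current value with `engine-config -g LibgfApiSupported`.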

According to the GlusterFS Storage Domain feature page <https://www.ovirt.org/develop/release-management/features/storage/glusterfs-storage-domain.html>, the feature is not the default because it is incompatible with Live Storage Migration.
Best Regards,
Strahil Nikolov

I use libgfapi in production; the performance is worth a couple of quirks for me:
- Watch major version updates: they'll silently turn it off, because the engine starts using a new version variable.
- There is a VM/qemu security quirk that resets image ownership when the VM quits. It was supposedly fixed in 4.3.6, but it still happens to me; a cron'd chown keeps it under control.
- Some VMs trigger a libvirt/vdsmd interaction that results in a failed stats query, and the engine thinks my VMs are offline because stats gathering is stuck. I hoped a bug fix in 4.3.6 would take care of this too, but it didn't. It may be specific to my VMs; I'm still analyzing for specific file issues.
I need to spend some time doing a little more research and filing/updating some bug reports, but it's been a busy end of year so far…
-Darrell
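A sketch of what such a cron'd chown workaround might look like — the mount path and the 36:36 (vdsm:kvm) ownership are assumptions based on a typical oVirt Gluster storage-domain layout; substitute your own mount point:

```shell
# Hedged sketch of a periodic ownership fix for the qemu permission-reset quirk.
# /etc/cron.d/fix-gluster-ownership
# The path below is an assumption -- point it at your Gluster storage-domain
# mount; 36:36 is the conventional vdsm:kvm uid:gid on oVirt hosts.
*/5 * * * * root chown -R 36:36 /rhev/data-center/mnt/glusterSD/
```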

The performance is certainly attractive from the minimal testing I've done with it (almost a 5x I/O performance increase). In my environment I'm hitting the snapshot bug on replica 3 setups, so I cannot snapshot VMs; attempting to do so breaks the VM. That is a deal breaker for me, since the VM backup software I'm using relies on snapshots. The other issue, of course, is the lack of HA, which I could probably work around. Is there actually a timeline for when libgfapi is expected to work properly? Some of the bug reports I've seen date back to 2017.

I also use libgfapi in prod.
1. This is a pretty annoying issue; I wish engine-config would check whether it is already enabled and just keep it that way.
2. Edit /etc/libvirt/qemu.conf and set dynamic ownership to 0; that will stop the permission changes.
3. I don't see this error on any of my clusters, all using libgfapi.
I also have no issues using snapshots with libgfapi, but live migration between storage domains indeed does not work.
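For reference, the change in point 2 would look roughly like this in /etc/libvirt/qemu.conf on each host (a sketch; note that on oVirt hosts vdsm manages libvirt configuration, so a local edit may be overwritten on redeploy):

```ini
# Hedged sketch: stop libvirt from dynamically resetting image file ownership
# when a VM shuts down. Restart libvirtd afterwards for it to take effect.
dynamic_ownership = 0
```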

I believe the snapshot issue is only present with gluster replica 3 volumes. I can confirm it on my replica 3 cluster.

The bugs have been closed WONTFIX for a perceived lack of progress on the issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1633642
https://bugzilla.redhat.com/show_bug.cgi?id=1484227
However, following the related open bugs in qemu, I get the feeling things are getting ready for libgfapi to work in a replica 3 cluster. See:
https://bugzilla.redhat.com/show_bug.cgi?id=1465810
I wish someone would reopen those closed bugs so that the issue is not forgotten.
Guillaume Pavese
Systems and Network Engineer, Interactiv-Group

It would be nice to see some progress. I have no idea why there wouldn't be interest in adding this to RHV; the I/O performance increase I saw while testing was phenomenal.

Hey Jayme, everyone,
I saw that most related bugs are closed WONTFIX, with a comment that not enough of a performance increase was found. I think it would help if you could update the related Bugzilla entries with the performance results you observed. Anyone interested in this feature, or with benchmark results, should post them in:
https://bugzilla.redhat.com/show_bug.cgi?id=1484227
https://bugzilla.redhat.com/show_bug.cgi?id=1465810
Guillaume Pavese
Systems and Network Engineer, Interactiv-Group

I recently learned that the gluster community is archiving the libgfapi stuff. I think a lot of effort was spent on FUSE to get it faster. Has anyone compared the two recently?
Best Regards,
Strahil Nikolov

Hey everyone,

A couple of months ago I benchmarked FUSE and libgfapi performance. Read speed is more or less tolerable with both, but write performance on FUSE is a disaster. Here is a screenshot of the results: https://ibb.co/vBVB0WY

BR,
Aleksandr

I strongly invite you to post those results in Red Hat's Bugzilla entries.

Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group

On Wed, Feb 10, 2021 at 6:55 PM <scroodj@gmail.com> wrote:

In addition to posting benchmark results and interest in the Bugzilla entries mentioned previously, it could be useful to also post in this one, which should act as an RFE:

[gfapi] Support libgfapi access to the gluster storage domains
https://bugzilla.redhat.com/show_bug.cgi?id=1633642

Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group

On Thu, Feb 11, 2021 at 4:44 PM Guillaume Pavese <guillaume.pavese@interactiv-group.com> wrote:

Hello,

There are many issues after enabling libgfapi: attempting to start a VM can fail, and I think taking a VM snapshot, live storage migration (as Strahil Nikolov mentioned), and VM high availability are affected as well. For these reasons it is not enabled by default in oVirt.

Regards,
Ritesh Chikatwar

On Thu, Feb 11, 2021 at 1:29 PM Guillaume Pavese <guillaume.pavese@interactiv-group.com> wrote:
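For anyone who wants to check or toggle this non-default setting on their own engine: per the GlusterFS Storage Domain feature page, the feature is controlled by the `LibgfApiSupported` engine-config key, set per cluster compatibility level. A sketch, assuming a 4.3 cluster level (run on the engine host, and note it only affects VMs started afterwards):

```shell
# Show the current value of the libgfapi flag for each cluster level
engine-config -g LibgfApiSupported

# Enable it for cluster level 4.3, then restart the engine to apply
engine-config -s LibgfApiSupported=true --cver=4.3
systemctl restart ovirt-engine
```

This per-cluster-version scoping is also why, as Darrell noted earlier in the thread, a major version upgrade can silently turn the feature off: the engine starts reading the flag for the new cluster level, where it is still at its default of false.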

Hi Ritesh Chikatwar,

Those bugs mostly all depended on an old (2011!) qemu bug that took a very long time to be resolved: https://bugzilla.redhat.com/show_bug.cgi?id=760547

In the meantime, however, the oVirt bugs you mention were closed or deferred for reasons like "no activity on the blocking bug for a long time" and "not enough performance gain anyway". So, since the blocking bugs have at last been resolved, and since several users report strong performance gains, contrary to what was measured by Red Hat, it seems justified to reevaluate the situation.

Best,
Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group

On Thu, Feb 11, 2021 at 5:36 PM Ritesh Chikatwar <rchikatw@redhat.com> wrote:

Hi all,

I can confirm that when using libgfapi with oVirt + Gluster replica 3 (hyperconverged), read and write performance under a VM was 4 to 5 times better than when using FUSE.

--------------------------------------------------------------------------------------------------
Tested with CentOS 6 and 7 VMs under the hyperconverged cluster. HW:
--------------------------------------------------------------------------------------------------
oVirt 4.3.10 hypervisors with replica 3:
- 256 GB RAM
- 32 total cores with hyperthreading
- RAID 1 (2 HDDs) for OS
- RAID 6 (9 SSDs) for Gluster; also tested with RAID 10 and JBOD, all provided similar improvements with libgfapi (4 to 5 times better), replica 3 volumes
- 10GbE NICs, 1 for ovirtmgmt and 1 for Gluster
- Ran tests using fio

-------------------------------------------------------------------------------
Test results using FUSE (1500 MTU), took about 4-5 min:
-------------------------------------------------------------------------------
[root@test3 mail]# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.0.13
Starting 1 process
test: Laying out IO file(s) (1 file(s) / 4096MB)
Jobs: 1 (f=1): [m] [100.0% done] [11984K/4079K/0K /s] [2996 /1019 /0 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=8894: Mon Mar 29 10:05:35 2021
  read : io=3070.5MB, bw=12286KB/s, iops=3071, runt=255918msec <------------------
  write: io=1025.6MB, bw=4103.5KB/s, iops=1025, runt=255918msec <------------------
  cpu : usr=1.84%, sys=10.50%, ctx=859129, majf=0, minf=19
  IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
  issued : total=r=786043/w=262533/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
  READ: io=3070.5MB, aggrb=12285KB/s, minb=12285KB/s, maxb=12285KB/s, mint=255918msec, maxt=255918msec
  WRITE: io=1025.6MB, aggrb=4103KB/s, minb=4103KB/s, maxb=4103KB/s, mint=255918msec, maxt=255918msec
Disk stats (read/write):
  dm-3: ios=785305/262494, merge=0/0, ticks=492833/15794537, in_queue=16289356, util=100.00%, aggrios=786024/262789, aggrmerge=19/45, aggrticks=492419/15811831, aggrin_queue=16303803, aggrutil=100.00%
  sda: ios=786024/262789, merge=19/45, ticks=492419/15811831, in_queue=16303803, util=100.00%

--------------------------------------------------------------------------------------------------------------------------
Test results using FUSE (9000 MTU), took about 4-5 min // Did not see much of a difference:
--------------------------------------------------------------------------------------------------------------------------
[root@test3 mail]# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.0.13
Starting 1 process
Jobs: 1 (f=1): [m] [100.0% done] [14956K/4596K/0K /s] [3739 /1149 /0 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=2193: Mon Mar 29 10:22:44 2021
  read : io=3070.8MB, bw=12882KB/s, iops=3220, runt=244095msec <------------------
  write: io=1025.3MB, bw=4300.1KB/s, iops=1075, runt=244095msec <------------------
  cpu : usr=1.85%, sys=10.43%, ctx=849742, majf=0, minf=21
  IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
  issued : total=r=786117/w=262459/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
  READ: io=3070.8MB, aggrb=12882KB/s, minb=12882KB/s, maxb=12882KB/s, mint=244095msec, maxt=244095msec
  WRITE: io=1025.3MB, aggrb=4300KB/s, minb=4300KB/s, maxb=4300KB/s, mint=244095msec, maxt=244095msec
Disk stats (read/write):
  dm-3: ios=785805/262493, merge=0/0, ticks=511951/15009580, in_queue=15523355, util=100.00%, aggrios=786105/262713, aggrmerge=18/19, aggrticks=511235/15026104, aggrin_queue=15536995, aggrutil=100.00%
  sda: ios=786105/262713, merge=18/19, ticks=511235/15026104, in_queue=15536995, util=100.00%

--------------------------------------------------------------------------------------
Test results using LIBGFAPI (9000 MTU), took about 38 seconds:
--------------------------------------------------------------------------------------
[root@vmm04 ~]# ping -I glusternet -M do -s 8972 192.168.1.6
PING 192.168.1.6 (192.168.1.6) from 192.168.1.4 glusternet: 8972(9000) bytes of data.
8980 bytes from 192.168.1.6: icmp_seq=1 ttl=64 time=0.300 ms
[root@vmm04 ~]# ping -I ovirtmgmt -M do -s 8972 192.168.0.6
PING 192.168.0.6 (192.168.0.6) from 192.168.0.4 ovirtmgmt: 8972(9000) bytes of data.
8980 bytes from 192.168.0.6: icmp_seq=1 ttl=64 time=0.171 ms
[root@test3 mail]# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.0.13
Starting 1 process
Jobs: 1 (f=1): [m] [100.0% done] [25878K/8599K/0K /s] [6469 /2149 /0 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=2188: Mon Mar 29 10:43:00 2021
  read : io=3071.2MB, bw=80703KB/s, iops=20175, runt= 38969msec <------------------
  write: io=1024.9MB, bw=26929KB/s, iops=6732, runt= 38969msec <------------------
  cpu : usr=8.00%, sys=41.19%, ctx=374931, majf=0, minf=20
  IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
  issued : total=r=786224/w=262352/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
  READ: io=3071.2MB, aggrb=80702KB/s, minb=80702KB/s, maxb=80702KB/s, mint=38969msec, maxt=38969msec
  WRITE: io=1024.9MB, aggrb=26929KB/s, minb=26929KB/s, maxb=26929KB/s, mint=38969msec, maxt=38969msec
Disk stats (read/write):
  dm-3: ios=784858/261925, merge=0/0, ticks=1403884/1028357, in_queue=2433435, util=99.88%, aggrios=786155/262410, aggrmerge=70/51, aggrticks=1409868/1039790, aggrin_queue=2449280, aggrutil=99.82%
  sda: ios=786155/262410, merge=70/51, ticks=1409868/1039790, in_queue=2449280, util=99.82%

So I do agree with Guillaume, it would be worth re-evaluating the situation :)

Regards,
Adrian
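Putting the headline numbers side by side, the libgfapi-vs-FUSE ratio follows directly from the fio bandwidth figures in these results. A quick sketch (taking the 1500 MTU FUSE run and the 9000 MTU libgfapi run, since the MTU made little difference for FUSE):

```python
# Bandwidths reported by fio in the results above, in KB/s
fuse_read, fuse_write = 12286, 4103.5    # FUSE, 1500 MTU run
gfapi_read, gfapi_write = 80703, 26929   # libgfapi, 9000 MTU run

read_speedup = gfapi_read / fuse_read
write_speedup = gfapi_write / fuse_write

print(f"read speedup:  {read_speedup:.1f}x")   # ~6.6x
print(f"write speedup: {write_speedup:.1f}x")  # ~6.6x
```

By this measure the gap is around 6.6x for both reads and writes, which is even larger than the conservative "4 to 5 times" estimate, and consistent with the runtime ratio (255918 ms vs 38969 ms).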

Hi!

I have also tested with the latest version of oVirt 4.4, and I see that write speed is 4 times higher than with FUSE. I think this is an important reason to add snapshot support with libgfapi.

Regards,
Alex
participants (9)
- adrianquintero@gmail.com
- Alex McWhirter
- Alexandr Mikhailov
- Darrell Budic
- Guillaume Pavese
- Jayme
- Ritesh Chikatwar
- scroodj@gmail.com
- Strahil Nikolov