
It appears that VDSM changes the following params: vm.dirty_ratio = 5 vm.dirty_background_ratio = 2
Any idea why? Because we use cache=none it's irrelevant anyway?
TIA, Y.

On 29/11/16 22:01 +0200, Yaniv Kaul wrote:
It appears that VDSM changes the following params: vm.dirty_ratio = 5 vm.dirty_background_ratio = 2
Any idea why? Because we use cache=none it's irrelevant anyway?
It's not really irrelevant, the host still uses disk cache. Anyway, there is BZ[1] with a presentation[2] that (imho reasonably) states:
"Reduce dirty page limits in KVM host to allow direct I/O writer VMs to compete successfully with buffered writer processes for storage access"
I wonder why virtual-host tuned profile doesn't contain these values:
$ grep vm.dirty /usr/lib/tuned/virtual-host/tuned.conf
vm.dirty_background_ratio = 5
[1] https://bugzilla.redhat.com/show_bug.cgi?id=740887
[2] http://perf1.lab.bos.redhat.com/bengland/laptop/rhev/rhev-vm-rsptime.pdf
TIA, Y.
_______________________________________________ Devel mailing list Devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/devel
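For reference, the two sysctls under discussion can be inspected directly on a host. A minimal, read-only sketch (assumes a Linux /proc; it does not change anything):

```shell
#!/bin/sh
# Read the host-wide dirty page limits that VDSM reportedly sets
# to 5 and 2 respectively. Read-only inspection.
ratio=$(cat /proc/sys/vm/dirty_ratio)
bg_ratio=$(cat /proc/sys/vm/dirty_background_ratio)
echo "vm.dirty_ratio = ${ratio}"
echo "vm.dirty_background_ratio = ${bg_ratio}"
```

On a host managed by VDSM this would be expected to print 5 and 2; on a stock kernel of that era, 20 and 10.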

On Wed, Nov 30, 2016 at 9:48 AM, Martin Polednik <mpolednik@redhat.com> wrote:
On 29/11/16 22:01 +0200, Yaniv Kaul wrote:
It appears that VDSM changes the following params: vm.dirty_ratio = 5 vm.dirty_background_ratio = 2
Any idea why? Because we use cache=none it's irrelevant anyway?
It's not really irrelevant, the host still uses disk cache. Anyway, there is BZ[1] with a presentation[2] that (imho reasonably) states:
"Reduce dirty page limits in KVM host to allow direct I/O writer VMs to compete successfully with buffered writer processes for storage access"
Thanks, but it really makes no sense to me. The direct IO by the VMs is going to a different storage than what the host is writing to, in most cases. The host would write to the local disk, the VMs - to a shared storage, across NFS or block layer or so. Moreover, their IO is not buffered. There is very little IO coming from the host itself, generally (I hope so!).
Partially unrelated - the trend today is actually to put NOOP on the VMs - the deadline is quite meaningless, as the host scheduler will reschedule anyway as it sees fit. Most likely it is also a deadline scheduler (but could be NOOP as well if it's an all flash array, for example). Therefore there is no reason for anything but simple NOOP on the VMs themselves.
In short, I think it's an outdated decision that perhaps should be revisited. Not urgent, though. Y.
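As a side note to the scheduler point, the active elevator can be checked per block device; the entry shown in [brackets] is the one in use. A rough sketch (device names vary; on blk-mq kernels NOOP appears as 'none'):

```shell
#!/bin/sh
# Print the active I/O scheduler for every visible block device.
count=0
for q in /sys/block/*/queue/scheduler; do
    [ -e "$q" ] || continue    # glob did not match (e.g. inside a container)
    dev=${q#/sys/block/}
    dev=${dev%/queue/scheduler}
    printf '%s: %s\n' "$dev" "$(cat "$q")"
    count=$((count + 1))
done
echo "devices inspected: $count"
```

Writing a scheduler name into the same sysfs file switches the elevator for that device at runtime.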
I wonder why virtual-host tuned profile doesn't contain these values:
$ grep vm.dirty /usr/lib/tuned/virtual-host/tuned.conf
vm.dirty_background_ratio = 5
[1] https://bugzilla.redhat.com/show_bug.cgi?id=740887
[2] http://perf1.lab.bos.redhat.com/bengland/laptop/rhev/rhev-vm-rsptime.pdf
TIA,
Y.

On 30/11/16 09:13, Yaniv Kaul wrote:
Thanks, but it really makes no sense to me. The direct IO by the VMs is going to a different storage than what the host is writing to, in most cases. The host would write to the local disk, the VMs - to a shared storage, across NFS or block layer or so. Moreover, their IO is not buffered. There is very little IO coming from the host itself, generally (I hope so!).
Partially unrelated - the trend today is actually to put NOOP on the VMs - the deadline is quite meaningless, as the host scheduler will reschedule anyway as it sees fit. Most likely it is also a deadline scheduler (but could be NOOP as well if it's an all flash array, for example). Therefore there is no reason for anything but simple NOOP on the VMs themselves.
In short, I think it's an outdated decision that perhaps should be revisited. Not urgent, though. Y.
I know it's not the most cared about use case, but I'd like to add that this might affect local storage domains, which I happen to use a lot, and maybe others too.
-- Mit freundlichen Grüßen / Regards
Sven Kieske
Systemadministrator Mittwald CM Service GmbH & Co. KG Königsberger Straße 6 32339 Espelkamp T: +495772 293100 F: +495772 293333 https://www.mittwald.de Geschäftsführer: Robert Meyer St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen

On 30/11/16 13:11 +0100, Sven Kieske wrote:
On 30/11/16 09:13, Yaniv Kaul wrote:
Thanks, but it really makes no sense to me. The direct IO by the VMs is going to a different storage than what the host is writing to, in most cases. The host would write to the local disk, the VMs - to a shared storage, across NFS or block layer or so. Moreover, their IO is not buffered. There is very little IO coming from the host itself, generally (I hope so!).
Partially unrelated - the trend today is actually to put NOOP on the VMs - the deadline is quite meaningless, as the host scheduler will reschedule anyway as it sees fit. Most likely it is also a deadline scheduler (but could be NOOP as well if it's an all flash array, for example). Therefore there is no reason for anything but simple NOOP on the VMs themselves.
In short, I think it's an outdated decision that perhaps should be revisited. Not urgent, though. Y.
I know it's not the most cared about use case, but I'd like to add that this might affect local storage domains, which I happen to use a lot, and maybe others too.
Current values seem to be optimal; considering this use case, I'd definitely leave them in place.

On Wed, Nov 30, 2016 at 2:11 PM, Sven Kieske <s.kieske@mittwald.de> wrote:
On 30/11/16 09:13, Yaniv Kaul wrote:
Thanks, but it really makes no sense to me. The direct IO by the VMs is going to a different storage than what the host is writing to, in most cases. The host would write to the local disk, the VMs - to a shared storage, across NFS or block layer or so. Moreover, their IO is not buffered. There is very little IO coming from the host itself, generally (I hope so!).
Partially unrelated - the trend today is actually to put NOOP on the VMs - the deadline is quite meaningless, as the host scheduler will reschedule anyway as it sees fit. Most likely it is also a deadline scheduler (but could be NOOP as well if it's an all flash array, for example). Therefore there is no reason for anything but simple NOOP on the VMs themselves.
In short, I think it's an outdated decision that perhaps should be revisited. Not urgent, though. Y.
I know it's not the most cared about use case, but I'd like to add that this might affect local storage domains, which I happen to use a lot, and maybe others too.
In which case we should change the default for local storage. BTW, when using Gluster, they have returned these values to the originals... Y.
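If local-storage hosts do end up needing different defaults, one way to override VDSM's values would be a sysctl drop-in. A sketch only, assuming the stock kernel defaults of 20/10 from that era (the path and values here are illustrative, not an oVirt convention):

```
# /etc/sysctl.d/99-local-storage-dirty.conf  (illustrative path)
# Raise the dirty limits back toward the kernel defaults on hosts
# serving local storage domains; VDSM lowers them to 5/2 to keep
# direct-I/O VMs competitive on shared storage.
vm.dirty_ratio = 20
vm.dirty_background_ratio = 10
```

Applied with `sysctl -p` or on the next boot; VDSM would re-apply its own values unless it is also configured to skip them.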

On 30/11/16 08:48, Martin Polednik wrote:
It's not really irrelevant, the host still uses disk cache. Anyway, there is BZ[1] with a presentation[2] that (imho reasonably) states:
"Reduce dirty page limits in KVM host to allow direct I/O writer VMs to compete successfully with buffered writer processes for storage access"
I wonder why virtual-host tuned profile doesn't contain these values:
$ grep vm.dirty /usr/lib/tuned/virtual-host/tuned.conf
vm.dirty_background_ratio = 5
[1] https://bugzilla.redhat.com/show_bug.cgi?id=740887
[2] http://perf1.lab.bos.redhat.com/bengland/laptop/rhev/rhev-vm-rsptime.pdf
Could you share [2] with the wider community? This would be awesome!

On 30/11/16 13:10 +0100, Sven Kieske wrote:
On 30/11/16 08:48, Martin Polednik wrote:
It's not really irrelevant, the host still uses disk cache. Anyway, there is BZ[1] with a presentation[2] that (imho reasonably) states:
"Reduce dirty page limits in KVM host to allow direct I/O writer VMs to compete successfully with buffered writer processes for storage access"
I wonder why virtual-host tuned profile doesn't contain these values:
$ grep vm.dirty /usr/lib/tuned/virtual-host/tuned.conf
vm.dirty_background_ratio = 5
[1] https://bugzilla.redhat.com/show_bug.cgi?id=740887
[2] http://perf1.lab.bos.redhat.com/bengland/laptop/rhev/rhev-vm-rsptime.pdf
Could you share [2] with the wider community? This would be awesome!
Sorry, I've totally missed the fact that it's an internal link (unfortunately publicly visible on the BZ). I believe it's slightly outdated, but let's ask the author.
Ben, is the document[2] somehow still valid, and could it be made publicly available?
Re-referencing for completeness:
[2] http://perf1.lab.bos.redhat.com/bengland/laptop/rhev/rhev-vm-rsptime.pdf

I uploaded the article to: https://s3.amazonaws.com/ben.england/rhev-vm-rsptime.pdf
I had just gone to work for Red Hat back in 2011, so I made the mistake of including a company-internal URL in a public-facing document. We can update the bz with the above link if you want. This article is ANCIENT (2011), so while the basic approach should still be relevant, some of the specifics may have changed. This was a RHEV-on-NFS configuration.
As for why dirty ratio lowering is not part of a tuned profile: there is a debate in the performance team about dirty ratio, with some advocating for higher dirty ratios that help some application workloads (example: writing /tmp files). I'd stand by this article for the particular configuration that was being studied, though, and I think the basic approach of controlling latency and improving fairness by managing queue depths is very relevant today.
cc'ed Sanjay Rao, who continues to be a RHEV expert in the perf team (I am no longer working on it).
-ben
----- Original Message -----
From: "Martin Polednik" <mpolednik@redhat.com> To: "Sven Kieske" <s.kieske@mittwald.de> Cc: devel@ovirt.org, "Ben" <bengland@redhat.com> Sent: Wednesday, November 30, 2016 7:22:05 AM Subject: Re: VDSM changes Linux memory dirty ratios - why?
On 30/11/16 13:10 +0100, Sven Kieske wrote:
On 30/11/16 08:48, Martin Polednik wrote:
It's not really irrelevant, the host still uses disk cache. Anyway, there is BZ[1] with a presentation[2] that (imho reasonably) states:
"Reduce dirty page limits in KVM host to allow direct I/O writer VMs to compete successfully with buffered writer processes for storage access"
I wonder why virtual-host tuned profile doesn't contain these values:
$ grep vm.dirty /usr/lib/tuned/virtual-host/tuned.conf
vm.dirty_background_ratio = 5
[1]https://bugzilla.redhat.com/show_bug.cgi?id=740887 [2]http://perf1.lab.bos.redhat.com/bengland/laptop/rhev/rhev-vm-rsptime.pdf
Could you share [2] with the wider community? This would be awesome!
Sorry, I've totally missed the fact that it's an internal link (unfortunately publicly visible on the BZ). I believe it's slightly outdated, but let's ask the author.
Ben, is the document[2] somehow still valid and could it be made publicly available?
Re-referencing for completeness: [2]http://perf1.lab.bos.redhat.com/bengland/laptop/rhev/rhev-vm-rsptime.pdf

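To get a feel for the magnitudes these percentage knobs imply, they can be translated into bytes. A rough sketch only: the kernel actually bases the percentage on available (free plus reclaimable) memory, so MemTotal is used here merely as an approximation:

```shell
#!/bin/sh
# Approximate how much dirty data can accumulate before background
# writeback starts, and before writers are throttled outright.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
ratio=$(cat /proc/sys/vm/dirty_ratio)
bg_ratio=$(cat /proc/sys/vm/dirty_background_ratio)
echo "background writeback starts at ~$((mem_kb * bg_ratio / 100)) kB dirty"
echo "writers throttled at ~$((mem_kb * ratio / 100)) kB dirty"
```

On a 64 GB host the 5/2 values cap dirty data around 3 GB, versus roughly 13 GB at the 20/10 kernel defaults, which is the fairness-versus-throughput trade-off being debated.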
On Nov 30, 2016 4:48 PM, "Ben England" <bengland@redhat.com> wrote:
I uploaded the article to: https://s3.amazonaws.com/ben.england/rhev-vm-rsptime.pdf
I had just gone to work for Red Hat back in 2011, so I made the mistake of including a company-internal URL in a public-facing document. We can update the bz with the above link if you want. This article is ANCIENT (2011), so while the basic approach should still be relevant, some of the specifics may have changed. This was a RHEV-on-NFS configuration.
As for why dirty ratio lowering is not part of a tuned profile: there is a debate in the performance team about dirty ratio, with some advocating for higher dirty ratios that help some application workloads (example: writing /tmp files). I'd stand by this article for the particular configuration that was being studied, though, and I think the basic approach of controlling latency and improving fairness by managing queue depths is very relevant today.
cc'ed Sanjay Rao, who continues to be a RHEV expert in the perf team (I am no longer working on it).
-ben
Ben, any idea why, at the time, you added a workload from the host that was competing with the VMs? Usually the only IO load from the hosts is the logs... nothing serious, really. Y.
----- Original Message -----
From: "Martin Polednik" <mpolednik@redhat.com> To: "Sven Kieske" <s.kieske@mittwald.de> Cc: devel@ovirt.org, "Ben" <bengland@redhat.com> Sent: Wednesday, November 30, 2016 7:22:05 AM Subject: Re: VDSM changes Linux memory dirty ratios - why?
On 30/11/16 13:10 +0100, Sven Kieske wrote:
On 30/11/16 08:48, Martin Polednik wrote:
It's not really irrelevant, the host still uses disk cache. Anyway, there is BZ[1] with a presentation[2] that (imho reasonably) states:
"Reduce dirty page limits in KVM host to allow direct I/O writer VMs to compete successfully with buffered writer processes for storage access"
I wonder why virtual-host tuned profile doesn't contain these values:
$ grep vm.dirty /usr/lib/tuned/virtual-host/tuned.conf
vm.dirty_background_ratio = 5
[1] https://bugzilla.redhat.com/show_bug.cgi?id=740887
[2] http://perf1.lab.bos.redhat.com/bengland/laptop/rhev/rhev-vm-rsptime.pdf
Could you share [2] with the wider community? This would be awesome!
Sorry, I've totally missed the fact that it's an internal link (unfortunately publicly visible on the BZ). I believe it's slightly outdated, but let's ask the author.
Ben, is the document[2] somehow still valid and could it be made publicly available?
Re-referencing for completeness:
[2] http://perf1.lab.bos.redhat.com/bengland/laptop/rhev/rhev-vm-rsptime.pdf
participants (4)
- Ben England
- Martin Polednik
- Sven Kieske
- Yaniv Kaul