<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Jul 3, 2017 at 12:27 AM, Darrell Budic <span dir="ltr"><<a href="mailto:budic@onholyground.com" target="_blank">budic@onholyground.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">It seems vdsmd under 4.1.x (or something under its control) changes the disk schedulers when it starts or a host node is activated, and I’d like to avoid this. Is it preventable? Or configurable anywhere? This was probably happening under earlier versions, but I just noticed it while upgrading some converged boxes today.<br>
<br>
It likes to set deadline, which I understand is the RHEL default for CentOS 7 on non-SATA disks. But I’d rather have NOOP on my SSDs because they’re SSDs, and NOOP on my SATA spinning platters because ZFS does its own scheduling, and running anything other than NOOP can cause increased CPU utilization for no gain. It’s also fighting ZFS, which tries to set NOOP on whole disks it controls, and my kernel command line setting.<br></blockquote><div><br></div><div>We've stopped doing it in 4.1.1 (<a href="https://bugzilla.redhat.com/show_bug.cgi?id=1381219">https://bugzilla.redhat.com/show_bug.cgi?id=1381219</a> )</div><div>Y.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
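</blockquote><div><br></div><div>For hosts still running an affected vdsm, a udev rule is one common way to keep the scheduler pinned regardless of what writes to sysfs after boot — a sketch only, assuming sd* device names; the file name 60-io-scheduler.rules and the KERNEL match are examples to adapt:</div>

```
# /etc/udev/rules.d/60-io-scheduler.rules  (file name is an example)
# Write "noop" to the queue/scheduler sysfs attribute for every sd* disk;
# narrow the KERNEL match to just the devices ZFS or your SSDs use.
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="noop"
```

<div>It can be applied without a reboot via <code>udevadm trigger --subsystem-match=block</code>, and it re-applies automatically on device change events.</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">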
<br>
Thanks,<br>
<br>
-Darrell<br>
_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org">Users@ovirt.org</a><br>
<a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a><br>
</blockquote></div><br></div></div>