On Mon, Jul 3, 2017 at 12:27 AM, Darrell Budic <budic(a)onholyground.com>
wrote:
It seems vdsmd under 4.1.x (or something under its control) changes the
disk schedulers when it starts or when a host node is activated, and I'd like
to avoid this. Is it preventable? Or configurable anywhere? This was probably
happening under earlier versions, but I just noticed it while upgrading some
converged boxes today.
It likes to set deadline, which I understand is the RHEL default for
CentOS 7 on non-SATA disks. But I'd rather have NOOP on my SSDs because
they're SSDs, and NOOP on my SATA spinning platters because ZFS does its own
scheduling, and running anything other than NOOP can cause increased CPU
utilization for no gain. It's also fighting ZFS, which tries to set NOOP on
whole disks it controls, and fighting my kernel command line setting.
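
(Illustrative only, not from the thread: a minimal sketch of checking or
overriding the scheduler through the standard /sys/block/<dev>/queue/scheduler
sysfs interface. The device names are placeholder examples and writing
requires root; this is just a sketch, not a vdsm-provided mechanism.)

#!/usr/bin/env python3
from pathlib import Path

def current_scheduler(dev: str) -> str:
    # The active scheduler is the bracketed entry, e.g. "noop [deadline] cfq".
    text = Path(f"/sys/block/{dev}/queue/scheduler").read_text().strip()
    return text.split("[")[1].split("]")[0] if "[" in text else text

def set_scheduler(dev: str, scheduler: str = "noop") -> None:
    # Writing the scheduler name into sysfs switches it immediately (root only).
    Path(f"/sys/block/{dev}/queue/scheduler").write_text(scheduler + "\n")

if __name__ == "__main__":
    for dev in ("sda", "sdb"):            # example devices only
        print(dev, current_scheduler(dev))
        # set_scheduler(dev, "noop")      # uncomment to apply
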
We've stopped doing it in 4.1.1
(https://bugzilla.redhat.com/show_bug.cgi?id=1381219).
Y.
Thanks,
-Darrell
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users