[ovirt-users] Monitoring disk I/O

Ernest Beinrohr Ernest.Beinrohr at axonpro.sk
Fri Jan 20 07:20:17 UTC 2017


On 19.01.2017 21:42, Michael Watters wrote:
> Does ovirt have any way to monitor disk I/O for each VM or disk in a
> storage pool?  I am receiving disk latency warnings and would like to
> know which VMs are causing the most disk I/O.
>
We have homebrew per-VM I/O monitoring: libvirt puts each VM into its own
cgroup, which records CPU and I/O statistics. It's a little tricky to follow a
VM while it migrates between hosts, but once that's done we have CPU and I/O
graphs for each VM.
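For the curious, the blkio counters look roughly like this (the path and the
numbers here are only an illustration; 253 is the device-mapper major used for
the VM disks, one line per device and operation):

# cat /sys/fs/cgroup/blkio/machine.slice/machine-qemu\x2d1\x2dmyvm.scope/blkio.throttle.io_serviced
253:4 Read 123456
253:4 Write 789012
253:4 Sync 900000
253:4 Async 12468
253:4 Total 912468
Total 912468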

Basically, on each hypervisor we periodically poll the cgroup counters for all of its VMs:

for vm in $vms
do
        (
        echo -n "$HOST:$vm:"
        # systemd escapes dashes in the machine scope directory name as \x2d,
        # so escape every dash in the VM name before globbing
        vm=${vm//-/\\\\x2d}
        # read requests on the VM's device-mapper disks (major 253)
        egrep -v "$IGNORED_REGEX" /sys/fs/cgroup/blkio/machine.slice/machine-qemu*$vm*/blkio.throttle.io_serviced | grep "^253:.*Read" | cut -f3 -d " " | paste -sd+ | bc
        echo -n ":"
        # write requests
        egrep -v "$IGNORED_REGEX" /sys/fs/cgroup/blkio/machine.slice/machine-qemu*$vm*/blkio.throttle.io_serviced | grep "^253:.*Write" | cut -f3 -d " " | paste -sd+ | bc
        echo -n ":"
        # bytes read
        egrep -v "$IGNORED_REGEX" /sys/fs/cgroup/blkio/machine.slice/machine-qemu*$vm*/blkio.throttle.io_service_bytes | grep "^253:.*Read" | cut -f3 -d " " | paste -sd+ | bc
        echo -n ":"
        # bytes written
        egrep -v "$IGNORED_REGEX" /sys/fs/cgroup/blkio/machine.slice/machine-qemu*$vm*/blkio.throttle.io_service_bytes | grep "^253:.*Write" | cut -f3 -d " " | paste -sd+ | bc
        echo -n ":"
        # total CPU time consumed by the VM, in nanoseconds
        cat /sys/fs/cgroup/cpuacct/machine.slice/*$vm*/cpuacct.usage
        ) | tr -d '\n'
        echo ""
done
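
Each output line ends up as host:vm:reads:writes:bytes_read:bytes_written:cpu_nanoseconds.
The $HOST, $vms and $IGNORED_REGEX variables have to be set up beforehand; a
minimal sketch of how they could be filled in (the ignore regex here is only a
placeholder, put in whatever device lines you want skipped):

HOST=$(hostname -s)
# all domains currently running on this hypervisor
vms=$(virsh -r list --name)
# blkio lines to skip, e.g. a disk you don't want counted (placeholder value)
IGNORED_REGEX='^253:0 '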

and then we graph the collected counters with MRTG.
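
As a rough sketch of the MRTG side (the wrapper script name and labels below
are made up, not part of the collector above): MRTG can call an external
script via backticks in a Target line, and that script must print the two
counter values plus an uptime string and a name, one per line. MRTG then
treats the cumulative byte counters as counters and graphs the rate:

Target[myvm_disk]: `/usr/local/bin/vm-disk-mrtg.sh myvm`
MaxBytes[myvm_disk]: 125000000
Title[myvm_disk]: myvm disk throughput
Options[myvm_disk]: growright, nopercent
YLegend[myvm_disk]: bytes per second

/usr/local/bin/vm-disk-mrtg.sh would simply pull the bytes_read and
bytes_written fields for one VM out of the collected data.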
-- 
Ernest Beinrohr, AXON PRO
Ing <http://www.beinrohr.sk/ing.php>, RHCE <http://www.beinrohr.sk/rhce.php>, RHCVA <http://www.beinrohr.sk/rhce.php>, LPIC <http://www.beinrohr.sk/lpic.php>, VCA <http://www.beinrohr.sk/vca.php>,
+421-2-62410360 +421-903-482603