[ovirt-users] Empty cgroup files on centos 7.3 host
Yaniv Kaul
ykaul at redhat.com
Sun Dec 17 16:41:25 UTC 2017
On Fri, Jun 30, 2017 at 1:55 PM, Florian Schmid <fschmid at ubimet.com> wrote:
> Hi Yaniv,
>
> thank you for your answer! I didn't know that oVirt already ships such a
> monitoring tool.
>
> We will surely give it a try, but we already have a monitoring tool in our
> environment; that's why I wanted to add those values there, too.
>
> How does collectd get this data from libvirt when the corresponding
> cgroup values are empty?
>
Specifically for I/O statistics, VDSM reads the values from libvirt [1].
cgroup limiting is possible if you configure it, but it is unrelated to
these statistics.
Also note that CentOS 7.3 is a bit ancient; I'm not sure how well it is
supported with the latest 4.1, which I'm sure will pull new dependencies
from 7.4 (for example, libvirt!).
Y.
[1]
https://gerrit.ovirt.org/gitweb?p=vdsm.git;a=blob;f=lib/vdsm/virt/vmstats.py;h=5043e8b3e44457cec99205939472fda14bd130a8;hb=HEAD#l458
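The approach above can be sketched in a few lines of Python: instead of the
cgroup files, query libvirt directly through its Python bindings, which is
essentially what VDSM and collectd's virt plugin do. A minimal sketch,
assuming the libvirt-python package is installed on the host; the helper
names below are hypothetical, only `libvirt.open()`, `lookupByName()` and
`blockStats()` are the real libvirt API:

```python
def summarize_block_stats(stats):
    """Map the 5-tuple returned by virDomainBlockStats to named fields."""
    rd_req, rd_bytes, wr_req, wr_bytes, errs = stats
    return {
        "readOps": rd_req,
        "readBytes": rd_bytes,
        "writeOps": wr_req,
        "writeBytes": wr_bytes,
        "errors": errs,
    }


def vm_disk_stats(domain_name, disk="vda", uri="qemu:///system"):
    """Read per-disk I/O counters for one VM straight from libvirt.

    Hypothetical helper; requires libvirt-python on the hypervisor host.
    """
    import libvirt  # deferred import: only needed on the host itself

    conn = libvirt.open(uri)
    try:
        dom = conn.lookupByName(domain_name)
        # blockStats() returns (rd_req, rd_bytes, wr_req, wr_bytes, errs)
        return summarize_block_stats(dom.blockStats(disk))
    finally:
        conn.close()
```

Since the counters come from QEMU via libvirt, this works even when the
blkio cgroup files show nothing.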
>
> BR Florian
>
>
> ------------------------------
> *From: *"Yaniv Kaul" <ykaul at redhat.com>
> *To: *"Florian Schmid" <fschmid at ubimet.com>
> *CC: *"users" <users at ovirt.org>
> *Sent: *Tuesday, June 27, 2017 09:08:51
> *Subject: *Re: [ovirt-users] Empty cgroup files on centos 7.3 host
>
>
>
> On Mon, Jun 26, 2017 at 11:03 PM, Florian Schmid <fschmid at ubimet.com>
> wrote:
>
>> Hi,
>>
>> I wanted to monitor disk I/O and R/W on all of our oVirt CentOS 7.3
>> hypervisor hosts, but it looks like all those files are empty.
>>
>
> We have a very nice integration with Elastic-based monitoring and logging
> - why not use it?
> On the host, we use collectd for monitoring.
> See http://www.ovirt.org/develop/release-management/features/engine/metrics-store/
>
> Y.
>
>
>> For example:
>> ls -al /sys/fs/cgroup/blkio/machine.slice/machine-qemu\\x2d14\\x2dHostedEngine.scope/
>> total 0
>> drwxr-xr-x. 2 root root 0 May 30 10:09 .
>> drwxr-xr-x. 16 root root 0 Jun 26 09:25 ..
>> -r--r--r--. 1 root root 0 May 30 10:09 blkio.io_merged
>> -r--r--r--. 1 root root 0 May 30 10:09 blkio.io_merged_recursive
>> -r--r--r--. 1 root root 0 May 30 10:09 blkio.io_queued
>> -r--r--r--. 1 root root 0 May 30 10:09 blkio.io_queued_recursive
>> -r--r--r--. 1 root root 0 May 30 10:09 blkio.io_service_bytes
>> -r--r--r--. 1 root root 0 May 30 10:09 blkio.io_service_bytes_recursive
>> -r--r--r--. 1 root root 0 May 30 10:09 blkio.io_serviced
>> -r--r--r--. 1 root root 0 May 30 10:09 blkio.io_serviced_recursive
>> -r--r--r--. 1 root root 0 May 30 10:09 blkio.io_service_time
>> -r--r--r--. 1 root root 0 May 30 10:09 blkio.io_service_time_recursive
>> -r--r--r--. 1 root root 0 May 30 10:09 blkio.io_wait_time
>> -r--r--r--. 1 root root 0 May 30 10:09 blkio.io_wait_time_recursive
>> -rw-r--r--. 1 root root 0 May 30 10:09 blkio.leaf_weight
>> -rw-r--r--. 1 root root 0 May 30 10:09 blkio.leaf_weight_device
>> --w-------. 1 root root 0 May 30 10:09 blkio.reset_stats
>> -r--r--r--. 1 root root 0 May 30 10:09 blkio.sectors
>> -r--r--r--. 1 root root 0 May 30 10:09 blkio.sectors_recursive
>> -r--r--r--. 1 root root 0 May 30 10:09 blkio.throttle.io_service_bytes
>> -r--r--r--. 1 root root 0 May 30 10:09 blkio.throttle.io_serviced
>> -rw-r--r--. 1 root root 0 May 30 10:09 blkio.throttle.read_bps_device
>> -rw-r--r--. 1 root root 0 May 30 10:09 blkio.throttle.read_iops_device
>> -rw-r--r--. 1 root root 0 May 30 10:09 blkio.throttle.write_bps_device
>> -rw-r--r--. 1 root root 0 May 30 10:09 blkio.throttle.write_iops_device
>> -r--r--r--. 1 root root 0 May 30 10:09 blkio.time
>> -r--r--r--. 1 root root 0 May 30 10:09 blkio.time_recursive
>> -rw-r--r--. 1 root root 0 May 30 10:09 blkio.weight
>> -rw-r--r--. 1 root root 0 May 30 10:09 blkio.weight_device
>> -rw-r--r--. 1 root root 0 May 30 10:09 cgroup.clone_children
>> --w--w--w-. 1 root root 0 May 30 10:09 cgroup.event_control
>> -rw-r--r--. 1 root root 0 May 30 10:09 cgroup.procs
>> -rw-r--r--. 1 root root 0 May 30 10:09 notify_on_release
>> -rw-r--r--. 1 root root 0 May 30 10:09 tasks
>>
>>
>> I thought I could get the values I need from there, but all the files are empty.
>>
>> Looking at this post: http://lists.ovirt.org/pipermail/users/2017-January/079011.html
>> this should work.
>>
>> Is this normal on CentOS 7.3 with oVirt installed? How can I get those
>> values without monitoring all VMs directly?
>>
>> oVirt Version we use:
>> 4.1.1.8-1.el7.centos
>>
>> BR Florian
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>