Mailing-Lists upgrade
by Marc Dequènes (Duck)
Quack,
On behalf of the oVirt infra team, I'd like to announce that the current
Mailing-Lists system is going to be upgraded to a brand new Mailman 3
installation on Monday, during the 11:00-12:00 JST slot.
It should not take the full hour to migrate, as we have already done incremental
synchronization with the current system, but it is better to keep some margin. The
new system will then take over mail delivery, but it might be a bit slow
at first as it needs to reindex all the archived mails (which may take
a few hours).
You can manage your subscriptions and delivery settings easily on the much
nicer web interface (https://lists.ovirt.org). There is now a notion of an
account, so you don't need to log in separately for each mailing list.
You can sign in using Fedora, GitHub or Google, or create a local account
if you prefer. Please keep in mind that signing in with a different method
creates a separate account (and accounts cannot be merged at the moment),
but you can easily link your account to other authentication methods in
your settings (click on your name in the top-right corner -> Account ->
Account Connections).
As for the original mail archives: because the previous system did not
have stable URLs, we cannot create mappings to the new pages. We decided
to keep the old archives around at the same URL (/pipermail), so existing
links on the Internet will still work fine.
We hope you'll be happy with the new system.
\_o<
vdsClient is removed and replaced by vdsm-client
by Irit Goihman
Hi All,
vdsClient will be removed from the master branch today.
It uses the XML-RPC protocol, which has been deprecated and replaced by
JSON-RPC.
A new client for vdsm, vdsm-client, was introduced in 4.1.
It is a simple client that uses the JSON-RPC protocol introduced in
oVirt 3.5.
The client is not aware of the available methods and parameters, so you
should consult the schema [1] in order to construct the desired command.
Future versions should parse the schema and provide online help.
If you're using vdsClient, we will be happy to assist you in migrating to
the new vdsm-client.
*vdsm-client usage:*
vdsm-client [-h] [-a ADDRESS] [-p PORT] [--unsecure] [--timeout TIMEOUT]
[-f FILE] namespace method [name=value [name=value] ...]
Invoking simple methods:
# vdsm-client Host getVMList
['b3f6fa00-b315-4ad4-8108-f73da817b5c5']
For invoking methods with many or complex parameters, you can read
the parameters from a JSON-formatted file:
# vdsm-client Lease info -f lease.json
where the content of lease.json is:
{
    "lease": {
        "sd_id": "75ab40e3-06b1-4a54-a825-2df7a40b93b2",
        "lease_id": "b3f6fa00-b315-4ad4-8108-f73da817b5c5"
    }
}
It is also possible to read parameters from standard input, creating
complex parameters interactively:
# cat <<EOF | vdsm-client Lease info -f -
{
    "lease": {
        "sd_id": "75ab40e3-06b1-4a54-a825-2df7a40b93b2",
        "lease_id": "b3f6fa00-b315-4ad4-8108-f73da817b5c5"
    }
}
EOF
*Constructing a command from vdsm schema:*
Let's take VM.getStats as an example.
This is the entry in the schema:
VM.getStats:
    added: '3.1'
    description: Get statistics about a running virtual machine.
    params:
    -   description: The UUID of the VM
        name: vmID
        type: *UUID
    return:
        description: An array containing a single VmStats record
        type:
        - *VmStats
From this entry we get:
namespace: VM
method name: getStats
params: vmID
The vdsm-client command is:
# vdsm-client VM getStats vmID=b3f6fa00-b315-4ad4-8108-f73da817b5c5
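If you just want to eyeball a single entry without scrolling through the whole
schema, something like the following works. This is only a sketch: it assumes
you have downloaded the schema file from [1] into the current directory as
vdsm-api.yml (the filename and location are just an illustration):
## print the VM.getStats entry plus a dozen lines of context
# grep -A 12 '^VM.getStats:' vdsm-api.yml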
*Invoking the equivalent of the old getVdsCaps command:*
# vdsm-client Host getCapabilities
Please consult the vdsm-client help and man page for further details and
options.
[1] https://github.com/oVirt/vdsm/blob/master/lib/api/vdsm-api.yml
--
Irit Goihman
Software Engineer
Red Hat Israel Ltd.
Passing VLAN trunk to VM
by Simon Vincent
Is it possible to pass multiple VLANs to a VM (pfSense) using a single
virtual NIC? All my existing oVirt networks are set up as single tagged
VLANs. I know this was not supported in the past, but I wondered if that has
changed. My other option is to pass each VLAN as a separate NIC to the VM;
however, if I needed to add a new VLAN, I would have to add a new interface
and reboot the VM, as hot-adding NICs is not supported by pfSense.
HostedEngine with HA
by Carlos Rodrigues
Hello,
I have one cluster with two hosts with power management correctly
configured, and one virtual machine with the HostedEngine on shared
Fibre Channel storage.
When I shut down the network of the host running the HostedEngine VM, should
the HostedEngine VM migrate automatically to another host?
What is the expected behaviour in this HA scenario?
Regards,
--
Carlos Rodrigues
Engenheiro de Software Sénior
Eurotux Informática, S.A. | www.eurotux.com
(t) +351 253 680 300 (m) +351 911 926 110
oVIRT 4.1 / iSCSI Multipathing
by Devin Acosta
I am using the latest release of oVirt, 4.1.3, and I am connecting a Dell
Compellent SAN that has 2 fault domains, each on a separate VLAN, that I have
attached to oVirt. From what I understand, I am supposed to go into the "iSCSI
Multipathing" option and add a bond of the iSCSI interfaces. I have done
this by selecting the 2 logical networks together for iSCSI. I notice that
there is an option below to select storage targets, but if I select the
storage targets below together with the logical networks, the cluster goes crazy
and appears to be mad: storage, nodes, and everything go offline, even
though I also have NFS attached to the cluster.
How should this best be configured? What we notice is that when the
server reboots, it seems to log into the SAN correctly, but according to the
Dell SAN it is only logged into one controller, so it only pulls both fault
domains from a single controller.
Please advise.
Devin
when creating VMs, I don't want hosted_storage to be an option
by Mike Farnam
Hi All - Is there a way to mark hosted_storage somehow so that it's not available for adding new VMs to? Right now it's the default storage domain when adding a VM. At the very least, I'd like to make another storage domain the default.
Is there a way to do this?
Thanks
qemu-kvm images corruption
by Nicolas Ecarnot
TL;DR:
How to avoid images corruption?
Hello,
On two of our old 3.6 DCs, a recent series of VM migrations led to some
issues:
- I put a host into maintenance mode
- most of the VMs migrate nicely
- one remaining VM never migrates, and the logs show:
* engine.log : "...VM has been paused due to I/O error..."
* vdsm.log : "...Improbable extension request for volume..."
After digging through the RH BZ tickets, I saved the day with the following
steps (condensed into a shell sketch right after this list):
- stopping the VM
- lvchange -ay the adequate /dev/...
- qemu-img check [-r all] /rhev/blahblah
- lvchange -an...
- boot the VM
- enjoy!
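For reference, here is the same sequence condensed into shell commands. This is
only a sketch of what I did; <vg>, <lv> and the image path are placeholders you
have to adapt to your own storage domain and volume:
## with the VM already stopped, activate the LV backing the image
# lvchange -ay /dev/<vg>/<lv>
## check and, if needed, repair the qcow2 image (placeholder path)
# qemu-img check -r all /rhev/<path-to-the-image>
## deactivate the LV again, then boot the VM from the engine
# lvchange -an /dev/<vg>/<lv>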
Yesterday this worked for a VM where only one error occurred on the qemu
image, and the repair was easily done by qemu-img.
Today, facing the same issue on another VM, it failed because the errors
were very numerous, and also because of this message :
[...]
Rebuilding refcount structure
ERROR writing refblock: No space left on device
qemu-img: Check failed: No space left on device
[...]
The PV/VG/LV are far from full, so I don't know where to look.
I tried many ways to solve it, but I'm not comfortable at all with qemu
images, corruption and repair, so I ended up exporting this VM (to an
NFS export domain) and importing it into another DC: this had the side
effect of using qemu-img convert from qcow2 to qcow2, and (maybe?????)
solving some errors???
I also copied it into another qcow2 file in the same qemu-img convert
way, which also led to a clean qcow2 image without errors.
I saw that some bugs around VM migrations are fixed in 4.x, but that is
not the point here.
I checked my SANs, my network layers, my blades, and the OS (CentOS 7.2) of
my hosts, but I see nothing special.
The real reason behind my message is not to ask how to repair anything,
but rather to understand what could have led to this situation.
Where should I keep a keen eye?
--
Nicolas ECARNOT
Official Hyperconverged Gluster oVirt upgrade procedure?
by Hanson
Hi Guys,
Just wondering if we have an updated manual, or what the current
procedure is for upgrading the nodes in a hyperconverged oVirt Gluster pool.
I.e., the nodes run oVirt 4.0 as well as GlusterFS, with the hosted engine running
in a Gluster storage domain.
Do we put the node in maintenance mode, disable GlusterFS from the oVirt GUI, and run
yum update?
Thanks!
[vdsm] status update: running containers alongside VMs
by Francesco Romani
Hi everyone,
I'm happy to share some progress on the former "convirt"[1] project,
which aims to let Vdsm run containers alongside VMs, on bare metal.
In the last couple of months I kept updating the patch series, which
is approaching readiness to be merged into Vdsm.
Please read through this mail to see what the patchset can do now,
how you could try it *now*, even before it is merged.
Everyone is invited to share thoughts and ideas about how this effort
could evolve.
This will be a long mail; I will amend, enhance and polish the content
and make a blog post (on https://mojaves.github.io) to make it easier
to consume and to have some easy-to-find documentation. Later on the
same content will appear also on the oVirt blog.
Happy hacking!
+++
# How to try the experimental container support for Vdsm
Vdsm is gaining *experimental* support to run containers alongside VMs.
Vdsm has long had the ability to manage VMs which run containers,
and recently gained support for
[atomic guests](http://www.projectatomic.io/blog/2015/01/running-ovirt-guest-agent-as-privileged-container/).
With the new support we are describing, you will be able to manage containers
with the same, proven infrastructure that lets you manage VMs.
This feature is currently being developed and it is still not merged in the
Vdsm codebase, so some extra work is needed if you want to try it out.
We are aiming to merge it in the oVirt 4.1.z cycle.
## What works, aka what to expect
The basic features are expected to work:
1. Run any docker image on the public docker registry
2. Make the container accessible from the outside (aka not just from localhost)
3. Use file-based storage for persistent volumes
## What does not yet work, aka what NOT to expect
A few things are planned and currently under active development:
1. Monitoring. Engine will not get any update from the container besides the "VM" status (Up, Down...)
One important drawback is that you will not be told the IP of the container by Engine;
you will need to connect to the Vdsm host to discover it using standard docker tools
(a hedged example follows this list).
2. Proper network integration. Some steps still need manual intervention
3. Stability and recovery - it's pre-alpha software after all! :)
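For example, to find the IP yourself on the Vdsm host, a sketch along these lines
works (the container id is whatever `docker ps` reports for your "VM"; nothing
here is specific to Vdsm, it's plain docker tooling):
## list the running containers to find the one backing your "VM"
# docker ps
## print the address docker assigned to it on each attached network
# docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' <container-id>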
## 1. Introduction and prerequisites
Trying out the container support affects only the host and Vdsm.
Besides adding a few custom properties (totally safe and supported since early
3.z), there are zero changes required to the DB and to Engine.
Nevertheless, we recommend dedicating one oVirt 4.y environment,
or at least one 4.y host, to trying out the container feature.
To get started, the first thing you need is to set up a vanilla oVirt 4.y
installation. We will need to make changes to Vdsm and to the
Vdsm host, so hosted engine and/or oVirt Node may add extra complexity;
better to avoid them at the moment.
The remainder of this tutorial assumes you are using two hosts,
one for Vdsm (which will be changed) and one for Engine (which will require zero changes);
furthermore, we assume the Vdsm host is running CentOS 7.y.
We require:
- one test host for Vdsm. This host needs to have one NIC dedicated to containers.
We will use the [docker macvlan driver](https://raesene.github.io/blog/2016/07/23/Docker-MacVLAN/),
so this NIC *must not be* part of a bridge.
- docker >= 1.12
- oVirt >= 4.0.5 (Vdsm >= 4.18.15)
- CentOS >= 7.2
Docker >= 1.12 is available for download [here](https://docs.docker.com/engine/installation/linux/centos/)
Caveats:
1. docker from the official rpms conflicts with docker from CentOS, and has a different package name: docker-engine vs docker.
Please note that the kubernetes package from CentOS, for example, requires 'docker', not 'docker-engine'.
2. you may want to replace the default service file
[with this one](https://github.com/mojaves/convirt/blob/master/patches/centos72/syst...
and to use this
[sysconfig file](https://github.com/mojaves/convirt/blob/master/patches/centos72/sys....
Here I'm just adding the storage options docker requires, much like how the CentOS docker is configured.
Configuring docker like this can save you some troubleshooting, especially if you previously had docker from CentOS installed
on the testing box.
## 2. Patch Vdsm to support containers
You need to patch and rebuild Vdsm.
Fetch [this patch](https://github.com/mojaves/convirt/blob/master/patches/vdsm/4.18.1...
and apply it against Vdsm 4.18.15.1. Vdsm 4.18.15.{1,2,...} are supported as well.
Rebuild Vdsm and reinstall on your box.
[centos 7.2 packages are here](https://github.com/mojaves/convirt/tree/master/rpms/centos72)
Make sure you install the Vdsm command line client (vdsm-cli).
Restart *both* Vdsm and Supervdsm, and make sure Engine still works flawlessly with the patched Vdsm.
This ensures that no regression is introduced, and that your environment can run VMs just as before.
Now we can proceed to add the container support.
start docker:
# systemctl start docker-engine
(optional)
# systemctl enable docker-engine
Restart Vdsm again
# systemctl restart vdsm
Now we can check whether Vdsm detects docker, so you can use it:
still on the same Vdsm host, run
$ vdsClient -s 0 getVdsCaps | grep containers
containers = ['docker', 'fake']
This means this Vdsm can run containers using the 'docker' and 'fake' runtimes.
Ignore the 'fake' runtime; as the name suggests, it is a test driver, kind of like /dev/null.
Now we need to make sure the host network configuration is fine.
### 2.1. Configure the docker network for Vdsm
PLEASE NOTE
that the suggested network configuration assumes that
* you have one network, `ovirtmgmt` (the default one) you use for everything
* you have one Vdsm host with at least two NICs, one bound to the `ovirtmgmt` network, and one spare
_This step is not yet automated by Vdsm_, so manual action is needed; Vdsm will take
care of this automatically in the future.
You can use
[this helper script](https://github.com/mojaves/convirt/blob/master/patches/vdsm/cont-...,
which reuses the Vdsm libraries. Make sure
you have patched Vdsm to support containers before using it.
Let's review what the script needs:
# ./cont-setup-net -h
usage: cont-setup-net [-h] [--name [NAME]] [--bridge [BRIDGE]]
                      [--interface [INTERFACE]] [--gateway [GATEWAY]]
                      [--subnet [SUBNET]] [--mask [MASK]]

optional arguments:
  -h, --help            show this help message and exit
  --name [NAME]         network name to use
  --bridge [BRIDGE]     bridge to use
  --interface [INTERFACE]
                        interface to use
  --gateway [GATEWAY]   address of the gateway
  --subnet [SUBNET]     subnet to use
  --mask [MASK]         netmask to use
So we need to feed --name, --interface, --gateway, --subnet and optionally --mask (the default, /24, is often fine).
In my case the default mask was indeed fine, so I used the script like this:
# ./cont-setup-net --name ovirtmgmt --interface enp3s0 --gateway 192.168.1.1 --subnet 192.168.1.0
This is the output I got:
DEBUG:virt.containers.runtime:configuring runtime 'docker'
DEBUG:virt.containers.command:* calling ['/bin/docker', 'network', 'inspect', 'ovirtmgmt']
Error: No such network: ovirtmgmt
DEBUG:virt.containers.command:* called ['/bin/docker', 'network', 'inspect', 'ovirtmgmt']
DEBUG:virt.containers.runtime.Docker:config: cannot load 'ovirtmgmt', ignored
DEBUG:virt.containers.command:* calling ['/bin/docker', 'network', 'create', '-d', 'macvlan', '--subnet=192.168.1.0/24', '--gateway=192.168.1.1', '--ip-range=192.168.1.0/24', '-o', 'parent=enp3s0', 'ovirtmgmt']
DEBUG:virt.containers.command:* called ['/bin/docker', 'network', 'create', '-d', 'macvlan', '--subnet=192.168.1.0/24', '--gateway=192.168.1.1', '--ip-range=192.168.1.0/24', '-o', 'parent=enp3s0', 'ovirtmgmt']
DEBUG:virt.containers.runtime:configuring runtime 'fake'
You can clearly see what the script did, and why it needed root privileges. Let's double-check using the docker tools:
# docker network ls
NETWORK ID NAME DRIVER SCOPE
91535f3425a8 bridge bridge local
d42f7e5561b5 host host local
621ab6dd49b1 none null local
f4b88e4a67eb ovirtmgmt macvlan local
# docker network inspect ovirtmgmt
[
    {
        "Name": "ovirtmgmt",
        "Id": "f4b88e4a67ebb7886ec74073333d613b1893272530cae4d407c95ab587c5fea1",
        "Scope": "local",
        "Driver": "macvlan",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.1.0/24",
                    "IPRange": "192.168.1.0/24",
                    "Gateway": "192.168.1.1"
                }
            ]
        },
        "Internal": false,
        "Containers": {},
        "Options": {
            "parent": "enp3s0"
        },
        "Labels": {}
    }
]
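As an optional sanity check, not part of the official flow, you can run a
throwaway container directly on the new network and look at the address it gets
(busybox is just a convenient small image for this):
## start a short-lived container attached to the macvlan network and show its interfaces
# docker run --rm --net=ovirtmgmt busybox ip addr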
Looks good! The host configuration is complete. Let's move on to the Engine side.
## 3. Configure Engine
As mentioned above, we now need to configure Engine. This boils down to
adding a few custom properties for VMs.
In case you were already using custom properties, you need to amend the command
line so as not to overwrite your existing ones.
# engine-config -s UserDefinedVMProperties='volumeMap=^[a-zA-Z_-]+:[a-zA-Z_-]+$;containerImage=^[a-zA-Z]+(://|)[a-zA-Z]+$;containerType=^(docker|rkt)$' --cver=4.0
It is worth stressing that while the variables are container-specific,
VM custom properties are a totally unintrusive and long-standing concept in oVirt, so
this step is totally safe.
Now restart Engine to let it use the new variables:
# systemctl restart ovirt-engine
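To double-check that the properties were stored as expected, you can read the
key back (just a suggestion, not required by the flow):
# engine-config -g UserDefinedVMProperties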
The next step is to actually configure one "container VM" and run it.
## 4. Create the container "VM"
To finally run a container, you start by creating a VM much like you always did, with
a few changes:
1. most of the hardware-related configuration isn't relevant for container "VMs",
besides CPU shares and memory limits; this will be better documented in the
future; unneeded configuration will just be ignored
2. You need to set some custom properties for your container "VM". Those are
needed to enable the container flow, and they are documented in
the next section. You *need* to set at least `containerType` and `containerImage`.
### 4.1. Custom variables for container support
The container support needs some custom properties to be properly configured:
1. `containerImage` (*needed* to enable the container system).
Just select the target image you want to run. You can use the standard syntax of the
container runtimes.
2. `containerType` (*needed* to enable the container system).
Selects the container runtime you want to use. All the available options are always shown.
Please note that unavailable container options are not yet grayed out.
If you *do not* have rkt support on your host, you can still select it, but it won't work.
3. `volumeMap` (key:value pairs). You can map one "VM" disk (key) to one container volume (value)
to have persistent storage. Only file-based storage is supported.
Example configuration:
`containerImage = redis`
`containerType = docker`
`volumeMap = vda:data` (this may not be needed, and the volume label is just for illustrative purposes)
### 4.2. A little bit of extra work: preload the images on the Vdsm host
This step is not needed by the flow, and will be handled by oVirt in the future.
The issue is how the container images are handled. They are stored by the container
management system (rkt, docker) on each host, and they are not pre-downloaded.
To shorten the duration of the first boot, you are advised to pre-download
the image(s) you want to run. For example:
## on the Vdsm host you want to use with containers
# docker pull redis
## 5. Run the container "VM"
You are now all set to run your "VM" using oVirt Engine, just like any existing VM.
Some actions don't make sense for a container "VM", like live migration.
Engine won't stop you from trying those actions, but they will fail gracefully
with the standard errors.
## 6. Next steps
What to expect from this project in the future?
For the integration with Vdsm, we want to fix the existing known issues, most notably:
* add proper monitoring/reporting of the container health
* ensure proper integration of the container image store with oVirt storage management
* streamline the network configuration
What is explicitly excluded for now is any Engine change. This is a Vdsm-only change at the
moment, so fixing the following is currently unplanned:
* First and foremost, Engine will not distinguish between real VMs and container VMs.
Actions unavailable to containers will not be hidden from the UI. The same goes for monitoring
and configuration data, which will be ignored.
* Engine is NOT aware of the volumes a container can use. You must inspect and do the
mapping manually.
* Engine is NOT aware of the available container runtimes. You must select one carefully.
Proper integration with Engine may be added in the future, once this feature exits
the experimental/provisional stage.
Thanks for reading, make sure to share your thoughts on the oVirt mailing lists!
+++
[1] we keep calling it that way _only_ internally, because it's a short
name we are used to. After the merge/once we release it, we will use
a different name, like "vdsm-containers" or something like it.
--
Francesco Romani
Red Hat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani
Empty cgroup files on centos 7.3 host
by Florian Schmid
Hi,
I wanted to monitor disk IO and R/W on all of our oVirt CentOS 7.3 hypervisor hosts, but it looks like all of those files are empty.
For example:
ls -al /sys/fs/cgroup/blkio/machine.slice/machine-qemu\\x2d14\\x2dHostedEngine.scope/
insgesamt 0
drwxr-xr-x. 2 root root 0 30. Mai 10:09 .
drwxr-xr-x. 16 root root 0 26. Jun 09:25 ..
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_merged
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_merged_recursive
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_queued
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_queued_recursive
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_service_bytes
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_service_bytes_recursive
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_serviced
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_serviced_recursive
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_service_time
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_service_time_recursive
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_wait_time
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_wait_time_recursive
-rw-r--r--. 1 root root 0 30. Mai 10:09 blkio.leaf_weight
-rw-r--r--. 1 root root 0 30. Mai 10:09 blkio.leaf_weight_device
--w-------. 1 root root 0 30. Mai 10:09 blkio.reset_stats
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.sectors
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.sectors_recursive
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.throttle.io_service_bytes
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.throttle.io_serviced
-rw-r--r--. 1 root root 0 30. Mai 10:09 blkio.throttle.read_bps_device
-rw-r--r--. 1 root root 0 30. Mai 10:09 blkio.throttle.read_iops_device
-rw-r--r--. 1 root root 0 30. Mai 10:09 blkio.throttle.write_bps_device
-rw-r--r--. 1 root root 0 30. Mai 10:09 blkio.throttle.write_iops_device
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.time
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.time_recursive
-rw-r--r--. 1 root root 0 30. Mai 10:09 blkio.weight
-rw-r--r--. 1 root root 0 30. Mai 10:09 blkio.weight_device
-rw-r--r--. 1 root root 0 30. Mai 10:09 cgroup.clone_children
--w--w--w-. 1 root root 0 30. Mai 10:09 cgroup.event_control
-rw-r--r--. 1 root root 0 30. Mai 10:09 cgroup.procs
-rw-r--r--. 1 root root 0 30. Mai 10:09 notify_on_release
-rw-r--r--. 1 root root 0 30. Mai 10:09 tasks
I thought I could get the values I need from there, but all the files are empty.
Looking at this post: http://lists.ovirt.org/pipermail/users/2017-January/079011.html
this should work.
Is this normal on CentOS 7.3 with oVirt installed? How can I get those values without monitoring all VMs directly?
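For what it's worth, on a host where these counters are populated, reading them
should just be a matter of catting the per-scope files, e.g. (the scope path is
the one from the listing above; the single quotes keep the literal backslashes
in the directory name):
## per-device read/write byte counters for the HostedEngine VM's cgroup
# cat '/sys/fs/cgroup/blkio/machine.slice/machine-qemu\x2d14\x2dHostedEngine.scope/blkio.throttle.io_service_bytes'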
oVirt Version we use:
4.1.1.8-1.el7.centos
BR Florian