[vdsm] status update: running containers alongside VMs
by Francesco Romani
Hi everyone,
I'm happy to share some progress on the former "convirt"[1] project,
which aims to let Vdsm run containers alongside VMs, on bare metal.
Over the last couple of months I have kept updating the patch series, which
is approaching readiness to be merged into Vdsm.
Please read through this mail to see what the patchset can do now,
and how you can try it *now*, even before it is merged.
Everyone is invited to share thoughts and ideas about how this effort
could evolve.
This will be a long mail; I will amend, enhance and polish the content
and make a blog post (on https://mojaves.github.io) to make it easier
to consume and to have some easy-to-find documentation. Later on, the
same content will also appear on the oVirt blog.
Happy hacking!
+++
# How to try the experimental container support for Vdsm
Vdsm is gaining *experimental* support to run containers alongside VMs.
Vdsm has long had the ability to manage VMs which run containers,
and recently gained support for
[atomic guests](http://www.projectatomic.io/blog/2015/01/running-ovirt-guest-agent-as-privileged-container/).
With the new support we are describing, you will be able to manage containers
with the same, proven infrastructure that lets you manage VMs.
This feature is currently being developed and is not yet merged into the
Vdsm codebase, so some extra work is needed if you want to try it out.
We are aiming to merge it in the oVirt 4.1.z cycle.
## What works, aka what to expect
The basic features are expected to work:
1. Run any docker image on the public docker registry
2. Make the container accessible from the outside (aka not just from localhost)
3. Use file-based storage for persistent volumes
## What does not yet work, aka what NOT to expect
A few things are planned and currently under active development:
1. Monitoring. Engine will not get any updates from the container besides the "VM" status (Up, Down...).
One important drawback is that Engine will not tell you the IP of the container;
you will need to connect to the Vdsm host and discover it using standard docker tools (see the example right after this list).
2. Proper network integration. Some steps still need manual intervention
3. Stability and recovery - it's pre-alpha software after all! :)
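For point 1, this is the kind of manual check I mean, run on the Vdsm host (the container id is a placeholder: use whatever `docker ps` reports for your "VM"):
# docker ps
# docker inspect <container-id> | grep -i ipaddress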
## 1. Introduction and prerequisites
Trying out the container support affects only the host and Vdsm.
Besides adding a few custom properties (totally safe and supported since early
3.z), there are zero changes required to the DB and to Engine.
Nevertheless, we recommend dedicating one oVirt 4.y environment,
or at least one 4.y host, to trying out the container feature.
To get started, the first thing you need is a vanilla oVirt 4.y
installation. We will need to make changes to Vdsm and to the
Vdsm host, so hosted engine and/or oVirt Node may add extra complexity;
better to avoid them for the moment.
The remainder of this tutorial assumes you are using two hosts,
one for Vdsm (which will be changed) and one for Engine (which will require zero changes);
furthermore, we assume the Vdsm host is running CentOS 7.y.
We require:
- one test host for Vdsm. This host needs to have one NIC dedicated to containers.
We will use the [docker macvlan driver](https://raesene.github.io/blog/2016/07/23/Docker-MacVLAN/),
so this NIC *must not be* part of one bridge.
- docker >= 1.12
- oVirt >= 4.0.5 (Vdsm >= 4.18.15)
- CentOS >= 7.2
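You can quickly check what is already installed on the host with:
# docker --version
# rpm -q vdsm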
Docker >= 1.12 is available for download [here](https://docs.docker.com/engine/installation/linux/centos/)
Caveats:
1. docker from the official rpms conflicts with docker from CentOS, and has a different package name: docker-engine vs docker.
Please note that the kubernetes package from CentOS, for example, requires 'docker', not 'docker-engine' (see the quick check after these caveats).
2. you may want to replace the default service file
[with this one](https://github.com/mojaves/convirt/blob/master/patches/centos72/syst...
and to use this
[sysconfig file](https://github.com/mojaves/convirt/blob/master/patches/centos72/sys....
Here I'm just adding the storage options docker requires, much like the CentOS docker is configured.
Configuring docker like this can save you some troubleshooting, especially if you had docker from CentOS installed
on the testing box.
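As a quick check related to caveat 1, you can see which docker package (if any) is already installed before pulling in docker-engine:
# rpm -q docker docker-engine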
## 2. Patch Vdsm to support containers
You need to patch and rebuild Vdsm.
Fetch [this patch](https://github.com/mojaves/convirt/blob/master/patches/vdsm/4.18.1...
and apply it against Vdsm 4.18.15.1. Vdsm 4.18.15.{1,2,...} are supported as well.
Rebuild Vdsm and reinstall on your box.
[centos 7.2 packages are here](https://github.com/mojaves/convirt/tree/master/rpms/centos72)
Make sure you install the Vdsm command line client (vdsm-cli)
Restart *both* Vdsm and Supervdsm, and make sure Engine still works flawlessly with the patched Vdsm.
This ensures that no regression is introduced, and that your environment can run VMs just as before.
Now we can proceed to add the container support.
start docker:
# systemctl start docker-engine
(optional)
# systemctl enable docker-engine
Restart Vdsm again
# systemctl restart vdsm
Now we can check whether Vdsm detects docker, so you can use it.
Still on the same Vdsm host, run:
$ vdsClient -s 0 getVdsCaps | grep containers
containers = ['docker', 'fake']
This means this Vdsm can run containers using 'docker' and 'fake' runtimes.
Ignore the 'fake' runtime; as the name suggests, it is a test driver, kind of like /dev/null.
Now we need to make sure the host network configuration is fine.
### 2.1. Configure the docker network for Vdsm
PLEASE NOTE
that the suggested network configuration assumes that
* you have one network, `ovirtmgmt` (the default one), which you use for everything
* you have one Vdsm host with at least two NICs, one bound to the `ovirtmgmt` network, and one spare
_This step is not yet automated by Vdsm_, so manual action is needed; Vdsm will take
care of this automatically in the future.
You can use
[this helper script](https://github.com/mojaves/convirt/blob/master/patches/vdsm/cont-...,
which reuses the Vdsm libraries. Make sure
you have patched Vdsm to support containers before using it.
Let's review what the script needs:
# ./cont-setup-net -h
usage: cont-setup-net [-h] [--name [NAME]] [--bridge [BRIDGE]]
[--interface [INTERFACE]] [--gateway [GATEWAY]]
[--subnet [SUBNET]] [--mask [MASK]]
optional arguments:
-h, --help show this help message and exit
--name [NAME] network name to use
--bridge [BRIDGE] bridge to use
--interface [INTERFACE]
interface to use
--gateway [GATEWAY] address of the gateway
--subnet [SUBNET] subnet to use
--mask [MASK] netmask to use
So we need to feed --name, --interface, --gateway, --subnet and optionally --mask (the default, /24, is often fine).
In my case the default mask was indeed fine, so I used the script like this:
# ./cont-setup-net --name ovirtmgmt --interface enp3s0 --gateway 192.168.1.1 --subnet 192.168.1.0
This is the output I got:
DEBUG:virt.containers.runtime:configuring runtime 'docker'
DEBUG:virt.containers.command:* calling ['/bin/docker', 'network', 'inspect', 'ovirtmgmt']
Error: No such network: ovirtmgmt
DEBUG:virt.containers.command:* called ['/bin/docker', 'network', 'inspect', 'ovirtmgmt']
DEBUG:virt.containers.runtime.Docker:config: cannot load 'ovirtmgmt', ignored
DEBUG:virt.containers.command:* calling ['/bin/docker', 'network', 'create', '-d', 'macvlan', '--subnet=192.168.1.0/24', '--gateway=192.168.1.1', '--ip-range=192.168.1.0/24', '-o', 'parent=enp3s0', 'ovirtmgmt']
DEBUG:virt.containers.command:* called ['/bin/docker', 'network', 'create', '-d', 'macvlan', '--subnet=192.168.1.0/24', '--gateway=192.168.1.1', '--ip-range=192.168.1.0/24', '-o', 'parent=enp3s0', 'ovirtmgmt']
DEBUG:virt.containers.runtime:configuring runtime 'fake'
You can clearly see what the script did, and why it needed root privileges. Let's double-check using the docker tools:
# docker network ls
NETWORK ID NAME DRIVER SCOPE
91535f3425a8 bridge bridge local
d42f7e5561b5 host host local
621ab6dd49b1 none null local
f4b88e4a67eb ovirtmgmt macvlan local
# docker network inspect ovirtmgmt
[
{
"Name": "ovirtmgmt",
"Id": "f4b88e4a67ebb7886ec74073333d613b1893272530cae4d407c95ab587c5fea1",
"Scope": "local",
"Driver": "macvlan",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.1.0/24",
"IPRange": "192.168.1.0/24",
"Gateway": "192.168.1.1"
}
]
},
"Internal": false,
"Containers": {},
"Options": {
"parent": "enp3s0"
},
"Labels": {}
}
]
Looks good! The host configuration is complete.
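As an optional extra sanity check, you can start a throwaway container directly on the new network (the image choice is arbitrary; any small image that ships `ip` will do):
# docker run --rm --net ovirtmgmt alpine ip addr
With that verified, let's move to the Engine side.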
## 3. Configure Engine
As mentioned above, we now need to configure Engine. This boils down to
adding a few custom properties for VMs.
In case you were already using custom properties, you need to amend the command
line so it does not overwrite your existing ones.
# engine-config -s UserDefinedVMProperties='volumeMap=^[a-zA-Z_-]+:[a-zA-Z_-]+$;containerImage=^[a-zA-Z]+(://|)[a-zA-Z]+$;containerType=^(docker|rkt)$' --cver=4.0
It is worth stressing that while the variables are container-specific,
VM custom properties are totally unintrusive and a long-established concept in oVirt, so
this step is totally safe.
Now restart Engine to let it use the new variables:
# systemctl restart ovirt-engine
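Optionally, you can read the properties back to double-check they were stored:
# engine-config -g UserDefinedVMProperties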
The next step is to actually configure one "container VM" and run it.
## 4. Create the container "VM"
To finally run a container, you start by creating a VM much like you always did, with
a few changes:
1. most of the hardware-related configuration isn't relevant for container "VMs",
besides cpu share and memory limits; this will be better documented in the
future; unneeded configuration will just be ignored
2. You need to set some custom properties for your container "VM". Those are
actually needed to enable the container flow, and they are documented in
the next section. You *need* to set at least `containerType` and `containerImage`.
### 4.1. Custom variables for container support
The container support needs some custom properties to be properly configured:
1. `containerImage` (*needed* to enable the container system).
Just select the target image you want to run. You can use the standard syntax of the
container runtimes.
2. `containerType` (*needed* to enable the container system).
Selects the container runtime you want to use. All the available options are always shown.
Please note that unavailable container options are not yet grayed out.
If you *do not* have rkt support on your host, you can still select it, but it won't work.
3. `volumeMap` (optional), a key:value mapping. You can map one "VM" disk (key) to one container volume (value)
to have persistent storage. Only file-based storage is supported.
Example configuration:
`containerImage = redis`
`containerType = docker`
`volumeMap = vda:data` (this may not be needed, and the volume label is just for illustrative purposes)
### 4.2. A little bit of extra work: preload the images on the Vdsm host
This step is not required by the flow, and will be handled by oVirt in the future.
The issue is how container images are handled: they are stored by the container
management system (rkt, docker) on each host, and they are not pre-downloaded.
To shorten the duration of the first boot, you are advised to pre-download
the image(s) you want to run. For example
## on the Vdsm host you want to use with containers
# docker pull redis
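You can verify the image is now cached locally with:
# docker images redis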
## 5. Run the container "VM"
You are now all set to run your "VM" using oVirt Engine, just like any existing VM.
Some actions don't make sense for a container "VM", like live migration.
Engine won't stop you from trying those actions, but they will fail gracefully
with the standard errors.
## 6. Next steps
What to expect from this project in the future?
For the integration with Vdsm, we want to fix the existing known issues, most notably:
* add proper monitoring/reporting of the container health
* ensure proper integration of the container image store with oVirt storage management
* streamline the network configuration
What is still explicitly excluded is any Engine change. This is a Vdsm-only change at the
moment, so fixing the following is currently unplanned:
* First and foremost, Engine will not distinguish between real VMs and container VMs.
Actions unavailable to containers will not be hidden from the UI. The same goes for monitoring
and configuration data, which will be ignored.
* Engine is NOT aware of the volumes one container can use. You must inspect and do the
mapping manually.
* Engine is NOT aware of the available container runtimes. You must select one carefully.
Proper integration with Engine may be added in the future once this feature exits
from the experimental/provisional stage.
Thanks for reading, make sure to share your thoughts on the oVirt mailing lists!
+++
[1] we keep calling it that way _only_ internally, because it's a short
name we are used to. After the merge/once we release it, we will use
a different name, like "vdsm-containers" or something like it.
--
Francesco Romani
Red Hat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani
Empty cgroup files on centos 7.3 host
by Florian Schmid
Hi,
I wanted to monitor disk IO and R/W on all of our oVirt CentOS 7.3 hypervisor hosts, but it looks like all those files are empty.
For example:
ls -al /sys/fs/cgroup/blkio/machine.slice/machine-qemu\\x2d14\\x2dHostedEngine.scope/
total 0
drwxr-xr-x. 2 root root 0 30. Mai 10:09 .
drwxr-xr-x. 16 root root 0 26. Jun 09:25 ..
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_merged
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_merged_recursive
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_queued
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_queued_recursive
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_service_bytes
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_service_bytes_recursive
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_serviced
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_serviced_recursive
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_service_time
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_service_time_recursive
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_wait_time
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.io_wait_time_recursive
-rw-r--r--. 1 root root 0 30. Mai 10:09 blkio.leaf_weight
-rw-r--r--. 1 root root 0 30. Mai 10:09 blkio.leaf_weight_device
--w-------. 1 root root 0 30. Mai 10:09 blkio.reset_stats
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.sectors
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.sectors_recursive
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.throttle.io_service_bytes
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.throttle.io_serviced
-rw-r--r--. 1 root root 0 30. Mai 10:09 blkio.throttle.read_bps_device
-rw-r--r--. 1 root root 0 30. Mai 10:09 blkio.throttle.read_iops_device
-rw-r--r--. 1 root root 0 30. Mai 10:09 blkio.throttle.write_bps_device
-rw-r--r--. 1 root root 0 30. Mai 10:09 blkio.throttle.write_iops_device
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.time
-r--r--r--. 1 root root 0 30. Mai 10:09 blkio.time_recursive
-rw-r--r--. 1 root root 0 30. Mai 10:09 blkio.weight
-rw-r--r--. 1 root root 0 30. Mai 10:09 blkio.weight_device
-rw-r--r--. 1 root root 0 30. Mai 10:09 cgroup.clone_children
--w--w--w-. 1 root root 0 30. Mai 10:09 cgroup.event_control
-rw-r--r--. 1 root root 0 30. Mai 10:09 cgroup.procs
-rw-r--r--. 1 root root 0 30. Mai 10:09 notify_on_release
-rw-r--r--. 1 root root 0 30. Mai 10:09 tasks
I thought I could get the values I need from there, but all the files are empty.
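For reference, this is the kind of read I was attempting (the path is the HostedEngine scope from the listing above):
cat /sys/fs/cgroup/blkio/machine.slice/machine-qemu\\x2d14\\x2dHostedEngine.scope/blkio.throttle.io_service_bytes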
Looking at this post: http://lists.ovirt.org/pipermail/users/2017-January/079011.html
this should work.
Is this normal on CentOS 7.3 with oVirt installed? How can I get those values without monitoring all VMs directly?
oVirt Version we use:
4.1.1.8-1.el7.centos
BR Florian
Failed to open grubx64.efi
by Julio Cesar Bustamante
Hi there,
I have installed oVirt Host on an IBM HS22 blade, but I have hit this bug:
Failed to open \efi\centos\grubx64.efi not found
Failed to load image \EFI\centos\grubx64.efi Not found
Do you know how I can solve this issue?
--
Julio Cesar Bustamante.
slow performance with export storage on glusterfs
by Jiří Sléžka
Hi,
I am trying to work out why exporting a VM to an export storage domain on
glusterfs is so slow.
I am using oVirt and RHV, both installations on version 4.1.7.
Hosts have dedicated NICs for the rhevm network (1 Gbps); the data storage itself
is on FC.
The GlusterFS cluster lives separately on 4 dedicated hosts. It has slow disks,
but I can achieve about 200-400 Mbit/s throughput in other applications (we
are using it for "cold" data, mostly backups).
I am using this glusterfs cluster as the backend for an export storage domain. When I
export a VM I see only about 60-80 Mbit/s throughput.
What could be the bottleneck here?
Could it be qemu-img utility?
vdsm 97739 0.3 0.0 354212 29148 ? S<l 15:43 0:06
/usr/bin/qemu-img convert -p -t none -T none -f raw
/rhev/data-center/2ff6d0ee-a10b-473d-b77c-be9149945f5f/ff3cd56a-1005-4426-8137-8f422c0b47c1/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
-O raw
/rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export/81094499-a392-4ea2-b081-7c6288fbb636/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
Any idea how to make it work faster, or what throughput I should expect?
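One rough way I could try to narrow it down (just a sketch; the mount path is taken from the qemu-img command above and the file name is made up) is to write a test file straight to the gluster mount and compare the throughput:
dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export/ddtest.img bs=1M count=1024 conv=fsync
rm /rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export/ddtest.img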
Cheers,
Jiri
Logical network setup with neutron
by Lakshmi Narasimhan Sundararajan
Hi Team,
I am looking at integrating openstack neutron with oVirt.
Reading the docs so far, and through my setup experiments, I can see that
oVirt and neutron do seem to understand each other.
But I need some helpful pointers to help me understand a few items
during configuration.
1) During External Provider registration,
a) Although OpenStack Keystone currently supports v3 API
endpoints, only configuring v2 works; I see an exception otherwise. I
have a feeling only v2 auth is supported with oVirt.
b) Interface mappings.
This I believe is a way for logical networks to switch/route traffic
back to physical networks. This is of the form label:interface, where the
label is placed on each host's network settings to point to the right
physical interface.
I mapped the label "red" to a physical NIC when I set up host networks,
and used "red:br-red, green:br-green" here. My intention is to
create a bridge br-red on each host for this logical network and
switch/route packets over the physical NIC mapped to the "red" label on each
host, so that every VM attached to the "red" logical network has a vNIC
placed on "br-red". Is my understanding correct?
2) Now I finally create a logical network using the external provider
"openstack neutron". Here there is a "Physical Network" parameter that I
totally do not understand.
If the registration has many interface mappings, is this a
way of pinning to the right interface?
I cannot choose red or red:br-red... I can only leave it empty.
And what is the "IP address of the Physical Network" argument in
logical network creation?
"Optionally select the Create on external provider check box. Select
the External Provider from the drop-down list and provide the IP
address of the Physical Network". What does this field mean?
I would appreciate some clarity and helpful pointers here.
Best regards
--
Re: [ovirt-users] VDSM multipath.conf - prevent automatic management of local devices
by Ben Bradley
On 23/11/17 06:46, Maton, Brett wrote:
> Might not be quite what you're after but adding
>
> # RHEV PRIVATE
>
> To /etc/multipath.conf will stop vdsm from changing the file.
Hi there. Thanks for the reply.
Yes I am aware of that and it seems that's what I will have to do.
I have no problem with VDSM managing the file, I just wish it didn't
automatically load local storage devices into multipathd.
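For what it's worth, this is the kind of change I have in mind once the file is marked private (just a sketch; the devnode pattern is an example and would need to match my actual local disks):
# RHEV PRIVATE
blacklist {
    devnode "^sda$"
}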
I'm still not clear on the purpose of this automatic management though.
From what I can tell, this automatic management makes no difference to
hosts/clusters - i.e. you still have to add storage
domains manually in oVirt.
Could anyone give any info on the purpose of this auto-management of
local storage devices into multipathd in VDSM?
Then I will be able to make an informed decision as to the benefit of
letting it continue.
Thanks, Ben
>
> On 22 November 2017 at 22:42, Ben Bradley <listsbb(a)virtx.net> wrote:
>
> Hi All
>
> I have been running ovirt in a lab environment on CentOS 7 for
> several months but have only just got around to really testing things.
> I understand that VDSM manages multipath.conf and I understand that
> I can make changes to that file and set it to private to prevent
> VDSM making further changes.
>
> I don't mind VDSM managing the file but is it possible to set to
> prevent local devices being automatically added to multipathd?
>
> Many times I have had to flush local devices from multipath when
> they are added/removed or re-partitioned or the system is rebooted.
> It doesn't even look like oVirt does anything with these devices
> once they are setup in multipathd.
>
> I'm assuming it's the VDSM additions to multipath that are causing
> this. Can anyone else confirm this?
>
> Is there a way to prevent new or local devices being added
> automatically?
>
> Regards
> Ben
Fedora support (was: [ANN] oVirt 4.2.0 Second Beta Release is now available for testing)
by Yedidyah Bar David
On Wed, Nov 29, 2017 at 7:29 PM, Blaster <Blaster(a)556nato.com> wrote:
> Is Fedora not supported anymore?
>
> I've read the release notes for the 4.2r2 beta and 4.1.7, they mention
> specific versions of RHEL and CentOS, but only mention Fedora by name, with
> no specific version information.
We currently have too many problems with fedora to call it even 'Technical
Preview', as was done in the past.
You can still use the nightly snapshots, and most things work, more-or-less,
with some issues having known workarounds. See e.g.:
https://bugzilla.redhat.com/showdependencytree.cgi?id=1460625&hide_resolv...
And also:
http://lists.ovirt.org/pipermail/devel/2017-August/030990.html
(not sure that one is still relevant for Fedora 27, didn't check recently).
>
> On 11/15/2017 9:17 AM, Sandro Bonazzola wrote
>
>
> This release is available now on x86_64 architecture for:
>
> * Red Hat Enterprise Linux 7.4 or later
>
> * CentOS Linux (or similar) 7.4 or later
>
>
> This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
> for:
>
> * Red Hat Enterprise Linux 7.4 or later
>
> * CentOS Linux (or similar) 7.4 or later
>
> * oVirt Node 4.2 (available for x86_64 only)
>
--
Didi
Reg: Ovirt mouse not responding
by syedquadeer@ctel.in
Dear Team,
I am using oVirt 3.x on a 3-node CentOS cluster, and Ubuntu 14.04
64-bit VMs are installed on it. The end users who are using these VMs
are facing some issues daily; the issues are mentioned below:
1. The keyboard stops responding intermittently; after checking the
log file in the VM, it shows a pmouse sync issue.
2. If the VM is restarted it gives a black screen; the VM then needs to be
powered off and started again.
Please provide a solution for the above issues. Thanks in advance...
--
Thanks & Regards,
Syed Abdul Qadeer.
7660022818.
Convert local storage domain to shared
by Demeter Tibor
Dear Users,
We have an old oVirt 3.5 install with a local and a shared cluster. Meanwhile we have created a new data center that is based on 4.1 and uses only shared infrastructure.
I would like to migrate a big VM from the old local data center to the new one, but I don't have enough downtime.
Is it possible to convert the old local storage to shared (by sharing it via NFS) and attach it as a new storage domain to the new cluster?
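To illustrate what I mean by sharing it via NFS (the path and hostname below are made up, not our real ones), roughly an /etc/exports entry on the old host plus a re-export:
/data/local-storage-domain  newdc-host.example.com(rw,no_root_squash)
# exportfs -ra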
I just want to import the VM and then copy it (while running) with the live storage migration function.
I know the official way to move VMs between oVirt clusters is the export domain, but this VM has very big disks.
What can I do?
Thanks
Tibor