[ovirt-users] status update: running containers alongside VMs
Martin Polednik
mpolednik at redhat.com
Thu Oct 13 09:52:32 EDT 2016
On 13/10/16 06:40 -0400, Francesco Romani wrote:
>Hi everyone,
>
>I'm happy to share some progress on the former "convirt"[1] project,
>which aims to let Vdsm run containers alongside VMs, on bare metal.
>
>In the last couple of months I have kept updating the patch series, which
>is approaching readiness to be merged into Vdsm.
>
>Please read through this mail to see what the patchset can do now, and
>how you can try it *now*, even before it is merged.
>
>Everyone is invited to share thoughts and ideas about how this effort
>could evolve.
>This will be a long mail; I will amend, enhance and polish the content
>and turn it into a blog post (on https://mojaves.github.io) to make it easier
>to consume and to have some easy-to-find documentation. Later on the
>same content will also appear on the oVirt blog.
Thank you for the e-mail!
>Happy hacking!
>
>+++
>
># How to try the experimental container support for Vdsm
>
>Vdsm is gaining *experimental* support to run containers alongside VMs.
>Vdsm has long had the ability to manage VMs which run containers,
>and recently gained support for
>[atomic guests](http://www.projectatomic.io/blog/2015/01/running-ovirt-guest-agent-as-privileged-container/).
>
>With the new support we are describing, you will be able to manage containers
>with the same, proven infrastructure that lets you manage VMs.
>
>This feature is currently being developed and is not yet merged into the
>Vdsm codebase, so some extra work is needed if you want to try it out.
>We are aiming to merge it in the oVirt 4.1.z cycle.
>
>## What works, aka what to expect
>
>The basic features are expected to work:
>1. Run any docker image from the public docker registry
>2. Make the container accessible from the outside (aka not just from localhost)
>3. Use file-based storage for persistent volumes
>
>## What does not yet work, aka what NOT to expect
>
>A few things are planned and currently under active development:
>1. Monitoring. Engine will not get any update from the container besides "VM" status (Up, Down...)
The question is what to actually monitor. Do we want some performance
metrics or a simple "service is running" probe?
> One important drawback is that Engine will not tell you the IP of the container;
> you will need to connect to the Vdsm host and discover it using standard docker tools.
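> For example, something along these lines should work (a sketch; replace
> <container-name> with the name reported by `docker ps` on the host):
>
> # docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name>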
>2. Proper network integration. Some steps still need manual intervention
>3. Stability and recovery - it's pre-alpha software after all! :)
This is something we should really discuss and care about - to what
degree we want to pet our containers.
>
>## 1. Introduction and prerequisites
>
>Trying out container support affects only the host and Vdsm.
>Besides adding a few custom properties (totally safe and supported since early
>3.z), there are zero changes required to the DB and to Engine.
>Nevertheless, we recommend dedicating one oVirt 4.y environment,
>or at least one 4.y host, to trying out the container feature.
>
>To get started, the first thing you need is a vanilla oVirt 4.y
>installation. We will need to make changes to Vdsm and to the
>Vdsm host, so hosted engine and/or oVirt node may add extra complexity;
>better to avoid them for the moment.
>
>The remainder of this tutorial assumes you are using two hosts,
>one for Vdsm (which will be changed) and one for Engine (which requires zero changes);
>furthermore, we assume the Vdsm host is running CentOS 7.y.
>
>We require:
>- one test host for Vdsm. This host needs to have one NIC dedicated to containers
> (see the quick check after this list).
> We will use the [docker macvlan driver](https://raesene.github.io/blog/2016/07/23/Docker-MacVLAN/),
> so this NIC *must not be* part of a bridge.
>- docker >= 1.12
>- oVirt >= 4.0.5 (Vdsm >= 4.18.15)
>- CentOS >= 7.2
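>
>As a quick check that the spare NIC is not enslaved to a bridge (a sketch,
>assuming the NIC is `enp3s0` as in the examples below), inspect the link and
>make sure the output shows no `master <bridge>` entry:
>
> # ip link show enp3s0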
>
>Docker >= 1.12 is available for download [here](https://docs.docker.com/engine/installation/linux/centos/)
>
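>For example, on CentOS this boils down to adding the upstream yum repository
>and installing the `docker-engine` package (a sketch; treat the linked page as
>the authoritative reference, since the repository details may change):
>
> # cat > /etc/yum.repos.d/docker.repo << 'EOF'
> [dockerrepo]
> name=Docker Repository
> baseurl=https://yum.dockerproject.org/repo/main/centos/7/
> enabled=1
> gpgcheck=1
> gpgkey=https://yum.dockerproject.org/gpg
> EOF
> # yum install docker-engine
>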
>Caveats:
>1. docker from the official rpms conflicts with docker from CentOS, and has a different package name: docker-engine vs docker.
> Please note that the kubernetes package from CentOS, for example, requires 'docker', not 'docker-engine'.
>2. you may want to replace the default service file
> [with this one](https://github.com/mojaves/convirt/blob/master/patches/centos72/systemd/docker/docker.service)
> and to use this
> [sysconfig file](https://github.com/mojaves/convirt/blob/master/patches/centos72/systemd/docker/docker-engine).
> Here I'm just adding the storage options docker requires, much like the CentOS docker is configured.
> Configuring docker like this can save you some troubleshooting, especially if you had docker from CentOS installed
> on the testing box.
>
>## 2. Patch Vdsm to support containers
>
>You need to patch and rebuild Vdsm.
>Fetch [this patch](https://github.com/mojaves/convirt/blob/master/patches/vdsm/4.18.15.1/0001-container-support-for-Vdsm.patch.gz)
>and apply it against Vdsm 4.18.15.1. Vdsm 4.18.15.{1,2,...} are supported as well.
>
>Rebuild Vdsm and reinstall it on your box.
>[centos 7.2 packages are here](https://github.com/mojaves/convirt/tree/master/rpms/centos72)
>Make sure you install the Vdsm command line client (vdsm-cli).
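>
>If you prefer building yourself instead of using the prebuilt packages, a
>minimal sketch of one possible flow (assuming a git checkout of the Vdsm
>source and the patch downloaded into the parent directory):
>
> $ git clone https://gerrit.ovirt.org/vdsm && cd vdsm
> $ git checkout v4.18.15.1
> $ gunzip ../0001-container-support-for-Vdsm.patch.gz
> $ git am ../0001-container-support-for-Vdsm.patch
> $ ./autogen.sh --system && make rpm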
>
>Restart *both* Vdsm and Supervdsm, and make sure Engine still works flawlessly with the patched Vdsm.
>This ensures that no regression is introduced, and that your environment can run VMs just as before.
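>
>On CentOS the relevant services are named `vdsmd` and `supervdsmd`, so the
>restart boils down to:
>
> # systemctl restart supervdsmd vdsmd
>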
>Now we can proceed to add the container support.
>
>Start docker:
>
> # systemctl start docker-engine
> (optional)
> # systemctl enable docker-engine
>
>Restart Vdsm again:
>
> # systemctl restart vdsmd
>
>Now we can check whether Vdsm detects docker, so you can use it.
>Still on the same Vdsm host, run:
>
> $ vdsClient -s 0 getVdsCaps | grep containers
> containers = ['docker', 'fake']
>
>This means this Vdsm can run containers using the 'docker' and 'fake' runtimes.
>Ignore the 'fake' runtime; as the name suggests, it is a test driver, kinda like /dev/null.
>
>Now we need to make sure the host network configuration is fine.
>
>### 2.1. Configure the docker network for Vdsm
>
> PLEASE NOTE
> that the suggested network configuration assumes that
> * you have one network, `ovirtmgmt` (the default one), which you use for everything
> * you have one Vdsm host with at least two NICs, one bound to the `ovirtmgmt` network, and one spare
>
>_This step is not yet automated by Vdsm_, so manual action is needed; Vdsm will take
>care of this automatically in the future.
>
>You can use
>[this helper script](https://github.com/mojaves/convirt/blob/master/patches/vdsm/cont-setup-net),
>which reuses the Vdsm libraries. Make sure
>you have patched Vdsm to support containers before using it.
>
>Let's review what the script needs:
>
> # ./cont-setup-net -h
> usage: cont-setup-net [-h] [--name [NAME]] [--bridge [BRIDGE]]
> [--interface [INTERFACE]] [--gateway [GATEWAY]]
> [--subnet [SUBNET]] [--mask [MASK]]
>
> optional arguments:
> -h, --help show this help message and exit
> --name [NAME] network name to use
> --bridge [BRIDGE] bridge to use
> --interface [INTERFACE]
> interface to use
> --gateway [GATEWAY] address of the gateway
> --subnet [SUBNET] subnet to use
> --mask [MASK] netmask to use
>
>So we need to feed --name, --interface, --gateway, --subnet and optionally --mask (the default, /24, is often fine).
>
>In my case the default mask was indeed fine, so I used the script like this:
>
> # ./cont-setup-net --name ovirtmgmt --interface enp3s0 --gateway 192.168.1.1 --subnet 192.168.1.0
>
>This is the output I got:
>
> DEBUG:virt.containers.runtime:configuring runtime 'docker'
> DEBUG:virt.containers.command:* calling ['/bin/docker', 'network', 'inspect', 'ovirtmgmt']
> Error: No such network: ovirtmgmt
> DEBUG:virt.containers.command:* called ['/bin/docker', 'network', 'inspect', 'ovirtmgmt']
> DEBUG:virt.containers.runtime.Docker:config: cannot load 'ovirtmgmt', ignored
> DEBUG:virt.containers.command:* calling ['/bin/docker', 'network', 'create', '-d', 'macvlan', '--subnet=192.168.1.0/24', '--gateway=192.168.1.1', '--ip-range=192.168.1.0/24', '-o', 'parent=enp3s0', 'ovirtmgmt']
> DEBUG:virt.containers.command:* called ['/bin/docker', 'network', 'create', '-d', 'macvlan', '--subnet=192.168.1.0/24', '--gateway=192.168.1.1', '--ip-range=192.168.1.0/24', '-o', 'parent=enp3s0', 'ovirtmgmt']
> DEBUG:virt.containers.runtime:configuring runtime 'fake'
>
>You can clearly see what the script did, and why it needed root privileges. Let's double-check using the docker tools:
>
> # docker network ls
> NETWORK ID NAME DRIVER SCOPE
> 91535f3425a8 bridge bridge local
> d42f7e5561b5 host host local
> 621ab6dd49b1 none null local
> f4b88e4a67eb ovirtmgmt macvlan local
>
> # docker network inspect ovirtmgmt
> [
> {
> "Name": "ovirtmgmt",
> "Id": "f4b88e4a67ebb7886ec74073333d613b1893272530cae4d407c95ab587c5fea1",
> "Scope": "local",
> "Driver": "macvlan",
> "EnableIPv6": false,
> "IPAM": {
> "Driver": "default",
> "Options": {},
> "Config": [
> {
> "Subnet": "192.168.1.0/24",
> "IPRange": "192.168.1.0/24",
> "Gateway": "192.168.1.1"
> }
> ]
> },
> "Internal": false,
> "Containers": {},
> "Options": {
> "parent": "enp3s0"
> },
> "Labels": {}
> }
> ]
>
>Looks good! The host configuration is complete.
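>
>As an optional smoke test (my own suggestion, not required by the flow), you
>can run a throwaway container on the new network and check that it gets an
>address from the expected subnet:
>
> # docker run --rm --net=ovirtmgmt busybox ip addr
>
>Let's move to the Engine side.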
>
>## 3. Configure Engine
>
>As mentioned above, we now need to configure Engine. This boils down to
>adding a few custom properties for VMs.
>
>In case you were already using custom properties, you need to amend the command
>line so you do not overwrite your existing ones.
>
> # engine-config -s UserDefinedVMProperties='volumeMap=^[a-zA-Z_-]+:[a-zA-Z_-]+$;containerImage=^[a-zA-Z]+(://|)[a-zA-Z]+$;containerType=^(docker|rkt)$' --cver=4.0
>
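>You can double-check the resulting value with:
>
> # engine-config -g UserDefinedVMProperties --cver=4.0
>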
>It is worth stressing that while the variables are container-specific,
>VM custom properties are a totally unintrusive and old concept in oVirt, so
>this step is totally safe.
>
>Now restart Engine to let it use the new variables:
>
> # systemctl restart ovirt-engine
>
>The next step is to actually configure one "container VM" and run it.
>
>## 4. Create the container "VM"
>
>To finally run a container, you start by creating a VM much like you always did, with a
>few changes:
>
> 1. most of the hardware-related configuration isn't relevant for container "VMs",
> besides CPU shares and memory limits; this will be better documented in the
> future; unneeded configuration will just be ignored
> 2. You need to set some custom properties for your container "VM". Those are
> actually needed to enable the container flow, and they are documented in
> the next section. You *need* to set at least `containerType` and `containerImage`.
>
>### 4.1. Custom variables for container support
>
>The container support needs some custom properties to be properly configured:
>
> 1. `containerImage` (*needed* to enable the container system).
> Just select the target image you want to run. You can use the standard syntax of the
> container runtimes.
>
> 2. `containerType` (*needed* to enable the container system).
> Selects the container runtime you want to use. All the available options are always shown.
> Please note that unavailable container options are not yet grayed out.
> If you *do not* have rkt support on your host, you can still select it, but it won't work.
>
> 3. `volumeMap` (optional), a key:value mapping. You can map one "VM" disk (key) to one container volume (value),
> to have persistent storage. Only file-based storage is supported.
>
>Example configuration:
>
> `containerImage = redis`
> `containerType = docker`
> `volumeMap = vda:data` (this may not be needed, and the volume label is just for illustrative purposes)
>
>### 4.2. A little bit of extra work: preload the images on the Vdsm host
>
>This step is not needed by the flow, and will be handled by oVirt in the future.
>The issue is how the container images are handled. They are stored by the container
>management system (rkt, docker) on each host, and they are not pre-downloaded.
>
>To shorten the duration of the first boot, you are advised to pre-download
>the image(s) you want to run. For example:
>
> ## on the Vdsm host you want to use with containers
> # docker pull redis
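>
>You can confirm the image is now cached locally with:
>
> # docker images redis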
>
>## 5. Run the container "VM"
>
>You are now all set to run your "VM" using oVirt Engine, just like any existing VM.
>Some actions don't make sense for a container "VM", like live migration.
>Engine won't stop you from trying those actions, but they will fail gracefully
>with the standard errors.
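>
>Once the container "VM" is reported as Up, you can inspect the actual container
>on the Vdsm host with the standard docker tools, e.g.:
>
> # docker ps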
>
>## 6. Next steps
>
>What to expect from this project in the future?
>For the integration with Vdsm, we want to fix the existing known issues, most notably:
>
> * add proper monitoring/reporting of the container health
As noted above, I wonder if you have some idea of which approach to
use.
> * ensure proper integration of the container image store with oVirt storage management
What about a private container registry? (thinking of docker registry
right now).
> * streamline the network configuration
Did you think of how hard integration with cloudy networking tech such
as vxlan would be? Is it actually feasible within oVirt?
>What is explicitly excluded for now is any Engine change. This is a Vdsm-only change at the
>moment, so fixing the following is currently unplanned:
>
> * First and foremost, Engine will not distinguish between real VMs and container VMs.
> Actions unavailable to containers will not be hidden from the UI. The same goes for monitoring
> and configuration data, which will be ignored.
> * Engine is NOT aware of the volumes one container can use. You must inspect and do the
> mapping manually.
> * Engine is NOT aware of the available container runtimes. You must select it carefully.
>
>Proper integration with Engine may be added in the future, once this feature exits
>the experimental/provisional stage.
>
>Thanks for reading, make sure to share your thoughts on the oVirt mailing lists!
>
>+++
>
>[1] we keep calling it that way _only_ internally, because it's a short
>name we are used to. After the merge/once we release it, we will use
>a different name, like "vdsm-containers" or something like it.
>
>--
>Francesco Romani
>Red Hat Engineering Virtualization R & D
>Phone: 8261328
>IRC: fromani