On April 12, 2020 4:49:51 PM GMT+03:00, thomas(a)hoberg.net wrote:
I want to run containers and VMs side by side, not necessarily
nested. The main reason for that is GPUs, mostly Voltas, used for CUDA
machine learning, not for VDI, which is what most of the VM
orchestrators like oVirt or vSphere seem to focus on. And CUDA drivers
are notorious for refusing to work under KVM unless you pay $esla.
oVirt is more of a sideshow in my environment, used to run some
smaller functional VMs alongside bigger containers, but also to
consolidate and re-distribute the local compute-node storage as a
Gluster storage pool: kibbutz storage and compute, if you want, which is
very much how I understand the HCI philosophy behind oVirt.
The full integration of containers and VMs is still very much on the
roadmap, I believe, but I was surprised to see that even co-existence
seems to be a problem currently.
So I set up a 3-node HCI on (GPU-less and older) CentOS7 hosts and then
added additional (beefier GPGPU) CentOS7 hosts that have been running
CUDA workloads on the latest Docker-CE (19.x).
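For context, the CUDA containers on those hosts are started the standard
Docker 19.03 way, roughly like this (just a sketch; the image name is only
an example and assumes the nvidia-container-toolkit is installed):

    # quick check that the GPUs are usable from inside a container
    docker run --rm --gpus all nvidia/cuda:10.2-base nvidia-smi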
The installation works fine and I can migrate VMs to these extra hosts
etc., but to my dismay, Docker containers on these hosts lose access to
the local network, i.e. the entire subnet the host is in. For some
strange reason I can still ping Internet hosts, perhaps even everything
behind the host's gateway, but local connections are blocked.
It would seem that the ovirtmgmt network that the oVirt installation
puts in place breaks the docker0 bridge that Docker put there first.
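What I compare before and after adding the host is roughly the following
(a rough sketch only; the exact rules and output will of course differ per
setup):

    ip link show type bridge   # docker0 vs. the newly created ovirtmgmt bridge
    ip route                   # which bridge now carries the route to the host's subnet
    iptables -S FORWARD        # vdsm/firewalld rules vs. Docker's DOCKER/DOCKER-USER chains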
I'd consider that a bug, but I'd like to gather some feedback first on
whether anyone else has run into this problem.
I've repeated this several times in completely distinct environments
with the same result:
Simply add a host with a working Docker-CE installation as an oVirt host
to an existing DC/cluster, then check whether a busybox container can
still ping anything on that subnet, including the Docker host itself
(it's worth trying the same ping just before you actually add the host,
for comparison).
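Concretely, the test looks something like this (a sketch; substitute an
address from the host's own subnet):

    # before adding the host to oVirt: this works
    docker run --rm busybox ping -c 3 <IP-on-the-host-subnet>
    # ...add the host to the DC/cluster via the engine...
    # afterwards: the same ping fails for me
    docker run --rm busybox ping -c 3 <IP-on-the-host-subnet>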
No, I haven't tried this with podman yet, because that's a separate
challenge with CUDA: I'd love to know whether that is already part of
QA for oVirt.
Hi Thomas,
I don't think that this type of setup is unsupported.
Have you tried the opposite way: add a new host to oVirt first, then put Docker on
it and add docker0 in oVirt as a VM network (even if you won't use it as
such)?
About KVM and the Nvidia drivers: Red Hat and Nvidia have a partnership, and thus by default
you are not able to set "<hidden state='on'/>" on the VM. As the
Nvidia drivers check whether they are running in a VM, they either allow using the
hardware or not. Of course, the drivers for enterprise hardware (sold with that option in mind)
don't care whether they are in a VM or not.
If you manage to put the hidden flag on a VM, the story will change a little bit for
you.
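For reference, in plain libvirt domain XML that flag lives under <features>,
something like the sketch below; on an oVirt-managed VM you would need
something like a vdsm hook to inject it, which is exactly what is not offered
by default:

    <features>
      <kvm>
        <hidden state='on'/>
      </kvm>
    </features>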
Best Regards,
Strahil Nikolov