On Mon, Feb 21, 2022 at 12:27 PM Thomas Hoberg <thomas@hoberg.net> wrote:
That's exactly the direction I originally understood oVirt would go: the ability to run VMs and containers side by side on bare metal, or nested with containers inside VMs for stronger resource or security isolation and network virtualization. To me it sounded especially attractive with an HCI underpinning, so you could also deploy it in the field with small 3-node clusters.

I think in general a big part of the industry is going down the path of moving most things behind the k8s API/resource model. This means different things for different companies. VMware, for instance, keeps its traditional virt stack and adds k8s APIs in front of it, bridging to k8s clusters behind the scenes to present a unified view, while others choose k8s itself (be it vanilla k8s, OpenShift, Harvester, ...) and then use, for instance, KubeVirt to deploy additional k8s clusters on top of it, unifying the stack that way.
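To make the "VMs behind the k8s API" idea concrete, a minimal KubeVirt VirtualMachine manifest looks roughly like this (the name is hypothetical and the field layout is a sketch from memory of the kubevirt.io/v1 API, not an authoritative reference):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm                  # hypothetical name
spec:
  running: true                  # start the VM when the object is created
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio        # paravirtual disk bus
        resources:
          requests:
            memory: 1Gi
      volumes:
      - name: rootdisk
        containerDisk:           # VM image shipped as a container image
          image: quay.io/containerdisks/fedora:latest
```

The point being that the VM is just another k8s resource: you `kubectl apply` it, the scheduler places it, and the same API machinery that manages pods manages its lifecycle.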

It is definitely true that k8s works significantly differently from other solutions like oVirt or OpenStack, but once you get into it, I think one would be surprised how simple the architecture of k8s actually is, and how few resources core k8s actually takes.

Having said that, as an ex oVirt engineer I would be glad to see oVirt continue to thrive. The simplicity of oVirt was always appealing to me.

Best regards,
Roman


But combining all those features evidently comes at too high a cost for all the integration, and the customer base is either too small or too poor: the cloud players are all-in on making sure you no longer run any hardware, and then it's really just about pushing your applications there, as cloud-native or "IaaS"-compatible as needed.

E.g. I don't see PCI pass-through coming to KubeVirt to enable GPU use, because it ties the machine to a specific host and goes against the grain of k8s as I understand it.
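For context on that tension: in k8s, host hardware is exposed through the device-plugin model as extended resources, and a pod requesting one is pinned by the scheduler to a node that advertises it, which is exactly the host affinity described above. A hedged sketch (the image is hypothetical; the resource name is the one published by NVIDIA's device plugin):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-consumer             # hypothetical name
spec:
  containers:
  - name: app
    image: example.com/cuda-app  # hypothetical image
    resources:
      limits:
        nvidia.com/gpu: 1        # extended resource from a device plugin;
                                 # schedules the pod onto a GPU-bearing node
```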

Memory overcommit is quite funny, really, because it's the same issue as the original virtual memory: essentially you lie to your consumer about the resources available and then swap pages back and forth in an attempt to keep all your consumers happy. It was processes for virtual memory; it's VMs now for the hypervisor, and in both cases the consumer and the provider are not continuously negotiating for the resources they need and the price they are willing to pay.
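The virtual-memory version of that lie is easy to see on Linux with the default overcommit heuristic: an anonymous mapping can be far larger than free RAM, and physical pages are only handed out on first touch. A minimal sketch (the 1 GiB size is arbitrary):

```python
import mmap

# Reserve 1 GiB of anonymous virtual memory without touching it.
# Under Linux's default overcommit heuristic the kernel grants the
# mapping regardless of free RAM; pages are only backed by physical
# frames on first write (demand paging).
SIZE = 1 << 30
region = mmap.mmap(-1, SIZE)

# Touch a single page; the other ~262143 pages stay a promise the
# kernel may never have to keep.
region[0:4] = b"page"
print(len(region))
region.close()
```

A hypervisor overcommitting guest RAM is making the same promise one level up, with ballooning and host swap playing the role of the page file.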

That negotiation is always better at the highest level of abstraction, the application itself, which is why implementing it at the lower levels (e.g. VMs) becomes less useful and less needed.

And then there is technology like CXL, which essentially turns RAM into a fabric: your local CPU will just get RAM from another piece of hardware when your application needs more and is willing to pay the premium something will charge for it.

With that type of hardware, much of what hypervisors used to do moves into DPUs/IPUs, and CPUs just run applications making hypercalls. The kernel is only there to bootstrap.

Not sure we'll see that type of hardware at home or in the edge, though...
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/PC5SDUMCPUEHQCE6SCMITTQWK5QKGMWT/