On Tue, 4 May 2021 at 00:35, Thomas Hoberg <thomas@hoberg.net> wrote:
Do you think it would add significant value to your use of oVirt if

More than a question to the users community, this sounds like feedback on current pain points :-)

- single node HCI could easily promote to 3-node HCI?

+Rejy Cyriac how well is this documented, and how can we improve the experience for this step?
 
- single increments of HCI nodes worked with "sensible solution of quota issues"?
- extra HCI nodes (say beyond 6) could easily transition into erasure coding for good quota management, distinguishable by volumes?
- oVirt clusters supported easy transition between HCI and SAN/NFS storage as initial 1 or 3 node HCI "succeed" into a broader deployment with role differentiation?

I think these need further explanation, but I'll let the Gluster team ask about them.
 
- it was validated on "edgy hardware" like Atoms, which support 32GB RAM these days, nested virtualization with affordable 100% passive hardware?

Anyone willing to donate this edgy hardware to the project so we can fully validate oVirt on such hardware? https://ovirt.org/community/get-involved/donate-hardware.html
 
- oVirt node images were made only from fully validated vertical stacks, including all standard deployment variants (SAN/NFS/Gluster 1/3/6/9 node HCI) including VDO and all life-cycle operations (updates)?

Can you please detail the test criteria? Just noting here that we lack the hardware for testing a 9-node HCI setup in oVirt Jenkins.
 
- import and export of OVA were fully supported/validated standard operations against oVirt, VMware and VirtualBox?

Have you seen any specific issue with this?


- oVirt, Docker, Podman (and OKD) could work side-by-side on hosts, recognizing each other's resource allocations and networks instead of each assuming it owned the host?

I don't foresee this happening; if you want to run VMs and containers on the same hosts, you should probably look at OKD + KubeVirt as a solution.
 
- RealTek drivers, both for onboard and USB3 2.5Gbit were included in the oVirt node images and actually worked properly across warm reboots?

Yes, working with 3rd-party drivers is not easy while using Node. For this case a plain CentOS / RHEL host would work better.
There's a bug in Anaconda that makes it hard to handle 3rd-party driver installation with an image-based installation.
 
- nested virtualization was fully supported with oVirt on oVirt for fully testing migration and expansion scenarios before applying them on the physical hardware?

Nested virtualization is already used for testing oVirt on x86_64: the whole oVirt System Tests suite relies on nested virtualization working.
 
- Ansible was just 10000x faster?

This is not something the oVirt team can do :-) We can suggest a way to speed things up: https://github.com/oVirt/ovirt-ansible-hosted-engine-setup#deployment-time-improvements
but it may have corner cases where it does not work as expected.
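As an illustration of the kind of tuning the linked page discusses, here is a minimal sketch of standard Ansible speed-ups (SSH pipelining and fact caching) in `ansible.cfg`. The settings below are generic Ansible configuration options, not specific to the hosted-engine deployment role, and the values are illustrative:

```ini
# ansible.cfg — hedged sketch of common Ansible speed-ups
[defaults]
# Only gather facts when they are not already cached
gathering = smart
# Cache gathered facts on disk so repeated runs skip re-gathering
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_facts
fact_caching_timeout = 86400

[ssh_connection]
# Pipelining reduces the number of SSH operations per task
pipelining = True
```

Whether these help for a given deployment depends on the playbooks involved; pipelining in particular requires that `requiretty` is not enforced in sudoers on the managed hosts.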
 
- oVirt 4.3 could upgrade to 4.4 automagically and with a secure fail-back at any point? (ok, I know this is getting madly out of hand...) 

Maybe it's worth splitting the discussion into separate threads, one per topic.

 
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/S4RZUOERTVUSM2ITRV52E2ST757OLU6H/