
Hi,

Doesn't CEPH provide a (distributed?) iSCSI interface that oVirt could use? At least, that was my impression as I was looking to migrate from a single-host, hyperconverged solution (based on 4.3) to a three-host hyperconverged solution (leveraging 4.5). My existing hardware is 8+ years old, so it's about time to refresh.

When I first considered expanding, I thought I'd go with Gluster, but with the EOL and lack of development I've reconsidered and am now thinking about using CEPH instead. Of course, this would depend on CEPH providing an interface that both the HostedEngine and other VMs could use, in a distributed manner that would not tie the H-E to a single host.

My plan was to acquire three new systems, install the OS and set them up as a CEPH cluster, then load one up as a (new?) oVirt system with the HostedEngine backed by the (existing) Ceph infra, and THEN work on migrating from the old oVirt to the new oVirt... I'm not sure exactly how that would work yet.

I was also assuming that a 10GbE private network would be sufficient for the CEPH storage backplane. Would I need a second 10G private network for "migrations"?

Am I incorrect in my understanding? (I've never actually played with CEPH, multiple-node oVirt, or migrations, so...)

Thanks!

-derek
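P.S. To make the question more concrete, this is the sort of thing I had in mind -- a rough sketch only, since I've never done it. It assumes the ceph-iscsi gateway (gwcli) is installed on two of the new nodes, and the pool, image, IQNs, hostnames and IPs below are all made-up placeholders:

    # create an RBD pool to back the iSCSI LUNs (name is a placeholder)
    ceph osd pool create ovirt 64
    rbd pool init ovirt

    # expose an RBD image as an iSCSI target via gwcli
    gwcli
    > cd /iscsi-targets
    > create iqn.2003-01.org.example.ceph:ovirt-gw
    > cd /iscsi-targets/iqn.2003-01.org.example.ceph:ovirt-gw/gateways
    > create ceph-node1 192.168.100.11
    > create ceph-node2 192.168.100.12
    > cd /disks
    > create pool=ovirt image=data01 size=500G
    > cd /iscsi-targets/iqn.2003-01.org.example.ceph:ovirt-gw/hosts
    > create iqn.1994-05.com.redhat:ovirt-host1
    > auth username=ovirtuser password=ovirtpassword12
    > disk add ovirt/data01

The idea would be that oVirt then attaches to that target as an ordinary iSCSI storage domain, which is what I meant by a "distributed" iSCSI interface. Whether that is actually sensible for a hyperconverged setup is exactly what I'm asking.

On Wed, January 15, 2025 9:21 am, Pierre Labanowski wrote: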
Hello,
We have been using oVirt for many years, and despite Red Hat's withdrawal, the project remains important to us, and we plan to continue using it.
We have a rather unique setup, as we primarily use iSCSI for performance needs and Gluster for distributed and high-capacity storage. The end of life (EOL) of Gluster is particularly concerning for us as we plan the next upgrades to our infrastructure.
We haven’t really considered Ceph, as we lack the human expertise to maintain such a solution. The complexity and fine-tuning required to reach a good CephFS configuration are significant hurdles that we are not ready to take on at this time.
One potential alternative might be to replace Gluster volumes with MinIO, but we are still evaluating our options. We are closely watching how the community evolves to ensure we follow the best path forward.
Best regards,
On 14/01/2025 at 10:48, Sandro Bonazzola wrote:
@Jean-Louis Dupond <jean-louis@dupond.be> and his team are very active!
On Tue, 14 Jan 2025 at 06:25, David A. Stewart <itsavant@gmail.com> wrote:
Oracle still contributes, albeit through their own variant, OLVM, and development there is very active.
On Mon, Jan 13, 2025, 5:01 AM change_jeeringly679--- via Users <users@ovirt.org> wrote:
Hey,
I don't think there are any plans at the moment. We have been running oVirt for 3 years now and are pretty happy with it. But it seems that it is no longer a priority for Red Hat, and it has not been seriously picked up by the community. Most users seem to have moved on to other platforms.
We use oVirt with CEPH and are very happy with it; it has been running very stably. We did have the occasional problem, such as the kernels in CentOS 8 that caused KVM to "pause" indefinitely, but we were able to solve most of that ourselves. The solution was to upgrade to an AlmaLinux 9 distro with a kernel that was not affected. However, our solution is not a hyperconverged setup. Before that we used NFS, which also worked very reliably. This keeps alternative storage solutions (standalone, redundant, distributed) open as long as NFS is supported.
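In case it helps anyone weighing the NFS route, here is a minimal sketch of what an export for an oVirt data domain can look like on a plain Linux NFS server -- the path and options are illustrative, not our exact setup; the key detail is that oVirt expects the export to be owned by vdsm:kvm (uid/gid 36):

    # /etc/exports on the NFS server (hypothetical path)
    /exports/ovirt-data  *(rw,sync,no_subtree_check)

    # oVirt expects the export root to be owned by vdsm:kvm (uid/gid 36)
    chown 36:36 /exports/ovirt-data
    exportfs -ra

The domain is then added in the Administration Portal as an NFS storage domain pointing at nfs-server:/exports/ovirt-data.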
And you are right, Red Hat announced a while ago that GlusterFS would be EOL as of 31 December 2024.
I guess you either need to run on what you have now, find an alternative, or join the community for future updates :-(
--
Sandro Bonazzola
--
Derek Atkins          617-623-3745
derek@ihtfp.com       www.ihtfp.com
Computer and Internet Security Consultant