
Hello, GlusterFS seems to be EOL; at least there is no more development from Red Hat, and the GitHub repository shows no activity for more than a year. Is there a plan to use another filesystem in oVirt instead of GlusterFS?

Hey,

I don't think there are any plans at the moment. We have been running oVirt for 3 years now and are pretty happy with it, but it seems that it is no longer a priority for Red Hat and it has not been seriously picked up by the community. Most users seem to have moved on to other platforms.

We use oVirt with Ceph, are very happy with it, and it has been running very stably. We did have the occasional problem, such as kernels in CentOS 8 that caused KVM guests to "pause" indefinitely, but we were able to solve most of that ourselves; the fix was to upgrade to an AlmaLinux 9 distro with a kernel that was not affected. However, our solution is not a hyperconverged setup. Before that we used NFS, which worked very reliably as well. As long as NFS is supported, that leaves the door open for alternative storage solutions (standalone, redundant, distributed).

And you are right, Red Hat announced a while ago that GlusterFS would be EoL as of 31 December 2024. I guess you either need to run on what you have now, find an alternative, or join the community for future updates :-(
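Since NFS is mentioned above as the door-opener for alternative backends, here is a minimal sketch of an export for an oVirt NFS data domain. The path and client subnet are examples only; the one oVirt-specific detail is that the domain must be accessible as vdsm:kvm (UID/GID 36:36):

```
# /etc/exports -- hypothetical entry for an oVirt NFS data domain
# (path and client subnet are examples; adjust for your environment)
/exports/ovirt-data  10.0.0.0/24(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
```

After editing, re-export with `exportfs -ra` and make sure the directory itself is owned by 36:36 on the server, or the storage domain attach will fail with a permission error.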

Oracle still contributes: although they have their own variant, OLVM, development is very active.

On Mon, Jan 13, 2025, 5:01 AM change_jeeringly679--- via Users <users@ovirt.org> wrote:

@Jean-Louis Dupond <jean-louis@dupond.be> and his team are very active!

On Tue, Jan 14, 2025 at 06:25 David A. Stewart <itsavant@gmail.com> wrote:
-- Sandro Bonazzola

Hello,

We have been using oVirt for many years, and despite Red Hat's withdrawal the project remains important to us; we plan to continue using it.

We have a rather unusual setup: we primarily use iSCSI for performance needs and Gluster for distributed, high-capacity storage. The end of life (EOL) of Gluster is particularly concerning for us as we plan the next upgrades to our infrastructure.

We haven't really considered Ceph, as we lack the human expertise to maintain such a solution. The complexity and fine-tuning required to achieve a good CephFS configuration are significant hurdles that we are not ready to take on at this time.

One potential alternative might be to replace the Gluster volumes with MinIO, but we are still evaluating our options. We are watching closely how the community evolves to make sure we follow the best path forward.

Best regards,

On 14/01/2025 at 10:48, Sandro Bonazzola wrote:

At this moment it's still safe to keep GlusterFS support in oVirt. But I think we should already be thinking about the moment GlusterFS is no longer shipped in RHEL/CentOS/Alma, because then we will hit issues with oVirt. So there may come a point where GlusterFS support gets dropped from oVirt in order to keep it building.

Ceph might be an alternative, but I think it is also a lot of work to maintain. And do you really want to run your Ceph on your hypervisors?

Jean-Louis

On 1/15/25 15:21, Pierre Labanowski wrote:

I've been running a k8s cluster with the Piraeus operator built in for a while now, and it's working great. It uses LINBIT's DRBD under the hood to keep local storage on the hosts in sync. Maybe, with a bit of work, it could be adapted to oVirt as well? Quite some time ago LINBIT and RHV started on an integration project, but IIRC that never really completed. In any case, to stay relevant, oVirt needs to integrate fresh solutions instead of keeping long-deprecated and obsolete stuff alive.

On Wed, Jan 15, 2025 at 9:57 AM Jean-Louis Dupond via Users <users@ovirt.org> wrote:
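For readers unfamiliar with the DRBD replication mentioned above: it is configured per resource, pairing a local backing device on each node and mirroring writes over the network. A minimal two-node sketch might look like this (hostnames, backing devices, and addresses are hypothetical):

```
# /etc/drbd.d/r0.res -- hypothetical two-node DRBD resource
# (hostnames, backing devices, and addresses are examples only)
resource r0 {
    device    /dev/drbd0;
    disk      /dev/sdb1;      # local backing device on each node
    meta-disk internal;
    on node1 {
        address 10.0.0.1:7789;
    }
    on node2 {
        address 10.0.0.2:7789;
    }
}
```

Each node then brings the resource up with `drbdadm up r0`, after which DRBD keeps the two backing devices synchronized; Piraeus/LINSTOR automates generating exactly this kind of per-volume resource definition.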

Dear Jean-Louis,

Previously, oVirt had integration with Ceph via OpenStack Cinder [1]. Then the developers removed, or simply disabled, the old integration (I have no information about which; we simply stopped updating because of this) and replaced it with an integration via the cinderlib library [2]. They did it, to put it mildly, very strangely: via a kernel module.

It would be good if oVirt could work with Ceph RBD+QEMU from user space, as it works in OpenStack, and provided some manuals so that the tables of the old integration could be migrated to the new one. It seems that, for this to work now, it would be enough to make the appropriate edits to the code so as not to deal with servicing kernel devices, but to work through the librbd QEMU driver instead (as the legacy integration [1] does [3]).

engine=# SELECT cinder_volume_type AS volume_type, pg_size_pretty(SUM(size)) AS bytes, COUNT(disk_id) AS disks FROM all_disks_for_vms GROUP BY ROLLUP(cinder_volume_type) ORDER BY cinder_volume_type;
     volume_type     | bytes  | disks
---------------------+--------+-------
 replicated-rbd      | 136 TB |   235
 replicated-rbd-nvme |  23 TB |    87
                     | 159 TB |   322
(3 rows)

Thanks,
k

[1] https://www.ovirt.org/develop/release-management/features/storage/cinder-int...
[2] https://www.ovirt.org/develop/release-management/features/storage/cinderlib-...
[3] https://docs.ceph.com/en/latest/rbd/libvirt/#configuring-the-vm
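The userspace path argued for above is the one the Ceph libvirt documentation describes: QEMU opens the RBD image through librbd via a network disk definition, with no kernel rbd device involved. A sketch of such a libvirt disk element, where the pool/image name, monitor host, and secret UUID are all placeholders:

```xml
<!-- hypothetical libvirt disk using the qemu/librbd userspace driver -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='libvirt-pool/my-vm-disk'>
    <host name='ceph-mon1.example.com' port='6789'/>
  </source>
  <auth username='libvirt'>
    <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
  </auth>
  <target dev='vda' bus='virtio'/>
</disk>
```

The `<secret>` UUID refers to a libvirt secret holding the cephx key for the `libvirt` client user; the point of the sketch is that attach/detach becomes a pure libvirt/QEMU operation, with no `rbd map` or `/dev/rbd*` device management on the hypervisor.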

Ceph is not something we use in combination with oVirt at the moment. But I would say: feel free to open some PRs :)

On 1/16/25 14:46, Konstantin Shalygin wrote:

Hi,

Doesn't Ceph provide a (distributed?) iSCSI interface that oVirt could use? At least, that was my impression as I was looking to migrate from a single-host, hyperconverged solution (based on 4.3) to a 3-host hyperconverged solution (leveraging 4.5). My existing hardware is 8+ years old, so it's about time to refresh.

When I was first considering expanding, I thought I'd go to Gluster, but with the EOL and lack of development I've reconsidered and was thinking about using Ceph instead. Of course, this would depend on Ceph providing an interface that both the HostedEngine and other VMs could use, in a distributed manner that would not tie the H-E to a single host.

My plan was to acquire 3 new systems, install the OS and set them up as Ceph, then load one up as a (new?) oVirt system with a HostedEngine based on the (existing) Ceph infra, and THEN work on migrating from the old oVirt to the new oVirt... Of course I'm not sure exactly how that would work, yet. I was also assuming that a 10GbE private network would be sufficient for the Ceph storage backplane. Not sure if I need a second 10G private network for "migrations"?

Am I incorrect in my understandings? (I've never actually played with Ceph, multiple-node oVirt, or migrations, so...)

Thanks!

-derek

On Wed, January 15, 2025 9:21 am, Pierre Labanowski wrote:
--
Derek Atkins  617-623-3745
derek@ihtfp.com  www.ihtfp.com
Computer and Internet Security Consultant
participants (9)
- change_jeeringly679@dralias.com
- Dan Yasny
- David A. Stewart
- Derek Atkins
- Jean-Louis Dupond
- Konstantin Shalygin
- Olivier
- Pierre Labanowski
- Sandro Bonazzola