
Hi all,

I am planning my new oVirt cluster on Apple hosts. These hosts can only have one disk, which I plan to partition and use for a hyperconverged setup. As this is my first oVirt cluster, I need help in understanding a few bits:

1. Is a hyperconverged setup possible with Ceph using cinderlib?
2. Can this hyperconverged setup be on oVirt Node Next hosts, or only CentOS?
3. Can I install cinderlib on oVirt Node Next hosts?
4. Are there any pitfalls in such a setup?

Thanks for your help.

Regards,
Shantur

Hi Shantur,

The main question is how many nodes you have. Ceph integration is still in development/experimental, and it would be wise to consider Gluster as well: it has great integration and is quite easy to work with. There are users reporting running Ceph with their oVirt, but I can't tell how good it is. I doubt that oVirt Node comes with the Ceph components, so you will most probably need to use a full-blown distro; in general, installing extra software on oVirt Node is quite hard. With such a setup, you will also need many more nodes than a Gluster setup due to Ceph's requirements.

Best Regards,
Strahil Nikolov

Hi Strahil,

Thanks for your reply. I have 16 nodes for now, but more are on the way. Ceph appeals to me over Gluster for the following reasons:

1. I have more experience with Ceph than Gluster.
2. I heard in the Managed Block Storage presentation that it leverages the storage software to offload storage-related tasks.
3. Adding Gluster storage limits you to three hosts at a time.
4. I read that there is a limit of a maximum of 12 hosts in a Gluster setup. No such limitation if I go with Ceph.

In my initial testing I was able to enable CentOS repositories in Node NG, but if I remember correctly, there were some librbd versions present in Node NG that clashed with the version I was trying to install.

Does Ceph hyperconverged still make sense?

Regards,
Shantur


On Sunday, 17 January 2021 at 15:51 +0000, Shantur Rathore wrote:

> 1. I have more experience with Ceph than Gluster.

That is a good reason to pick Ceph.

> 2. I heard in the Managed Block Storage presentation that it leverages the storage software to offload storage-related tasks.
> 3. Adding Gluster storage limits you to three hosts at a time.

Only if you want the nodes to be both storage and compute. You can add as many nodes as you wish as compute only (they won't be part of Gluster), and later you can add them to the Gluster trusted storage pool, which requires three nodes at a time (see the sketch at the end of this message).

> 4. I read that there is a limit of a maximum of 12 hosts in a Gluster setup. No such limitation if I go with Ceph.

Actually, that limit is about Red Hat support for RHHI, not for Gluster + oVirt. As both oVirt and Gluster are upstream projects, support is on a best-effort basis from the community.

> In my initial testing I was able to enable CentOS repositories in Node NG, but if I remember correctly, there were some librbd versions present in Node NG that clashed with the version I was trying to install. Does Ceph hyperconverged still make sense?

Yes, it does. You have the knowledge to run the Ceph part, but consider talking with some of the devs on the list, as there were some changes recently in oVirt's support for Ceph.
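For anyone following along, a minimal sketch of that three-at-a-time trusted-storage-pool expansion; host names, the volume name and brick paths are made up for illustration:

    # probe the three new nodes into the trusted storage pool
    gluster peer probe node4
    gluster peer probe node5
    gluster peer probe node6
    # grow the replica-3 volume by one brick per new node
    gluster volume add-brick myvol replica 3 \
        node4:/gluster/brick1 node5:/gluster/brick1 node6:/gluster/brick1
    # spread existing data onto the new bricks
    gluster volume rebalance myvol start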

Thanks Strahil for your reply. Sorry, just to confirm:

1. Are you saying Ceph on oVirt Node NG isn't possible?
2. Would you know which devs would be best to ask about the recent Ceph changes?

Thanks,
Shantur

Beware with Ceph and oVirt Managed Block Storage: the current integration is only possible with the kernel client, not with qemu-rbd.

k

Thanks for pointing that out to me, Konstantin. I understand that it would use a kernel client instead of the userland rbd library. Isn't that better? I have seen kernel clients be 20x faster than userland. I am probably missing something important here; would you mind detailing it?

Regards,
Shantur

Faster than fuse-rbd, not qemu. The main issues are the kernel page cache and client upgrades: for example, on a cluster with 700 OSDs and 1000 clients, we need to update the client version to get new features. With the current oVirt implementation we need to update the kernel and then reboot the host; with librbd we just need to update the package and reactivate the host.

k
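To illustrate the operational difference; pool and image names are made up, and the package name assumes an EL-family host:

    # kernel client (krbd): the data path lives in the kernel, so a new
    # client feature means a kernel update followed by a host reboot
    rbd map mypool/vm_disk      # exposes the image as /dev/rbd0
    rbd unmap /dev/rbd0

    # librbd (userspace): QEMU links against librbd, so an upgrade is
    # just a package update plus migrating VMs / reactivating the host
    dnf update librbd1
    qemu-img info rbd:mypool/vm_disk   # accesses the image via librbd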

Most probably it will be easier if you stick with a full-blown distro. @Sandro Bonazzola can help with the Ceph status.

Best Regards,
Strahil Nikolov

On Mon, 18 Jan 2021 at 20:04, Strahil Nikolov <hunter86_bg@yahoo.com> wrote:

> Most probably it will be easier if you stick with a full-blown distro.
> @Sandro Bonazzola can help with the Ceph status.

Letting the storage team have a voice here :-) +Tal Nisan <tnisan@redhat.com>, +Eyal Shenitzky <eshenitz@redhat.com>, +Nir Soffer <nsoffer@redhat.com>

--
Sandro Bonazzola
Manager, Software Engineering, EMEA R&D RHV
Red Hat EMEA
sbonazzo@redhat.com

Ceph support is available via Managed Block Storage (tech preview); it cannot be used instead of Gluster for hyperconverged setups. Moreover, it is not possible to use a pure Managed Block Storage setup at all: there has to be at least one regular storage domain in a data center.
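For context, a Managed Block Storage domain is defined by handing cinderlib a set of Cinder driver options. For Ceph RBD they look roughly like this; the pool and user names are illustrative, and the option names are Cinder's RBD driver options as listed on the feature page:

    # driver options entered when creating the Managed Block Storage domain
    volume_driver=cinder.volume.drivers.rbd.RBDDriver
    rbd_ceph_conf=/etc/ceph/ceph.conf
    rbd_pool=ovirt-volumes
    rbd_user=ovirt
    use_multipath_for_image_xfer=true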

Shantur, I recommend looking at OpenStack, or at OpenNebula/Proxmox, if you want to use Ceph storage. The current storage team support in oVirt can just break something and then not work on it anymore; take a look at what I am talking about in [1], [2], [3].

k

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1899453
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1899453
[3] https://bugzilla.redhat.com/show_bug.cgi?id=1899453

Perhaps a copy-paste error in the bugzilla entries? They are all the same number...

Gianluca

Yep, the BZs are:

https://bugzilla.redhat.com/show_bug.cgi?id=1539837
https://bugzilla.redhat.com/show_bug.cgi?id=1904669
https://bugzilla.redhat.com/show_bug.cgi?id=1905113

Thanks,
k

@Konstantin Shalygin <k0ste@k0ste.ru>:

> I recommend looking at OpenStack, or at OpenNebula/Proxmox, if you want to use Ceph storage.

I have tested all the options, but oVirt seems to tick most of the required boxes.

- OpenStack: too complex for the use case.
- Proxmox: love the Ceph support, but very basic clustering support.
- OpenNebula: weird VM state machine.

Not sure if you know, but rbd-nbd support is going to be implemented in cinderlib. I can understand why oVirt wants to support cinderlib and deprecate the Cinder support.

@Strahil Nikolov <hunter86_bg@yahoo.com>:

> Most probably it will be easier if you stick with a full-blown distro.

Yesterday I was able to bring up a single-host, single-disk Ceph cluster on oVirt Node NG 4.4.4 after enabling some repositories. Having said that, I didn't try image-based upgrades of the host; I read somewhere that RPMs are now persisted between host upgrades in Node NG.

@Benny Zlotnik:

> Moreover, it is not possible to use a pure Managed Block Storage setup at all: there has to be at least one regular storage domain in a data center.

Thanks for pointing out the requirement for a master domain. In theory, will I be able to satisfy the requirement with another iSCSI, or maybe Ceph iSCSI, master domain?

So each node would run:

- oVirt Node NG / CentOS
- a Ceph cluster member
- an iSCSI or Ceph iSCSI master domain

How practical is such a setup?

Thanks,
Shantur

On 19 Jan 2021, at 13:39, Shantur Rathore <rathore4u@gmail.com> wrote:

> I have tested all the options, but oVirt seems to tick most of the required boxes.

Yes, we loved oVirt for "that should just work"... before oVirt 4.4. Now imagine: your current cluster ran with qemu-rbd and Cinder; you upgrade oVirt and can't do anything. You can't migrate, your images are in another oVirt pool, and engine-setup can't migrate the current images to MBS. All of it "feature preview": the older integration broken, then abandoned.

Thanks,
k

> Thanks for pointing out the requirement for a master domain. In theory, will I be able to satisfy the requirement with another iSCSI, or maybe Ceph iSCSI, master domain?

It should work, as oVirt sees it as a regular domain; CephFS will probably work too.

> So each node would run:
> - oVirt Node NG / CentOS
> - a Ceph cluster member
> - an iSCSI or Ceph iSCSI master domain
>
> How practical is such a setup?

Not sure. It could work, but it hasn't been tested, and it's likely you are going to be the first to try it.

On Tue, Jan 19, 2021 at 12:20 PM Benny Zlotnik <bzlotnik@redhat.com> wrote:

> It should work, as oVirt sees it as a regular domain; CephFS will probably work too.

The Ceph iSCSI gateway should be supported since 4.1 (https://bugzilla.redhat.com/show_bug.cgi?id=1527061), so I think I can use it for configuring the master domain while still leveraging the same overall storage environment provided by Ceph, correct?

Gianluca
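For readers unfamiliar with that gateway: it exposes RBD images as iSCSI LUNs via LIO. From memory of the ceph-iscsi docs, carving out a small master-domain LUN looks roughly like this in gwcli; the target IQN, gateway names/IPs (which must match the gateway hostnames) and the pool/image are all illustrative:

    gwcli
    /> cd /iscsi-targets
    /iscsi-targets> create iqn.2003-01.com.redhat.iscsi-gw:ovirt-master
    /iscsi-targets> cd iqn.2003-01.com.redhat.iscsi-gw:ovirt-master/gateways
    /iscsi-target...ways> create ceph-gw-1 192.168.1.11
    /iscsi-target...ways> create ceph-gw-2 192.168.1.12
    /> cd /disks
    /disks> create pool=rbd image=ovirt-master size=100G

The resulting LUN is then added in oVirt as an ordinary iSCSI storage domain.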

I can confirm that Ceph iSCSI can be used for the master domain; we are using it together with VM disks on Ceph via Cinder ("old style"). Recent developments concerning Ceph in oVirt are disappointing for me, and I think I will have to look elsewhere (OpenStack, Proxmox) for our rather big deployment. At least Nir Soffer's explanation of the move to cinderlib in another thread (dated 2021-01-21) shed some light on the background of this decision.

Matthias

--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 / Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200

Thanks Matthias,

Ceph iSCSI is indeed supported, but it introduces the overhead of running LIO gateways for iSCSI. CephFS works as a POSIX domain; if we could get a POSIX domain to work as the master domain, then we could run a self-hosted engine on it (a sketch of such a domain follows below). Ceph RBD (rbd-nbd, hopefully, in the future) could then be used with cinderlib, and we would have a self-hosted infrastructure with Ceph.

I am hopeful that when the cinderlib integration is mature enough to be out of tech preview, there will be a way to migrate old Cinder disks to cinderlib.

PS: About your large deployment: go OpenStack or OpenNebula if you like. Proxmox clustering isn't very great; it doesn't have a single controller and relies on corosync.

Cheers,
Shantur
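For reference, a CephFS POSIX domain in oVirt takes the same parameters a manual mount would; the monitor address, client name and secret file here are illustrative:

    # manual equivalent of what VDSM mounts for a POSIX domain
    mount -t ceph mon1.example.com:6789:/ /mnt/cephfs \
        -o name=ovirt,secretfile=/etc/ceph/ovirt.secret

    # in the Admin Portal: Storage -> New Domain -> POSIX compliant FS
    #   Path:          mon1.example.com:6789:/
    #   VFS Type:      ceph
    #   Mount Options: name=ovirt,secretfile=/etc/ceph/ovirt.secret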

On 22.01.21 at 12:01, Shantur Rathore wrote:

> CephFS works as a POSIX domain; if we could get a POSIX domain to work as the master domain, then we could run a self-hosted engine on it.

Concerning this, you should look at https://bugzilla.redhat.com/show_bug.cgi?id=1577529.

Matthias

> It should work, as oVirt sees it as a regular domain; CephFS will probably work too.

I just tried to set up Ceph hyperconverged (the Ceph commands are sketched below):

1. Installed oVirt Node NG 4.4.4 on a machine (partitioned to leave space for Ceph).
2. Installed cephadm: https://docs.ceph.com/en/latest/cephadm/install/
3. Enabled EPEL and the other required repos.
4. Bootstrapped the Ceph cluster.
5. Created an LV on the partitioned free space.
6. Added an OSD to the Ceph cluster.
7. Added a CephFS.
8. Set min_size and size to 1 for the OSD pools to make it work with one OSD.

Then, all ready to deploy the self-hosted engine from Cockpit:

1. Started the self-hosted engine deployment (not hyperconverged).
2. Entered the details for Prepare VM.
3. Prepare VM successful.
4. Feeling excited, got the CephFS mount details ready.
5. Storage screen: there is no option to use POSIX storage for the self-hosted engine. Bummer.

Is there any way to work around this? I am able to add this storage to another oVirt engine.

[Screenshot omitted: Screenshot 2021-01-20 at 12.19.55.png, the hosted-engine storage type screen]

Thanks,
Shantur
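Roughly, steps 4 to 8 correspond to the following commands; the monitor IP, sizes and the "onn" volume group (assumed here to be oVirt Node NG's default VG) are illustrative:

    # 4. bootstrap a single-node cluster
    cephadm bootstrap --mon-ip 192.168.1.10

    # 5. carve an LV out of the free space
    lvcreate -L 400G -n ceph_osd onn

    # 6. add the LV as an OSD
    ceph orch daemon add osd $(hostname):/dev/onn/ceph_osd

    # 7. create a CephFS volume (deploys MDS daemons automatically)
    ceph fs volume create cephfs

    # 8. drop replication to a single copy (test setups only; newer
    #    releases may also require mon_allow_pool_size_one to be set)
    for pool in $(ceph osd pool ls); do
        ceph osd pool set "$pool" size 1 --yes-i-really-mean-it
        ceph osd pool set "$pool" min_size 1
    done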

So, after a quick dive into the source code, I cannot see any mention of POSIX storage in the hosted-engine code. I am not sure if there is a manual way of moving the locally created hosted-engine VM to POSIX storage and creating a storage domain using the API, the way the installer does for the other domain types while installing the self-hosted engine.

Regards,
Shantur
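Creating the POSIX domain itself via the API is the easy half; a hedged sketch with curl, where the engine URL, credentials, host name and mount details are all placeholders:

    curl -sk -u 'admin@internal:PASSWORD' \
      -H 'Content-Type: application/xml' \
      -d '<storage_domain>
            <name>cephfs_data</name>
            <type>data</type>
            <storage>
              <type>posixfs</type>
              <path>mon1.example.com:6789:/</path>
              <vfs_type>ceph</vfs_type>
              <mount_options>name=ovirt,secretfile=/etc/ceph/ovirt.secret</mount_options>
            </storage>
            <host><name>host1</name></host>
          </storage_domain>' \
      https://engine.example.com/ovirt-engine/api/storagedomains

Relocating the hosted-engine VM onto that domain is the part the installer doesn't support.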

Just a bump. Any ideas, anyone?

Thanks Konstantin. I do get that oVirt needs a master domain; I just want to make a POSIX domain the master domain. I can see that there is no option in the UI for that, but I do not understand whether it is incompatible or simply not implemented. If it is not implemented, then there might be a possibility of creating one with manual steps.

Thanks

On Fri, Jan 22, 2021 at 10:21 AM Konstantin Shalygin <k0ste@k0ste.ru> wrote:

> Shantur, this is oVirt. You should always make a master domain. Some 1 GB NFS export on the manager side is enough.
>
> k
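For what it's worth, the minimal NFS master domain Konstantin describes is just a small export on the engine host; the path, network and options here are illustrative:

    # create the export directory with the ownership VDSM expects (vdsm:kvm = 36:36)
    mkdir -p /exports/ovirt-master
    chown 36:36 /exports/ovirt-master

    # export it to the cluster network and start serving
    echo '/exports/ovirt-master 192.168.1.0/24(rw,anonuid=36,anongid=36)' >> /etc/exports
    exportfs -ra
    systemctl enable --now nfs-server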

On Tue, Jan 19, 2021 at 8:43 AM Benny Zlotnik <bzlotnik@redhat.com> wrote:

> Ceph support is available via Managed Block Storage (tech preview); it cannot be used instead of Gluster for hyperconverged setups.

Just for clarification: when you say Managed Block Storage you mean the cinderlib integration, correct? Is the page below still the correct reference for 4.4?
https://www.ovirt.org/develop/release-management/features/storage/cinderlib-...

So are the manual steps still needed (and also the repo config, which seems to be against Pike)? Or do you have an updated link for configuring cinderlib in 4.4?

> Moreover, it is not possible to use a pure Managed Block Storage setup at all: there has to be at least one regular storage domain in a data center.

Is this true only for a self-hosted engine environment, or also if I have an external engine?

Thanks,
Gianluca

> Just for clarification: when you say Managed Block Storage you mean the cinderlib integration, correct? Is the page below still the correct reference for 4.4?
> https://www.ovirt.org/develop/release-management/features/storage/cinderlib-...

Yes.

> So are the manual steps still needed (and also the repo config, which seems to be against Pike)? Or do you have an updated link for configuring cinderlib in 4.4?

It is slightly outdated; I and other users have successfully used Ussuri. I will update the feature page today.

> Is this true only for a self-hosted engine environment, or also if I have an external engine?

For an external engine as well. The reason this is required is that only regular domains can serve as the master domain, which is required for a host to get the SPM role.
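A rough sketch of what the Ussuri-based setup involves on 4.4; the exact package set (in particular python3-rbd and ceph-common) is my recollection of the feature page, so check the updated docs before relying on it:

    # on the engine host: cinderlib plus the RBD bindings for the Ceph driver
    dnf install -y centos-release-openstack-ussuri
    dnf install -y python3-cinderlib python3-rbd
    engine-setup   # answer Yes when asked to set up the cinderlib database

    # on every hypervisor: os-brick so VDSM can attach MBS volumes
    dnf install -y centos-release-openstack-ussuri
    dnf install -y python3-os-brick ceph-common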

On Tue, 19 Jan 2021 at 09:07, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:

> Is the page below still the correct reference for 4.4?
> https://www.ovirt.org/develop/release-management/features/storage/cinderlib-...

The page mentioned above was the feature development page and is not considered end-user documentation. The updated documentation is here:
https://ovirt.org/documentation/installing_ovirt_as_a_standalone_manager_wit...

--
Sandro Bonazzola
Manager, Software Engineering, EMEA R&D RHV
Red Hat EMEA
sbonazzo@redhat.com
participants (8)

- Benny Zlotnik
- Gianluca Cecchi
- Konstantin Shalygin
- Matthias Leopold
- Sandro Bonazzola
- Shantur Rathore
- Shantur Rathore
- Strahil Nikolov