n00b Requesting Some Advice

Hi All,

I need some "best practice" advice. We have a Ceph Storage Cluster (Octopus, moving to Pacific) which we'd like to use with our new oVirt Cluster (all on CentOS 8 boxes). What I'd like to know is: what is the "best" (i.e. recommended / best-practice) way of doing this - via iSCSI, CephFS, 'raw' RBD blocks, some other way I haven't read about yet, etc.?

I realise 'best' is a subjective term, but what I tend to do is 'manual' installs so that I both actually understand what is happening (i.e. how things fit together - I pull apart and rebuild mechanical clocks and watches for the same reason) and also so I can "Puppet-ise" the results for future use. This means that I am *not* necessarily looking for "quick and dirty" or "quick and easy" (i.e. I have no trouble using the CLI and 'vim-ing' conf files as required), but I do want a solid, "best-practice" system when I'm done.

So, can someone please help? And would you mind pointing me towards the relevant documentation for the answer(s) supplied (yes, I *always* RTFM :-) )?

Thanks in advance,
Dulux-Oz

+Eyal Shenitzky <eshenitz@redhat.com> any suggestion?

-- Sandro Bonazzola, Manager, Software Engineering, EMEA R&D RHV, Red Hat EMEA

Hi Matthew,

Currently, in order to use Ceph in oVirt you have two options:

1. Ceph via the iSCSI gateway - a regular iSCSI storage domain with Ceph as the storage backend; this supports all the regular operations [1].
2. The new Managed Block Storage (cinderlib integration) technical preview - create a Managed Block Storage domain, which does not yet support all the operations available for a "regular" storage domain; you can find more info in [2] and [3].

Each option has its benefits.

[1] https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/block_device_guide/using_an_iscsi_gateway
[2] https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_self-hosted_engine_using_the_command_line/set_up_cinderlib
[3] https://www.ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
-- Regards, Eyal Shenitzky
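For reference, the Managed Block Storage route is configured through a set of Cinder driver options when the domain is created. Below is a minimal sketch of what those options might look like for a Ceph RBD backend, written as a Python dict of the key/value pairs; the option names come from Cinder's RBD driver, while the pool, user and file paths are illustrative assumptions, not values from this thread.

# Illustrative cinderlib/Cinder RBD driver options for an oVirt
# Managed Block Storage domain. The keys are standard Cinder RBD
# driver options; the values (pool name, CephX user, file paths)
# are placeholders and must match your own Ceph cluster.
driver_options = {
    "volume_driver": "cinder.volume.drivers.rbd.RBDDriver",
    "rbd_pool": "ovirt-volumes",              # assumed RBD pool name
    "rbd_user": "ovirt",                      # assumed CephX user
    "rbd_ceph_conf": "/etc/ceph/ceph.conf",   # ceph.conf available to the engine
    "rbd_keyring_conf": "/etc/ceph/ceph.client.ovirt.keyring",  # assumed keyring path
    "use_multipath_for_image_xfer": "true",
}

These key/value pairs correspond to the "Driver Options" entered when creating a Managed Block Storage domain, as described in the cinderlib integration feature page [3].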

Thanks guys - yeah, we thought that was the case. Thanks for confirming things.

Regards,
Matthew J

Hi All,

POSIXFS is a good option - mount the CephFS filesystem as a POSIX-compliant FS data domain and you get all the benefits of the virtualization layer as well as the Ceph storage layer. I have been using it for more than two years without problems.

Tarun Kumar Kushwaha
Skyvirt Cyberrange
https://cyberrange.skyvirt.tech
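As a concrete illustration of the CephFS suggestion above, here is a minimal sketch of adding a CephFS-backed POSIX-compliant FS data domain through the oVirt Python SDK (ovirtsdk4). The engine URL, credentials, host name, monitor addresses and secret-file path are placeholders rather than values from this thread, and the same fields can equally be filled in by hand in the Administration Portal's New Domain dialog.

# Minimal sketch: add a CephFS-backed "POSIX compliant FS" data domain
# with the oVirt Python SDK (ovirtsdk4). All names, addresses and
# credentials below are placeholders for illustration only.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # assumed engine URL
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

sds_service = connection.system_service().storage_domains_service()
sds_service.add(
    types.StorageDomain(
        name='cephfs_data',
        type=types.StorageDomainType.DATA,
        host=types.Host(name='host1'),  # any host that can reach the Ceph monitors
        storage=types.HostStorage(
            type=types.StorageType.POSIXFS,
            # CephFS mount source, as you would give it to mount -t ceph
            path='mon1.example.com,mon2.example.com,mon3.example.com:/',
            vfs_type='ceph',
            mount_options='name=admin,secretfile=/etc/ceph/admin.secret',
        ),
    ),
)

connection.close()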

Thanks Tarun Kumar,

Right, we have that option too :)
-- Regards, Eyal Shenitzky

FYI: I recently found that Ceph performs quite well in VMware when using the VM as a "direct" Ceph client (our tests involved directly mounting an RBD device from inside the VM). So as long as you can treat the OS level as throwaway, and use Ceph-to-the-VM for your actually important data, this might also be a good thing to use in oVirt.
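To make the "direct Ceph client in the guest" idea concrete, here is a minimal sketch using Ceph's librados/librbd Python bindings from inside a VM; the pool and image names are assumptions for illustration. (The test described above instead mapped the image with the rbd kernel client and used it as a block device, which achieves the same end through the kernel rather than the library.)

# Minimal sketch: access an RBD image directly from inside a VM using
# Ceph's Python bindings (python3-rados / python3-rbd). The pool and
# image names below are illustrative assumptions.
import rados
import rbd

# Connect to the cluster using the client's ceph.conf and keyring.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

try:
    ioctx = cluster.open_ioctx('vm-data')  # assumed pool name
    try:
        rbd_inst = rbd.RBD()
        rbd_inst.create(ioctx, 'scratch', 4 * 1024**3)  # create a 4 GiB image
        with rbd.Image(ioctx, 'scratch') as image:
            image.write(b'hello from inside the VM', 0)  # write at offset 0
    finally:
        ioctx.close()
finally:
    cluster.shutdown()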
participants (6)
- duluxoz
- Eyal Shenitzky
- Kushwaha, Tarun Kumar
- matthew@peregrineit.net
- Philip Brown
- Sandro Bonazzola