
Hello

I've been reading lots of material about implementing oVirt with Ceph, however all of it talks about using Cinder.

Is there a way to get oVirt working with Ceph without having to implement the entire OpenStack?

I'm already using Foreman to deploy the Ceph and KVM nodes, trying to minimize the amount of moving parts. I heard something about oVirt providing a managed Cinder appliance, has anyone seen this?

Hi Charles,

Currently oVirt communicates with Ceph only through Cinder. If you want to avoid using Cinder, perhaps you can try to use CephFS and mount it as a POSIX storage domain instead (a rough mount sketch follows below the quoted message).

Regarding the Cinder appliance, it is not yet implemented, though we are currently investigating this option.

Regards,
Maor

On Fri, Jun 24, 2016 at 11:23 PM, Charles Gomes <cgomes@clearpoolgroup.com> wrote:
Hello
I’ve been reading lots of material about implementing oVirt with Ceph, however all of it talks about using Cinder.
Is there a way to get oVirt working with Ceph without having to implement the entire OpenStack?
I’m already using Foreman to deploy the Ceph and KVM nodes, trying to minimize the amount of moving parts. I heard something about oVirt providing a managed Cinder appliance, has anyone seen this?
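For reference, a CephFS-backed POSIX storage domain corresponds roughly to a mount like the one below. This is a sketch, not a tested recipe; the monitor address, user name and secret-file path are placeholders:

    # Manual CephFS mount using the kernel client (placeholder values)
    mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret

    # The equivalent oVirt POSIX-compliant FS domain would then use roughly:
    #   Path:          10.0.0.1:6789:/
    #   VFS Type:      ceph
    #   Mount Options: name=admin,secretfile=/etc/ceph/admin.secret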

On Sat, Jun 25, 2016 at 11:47 PM, Nicolás <nicolas@devels.es> wrote:
Hi,
We're using Ceph along with an iSCSI gateway, so our storage domain is actually an iSCSI backend. So far, we have had zero issues with cca. 50 high IO rated VMs. Perhaps [1] might shed some light on how to set it up.
Can you share more details on this setup and how you integrate it with oVirt? For example, are you using Ceph LUNs in a regular iSCSI storage domain, or attaching LUNs directly to VMs? Did you try our dedicated Cinder/Ceph support and compare it with the Ceph iSCSI gateway?

Nir

Hi Nir,

On 25/06/16 at 22:57, Nir Soffer wrote:
On Sat, Jun 25, 2016 at 11:47 PM, Nicolás <nicolas@devels.es> wrote:
Hi,
We're using Ceph along with an iSCSI gateway, so our storage domain is actually an iSCSI backend. So far, we have had zero issues with cca. 50 high IO rated VMs. Perhaps [1] might shed some light on how to set it up. Can you share more details on this setup and how you integrate with ovirt?
For example, are you using ceph luns in regular iscsi storage domain, or attaching luns directly to vms?
Fernando Frediani (responding to this thread) hit the nail on the head. We actually have a 3-node Ceph infrastructure, so we created a few volumes (RBD) on the Ceph side and then exported them over iSCSI, so it's oVirt that creates the LVs on top; this way we don't need to attach LUNs directly. (A rough sketch of such an export follows at the end of this message.)

Once the volumes are exported on the iSCSI side, adding an iSCSI domain in oVirt is enough to make the whole thing work.

As for experience, we have done a few tests and so far we've had zero issues:

- The main bottleneck is the iSCSI gateway interface bandwidth. In our case we have a balance-alb bond over two 1G network interfaces. Later we realized this kind of bonding is useless here because the MAC addresses won't change, so in practice at most 1G will be used. In some heavy tests (e.g., powering on 50 VMs at a time) we've hit this threshold at specific points, but it didn't affect performance significantly.
- In some additional heavy tests (powering all VMs on and off at once), we've reached a maximum of cca. 1200 IOPS. In normal conditions we don't surpass 200 IOPS, even when these 50 VMs do lots of disk operations.
- We've also done some tolerance tests, like removing one or more disks from a Ceph node, reinserting them, suddenly shutting down one node, restoring it... The only problem we've experienced is slower access to the iSCSI backend, which results in a warning in the oVirt manager, something like "Storage is taking too long to respond...", for maybe 15-20 seconds. We got no VM pauses at any time, though, nor any other significant issue.
Did you try our dedicated cinder/ceph support and compared it with ceph iscsi gateway?
We didn't, actually; in order to avoid deploying Cinder we implemented the gateway directly, as it looked easier to us.
Nir
Hope this helps. Regards.
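As a pointer for anyone wanting to try a similar layout, exporting an RBD image through a standard Linux (LIO) iSCSI target might look roughly like the sketch below. The pool, image, IQN and initiator names are invented, and the SUSE guide linked elsewhere in this thread covers a full gateway setup:

    # On the gateway host: create and map an RBD image (placeholder names, size in MB)
    rbd create iscsi-pool/ovirt-lun01 --size 102400
    rbd map iscsi-pool/ovirt-lun01    # shows up under /dev/rbd/iscsi-pool/ovirt-lun01

    # Export the mapped device with targetcli
    targetcli /backstores/block create name=ovirt-lun01 dev=/dev/rbd/iscsi-pool/ovirt-lun01
    targetcli /iscsi create iqn.2016-06.com.example:ceph-gw
    targetcli /iscsi/iqn.2016-06.com.example:ceph-gw/tpg1/luns create /backstores/block/ovirt-lun01
    targetcli /iscsi/iqn.2016-06.com.example:ceph-gw/tpg1/acls create iqn.1994-05.com.redhat:ovirt-host1

    # oVirt then sees a plain iSCSI LUN: add an iSCSI storage domain against
    # the gateway and it creates its own LVs on top of the exported LUN.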

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road, Building A, 4th floor
Ra'anana, Israel 4350109

Tel: +972 (9) 7692306 / 8272306
Email: ydary@redhat.com
IRC: ydary

On Sun, Jun 26, 2016 at 11:49 AM, Nicolás <nicolas@devels.es> wrote:
Hi Nir,
On 25/06/16 at 22:57, Nir Soffer wrote:
On Sat, Jun 25, 2016 at 11:47 PM, Nicolás <nicolas@devels.es> wrote:
Hi,
We're using Ceph along with an iSCSI gateway, so our storage domain is actually an iSCSI backend. So far, we have had zero issues with cca. 50 high IO rated VMs. Perhaps [1] might shed some light on how to set it up.
Can you share more details on this setup and how you integrate with ovirt?
For example, are you using ceph luns in regular iscsi storage domain, or attaching luns directly to vms?
Fernando Frediani (responding to this thread) hit the nail on the head. Actually we have a 3-node Ceph infrastructure, so we created a few volumes on the Ceph nodes side (RBD) and then exported them to iSCSI, so it's oVirt who creates the LVs on the top, this way we don't need to attach luns directly.
Once the volumes are exported on the iSCSI side, adding an iSCSI domain on oVirt is enough to make the whole thing work.
As for experience, we have done a few tests and so far we've had zero issues:
- The main bottleneck is the iSCSI gateway interface bandwidth. In our case we have a balance-alb bond over two 1G network interfaces. Later we realized this kind of bonding is useless here because the MAC addresses won't change, so in practice at most 1G will be used. In some heavy tests (e.g., powering on 50 VMs at a time) we've hit this threshold at specific points, but it didn't affect performance significantly.
Did you try using iSCSI bonding to allow the use of more than one path?
- In some additional heavy tests (powering all VMs on and off at once), we've reached a maximum of cca. 1200 IOPS. In normal conditions we don't surpass 200 IOPS, even when these 50 VMs do lots of disk operations.
- We've also done some tolerance tests, like removing one or more disks from a Ceph node, reinserting them, suddenly shutting down one node, restoring it... The only problem we've experienced is slower access to the iSCSI backend, which results in a warning in the oVirt manager, something like "Storage is taking too long to respond...", for maybe 15-20 seconds. We got no VM pauses at any time, though, nor any other significant issue.
Did you try our dedicated cinder/ceph support and compared it with ceph iscsi gateway?
We didn't, actually; in order to avoid deploying Cinder we implemented the gateway directly, as it looked easier to us.
Nir
Hope this helps.
Regards.

On Sun, Jun 26, 2016 at 11:49 AM, Nicolás <nicolas@devels.es> wrote:
Hi Nir,
On 25/06/16 at 22:57, Nir Soffer wrote:
On Sat, Jun 25, 2016 at 11:47 PM, Nicolás <nicolas@devels.es> wrote:
Hi,
We're using Ceph along with an iSCSI gateway, so our storage domain is actually an iSCSI backend. So far, we have had zero issues with cca. 50 high IO rated VMs. Perhaps [1] might shed some light on how to set it up.
Can you share more details on this setup and how you integrate with ovirt?
For example, are you using ceph luns in regular iscsi storage domain, or attaching luns directly to vms?
Fernando Frediani (responding to this thread) hit the nail on the head. Actually we have a 3-node Ceph infrastructure, so we created a few volumes on the Ceph nodes side (RBD) and then exported them to iSCSI, so it's oVirt who creates the LVs on the top, this way we don't need to attach luns directly.
Once the volumes are exported on the iSCSI side, adding an iSCSI domain on oVirt is enough to make the whole thing work.
As for experience, we have done a few tests and so far we've had zero issues:
- The main bottleneck is the iSCSI gateway interface bandwidth. In our case we have a balance-alb bond over two 1G network interfaces. Later we realized this kind of bonding is useless here because the MAC addresses won't change, so in practice at most 1G will be used. In some heavy tests (e.g., powering on 50 VMs at a time) we've hit this threshold at specific points, but it didn't affect performance significantly.
- In some additional heavy tests (powering all VMs on and off at once), we've reached a maximum of cca. 1200 IOPS. In normal conditions we don't surpass 200 IOPS, even when these 50 VMs do lots of disk operations.
- We've also done some tolerance tests, like removing one or more disks from a Ceph node, reinserting them, suddenly shutting down one node, restoring it... The only problem we've experienced is slower access to the iSCSI backend, which results in a warning in the oVirt manager, something like "Storage is taking too long to respond...", for maybe 15-20 seconds. We got no VM pauses at any time, though, nor any other significant issue.
This setup works, but you are not using Ceph's full potential. You are actually using iSCSI storage, so you are limited to 350 LVs per storage domain (for performance reasons).

You are also using oVirt thin provisioning instead of Ceph thin provisioning, so all your VMs depend on the SPM to extend VM disks when needed, and your VMs may pause from time to time if the SPM cannot extend the disks fast enough.

When cloning disks (e.g. creating a VM from a template), you are copying the data from Ceph to the SPM node and back to Ceph. With Cinder/Ceph, this operation happens inside the Ceph cluster and is much more efficient, possibly not copying anything.

Performance is limited by the iSCSI gateway(s) - when using native Ceph, each VM talks directly to the OSDs keeping its data, and reads and writes use multiple hosts (see the disk-attachment sketch after this message).

On the other hand, you are not limited by missing features in our current Ceph implementation (e.g. live storage migration, copying disks from other storage domains, no monitoring).

It would be interesting to compare Cinder/Ceph with your system. You can install a VM with Cinder and the rest of the components, add another pool for Cinder, and compare VMs using native Ceph and iSCSI/Ceph.

You may like to check this project providing production-ready OpenStack containers:
https://github.com/openstack/kolla

Nir
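To illustrate the native path: when a disk comes from Cinder/Ceph, the qemu process opens the image through librbd and talks to the OSDs directly. At the libvirt level the disk definition looks roughly like the generic sketch below (pool, volume, monitor names and the secret UUID are placeholders, not output from oVirt):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='rbd' name='ovirt-volumes/volume-0001'>
        <host name='mon1.example.com' port='6789'/>
        <host name='mon2.example.com' port='6789'/>
      </source>
      <auth username='cinder'>
        <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
      </auth>
      <target dev='vda' bus='virtio'/>
    </disk>

No iSCSI gateway or host LVM sits in that path, which is where the efficiency difference comes from.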

You may like to check this project providing production-ready openstack containers: https://github.com/openstack/kolla
Also, the oVirt installer can actually deploy these containers for you:
https://www.ovirt.org/develop/release-management/features/cinderglance-docke...

--
Barak Korren
bkorren@redhat.com
RHEV-CI Team

Hi,

the Cinder container has been broken for a while, since the kollaglue upstream changed the installation method, AFAIK. Also, it seems that even the latest oVirt 4.0 pulls down the "kilo" version of OpenStack, so you will need to install your own if you need a more recent one.

We are using a VM managed by oVirt itself for keystone/glance/cinder with our Ceph cluster, and it works quite well with the Mitaka version, which is the latest one (a rough cinder.conf sketch for an RBD backend follows after the quoted text). The DB is hosted outside, so that even if we lose the VM we don't lose the state, besides all the performance reasons. The installation does not use containers; the services are installed directly via puppet/Foreman.

So far we are happily using Ceph in this way. The only drawback of this setup is that if the VM is not up we cannot start machines with Ceph volumes attached, but the running machines survive without problems even if the Cinder VM is down.

Cheers,

Alessandro

On 27/06/16 09:37, Barak Korren wrote:
You may like to check this project providing production-ready openstack containers: https://github.com/openstack/kolla
Also, the oVirt installer can actually deploy these containers for you:
https://www.ovirt.org/develop/release-management/features/cinderglance-docke...
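For anyone building a similar standalone Cinder service, the RBD backend section of cinder.conf typically looks something like the sketch below; the pool, user and secret UUID are placeholders, and the option names should be checked against your Cinder release:

    [DEFAULT]
    enabled_backends = ceph

    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    # UUID of the libvirt secret holding the cinder Ceph key on the hypervisors
    rbd_secret_uuid = 00000000-0000-0000-0000-000000000000

The matching Ceph client key and libvirt secret also have to be available on the hypervisor hosts so qemu can open the volumes.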

On Mon, Jun 27, 2016 at 12:02 PM, Alessandro De Salvo <Alessandro.DeSalvo@roma1.infn.it> wrote:
Hi, the Cinder container has been broken for a while, since the kollaglue upstream changed the installation method, AFAIK. Also, it seems that even the latest oVirt 4.0 pulls down the "kilo" version of OpenStack, so you will need to install your own if you need a more recent one. We are using a VM managed by oVirt itself for keystone/glance/cinder with our Ceph cluster, and it works quite well with the Mitaka version, which is the latest one. The DB is hosted outside, so that even if we lose the VM we don't lose the state, besides all the performance reasons. The installation does not use containers; the services are installed directly via puppet/Foreman. So far we are happily using Ceph in this way. The only drawback of this setup is that if the VM is not up we cannot start machines with Ceph volumes attached, but the running machines survive without problems even if the Cinder VM is down.
Thanks for the info, Alessandro!

This seems like the best way to run Cinder/Ceph: using other storage for these VMs, so the Cinder VM does not depend on the storage it manages.

If you use highly available VMs, oVirt will make sure they are up all the time, and will migrate them to other hosts when needed.

Nir

Hi Nir,

yes indeed, we use the high-availability setup from oVirt for the Glance/Cinder VM, hosted on highly available Gluster storage. For the DB we use an SSD-backed Percona cluster. The VM itself connects to the DB cluster via haproxy, so we should have full high availability (a minimal haproxy sketch follows after the quoted text).

The problem with the VM is the first time you start the oVirt cluster, since you cannot start any VM using Ceph volumes before the Glance/Cinder VM is up. It's easy to solve, though, and even if you autostart all the machines they will automatically start in the correct order.

Cheers,

Alessandro

On 27/06/16 11:24, Nir Soffer wrote:
On Mon, Jun 27, 2016 at 12:02 PM, Alessandro De Salvo <Alessandro.DeSalvo@roma1.infn.it> wrote:
Hi, the Cinder container has been broken for a while, since the kollaglue upstream changed the installation method, AFAIK. Also, it seems that even the latest oVirt 4.0 pulls down the "kilo" version of OpenStack, so you will need to install your own if you need a more recent one. We are using a VM managed by oVirt itself for keystone/glance/cinder with our Ceph cluster, and it works quite well with the Mitaka version, which is the latest one. The DB is hosted outside, so that even if we lose the VM we don't lose the state, besides all the performance reasons. The installation does not use containers; the services are installed directly via puppet/Foreman. So far we are happily using Ceph in this way. The only drawback of this setup is that if the VM is not up we cannot start machines with Ceph volumes attached, but the running machines survive without problems even if the Cinder VM is down.

Thanks for the info, Alessandro!
This seems like the best way to run Cinder/Ceph: using other storage for these VMs, so the Cinder VM does not depend on the storage it manages.
If you use highly available VMs, oVirt will make sure they are up all the time, and will migrate them to other hosts when needed.
Nir
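As an illustration of the haproxy piece, a minimal TCP front end for a three-node Percona cluster could look like the fragment below; the addresses and server names are invented, not Alessandro's actual configuration:

    # /etc/haproxy/haproxy.cfg (fragment, placeholder addresses)
    listen percona-cluster
        bind 127.0.0.1:3306
        mode tcp
        option tcpka
        balance leastconn
        server db1 10.0.0.11:3306 check
        server db2 10.0.0.12:3306 check backup
        server db3 10.0.0.13:3306 check backup

With Galera-based clusters it is common to send writes to a single node at a time, hence the backup servers above.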

This solution looks interesting.

If I understand it correctly, you first build your Ceph pool, then export RBD to an iSCSI target, which exports it to oVirt, which then creates LVMs on top of it?

Could you share more details about your experience? It looks like a way to get Ceph + oVirt without Cinder.

Thanks

Fernando

On 25/06/2016 17:47, Nicolás wrote:
Hi,
We're using Ceph along with an iSCSI gateway, so our storage domain is actually an iSCSI backend. So far, we have had zero issues with cca. 50 high IO rated VMs. Perhaps [1] might shed some light on how to set it up.
Regards.
[1]: https://www.suse.com/documentation/ses-2/book_storage_admin/data/cha_ceph_is...

On 24/6/2016 9:28 p.m., Charles Gomes <cgomes@clearpoolgroup.com> wrote:
Hello
I've been reading lots of material about implementing oVirt with Ceph, however all of it talks about using Cinder.
Is there a way to get oVirt working with Ceph without having to implement the entire OpenStack?
I'm already using Foreman to deploy the Ceph and KVM nodes, trying to minimize the amount of moving parts. I heard something about oVirt providing a managed Cinder appliance, has anyone seen this?

Hello Charles,

The solution I came up with to solve this problem was to use RDO. I have oVirt Engine running on dedicated hardware. The best way to have oVirt Engine and RDO running on the same hardware is to build a VM on that hardware with virt-manager or virsh, using the local disk as storage (you could possibly replace the VM with Docker, but I never explored that option). I found this necessary because the oVirt Engine and RDO httpd configs didn't play well together. They could probably be made to work on the same OS instance, but it was taking much more time than I wanted to figure out how to make httpd work with both.

Once the VM is up and running, I set up the RDO repos on it and installed packstack. Use packstack to generate an answers file, then go through the answers file and set it up so that it only installs Cinder, Keystone, MariaDB, and RabbitMQ. These are the only pieces of OpenStack necessary for Cinder to work correctly (a rough sketch of the packstack steps follows below the quoted message). Once it is installed, you need to configure Cinder and Keystone the way you want, since they only come with the admin tenant, user, project, etc. I set up an oVirt user, tenant and project and configured Cinder to use my Ceph cluster/pool. It is much simpler to do than that long paragraph may make it seem at first.

I've also tested using CephFS as a POSIX storage domain in oVirt. It works, but in my experience there was at least a 25% performance decrease compared to Cinder/RBD.

Kevin

On Fri, Jun 24, 2016 at 3:23 PM, Charles Gomes <cgomes@clearpoolgroup.com> wrote:
Hello
I’ve been reading lots of material about implementing oVirt with Ceph, however all of it talks about using Cinder.
Is there a way to get oVirt working with Ceph without having to implement the entire OpenStack?
I’m already using Foreman to deploy the Ceph and KVM nodes, trying to minimize the amount of moving parts. I heard something about oVirt providing a managed Cinder appliance, has anyone seen this?
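To give an idea of the packstack step Kevin describes, the workflow is roughly the sketch below. The CONFIG_* key names are from memory and vary between RDO releases, so regenerate the answers file and verify them before relying on this:

    # Generate an answers file, trim it down, then run packstack with it
    packstack --gen-answer-file=cinder-only.txt

    # In cinder-only.txt keep Cinder (plus Keystone, MariaDB and RabbitMQ,
    # which packstack installs anyway) and disable the other services, e.g.:
    #   CONFIG_CINDER_INSTALL=y
    #   CONFIG_NOVA_INSTALL=n
    #   CONFIG_NEUTRON_INSTALL=n
    #   CONFIG_GLANCE_INSTALL=n
    #   CONFIG_HORIZON_INSTALL=n
    #   CONFIG_SWIFT_INSTALL=n
    #   CONFIG_CEILOMETER_INSTALL=n
    #   CONFIG_HEAT_INSTALL=n

    packstack --answer-file=cinder-only.txt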
participants (9)

- Alessandro De Salvo
- Barak Korren
- Charles Gomes
- Fernando Frediani
- Kevin Hrpcek
- Maor Lipchuk
- Nicolás
- Nir Soffer
- Yaniv Dary