
Hi, we want to use a Ceph cluster as the main storage for our oVirt 4.1.x datacenter. We successfully tested using the librbd1-12.2.1-0.el7 package from the Ceph repos instead of the standard librbd1-0.94.5-2.el7 from CentOS 7 on an oVirt virtualization node. Are there any caveats when doing so? Will this work in oVirt 4.2? Thanks, Matthias

we want to use a Ceph cluster as the main storage for our oVirt 4.1.x datacenter. We successfully tested using the librbd1-12.2.1-0.el7 package from the Ceph repos instead of the standard librbd1-0.94.5-2.el7 from CentOS 7 on an oVirt virtualization node. Are there any caveats when doing so? Will this work in oVirt 4.2?
Hello Matthias. Can I ask a separate question? At the moment we are on oVirt 4.1.3.5 and a Ceph cluster at 11.2.0 (Kraken). In a few weeks I plan to expand the cluster and would like to upgrade to Ceph 12 (Luminous) for BlueStore support. So my question is: have you tested oVirt with Ceph 12?
Thanks.
-- Best regards, Konstantin Shalygin

On 2017-10-24 at 14:09, Konstantin Shalygin wrote:
we want to use a Ceph cluster as the main storage for our oVirt 4.1.x datacenter. We successfully tested using the librbd1-12.2.1-0.el7 package from the Ceph repos instead of the standard librbd1-0.94.5-2.el7 from CentOS 7 on an oVirt virtualization node. Are there any caveats when doing so? Will this work in oVirt 4.2?
Hello Matthias. Can I ask a separate question? At the moment we are on oVirt 4.1.3.5 and a Ceph cluster at 11.2.0 (Kraken). In a few weeks I plan to expand the cluster and would like to upgrade to Ceph 12 (Luminous) for BlueStore support. So my question is: have you tested oVirt with Ceph 12?
Thanks.
-- Best regards, Konstantin Shalygin
Hi Konstantin, yes, we have a Ceph 12 cluster and are using librbd1-12.2.1 on our oVirt hypervisor hosts, which we installed with CentOS 7 and the Ceph upstream repos, not oVirt Node (for exactly this purpose). Since /usr/lib64/libvirt/storage-backend/libvirt_storage_backend_rbd.so links against /lib64/librbd.so.1, our VMs with disks from the Cinder storage domain are using Ceph 12 all the way. Are you also using a newer librbd1? Regards, Matthias
-- Matthias Leopold, IT Systems & Communications, Medizinische Universität Wien, Spitalgasse 23 / BT 88 / Ebene 00, A-1090 Wien, Tel: +43 1 40160-21241, Fax: +43 1 40160-921200
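A quick way to confirm which librbd the dynamic linker actually resolves on a hypervisor host (the same library the libvirt RBD backend loads) is sketched below. It only relies on the C function rbd_version() from librbd and Python's ctypes, and it reports the librbd API version rather than the Ceph release string, so the exact numbers differ between releases.

    import ctypes

    # Hedged sketch: ask the librbd.so.1 that the dynamic linker resolves
    # (the one libvirt_storage_backend_rbd.so links against) for its version.
    librbd = ctypes.CDLL("librbd.so.1")
    major, minor, extra = ctypes.c_int(), ctypes.c_int(), ctypes.c_int()
    librbd.rbd_version(ctypes.byref(major), ctypes.byref(minor), ctypes.byref(extra))
    print("librbd API version %d.%d.%d" % (major.value, minor.value, extra.value))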

On 10/24/2017 07:26 PM, Matthias Leopold wrote:
yes, we have a Ceph 12 cluster and are using librbd1-12.2.1 on our oVirt hypervisor hosts, which we installed with CentOS 7 and the Ceph upstream repos, not oVirt Node (for exactly this purpose).
On our oVirt hypervisor hosts we use librbd1-0.94.5-1.el7.x86_64.
Since /usr/lib64/libvirt/storage-backend/libvirt_storage_backend_rbd.so links against /lib64/librbd.so.1, our VMs with disks from the Cinder storage domain are using Ceph 12 all the way.
Our OpenStack Cinder is openstack-cinder-10.0.0-1.el7.noarch with librbd1-10.2.3-0.el7.x86_64. What version of Cinder should I have to work with Ceph 12? Or is it enough to just upgrade python-rbd/librados/librbd1/etc.?
Are you also using a newer librbd1?
Not for now, as you can see. I opened "ovirt-users" for my questions about Ceph 12 and saw your fresh message. I think you are the first to use Ceph 12 with oVirt.
-- Best regards, Konstantin Shalygin

On 2017-10-24 at 15:11, Konstantin Shalygin wrote:
On 10/24/2017 07:26 PM, Matthias Leopold wrote:
yes, we have a Ceph 12 cluster and are using librbd1-12.2.1 on our oVirt hypervisor hosts, which we installed with CentOS 7 and the Ceph upstream repos, not oVirt Node (for exactly this purpose). On our oVirt hypervisor hosts we use librbd1-0.94.5-1.el7.x86_64. Since /usr/lib64/libvirt/storage-backend/libvirt_storage_backend_rbd.so links against /lib64/librbd.so.1, our VMs with disks from the Cinder storage domain are using Ceph 12 all the way. Our OpenStack Cinder is openstack-cinder-10.0.0-1.el7.noarch with librbd1-10.2.3-0.el7.x86_64. What version of Cinder should I have to work with Ceph 12? Or is it enough to just upgrade python-rbd/librados/librbd1/etc.?
I'll talk to my colleague, who is the Ceph expert, about this tomorrow. Regards, Matthias

On 2017-10-24 at 15:11, Konstantin Shalygin wrote:
On 10/24/2017 07:26 PM, Matthias Leopold wrote:
yes, we have a Ceph 12 cluster and are using librbd1-12.2.1 on our oVirt hypervisor hosts, which we installed with CentOS 7 and the Ceph upstream repos, not oVirt Node (for exactly this purpose). On our oVirt hypervisor hosts we use librbd1-0.94.5-1.el7.x86_64. Since /usr/lib64/libvirt/storage-backend/libvirt_storage_backend_rbd.so links against /lib64/librbd.so.1, our VMs with disks from the Cinder storage domain are using Ceph 12 all the way. Our OpenStack Cinder is openstack-cinder-10.0.0-1.el7.noarch with librbd1-10.2.3-0.el7.x86_64.
We're also using Cinder from the OpenStack Ocata release. The point is:
a) we didn't upgrade, but started from scratch with Ceph 12
b) we haven't tested all of the new features in Ceph 12 (e.g. EC pools for RBD devices) in connection with Cinder yet
Matthias
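For reference, a minimal sketch of what that feature looks like on the librbd side: with Ceph 12 an RBD image can keep its metadata in a replicated pool while its data objects go to an erasure-coded pool with overwrites enabled. The pool and image names below are placeholders, and it assumes the Luminous-era python-rbd bindings expose the data_pool keyword of RBD.create(); Cinder would need its own support on top of this.

    import rados
    import rbd

    # Hedged sketch (Luminous python-rbd assumed): create a 10 GiB RBD image
    # whose data objects live in an EC pool while metadata stays replicated.
    # 'rbd_meta' and 'rbd_ec_data' are placeholder pool names.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd_meta')              # replicated pool
        rbd.RBD().create(ioctx, 'test-ec-image', 10 * 1024 ** 3,
                         data_pool='rbd_ec_data')           # EC pool with overwrites enabled
        ioctx.close()
    finally:
        cluster.shutdown()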

On 10/25/2017 03:30 PM, Matthias Leopold wrote:
we're also using Cinder from the OpenStack Ocata release. The point is:
a) we didn't upgrade, but started from scratch with Ceph 12
b) we haven't tested all of the new features in Ceph 12 (e.g. EC pools for RBD devices) in connection with Cinder yet
Thanks. We use EC pools behind a replicated cache tier - before Ceph 12 that was the only way to use EC with RBD. We have been running Ceph with oVirt in production for half a year. It is the best storage experience; the only fault you can find is that it is impossible to move images between pools - only manual migration with qemu-img/rados, or cp/rsync inside the VM.
-- Best regards, Konstantin Shalygin
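Since there is no supported move operation, the manual route boils down to an offline copy plus bookkeeping. A rough sketch using the python-rbd bindings follows (equivalent in spirit to qemu-img convert or rbd cp); the pool and image names are placeholders, the source image must not be in active use, snapshots are not carried over, and any Cinder volume records would still have to be adjusted separately.

    import rados
    import rbd

    # Hedged sketch: offline copy of an RBD image from one pool to another.
    # Pool/image names are placeholders; detach or stop the VM first.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        src_ioctx = cluster.open_ioctx('old_pool')
        dst_ioctx = cluster.open_ioctx('new_pool')
        with rbd.Image(src_ioctx, 'volume-1234') as img:
            img.copy(dst_ioctx, 'volume-1234')   # flat copy, snapshots not preserved
        src_ioctx.close()
        dst_ioctx.close()
    finally:
        cluster.shutdown()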

we're also using Cinder from the OpenStack Ocata release. The point is:
a) we didn't upgrade, but started from scratch with Ceph 12
b) we haven't tested all of the new features in Ceph 12 (e.g. EC pools for RBD devices) in connection with Cinder yet
We have been live on librbd1-12.2.1 for a week. All works okay. I upgraded Ceph from 11.2.0 to 11.2.1 - not to Luminous, because it seems 12.2.1 is only stable when the cluster was started on Luminous (http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-November/022522.htm...).
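Running mixed versions like that (a 12.2.1 client against an 11.2.x cluster) is exactly the setup described earlier in this thread, and it helps to be able to see at a glance which versions are actually in play. A small sketch using the python-rados bindings is below; it assumes a readable /etc/ceph/ceph.conf and a valid client keyring on the host, and only uses Rados.version() and the monitors' "version" command.

    import json
    import rados

    # Hedged sketch: print the local librados version and the version the
    # monitors report, to spot client/cluster version skew.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        v = cluster.version()
        print("client librados: %d.%d.%d" % (v.major, v.minor, v.extra))
        ret, outbuf, errs = cluster.mon_command(json.dumps({"prefix": "version"}), b'')
        if ret == 0:
            print("cluster (mon) version: %s" % outbuf.decode().strip())
    finally:
        cluster.shutdown()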
Participants (2):
- Konstantin Shalygin
- Matthias Leopold