From: Matthias Leopold <matthias.leopold at meduniwien.ac.at>
To: users at ovirt.org
Subject: [ovirt-users] using oVirt with newer librbd1
Date: Mon, 23 Oct 2017 14:22:37 +0200

Hi,

we want to use a Ceph cluster as the main storage for our oVirt 4.1.x
datacenter. We successfully tested using the librbd1-12.2.1-0.el7 package
from the Ceph repos instead of the standard librbd1-0.94.5-2.el7 from
CentOS 7 in an oVirt virtualization node. Are there any caveats when doing
so? Will this work in oVirt 4.2?

thx
matthias
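For reference, such a package swap on a CentOS 7 host amounts to roughly the
following. This is only a sketch: the repo file name and baseurl below are
assumptions (check download.ceph.com for your release), not something quoted
from this thread.

  # Enable the upstream Ceph Luminous repository (hypothetical repo file,
  # verify the baseurl/gpgkey against the official Ceph documentation).
  cat > /etc/yum.repos.d/ceph-luminous.repo <<'EOF'
  [ceph-luminous]
  name=Ceph Luminous packages for x86_64
  baseurl=https://download.ceph.com/rpm-luminous/el7/x86_64
  enabled=1
  gpgcheck=1
  gpgkey=https://download.ceph.com/keys/release.asc
  EOF

  # Pull the newer client libraries over the CentOS 7 base versions.
  yum upgrade -y librados2 librbd1

  # Verify what is installed afterwards.
  rpm -q librados2 librbd1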

From: Konstantin Shalygin <k0ste at k0ste.ru>
To: users at ovirt.org
Subject: Re: [ovirt-users] using oVirt with newer librbd1
Date: Tue, 24 Oct 2017 19:09:32 +0700

> we want to use a Ceph cluster as the main storage for our oVirt 4.1.x
> datacenter. We successfully tested using the librbd1-12.2.1-0.el7 package
> from the Ceph repos instead of the standard librbd1-0.94.5-2.el7 from
> CentOS 7 in an oVirt virtualization node. Are there any caveats when
> doing so? Will this work in oVirt 4.2?

Hello Matthias. Can I ask a separate question?
At this time we are at oVirt 4.1.3.5 and Ceph cluster 11.2.0 (Kraken). In a
few weeks I plan to expand the cluster and I would like to upgrade to
Ceph 12 (Luminous), for BlueStore support.
So my question is: have you tested oVirt with Ceph 12?


Thanks.
--
Best regards,
Konstantin Shalygin

From: Matthias Leopold <matthias.leopold at meduniwien.ac.at>
To: users at ovirt.org
Subject: Re: [ovirt-users] using oVirt with newer librbd1
Date: Tue, 24 Oct 2017 14:26:37 +0200

On 2017-10-24 14:09, Konstantin Shalygin wrote:
> At this time we are at oVirt 4.1.3.5 and Ceph cluster 11.2.0 (Kraken). In a
> few weeks I plan to expand the cluster and I would like to upgrade to
> Ceph 12 (Luminous), for BlueStore support.
> So my question is: have you tested oVirt with Ceph 12?

Hi Konstantin,

yes, we have a Ceph 12 cluster and are using librbd1-12.2.1 on oVirt
hypervisor hosts, which we installed with CentOS 7 and the Ceph upstream
repos, not oVirt Node (for this exact purpose). Since
/usr/lib64/libvirt/storage-backend/libvirt_storage_backend_rbd.so is using
/lib64/librbd.so.1, our VMs with disks from the Cinder storage domain are
using Ceph 12 all the way.

Are you also using a newer librbd1?

Regards
Matthias

--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 / Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200
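A quick sanity check for this on a hypervisor host could look like the
following sketch; it only uses the path mentioned above plus standard
rpm/ldd tooling.

  # Which librbd/librados does the libvirt RBD storage backend resolve?
  ldd /usr/lib64/libvirt/storage-backend/libvirt_storage_backend_rbd.so \
      | grep -E 'librbd|librados'

  # Which package provides that library, and at what version?
  rpm -q librbd1
  rpm -qf /usr/lib64/librbd.so.1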

From: Konstantin Shalygin <k0ste at k0ste.ru>
To: users at ovirt.org
Subject: Re: [ovirt-users] using oVirt with newer librbd1
Date: Tue, 24 Oct 2017 20:11:26 +0700

On 10/24/2017 07:26 PM, Matthias Leopold wrote:
> yes, we have a Ceph 12 cluster and are using librbd1-12.2.1 on oVirt
> hypervisor hosts, which we installed with CentOS 7 and the Ceph upstream
> repos, not oVirt Node (for this exact purpose).

On the oVirt hypervisor hosts we use librbd1-0.94.5-1.el7.x86_64.

> Since /usr/lib64/libvirt/storage-backend/libvirt_storage_backend_rbd.so
> is using /lib64/librbd.so.1, our VMs with disks from the Cinder storage
> domain are using Ceph 12 all the way.

Our OpenStack Cinder is openstack-cinder-10.0.0-1.el7.noarch with
librbd1-10.2.3-0.el7.x86_64.
What version of Cinder should I have to work with Ceph 12? Or should I just
upgrade python-rbd/librados/librbd1/etc.?

> Are you also using a newer librbd1?

Not for now, as you can see. I opened "ovirt-users" to ask my questions
about Ceph 12 and saw your fresh message. I think you are the first to use
Ceph 12 with oVirt.

--
Best regards,
Konstantin Shalygin
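Before deciding on an upgrade path, it can help to see what the Cinder node
is currently using. A minimal sketch (package names are the usual EL7 /
upstream Ceph repo ones; treat this as an illustration, not a recipe from
this thread):

  # On the Cinder node: current client-side Ceph packages.
  rpm -q openstack-cinder python-rbd python-rados librbd1 librados2

  # With the upstream Ceph (e.g. Luminous) repo enabled, this pulls newer
  # client libraries without touching Cinder itself.
  yum upgrade -y librbd1 librados2 python-rbd python-rados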

From: Matthias Leopold <matthias.leopold at meduniwien.ac.at>
To: users at ovirt.org
Subject: Re: [ovirt-users] using oVirt with newer librbd1
Date: Tue, 24 Oct 2017 17:04:35 +0200

On 2017-10-24 15:11, Konstantin Shalygin wrote:
> Our OpenStack Cinder is openstack-cinder-10.0.0-1.el7.noarch with
> librbd1-10.2.3-0.el7.x86_64.
> What version of Cinder should I have to work with Ceph 12? Or should I
> just upgrade python-rbd/librados/librbd1/etc.?

I'll talk to my colleague, who is the Ceph expert, about this tomorrow.

Regards
Matthias


From: Matthias Leopold <matthias.leopold at meduniwien.ac.at>
To: users at ovirt.org
Subject: Re: [ovirt-users] using oVirt with newer librbd1
Date: Wed, 25 Oct 2017 10:30:44 +0200

On 2017-10-24 15:11, Konstantin Shalygin wrote:
> Our OpenStack Cinder is openstack-cinder-10.0.0-1.el7.noarch with
> librbd1-10.2.3-0.el7.x86_64.
> What version of Cinder should I have to work with Ceph 12? Or should I
> just upgrade python-rbd/librados/librbd1/etc.?

We're also using Cinder from the OpenStack Ocata release.

The point is:
a) we didn't upgrade, but started from scratch with Ceph 12
b) we didn't test all of the new features in Ceph 12 (e.g. EC pools for
RBD devices) in connection with Cinder yet

matthias
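For context, the Cinder side of such a setup is just the standard RBD
backend configuration. A minimal sketch; the section name, pool, user and
secret UUID below are invented placeholders, not values from this thread:

  # Append a basic RBD backend to cinder.conf (placeholder values);
  # also make sure enabled_backends in [DEFAULT] includes "ceph".
  cat >> /etc/cinder/cinder.conf <<'EOF'
  [ceph]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  volume_backend_name = ceph
  rbd_pool = volumes
  rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_user = cinder
  rbd_secret_uuid = 00000000-0000-0000-0000-000000000000
  EOF

  systemctl restart openstack-cinder-volume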

From: Konstantin Shalygin <k0ste at k0ste.ru>
To: users at ovirt.org
Subject: Re: [ovirt-users] using oVirt with newer librbd1
Date: Wed, 25 Oct 2017 18:45:04 +0700

On 10/25/2017 03:30 PM, Matthias Leopold wrote:
> We're also using Cinder from the OpenStack Ocata release.
>
> The point is:
> a) we didn't upgrade, but started from scratch with Ceph 12
> b) we didn't test all of the new features in Ceph 12 (e.g. EC pools for
> RBD devices) in connection with Cinder yet

Thanks. We use EC pools with a replicated cache pool (cache tiering) - the only way to use EC with RBD before Ceph 12.
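Since Ceph 12 (Luminous), RBD data can sit directly on an EC pool via
EC overwrites, without a cache tier. A rough sketch of that newer approach
(pool names, PG counts and the image size are placeholders; allow_ec_overwrites
requires BlueStore OSDs):

  # Luminous and later: EC pool for data, small replicated pool for metadata.
  ceph osd pool create rbd-ec 128 128 erasure
  ceph osd pool set rbd-ec allow_ec_overwrites true
  ceph osd pool create rbd-meta 64 64 replicated
  ceph osd pool application enable rbd-ec rbd
  ceph osd pool application enable rbd-meta rbd

  # Image metadata goes to the replicated pool, data to the EC pool.
  rbd create --size 10G --data-pool rbd-ec rbd-meta/test-image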
We have been running Ceph with oVirt in production for half a year. The best storage experience; the only fault I can find is that it is impossible to move images between pools. Only manual migration with qemu-img/rados or cp/rsync inside the VM.
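The manual route mentioned above looks roughly like this (a sketch with
placeholder pool/image names; the VM should be shut down, and the copied
volume still has to be re-registered on the Cinder/oVirt side afterwards):

  # Copy an RBD image from one pool to another with qemu-img.
  qemu-img convert -p -f raw -O raw \
      rbd:old-pool/vm-disk-0001:id=cinder:conf=/etc/ceph/ceph.conf \
      rbd:new-pool/vm-disk-0001:id=cinder:conf=/etc/ceph/ceph.conf

  # Alternative: stream the image with the rbd tool.
  rbd export old-pool/vm-disk-0001 - | rbd import - new-pool/vm-disk-0001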
--
Best regards,
Konstantin Shalygin

From: Konstantin Shalygin <k0ste at k0ste.ru>
To: users at ovirt.org
Subject: Re: [ovirt-users] using oVirt with newer librbd1
Date: Sat, 18 Nov 2017 21:25:45 +0700

> We're also using Cinder from the OpenStack Ocata release.
>
> The point is:
> a) we didn't upgrade, but started from scratch with Ceph 12
> b) we didn't test all of the new features in Ceph 12 (e.g. EC pools for
> RBD devices) in connection with Cinder yet

We have been live on librbd1-12.2.1 for a week. All works okay.

I upgraded Ceph from 11.2.0 to 11.2.1. Not Luminous, because it seems 12.2.1
is only stable when the cluster was started from Luminous
(http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-November/022522.html).