From cgomes at clearpoolgroup.com Fri Jun 24 16:28:40 2016
From: Charles Gomes <cgomes at clearpoolgroup.com>
To: users at ovirt.org
Subject: [ovirt-users] oVirt and Ceph
Date: Fri, 24 Jun 2016 20:23:13 +0000

Hello

I've been reading lots of material about implementing oVirt with Ceph, however all of it talks about using Cinder.

Is there a way to get oVirt working with Ceph without having to implement the entire OpenStack?

I'm already using Foreman to deploy the Ceph and KVM nodes, trying to minimize the amount of moving parts. I heard something about oVirt providing a managed Cinder appliance; has anyone seen this?

From mlipchuk at redhat.com Sat Jun 25 16:12:36 2016
From: Maor Lipchuk <mlipchuk at redhat.com>
To: users at ovirt.org
Subject: Re: [ovirt-users] oVirt and Ceph
Date: Sat, 25 Jun 2016 23:12:34 +0300

Hi Charles,

Currently, oVirt communicates with Ceph only through Cinder.
If you want to avoid using Cinder, perhaps you can try CephFS and mount it as a POSIX storage domain instead.
Regarding the Cinder appliance, it is not yet implemented, though we are currently investigating this option.

Regards,
Maor

On Fri, Jun 24, 2016 at 11:23 PM, Charles Gomes wrote:
> I've been reading lots of material about implementing oVirt with Ceph,
> however all of it talks about using Cinder. [...]
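For readers wanting to try Maor's CephFS suggestion, the POSIX storage domain path is essentially a kernel CephFS mount. A minimal sketch, assuming a hypothetical monitor host "mon1" and an admin keyring already present on the hypervisors (placeholder names, not an official recipe):

    # Sanity-check that a host can mount CephFS with the kernel client;
    # oVirt issues an equivalent mount for a POSIX-compliant FS domain
    mkdir -p /mnt/cephfs-test
    mount -t ceph mon1:6789:/ /mnt/cephfs-test \
        -o name=admin,secretfile=/etc/ceph/admin.secret
    umount /mnt/cephfs-test

    # The corresponding oVirt POSIX-compliant FS storage domain fields:
    #   Path:          mon1:6789:/
    #   VFS type:      ceph
    #   Mount options: name=admin,secretfile=/etc/ceph/admin.secret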
From nicolas at devels.es Sat Jun 25 16:47:50 2016
From: Nicolás <nicolas at devels.es>
To: users at ovirt.org
Subject: Re: [ovirt-users] oVirt and Ceph
Date: Sat, 25 Jun 2016 21:47:49 +0100

Hi,

We're using Ceph along with an iSCSI gateway, so our storage domain is actually an iSCSI backend. So far, we have had zero issues with around 50 high-IO-rated VMs. Perhaps [1] might shed some light on how to set it up.

Regards.

[1]: https://www.suse.com/documentation/ses-2/book_storage_admin/data/cha_ceph_iscsi.html
From nsoffer at redhat.com Sat Jun 25 17:57:50 2016
From: Nir Soffer <nsoffer at redhat.com>
To: users at ovirt.org
Subject: Re: [ovirt-users] oVirt and Ceph
Date: Sun, 26 Jun 2016 00:57:48 +0300

On Sat, Jun 25, 2016 at 11:47 PM, Nicolás wrote:
> We're using Ceph along with an iSCSI gateway, so our storage domain is
> actually an iSCSI backend. So far, we have had zero issues with around 50
> high-IO-rated VMs. Perhaps [1] might shed some light on how to set it up.

Can you share more details on this setup and how you integrate it with oVirt?

For example, are you using Ceph LUNs in a regular iSCSI storage domain, or attaching LUNs directly to VMs?

Did you try our dedicated Cinder/Ceph support and compare it with the Ceph iSCSI gateway?

Nir

From fernando.frediani at upx.com.br Sat Jun 25 18:42:01 2016
From: Fernando Frediani <fernando.frediani at upx.com.br>
To: users at ovirt.org
Subject: Re: [ovirt-users] oVirt and Ceph
Date: Sat, 25 Jun 2016 19:42:58 -0300

This solution looks interesting.

If I understand it correctly, you first build your Ceph pool, then you export RBD to an iSCSI target, which exports it to oVirt, which in turn creates LVs on top of it?

Could you share more details about your experience? It looks like a way to get Ceph + oVirt without Cinder.

Thanks

Fernando

On 25/06/2016 17:47, Nicolás wrote:
> We're using Ceph along with an iSCSI gateway, so our storage domain is
> actually an iSCSI backend. [...]
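For anyone reproducing the setup Fernando summarizes, the gateway can be sanity-checked from an oVirt host before the iSCSI storage domain is added (oVirt runs an equivalent discovery and login when the domain is created); the portal address and IQN below are placeholders:

    # Discover targets exposed by the Ceph iSCSI gateway
    iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260

    # Optionally log in manually and confirm the LUN appears as a disk
    iscsiadm -m node -T iqn.2016-06.com.example:ceph-gw \
        -p 192.0.2.10:3260 --login
    lsblk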

From nicolas at devels.es Sun Jun 26 04:49:26 2016
From: Nicolás <nicolas at devels.es>
To: users at ovirt.org
Subject: Re: [ovirt-users] oVirt and Ceph
Date: Sun, 26 Jun 2016 09:49:28 +0100

Hi Nir,

On 25/06/16 at 22:57, Nir Soffer wrote:
> Can you share more details on this setup and how you integrate it with oVirt?
>
> For example, are you using Ceph LUNs in a regular iSCSI storage domain, or
> attaching LUNs directly to VMs?

Fernando Frediani (responding to this thread) hit the nail on the head. We have a 3-node Ceph infrastructure, so we created a few volumes on the Ceph side (RBD) and then exported them over iSCSI, so it's oVirt that creates the LVs on top; this way we don't need to attach LUNs directly.

Once the volumes are exported on the iSCSI side, adding an iSCSI domain in oVirt is enough to make the whole thing work.

As for experience, we have done a few tests and so far we've had zero issues:

* The main bottleneck is the iSCSI gateway interface bandwidth. In our case we have a balance-alb bond over two 1G network interfaces. Later we realized this kind of bonding is useless here because the MAC addresses won't change, so in practice only 1G will be used at most (see the note after this message). Making some heavy tests (i.e., powering on 50 VMs at a time) we've reached this threshold at specific points, but it didn't affect performance significantly.

* Doing some additional heavy tests (powering on and off all VMs at a time), we've reached a maximum of around 1200 IOPS. In normal conditions we don't surpass 200 IOPS, even when these 50 VMs do lots of disk operations.

* We've also done some tolerance tests, like removing one or more disks from a Ceph node, reinserting them, suddenly shutting down one node and restoring it... The only problem we've experienced is slower access to the iSCSI backend, which results in a warning in the oVirt manager, something like "Storage is taking too long to respond...", for maybe 15-20 seconds. We got no VM pauses at any time, though, nor any other significant issue.

> Did you try our dedicated Cinder/Ceph support and compare it with the Ceph
> iSCSI gateway?

Not actually; in order to avoid deploying Cinder we implemented the gateway directly, as it looked easier to us.

Hope this helps.

Regards.
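On the balance-alb limitation above: Linux bonding modes (round-robin aside, which reorders packets) keep a single TCP flow on one slave, so a lone iSCSI session stays at 1G regardless; LACP helps only with many concurrent flows, and true per-session scaling needs multipathing, which comes up later in this thread. A sketch of an LACP bond, assuming hypothetical addressing and a switch side configured for 802.3ad:

    # /etc/sysconfig/network-scripts/ifcfg-bond0 (RHEL/CentOS style)
    DEVICE=bond0
    TYPE=Bond
    BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"
    IPADDR=192.0.2.20
    PREFIX=24
    ONBOOT=yes
    # Even with 802.3ad, one TCP connection still rides a single slave;
    # spreading one initiator's traffic requires multipath instead.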

From ydary at redhat.com Sun Jun 26 08:47:47 2016
From: Yaniv Dary <ydary at redhat.com>
To: users at ovirt.org
Subject: Re: [ovirt-users] oVirt and Ceph
Date: Sun, 26 Jun 2016 15:47:06 +0300

On Sun, Jun 26, 2016 at 11:49 AM, Nicolás wrote:
> The main bottleneck is the iSCSI gateway interface bandwidth. In our case
> we have a balance-alb bond over two 1G network interfaces. Later we
> realized this kind of bonding is useless here because the MAC addresses
> won't change, so in practice only 1G will be used at most. [...]

Did you try using iSCSI bonding to allow the use of more than one path?

Yaniv Dary
Technical Product Manager, Red Hat Israel Ltd.
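For anyone following Yaniv's suggestion by hand, oVirt's iSCSI multipathing maps to open-iscsi interface bindings underneath; a rough sketch with placeholder NIC, portal, and IQN names:

    # Bind two iSCSI interfaces to two physical NICs
    iscsiadm -m iface -I iface0 -o new
    iscsiadm -m iface -I iface0 -o update -n iface.net_ifacename -v eth0
    iscsiadm -m iface -I iface1 -o new
    iscsiadm -m iface -I iface1 -o update -n iface.net_ifacename -v eth1

    # Log in to the same target through both interfaces; dm-multipath
    # then aggregates the two sessions into one multipath device
    iscsiadm -m node -T iqn.2016-06.com.example:ceph-gw -p 192.0.2.10 -I iface0 --login
    iscsiadm -m node -T iqn.2016-06.com.example:ceph-gw -p 192.0.2.10 -I iface1 --login
    multipath -ll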
From khrpcek at gmail.com Sun Jun 26 12:35:39 2016
From: Kevin Hrpcek <khrpcek at gmail.com>
To: users at ovirt.org
Subject: Re: [ovirt-users] oVirt and Ceph
Date: Sun, 26 Jun 2016 11:35:37 -0500

Hello Charles,

The solution I came up with for this problem was to use RDO. I have the oVirt engine running on dedicated hardware. The best way to have the oVirt engine and RDO on the same hardware is to build a VM on that host with virt-manager or virsh, using the local disk as storage (you could possibly replace the VM with Docker, but I never explored that option). I found this necessary because the oVirt engine and RDO httpd configs didn't play well together. They could probably be made to work on the same OS instance, but it was taking much more time than I wanted to figure out how to make httpd serve both.

Once the VM is up and running, set up the RDO repos on it and install packstack. Use packstack to generate an answers file, then go through the answers file and set it up so that it only installs Cinder, Keystone, MariaDB, and RabbitMQ. These are the only pieces of OpenStack necessary for Cinder to work correctly. Once it is installed you need to configure Cinder and Keystone the way you want, since they only come with the admin tenant, user, project, etc. I set up an ovirt user, tenant, and project, and configured Cinder to use my Ceph cluster/pool.

It is much simpler to do than that long paragraph may make it seem at first. I've also tested using CephFS as a POSIX storage domain in oVirt. It works, but in my experience there was at least a 25% performance decrease compared to Cinder/RBD.

Kevin

On Fri, Jun 24, 2016 at 3:23 PM, Charles Gomes wrote:
> Is there a way to get oVirt working with Ceph without having to implement
> the entire OpenStack? [...]
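A sketch of the packstack part of Kevin's recipe, assuming a Kilo-era RDO install and a Ceph pool named "volumes"; the answer keys and Cinder options below are common ones but should be checked against your packstack and Cinder versions:

    # Generate an answers file, then trim it down to Cinder + Keystone
    # (MariaDB and RabbitMQ/AMQP are brought in as their dependencies)
    packstack --gen-answer-file=cinder-only.txt

    # In cinder-only.txt, keep CONFIG_CINDER_INSTALL=y and disable the rest:
    #   CONFIG_NOVA_INSTALL=n
    #   CONFIG_GLANCE_INSTALL=n
    #   CONFIG_NEUTRON_INSTALL=n
    #   CONFIG_HORIZON_INSTALL=n
    #   CONFIG_SWIFT_INSTALL=n
    #   CONFIG_CEILOMETER_INSTALL=n
    #   CONFIG_NAGIOS_INSTALL=n
    packstack --answer-file=cinder-only.txt

    # /etc/cinder/cinder.conf: point Cinder at the Ceph pool
    #   [DEFAULT]
    #   enabled_backends = ceph
    #   [ceph]
    #   volume_driver = cinder.volume.drivers.rbd.RBDDriver
    #   rbd_pool = volumes
    #   rbd_user = cinder
    #   rbd_ceph_conf = /etc/ceph/ceph.conf
    #   rbd_secret_uuid = <uuid of the libvirt secret on the hosts>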
From nsoffer at redhat.com Mon Jun 27 03:05:51 2016
From: Nir Soffer
To: users at ovirt.org
Subject: Re: [ovirt-users] oVirt and Ceph
Date: Mon, 27 Jun 2016 10:05:48 +0300
In-Reply-To: 576F9718.7020605@devels.es

On Sun, Jun 26, 2016 at 11:49 AM, Nicolás wrote:
> Hi Nir,
>
> El 25/06/16 a las 22:57, Nir Soffer escribió:
>
> On Sat, Jun 25, 2016 at 11:47 PM, Nicolás wrote:
>
> Hi,
>
> We're using Ceph along with an iSCSI gateway, so our storage domain is
> actually an iSCSI backend. So far, we have had zero issues with roughly
> 50 high-I/O VMs. Perhaps [1] might shed some light on how to set it up.
>
> Can you share more details on this setup and how you integrate with
> ovirt?
>
> For example, are you using ceph luns in a regular iscsi storage domain,
> or attaching luns directly to vms?
>
> Fernando Frediani (responding to this thread) hit the nail on the head.
> We have a 3-node Ceph infrastructure, so we created a few volumes on the
> Ceph side (RBD) and then exported them over iSCSI, so it's oVirt that
> creates the LVs on top; this way we don't need to attach LUNs directly.
>
> Once the volumes are exported on the iSCSI side, adding an iSCSI domain
> in oVirt is enough to make the whole thing work.
>
> As for experience, we have done a few tests and so far we've had zero
> issues:
>
> The main bottleneck is the bandwidth of the iSCSI gateway interface. In
> our case we have a balance-alb bond over two 1G network interfaces.
> Later we realized this kind of bonding is of little use here, because
> the MAC addresses won't change, so in practice at most 1G will be used.
> In some heavy tests (i.e., powering on 50 VMs at a time) we've hit this
> threshold at specific points, but it didn't affect performance
> significantly.
>
> Doing some additional heavy tests (powering all VMs on and off at a
> time), we've reached a maximum of roughly 1200 IOPS. In normal
> conditions we don't surpass 200 IOPS, even when these 50 VMs do lots of
> disk operations.
>
> We've also done some tolerance tests, like removing one or more disks
> from a Ceph node, reinserting them, suddenly shutting down one node,
> restoring it...
>
> The only problem we've experienced is slower access to the iSCSI
> backend, which results in a warning in the oVirt manager, something like
> "Storage is taking too long to respond...", for maybe 15-20 seconds. We
> got no VM pauses at any time, though, nor any significant issue.
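(For readers following along: an RBD-to-iSCSI export like the one Nicolás
describes can be done with a plain LIO gateway, roughly as sketched below.
This is illustrative only, not his exact setup; the pool, image, and IQN
names are made up, and ACLs/authentication are omitted.)

    # On the gateway host: create and map an RBD image.
    rbd create --size 2048000 iscsi-pool/lun0   # size in MB
    rbd map iscsi-pool/lun0                     # appears as e.g. /dev/rbd0

    # Export the mapped device over iSCSI with targetcli.
    targetcli /backstores/block create name=lun0 dev=/dev/rbd0
    targetcli /iscsi create iqn.2016-06.org.example:ceph-gw
    targetcli /iscsi/iqn.2016-06.org.example:ceph-gw/tpg1/luns \
        create /backstores/block/lun0
    targetcli saveconfig

    # oVirt then consumes the target as a regular iSCSI storage domain
    # and creates its own LVs on top of the exported LUN.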
This setup works, but you are not using Ceph's full potential. You are
actually using iSCSI storage, so you are limited to 350 LVs per storage
domain (for performance reasons).

You are also using oVirt thin provisioning instead of Ceph thin
provisioning, so all your VMs depend on the SPM to extend their disks when
needed, and your VMs may pause from time to time if the SPM cannot extend
the disks fast enough.

When cloning disks (e.g. creating a VM from a template), you are copying
the data from Ceph to the SPM node and back to Ceph. With Cinder/Ceph,
this operation happens inside the Ceph cluster and is much more efficient,
possibly not copying anything at all.

Performance is limited by the iSCSI gateway(s) - when using native Ceph,
each VM talks directly to the OSDs holding its data, so reads and writes
are spread across multiple hosts.

On the other hand, you are not affected by the features still missing from
our current Ceph integration (e.g. live storage migration, copying disks
from other storage domains, monitoring).

It would be interesting to compare Cinder/Ceph with your system. You can
install a VM with Cinder and the rest of the components, add another pool
for Cinder, and compare VMs using native Ceph and iSCSI/Ceph.

You may like to check this project, providing production-ready OpenStack
containers:
https://github.com/openstack/kolla

Nir
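(If anyone wants to try the comparison Nir suggests, creating a dedicated
pool and cephx identity for Cinder is roughly the sketch below; the pool
name, placement-group count, and keyring path are illustrative.)

    # On a Ceph admin node: a dedicated pool for Cinder volumes.
    ceph osd pool create cinder-volumes 128

    # A cephx user allowed to read cluster maps and use that pool.
    ceph auth get-or-create client.cinder \
        mon 'allow r' \
        osd 'allow rwx pool=cinder-volumes' \
        -o /etc/ceph/ceph.client.cinder.keyring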
From bkorren at redhat.com Mon Jun 27 03:37:35 2016
From: Barak Korren
To: users at ovirt.org
Subject: Re: [ovirt-users] oVirt and Ceph
Date: Mon, 27 Jun 2016 10:37:33 +0300
In-Reply-To: CAMRbyyvZxBKROZKfT4caLmsoE84GWv4DvqfmxLM084MzOdDVGw@mail.gmail.com

> You may like to check this project, providing production-ready OpenStack
> containers:
> https://github.com/openstack/kolla

Also, the oVirt installer can actually deploy these containers for you:

https://www.ovirt.org/develop/release-management/features/cinderglance-docker-integration/

-- 
Barak Korren
bkorren(a)redhat.com
RHEV-CI Team

From Alessandro.DeSalvo at roma1.infn.it Mon Jun 27 05:02:46 2016
From: Alessandro De Salvo
To: users at ovirt.org
Subject: Re: [ovirt-users] oVirt and Ceph
Date: Mon, 27 Jun 2016 11:02:42 +0200
Message-ID: <019f8f75-8c28-c957-9197-a98896781b50@roma1.infn.it>
In-Reply-To: CAGJrMmrgkqyjdLzgFE6SX4oUAE7D88aLMqDUuLScMgBKB5GEiA@mail.gmail.com

Hi,
the cinder container has been broken for a while, ever since the kollaglue
images changed their installation method upstream, AFAIK.

Also, it seems that even the latest oVirt 4.0 pulls down the "kilo"
version of OpenStack, so you will need to install your own if you need a
more recent one.

We are using a VM managed by oVirt itself for Keystone/Glance/Cinder with
our Ceph cluster, and it works quite well with the Mitaka version, which
is the latest one. The DB is hosted outside the VM, so that even if we
lose the VM we don't lose the state, besides the performance benefits. The
installation does not use containers; the services are installed directly
via Puppet/Foreman.

So far we are happily using Ceph in this way. The only drawback of this
setup is that if the VM is not up we cannot start machines with Ceph
volumes attached, but the running machines survive without problems even
if the Cinder VM is down.

Cheers,

    Alessandro

Il 27/06/16 09:37, Barak Korren ha scritto:
>> You may like to check this project, providing production-ready
>> OpenStack containers:
>> https://github.com/openstack/kolla
>>
> Also, the oVirt installer can actually deploy these containers for you:
>
> https://www.ovirt.org/develop/release-management/features/cinderglance-docker-integration/
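(A minimal cinder.conf along the lines Alessandro describes, assuming a
Mitaka-era RBD driver and an external database; every value below is a
placeholder, not his actual configuration.)

    [DEFAULT]
    enabled_backends = ceph

    [database]
    # External DB, so the state survives the loss of the VM.
    connection = mysql+pymysql://cinder:CHANGEME@db.example.org/cinder

    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = cinder-volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = 00000000-0000-0000-0000-000000000000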
Nir

From nsoffer at redhat.com Mon Jun 27 05:24:50 2016
From: Nir Soffer
To: users at ovirt.org
Subject: Re: [ovirt-users] oVirt and Ceph
Date: Mon, 27 Jun 2016 12:24:48 +0300
In-Reply-To: 019f8f75-8c28-c957-9197-a98896781b50@roma1.infn.it

On Mon, Jun 27, 2016 at 12:02 PM, Alessandro De Salvo wrote:
> Hi,
> the cinder container has been broken for a while, ever since the
> kollaglue images changed their installation method upstream, AFAIK.
> Also, it seems that even the latest oVirt 4.0 pulls down the "kilo"
> version of OpenStack, so you will need to install your own if you need a
> more recent one.
> We are using a VM managed by oVirt itself for Keystone/Glance/Cinder
> with our Ceph cluster, and it works quite well with the Mitaka version,
> which is the latest one. The DB is hosted outside the VM, so that even
> if we lose the VM we don't lose the state, besides the performance
> benefits. The installation does not use containers; the services are
> installed directly via Puppet/Foreman.
> So far we are happily using Ceph in this way. The only drawback of this
> setup is that if the VM is not up we cannot start machines with Ceph
> volumes attached, but the running machines survive without problems even
> if the Cinder VM is down.

Thanks for the info Alessandro!

This seems like the best way to run Cinder/Ceph: using other storage for
these VMs, so the Cinder VM does not depend on the storage it manages.

If you use highly available VMs, oVirt will make sure they are up all the
time, and will migrate them to other hosts when needed.

Nir

From Alessandro.DeSalvo at roma1.infn.it Mon Jun 27 05:35:55 2016
From: Alessandro De Salvo
To: users at ovirt.org
Subject: Re: [ovirt-users] oVirt and Ceph
Date: Mon, 27 Jun 2016 11:35:51 +0200
In-Reply-To: CAMRbyyvG8eb9FtmQAia076cniyoQ_9x_V4xxSEPvfxSXK3MfKw@mail.gmail.com

Hi Nir,
yes indeed, we use the high-availability setup from oVirt for the
Glance/Cinder VM, hosted on highly available Gluster storage. For the DB
we use an SSD-backed Percona cluster. The VM itself connects to the DB
cluster via haproxy, so we should have full high availability.

The only problem with the VM is the first start of the oVirt cluster,
since you cannot start any VM using Ceph volumes before the Glance/Cinder
VM is up. That is easy to solve, though, and even if you autostart all the
machines they will come up in the correct order.

Cheers,

    Alessandro

Il 27/06/16 11:24, Nir Soffer ha scritto:
> Thanks for the info Alessandro!
>
> This seems like the best way to run Cinder/Ceph: using other storage for
> these VMs, so the Cinder VM does not depend on the storage it manages.
>
> If you use highly available VMs, oVirt will make sure they are up all
> the time, and will migrate them to other hosts when needed.
>
> Nir
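(For completeness: the haproxy front end to a Percona cluster that
Alessandro mentions typically looks something like the block below; the
addresses and the check user are invented, and the haproxy_check user must
exist in MySQL for the health check to work.)

    listen mysql-cluster
        bind 127.0.0.1:3306
        mode tcp
        balance leastconn
        option mysql-check user haproxy_check
        server db1 10.0.0.11:3306 check
        server db2 10.0.0.12:3306 check
        server db3 10.0.0.13:3306 check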