
Hi,
having a 3-node Ceph cluster is the bare minimum to make it work properly; with fewer nodes you are stuck with replica-2 mode, which is not safe.

It's not true that Ceph is hard to configure: you can use ceph-deploy very easily, have Puppet configure it, or even run it in containers. Docker is in fact the easiest route; it really takes about 10 minutes to bring a cluster up. I've tried it both with jewel (official containers) and kraken (custom containers), and it works pretty well.

The real problem is not creating and configuring a Ceph cluster, but consuming it from oVirt, since that requires Cinder, i.e. a minimal OpenStack setup. We have this and it works pretty well, but it takes some work. For your reference, our Cinder runs in an oVirt VM backed by Gluster. I've appended a few sketches below my signature.

Cheers,

   Alessandro
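P.S. A few rough sketches in case they help. The hostnames, pool names, device paths and addresses below are placeholders, not our actual setup, so adjust everything to your environment.

On the replica point: with 3 nodes you can keep 3 copies of every object and still serve I/O with one node down. For a pool called "rbd" that would be:

    # keep 3 copies, block writes once fewer than 2 are reachable
    ceph osd pool set rbd size 3
    ceph osd pool set rbd min_size 2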
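For the Docker route, bringing up a monitor and an OSD with the ceph/daemon image looks roughly like this (from memory, so double-check the flags against the ceph-docker README; note the OSD step wipes the disk it is given):

    # monitor, one per node; MON_IP / CEPH_PUBLIC_NETWORK match your LAN
    docker run -d --net=host \
      -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph \
      -e MON_IP=192.168.0.21 -e CEPH_PUBLIC_NETWORK=192.168.0.0/24 \
      ceph/daemon mon

    # one OSD per data disk, here /dev/sdb
    docker run -d --net=host --privileged=true \
      -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph -v /dev:/dev \
      -e OSD_DEVICE=/dev/sdb \
      ceph/daemon osd

Repeat on each of the 3 nodes and the cluster is up.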
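On the Cinder side, the Ceph backend boils down to a few lines in cinder.conf (assuming a pool called "volumes" and a cephx user called "cinder"; the secret uuid is whatever you registered with libvirt on the hypervisors):

    [DEFAULT]
    enabled_backends = rbd

    [rbd]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_pool = volumes
    rbd_user = cinder
    rbd_secret_uuid = <libvirt secret uuid>

Once Cinder is up you add it to oVirt as an external (OpenStack Volume) provider and the Ceph volumes become usable as VM disks.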
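And if you go the Gluster route Yaniv suggests below, a replica-3 volume for VM storage is only a couple of commands (node1..node3 and the brick paths are placeholders):

    gluster peer probe node2
    gluster peer probe node3
    gluster volume create vmstore replica 3 \
      node1:/bricks/vmstore node2:/bricks/vmstore node3:/bricks/vmstore
    gluster volume start vmstore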
On 18 Dec 2016, at 17:07, Yaniv Kaul <ykaul@redhat.com> wrote:
On Sun, Dec 18, 2016 at 3:29 PM, rajatjpatel <rajatjpatel@gmail.com> wrote:

> Dear Team,
>
> We are using oVirt 4.0 for a POC, and I want to check what we are doing with all the oVirt gurus.
>
> We have 2 HP ProLiant DL380s with 500GB SAS, 4 x 1TB SAS disks and a 500GB SSD.
>
> What we have done is install the oVirt hypervisor on this hardware, and we have a separate physical server running the oVirt manager. For the oVirt hypervisor we use only one 500GB HDD; the rest we have kept for Ceph, so we have 3 nodes running as guests on oVirt for Ceph. My question to you all: is what I am doing right or wrong?

I think Ceph requires a lot more resources than the above. It's also a bit more challenging to configure. I would highly recommend a 3-node cluster with Gluster.
Y.

> Regards
> Rajat
>
> _______________________________________________
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users