
Dear Team,

We are using oVirt 4.0 for a POC, and I would like to check what we are doing with all the oVirt gurus.

We have 2 HP ProLiant DL380 servers, each with a 500GB SAS disk, 4 x 1TB SAS disks, and a 500GB SSD. What we have done is install the oVirt hypervisor on this hardware, and we have a separate physical server where we run the oVirt manager. For the oVirt hypervisor we are using only one 500GB HDD; the rest of the disks we have kept for Ceph, so we have 3 nodes running as guests on oVirt for Ceph.

My question to you all: is what I am doing right or wrong?

Regards
Rajat

I think Ceph requires a lot more resources than above. It's also a bit more challenging to configure. I would highly recommend a 3-node cluster with Gluster. Y.
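As a rough sketch of the 3-node Gluster route recommended here (the hostnames, brick paths and volume name below are placeholders, not details from this thread), a replica-3 volume for an oVirt data domain can be created along these lines:

  gluster peer probe node2
  gluster peer probe node3
  # replica 3 keeps a full copy of every VM image on each of the three nodes
  gluster volume create vmstore replica 3 \
      node1:/gluster/bricks/vmstore node2:/gluster/bricks/vmstore node3:/gluster/bricks/vmstore
  # apply the virt tuning profile shipped with GlusterFS for VM workloads
  gluster volume set vmstore group virt
  gluster volume start vmstore

The volume is then attached in the engine as a GlusterFS storage domain.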

Hi,
having a 3-node ceph cluster is the bare minimum you can have to make it work, unless you want to have just a replica-2 mode, which is not safe.
It's not true that ceph is not easy to configure: you can very easily use ceph-deploy, have Puppet configure it, or even run it in containers. Using Docker is in fact the easiest solution; it really takes 10 minutes to bring a cluster up. I've tried it both with Jewel (official containers) and Kraken (custom containers), and it works pretty well.
The real problem is not creating and configuring a ceph cluster, but using it from oVirt, as that requires Cinder, i.e. a minimal setup of OpenStack. We have it and it's working pretty well, but it requires some work. For your reference, we have Cinder running on an oVirt VM using Gluster.
Cheers,

   Alessandro
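For the ceph-deploy route mentioned above, a minimal three-node bootstrap looks roughly like the following. The hostnames and the spare disk device are placeholders, and the exact osd sub-command syntax varies between ceph-deploy releases, so treat this as a sketch rather than a recipe:

  # from an admin host with passwordless ssh to the three nodes
  ceph-deploy new ceph1 ceph2 ceph3        # write an initial ceph.conf listing the three monitors
  ceph-deploy install ceph1 ceph2 ceph3    # install the ceph packages on each node
  ceph-deploy mon create-initial           # bring up the monitors and gather the keys
  ceph-deploy osd create ceph1:/dev/sdb ceph2:/dev/sdb ceph3:/dev/sdb   # one OSD per spare disk
  ceph-deploy admin ceph1 ceph2 ceph3      # push ceph.conf and the admin keyring
  ceph -s                                  # check that the cluster reaches HEALTH_OK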

Great, thanks! Alessandro ++ Yaniv ++

What I want is to use around 4 TB of SAS disk for my oVirt setup (which is going to be RHV 4.0.5 once the POC is 100% successful; in fact, all products will be Red Hat).

I have done a lot of DuckDuckGo searching for all these solutions and used a lot of references from ovirt.org and access.redhat.com for setting up the oVirt engine and hypervisors.

We don't mind having more guests running and creating Ceph block storage, which will be presented to oVirt as storage. Gluster is not in use right now because we will have databases running on the guests.

Regards
Rajat
--
Sent from my Cell Phone - excuse the typos & auto incorrect

In fact, after reading a lot of KB articles, I was thinking of running an all-in-one OpenStack and using Cinder as block storage.

Regards
Rajat
-- Sent from my Cell Phone - excuse the typos & auto incorrect
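For reference, the Cinder side of that idea usually comes down to an RBD backend section in cinder.conf along these lines. The pool, user and secret UUID below are placeholders (assuming a dedicated "volumes" pool and a "client.cinder" cephx user), not values from this thread:

  [DEFAULT]
  enabled_backends = ceph

  [ceph]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_pool = volumes
  rbd_user = cinder
  # UUID of the libvirt secret holding the client.cinder key on the hypervisors
  rbd_secret_uuid = 00000000-0000-0000-0000-000000000000

The engine then consumes it by adding the Cinder endpoint as an external OpenStack volume provider.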

Hi Rajat,
sorry, but I do not really have a clear picture of your actual setup, can you please explain a bit more? In particular:

1) What do you mean by using 4TB for oVirt? On which machines, and how do you make it available to oVirt?

2) How do you plan to use Ceph with oVirt?

I guess we can give more help if you clarify those points.
Thanks,

   Alessandro

Hi Alessandro,

Right now I have 2 physical servers where I host oVirt. These are HP ProLiant DL380 machines, each with 1 x 500GB SAS disk, 4 x 1TB SAS disks, and 1 x 500GB SSD. So right now I use only one disk, the 500GB SAS one, for oVirt to run on both servers; the rest are not in use. At present I am using NFS, coming from the mapper, as oVirt storage; going forward we would like to use all these disks hyper-converged for oVirt. On the Red Hat side I could see there is a KB article for doing this with Gluster, but we are looking at Ceph because of its performance and scalability.

[image: Screenshot from 2016-12-18 21-03-21.png]

Regards
Rajat

Regards,
Rajat Patel

http://studyhat.blogspot.com
FIRST THEY IGNORE YOU...
THEN THEY LAUGH AT YOU...
THEN THEY FIGHT YOU...
THEN YOU WIN...

Hi,
oh, so you have only 2 physical servers? I had understood there were 3! Well, in this case ceph would not work very well: too few resources and too little redundancy. You could try a replica 2, but it's not safe. A replica 3 could be forced, but you would end up with one server holding 2 replicas, which is dangerous/useless.
Okay, so you use NFS as the storage domain, but in your setup HA is not guaranteed: if a physical machine goes down and it's the one where the storage domain resides, you are lost. Why not use Gluster instead of NFS for the oVirt disks? You can still reserve a small Gluster space for the non-Ceph machines (for example a Cinder VM) and Ceph for the rest. Where do you have your Cinder running?
Cheers,

   Alessandro
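To make the replica point concrete, the pool sizing described above is set with commands like these (the pool name and placement-group count are placeholders). With only two hosts and the default host-level failure domain, a size-3 pool either stays degraded or has to double up copies on one server, which is exactly the danger pointed out here:

  ceph osd pool create volumes 128          # 128 placement groups, sized for a small cluster
  ceph osd pool set volumes size 3          # keep three copies of every object
  ceph osd pool set volumes min_size 2      # stay writable as long as two copies are up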

Alessandro,

Right now I don't have Cinder running in my setup. In case Ceph doesn't work out, I will get one VM running an all-in-one OpenStack, connect all these disks to that OpenStack, and using Cinder I can present the storage to my oVirt.

At the same time, I am not finding a case study for this.

Regards
Rajat

Regards,
Rajat Patel

http://studyhat.blogspot.com
FIRST THEY IGNORE YOU...
THEN THEY LAUGH AT YOU...
THEN THEY FIGHT YOU...
THEN YOU WIN...

--Apple-Mail-BEC7796A-D99C-4DBE-B890-89AF12E27C9A Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable Hi Rajat, OK, I see. Well, so just consider that ceph will not work at best in your se= tup, unless you add at least a physical machine. Same is true for ovirt if y= ou are only using native NFS, as you loose a real HA. Having said this, of course you choose what's best for your site or affordab= le, but your setup looks quite fragile to me. Happy to help more if you need= . Regards, Alessandro
Il giorno 18 dic 2016, alle ore 18:22, rajatjpatel <rajatjpatel@gmail.com>= ha scritto: =20 Alessandro, =20 Right now I dont have cinder running in my setup in case if ceph don't wor= k then I have get one vm running open stack all in one and have all these di= sk connect my open stack using cinder I can present storage to my ovirt. =20 At the same time I not getting case study for the same. =20 Regards Rajat =20 Hi =20 =20 Regards, Rajat Patel =20 http://studyhat.blogspot.com FIRST THEY IGNORE YOU... THEN THEY LAUGH AT YOU... THEN THEY FIGHT YOU... THEN YOU WIN... =20 =20
On Sun, Dec 18, 2016 at 9:17 PM, Alessandro De Salvo <Alessandro.DeSalvo@= roma1.infn.it> wrote: Hi, oh, so you have only 2 physical servers? I've understood they were 3! Wel= l, in this case ceph would not work very well, too few resources and redunda= ncy. You could try a replica 2, but it's not safe. Having a replica 3 could b= e forced, but you would end up with a server with 2 replicas, which is dange= rous/useless. Okay, so you use nfs as storage domain, but in your setup the HA is not g= uaranteed: if a physical machine goes down and it's the one where the storag= e domain resides you are lost. Why not using gluster instead of nfs for the o= virt disks? You can still reserve a small gluster space for the non-ceph mac= hines (for example a cinder VM) and ceph for the rest. Where do you have you= r cinder running? Cheers, =20 Alessandro =20
Il giorno 18 dic 2016, alle ore 18:05, rajatjpatel <rajatjpatel@gmail.co= m> ha scritto: =20 Hi Alessandro, =20 Right now I have 2 physical server where I have host ovirt these are HP p= roliant dl 380 each server 1*500GB SAS & 1TB *4 SAS Disk and 1*500GB SSD. S= o right now I have use only one disk which 500GB of SAS for my ovirt to run o= n both server. rest are not in use. At present I am using NFS which coming f= rom mapper to ovirt as storage, go forward we like to use all these disk as = hyper-converged for ovirt. RH I could see there is KB for using gluster. Bu= t we are looking for Ceph bcoz best pref romance and scale. =20 <Screenshot from 2016-12-18 21-03-21.png> Regards Rajat =20 Hi =20 =20 Regards, Rajat Patel =20 http://studyhat.blogspot.com FIRST THEY IGNORE YOU... THEN THEY LAUGH AT YOU... THEN THEY FIGHT YOU... THEN YOU WIN... =20 =20
On Sun, Dec 18, 2016 at 8:49 PM, Alessandro De Salvo <Alessandro.DeSalv= o@roma1.infn.it> wrote: Hi Rajat, sorry but I do not really have a clear picture of your actual setup, ca= n you please explain a bit more? In particular: =20 1) what to you mean by using 4TB for ovirt? In which machines and how d= o you make it available to ovirt? =20 2) how do you plan to use ceph with ovirt? =20 I guess we can give more help if you clarify those points. Thanks, =20 Alessandro=20 =20
Il giorno 18 dic 2016, alle ore 17:33, rajatjpatel <rajatjpatel@gmail.= com> ha scritto: =20 Great, thanks! Alessandro ++ Yaniv ++=20 =20 What I want to use around 4 TB of SAS disk for my Ovirt (which going t= o be RHV4.0.5 once POC get 100% successful, in fact all product will be RH )=
=20 I had done so much duckduckgo for all these solution and use lot of re= ference from ovirt.org & access.redhat.com for setting up a Ovirt engine and= hyp. =20 We dont mind having more guest running and creating ceph block storage= and which will be presented to ovirt as storage. Gluster is not is use righ= t now bcoz we have DB will be running on guest. =20 Regard Rajat=20 =20
On Sun, Dec 18, 2016 at 8:21 PM Alessandro De Salvo <Alessandro.DeSal= vo@roma1.infn.it> wrote: Hi, having a 3-node ceph cluster is the bare minimum you can have to make= it working, unless you want to have just a replica-2 mode, which is not saf= e. It's not true that ceph is not easy to configure, you might use very e= asily ceph-deploy, have puppet configuring it or even run it in containers. U= sing docker is in fact the easiest solution, it really requires 10 minutes t= o make a cluster up. I've tried it both with jewel (official containers) and= kraken (custom containers), and it works pretty well. The real problem is not creating and configuring a ceph cluster, but u= sing it from ovirt, as it requires cinder, i.e. a minimal setup of openstack= . We have it and it's working pretty well, but it requires some work. For yo= ur reference we have cinder running on an ovirt VM using gluster. Cheers, =20 Alessandro=20 =20 > Il giorno 18 dic 2016, alle ore 17:07, Yaniv Kaul <ykaul@redhat.com>= ha scritto: >=20 >=20 >=20 > On Sun, Dec 18, 2016 at 3:29 PM, rajatjpatel <rajatjpatel@gmail.com>= wrote: > =E2=80=8BDear Team, >=20 > We are using Ovirt 4.0 for POC what we are doing I want to check wit= h all Guru's Ovirt. >=20 > We have 2 hp proliant dl 380 with 500GB SAS & 1TB *4 SAS Disk and 50= 0GB SSD. >=20 > Waht we are done we have install ovirt hyp on these h/w and we have p= hysical server where we are running our manager for ovirt. For ovirt hyp we a= re using only one 500GB of one HDD rest we have kept for ceph, so we have 3 n= ode as guest running on ovirt and for ceph. My question you all is what I am= doing is right or wrong. >=20 > I think Ceph requires a lot more resources than above. It's also a b= it more challenging to configure. I would highly recommend a 3-node cluster w= ith Gluster. > Y. > =20 >=20 > Regards > Rajat=E2=80=8B >=20 >=20 > _______________________________________________ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users >=20 >=20 > _______________________________________________ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users =20 --=20 Sent from my Cell Phone - excuse the typos & auto incorrect =20 =20 =20
<font face=3D"tahoma, sans-serif" size=3D"4" style=3D"background-color:rgb(= 243,243,243)" color=3D"#0000ff"><br></font></div><div><font face=3D"tahoma, s= ans-serif" size=3D"4" style=3D"background-color:rgb(243,243,243)" color=3D"#= 0000ff"><br></font></div>Regards,<br>Rajat Patel<br><br><a href=3D"http://st= udyhat.blogspot.com/" target=3D"_blank">http://studyhat.blogspot.com</a><br>= FIRST THEY IGNORE YOU...<br>THEN THEY LAUGH AT YOU...<br>THEN THEY FIGHT YOU= ...<br>THEN YOU WIN...</font><br><br></div></div></div> <br><div class=3D"gmail_quote">On Sun, Dec 18, 2016 at 9:17 PM, Alessandro D= e Salvo <span dir=3D"ltr"><<a href=3D"mailto:Alessandro.DeSalvo@roma1.inf= n.it" target=3D"_blank">Alessandro.DeSalvo@roma1.infn.it</a>></span> wrot= e:<br><blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-le= ft:1px #ccc solid;padding-left:1ex"><div dir=3D"auto"><div></div><div>Hi,</d= iv><div>oh, so you have only 2 physical servers? I've understood they were 3= ! Well, in this case ceph would not work very well, too few resources and re= dundancy. You could try a replica 2, but it's not safe. Having a replica 3 c= ould be forced, but you would end up with a server with 2 replicas, which is= dangerous/useless.</div><div>Okay, so you use nfs as storage domain, but in= your setup the HA is not guaranteed: if a physical machine goes down and it= 's the one where the storage domain resides you are lost. Why not using glus= ter instead of nfs for the ovirt disks? You can still reserve a small gluste= r space for the non-ceph machines (for example a cinder VM) and ceph for the= rest. Where do you have your cinder running?</div><div>Cheers,</div><div><b= r></div><div> Alessandro</div><span class=3D""><div><br>Il gior= no 18 dic 2016, alle ore 18:05, rajatjpatel <<a href=3D"mailto:rajatjpate= l@gmail.com" target=3D"_blank">rajatjpatel@gmail.com</a>> ha scritto:<br>= <br></div></span><blockquote type=3D"cite"><div><div dir=3D"ltr"><div class=3D= "gmail_default" style=3D"font-family:comic sans ms,sans-serif;font-size:larg= e;color:rgb(0,0,255)">Hi Alessandro,<br><br></div><div class=3D"gmail_defaul= t" style=3D"font-family:comic sans ms,sans-serif;font-size:large;color:rgb(0= ,0,255)"><span class=3D"">Right now I have 2 physical server where I have ho= st ovirt these are HP proliant dl 380 each server 1*500GB SAS & 1T= B *4 SAS Disk and 1*500GB SSD. So right now I have use only one disk which 5= 00GB of SAS for my ovirt to run on both server. rest are not in use. At pres= ent I am using NFS which coming from mapper to ovirt as storage, go forward w= e like to use all these disk as hyper-converged for ovirt. RH I could s= ee there is KB for using gluster. 
But we are looking for Ceph bcoz best pref= romance and scale.<br><br></span><Screenshot from 2016-12-18 21-03-21.pn= g><br></div><div class=3D"gmail_default" style=3D"font-family:comic sans m= s,sans-serif;font-size:large;color:rgb(0,0,255)">Regards<br></div><div class= =3D"gmail_default" style=3D"font-family:comic sans ms,sans-serif;font-size:l= arge;color:rgb(0,0,255)">Rajat<br></div></div><div><div class=3D"h5"><div cl= ass=3D"gmail_extra"><br clear=3D"all"><div><div class=3D"m_79061084183831192= 04gmail_signature" data-smartmail=3D"gmail_signature"><div dir=3D"ltr"><div>= <font style=3D"background-color:rgb(243,243,243)" size=3D"4" color=3D"#0000f= f" face=3D"tahoma, sans-serif">Hi</font></div><font style=3D"background-colo= r:rgb(243,243,243)" size=3D"4" color=3D"#0000ff" face=3D"tahoma, sans-serif"= <div><font style=3D"background-color:rgb(243,243,243)" size=3D"4" color=3D"= #0000ff" face=3D"tahoma, sans-serif"><br></font></div><div><font style=3D"ba= ckground-color:rgb(243,243,243)" size=3D"4" color=3D"#0000ff" face=3D"tahoma= , sans-serif"><br></font></div>Regards,<br>Rajat Patel<br><br><a href=3D"htt=
<div><br></div><div>1) what to you mean by using 4TB for ovirt? In which ma= chines and how do you make it available to ovirt?</div><div><br></div><div>2= ) how do you plan to use ceph with ovirt?</div><div><br></div><div>I guess w= e can give more help if you clarify those points.</div><div>Thanks,</div><di= v><br></div><div> Alessandro </div><div><div class=3D"m_790= 6108418383119204h5"><div><br>Il giorno 18 dic 2016, alle ore 17:33, rajatjpa= tel <<a href=3D"mailto:rajatjpatel@gmail.com" target=3D"_blank">rajatjpat= el@gmail.com</a>> ha scritto:<br><br></div><blockquote type=3D"cite"><div= <div dir=3D"ltr"><div><div><div><div><div>Great, thanks! Alessandro ++ Yani= v ++ <br><br></div>What I want to use around 4 TB of SAS disk for my Ovirt (= which going to be RHV4.0.5 once POC get 100% successful, in fact all product= will be RH )<br><br></div>I had done so much duckduckgo for all these solut= ion and use lot of reference from <a href=3D"http://ovirt.org" target=3D"_bl= ank">ovirt.org</a> & <a href=3D"http://access.redhat.com" target=3D"_bla= nk">access.redhat.com</a> for setting up a Ovirt engine and hyp.<br><br></di= v>We dont mind having more guest running and creating ceph block storage and= which will be presented to ovirt as storage. Gluster is not is use right no= w bcoz we have DB will be running on guest.<br><br></div>Regard<br></div>Raj= at <br></div><br><div class=3D"gmail_quote"><div dir=3D"ltr">On Sun, Dec 18,= 2016 at 8:21 PM Alessandro De Salvo <<a href=3D"mailto:Alessandro.DeSalv= o@roma1.infn.it" target=3D"_blank">Alessandro.DeSalvo@roma1.infn<wbr>.it</a>= > wrote:<br></div><blockquote class=3D"gmail_quote" style=3D"margin:0 0 0= .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir=3D"auto" class=3D= "m_7906108418383119204m_324330166984056793gmail_msg"><div class=3D"m_7906108= 418383119204m_324330166984056793gmail_msg"></div><div class=3D"m_79061084183= 83119204m_324330166984056793gmail_msg">Hi,</div><div class=3D"m_790610841838= 3119204m_324330166984056793gmail_msg">having a 3-node ceph cluster is the ba= re minimum you can have to make it working, unless you want to have just a r= eplica-2 mode, which is not safe.</div><div class=3D"m_7906108418383119204m_= 324330166984056793gmail_msg">It's not true that ceph is not easy to configur= e, you might use very easily ceph-deploy, have puppet configuring it or even= run it in containers. Using docker is in fact the easiest solution, it real= ly requires 10 minutes to make a cluster up. I've tried it both with jewel (= official containers) and kraken (custom containers), and it works pretty wel= l.</div><div class=3D"m_7906108418383119204m_324330166984056793gmail_msg">Th= e real problem is not creating and configuring a ceph cluster, but using it f= rom ovirt, as it requires cinder, i.e. a minimal setup of openstack. We have= it and it's working pretty well, but it requires some work. 
For your refere= nce we have cinder running on an ovirt VM using gluster.</div><div class=3D"= m_7906108418383119204m_324330166984056793gmail_msg">Cheers,</div><div class=3D= "m_7906108418383119204m_324330166984056793gmail_msg"><br class=3D"m_79061084= 18383119204m_324330166984056793gmail_msg"></div><div class=3D"m_790610841838= 3119204m_324330166984056793gmail_msg"> Alessandro </div></d= iv><div dir=3D"auto" class=3D"m_7906108418383119204m_324330166984056793gmail= _msg"><div class=3D"m_7906108418383119204m_324330166984056793gmail_msg"><br c= lass=3D"m_7906108418383119204m_324330166984056793gmail_msg">Il giorno 18 dic= 2016, alle ore 17:07, Yaniv Kaul <<a href=3D"mailto:ykaul@redhat.com" cl= ass=3D"m_7906108418383119204m_324330166984056793gmail_msg" target=3D"_blank"= ykaul@redhat.com</a>> ha scritto:<br class=3D"m_7906108418383119204m_324= 330166984056793gmail_msg"><br class=3D"m_7906108418383119204m_32433016698405= 6793gmail_msg"></div><blockquote type=3D"cite" class=3D"m_790610841838311920= 4m_324330166984056793gmail_msg"><div class=3D"m_7906108418383119204m_3243301= 66984056793gmail_msg"><div dir=3D"ltr" class=3D"m_7906108418383119204m_32433= 0166984056793gmail_msg"><br class=3D"m_7906108418383119204m_3243301669840567= 93gmail_msg"><div class=3D"gmail_extra m_7906108418383119204m_32433016698405= 6793gmail_msg"><br class=3D"m_7906108418383119204m_324330166984056793gmail_m= sg"><div class=3D"gmail_quote m_7906108418383119204m_324330166984056793gmail= _msg">On Sun, Dec 18, 2016 at 3:29 PM, rajatjpatel <span dir=3D"ltr" class=3D= "m_7906108418383119204m_324330166984056793gmail_msg"><<a href=3D"mailto:r= ajatjpatel@gmail.com" class=3D"m_7906108418383119204m_324330166984056793gmai= l_msg" target=3D"_blank">rajatjpatel@gmail.com</a>></span> wrote:<br clas= s=3D"m_7906108418383119204m_324330166984056793gmail_msg"><blockquote class=3D= "gmail_quote m_7906108418383119204m_324330166984056793gmail_msg" style=3D"ma= rgin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir=3D"ltr= " class=3D"m_7906108418383119204m_324330166984056793gmail_msg"><div class=3D= "m_7906108418383119204m_324330166984056793m_-4293604042961126787m_-815575019= 4716306479gmail_signature m_7906108418383119204m_324330166984056793gmail_msg= "><div dir=3D"ltr" class=3D"m_7906108418383119204m_324330166984056793gmail_m= sg"><div class=3D"m_7906108418383119204m_324330166984056793gmail_msg">=E2=80= =8BDear Team,<br class=3D"m_7906108418383119204m_324330166984056793gmail_msg= "><br class=3D"m_7906108418383119204m_324330166984056793gmail_msg">We are us= ing Ovirt 4.0 for POC what we are doing I want to check with all Guru's Ovir= t.<br class=3D"m_7906108418383119204m_324330166984056793gmail_msg"><br class= =3D"m_7906108418383119204m_324330166984056793gmail_msg">We have 2 hp prolian= t dl 380 with 500GB SAS & 1TB *4 SAS Disk and 500GB SSD.<br class=3D"m_7= 906108418383119204m_324330166984056793gmail_msg"><br class=3D"m_790610841838= 3119204m_324330166984056793gmail_msg">Waht we are done we have install ovirt= hyp on these h/w and we have physical server where we are running our manag= er for ovirt. 
For ovirt hyp we are using only one 500GB of one HDD rest we h= ave kept for ceph, so we have 3 node as guest running on ovirt and for ceph.= My question you all is what I am doing is right or wrong.<br class=3D"m_790= 6108418383119204m_324330166984056793gmail_msg"></div></div></div></div></blo= ckquote><div class=3D"m_7906108418383119204m_324330166984056793gmail_msg"><b= r class=3D"m_7906108418383119204m_324330166984056793gmail_msg"></div><div cl= ass=3D"m_7906108418383119204m_324330166984056793gmail_msg">I think Ceph requ= ires a lot more resources than above. It's also a bit more challenging to co= nfigure. I would highly recommend a 3-node cluster with Gluster.</div><div c= lass=3D"m_7906108418383119204m_324330166984056793gmail_msg">Y.</div><div cla= ss=3D"m_7906108418383119204m_324330166984056793gmail_msg"> </div><block= quote class=3D"gmail_quote m_7906108418383119204m_324330166984056793gmail_ms= g" style=3D"margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><=
<div class=3D"m_7906108418383119204m_324330166984056793m_-42936040429611267= 87m_-8155750194716306479gmail_signature m_7906108418383119204m_3243301669840= 56793gmail_msg"><div dir=3D"ltr" class=3D"m_7906108418383119204m_32433016698= 4056793gmail_msg"><div class=3D"m_7906108418383119204m_324330166984056793gma= il_msg"><br class=3D"m_7906108418383119204m_324330166984056793gmail_msg"></d= iv><div class=3D"m_7906108418383119204m_324330166984056793gmail_msg">Regards= <br class=3D"m_7906108418383119204m_324330166984056793gmail_msg"></div><div c= lass=3D"m_7906108418383119204m_324330166984056793gmail_msg">Rajat=E2=80=8B</=
--Apple-Mail-BEC7796A-D99C-4DBE-B890-89AF12E27C9A Content-Type: text/html; charset=utf-8 Content-Transfer-Encoding: quoted-printable <html><head><meta http-equiv=3D"content-type" content=3D"text/html; charset=3D= utf-8"></head><body dir=3D"auto"><div></div><div>Hi Rajat,</div><div>OK, I s= ee. Well, so just consider that ceph will not work at best in your setup, un= less you add at least a physical machine. Same is true for ovirt if you are o= nly using native NFS, as you loose a real HA.</div><div>Having said this, of= course you choose what's best for your site or affordable, but your setup l= ooks quite fragile to me. Happy to help more if you need.</div><div>Regards,= </div><div><br></div><div> Alessandro</div><div><br>Il giorno 18= dic 2016, alle ore 18:22, rajatjpatel <<a href=3D"mailto:rajatjpatel@gma= il.com">rajatjpatel@gmail.com</a>> ha scritto:<br><br></div><blockquote t= ype=3D"cite"><div><div dir=3D"ltr"><div class=3D"gmail_default" style=3D"fon= t-family:comic sans ms,sans-serif;font-size:large;color:rgb(0,0,255)">Alessa= ndro,<br><br></div><div class=3D"gmail_default" style=3D"font-family:comic s= ans ms,sans-serif;font-size:large;color:rgb(0,0,255)">Right now I dont have c= inder running in my setup in case if ceph don't work then I have get one vm r= unning open stack all in one and have all these disk connect my open stack u= sing cinder I can present storage to my ovirt.<br><br></div><div class=3D"gm= ail_default" style=3D"font-family:comic sans ms,sans-serif;font-size:large;c= olor:rgb(0,0,255)">At the same time I not getting case study for the same.<b= r></div><div class=3D"gmail_default" style=3D"font-family:comic sans ms,sans= -serif;font-size:large;color:rgb(0,0,255)"><br></div><div class=3D"gmail_def= ault" style=3D"font-family:comic sans ms,sans-serif;font-size:large;color:rg= b(0,0,255)">Regards<br></div><div class=3D"gmail_default" style=3D"font-fami= ly:comic sans ms,sans-serif;font-size:large;color:rgb(0,0,255)">Rajat<br></d= iv></div><div class=3D"gmail_extra"><br clear=3D"all"><div><div class=3D"gma= il_signature" data-smartmail=3D"gmail_signature"><div dir=3D"ltr"><div><font= face=3D"tahoma, sans-serif" size=3D"4" style=3D"background-color:rgb(243,24= 3,243)" color=3D"#0000ff">Hi</font></div><font face=3D"tahoma, sans-serif" s= ize=3D"4" style=3D"background-color:rgb(243,243,243)" color=3D"#0000ff"><div= p://studyhat.blogspot.com/" target=3D"_blank">http://studyhat.blogspot.com</= a><br>FIRST THEY IGNORE YOU...<br>THEN THEY LAUGH AT YOU...<br>THEN THEY FIG= HT YOU...<br>THEN YOU WIN...</font><br><br></div></div></div> <br><div class=3D"gmail_quote">On Sun, Dec 18, 2016 at 8:49 PM, Alessandro D= e Salvo <span dir=3D"ltr"><<a href=3D"mailto:Alessandro.DeSalvo@roma1.inf= n.it" target=3D"_blank">Alessandro.DeSalvo@roma1.<wbr>infn.it</a>></span>= wrote:<br><blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;bord= er-left:1px #ccc solid;padding-left:1ex"><div dir=3D"auto"><div></div><div>H= i Rajat,</div><div>sorry but I do not really have a clear picture of your ac= tual setup, can you please explain a bit more?</div><div>In particular:</div= div dir=3D"ltr" class=3D"m_7906108418383119204m_324330166984056793gmail_msg"= div><br class=3D"m_7906108418383119204m_324330166984056793gmail_msg"></div><= /div> </div> <br class=3D"m_7906108418383119204m_324330166984056793gmail_msg">___________= ___________________<wbr>_________________<br class=3D"m_7906108418383119204m= _324330166984056793gmail_msg"> Users mailing list<br 
class=3D"m_7906108418383119204m_324330166984056793gmai= l_msg"> <a href=3D"mailto:Users@ovirt.org" class=3D"m_7906108418383119204m_324330166= 984056793gmail_msg" target=3D"_blank">Users@ovirt.org</a><br class=3D"m_7906= 108418383119204m_324330166984056793gmail_msg"> <a href=3D"http://lists.ovirt.org/mailman/listinfo/users" rel=3D"noreferrer"= class=3D"m_7906108418383119204m_324330166984056793gmail_msg" target=3D"_bla= nk">http://lists.ovirt.org/mailman<wbr>/listinfo/users</a><br class=3D"m_790= 6108418383119204m_324330166984056793gmail_msg"> <br class=3D"m_7906108418383119204m_324330166984056793gmail_msg"></blockquot= e></div><br class=3D"m_7906108418383119204m_324330166984056793gmail_msg"></d= iv></div> </div></blockquote><blockquote type=3D"cite" class=3D"m_7906108418383119204m= _324330166984056793gmail_msg"><div class=3D"m_7906108418383119204m_324330166= 984056793gmail_msg"><span class=3D"m_7906108418383119204m_324330166984056793= gmail_msg">______________________________<wbr>_________________</span><br cl= ass=3D"m_7906108418383119204m_324330166984056793gmail_msg"><span class=3D"m_= 7906108418383119204m_324330166984056793gmail_msg">Users mailing list</span><= br class=3D"m_7906108418383119204m_324330166984056793gmail_msg"><span class=3D= "m_7906108418383119204m_324330166984056793gmail_msg"><a href=3D"mailto:Users= @ovirt.org" class=3D"m_7906108418383119204m_324330166984056793gmail_msg" tar= get=3D"_blank">Users@ovirt.org</a></span><br class=3D"m_7906108418383119204m= _324330166984056793gmail_msg"><span class=3D"m_7906108418383119204m_32433016= 6984056793gmail_msg"><a href=3D"http://lists.ovirt.org/mailman/listinfo/user= s" class=3D"m_7906108418383119204m_324330166984056793gmail_msg" target=3D"_b= lank">http://lists.ovirt.org/mailman<wbr>/listinfo/users</a></span><br class= =3D"m_7906108418383119204m_324330166984056793gmail_msg"></div></blockquote><= /div></blockquote></div><div dir=3D"ltr">-- <br></div><div data-smartmail=3D= "gmail_signature"><p dir=3D"ltr">Sent from my Cell Phone - excuse the typos &= amp; auto incorrect</p> </div> </div></blockquote></div></div></div></blockquote></div><br></div> </div></div></div></blockquote></div></blockquote></div><br></div> </div></blockquote></body></html>= --Apple-Mail-BEC7796A-D99C-4DBE-B890-89AF12E27C9A--

Hi Rajat,
3 is the bare minimum, but yes, it works well, as I said before. You still have to decide whether you want more resiliency for oVirt, though, and standard NFS is not helping much there.
If you plan to run your Cinder or all-in-one OpenStack box as a VM in oVirt as well, you should consider moving from standard NFS to something else, like Gluster.
Cheers,

   Alessandro
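For reference, a minimal sketch of what the move from NFS to a replica-3 Gluster volume could look like. This assumes three hosts named node1, node2 and node3 (hypothetical names) with a spare filesystem mounted at /gluster/brick1 on each; the resulting volume can then be added to oVirt as a GlusterFS storage domain:

  # from node1, after installing glusterfs-server on all three hosts
  gluster peer probe node2
  gluster peer probe node3

  # replica 3: the volume stays available if any one host goes down
  gluster volume create ovirt-data replica 3 \
      node1:/gluster/brick1/ovirt-data \
      node2:/gluster/brick1/ovirt-data \
      node3:/gluster/brick1/ovirt-data

  # the 'virt' option group ships with the Gluster packages (if present on
  # your distro) and applies the settings recommended for VM images
  gluster volume set ovirt-data group virt
  gluster volume start ovirt-data

This is only a sketch under those assumptions, not a tested recipe for the hardware described in this thread.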
On 18 Dec 2016, at 18:56, rajatjpatel <rajatjpatel@gmail.com> wrote:
On Sun, Dec 18, 2016 at 9:31 PM, Alessandro De Salvo <Alessandro.DeSalvo@roma1.infn.it> wrote:
Alessandro

Thank you Alessandro, for all your support. If I add one more ovirt-hyp to my setup with the same hardware config, will it work for Ceph?

Regards
Rajat

On Sun, Dec 18, 2016 at 6:21 PM, Alessandro De Salvo < Alessandro.DeSalvo@roma1.infn.it> wrote:
Hi, having a 3-node ceph cluster is the bare minimum you can have to make it working, unless you want to have just a replica-2 mode, which is not safe.
How well does it perform?
It's not true that ceph is not easy to configure, you might use very easily ceph-deploy, have puppet configuring it or even run it in containers. Using docker is in fact the easiest solution, it really requires 10 minutes to make a cluster up. I've tried it both with jewel (official containers) and kraken (custom containers), and it works pretty well.
This could be a great blog post in ovirt.org site - care to write something describing the configuration and setup? Y.
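As a rough illustration of the container approach described above, a single-node demo cluster can be started with the ceph/demo image from the ceph-docker project (the IP address and network below are placeholders; a production cluster would run separate mon and osd containers on each host):

  # one container providing a mon, an osd and an mds on the local host
  docker run -d --net=host \
      -v /etc/ceph:/etc/ceph \
      -v /var/lib/ceph:/var/lib/ceph \
      -e MON_IP=192.168.0.20 \
      -e CEPH_PUBLIC_NETWORK=192.168.0.0/24 \
      ceph/demo

  # check cluster health once the container is up
  docker exec $(docker ps -q -f ancestor=ceph/demo) ceph -s

This is a sketch of the idea, assuming the ceph/demo image and its MON_IP/CEPH_PUBLIC_NETWORK environment variables; check the ceph-docker documentation for the exact image and options for your Ceph release.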
The real problem is not creating and configuring a ceph cluster, but using it from ovirt, as it requires cinder, i.e. a minimal setup of openstack. We have it and it's working pretty well, but it requires some work. For your reference we have cinder running on an ovirt VM using gluster. Cheers,
Alessandro
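To make the Cinder dependency concrete, here is a minimal sketch of what an RBD backend in cinder.conf might look like; the pool name, Ceph user and secret UUID are purely illustrative, and oVirt would then consume this Cinder instance through its external provider integration:

  # append an RBD backend to an existing Cinder installation
  cat >> /etc/cinder/cinder.conf <<'EOF'
  [ceph]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  volume_backend_name = ceph
  rbd_pool = volumes
  rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_user = cinder
  rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
  EOF

  # enable the backend and restart the volume service
  # (the service name below assumes an RDO-style install)
  openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends ceph
  systemctl restart openstack-cinder-volume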
Il giorno 18 dic 2016, alle ore 17:07, Yaniv Kaul <ykaul@redhat.com> ha scritto:
On Sun, Dec 18, 2016 at 3:29 PM, rajatjpatel <rajatjpatel@gmail.com> wrote:
Dear Team,
We are using Ovirt 4.0 for POC what we are doing I want to check with all Guru's Ovirt.
We have 2 hp proliant dl 380 with 500GB SAS & 1TB *4 SAS Disk and 500GB SSD.
Waht we are done we have install ovirt hyp on these h/w and we have physical server where we are running our manager for ovirt. For ovirt hyp we are using only one 500GB of one HDD rest we have kept for ceph, so we have 3 node as guest running on ovirt and for ceph. My question you all is what I am doing is right or wrong.
I think Ceph requires a lot more resources than above. It's also a bit more challenging to configure. I would highly recommend a 3-node cluster with Gluster. Y.
Regards Rajat
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

Yaniv, if I am not wrong you are referring to this: https://www.ovirt.org/develop/release-management/features/cinderglance-docke... The only issue right now is that this is not officially added by RH; once we finish this POC we will be going for the RH product.

Regards,
Rajat Patel

http://studyhat.blogspot.com
FIRST THEY IGNORE YOU...
THEN THEY LAUGH AT YOU...
THEN THEY FIGHT YOU...
THEN YOU WIN...

Hi Yaniv,
On 18 Dec 2016, at 17:37, Yaniv Kaul <ykaul@redhat.com> wrote:
On Sun, Dec 18, 2016 at 6:21 PM, Alessandro De Salvo <Alessandro.DeSalvo@roma1.infn.it> wrote:

Hi, having a 3-node ceph cluster is the bare minimum you can have to make it working, unless you want to have just a replica-2 mode, which is not safe.

How well does it perform?
One of the ceph clusters we use had exactly this setup: 3 DELL R630 (ceph jewel), 6 1TB NL-SAS disks, so 3 mons and 6 osds. We bound the cluster network to a dedicated 1Gbps interface. I can say it works pretty well: the performance reaches up to 100MB/s per rbd device, which is the expected maximum for that network connection (a 1Gbps link tops out at roughly 110-120MB/s of payload in practice). Resiliency is also pretty good: we can lose 2 osds (i.e. a full machine) without impacting performance.
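For reference, the dedicated cluster network mentioned above is only a couple of lines in ceph.conf; a minimal sketch, with illustrative subnets and the usual replica-3 pool defaults (not necessarily the exact values of that cluster):

  cat >> /etc/ceph/ceph.conf <<'EOF'
  [global]
  # client and VM traffic
  public network = 192.168.10.0/24
  # OSD replication/recovery traffic on the dedicated 1Gbps interface
  cluster network = 192.168.20.0/24
  # keep 3 copies, keep serving I/O while 2 are available
  osd pool default size = 3
  osd pool default min size = 2
  EOF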
This could be a great blog post in ovirt.org site - care to write something describing the configuration and setup?
Oh sure, if it may be of general interest I'll be glad to. How can I do it? :-)
Cheers,

   Alessandro

Hi,
sorry, I forgot to mention that you may have both Gluster and Ceph on the same machines, as long as you have enough disk space.
Cheers,

   Alessandro
participants (3)
- Alessandro De Salvo
- rajatjpatel
- Yaniv Kaul