<html><head><meta http-equiv="content-type" content="text/html; charset=utf-8"></head><body dir="auto"><div></div><div>Hi Rajat,</div><div>OK, I see. Well, just consider that ceph will not work well in your setup unless you add at least one more physical machine. The same is true for ovirt if you are only using native NFS, as you lose real HA.</div><div>Having said this, of course you should choose whatever is best or affordable for your site, but your setup looks quite fragile to me. Happy to help more if you need.</div><div>Regards,</div><div><br></div><div> Alessandro</div><div><br>On 18 Dec 2016, at 18:22, rajatjpatel <<a href="mailto:rajatjpatel@gmail.com">rajatjpatel@gmail.com</a>> wrote:<br><br></div><blockquote type="cite"><div><div dir="ltr"><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:large;color:rgb(0,0,255)">Alessandro,<br><br></div><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:large;color:rgb(0,0,255)">Right now I don't have Cinder running in my setup. In case Ceph doesn't work, I will get one VM running OpenStack all-in-one, connect all these disks to my OpenStack, and through Cinder I can present the storage to my oVirt.<br><br></div><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:large;color:rgb(0,0,255)">At the same time, I have not found a case study for this.<br></div><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:large;color:rgb(0,0,255)"><br></div><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:large;color:rgb(0,0,255)">Regards<br></div><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:large;color:rgb(0,0,255)">Rajat<br></div></div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><font face="tahoma, sans-serif" size="4" style="background-color:rgb(243,243,243)" 
color="#0000ff">Hi</font></div><font face="tahoma, sans-serif" size="4" style="background-color:rgb(243,243,243)" color="#0000ff"><div><font face="tahoma, sans-serif" size="4" style="background-color:rgb(243,243,243)" color="#0000ff"><br></font></div><div><font face="tahoma, sans-serif" size="4" style="background-color:rgb(243,243,243)" color="#0000ff"><br></font></div>Regards,<br>Rajat Patel<br><br><a href="http://studyhat.blogspot.com/" target="_blank">http://studyhat.blogspot.com</a><br>FIRST THEY IGNORE YOU...<br>THEN THEY LAUGH AT YOU...<br>THEN THEY FIGHT YOU...<br>THEN YOU WIN...</font><br><br></div></div></div>
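For reference, the Cinder route Rajat describes above usually comes down to an RBD backend stanza in cinder.conf; a minimal sketch, assuming a Ceph pool named "volumes" and a cephx user named "cinder" (both names invented for the example, not taken from this thread):

```ini
# cinder.conf -- hypothetical RBD backend stanza (pool/user names are assumptions)
[DEFAULT]
enabled_backends = ceph-rbd

[ceph-rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-rbd
rbd_pool = volumes                    # assumed Ceph pool for Cinder volumes
rbd_user = cinder                     # assumed cephx user with access to that pool
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_secret_uuid = <libvirt secret uuid>
```

oVirt would then consume this through its OpenStack Volume (Cinder) external provider, which is the integration Alessandro refers to elsewhere in the thread.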
<br><div class="gmail_quote">On Sun, Dec 18, 2016 at 9:17 PM, Alessandro De Salvo <span dir="ltr"><<a href="mailto:Alessandro.DeSalvo@roma1.infn.it" target="_blank">Alessandro.DeSalvo@roma1.infn.it</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="auto"><div></div><div>Hi,</div><div>oh, so you have only 2 physical servers? I had understood there were 3! Well, in this case ceph would not work very well: too few resources and too little redundancy. You could try replica 2, but it's not safe. Replica 3 could be forced, but you would end up with one server holding 2 replicas, which is dangerous/useless.</div><div>Okay, so you use nfs as the storage domain, but in your setup HA is not guaranteed: if a physical machine goes down and it's the one where the storage domain resides, you are lost. Why not use gluster instead of nfs for the ovirt disks? You can still reserve a small gluster space for the non-ceph machines (for example a cinder VM) and use ceph for the rest. Where do you have your cinder running?</div><div>Cheers,</div><div><br></div><div> Alessandro</div><span class=""><div><br>On 18 Dec 2016, at 18:05, rajatjpatel <<a href="mailto:rajatjpatel@gmail.com" target="_blank">rajatjpatel@gmail.com</a>> wrote:<br><br></div></span><blockquote type="cite"><div><div dir="ltr"><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:large;color:rgb(0,0,255)">Hi Alessandro,<br><br></div><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:large;color:rgb(0,0,255)"><span class="">Right now I have 2 physical servers where I host oVirt; these are HP ProLiant DL380s, each with 1*500GB SAS, 4*1TB SAS disks and 1*500GB SSD. So right now I use only one disk, the 500GB SAS, for oVirt to run on both servers; the rest are not in use. 
At present I am using NFS, coming from the mapper, as oVirt storage; going forward we would like to use all these disks hyper-converged for oVirt. On the RH side I can see there is a KB for using Gluster, but we are looking at Ceph because of its better performance and scalability.<br><br></span><Screenshot from 2016-12-18 21-03-21.png><br></div><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:large;color:rgb(0,0,255)">Regards<br></div><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:large;color:rgb(0,0,255)">Rajat<br></div></div><div><div class="h5"><div class="gmail_extra"><br clear="all"><div><div class="m_7906108418383119204gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><font style="background-color:rgb(243,243,243)" size="4" color="#0000ff" face="tahoma, sans-serif">Hi</font></div><font style="background-color:rgb(243,243,243)" size="4" color="#0000ff" face="tahoma, sans-serif"><div><font style="background-color:rgb(243,243,243)" size="4" color="#0000ff" face="tahoma, sans-serif"><br></font></div><div><font style="background-color:rgb(243,243,243)" size="4" color="#0000ff" face="tahoma, sans-serif"><br></font></div>Regards,<br>Rajat Patel<br><br><a href="http://studyhat.blogspot.com/" target="_blank">http://studyhat.blogspot.com</a><br>FIRST THEY IGNORE YOU...<br>THEN THEY LAUGH AT YOU...<br>THEN THEY FIGHT YOU...<br>THEN YOU WIN...</font><br><br></div></div></div>
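As a rough sketch of the Gluster alternative raised in this thread, a replica-3 volume suitable for an oVirt storage domain would be created along these lines (hostnames and brick paths are invented for the example, and this explicitly needs a third node, which is the constraint being discussed):

```shell
# Assumes glusterd is running on three peered hosts -- replica 3 needs 3 nodes.
gluster peer probe host2.example.com
gluster peer probe host3.example.com

# One brick per host; replica 3 keeps a full copy of the data on each node.
gluster volume create ovirt-data replica 3 \
  host1.example.com:/bricks/ovirt-data/brick \
  host2.example.com:/bricks/ovirt-data/brick \
  host3.example.com:/bricks/ovirt-data/brick

gluster volume set ovirt-data group virt   # option group tuned for VM images
gluster volume start ovirt-data
```

With only two physical hosts this hits the same wall as replica-3 Ceph; the usual way out is a third, even small, node carrying an arbiter brick (`replica 3 arbiter 1`), which stores metadata only.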
<br><div class="gmail_quote">On Sun, Dec 18, 2016 at 8:49 PM, Alessandro De Salvo <span dir="ltr"><<a href="mailto:Alessandro.DeSalvo@roma1.infn.it" target="_blank">Alessandro.DeSalvo@roma1.<wbr>infn.it</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="auto"><div></div><div>Hi Rajat,</div><div>sorry, but I do not really have a clear picture of your actual setup; can you please explain a bit more?</div><div>In particular:</div><div><br></div><div>1) what do you mean by using 4TB for ovirt? On which machines, and how do you make it available to ovirt?</div><div><br></div><div>2) how do you plan to use ceph with ovirt?</div><div><br></div><div>I guess we can give more help if you clarify those points.</div><div>Thanks,</div><div><br></div><div> Alessandro </div><div><div class="m_7906108418383119204h5"><div><br>On 18 Dec 2016, at 17:33, rajatjpatel <<a href="mailto:rajatjpatel@gmail.com" target="_blank">rajatjpatel@gmail.com</a>> wrote:<br><br></div><blockquote type="cite"><div><div dir="ltr"><div><div><div><div><div>Great, thanks! Alessandro ++ Yaniv ++ <br><br></div>What I want is to use around 4 TB of SAS disks for my oVirt (which is going to be RHV 4.0.5 once the POC is 100% successful; in fact all products will be RH).<br><br></div>I have done a lot of searching on all these solutions and used many references from <a href="http://ovirt.org" target="_blank">ovirt.org</a> & <a href="http://access.redhat.com" target="_blank">access.redhat.com</a> for setting up the oVirt engine and hypervisors.<br><br></div>We don't mind having more guests running and creating Ceph block storage, which will be presented to oVirt as storage. 
Gluster is not in use right now because we will have a DB running on a guest.<br><br></div>Regards<br></div>Rajat <br></div><br><div class="gmail_quote"><div dir="ltr">On Sun, Dec 18, 2016 at 8:21 PM Alessandro De Salvo <<a href="mailto:Alessandro.DeSalvo@roma1.infn.it" target="_blank">Alessandro.DeSalvo@roma1.infn<wbr>.it</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="auto" class="m_7906108418383119204m_324330166984056793gmail_msg"><div class="m_7906108418383119204m_324330166984056793gmail_msg"></div><div class="m_7906108418383119204m_324330166984056793gmail_msg">Hi,</div><div class="m_7906108418383119204m_324330166984056793gmail_msg">having a 3-node ceph cluster is the bare minimum you can have to make it work, unless you want just replica-2 mode, which is not safe.</div><div class="m_7906108418383119204m_324330166984056793gmail_msg">It's not true that ceph is not easy to configure: you can quite easily use ceph-deploy, have puppet configure it, or even run it in containers. Using docker is in fact the easiest solution; it really takes 10 minutes to bring a cluster up. I've tried it both with jewel (official containers) and kraken (custom containers), and it works pretty well.</div><div class="m_7906108418383119204m_324330166984056793gmail_msg">The real problem is not creating and configuring a ceph cluster, but using it from ovirt, as that requires cinder, i.e. a minimal setup of openstack. We have it and it's working pretty well, but it requires some work. 
For your reference we have cinder running on an ovirt VM using gluster.</div><div class="m_7906108418383119204m_324330166984056793gmail_msg">Cheers,</div><div class="m_7906108418383119204m_324330166984056793gmail_msg"><br class="m_7906108418383119204m_324330166984056793gmail_msg"></div><div class="m_7906108418383119204m_324330166984056793gmail_msg"> Alessandro </div></div><div dir="auto" class="m_7906108418383119204m_324330166984056793gmail_msg"><div class="m_7906108418383119204m_324330166984056793gmail_msg"><br class="m_7906108418383119204m_324330166984056793gmail_msg">On 18 Dec 2016, at 17:07, Yaniv Kaul <<a href="mailto:ykaul@redhat.com" class="m_7906108418383119204m_324330166984056793gmail_msg" target="_blank">ykaul@redhat.com</a>> wrote:<br class="m_7906108418383119204m_324330166984056793gmail_msg"><br class="m_7906108418383119204m_324330166984056793gmail_msg"></div><blockquote type="cite" class="m_7906108418383119204m_324330166984056793gmail_msg"><div class="m_7906108418383119204m_324330166984056793gmail_msg"><div dir="ltr" class="m_7906108418383119204m_324330166984056793gmail_msg"><br class="m_7906108418383119204m_324330166984056793gmail_msg"><div class="gmail_extra m_7906108418383119204m_324330166984056793gmail_msg"><br class="m_7906108418383119204m_324330166984056793gmail_msg"><div class="gmail_quote m_7906108418383119204m_324330166984056793gmail_msg">On Sun, Dec 18, 2016 at 3:29 PM, rajatjpatel <span dir="ltr" class="m_7906108418383119204m_324330166984056793gmail_msg"><<a href="mailto:rajatjpatel@gmail.com" class="m_7906108418383119204m_324330166984056793gmail_msg" target="_blank">rajatjpatel@gmail.com</a>></span> wrote:<br class="m_7906108418383119204m_324330166984056793gmail_msg"><blockquote class="gmail_quote m_7906108418383119204m_324330166984056793gmail_msg" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr" class="m_7906108418383119204m_324330166984056793gmail_msg"><div 
class="m_7906108418383119204m_324330166984056793m_-4293604042961126787m_-8155750194716306479gmail_signature m_7906108418383119204m_324330166984056793gmail_msg"><div dir="ltr" class="m_7906108418383119204m_324330166984056793gmail_msg"><div class="m_7906108418383119204m_324330166984056793gmail_msg">Dear Team,<br class="m_7906108418383119204m_324330166984056793gmail_msg"><br class="m_7906108418383119204m_324330166984056793gmail_msg">We are using oVirt 4.0 for a POC, and I want to check what we are doing with all the oVirt gurus.<br class="m_7906108418383119204m_324330166984056793gmail_msg"><br class="m_7906108418383119204m_324330166984056793gmail_msg">We have 2 HP ProLiant DL380s with 500GB SAS & 4*1TB SAS disks and a 500GB SSD.<br class="m_7906108418383119204m_324330166984056793gmail_msg"><br class="m_7906108418383119204m_324330166984056793gmail_msg">What we have done: we have installed the oVirt hypervisor on this h/w, and we have a physical server where we run our manager for oVirt. For the oVirt hypervisor we are using only one 500GB HDD; the rest we have kept for Ceph, so we have 3 nodes running as guests on oVirt for Ceph. My question to you all is whether what I am doing is right or wrong.<br class="m_7906108418383119204m_324330166984056793gmail_msg"></div></div></div></div></blockquote><div class="m_7906108418383119204m_324330166984056793gmail_msg"><br class="m_7906108418383119204m_324330166984056793gmail_msg"></div><div class="m_7906108418383119204m_324330166984056793gmail_msg">I think Ceph requires a lot more resources than above. It's also a bit more challenging to configure. 
I would highly recommend a 3-node cluster with Gluster.</div><div class="m_7906108418383119204m_324330166984056793gmail_msg">Y.</div><div class="m_7906108418383119204m_324330166984056793gmail_msg"> </div><blockquote class="gmail_quote m_7906108418383119204m_324330166984056793gmail_msg" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr" class="m_7906108418383119204m_324330166984056793gmail_msg"><div class="m_7906108418383119204m_324330166984056793m_-4293604042961126787m_-8155750194716306479gmail_signature m_7906108418383119204m_324330166984056793gmail_msg"><div dir="ltr" class="m_7906108418383119204m_324330166984056793gmail_msg"><div class="m_7906108418383119204m_324330166984056793gmail_msg"><br class="m_7906108418383119204m_324330166984056793gmail_msg"></div><div class="m_7906108418383119204m_324330166984056793gmail_msg">Regards<br class="m_7906108418383119204m_324330166984056793gmail_msg"></div><div class="m_7906108418383119204m_324330166984056793gmail_msg">Rajat</div><br class="m_7906108418383119204m_324330166984056793gmail_msg"></div></div>
</div>
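For scale, the ceph-deploy bootstrap Alessandro mentions above really is short; a minimal three-node sketch would look roughly like this (hostnames and the OSD disk are placeholders, and the `--data` flag follows ceph-deploy 2.x syntax; older releases used the host:disk form):

```shell
# Run from an admin node with passwordless SSH to ceph1, ceph2, ceph3.
ceph-deploy new ceph1 ceph2 ceph3        # generate initial ceph.conf and monmap
ceph-deploy install ceph1 ceph2 ceph3    # install the Ceph packages
ceph-deploy mon create-initial           # start the monitors and gather keys
ceph-deploy admin ceph1 ceph2 ceph3      # distribute the admin keyring
# One OSD per spare data disk on each host (e.g. the 1TB SAS disks here).
ceph-deploy osd create --data /dev/sdb ceph1
ceph-deploy osd create --data /dev/sdb ceph2
ceph-deploy osd create --data /dev/sdb ceph3
```

A two-node variant would mean forcing `osd pool default size = 2` in ceph.conf, but, as noted above, replica 2 is not safe.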
<br class="m_7906108418383119204m_324330166984056793gmail_msg">______________________________<wbr>_________________<br class="m_7906108418383119204m_324330166984056793gmail_msg">
Users mailing list<br class="m_7906108418383119204m_324330166984056793gmail_msg">
<a href="mailto:Users@ovirt.org" class="m_7906108418383119204m_324330166984056793gmail_msg" target="_blank">Users@ovirt.org</a><br class="m_7906108418383119204m_324330166984056793gmail_msg">
<a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" class="m_7906108418383119204m_324330166984056793gmail_msg" target="_blank">http://lists.ovirt.org/mailman<wbr>/listinfo/users</a><br class="m_7906108418383119204m_324330166984056793gmail_msg">
<br class="m_7906108418383119204m_324330166984056793gmail_msg"></blockquote></div><br class="m_7906108418383119204m_324330166984056793gmail_msg"></div></div>
</div></blockquote></div></blockquote></div><div dir="ltr">-- <br></div><div data-smartmail="gmail_signature"><p dir="ltr">Sent from my Cell Phone - excuse the typos & auto incorrect</p>
</div>
</div></blockquote></div></div></div></blockquote></div><br></div>
</div></div></div></blockquote></div></blockquote></div><br></div>
</div></blockquote></body></html>