<div dir="ltr"><div>It's likely possibile you will get more performance from a NFS server compared to Gluster. Specially if on your NFS server you have something like ZFS + SSD for L2ARC or ext4 + Bcache, but you get not redundancy. If you NFS server dies everything stops working, which is not the case with Distributed Storage.<br><br></div>Fernando<br></div><div class="gmail_extra"><br><div class="gmail_quote">2018-04-06 10:45 GMT-03:00 Jayme <span dir="ltr"><<a href="mailto:jaymef@gmail.com" target="_blank">jaymef@gmail.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Yaniv,<div><br></div><div>I appreciate your input, thanks! </div><div><br></div><div>I understand that everyone's use case is different, but I was hoping to hear from some users that are using oVirt hyper-converged setup and get some input on the performance. When I research GlusterFS I hear a lot about how it can be slow especially when dealing with small files. I'm starting to wonder if a straight up NFS server with a few SSDs would be less hassle and perhaps offer better VM performance than glusterFS can currently. </div><div><br></div><div>I want to get the best oVirt performance I can get (on somewhat of a budget) with a fairly small amount of required disk space (under 2TB). I'm not sure if hyper-converged setup w/GlusterFS is the answer or not. I'd like to avoid spending 15k only to find out that it's too slow. </div></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Apr 6, 2018 at 6:05 AM, Yaniv Kaul <span dir="ltr"><<a href="mailto:ykaul@redhat.com" target="_blank">ykaul@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="auto"><span><div><br><br><div class="gmail_quote"><div dir="ltr">On Thu, Apr 5, 2018, 11:39 PM Vincent Royer <<a href="mailto:vincent@epicenergy.ca" target="_blank">vincent@epicenergy.ca</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Jayme, <div><br></div><div>I'm doing a very similar build, the only difference really is I am using SSDs instead of HDDs. I have similar questions as you regarding expected performance. Have you considered JBOD + NFS? Putting a Gluster Replica 3 on top of RAID 10 arrays sounds very safe, but my gosh the capacity takes a massive hit. Am I correct in saying you will only get 4TB total usable capacity out of 24TB worth of disks? The cost per TB in that sort of scenario is immense. </div><div><br></div><div>My plan is two 2TB SSDs per server in JBOD with a caching raid card, with replica 3. I would end up with the same 4TB total capacity using 12TB of SSDs. </div></div></blockquote></div></div><div dir="auto"><br></div></span><div dir="auto">I'm not sure I see the value in RAID card if you don't use RAID and I'm not sure you really need caching on the card. </div><span class="m_2755559696524259706HOEnZb"><font color="#888888"><div dir="auto">Y. </div></font></span><div><div class="m_2755559696524259706h5"><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><br></div><div>I think Replica 3 is safe enough that you could forgo the RAID 10. But I'm talking from zero experience... Would love others to chime in with their opinions on both these setups. 
</div></div><div class="gmail_extra"><br clear="all"><div><div class="m_2755559696524259706m_-157098227340179066m_1656318568188939082gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div><div dir="ltr" style="font-size:12.8px"><font face="arial narrow, sans-serif" size="4"><b>Vincent Royer</b></font></div><div dir="ltr"><div style="font-size:12.8px"><font face="arial narrow, sans-serif" size="4"><b>778-825-1057</b></font></div><div style="font-size:12.8px"><font face="arial narrow, sans-serif" size="4"><b><br></b></font></div><div style="font-size:12.8px"><a href="http://www.epicenergy.ca/" rel="noreferrer" target="_blank"><img src="https://lh3.googleusercontent.com/ezD9JxeN67lTscselHjW4ZEqvQRj_i_T_Ky_s__GkR0AtanG1nK52CD1Rdhmyzbk7XRPf-Rhhnr2-SLPmhtd0J8xVMp0xOiR20ebQVb6jU2xwJPKN4lePRq5G2p93ybqA4SnqjCWCvEsNlK37a9FtStdFcTJyIoS2aHKiL69L3bZg1g7mEAMr7IHXVYvHgq3MX9rNaqIQx9T3uPuIHEDIfxrgpCrsBoEamgaRHj5hcpEobnogqCqpa1sRcKy927TSeR5BbVLM1-wn0h-4LvE8CMqj30jpQULxZ9uXEg_S6sR3ZeZjtkkRvgK4l_wiivWlyV_SjCDQ5QREG3jkuCjXC04i1NNmziGZZNWYsaohE6khrN4JGVtvaSOiS2woYJNz3Gt0EucxJp3eUBYspzTl9iZia7bLvFEGIODvmfjRVZJmnCrFucJxGkAtWn_EUIbxJXS5NWAFnvBoixQP563Fbu7dFKL77lDcb4PHwrnJVjqcgU0OTeiMBlMvEt97YgMpMvdi7rx9L4DeEbrIGUovgx4QLfRRoF_9u2mxFu8FHupCxfocSyB3vs8gqOLvVdx42qwiLN0sGLAg8K3G_vFwVNddXQ_yb3cGeCEAOTnk7jfiP4O4jsPIW2RRbnKKL-usL11Tp7gkxbgRouLOoyqpbisjA4vRWDeylj1yDGyBg=w400-h38-no" width="200" height="18"><br></a></div><div><b><font face="arial narrow, sans-serif" size="2">SUSTAINABLE MOBILE <font color="#6aa84f">ENERGY </font>SOLUTIONS</font></b></div></div></div><div><br></div><div><br><div><div><br></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div>
<br><div class="gmail_quote">On Thu, Apr 5, 2018 at 12:22 PM, Jayme <span dir="ltr"><<a href="mailto:jaymef@gmail.com" rel="noreferrer" target="_blank">jaymef@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Thanks for your feedback. Any other opinions on this proposed setup? I'm very torn over using GlusterFS and what the expected performance may be, there seems to be little information out there. Would love to hear any feedback specifically from ovirt users on hyperconverged configurations. </div><div class="m_2755559696524259706m_-157098227340179066m_1656318568188939082HOEnZb"><div class="m_2755559696524259706m_-157098227340179066m_1656318568188939082h5"><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Apr 5, 2018 at 2:56 AM, Alex K <span dir="ltr"><<a href="mailto:rightkicktech@gmail.com" rel="noreferrer" target="_blank">rightkicktech@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="auto">Hi,<div dir="auto"><br></div><div dir="auto">You should be ok with the setup.</div><div dir="auto">I am running around 20 vms (linux and windows, small and medium size) with the half of your specs. With 10G network replica 3 is ok. </div><div dir="auto"><br></div><div dir="auto">Alex</div></div><br><div class="gmail_quote"><div><div class="m_2755559696524259706m_-157098227340179066m_1656318568188939082m_-2395917300502361481h5"><div dir="ltr">On Wed, Apr 4, 2018, 16:13 Jayme <<a href="mailto:jaymef@gmail.com" rel="noreferrer" target="_blank">jaymef@gmail.com</a>> wrote:<br></div></div></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div class="m_2755559696524259706m_-157098227340179066m_1656318568188939082m_-2395917300502361481h5"><div dir="ltr">I'm spec'ing hardware for a 3-node oVirt build (on somewhat of a budget). I plan to do 20-30 Linux VMs most of them very light weight + a couple of heavier hitting web and DB servers with frequent rsync backups. Some have a lot of small files from large github repos etc. 
<div><br></div><div>3X of the following:</div><div><br></div><div><span style="color:rgb(35,31,32);font-family:Helvetica,Arial,sans-serif;font-size:14px;font-style:normal;font-variant-ligatures:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:left;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline">Dell PowerEdge R720</span><br style="color:rgb(35,31,32);font-family:Helvetica,Arial,sans-serif;font-size:14px;font-style:normal;font-variant-ligatures:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:left;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial"><span style="color:rgb(35,31,32);font-family:Helvetica,Arial,sans-serif;font-size:14px;font-style:normal;font-variant-ligatures:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:left;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline">2x 2.9 GHz 8 Core E5-2690 (SR0L0)</span><br style="color:rgb(35,31,32);font-family:Helvetica,Arial,sans-serif;font-size:14px;font-style:normal;font-variant-ligatures:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:left;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial"><span style="color:rgb(35,31,32);font-family:Helvetica,Arial,sans-serif;font-size:14px;font-style:normal;font-variant-ligatures:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:left;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline">256GB RAM</span><br style="color:rgb(35,31,32);font-family:Helvetica,Arial,sans-serif;font-size:14px;font-style:normal;font-variant-ligatures:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:left;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial"><span style="color:rgb(35,31,32);font-family:Helvetica,Arial,sans-serif;font-size:14px;font-style:normal;font-variant-ligatures:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:left;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline">PERC H710</span><br style="color:rgb(35,31,32);font-family:Helvetica,Arial,sans-serif;font-size:14px;font-style:normal;font-variant-ligatures:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:left;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial">2x10GB Nic</div><div><br></div><div>Boot/OS will likely be two cheaper small sata/ssd in raid 1. </div><div><br></div><div>Gluster bricks comprised of 4x2TB WD Gold 7200RPM SATA HDDs in RAID 10 per server. 
Using a replica 3 setup (and I'm thinking right now with no arbiter, for extra redundancy, although I'm not sure what the performance hit may be as a result). Will this allow for two host failures, or just one?

I've been really struggling with storage choices; it seems very difficult to predict the performance of GlusterFS due to the variance in hardware (everyone is using something different). I'm not sure if the performance will be adequate for my needs.

I will be using an already existing Netgear XS716T 10GbE switch for the Gluster storage network.

In addition, I plan to build another simple GlusterFS storage server that I can geo-replicate the Gluster volume to for DR purposes, and to use existing hardware to build an independent standby oVirt host that can start a few high-priority VMs from the geo-replicated GlusterFS volume if the primary oVirt cluster/GlusterFS volume ever fails.

I would love to hear any advice or critiques on this plan.

Thanks!
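To make the proposed setup concrete, here is roughly what the volume creation and geo-replication look like on the Gluster CLI; the hostnames, brick paths, and volume names below are made up, so check the docs for your Gluster version. On the failure question: as I understand it, replica 3 with default quorum settings keeps the volume writable with one host down but loses quorum with two down, so plan on surviving a single host failure.

    # replica 3 volume, one brick per host (no arbiter: three full copies)
    gluster volume create vmstore replica 3 \
        host1:/gluster/brick1/vmstore \
        host2:/gluster/brick1/vmstore \
        host3:/gluster/brick1/vmstore
    gluster volume start vmstore

    # geo-replication to a DR box, after passwordless SSH is set up and
    # an empty vmstore-dr volume exists on drhost
    gluster system:: execute gsec_create
    gluster volume geo-replication vmstore drhost::vmstore-dr create push-pem
    gluster volume geo-replication vmstore drhost::vmstore-dr start
    gluster volume geo-replication vmstore drhost::vmstore-dr status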
</div></div><br>______________________________<wbr>_________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org" rel="noreferrer" target="_blank">Users@ovirt.org</a><br>
<a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer noreferrer" target="_blank">http://lists.ovirt.org/mailman<wbr>/listinfo/users</a><br>
<br></blockquote></div><br></div>
<br>______________________________<wbr>_________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org">Users@ovirt.org</a><br>
<a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/<wbr>mailman/listinfo/users</a><br>
<br></blockquote></div><br></div>