<div dir="ltr"><div class="gmail_default" style="font-family:monospace,monospace">Andy, I'm using a 2-node cluster:<br></div><div class="gmail_default" style="font-family:monospace,monospace">- 2x Supermicro 6017 (2x Intel 2420, 12C/24T per node), 384 GB RAM total, 10 GbE, hosted engine via NFS<br></div><div class="gmail_default" style="font-family:monospace,monospace"><br>Storage side: 2x SC836BE16-R1K28B (192 GB ARC cache) with ZFS RAID 10 + Intel SLOG, serving iSCSI over 10 GbE<br></div><div class="gmail_default" style="font-family:monospace,monospace">80 VMs, more or less.<br></div><div class="gmail_default" style="font-family:monospace,monospace"><br>Regards,<br></div><div class="gmail_default" style="font-family:monospace,monospace"><br></div><div class="gmail_default" style="font-family:monospace,monospace"><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">2018-03-25 4:36 GMT-03:00 Andy Michielsen <span dir="ltr"><<a href="mailto:andy.michielsen@gmail.com" target="_blank">andy.michielsen@gmail.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="auto"><div></div><div>Hello Alex,</div><div><br></div><div>Thanks for sharing. Much appreciated.</div><div><br></div><div>I believe my setup would need 96 GB of RAM in each host, and at least 3 TB of storage. 4 TB would probably be better if I want to work with snapshots. (I will be running mostly Windows 2016 servers or Windows 10 desktops with 6 GB of RAM and 100 GB of disk each.)</div><div><br></div><div>I agree that a 10 Gb network for storage would be very beneficial.</div><div><br></div><div>Now if I can figure out how to set up GlusterFS on a 3-node cluster in oVirt 4.2 just for the data storage, I'm golden to get started. 
:-)</div><div><br></div><div>Kind regards.</div><div><div class="h5"><div><br>On 24 Mar 2018, at 20:08, Alex K <<a href="mailto:rightkicktech@gmail.com" target="_blank">rightkicktech@gmail.com</a>> wrote:<br><br></div><blockquote type="cite"><div><div dir="ltr"><div><div>I have 2 or 3 node clusters with the following hardware (all with self-hosted engine): <br><br></div>2-node cluster: <br>RAM: 64 GB per host<br></div><div>CPU: 8 cores per host<br></div><div>Storage: 4x 1TB SAS in RAID10<br></div><div>NIC: 2x Gbit<br></div><div>VMs: 20<br><br></div><div>The above, although I would have liked a third NIC for gluster storage redundancy, has been running smoothly for quite some time without performance issues. <br></div><div>The VMs it runs are not heavy on IO (mostly small Linux servers). <br><br></div><div>3-node clusters: <br></div><div>RAM: 32 GB per host<br></div><div>CPU: 16 cores per host<br></div><div>Storage: 5x 600GB in RAID5 (not ideal, but I had to gain some storage space without purchasing extra disks)<br></div><div>NIC: 6x Gbit<br></div><div>VMs: fewer than 10 large Windows VMs (Windows 2016 server and Windows 10)<br><br></div><div>For your setup (30 VMs) I would rather go with RAID10 SAS disks and at least a dual 10Gbit NIC dedicated to the gluster traffic only. <br><br></div><div>Alex<br></div><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Mar 24, 2018 at 1:24 PM, Andy Michielsen <span dir="ltr"><<a href="mailto:andy.michielsen@gmail.com" target="_blank">andy.michielsen@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="auto"><div></div><div>Hello Andrei,</div><div><br></div><div>Thank you very much for sharing info on your hardware setup. 
Very informative.</div><div><br></div><div>At this moment I have my oVirt engine on our VMware environment, which is fine for good backup and restore.</div><div><br></div><div>I have 4 nodes running now, all different in make and model, with local storage; it works but lacks performance a bit.</div><div><br></div><div>But I can get my hands on some old Dell R415s with 96 GB of RAM, 2 quad-cores, and 6x 1 Gb NICs. They all come with 2x 146 GB 15,000 rpm hard disks. This isn't bad, but I will add more RAM for starters. I would also like some good redundant storage for this, and the servers have limited space to add it.</div><div><br></div><div>Hopefully others will also share their setups and experience like you did.</div><div><br></div><div>Kind regards.</div><div><div class="m_2098279194425126461h5"><div><br>On 24 Mar 2018, at 10:35, Andrei Verovski <<a href="mailto:andreil1@starlett.lv" target="_blank">andreil1@starlett.lv</a>> wrote:<br><br></div><blockquote type="cite"><div>
<div class="m_2098279194425126461m_2490472978721696341moz-cite-prefix">Hi,<br>
<br>
HP ProLiant DL380, dual Xeon<br>
120 GB RAID L1 for system<br>
2 TB RAID L10 for VM disks<br>
5 VMs, 3 Linux, 2 Windows<br>
Total CPU load is low most of the time; most of the activity is
disk-related.<br>
The hosted engine runs as a KVM appliance on SuSE, so it can easily
be moved, backed up, copied, experimented with, etc.<br>
<br>
You'll have to use servers with more RAM and storage than mine.<br>
More than one NIC is required if some of your VMs are on different
subnets, e.g. one in the internal zone and a second on the DMZ.<br>
For your setup: 10 Gb NICs + an L3 switch for ovirtmgmt.<br>
<br>
BTW, I would suggest several separate hardware RAIDs unless you
have SSDs; otherwise the I/O limit of the disk subsystem will be a
bottleneck. Consider an SSD RAID 1 for heavily loaded databases.<br>
<br>
<font color="#990000"><b>Please note that many cheap SSDs do NOT
work reliably with SAS controllers, even in SATA mode</b>.</font><br>
<br>
For example, I intended to use 2x WD Green SSDs configured as RAID
1 for the OS. <br>
It was possible to install the system, yet under heavy load simulated
with iozone the disk subsystem froze, rendering the OS unbootable.<br>
The same crash occurred with a 512 GB KingFast SSD connected to a
Broadcom/AMCC SAS RAID card.<br>
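(The heavy-load simulation can be reproduced with an iozone run along
these lines; the mount point and sizes here are illustrative, not the
exact invocation used.)<br>

```shell
# Sketch of an iozone stress run against an SSD RAID 1 mount; the
# mount point and sizes are placeholders.  -a runs the full automatic
# test matrix, -s/-r set file and record size, and -I uses O_DIRECT so
# the disks themselves (not the page cache) absorb the load.
if command -v iozone >/dev/null 2>&1 && [ -d /mnt/ssd-raid1 ]; then
    iozone -a -I -s 8G -r 1M -f /mnt/ssd-raid1/iozone.tmp
else
    echo "iozone or test mount not available; skipping stress test"
fi
```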
<br>
<br>
On 03/24/2018 10:33 AM, Andy Michielsen wrote:<br>
</div>
<blockquote type="cite">
<pre>Hi all,
Not sure if this is the place to be asking this, but I was wondering which hardware you are all using and why, so I can see what I would need.
I would like to set up an HA cluster consisting of 3 hosts, able to run 30 VMs.
The engine I can run on another server. The hosts can be fitted with the storage and share the space through GlusterFS. I think I will need at least 3 NICs, but I would be able to install OVN. (Are 1 Gb NICs sufficient?)
Any input you guys would like to share would be greatly appreciated.
Thanks,
______________________________<wbr>_________________
Users mailing list
<a class="m_2098279194425126461m_2490472978721696341moz-txt-link-abbreviated" href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a>
<a class="m_2098279194425126461m_2490472978721696341moz-txt-link-freetext" href="http://lists.ovirt.org/mailman/listinfo/users" target="_blank">http://lists.ovirt.org/mailman<wbr>/listinfo/users</a>
</pre>
</blockquote>
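<p>For the GlusterFS part of the question above, the storage side boils
down to a replica-3 volume that oVirt consumes as a data domain. A
minimal sketch, assuming three hosts with a brick directory already
prepared on each (hostnames and paths are hypothetical; the oVirt 4.2
hosted-engine wizard in Cockpit can also drive this setup):</p>

```shell
# Hypothetical replica-3 gluster volume for an oVirt data domain.
# Run from host1; host names and brick paths are placeholders.
gluster peer probe host2.example.com
gluster peer probe host3.example.com
gluster volume create data replica 3 \
    host1.example.com:/gluster/data/brick \
    host2.example.com:/gluster/data/brick \
    host3.example.com:/gluster/data/brick
gluster volume set data group virt    # apply the virt tuning profile
gluster volume start data
# Then attach it in oVirt as a GlusterFS data storage domain,
# e.g. with path host1.example.com:/data
```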
<p><br>
</p>
</div></blockquote></div></div></div><br>
<br></blockquote></div><br></div>
</div></blockquote></div></div></div><br>
<br></blockquote></div><br></div>