[ovirt-users] Which hardware are you using for oVirt

Alex K rightkicktech at gmail.com
Sat Mar 24 19:08:14 UTC 2018


I have 2- and 3-node clusters with the following hardware (all with
self-hosted engine):

2-node cluster:
RAM: 64 GB per host
CPU: 8 cores per host
Storage: 4x 1TB SAS in RAID10
NIC: 2x Gbit
VMs: 20

The above setup, although I would have liked a third NIC for Gluster storage
redundancy, has been running smoothly for quite some time without performance
issues. The VMs it runs are not heavy on IO (mostly small Linux servers).

3-node clusters:
RAM: 32 GB per host
CPU: 16 cores per host
Storage: 5x 600GB in RAID5 (not ideal but I had to gain some storage space
without purchasing extra disks)
NIC: 6x Gbit
VMs: fewer than 10 large Windows VMs (Windows Server 2016 and Windows 10)

For your setup (30 VMs) I would rather go with RAID10 SAS disks and at least
a dual 10Gbit NIC dedicated to Gluster traffic only.
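
To give a rough idea of why I suggest 10Gbit: with a replica 3 Gluster volume
every write a VM makes is also sent to the other replicas over the storage
network, so the storage NIC has to absorb roughly the aggregate write rate of
all VMs times (replicas - 1). A quick back-of-envelope sketch in Python (the
per-VM write rate is just an assumed number, not a measurement; adjust it to
your own workload):

# Rough sketch only: estimates replication traffic on the Gluster network.
def gluster_net_load_mbps(vm_count, avg_write_mbps_per_vm, replica_count=3):
    """Approximate storage-network load (Mbit/s) for a replicated Gluster volume.

    Each local write is also shipped to (replica_count - 1) remote bricks.
    """
    return vm_count * avg_write_mbps_per_vm * (replica_count - 1)

# Assumed figures: 30 VMs averaging ~40 Mbit/s of writes each.
load = gluster_net_load_mbps(vm_count=30, avg_write_mbps_per_vm=40)
for nic_mbps, name in [(1000, "1 GbE"), (10000, "10 GbE")]:
    print(f"{name}: ~{100 * load / nic_mbps:.0f}% used by replication alone")

With numbers like these a single 1Gbit link is close to saturated by
replication traffic alone, before any reads or live migrations, which is why
I would keep a bonded 10Gbit pair just for Gluster.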

Alex


On Sat, Mar 24, 2018 at 1:24 PM, Andy Michielsen <andy.michielsen at gmail.com>
wrote:

> Hello Andrei,
>
> Thank you very much for sharing info on your hardware setup. Very
> informative.
>
> At the moment I have my oVirt engine on our VMware environment, which is
> fine for good backup and restore.
>
> I have 4 nodes running now, all different in make and model, with local
> storage. It works, but performance is a bit lacking.
>
> But I can get my hands on some old Dell R415s with 96 GB of RAM, 2
> quad-cores and 6 x 1 Gb NICs. They all come with 2 x 146 GB 15,000 rpm
> hard disks. That isn't bad, but I will add more RAM for starters. I would
> also like some good redundant storage for this, and the servers have
> limited space to add it.
>
> Hopefully others will also share their setups and experience like you did.
>
> Kind regards.
>
> On 24 Mar 2018, at 10:35, Andrei Verovski <andreil1 at starlett.lv> wrote:
>
> Hi,
>
> HP ProLiant DL380, dual Xeon
> 120 GB RAID L1 for system
> 2 TB RAID L10 for VM disks
> 5 VMs, 3 Linux, 2 Windows
> Total CPU load is low most of the time; most of the activity is
> disk-related.
> The engine runs as a KVM appliance on SuSE, so it can be easily moved,
> backed up, copied, experimented with, etc.
>
> You'll have to use servers with more RAM and storage than mine.
> More than one NIC is required if some of your VMs are on different subnets,
> e.g. one in the internal zone and a second in the DMZ.
> For your setup, 10 Gbit NICs + an L3 switch for ovirtmgmt.
>
> BTW, I would suggest having several separate hardware RAIDs unless you
> have SSDs; otherwise the I/O limit of the disk system will be a bottleneck.
> Consider an SSD RAID L1 for heavily loaded databases.
>
> *Please note many cheap SSDs do NOT work reliably with SAS controllers
> even in SATA mode*.
>
> For example, I was going to use 2 x WD Green SSDs configured as RAID L1 for
> the OS. It was possible to install the system, yet under heavy load
> simulated with iozone the disk subsystem froze, rendering the OS unbootable.
> The same crash occurred with a 512GB KingFast SSD connected to a
> Broadcom/AMCC SAS RAID card.
>
>
> On 03/24/2018 10:33 AM, Andy Michielsen wrote:
>
> Hi all,
>
> Not sure if this is the place to be asking this, but I was wondering which hardware you are all using, and why, so I can see what I would need.
>
> I would like to set up an HA cluster consisting of 3 hosts to be able to run 30 VMs.
> The engine I can run on another server. The hosts can be fitted with the storage and share the space through GlusterFS. I would think I will need at least 3 NICs, but would be able to install OVN. (Are 1 Gb NICs sufficient?)
>
> Any input you guys would like to share would be greatly appreciated.
>
> Thanks,
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>