[ovirt-users] Which hardware are you using for oVirt

Richard Neuboeck hawk at tbi.univie.ac.at
Mon Mar 26 07:42:57 UTC 2018


Hi Andy,

we have 3 hosts for virtualization, each with 40 cores, 512 GB RAM, RAID 1
for the system, 4 bonded (onboard) 1 Gbit NICs for client access (to the
VMs) and a 10 Gbit NIC for the storage network.
The storage is built from 3 hosts, each with a 10 Gbit NIC, RAID 6 (5 TB
HDDs and SSDs for caching) and gluster in replica 3 mode.
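
In case it helps with the gluster question further down: creating such a
replica 3 volume by hand boils down to something like the sketch below.
The host names and brick paths are made up, and oVirt 4.2 can also set
this up for you through its hyperconverged wizard, so take it as an
illustration rather than our exact procedure:

    # Sketch: create a replica 3 gluster volume usable as an oVirt data
    # domain. Host names and brick path are assumptions; run the
    # equivalent on one of the gluster nodes.
    import subprocess

    hosts = ["gluster1", "gluster2", "gluster3"]   # assumed host names
    brick = "/gluster/brick1/data"                 # assumed brick path

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Join the peers into one trusted pool (run from the first node).
    for h in hosts[1:]:
        run(["gluster", "peer", "probe", h])

    # One brick per host, replicated three ways.
    bricks = [f"{h}:{brick}" for h in hosts]
    run(["gluster", "volume", "create", "data", "replica", "3", *bricks])

    # The 'virt' option group applies the settings intended for VM images.
    run(["gluster", "volume", "set", "data", "group", "virt"])
    run(["gluster", "volume", "start", "data"])

Once started, the volume can be attached in the Administration Portal as a
new storage domain of type GlusterFS.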

Cheers
Richard

On 25.03.18 09:36, Andy Michielsen wrote:
> Hello Alex,
> 
> Thanks for sharing. Much appreciated.
> 
> I believe my setup would need 96 GB of RAM in each host, and at least
> 3 TB of storage. Probably 4 TB would be better if I want to work with
> snapshots. (I will be running mostly Windows 2016 servers or Windows 10
> desktops with 6 GB of RAM and 100 GB of disk each.)
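> 
> As a rough back-of-the-envelope check of those numbers (the VM count and
> per-VM sizes are the ones from this thread; the headroom factors are just
> assumptions for illustration):
> 
>     # Rough capacity estimate; overcommit/headroom factors are assumptions.
>     vm_count = 30          # VMs planned
>     ram_per_vm_gb = 6      # GB of RAM per VM
>     disk_per_vm_gb = 100   # GB of disk per VM
>     hosts = 3              # hosts in the HA cluster
> 
>     total_vm_ram = vm_count * ram_per_vm_gb              # 180 GB
>     # Size RAM so everything still fits on n-1 hosts (one host down),
>     # plus ~20% assumed for hypervisor/engine/gluster overhead.
>     ram_per_host = total_vm_ram / (hosts - 1) * 1.2      # ~108 GB
> 
>     total_vm_disk = vm_count * disk_per_vm_gb            # 3000 GB
>     # ~30% assumed headroom for snapshots, before gluster replication.
>     usable_storage = total_vm_disk * 1.3                 # ~3.9 TB
> 
>     print(f"RAM per host: ~{ram_per_host:.0f} GB")
>     print(f"Usable storage needed: ~{usable_storage / 1000:.1f} TB")
> 
> which lands in the same ballpark as the 96 GB per host and 3-4 TB above.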
> 
> I agree that a 10 Gbit network for storage would be very beneficial.
> 
> Now, if I can figure out how to set up GlusterFS on a 3-node cluster in
> oVirt 4.2 just for the data storage, I'm golden to get started. :-)
> 
> Kind regards.
> 
> On 24 Mar 2018, at 20:08, Alex K <rightkicktech at gmail.com> wrote:
> 
>> I have 2- and 3-node clusters with the following hardware (all with
>> self-hosted engine):
>>
>> 2 node cluster:
>> RAM: 64 GB per host
>> CPU: 8 cores per host
>> Storage: 4x 1TB SAS in RAID10
>> NIC: 2x Gbit
>> VMs: 20
>>
>> The above has been running smoothly for quite some time and without
>> performance issues, although I would have liked a third NIC for gluster
>> storage redundancy.
>> The VMs it runs are not I/O-heavy (mostly small Linux servers).
>>
>> 3 node clusters:
>> RAM: 32 GB per host
>> CPU: 16 cores per host
>> Storage: 5x 600GB in RAID5 (not ideal but I had to gain some storage
>> space without purchasing extra disks)
>> NIC: 6x Gbit
>> VMs: fewer than 10 large Windows VMs (Windows Server 2016 and Windows 10)
>>
>> For your setup (30 VMs) I would rather go with RAID10 SAS disks and at
>> least a dual 10Gbit NIC dedicated to the gluster traffic only.
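>> 
>> To put a number on the 10Gbit recommendation: with a replica 3 volume the
>> gluster client writes every block to all of the replicas, so the write
>> ceiling is roughly the storage NIC speed divided by the replica count.
>> A quick sketch (the NIC speeds are the only real inputs; the efficiency
>> factor is an assumption, and a hyperconverged host does a little better
>> because one replica is local):
>> 
>>     # Rough write-throughput ceiling for a gluster replica volume:
>>     # the client sends each write to every replica over the storage NIC.
>>     def write_ceiling_mb_s(nic_gbit, replicas=3, efficiency=0.9):
>>         nic_mb_s = nic_gbit * 1000 / 8   # Gbit/s -> MB/s
>>         return nic_mb_s * efficiency / replicas
>> 
>>     for nic in (1, 10):
>>         print(f"{nic} Gbit NIC: ~{write_ceiling_mb_s(nic):.0f} MB/s of writes")
>> 
>> which gives roughly 38 MB/s over a single 1 Gbit link versus ~375 MB/s
>> over 10 Gbit -- with 30 VMs the former disappears very quickly.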
>>
>> Alex
>>
>>
>> On Sat, Mar 24, 2018 at 1:24 PM, Andy Michielsen
>> <andy.michielsen at gmail.com> wrote:
>>
>>     Hello Andrei,
>>
>>     Thank you very much for sharing info on your hardware setup. Very
>>     informative.
>>
>>     At this moment I have my oVirt engine on our VMware environment,
>>     which is fine for good backup and restore.
>>
>>     I have 4 nodes running now, all different in make and model, with
>>     local storage. It works, but lacks performance a bit.
>>
>>     But I can get my hands on some old Dell R415s with 96 GB of RAM,
>>     2 quad-core CPUs and 6 x 1 Gbit NICs. They all come with 2 x 146 GB
>>     15,000 rpm hard disks. That isn't bad, but I will add more RAM for
>>     starters. I would also like some good redundant storage for this,
>>     and the servers have limited space to add it.
>>
>>     Hopefully others will also share their setups and experience like
>>     you did.
>>
>>     Kind regards.
>>
>>     On 24 Mar 2018, at 10:35, Andrei Verovski <andreil1 at starlett.lv>
>>     wrote:
>>
>>>     Hi,
>>>
>>>     HP ProLiant DL380, dual Xeon
>>>     120 GB RAID 1 for the system
>>>     2 TB RAID 10 for VM disks
>>>     5 VMs: 3 Linux, 2 Windows
>>>     Total CPU load is low most of the time; the high level of activity
>>>     is disk-related.
>>>     The engine runs as a KVM appliance on SuSE, so it can easily be
>>>     moved, backed up, copied, experimented with, etc.
>>>
>>>     You'll have to use servers with more RAM and storage than mine.
>>>     More than one NIC is required if some of your VMs are on different
>>>     subnets, e.g. one in an internal zone and a second on a DMZ.
>>>     For your setup: 10 Gbit NICs + an L3 switch for ovirtmgmt.
>>>
>>>     BTW, I would suggest having several separate hardware RAIDs unless
>>>     you have SSDs; otherwise the I/O limit of the disk subsystem will be
>>>     a bottleneck. Consider an SSD RAID 1 for heavily loaded databases.
>>>
>>>     *Please note that many cheap SSDs do NOT work reliably with SAS
>>>     controllers, even in SATA mode.*
>>>
>>>     For example, I was going to use 2 x WD Green SSDs configured as
>>>     RAID 1 for the OS.
>>>     It was possible to install the system, yet under heavy load
>>>     simulated with iozone the disk subsystem froze, rendering the OS
>>>     unbootable.
>>>     The same crash was experienced with a 512 GB KingFast SSD connected
>>>     to a Broadcom/AMCC SAS RAID card.
>>>
>>>
>>>     On 03/24/2018 10:33 AM, Andy Michielsen wrote:
>>>>     Hi all,
>>>>
>>>>     Not sure if this is the place to be asking this, but I was wondering which hardware you are all using, and why, so I can see what I would need.
>>>>
>>>>     I would like to set up an HA cluster consisting of 3 hosts, able to run 30 VMs.
>>>>     The engine I can run on another server. The hosts can be fitted with the storage and share the space through GlusterFS. I think I will need at least 3 NICs, but would be able to install OVN. (Are 1 Gbit NICs sufficient?)
>>>>
>>>>     Any input you guys would like to share would be greatly appreciated.
>>>>
>>>>     Thanks,
