[ovirt-users] Which hardware are you using for oVirt

Christopher Cox ccox at endlessnow.com
Mon Mar 26 16:03:31 UTC 2018


On 03/24/2018 03:33 AM, Andy Michielsen wrote:
> Hi all,
> 
> Not sure if this is the right place to ask, but I was wondering which 
> hardware you are all using, and why, so I can see what I would need.
> 
> I would like to set up an HA cluster consisting of 3 hosts to be able 
> to run 30 VMs.
> The engine I can run on another server.  The hosts can be fitted with 
> the storage and share the space through GlusterFS.  I think I will 
> need at least 3 NICs, but would be able to install OVN.  (Are 1Gb 
> NICs sufficient?)

Just because you asked, but not because this is helpful to you....

But first, a comment on "3 hosts to be able to run 30 VMs".  The SPM 
node shouldn't run a lot of VMs.  There is a setting on the engine (the 
exact name slips my mind) that gives it a "virtual set" of VMs in order 
to keep real VMs off of it.
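
If it helps, here is a rough, untested sketch using the oVirt Python 
SDK (ovirtsdk4) to see which host currently holds SPM and what its SPM 
priority is; the engine URL, credentials and CA path are placeholders 
for your own setup:

import ovirtsdk4 as sdk

# Placeholder engine URL/credentials -- substitute your own.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)
try:
    # Print each host's SPM status and priority so you know where SPM lives.
    for host in connection.system_service().hosts_service().list():
        spm = host.spm  # Spm object with status and priority (may be unset)
        if spm is not None:
            print(host.name, spm.status, spm.priority)
finally:
    connection.close()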

With that said, CPU-wise it doesn't take a lot to run 30 VMs.  The 
costly thing, in general, is memory.  So while a cheap set of 3 machines 
might handle the CPU requirements of 30 VMs, those cheap machines might 
not be able to give you the memory you need (it depends).  You might be 
fine.  I mean, there are cheap desktop-like machines that do 64G (and 
sometimes more).  Just something to keep in mind.  Memory and storage 
will be the most costly items.  It's simple math.  Linux guests, of 
course, don't necessarily need much memory (or storage).  But Windows...
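
For the "simple math", a quick back-of-the-envelope in Python; the 
per-VM figures below are made-up assumptions, not recommendations:

# Back-of-the-envelope memory sizing for the 3-host / 30-VM example.
# The per-guest numbers are illustrative assumptions only.
linux_vms, win_vms = 20, 10          # assumed guest mix
gib_per_linux, gib_per_win = 4, 8    # assumed RAM per guest (GiB)
host_overhead_gib = 8                # hypervisor + services per host

vm_ram = linux_vms * gib_per_linux + win_vms * gib_per_win   # 160 GiB
hosts = 3
# Size so everything still fits with one host down (N+1).
per_host = vm_ram / (hosts - 1) + host_overhead_gib          # 88 GiB
print(f"Guest RAM total: {vm_ram} GiB; per host with N+1: {per_host:.0f} GiB")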

1Gbit NICs are "ok", but again, it depends on storage.  GlusterFS is no 
speed demon.  But you might not need "fast" storage.
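
Very roughly, with a replica-3 Gluster volume written through a FUSE 
client, a single 1Gbit link gets split across the replicas on writes. 
The numbers below are a crude estimate under that assumption, not a 
benchmark:

# Crude throughput estimate for 1 GbE with a replica-3 Gluster volume.
# Assumes a FUSE client writing to all three bricks over one link; real
# results depend on workload, sharding and tuning.
link_gbit = 1.0
usable_mb_s = link_gbit * 1e9 / 8 * 0.9 / 1e6   # ~112 MB/s after ~10% overhead
replica = 3
write_mb_s = usable_mb_s / replica              # ~38 MB/s write ceiling
print(f"~{usable_mb_s:.0f} MB/s reads, ~{write_mb_s:.0f} MB/s writes (rough)")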

Lastly, your setup is just for "fun", right?  Otherwise, read on.


Running oVirt 3.6 (this is a production setup)

ovirt engine (manager):
Dell PowerEdge 430, 32G

ovirt cluster nodes:
Dell m1000e 1.1 backplane Blade Enclosure
9 x M630 Blades (2xE5-2669v3, 384GB), 4 iSCSI paths, 4 bonded LAN, all 
10GbE, CentOS 7.2
4 x MXL 10/40GbE (2x40Gbit LAN, 2x40Gbit iSCSI SAN to the S4810's)

120 VMs: CentOS 6, CentOS 7, Windows 10 Ent., Windows Server 2012
We've run on as few as 3 nodes.

Network, SAN and Storage (for ovirt Domains):
2 x S4810 (part is used for SAN, part for LAN)
EqualLogic dual controller (note: active/passive) PS6610S (84 x 4TB 
7.2K SAS)
EqualLogic dual controller (note: active/passive) PS6610X (84 x 1TB 
10K SAS)

ISO and Export Domains are handled by:
Dell PE R620, 32G, 2x10Gbit LAN, 2x10Gbit iSCSI to the SAN (above), 
CentOS 7.4, NFS

What I like:
* Easy setup.
* Relatively good network and storage.

What I don't like:
* 2 "effective" networks, LAN and iSCSI.  All networking uses the same 
effective path.  Would be nice to have more physical isolation for mgmt 
vs motion vs VMs.  QoS is provided in oVirt, but still, would be nice to 
have the full pathways.
* Storage doesn't use active/active controllers, so controller failover 
is VERY slow.
* We have a fast storage system and a somewhat slower storage system (a 
matter of IOPS); neither is SSD, so there isn't a huge difference.  No 
real redundancy or flexibility.
* vdsm can no longer respond fast enough for the number of disks defined 
(in the event of a new Storage Domain add).  We have raised vdsTimeout 
but have not tested it yet.  (A sketch for eyeballing disk counts per 
domain follows this list.)
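
And the sketch mentioned above -- untested, with the same placeholder 
connection details as before -- to count disks per storage domain via 
ovirtsdk4, just to get a feel for the scale vdsm has to enumerate:

import ovirtsdk4 as sdk

# Placeholder engine URL/credentials -- substitute your own.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)
try:
    sds_service = connection.system_service().storage_domains_service()
    for sd in sds_service.list():
        # List the disks attached to each storage domain and count them.
        disks = sds_service.storage_domain_service(sd.id).disks_service().list()
        print(sd.name, len(disks))
finally:
    connection.close()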

I inherited the "style" above.  My recommendation of where to start for 
a reasonable minimum production instance (assumes the S4810s above, not 
priced here):

1 x ovirt manager/engine, approx $1500
4 x Dell R620, 2xE5-2660, 768G, 6x10GbE (LAN, Storage, Motion), approx $42K
3 x Nexsan 18P 108TB, approx $96K

While significantly cheaper (by six figures), it provides active/active 
controllers, storage reliability and flexibility, and better network 
pathways.  Why 4 nodes?  You need at least N+1 for reliability; the 
extra 4th node is merely capacity.  Why 3 storage units?  Again, you 
need at least N+1 for reliability.

Obviously, you'll still want to back things up and test the ability to 
restore components like the ovirt engine from scratch.

Btw, my recommended minimum above applies regardless of hypervisor 
cluster choice (it could be VMware).

