[ovirt-users] Which hardware are you using for oVirt
Yaniv Kaul
ykaul at redhat.com
Fri Mar 30 20:42:40 UTC 2018
On Mon, Mar 26, 2018, 7:04 PM Christopher Cox <ccox at endlessnow.com> wrote:
> On 03/24/2018 03:33 AM, Andy Michielsen wrote:
> > Hi all,
> >
> > Not sure if this is the place to be asking this, but I was wondering
> > which hardware you all are using and why, in order for me to see what I
> > would be needing.
> >
> > I would like to set up an HA cluster consisting of 3 hosts to be able to
> > run 30 VMs.
> > The engine I can run on another server. The hosts can be fitted with
> > the storage and share the space through GlusterFS. I would think I will
> > need at least 3 NICs, but I would be able to install OVN. (Are 1 Gb
> > NICs sufficient?)
>
> Just because you asked, but not because this is helpful to you....
>
> But first, a comment on "3 hosts to be able to run 30 VMs". The SPM
> node shouldn't run a lot of VMs. There is a setting (the name slips
> my mind) on the engine to give it a "virtual set" of VMs in order to
> keep VMs off of it.
>
> With that said, CPU-wise, it doesn't take a lot to run 30 VMs. The
> costly thing is memory (in general). So while a cheap set of 3 machines
> might handle the CPU requirements of 30 VMs, those cheap machines might
> not be able to give you the memory you need (it depends). You might be
> fine. I mean, there are cheap desktop-like machines that do 64G (and
> sometimes more). Just something to keep in mind. Memory and storage
> will be the most costly items. It's simple math. Linux hosts, of
> course, don't necessarily need much memory (or storage). But Windows...
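>
> To make the "simple math" concrete, here's a rough sizing sketch in
> Python. The per-VM memory figures below are hypothetical placeholders,
> not measurements from any real setup; plug in your own numbers.
>
>     # Hypothetical sizing sketch: 30 VMs across 3 hosts with N+1
>     # headroom, so any one host can fail.
>     linux_vms, windows_vms = 20, 10      # assumed mix
>     linux_gb, windows_gb = 4, 8          # assumed per-VM memory (GB)
>     host_reserve_gb = 8                  # hypervisor overhead per host
>
>     total_vm_gb = linux_vms * linux_gb + windows_vms * windows_gb
>     hosts, spare = 3, 1                  # N+1: plan for one host down
>     per_host_gb = total_vm_gb / (hosts - spare) + host_reserve_gb
>     print(f"VM memory total: {total_vm_gb} GB")
>     print(f"Each host needs roughly {per_host_gb:.0f} GB")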
>
> 1 Gbit NICs are "ok", but again, it depends on storage. GlusterFS is no
> speed demon. But you might not need "fast" storage.
>
> Lastly, your setup is just for "fun", right? Otherwise, read on.
>
>
> Running oVirt 3.6 (this is a production setup)
>
> ovirt engine (manager):
> Dell PowerEdge 430, 32G
>
> ovirt cluster nodes:
> Dell m1000e 1.1 backplane Blade Enclosure
> 9 x M630 Blades (2xE5-2669v3, 384GB), 4 iSCSI paths, 4 bonded LAN, all
> 10GbE, CentOS 7.2
> 4 x MXL 10/40GbE (2x40Gbit LAN, 2x40Gbit iSCSI SAN to the S4810's)
>
> 120 VM's, CentOS 6, CentOS 7, Windows 10 Ent., Windows Server 2012
> We've run on as few as 3 nodes.
>
> Network, SAN and Storage (for ovirt Domains):
> 2 x S4810 (part is used for SAN, part for LAN)
> Equallogic dual controller (note: active/passive) PS6610S (84 x 4TB 7.2K
> SAS)
> Equallogic dual controller (note: active/passive) PS6610X (84 x 1TB 10K
> SAS)
>
> ISO and Export Domains are handled by:
> Dell PE R620, 32G, 2x10Gbit LAN, 2x10Gbit iSCSI to the SAN (above),
> CentOS 7.4, NFS
>
> What I like:
> * Easy setup.
> * Relatively good network and storage.
>
> What I don't like:
> * 2 "effective" networks, LAN and iSCSI. All networking uses the same
> effective path. Would be nice to have more physical isolation for mgmt
> vs motion vs VMs. QoS is provided in oVirt, but still, would be nice to
> have the full pathways.
> * Storage doesn't use active/active controllers, so controller failover
> is VERY slow.
> * We have a fast storage system and a somewhat slower storage system
> (a matter of IOPS); neither is SSD, so there isn't a huge difference. No
> real redundancy or flexibility.
> * vdsm can no longer respond fast enough for the number of disks defined
> (in the event of a new Storage Domain add). We have raised vdsTimeout,
> but have not tested yet.
>
We have substantially changed and improved VDSM for better scale since 3.6.
How many disks are defined, in how many storage domains and LUNs?
(The OS itself has also improved.)
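If it helps to pull those numbers, a minimal sketch with the Python SDK
(ovirtsdk4, which targets 4.x engines; a 3.6 setup would use the older SDK)
could look roughly like this -- the URL and credentials are placeholders:

    import ovirtsdk4 as sdk

    # Placeholder connection details -- adjust for your engine.
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',
        insecure=True,
    )
    sds_service = connection.system_service().storage_domains_service()
    for sd in sds_service.list():
        sd_service = sds_service.storage_domain_service(sd.id)
        disks = sd_service.disks_service().list()
        print(sd.name, len(disks), "disks")
    connection.close()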
> I inherited the "style" above. My recommendation of where to start for
> a reasonable production instance, minimum (assumes the S4810's above,
> not priced here):
>
> 1 x ovirt manager/engine, approx $1500
>
What about high availability for the engine?
> 4 x Dell R620, 2xE5-2660, 768G, 6x10GbE (LAN, Storage, Motion), approx $42K
> 3 x Nexsan 18P 108TB, approx $96K
>
Alternatively, how many reasonable SSDs can you buy? A Samsung 860 EVO 4TB
costs $1,300 on Amazon (US). You could buy tens (70+) of those and be left
with some change.
Can you instead use them in a fast storage setup?
https://www.backblaze.com/blog/open-source-data-storage-server/, for example,
is interesting.
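Rough arithmetic against the Nexsan figure above (list prices, not quotes):

    nexsan_budget = 96_000   # the "3 x Nexsan 18P" figure quoted above, USD
    ssd_price = 1_300        # Samsung 860 EVO 4TB, USD
    ssd_tb = 4

    drives = nexsan_budget // ssd_price
    print(drives, "drives,", drives * ssd_tb, "TB raw")  # ~73 drives, ~292 TB raw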
> While significantly cheaper (by 6 figures), it provides active/active
> controllers, storage reliability and flexibility and better network
> pathways. Why 4 x nodes? Need at least N+1 for reliability. The extra
> 4th node is merely capacity. Why 3 x storage? Need at least N+1 for
> reliability.
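>
> As a sanity check on the N+1 sizing (the per-VM average below is a
> hypothetical figure, not measured from our environment):
>
>     hosts, host_gb = 4, 768               # the R620 spec above
>     vms, avg_vm_gb = 120, 12              # assumed VM count and average (GB)
>
>     needed_gb = vms * avg_vm_gb
>     survivors_gb = (hosts - 1) * host_gb  # capacity with one host down
>     print("need", needed_gb, "GB; have", survivors_gb, "GB with one host down")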
>
Are they running in some cluster?
> Obviously, you'll still want to back things up and test the ability to
> restore components like the ovirt engine from scratch.
>
+1.
Y.
> Btw, my recommended minimum above is regardless of hypervisor cluster
> choice (could be VMware).