Very nice!
Great to see the distribution of OSes;
it can give us a hint on where to focus testing/CI/etc...
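For example, here's a purely hypothetical back-of-the-envelope sketch (not an
existing oVirt CI config) of turning the engine-OS counts from the poll below
into a weighted CI job matrix; the ci_budget slot count is made up:

    # Hypothetical sketch -- not an existing oVirt CI config.
    # Weight nightly CI job slots by the engine-OS shares from the poll.
    engine_responses = {"Fedora 20": 8, "CentOS 6": 52, "CentOS 7": 22, "Other": 3}
    ci_budget = 100  # made-up number of job slots to distribute

    total = sum(engine_responses.values())
    for distro, count in sorted(engine_responses.items(), key=lambda kv: -kv[1]):
        slots = int(round(ci_budget * count / float(total)))
        print("%-10s %2d responses -> %3d job slots" % (distro, count, slots))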
e.
----- Original Message -----
> From: "Sandro Bonazzola" <sbonazzo(a)redhat.com>
> To: devel(a)ovirt.org, Users(a)ovirt.org
> Sent: Tuesday, March 3, 2015 1:56:30 PM
> Subject: [ovirt-users] Where do you run oVirt? Here are the answers!
>
> Hi,
> This is a summary of the 85 responses we got to last month's poll. Thanks to
> everyone who answered!
>
> Which distribution are you using for running ovirt-engine?
> Fedora 20   8  (9%)
> CentOS 6   52  (61%)
> CentOS 7   22  (26%)
> Other       3  (4%)
>
> Which distribution are you using for your nodes?
> Fedora 20    6  (7%)
> CentOS 6    40  (47%)
> CentOS 7    31  (36%)
> oVirt Node   6  (7%)
> Other        2  (2%)
>
> In Other: RHEL 6.6, 7.0, 7.1 and a mixed environment of CentOS 6 and 7.
>
> Do you use Hosted Engine?
> Yes 42 (49%)
> No  42 (49%)
>
>
> Would you like to share more info on your datacenter, VMs, ...? Tell us about
> it
> -------------------------------------------------------------------------------
>
> oVirt is so AWESOME! I luv it.
> --
> "We currently run engine on CentOS 6 as CentOS 7 was not yet supported. We
> plan on migrating it to a CentOS 7 machine.
> Our nodes are currently CentOS 6, but we are planning to migrate to CentOS 7.
> (For the nodes, a checkbox for each distribution would be better than the
> radio button, as you can have multiple clusters with different
> distributions)."
>
> --
> FC storage (Dell MD3620f and IBM Blade-S internal SAS storage)
>
> --
> Please provide Ceph support and built-in backup tools
>
> --
> "3 separate virtual RHEV datacenters, test, dev, prod.
> Use direct attach fibre channel luns for application storage heavily on VMs
> to take advantage of snapshot/restore features on our array.
> Hosted Engine in test and dev. Physical manager in prod. "
>
> --
> "2 nodes running centos 6.6 from SSD (i3 35w 32GB)
> 1x NFS datastore
> 1x ISO datastore
> 1 node running NFS (i3 35w 8GB, Dell PERC sas controller)
> All 3 nodes connected via 10Gb ethernet
> Between 9 en 15 VM's depending on my tests
> Always active
> - Zimbra
> - ldap/dhcp
> - web/php
> - devel/php
> - pfsense
> Develop/test
> - Hadoop
> - Openstack
> - Gluster
> - ms..."
>
> --
> "- 4 nodes for KVM, 4 nodes for GlusterFS.
> - 1 Gigabit for management and 4 Gigabit channel bonding for the GlusterFS replica.
> - Master Storage Domain lives on a replica-3 GlusterFS volume."
>
> --
> "30 hvs NFS storage over infiniband, custom portal for task automation and
> classroom abstraction via API"
>
> --
> "We use atm local storage.
> Current running vm count is over 100.
> I'd like to use EL7 platform in the future, but I'm uncertain how to best
> upgrade everything with a minimal downtime.
> We currently run ovirt-engine 3.3.3.
> We will stick with the EL platform and not switch to Fedora-based, because
> we need the improved stability.
> We also do not upgrade to dot-zero releases, as these introduced
> some breakage in the past (regressions).
> I hope this gets better with future releases.
> Keep up the good work!
> Sven"
>
> --
> "Storage GlusterFS (Virt+Gluster on Nodes), and FreeNAS via NFS"
>
> --
> "- iSCSI dedicated network
> - 2x Poweredge M1000e chassis (so, 2 x 16 blades)"
>
> --
> Yes, it's NIELIT, a government agency providing various training on virtual
> environments
>
> --
> "Running production engine on CentOS6 with CentOS6 nodes.
> Test/Staging environemtn based on CentOS7 and CentOS7 nodes, Hosted-engine on
> iSCSI."
>
> --
> "Mix of Dell, HP, UCS for compute
> Netapp for NAS, VNX for FC"
>
> --
> "Cloud used for CI purpose, made from about ""old"" 50
desktop PCs (and still
> growing) with Celerons, i3, i5 and few i7. VMs are ""light""
nodes for
> Jenkins (2GB-6GB/2-4cores). Some resources are utilized for cloud's services
> like vpn, zabbix, httpd, etc. As storage we use MooseFS!"
>
> --
> "This is a sample config for the few installes I have performed, but ideal
> for a small office.
> 2x nodes - CentOS 6 with SSD boot and 2x 2TB drives and 2 gluster volumes
> spread over the 2 - 1 for vm storage and 1 for file storage
> 1x engine (planning on changing to hosted)
> 5x vms - 2x DNS/DHCP/Management, 1x webserver for intranet, 1x mailserver and
> 1x Asterisk PBX
> "
>
> --
> "I think that we really need more troubleshooting tools and guides more than
> anything. There are various logs, but there is no reason why we
> shouldn't be publishing some of this information to the engine UI and even
> automating certain self-healing.
> The absolute most important feature in my mind is getting the ability to auto
> start (restart) VMs after certain failures and attempting to unlock
> disks, etc. VMware does a tremendous amount of that in order to provide
> better HA. We need this."
>
> --
> "Have FC only. Using SVC. Behind it now DS4700. Going to have other storages
> too.
> This is BYOD.
> "
>
> --
> "One node cluster with local storage for education, POC etc. at home."
>
> --
> No
>
> --
> Combined GlusterFS storage and VM hosting nodes. Will be migrating the engine
> to CentOS 7 at some point. Wish libgfapi was properly supported now that it's
> feasible.
>
> --
> 3 x Supermicro A1SAi-2750F nodes (16 GiB RAM + 8 TiB storage + 8x 1 Gbit/s
> Ethernet each) with hyperconverged GlusterFS (doubling as an NFS/CIFS
> storage
> cluster)
>
> --
> "running 15 vms, for univ apps (LMS/SIS/etc), using dual freeIPA (vms),
> glusterfs (replicated-distributed-10G net)"
>
> --
> "Tried to install hosted engine on Centos 7, but ran into issues and went for
> Fedora20 instead. Fedora20 installation was pretty much problem free.
> Using NFS on Synology NAS for vm disks."
>
> --
> "3 clusters, 1 gluster storage cluster, 50 TB total disk space, with 4 hosts
> all volumes replica 3
> 2 virtualisation clusters, SandyBridge with 5 hosts, Nehelem with 2 hosts
> running about 70 mostly Linux clients, hosted engine runs on 3 of the
> SandyBridge nodes."
>
> --
> I have used oVirt since version 3.0. The evolution of this project is
> outstanding.
> The biggest unsolved problem is that there is no good backup solution for
> oVirt.
>
> --
> Modest server with all-in-one oVirt installation.
>
> --
> "oVirt is for Testing actually, I should say validate. We use Xen Open Source
> and plan to migrate to oVirt during this year."
>
> --
> "4 nodes, 30 VM's (10 are HA), ISO and export domain on NAS, local drives
> shared over NFS between nodes, one FC primary storage domain 1TB. Engine is
> a KVM VM on a CentOS host."
>
> --
> "5 hosts, FC"
>
> --
> Development system for working with other Open Source projects.
>
> --
> "1. DATA_DOMAIN - HP P2000 G3 FC
> 2. Nodes - Intel server boards SB2400BB
> 3. VMs - on CentOS 6.6"
>
> --
> We are developing a VDI solution based on oVirt/KVM/SPICE.
>
> --
> So I answered for my largest oVirt cluster. I actually have several.
>
> --
> "Create multiple VM using ovirt in order to provide platform as a service in
> my campus environment.
> All the VM(s) created will be used to run web based application for the
> purpose of final year project presentation.
> I did this only on 1 physical server as we currently have the limitation on
> the hardware part."
>
> --
> "small lab environment, ~10 nodes, ~20-30 VMs."
>
> --
> oVirt 3.4.4 with 16 VDSM nodes and 170 virtual machines. We currently use
> direct-attached disks from our iSCSI SAN, and we use the snapshot and
> replication features of the SAN (Dell EqualLogic).
>
> --
> "Lab @Home
> Supermicro mainboard with 1x quad-core Xeon CPU (Sandy Bridge)
> 32 GB RAM
> Synology NAS with storage for oVirt through iSCSI"
>
> --
> "IBM Blade Center.
> 1 Engine
> 2 Ovirt Nodes
> 1 NAS for NFS"
>
> --
> "We started looking at OVIRT a while ago and it had come a long way. My only
> objection to migrating it into our production is the issues we have with
> the network interfaces (bonded VLAN tags on the mgmt) and their removal on
> boot/reboot. Other than that we have fully tested multi-cluster glusterized
> environments successfully.
> Again outside of the networks the only other feature I would suggest is
> gluster storage for the hosted engine. "
>
> --
> Storage is ZFS shared over NFS via dual 10 Gbit links. Running 15 nodes now.
>
> --
> We will upgrade to CentOS 7 soon
>
> --
> "2 FC datacenters, 4 VNX SAN, 2 4To Lun, 25 hosts on 2 physical sites, 180
> centos server vms. We used to recycle all our old servers (1950, 2950, G6
> etc..) to get additional vCPU and RAM, but it has no more interest since we
> got 4 new r630 chassis with 128 Go of RAM and 40 vCPUs per server. So the
> goal is to reduce the number of hosts when keeping the same capacities."
>
> --
> "2 production setups in different DC, beginning their history from ovirt-3.2,
> thus centos6 and no hosted-engine.
> dell m1000e bladesystem + fc storage in first one; hp dl580 g7 + fc storage
> in second. ~150 vms in each"
>
> --
> "Just started using Ovirt, we're using supermicro microcloud servers with
> 32GB memory and quad core E3-1241v3 Xeons.
> We're using the jenkins builds of ovirt-node since there isn't a recent
> ovirt-node build. "
>
> --
> Hypervisors are also used as replica GlusterFS nodes.
>
> --
> "We run about 20 VMs on 3 proliant machines with a msa backed shared storage.
> We have migrated dec/jan to oVirt and I am a big fan of the virtualization
> solution. We ran VPS before on Virtuozzo, but it did not provide out of the
> box HA.
> The only thing I was missing is some way of automated backups.
> We have finally bought a proprietary backup solution ( R1soft CDP ) to run
> within the VMs to have disaster and file backups.
> Overall a great product!
> "
>
>
>
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users