Which hardware are you using for oVirt

Hi all,

Not sure if this is the place to be asking this, but I was wondering which hardware you are all using, and why, so I can see what I would be needing.

I would like to set up an HA cluster consisting of 3 hosts to be able to run 30 VMs. The engine I can run on another server. The hosts can be fitted with the storage and share the space through GlusterFS. I would think I will need at least 3 NICs, but would be able to install OVN. (Are 1 Gb NICs sufficient?)

Any input you would like to share would be greatly appreciated.

Thanks,

Hi,

HP ProLiant DL380, dual Xeon
120 GB RAID 1 for the system
2 TB RAID 10 for VM disks
5 VMs: 3 Linux, 2 Windows
Total CPU load is low most of the time; the high level of activity is disk-related.
The hosted engine runs as a KVM appliance on SuSE, so it can easily be moved, backed up, copied, experimented with, etc.

You'll have to use servers with more RAM and storage than mine. More than one NIC is required if some of your VMs are on different subnets, e.g. one in the internal zone and a second on the DMZ. For your setup: 10 Gb NICs plus an L3 switch for ovirtmgmt.

BTW, I would suggest several separate hardware RAIDs unless you have SSDs; otherwise the I/O limit of the disk system will be a bottleneck. Consider an SSD RAID 1 for heavily loaded databases.

*Please note that many cheap SSDs do NOT work reliably with SAS controllers, even in SATA mode.*

For example, I intended to use 2 x WD Green SSDs configured as RAID 1 for the OS. It was possible to install the system, yet under heavy load simulated with iozone the disk system froze, rendering the OS unbootable. The same crash occurred with a 512 GB KingFast SSD connected to a Broadcom/AMCC SAS RAID card.
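For anyone who wants to reproduce that kind of failure before trusting a drive, a sustained iozone run is a reasonable stand-in. The sketch below is illustrative only; the mount point, record size and file size are assumptions, not the exact invocation used above:

    # write, read and random-I/O passes with 64k records against a 4 GB file
    # on the array under test (adjust -s to exceed any cache in the path)
    iozone -i 0 -i 1 -i 2 -r 64k -s 4g -f /mnt/ssd-under-test/iozone.tmp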

Hello Andrei,

Thank you very much for sharing info on your hardware setup. Very informative.

At this moment I have my oVirt engine on our VMware environment, which is fine for good backup and restore.

I have 4 nodes running now, all different in make and model, with local storage. It works but lacks performance a bit.

But I can get my hands on some old Dell R415s with 96 GB of RAM, 2 quad-cores and 6 x 1 Gb NICs. They all come with 2 x 146 GB 15,000 rpm hard disks. That isn't bad, but I will add more RAM for starters. I would also like some good redundant storage for this, and the servers have limited space to add that.

Hopefully others will also share their setups and experience like you did.

Kind regards.

I have 2 and 3 node clusters with the following hardware (all with self-hosted engine):

2 node cluster:
RAM: 64 GB per host
CPU: 8 cores per host
Storage: 4x 1 TB SAS in RAID 10
NIC: 2x Gbit
VMs: 20

Although I would have liked a third NIC for Gluster storage redundancy, the above has been running smoothly for quite some time and without performance issues. The VMs it runs are not high on IO (mostly small Linux servers).

3 node cluster:
RAM: 32 GB per host
CPU: 16 cores per host
Storage: 5x 600 GB in RAID 5 (not ideal, but I had to gain some storage space without purchasing extra disks)
NIC: 6x Gbit
VMs: fewer than 10 large Windows VMs (Windows Server 2016 and Windows 10)

For your setup (30 VMs) I would rather go with RAID 10 SAS disks and at least a dual 10 Gbit NIC dedicated to the Gluster traffic only.

Alex

Hello Alex,

Thanks for sharing. Much appreciated.

I believe my setup would need 96 GB of RAM in each host, and at least 3 TB of storage. 4 TB would probably be better if I want to work with snapshots. (I will be running mostly Windows Server 2016 or Windows 10 desktops with 6 GB of RAM and 100 GB of disk each.)

I agree that a 10 Gb network for storage would be very beneficial.

Now, if I can figure out how to set up GlusterFS on a 3 node cluster in oVirt 4.2 just for the data storage, I'm golden to get started. :-)

Kind regards.
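For the GlusterFS part, the usual starting point is a replica 3 volume built from bricks on the three hosts. The sketch below is a minimal example; the host names, brick path and volume name are placeholders, not anything from this thread:

    # on host1, with glusterd running on all three hosts and a formatted
    # brick mounted at /gluster/data/brick on each
    gluster peer probe host2
    gluster peer probe host3
    gluster volume create data replica 3 \
        host1:/gluster/data/brick \
        host2:/gluster/data/brick \
        host3:/gluster/data/brick
    gluster volume set data group virt    # apply the virt option group shipped for VM workloads
    gluster volume start data

The volume can then be attached from the oVirt administration portal as a GlusterFS data domain.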

Andy,

I'm using a 2 node cluster:
- 2x Supermicro 6017 (2x Intel 2420 per node, 12C/24T each node), 384 GB RAM total, 10 GbE, all with hosted engine via NFS.

Storage side:
- 2x SC836BE16-R1K28B (192 GB ARC cache) with ZFS RAID 10 + Intel SLOG, serving iSCSI at 10 GbE.

80 VMs, more or less.

Regards,
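For context, a RAID 10-style ZFS pool with a separate SLOG device as described above is typically assembled along these lines. Device and pool names below are placeholders, not the actual layout:

    # striped mirrors (the RAID 10 equivalent) plus an SSD as log device
    zpool create tank mirror sda sdb mirror sdc sdd log intel-slog0
    # often paired with sync=always when the pool backs iSCSI targets
    zfs set sync=always tank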

Hi Andy,

we have 3 hosts for virtualization. Each has 40 cores, 512 GB RAM, RAID 1 for the system, 4 bonded (onboard) 1 Gbit NICs for client access (to the VMs) and a 10 Gbit NIC for the storage network.

The storage is built of 3 hosts, each with a 10 Gbit NIC, RAID 6 (5 TB HDDs and SSDs for caching) and Gluster in replica 3 mode.

Cheers
Richard

Hello Andy,

I'm not running a hyperconverged setup, but just for reference I'll describe mine:

2 clusters (6 + 2 nodes) of HPE BL460 Gen9 with 512 GB of RAM each. On the cluster composed of two nodes we're running the self-hosted engine.

The storage backend is FC for both (EMC VNX8000 for the biggest one and EMC VPLEX with a VMAX disk backend on the smallest one).

Luca

Hi Luca,

You have a 2 node cluster with 512 GB per host to run the engine only? How many VMs are you running on the compute nodes?

Alex

Sorry Andy, I forgot to write the density.

On the first cluster (6 nodes) we're running 225 VMs at the moment. On the second one (2 nodes) we're running 42 VMs plus the engine. This setup has been running since last July.

Just for completeness, we've just set up another environment with the same sizing (2 sockets + 512 GB RAM) and distribution (6 + 2) but different hardware (Lenovo x240 M5, IIRC) for hosting production VMs. At the moment there are few VMs, but we're planning to migrate over 300 VMs. Same storage backend.

Luca

Just because you asked, but not because this is helpful to you....

But first, a comment on "3 hosts to be able to run 30 VMs": the SPM node shouldn't run a lot of VMs. There is a setting (the name slips my mind) on the engine to give it a "virtual set" of VMs in order to keep VMs off of it.

With that said, CPU-wise it doesn't take a lot to run 30 VMs. The costly thing is memory (in general). So while a cheap set of 3 machines might handle the CPU requirements of 30 VMs, those cheap machines might not be able to give you the memory you need (it depends). You might be fine; there are cheap desktop-like machines that take 64 GB (and sometimes more). Just something to keep in mind: memory and storage will be the most costly items. It's simple math. Linux hosts, of course, don't necessarily need much memory (or storage). But Windows...

1 Gbit NICs are "ok", but again, it depends on storage. GlusterFS is no speed demon, but you might not need "fast" storage.

Lastly, your setup is just for "fun", right? Otherwise, read on.

Running oVirt 3.6 (this is a production setup):

ovirt engine (manager): Dell PowerEdge 430, 32G

ovirt cluster nodes:
Dell m1000e 1.1 backplane blade enclosure
9 x M630 blades (2xE5-2669v3, 384GB), 4 iSCSI paths, 4 bonded LAN, all 10GbE, CentOS 7.2
4 x MXL 10/40GbE (2x40Gbit LAN, 2x40Gbit iSCSI SAN to the S4810's)

120 VMs: CentOS 6, CentOS 7, Windows 10 Ent., Windows Server 2012. We've run on as few as 3 nodes.

Network, SAN and storage (for oVirt domains):
2 x S4810 (part used for SAN, part for LAN)
Equallogic dual controller (note: passive/active) PS6610S (84 x 4TB 7.2K SAS)
Equallogic dual controller (note: passive/active) PS6610X (84 x 1TB 10K SAS)

ISO and Export domains are handled by: Dell PE R620, 32G, 2x10Gbit LAN, 2x10Gbit iSCSI to the SAN (above), CentOS 7.4, NFS

What I like:
* Easy setup.
* Relatively good network and storage.

What I don't like:
* 2 "effective" networks, LAN and iSCSI. All networking uses the same effective path. It would be nice to have more physical isolation for mgmt vs. motion vs. VMs. QoS is provided in oVirt, but still, it would be nice to have the full pathways.
* Storage doesn't use active/active controllers, so controller failover is VERY slow.
* We have a fast storage system and a somewhat slower storage system (a matter of IOPS); neither is SSD, so there isn't a huge difference. No real redundancy or flexibility.
* vdsm can no longer respond fast enough for the amount of disks defined (in the event of a new Storage Domain add). We have raised vdsTimeout, but have not tested yet.

I inherited the "style" above. My recommendation of where to start for a reasonable production instance, minimum (assumes the S4810's above, not priced here):

1 x ovirt manager/engine, approx $1500
4 x Dell R620, 2xE5-2660, 768G, 6x10GbE (LAN, Storage, Motion), approx $42K
3 x Nexsan 18P 108TB, approx $96K

While significantly cheaper (by 6 figures), this provides active/active controllers, storage reliability and flexibility, and better network pathways. Why 4 nodes? You need at least N+1 for reliability; the extra 4th node is merely capacity. Why 3 storage units? You need at least N+1 for reliability.

Obviously, you'll still want to back things up and test the ability to restore components like the ovirt engine from scratch.

Btw, my recommended minimum above is regardless of hypervisor cluster choice (could be VMware).
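As a side note on the vdsTimeout tweak mentioned above: on the engine host that is normally done with engine-config. A minimal sketch, with the value picked arbitrarily for illustration; confirm the key name on your version first:

    # list the available keys to confirm the exact name
    engine-config -l | grep -i vdstimeout
    # raise the VDSM request timeout (in seconds) and restart the engine
    engine-config -s vdsTimeout=300
    systemctl restart ovirt-engine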

Hello Christopher,

Thank you very much for sharing.

It started out just for fun, but now people at work are looking at me to provide an environment to do testing, simulate problems they have encountered, etc., and more and more of them see the benefits of it. At work we are running VMware, but that was far too expensive to use for these tests.

As I suspected, that was just the beginning, and I knew I would have to be able to expand, so whenever an old server was decommissioned from production I converted it to a node. I now have 4 in use and demands keep growing. So now I want to ask my boss to invest in new hardware, as people are asking me why I don't have proper backups, and even why they can't use the VMs while I perform administrative tasks or upgrades.

So that's why I'm very interested in what others are using.

Kind regards.

On Mon, Mar 26, 2018, 7:04 PM Christopher Cox <ccox@endlessnow.com> wrote:
* vdsm can no longer respond fast enough for the amount of disks defined (in the event of a new Storage Domain add). We have raised vdsTimeout, but have not tested yet.
We have substantially changed and improved VDSM for better scale since 3.6. How many disks are defined, in how many storage domains and LUNs? (The OS itself has also improved.)
1 x ovirt manager/engine, approx $1500
What about high availability for the engine?

4 x Dell R620, 2xE5-2660, 768G, 6x10GbE (LAN, Storage, Motion), approx $42K
3 x Nexsan 18P 108TB, approx $96K
Alternatively, how many reasonable SSDs can you buy? A Samsung 860 EVO 4TB costs $1,300 on Amazon (US). You could buy tens of those ($96K / $1,300 ≈ 73 drives) and be left with some change. Can you instead use them in a fast storage setup? https://www.backblaze.com/blog/open-source-data-storage-server/ for example is interesting.
While significantly cheaper (by 6 figures), it provides active/active controllers, storage reliability and flexibility and better network pathways. Why 4 x nodes? Need at least N+1 for reliability. The extra 4th node is merely capacity. Why 3 x storage? Need at least N+1 for reliability.
Are they running in some cluster?
Obviously, you'll still want to back things up and test the ability to restore components like the ovirt engine from scratch.
+1. Y.

We run Dell PowerEdge R720s and R730s with 32 GB of RAM and quad Xeon processors. Storage is provided by Dell MD3800i and Promise arrays using iSCSI. The network is all 10 gigabit interfaces using 802.3ad bonds. We actually just upgraded from 1 gigabit NICs, since there were performance issues with storage causing high IOwait on VMs. I'd recommend avoiding 1 gigabit if you can.
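For reference, an 802.3ad bond like the one described can be created with nmcli roughly as follows. Interface names are placeholders, and on oVirt hosts the bond is usually defined through the engine's host network UI instead:

    nmcli con add type bond con-name bond0 ifname bond0 \
        bond.options "mode=802.3ad,miimon=100"
    nmcli con add type ethernet con-name bond0-port1 ifname ens1f0 master bond0
    nmcli con add type ethernet con-name bond0-port2 ifname ens1f1 master bond0
    nmcli con up bond0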

Lots of great hardware knowledge in this thread!

I'm also making the move to 10 Gb. I'm adding a 3rd host to my deployment and moving to GlusterFS on 3 nodes, from my current NFS share on a separate storage server. Each of the 3 nodes has dual E5-2640 v4s with 128 GB RAM.

I have some hardware choices I would love some advice about:

- Should I use Intel X520-DA2 or X710-DA2 NICs for the storage network? There is no significant price difference. The hosts are running oVirt Node 4.2. I hope to use them in bridge mode so that I don't need a 10 GbE switch; I do have a single 10 GbE port left on my router.

- The hosts have 12 Gbps 520i SAS cards; should I spec 6 or 12 Gbps SSD drives? Here there is a large price difference, and also a large difference between enterprise performance, enterprise mainstream, and enterprise entry drives. I'm not sure how to estimate the value of those options in a GlusterFS deployment. The workload is pretty I/O intensive, with fairly small read/write operations (under 128 KB) on Windows VMs.

Any obvious weak links with this plan?
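One way to compare the candidate drives for a small-block workload like this is a short fio run. The sketch below is illustrative only; the directory, file size and 64k block size are assumptions chosen to roughly match the description above:

    fio --name=smallblock --directory=/mnt/ssd-under-test --rw=randrw --rwmixread=70 \
        --bs=64k --size=4g --numjobs=4 --iodepth=16 --ioengine=libaio \
        --direct=1 --group_reporting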
participants (10)

- Alex K
- Andrei Verovski
- Andy Michielsen
- Christopher Cox
- Juan Pablo
- Luca 'remix_tj' Lorenzetto
- Michael Watters
- Richard Neuboeck
- Vincent Royer
- Yaniv Kaul