Hello Alex,
Thanks for sharing. Much appreciated.
I believe my setup would need 96 GB of RAM in each host, and at least 3 TB of storage. 4 TB would probably be better if I want to work with snapshots. (I will be running mostly Windows 2016 servers or Windows 10 desktops with 6 GB of RAM and 100 GB of disk each.)
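As a quick sanity check of those numbers (just a sketch, assuming 30 VMs at 6 GB RAM and 100 GB disk each):

    echo $(( 30 * 6 ))    # 180 GB total VM RAM; 3 hosts x 96 GB = 288 GB,
                          # so the VMs still fit on 2 hosts if 1 host fails
    echo $(( 30 * 100 ))  # 3000 GB (~3 TB) of VM disks; ~4 TB leaves
                          # headroom for snapshots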
I agree that a 10 Gbit network for storage would be very beneficial.
Now if I can figure out how to set up GlusterFS on a 3-node cluster in oVirt 4.2 just for the data storage, I'm golden to get started. :-)
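From what I've read so far, the manual side would look roughly like this (hostnames and brick paths below are placeholders, and oVirt 4.2 can apparently also deploy this for you via the hyperconverged Cockpit wizard):

    # on node 1, after gluster is installed and running on all three nodes
    gluster peer probe host2.example.local
    gluster peer probe host3.example.local

    # replica-3 volume with one brick per host
    gluster volume create data replica 3 \
        host1.example.local:/gluster_bricks/data/brick \
        host2.example.local:/gluster_bricks/data/brick \
        host3.example.local:/gluster_bricks/data/brick

    # apply the option group tuned for VM images, then start the volume
    gluster volume set data group virt
    gluster volume start data

The volume could then be added in the engine as a GlusterFS data storage domain.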
Kind regards.
On 24 Mar 2018, at 20:08, Alex K <rightkicktech(a)gmail.com> wrote:
I have 2- and 3-node clusters with the following hardware (all with self-hosted engine):

2-node cluster:
RAM: 64 GB per host
CPU: 8 cores per host
Storage: 4x 1TB SAS in RAID10
NIC: 2x Gbit
VMs: 20

Although I would have liked a third NIC for gluster storage redundancy, the above has been running smoothly for quite some time without performance issues. The VMs it runs are not heavy on IO (mostly small Linux servers).

3-node clusters:
RAM: 32 GB per host
CPU: 16 cores per host
Storage: 5x 600GB in RAID5 (not ideal, but I had to gain some storage space without purchasing extra disks)
NIC: 6x Gbit
VMs: fewer than 10 large Windows VMs (Windows 2016 Server and Windows 10)

For your setup (30 VMs) I would rather go with RAID10 SAS disks and at least a dual 10 Gbit NIC dedicated to the gluster traffic only.
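In oVirt you would normally set this up from the Administration Portal (a bond plus a dedicated non-VM logical network for gluster), but by hand it would look roughly like this (interface names are assumptions):

    # LACP bond of the two 10 Gbit ports, reserved for gluster traffic
    nmcli con add type bond con-name bond-gluster ifname bond1 \
        bond.options "mode=802.3ad,miimon=100"
    nmcli con add type ethernet con-name bond1-p1 ifname ens1f0 master bond1
    nmcli con add type ethernet con-name bond1-p2 ifname ens1f1 master bond1
    nmcli con up bond-gluster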
Alex
> On Sat, Mar 24, 2018 at 1:24 PM, Andy Michielsen <andy.michielsen(a)gmail.com> wrote:
> Hello Andrei,
>
> Thank you very much for sharing info on your hardware setup. Very informative.
>
> At this moment I have my oVirt engine on our VMware environment, which is fine for good backup and restore.
>
> I have 4 nodes running now, all different in make and model, with local storage, and it works but lacks performance a bit.
>
> But I can get my hands on some old Dell R415s with 96 GB of RAM, 2 quad-cores and 6 x 1 Gbit NICs. They all come with 2 x 146 GB 15000 rpm hard disks. This isn't bad, but I will add more RAM for starters. I would also like some good redundant storage for this, and the servers have limited space to add that.
>
> Hopefully others will also share their setups and experience like you did.
>
> Kind regards.
>
>> On 24 Mar 2018, at 10:35, Andrei Verovski <andreil1(a)starlett.lv> wrote:
>>
>> Hi,
>>
>> HP ProLiant DL380, dual Xeon
>> 120 GB RAID L1 for system
>> 2 TB RAID L10 for VM disks
>> 5 VMs, 3 Linux, 2 Windows
>> Total CPU load is low most of the time; the high level of activity is disk-related.
>> The hosted engine runs as a KVM appliance on SuSE, so it can be easily moved, backed up, copied, experimented with, etc.
>>
>> You'll have to use servers with more RAM and storage than mine.
>> More than one NIC is required if some of your VMs are on different subnets, e.g. one in the internal zone and a second on the DMZ.
>> For your setup: 10 Gbit NICs + an L3 switch for ovirtmgmt.
>>
>> BTW, I would suggest having several separate hardware RAIDs unless you have SSDs; otherwise the I/O limit of the disk system will be a bottleneck. Consider an SSD RAID L1 for heavily loaded databases.
>>
>> Please note many cheap SSDs do NOT work reliably with SAS controllers even in SATA mode.
>>
>> For example, I intended to use 2 x WD Green SSDs configured as RAID L1 for the OS.
>> It was possible to install the system, yet under a heavy load simulated with iozone the disk system froze, rendering the OS unbootable.
>> The same crash occurred with a 512 GB KingFast SSD connected to a Broadcom/AMCC SAS RAID card.
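>>
>> For reference, an iozone run of the kind used for this test looks roughly like this (the target path is a placeholder):
>>
>>     # automatic mode: sweeps record sizes, files up to 4 GB
>>     iozone -a -g 4g -f /mnt/ssd-raid/testfile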
>>
>>> On 03/24/2018 10:33 AM, Andy Michielsen wrote:
>>> Hi all,
>>>
>>> Not sure if this is the place to be asking this, but I was wondering which hardware you are all using and why, so I can see what I would be needing.
>>>
>>> I would like to set up an HA cluster consisting of 3 hosts, able to run 30 VMs.
>>> The engine I can run on another server. The hosts can be fitted with the storage and share the space through GlusterFS. I would think I will need at least 3 NICs, but would be able to install OVN. (Are 1 Gbit NICs sufficient?)
>>>
>>> Any input you guys would like to share would be greatly appreciated.
>>>
>>> Thanks,