cc'ing Denis and Sahina,
Perhaps they can share their experience and insights with the hyperconverged
environment.
Regards,
Maor
On Fri, Jul 6, 2018 at 9:47 AM, Tal Bar-Or <tbaror(a)gmail.com> wrote:
Hello All,
I am about to deploy a new oVirt system for our developers, which we plan to
run as a Node-based hyperconverged environment.
The system workload would mostly be builders compiling our code, which
involves lots of small files and intensive IO.
I plan to build two Gluster volume "layers": one based on SAS drives for the
VM OS disks, and a second, NVMe-based one for the intensive IO.
I would expect the system to be resilient/highly available and at the same
time deliver good enough IO for the builder VMs, which will be at least 6 to
8 guests.
The system hardware would be as follows:
*Chassis*: 4x HP DL380 Gen8
*Each server:*
*CPU*: 2x E5-2690 v2
*Memory*: 256GB
*Disks*: 12x 1.2TB 10k SAS disks, 2 mirrored for the OS (or a mirror of 2x
128GB Kingston drives instead), the rest for the VM OS volume.
*NVMe*: 2x 960GB Kingston KC1000 for the builders compiling source code
*Network*: 4x Intel 10Gbit/s SFP+ ports
Given the above configuration, my question is what would be best practice in
terms of Gluster volume type: *Distributed, Replicated, Distributed
Replicated, Dispersed, or Distributed Dispersed*?
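For illustration only, the replicated option I have in mind would look
roughly like this (host names and brick paths below are placeholders, not
our actual layout):

    # Replica 3 with an arbiter on the third host (placeholder names)
    gluster volume create vmstore replica 3 arbiter 1 \
        node1:/gluster_bricks/vmstore/brick \
        node2:/gluster_bricks/vmstore/brick \
        node3:/gluster_bricks/vmstore/brick
    # Sharding is commonly enabled for VM image workloads
    gluster volume set vmstore features.shard on
    gluster volume start vmstore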
What is the suggestion for hardware RAID: level 5 or 6, or should I use ZFS
instead?
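If ZFS turns out to be the preferred route, I assume the pool backing the SAS
layer would be something along these lines (device and dataset names are
placeholders):

    # RAID-Z2 pool over the remaining SAS disks (placeholder device names)
    zpool create -o ashift=12 vmpool raidz2 sdc sdd sde sdf sdg sdh sdi sdj
    # Dataset used as the Gluster brick mount point
    zfs create -o mountpoint=/gluster_bricks/vmstore vmpool/vmstore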
Regarding node network communication, I intend to use 3 ports for storage
traffic and one port for the guest network. My question concerns Gluster
inter-node communication: would I gain more from a 3x 10G LACP bond or from
one dedicated network per Gluster volume?
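For the LACP option I am picturing something like the following sketch
(interface names are placeholders for the actual 10G NICs):

    # 3x 10G LACP bond for the Gluster storage network (placeholder NIC names)
    nmcli con add type bond con-name bond0 ifname bond0 \
        bond.options "mode=802.3ad,miimon=100"
    nmcli con add type ethernet con-name bond0-port1 ifname ens1f0 master bond0
    nmcli con add type ethernet con-name bond0-port2 ifname ens1f1 master bond0
    nmcli con add type ethernet con-name bond0-port3 ifname ens2f0 master bond0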
Please advise.
Thanks
--
Tal Bar-or
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement:
https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users(a)ovirt.org/message/4XKEEDT2HHVDQU7FZCANZ26UOMFJTBE5/