Hello!
I would like to read your thoughts regarding a new implementation we
are about to do at work.
Our infrastructure is as follows:
There are 9 servers that will be used for running VMs:
- Network: 2x 10G + 4x 1G Ethernet per server
- 2x 10-core Xeon processors and 192GB RAM each
- 2x 300GB 10K enterprise SAS 6Gbps hot-swap HDDs

There are 3 servers that will be used for storage:
- Network: 2x 10G + 4x 1G Ethernet per server
- 2x 8-core Xeon processors and 128GB RAM each
- 2x 300GB 10K RPM 2.5" SAS hot-swap HDDs
- 6x 200GB Western Digital HGST Ultrastar HUSMM1620ASS200 SSDs
- a JBOD box with 12x 4TB drives, connected to each server through a SAS interface
The oVirt installation is expected to run 200-250 VMs at maximum load.
One big VM will be our mail server, which is pretty loaded: it has
about 50,000 registered users, and about half of them are quite
active.
What I'm currently thinking is: keep the hosted engine VM on the
compute servers' storage, as a standalone, non-managed volume of
about 150GB, but I would like the engine to be stored on more than 3
servers. Any thoughts?
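To illustrate what I mean by "more than 3 servers": a Gluster volume
for the engine that is distributed as well as replicated could spread
it across six of the compute hosts instead of three. Something along
these lines (host names c1-c6 and the brick path are just
placeholders):

    # 2x3 distributed-replicate volume for the hosted engine storage domain
    gluster volume create engine replica 3 \
        c1:/gluster/engine/brick c2:/gluster/engine/brick c3:/gluster/engine/brick \
        c4:/gluster/engine/brick c5:/gluster/engine/brick c6:/gluster/engine/brick
    gluster volume start engine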
On the VM storage, since high availability is a must, I will probably
set each storage server's JBOD disks up in RAID6 with a hot spare,
and then create 3 volumes (2 data bricks + 1 arbiter brick on each
server) on GlusterFS, replica 3 arbiter 1.
This will give me about 36TB usable per storage server.
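Roughly, each of the three volumes would be created like this (host
names s1-s3 and the brick paths are placeholders; the arbiter brick
would sit on a different server for each of the three volumes):

    gluster volume create data1 replica 3 arbiter 1 \
        s1:/gluster/data1/brick s2:/gluster/data1/brick s3:/gluster/data1/arbiter
    # apply the virt group settings recommended for VM images
    # (as far as I know this also turns sharding on)
    gluster volume set data1 group virt
    gluster volume start data1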
I'm highly concerned about how GlusterFS will perform for the email
server. Right now the email server is a VM running on KVM, and it
gets its storage from an EMC array. I'm not expecting Gluster to
perform like the EMC storage, but I would like to know whether it can
handle the load. I'm thinking of dedicating a Gluster volume to it
exclusively.
Is it better to give the email VM the Gluster volume as a disk (or
disks) from oVirt, or to mount it directly inside the VM? oVirt will
force the use of gfapi if I set it on the engine, but if I go that
way, how will sharding perform?
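Just so we're talking about the same thing, by "set it on the engine"
I mean the engine-config option (if I have the name right):

    # enable libgfapi access for Gluster storage domains
    engine-config -s LibgfApiSupported=true
    systemctl restart ovirt-engine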
On the other hand, if I mount it as a GlusterFS volume directly
inside the VM, how will it perform through FUSE?
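That would be a plain FUSE mount inside the guest, something like
(the volume name "mailvol", the host names and the mount point are
placeholders):

    mount -t glusterfs -o backup-volfile-servers=s2:s3 s1:/mailvol /srv/mail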
I have 6x 200GB SSDs per storage server. Would it do any good to add
these as a tiering volume? What should the shard block size be in
that case? Should it be smaller to utilise the SSDs better?
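To be concrete, what I have in mind for the SSDs is roughly this
(brick paths are placeholders, and I'm not sure a hot tier even makes
sense on top of an arbiter volume):

    # attach the SSD bricks as a replicated hot tier to one of the data volumes
    gluster volume tier data1 attach replica 3 \
        s1:/gluster/ssd/brick s2:/gluster/ssd/brick s3:/gluster/ssd/brick
    # sharding defaults to 64MB blocks; would a smaller value use the tier better?
    gluster volume set data1 features.shard-block-size 64MB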
Will performance be better if I use Ceph through iSCSI with
multipath? I read that Ceph needs more nodes to perform well and that
iSCSI adds considerable overhead.
I'd love to read your thoughts on this.