[ovirt-users] Questions about converged infrastructure setup and glusterFS sizing/performance
Kasturi Narra
knarra at redhat.com
Mon Jan 22 08:02:57 UTC 2018
Hello Jayme,
Please find the responses inline.
On Fri, Jan 19, 2018 at 7:44 PM, Jayme <jaymef at gmail.com> wrote:
> I am attempting to narrow down choices for storage in a new oVirt build
> that will eventually be used for a mix of dev and production servers.
>
> My current space usage excluding backups sits at about only 1TB so I
> figure 3-5 TB would be more than enough for VM storage only + some room to
> grow. There will be around 24 linux VMs total but 80% of them are VERY low
> usage and low spec servers.
>
> I've been considering a 3 host hyperconverged oVirt setup, replica 3
> arbiter 1, with a disaster recovery plan to replicate the gluster volume
> to a separate server. I would of course do additional incremental
> backups to an alternate server as well, probably with rsync or some other
> method.
>
> Some questions:
>
> 1. Is it recommended to use SSDs for glusterFS, or can regular server/SAS
> drives provide sufficient performance? If using SSDs, is it recommended to
> use enterprise SSDs, or are consumer SSDs good enough given the redundancy
> of glusterFS? I would love to hear of any use cases from any of you
> regarding hardware specs you used in hyperconverged setups and what level
> of performance you are seeing.
>
You can use SSDs if you would like to, but you could use regular server/SAS
drives too.
>
> 2. Is it recommended to RAID the drives that form the gluster bricks? If
> so, what RAID level?
>
The RAID level can be RAID 5 or RAID 6.
>
> 3. How do I calculate how much space will be usable in a replica 3
> arbiter 1 configuration? Will it be 75% of total drive capacity minus what
> I lose from RAID (if I RAID the drives)?
>
Each replica subvolume has one arbiter out of its three bricks, and the
arbiter bricks are taken from the end of each replica subvolume. Since the
arbiter brick does not store file data, its disk usage will be considerably
less than that of the other bricks in the replica. The size of the arbiter
brick depends on how many files you plan to store in the volume; a good
estimate is 4 KB times the number of files in the replica, so a volume
holding one million files needs only about 4 GB per arbiter brick. The
usable capacity is simply the size of one data brick: if the bricks on the
other two nodes are 1 TB each, the total usable capacity is 1 TB.
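A minimal sketch of creating such a volume from the gluster CLI, with
placeholder hostnames and brick paths (in a hyperconverged oVirt deployment
the setup wizard would normally generate this for you); the last brick of
each set of three becomes the arbiter:

    gluster volume create data replica 3 arbiter 1 \
        host1:/gluster_bricks/data/data \
        host2:/gluster_bricks/data/data \
        host3:/gluster_bricks/data/data   # arbiter brick, metadata only
    gluster volume start data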
>
> 4. For replication of the gluster volume, is it possible for me to
> replicate the entire volume to a single drive/raid array in an alternate
> server, or does the replicated volume need to match the configuration of
> the main glusterFS volume (i.e. same number of drives/configuration etc.)?
>
You could replicate the entire volume to another single volume; it does not
need to match the brick layout of the main volume, but please make sure the
target volume is at least as large as the data being replicated so that no
data is lost.
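Gluster geo-replication is the usual mechanism for this kind of one-way
replication to a remote volume. A rough sketch, assuming a source volume
named data, a remote host backup1 and a remote volume backupvol that
already exists and is started (all names are placeholders), with
passwordless root SSH from one of the source nodes to backup1:

    # generate the common pem keys used by geo-replication
    gluster system:: execute gsec_create
    # create, start and monitor the geo-replication session
    gluster volume geo-replication data backup1::backupvol create push-pem
    gluster volume geo-replication data backup1::backupvol start
    gluster volume geo-replication data backup1::backupvol status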
>
> 5. Has the Meltdown bug caused, or is it expected to cause, major issues
> with oVirt hyperconverged setups due to performance loss from the patches?
> I've been reading articles suggesting up to 30% performance loss on some
> converged/storage setups due to how CPU intensive converged setups are.
>
> Thanks in advance!
>