[ovirt-users] GlusterFS performance with only one drive per host?
Sahina Bose
sabose at redhat.com
Thu Mar 22 10:01:21 UTC 2018
On Mon, Mar 19, 2018 at 5:57 PM, Jayme <jaymef at gmail.com> wrote:
> I'm spec'ing a new oVirt build using three Dell R720s w/ 256GB. I'm
> considering storage options. I don't have a requirement for high amounts
> of storage; I have a little over 1TB to store but want some overhead, so I'm
> thinking 2TB of usable space would be sufficient.
>
> I've been doing some research on Micron 1100 2TB SSDs and they seem to
> offer a lot of value for the money. I'm considering using smaller, cheaper
> SSDs for boot drives and using one 2TB Micron SSD in each host for a
> GlusterFS replica 3 setup (I'm on the fence about using an arbiter; I like the
> extra redundancy replica 3 will give me).
>
> My question is: would I see a performance hit using only one drive in each
> host with GlusterFS, or should I try to add more physical disks, such as six
> 1TB drives instead of three 2TB drives?
>
[Adding gluster-users for inputs here]
> Also, one other question: I've read that Gluster can only be done in
> groups of three, meaning you need 3, 6, or 9 hosts. Is this true? If I
> had an operational replica 3 GlusterFS setup and wanted to add more
> capacity, would I have to add 3 more hosts, or is it possible for me to add
> a 4th host into the mix for extra processing power down the road?
>
In oVirt, we support replica 3 or replica 3 with arbiter (where one of the
3 bricks is a low-storage arbiter brick). To expand storage, you would need
to add bricks in multiples of 3. However, if you only want to expand compute
capacity in your hyperconverged (HC) environment, you can add a 4th node.
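
As a rough sketch of what that expansion looks like at the gluster CLI level
(the host names host1-host6 and the brick path /gluster_bricks/data/brick are
placeholders, not your actual layout):

    # initial replica 3 volume, one brick per host
    gluster volume create data replica 3 \
        host1:/gluster_bricks/data/brick \
        host2:/gluster_bricks/data/brick \
        host3:/gluster_bricks/data/brick

    # later expansion: add a complete new set of 3 bricks, which turns the
    # volume into a 2 x 3 distributed-replicate volume
    gluster volume add-brick data replica 3 \
        host4:/gluster_bricks/data/brick \
        host5:/gluster_bricks/data/brick \
        host6:/gluster_bricks/data/brick

A 4th node added purely for compute simply joins the cluster without
contributing any bricks to the volume.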
> Thanks!