Thank you so much Strahil! Yes, this would be the setup. Basically, I will
equip the 3rd node with only some consumer-grade SSDs to hold the arbiter
metadata for all the volumes, while equipping the 1st and 2nd nodes with
proper DC-grade disks for both the spinning and SSD volumes. This will
drastically reduce costs...
Thank you !
On Thu, Apr 25, 2019, 15:12 Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
I can't quite get the idea. Can you give an example?
Let me share some of my setup.
1. Volume data_fast consists of:
ovirt1:/gluster_bricks/data_fast/data_fast -> 500GB NVMe
ovirt2:/gluster_bricks/data_fast/data_fast -> 500GB NVMe
ovirt3:/gluster_bricks/data_fast/data_fast -> small LV on a slow
(QLC-based) SATA SSD
All hosted on thin LVM.
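A layout like this is normally built as an arbitrated replica 3 volume, where the third brick stores only metadata. As a hedged sketch (the hostnames and brick paths come from the setup above; the command assumes the standard Gluster CLI):

```shell
# Create a replica 3 volume where the third brick (the small LV on
# ovirt3) acts as a metadata-only arbiter.
gluster volume create data_fast replica 3 arbiter 1 \
    ovirt1:/gluster_bricks/data_fast/data_fast \
    ovirt2:/gluster_bricks/data_fast/data_fast \
    ovirt3:/gluster_bricks/data_fast/data_fast

# Start the volume and confirm the brick layout.
gluster volume start data_fast
gluster volume info data_fast
```

`gluster volume info` should then list the third brick with an `(arbiter)` marker, confirming it holds no file data.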
Of course, for the engine I have:
ovirt1:/gluster_bricks/engine/engine -> SATA ssd shared between OS and
brick
ovirt2:/gluster_bricks/engine/engine -> SATA ssd shared between OS and
brick
ovirt3:/gluster_bricks/engine/engine -> SATA ssd shared between OS and 4
other bricks
Since I switched from the old HDDs to consumer SSDs, the engine volume is
no longer reported by sanlock.service, despite Gluster v52.XX having
higher latency.
Best Regards,
Strahil Nikolov
On Wednesday, April 24, 2019, 21:25:10 GMT-4, Leo David <
leoalex(a)gmail.com> wrote:
Thank you very much Strahil, very helpful, as always. So I would equip the
3rd server and allocate one small (120-240 GB) consumer-grade SSD for each
of the gluster volumes, and at volume creation specify the small SSDs as
the 3rd brick.
Does that make sense?
Thank you !
On Wed, Apr 24, 2019, 18:10 Strahil <hunter86_bg(a)yahoo.com> wrote:
I think 2 small SSDs (RAID 1 via mdadm) can do the job better, as SSDs have
lower latencies. You can use them both for the OS (minimum needed is 60 GB)
and the rest will be plenty for an arbiter.
By the way, if you plan on using gluster snapshots, use thin LVM for the
brick.
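Gluster snapshots depend on LVM thin-provisioned snapshots underneath the brick, so the brick filesystem has to sit on a thin LV. A rough sketch of such a layout (the device `/dev/sdb`, the VG/LV names, and the sizes are illustrative assumptions, not from the thread):

```shell
# Dedicate a disk to a volume group for Gluster bricks.
pvcreate /dev/sdb
vgcreate gluster_vg /dev/sdb

# Create a thin pool, then a thin LV inside it for the brick.
lvcreate -L 100G --thinpool gluster_pool gluster_vg
lvcreate -V 90G --thin -n data_fast gluster_vg/gluster_pool

# Format and mount the brick (512-byte inodes are the usual
# recommendation for Gluster's extended attributes).
mkfs.xfs -i size=512 /dev/gluster_vg/data_fast
mkdir -p /gluster_bricks/data_fast
mount /dev/gluster_vg/data_fast /gluster_bricks/data_fast
```

With the brick on a thin LV like this, `gluster snapshot create` can take crash-consistent snapshots of the volume; on a plain (thick) LV it will refuse.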
Best Regards,
Strahil Nikolov
On Apr 24, 2019 16:20, Leo David <leoalex(a)gmail.com> wrote:
Hello Everyone,
I need to look into adding some enterprise-grade SAS disks (both SSD
and spinning), and since the prices are not too low, I would like to
benefit from replica 3 arbitrated volumes.
Therefore, I intend to buy some smaller disks to use as arbiter
bricks.
My question is: what performance (regarding IOPS, throughput) do the
arbiter disks need? Should they be at least the same as the real data
disks?
Knowing that they only keep metadata, I am thinking there will not be so
much pressure on the arbiters.
Any thoughts?
Thank you !
--
Best regards, Leo David