Hi Strahil,
I have a 3-node hyperconverged setup and added 3 new nodes to the cluster, for a total of 6
servers. I am now taking advantage of the extra compute power; however, the Gluster storage
part is what gets me.
Current Hyperconverged setup:
-
host1.mydomain.com
Bricks:
engine
data1
vmstore1
-
host2.mydomain.com
Bricks:
engine
data1
vmstore1
-
host3.mydomain.com
Bricks:
engine
data1
vmstore1
-
host4.mydomain.com
Bricks: (none)
-
host5.mydomain.com
Bricks: (none)
-
host6.mydomain.com
Bricks: (none)
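For reference, the layout above can be confirmed from any host with the standard Gluster CLI (I'm only assuming the output shape here, not exact values):

```shell
# List all peers in the trusted storage pool -- should show all 6 hosts
gluster pool list

# Show the brick list per volume; only host1-3 appear as brick servers
gluster volume info data1    # repeat for engine and vmstore1
```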
As you can see from the above, the original first 3 servers are the only ones that contain
Gluster storage bricks, so storage redundancy is not spread across all 6 nodes. I think
this comes down to a lack of understanding on my end of how oVirt and Gluster integrate
with one another, so I have a few questions:
How would I go about achieving storage redundancy across all 6 nodes?
Do I need to configure the Gluster volumes manually through the OS CLI?
If I configure the full storage layout manually, will oVirt know about it?
Again, I know that bricks must be added in sets of 3, and on the first 3 nodes my
Gluster setup looks like this (all done by the hyperconverged setup in oVirt):
engine volume: host1:brick1, host2:brick1, host3:brick1
data1 volume: host1:brick2, host2:brick2, host3:brick2
vmstore1 volume: host1:brick3, host2:brick3, host3:brick3
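For what it's worth, my understanding of the expansion (untested; hostnames and brick paths are just my guess at mirroring the existing layout) would be the standard Gluster add-brick flow, run once per volume:

```shell
# Add one new replica-3 subvolume to data1, turning it into a
# distributed-replicated (2 x 3) volume. Brick paths are assumed to
# mirror whatever the hyperconverged wizard created on host1-3.
gluster volume add-brick data1 replica 3 \
    host4.mydomain.com:/gluster_bricks/data1/data1 \
    host5.mydomain.com:/gluster_bricks/data1/data1 \
    host6.mydomain.com:/gluster_bricks/data1/data1

# Spread existing data onto the new bricks
gluster volume rebalance data1 start
```

But whether oVirt will pick up a change made this way from the CLI is really my question.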
So after adding the 3 new servers, I don't know if I need to do something similar to the
example in
https://medium.com/@tumballi/scale-your-gluster-cluster-1-node-at-a-time-... If
I make a similar change, will oVirt know about it? Will it still be able to handle it as
hyperconverged?
As I mentioned before, I normally see 3-node hyperconverged setup examples with Gluster,
but I have not found one for a 6-, 9-, or 12-node cluster.
Thanks again.