On Wed, Feb 12, 2014 at 6:18 PM, ml ml wrote:
I guess the brick details are stored in the Postgres database and everything
else after that will fail?!
Am I the only one with dedicated migration/storage interfaces? :)
Thanks,
Mario
One of the workarounds I found, and which works for me since I'm not using
DNS, is this:
- for the engine host, node01 and node02 have their IPs on the mgmt network
- for node01 and node02 themselves, their own IP addresses are on the dedicated Gluster network
so for example
10.4.4.x = mgmt network
192.168.3.x = dedicated Gluster network
before:
on engine
/etc/hosts
10.4.4.58 node01
10.4.4.59 node02
10.4.4.60 engine
on node01
10.4.4.58 node01
10.4.4.59 node02
10.4.4.60 engine
after:
on engine (the same as before)
/etc/hosts
10.4.4.58 node01
10.4.4.59 node02
10.4.4.60 engine
on node01
#10.4.4.58 node01
#10.4.4.59 node02
192.168.3.1 node01
192.168.3.3 node02
10.4.4.60 engine
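As a quick sanity check after changing /etc/hosts (just a sketch of what I would run; "data" below is only a placeholder volume name), you can verify on node01 that the peers now resolve to the dedicated Gluster network while the bricks keep the same hostnames:

# should now print 192.168.3.3 node02
getent hosts node02

# peers are still identified by the same hostnames
gluster peer status

# brick hostnames are unchanged, so nothing the engine recorded becomes stale
# (replace "data" with your actual volume name)
gluster volume info data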
No operations on the RDBMS are needed: the hostnames stored by the engine stay the same, only their resolution on the nodes changes.
HIH,
Gianluca