[Users] Gluster network info

Gianluca Cecchi gianluca.cecchi at gmail.com
Sun Oct 6 19:03:24 UTC 2013


On Fri, Oct 4, 2013 at 12:15 PM, Sahina Bose wrote:

>
> I believe it should. When you add host to engine you would use node01.
> Please let us know how it goes,
>
> thanks again
> sahina

I tried it and it works.
I was able to keep my data too.
I stopped the only VM I had created and put both hosts in maintenance.

Then from f18ovn01 (with 10.4.4.58 on ovirtmgmt and 192.168.3.1 on the new
replication network):
please note the f18 in the name, but it is actually Fedora 19...

gluster volume stop gvdata
gluster volume delete gvdata
gluster peer detach 10.4.4.59
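
A quick sanity check at this point (just a suggestion, not part of my
original steps) is to confirm the peer list is empty again before
re-probing on the new network:

gluster peer status
# should report "Number of Peers: 0" after the detach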

Changed /etc/hosts on both hosts and activated another interface with
192.168.3.1 and 192.168.3.3.
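
For reference, the idea of the /etc/hosts change is more or less this (a
simplified sketch; the real files of course also keep the usual localhost
and ovirtmgmt entries):

192.168.3.1   f18ovn01.mydomain
192.168.3.3   f18ovn03.mydomain

This way the names gluster uses resolve on the 192.168.3.x replication
network, while the engine keeps reaching the hosts on 10.4.4.x.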

To clear the data of the previous gluster volume and reuse the brick with
the same values, on both hosts:
setfattr -x trusted.glusterfs.volume-id /gluster/DATA_GLUSTER/brick1
setfattr -x trusted.gfid /gluster/DATA_GLUSTER/brick1
rm -rf /gluster/DATA_GLUSTER/brick1/.glusterfs
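
To verify the brick is really clean (just a check, not strictly
required):

getfattr -d -m . -e hex /gluster/DATA_GLUSTER/brick1
# trusted.glusterfs.volume-id and trusted.gfid should no longer be listed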

gluster peer probe f18ovn03.mydomain

Then I redid the manual steps to create the gvdata volume with the same
settings as before.
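
Roughly along these lines (a sketch from memory, adapt the options to
whatever you had set originally; the virt group and the 36/36 owner are
the usual oVirt settings):

gluster volume create gvdata replica 2 \
  f18ovn01.mydomain:/gluster/DATA_GLUSTER/brick1 \
  f18ovn03.mydomain:/gluster/DATA_GLUSTER/brick1
gluster volume set gvdata group virt
gluster volume set gvdata storage.owner-uid 36
gluster volume set gvdata storage.owner-gid 36
gluster volume start gvdata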

From the oVirt webadmin I activated both hosts and everything came up
transparently.

I was able to start the c6s VM (only via Run Once, due to the bug...)

I ran a wget of a Fedora 19 netinst.iso from inside the VM, and during the
transfer (the VM is running on this f18ovn01 node):

[root@f18ovn01 ]#  bwm-ng v0.6 (probing every 1.000s), press 'h' for help
  input: libstatnet type: rate
  |         iface                   Rx                   Tx                Total
  ==============================================================================
        ovirtmgmt:           0.19 KB/s            0.24 KB/s            0.43 KB/s
           enp3s0:         214.34 KB/s        28794.59 KB/s        29008.93 KB/s
  ------------------------------------------------------------------------------
            total:         214.53 KB/s        28794.82 KB/s        29009.35 KB/s
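
So the storage traffic is indeed flowing over enp3s0 (the 192.168.3.x
interface) and not over ovirtmgmt. If you want another confirmation,
something like this on the host shows which endpoints the gluster
processes are using (just a suggestion):

gluster volume status gvdata
ss -tnp | grep gluster
# the established connections should show the 192.168.3.1/192.168.3.3 addresses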

I also restarted the ovirt-engine service on the engine.
Note that from the engine point of view, when I go into Volumes --> gvdata
--> Bricks, I still see 10.4.4.58 and 10.4.4.59, probably because for the
engine those are the IP addresses of the f18ovn01 and f18ovn03 nodes...
I don't know whether it stores the gluster data anywhere in the db or only
fetches them dynamically.
Gianluca


