On Fri, Aug 19, 2016 at 12:29 PM, Nicolas Ecarnot <nicolas(a)ecarnot.net>
wrote:
Hello,
I'm digging out this thread because I now had the time to work on this
subject, and I'm stuck.
This oVirt setup has a standalone engine, and 3 hosts.
These 3 hosts are hypervisors and gluster nodes, each using a single NIC for
all the traffic, which is a very bad idea. (Well, it's working, but not
recommended.)
I added 3 OTHER nodes, and so far, I only created the gluster setup and
created a replica-3 volume.
Each of these new nodes now has one NIC for management, one NIC for
gluster, and other NICs for other things.
Each NIC has an IP + DNS name in its dedicated VLAN: one for mgmt and one
for gluster.
The mgmt subnet is routed, while the gluster subnet is not.
The nodes can all ping each other, on both the mgmt and the gluster
subnets.
The creation of the gluster subnet and volume went very well and seems to
be perfect.
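For context, the setup amounted to something like the following (the
hostnames and brick paths below are made-up illustrations, not my actual
names):

```shell
# Probe peers over the gluster-dedicated names (run from the first node).
gluster peer probe node2-gl.example.lan
gluster peer probe node3-gl.example.lan

# Create and start a replica-3 volume on the gluster network.
gluster volume create data replica 3 \
    node1-gl.example.lan:/gluster/brick1/data \
    node2-gl.example.lan:/gluster/brick1/data \
    node3-gl.example.lan:/gluster/brick1/data
gluster volume start data
```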
Now, in the oVirt web gui, I'm trying to add these nodes as oVirt hosts.
I'm using their mgmt DNS names, and I'm getting :
"Error while executing action: Server xxxxxxxx is already part of another
cluster."
Did you peer probe the gluster cluster prior to adding the nodes to oVirt?
What's the output of "gluster peer status"?
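On a healthy 3-node setup, running it on the first node should list the two
other peers by their gluster-network hostnames, along the lines of the
following sketch (hostnames hypothetical, UUIDs elided):

```shell
gluster peer status
# Number of Peers: 2
#
# Hostname: node2-gl.example.lan
# Uuid: <uuid>
# State: Peer in Cluster (Connected)
#
# Hostname: node3-gl.example.lan
# Uuid: <uuid>
# State: Peer in Cluster (Connected)
```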
If I understand correctly:
node1 - mgmt.ip.1 & gluster.ip.1
node2 - mgmt.ip.2 & gluster.ip.2
node3 - mgmt.ip.3 & gluster.ip.3
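If so, each node would also need to resolve the gluster-side names locally,
e.g. with /etc/hosts entries like these (addresses and names are made up for
illustration):

```
# /etc/hosts on every node (hypothetical gluster-VLAN addresses)
10.0.10.1  node1-gl.example.lan   # gluster VLAN, not routed
10.0.10.2  node2-gl.example.lan
10.0.10.3  node3-gl.example.lan
```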
Did you create a network and assign "gluster" role to it in the cluster?
Were you able to add the first node to the cluster, and got this error when
adding the second node?
From the error, it looks like oVirt does not recognize that the peer list
returned by gluster matches the node being added.
Please provide the log snippet of the failure (from engine.log as well as
vdsm.log on the node).
Googling turned up nothing, except something related to gluster (you bet!)
suggesting this may be due to the fact that there is already a volume,
managed under a different name.
Obviously, using a different name and IP is what I needed!
I used "transport.socket.bind-address" to make sure the gluster traffic
only uses the dedicated NICs.
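For readers finding this thread later: as far as I know this option is set
in glusterd's own config rather than per volume, roughly like this (the
address is an example; each node binds to its own gluster-VLAN IP, then
glusterd is restarted):

```
# /etc/glusterfs/glusterd.vol (excerpt, hypothetical address)
volume management
    ...
    option transport.socket.bind-address 10.0.10.1
end-volume
```

followed by "systemctl restart glusterd" on each node.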
Well, I also tried to create a storage domain relying on the freshly
created gluster volume, but as this subnet is not routed, it is reachable
neither from the manager nor from the existing SPM.
The existing SPM - isn't it one of the 3 new nodes being added? Or are
you adding the 3 nodes to your existing cluster? If so, I suggest you try
adding them to a new cluster.
I feel I'm missing something here, so your help is warmly welcome.
Nicolas ECARNOT
PS : CentOS 7.2 everywhere, oVirt 3.6.7
On 27/11/2015 at 20:00, Ivan Bulatovic wrote:
> Hi Nicolas,
>
> what works for me in 3.6 is creating a new network for gluster within
> oVirt, marking it for gluster use only, optionally setting up a bonded
> interface on NICs dedicated to gluster traffic, giving it an IP address
> without configuring a gateway, and then modifying /etc/hosts so that
> hostnames are resolvable between nodes.
> Every node should have two hostnames, one for ovirtmgmt network that is
> resolvable via DNS (or via /etc/hosts), and the other for gluster
> network that is resolvable purely via /etc/hosts (every node should
> contain entries for themselves and for each gluster node).
>
> Peers should be probed via their gluster hostnames, while ensuring that
> gluster peer status contains only addresses and hostnames that are
> dedicated for gluster on each node. Same goes for adding bricks,
> creating a volume etc.
>
> This way, no traffic other than gluster's should be allowed through the
> gluster-dedicated VLAN. To be on the safe side, we can also force gluster
> to listen only on dedicated interfaces via the
> transport.socket.bind-address option (haven't tried this one, will do).
>
> Separation of gluster (or in the future any storage network), live
> migration network, vm and management network is always a good thing.
> Perhaps we could manage failover of those networks within oVirt, i.e. in
> case the LM network is down, use the gluster network for LM and vice
> versa. A cool candidate for an RFE, but first we need this supported
> within gluster itself. This may prove useful when there are not enough
> NICs available to build a bond beneath every defined network. But we can
> still separate traffic and provide failover by selecting multiple
> networks without actually doing any load balancing between the two.
>
> As Nathanaël mentioned, marking a network for gluster use is only
> available in 3.6. I'm also interested if there is a better way around
> this procedure, or perhaps a way of enhancing it.
>
> Kind regards,
>
> Ivan
>
> On 11/27/2015 05:47 PM, Nathanaël Blanchet wrote:
>
>> Hello Nicolas,
>>
>> Did you have a look at this:
>> http://www.ovirt.org/Features/Select_Network_For_Gluster ?
>> But it is only available from >=3.6...
>>
>> On 27/11/2015 at 17:02, Nicolas Ecarnot wrote:
>>
>>> Hello,
>>>
>>> [Here : oVirt 3.5.3, 3 x CentOS 7.0 hosts with replica-3 gluster SD
>>> on the hosts].
>>>
>>> On the switches, I have created a dedicated VLAN to isolate the
>>> glusterFS traffic, but I'm not using it yet.
>>> I was thinking of creating a dedicated IP for each node's gluster
>>> NIC, plus a DNS record ("my_nodes_name_GL"), but I fear that using
>>> this hostname or this IP in the oVirt GUI host network interface
>>> tab would lead oVirt to think this is a different host.
>>>
>>> Not being sure this fear is clearly described, let's say:
>>> - On each node, I create a second IP (+ DNS record in the SOA) used by
>>> gluster, plugged into the correct VLAN
>>> - In the oVirt GUI, in the host network settings tab, the interface
>>> will be seen, with its IP, but reverse-DNS-related to a different
>>> hostname. Here, I fear oVirt might check this reverse DNS and declare
>>> that this NIC belongs to another host.
>>>
>>> I would also prefer not to use a reverse record pointing to the name
>>> of the host management IP, as this is evil and I'm a good guy.
>>>
>>> On your side, how do you cope with a dedicated storage network in
>>> case of storage+compute mixed hosts?
>>>
>>>
>>
>
--
Nicolas ECARNOT
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users