[ovirt-users] Dedicated NICs for gluster network

Ramesh Nachimuthu rnachimu at redhat.com
Mon Aug 22 06:10:37 UTC 2016



On 08/22/2016 11:24 AM, Sahina Bose wrote:
>
>
> On Fri, Aug 19, 2016 at 6:20 PM, Nicolas Ecarnot <nicolas at ecarnot.net 
> <mailto:nicolas at ecarnot.net>> wrote:
>
>     On 19/08/2016 at 13:43, Sahina Bose wrote:
>>
>>>         Or are you adding the 3 nodes to your existing cluster? If
>>>         so, I suggest you try adding this to a new cluster
>>         OK, I tried and succeeded in creating a new cluster.
>>         In this new cluster, I was ABLE to add the first new host,
>>         using its mgmt DNS name.
>>         This first host still needs its NICs configured, but (in both
>>         Chrome and Firefox) opening the network settings window stalls
>>         the browser (I even tried restarting the engine, to no
>>         avail). Thus, I cannot set up this first node's NICs.
>>
>>         Thus, I cannot add any further hosts, because oVirt relies
>>         on the first host to validate the subsequent ones.
>>
>>
>>
>>     The network team should be able to help you here.
>>
>
>     OK, there was no way I could continue down that path (browser
>     crash), so I tried the following and succeeded:
>     - remove the newly created host and cluster
>     - create a new DATACENTER
>     - create a new cluster in this DC
>     - add the first new host: OK
>     - add the 2 other new hosts: OK
>
>     Now I can smoothly configure their NICs.
>
>     While doing all this, I saw that oVirt detected that a gluster
>     cluster and volume already existed, and integrated them.
>
>     Then, I was able to create a new storage domain in this new DC and
>     cluster, using one of the hosts' *gluster* FQDNs. It went nicely.
>
>     BUT, when viewing the volume tab and brick details, the displayed
>     brick names are the hosts' DNS names, and NOT the hosts' GLUSTER
>     DNS names.
>
>     This worries me, and it is confirmed by what I read in the logs:
>
>     2016-08-19 14:46:30,484 WARN
>     [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>     (DefaultQuartzScheduler_Worker-100) [107dc2e3] Could not associate
>     brick 'serv-vm-al04-data.sdis.isere.fr:/gluster/data/brick04
>     ' of volume '35026521-e76e-4774-8ddf-0a701b9eb40c' with correct
>     network as no gluster network found in cluster
>     '1c8e75a0-af3f-4e97-a8fb-2f7ef3ed9f30'
>     2016-08-19 14:46:30,492 WARN
>     [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>     (DefaultQuartzScheduler_Worker-100) [107dc2e3] Could not associate
>     brick 'serv-vm-al05-data.sdis.isere.fr:/gluster/data/brick04
>     ' of volume '35026521-e76e-4774-8ddf-0a701b9eb40c' with correct
>     network as no gluster network found in cluster
>     '1c8e75a0-af3f-4e97-a8fb-2f7ef3ed9f30'
>     2016-08-19 14:46:30,500 WARN
>     [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>     (DefaultQuartzScheduler_Worker-100) [107dc2e3] Could not associate
>     brick 'serv-vm-al06-data.sdis.isere.fr:/gluster/data/brick04
>     ' of volume '35026521-e76e-4774-8ddf-0a701b9eb40c' with correct
>     network as no gluster network found in cluster
>     '1c8e75a0-af3f-4e97-a8fb-2f7ef3ed9f30'
>
>     [oVirt shell (connected)]# list clusters
>
>     id         : 00000001-0001-0001-0001-000000000045
>     name       : cluster51
>     description: Cluster d'alerte de test
>
>     id         : 1c8e75a0-af3f-4e97-a8fb-2f7ef3ed9f30
>     name       : cluster52
>     description: Cluster d'alerte de test
>
>     [oVirt shell (connected)]#
>
>     "cluster52" is the recent cluster, and I do have a dedicated
>     gluster network, marked as a gluster network, in the correct DC
>     and cluster.
>     The only points are:
>     - Each host has its name ("serv-vm-al04") and a second name for
>     gluster ("serv-vm-al04-data").
>     - Using blahblahblah-data is correct from a gluster point of view.
>     - Maybe oVirt is confused at not being able to ping the gluster
>     FQDN (which is not routed), and is therefore throwing this error?
>
>
> We currently have a limitation: if you use multiple FQDNs, oVirt 
> cannot associate them with the gluster bricks correctly. This is a 
> problem only when you try brick management from oVirt, i.e. when you 
> remove or replace a brick from oVirt. For monitoring brick status and 
> detecting bricks this is not an issue, and you can ignore the error 
> in the logs.
>
> Adding Ramesh, who has a patch to fix this.

Patch https://gerrit.ovirt.org/#/c/60083/ has been posted to address this 
issue. But it will work only if the oVirt engine can resolve the FQDN 
*'serv-vm-al04-data.xx'* to an IP address mapped to the gluster NIC (the 
NIC carrying the gluster network) on the host.
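The requirement above boils down to one comparison: the address the engine resolves for the gluster FQDN must equal the address held by the host's gluster NIC. A minimal sketch of that check follows; all values in it are hypothetical examples, not addresses from this thread's actual network. In practice you would feed it the output of `getent hosts <gluster-fqdn>` on the engine and `ip -4 -o addr show dev <gluster-nic>` on the host.

```shell
# Sketch only: compare the engine's resolution of the gluster FQDN with
# the address on the host's gluster NIC. Returns 0 (success) when they
# match, non-zero otherwise.
gluster_name_ok() {
    resolved=$1   # e.g. from: getent hosts serv-vm-al04-data.example | awk '{print $1}'
    nic_addr=$2   # e.g. from: ip -4 -o addr show dev eth1 (address part, no prefix)
    [ -n "$resolved" ] && [ "$resolved" = "$nic_addr" ]
}

# Hypothetical correct setup: the name resolves to the gluster-subnet address.
gluster_name_ok 10.0.1.4 10.0.1.4 && echo "engine can associate the brick"

# Hypothetical broken setup: the name resolves to the management address,
# which is the situation the patch cannot work around.
gluster_name_ok 192.168.1.4 10.0.1.4 || echo "engine cannot associate the brick"
```

If DNS cannot carry the gluster names, an `/etc/hosts` entry on the engine mapping the `-data` FQDN to the gluster-subnet address achieves the same resolution.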

Sahina: Can you review the patch? :-)

Regards,
Ramesh

>     -- 
>     Nicolas ECARNOT
>
>


