
Thank you Sahina,

Yes, I set up my gluster volume outside oVirt, and as you guessed my bricks are identified by name, not by IP:

[RDKVM1][root@rdkvm1 ~]# gluster volume info gluvol2

Volume Name: gluvol2
Type: Replicate
Volume ID: 3c1e9936-4cce-4a21-83f3-ac8611348484
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: rdkvm1-data:/data/sdc1/gluvol2
Brick2: rdkvm2-data:/data/sdc1/gluvol2
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
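For reference, a quick way to double-check on the hypervisors themselves what the brick hostnames resolve to, and how the peers see each other, would be something like this (only a suggestion, I have not pasted the output here):

# run on each hypervisor
getent hosts rdkvm1-data rdkvm2-data
gluster peer status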
From the engine host:
[RDHEAD1][root@rdhead1 ~]# nslookup rdkvm1-data
Server:         10.3.231.123
Address:        10.3.231.123#53

Name:   rdkvm1-data.rd-connect.eu
Address: 10.3.10.5

[RDHEAD1][root@rdhead1 ~]# nslookup rdkvm2-data
Server:         10.3.231.123
Address:        10.3.231.123#53

Name:   rdkvm2-data.rd-connect.eu
Address: 10.3.10.6

[RDHEAD1][root@rdhead1 ~]# host rdkvm1-data
rdkvm1-data.rd-connect.eu has address 10.3.10.5

Note that the engine does not have a connection to the 10.3.10.x network, even though it resolves the names.

The system is in production. Is there a safe way to test the patch under these circumstances?

Best regards,
Felip M

--
Felip Moll Marquès
Computer Science Engineer
E-Mail - lipixx@gmail.com
WebPage - http://lipix.ciutadella.es

2016-10-21 9:20 GMT+02:00 Sahina Bose <sabose@redhat.com>:
Hi!
Looks like you have set up your gluster volume outside of oVirt, and your bricks are identified via "rdkvm1-data" and "rdkvm2-data" rather than the IP addresses associated with the gluster network (10.3.10.5, 10.3.10.6). What does "gluster volume info gluvol2" return?
Currently we cannot identify multiple FQDNs for a host and resolve them to the correct network. There was a patch - https://gerrit.ovirt.org/#/c/60083/ - can you review whether it will solve your use case? Will rdkvm1-data be resolvable from the engine?
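To check that, something along these lines from the engine host should be enough (just a sketch; the hostname and address are taken from this thread):

# name resolution from the engine host
getent hosts rdkvm1-data
# reachability towards the gluster network (expected to fail if the engine
# has no interface or route on 10.3.10.x)
ping -c1 -W2 10.3.10.5
ip route get 10.3.10.5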
On Mon, Oct 10, 2016 at 3:58 PM, Felip Moll <lipixx@gmail.com> wrote:
Hello,
I have the latest version of oVirt 4 installed on CentOS 7: 2 hypervisor nodes (rdkvm[1-2]) and 1 ovirt-engine node (rdhead1).
I receive the following warnings in the logs despite having the gluster network set up. Everything else is running fine.
2016-10-10 12:24:02,825 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler7) [5a52dffe] Could not associate brick 'rdkvm1-data:/data/sdb1/gluvol1' of volume '47e45087-1a07-4790-9d30-77edbefa5f2e' with correct network as no gluster network found in cluster '57f387dc-0315-020b-019c-0000000000e6'
2016-10-10 12:24:02,831 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler7) [5a52dffe] Could not associate brick 'rdkvm2-data:/data/sdb1/gluvol1' of volume '47e45087-1a07-4790-9d30-77edbefa5f2e' with correct network as no gluster network found in cluster '57f387dc-0315-020b-019c-0000000000e6'
2016-10-10 12:24:02,838 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler7) [5a52dffe] Could not associate brick 'rdkvm1-data:/data/sdc1/gluvol2' of volume '3c1e9936-4cce-4a21-83f3-ac8611348484' with correct network as no gluster network found in cluster '57f387dc-0315-020b-019c-0000000000e6'
2016-10-10 12:24:02,844 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler7) [5a52dffe] Could not associate brick 'rdkvm2-data:/data/sdc1/gluvol2' of volume '3c1e9936-4cce-4a21-83f3-ac8611348484' with correct network as no gluster network found in cluster '57f387dc-0315-020b-019c-0000000000e6'
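These warnings reappear periodically. To see how often, a simple count against the engine log works (default log location assumed):

grep -c 'Could not associate brick' /var/log/ovirt-engine/engine.log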
My ovirt-engine node is not connected to the gluster network directly. The glusterNetwork is defined in the cluster and is attached to the bond0 interface of rdkvm1 and rdkvm2.
[RDHEAD1][root@rdhead1 ~]# su - postgres -c "psql -d engine -c \"SELECT id,name,type,addr,subnet,vlan_id,storage_pool_id FROM network\""
                  id                  |      name      | type | addr | subnet | vlan_id |           storage_pool_id
--------------------------------------+----------------+------+------+--------+---------+--------------------------------------
 00000000-0000-0000-0000-000000000009 | ovirtmgmt      |      |      |        |         | 57f387dc-0389-0252-01f4-000000000316
 628ed584-feaf-4952-908c-5b2d654c0731 | glusterNetwork |      |      |        |         | 57f387dc-0389-0252-01f4-000000000316
 965b2aeb-1230-4c24-9ca2-e815187372a9 | BMCNetwork     |      |      |        |         | 57f387dc-0389-0252-01f4-000000000316
(3 rows)
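The gluster role assignment itself is stored per cluster in the network_cluster table, so it may also be worth checking there (the is_gluster column name is my assumption from the engine schema, please correct me if it differs):

su - postgres -c "psql -d engine -c \"SELECT network_id, cluster_id, status, is_gluster FROM network_cluster\""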
[RDHEAD1][root@rdhead1 ~]# su - postgres -c "psql -d engine -c \"SELECT volume_id,status,network_id FROM gluster_volume_bricks\""
              volume_id               | status | network_id
--------------------------------------+--------+------------
 3c1e9936-4cce-4a21-83f3-ac8611348484 | UP     |
 3c1e9936-4cce-4a21-83f3-ac8611348484 | UP     |
 47e45087-1a07-4790-9d30-77edbefa5f2e | UP     |
 47e45087-1a07-4790-9d30-77edbefa5f2e | UP     |
(4 rows)
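The address the engine detected for each brick should also be stored next to network_id; a query along these lines would show it (brick_dir and network_address are my guesses for the column names, based on the sync messages further below, so adjust if your schema differs):

su - postgres -c "psql -d engine -c \"SELECT brick_dir, network_address, network_id FROM gluster_volume_bricks\""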
I tried to manually attach the network_id to the volume bricks, but after a while it gets emptied:
[RDHEAD1][root@rdhead1 squid-in-a-can]# su - postgres -c "psql -d engine -c \"update gluster_volume_bricks set network_id='628ed584-feaf-4952-908c-5b2d654c0731' \""
UPDATE 4
Look:
2016-10-10 12:27:28,595 INFO [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler2) [33756067] Network address for brick '10.3.10.5:/data/sdc1/gluvol2' detected as 'rdkvm1-data'. Updating engine DB accordingly.
2016-10-10 12:27:28,607 INFO [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler2) [33756067] Network address for brick '10.3.10.6:/data/sdc1/gluvol2' detected as 'rdkvm2-data'. Updating engine DB accordingly.
2016-10-10 12:27:28,619 INFO [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler2) [33756067] Network address for brick '10.3.10.5:/data/sdb1/gluvol1' detected as 'rdkvm1-data'. Updating engine DB accordingly.
2016-10-10 12:27:28,623 INFO [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler2) [33756067] Network address for brick '10.3.10.6:/data/sdb1/gluvol1' detected as 'rdkvm2-data'. Updating engine DB accordingly.
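Presumably this periodic sync (GlusterSyncJob) is what clears the manual UPDATE again: on each run it re-detects the brick addresses and rewrites the network association from scratch. Watching it live is easy enough (default engine log location assumed):

tail -f /var/log/ovirt-engine/engine.log | grep -E 'GlusterSyncJob|GlusterVolumesListReturn'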
How can I solve this situation?
Thank you,
Felip M
--
Felip Moll Marquès
Computer Science Engineer
E-Mail - lipixx@gmail.com
WebPage - http://lipix.ciutadella.es

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users