[ovirt-users] GetGlusterVolumeAdvancedDetailsQuery & GetGlusterVolumeProfileInfoQuery when using separate storage network
Sahina Bose
sabose at redhat.com
Thu May 7 10:07:21 UTC 2015
On 05/07/2015 01:34 PM, Jorick Astrego wrote:
>
>
> On 05/06/2015 08:15 PM, knarra wrote:
>> On 05/06/2015 11:22 PM, Jorick Astrego wrote:
>>>
>>>
>>> On 05/06/2015 06:24 PM, knarra wrote:
>>>> On 05/06/2015 06:59 PM, Jorick Astrego wrote:
>>>>>
>>>>>
>>>>> On 05/06/2015 02:49 PM, knarra wrote:
>>>>>> On 05/06/2015 05:33 PM, Jorick Astrego wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> Looking forward to BZ 1049994, "Allow choosing network
>>>>>>> interface for gluster domain traffic".
>>>>>>>
>>>>>>> Currently I have the bricks on a different storage network and
>>>>>>> can't get the volume details or profile the volume.
>>>>>>>
>>>>>>> Will this be handled properly in 3.6? I don't see any changes in
>>>>>>> Gerrit regarding this, but I may be overlooking them.
>>>>>>>
>>>>>>> The errors I get currently:
>>>>>>>
>>>>>>> Could not fetch brick profile stats
>>>>>>>
>>>>>>> 2015-05-06 10:34:22,430 ERROR
>>>>>>> [org.ovirt.engine.core.bll.gluster.GetGlusterVolumeProfileInfoQuery]
>>>>>>> (ajp--127.0.0.1-8702-27) Query
>>>>>>> GetGlusterVolumeProfileInfoQuery failed. Exception message
>>>>>>> is null : java.lang.NullPointerException:
>>>>>>> java.lang.NullPointerException
>>>>>>>
>>>>>>> and
>>>>>>>
>>>>>>> Error in fetching the brick details, please try again.
>>>>>>>
>>>>>>> 2015-05-06 10:36:14,205 ERROR
>>>>>>> [org.ovirt.engine.core.bll.gluster.GetGlusterVolumeAdvancedDetailsQuery]
>>>>>>> (ajp--127.0.0.1-8702-55) Query
>>>>>>> GetGlusterVolumeAdvancedDetailsQuery failed. Exception
>>>>>>> message is VdcBLLException: Volume status failed
>>>>>>> error: Staging failed on *.*.*.*. Error: No brick
>>>>>>> glustertest1.netbulae.test:/gluster/brick1 in volume data
>>>>>>> Staging failed on *.*.*.*. Error: No brick
>>>>>>> glustertest1.netbulae.test:/gluster/brick1 in volume data
>>>>>>> return code: -1 (Failed with error GlusterVolumeStatusFailed
>>>>>>> and code 4157) :
>>>>>>> org.ovirt.engine.core.common.errors.VdcBLLException:
>>>>>>> VdcBLLException: Volume status failed
>>>>>>> error: Staging failed on *.*.*.*. Error: No brick
>>>>>>> glustertest1.netbulae.test:/gluster/brick1 in volume data
>>>>>>> Staging failed on *.*.*.*. Error: No brick
>>>>>>> glustertest1.netbulae.test:/gluster/brick1 in volume data
>>>>>>> return code: -1 (Failed with error GlusterVolumeStatusFailed
>>>>>>> and code 4157):
>>>>>>> org.ovirt.engine.core.common.errors.VdcBLLException:
>>>>>>> VdcBLLException: Volume status failed
>>>>>>> error: Staging failed on *.*.*.*. Error: No brick
>>>>>>> glustertest1.netbulae.test:/gluster/brick1 in volume data
>>>>>>> Staging failed on *.*.*.*. Error: No brick
>>>>>>> glustertest1.netbulae.test:/gluster/brick1 in volume data
>>>>>>> return code: -1 (Failed with error GlusterVolumeStatusFailed
>>>>>>> and code 4157)
>>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Can you please check what 'gluster peer status' returns on each
>>>>>> of your nodes? I suspect they are in the Disconnected state, and
>>>>>> that is why you are not able to view these details.
>>>>>>
>>>>>> Thanks
>>>>>> kasturi
>>>>>>
>>>>> On the nodes it gives me the following:
>>>>>
>>>>> gluster peer status
>>>>> Connection failed. Please check if gluster daemon is operational.
>>>>>
>>>> This means that glusterd is not running on this node. You can
>>>> check the status of glusterd by running the command 'service
>>>> glusterd status'.
>>>>
>>>> Please start glusterd by running the command 'service glusterd
>>>> start' on both of your nodes.
>>>>
>>>> Ideally, when glusterd goes down, the node should move to
>>>> Non-Operational in oVirt. Because of BZ 1207150, it currently does
>>>> not change state to Non-Operational.
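>>>>
>>>> Putting those commands together (SysV-style init, as on these
>>>> nodes; on systemd-based hosts 'systemctl status glusterd' /
>>>> 'systemctl start glusterd' would be the equivalent):
>>>>
>>>>     service glusterd status   # is the daemon running?
>>>>     service glusterd start    # start it if it is not
>>>>     gluster peer status       # peers should now report Connected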
>>>
>>> There is no glusterd on the compute nodes in our setup; we have two
>>> clusters: one for virt hosts only and one for GlusterFS only.
>>>
>>>
>>> Like I said, everything is Up and running fine. It's just that I
>>> can't get the stats, because the hostname does not match the
>>> GlusterFS NIC's IP.
>>>
>>>
>>>>>
>>>>>
>>>>> But everything is up, and oVirt found the manually configured
>>>>> volume perfectly. However, the hostname it lists,
>>>>> glustertest1.netbulae.test, is not what my volume uses for
>>>>> communication, as I created the volume using the IPs of the
>>>>> storage network.
>>>>>
>>>>>
>>>>> gluster peer status
>>>>> Number of Peers: 2
>>>>>
>>>>> Hostname: 10.1.1.3
>>>>> Uuid: 1cc0875e-1699-42ae-aed2-9152667ed5af
>>>>> State: Peer in Cluster (Connected)
>>>>>
>>>>> Hostname: 10.1.1.2
>>>>> Uuid: a0b3ac13-7388-441a-a238-1deb023cab6c
>>>>> State: Peer in Cluster (Connected)
>>>>>
>>>> Did you import an already existing cluster?
>>>
>>> No, I provisioned the nodes, added them to our GlusterFS cluster
>>> (with the virt service disabled) and created the volume manually.
>>>
>>> oVirt auto-discovered the manually created volume after that.
>>>
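>>> For illustration, the volume was created along these lines (the
>>> replica count and the per-node storage addresses are placeholders
>>> here; the volume name and brick path are the ones from the errors):
>>>
>>>     gluster volume create data replica 3 \
>>>         10.1.1.1:/gluster/brick1 \
>>>         10.1.1.2:/gluster/brick1 \
>>>         10.1.1.3:/gluster/brick1
>>>     gluster volume start data
>>>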
>>> Error: No brick *glustertest1.netbulae.test*:/gluster/brick1 in
>>> volume data
>>>
>>> Hostname: *10.1.1.3*
>>>
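>>> The mismatch is easy to reproduce by hand (which storage IP maps to
>>> glustertest1 is an assumption; 10.1.1.1 is used to match the sketch
>>> above):
>>>
>>>     # what oVirt asks about -- fails with 'No brick ... in volume data'
>>>     gluster volume status data glustertest1.netbulae.test:/gluster/brick1
>>>
>>>     # what the volume actually knows the brick as
>>>     gluster volume status data 10.1.1.1:/gluster/brick1
>>>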
>>> Things should work better in 3.6 (BZ 1049994), but I don't see any
>>> code changes to "GetGlusterVolumeProfileInfoQuery" linked to this in
>>> Bugzilla.
>>>
>>
>> Hi Jorick,
>>
>> For more information on using a separate storage network, please
>> refer to the following feature page:
>>
>> http://www.ovirt.org/Features/Select_Network_For_Gluster
>>
>> Thanks
>> kasturi.
>
>
> I don't think you understand completely; the page you refer to is for
> 3.6. I'm running 3.5.2, without this feature.
>
> The question was: will this get fixed along with the changes in 3.6?
> I don't see any code changes referring to the statistics/profiling of
> the gluster volume.
>
> It will take less time to test 3.6 and find out than to keep trying
> to explain it ;-p
Yes, it should, as we have changed the way we identify the brick name
based on whether a different network was used to add the brick.
Earlier, the brick name was blindly assumed to be <hostname>:<mount
path to directory>. Now, the IP address of the interface is used to
identify the brick if a different interface was used.
We have yet to test all the flows, but theoretically it should work :)
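
To make the change concrete (a sketch, not the exact engine code; the
names, paths and the address for glustertest1 follow the examples in
the thread above): earlier, the engine would always query the brick as

    gluster volume status data glustertest1.netbulae.test:/gluster/brick1

regardless of how the brick was added, which is exactly the query that
fails in your logs. With the change, a brick added over a separate
network interface is identified by that interface's address instead,
e.g.

    gluster volume status data 10.1.1.1:/gluster/brick1

so the status and profile queries match the name gluster itself knows
the brick by.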
>
> Met vriendelijke groet, With kind regards,
>
> Jorick Astrego
>
> Netbulae Virtualization Experts
> ------------------------------------------------------------------------
> Tel: 053 20 30 270    info at netbulae.eu    Staalsteden 4-3A    KvK 08198180
> Fax: 053 20 30 271    www.netbulae.eu     7547 TA Enschede    BTW NL821234584B01
> ------------------------------------------------------------------------
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users