[Users] oVirt October Updates
Gianluca Cecchi
gianluca.cecchi at gmail.com
Tue Oct 22 10:14:36 UTC 2013
On Mon, Oct 21, 2013 at 11:33 AM, Vijay Bellur wrote:
> The following commands might help:
Thanks for the commands.
In the meantime I noticed that when I changed the gluster sources to
allow live migration, offsetting the brick ports to 50152+
as in http://review.gluster.org/#/c/6075/
(referred to in
https://bugzilla.redhat.com/show_bug.cgi?id=1018178),
I missed updating the iptables rules, which contained:
# Ports for gluster volume bricks (default 100 ports)
-A INPUT -p tcp -m tcp --dport 24009:24108 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 49152:49251 -j ACCEPT
So I had some gluster communication problems too.
I have updated them for now to:
# Ports for gluster volume bricks (default 100 ports)
-A INPUT -p tcp -m tcp --dport 24009:24108 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 49152:49251 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 50152:50251 -j ACCEPT
and will keep this until libvirt for Fedora 19 gets patched as in
upstream (a quick test rebuilding libvirt-1.0.5.6-3.fc19.src.rpm with
the proposed patches gave many failures, and in the meantime I saw
that some other parts need to be patched as well...).
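As a sanity check (just a sketch, assuming a replicated volume named
gv0; adjust the name), the ports the bricks actually listen on can be
compared against the ranges above:

# Shows the TCP port each brick process is using for the volume
gluster volume status gv0
# Lists the listening glusterfsd sockets directly on a node
ss -tlnp | grep glusterfsd
# Shows the ACCEPT rules currently loaded, with their port ranges
iptables -L INPUT -n --line-numbers | grep dpts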
I'm going to retest completely and see how it behaves.
See here for my test case and the beginning of the problem simulation:
http://lists.ovirt.org/pipermail/users/2013-October/017228.html
It represents a realistic maintenance scenario that forced a manual
realignment on gluster, which is in my opinion not feasible...
Eventually I will file a bugzilla if the new test with the correct
iptables ports still gives problems.
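For context, the manual realignment I mean is along these lines (just
a sketch; VOLNAME is a placeholder):

# Trigger a full self-heal crawl on the volume
gluster volume heal VOLNAME full
# Then check that the list of entries still needing heal shrinks
gluster volume heal VOLNAME info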
>
>> I would like oVirt to be more aware of this and have some sort of
>> capability to resolve by itself the misalignments generated on the
>> gluster backend during maintenance of a node.
>> At the moment it seems to me it only shows that volumes are OK in
>> the sense of being started, but their contents could be very different...
>> For example, another tab with heal details; something like
>> the output of the command
>>
>> gluster volume heal $VOLUME info
>>
>> and/or
>>
>> gluster volume heal $VOLUME info split-brain
>
>
> Yes, we are looking to build this for monitoring replicated gluster volumes.
Good!
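In the meantime, something as simple as this could be run periodically
as a stopgap (a rough sketch, not integrated with oVirt; it loops over
all volumes using gluster volume list):

# Print heal and split-brain status for every gluster volume
for v in $(gluster volume list); do
    echo "== $v =="
    gluster volume heal $v info
    gluster volume heal $v info split-brain
done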
Gianluca