On Thu, Mar 3, 2016 at 9:06 AM, Dan Kenigsberg <danken(a)redhat.com> wrote:
On Thu, Mar 03, 2016 at 12:54:25AM +0000, David LeVene wrote:
>
> Can you check our patches? They should resolve the problem we saw in the
> log: https://gerrit.ovirt.org/#/c/54237 (based on oVirt-3.6.3)
>
> -- I've manually applied the patch to the node that I was testing on
> and the networking comes online correctly - now I'm encountering a
> gluster issue with "cannot find master domain".
You are most welcome to share your logs (preferably on a different
thread, to avoid confusion).
>
> Without the fixes, as a workaround, I would suggest (if possible) to
> disable IPv6 on your host boot line and check if all works out for you.
> -- Ok, but as I can manually apply the patch it's good now. Do you know
> which version we are hoping to have this put into, as I won't perform an
> ovirt/vdsm update until it's part of the upstream RPMs?
The fix has been proposed to ovirt-3.6.4. I'll make sure it's accepted.
>
> Do you need IPv6 connectivity? If so, you'll need to use a vdsm hook or
> another interface that is not controlled by oVirt.
> -- Ideally I'd prefer not to have it, but the way our network has been
> configured, some hosts are IPv6 only, so at a minimum the guests need
> it; the hypervisors not so much.
May I ask about your IPv6 experience? (Only if you feel comfortable
sharing this publicly.) What do these IPv6-only servers do? What do the
guests do with them?
>
> -- I've now hit an issue with it not starting up the master storage
> gluster domain - as it's a separate issue I'll review the mailing
> lists & create a new item if it's related. I've attached the
> supervdsm.log in case you can save me some time and point me in the
> right direction!
All I see is this:

MainProcess|jsonrpc.Executor/4::ERROR::2016-03-03 11:15:04,699::supervdsmServer::118::SuperVdsm.ServerCallback::(wrapper) Error in wrapper
Traceback (most recent call last):
  File "/usr/share/vdsm/supervdsmServer", line 116, in wrapper
    res = func(*args, **kwargs)
  File "/usr/share/vdsm/supervdsmServer", line 531, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/cli.py", line 496, in volumeInfo
    xmltree = _execGlusterXml(command)
  File "/usr/share/vdsm/gluster/cli.py", line 108, in _execGlusterXml
    raise ge.GlusterCmdExecFailedException(rc, out, err)
GlusterCmdExecFailedException: Command execution failed
return code: 2
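For context, the failing code path seen in the traceback amounts to running the gluster CLI and raising when the return code is non-zero, so the CLI's rc=2 surfaces directly as the exception. A minimal illustrative sketch (simplified names, not vdsm's actual implementation):

```python
import subprocess


class GlusterCmdExecFailedException(Exception):
    """Simplified stand-in for vdsm's gluster exception class."""

    def __init__(self, rc, out, err):
        super().__init__("Command execution failed\nreturn code: %d" % rc)
        self.rc, self.out, self.err = rc, out, err


def exec_gluster(command):
    # Run the CLI; on a non-zero return code, raise with rc, stdout
    # and stderr attached, as _execGlusterXml does.
    p = subprocess.run(command, capture_output=True, text=True)
    if p.returncode != 0:
        raise GlusterCmdExecFailedException(p.returncode, p.stdout, p.stderr)
    return p.stdout
```

The point is that the exception carries no more detail than the CLI itself printed, which is why the log ends at "return code: 2".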
We have these logs before the exception:
MainProcess|jsonrpc.Executor/3::DEBUG::2016-03-03 11:02:42,945::utils::669::root::(execCmd) /usr/bin/taskset --cpu-list 0-39 /usr/sbin/gluster --mode=script volume info --remote-host=ovirtmount.test.lab data --xml (cwd None)
The command looks correct.
MainProcess|jsonrpc.Executor/3::DEBUG::2016-03-03 11:02:43,024::utils::687::root::(execCmd) FAILED: <err> = '\n'; <rc> = 2

The gluster command line failed in an unhelpful way.
(Adding Sahina)
David, can you try running this command manually on this host? Maybe
there is some --verbose flag revealing more info?
You may also try a simpler command:
gluster volume info --remote-host=ovirtmount.test.lab data
Another issue you should check: the gluster version on the hosts and on
the gluster nodes *must* match - otherwise you should expect failures
accessing the gluster server.
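In case it helps, that check can be scripted; a rough sketch (the
"gluster-node" hostname and the version strings below are placeholders):

```shell
# Compare the version string collected from each side. The real strings
# would come from something like:
#   local:  gluster --version | head -1
#   remote: ssh gluster-node 'gluster --version | head -1'
same_version() {
    if [ "$1" = "$2" ]; then echo OK; else echo MISMATCH; fi
}

same_version "glusterfs 3.7.8" "glusterfs 3.7.8"   # prints OK
same_version "glusterfs 3.7.8" "glusterfs 3.6.9"   # prints MISMATCH
```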
We have this patch for handling such errors gracefully - can you test it?
https://gerrit.ovirt.org/53785
(Adding Ala)
Nir