On 2/1/14, 5:09 PM, "Dan Kenigsberg" <danken@redhat.com> wrote:
On Fri, Jan 31, 2014 at 01:26:20PM -0500, Matt Warren wrote:
> Any troubleshooting ideas for vdsm reporting an UNKNOWN version when
> 4.13 is clearly installed?
>
> >>> Host x is installed with VDSM version (<UNKNOWN>) and cannot join
> >>> cluster Default which is compatible with VDSM versions [4.13, 4.9,
> >>> 4.11, 4.12, 4.10].
> >>> From some Google searches, the suggestion was to first run
> >>> "vdsClient -s 0 getVdsCaps". This returns "Failed to initialize
> >>> storage" for any vdsClient command.
Engine cannot tell which Vdsm is installed, since there's a serious
problem there - so serious that Vdsm cannot report its own version.
Could you restart Vdsm and follow vdsm.log and supervdsm.log for hints
on why the storage subsystem fails to start?
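(For anyone following along, that amounts to something like the
following on an EL6-era oVirt host; the service name and log paths are
the stock defaults, so adjust for your distribution:)

    # Restart vdsm and watch both logs for the first traceback:
    service vdsmd restart
    tail -f /var/log/vdsm/vdsm.log /var/log/vdsm/supervdsm.log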
Dan, thanks for the reply. I ended up going down this chain of
reasoning on Friday. I discovered vdsm.log and saw that vdsm was having
sudo problems. Those were likely caused by my puppet agent stepping on
whatever setup vdsm had done to sudoers on install. I repaired what I
could from the sudoers changes in the ovirt git repo (though not an
exact copy, as that file looks to have a number of paths substituted at
build time).
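A quick way to sanity-check a repaired sudoers looks roughly like this
(the drop-in path is vdsm's usual default, and the commented rule is
only illustrative of the file's shape, not its exact contents):

    # Validate overall sudoers syntax after editing:
    visudo -c
    # List what the vdsm user is actually allowed to run via sudo:
    sudo -l -U vdsm
    # vdsm's rules live in a drop-in whose entries look roughly like
    #   vdsm  ALL=(ALL) NOPASSWD: /sbin/multipath, /sbin/lvm, ...
    cat /etc/sudoers.d/50_vdsm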
VDSM was then able to run cleanly, and oVirt could detect the version
and carry on.
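As a final check, the same vdsClient call from the original post should
now answer instead of failing (the grep is just to pick the version
fields out of the capability dump):

    # getVdsCaps reports software_version once storage initializes:
    vdsClient -s 0 getVdsCaps | grep -i version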