[ovirt-users] Setup of hosted Engine Fails
Jiri Moskovcak
jmoskovc at redhat.com
Mon Jul 21 08:32:24 UTC 2014
Hi Andrew,
Thanks for debugging this. Please create a bug against vdsm to make sure
it gets proper attention.
Thanks,
Jirka
On 07/19/2014 12:36 PM, Andrew Lau wrote:
> Quick update: it seems to be related to the latest vdsm package:
>
> service vdsmd start
> vdsm: Running mkdirs
> vdsm: Running configure_coredump
> vdsm: Running configure_vdsm_logs
> vdsm: Running run_init_hooks
> vdsm: Running gencerts
> vdsm: Running check_is_configured
> libvirt is not configured for vdsm yet
> Modules libvirt are not configured
> Traceback (most recent call last):
> File "/usr/bin/vdsm-tool", line 145, in <module>
> sys.exit(main())
> File "/usr/bin/vdsm-tool", line 142, in main
> return tool_command[cmd]["command"](*args[1:])
> File "/usr/lib64/python2.6/site-packages/vdsm/tool/configurator.py",
> line 282, in isconfigured
> raise RuntimeError(msg)
> RuntimeError:
>
> One of the modules is not configured to work with VDSM.
> To configure the module use the following:
> 'vdsm-tool configure [module_name]'.
>
> If all modules are not configured try to use:
> 'vdsm-tool configure --force'
> (The force flag will stop the module's service and start it
> afterwards automatically to load the new configuration.)
>
> vdsm: stopped during execute check_is_configured task (task returned
> with error code 1).
> vdsm start [FAILED]
>
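> For reference, the fix the error output itself suggests would be
> something like the following (untested here, since I went with the
> downgrade instead):
>
> # reconfigure all vdsm modules; --force stops and restarts the
> # affected services to load the new configuration
> vdsm-tool configure --force
> service vdsmd restart
>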
> yum downgrade vdsm*
>
> Here are the package changes for reference:
>
> --> Running transaction check
> ---> Package vdsm.x86_64 0:4.14.9-0.el6 will be a downgrade
> ---> Package vdsm.x86_64 0:4.14.11-0.el6 will be erased
> ---> Package vdsm-cli.noarch 0:4.14.9-0.el6 will be a downgrade
> ---> Package vdsm-cli.noarch 0:4.14.11-0.el6 will be erased
> ---> Package vdsm-python.x86_64 0:4.14.9-0.el6 will be a downgrade
> ---> Package vdsm-python.x86_64 0:4.14.11-0.el6 will be erased
> ---> Package vdsm-python-zombiereaper.noarch 0:4.14.9-0.el6 will be a
> downgrade
> ---> Package vdsm-python-zombiereaper.noarch 0:4.14.11-0.el6 will be erased
> ---> Package vdsm-xmlrpc.noarch 0:4.14.9-0.el6 will be a downgrade
> ---> Package vdsm-xmlrpc.noarch 0:4.14.11-0.el6 will be erased
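>
> To keep yum from pulling 4.14.11 back in on the next update, pinning
> the downgraded packages should work - assuming the versionlock plugin
> is available in your repos (I haven't actually pinned them here):
>
> yum install yum-plugin-versionlock
> yum versionlock vdsm*   # locks the currently installed 4.14.9 versions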
>
> service vdsmd start
> initctl: Job is already running: libvirtd
> vdsm: Running mkdirs
> vdsm: Running configure_coredump
> vdsm: Running configure_vdsm_logs
> vdsm: Running run_init_hooks
> vdsm: Running gencerts
> vdsm: Running check_is_configured
> libvirt is already configured for vdsm
> sanlock service is already configured
> vdsm: Running validate_configuration
> SUCCESS: ssl configured to true. No conflicts
> vdsm: Running prepare_transient_repository
> vdsm: Running syslog_available
> vdsm: Running nwfilter
> vdsm: Running dummybr
> vdsm: Running load_needed_modules
> vdsm: Running tune_system
> vdsm: Running test_space
> vdsm: Running test_lo
> vdsm: Running unified_network_persistence_upgrade
> vdsm: Running restore_nets
> vdsm: Running upgrade_300_nets
> Starting up vdsm daemon:
> vdsm start [ OK ]
> [root@ov-hv1-2a-08-23 ~]# service vdsmd status
> VDS daemon server is running
>
>
> On Sat, Jul 19, 2014 at 6:58 PM, Andrew Lau <andrew at andrewklau.com> wrote:
>
> It seems vdsm is not running:
>
> service vdsmd status
> VDS daemon is not running, and its watchdog is running
>
> The only log in /var/log/vdsm/ that appears to have any content is
> /var/log/vdsm/supervdsm.log - everything else is blank:
>
> MainThread::DEBUG::2014-07-19
> 18:55:34,793::supervdsmServer::424::SuperVdsm.Server::(main)
> Terminated normally
> MainThread::DEBUG::2014-07-19
> 18:55:38,033::netconfpersistence::134::root::(_getConfigs)
> Non-existing config set.
> MainThread::DEBUG::2014-07-19
> 18:55:38,034::netconfpersistence::134::root::(_getConfigs)
> Non-existing config set.
> MainThread::DEBUG::2014-07-19
> 18:55:38,058::supervdsmServer::384::SuperVdsm.Server::(main) Making
> sure I'm root - SuperVdsm
> MainThread::DEBUG::2014-07-19
> 18:55:38,059::supervdsmServer::393::SuperVdsm.Server::(main) Parsing
> cmd args
> MainThread::DEBUG::2014-07-19
> 18:55:38,059::supervdsmServer::396::SuperVdsm.Server::(main)
> Cleaning old socket /var/run/vdsm/svdsm.sock
> MainThread::DEBUG::2014-07-19
> 18:55:38,059::supervdsmServer::400::SuperVdsm.Server::(main) Setting
> up keep alive thread
> MainThread::DEBUG::2014-07-19
> 18:55:38,059::supervdsmServer::406::SuperVdsm.Server::(main)
> Creating remote object manager
> MainThread::DEBUG::2014-07-19
> 18:55:38,061::supervdsmServer::417::SuperVdsm.Server::(main) Started
> serving super vdsm object
> sourceRoute::DEBUG::2014-07-19
> 18:55:38,062::sourceRouteThread::56::root::(_subscribeToInotifyLoop)
> sourceRouteThread.subscribeToInotifyLoop started
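>
> Since the other logs are blank, the respawn messages in the system
> log are probably the next place to look for why the daemon keeps
> dying (stock EL6 paths assumed):
>
> grep -i vdsm /var/log/messages | tail -20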
>
>
> On Sat, Jul 19, 2014 at 6:48 PM, Andrew Lau <andrew at andrewklau.com> wrote:
>
> Here's a snippet from my hosted-engine-setup log:
>
> 2014-07-19 18:45:14 DEBUG otopi.context
> context._executeMethod:138 Stage late_setup METHOD
> otopi.plugins.ovirt_hosted_engine_setup.vm.configurevm.Plugin._late_setup
> 2014-07-19 18:45:14 DEBUG otopi.context
> context._executeMethod:152 method exception
> Traceback (most recent call last):
> File "/usr/lib/python2.6/site-packages/otopi/context.py",
> line 142, in _executeMethod
> method['method']()
> File
> "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/vm/configurevm.py",
> line 99, in _late_setup
> response = serv.s.list()
> File "/usr/lib64/python2.6/xmlrpclib.py", line 1199, in __call__
> return self.__send(self.__name, args)
> File "/usr/lib64/python2.6/xmlrpclib.py", line 1489, in __request
> verbose=self.__verbose
> File "/usr/lib64/python2.6/xmlrpclib.py", line 1235, in request
> self.send_content(h, request_body)
> File "/usr/lib64/python2.6/xmlrpclib.py", line 1349, in
> send_content
> connection.endheaders()
> File "/usr/lib64/python2.6/httplib.py", line 908, in endheaders
> self._send_output()
> File "/usr/lib64/python2.6/httplib.py", line 780, in _send_output
> self.send(msg)
> File "/usr/lib64/python2.6/httplib.py", line 739, in send
> self.connect()
> File
> "/usr/lib64/python2.6/site-packages/vdsm/SecureXMLRPCServer.py",
> line 188, in connect
> sock = socket.create_connection((self.host, self.port),
> self.timeout)
> File "/usr/lib64/python2.6/socket.py", line 567, in
> create_connection
> raise error, msg
> error: [Errno 111] Connection refused
> 2014-07-19 18:45:14 ERROR otopi.context
> context._executeMethod:161 Failed to execute stage 'Environment
> setup': [Errno 111] Connection refused
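>
> The connection refused just means nothing is listening on the vdsm
> XML-RPC port (54321 by default, if I remember correctly), which can
> be confirmed with:
>
> netstat -tlnp | grep 54321   # should show vdsm listening
> service vdsmd status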
>
>
> On Wed, Jul 16, 2014 at 9:05 PM, Sandro Bonazzola <sbonazzo at redhat.com> wrote:
>
> On 16/07/2014 00:47, Christopher Jaggon wrote:
> > Here is a list of packages :
> >
> > rpm -qa | grep -i vdsm gives:
> >
> > vdsm-python-4.14.9-0.el6.x86_64
> > vdsm-python-zombiereaper-4.14.9-0.el6.noarch
> > vdsm-xmlrpc-4.14.9-0.el6.noarch
> > vdsm-4.14.9-0.el6.x86_64
> > vdsm-cli-4.14.9-0.el6.noarch
> >
> > When I try to run the hosted-engine setup, I get this
> > error in the log:
> >
> > [ INFO ] Waiting for VDSM hardware info
> > [ INFO ] Waiting for VDSM hardware info
> > [ ERROR ] Failed to execute stage 'Environment setup':
> > [Errno 111] Connection refused
> > [ INFO ] Stage: Clean up
> >
> > Any advice on why this may be so?
>
>
> Can you please provide hosted-engine setup, vdsm, supervdsm
> and libvirt logs?
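>
> Something along these lines should grab everything in one go
> (default log locations assumed):
>
> tar czf ovirt-logs.tar.gz /var/log/ovirt-hosted-engine-setup \
>     /var/log/vdsm /var/log/libvirt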
>
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community
> collaboration.
> See how it works at redhat.com
> 
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>