On 01/04/2014 16:15, Sandro Bonazzola wrote:
On 01/04/2014 15:38, ovirt-test(a)arcor.de wrote:
> Hello,
>
> I'm new to this list and I need help installing a self-hosted engine.
>
> I've installed CentOS 6.5 and oVirt 3.4. The following repositories are enabled:
> yum localinstall http://resources.ovirt.org/releases/ovirt-release.noarch.rpm
> yum localinstall http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
> yum localinstall http://mirrors.dotsrc.org/jpackage/6.0/generic/free/RPMS/jpackage-release...
>
> I just wanted to check out the self-hosted engine feature, but I get this error:
>
> # hosted-engine --deploy
> [ INFO ] Stage: Initializing
> Continuing will configure this host for serving as hypervisor and create a VM where you have to install oVirt Engine afterwards.
> Are you sure you want to continue? (Yes, No)[Yes]:
> [ INFO ] Generating a temporary VNC password.
> [ INFO ] Stage: Environment setup
> Configuration files: []
> Log file: /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20140401153028.log
> Version: otopi-1.2.0 (otopi-1.2.0-1.el6)
> [ INFO ] Hardware supports virtualization
> [ INFO ] Stage: Environment packages setup
> [ INFO ] Stage: Programs detection
> [ INFO ] Stage: Environment setup
> [ ERROR ] Failed to execute stage 'Environment setup': <Fault 1: "<type 'exceptions.TypeError'>:cannot marshal None unless allow_none is enabled">
> [ INFO ] Stage: Clean up
> [ INFO ] Stage: Pre-termination
> [ INFO ] Stage: Termination
>
> It is not this error:
>
http://lists.ovirt.org/pipermail/users/2014-March/022424.html
>
> My logfile contains the following errors:
> 2014-04-01 15:30:32 DEBUG otopi.plugins.otopi.services.rhel plugin.executeRaw:366 execute: ('/sbin/service', 'vdsmd', 'status'), executable='None', cwd='None', env=None
> 2014-04-01 15:30:32 DEBUG otopi.plugins.otopi.services.rhel plugin.executeRaw:383 execute-result: ('/sbin/service', 'vdsmd', 'status'), rc=0
> 2014-04-01 15:30:32 DEBUG otopi.plugins.otopi.services.rhel plugin.execute:441 execute-output: ('/sbin/service', 'vdsmd', 'status') stdout:
> VDS daemon server is running
>
> 2014-04-01 15:30:32 DEBUG otopi.plugins.otopi.services.rhel plugin.execute:446 execute-output: ('/sbin/service', 'vdsmd', 'status') stderr:
>
>
> 2014-04-01 15:30:32 DEBUG otopi.plugins.otopi.services.rhel rhel.status:147 service vdsmd status True
> 2014-04-01 15:30:32 DEBUG otopi.context context._executeMethod:152 method exception
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/otopi/context.py", line 142, in _executeMethod
>     method['method']()
>   File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/system/vdsmenv.py", line 157, in _late_setup
>     self._connect()
>   File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/system/vdsmenv.py", line 78, in _connect
>     hwinfo = serv.s.getVdsHardwareInfo()
>   File "/usr/lib64/python2.6/xmlrpclib.py", line 1199, in __call__
>     return self.__send(self.__name, args)
>   File "/usr/lib64/python2.6/xmlrpclib.py", line 1489, in __request
>     verbose=self.__verbose
>   File "/usr/lib64/python2.6/xmlrpclib.py", line 1253, in request
>     return self._parse_response(h.getfile(), sock)
>   File "/usr/lib64/python2.6/xmlrpclib.py", line 1392, in _parse_response
>     return u.close()
>   File "/usr/lib64/python2.6/xmlrpclib.py", line 838, in close
>     raise Fault(**self._stack[0])
> Fault: <Fault 1: "<type 'exceptions.TypeError'>:cannot marshal None unless allow_none is enabled">
> 2014-04-01 15:30:32 ERROR otopi.context context._executeMethod:161 Failed to execute stage 'Environment setup': <Fault 1: "<type 'exceptions.TypeError'>:cannot marshal None unless allow_none is enabled">
> 2014-04-01 15:30:32 DEBUG otopi.context context.dumpEnvironment:468 ENVIRONMENT DUMP - BEGIN
Corresponding to the above call to getVdsHardwareInfo, the vdsm log shows:

Thread-22::DEBUG::2014-04-01 15:30:32,100::BindingXMLRPC::1067::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {}
Thread-22::DEBUG::2014-04-01 15:30:32,110::BindingXMLRPC::1074::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'ProLiant DL380 G5', 'systemSerialNumber': 'CZC6451JFR', 'systemFamily': None, 'systemVersion': 'Not Specified', 'systemUUID': '435a4336-3435-435a-4336-3435314a4652', 'systemManufacturer': 'HP'}}
And the corresponding supervdsm log:

MainProcess|Thread-22::DEBUG::2014-04-01 15:30:32,109::supervdsmServer::96::SuperVdsm.ServerCallback::(wrapper) call getHardwareInfo with () {}
MainProcess|Thread-22::DEBUG::2014-04-01 15:30:32,109::supervdsmServer::103::SuperVdsm.ServerCallback::(wrapper) return getHardwareInfo with {'systemProductName': 'ProLiant DL380 G5', 'systemSerialNumber': 'CZC6451JFR', 'systemFamily': None, 'systemVersion': 'Not Specified', 'systemUUID': '435a4336-3435-435a-4336-3435314a4652', 'systemManufacturer': 'HP'}
There's no other log after that line in the vdsm.log provided.
It looks like something went wrong at the socket level just after that call. Can anybody on the VDSM team take a look at this issue?
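One detail worth noting: the hardware info returned above contains 'systemFamily': None, and Python's xmlrpclib refuses to marshal None unless allow_none is enabled, which matches the Fault text (the server catches the TypeError while marshalling its response and sends it back as a generic Fault 1, which the client then re-raises). A minimal, standalone reproduction on Python 2.6 (illustration only, not vdsm code):

    import xmlrpclib

    # Same shape as the hardware info logged by supervdsm, None included:
    hwinfo = {'systemProductName': 'ProLiant DL380 G5', 'systemFamily': None}

    try:
        xmlrpclib.dumps((hwinfo,), methodresponse=True)
    except TypeError as e:
        print e  # cannot marshal None unless allow_none is enabled

    # With allow_none the same payload marshals fine (None becomes <nil/>):
    print xmlrpclib.dumps((hwinfo,), methodresponse=True, allow_none=True)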
>
> Can anybody help me fix this problem?
Can you please also attach the vdsm logs?
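If the None value is indeed the culprit (an assumption until the full vdsm logs confirm it), the usual ways around this class of fault are to enable allow_none on the XML-RPC server or to strip None values from the response before it is marshalled. A toy sketch of both options (hypothetical names, not vdsm code):

    from SimpleXMLRPCServer import SimpleXMLRPCServer

    def get_hardware_info():
        # Stand-in for the real handler; 'systemFamily' may come back as None.
        info = {'systemProductName': 'ProLiant DL380 G5', 'systemFamily': None}
        # Option 1: drop None values so the default marshaller never sees them.
        return dict((k, v) for k, v in info.iteritems() if v is not None)

    # Option 2: allow_none=True lets the marshaller emit <nil/> for None.
    server = SimpleXMLRPCServer(('127.0.0.1', 8000), allow_none=True)
    server.register_function(get_hardware_info, 'getHardwareInfo')
    server.serve_forever()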
>
> Thank you
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com