[ovirt-devel] Problems installing ovirt 3.5 on Centos 7

Dan Kenigsberg danken at redhat.com
Wed Aug 27 10:31:04 UTC 2014


On Wed, Aug 27, 2014 at 08:49:32AM +0200, Sandro Bonazzola wrote:
> Il 26/08/2014 18:59, Andy Ryan ha scritto:
> > I'm trying to install oVirt 3.5 on a CentOS 7 system that has a bonded network interface.  ovirt-hosted-engine-setup fails to complete.  I get the
> > following messages in the install log:
> 
> Adding the users list to CC.
> 
> > 
> > Running configure...
> > Reconfiguration of libvirt is done.
> > 
> > Done configuring modules to VDSM.
> > 
> > 2014-08-26 11:40:02 DEBUG otopi.plugins.ovirt_hosted_engine_setup.system.vdsmenv plugin.execute:866 execute-output: ('/bin/vdsm-tool', 'config
> > 
> > 
> > 2014-08-26 11:40:02 DEBUG otopi.plugins.otopi.services.systemd systemd.state:134 starting service vdsmd
> > 2014-08-26 11:40:02 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:785 execute: ('/bin/systemctl', 'start', 'vdsmd.service'), ex
> > 2014-08-26 11:41:32 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:803 execute-result: ('/bin/systemctl', 'start', 'vdsmd.servic
> > 2014-08-26 11:41:32 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:861 execute-output: ('/bin/systemctl', 'start', 'vdsmd.service')
> > 
> > 
> > 2014-08-26 11:41:32 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:866 execute-output: ('/bin/systemctl', 'start', 'vdsmd.service')
> > Job for vdsmd.service failed. See 'systemctl status vdsmd.service' and 'journalctl -xn' for details.
> > 
> > 2014-08-26 11:41:32 DEBUG otopi.context context._executeMethod:152 method exception
> > Traceback (most recent call last):
> >   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 142, in _executeMethod
> >     method['method']()
> >   File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/system/vdsmenv.py", line 155, in _late_setup
> >     state=True
> >   File "/usr/share/otopi/plugins/otopi/services/systemd.py", line 138, in state
> >     'start' if state else 'stop'
> >   File "/usr/share/otopi/plugins/otopi/services/systemd.py", line 77, in _executeServiceCommand
> >     raiseOnError=raiseOnError
> >   File "/usr/lib/python2.7/site-packages/otopi/plugin.py", line 871, in execute
> >     command=args[0],
> > RuntimeError: Command '/bin/systemctl' failed to execute
> > 2014-08-26 11:41:32 ERROR otopi.context context._executeMethod:161 Failed to execute stage 'Environment setup': Command '/bin/systemctl' faile
> > 2014-08-26 11:41:32 DEBUG otopi.context context.dumpEnvironment:468 ENVIRONMENT DUMP - BEGIN
> > 2014-08-26 11:41:32 DEBUG otopi.context context.dumpEnvironment:478 ENV BASE/error=bool:'True'
> > 2014-08-26 11:41:32 DEBUG otopi.context context.dumpEnvironment:478 ENV BASE/exceptionInfo=list:'[(<type 'exceptions.RuntimeError'>, RuntimeEr
> > 2014-08-26 11:41:32 DEBUG otopi.context context.dumpEnvironment:482 ENVIRONMENT DUMP - END
> > 2014-08-26 11:41:32 INFO otopi.context context.runSequence:395 Stage: Clean up
> > 2014-08-26 11:41:32 DEBUG otopi.context context.runSequence:399 STAGE cleanup
> > 
> > And in /var/log/messages
> > 
> > Aug 26 11:55:03 localhost systemd: Starting Virtual Desktop Server Manager...
> > Aug 26 11:55:03 localhost vdsmd_init_common.sh: vdsm: Running mkdirs
> > Aug 26 11:55:03 localhost vdsmd_init_common.sh: vdsm: Running configure_coredump
> > Aug 26 11:55:03 localhost vdsmd_init_common.sh: vdsm: Running configure_vdsm_logs
> > Aug 26 11:55:03 localhost vdsmd_init_common.sh: vdsm: Running run_init_hooks
> > Aug 26 11:55:03 localhost vdsmd_init_common.sh: vdsm: Running gencerts
> > Aug 26 11:55:03 localhost vdsmd_init_common.sh: vdsm: Running check_is_configured
> > Aug 26 11:55:03 localhost vdsmd_init_common.sh: libvirt is already configured for vdsm
> > Aug 26 11:55:03 localhost vdsmd_init_common.sh: vdsm: Running validate_configuration
> > Aug 26 11:55:03 localhost vdsmd_init_common.sh: SUCCESS: ssl configured to true. No conflicts
> > Aug 26 11:55:04 localhost vdsmd_init_common.sh: vdsm: Running prepare_transient_repository
> > Aug 26 11:55:04 localhost vdsmd_init_common.sh: vdsm: Running syslog_available
> > Aug 26 11:55:04 localhost vdsmd_init_common.sh: vdsm: Running nwfilter
> > Aug 26 11:55:04 localhost vdsmd_init_common.sh: vdsm: Running dummybr
> > Aug 26 11:55:04 localhost vdsmd_init_common.sh: vdsm: Running load_needed_modules
> > Aug 26 11:55:04 localhost vdsmd_init_common.sh: vdsm: Running tune_system
> > Aug 26 11:55:04 localhost vdsmd_init_common.sh: vdsm: Running test_space
> > Aug 26 11:55:04 localhost vdsmd_init_common.sh: vdsm: Running test_lo
> > Aug 26 11:55:04 localhost vdsmd_init_common.sh: vdsm: Running unified_network_persistence_upgrade
> > Aug 26 11:55:04 localhost vdsmd_init_common.sh: vdsm: Running restore_nets
> > Aug 26 11:55:05 localhost kernel: bonding: g7je1p3LjMcQbb3 is being created...
> > Aug 26 11:55:05 localhost kernel: bonding: g7je1p3LjMcQbb3: setting mode to balance-rr (0).
> > Aug 26 11:55:05 localhost kernel: bonding: g7je1p3LjMcQbb3: setting mode to active-backup (1).
> > Aug 26 11:55:05 localhost kernel: bonding: g7je1p3LjMcQbb3: setting mode to balance-xor (2).
> > Aug 26 11:55:05 localhost kernel: bonding: g7je1p3LjMcQbb3: setting mode to broadcast (3).
> > Aug 26 11:55:05 localhost kernel: bonding: g7je1p3LjMcQbb3: setting mode to 802.3ad (4).
> > Aug 26 11:55:05 localhost kernel: bonding: g7je1p3LjMcQbb3: setting mode to balance-tlb (5).
> > Aug 26 11:55:05 localhost kernel: bonding: g7je1p3LjMcQbb3: setting mode to balance-alb (6).
> > Aug 26 11:55:05 localhost avahi-daemon[726]: Withdrawing workstation service for g7je1p3LjMcQbb3.
> > Aug 26 11:55:05 localhost kernel: bonding: g7je1p3LjMcQbb3 is being deleted...
> > Aug 26 11:55:05 localhost kernel: bonding: g7je1p3LjMcQbb3: released all slaves
> > Aug 26 11:55:05 localhost kernel: bonding: bond0: Removing slave p1p1.
> > Aug 26 11:55:05 localhost kernel: bonding: bond0: Error: cannot release p1p1.
> 
> Not sure why the kernel issued the above error, but it looks like the cause of
> 
> > RuntimeError: Command '/bin/systemctl' failed to execute
> 
> while restarting vdsmd.
> Which version of vdsm are you using?
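
A minimal sketch for gathering that information on the affected host
(assuming bond0 is the bond named in the log above, and that vdsm was
installed from RPM):

    # installed vdsm version
    rpm -q vdsm

    # current state of the bond whose slave could not be released
    cat /proc/net/bonding/bond0

    # details on why vdsmd failed to start
    systemctl status vdsmd.service
    journalctl -u vdsmd.service -n 50 --no-pager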

Is firewalld running? Can you see a blocked process with `pgrep
firewall-cmd`? Or with pstree -lp `pgrep libvirtd`?

I've seen libvirtd blocking on firewall-cmd, and I'd love it if someone
filed a meaningful bug report on this (hint hint).
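
As a rough sketch of those checks (command names as on a stock CentOS 7
host; adjust if yours differ):

    # is firewalld active at all?
    systemctl is-active firewalld

    # any firewall-cmd invocations still running (possibly hung)?
    pgrep firewall-cmd

    # libvirtd's process tree; a stuck firewall-cmd child would show up here
    pstree -lp `pgrep libvirtd`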


