[ovirt-users] hosted-engine --deploy fails

Andreas Mather andreas at allaboutapps.at
Wed Dec 24 21:29:58 UTC 2014


Hi All!

Just did some more research on this, and it seems the cause is related to my
interface configuration. Disclaimer upfront: this server has a public IP
configured (since it's a hosted root server), but the address has been changed
to 192.168.0.99 in everything shown here.

I started from the output of `vdsm-tool restore-nets':
    ipv4addr, prefix = addr['address'].split('/')
ValueError: need more than 1 value to unpack
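
The unpack error itself is easy to reproduce outside of vdsm: calling
split('/') on an address string that has no "/prefix" suffix returns a
single-element list, which can't be unpacked into two names. A minimal
illustration (plain Python, nothing vdsm-specific):

# Minimal reproduction of the unpack error, outside of vdsm:
address = '192.168.0.99'             # bare address, no "/prefix" suffix
try:
    ipv4addr, prefix = address.split('/')
except ValueError as e:
    print(e)                         # Python 2.6: "need more than 1 value to unpack"

# With the suffix present, the same unpacking works:
ipv4addr, prefix = '192.168.0.99/32'.split('/')
print(ipv4addr)                      # 192.168.0.99
print(prefix)                        # 32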

So I dumped the addr dictionary:
{'address': '192.168.0.99',
 'family': 'inet',
 'flags': frozenset(['permanent']),
 'index': 2,
 'label': 'eth0',
 'prefixlen': 32,
 'scope': 'universe'}

I have no clue why the "/32" suffix is missing from the address, but /32 really
is my netmask, due to the special configuration I got from my hosting provider:

[root@vhost1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
HWADDR=00:52:9F:A8:AA:BB
IPADDR=192.168.0.99
NETMASK=255.255.255.255
SCOPE="peer 192.168.0.1"

(Again, the public IPs have been changed to private ones, in case that matters,
and I left out the IPv6 config above.)
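
As a quick sanity check that NETMASK=255.255.255.255 really corresponds to the
'prefixlen': 32 shown in the addr dictionary above, here is a tiny
standard-library-only snippet (the helper name is mine, nothing vdsm-specific):

# Count the set bits of a dotted-quad netmask (helper name is mine, not vdsm's).
import socket
import struct

def netmask_to_prefixlen(netmask):
    packed, = struct.unpack('!I', socket.inet_aton(netmask))
    return bin(packed).count('1')

print(netmask_to_prefixlen('255.255.255.255'))   # -> 32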

So what I did next was to patch netinfo.py:
[root@vhost1 vdsm]# diff -u netinfo_orig.py netinfo.py
--- netinfo_orig.py 2014-12-24 22:16:23.362198715 +0100
+++ netinfo.py 2014-12-24 22:16:02.567625247 +0100
@@ -368,7 +368,12 @@
         if addr['family'] == 'inet':
             ipv4addrs.append(addr['address'])
             if 'secondary' not in addr['flags']:
-                ipv4addr, prefix = addr['address'].split('/')
+                # Assume /32 if no prefix was found
+                if '/' not in addr['address']:
+                    ipv4addr = addr['address']
+                    prefix = "32"
+                else:
+                    ipv4addr, prefix = addr['address'].split('/')
                 ipv4netmask = prefix2netmask(addr['prefixlen'])
         else:
             ipv6addrs.append(addr['address'])


and byte-compiled it:
[root@vhost1 vdsm]# python -m py_compile netinfo.py



Et voilà: `vdsm-tool restore-nets' ran fine, and `hosted-engine --deploy' got
past the previously failing stage!
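
If you want to exercise the fallback logic outside of vdsm, a standalone
equivalent of the guard from the patch looks roughly like this (the function
name is mine; it is not part of vdsm):

# Hypothetical standalone helper mirroring the patched behaviour:
# fall back to /32 when a bare address without a prefix suffix is reported.
def split_ipv4(address, default_prefix='32'):
    if '/' not in address:
        return address, default_prefix
    ipv4addr, prefix = address.split('/')
    return ipv4addr, prefix

assert split_ipv4('192.168.0.99') == ('192.168.0.99', '32')
assert split_ipv4('192.168.0.99/24') == ('192.168.0.99', '24')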

Hope this helps to find the root cause.

Regards,

Andreas


On Tue, Dec 23, 2014 at 10:58 PM, Andreas Mather <andreas at allaboutapps.at>
wrote:

> Hi All!
>
> Just tried to install the hosted engine on a fresh CentOS 6.6: besides
> setting up a Gluster cluster, I only added the repo and installed the
> ovirt-hosted-engine-setup package. Otherwise it's a very minimal
> installation. `hosted-engine --deploy' failed immediately. The issue seems
> to be the same as the one described at
> http://lists.ovirt.org/pipermail/users/2014-October/028461.html but that
> conversation didn't continue after the reporting user was asked for
> additional details.
>
> Failed installation:
>
> [root@vhost1 ~]# hosted-engine --deploy
> [ INFO  ] Stage: Initializing
>           Continuing will configure this host for serving as hypervisor
> and create a VM where you have to install oVirt Engine afterwards.
>           Are you sure you want to continue? (Yes, No)[Yes]:
> [ INFO  ] Generating a temporary VNC password.
> [ INFO  ] Stage: Environment setup
>           Configuration files: []
>           Log file:
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20141223224236-y9ttk4.log
>           Version: otopi-1.3.0 (otopi-1.3.0-1.el6)
> [ INFO  ] Hardware supports virtualization
> [ INFO  ] Stage: Environment packages setup
> [ INFO  ] Stage: Programs detection
> [ INFO  ] Stage: Environment setup
> [ ERROR ] Failed to execute stage 'Environment setup': Command
> '/sbin/service' failed to execute
> [ INFO  ] Stage: Clean up
> [ INFO  ] Generating answer file '/etc/ovirt-hosted-engine/answers.conf'
> [ INFO  ] Answer file '/etc/ovirt-hosted-engine/answers.conf' has been
> updated
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
>
> Packages:
>
> [root@vhost1 ~]# rpm -qa|grep hosted-engine
> ovirt-hosted-engine-setup-1.2.1-1.el6.noarch
> ovirt-hosted-engine-ha-1.2.4-1.el6.noarch
>
> [root@vhost1 ~]# rpm -qa|grep vdsm
> vdsm-yajsonrpc-4.16.7-1.gitdb83943.el6.noarch
> vdsm-python-zombiereaper-4.16.7-1.gitdb83943.el6.noarch
> vdsm-python-4.16.7-1.gitdb83943.el6.noarch
> vdsm-xmlrpc-4.16.7-1.gitdb83943.el6.noarch
> vdsm-4.16.7-1.gitdb83943.el6.x86_64
> vdsm-cli-4.16.7-1.gitdb83943.el6.noarch
> vdsm-jsonrpc-4.16.7-1.gitdb83943.el6.noarch
>
>
> So here's the output that was requested in the other thread. Hope someone
> can help me here. Thanks!
>
> [root@vhost1 ~]# find /var/lib/vdsm/persistence
> /var/lib/vdsm/persistence
>
> [root@vhost1 ~]# find /var/run/vdsm/netconf
> find: `/var/run/vdsm/netconf': No such file or directory
>
> [root@vhost1 ~]# ip l
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
>     link/ether 00:25:90:d8:0a:b0 brd ff:ff:ff:ff:ff:ff
> 4: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
>     link/ether 62:e5:28:13:9d:ba brd ff:ff:ff:ff:ff:ff
> 5: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
>     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
>
> [root@vhost1 ~]# virsh -r net
> error: unknown command: 'net'
>
> [root@vhost1 ~]# virsh -r net-list
> Name                 State      Autostart     Persistent
> --------------------------------------------------
> ;vdsmdummy;          active     no            no
>
> [root@vhost1 ~]# vdsm-tool restore-nets
> Traceback (most recent call last):
>   File "/usr/share/vdsm/vdsm-restore-net-config", line 137, in <module>
>     restore()
>   File "/usr/share/vdsm/vdsm-restore-net-config", line 123, in restore
>     unified_restoration()
>   File "/usr/share/vdsm/vdsm-restore-net-config", line 57, in unified_restoration
>     _inRollback=True)
>   File "/usr/share/vdsm/network/api.py", line 616, in setupNetworks
>     netinfo._libvirtNets2vdsm(libvirt_nets)))
>   File "/usr/lib/python2.6/site-packages/vdsm/netinfo.py", line 822, in get
>     d['nics'][dev.name] = _nicinfo(dev, paddr, ipaddrs)
>   File "/usr/lib/python2.6/site-packages/vdsm/netinfo.py", line 653, in _nicinfo
>     info = _devinfo(link, ipaddrs)
>   File "/usr/lib/python2.6/site-packages/vdsm/netinfo.py", line 681, in _devinfo
>     ipv4addr, ipv4netmask, ipv4addrs, ipv6addrs = getIpInfo(link.name, ipaddrs)
>   File "/usr/lib/python2.6/site-packages/vdsm/netinfo.py", line 370, in getIpInfo
>     ipv4addr, prefix = addr['address'].split('/')
> ValueError: need more than 1 value to unpack
> Traceback (most recent call last):
>   File "/usr/bin/vdsm-tool", line 209, in main
>     return tool_command[cmd]["command"](*args)
>   File "/usr/lib/python2.6/site-packages/vdsm/tool/restore_nets.py", line 36, in restore_command
>     restore()
>   File "/usr/lib/python2.6/site-packages/vdsm/tool/restore_nets.py", line 45, in restore
>     raise EnvironmentError('Failed to restore the persisted networks')
> EnvironmentError: Failed to restore the persisted networks
>
>
>
> The following was already mentioned in the original thread, but it should
> help others who end up googling this error:
>
>
> ovirt-hosted-engine-setup.log (first attempt):
>
> 2014-12-23 22:04:38 DEBUG otopi.plugins.otopi.services.rhel plugin.execute:861 execute-output: ('/sbin/service', 'vdsmd', 'start') stdout:
> Starting multipathd daemon: [  OK  ]
> Starting rpcbind: [  OK  ]
> Starting ntpd: [  OK  ]
> Loading the softdog kernel module: [  OK  ]
> Starting wdmd: [  OK  ]
> Starting sanlock: [  OK  ]
> supervdsm start[  OK  ]
> Starting iscsid: [  OK  ]
> [  OK  ]
> vdsm: Running mkdirs
> vdsm: Running configure_coredump
> vdsm: Running configure_vdsm_logs
> vdsm: Running run_init_hooks
> vdsm: Running check_is_configured
> libvirt is already configured for vdsm
> vdsm: Running validate_configuration
> SUCCESS: ssl configured to true. No conflicts
> vdsm: Running prepare_transient_repository
> vdsm: Running syslog_available
> vdsm: Running nwfilter
> vdsm: Running dummybr
> vdsm: Running load_needed_modules
> vdsm: Running tune_system
> vdsm: Running test_space
> vdsm: Running test_lo
> vdsm: Running unified_network_persistence_upgrade
> vdsm: stopped during execute unified_network_persistence_upgrade task
> (task returned with error code 1).
> vdsm start[FAILED]
>
> 2014-12-23 22:04:38 DEBUG otopi.plugins.otopi.services.rhel plugin.execute:866 execute-output: ('/sbin/service', 'vdsmd', 'start') stderr:
> initctl: Job is already running: libvirtd
> libvirt: Network Filter Driver error : Network filter not found: no nwfilter with matching name 'vdsm-no-mac-spoofing'
>
> 2014-12-23 22:04:38 DEBUG otopi.context context._executeMethod:152 method exception
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/otopi/context.py", line 142, in _executeMethod
>     method['method']()
>   File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/system/vdsmenv.py", line 155, in _late_setup
>     state=True
>   File "/usr/share/otopi/plugins/otopi/services/rhel.py", line 188, in state
>     'start' if state else 'stop'
>   File "/usr/share/otopi/plugins/otopi/services/rhel.py", line 96, in _executeServiceCommand
>     raiseOnError=raiseOnError
>   File "/usr/lib/python2.6/site-packages/otopi/plugin.py", line 871, in execute
>     command=args[0],
> RuntimeError: Command '/sbin/service' failed to execute
> 2014-12-23 22:04:38 ERROR otopi.context context._executeMethod:161 Failed to execute stage 'Environment setup': Command '/sbin/service' failed to execute
>
>
> ovirt-hosted-engine-setup.log (further attempts):
>
> 2014-12-23 22:42:40 DEBUG otopi.plugins.otopi.services.rhel plugin.execute:861 execute-output: ('/sbin/service', 'vdsmd', 'start') stdout:
> vdsm: Running mkdirs
> vdsm: Running configure_coredump
> vdsm: Running configure_vdsm_logs
> vdsm: Running run_init_hooks
> vdsm: Running check_is_configured
> libvirt is already configured for vdsm
> vdsm: Running validate_configuration
> SUCCESS: ssl configured to true. No conflicts
> vdsm: Running prepare_transient_repository
> vdsm: Running syslog_available
> vdsm: Running nwfilter
> vdsm: Running dummybr
> vdsm: Running load_needed_modules
> vdsm: Running tune_system
> vdsm: Running test_space
> vdsm: Running test_lo
> vdsm: Running unified_network_persistence_upgrade
> vdsm: stopped during execute unified_network_persistence_upgrade task
> (task returned with error code 1).
> vdsm start[FAILED]
>
> 2014-12-23 22:42:40 DEBUG otopi.plugins.otopi.services.rhel plugin.execute:866 execute-output: ('/sbin/service', 'vdsmd', 'start') stderr:
> initctl: Job is already running: libvirtd
>
> 2014-12-23 22:42:40 DEBUG otopi.context context._executeMethod:152 method exception
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/otopi/context.py", line 142, in _executeMethod
>     method['method']()
>   File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/system/vdsmenv.py", line 155, in _late_setup
>     state=True
>   File "/usr/share/otopi/plugins/otopi/services/rhel.py", line 188, in state
>     'start' if state else 'stop'
>   File "/usr/share/otopi/plugins/otopi/services/rhel.py", line 96, in _executeServiceCommand
>     raiseOnError=raiseOnError
>   File "/usr/lib/python2.6/site-packages/otopi/plugin.py", line 871, in execute
>     command=args[0],
> RuntimeError: Command '/sbin/service' failed to execute
> 2014-12-23 22:42:40 ERROR otopi.context context._executeMethod:161 Failed to execute stage 'Environment setup': Command '/sbin/service' failed to execute
>
> ----
>
> Andreas
>