On Thu, Jan 4, 2018 at 8:36 AM, Yedidyah Bar David <didi@redhat.com> wrote:
On Wed, Jan 3, 2018 at 6:20 PM, Vinícius Ferrão <ferrao@if.ufrj.br> wrote:
Hello,

I’m deploying oVirt Hosted Engine on a new datacenter, and the installation failed while configuring the ovirtmgmt interface:

[ INFO  ] Configuring the management bridge
[ ERROR ] Failed to execute stage 'Misc configuration': Failed to setup networks {'ovirtmgmt': {'bonding': 'bond0', 'ipaddr': u'146.164.37.103', 'netmask': u'255.255.255.0', 'defaultRoute': True, 'gateway': u'146.164.37.1'}}. Error: "Command Host.setupNetworks with args {'bondings': {}, 'options': {'connectivityCheck': False}, 'networks': {'ovirtmgmt': {'bonding': 'bond0', 'ipaddr': u'146.164.37.103', 'netmask': u'255.255.255.0', 'defaultRoute': True, 'gateway': u'146.164.37.1'}}} failed:
         (code=-32603, message=Internal JSON-RPC error: {'reason': "Attempt to call function: <bound method Global.setupNetworks of <vdsm.API.Global object at 0x3f394d0>> with arguments: ({u'ovirtmgmt': {u'bonding': u'bond0', u'ipaddr': u'146.164.37.103', u'netmask': u'255.255.255.0', u'defaultRoute': True, u'gateway': u'146.164.37.1'}}, {}, {u'connectivityCheck': False}) error: 'NoneType' object is not iterable"})"
[ INFO  ] Yum Performing yum transaction rollback
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20171230013538.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not reliable, please check the issue,fix and redeploy
          Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20171230004808-wwq1ib.log
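
(In case it helps with debugging: the failing call can probably be replayed outside the installer with vdsm-client, assuming it is installed on the host. This is only a sketch; the arguments are copied from the error message above.)

    # Replay the setupNetworks call that failed during deployment:
    vdsm-client Host setupNetworks \
        networks='{"ovirtmgmt": {"bonding": "bond0", "ipaddr": "146.164.37.103", "netmask": "255.255.255.0", "defaultRoute": true, "gateway": "146.164.37.1"}}' \
        bondings='{}' \
        options='{"connectivityCheck": false}'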

It appears this happened because the NFS storage network is a tagged VLAN (bond0.10) configured directly on top of bond0:

11: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 40:f2:e9:f3:5c:62 brd ff:ff:ff:ff:ff:ff
    inet 146.164.37.103/24 brd 146.164.37.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::42f2:e9ff:fef3:5c62/64 scope link 
       valid_lft forever preferred_lft forever
13: bond0.10@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 40:f2:e9:f3:5c:62 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.3/28 brd 192.168.10.15 scope global bond0.10
       valid_lft forever preferred_lft forever
    inet6 fe80::42f2:e9ff:fef3:5c62/64 scope link 
       valid_lft forever preferred_lft forever
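
If the tagged VLAN really is what trips setupNetworks, one way to test that theory is to temporarily remove bond0.10 and re-run the deployment. A sketch only: this takes the NFS storage network down for the duration, and the VLAN ID and addresses below are taken from the output above.

    # Temporarily drop the storage VLAN (this interrupts NFS traffic!):
    ip link delete bond0.10
    # ... re-run hosted-engine --deploy, then restore the VLAN:
    ip link add link bond0 name bond0.10 type vlan id 10
    ip addr add 192.168.10.3/28 dev bond0.10
    ip link set bond0.10 up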

I’ve collected the logs and they are located here for download:

Can you please also add /var/log/vdsm/* there? Thanks.
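For example (the archive name is just a suggestion):

    tar -czf vdsm-logs.tar.gz /var/log/vdsm/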

I see that you also filed [1]. Let's continue the analysis there;
it's more effective than the mailing list. Thanks.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1530839
 

Adding Dan.

Best regards,
--
Didi


