[ovirt-users] "Setup Host Networks" fails (was: Ovirt 4 failing to setup CentOS 7.2 host (everything default))
Edward Haas
ehaas at redhat.com
Wed Feb 1 10:35:09 UTC 2017
The server seems to have a special network configuration that limits its
communication to the gateway through a SCOPE definition:
IPADDR=176.9.146.137
NETMASK=255.255.255.255
SCOPE="peer 176.9.146.129"
An oVirt host does not support such a setup.
VDSM acquires the interface and tries to configure it based on the existing
values, ignoring the SCOPE.
It ends up with a full (/32) mask set and a loss of connectivity.
Please configure your server with regular IP/netmask/gateway settings.
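For example, a conventional static configuration might look like this
(illustrative values only; the actual PREFIX and GATEWAY depend on the subnet
your provider routes to the host):

DEVICE=enp4s0
BOOTPROTO=none
ONBOOT=yes
IPADDR=176.9.146.137
PREFIX=27
GATEWAY=176.9.146.129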
If you want to keep the limitations of SCOPE, you can set up some firewall
rules instead; a rough sketch follows below.
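As an untested sketch (assuming the uplink is enp4s0, the gateway is
176.9.146.129, and the illustrative /27 above is the local subnet), rules
like these block direct traffic to other hosts on the local segment, which is
the closest firewall-level analogue of the peer-only SCOPE:

# Allow traffic exchanged directly with the gateway itself.
iptables -A OUTPUT -o enp4s0 -d 176.9.146.129 -j ACCEPT
# Drop direct traffic to any other host on the local segment.
iptables -A OUTPUT -o enp4s0 -d 176.9.146.128/27 -j DROP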
Thanks,
Edy.
On Tue, Jan 31, 2017 at 1:19 PM, Eidmantas Ivanauskas <
eidmantasivanauskas at gmail.com> wrote:
> Thank you for tidying up the thread. I have never used mailing lists before
> (on any platform), so I am trying to minimize the damage ;).
>
> Here are some additional logs from the host (captured while using Setup Host
> Networks).
>
> ########### messages
> .
> Jan 31 10:44:28 CentOS-73-64-minimal kernel: e1000e: enp4s0 NIC Link is
> Down
> Jan 31 10:44:29 CentOS-73-64-minimal systemd: Started /usr/sbin/ifup
> enp4s0.
> Jan 31 10:44:29 CentOS-73-64-minimal systemd: Starting /usr/sbin/ifup
> enp4s0.
> Jan 31 10:44:29 CentOS-73-64-minimal kernel: IPv6: ADDRCONF(NETDEV_UP):
> enp4s0: link is not ready
> Jan 31 10:44:29 CentOS-73-64-minimal kernel: 8021q: adding VLAN 0 to HW
> filter on device enp4s0
> Jan 31 10:44:33 CentOS-73-64-minimal daemonAdapter: libvirt: Network
> Driver error : Network not found: no network with matching name
> 'vdsm-ovirtmgmt'
> Jan 31 10:44:33 CentOS-73-64-minimal kernel: e1000e: enp4s0 NIC Link is Up
> 1000 Mbps Full Duplex, Flow Control: Rx/Tx
> Jan 31 10:44:33 CentOS-73-64-minimal kernel: IPv6:
> ADDRCONF(NETDEV_CHANGE): enp4s0: link becomes ready
> Jan 31 10:46:34 CentOS-73-64-minimal kernel: e1000e: enp4s0 NIC Link is
> Down
> Jan 31 10:46:34 CentOS-73-64-minimal systemd: Started /usr/sbin/ifup
> enp4s0.
> Jan 31 10:46:34 CentOS-73-64-minimal systemd: Starting /usr/sbin/ifup
> enp4s0.
> Jan 31 10:46:34 CentOS-73-64-minimal kernel: IPv6: ADDRCONF(NETDEV_UP):
> enp4s0: link is not ready
> Jan 31 10:46:34 CentOS-73-64-minimal kernel: 8021q: adding VLAN 0 to HW
> filter on device enp4s0
> Jan 31 10:46:38 CentOS-73-64-minimal journal: vdsm vds ERROR connectivity
> check failed
> Traceback (most recent call last):
>   File "/usr/share/vdsm/API.py", line 1473, in setupNetworks
>     supervdsm.getProxy().setupNetworks(networks, bondings, options)
>   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 53, in __call__
>     return callMethod()
>   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 51, in <lambda>
>     **kwargs)
>   File "<string>", line 2, in setupNetworks
>   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
>     raise convert_to_error(kind, result)
> ConfigNetworkError: (10, 'connectivity check failed')
> Jan 31 10:46:39 CentOS-73-64-minimal kernel: e1000e: enp4s0 NIC Link is Up
> 1000 Mbps Full Duplex, Flow Control: Rx/Tx
> Jan 31 10:46:39 CentOS-73-64-minimal kernel: IPv6:
> ADDRCONF(NETDEV_CHANGE): enp4s0: link becomes ready
>
> ######## journalctl
>
> Jan 31 11:05:07 CentOS-73-64-minimal kernel: e1000e: enp4s0 NIC Link is
> Down
> Jan 31 11:05:07 CentOS-73-64-minimal systemd[1]: Started /usr/sbin/ifup
> enp4s0.
> Jan 31 11:05:07 CentOS-73-64-minimal systemd[1]: Starting /usr/sbin/ifup
> enp4s0.
> Jan 31 11:05:07 CentOS-73-64-minimal kernel: IPv6: ADDRCONF(NETDEV_UP):
> enp4s0: link is not ready
> Jan 31 11:05:07 CentOS-73-64-minimal kernel: 8021q: adding VLAN 0 to HW
> filter on device enp4s0
> Jan 31 11:05:11 CentOS-73-64-minimal kernel: e1000e: enp4s0 NIC Link is Up
> 1000 Mbps Full Duplex, Flow Control: Rx/Tx
> Jan 31 11:05:11 CentOS-73-64-minimal kernel: IPv6:
> ADDRCONF(NETDEV_CHANGE): enp4s0: link becomes ready
> Jan 31 11:05:11 CentOS-73-64-minimal daemonAdapter[11064]: libvirt:
> Network Driver error : Network not found: no network with matching name
> 'vdsm-ovirtmgmt'
> Jan 31 11:07:12 CentOS-73-64-minimal kernel: e1000e: enp4s0 NIC Link is
> Down
> Jan 31 11:07:12 CentOS-73-64-minimal systemd[1]: Started /usr/sbin/ifup
> enp4s0.
> Jan 31 11:07:12 CentOS-73-64-minimal systemd[1]: Starting /usr/sbin/ifup
> enp4s0.
> Jan 31 11:07:12 CentOS-73-64-minimal kernel: IPv6: ADDRCONF(NETDEV_UP):
> enp4s0: link is not ready
> Jan 31 11:07:12 CentOS-73-64-minimal kernel: 8021q: adding VLAN 0 to HW
> filter on device enp4s0
> Jan 31 11:07:16 CentOS-73-64-minimal vdsm[11340]: vdsm vds ERROR
> connectivity check failed
> Traceback (most recent call last):
>   File "/usr/share/vdsm/API.py", line 1473, in setupNetworks
>     supervdsm.getProxy().setupNetworks(networks, bondings, options)
>   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 53, in __call__
>     return callMethod()
>   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 51, in <lambda>
>     **kwargs)
>   File "<string>", line 2, in setupNetworks
>   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
>     raise convert_to_error(kind, result)
> ConfigNetworkError: (10, 'connectivity check failed')
> Jan 31 11:07:17 CentOS-73-64-minimal kernel: e1000e: enp4s0 NIC Link is Up
> 1000 Mbps Full Duplex, Flow Control: Rx/Tx
> Jan 31 11:07:17 CentOS-73-64-minimal kernel: IPv6:
> ADDRCONF(NETDEV_CHANGE): enp4s0: link becomes ready
>
> On Jan 31, 2017, at 11:06 AM, Yedidyah Bar David <didi at redhat.com> wrote:
>
> On Tue, Jan 31, 2017 at 9:43 AM, Eidmantas Ivanauskas
> <eidmantasivanauskas at gmail.com> wrote:
>
> I could SSH into it, because (as per the supervdsm log) the original network
> config was restored.
>
> The engine and host are on different continents (America and Europe).
>
>
>
> I cleared both engine.log and the host logs, went into the oVirt GUI, opened
> ‘Setup Host Networks’, and dragged ovirtmgmt to bind it to the main Ethernet
> adapter.
>
> As always, the console freezes (the network connection is lost and, after
> those retries, the original config is restored).
>
> engine.log
> http://pastebin.com/kT6WSq2W
>
> and host-deploy doesn’t add any extra info. I guess that’s because I am not
> actually deploying the host, but rather attaching ‘ovirtmgmt’ via ‘Setup
> Host Networks’?
>
>
> OK. Re-adding Eduard and changing the subject. Best,
>
>
>
> Thanks so much for replying.
>
>
>
> On Jan 31, 2017, at 9:28 AM, Sandro Bonazzola <sbonazzo at redhat.com> wrote:
>
>
>
> On Mon, Jan 30, 2017 at 6:05 PM, Eidmantas Ivanauskas
> <eidmantasivanauskas at gmail.com> wrote:
>
>
> Hey guys,
>
> I am trying to set up a new oVirt Engine plus a separate CentOS 7.2-based
> host. After adding the host, SSH freezes (possibly due to the log below),
> and then the GUI just displays ‘Non Operational’.
>
>
>
> Please note that following the CentOS 7.3 release, 7.2 is not supported
> anymore. The latest qemu-kvm-ev 2.6 requires CentOS 7.3.
>
>
>
> [root at CentOS-73-64-minimal ~]# cat /etc/centos-release
> CentOS Linux release 7.3.1611 (Core)
> [root at CentOS-73-64-minimal ~]#
>
> It was 7.3 after all, not 7.2. Sorry.
>
>
>
> The engine is running in a Hetzner DC, which allows only MAC-based routing,
> but I don’t think this part of the setup is related. I have heard of people
> having issues routing additional IP pools.
>
> Here’s the supervdsm.log from the host server.
>
> http://pastebin.com/irfShv9n
>
> What am I missing here? It’s been a long, long day.
> Thanks.
> Eid
>
>
>
>
>
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
>
>
>
>
>
>
> --
> Didi