[ovirt-users] Deploying self-hosted engine - Waiting for VDSM host to become operational + received packet with own address as source
Simone Tiraboschi
stirabos at redhat.com
Mon Jan 8 09:01:54 UTC 2018
On Mon, Jan 8, 2018 at 3:23 AM, Sam McLeod <mailinglists at smcleod.net> wrote:
> Hello,
>
> I'm trying to set up a host as a self-hosted engine as per
> https://www.ovirt.org/documentation/self-hosted/chap-Deploying_Self-Hosted_Engine/.
> The host is configured with two bonded network interfaces:
>
> bond0 = management network for hypervisors, set up as active/passive.
> bond1 = network that has VLANs for various network segments for virtual
> machines to use, set up as an LACP bond to the upstream switches.
>
> On the host, both networks are operational and work as expected.
>
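For context, on CentOS 7 the two bonds described above would usually carry
bonding options along these lines in their ifcfg files (illustrative values
only, not taken from the actual host):

    # ifcfg-bond0 - management, active/passive
    BONDING_OPTS="mode=active-backup miimon=100"

    # ifcfg-bond1 - VM VLAN trunk, LACP to the upstream switches
    BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast"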
> When setting up the self-hosted engine, bond0 is selected as the network to
> bridge with, and a unique IP is given to the self-hosted engine VM.
>
> During the final stages of the self-hosted engine setup, the installer
> gets stuck on 'Waiting for the VDSM host to become operational'.
> While it repeats this every minute or so, the host logs the message
> 'bond0: received packet with own address as source address', which is odd
> to me as it's an active/passive bond and I'd only expect to see this kind
> of message on XOR / load-balanced interfaces.
>
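That kernel warning usually means frames carrying the bond's own MAC address
are being looped back to it, e.g. by a switch port or a local bridge. As a
first check it may be worth confirming that bond0 is still in active-backup
mode after the deploy touched the network; these are plain sysfs/iproute2
commands, nothing oVirt-specific:

    grep -i 'bonding mode' /proc/net/bonding/bond0
    ip -d link show bond0

If the reported mode is balance-rr or balance-xor rather than active-backup,
that would explain seeing this message on bond0.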
Could you please attach/share the hosted-engine-setup, vdsm, supervdsm and
host-deploy (on the engine VM) logs?
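For reference, the logs in question typically live here (default paths,
assuming a standard install):

    /var/log/ovirt-hosted-engine-setup/   (hosted-engine-setup, on the host)
    /var/log/vdsm/vdsm.log                (vdsm, on the host)
    /var/log/vdsm/supervdsm.log           (supervdsm, on the host)
    /var/log/ovirt-engine/host-deploy/    (host-deploy, on the engine VM)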
>
> Host console screenshot: https://imgur.com/a/a2JLd
> Host OS: CentOS 7.4
> oVirt version: 4.2.0
>
> ip a on the host while the install is stuck waiting for VDSM:
>
> # ip a
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>     inet 127.0.0.1/8 scope host lo
>        valid_lft forever preferred_lft forever
>     inet6 ::1/128 scope host
>        valid_lft forever preferred_lft forever
> 2: enp2s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond1 portid 0100000000000000000000373031384833 state UP qlen 1000
>     link/ether 78:e3:b5:10:74:88 brd ff:ff:ff:ff:ff:ff
> 3: enp2s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond1 portid 0200000000000000000000373031384833 state UP qlen 1000
>     link/ether 78:e3:b5:10:74:88 brd ff:ff:ff:ff:ff:ff
> 4: ens1f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 portid 0100000000000000000000474835323543 state UP qlen 1000
>     link/ether 00:9c:02:3c:49:90 brd ff:ff:ff:ff:ff:ff
> 5: ens1f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 portid 0200000000000000000000474835323543 state UP qlen 1000
>     link/ether 00:9c:02:3c:49:90 brd ff:ff:ff:ff:ff:ff
> 6: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP qlen 1000
>     link/ether 78:e3:b5:10:74:88 brd ff:ff:ff:ff:ff:ff
>     inet6 fe80::7ae3:b5ff:fe10:7488/64 scope link
>        valid_lft forever preferred_lft forever
> 7: storage@bond1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP qlen 1000
>     link/ether 78:e3:b5:10:74:88 brd ff:ff:ff:ff:ff:ff
>     inet 10.51.40.172/24 brd 10.51.40.255 scope global storage
>        valid_lft forever preferred_lft forever
>     inet6 fe80::7ae3:b5ff:fe10:7488/64 scope link
>        valid_lft forever preferred_lft forever
> 9: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
>     link/ether ae:c0:02:25:42:24 brd ff:ff:ff:ff:ff:ff
> 10: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
>     link/ether be:92:5d:c3:28:4d brd ff:ff:ff:ff:ff:ff
> 30: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovirtmgmt state UP qlen 1000
>     link/ether 00:9c:02:3c:49:90 brd ff:ff:ff:ff:ff:ff
> 46: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
>     link/ether 00:9c:02:3c:49:90 brd ff:ff:ff:ff:ff:ff
>     inet 10.51.14.112/24 brd 10.51.14.255 scope global ovirtmgmt
>        valid_lft forever preferred_lft forever
> 47: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
>     link/ether 8e:c0:25:88:40:de brd ff:ff:ff:ff:ff:ff
>
>
> Before the self-hosted engine install was run, the following did not exist:
>
> ;vdsmdummy;
> ovs-system
> br-int
> ovirtmgmt
>
> and bond0 was *not* a slave of ovirtmgmt.
>
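A quick way to see how the deploy rewired things is to list the bridge
memberships directly; these are plain iproute2 commands, nothing
oVirt-specific:

    bridge link show          # which ports belong to which bridge
    ip -d link show ovirtmgmt # details of the management bridge

Given the ip a output above, bond0 should show up as a port of ovirtmgmt,
which is the bridge hosted-engine-setup creates for the management network.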
> I'm now going to kick off a complete reinstall of CentOS 7 on the host, as
> I've since tried cleaning up the host using the ovirt-hosted-engine-cleanup
> command and removing the packages, which left the network configuration a
> mess and didn't actually clean up files on disk as expected.
>
>
Yes, currently ovirt-hosted-engine-cleanup is not able to revert to the
initial network configuration, but hosted-engine-setup is supposed to be able
to consume an existing management bridge.
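If a full OS reinstall turns out to be overkill, one way to get back to a
plain bond0 by hand is sketched below; this is a rough outline for CentOS 7
with network-scripts, not an official procedure, and the ifcfg contents are
the usual conventions rather than anything taken from this host:

    # stop the oVirt/VDSM services first
    systemctl stop ovirt-ha-agent ovirt-ha-broker vdsmd supervdsmd

    # drop the bridge definition and put the IP back on the bond
    rm -f /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
    # in ifcfg-bond0: remove the BRIDGE=ovirtmgmt line and restore the
    # original BOOTPROTO/IPADDR/PREFIX/GATEWAY settings
    systemctl restart network

That said, since hosted-engine-setup should be able to reuse the existing
ovirtmgmt bridge on a retry, this is only needed if you want the pre-deploy
layout back.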
>
> --
> Sam McLeod (protoporpoise on IRC)
> https://smcleod.net
> https://twitter.com/s_mcleod
>
> Words are my own opinions and do not necessarily represent those of
> my employer or partners.
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>