[ovirt-users] 3.6.1 HE install on CentOS 7.2 resulted in unsync'd network

John Florian jflorian@doubledog.org
Sat Dec 19 00:08:48 UTC 2015


I'm trying to get a 3.6.1 HE setup going where I have 4 VLANs (VIDs
101-104) for storage networks, 1 VLAN (VID 100) for ovirtmgmt and 1 more
(VID 1) for everything else.  Because I know of no way to manipulate the
network configuration from the management GUI once the HE is running, and
since I have only a single Host, I made the OS configuration as close as
possible to what I'd want when done.  This looks like:

[root@orthosie ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc
noqueue master ovirtmgmt state UP
    link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
3: em1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master
bond0 state UP qlen 1000
    link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
4: em2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master
bond0 state UP qlen 1000
    link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
5: em3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master
bond0 state UP qlen 1000
    link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
6: em4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master
bond0 state UP qlen 1000
    link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
8: bond0.1@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
noqueue state UP
    link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
    inet 172.16.7.8/24 brd 172.16.7.255 scope global bond0.1
       valid_lft forever preferred_lft forever
    inet6 fe80::7a2b:cbff:fe3c:da02/64 scope link
       valid_lft forever preferred_lft forever
9: bond0.101@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
noqueue state UP
    link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.101.203/24 brd 192.168.101.255 scope global bond0.101
       valid_lft forever preferred_lft forever
    inet6 fe80::7a2b:cbff:fe3c:da02/64 scope link
       valid_lft forever preferred_lft forever
10: bond0.102@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
noqueue state UP
    link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.102.203/24 brd 192.168.102.255 scope global bond0.102
       valid_lft forever preferred_lft forever
    inet6 fe80::7a2b:cbff:fe3c:da02/64 scope link
       valid_lft forever preferred_lft forever
11: bond0.103@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
noqueue state UP
    link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.103.203/24 brd 192.168.103.255 scope global bond0.103
       valid_lft forever preferred_lft forever
    inet6 fe80::7a2b:cbff:fe3c:da02/64 scope link
       valid_lft forever preferred_lft forever
12: bond0.104@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
noqueue state UP
    link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.104.203/24 brd 192.168.104.255 scope global bond0.104
       valid_lft forever preferred_lft forever
    inet6 fe80::7a2b:cbff:fe3c:da02/64 scope link
       valid_lft forever preferred_lft forever
13: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
state UP
    link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.102/24 brd 192.168.100.255 scope global ovirtmgmt
       valid_lft forever preferred_lft forever
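
For completeness, the bonding and bridging above were built with plain
network-scripts ifcfg files.  A trimmed sketch of what they look like
(only one slave and one storage VLAN shown, and the bonding mode here is
just a placeholder, not necessarily what the box actually runs):

# /etc/sysconfig/network-scripts/ifcfg-em1  (em2-em4 identical apart from DEVICE)
DEVICE=em1
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=802.3ad miimon=100"   # placeholder bonding options only
BRIDGE=ovirtmgmt

# /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
DEVICE=ovirtmgmt
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.100.102
PREFIX=24

# /etc/sysconfig/network-scripts/ifcfg-bond0.101  (bond0.1 and .102-.104 follow suit)
DEVICE=bond0.101
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.101.203
PREFIX=24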

The hosted-engine deploy script got stuck near the end when it wanted
the HA broker to take over.  It said the ovirtmgmt network was
unavailable on the Host and suggested trying to activate it within the
GUI.  Though I had my bonding and bridging all configured prior to any
HE deployment attempt (as shown above), the GUI didn’t see it that way. 
It knew of the bond, and the 4 IFs of course, but it showed all 4 IFs as
down and the required ovirtmgmt network was off on the right side --
effectively not yet associated with the physical devices.  I dragged the
ovirtmgmt net over to the left to associate it with the 4 IFs and pressed
Save.  The GUI now shows all 4 IFs up with ovirtmgmt assigned.  But it
is not in sync -- specifically the netmask property on the host is
"255.255.255.0" while on the DC it's "24".  They're saying the same
thing, just in different ways.
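
A quick sanity check of that equivalence (assuming the initscripts ipcalc
that CentOS 7 ships):

[root@orthosie ~]# ipcalc -m 192.168.100.102/24
NETMASK=255.255.255.0
[root@orthosie ~]# ipcalc -p 192.168.100.102 255.255.255.0
PREFIX=24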

Since I only have the one Host, how can I sync this?

-- 
John Florian
