I have built a new host; my setup is a single hyperconverged node. I followed the
directions to create a new logical network, but the engine has marked the network
as down (Hosts > Network Interfaces tab > Setup Host Networks).
Here is my network config on the host:
[root@vmh ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp96s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovirtmgmt state UP group default qlen 1000
    link/ether 0c:c4:7a:f9:b9:88 brd ff:ff:ff:ff:ff:ff
3: enp96s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 0c:c4:7a:f9:b9:89 brd ff:ff:ff:ff:ff:ff
    inet 172.30.51.2/30 brd 172.30.51.3 scope global noprefixroute dynamic enp96s0f1
       valid_lft 82411sec preferred_lft 82411sec
    inet6 fe80::d899:439c:5ee8:e292/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
19: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 96:e4:00:aa:71:f7 brd ff:ff:ff:ff:ff:ff
23: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 36:0f:60:69:e1:2b brd ff:ff:ff:ff:ff:ff
24: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether f6:3b:14:e1:15:48 brd ff:ff:ff:ff:ff:ff
25: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0c:c4:7a:f9:b9:88 brd ff:ff:ff:ff:ff:ff
    inet 172.30.50.3/24 brd 172.30.50.255 scope global dynamic ovirtmgmt
       valid_lft 84156sec preferred_lft 84156sec
    inet6 fe80::ec4:7aff:fef9:b988/64 scope link
       valid_lft forever preferred_lft forever
26: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovirtmgmt state UNKNOWN group default qlen 1000
    link/ether fe:16:3e:50:53:cd brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc16:3eff:fe50:53cd/64 scope link
       valid_lft forever preferred_lft forever
VLAN 101 is attached to enp96s0f0, which is my ovirtmgmt interface. - IP range 172.30.50.x
VLAN 102 is attached to enp96s0f1, which is my storage NIC for gluster. - IP range 172.30.51.x
VLAN 103 is attached to enp96s0f1 and is intended for most of my VMs that are not infrastructure related. - IP range 192.168.2.x
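As a sanity check on the trunk itself, I believe the tagged frames can be watched directly on the NIC with tcpdump (interface names as above; this assumes the switch port is trunking both VLANs):
[root@vmh ~]# tcpdump -e -nn -i enp96s0f1 vlan 102   # storage traffic, tagged 102
[root@vmh ~]# tcpdump -e -nn -i enp96s0f1 vlan 103   # VM traffic, tagged 103
If frames for both VLAN IDs show up, the switch side should be fine.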
I am pretty confident my router/switch is set up correctly. As a test, I can go to
localhost > Networking > Add VLAN, assign enp96s0f1 to VLAN 103, and it does get an IP
address in the 192.168.2.x range. The host can also ping the 192.168.2.1 gateway.
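For reference, I think the equivalent test from the command line would be roughly this (my interface/VLAN names; assumes dhclient is available on the node):
[root@vmh ~]# ip link add link enp96s0f1 name enp96s0f1.103 type vlan id 103
[root@vmh ~]# ip link set enp96s0f1.103 up
[root@vmh ~]# dhclient enp96s0f1.103      # gets a 192.168.2.x lease
[root@vmh ~]# ping -c 3 192.168.2.1
[root@vmh ~]# ip link del enp96s0f1.103   # clean up before handing the NIC back to oVirt
So plain VLAN 103 connectivity on that NIC looks fine outside of oVirt.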
Why doesn't the engine think the VLAN is up? Which logs do I need to review?
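(My guess is the places to look are the VDSM logs on the host and the engine log on the engine VM, i.e. something like
[root@vmh ~]# tail -f /var/log/vdsm/vdsm.log /var/log/vdsm/supervdsm.log
plus /var/log/ovirt-engine/engine.log on the engine, but I don't know what to grep for.)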
I did some further testing. In short, if I attach logical network/VLAN 103 to enp96s0f0,
the network comes up and is available for use. If I attach VLAN 103 to enp96s0f1, the
logical network never comes up.
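If it would help, I can capture what VDSM itself reports in each case; as far as I know, on a 4.3 node something like this shows the networks and VLAN devices VDSM thinks it created:
[root@vmh ~]# vdsm-client Host getCapabilities | less   # see the "networks" and "vlans" sections
[root@vmh ~]# ip -d link show                           # shows whether an enp96s0f1.103 device exists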
I have sort of answered my own question, though I don't fully understand why.
If my memory serves me correctly, I used the exact same router setup on my oVirt 4.2.7
node, and I believe I was able to use VLAN 103 on either interface without issue. Is
there something different about oVirt node 4.3.6? Is there some sort of rule that won't
allow a logical network to share an interface with the storage/gluster network?