[ovirt-users] Issue with OVN/OVS and mandatory ovirtmgmt network
Sverker Abrahamsson
sverker at abrahamsson.com
Thu Dec 29 10:44:32 UTC 2016
Hi
The rpm packages you listed in the other mail are installed, but I had
not run 'vdsm-tool ovn-config' to create the tunnel, as the OVN
controller is on the same host.
[root at h2 ~]# rpm -q openvswitch-ovn-common
openvswitch-ovn-common-2.6.90-1.el7.centos.x86_64
[root at h2 ~]# rpm -q openvswitch-ovn-host
openvswitch-ovn-host-2.6.90-1.el7.centos.x86_64
[root at h2 ~]# rpm -q python-openvswitch
python-openvswitch-2.6.90-1.el7.centos.noarch
After removing my manually created br-int and running

vdsm-tool ovn-config 127.0.0.1 172.27.1.1

I have br-int again, but 'ip link show' does not show any 'genev_sys_'
link, nor does 'ovs-vsctl show' list any OVN port. I assume these only
appear when there is an actual tunnel?
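For reference, the following commands (a sketch, assuming the standard ovs/ovn CLI tools from the packages above are present on the host) show what ovn-config stored in the local OVS database and which chassis the southbound DB knows about; genev_sys_* links and geneve ports are only created toward a *remote* chassis:

```shell
# Run on the hypervisor host. Guarded so the snippet is a no-op on
# machines without the ovs/ovn tools installed.
if command -v ovs-vsctl >/dev/null 2>&1; then
    # Settings written by 'vdsm-tool ovn-config' into the Open_vSwitch
    # table: ovn-remote, ovn-encap-ip, ovn-encap-type
    ovs-vsctl get Open_vSwitch . external_ids
fi
if command -v ovn-sbctl >/dev/null 2>&1; then
    # Lists chassis registered in the southbound DB and their tunnel
    # encapsulations; a single-host setup shows only the local chassis
    ovn-sbctl show
fi
```

With the controller and databases on one host, only the local chassis is expected, hence no tunnel interfaces.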
[root at h2 ~]# ovs-vsctl show
ebb6aede-cbbc-4f4f-a88a-a9cd72b2bd23
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
    Bridge ovirtbridge
        Port ovirtbridge
            Interface ovirtbridge
                type: internal
    Bridge "ovsbridge0"
        Port "ovsbridge0"
            Interface "ovsbridge0"
                type: internal
        Port "eth0"
            Interface "eth0"
    ovs_version: "2.6.90"
[root at h2 ~]# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode
DEFAULT qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
master ovs-system state UP mode DEFAULT qlen 1000
link/ether 44:8a:5b:84:7d:b3 brd ff:ff:ff:ff:ff:ff
3: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode
DEFAULT qlen 1000
link/ether 5a:14:cf:28:47:e2 brd ff:ff:ff:ff:ff:ff
4: ovsbridge0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
state UNKNOWN mode DEFAULT qlen 1000
link/ether 44:8a:5b:84:7d:b3 brd ff:ff:ff:ff:ff:ff
5: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode
DEFAULT qlen 1000
link/ether 9e:b0:3a:9d:f2:4b brd ff:ff:ff:ff:ff:ff
6: ovirtbridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
state UNKNOWN mode DEFAULT qlen 1000
link/ether a6:f6:e5:a4:5b:45 brd ff:ff:ff:ff:ff:ff
7: dummy0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue master
ovirtmgmt state UNKNOWN mode DEFAULT qlen 1000
link/ether 66:e0:1c:c3:a9:d8 brd ff:ff:ff:ff:ff:ff
8: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
state UP mode DEFAULT qlen 1000
link/ether 66:e0:1c:c3:a9:d8 brd ff:ff:ff:ff:ff:ff
Firewall settings:
[root at h2 ~]# firewall-cmd --list-all-zones
work
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services: dhcpv6-client ssh
  ports:
  protocols:
  masquerade: no
  forward-ports:
  sourceports:
  icmp-blocks:
  rich rules:

drop
  target: DROP
  icmp-block-inversion: no
  interfaces:
  sources:
  services:
  ports:
  protocols:
  masquerade: no
  forward-ports:
  sourceports:
  icmp-blocks:
  rich rules:

internal
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services: dhcpv6-client mdns samba-client ssh
  ports:
  protocols:
  masquerade: no
  forward-ports:
  sourceports:
  icmp-blocks:
  rich rules:

external
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services: ssh
  ports:
  protocols:
  masquerade: yes
  forward-ports:
  sourceports:
  icmp-blocks:
  rich rules:

trusted
  target: ACCEPT
  icmp-block-inversion: no
  interfaces:
  sources:
  services:
  ports:
  protocols:
  masquerade: no
  forward-ports:
  sourceports:
  icmp-blocks:
  rich rules:

home
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services: dhcpv6-client mdns samba-client ssh
  ports:
  protocols:
  masquerade: no
  forward-ports:
  sourceports:
  icmp-blocks:
  rich rules:

dmz
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services: ssh
  ports:
  protocols:
  masquerade: no
  forward-ports:
  sourceports:
  icmp-blocks:
  rich rules:

public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0 ovsbridge0
  sources:
  services: dhcpv6-client ssh
  ports:
  protocols:
  masquerade: no
  forward-ports:
  sourceports:
  icmp-blocks:
  rich rules:

block
  target: %%REJECT%%
  icmp-block-inversion: no
  interfaces:
  sources:
  services:
  ports:
  protocols:
  masquerade: no
  forward-ports:
  sourceports:
  icmp-blocks:
  rich rules:

ovirt (active)
  target: default
  icmp-block-inversion: no
  interfaces: ovirtbridge ovirtmgmt
  sources:
  services: dhcp ovirt-fence-kdump-listener ovirt-http ovirt-https ovirt-imageio-proxy ovirt-postgres ovirt-provider-ovn ovirt-vmconsole-proxy ovirt-websocket-proxy ssh vdsm
  ports:
  protocols:
  masquerade: yes
  forward-ports:
  sourceports:
  icmp-blocks:
  rich rules:
	rule family="ipv4" port port="6641" protocol="tcp" accept
	rule family="ipv4" port port="6642" protocol="tcp" accept
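For reference, the two rich rules at the end of the ovirt zone can be (re)created like this (a sketch; the zone name and port numbers are taken from the output above):

```shell
# Permanently open the OVN northbound (6641) and southbound (6642) DB
# ports in the 'ovirt' zone. Guarded so the snippet is a no-op on
# machines where firewalld is not installed or not running.
if command -v firewall-cmd >/dev/null 2>&1 && firewall-cmd --state >/dev/null 2>&1; then
    firewall-cmd --permanent --zone=ovirt \
        --add-rich-rule='rule family="ipv4" port port="6641" protocol="tcp" accept'
    firewall-cmd --permanent --zone=ovirt \
        --add-rich-rule='rule family="ipv4" port port="6642" protocol="tcp" accept'
    firewall-cmd --reload   # apply the permanent rules to the running config
fi
```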
The db dump is attached
/Sverker
On 2016-12-29 at 09:50, Marcin Mirecki wrote:
> Hi,
>
> Can you please do: "sudo ovsdb-client dump"
> on the host and send me the output?
>
> Have you configured the ovn controller to connect to the
> OVN north? You can do it using "vdsm-tool ovn-config" or
> using the OVN tools directly.
> Please check out: https://www.ovirt.org/blog/2016/11/ovirt-provider-ovn/
> for details.
>
> Also please note that the OVN provider is completely different
> from the neutron-openvswitch plugin. Please don't mix the two.
>
> Marcin
>
>
> ----- Original Message -----
>> From: "Marcin Mirecki" <mmirecki at redhat.com>
>> To: "Sverker Abrahamsson" <sverker at abrahamsson.com>
>> Cc: "Ovirt Users" <users at ovirt.org>
>> Sent: Thursday, December 29, 2016 9:27:19 AM
>> Subject: Re: [ovirt-users] Issue with OVN/OVS and mandatory ovirtmgmt network
>>
>> Hi,
>>
>> br-int is the OVN integration bridge; it should have been created
>> when installing OVN. I assume you have the following packages installed
>> on the host:
>> openvswitch-ovn-common
>> openvswitch-ovn-host
>> python-openvswitch
>>
>> Please give me some time to look at the connectivity problem.
>>
>> Marcin
>>
>>
>>
>> ----- Original Message -----
>>> From: "Sverker Abrahamsson" <sverker at abrahamsson.com>
>>> To: "Marcin Mirecki" <mmirecki at redhat.com>
>>> Cc: "Ovirt Users" <users at ovirt.org>
>>> Sent: Thursday, December 29, 2016 12:47:04 AM
>>> Subject: Re: [ovirt-users] Issue with OVN/OVS and mandatory ovirtmgmt
>>> network
>>>
>>> From
>>> /usr/libexec/vdsm/hooks/before_device_create/ovirt_provider_ovn_hook
>>> (installed by ovirt-provider-ovn-driver rpm):
>>>
>>> BRIDGE_NAME = 'br-int'
>>>
>>>
>>> On 2016-12-28 at 23:56, Sverker Abrahamsson wrote:
>>>> Googling on the message about br-int suggested adding that bridge to ovs:
>>>>
>>>> ovs-vsctl add-br br-int
>>>>
>>>> Then the VM is able to boot, but it fails to get network connectivity.
>>>> Output in /var/log/messages:
>>>>
>>>> Dec 28 23:31:35 h2 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl
>>>> --timeout=5 -- --if-exists del-port vnet0 -- add-port br-int vnet0 --
>>>> set Interface vnet0 "external-ids:attached-mac=\"00:1a:4a:16:01:51\""
>>>> -- set Interface vnet0
>>>> "external-ids:iface-id=\"e8853aac-8a75-41b0-8010-e630017dcdd8\"" --
>>>> set Interface vnet0
>>>> "external-ids:vm-id=\"b9440d60-ef5a-4e2b-83cf-081df7c09e6f\"" -- set
>>>> Interface vnet0 external-ids:iface-status=active
>>>> Dec 28 23:31:35 h2 kernel: device vnet0 entered promiscuous mode
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j
>>>> libvirt-J-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
>>>> libvirt-P-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ebtables --concurrent -t nat -F libvirt-J-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ebtables --concurrent -t nat -X libvirt-J-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ebtables --concurrent -t nat -F libvirt-P-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ebtables --concurrent -t nat -X libvirt-P-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ebtables --concurrent -t nat -F J-vnet0-mac' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ebtables --concurrent -t nat -X J-vnet0-mac' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ebtables --concurrent -t nat -F J-vnet0-arp-mac' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ebtables --concurrent -t nat -X J-vnet0-arp-mac' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev
>>>> --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out
>>>> vnet0 -g FO-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev --physdev-in vnet0
>>>> -g FI-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev --physdev-in
>>>> vnet0 -g HI-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/iptables -w2 -w -F FO-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/iptables -w2 -w -X FO-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/iptables -w2 -w -F FI-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/iptables -w2 -w -X FI-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/iptables -w2 -w -F HI-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/iptables -w2 -w -X HI-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/iptables -w2 -w -E FP-vnet0 FO-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/iptables -w2 -w -E FJ-vnet0 FI-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/iptables -w2 -w -E HJ-vnet0 HI-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev
>>>> --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-out
>>>> vnet0 -g FO-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ip6tables -w2 -w -D libvirt-in -m physdev --physdev-in
>>>> vnet0 -g FI-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev --physdev-in
>>>> vnet0 -g HI-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ip6tables -w2 -w -F FO-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ip6tables -w2 -w -X FO-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ip6tables -w2 -w -F FI-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ip6tables -w2 -w -X FI-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ip6tables -w2 -w -F HI-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ip6tables -w2 -w -X HI-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ip6tables -w2 -w -E FP-vnet0 FO-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ip6tables -w2 -w -E FJ-vnet0 FI-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ip6tables -w2 -w -E HJ-vnet0 HI-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j
>>>> libvirt-I-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
>>>> libvirt-O-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ebtables --concurrent -t nat -L libvirt-I-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ebtables --concurrent -t nat -L libvirt-O-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ebtables --concurrent -t nat -F libvirt-I-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ebtables --concurrent -t nat -X libvirt-I-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ebtables --concurrent -t nat -F libvirt-O-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ebtables --concurrent -t nat -X libvirt-O-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ebtables --concurrent -t nat -E libvirt-P-vnet0
>>>> libvirt-O-vnet0' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ebtables --concurrent -t nat -F I-vnet0-mac' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ebtables --concurrent -t nat -X I-vnet0-mac' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ebtables --concurrent -t nat -F I-vnet0-arp-mac' failed:
>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
>>>> '/usr/sbin/ebtables --concurrent -t nat -X I-vnet0-arp-mac' failed:
>>>>
>>>>
>>>> [root at h2 etc]# ovs-vsctl show
>>>> ebb6aede-cbbc-4f4f-a88a-a9cd72b2bd23
>>>>     Bridge ovirtbridge
>>>>         Port "ovirtport0"
>>>>             Interface "ovirtport0"
>>>>                 type: internal
>>>>         Port ovirtbridge
>>>>             Interface ovirtbridge
>>>>                 type: internal
>>>>     Bridge "ovsbridge0"
>>>>         Port "ovsbridge0"
>>>>             Interface "ovsbridge0"
>>>>                 type: internal
>>>>         Port "eth0"
>>>>             Interface "eth0"
>>>>     Bridge br-int
>>>>         Port br-int
>>>>             Interface br-int
>>>>                 type: internal
>>>>         Port "vnet0"
>>>>             Interface "vnet0"
>>>>     ovs_version: "2.6.90"
>>>>
>>>> Searching through the code, it appears that br-int comes from the
>>>> neutron-openvswitch plugin?
>>>>
>>>> [root at h2 share]# rpm -qf
>>>> /usr/share/otopi/plugins/ovirt-host-deploy/openstack/neutron_openvswitch.py
>>>> ovirt-host-deploy-1.6.0-0.0.master.20161215101008.gitb76ad50.el7.centos.noarch
>>>>
>>>>
>>>> /Sverker
>>>>
>>>> On 2016-12-28 at 23:24, Sverker Abrahamsson wrote:
>>>>> In addition I had to add an alias to modprobe:
>>>>>
>>>>> [root at h2 modprobe.d]# cat dummy.conf
>>>>> alias dummy0 dummy
>>>>>
>>>>>
>>>>> On 2016-12-28 at 23:03, Sverker Abrahamsson wrote:
>>>>>> Hi
>>>>>> I first tried to set the device name to dummy_0, but then ifup did
>>>>>> not succeed in creating the device unless I first did 'ip link add
>>>>>> dummy_0 type dummy', and then it would not succeed in establishing
>>>>>> the interface on reboot.
>>>>>>
>>>>>> Setting fake_nics = dummy0 did not work either, but this works:
>>>>>>
>>>>>> fake_nics = dummy*
>>>>>>
>>>>>> The engine is now able to find the interface and assign the
>>>>>> ovirtmgmt bridge to it.
>>>>>>
>>>>>> However, I then run into the next issue when starting a VM:
>>>>>>
>>>>>> 2016-12-28 22:28:23,897 ERROR
>>>>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>>>>> (ForkJoinPool-1-worker-2) [] Correlation ID: null, Call Stack: null,
>>>>>> Custom Event ID: -1, Message: VM CentOS7 is down with error. Exit
>>>>>> message: Cannot get interface MTU on 'br-int': No such device.
>>>>>>
>>>>>> This VM has a nic on ovirtbridge, which comes from the OVN provider.
>>>>>>
>>>>>> /Sverker
>>>>>>
>>>>>> On 2016-12-28 at 14:38, Marcin Mirecki wrote:
>>>>>>> Sverker,
>>>>>>>
>>>>>>> Can you try adding a vnic named veth_* or dummy_*,
>>>>>>> (or alternatively add the name of the vnic to
>>>>>>> vdsm.config fake_nics), and setup the management
>>>>>>> network using this vnic?
>>>>>>> I suppose adding the vnic you use for connecting
>>>>>>> to the engine to fake_nics should make it visible
>>>>>>> to the engine, and you should be able to use it for
>>>>>>> the setup.
>>>>>>>
>>>>>>> Marcin
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> ----- Original Message -----
>>>>>>>> From: "Marcin Mirecki" <mmirecki at redhat.com>
>>>>>>>> To: "Sverker Abrahamsson" <sverker at abrahamsson.com>
>>>>>>>> Cc: "Ovirt Users" <users at ovirt.org>
>>>>>>>> Sent: Wednesday, December 28, 2016 12:06:26 PM
>>>>>>>> Subject: Re: [ovirt-users] Issue with OVN/OVS and mandatory
>>>>>>>> ovirtmgmt network
>>>>>>>>
>>>>>>>>> I have an internal OVS bridge called ovirtbridge which has a port
>>>>>>>>> with
>>>>>>>>> IP address, but in the host network settings that port is not
>>>>>>>>> visible.
>>>>>>>> I just verified, and unfortunately the virtual ports are not
>>>>>>>> visible in the engine to assign a network to :(
>>>>>>>> I'm afraid that the engine is not ready for such a scenario (even
>>>>>>>> if it
>>>>>>>> works).
>>>>>>>> Please give me some time to look for a solution.
>>>>>>>>
>>>>>>>> ----- Original Message -----
>>>>>>>>> From: "Sverker Abrahamsson" <sverker at abrahamsson.com>
>>>>>>>>> To: "Marcin Mirecki" <mmirecki at redhat.com>
>>>>>>>>> Cc: "Ovirt Users" <users at ovirt.org>
>>>>>>>>> Sent: Wednesday, December 28, 2016 11:48:24 AM
>>>>>>>>> Subject: Re: [ovirt-users] Issue with OVN/OVS and mandatory
>>>>>>>>> ovirtmgmt
>>>>>>>>> network
>>>>>>>>>
>>>>>>>>> Hi Marcin
>>>>>>>>> Yes, that is my issue. I don't want to let ovirt/vdsm see eth0 or
>>>>>>>>> ovsbridge0, since as soon as it sees them it messes up the network
>>>>>>>>> config so that the host becomes unreachable.
>>>>>>>>>
>>>>>>>>> I have an internal OVS bridge called ovirtbridge which has a port
>>>>>>>>> with
>>>>>>>>> IP address, but in the host network settings that port is not
>>>>>>>>> visible.
>>>>>>>>> It doesn't help to name it ovirtmgmt.
>>>>>>>>>
>>>>>>>>> The engine is able to communicate with the host on the ip it has
>>>>>>>>> been
>>>>>>>>> given, it's just that it believes that it HAS to have an ovirtmgmt
>>>>>>>>> network which can't be on OVN.
>>>>>>>>>
>>>>>>>>> /Sverker
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 2016-12-28 at 10:45, Marcin Mirecki wrote:
>>>>>>>>>> Hi Sverker,
>>>>>>>>>>
>>>>>>>>>> The management network is mandatory on each host. It's used by the
>>>>>>>>>> engine to communicate with the host.
>>>>>>>>>> Looking at your description and the exception it looks like it is
>>>>>>>>>> missing.
>>>>>>>>>> The error is caused by not having any network for the host
>>>>>>>>>> (network list retrieved in
>>>>>>>>>> InterfaceDaoImpl.getHostNetworksByCluster -
>>>>>>>>>> which
>>>>>>>>>> gets all the networks on nics for a host from vds_interface
>>>>>>>>>> table in the
>>>>>>>>>> DB).
>>>>>>>>>>
>>>>>>>>>> Could you maybe create a virtual nic connected to ovsbridge0 (as I
>>>>>>>>>> understand you
>>>>>>>>>> have no physical nic available) and use this for the management
>>>>>>>>>> network?
>>>>>>>>>>
>>>>>>>>>>> I then create a bridge for use with ovirt, with a private address.
>>>>>>>>>> I'm not quite sure I understand. Is this yet another bridge
>>>>>>>>>> connected to
>>>>>>>>>> ovsbridge0?
>>>>>>>>>> You could also attach the vnic for the management network here
>>>>>>>>>> if need
>>>>>>>>>> be.
>>>>>>>>>>
>>>>>>>>>> Please keep in mind that OVN cannot be used to set up the
>>>>>>>>>> management network.
>>>>>>>>>> The OVN provider can only handle external networks, which cannot
>>>>>>>>>> be used for a management network.
>>>>>>>>>>
>>>>>>>>>> Marcin
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> ----- Original Message -----
>>>>>>>>>>> From: "Sverker Abrahamsson" <sverker at abrahamsson.com>
>>>>>>>>>>> To: users at ovirt.org
>>>>>>>>>>> Sent: Wednesday, December 28, 2016 12:39:59 AM
>>>>>>>>>>> Subject: [ovirt-users] Issue with OVN/OVS and mandatory ovirtmgmt
>>>>>>>>>>> network
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Hi
>>>>>>>>>>> For a long time I've been looking for proper Open vSwitch support
>>>>>>>>>>> in ovirt, so I'm happy that it is moving in the right direction.
>>>>>>>>>>> However, there still seems to be a dependency on an ovirtmgmt
>>>>>>>>>>> bridge, and I'm unable to move that to the OVN provider.
>>>>>>>>>>>
>>>>>>>>>>> The hosting center where I rent hw instances has a somewhat
>>>>>>>>>>> special network setup: I have one physical network port with a /32
>>>>>>>>>>> netmask and a point-to-point config to the router. I connect the
>>>>>>>>>>> physical port to an OVS bridge which has the public IP. Since ovirt
>>>>>>>>>>> always messes up the network config when I've tried to let it have
>>>>>>>>>>> access to the config for the physical port, I've set eth0 and
>>>>>>>>>>> ovsbridge0 as hidden in vdsm.conf.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> I then create a bridge for use with ovirt, with a private
>>>>>>>>>>> address. With
>>>>>>>>>>> the
>>>>>>>>>>> OVN provider I am now able to import these into the engine and
>>>>>>>>>>> it looks
>>>>>>>>>>> good. When creating a VM I can select that it will have a vNic
>>>>>>>>>>> on my OVS
>>>>>>>>>>> bridge.
>>>>>>>>>>>
>>>>>>>>>>> However, I can't start the VM as an exception is thrown in the
>>>>>>>>>>> log:
>>>>>>>>>>>
>>>>>>>>>>> 2016-12-28 00:13:33,350 ERROR
>>>>>>>>>>> [org.ovirt.engine.core.bll.RunVmCommand]
>>>>>>>>>>> (default task-5) [3c882d53] Error during ValidateFailure.:
>>>>>>>>>>> java.lang.NullPointerException
>>>>>>>>>>> at
>>>>>>>>>>> org.ovirt.engine.core.bll.scheduling.policyunits.NetworkPolicyUnit.validateRequiredNetworksAvailable(NetworkPolicyUnit.java:140)
>>>>>>>>>>>
>>>>>>>>>>> [bll.jar:]
>>>>>>>>>>> at
>>>>>>>>>>> org.ovirt.engine.core.bll.scheduling.policyunits.NetworkPolicyUnit.filter(NetworkPolicyUnit.java:69)
>>>>>>>>>>>
>>>>>>>>>>> [bll.jar:]
>>>>>>>>>>> at
>>>>>>>>>>> org.ovirt.engine.core.bll.scheduling.SchedulingManager.runInternalFilters(SchedulingManager.java:597)
>>>>>>>>>>>
>>>>>>>>>>> [bll.jar:]
>>>>>>>>>>> at
>>>>>>>>>>> org.ovirt.engine.core.bll.scheduling.SchedulingManager.runFilters(SchedulingManager.java:564)
>>>>>>>>>>>
>>>>>>>>>>> [bll.jar:]
>>>>>>>>>>> at
>>>>>>>>>>> org.ovirt.engine.core.bll.scheduling.SchedulingManager.canSchedule(SchedulingManager.java:494)
>>>>>>>>>>>
>>>>>>>>>>> [bll.jar:]
>>>>>>>>>>> at
>>>>>>>>>>> org.ovirt.engine.core.bll.validator.RunVmValidator.canRunVm(RunVmValidator.java:133)
>>>>>>>>>>>
>>>>>>>>>>> [bll.jar:]
>>>>>>>>>>> at
>>>>>>>>>>> org.ovirt.engine.core.bll.RunVmCommand.validate(RunVmCommand.java:940)
>>>>>>>>>>>
>>>>>>>>>>> [bll.jar:]
>>>>>>>>>>> at
>>>>>>>>>>> org.ovirt.engine.core.bll.CommandBase.internalValidate(CommandBase.java:886)
>>>>>>>>>>>
>>>>>>>>>>> [bll.jar:]
>>>>>>>>>>> at
>>>>>>>>>>> org.ovirt.engine.core.bll.CommandBase.validateOnly(CommandBase.java:366)
>>>>>>>>>>>
>>>>>>>>>>> [bll.jar:]
>>>>>>>>>>> at
>>>>>>>>>>> org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.canRunActions(PrevalidatingMultipleActionsRunner.java:113)
>>>>>>>>>>>
>>>>>>>>>>> [bll.jar:]
>>>>>>>>>>> at
>>>>>>>>>>> org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.invokeCommands(PrevalidatingMultipleActionsRunner.java:99)
>>>>>>>>>>>
>>>>>>>>>>> [bll.jar:]
>>>>>>>>>>> at
>>>>>>>>>>> org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.execute(PrevalidatingMultipleActionsRunner.java:76)
>>>>>>>>>>>
>>>>>>>>>>> [bll.jar:]
>>>>>>>>>>> at
>>>>>>>>>>> org.ovirt.engine.core.bll.Backend.runMultipleActionsImpl(Backend.java:613)
>>>>>>>>>>>
>>>>>>>>>>> [bll.jar:]
>>>>>>>>>>> at
>>>>>>>>>>> org.ovirt.engine.core.bll.Backend.runMultipleActions(Backend.java:583)
>>>>>>>>>>>
>>>>>>>>>>> [bll.jar:]
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Looking at the section of code where the exception is thrown, I
>>>>>>>>>>> see that it iterates over host networks to find required networks,
>>>>>>>>>>> which I assume means ovirtmgmt. In the host network setup dialog I
>>>>>>>>>>> don't see any networks at all, but it lists ovirtmgmt as required.
>>>>>>>>>>> It also lists the OVN networks, but these can't be statically
>>>>>>>>>>> assigned as they are added dynamically when needed, which is fine.
>>>>>>>>>>>
>>>>>>>>>>> I believe that I either need to remove the ovirtmgmt network or
>>>>>>>>>>> configure it to be provided by the OVN provider, but neither is
>>>>>>>>>>> possible. Preferably, which network is the mandatory management
>>>>>>>>>>> network shouldn't be hardcoded but should be configurable.
>>>>>>>>>>>
>>>>>>>>>>> /Sverker
>>>>>>>>>>> On 2016-12-27 at 17:10, Marcin Mirecki wrote:
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>> _______________________________________________
>>>>>>>> Users mailing list
>>>>>>>> Users at ovirt.org
>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>>>
>>>
-------------- next part --------------
AutoAttach table
_uuid mappings system_description system_name
----- -------- ------------------ -----------
Bridge table
_uuid auto_attach controller datapath_id datapath_type datapath_version external_ids fail_mode flood_vlans flow_tables ipfix mcast_snooping_enable mirrors name netflow other_config ports protocols rstp_enable rstp_status sflow status stp_enable
------------------------------------ ----------- ---------- ------------------ ------------- ---------------- ------------ --------- ----------- ----------- ----- --------------------- ------- ------------ ------- ------------------------ ---------------------------------------------------------------------------- --------- ----------- ----------- ----- ------ ----------
d4af5bbb-df14-4ffc-aae9-b6566a5ffd87 [] [] "0000448a5b847db3" "" "<unknown>" {} [] [] {} [] false [] "ovsbridge0" [] {} [a7cca8f5-3437-43dc-8310-195454fb7771, b7361c57-41aa-4a8f-b6fe-67e643129aca] [] false {} [] {} false
9d3ab09e-a146-4bf2-a5bb-ba9948c4f2dd [] [] "00009eb03a9df24b" "" "<unknown>" {} secure [] {} [] false [] br-int [] {disable-in-band="true"} [406caf72-a6f9-4fd8-83dc-2bc4fb21944c] [] false {} [] {} false
a4e5f6a5-4ec1-455b-98e9-8b5e6a8b4cc4 [] [] "0000a6f6e5a45b45" "" "<unknown>" {} [] [] {} [] false [] ovirtbridge [] {} [7b45917b-abaf-4ebd-b501-ce76f07fe65e] [] false {} [] {} false
Controller table
_uuid connection_mode controller_burst_limit controller_rate_limit enable_async_messages external_ids inactivity_probe is_connected local_gateway local_ip local_netmask max_backoff other_config role status target
----- --------------- ---------------------- --------------------- --------------------- ------------ ---------------- ------------ ------------- -------- ------------- ----------- ------------ ---- ------ ------
Flow_Sample_Collector_Set table
_uuid bridge external_ids id ipfix
----- ------ ------------ -- -----
Flow_Table table
_uuid external_ids flow_limit groups name overflow_policy prefixes
----- ------------ ---------- ------ ---- --------------- --------
IPFIX table
_uuid cache_active_timeout cache_max_flows external_ids obs_domain_id obs_point_id other_config sampling targets
----- -------------------- --------------- ------------ ------------- ------------ ------------ -------- -------
Interface table
_uuid admin_state bfd bfd_status cfm_fault cfm_fault_status cfm_flap_count cfm_health cfm_mpid cfm_remote_mpids cfm_remote_opstate duplex error external_ids ifindex ingress_policing_burst ingress_policing_rate lacp_current link_resets link_speed link_state lldp mac mac_in_use mtu mtu_request name ofport ofport_request options other_config statistics status type
------------------------------------ ----------- --- ---------- --------- ---------------- -------------- ---------- -------- ---------------- ------------------ ------ ----- ------------ ------- ---------------------- --------------------- ------------ ----------- ---------- ---------- ---- --- ------------------- ---- ----------- ------------ ------ -------------- ------- ------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ----------------------------------------------------------------------- --------
db4ffac0-fd98-4143-91c0-3ca4767ebc52 down {} {} [] [] [] [] [] [] [] [] [] {} 5 0 0 [] 0 [] down {} [] "9e:b0:3a:9d:f2:4b" 1500 [] br-int 65534 [] {} {} {collisions=0, rx_bytes=0, rx_crc_err=0, rx_dropped=0, rx_errors=0, rx_frame_err=0, rx_over_err=0, rx_packets=0, tx_bytes=0, tx_dropped=0, tx_errors=0, tx_packets=0} {driver_name=openvswitch} internal
21df7a58-e572-4042-8b64-6370d8e58f92 up {} {} [] [] [] [] [] [] [] [] [] {} 4 0 0 [] 1 [] up {} [] "44:8a:5b:84:7d:b3" 1500 [] "ovsbridge0" 65534 [] {} {} {collisions=0, rx_bytes=193718, rx_crc_err=0, rx_dropped=39, rx_errors=0, rx_frame_err=0, rx_over_err=0, rx_packets=2672, tx_bytes=32719, tx_dropped=0, tx_errors=0, tx_packets=248} {driver_name=openvswitch} internal
d8c5780d-b9bb-477a-8732-b29c28020831 up {} {} [] [] [] [] [] [] [] [] [] {} 6 0 0 [] 1 [] up {} [] "a6:f6:e5:a4:5b:45" 1500 [] ovirtbridge 65534 [] {} {} {collisions=0, rx_bytes=0, rx_crc_err=0, rx_dropped=0, rx_errors=0, rx_frame_err=0, rx_over_err=0, rx_packets=0, tx_bytes=822, tx_dropped=0, tx_errors=0, tx_packets=13} {driver_name=openvswitch} internal
0ac4a1a4-1f06-4ba6-b013-c4fb454f315c up {} {} [] [] [] [] [] [] [] full [] {} 2 0 0 [] 1 1000000000 up {} [] "44:8a:5b:84:7d:b3" 1500 [] "eth0" 1 [] {} {} {collisions=0, rx_bytes=234394, rx_crc_err=0, rx_dropped=0, rx_errors=0, rx_frame_err=0, rx_over_err=0, rx_packets=2711, tx_bytes=33603, tx_dropped=0, tx_errors=0, tx_packets=257} {driver_name="r8169", driver_version="2.3LK-NAPI", firmware_version=""} ""
Manager table
_uuid connection_mode external_ids inactivity_probe is_connected max_backoff other_config status target
----- --------------- ------------ ---------------- ------------ ----------- ------------ ------ ------
Mirror table
_uuid external_ids name output_port output_vlan select_all select_dst_port select_src_port select_vlan snaplen statistics
----- ------------ ---- ----------- ----------- ---------- --------------- --------------- ----------- ------- ----------
NetFlow table
_uuid active_timeout add_id_to_interface engine_id engine_type external_ids targets
----- -------------- ------------------- --------- ----------- ------------ -------
Open_vSwitch table
_uuid bridges cur_cfg datapath_types db_version external_ids iface_types manager_options next_cfg other_config ovs_version ssl statistics system_type system_version
------------------------------------ ------------------------------------------------------------------------------------------------------------------ ------- ---------------- ---------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------- --------------- -------- ------------ ----------- --- ---------- ----------- --------------
ebb6aede-cbbc-4f4f-a88a-a9cd72b2bd23 [9d3ab09e-a146-4bf2-a5bb-ba9948c4f2dd, a4e5f6a5-4ec1-455b-98e9-8b5e6a8b4cc4, d4af5bbb-df14-4ffc-aae9-b6566a5ffd87] 68 [netdev, system] "7.14.0" {hostname="h2.limetransit.com", ovn-encap-ip="172.27.1.1", ovn-encap-type=geneve, ovn-remote="tcp:127.0.0.1:6642", system-id="6e4dd29f-7607-48d7-8e5a-eef4c6aeefb5"} [geneve, gre, internal, lisp, patch, stt, system, tap, vxlan] [] 68 {} "2.6.90" [] {} centos "7"
Port table
_uuid bond_active_slave bond_downdelay bond_fake_iface bond_mode bond_updelay external_ids fake_bridge interfaces lacp mac name other_config protected qos rstp_statistics rstp_status statistics status tag trunks vlan_mode
------------------------------------ ----------------- -------------- --------------- --------- ------------ ------------ ----------- -------------------------------------- ---- --- ------------ ------------ --------- --- --------------- ----------- ---------- ------ --- ------ ---------
406caf72-a6f9-4fd8-83dc-2bc4fb21944c [] 0 false [] 0 {} false [db4ffac0-fd98-4143-91c0-3ca4767ebc52] [] [] br-int {} false [] {} {} {} {} [] [] []
b7361c57-41aa-4a8f-b6fe-67e643129aca [] 0 false [] 0 {} false [0ac4a1a4-1f06-4ba6-b013-c4fb454f315c] [] [] "eth0" {} false [] {} {} {} {} [] [] []
7b45917b-abaf-4ebd-b501-ce76f07fe65e [] 0 false [] 0 {} false [d8c5780d-b9bb-477a-8732-b29c28020831] [] [] ovirtbridge {} false [] {} {} {} {} [] [] []
a7cca8f5-3437-43dc-8310-195454fb7771 [] 0 false [] 0 {} false [21df7a58-e572-4042-8b64-6370d8e58f92] [] [] "ovsbridge0" {} false [] {} {} {} {} [] [] []
QoS table
_uuid external_ids other_config queues type
----- ------------ ------------ ------ ----
Queue table
_uuid dscp external_ids other_config
----- ---- ------------ ------------
SSL table
_uuid bootstrap_ca_cert ca_cert certificate external_ids private_key
----- ----------------- ------- ----------- ------------ -----------
sFlow table
_uuid agent external_ids header polling sampling targets
----- ----- ------------ ------ ------- -------- -------