[ovirt-users] Issue with OVN/OVS and mandatory ovirtmgmt network

Marcin Mirecki mmirecki at redhat.com
Fri Dec 30 14:34:06 UTC 2016


Hi,

The OVN provider does not require you to add any bridges manually.
As I understand we were dealing with two problems:
1. You only had one physical nic and wanted to put a bridge on it,
   attaching the management network to the bridge. This was the reason for
   creating the bridge (the recommended setup would be to use a separate
   physical nic for the management network). This bridge has nothing to
   do with the OVN bridge.
2. OVN - you want to use OVN on this system. For this you have to install
   OVN on your hosts. This should create the br-int bridge, which is
   then used by the OVN provider. This br-int bridge must be configured
   to connect to other hosts using geneve tunnels (a rough sketch of this
   configuration follows below).
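
As a rough sketch of that configuration (the addresses below are
placeholders, not values taken from your environment), it amounts to
pointing ovn-controller at the OVN central host and choosing a local
tunnel IP, either through the vdsm helper or with ovs-vsctl directly:

  # via the vdsm helper (also used later in this thread)
  vdsm-tool ovn-config <ovn-central-ip> <local-tunnel-ip>

  # or the equivalent settings read by ovn-controller
  ovs-vsctl set Open_vSwitch . external-ids:ovn-remote=tcp:<ovn-central-ip>:6642
  ovs-vsctl set Open_vSwitch . external-ids:ovn-encap-type=geneve
  ovs-vsctl set Open_vSwitch . external-ids:ovn-encap-ip=<local-tunnel-ip>

Once two or more hosts are configured this way, ovn-controller creates the
geneve tunnel ports (genev_sys_*) on br-int by itself; with a single host
there is nothing to tunnel to, so no tunnel ports will appear.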

In both cases the systems will not be aware of any bridges you create.
They need a nic (be it physical or virtual) to connect to other systems.
Usually this is the physical nic. In your case you decided to put a bridge
on the physical nic, and give oVirt a virtual nic attached to this bridge.
This works, but keep in mind that the bridge you have introduced is outside
of oVirt's (and OVN's) control (and as such is not supported).
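
For reference, a setup along those lines (bridge and port names taken from
the ovs-vsctl output further down in this thread, so treat this only as an
illustration of the idea, not a recommendation) would look roughly like:

  # bridge on the single physical nic, carrying the public address
  ovs-vsctl add-br ovsbridge0
  ovs-vsctl add-port ovsbridge0 eth0

  # separate bridge with an internal (virtual) port holding a private
  # address; this internal port is the nic you then expose to vdsm
  # (e.g. via fake_nics) and attach the management network to
  ovs-vsctl add-br ovirtbridge
  ovs-vsctl add-port ovirtbridge ovirtport0 -- set Interface ovirtport0 type=internal

Both bridges stay outside of what oVirt manages; oVirt only ever sees the
virtual nic you hand to it, not the bridges themselves.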

> What is the purpose of
> adding my bridges to Ovirt through the external provider and configuring
> them on my VM

I am not quite sure I understand.
The external provider (the OVN provider, to be specific) does not add any
bridges to the system. It uses the br-int bridge created by OVN. The networks
created by the OVN provider are purely logical entities, implemented using
the OVN br-int bridge.
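
To make that concrete: when the provider creates a network, all that ends
up in OVN is a logical switch and its logical ports in the northbound
database, roughly equivalent to (names made up purely for illustration):

  ovn-nbctl ls-add ovirt-net1
  ovn-nbctl lsp-add ovirt-net1 vm-port1
  ovn-nbctl lsp-set-addresses vm-port1 "00:1a:4a:16:01:51"

On the host, 'ovs-vsctl show' will still only list br-int (plus the vnet
ports of running VMs); the logical networks never appear as separate
bridges.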

Marcin


----- Original Message -----
> From: "Sverker Abrahamsson" <sverker at abrahamsson.com>
> To: "Marcin Mirecki" <mmirecki at redhat.com>
> Cc: "Ovirt Users" <users at ovirt.org>
> Sent: Friday, December 30, 2016 12:15:43 PM
> Subject: Re: [ovirt-users] Issue with OVN/OVS and mandatory ovirtmgmt network
> 
> Hi
> That is the logic I don't quite understand. What is the purpose of
> adding my bridges to Ovirt through the external provider and configuring
> them on my VM if you are disregarding that and using br-int anyway?
> 
> /Sverker
> 
> On 2016-12-30 at 10:53, Marcin Mirecki wrote:
> > Sverker,
> >
> > br-int is the integration bridge created by default in OVN. This is the
> > bridge we use for the OVN provider. As OVN is required to be installed,
> > we assume that this bridge is present.
> > Using any other ovs bridge is not supported, and will require custom code
> > changes (such as the ones you created).
> >
> > The proper setup in your case would probably be to create br-int and
> > connect
> > this to your ovirtbridge, although I don't know the details of your env, so
> > this is just my best guess.
> >
> > Marcin
> >
> >
> > ----- Original Message -----
> >> From: "Sverker Abrahamsson" <sverker at abrahamsson.com>
> >> To: "Marcin Mirecki" <mmirecki at redhat.com>
> >> Cc: "Ovirt Users" <users at ovirt.org>, "Numan Siddique"
> >> <nusiddiq at redhat.com>
> >> Sent: Friday, December 30, 2016 1:14:50 AM
> >> Subject: Re: [ovirt-users] Issue with OVN/OVS and mandatory ovirtmgmt
> >> network
> >>
> >> Even better, if the value is not hardcoded then the configured value is
> >> used. Might be that I'm misunderstanding something, but this is the
> >> behaviour I expected instead of it using br-int.
> >>
> >> Attached is a patch which properly sets up the XML in case there is
> >> already a virtual port there, plus test code for some variants.
> >>
> >> /Sverker
> >>
> >>> On 2016-12-29 at 22:55, Sverker Abrahamsson wrote:
> >>> When I change
> >>> /usr/libexec/vdsm/hooks/before_device_create/ovirt_provider_ovn_hook
> >>> to use BRIDGE_NAME = 'ovirtbridge' instead of the hardcoded br-int,
> >>> then I get the expected behaviour and working network connectivity
> >>> in my VM, with an IP provided by DHCP.
> >>>
> >>> /Sverker
> >>>
> >>>> On 2016-12-29 at 22:07, Sverker Abrahamsson wrote:
> >>>> By default the vNic profile of my OVN bridge ovirtbridge gets a
> >>>> Network filter named vdsm-no-mac-spoofing. If I instead set No filter
> >>>> then I don't get those ebtables / iptables messages. It seems that
> >>>> there is some issue between ovirt/vdsm and firewalld, which we can
> >>>> put to the side for now.
> >>>>
> >>>> It is not clear to me why the port is added on br-int instead of the
> >>>> bridge I've assigned to the VM, which is ovirtbridge?
> >>>>
> >>>> /Sverker
> >>>>
> >>>>> On 2016-12-29 at 14:20, Sverker Abrahamsson wrote:
> >>>>> The specific command most likely fails because there is no chain
> >>>>> named libvirt-J-vnet0, but when should that have been created?
> >>>>> /Sverker
> >>>>>
> >>>>> -------- Forwarded message --------
> >>>>> Subject: Re: [ovirt-users] Issue with OVN/OVS and mandatory ovirtmgmt
> >>>>> network
> >>>>> Date: Thu, 29 Dec 2016 08:06:29 -0500 (EST)
> >>>>> From: Marcin Mirecki <mmirecki at redhat.com>
> >>>>> To: Sverker Abrahamsson <sverker at abrahamsson.com>
> >>>>> Cc: Ovirt Users <users at ovirt.org>, Lance Richardson
> >>>>> <lrichard at redhat.com>, Numan Siddique <nusiddiq at redhat.com>
> >>>>>
> >>>>>
> >>>>>
> >>>>> Let me add the OVN team.
> >>>>>
> >>>>> Lance, Numan,
> >>>>>
> >>>>> Can you please look at this?
> >>>>>
> >>>>> Trying to plug a vNIC results in:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 ovs-vsctl: ovs|00001|vsctl|INFO|Called as
> >>>>>>>>>>>> ovs-vsctl
> >>>>>>>>>>>> --timeout=5 -- --if-exists del-port vnet0 -- add-port br-int
> >>>>>>>>>>>> vnet0 --
> >>>>>>>>>>>> set Interface vnet0
> >>>>>>>>>>>> "external-ids:attached-mac=\"00:1a:4a:16:01:51\""
> >>>>>>>>>>>> -- set Interface vnet0
> >>>>>>>>>>>> "external-ids:iface-id=\"e8853aac-8a75-41b0-8010-e630017dcdd8\""
> >>>>>>>>>>>> --
> >>>>>>>>>>>> set Interface vnet0
> >>>>>>>>>>>> "external-ids:vm-id=\"b9440d60-ef5a-4e2b-83cf-081df7c09e6f\"" --
> >>>>>>>>>>>> set
> >>>>>>>>>>>> Interface vnet0 external-ids:iface-status=active
> >>>>>>>>>>>> Dec 28 23:31:35 h2 kernel: device vnet0 entered promiscuous mode
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0
> >>>>>>>>>>>> -j
> >>>>>>>>>>>> libvirt-J-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>> More details below
> >>>>>
> >>>>>
> >>>>> ----- Original Message -----
> >>>>>> From: "Sverker Abrahamsson"<sverker at abrahamsson.com>
> >>>>>> To: "Marcin Mirecki"<mmirecki at redhat.com>
> >>>>>> Cc: "Ovirt Users"<users at ovirt.org>
> >>>>>> Sent: Thursday, December 29, 2016 1:42:11 PM
> >>>>>> Subject: Re: [ovirt-users] Issue with OVN/OVS and mandatory ovirtmgmt
> >>>>>> network
> >>>>>>
> >>>>>> Hi
> >>>>>> Same problem still..
> >>>>>> /Sverker
> >>>>>>
> >>>>>> On 2016-12-29 at 13:34, Marcin Mirecki wrote:
> >>>>>>> Hi,
> >>>>>>>
> >>>>>>> The tunnels are created to connect multiple OVN controllers.
> >>>>>>> If there is only one, there is no need for the tunnels, so none
> >>>>>>> will be created; this is the correct behavior.
> >>>>>>>
> >>>>>>> Does the problem still occur after configuring the
> >>>>>>> OVN controller?
> >>>>>>>
> >>>>>>> Marcin
> >>>>>>>
> >>>>>>> ----- Original Message -----
> >>>>>>>> From: "Sverker Abrahamsson"<sverker at abrahamsson.com>
> >>>>>>>> To: "Marcin Mirecki"<mmirecki at redhat.com>
> >>>>>>>> Cc: "Ovirt Users"<users at ovirt.org>
> >>>>>>>> Sent: Thursday, December 29, 2016 11:44:32 AM
> >>>>>>>> Subject: Re: [ovirt-users] Issue with OVN/OVS and mandatory
> >>>>>>>> ovirtmgmt
> >>>>>>>> network
> >>>>>>>>
> >>>>>>>> Hi
> >>>>>>>> The rpm packages you listed in the other mail are installed, but
> >>>>>>>> I had not run vdsm-tool ovn-config to create the tunnel, as the
> >>>>>>>> OVN controller is on the same host.
> >>>>>>>>
> >>>>>>>> [root at h2 ~]# rpm -q openvswitch-ovn-common
> >>>>>>>> openvswitch-ovn-common-2.6.90-1.el7.centos.x86_64
> >>>>>>>> [root at h2 ~]# rpm -q openvswitch-ovn-host
> >>>>>>>> openvswitch-ovn-host-2.6.90-1.el7.centos.x86_64
> >>>>>>>> [root at h2 ~]# rpm -q python-openvswitch
> >>>>>>>> python-openvswitch-2.6.90-1.el7.centos.noarch
> >>>>>>>>
> >>>>>>>> After removing my manually created br-int and running
> >>>>>>>>
> >>>>>>>> vdsm-tool ovn-config 127.0.0.1 172.27.1.1
> >>>>>>>>
> >>>>>>>> then I have the br-int, but 'ip link show' does not show any
> >>>>>>>> 'genev_sys_' link, nor does 'ovs-vsctl show' show any port for
> >>>>>>>> OVN. I assume these only appear when there is an actual tunnel?
> >>>>>>>>
> >>>>>>>> [root at h2 ~]# ovs-vsctl show
> >>>>>>>> ebb6aede-cbbc-4f4f-a88a-a9cd72b2bd23
> >>>>>>>>        Bridge br-int
> >>>>>>>>            fail_mode: secure
> >>>>>>>>            Port br-int
> >>>>>>>>                Interface br-int
> >>>>>>>>                    type: internal
> >>>>>>>>        Bridge ovirtbridge
> >>>>>>>>            Port ovirtbridge
> >>>>>>>>                Interface ovirtbridge
> >>>>>>>>                    type: internal
> >>>>>>>>        Bridge "ovsbridge0"
> >>>>>>>>            Port "ovsbridge0"
> >>>>>>>>                Interface "ovsbridge0"
> >>>>>>>>                    type: internal
> >>>>>>>>            Port "eth0"
> >>>>>>>>                Interface "eth0"
> >>>>>>>>        ovs_version: "2.6.90"
> >>>>>>>>
> >>>>>>>> [root at h2 ~]# ip link show
> >>>>>>>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
> >>>>>>>> mode
> >>>>>>>> DEFAULT qlen 1
> >>>>>>>>        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> >>>>>>>> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
> >>>>>>>> master ovs-system state UP mode DEFAULT qlen 1000
> >>>>>>>>        link/ether 44:8a:5b:84:7d:b3 brd ff:ff:ff:ff:ff:ff
> >>>>>>>> 3: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
> >>>>>>>> mode
> >>>>>>>> DEFAULT qlen 1000
> >>>>>>>>        link/ether 5a:14:cf:28:47:e2 brd ff:ff:ff:ff:ff:ff
> >>>>>>>> 4: ovsbridge0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> >>>>>>>> noqueue
> >>>>>>>> state UNKNOWN mode DEFAULT qlen 1000
> >>>>>>>>        link/ether 44:8a:5b:84:7d:b3 brd ff:ff:ff:ff:ff:ff
> >>>>>>>> 5: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode
> >>>>>>>> DEFAULT qlen 1000
> >>>>>>>>        link/ether 9e:b0:3a:9d:f2:4b brd ff:ff:ff:ff:ff:ff
> >>>>>>>> 6: ovirtbridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> >>>>>>>> noqueue
> >>>>>>>> state UNKNOWN mode DEFAULT qlen 1000
> >>>>>>>>        link/ether a6:f6:e5:a4:5b:45 brd ff:ff:ff:ff:ff:ff
> >>>>>>>> 7: dummy0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue
> >>>>>>>> master
> >>>>>>>> ovirtmgmt state UNKNOWN mode DEFAULT qlen 1000
> >>>>>>>>        link/ether 66:e0:1c:c3:a9:d8 brd ff:ff:ff:ff:ff:ff
> >>>>>>>> 8: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> >>>>>>>> noqueue
> >>>>>>>> state UP mode DEFAULT qlen 1000
> >>>>>>>>        link/ether 66:e0:1c:c3:a9:d8 brd ff:ff:ff:ff:ff:ff
> >>>>>>>>
> >>>>>>>> Firewall settings:
> >>>>>>>> [root at h2 ~]# firewall-cmd --list-all-zones
> >>>>>>>> work
> >>>>>>>>      target: default
> >>>>>>>>      icmp-block-inversion: no
> >>>>>>>>      interfaces:
> >>>>>>>>      sources:
> >>>>>>>>      services: dhcpv6-client ssh
> >>>>>>>>      ports:
> >>>>>>>>      protocols:
> >>>>>>>>      masquerade: no
> >>>>>>>>      forward-ports:
> >>>>>>>>      sourceports:
> >>>>>>>>      icmp-blocks:
> >>>>>>>>      rich rules:
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> drop
> >>>>>>>>      target: DROP
> >>>>>>>>      icmp-block-inversion: no
> >>>>>>>>      interfaces:
> >>>>>>>>      sources:
> >>>>>>>>      services:
> >>>>>>>>      ports:
> >>>>>>>>      protocols:
> >>>>>>>>      masquerade: no
> >>>>>>>>      forward-ports:
> >>>>>>>>      sourceports:
> >>>>>>>>      icmp-blocks:
> >>>>>>>>      rich rules:
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> internal
> >>>>>>>>      target: default
> >>>>>>>>      icmp-block-inversion: no
> >>>>>>>>      interfaces:
> >>>>>>>>      sources:
> >>>>>>>>      services: dhcpv6-client mdns samba-client ssh
> >>>>>>>>      ports:
> >>>>>>>>      protocols:
> >>>>>>>>      masquerade: no
> >>>>>>>>      forward-ports:
> >>>>>>>>      sourceports:
> >>>>>>>>      icmp-blocks:
> >>>>>>>>      rich rules:
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> external
> >>>>>>>>      target: default
> >>>>>>>>      icmp-block-inversion: no
> >>>>>>>>      interfaces:
> >>>>>>>>      sources:
> >>>>>>>>      services: ssh
> >>>>>>>>      ports:
> >>>>>>>>      protocols:
> >>>>>>>>      masquerade: yes
> >>>>>>>>      forward-ports:
> >>>>>>>>      sourceports:
> >>>>>>>>      icmp-blocks:
> >>>>>>>>      rich rules:
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> trusted
> >>>>>>>>      target: ACCEPT
> >>>>>>>>      icmp-block-inversion: no
> >>>>>>>>      interfaces:
> >>>>>>>>      sources:
> >>>>>>>>      services:
> >>>>>>>>      ports:
> >>>>>>>>      protocols:
> >>>>>>>>      masquerade: no
> >>>>>>>>      forward-ports:
> >>>>>>>>      sourceports:
> >>>>>>>>      icmp-blocks:
> >>>>>>>>      rich rules:
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> home
> >>>>>>>>      target: default
> >>>>>>>>      icmp-block-inversion: no
> >>>>>>>>      interfaces:
> >>>>>>>>      sources:
> >>>>>>>>      services: dhcpv6-client mdns samba-client ssh
> >>>>>>>>      ports:
> >>>>>>>>      protocols:
> >>>>>>>>      masquerade: no
> >>>>>>>>      forward-ports:
> >>>>>>>>      sourceports:
> >>>>>>>>      icmp-blocks:
> >>>>>>>>      rich rules:
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> dmz
> >>>>>>>>      target: default
> >>>>>>>>      icmp-block-inversion: no
> >>>>>>>>      interfaces:
> >>>>>>>>      sources:
> >>>>>>>>      services: ssh
> >>>>>>>>      ports:
> >>>>>>>>      protocols:
> >>>>>>>>      masquerade: no
> >>>>>>>>      forward-ports:
> >>>>>>>>      sourceports:
> >>>>>>>>      icmp-blocks:
> >>>>>>>>      rich rules:
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> public (active)
> >>>>>>>>      target: default
> >>>>>>>>      icmp-block-inversion: no
> >>>>>>>>      interfaces: eth0 ovsbridge0
> >>>>>>>>      sources:
> >>>>>>>>      services: dhcpv6-client ssh
> >>>>>>>>      ports:
> >>>>>>>>      protocols:
> >>>>>>>>      masquerade: no
> >>>>>>>>      forward-ports:
> >>>>>>>>      sourceports:
> >>>>>>>>      icmp-blocks:
> >>>>>>>>      rich rules:
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> block
> >>>>>>>>      target: %%REJECT%%
> >>>>>>>>      icmp-block-inversion: no
> >>>>>>>>      interfaces:
> >>>>>>>>      sources:
> >>>>>>>>      services:
> >>>>>>>>      ports:
> >>>>>>>>      protocols:
> >>>>>>>>      masquerade: no
> >>>>>>>>      forward-ports:
> >>>>>>>>      sourceports:
> >>>>>>>>      icmp-blocks:
> >>>>>>>>      rich rules:
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> ovirt (active)
> >>>>>>>>      target: default
> >>>>>>>>      icmp-block-inversion: no
> >>>>>>>>      interfaces: ovirtbridge ovirtmgmt
> >>>>>>>>      sources:
> >>>>>>>>      services: dhcp ovirt-fence-kdump-listener ovirt-http
> >>>>>>>>      ovirt-https
> >>>>>>>> ovirt-imageio-proxy ovirt-postgres ovirt-provider-ovn
> >>>>>>>> ovirt-vmconsole-proxy ovirt-websocket-proxy ssh vdsm
> >>>>>>>>      ports:
> >>>>>>>>      protocols:
> >>>>>>>>      masquerade: yes
> >>>>>>>>      forward-ports:
> >>>>>>>>      sourceports:
> >>>>>>>>      icmp-blocks:
> >>>>>>>>      rich rules:
> >>>>>>>>            rule family="ipv4" port port="6641" protocol="tcp" accept
> >>>>>>>>            rule family="ipv4" port port="6642" protocol="tcp" accept
> >>>>>>>>
> >>>>>>>> The db dump is attached
> >>>>>>>> /Sverker
> >>>>>>>> On 2016-12-29 at 09:50, Marcin Mirecki wrote:
> >>>>>>>>> Hi,
> >>>>>>>>>
> >>>>>>>>> Can you please do: "sudo ovsdb-client dump"
> >>>>>>>>> on the host and send me the output?
> >>>>>>>>>
> >>>>>>>>> Have you configured the ovn controller to connect to the
> >>>>>>>>> OVN north? You can do it using "vdsm-tool ovn-config" or
> >>>>>>>>> using the OVN tools directly.
> >>>>>>>>> Please check out:
> >>>>>>>>> https://www.ovirt.org/blog/2016/11/ovirt-provider-ovn/
> >>>>>>>>> for details.
> >>>>>>>>>
> >>>>>>>>> Also please note that the OVN provider is completely different
> >>>>>>>>> from the neutron-openvswitch plugin. Please don't mix the two.
> >>>>>>>>>
> >>>>>>>>> Marcin
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> ----- Original Message -----
> >>>>>>>>>> From: "Marcin Mirecki"<mmirecki at redhat.com>
> >>>>>>>>>> To: "Sverker Abrahamsson"<sverker at abrahamsson.com>
> >>>>>>>>>> Cc: "Ovirt Users"<users at ovirt.org>
> >>>>>>>>>> Sent: Thursday, December 29, 2016 9:27:19 AM
> >>>>>>>>>> Subject: Re: [ovirt-users] Issue with OVN/OVS and mandatory
> >>>>>>>>>> ovirtmgmt
> >>>>>>>>>> network
> >>>>>>>>>>
> >>>>>>>>>> Hi,
> >>>>>>>>>>
> >>>>>>>>>> br-int is the OVN integration bridge; it should have been created
> >>>>>>>>>> when installing OVN. I assume you have the following packages
> >>>>>>>>>> installed
> >>>>>>>>>> on the host:
> >>>>>>>>>>        openvswitch-ovn-common
> >>>>>>>>>>        openvswitch-ovn-host
> >>>>>>>>>>        python-openvswitch
> >>>>>>>>>>
> >>>>>>>>>> Please give me some time to look at the connectivity problem.
> >>>>>>>>>>
> >>>>>>>>>> Marcin
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> ----- Original Message -----
> >>>>>>>>>>> From: "Sverker Abrahamsson"<sverker at abrahamsson.com>
> >>>>>>>>>>> To: "Marcin Mirecki"<mmirecki at redhat.com>
> >>>>>>>>>>> Cc: "Ovirt Users"<users at ovirt.org>
> >>>>>>>>>>> Sent: Thursday, December 29, 2016 12:47:04 AM
> >>>>>>>>>>> Subject: Re: [ovirt-users] Issue with OVN/OVS and mandatory
> >>>>>>>>>>> ovirtmgmt
> >>>>>>>>>>> network
> >>>>>>>>>>>
> >>>>>>>>>>> From
> >>>>>>>>>>> /usr/libexec/vdsm/hooks/before_device_create/ovirt_provider_ovn_hook
> >>>>>>>>>>> (installed by ovirt-provider-ovn-driver rpm):
> >>>>>>>>>>>
> >>>>>>>>>>> BRIDGE_NAME = 'br-int'
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>> On 2016-12-28 at 23:56, Sverker Abrahamsson wrote:
> >>>>>>>>>>>> Googling the message about br-int suggested adding that
> >>>>>>>>>>>> bridge to
> >>>>>>>>>>>> ovs:
> >>>>>>>>>>>>
> >>>>>>>>>>>> ovs-vsctl add-br br-int
> >>>>>>>>>>>>
> >>>>>>>>>>>> Then the VM is able to boot, but it fails to get network
> >>>>>>>>>>>> connectivity.
> >>>>>>>>>>>> Output in /var/log/messages:
> >>>>>>>>>>>>
> >>>>>>>>>>>> Dec 28 23:31:35 h2 ovs-vsctl: ovs|00001|vsctl|INFO|Called as
> >>>>>>>>>>>> ovs-vsctl
> >>>>>>>>>>>> --timeout=5 -- --if-exists del-port vnet0 -- add-port br-int
> >>>>>>>>>>>> vnet0 --
> >>>>>>>>>>>> set Interface vnet0
> >>>>>>>>>>>> "external-ids:attached-mac=\"00:1a:4a:16:01:51\""
> >>>>>>>>>>>> -- set Interface vnet0
> >>>>>>>>>>>> "external-ids:iface-id=\"e8853aac-8a75-41b0-8010-e630017dcdd8\""
> >>>>>>>>>>>> --
> >>>>>>>>>>>> set Interface vnet0
> >>>>>>>>>>>> "external-ids:vm-id=\"b9440d60-ef5a-4e2b-83cf-081df7c09e6f\"" --
> >>>>>>>>>>>> set
> >>>>>>>>>>>> Interface vnet0 external-ids:iface-status=active
> >>>>>>>>>>>> Dec 28 23:31:35 h2 kernel: device vnet0 entered promiscuous mode
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0
> >>>>>>>>>>>> -j
> >>>>>>>>>>>> libvirt-J-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0
> >>>>>>>>>>>> -j
> >>>>>>>>>>>> libvirt-P-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet0'
> >>>>>>>>>>>> failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet0'
> >>>>>>>>>>>> failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -F libvirt-J-vnet0'
> >>>>>>>>>>>> failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -X libvirt-J-vnet0'
> >>>>>>>>>>>> failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -F libvirt-P-vnet0'
> >>>>>>>>>>>> failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -X libvirt-P-vnet0'
> >>>>>>>>>>>> failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -F J-vnet0-mac' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -X J-vnet0-mac' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -F J-vnet0-arp-mac'
> >>>>>>>>>>>> failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -X J-vnet0-arp-mac'
> >>>>>>>>>>>> failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev
> >>>>>>>>>>>> --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev
> >>>>>>>>>>>> --physdev-out
> >>>>>>>>>>>> vnet0 -g FO-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev --physdev-in
> >>>>>>>>>>>> vnet0
> >>>>>>>>>>>> -g FI-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev
> >>>>>>>>>>>> --physdev-in
> >>>>>>>>>>>> vnet0 -g HI-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/iptables -w2 -w -F FO-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/iptables -w2 -w -X FO-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/iptables -w2 -w -F FI-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/iptables -w2 -w -X FI-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/iptables -w2 -w -F HI-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/iptables -w2 -w -X HI-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/iptables -w2 -w -E FP-vnet0 FO-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/iptables -w2 -w -E FJ-vnet0 FI-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/iptables -w2 -w -E HJ-vnet0 HI-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev
> >>>>>>>>>>>> --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev
> >>>>>>>>>>>> --physdev-out
> >>>>>>>>>>>> vnet0 -g FO-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ip6tables -w2 -w -D libvirt-in -m physdev
> >>>>>>>>>>>> --physdev-in
> >>>>>>>>>>>> vnet0 -g FI-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> >>>>>>>>>>>> --physdev-in
> >>>>>>>>>>>> vnet0 -g HI-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ip6tables -w2 -w -F FO-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ip6tables -w2 -w -X FO-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ip6tables -w2 -w -F FI-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ip6tables -w2 -w -X FI-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ip6tables -w2 -w -F HI-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ip6tables -w2 -w -X HI-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ip6tables -w2 -w -E FP-vnet0 FO-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ip6tables -w2 -w -E FJ-vnet0 FI-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ip6tables -w2 -w -E HJ-vnet0 HI-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0
> >>>>>>>>>>>> -j
> >>>>>>>>>>>> libvirt-I-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0
> >>>>>>>>>>>> -j
> >>>>>>>>>>>> libvirt-O-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -L libvirt-I-vnet0'
> >>>>>>>>>>>> failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -L libvirt-O-vnet0'
> >>>>>>>>>>>> failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -F libvirt-I-vnet0'
> >>>>>>>>>>>> failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -X libvirt-I-vnet0'
> >>>>>>>>>>>> failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -F libvirt-O-vnet0'
> >>>>>>>>>>>> failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -X libvirt-O-vnet0'
> >>>>>>>>>>>> failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet0'
> >>>>>>>>>>>> failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -E libvirt-P-vnet0
> >>>>>>>>>>>> libvirt-O-vnet0' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -F I-vnet0-mac' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -X I-vnet0-mac' failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -F I-vnet0-arp-mac'
> >>>>>>>>>>>> failed:
> >>>>>>>>>>>> Dec 28 23:31:35 h2 firewalld: WARNING: COMMAND_FAILED:
> >>>>>>>>>>>> '/usr/sbin/ebtables --concurrent -t nat -X I-vnet0-arp-mac'
> >>>>>>>>>>>> failed:
> >>>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>> [root at h2 etc]# ovs-vsctl show
> >>>>>>>>>>>> ebb6aede-cbbc-4f4f-a88a-a9cd72b2bd23
> >>>>>>>>>>>>        Bridge ovirtbridge
> >>>>>>>>>>>>            Port "ovirtport0"
> >>>>>>>>>>>>                Interface "ovirtport0"
> >>>>>>>>>>>>                    type: internal
> >>>>>>>>>>>>            Port ovirtbridge
> >>>>>>>>>>>>                Interface ovirtbridge
> >>>>>>>>>>>>                    type: internal
> >>>>>>>>>>>>        Bridge "ovsbridge0"
> >>>>>>>>>>>>            Port "ovsbridge0"
> >>>>>>>>>>>>                Interface "ovsbridge0"
> >>>>>>>>>>>>                    type: internal
> >>>>>>>>>>>>            Port "eth0"
> >>>>>>>>>>>>                Interface "eth0"
> >>>>>>>>>>>>        Bridge br-int
> >>>>>>>>>>>>            Port br-int
> >>>>>>>>>>>>                Interface br-int
> >>>>>>>>>>>>                    type: internal
> >>>>>>>>>>>>            Port "vnet0"
> >>>>>>>>>>>>                Interface "vnet0"
> >>>>>>>>>>>>        ovs_version: "2.6.90"
> >>>>>>>>>>>>
> >>>>>>>>>>>> Searching through the code, it appears that br-int comes from
> >>>>>>>>>>>> the neutron-openvswitch plugin??
> >>>>>>>>>>>>
> >>>>>>>>>>>> [root at h2 share]# rpm -qf
> >>>>>>>>>>>> /usr/share/otopi/plugins/ovirt-host-deploy/openstack/neutron_openvswitch.py
> >>>>>>>>>>>> ovirt-host-deploy-1.6.0-0.0.master.20161215101008.gitb76ad50.el7.centos.noarch
> >>>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>> /Sverker
> >>>>>>>>>>>>
> >>>>>>>>>>>> On 2016-12-28 at 23:24, Sverker Abrahamsson wrote:
> >>>>>>>>>>>>> In addition I had to add an alias to modprobe:
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> [root at h2 modprobe.d]# cat dummy.conf
> >>>>>>>>>>>>> alias dummy0 dummy
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> On 2016-12-28 at 23:03, Sverker Abrahamsson wrote:
> >>>>>>>>>>>>>> Hi
> >>>>>>>>>>>>>> I first tried to set the device name to dummy_0, but then
> >>>>>>>>>>>>>> ifup did not succeed in creating the device unless I first
> >>>>>>>>>>>>>> did 'ip link add dummy_0 type dummy', and then it would not
> >>>>>>>>>>>>>> succeed in bringing the interface up on reboot.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Setting fake_nics = dummy0 would not work either, but this
> >>>>>>>>>>>>>> works:
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> fake_nics = dummy*
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> The engine is now able to find the interface and assign the
> >>>>>>>>>>>>>> ovirtmgmt bridge to it.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> However, I then run into the next issue when starting a VM:
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> 2016-12-28 22:28:23,897 ERROR
> >>>>>>>>>>>>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> >>>>>>>>>>>>>> (ForkJoinPool-1-worker-2) [] Correlation ID: null, Call Stack:
> >>>>>>>>>>>>>> null,
> >>>>>>>>>>>>>> Custom Event ID: -1, Message: VM CentOS7 is down with error.
> >>>>>>>>>>>>>> Exit
> >>>>>>>>>>>>>> message: Cannot get interface MTU on 'br-int': No such device.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> This VM has a nic on ovirtbridge, which comes from the OVN
> >>>>>>>>>>>>>> provider.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> /Sverker
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> On 2016-12-28 at 14:38, Marcin Mirecki wrote:
> >>>>>>>>>>>>>>> Sverker,
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> Can you try adding a vnic named veth_* or dummy_*,
> >>>>>>>>>>>>>>> (or alternatively add the name of the vnic to
> >>>>>>>>>>>>>>> vdsm.config fake_nics), and set up the management
> >>>>>>>>>>>>>>> network using this vnic?
> >>>>>>>>>>>>>>> I suppose adding the vnic you use for connecting
> >>>>>>>>>>>>>>> to the engine to fake_nics should make it visible
> >>>>>>>>>>>>>>> to the engine, and you should be able to use it for
> >>>>>>>>>>>>>>> the setup.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> Marcin
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> ----- Original Message -----
> >>>>>>>>>>>>>>>> From: "Marcin Mirecki"<mmirecki at redhat.com>
> >>>>>>>>>>>>>>>> To: "Sverker Abrahamsson"<sverker at abrahamsson.com>
> >>>>>>>>>>>>>>>> Cc: "Ovirt Users"<users at ovirt.org>
> >>>>>>>>>>>>>>>> Sent: Wednesday, December 28, 2016 12:06:26 PM
> >>>>>>>>>>>>>>>> Subject: Re: [ovirt-users] Issue with OVN/OVS and mandatory
> >>>>>>>>>>>>>>>> ovirtmgmt network
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> I have an internal OVS bridge called ovirtbridge which has
> >>>>>>>>>>>>>>>>> a port
> >>>>>>>>>>>>>>>>> with
> >>>>>>>>>>>>>>>>> IP address, but in the host network settings that port is
> >>>>>>>>>>>>>>>>> not
> >>>>>>>>>>>>>>>>> visible.
> >>>>>>>>>>>>>>>> I just verified and unfortunately the virtual ports are not
> >>>>>>>>>>>>>>>> visible in the engine
> >>>>>>>>>>>>>>>> to assign a network to :(
> >>>>>>>>>>>>>>>> I'm afraid that the engine is not ready for such a scenario
> >>>>>>>>>>>>>>>> (even
> >>>>>>>>>>>>>>>> if it
> >>>>>>>>>>>>>>>> works).
> >>>>>>>>>>>>>>>> Please give me some time to look for a solution.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> ----- Original Message -----
> >>>>>>>>>>>>>>>>> From: "Sverker Abrahamsson"<sverker at abrahamsson.com>
> >>>>>>>>>>>>>>>>> To: "Marcin Mirecki"<mmirecki at redhat.com>
> >>>>>>>>>>>>>>>>> Cc: "Ovirt Users"<users at ovirt.org>
> >>>>>>>>>>>>>>>>> Sent: Wednesday, December 28, 2016 11:48:24 AM
> >>>>>>>>>>>>>>>>> Subject: Re: [ovirt-users] Issue with OVN/OVS and mandatory
> >>>>>>>>>>>>>>>>> ovirtmgmt
> >>>>>>>>>>>>>>>>> network
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> Hi Marcin
> >>>>>>>>>>>>>>>>> Yes, that is my issue. I don't want to let ovirt/vdsm see
> >>>>>>>>>>>>>>>>> eth0
> >>>>>>>>>>>>>>>>> nor
> >>>>>>>>>>>>>>>>> ovsbridge0 since as soon as it sees them it messes up the
> >>>>>>>>>>>>>>>>> network
> >>>>>>>>>>>>>>>>> config
> >>>>>>>>>>>>>>>>> so that the host will be unreachable.
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> I have an internal OVS bridge called ovirtbridge which has
> >>>>>>>>>>>>>>>>> a port
> >>>>>>>>>>>>>>>>> with
> >>>>>>>>>>>>>>>>> IP address, but in the host network settings that port is
> >>>>>>>>>>>>>>>>> not
> >>>>>>>>>>>>>>>>> visible.
> >>>>>>>>>>>>>>>>> It doesn't help to name it ovirtmgmt.
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> The engine is able to communicate with the host on the ip
> >>>>>>>>>>>>>>>>> it has
> >>>>>>>>>>>>>>>>> been
> >>>>>>>>>>>>>>>>> given, it's just that it believes that it HAS to have a
> >>>>>>>>>>>>>>>>> given, it's just that it believes that it HAS to have an
> >>>>>>>>>>>>>>>>> network which can't be on OVN.
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> /Sverker
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> On 2016-12-28 at 10:45, Marcin Mirecki wrote:
> >>>>>>>>>>>>>>>>>> Hi Sverker,
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> The management network is mandatory on each host. It's
> >>>>>>>>>>>>>>>>>> used by
> >>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>> engine to communicate with the host.
> >>>>>>>>>>>>>>>>>> Looking at your description and the exception it looks
> >>>>>>>>>>>>>>>>>> like it
> >>>>>>>>>>>>>>>>>> is
> >>>>>>>>>>>>>>>>>> missing.
> >>>>>>>>>>>>>>>>>> The error is caused by not having any network for the host
> >>>>>>>>>>>>>>>>>> (network list retrieved in
> >>>>>>>>>>>>>>>>>> InterfaceDaoImpl.getHostNetworksByCluster -
> >>>>>>>>>>>>>>>>>> which
> >>>>>>>>>>>>>>>>>> gets all the networks on nics for a host from
> >>>>>>>>>>>>>>>>>> vds_interface
> >>>>>>>>>>>>>>>>>> table in the
> >>>>>>>>>>>>>>>>>> DB).
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> Could you maybe create a virtual nic connected to
> >>>>>>>>>>>>>>>>>> ovsbridge0 (as
> >>>>>>>>>>>>>>>>>> I
> >>>>>>>>>>>>>>>>>> understand you
> >>>>>>>>>>>>>>>>>> have no physical nic available) and use this for the
> >>>>>>>>>>>>>>>>>> management
> >>>>>>>>>>>>>>>>>> network?
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> I then create a bridge for use with ovirt, with a private
> >>>>>>>>>>>>>>>>>>> address.
> >>>>>>>>>>>>>>>>>> I'm not quite sure I understand. Is this yet another
> >>>>>>>>>>>>>>>>>> bridge
> >>>>>>>>>>>>>>>>>> connected to
> >>>>>>>>>>>>>>>>>> ovsbridge0?
> >>>>>>>>>>>>>>>>>> You could also attach the vnic for the management network
> >>>>>>>>>>>>>>>>>> here
> >>>>>>>>>>>>>>>>>> if need
> >>>>>>>>>>>>>>>>>> be.
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> Please keep in mind that OVN has no use in setting up the
> >>>>>>>>>>>>>>>>>> management
> >>>>>>>>>>>>>>>>>> network.
> >>>>>>>>>>>>>>>>>> The OVN provider can only handle external networks, which
> >>>>>>>>>>>>>>>>>> can
> >>>>>>>>>>>>>>>>>> not be used
> >>>>>>>>>>>>>>>>>> for a
> >>>>>>>>>>>>>>>>>> management network.
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> Marcin
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> ----- Original Message -----
> >>>>>>>>>>>>>>>>>>> From: "Sverker Abrahamsson"<sverker at abrahamsson.com>
> >>>>>>>>>>>>>>>>>>> To: users at ovirt.org
> >>>>>>>>>>>>>>>>>>> Sent: Wednesday, December 28, 2016 12:39:59 AM
> >>>>>>>>>>>>>>>>>>> Subject: [ovirt-users] Issue with OVN/OVS and mandatory
> >>>>>>>>>>>>>>>>>>> ovirtmgmt
> >>>>>>>>>>>>>>>>>>> network
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> Hi
> >>>>>>>>>>>>>>>>>>> For a long time I've been looking for proper support in
> >>>>>>>>>>>>>>>>>>> ovirt for
> >>>>>>>>>>>>>>>>>>> Open
> >>>>>>>>>>>>>>>>>>> vSwitch
> >>>>>>>>>>>>>>>>>>> so I'm happy that it is moving in the right direction.
> >>>>>>>>>>>>>>>>>>> However,
> >>>>>>>>>>>>>>>>>>> there
> >>>>>>>>>>>>>>>>>>> seems
> >>>>>>>>>>>>>>>>>>> to still be a dependency on an ovirtmgmt bridge and I'm
> >>>>>>>>>>>>>>>>>>> unable
> >>>>>>>>>>>>>>>>>>> to move
> >>>>>>>>>>>>>>>>>>> that
> >>>>>>>>>>>>>>>>>>> to the OVN provider.
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> The hosting center where I rent hw instances has a bit
> >>>>>>>>>>>>>>>>>>> special
> >>>>>>>>>>>>>>>>>>> network
> >>>>>>>>>>>>>>>>>>> setup,
> >>>>>>>>>>>>>>>>>>> so I have one physical network port with a /32 netmask
> >>>>>>>>>>>>>>>>>>> and
> >>>>>>>>>>>>>>>>>>> point-to-point
> >>>>>>>>>>>>>>>>>>> config to the router. The physical port I connect to an OVS
> >>>>>>>>>>>>>>>>>>> bridge
> >>>>>>>>>>>>>>>>>>> which has
> >>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>> public ip. Since ovirt always messes up the network
> >>>>>>>>>>>>>>>>>>> config when
> >>>>>>>>>>>>>>>>>>> I've
> >>>>>>>>>>>>>>>>>>> tried
> >>>>>>>>>>>>>>>>>>> to let it have access to the network config for the
> >>>>>>>>>>>>>>>>>>> physical
> >>>>>>>>>>>>>>>>>>> port, I've
> >>>>>>>>>>>>>>>>>>> set
> >>>>>>>>>>>>>>>>>>> eth0 and ovsbridge0 as hidden in vdsm.conf.
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> I then create a bridge for use with ovirt, with a private
> >>>>>>>>>>>>>>>>>>> address. With
> >>>>>>>>>>>>>>>>>>> the
> >>>>>>>>>>>>>>>>>>> OVN provider I am now able to import these into the
> >>>>>>>>>>>>>>>>>>> engine and
> >>>>>>>>>>>>>>>>>>> it looks
> >>>>>>>>>>>>>>>>>>> good. When creating a VM I can select that it will have a
> >>>>>>>>>>>>>>>>>>> vNic
> >>>>>>>>>>>>>>>>>>> on my OVS
> >>>>>>>>>>>>>>>>>>> bridge.
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> However, I can't start the VM as an exception is thrown
> >>>>>>>>>>>>>>>>>>> in the
> >>>>>>>>>>>>>>>>>>> log:
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> 2016-12-28 00:13:33,350 ERROR
> >>>>>>>>>>>>>>>>>>> [org.ovirt.engine.core.bll.RunVmCommand]
> >>>>>>>>>>>>>>>>>>> (default task-5) [3c882d53] Error during
> >>>>>>>>>>>>>>>>>>> ValidateFailure.:
> >>>>>>>>>>>>>>>>>>> java.lang.NullPointerException
> >>>>>>>>>>>>>>>>>>> at
> >>>>>>>>>>>>>>>>>>> org.ovirt.engine.core.bll.scheduling.policyunits.NetworkPolicyUnit.validateRequiredNetworksAvailable(NetworkPolicyUnit.java:140)
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> [bll.jar:]
> >>>>>>>>>>>>>>>>>>> at
> >>>>>>>>>>>>>>>>>>> org.ovirt.engine.core.bll.scheduling.policyunits.NetworkPolicyUnit.filter(NetworkPolicyUnit.java:69)
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> [bll.jar:]
> >>>>>>>>>>>>>>>>>>> at
> >>>>>>>>>>>>>>>>>>> org.ovirt.engine.core.bll.scheduling.SchedulingManager.runInternalFilters(SchedulingManager.java:597)
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> [bll.jar:]
> >>>>>>>>>>>>>>>>>>> at
> >>>>>>>>>>>>>>>>>>> org.ovirt.engine.core.bll.scheduling.SchedulingManager.runFilters(SchedulingManager.java:564)
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> [bll.jar:]
> >>>>>>>>>>>>>>>>>>> at
> >>>>>>>>>>>>>>>>>>> org.ovirt.engine.core.bll.scheduling.SchedulingManager.canSchedule(SchedulingManager.java:494)
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> [bll.jar:]
> >>>>>>>>>>>>>>>>>>> at
> >>>>>>>>>>>>>>>>>>> org.ovirt.engine.core.bll.validator.RunVmValidator.canRunVm(RunVmValidator.java:133)
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> [bll.jar:]
> >>>>>>>>>>>>>>>>>>> at
> >>>>>>>>>>>>>>>>>>> org.ovirt.engine.core.bll.RunVmCommand.validate(RunVmCommand.java:940)
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> [bll.jar:]
> >>>>>>>>>>>>>>>>>>> at
> >>>>>>>>>>>>>>>>>>> org.ovirt.engine.core.bll.CommandBase.internalValidate(CommandBase.java:886)
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> [bll.jar:]
> >>>>>>>>>>>>>>>>>>> at
> >>>>>>>>>>>>>>>>>>> org.ovirt.engine.core.bll.CommandBase.validateOnly(CommandBase.java:366)
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> [bll.jar:]
> >>>>>>>>>>>>>>>>>>> at
> >>>>>>>>>>>>>>>>>>> org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.canRunActions(PrevalidatingMultipleActionsRunner.java:113)
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> [bll.jar:]
> >>>>>>>>>>>>>>>>>>> at
> >>>>>>>>>>>>>>>>>>> org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.invokeCommands(PrevalidatingMultipleActionsRunner.java:99)
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> [bll.jar:]
> >>>>>>>>>>>>>>>>>>> at
> >>>>>>>>>>>>>>>>>>> org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.execute(PrevalidatingMultipleActionsRunner.java:76)
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> [bll.jar:]
> >>>>>>>>>>>>>>>>>>> at
> >>>>>>>>>>>>>>>>>>> org.ovirt.engine.core.bll.Backend.runMultipleActionsImpl(Backend.java:613)
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> [bll.jar:]
> >>>>>>>>>>>>>>>>>>> at
> >>>>>>>>>>>>>>>>>>> org.ovirt.engine.core.bll.Backend.runMultipleActions(Backend.java:583)
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> [bll.jar:]
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> Looking at that section of code where the exception is
> >>>>>>>>>>>>>>>>>>> thrown,
> >>>>>>>>>>>>>>>>>>> I see
> >>>>>>>>>>>>>>>>>>> that
> >>>>>>>>>>>>>>>>>>> it
> >>>>>>>>>>>>>>>>>>> iterates over host networks to find required networks,
> >>>>>>>>>>>>>>>>>>> which I
> >>>>>>>>>>>>>>>>>>> assume is
> >>>>>>>>>>>>>>>>>>> ovirtmgmt. In the host network setup dialog I don't see
> >>>>>>>>>>>>>>>>>>> any
> >>>>>>>>>>>>>>>>>>> networks at
> >>>>>>>>>>>>>>>>>>> all
> >>>>>>>>>>>>>>>>>>> but it lists ovirtmgmt as required. It also lists the OVN
> >>>>>>>>>>>>>>>>>>> networks but
> >>>>>>>>>>>>>>>>>>> these
> >>>>>>>>>>>>>>>>>>> can't be statically assigned as they are added
> >>>>>>>>>>>>>>>>>>> dynamically when
> >>>>>>>>>>>>>>>>>>> needed,
> >>>>>>>>>>>>>>>>>>> which is fine.
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> I believe that I either need to remove the ovirtmgmt
> >>>>>>>>>>>>>>>>>>> network or configure it to be provided by the OVN
> >>>>>>>>>>>>>>>>>>> provider, but neither is possible. Preferably, which
> >>>>>>>>>>>>>>>>>>> network is the management (and mandatory) network
> >>>>>>>>>>>>>>>>>>> shouldn't be hardcoded, but should be configurable.
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> /Sverker
> >>>>>>>>>>>>>>>>>>> On 2016-12-27 at 17:10, Marcin Mirecki wrote:
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> _______________________________________________
> >>>>>>>>>>>>>>>> Users mailing list
> >>>>>>>>>>>>>>>> Users at ovirt.org
> >>>>>>>>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>> _______________________________________________
> >>>>>>>>>>>>>> Users mailing list
> >>>>>>>>>>>>>> Users at ovirt.org
> >>>>>>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
> >>>>>>>>>>>>> _______________________________________________
> >>>>>>>>>>>>> Users mailing list
> >>>>>>>>>>>>> Users at ovirt.org
> >>>>>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
> >>>>>>>>>>>> _______________________________________________
> >>>>>>>>>>>> Users mailing list
> >>>>>>>>>>>> Users at ovirt.org
> >>>>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
> >>>>>>>>>> _______________________________________________
> >>>>>>>>>> Users mailing list
> >>>>>>>>>> Users at ovirt.org
> >>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
> >>>>>>>>>>
> >>>>>>
> >>>>>
> >>>>> _______________________________________________
> >>>>> Users mailing list
> >>>>> Users at ovirt.org
> >>>>> http://lists.ovirt.org/mailman/listinfo/users
> >>>>
> >>>>
> >>>> _______________________________________________
> >>>> Users mailing list
> >>>> Users at ovirt.org
> >>>> http://lists.ovirt.org/mailman/listinfo/users
> >>>
> >>>
> >>> _______________________________________________
> >>> Users mailing list
> >>> Users at ovirt.org
> >>> http://lists.ovirt.org/mailman/listinfo/users
> >>
> 
> 
> 

