oVirt 4 / OVN / Communication issues of instances between nodes.

Note: When I configured vdsm-tool ovn-config, I passed it the IP address of the OVN-Controller, which is using the ovirtmgmt network (just one of the NICs on the nodes).

I am opening a new thread because I feel this differs a bit from my original request. I have OVN deployed, and I believe it is deployed correctly. I have noticed that if instances get spun up on the same oVirt node they can all talk to one another without issues; however, if an instance gets spun up on another node, even on the same OVN network/subnet, it can't ping or reach the other instances in the subnet. I noticed that the OVN-Controller on the host whose instance can't talk is logging:

2016-12-02T22:50:54.907Z|00181|pinctrl|INFO|DHCPOFFER 00:1a:4a:16:01:5c 10.10.10.4
2016-12-02T22:50:54.908Z|00182|pinctrl|INFO|DHCPACK 00:1a:4a:16:01:5c 10.10.10.4
2016-12-02T22:50:55.695Z|00183|ofctrl|INFO|Dropped 7 log messages in last 10 seconds (most recently, 0 seconds ago) due to excessive rate
2016-12-02T22:50:55.695Z|00184|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33)
2016-12-02T22:51:10.705Z|00185|ofctrl|INFO|Dropped 6 log messages in last 15 seconds (most recently, 5 seconds ago) due to excessive rate
2016-12-02T22:51:10.705Z|00186|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33)
2016-12-02T22:51:20.710Z|00187|ofctrl|INFO|Dropped 4 log messages in last 10 seconds (most recently, 5 seconds ago) due to excessive rate
2016-12-02T22:51:20.710Z|00188|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33)
2016-12-02T22:51:35.718Z|00189|ofctrl|INFO|Dropped 5 log messages in last 15 seconds (most recently, 5 seconds ago) due to excessive rate
2016-12-02T22:51:35.718Z|00190|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33)
2016-12-02T22:51:45.724Z|00191|ofctrl|INFO|Dropped 3 log messages in last 10 seconds (most recently, 5 seconds ago) due to excessive rate
2016-12-02T22:51:45.724Z|00192|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33)
2016-12-02T22:51:55.730Z|00193|ofctrl|INFO|Dropped 5 log messages in last 10 seconds (most recently, 0 seconds ago) due to excessive rate
2016-12-02T22:51:55.730Z|00194|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33)
2016-12-02T22:52:10.738Z|00195|ofctrl|INFO|Dropped 5 log messages in last 15 seconds (most recently, 5 seconds ago) due to excessive rate
2016-12-02T22:52:10.739Z|00196|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33)
2016-12-02T22:52:20.744Z|00197|ofctrl|INFO|Dropped 3 log messages in last 10 seconds (most recently, 5 seconds ago) due to excessive rate
2016-12-02T22:52:20.744Z|00198|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33)
2016-12-02T22:52:35.752Z|00199|ofctrl|INFO|Dropped 5 log messages in last 15 seconds (most recently, 5 seconds ago) due to excessive rate
2016-12-02T22:52:35.752Z|00200|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33)
2016-12-02T22:52:45.758Z|00201|ofctrl|INFO|Dropped 4 log messages in last 10 seconds (most recently, 5 seconds ago) due to excessive rate
2016-12-02T22:52:45.758Z|00202|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33)

From the OVN-Controller:

[root@dev001-022-002 ~]# ovn-nbctl show
switch ddb3b92f-b359-4b59-a41a-ebae6df7fe9a (devins-net)
    port 6b289418-8b8e-42b4-8334-c71584afcd3e
        addresses: ["00:1a:4a:16:01:5c dynamic"]
    port 71ef81f1-7c20-4c68-b536-d274703f7541
        addresses: ["00:1a:4a:16:01:61 dynamic"]
    port 91d4f4f5-4b9f-42c0-aa2c-8a101474bb84
        addresses: ["00:1a:4a:16:01:5e dynamic"]

Do I need to do something special in order to allow communication between nodes of instances on the same OVN network?

Output of ovs-vsctl show from node3:

61af799c-a621-445e-8183-23dcb38ea3cc
    Bridge br-int
        fail_mode: secure
        Port "ovn-456949-0"
            Interface "ovn-456949-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="172.10.10.74"}
        Port "ovn-c0dc09-0"
            Interface "ovn-c0dc09-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="172.10.10.73"}
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.6.90"

--
Devin Acosta
Red Hat Certified Architect, LinuxStack
602-354-1220 || devin@linuxguru.co
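Aside (not part of the original message): since the ovs-vsctl output above shows Geneve tunnel ports, one quick way to see whether encapsulated traffic actually leaves a node is to capture on the NIC that carries the tunnel endpoint IP while pinging between instances on different hosts. The interface name below is only an example:

# Geneve uses UDP port 6081; if no 6081 traffic shows up here while pinging,
# the packets are never being tunnelled (or are blocked before encapsulation)
tcpdump -ni ovirtmgmt udp port 6081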

Devin,

Please note that the OVN-controller is not the central part where OVN northd is running. The OVN-controllers are the OVN processes deployed on the hosts.

The correct usage of 'vdsm-tool ovn-config' is to pass it:
- the IP of the OVN central (not to be confused with the OVN-controllers, which are the part of OVN running on the hosts)
- the local host IP to be used for tunneling to other OVN hosts

For example, if the OVN central IP is 10.10.10.1 and the IP of the local host used for tunneling is 10.10.10.101:

vdsm-tool ovn-config 10.10.10.1 10.10.10.101

Looking at the output of 'ovs-vsctl', the tunnels have been created. The OVN log saying 'dropping duplicate flow' is worrying; let me forward this to the OVN team to take a look at it.

Marcin
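Aside (not part of Marcin's reply): ovn-controller on each host reads the central DB address and the local encapsulation settings from the external_ids column of the Open_vSwitch table, so one hedged way to confirm what ovn-config ended up writing on a host is something like:

ovs-vsctl get Open_vSwitch . external_ids:ovn-remote      # should point at the OVN central, e.g. tcp:10.10.10.1:6642
ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-type  # expected to be geneve
ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-ip    # the local tunnel IP, e.g. 10.10.10.101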
From: "Devin Acosta" <devin@pabstatencio.com> To: "users" <Users@ovirt.org> Sent: Saturday, December 3, 2016 12:24:21 AM Subject: [ovirt-users] oVIRT 4 / OVN / Communication issues of instances between nodes.
Note: When I configured vdsm-tool ovn-config, I passed it the IP address of the OVN-Controller which is using the ovirtmgmt network, which is just one of the NIC's on the nodes.
I am opening up new thread as this I feel differs a bit from my original request. I have OVN which I believe is deployed correctly. I have noticed that if instances get spun up on the same oVIRT node they can all talk without issues to one another, however if one instance gets spun up on another node even if it has the same (OVN network/subnet), it can't ping or reach other instances in the subnet. I noticed that the OVN-Controller of the instance that can't talk is logging:
2016-12-02T22:50:54.907Z|00181|pinctrl|INFO|DHCPOFFER 00:1a:4a:16:01:5c 10.10.10.4 2016-12-02T22:50:54.908Z|00182|pinctrl|INFO|DHCPACK 00:1a:4a:16:01:5c 10.10.10.4 2016-12-02T22:50:55.695Z|00183|ofctrl|INFO|Dropped 7 log messages in last 10 seconds (most recently, 0 seconds ago) due to excessive rate 2016-12-02T22:50:55.695Z|00184|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33) 2016-12-02T22:51:10.705Z|00185|ofctrl|INFO|Dropped 6 log messages in last 15 seconds (most recently, 5 seconds ago) due to excessive rate 2016-12-02T22:51:10.705Z|00186|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33) 2016-12-02T22:51:20.710Z|00187|ofctrl|INFO|Dropped 4 log messages in last 10 seconds (most recently, 5 seconds ago) due to excessive rate 2016-12-02T22:51:20.710Z|00188|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33) 2016-12-02T22:51:35.718Z|00189|ofctrl|INFO|Dropped 5 log messages in last 15 seconds (most recently, 5 seconds ago) due to excessive rate 2016-12-02T22:51:35.718Z|00190|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33) 2016-12-02T22:51:45.724Z|00191|ofctrl|INFO|Dropped 3 log messages in last 10 seconds (most recently, 5 seconds ago) due to excessive rate 2016-12-02T22:51:45.724Z|00192|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33) 2016-12-02T22:51:55.730Z|00193|ofctrl|INFO|Dropped 5 log messages in last 10 seconds (most recently, 0 seconds ago) due to excessive rate 2016-12-02T22:51:55.730Z|00194|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33) 2016-12-02T22:52:10.738Z|00195|ofctrl|INFO|Dropped 5 log messages in last 15 seconds (most recently, 5 seconds ago) due to excessive rate 2016-12-02T22:52:10.739Z|00196|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33) 2016-12-02T22:52:20.744Z|00197|ofctrl|INFO|Dropped 3 log messages in last 10 seconds (most recently, 5 seconds ago) due to excessive rate 2016-12-02T22:52:20.744Z|00198|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33) 2016-12-02T22:52:35.752Z|00199|ofctrl|INFO|Dropped 5 log messages in last 15 seconds (most recently, 5 seconds ago) due to excessive rate 2016-12-02T22:52:35.752Z|00200|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33) 2016-12-02T22:52:45.758Z|00201|ofctrl|INFO|Dropped 4 log messages in last 10 seconds (most recently, 5 seconds ago) due to excessive rate 2016-12-02T22:52:45.758Z|00202|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33)
From the OVN-Controller:
[root@dev001-022-002 ~]# ovn-nbctl show switch ddb3b92f-b359-4b59-a41a-ebae6df7fe9a (devins-net) port 6b289418-8b8e-42b4-8334-c71584afcd3e addresses: ["00:1a:4a:16:01:5c dynamic"] port 71ef81f1-7c20-4c68-b536-d274703f7541 addresses: ["00:1a:4a:16:01:61 dynamic"] port 91d4f4f5-4b9f-42c0-aa2c-8a101474bb84 addresses: ["00:1a:4a:16:01:5e dynamic"]
Do I need to do something special in order to allow communication between nodes of instances on same OVN network?
Output of ovs-vsctl show from node3:
61af799c-a621-445e-8183-23dcb38ea3cc Bridge br-int fail_mode: secure Port "ovn-456949-0" Interface "ovn-456949-0" type: geneve options: {csum="true", key=flow, remote_ip="172.10.10.74"} Port "ovn-c0dc09-0" Interface "ovn-c0dc09-0" type: geneve options: {csum="true", key=flow, remote_ip="172.10.10.73"} Port br-int Interface br-int type: internal ovs_version: "2.6.90"
--
Devin Acosta Red Hat Certified Architect, LinuxStack 602-354-1220 || devin@linuxguru.co
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

Marcin,

For OVN to work properly, does the port that the tunnel traffic flows over need to be a bridge or an OVS port? Right now it is just going over the ovirtmgmt network, which is a standard port. I know that in Neutron, for example, you have to configure br-ex, which then has to be an OVS bridge, and every node needs an OVS port for it. I presume OVN tries to simplify this setup? I have also seen that there is an openvswitch-ovn-vtep package; would this need to be configured in any way?
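Aside (not part of the exchange): the Geneve tunnels only need plain IP connectivity between the endpoint addresses that appear as remote_ip in the ovs-vsctl show output, so a basic sanity check from node3 would be something along these lines (the firewall commands assume a stock CentOS 7 host):

ping -c 3 172.10.10.74        # remote_ip of one of node3's tunnel ports
iptables -S | grep -i 6081    # is Geneve (UDP 6081) allowed through?
firewall-cmd --list-all       # same check if firewalld is managing the rules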

Marcin,

I also noticed that your original post mentions:

ip link - the result should include a link called genev_sys_ ...

On my hosts I don't see any links named genev_sys_. Could this be a problem? The links I do see are:

lo: enp4s0f0: enp4s0f1: enp7s0f0: enp7s0f1: bond0: DEV-NOC: ovirtmgmt: bond0.700@bond0: DEV-VM-NET: bond0.705@bond0: ;vdsmdummy;: vnet0: vnet1: vnet2: vnet3: vnet4: ovs-system: br-int: vnet5: vnet6:

However, br-int appears to have been configured:

[root@las01-902-001 ~]# ovs-vsctl show
4c817c66-9842-471d-b53a-963e27e3364f
    Bridge br-int
        fail_mode: secure
        Port "vnet6"
            Interface "vnet6"
        Port "vnet5"
            Interface "vnet5"
        Port "ovn-456949-0"
            Interface "ovn-456949-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="172.10.10.74"}
        Port "ovn-252778-0"
            Interface "ovn-252778-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="172.10.10.75"}
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.6.90"

However, there is no traffic showing on it:

[root@las01-902-001 ~]# ifconfig br-int
br-int: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 2e:c4:a6:fa:0c:40  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

From: "Devin Acosta" <devin@pabstatencio.com> To: "Marcin Mirecki" <mmirecki@redhat.com> Cc: "users" <Users@ovirt.org> Sent: Monday, December 5, 2016 12:11:46 PM Subject: Re: [ovirt-users] oVIRT 4 / OVN / Communication issues of instances between nodes.
Marcin,
Also I noticed in your original post it mentions:
ip link - the result should include a link called genev_sys_ ...
I noticed that on my hosts I don't see any links with name: genev_sys_ ?? Could this be a problem?
lo: enp4s0f0: enp4s0f1: enp7s0f0: enp7s0f1: bond0: DEV-NOC: ovirtmgmt: bond0.700@bond0: DEV-VM-NET: bond0.705@bond0: ;vdsmdummy;: vnet0: vnet1: vnet2: vnet3: vnet4: ovs-system: br-int: vnet5: vnet6:
Hi Devin,

What distribution and kernel version are you using?

One thing you could check is whether the vport_geneve kernel module is being loaded, e.g. you should see something like:

$ lsmod | grep vport
vport_geneve 12560 1
openvswitch 246755 5 vport_geneve

If vport_geneve is not loaded, you could run "sudo modprobe vport_geneve" to make sure it's available and can be loaded.

The first 100 lines or so of ovs-vswitchd.log might have some useful information about where things are going wrong. It does sound as though there is some issue with the geneve tunnels, which would certainly explain the issues with inter-node traffic.

Regards,
Lance
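A hedged sketch of the checks Lance describes, assuming a CentOS 7 host:

modinfo vport_geneve                   # is the module available at all?
sudo modprobe vport_geneve             # load it if it is not
lsmod | grep -E 'geneve|openvswitch'   # confirm it is now loaded
ip -d link show | grep genev           # a genev_sys_6081 link normally appears once OVS instantiates a Geneve tunnel port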

Lance,

I found some interesting logs. We have three oVirt nodes, running:

CentOS Linux release 7.2.1511 (Core)
Linux hostname 3.10.0-327.36.3.el7.x86_64 #1 SMP Mon Oct 24 16:09:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

What is interesting now is that my 2nd node won't hand out DHCP any more, and it also logs a strange "tunnel" error (see below). What is bizarre is that neither node1 nor node3 shows that tunnel message.

[ovirt-node1]
[root@ovirt-node1 ~]# lsmod | grep vport
vport_geneve 12815 1
geneve 13381 1 vport_geneve
openvswitch 84535 1 vport_geneve

[ovirt-node1 / ovn-controller.log]
2016-12-05T20:47:56.761Z|00001|vlog|INFO|opened log file /var/log/openvswitch/ovn-controller.log
2016-12-05T20:47:56.762Z|00002|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting...
2016-12-05T20:47:56.762Z|00003|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected
2016-12-05T20:47:56.764Z|00004|reconnect|INFO|tcp:172.20.192.77:6642: connecting...
2016-12-05T20:47:56.764Z|00005|reconnect|INFO|tcp:172.20.192.77:6642: connected
2016-12-05T20:47:56.768Z|00006|binding|INFO|Claiming lport 91d4f4f5-4b9f-42c0-aa2c-8a101474bb84 for this chassis.
2016-12-05T20:47:56.768Z|00007|binding|INFO|Claiming 00:1a:4a:16:01:5e dynamic
2016-12-05T20:47:56.768Z|00008|binding|INFO|Claiming lport 71ef81f1-7c20-4c68-b536-d274703f7541 for this chassis.
2016-12-05T20:47:56.768Z|00009|binding|INFO|Claiming 00:1a:4a:16:01:61 dynamic
2016-12-05T20:47:56.768Z|00010|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
2016-12-05T20:47:56.768Z|00011|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
2016-12-05T20:47:56.768Z|00012|pinctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
2016-12-05T20:47:56.768Z|00013|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
2016-12-05T20:47:56.768Z|00014|ofctrl|INFO|dropping duplicate flow: table_id=29, priority=50, metadata=0x2,dl_dst=00:1a:4a:16:01:62, actions=set_field:0x5->reg15,resubmit(,32) 2016-12-05T20:47:56.770Z|00015|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected 2016-12-05T20:47:56.770Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected 2016-12-05T20:47:56.770Z|00017|ofctrl|INFO|dropping duplicate flow: table_id=29, priority=50, metadata=0x2,dl_dst=00:1a:4a:16:01:62, actions=set_field:0x5->reg15,resubmit(,32) 2016-12-05T20:47:56.771Z|00018|ofctrl|INFO|dropping duplicate flow: table_id=29, priority=50, metadata=0x2,dl_dst=00:1a:4a:16:01:62, actions=set_field:0x5->reg15,resubmit(,32) 2016-12-05T20:47:56.772Z|00019|ofctrl|INFO|dropping duplicate flow: table_id=29, priority=50, metadata=0x2,dl_dst=00:1a:4a:16:01:62, actions=set_field:0x5->reg15,resubmit(,32) 2016-12-05T20:47:56.773Z|00020|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33) 2016-12-05T20:47:56.774Z|00021|ofctrl|INFO|OpenFlow error: OFPT_ERROR (OF1.3) (xid=0x17): OFPBAC_BAD_TYPE OFPT_FLOW_MOD (OF1.3) (xid=0x17): (***truncated to 64 bytes from 120***) 00000000 04 0e 00 78 00 00 00 17-00 00 00 00 00 00 00 00 |...x............| 00000010 00 00 00 00 00 00 00 00-32 00 00 00 00 00 00 64 |........2......d| 00000020 ff ff ff ff ff ff ff ff-ff ff ff ff 00 00 00 00 |................| 00000030 00 01 00 22 80 00 0a 02-86 dd 00 01 01 08 00 00 |..."............| 2016-12-05T20:47:56.774Z|00022|ofctrl|INFO|OpenFlow error: OFPT_ERROR (OF1.3) (xid=0x1f): OFPBAC_BAD_TYPE OFPT_FLOW_MOD (OF1.3) (xid=0x1f): (***truncated to 64 bytes from 160***) 00000000 04 0e 00 a0 00 00 00 1f-00 00 00 00 00 00 00 00 |................| 00000010 00 00 00 00 00 00 00 00-36 00 00 00 00 00 00 64 |........6......d| 00000020 ff ff ff ff ff ff ff ff-ff ff ff ff 00 00 00 00 |................| 00000030 00 01 00 22 80 00 0a 02-08 00 00 01 01 08 00 00 |..."............| 2016-12-05T20:47:56.774Z|00023|ofctrl|INFO|OpenFlow error: OFPT_ERROR (OF1.3) (xid=0x21): OFPBAC_BAD_TYPE OFPT_FLOW_MOD (OF1.3) (xid=0x21): (***truncated to 64 bytes from 136***) 00000000 04 0e 00 88 00 00 00 21-00 00 00 00 00 00 00 00 |.......!........| 00000010 00 00 00 00 00 00 00 00-19 00 00 00 00 00 00 64 |...............d| 00000020 ff ff ff ff ff ff ff ff-ff ff ff ff 00 00 00 00 |................| 00000030 00 01 00 22 80 00 0a 02-86 dd 00 01 01 08 00 00 |..."............| 2016-12-05T20:47:56.774Z|00024|ofctrl|INFO|OpenFlow error: OFPT_ERROR (OF1.3) (xid=0x2c): OFPBAC_BAD_TYPE OFPT_FLOW_MOD (OF1.3) (xid=0x2c): (***truncated to 64 bytes from 160***) 00000000 04 0e 00 a0 00 00 00 2c-00 00 00 00 00 00 00 00 |.......,........| 00000010 00 00 00 00 00 00 00 00-19 00 00 00 00 00 00 64 |...............d| 00000020 ff ff ff ff ff ff ff ff-ff ff ff ff 00 00 00 00 |................| 00000030 00 01 00 22 80 00 0a 02-86 dd 00 01 01 08 00 00 |..."............| 2016-12-05T20:47:56.774Z|00025|ofctrl|INFO|OpenFlow error: OFPT_ERROR (OF1.3) (xid=0x2d): OFPBAC_BAD_TYPE OFPT_FLOW_MOD (OF1.3) (xid=0x2d): (***truncated to 64 bytes from 136***) 00000000 04 0e 00 88 00 00 00 2d-00 00 00 00 00 00 00 00 |.......-........| 00000010 00 00 00 00 00 00 00 00-36 00 00 00 00 00 00 64 |........6......d| 00000020 ff ff ff ff ff ff ff ff-ff ff ff ff 00 00 00 00 |................| 00000030 00 01 00 22 80 00 0a 02-08 00 00 01 01 08 00 00 |..."............| 2016-12-05T20:47:56.774Z|00026|ofctrl|INFO|OpenFlow error: OFPT_ERROR (OF1.3) (xid=0x31): OFPBAC_BAD_TYPE OFPT_FLOW_MOD 
(OF1.3) (xid=0x31): (***truncated to 64 bytes from 120***) 00000000 04 0e 00 78 00 00 00 31-00 00 00 00 00 00 00 00 |...x...1........| 00000010 00 00 00 00 00 00 00 00-32 00 00 00 00 00 00 64 |........2......d| 00000020 ff ff ff ff ff ff ff ff-ff ff ff ff 00 00 00 00 |................| 00000030 00 01 00 22 80 00 0a 02-08 00 00 01 01 08 00 00 |..."............| 2016-12-05T20:47:56.774Z|00027|ofctrl|INFO|OpenFlow error: OFPT_ERROR (OF1.3) (xid=0x39): OFPBAC_BAD_TYPE OFPT_FLOW_MOD (OF1.3) (xid=0x39): (***truncated to 64 bytes from 136***) 00000000 04 0e 00 88 00 00 00 39-00 00 00 00 00 00 00 00 |.......9........| 00000010 00 00 00 00 00 00 00 00-36 00 00 00 00 00 00 64 |........6......d| 00000020 ff ff ff ff ff ff ff ff-ff ff ff ff 00 00 00 00 |................| 00000030 00 01 00 22 80 00 0a 02-86 dd 00 01 01 08 00 00 |..."............| 2016-12-05T20:47:56.774Z|00028|ofctrl|INFO|OpenFlow error: OFPT_ERROR (OF1.3) (xid=0x47): OFPBAC_BAD_TYPE OFPT_FLOW_MOD (OF1.3) (xid=0x47): (***truncated to 64 bytes from 160***) 00000000 04 0e 00 a0 00 00 00 47-00 00 00 00 00 00 00 00 |.......G........| 00000010 00 00 00 00 00 00 00 00-19 00 00 00 00 00 00 64 |...............d| 00000020 ff ff ff ff ff ff ff ff-ff ff ff ff 00 00 00 00 |................| 00000030 00 01 00 22 80 00 0a 02-08 00 00 01 01 08 00 00 |..."............| 2016-12-05T20:47:56.774Z|00029|ofctrl|INFO|OpenFlow error: OFPT_ERROR (OF1.3) (xid=0x49): OFPBAC_BAD_TYPE OFPT_FLOW_MOD (OF1.3) (xid=0x49): (***truncated to 64 bytes from 120***) 00000000 04 0e 00 78 00 00 00 49-00 00 00 00 00 00 00 00 |...x...I........| 00000010 00 00 00 00 00 00 00 00-15 00 00 00 00 00 00 64 |...............d| 00000020 ff ff ff ff ff ff ff ff-ff ff ff ff 00 00 00 00 |................| 00000030 00 01 00 22 80 00 0a 02-08 00 00 01 01 08 00 00 |..."............| 2016-12-05T20:47:56.774Z|00030|ofctrl|INFO|OpenFlow error: OFPT_ERROR (OF1.3) (xid=0x4a): OFPBAC_BAD_TYPE OFPT_FLOW_MOD (OF1.3) (xid=0x4a): (***truncated to 64 bytes from 160***) 00000000 04 0e 00 a0 00 00 00 4a-00 00 00 00 00 00 00 00 |.......J........| 00000010 00 00 00 00 00 00 00 00-36 00 00 00 00 00 00 64 |........6......d| 00000020 ff ff ff ff ff ff ff ff-ff ff ff ff 00 00 00 00 |................| 00000030 00 01 00 22 80 00 0a 02-86 dd 00 01 01 08 00 00 |..."............| 2016-12-05T20:47:56.774Z|00031|ofctrl|INFO|OpenFlow error: OFPT_ERROR (OF1.3) (xid=0x53): OFPBAC_BAD_TYPE OFPT_FLOW_MOD (OF1.3) (xid=0x53): (***truncated to 64 bytes from 136***) 00000000 04 0e 00 88 00 00 00 53-00 00 00 00 00 00 00 00 |.......S........| 00000010 00 00 00 00 00 00 00 00-19 00 00 00 00 00 00 64 |...............d| 00000020 ff ff ff ff ff ff ff ff-ff ff ff ff 00 00 00 00 |................| 00000030 00 01 00 22 80 00 0a 02-08 00 00 01 01 08 00 00 |..."............| 2016-12-05T20:47:56.774Z|00032|ofctrl|INFO|OpenFlow error: OFPT_ERROR (OF1.3) (xid=0x61): OFPBAC_BAD_TYPE OFPT_FLOW_MOD (OF1.3) (xid=0x61): (***truncated to 64 bytes from 120***) 00000000 04 0e 00 78 00 00 00 61-00 00 00 00 00 00 00 00 |...x...a........| 00000010 00 00 00 00 00 00 00 00-15 00 00 00 00 00 00 64 |...............d| 00000020 ff ff ff ff ff ff ff ff-ff ff ff ff 00 00 00 00 |................| 00000030 00 01 00 22 80 00 0a 02-86 dd 00 01 01 08 00 00 |..."............| [ovirt-node2] # lsmod | grep vport vport_geneve 12815 1 geneve 13381 1 vport_geneve openvswitch 84535 1 vport_geneve [ovirt-node2 in the ovs-vswitchd.log] What is of interest, is the "tunnel" error line, not sure where it's getting IP 50.48.48.48, not even an IP we use on this network, nor do 
i know where it's getting IP 47.51.48.55 ? 2016-12-05T20:35:04.499Z|00001|tunnel(revalidator25)|WARN|receive tunnel port not found (tun_id=0x3230203838373438,tun_src=47.51.48.55,tun_dst=50.48.48.48,tun_ipv6_src=::,tun_ipv6_dst=::,tun_gbp_id=0,tun_gbp_flags=0,tun_tos=20,tun_ttl=114,tun_flags=csum|key,vlan_tci=0x0000,dl_src=00:00:00:00:00:00,dl_dst=00:00:00:00:00:00,dl_type=0x1234) 2016-12-05T20:35:04.345Z|00001|vlog|INFO|opened log file /var/log/openvswitch/ovs-vswitchd.log 2016-12-05T20:35:04.347Z|00002|ovs_numa|INFO|Discovered 16 CPU cores on NUMA node 0 2016-12-05T20:35:04.347Z|00003|ovs_numa|INFO|Discovered 16 CPU cores on NUMA node 1 2016-12-05T20:35:04.347Z|00004|ovs_numa|INFO|Discovered 2 NUMA nodes and 32 CPU cores 2016-12-05T20:35:04.348Z|00005|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting... 2016-12-05T20:35:04.348Z|00006|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected 2016-12-05T20:35:04.350Z|00007|ofproto_dpif|INFO|system@ovs-system: Datapath supports recirculation 2016-12-05T20:35:04.350Z|00008|ofproto_dpif|INFO|system@ovs-system: MPLS label stack length probed as 1 2016-12-05T20:35:04.350Z|00009|ofproto_dpif|INFO|system@ovs-system: Datapath does not support truncate action 2016-12-05T20:35:04.350Z|00010|ofproto_dpif|INFO|system@ovs-system: Datapath supports unique flow ids 2016-12-05T20:35:04.350Z|00011|ofproto_dpif|INFO|system@ovs-system: Datapath does not support ct_state 2016-12-05T20:35:04.350Z|00012|ofproto_dpif|INFO|system@ovs-system: Datapath does not support ct_zone 2016-12-05T20:35:04.350Z|00013|ofproto_dpif|INFO|system@ovs-system: Datapath does not support ct_mark 2016-12-05T20:35:04.350Z|00014|ofproto_dpif|INFO|system@ovs-system: Datapath does not support ct_label 2016-12-05T20:35:04.350Z|00015|ofproto_dpif|INFO|system@ovs-system: Datapath does not support ct_state_nat 2016-12-05T20:35:04.493Z|00016|bridge|INFO|bridge br-int: added interface vnet0 on port 5 2016-12-05T20:35:04.493Z|00001|ofproto_dpif_upcall(handler1)|INFO|received packet on unassociated datapath port 0 2016-12-05T20:35:04.493Z|00017|bridge|INFO|bridge br-int: added interface br-int on port 65534 2016-12-05T20:35:04.493Z|00018|bridge|INFO|bridge br-int: using datapath ID 000016d6e0b66442 2016-12-05T20:35:04.493Z|00019|connmgr|INFO|br-int: added service controller "punix:/var/run/openvswitch/br-int.mgmt" 2016-12-05T20:35:04.494Z|00020|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.6.90 2016-12-05T20:35:04.499Z|00001|tunnel(revalidator25)|WARN|receive tunnel port not found (tun_id=0x3230203838373438,tun_src=47.51.48.55,tun_dst=50.48.48.48,tun_ipv6_src=::,tun_ipv6_dst=::,tun_gbp_id=0,tun_gbp_flags=0,tun_tos=20,tun_ttl=114,tun_flags=csum|key,vlan_tci=0x0000,dl_src=00:00:00:00:00:00,dl_dst=00:00:00:00:00:00,dl_type=0x1234) 2016-12-05T20:35:09.657Z|00021|bridge|INFO|bridge br-int: added interface ovn-c0dc09-0 on port 6 2016-12-05T20:35:09.657Z|00022|bridge|INFO|bridge br-int: added interface ovn-252778-0 on port 7 2016-12-05T20:35:09.660Z|00023|ofproto_dpif|WARN|Rejecting ct action because datapath does not support ct action (your kernel module may be out of date) 2016-12-05T20:35:09.660Z|00024|connmgr|INFO|br-int<->unix: sending OFPBAC_BAD_TYPE error reply to OFPT_FLOW_MOD message 2016-12-05T20:35:09.660Z|00025|ofproto_dpif|WARN|Rejecting ct action because datapath does not support ct action (your kernel module may be out of date) 2016-12-05T20:35:09.660Z|00026|connmgr|INFO|br-int<->unix: sending OFPBAC_BAD_TYPE error reply to OFPT_FLOW_MOD message 
2016-12-05T20:35:09.660Z|00027|ofproto_dpif|WARN|Rejecting ct action because datapath does not support ct action (your kernel module may be out of date) 2016-12-05T20:35:09.660Z|00028|connmgr|INFO|br-int<->unix: sending OFPBAC_BAD_TYPE error reply to OFPT_FLOW_MOD message 2016-12-05T20:35:09.660Z|00029|ofproto_dpif|WARN|Rejecting ct action because datapath does not support ct action (your kernel module may be out of date) 2016-12-05T20:35:09.660Z|00030|connmgr|INFO|br-int<->unix: sending OFPBAC_BAD_TYPE error reply to OFPT_FLOW_MOD message 2016-12-05T20:35:09.660Z|00031|ofproto_dpif|WARN|Rejecting ct action because datapath does not support ct action (your kernel module may be out of date) 2016-12-05T20:35:09.660Z|00032|connmgr|INFO|br-int<->unix: sending OFPBAC_BAD_TYPE error reply to OFPT_FLOW_MOD message 2016-12-05T20:35:09.660Z|00033|connmgr|INFO|br-int<->unix: sending OFPBAC_BAD_TYPE error reply to OFPT_FLOW_MOD message 2016-12-05T20:35:09.660Z|00034|connmgr|INFO|br-int<->unix: sending OFPBAC_BAD_TYPE error reply to OFPT_FLOW_MOD message 2016-12-05T20:35:09.660Z|00035|connmgr|INFO|br-int<->unix: sending OFPBAC_BAD_TYPE error reply to OFPT_FLOW_MOD message 2016-12-05T20:35:09.660Z|00036|connmgr|INFO|br-int<->unix: sending OFPBAC_BAD_TYPE error reply to OFPT_FLOW_MOD message 2016-12-05T20:35:09.660Z|00037|connmgr|INFO|br-int<->unix: sending OFPBAC_BAD_TYPE error reply to OFPT_FLOW_MOD message 2016-12-05T20:35:14.495Z|00038|memory|INFO|281084 kB peak resident set size after 10.2 seconds 2016-12-05T20:35:14.495Z|00039|memory|INFO|handlers:23 ofconns:2 ports:4 revalidators:9 rules:76 udpif keys:1 2016-12-05T20:35:19.659Z|00040|connmgr|INFO|br-int<->unix: 73 flow_mods 10 s ago (72 adds, 1 deletes) 2016-12-05T20:37:43.716Z|00041|connmgr|INFO|br-int<->unix: 8 flow_mods 10 s ago (7 deletes, 1 modifications) 2016-12-05T20:39:02.163Z|00042|bridge|INFO|bridge br-int: added interface ovn-c0dc09-0 on port 8 2016-12-05T20:39:12.165Z|00043|connmgr|INFO|br-int<->unix: 8 flow_mods 10 s ago (7 adds, 1 modifications) 2016-12-05T20:45:34.604Z|00044|connmgr|INFO|br-int<->unix: 8 flow_mods 10 s ago (7 deletes, 1 modifications) 2016-12-05T20:47:56.774Z|00045|bridge|INFO|bridge br-int: added interface ovn-c0dc09-0 on port 9 2016-12-05T20:48:06.776Z|00046|connmgr|INFO|br-int<->unix: 8 flow_mods 10 s ago (7 adds, 1 modifications) [ovirt-node3 / ovs-vswitchd.log] 2016-12-05T21:15:19.969Z|00001|vlog|INFO|opened log file /var/log/openvswitch/ovs-vswitchd.log 2016-12-05T21:15:19.973Z|00002|ovs_numa|INFO|Discovered 16 CPU cores on NUMA node 0 2016-12-05T21:15:19.973Z|00003|ovs_numa|INFO|Discovered 16 CPU cores on NUMA node 1 2016-12-05T21:15:19.973Z|00004|ovs_numa|INFO|Discovered 2 NUMA nodes and 32 CPU cores 2016-12-05T21:15:19.973Z|00005|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting... 
2016-12-05T21:15:19.973Z|00006|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected 2016-12-05T21:15:19.978Z|00007|ofproto_dpif|INFO|system@ovs-system: Datapath supports recirculation 2016-12-05T21:15:19.978Z|00008|ofproto_dpif|INFO|system@ovs-system: MPLS label stack length probed as 1 2016-12-05T21:15:19.978Z|00009|ofproto_dpif|INFO|system@ovs-system: Datapath does not support truncate action 2016-12-05T21:15:19.978Z|00010|ofproto_dpif|INFO|system@ovs-system: Datapath supports unique flow ids 2016-12-05T21:15:19.978Z|00011|ofproto_dpif|INFO|system@ovs-system: Datapath does not support ct_state 2016-12-05T21:15:19.978Z|00012|ofproto_dpif|INFO|system@ovs-system: Datapath does not support ct_zone 2016-12-05T21:15:19.978Z|00013|ofproto_dpif|INFO|system@ovs-system: Datapath does not support ct_mark 2016-12-05T21:15:19.978Z|00014|ofproto_dpif|INFO|system@ovs-system: Datapath does not support ct_label 2016-12-05T21:15:19.978Z|00015|ofproto_dpif|INFO|system@ovs-system: Datapath does not support ct_state_nat 2016-12-05T21:15:20.148Z|00001|ofproto_dpif_upcall(handler1)|INFO|received packet on unassociated datapath port 0 2016-12-05T21:15:20.148Z|00016|bridge|INFO|bridge br-int: added interface vnet0 on port 7 2016-12-05T21:15:20.148Z|00017|bridge|INFO|bridge br-int: added interface br-int on port 65534 2016-12-05T21:15:20.148Z|00018|bridge|INFO|bridge br-int: using datapath ID 0000921726222e4b 2016-12-05T21:15:20.148Z|00019|connmgr|INFO|br-int: added service controller "punix:/var/run/openvswitch/br-int.mgmt" 2016-12-05T21:15:20.150Z|00020|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.6.90 2016-12-05T21:15:30.150Z|00021|memory|INFO|281052 kB peak resident set size after 10.2 seconds 2016-12-05T21:15:30.150Z|00022|memory|INFO|handlers:23 ports:2 revalidators:9 rules:4 2016-12-05T21:15:47.900Z|00023|bridge|INFO|bridge br-int: added interface ovn-456949-0 on port 8 2016-12-05T21:15:47.900Z|00024|bridge|INFO|bridge br-int: added interface ovn-c0dc09-0 on port 9 2016-12-05T21:15:47.901Z|00025|ofproto_dpif|WARN|Rejecting ct action because datapath does not support ct action (your kernel module may be out of date) 2016-12-05T21:15:47.901Z|00026|connmgr|INFO|br-int<->unix: sending OFPBAC_BAD_TYPE error reply to OFPT_FLOW_MOD message 2016-12-05T21:15:47.901Z|00027|ofproto_dpif|WARN|Rejecting ct action because datapath does not support ct action (your kernel module may be out of date) 2016-12-05T21:15:47.901Z|00028|connmgr|INFO|br-int<->unix: sending OFPBAC_BAD_TYPE error reply to OFPT_FLOW_MOD message 2016-12-05T21:15:47.901Z|00029|ofproto_dpif|WARN|Rejecting ct action because datapath does not support ct action (your kernel module may be out of date) 2016-12-05T21:15:47.901Z|00030|connmgr|INFO|br-int<->unix: sending OFPBAC_BAD_TYPE error reply to OFPT_FLOW_MOD message 2016-12-05T21:15:47.901Z|00031|ofproto_dpif|WARN|Rejecting ct action because datapath does not support ct action (your kernel module may be out of date) 2016-12-05T21:15:47.901Z|00032|connmgr|INFO|br-int<->unix: sending OFPBAC_BAD_TYPE error reply to OFPT_FLOW_MOD message 2016-12-05T21:15:47.901Z|00033|ofproto_dpif|WARN|Rejecting ct action because datapath does not support ct action (your kernel module may be out of date) 2016-12-05T21:15:47.901Z|00034|connmgr|INFO|br-int<->unix: sending OFPBAC_BAD_TYPE error reply to OFPT_FLOW_MOD message 2016-12-05T21:15:47.901Z|00035|connmgr|INFO|br-int<->unix: sending OFPBAC_BAD_TYPE error reply to OFPT_FLOW_MOD message 2016-12-05T21:15:47.901Z|00036|connmgr|INFO|br-int<->unix: sending OFPBAC_BAD_TYPE 
error reply to OFPT_FLOW_MOD message 2016-12-05T21:15:47.902Z|00037|connmgr|INFO|br-int<->unix: sending OFPBAC_BAD_TYPE error reply to OFPT_FLOW_MOD message 2016-12-05T21:15:47.902Z|00038|connmgr|INFO|br-int<->unix: sending OFPBAC_BAD_TYPE error reply to OFPT_FLOW_MOD message 2016-12-05T21:15:47.902Z|00039|connmgr|INFO|br-int<->unix: sending OFPBAC_BAD_TYPE error reply to OFPT_FLOW_MOD message 2016-12-05T21:15:57.901Z|00040|connmgr|INFO|br-int<->unix: 73 flow_mods 10 s ago (72 adds, 1 deletes) [ovirt-node3 ovn-controller.log] 2016-12-05T21:15:47.858Z|00001|vlog|INFO|opened log file /var/log/openvswitch/ovn-controller.log 2016-12-05T21:15:47.860Z|00002|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting... 2016-12-05T21:15:47.860Z|00003|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected 2016-12-05T21:15:47.864Z|00004|reconnect|INFO|tcp:172.20.192.77:6642: connecting... 2016-12-05T21:15:47.864Z|00005|reconnect|INFO|tcp:172.20.192.77:6642: connected 2016-12-05T21:15:47.867Z|00006|binding|INFO|Claiming lport 6b289418-8b8e-42b4-8334-c71584afcd3e for this chassis. 2016-12-05T21:15:47.867Z|00007|binding|INFO|Claiming 00:1a:4a:16:01:5c dynamic 2016-12-05T21:15:47.867Z|00008|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch 2016-12-05T21:15:47.867Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... 2016-12-05T21:15:47.868Z|00010|pinctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch 2016-12-05T21:15:47.868Z|00011|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... 2016-12-05T21:15:47.868Z|00012|ofctrl|INFO|dropping duplicate flow: table_id=29, priority=50, metadata=0x2,dl_dst=00:1a:4a:16:01:62, actions=set_field:0x5->reg15,resubmit(,32) 2016-12-05T21:15:47.869Z|00013|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected 2016-12-05T21:15:47.869Z|00014|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected 2016-12-05T21:15:47.870Z|00015|ofctrl|INFO|dropping duplicate flow: table_id=29, priority=50, metadata=0x2,dl_dst=00:1a:4a:16:01:62, actions=set_field:0x5->reg15,resubmit(,32) 2016-12-05T21:15:47.871Z|00016|ofctrl|INFO|dropping duplicate flow: table_id=29, priority=50, metadata=0x2,dl_dst=00:1a:4a:16:01:62, actions=set_field:0x5->reg15,resubmit(,32) 2016-12-05T21:15:47.872Z|00017|ofctrl|INFO|dropping duplicate flow: table_id=29, priority=50, metadata=0x2,dl_dst=00:1a:4a:16:01:62, actions=set_field:0x5->reg15,resubmit(,32) 2016-12-05T21:15:47.901Z|00018|ofctrl|INFO|dropping duplicate flow: table_id=29, priority=50, metadata=0x2,dl_dst=00:1a:4a:16:01:62, actions=set_field:0x5->reg15,resubmit(,32) 2016-12-05T21:15:47.901Z|00019|ofctrl|INFO|OpenFlow error: OFPT_ERROR (OF1.3) (xid=0x16): OFPBAC_BAD_TYPE OFPT_FLOW_MOD (OF1.3) (xid=0x16): (***truncated to 64 bytes from 120***) 00000000 04 0e 00 78 00 00 00 16-00 00 00 00 00 00 00 00 |...x............| 00000010 00 00 00 00 00 00 00 00-32 00 00 00 00 00 00 64 |........2......d| 00000020 ff ff ff ff ff ff ff ff-ff ff ff ff 00 00 00 00 |................| 00000030 00 01 00 22 80 00 0a 02-86 dd 00 01 01 08 00 00 |..."............| 2016-12-05T21:15:47.902Z|00020|ofctrl|INFO|OpenFlow error: OFPT_ERROR (OF1.3) (xid=0x1c): OFPBAC_BAD_TYPE OFPT_FLOW_MOD (OF1.3) (xid=0x1c): (***truncated to 64 bytes from 160***) 00000000 04 0e 00 a0 00 00 00 1c-00 00 00 00 00 00 00 00 |................| 00000010 00 00 00 00 00 00 00 00-36 00 00 00 00 00 00 64 |........6......d| 00000020 ff ff ff ff ff ff ff ff-ff ff ff ff 00 00 00 00 |................| 00000030 00 01 00 22 80 00 0a 
02-08 00 00 01 01 08 00 00 |..."............| 2016-12-05T21:15:47.902Z|00021|ofctrl|INFO|OpenFlow error: OFPT_ERROR (OF1.3) (xid=0x1e): OFPBAC_BAD_TYPE OFPT_FLOW_MOD (OF1.3) (xid=0x1e): (***truncated to 64 bytes from 136***) 00000000 04 0e 00 88 00 00 00 1e-00 00 00 00 00 00 00 00 |................| 00000010 00 00 00 00 00 00 00 00-19 00 00 00 00 00 00 64 |...............d| 00000020 ff ff ff ff ff ff ff ff-ff ff ff ff 00 00 00 00 |................| 00000030 00 01 00 22 80 00 0a 02-86 dd 00 01 01 08 00 00 |..."............| 2016-12-05T21:15:47.902Z|00022|ofctrl|INFO|OpenFlow error: OFPT_ERROR (OF1.3) (xid=0x27): OFPBAC_BAD_TYPE OFPT_FLOW_MOD (OF1.3) (xid=0x27): (***truncated to 64 bytes from 160***) 00000000 04 0e 00 a0 00 00 00 27-00 00 00 00 00 00 00 00 |.......'........| 00000010 00 00 00 00 00 00 00 00-19 00 00 00 00 00 00 64 |...............d| 00000020 ff ff ff ff ff ff ff ff-ff ff ff ff 00 00 00 00 |................| 00000030 00 01 00 22 80 00 0a 02-86 dd 00 01 01 08 00 00 |..."............| 2016-12-05T21:15:47.902Z|00023|ofctrl|INFO|OpenFlow error: OFPT_ERROR (OF1.3) (xid=0x28): OFPBAC_BAD_TYPE OFPT_FLOW_MOD (OF1.3) (xid=0x28): (***truncated to 64 bytes from 136***) 00000000 04 0e 00 88 00 00 00 28-00 00 00 00 00 00 00 00 |.......(........| 00000010 00 00 00 00 00 00 00 00-36 00 00 00 00 00 00 64 |........6......d| 00000020 ff ff ff ff ff ff ff ff-ff ff ff ff 00 00 00 00 |................| 00000030 00 01 00 22 80 00 0a 02-08 00 00 01 01 08 00 00 |..."............| 2016-12-05T21:15:47.902Z|00024|ofctrl|INFO|OpenFlow error: OFPT_ERROR (OF1.3) (xid=0x2b): OFPBAC_BAD_TYPE OFPT_FLOW_MOD (OF1.3) (xid=0x2b): (***truncated to 64 bytes from 120***) 00000000 04 0e 00 78 00 00 00 2b-00 00 00 00 00 00 00 00 |...x...+........| 00000010 00 00 00 00 00 00 00 00-32 00 00 00 00 00 00 64 |........2......d| 00000020 ff ff ff ff ff ff ff ff-ff ff ff ff 00 00 00 00 |................| 00000030 00 01 00 22 80 00 0a 02-08 00 00 01 01 08 00 00 |..."............| 2016-12-05T21:15:47.902Z|00025|ofctrl|INFO|OpenFlow error: OFPT_ERROR (OF1.3) (xid=0x31): OFPBAC_BAD_TYPE OFPT_FLOW_MOD (OF1.3) (xid=0x31): (***truncated to 64 bytes from 136***) 00000000 04 0e 00 88 00 00 00 31-00 00 00 00 00 00 00 00 |.......1........| 00000010 00 00 00 00 00 00 00 00-36 00 00 00 00 00 00 64 |........6......d| 00000020 ff ff ff ff ff ff ff ff-ff ff ff ff 00 00 00 00 |................| 00000030 00 01 00 22 80 00 0a 02-86 dd 00 01 01 08 00 00 |..."............| 2016-12-05T21:15:47.902Z|00026|ofctrl|INFO|OpenFlow error: OFPT_ERROR (OF1.3) (xid=0x3d): OFPBAC_BAD_TYPE OFPT_FLOW_MOD (OF1.3) (xid=0x3d): (***truncated to 64 bytes from 160***) 00000000 04 0e 00 a0 00 00 00 3d-00 00 00 00 00 00 00 00 |.......=........| 00000010 00 00 00 00 00 00 00 00-19 00 00 00 00 00 00 64 |...............d| 00000020 ff ff ff ff ff ff ff ff-ff ff ff ff 00 00 00 00 |................| 00000030 00 01 00 22 80 00 0a 02-08 00 00 01 01 08 00 00 |..."............| 2016-12-05T21:15:47.902Z|00027|ofctrl|INFO|OpenFlow error: OFPT_ERROR (OF1.3) (xid=0x3f): OFPBAC_BAD_TYPE OFPT_FLOW_MOD (OF1.3) (xid=0x3f): (***truncated to 64 bytes from 120***) 00000000 04 0e 00 78 00 00 00 3f-00 00 00 00 00 00 00 00 |...x...?........| 00000010 00 00 00 00 00 00 00 00-15 00 00 00 00 00 00 64 |...............d| 00000020 ff ff ff ff ff ff ff ff-ff ff ff ff 00 00 00 00 |................| 00000030 00 01 00 22 80 00 0a 02-08 00 00 01 01 08 00 00 |..."............| 2016-12-05T21:15:47.902Z|00028|ofctrl|INFO|OpenFlow error: OFPT_ERROR (OF1.3) (xid=0x40): OFPBAC_BAD_TYPE OFPT_FLOW_MOD (OF1.3) 
(xid=0x40): (***truncated to 64 bytes from 160***) 00000000 04 0e 00 a0 00 00 00 40-00 00 00 00 00 00 00 00 |.......@........| 00000010 00 00 00 00 00 00 00 00-36 00 00 00 00 00 00 64 |........6......d| 00000020 ff ff ff ff ff ff ff ff-ff ff ff ff 00 00 00 00 |................| 00000030 00 01 00 22 80 00 0a 02-86 dd 00 01 01 08 00 00 |..."............| 2016-12-05T21:15:47.902Z|00029|ofctrl|INFO|OpenFlow error: OFPT_ERROR (OF1.3) (xid=0x46): OFPBAC_BAD_TYPE OFPT_FLOW_MOD (OF1.3) (xid=0x46): (***truncated to 64 bytes from 136***) 00000000 04 0e 00 88 00 00 00 46-00 00 00 00 00 00 00 00 |.......F........| 00000010 00 00 00 00 00 00 00 00-19 00 00 00 00 00 00 64 |...............d| 00000020 ff ff ff ff ff ff ff ff-ff ff ff ff 00 00 00 00 |................| 00000030 00 01 00 22 80 00 0a 02-08 00 00 01 01 08 00 00 |..."............| 2016-12-05T21:15:47.902Z|00030|ofctrl|INFO|OpenFlow error: OFPT_ERROR (OF1.3) (xid=0x54): OFPBAC_BAD_TYPE OFPT_FLOW_MOD (OF1.3) (xid=0x54): (***truncated to 64 bytes from 120***) 00000000 04 0e 00 78 00 00 00 54-00 00 00 00 00 00 00 00 |...x...T........| 00000010 00 00 00 00 00 00 00 00-15 00 00 00 00 00 00 64 |...............d| 00000020 ff ff ff ff ff ff ff ff-ff ff ff ff 00 00 00 00 |................| 00000030 00 01 00 22 80 00 0a 02-86 dd 00 01 01 08 00 00 |..."............|

Your help is greatly appreciated!

Devin
From: "Devin Acosta" <devin@pabstatencio.com> To: "Marcin Mirecki" <mmirecki@redhat.com> Cc: "users" <Users@ovirt.org> Sent: Monday, December 5, 2016 12:11:46 PM Subject: Re: [ovirt-users] oVIRT 4 / OVN / Communication issues of instances between nodes.
Marcin,
Also I noticed in your original post it mentions:
ip link - the result should include a link called genev_sys_ ...
I noticed that on my hosts I don't see any links with name: genev_sys_ ?? Could this be a problem?
lo: enp4s0f0: enp4s0f1: enp7s0f0: enp7s0f1: bond0: DEV-NOC: ovirtmgmt: bond0.700@bond0: DEV-VM-NET: bond0.705@bond0: ;vdsmdummy;: vnet0: vnet1: vnet2: vnet3: vnet4: ovs-system: br-int: vnet5: vnet6:
Hi Devin,
What distribution and kernel version are you using?
One thing you could check is whether the vport_geneve kernel module is being loaded, e.g. you should see something like:
$ lsmod | grep vport vport_geneve 12560 1 openvswitch 246755 5 vport_geneve
If vport_geneve is not loaded, you could "sudo modprobe vport_geneve" to make sure it's available and can be loaded.
The first 100 lines or so of ovs-vswitchd.log might have some useful information about where things are going wrong.
It does sound as though there is some issue with geneve tunnels, which would certainly explain issues with inter-node traffic.
Regards,
Lance
-- Devin Acosta Red Hat Certified Architect, LinuxStack 602-354-1220 || devin@linuxguru.co

From: "Devin Acosta" <devin@pabstatencio.com> To: "Lance Richardson" <lrichard@redhat.com> Cc: "Marcin Mirecki" <mmirecki@redhat.com>, "users" <Users@ovirt.org> Sent: Monday, December 5, 2016 4:17:35 PM Subject: Re: [ovirt-users] oVIRT 4 / OVN / Communication issues of instances between nodes.
Lance,
I found some interesting logs, we have (3) oVIRT nodes.
We are running: CentOS Linux release 7.2.1511 (Core) Linux hostname 3.10.0-327.36.3.el7.x86_64 #1 SMP Mon Oct 24 16:09:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
<snip>
2016-12-05T20:47:56.774Z|00021|ofctrl|INFO|OpenFlow error: OFPT_ERROR (OF1.3) (xid=0x17): OFPBAC_BAD_TYPE
This (generally unintelligible) message usually indicates that the kernel openvswitch module doesn't support conntrack.

<snip>
2016-12-05T20:35:04.345Z|00001|vlog|INFO|opened log file /var/log/openvswitch/ovs-vswitchd.log
2016-12-05T20:35:04.347Z|00002|ovs_numa|INFO|Discovered 16 CPU cores on NUMA node 0
2016-12-05T20:35:04.347Z|00003|ovs_numa|INFO|Discovered 16 CPU cores on NUMA node 1
2016-12-05T20:35:04.347Z|00004|ovs_numa|INFO|Discovered 2 NUMA nodes and 32 CPU cores
2016-12-05T20:35:04.348Z|00005|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting...
2016-12-05T20:35:04.348Z|00006|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected
2016-12-05T20:35:04.350Z|00007|ofproto_dpif|INFO|system@ovs-system: Datapath supports recirculation
2016-12-05T20:35:04.350Z|00008|ofproto_dpif|INFO|system@ovs-system: MPLS label stack length probed as 1
2016-12-05T20:35:04.350Z|00009|ofproto_dpif|INFO|system@ovs-system: Datapath does not support truncate action
2016-12-05T20:35:04.350Z|00010|ofproto_dpif|INFO|system@ovs-system: Datapath supports unique flow ids
2016-12-05T20:35:04.350Z|00011|ofproto_dpif|INFO|system@ovs-system: Datapath does not support ct_state
2016-12-05T20:35:04.350Z|00012|ofproto_dpif|INFO|system@ovs-system: Datapath does not support ct_zone
2016-12-05T20:35:04.350Z|00013|ofproto_dpif|INFO|system@ovs-system: Datapath does not support ct_mark
2016-12-05T20:35:04.350Z|00014|ofproto_dpif|INFO|system@ovs-system: Datapath does not support ct_label
2016-12-05T20:35:04.350Z|00015|ofproto_dpif|INFO|system@ovs-system: Datapath does not support ct_state_nat
OK, "Datapath does not support ct_*" confirms that the kernel openvswitch module doesn't support the conntrack features needed by OVN. Most likely the loaded module is the stock CentOS one, you can build the out-of-tree kernel module RPM from the same source tree where you built the other OVS/OVN RPMs via: make rpm-fedora-kmod This should leave an RPM named something like: openvswitch-kmod-2.6.90-1.el7.centos.x86_64.rpm Install that and reboot and things should be working better. Regards, Lance
Your help is greatly appreciated!
Devin
On Mon, Dec 5, 2016 at 12:31 PM, Lance Richardson <lrichard@redhat.com> wrote:
From: "Devin Acosta" <devin@pabstatencio.com> To: "Marcin Mirecki" <mmirecki@redhat.com> Cc: "users" <Users@ovirt.org> Sent: Monday, December 5, 2016 12:11:46 PM Subject: Re: [ovirt-users] oVIRT 4 / OVN / Communication issues of instances between nodes.
Marcin,
Also I noticed in your original post it mentions:
ip link - the result should include a link called genev_sys_ ...
I noticed that on my hosts I don't see any links with name: genev_sys_ ?? Could this be a problem?
lo: enp4s0f0: enp4s0f1: enp7s0f0: enp7s0f1: bond0: DEV-NOC: ovirtmgmt: bond0.700@bond0: DEV-VM-NET: bond0.705@bond0: ;vdsmdummy;: vnet0: vnet1: vnet2: vnet3: vnet4: ovs-system: br-int: vnet5: vnet6:
Hi Devin,
What distribution and kernel version are you using?
One thing you could check is whether the vport_geneve kernel module is being loaded, e.g. you should see something like:
$ lsmod | grep vport vport_geneve 12560 1 openvswitch 246755 5 vport_geneve
If vport_geneve is not loaded, you could "sudo modprobe vport_geneve" to make sure it's available and can be loaded.
The first 100 lines or so of ovs-vswitchd.log might have some useful information about where things are going wrong.
It does sound as though there is some issue with geneve tunnels, which would certainly explain issues with inter-node traffic.
Regards,
Lance
--
Devin Acosta Red Hat Certified Architect, LinuxStack 602-354-1220 || devin@linuxguru.co

Lance,

Well I installed the new kernel module and it cleared up a lot of the errors I was seeing in the log, but I notice that I still can't ping instances between hosts. I'm starting to wonder whether I'm missing something fundamental here. I don't see anything in ovs-vswitchd.log that shows a tunnel.

I do show in the kernel log on reload of the module:

[1056295.308707] openvswitch: module verification failed: signature and/or required key missing - tainting kernel
[1056295.311034] openvswitch: Open vSwitch switching datapath 2.6.90
[1056295.311145] openvswitch: LISP tunneling driver
[1056295.311147] openvswitch: GRE over IPv4 tunneling driver
[1056295.311153] openvswitch: Geneve tunneling driver
[1056295.311164] openvswitch: VxLAN tunneling driver
[1056295.311166] openvswitch: STT tunneling driver

[node2]
[root@ovirt-node2 openvswitch]# cat ovs-vswitchd.log
2016-12-06T04:22:23.192Z|00001|vlog|INFO|opened log file /var/log/openvswitch/ovs-vswitchd.log
2016-12-06T04:22:23.194Z|00002|ovs_numa|INFO|Discovered 16 CPU cores on NUMA node 0
2016-12-06T04:22:23.194Z|00003|ovs_numa|INFO|Discovered 16 CPU cores on NUMA node 1
2016-12-06T04:22:23.194Z|00004|ovs_numa|INFO|Discovered 2 NUMA nodes and 32 CPU cores
2016-12-06T04:22:23.194Z|00005|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting...
2016-12-06T04:22:23.195Z|00006|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected
2016-12-06T04:22:23.197Z|00007|ofproto_dpif|INFO|system@ovs-system: Datapath supports recirculation
2016-12-06T04:22:23.197Z|00008|ofproto_dpif|INFO|system@ovs-system: MPLS label stack length probed as 1
2016-12-06T04:22:23.197Z|00009|ofproto_dpif|INFO|system@ovs-system: Datapath supports truncate action
2016-12-06T04:22:23.197Z|00010|ofproto_dpif|INFO|system@ovs-system: Datapath supports unique flow ids
2016-12-06T04:22:23.197Z|00011|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_state
2016-12-06T04:22:23.197Z|00012|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_zone
2016-12-06T04:22:23.197Z|00013|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_mark
2016-12-06T04:22:23.197Z|00014|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_label
2016-12-06T04:22:23.197Z|00015|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_state_nat
2016-12-06T04:22:23.339Z|00001|ofproto_dpif_upcall(handler1)|INFO|received packet on unassociated datapath port 0
2016-12-06T04:22:23.339Z|00016|bridge|INFO|bridge br-int: added interface vnet0 on port 5
2016-12-06T04:22:23.339Z|00017|bridge|INFO|bridge br-int: added interface br-int on port 65534
2016-12-06T04:22:23.339Z|00018|bridge|INFO|bridge br-int: using datapath ID 000016d6e0b66442
2016-12-06T04:22:23.339Z|00019|connmgr|INFO|br-int: added service controller "punix:/var/run/openvswitch/br-int.mgmt"
2016-12-06T04:22:23.340Z|00020|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.6.90
2016-12-06T04:22:32.437Z|00021|bridge|INFO|bridge br-int: added interface ovn-c0dc09-0 on port 6
2016-12-06T04:22:32.437Z|00022|bridge|INFO|bridge br-int: added interface ovn-252778-0 on port 7
2016-12-06T04:22:33.342Z|00023|memory|INFO|281400 kB peak resident set size after 10.2 seconds
2016-12-06T04:22:33.342Z|00024|memory|INFO|handlers:23 ofconns:2 ports:4 revalidators:9 rules:79
2016-12-06T04:22:42.440Z|00025|connmgr|INFO|br-int<->unix: 76 flow_mods 10 s ago (75 adds, 1 deletes)

[root@ovirt-node2 openvswitch]# cat ovn-controller.log
2016-12-06T04:22:32.398Z|00001|vlog|INFO|opened log file /var/log/openvswitch/ovn-controller.log
2016-12-06T04:22:32.400Z|00002|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting...
2016-12-06T04:22:32.400Z|00003|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected
2016-12-06T04:22:32.402Z|00004|reconnect|INFO|tcp:172.20.192.77:6642: connecting...
2016-12-06T04:22:32.403Z|00005|reconnect|INFO|tcp:172.20.192.77:6642: connected
2016-12-06T04:22:32.406Z|00006|binding|INFO|Claiming lport 56432d2b-a96d-4ac7-b0e9-3450a006e1d4 for this chassis.
2016-12-06T04:22:32.406Z|00007|binding|INFO|Claiming 00:1a:4a:16:01:64 dynamic
2016-12-06T04:22:32.407Z|00008|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
2016-12-06T04:22:32.407Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
2016-12-06T04:22:32.407Z|00010|pinctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
2016-12-06T04:22:32.407Z|00011|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
2016-12-06T04:22:32.408Z|00012|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
2016-12-06T04:22:32.408Z|00013|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
2016-12-06T04:22:32.440Z|00014|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33)
2016-12-06T04:22:32.441Z|00015|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33)
2016-12-06T04:22:32.441Z|00016|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33)
2016-12-06T04:22:37.408Z|00017|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33)
2016-12-06T04:22:42.408Z|00018|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33)
2016-12-06T04:22:47.409Z|00019|ofctrl|INFO|Dropped 1 log messages in last 5 seconds (most recently, 5 seconds ago) due to excessive rate
2016-12-06T04:22:47.409Z|00020|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33)
2016-12-06T04:22:57.411Z|00021|ofctrl|INFO|Dropped 3 log messages in last 10 seconds (most recently, 5 seconds ago) due to excessive rate
2016-12-06T04:22:57.411Z|00022|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33)
2016-12-06T04:23:12.413Z|00023|ofctrl|INFO|Dropped 4 log messages in last 10 seconds (most recently, 5 seconds ago) due to excessive rate
2016-12-06T04:23:12.413Z|00024|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33)
2016-12-06T04:23:22.415Z|00025|ofctrl|INFO|Dropped 3 log messages in last 10 seconds (most recently, 5 seconds ago) due to excessive rate
2016-12-06T04:23:22.415Z|00026|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33)
2016-12-06T04:23:37.417Z|00027|ofctrl|INFO|Dropped 5 log messages in last 10 seconds (most recently, 5 seconds ago) due to excessive rate
2016-12-06T04:23:37.417Z|00028|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33)
2016-12-06T04:23:47.419Z|00029|ofctrl|INFO|Dropped 3 log messages in last 10 seconds (most recently, 5 seconds ago) due to excessive rate
2016-12-06T04:23:47.419Z|00030|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33)
2016-12-06T04:23:57.421Z|00031|ofctrl|INFO|Dropped 3 log messages in last 10 seconds (most recently, 5 seconds ago) due to excessive rate
2016-12-06T04:23:57.421Z|00032|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33)

[root@ovirt-node2 openvswitch]# brctl show
bridge name     bridge id               STP enabled     interfaces
;vdsmdummy;     8000.000000000000       no
DEV-NOC         8000.0cc47a1ef306       no              bond0
DEV-VM-NET      8000.0cc47a1ef306       no              bond0.700
ovirtmgmt       8000.0cc47a08b3c2       no              enp7s0f0

--
Devin Acosta
Red Hat Certified Architect, LinuxStack
devin@linuxguru.co

On Mon, Dec 5, 2016 at 2:34 PM, Lance Richardson <lrichard@redhat.com> wrote:
From: "Devin Acosta" <devin@pabstatencio.com> To: "Lance Richardson" <lrichard@redhat.com> Cc: "Marcin Mirecki" <mmirecki@redhat.com>, "users" <Users@ovirt.org> Sent: Monday, December 5, 2016 4:17:35 PM Subject: Re: [ovirt-users] oVIRT 4 / OVN / Communication issues of instances between nodes.
Lance,
I found some interesting logs, we have (3) oVIRT nodes.
We are running: CentOS Linux release 7.2.1511 (Core) Linux hostname 3.10.0-327.36.3.el7.x86_64 #1 SMP Mon Oct 24 16:09:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
<snip>
2016-12-05T20:47:56.774Z|00021|ofctrl|INFO|OpenFlow error: OFPT_ERROR (OF1.3) (xid=0x17): OFPBAC_BAD_TYPE
This (generally unintelligible message usually indicates that the kernel openvswitch module doesn't support conntrack.
<snip>
2016-12-05T20:35:04.345Z|00001|vlog|INFO|opened log file /var/log/openvswitch/ovs-vswitchd.log 2016-12-05T20:35:04.347Z|00002|ovs_numa|INFO|Discovered 16 CPU cores on NUMA node 0 2016-12-05T20:35:04.347Z|00003|ovs_numa|INFO|Discovered 16 CPU cores on NUMA node 1 2016-12-05T20:35:04.347Z|00004|ovs_numa|INFO|Discovered 2 NUMA nodes
and 32
CPU cores 2016-12-05T20:35:04.348Z|00005|reconnect|INFO|unix:/ var/run/openvswitch/db.sock: connecting... 2016-12-05T20:35:04.348Z|00006|reconnect|INFO|unix:/ var/run/openvswitch/db.sock: connected 2016-12-05T20:35:04.350Z|00007|ofproto_dpif|INFO|system@ovs-system: Datapath supports recirculation 2016-12-05T20:35:04.350Z|00008|ofproto_dpif|INFO|system@ovs-system: MPLS label stack length probed as 1 2016-12-05T20:35:04.350Z|00009|ofproto_dpif|INFO|system@ovs-system: Datapath does not support truncate action 2016-12-05T20:35:04.350Z|00010|ofproto_dpif|INFO|system@ovs-system: Datapath supports unique flow ids 2016-12-05T20:35:04.350Z|00011|ofproto_dpif|INFO|system@ovs-system: Datapath does not support ct_state 2016-12-05T20:35:04.350Z|00012|ofproto_dpif|INFO|system@ovs-system: Datapath does not support ct_zone 2016-12-05T20:35:04.350Z|00013|ofproto_dpif|INFO|system@ovs-system: Datapath does not support ct_mark 2016-12-05T20:35:04.350Z|00014|ofproto_dpif|INFO|system@ovs-system: Datapath does not support ct_label 2016-12-05T20:35:04.350Z|00015|ofproto_dpif|INFO|system@ovs-system: Datapath does not support ct_state_nat
OK, "Datapath does not support ct_*" confirms that the kernel openvswitch module doesn't support the conntrack features needed by OVN.
Most likely the loaded module is the stock CentOS one, you can build the out-of-tree kernel module RPM from the same source tree where you built the other OVS/OVN RPMs via:
make rpm-fedora-kmod
This should leave an RPM named something like:
openvswitch-kmod-2.6.90-1.el7.centos.x86_64.rpm
Install that and reboot and things should be working better.
Regards,
Lance
Your help is greatly appreciated!
Devin
On Mon, Dec 5, 2016 at 12:31 PM, Lance Richardson <lrichard@redhat.com> wrote:
From: "Devin Acosta" <devin@pabstatencio.com> To: "Marcin Mirecki" <mmirecki@redhat.com> Cc: "users" <Users@ovirt.org> Sent: Monday, December 5, 2016 12:11:46 PM Subject: Re: [ovirt-users] oVIRT 4 / OVN / Communication issues of instances between nodes.
Marcin,
Also I noticed in your original post it mentions:
ip link - the result should include a link called genev_sys_ ...
I noticed that on my hosts I don't see any links with name:
genev_sys_ ??
Could this be a problem?
lo: enp4s0f0: enp4s0f1: enp7s0f0: enp7s0f1: bond0: DEV-NOC: ovirtmgmt: bond0.700@bond0: DEV-VM-NET: bond0.705@bond0: ;vdsmdummy;: vnet0: vnet1: vnet2: vnet3: vnet4: ovs-system: br-int: vnet5: vnet6:
Hi Devin,
What distribution and kernel version are you using?
One thing you could check is whether the vport_geneve kernel module is being loaded, e.g. you should see something like:
$ lsmod | grep vport vport_geneve 12560 1 openvswitch 246755 5 vport_geneve
If vport_geneve is not loaded, you could "sudo modprobe vport_geneve" to make sure it's available and can be loaded.
The first 100 lines or so of ovs-vswitchd.log might have some useful information about where things are going wrong.
It does sound as though there is some issue with geneve tunnels, which would certainly explain issues with inter-node traffic.
Regards,
Lance
--
Devin Acosta Red Hat Certified Architect, LinuxStack 602-354-1220 || devin@linuxguru.co

From: "Devin Acosta" <devin@pabstatencio.com> To: "Lance Richardson" <lrichard@redhat.com> Cc: "Marcin Mirecki" <mmirecki@redhat.com>, "users" <Users@ovirt.org> Sent: Monday, December 5, 2016 11:28:17 PM Subject: Re: [ovirt-users] oVIRT 4 / OVN / Communication issues of instances between nodes.
Lance,
Well I installed the new kernel module and it cleared up a lot of the errors I was seeing in the log, but I notice that I still can't ping instances between hosts. I'm starting to wonder whether I'm missing something fundamental here. I don't see anything in ovs-vswitchd.log that shows a tunnel.
Hi Devin,

OK, some small progress then. I think the best next step would be to look at the current state of your system. Could you send the output of the following commands?

On the nodes running ovn-controller:

    ps -fwwC ovn-controller
    ovs-vsctl show
    ovs-dpctl show -s
    ovs-ofctl -O OpenFlow13 dump-flows br-int

On the node running ovn-northd:

    ovn-sbctl show
    ovn-sbctl dump-flows

Thanks,
Lance
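As a small illustrative sketch, one way to capture those controller-node outputs into separate files (the file naming here is an arbitrary assumption, not something from the thread):

    $ for cmd in "ps -fwwC ovn-controller" \
                 "ovs-vsctl show" \
                 "ovs-dpctl show -s" \
                 "ovs-ofctl -O OpenFlow13 dump-flows br-int"; do
          # one output file per command, prefixed with the short host name
          $cmd > "$(hostname -s)-$(echo "$cmd" | tr ' /' '__').txt" 2>&1
      done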

Lance,

I have attached the output of each into different files. I really appreciate your help very much.

--
Devin Acosta
Red Hat Certified Architect, LinuxStack
devin@linuxguru.co

On Tue, Dec 6, 2016 at 7:36 AM, Lance Richardson <lrichard@redhat.com> wrote:
From: "Devin Acosta" <devin@pabstatencio.com> To: "Lance Richardson" <lrichard@redhat.com> Cc: "Marcin Mirecki" <mmirecki@redhat.com>, "users" <Users@ovirt.org> Sent: Monday, December 5, 2016 11:28:17 PM Subject: Re: [ovirt-users] oVIRT 4 / OVN / Communication issues of instances between nodes.
Lance,
Well I installed the new kernel module and it cleared up a lot of the errors I was seeing in the log, but what I notice is that I still can't ping instances between hosts. I'm starting to wonder am I missing something fundamental here? I don't see anything in the ovs-vswitchd.log to show tunnel?
Hi Devin,
OK, some small progress then. I think the best next step would be to look at the current state of your system. Could you send the output of the following commands?
On the nodes running ovn-controller:
ps -fwwC ovn-controller ovs-vsctl show ovs-dpctl show -s ovs-ofctl -O OpenFlow13 dump-flows br-int
On the node running ovn-northd:
ovn-sbctl show ovn-sbctl dump-flows
Thanks,
Lance

From: "Devin Acosta" <devin@pabstatencio.com> To: "Lance Richardson" <lrichard@redhat.com> Cc: "Marcin Mirecki" <mmirecki@redhat.com>, "users" <Users@ovirt.org> Sent: Tuesday, December 6, 2016 10:49:59 AM Subject: Re: [ovirt-users] oVIRT 4 / OVN / Communication issues of instances between nodes.
Lance,
I have attached the output of each into different files. I really appreciate your help very much.
Based on asking around about the "dropping duplicate flow:", it's a known issue that is harmless (other than the noise). I'll try to find out if someone has a fix in the works.

It seems your node1 has no port bindings... is that expected?
From the counters, it looks like node2 and node3 have attempted to send packets on the geneve tunnels, but neither has received anything.
Could you verify that node2 and node3 have connectivity on the IPs used for the tunnels, e.g. by trying to ping 172.10.10.75 and 172.10.10.73 from node2?

If that works, the issue might be iptables rules dropping geneve packets; the simplest way around that would be to "systemctl stop firewalld" if that's running (OK for a lab environment anyway).

Thanks,
Lance
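A sketch of those checks as run from node2 (the tunnel port name comes from the logs above; "ovirtmgmt" in the tcpdump line is a placeholder for whichever interface carries the tunnel endpoint IPs, and UDP 6081 is the standard Geneve port):

    # Tunnel endpoint reachability
    $ ping -c 3 172.10.10.73
    $ ping -c 3 172.10.10.75

    # Per-port datapath statistics, and the counters for one geneve tunnel port
    $ ovs-dpctl show -s
    $ ovs-vsctl get Interface ovn-c0dc09-0 statistics

    # Watch for incoming Geneve traffic on the underlay NIC
    $ tcpdump -ni ovirtmgmt udp port 6081

    # Lab-only test: rule out the host firewall entirely
    $ systemctl stop firewalld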

Lance,

It appears that firewalld was my issue. Can you just confirm with me which ports should be opened for Geneve and OVN to work properly?

On Tue, Dec 6, 2016 at 8:43 AM, Lance Richardson <lrichard@redhat.com> wrote:
From: "Devin Acosta" <devin@pabstatencio.com> To: "Lance Richardson" <lrichard@redhat.com> Cc: "Marcin Mirecki" <mmirecki@redhat.com>, "users" <Users@ovirt.org> Sent: Tuesday, December 6, 2016 10:49:59 AM Subject: Re: [ovirt-users] oVIRT 4 / OVN / Communication issues of instances between nodes.
Lance,
I have attached the output of each into different files. I really appreciate your help very much.
Based on asking around about the "dropping duplicate flow:", it's a known issue that is harmless (other than the noise). I'll try to find out if someone has a fix in the works.
It seems your node1 has no port bindings... is that expected?
From the counters, it looks like node2 and node3 have attempted to send packets on the geneve tunnels, but neither has received anything.
Could you verify that node2 and node3 have connectivity on the IPs used for the tunnels, e.g. by trying to ping 172.10.10.75 and 172.10.10.73 from node2?
If that works, the issue might be iptables rules dropping geneve packets, the simplest way around that would be to "systemctl stop firewalld" if that's running (ok for a lab environment anyway).
Thanks,
Lance
--
Devin Acosta
Red Hat Certified Architect, LinuxStack
602-354-1220 || devin@linuxguru.co

From: "Devin Acosta" <devin@pabstatencio.com> To: "Lance Richardson" <lrichard@redhat.com> Cc: "Marcin Mirecki" <mmirecki@redhat.com>, "users" <Users@ovirt.org> Sent: Tuesday, December 6, 2016 12:07:31 PM Subject: Re: [ovirt-users] oVIRT 4 / OVN / Communication issues of instances between nodes.
Lance,
It appears that firewalld was my issue. Can you just confirm with me which ports should be opened for Geneve and OVN to work properly?
Hi Devin,

That's good to hear! On the ovn-northd node, you need TCP ports 6641 and 6642 open (as mentioned in Marcin's blog); on the ovn-controller nodes you need to allow packets with destination UDP port 6081 (Geneve tunnels).

Lance
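For completeness, a minimal firewalld sketch of those openings, assuming firewalld is left running and the default zone is used:

    # On the ovn-northd / OVN central node: northbound and southbound DB ports
    $ firewall-cmd --permanent --add-port=6641/tcp
    $ firewall-cmd --permanent --add-port=6642/tcp

    # On each ovn-controller (hypervisor) node: Geneve tunnel traffic
    $ firewall-cmd --permanent --add-port=6081/udp

    # Activate the permanent rules
    $ firewall-cmd --reload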

Lance,

We have a problem with communication between different hosts in OVN. Could you please take a look at the log below? The part with "dropping duplicate flow" sounds worrying.

Thanks,
Marcin

----- Original Message -----
From: "Devin Acosta" <devin@pabstatencio.com> To: "users" <Users@ovirt.org> Sent: Saturday, December 3, 2016 12:24:21 AM Subject: [ovirt-users] oVIRT 4 / OVN / Communication issues of instances between nodes.
Note: When I configured vdsm-tool ovn-config, I passed it the IP address of the OVN-Controller which is using the ovirtmgmt network, which is just one of the NIC's on the nodes.
I am opening up new thread as this I feel differs a bit from my original request. I have OVN which I believe is deployed correctly. I have noticed that if instances get spun up on the same oVIRT node they can all talk without issues to one another, however if one instance gets spun up on another node even if it has the same (OVN network/subnet), it can't ping or reach other instances in the subnet. I noticed that the OVN-Controller of the instance that can't talk is logging:
2016-12-02T22:50:54.907Z|00181|pinctrl|INFO|DHCPOFFER 00:1a:4a:16:01:5c 10.10.10.4 2016-12-02T22:50:54.908Z|00182|pinctrl|INFO|DHCPACK 00:1a:4a:16:01:5c 10.10.10.4 2016-12-02T22:50:55.695Z|00183|ofctrl|INFO|Dropped 7 log messages in last 10 seconds (most recently, 0 seconds ago) due to excessive rate 2016-12-02T22:50:55.695Z|00184|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33) 2016-12-02T22:51:10.705Z|00185|ofctrl|INFO|Dropped 6 log messages in last 15 seconds (most recently, 5 seconds ago) due to excessive rate 2016-12-02T22:51:10.705Z|00186|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33) 2016-12-02T22:51:20.710Z|00187|ofctrl|INFO|Dropped 4 log messages in last 10 seconds (most recently, 5 seconds ago) due to excessive rate 2016-12-02T22:51:20.710Z|00188|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33) 2016-12-02T22:51:35.718Z|00189|ofctrl|INFO|Dropped 5 log messages in last 15 seconds (most recently, 5 seconds ago) due to excessive rate 2016-12-02T22:51:35.718Z|00190|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33) 2016-12-02T22:51:45.724Z|00191|ofctrl|INFO|Dropped 3 log messages in last 10 seconds (most recently, 5 seconds ago) due to excessive rate 2016-12-02T22:51:45.724Z|00192|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33) 2016-12-02T22:51:55.730Z|00193|ofctrl|INFO|Dropped 5 log messages in last 10 seconds (most recently, 0 seconds ago) due to excessive rate 2016-12-02T22:51:55.730Z|00194|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33) 2016-12-02T22:52:10.738Z|00195|ofctrl|INFO|Dropped 5 log messages in last 15 seconds (most recently, 5 seconds ago) due to excessive rate 2016-12-02T22:52:10.739Z|00196|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33) 2016-12-02T22:52:20.744Z|00197|ofctrl|INFO|Dropped 3 log messages in last 10 seconds (most recently, 5 seconds ago) due to excessive rate 2016-12-02T22:52:20.744Z|00198|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33) 2016-12-02T22:52:35.752Z|00199|ofctrl|INFO|Dropped 5 log messages in last 15 seconds (most recently, 5 seconds ago) due to excessive rate 2016-12-02T22:52:35.752Z|00200|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33) 2016-12-02T22:52:45.758Z|00201|ofctrl|INFO|Dropped 4 log messages in last 10 seconds (most recently, 5 seconds ago) due to excessive rate 2016-12-02T22:52:45.758Z|00202|ofctrl|INFO|dropping duplicate flow: table_id=32, priority=150, reg10=0x2/0x2, actions=resubmit(,33)
From the OVN-Controller:
[root@dev001-022-002 ~]# ovn-nbctl show
switch ddb3b92f-b359-4b59-a41a-ebae6df7fe9a (devins-net)
    port 6b289418-8b8e-42b4-8334-c71584afcd3e
        addresses: ["00:1a:4a:16:01:5c dynamic"]
    port 71ef81f1-7c20-4c68-b536-d274703f7541
        addresses: ["00:1a:4a:16:01:61 dynamic"]
    port 91d4f4f5-4b9f-42c0-aa2c-8a101474bb84
        addresses: ["00:1a:4a:16:01:5e dynamic"]
Do I need to do something special in order to allow communication between nodes of instances on same OVN network?
Output of ovs-vsctl show from node3:
61af799c-a621-445e-8183-23dcb38ea3cc
    Bridge br-int
        fail_mode: secure
        Port "ovn-456949-0"
            Interface "ovn-456949-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="172.10.10.74"}
        Port "ovn-c0dc09-0"
            Interface "ovn-c0dc09-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="172.10.10.73"}
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.6.90"
--
Devin Acosta
Red Hat Certified Architect, LinuxStack
602-354-1220 || devin@linuxguru.co