
One way is to set the interfaces on the VMs to use DHCP. The OVN DHCP is set to hand out the maximum allowed MTU value (1442 if the host NIC MTU is 1500). Another option would be to increase the MTU of all network devices outside the VM so that it is 58 bytes larger than the VM MTU. The difference is the tunneling overhead added to each packet.
On Tue, May 8, 2018 at 7:11 PM, Samuli Heinonen <samppah@neutraali.net> wrote:
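(Referring to the two approaches described above, a rough sketch of the corresponding commands; the DHCP_Options UUID is a placeholder, the 1558/1442 values assume a 1500-byte host NIC MTU, and in oVirt the OVN subnets are normally managed by ovirt-provider-ovn, so manual edits may be overwritten:)

    # Option 1: have OVN's DHCP advertise a smaller MTU to the guests
    ovn-nbctl list DHCP_Options                               # find the row for the vm-public subnet
    ovn-nbctl set DHCP_Options <subnet-uuid> options:mtu=1442

    # Option 2: raise the MTU of every device carrying the Geneve tunnel by 58 bytes,
    # e.g. on host o2 (runtime only; it would also need to be made persistent)
    ip link set dev enp0s31f6 mtu 1558
    ip link set dev ovirtmgmt mtu 1558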
Thanks Marcin! I set MTU to 1400 and connections seem to work. I haven't experienced any disconnects so far.
Is there any other way to set the MTU rather than setting it per VM, i.e. setting it on the oVirt/OVN side?
-samuli
Marcin Mirecki wrote:
Could you try the following: on the VMs, lower the MTU of the vNICs connected to the OVN network, and try again?
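(Roughly like this inside the guests, assuming eth2/eth1 are the OVN-connected vNICs as in the 'ip addr' output below; 1400 is just a conservative value under the 1442 limit:)

    # on testi2, where the OVN vNIC is eth2
    ip link set dev eth2 mtu 1400
    # on testi6, where the OVN vNIC is eth1
    ip link set dev eth1 mtu 1400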
On Tue, May 8, 2018 at 11:40 AM, Samuli Heinonen<samppah@neutraali.net> wrote:
Hi Marcin,
Here is ip addr output from virtual machines:
[root@testi2 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:1a:4a:16:01:05 brd ff:ff:ff:ff:ff:ff
    inet 10.0.1.25/24 brd 10.0.1.255 scope global dynamic eth0
       valid_lft 86331sec preferred_lft 86331sec
    inet6 fe80::21a:4aff:fe16:105/64 scope link
       valid_lft forever preferred_lft forever
3: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:1a:4a:16:01:03 brd ff:ff:ff:ff:ff:ff
    inet 10.0.200.10/24 brd 10.0.200.255 scope global dynamic eth2
       valid_lft 86334sec preferred_lft 86334sec
    inet6 fe80::21a:4aff:fe16:103/64 scope link
       valid_lft forever preferred_lft forever
eth0 connected to network ovirtmgmt
eth2 connected to OVN network vm-public
[root@testi6 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:1a:4a:16:01:0b brd ff:ff:ff:ff:ff:ff
    inet 10.0.1.27/24 brd 10.0.1.255 scope global dynamic eth0
       valid_lft 86187sec preferred_lft 86187sec
    inet6 fe80::21a:4aff:fe16:10b/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:1a:4a:16:01:0c brd ff:ff:ff:ff:ff:ff
    inet 10.0.200.11/24 brd 10.0.200.255 scope global dynamic eth1
       valid_lft 86301sec preferred_lft 86301sec
    inet6 fe80::21a:4aff:fe16:10c/64 scope link
       valid_lft forever preferred_lft forever
eth0 connected to network ovirtmgmt
eth1 connected to OVN network vm-public
Best regards, Samuli
Marcin Mirecki wrote on 08.05.2018 10:14:
Hi Samuli,
Your configuration looks correct. Can you also send me the result of 'ip addr' on your VMs?
Thanks, Marcin
On Mon, May 7, 2018 at 7:44 PM, Samuli Heinonen <samppah@neutraali.net> wrote:
Hi Marcin,
Thank you for your response.
I used engine-setup to do the configuration. The only exception is that I had to run "vdsm-tool ovn-config engine-ip local-ip" (i.e. vdsm-tool ovn-config 10.0.1.101 10.0.1.21) on the hypervisors.
Here is the output of requested commands:
[root@oe ~]# ovn-sbctl show
Chassis "049183d5-61b6-4b9c-bae3-c7b10d30f8cb"
    hostname: "o2.hirundinidae.local"
    Encap geneve
        ip: "10.0.1.18"
        options: {csum="true"}
    Port_Binding "87c5e44a-7c8b-41b2-89a6-fa52f27643ed"
Chassis "972f1b7b-10de-4e4f-a5f9-f080890f087d"
    hostname: "o3.hirundinidae.local"
    Encap geneve
        ip: "10.0.1.21"
        options: {csum="true"}
    Port_Binding "ccea5185-3efa-4d9c-9475-9e46009fea4f"
    Port_Binding "e868219c-f16c-45c6-b7b1-72d044fee602"
[root@oe ~]# ovn-nbctl show
switch 7d264a6c-ea48-4a6d-9663-5244102dc9bb (vm-private)
    port 4ec3ecf6-d04a-406c-8354-c5e195ffde05
        addresses: ["00:1a:4a:16:01:06 dynamic"]
switch 40aedb7d-b1c3-400e-9ddb-16bee3bb312a (vm-public)
    port 87c5e44a-7c8b-41b2-89a6-fa52f27643ed
        addresses: ["00:1a:4a:16:01:03"]
    port ccea5185-3efa-4d9c-9475-9e46009fea4f
        addresses: ["00:1a:4a:16:01:0c"]
    port e868219c-f16c-45c6-b7b1-72d044fee602
        addresses: ["00:1a:4a:16:01:0a"]
[root@o2 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UP qlen 1000
    link/ether 78:f2:9e:90:bc:64 brd ff:ff:ff:ff:ff:ff
3: enp0s20f0u5c2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master public state UNKNOWN qlen 1000
    link/ether 50:3e:aa:4c:9b:01 brd ff:ff:ff:ff:ff:ff
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 82:49:e1:15:af:56 brd ff:ff:ff:ff:ff:ff
5: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether a2:bb:78:7e:35:4b brd ff:ff:ff:ff:ff:ff
21: public: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 50:3e:aa:4c:9b:01 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::523e:aaff:fe4c:9b01/64 scope link
       valid_lft forever preferred_lft forever
22: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 78:f2:9e:90:bc:64 brd ff:ff:ff:ff:ff:ff
    inet 10.0.1.18/24 brd 10.0.1.255 scope global ovirtmgmt
       valid_lft forever preferred_lft forever
    inet6 fe80::7af2:9eff:fe90:bc64/64 scope link
       valid_lft forever preferred_lft forever
23: genev_sys_6081: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN qlen 1000
    link/ether 02:c0:7a:e3:4e:76 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::c0:7aff:fee3:4e76/64 scope link
       valid_lft forever preferred_lft forever
24: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether a2:2f:f2:58:88:da brd ff:ff:ff:ff:ff:ff
26: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UNKNOWN qlen 1000
    link/ether fe:1a:4a:16:01:03 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc1a:4aff:fe16:103/64 scope link
       valid_lft forever preferred_lft forever
29: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UNKNOWN qlen 1000
    link/ether fe:1a:4a:16:01:05 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc1a:4aff:fe16:105/64 scope link
       valid_lft forever preferred_lft forever
[root@o2 ~]# ovs-vsctl show
6be6d37c-74cf-485e-9957-f8eb4bddb2ca
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "ovn-972f1b-0"
            Interface "ovn-972f1b-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.0.1.21"}
        Port "vnet0"
            Interface "vnet0"
    ovs_version: "2.9.0"
[root@o3 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UP qlen 1000
    link/ether 78:f2:9e:90:bc:50 brd ff:ff:ff:ff:ff:ff
3: enp0s20f0u5c2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master public state UNKNOWN qlen 1000
    link/ether 50:3e:aa:4c:9c:03 brd ff:ff:ff:ff:ff:ff
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 7e:43:c1:b0:48:73 brd ff:ff:ff:ff:ff:ff
5: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 3a:fe:68:34:31:4c brd ff:ff:ff:ff:ff:ff
21: public: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 50:3e:aa:4c:9c:03 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::523e:aaff:fe4c:9c03/64 scope link
       valid_lft forever preferred_lft forever
22: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 78:f2:9e:90:bc:50 brd ff:ff:ff:ff:ff:ff
    inet 10.0.1.21/24 brd 10.0.1.255 scope global ovirtmgmt
       valid_lft forever preferred_lft forever
    inet6 fe80::7af2:9eff:fe90:bc50/64 scope link
       valid_lft forever preferred_lft forever
24: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 02:92:3f:89:f2:c7 brd ff:ff:ff:ff:ff:ff
25: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UNKNOWN qlen 1000
    link/ether fe:16:3e:0b:b1:2d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc16:3eff:fe0b:b12d/64 scope link
       valid_lft forever preferred_lft forever
27: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UNKNOWN qlen 1000
    link/ether fe:1a:4a:16:01:0b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc1a:4aff:fe16:10b/64 scope link
       valid_lft forever preferred_lft forever
29: vnet4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UNKNOWN qlen 1000
    link/ether fe:1a:4a:16:01:0c brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc1a:4aff:fe16:10c/64 scope link
       valid_lft forever preferred_lft forever
31: vnet6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UNKNOWN qlen 1000
    link/ether fe:1a:4a:16:01:07 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc1a:4aff:fe16:107/64 scope link
       valid_lft forever preferred_lft forever
32: vnet7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master public state UNKNOWN qlen 1000
    link/ether fe:1a:4a:16:01:09 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc1a:4aff:fe16:109/64 scope link
       valid_lft forever preferred_lft forever
33: vnet8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UNKNOWN qlen 1000
    link/ether fe:1a:4a:16:01:0a brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc1a:4aff:fe16:10a/64 scope link
       valid_lft forever preferred_lft forever
34: genev_sys_6081: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN qlen 1000
    link/ether 46:88:1c:22:6f:c3 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::4488:1cff:fe22:6fc3/64 scope link
       valid_lft forever preferred_lft forever
[root@o3 ~]# ovs-vsctl show
8c2c19fc-d9e4-423d-afcb-f5ecff602ca7
    Bridge br-int
        fail_mode: secure
        Port "vnet4"
            Interface "vnet4"
        Port "ovn-049183-0"
            Interface "ovn-049183-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.0.1.18"}
        Port "vnet8"
            Interface "vnet8"
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.9.0"
Best regards, Samuli
Marcin Mirecki wrote:
Hi Samuli,
Let's first make sure the configuration is correct. How did you configure the env? Did you use the automatic engine-setup configuration?
Can you please send me the output of the following:
on engine:
ovn-sbctl show
ovn-nbctl show
on hosts:
ip addr
ovs-vsctl show
The 'vdsm-tool ovn-config' command configures the OVN controller to use the first IP as the OVN central, and the local tunnel to use the second one.
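So the general form is the following (the example IPs are the ones used in this thread):

    vdsm-tool ovn-config <ovn-central-ip> <local-tunnel-ip>
    # e.g. on hypervisor o3:
    vdsm-tool ovn-config 10.0.1.101 10.0.1.21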
Regards, Marcin
On Sun, May 6, 2018 at 10:42 AM, Samuli Heinonen<samppah@neutraali.net> wrote:
Hi all,
I'm building a home lab using oVirt+GlusterFS in a hyperconverged(ish) setup.
My setup consists of two nodes, each with an ASRock H110M-STX motherboard, an Intel Pentium G4560 3.5 GHz CPU and 16 GB RAM. The motherboard has an integrated Intel Gigabit I219V NIC. At the moment I'm using a Raspberry Pi as the Gluster arbiter node. The nodes are connected to a basic unmanaged "desktop switch".
The hardware is nowhere near perfect, but it gets the job done and is enough for playing around. However, I'm having problems getting OVN to work properly and I have no idea where to look next.
oVirt is set up like this:
oVirt engine host oe / 10.0.1.101
oVirt hypervisor host o2 / 10.0.1.18
oVirt hypervisor host o3 / 10.0.1.21
OVN network 10.0.200.0/24
When I spin up VMs on o2 and o3 with IP addresses in the network 10.0.1.0/24, everything works fine. The VMs can communicate with each other without any problems.
Problems show up when I try to use an OVN-based network between virtual machines. If the virtual machines are on the same hypervisor, everything seems to work fine. But if I have one virtual machine on hypervisor o2 and another one on hypervisor o3, TCP connections don't work very well. UDP seems to be fine and it's possible to ping hosts, do DNS & NTP queries and so on.
The problem with TCP is that, for example, when opening an SSH connection to another host, at some point the connection just hangs, and most of the time it's not even possible to log in before the connection hangs. If I look at tcpdump at that point, it looks like the packets never reach the destination. Also, if I have multiple connections, all of them hang at the same time.
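(One quick way to see whether only packets above a certain size are being lost is to ping across the OVN network with the don't-fragment bit set; a sketch using the addresses from this thread, where the ICMP payload plus 28 header bytes gives the on-wire size:)

    # from testi2 (on o2) to testi6 (on o3) over the OVN network
    ping -M do -s 1414 10.0.200.11   # 1442 bytes on the wire, should get through
    ping -M do -s 1472 10.0.200.11   # 1500 bytes on the wire, expected to be lost if tunnel overhead is the issue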
I have tried switching off tx checksum and other similar settings, but it didn't make any difference.
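(For reference, the kind of commands used for that were roughly the following; the device name is just an example:)

    ethtool -K eth2 tx off rx off tso off gso off   # disable checksum/segmentation offloads
    ethtool -k eth2                                 # show the resulting offload settings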
I suspect that the hardware is not good enough. Before investing in new hardware I'd like to get some confirmation that everything is set up correctly.
When setting up oVirt/OVN I had to run the following undocumented command to get it working at all: vdsm-tool ovn-config 10.0.1.101 10.0.1.21 (oVirt engine IP, hypervisor IP). This in particular makes me think that I have missed some crucial part of the configuration.
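(For what it's worth, what that command configured can be inspected on each hypervisor; a sketch, assuming it stores the usual ovn-controller settings in the Open vSwitch external_ids:)

    ovs-vsctl get Open_vSwitch . external_ids:ovn-remote
    ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-ip
    ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-type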
On the oVirt engine, /var/log/openvswitch/ovsdb-server-nb.log contains these error messages:
2018-05-06T08:30:05.418Z|00913|stream_ssl|WARN|SSL_read: unexpected SSL connection close
2018-05-06T08:30:05.418Z|00914|jsonrpc|WARN|ssl:127.0.0.1:53152: receive error: Protocol error
2018-05-06T08:30:05.419Z|00915|reconnect|WARN|ssl:127.0.0.1:53152: connection dropped (Protocol error)
To be honest, I'm not sure what is causing those error messages or whether they are related. I found some bug reports stating that they are not critical.
Any ideas what to do next or should I just get better hardware? :)
Best regards, Samuli Heinonen