Clarification on OVN and the MTU of vNICs based on it

Hello,
I'm testing a virtual RHCS cluster based on 4 nodes that are CentOS 7.4 VMs, so the stack is Corosync/Pacemaker. I have two oVirt hosts, and my plan is to put two VMs on the first host and two VMs on the second host, to simulate a two-site configuration and a site loss before moving to the physical production configuration. Incidentally, the two hypervisor hosts are indeed placed in different physical datacenters. So far so good.

I decided to use OVN for the dedicated intracluster network configured for Corosync (each VM has two vNICs, one on the production LAN and one for the intracluster network). I found that the cluster worked and formed (even with only two nodes) only if the VMs ran on the same host; they seem unable to communicate when on different hosts. Ping works, and an SSH session between them on the intracluster LAN can be opened, but the cluster doesn't come up.

After digging through past mailing list messages I found this recent thread:
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/RMS7XFOZ67O3ERJ...
where the solution was to set an MTU of 1400 on the interfaces of the OVN network. It seems this resolves the problem in my scenario too:
- I live migrated two VMs to the second host and the RHCS clusterware didn't complain.
- I relocated a resource group composed of several LVs/filesystems, a VIP and an application from a VM running on host1 to a VM running on host2 without problems.

So the questions are:
- Can anyone confirm the guidelines for configuring vNICs on OVN?
- Is there already a document in place about MTU settings for OVN-based vNICs?
- Are there other particular settings or limitations if I want to configure a vNIC on OVN?

Thanks,
Gianluca
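The symptom described above (ping works, but larger traffic such as cluster membership messages fails) is typical of a path MTU problem. A minimal sketch of how to confirm it and apply the 1400-byte workaround inside a guest follows; the interface name eth1 and the peer address 192.168.100.12 are illustrative assumptions, not values from this thread:

```shell
# ICMP payload that just fits a given MTU = MTU - 20 (IP hdr) - 8 (ICMP hdr).
MTU=1400
PAYLOAD=$((MTU - 28))
echo "Testing payload of ${PAYLOAD} bytes"

# -M do sets the Don't Fragment bit; a payload of MTU-28 should succeed,
# while one sized for MTU 1500 (1472) should fail over the OVN tunnel.
# (Commented out: requires a live peer on the intracluster network.)
# ping -c 3 -M do -s "${PAYLOAD}" 192.168.100.12

# Apply the workaround MTU to the OVN-backed vNIC (interface name assumed):
# ip link set dev eth1 mtu "${MTU}"
# To make it persistent with NetworkManager:
# nmcli connection modify eth1 802-3-ethernet.mtu "${MTU}"
```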

On Sat, 7 Jul 2018 16:28:49 +0200 Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
> Hello, I'm testing a virtual rhcs cluster based on 4 nodes that are CentOS 7.4 VMs. So the stack is based on Corosync/Pacemaker I have two oVirt hosts and so my plan is to put two VMs on first host and two VMs on the second host, to simulate a two sites config and site loss, before going to physical production config. Incidentally the two hypervisor hosts are indeed placed into different physical datacenters. So far so good. I decided to use OVN for the intracluster dedicated network configured for corosync (each VM has two vnics, one on production lan and one for intracluster). I detected that the cluster worked and formed (also only two nodes) only if the VMs run on the same host, while it seems they are not able to communicate when on different hosts. Ping is ok and an attempt of ssh session between them on intracluster lan, but cluster doesn't come up So after digging in past mailing list mails I found this recent one: https://lists.ovirt.org/archives/list/users@ovirt.org/thread/RMS7XFOZ67O3ERJ...
> where the solution was to set 1400 for the MTU of the interfaces on OVN network. It seems it resolves the problem also in my scenario: - I live migrated two VMs on the second host and rhcs clusterware didn't complain - I relocated a resource group composed by several LV/FS, VIP and application from VM running on host1 to VM running on host2 without problems.
There will be a new feature [1][2] that propagates the MTU of the logical network into the guest. In oVirt 4.2.5, a logical network MTU <= 1500 will be propagated for clusters with switch type OVS and with switch type Linux bridge, while an MTU > 1500 will be propagated only for clusters with switch type Linux bridge, provided the requirements [3] are fulfilled. OVS clusters will work with MTU > 1500 in oVirt 4.3 at the latest. As part of this feature, a new default config setting, "MTU for tunneled networks", is introduced, which will initially be set to 1442.
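The 1442-byte default follows from the per-packet overhead that GENEVE tunneling adds on a physical network with a 1500-byte MTU. A rough sketch of the arithmetic (the 8-byte option field reflects OVN's metadata TLV; exact sizes can vary with the options in use):

```shell
PHYS_MTU=1500
OUTER_IP=20      # outer IPv4 header
UDP=8            # UDP header carrying GENEVE
GENEVE_BASE=8    # fixed GENEVE header
GENEVE_OPTS=8    # OVN metadata option (assumed size)
INNER_ETH=14     # inner Ethernet header carried in the tunnel
OVERHEAD=$((OUTER_IP + UDP + GENEVE_BASE + GENEVE_OPTS + INNER_ETH))
echo "Guest MTU: $((PHYS_MTU - OVERHEAD))"   # prints "Guest MTU: 1442"
```

This also explains why the 1400-byte workaround from the referenced thread works: anything at or below 1442 leaves room for the tunnel headers.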
> So the question is: can anyone confirm what are guidelines for settings vnics on OVN?
In the context of oVirt, I am only aware of [1] and [4]. Starting from oVirt 4.1 you can activate OVN's internal DHCP server by creating a subnet for the network [4]. The default configuration offers an MTU of 1442 to the guest, which is optimal for GENEVE-tunneled networks over physical networks with an MTU of 1500.
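In oVirt the subnet is normally created through the ovirt-provider-ovn API, but the underlying effect can be sketched with ovn-nbctl directly. All addresses, the MAC, the MTU option value, and the port name below are illustrative assumptions, not values from this thread:

```shell
# Create a DHCP options row for the intracluster subnet (example values):
UUID=$(ovn-nbctl create DHCP_Options cidr=192.168.100.0/24 \
    options='"server_id"="192.168.100.1" "server_mac"="00:1a:4a:00:00:01" "lease_time"="3600" "mtu"="1442"')

# Attach it to a logical switch port on that network
# (replace vm1-intracluster-port with the real port name):
ovn-nbctl lsp-set-dhcpv4-options vm1-intracluster-port "$UUID"
```

Guests that take a DHCP lease on such a port then receive the MTU via DHCP option 26, which avoids configuring it manually in every VM.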
> Is there already a document in place about MTU settings for OVN based vnics?
There are some documents about MTU in OpenStack referenced in [1].
> Other particular settings or limitations if I want to configure a vnic on OVN?
libvirt's network filters are not applied to OVN networks, so you should disable network filtering in oVirt's vNIC profile. This is tracked in [5].

[1] https://ovirt.org/develop/release-management/features/network/managed_mtu_fo...
[2] https://github.com/oVirt/ovirt-site/pull/1667
[3] https://ovirt.org/develop/release-management/features/network/managed_mtu_fo...
[4] https://github.com/oVirt/ovirt-provider-ovn/#section-dhcp
[5] https://bugzilla.redhat.com/show_bug.cgi?id=1502754
Thanks,
Gianluca
participants (2)
- Dominik Holler
- Gianluca Cecchi