Hello, I'm testing a virtual RHCS cluster based on 4 nodes that are CentOS 7.4 VMs.
So the stack is based on Corosync/Pacemaker.
I have two oVirt hosts, so my plan is to put two VMs on the first host and two VMs on the second host, to simulate a two-site configuration and a site loss, before moving to the physical production setup.
Incidentally, the two hypervisor hosts are indeed located in different physical datacenters.
So far so good.
I decided to use OVN for the dedicated intracluster network used by Corosync (each VM has two vNICs, one on the production LAN and one on the intracluster network).
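For reference, Corosync is bound to the intracluster addresses; the relevant part of corosync.conf looks roughly like this (cluster name and node names are just placeholders resolving to the intracluster IPs):

    totem {
        version: 2
        cluster_name: testcluster
        transport: udpu
    }
    nodelist {
        node {
            ring0_addr: node1-ic   # hostname resolving to the intracluster IP
            nodeid: 1
        }
        node {
            ring0_addr: node2-ic
            nodeid: 2
        }
        # ...and similarly for node3-ic and node4-ic
    }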
I noticed that the cluster formed and worked (even with only two nodes) only when the VMs run on the same host, while they seem unable to communicate properly when placed on different hosts: ping is OK, and an attempt at an SSH session between them on the intracluster LAN also seems fine, but the cluster doesn't come up.
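I suppose the underlying issue can also be seen with a don't-fragment ping at full frame size between VMs on different hosts, something like this (the address is just an example from my intracluster LAN):

    # 1472 bytes of payload + 28 bytes of ICMP/IP headers = a 1500-byte packet
    ping -M do -s 1472 -c 3 192.168.100.13
    # across hosts this fails, since the Geneve encapsulation used by OVN
    # leaves less than 1500 bytes available for the inner packet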
So, after digging through past mailing list threads, I found this recent one:
where the solution was to set the MTU of the interfaces on the OVN network to 1400.
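In my case I applied it on the CentOS 7 guests along these lines (eth1 as the name of the intracluster vNIC / connection is just an example):

    # persistently, in /etc/sysconfig/network-scripts/ifcfg-eth1:
    MTU=1400

    # or equivalently via NetworkManager:
    nmcli connection modify eth1 802-3-ethernet.mtu 1400
    nmcli connection up eth1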
This seems to resolve the problem in my scenario too:
- I live migrated two VMs to the second host and the RHCS clusterware didn't complain
- I relocated a resource group composed of several LV/FS resources, a VIP and an application from the VM running on host1 to the VM running on host2 without problems (roughly as in the commands below).
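Concretely, the relocation test was essentially this (group and node names are placeholders):

    # move the group to a node running on the other host, then verify
    pcs resource move grp_app node3-ic
    pcs status
    # clear the location constraint created by the move afterwards
    pcs resource clear grp_app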
So the question is: can anyone confirm what the guidelines are for setting up vNICs on OVN? Is there already a document about MTU settings for OVN-based vNICs? Are there other particular settings or limitations to be aware of when configuring a vNIC on OVN?
Thanks,
Gianluca