<div dir="ltr"><div><div>Your described setup seems correct.<br><br>Please attempt to isolate the issue by trying to pass traffic between the hosts, taking the VM/s out of the equation.<br></div>You may also consider connecting the hosts directly to each other, to make sure this is not a switch problem.<br><br></div><div>Thanks,<br></div><div>Edy.<br></div><div><div><div><br><br></div></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Feb 6, 2017 at 1:50 AM, Gianluca Cecchi <span dir="ltr"><<a href="mailto:gianluca.cecchi@gmail.com" target="_blank">gianluca.cecchi@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div><div><div><div><div><div><div><div><div><div><div><div>Hello,<br></div>I'm testing an Oracle RAC with 2 Oracle Linux VMs inside a 4.0.6 environment.<br></div><div>They run on two different hosts<br></div>I would like to configure RAC intracluster communication with jumbo frames.<br></div><div>At VM level network adapter is eth1 (mapped to a vlan 95 at oVirt hosts side)<br></div><div>At oVirt side I configured a vm enabled vlan with mtu=9000<br></div>I verified that at hosts side I have<br><br>vlan95: flags=4163<UP,BROADCAST,<wbr>RUNNING,MULTICAST> mtu 9000<br> ether 00:1c:c4:ab:be:ba txqueuelen 1000 (Ethernet)<br> RX packets 61706 bytes 3631426 (3.4 MiB)<br> RX errors 0 dropped 0 overruns 0 frame 0<br> TX packets 3 bytes 258 (258.0 B)<br> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0<br><br></div>And able to do a <br></div>ping -M do -s 8972 ip<br></div>from each host to the other one<br></div>In VMs I configure the same MTU=9000 in ifcfg-eth1<br><br></div>But actually inside VMs it works erratically: the same ping test is ok between the VMs but Oracle checks sometimes work and sometimes give error on communication.<br></div>At initial cluster config, the second node fails to start the cluster.<br></div>I tried 5-6 times and also tried then to set mtu=8000 inside the VMs, supposing some sort of inner overhead to consider (such as 2 times 28 bytes) but nothing.<br></div>As soon as I set MTU=1500 at VM side, the cluster is able to form without any problem.<br></div>I can survive without jumbo frames in this particular case, because this is only a test, but the question remains about eventual best practices to put in place if I want to use jumbo frames.<br><br></div><div>One thing I see is that at VM side I see many drops when interface mtu was 9000, such as<br><br></div><div>eth1 Link encap:Ethernet HWaddr 00:1A:4A:17:01:57 <br> inet addr:192.168.10.32 Bcast:192.168.10.255 Mask:255.255.255.0<br> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1<br> RX packets:93046 errors:0 dropped:54964 overruns:0 frame:0<br> TX packets:26258 errors:0 dropped:0 overruns:0 carrier:0<br> collisions:0 txqueuelen:1000 <br> RX bytes:25726242 (24.5 MiB) TX bytes:33573207 (32.0 MiB)<br><br></div><div>at host side I see drops at bond0 level only:<br><br>[root@ovmsrv05 ~]# brctl show<br>bridge name bridge id STP enabled interfaces<br>;vdsmdummy; 8000.000000000000 no <br>vlan100 8000.001cc446ef73 no bond1.100<br>vlan65 8000.001cc446ef73 no bond1.65<br> vnet0<br> vnet1<br>vlan95 8000.001cc4abbeba no bond0.95<br> vnet2<br><br>bond0: flags=5187<UP,BROADCAST,<wbr>RUNNING,MASTER,MULTICAST> mtu 9000<br> ether 00:1c:c4:ab:be:ba txqueuelen 1000 (Ethernet)<br> RX packets 2855175 bytes <a href="tel:(312)%20686-8334" value="+13126868334" target="_blank">3126868334</a> 
But inside the VMs it works erratically: the same ping test is OK between the VMs, but the Oracle checks sometimes pass and sometimes report communication errors.
During the initial cluster configuration, the second node fails to start the cluster.
I tried 5-6 times and then also tried setting MTU=8000 inside the VMs, supposing some sort of inner overhead to account for (such as 2 times 28 bytes), but nothing changed.
As soon as I set MTU=1500 on the VM side, the cluster forms without any problem.
I can live without jumbo frames in this particular case, because this is only a test, but the question remains about which best practices to put in place if I do want to use jumbo frames.

One thing I notice is that on the VM side there were many drops while the interface MTU was 9000, such as:

eth1      Link encap:Ethernet  HWaddr 00:1A:4A:17:01:57
          inet addr:192.168.10.32  Bcast:192.168.10.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:93046 errors:0 dropped:54964 overruns:0 frame:0
          TX packets:26258 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:25726242 (24.5 MiB)  TX bytes:33573207 (32.0 MiB)

On the host side I see drops at the bond0 level only:

[root@ovmsrv05 ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
;vdsmdummy;     8000.000000000000       no
vlan100         8000.001cc446ef73       no              bond1.100
vlan65          8000.001cc446ef73       no              bond1.65
                                                        vnet0
                                                        vnet1
vlan95          8000.001cc4abbeba       no              bond0.95
                                                        vnet2

bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 9000
        ether 00:1c:c4:ab:be:ba  txqueuelen 1000  (Ethernet)
        RX packets 2855175  bytes 3126868334 (2.9 GiB)
        RX errors 0  dropped 11686  overruns 0  frame 0
        TX packets 1012849  bytes 478702140 (456.5 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

bond0.95: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
        ether 00:1c:c4:ab:be:ba  txqueuelen 1000  (Ethernet)
        RX packets 100272  bytes 27125992 (25.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 42355  bytes 40833904 (38.9 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vlan95: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
        ether 00:1c:c4:ab:be:ba  txqueuelen 1000  (Ethernet)
        RX packets 62576  bytes 3719175 (3.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3  bytes 258 (258.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vnet2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
        inet6 fe80::fc1a:4aff:fe17:157  prefixlen 64  scopeid 0x20<link>
        ether fe:1a:4a:17:01:57  txqueuelen 1000  (Ethernet)
        RX packets 21014  bytes 24139492 (23.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 85777  bytes 21089777 (20.1 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

[root@ovmsrv05 ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: enp3s0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: enp3s0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:1c:c4:ab:be:ba
Slave queue ID: 0

Slave Interface: enp5s0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:1c:c4:ab:be:bc
Slave queue ID: 0
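To narrow down whether those drops happen on the physical NICs or only at the bond itself, statistics along these lines can be checked (a sketch; the peer address below is a placeholder for the other node's interconnect IP):

# on the host: per-slave NIC counters and extended bond statistics
ethtool -S enp3s0 | grep -i drop
ethtool -S enp5s0 | grep -i drop
ip -s -s link show bond0

# inside the VM: repeat the do-not-fragment ping towards the other RAC node
ping -M do -s 8972 -c 100 <other-node-interconnect-ip>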
Any hint?
Thanks in advance,
Gianluca