[Users] oVirt 3.4 - MTU Override not honoured by virtual NICs

I have a SAN network defined in oVirt where I have enabled jumbo frames (by setting the MTU to 9000). Still, I now see that VM NICs attached to this network get MTU 1500 and are therefore basically not working. Several VMs I created earlier (still) have jumbo frames on their san-interface, but newly created VMs get 1500. I am currently on 3.4 Beta2 (upgraded from alpha --> beta1 --->) and this behaviour has started somewhere along the way.

The host physical interface:

em2     Link encap:Ethernet  HWaddr D0:67:E5:F9:2E:1C
        inet6 addr: fe80::d267:e5ff:fef9:2e1c/64 Scope:Link
        UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
        RX packets:503209263 errors:0 dropped:0 overruns:0 frame:0
        TX packets:483040537 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:2395173035170 (2.1 TiB)  TX bytes:408352341106 (380.3 GiB)

And on a VM:

vnet25  Link encap:Ethernet  HWaddr FE:1A:4A:2F:D2:A3
        inet6 addr: fe80::fc1a:4aff:fe2f:d2a8/64 Scope:Link
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:167 errors:0 dropped:0 overruns:0 frame:0
        TX packets:137 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:500
        RX bytes:17662 (17.2 KiB)  TX bytes:26684 (26.0 KiB)

If I manually set the MTU, everything works as expected (until you reboot the machine).

Anyone else seen this?

Rgds Jonas
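[Editor's note: the mismatch described above can be checked directly from sysfs on the host. This is only an illustrative sketch; the device names (vnet25 and the bridge backing the SAN network) are taken from this thread and will differ per setup, so the script falls back to "lo" when no arguments are given.]

```shell
#!/bin/sh
# Compare the MTU of a tap device with that of its bridge via sysfs.
# vnet25 / the SAN bridge are examples from this thread; "lo" is a
# safe default so the sketch runs on any Linux machine.
dev="${1:-lo}"     # e.g. vnet25
br="${2:-lo}"      # e.g. the bridge backing the SAN network
dev_mtu=$(cat "/sys/class/net/$dev/mtu")
br_mtu=$(cat "/sys/class/net/$br/mtu")
if [ "$dev_mtu" -eq "$br_mtu" ]; then
    echo "OK: $dev and $br both have mtu $dev_mtu"
else
    echo "MISMATCH: $dev=$dev_mtu $br=$br_mtu"
fi
```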

----- Original Message -----
From: "Jonas Israelsson" <jonas@israelsson.com>
To: users@ovirt.org
Sent: Saturday, March 1, 2014 3:28:15 PM
Subject: [Users] oVirt 3.4 - MTU Override not honoured by virtual NICs
I have a SAN network defined in oVirt where I have enabled jumbo frames (by setting the MTU to 9000). Still, I now see that VM NICs attached to this network get MTU 1500 and are therefore basically not working.
Several VMs I created earlier (still) have jumbo frames on their san-interface, but newly created VMs get 1500. I am currently on 3.4 Beta2 (upgraded from alpha --> beta1 --->) and this behaviour has started somewhere along the way.
The host physical interface
em2     Link encap:Ethernet  HWaddr D0:67:E5:F9:2E:1C
        inet6 addr: fe80::d267:e5ff:fef9:2e1c/64 Scope:Link
        UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
        RX packets:503209263 errors:0 dropped:0 overruns:0 frame:0
        TX packets:483040537 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:2395173035170 (2.1 TiB)  TX bytes:408352341106 (380.3 GiB)
And on a VM
vnet25  Link encap:Ethernet  HWaddr FE:1A:4A:2F:D2:A3
        inet6 addr: fe80::fc1a:4aff:fe2f:d2a8/64 Scope:Link
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:167 errors:0 dropped:0 overruns:0 frame:0
        TX packets:137 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:500
        RX bytes:17662 (17.2 KiB)  TX bytes:26684 (26.0 KiB)
If I manually set the MTU, everything works as expected (until you reboot the machine).
Anyone else seen this ?
This sounds like reasonable behaviour, since there is a clear separation between the host networking configuration and the VMs' networking configuration. What you're attempting to achieve is guest-OS-level network configuration, which isn't controlled by ovirt-engine. The current method to control guest network internals is the cloud-init integration; however, MTU is not part of its API [1]. You can add it to the 3.5 planning thread as a requested item.

[1] http://www.ovirt.org/Features/Cloud-Init_Integration
Rgds Jonas
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

This sounds like reasonable behaviour, since there is a clear separation between the host networking configuration and the VMs' networking configuration. What you're attempting to achieve is guest-OS-level network configuration, which isn't controlled by ovirt-engine.

If I understand this correctly, the VM's NIC is mapped to a virtual NIC in the host with a similar MAC, created when the VM starts.

# VM internal NIC
eth1    Link encap:Ethernet  HWaddr 00:1A:4A:2F:D2:A8
        inet addr:192.168.43.15  Bcast:192.168.43.255  Mask:255.255.255.0
        inet6 addr: fe80::21a:4aff:fe2f:d2a8/64 Scope:Link
        UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
        RX packets:6 errors:0 dropped:0 overruns:0 frame:0
        TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:468 (468.0 b)  TX bytes:578 (578.0 b)

# Host Virtual NIC
vnet26  Link encap:Ethernet  HWaddr FE:1A:4A:2F:D2:A8
        inet6 addr: fe80::fc1a:4aff:fe2f:d2a8/64 Scope:Link
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:8 errors:0 dropped:0 overruns:0 frame:0
        TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:500
        RX bytes:648 (648.0 b)  TX bytes:468 (468.0 b)

As you can see, the MTU differs between the two. If I set the MTU of vnet26 to match the VM's internal NIC (and the host physical NIC, for that matter), everything is back to normal. I'm quite sure that NIC (vnet26) must match what is set on the host physical NIC; I can't see how it would work otherwise.

Rgds Jonas
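[Editor's note: a quick way to spot this kind of mismatch across all host interfaces at once (the physical NIC, the bridge, and the libvirt vnet* taps) is to dump every device's MTU from sysfs. A minimal sketch, assuming a Linux host:]

```shell
#!/bin/sh
# Print the MTU of every network interface on the host; on an oVirt
# host the listing would include em2, the bridge, and the vnet* taps.
for d in /sys/class/net/*; do
    printf '%-12s mtu %s\n' "$(basename "$d")" "$(cat "$d/mtu")"
done
```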

----- Original Message -----
This sounds like reasonable behaviour, since there is a clear separation between the host networking configuration and the VMs' networking configuration. What you're attempting to achieve is guest-OS-level network configuration, which isn't controlled by ovirt-engine.

If I understand this correctly, the VM's NIC is mapped to a virtual NIC in the host with a similar MAC, created when the VM starts.

# VM internal NIC
eth1    Link encap:Ethernet  HWaddr 00:1A:4A:2F:D2:A8
        inet addr:192.168.43.15  Bcast:192.168.43.255  Mask:255.255.255.0
        inet6 addr: fe80::21a:4aff:fe2f:d2a8/64 Scope:Link
        UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
        RX packets:6 errors:0 dropped:0 overruns:0 frame:0
        TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:468 (468.0 b)  TX bytes:578 (578.0 b)

# Host Virtual NIC
vnet26  Link encap:Ethernet  HWaddr FE:1A:4A:2F:D2:A8
        inet6 addr: fe80::fc1a:4aff:fe2f:d2a8/64 Scope:Link
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:8 errors:0 dropped:0 overruns:0 frame:0
        TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:500
        RX bytes:648 (648.0 b)  TX bytes:468 (468.0 b)
Tap should definitely have the same MTU as the bridge it's started on.
As you can see, the MTU differs between the two. If I set the MTU of vnet26 to match the VM's internal NIC (and the host physical NIC, for that matter), everything is back to normal.
I'm quite sure that NIC (vnet26) must match what is set on the host physical NIC; I can't see how it would work otherwise.
You are correct. Can you please open a bug for it? Also, please specify the libvirt/vdsm/kernel versions that you're using (maybe it was already fixed in an updated version?).
Rgds Jonas
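[Editor's note: the version information requested for the bug report can be collected on the host in one go. A sketch; the rpm package names below are the usual ones on an oVirt/EL host and are assumptions:]

```shell
#!/bin/sh
# Collect the kernel/libvirt/vdsm versions requested for the bug report.
echo "kernel: $(uname -r)"
# Package names are the typical ones on an oVirt host (assumption);
# the fallback keeps the sketch harmless on non-rpm systems.
rpm -q vdsm libvirt 2>/dev/null || echo "rpm query not available here"
```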

On 02/03/14 15:49, Mike Kolesnik wrote:
You are correct, can you please open a bug for it? Also please specify libvirt/vdsm/kernel versions that you're using (maybe it was already fixed in updated version?)

Thanks, BZ 1071660
participants (3):
- Jonas Israelsson
- Mike Kolesnik
- Moti Asayag