This sounds like reasonable behaviour, since there is a clear separation
between the host networking configuration and the VMs' networking configuration.
What you're attempting to achieve is guest-OS-level network configuration, which
isn't controlled by the ovirt-engine.
If I understand this correctly, the VM's NIC is mapped to a virtual NIC
on the host with a similar MAC, created when the VM starts (see the note
after the output below for a way to confirm the mapping).
# VM internal NIC
eth1      Link encap:Ethernet  HWaddr 00:1A:4A:2F:D2:A8
          inet addr:192.168.43.15  Bcast:192.168.43.255  Mask:255.255.255.0
          inet6 addr: fe80::21a:4aff:fe2f:d2a8/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:468 (468.0 b)  TX bytes:578 (578.0 b)

# Host virtual NIC
vnet26    Link encap:Ethernet  HWaddr FE:1A:4A:2F:D2:A8
          inet6 addr: fe80::fc1a:4aff:fe2f:d2a8/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:648 (648.0 b)  TX bytes:468 (468.0 b)
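In case it helps, one way to confirm which host-side vnet device backs which
VM NIC is to list the VM's interfaces with virsh and match the MAC (libvirt
keeps the guest MAC but flips the first octet to FE on the host-side tap
device, as in the output above). This is only a sketch: "myvm" is a
placeholder for the VM name, and it assumes virsh can be run as root on the
host:

# List the VM's NICs as libvirt sees them (type, source bridge, model, MAC)
virsh domiflist myvm

# Find the host-side device carrying the matching MAC (first octet fe)
ip -o link | grep -i '1a:4a:2f:d2:a8'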
As you can see from the output above, the MTU differs between the two. After
setting the MTU of vnet26 to match the VM's internal NIC (and the host
physical NIC, for that matter), everything is back to normal.
I'm quite sure the MTU of that NIC (vnet26) must match what is set on the
host physical NIC; I can't see how it would work otherwise.
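A minimal way to do that on the host for a quick test (the device name
vnet26 and the 9000 MTU are taken from the output above; note the vnet
device is recreated whenever the VM starts, so this won't survive a
restart):

# Raise the tap device MTU to match the guest NIC and the host physical NIC
ifconfig vnet26 mtu 9000

# Or the same thing with iproute2, if preferred
ip link set dev vnet26 mtu 9000

# Verify
ifconfig vnet26 | grep MTU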
Rgds Jonas