Odd question: changing network MTU

I have an oVirt 4.3 cluster, running in one location. I have to move it to another location. I've got a couple of 1G links between the sites, and that's enough bandwidth for this (at least temporarily), but... I have my iSCSI networks defined with an MTU of 9000, and it turns out the site-to-site links only allow 1500 (and these links are going away after this is done, so I don't think either carrier would be interested in changing things to support larger frames).

Because of that, the storage won't connect up. I tried going "under the hood" and setting a firewalld rule to force the MSS to a smaller value, but that didn't seem to do it.

What happens if I change the MTU of an active iSCSI network in oVirt? I could just go manually change it on each node's iSCSI interfaces, but I'm not sure if oVirt might change it back. Also, I'm not sure what would happen to open iSCSI TCP connections (would they reduce gracefully?).

Any other suggestions/tips/etc.? I'd like to make this as transparent as possible, so I was hoping to live-migrate VMs and storage.

-- Chris Adams <cma@cmadams.net>
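A rough sketch of the kind of MSS clamp described above, via a firewalld "direct" rule (rich rules have no way to set TCPMSS). The iSCSI subnet and MSS value are placeholders, and note that a TCPMSS clamp only influences connections established after the rule is in place:

    import subprocess

    ISCSI_SUBNET = "10.0.100.0/24"   # placeholder: the storage/portal subnet
    CLAMP_MSS = "1360"               # placeholder: leaves header room within a 1500-byte MTU

    # Insert a mangle/OUTPUT rule that rewrites the MSS option on outgoing SYNs
    # toward the iSCSI portals.  Already-established sessions keep their negotiated MSS.
    subprocess.run(
        ["firewall-cmd", "--direct", "--add-rule", "ipv4", "mangle", "OUTPUT", "0",
         "-p", "tcp", "--tcp-flags", "SYN,RST", "SYN",
         "-d", ISCSI_SUBNET,
         "-j", "TCPMSS", "--set-mss", CLAMP_MSS],
        check=True,
    )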

On Mon, Apr 12, 2021 at 9:05 PM Chris Adams <cma@cmadams.net> wrote:
What happens if I change the MTU of an active iSCSI network in oVirt? I could just go manually change it on each node's iSCSI interfaces, but I'm not sure if oVirt might change it back.
oVirt will not modify your setting; the only things we set on the nodes are node.startup and the node.session.xxx credential options:

    def addIscsiNode(iface, target, credentials=None):
        # There are 2 formats for an iSCSI node record. An old style format where
        # the path is /var/lib/iscsi/nodes/{target}/{portal} and a new style format
        # where the portal path is a directory containing a record file for each
        # bounded iface. Explicitly specifying tpgt on iSCSI login imposes creation
        # of the node record in the new style format which enables to access a
        # portal through multiple ifaces for multipathing.
        with _iscsiadmTransactionLock:
            iscsiadm.node_new(iface.name, target.address, target.iqn)
            try:
                if credentials is not None:
                    for key, value in credentials.getIscsiadmOptions():
                        key = "node.session." + key
                        iscsiadm.node_update(iface.name, target.address,
                                             target.iqn, key, value)

                setRpFilterIfNeeded(iface.netIfaceName, target.portal.hostname,
                                    True)

                iscsiadm.node_login(iface.name, target.address, target.iqn)

                iscsiadm.node_update(iface.name, target.address, target.iqn,
                                     "node.startup", "manual")
            except:
                removeIscsiNode(iface, target)
                raise

You can add more configuration right after the "node.startup" update above.

You can also modify the nodes outside of oVirt, but oVirt may remove the iSCSI nodes along with your modifications. So I think modifying vdsm to do what you want is your best choice. If this works and can be useful to others, we can think about how to make it more generic, maybe by adding some configuration that is applied to all nodes.
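A very rough sketch of the kind of vdsm change Nir is pointing at, reusing the same iscsiadm.node_update() helper that sets node.startup above. The EXTRA_NODE_OPTIONS name is invented for this example, and the option shown is just an arbitrary open-iscsi node setting, not a fix for the MTU problem:

    from vdsm.storage import iscsiadm   # vdsm's internal wrapper around the iscsiadm CLI

    # Invented for this sketch: extra node record settings to apply after login.
    EXTRA_NODE_OPTIONS = {
        "node.session.timeo.replacement_timeout": "120",
    }

    def apply_extra_node_options(iface, target):
        # Would be called from addIscsiNode(), right after the "node.startup" update.
        for key, value in EXTRA_NODE_OPTIONS.items():
            iscsiadm.node_update(iface.name, target.address, target.iqn, key, value)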
Also, I'm not sure what would happen to open iSCSI TCP connections (would they reduce gracefully).
Your VMs are running on top of multipath, so even if the iSCSI connection is broken and then recovers, the VM is protected from the short outage. You can also ask about it on the open-iscsi mailing list: https://groups.google.com/g/open-iscsi
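For what it's worth, a quick way to sanity-check that sessions and paths came back after such a blip is just the standard CLI tools; a trivial wrapper, nothing oVirt-specific assumed:

    import subprocess

    def show_storage_health():
        # Print current iSCSI sessions and multipath topology for a quick eyeball check.
        for cmd in (["iscsiadm", "-m", "session"], ["multipath", "-ll"]):
            print("$", " ".join(cmd))
            subprocess.run(cmd, check=False)

    show_storage_health()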
Any other suggestions/tips/etc.? I'd like to make this as transparent as possible, so was hoping to live-migrate VMs and storage.
Ales may have more insight on the network side. Nir

Once upon a time, Nir Soffer <nsoffer@redhat.com> said:
On Mon, Apr 12, 2021 at 9:05 PM Chris Adams <cma@cmadams.net> wrote:
What happens if I change the MTU of an active iSCSI network in oVirt? I could just go manually change it on each node's iSCSI interfaces, but I'm not sure if oVirt might change it back.
oVirt will not modify your setting, the only thing we set on the nodes are node.startup and node.session.xxx:
Well, it wouldn't be the iSCSI part that I'd worry about, but the network part. The MTU is set on the networks in oVirt that are used for iSCSI, not in the iSCSI part of the config. I actually didn't even realize you could set an MTU in the iSCSI config; I see it just defaults to 0 (I assume that means it uses the interface/path MTU - I didn't see any documentation about the iface.mtu setting). I might look at that as a method.
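If the iface.mtu route turns out to be viable, the record can be changed with iscsiadm; a hypothetical example for a placeholder iface named "default" (untested here - with the plain tcp transport, 0 appears to mean "use the NIC's MTU"):

    import subprocess

    # Placeholder iface name; oVirt/vdsm creates its own iface records, so the
    # real name would have to be taken from "iscsiadm -m iface".
    subprocess.run(
        ["iscsiadm", "-m", "iface", "-I", "default",
         "-o", "update", "-n", "iface.mtu", "-v", "1500"],
        check=True,
    )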
If this works and can be useful to others, we can think how to make this more generic, maybe adding some configuration that will be applied to all nodes.
Heh, this is such a corner case, I wouldn't really wish doing this on anyone. :)
Also, I'm not sure what would happen to open iSCSI TCP connections (would they reduce gracefully).
Your vms are running on top of multipath, so even if the iscsi connection was broken and recovered, the vm is protected from the short outage.
Hmm, true. What I'm considering right now is not changing anything in oVirt, just rolling through the systems, setting them to maintenance mode to be extra safe, manually changing the interface MTUs, and re-activating them (just need to see if oVirt and/or NetworkManager changes it back when just going back to active). -- Chris Adams <cma@cmadams.net>

On Tue, Apr 13, 2021 at 3:03 AM Chris Adams <cma@cmadams.net> wrote:
What I'm considering right now is not changing anything in oVirt, just rolling through the systems, setting them to maintenance mode to be extra safe, manually changing the interface MTUs, and re-activating them (just need to see if oVirt and/or NetworkManager changes it back when just going back to active).
That is generally a bad idea; oVirt has its own network persistence, so the manual change would be reverted on reboot, and the engine would complain that those networks are out of sync. The safest way is to change the MTU in the engine and then sync all networks on all hosts that have this network attached.
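A sketch of that engine-side approach using the ovirtsdk4 Python SDK; the URL, credentials, and network name are placeholders, and sync_all_networks() is, as far as I can tell, the SDK counterpart of the host's "Sync All Networks" action:

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url="https://engine.example.com/ovirt-engine/api",   # placeholder engine URL
        username="admin@internal",
        password="secret",
        insecure=True,   # use ca_file=... instead in real use
    )
    system = connection.system_service()

    # 1. Lower the MTU on the iSCSI logical network (placeholder name "iscsi").
    net = system.networks_service().list(search="name=iscsi")[0]
    system.networks_service().network_service(net.id).update(types.Network(mtu=1500))

    # 2. Sync networks on every host so the new MTU is pushed to the interfaces.
    hosts_service = system.hosts_service()
    for host in hosts_service.list():
        hosts_service.host_service(host.id).sync_all_networks()

    connection.close()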
Best regards,
Ales Musil
Software Engineer - RHV Network
Red Hat EMEA <https://www.redhat.com>
amusil@redhat.com IM: amusil

Once upon a time, Ales Musil <amusil@redhat.com> said:
That is generally a bad idea, oVirt has its own network persistence, it would be reverted on reboot and the engine would complain that those networks are out of sync.
Part of the email had been trimmed - this was a temporary thing for physically moving this cluster to a new location, where the VLANs are all bridged across a couple of links that don't support jumbo frames. I didn't want to make a permanent change, and in my testing, lowering the MTU while iSCSI connections were active broke them.

So I rolled through the hosts, putting them in maintenance, changing the MTU manually, and reactivating them, and that did work. The engine did note that the networks were out of sync, but did not force them back (which was my main concern).

I definitely wouldn't do this for any normal thing, but it did work for my temporary setup during the move. I was able to move my running VMs and their storage to a new home, 5 miles away, without shutting any of them down.

-- Chris Adams <cma@cmadams.net>
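A rough reconstruction of that rolling procedure as a script, for anyone in a similar corner. Host and interface names are placeholders, the ssh step assumes root key access to the hypervisors, and this is a sketch of the steps described above rather than the exact commands that were run:

    import subprocess
    import time

    import ovirtsdk4 as sdk

    HOSTS = {"hv1": "p1p1.100", "hv2": "p1p1.100"}   # placeholder host -> iSCSI VLAN interface
    NEW_MTU = "1500"

    connection = sdk.Connection(
        url="https://engine.example.com/ovirt-engine/api",   # placeholder engine URL
        username="admin@internal", password="secret", insecure=True,
    )
    hosts_service = connection.system_service().hosts_service()

    for name, iface in HOSTS.items():
        host = hosts_service.list(search="name=%s" % name)[0]
        host_service = hosts_service.host_service(host.id)

        host_service.deactivate()   # maintenance mode; VMs live-migrate off the host
        time.sleep(300)             # crude wait; polling host.status is the nicer way

        subprocess.run(["ssh", "root@%s" % name,
                        "ip", "link", "set", "dev", iface, "mtu", NEW_MTU],
                       check=True)

        host_service.activate()

    connection.close()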

On Thu, Apr 15, 2021 at 4:32 PM Chris Adams <cma@cmadams.net> wrote:
I definitely wouldn't do this for any normal thing, but it did work for my temporary setup during the move. I was able to move my running VMs and their storage to a new home, 5 miles away, without shutting any of them down.
Cool! This could be good content for the oVirt blog: https://blogs.ovirt.org/