Hi,
I'm currently running the platform with the following versions, and I'm facing an iSCSI timeout issue during network switch maintenance.
oVirt version: 4.5
oVirt manager: Red Hat 9.0
oVirt node: Rocky 9.4
Here's a drawing of the topology:
                +---------------+
                |    Storage    |
                +---------------+
   192.168.11.2 |               | 192.168.11.1
                |               | 10GbE Links
                |               |
      +---------------+   +---------------+
      |  Nexus 5k #1  |   |  Nexus 5k #2  |
      +---------------+   +---------------+
   192.168.11.3 |               | 192.168.11.4
                |               | 10GbE Links
                |               |
                +---------------+
                |   oVirt 4.5   |
                +---------------+
VLANx: 192.168.11.3/28
VLANx: 192.168.11.4/28
I've configured the oVirt nodes without bonding: one NIC is assigned to VLANx with IP .3 and the other NIC is assigned to VLANx with IP .4. I can see that only one NIC actively carries the iSCSI traffic (there is no iSCSI bond). I would like to use both NICs in active-active mode; what topology should I follow? [I read on a blog that LACP is not recommended for iSCSI traffic.] Correct me if I'm wrong.
When the switch connected to the actively used NIC reboots, the failover of iSCSI traffic to the other NIC takes almost 30 seconds. Can I reduce the NIC failover time from 30 seconds to something like 3 seconds? I didn't see this timeout behaviour on CentOS 8, but Rocky 9 behaves differently.
Any suggestions or recommendations?
Hello!
Do you have the multipath package installed? Please show the output of the command 'multipath -ll'.
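With both paths logged in, a healthy 'multipath -ll' looks roughly like this (the WWID, device names, and size here are made-up placeholders):

  36001405abcdef0123456789 dm-2 LIO-ORG,disk
  size=500G features='0' hwhandler='1 alua' wp=rw
  `-+- policy='service-time 0' prio=50 status=active
    |- 5:0:0:1 sdb 8:16 active ready running
    `- 6:0:0:1 sdc 8:32 active ready running

One path group containing both sdb and sdc means I/O is spread across both NICs; two separate groups, one 'active' and one 'enabled', means plain failover.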
Unless you have some awesome storage packed with SSDs, it is very unlikely you will exceed 10Gb. For simplicity I would just use a 10Gb fail-over bonded configuration. If you can saturate a 10Gb link, then it should be possible to do what you suggest.
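For what it's worth, the fail-over bond itself is only a couple of nmcli commands. The interface names below are assumptions, and on an oVirt node you would normally build the bond through the Administration Portal's Setup Host Networks dialog instead:

  nmcli con add type bond con-name bond0 ifname bond0 \
      bond.options "mode=active-backup,miimon=100"
  nmcli con add type ethernet con-name bond0-p1 ifname ens1f0 master bond0
  nmcli con add type ethernet con-name bond0-p2 ifname ens1f1 master bond0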
For starters, I suspect the VLAN configuration you are proposing wouldn't work: a /28 only holds 16 addresses, so .3 and .4 both sit inside 192.168.11.0/28 rather than in two separate networks. Yes, you would want two VLANs, but they would be:
192.168.11.0/28
192.168.11.16/28
You would want to give the storage an IP in each separate VLAN, and give each oVirt node
an IP in each separate VLAN. So for example storage could be:
192.168.11.1/28
192.168.11.17/28
and node1 could be:
192.168.11.14/28
192.168.11.30/28
and node2 could be:
192.168.11.13/28
192.168.11.29/28
...and so on.
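If you were laying this down by hand, the per-node addressing would be something like the following nmcli sketch (interface names ens1f0/ens1f1 and VLAN IDs 100/101 are assumptions; on oVirt you would normally assign these through Setup Host Networks):

  nmcli con add type vlan con-name iscsi-a dev ens1f0 id 100 \
      ipv4.method manual ipv4.addresses 192.168.11.14/28
  nmcli con add type vlan con-name iscsi-b dev ens1f1 id 101 \
      ipv4.method manual ipv4.addresses 192.168.11.30/28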
It wouldn't totally shock me if there was some storage solution that used custom load
balancing that would work the way you suggest, but using separate sub-nets is probably
the most standard. You can run into all sorts of ARP fun by putting two Linux interfaces
on the same sub-net.
On the storage side you would create an iSCSI portal on each of the two sub-nets. Make sure your initiator and target are all set up per the storage requirements. I've done it with 4x1Gb (and 4 VLANs) in FreeNAS; the exact steps are storage-brand specific.
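A quick manual sanity check that both portals answer could look like this (the portal IPs follow the addressing example above; the target IQN is a placeholder, and oVirt performs the real logins itself when you attach the storage domain):

  iscsiadm -m discovery -t sendtargets -p 192.168.11.1:3260
  iscsiadm -m discovery -t sendtargets -p 192.168.11.17:3260
  iscsiadm -m node -T iqn.2000-01.com.example:target0 -p 192.168.11.1:3260 -l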
On the oVirt side you would have to configure multipathd and dm-multipath. You should be able to google how to do that; the steps are generic for Linux. Once configured correctly you should get a dm device with multiple connected paths.
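One caveat on oVirt nodes: VDSM owns /etc/multipath.conf and will overwrite local edits unless the file carries a '# VDSM PRIVATE' marker near the top. Treat the following as an illustrative sketch to check against your array's documentation, not drop-in config:

  # /etc/multipath.conf (sketch)
  # VDSM PRIVATE
  defaults {
      user_friendly_names  yes
      find_multipaths      yes
      path_grouping_policy multibus          # spread I/O across both paths
      path_selector        "service-time 0"
      polling_interval     5                 # re-check paths every 5 seconds
      no_path_retry        4                 # queue briefly, then fail
  }

As for the 30-second failover: that is usually the iSCSI layer rather than multipath. Lowering the replacement timeout in /etc/iscsi/iscsid.conf (open-iscsi's default is 120 seconds) shortens how long the initiator waits before declaring a session dead so multipath can switch paths:

  node.session.timeo.replacement_timeout = 5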