iSCSI multipath with separate subnets... still not possible in 4.4.x?

I'm looking through quite a few bug reports and mailing list threads, but want to make sure I'm not missing some recent development. It appears that doing iSCSI with two separate, non-routed subnets is still not possible with 4.4.x. I have the dead-standard iSCSI setup: two separate switches, separate interfaces on hosts and storage, and separate subnets that have no gateway and are completely unreachable except from directly attached interfaces. The hosted engine comes up with multiple paths and everything is perfect, but that's because the Ansible/hosted-engine deploy scripts have configured things correctly. Once you need to import or add new storage domains, it's not possible to do so in a way that gets both paths connected *and* persists across host reboots.

The docs cover creating "iSCSI Multipath" bonds, but you can _not_ create a single bond that includes both interfaces and hope things route correctly: oVirt will try to connect from the interface on storage network A to the target on storage network B, which can't happen since they are not routed (and should not be). The docs say nothing about how to accomplish multipathing in this setup, but a few mailing list messages say to just create two separate "iSCSI Multipath" bonds in the data center, one for each of your two interfaces. You can do this, and you'll get hopeful that things might work now: discovery succeeds, because there is no more trying to connect to unreachable targets. However, and this is the big caveat, there's no way to tell the new/imported domain, "Oh, use this other interface as well, so you have redundant paths." Once the domain is attached and activated, you have a single path. You can then manage the domain, do a discovery, see a path that isn't connected yet, and log into it as well. Now you have two paths. Is everything right with the world?
Nope. It's impossible to persist that connection; it will be gone on the next reboot, and you'll always have to manually visit each host, do discovery, and log in. Nothing in the UI allows you to save that second connection in a way that it will be used again. Clicking "OK" does not, and going back to the "iSCSI Multipath" area of the data center, you can't edit each of the bonds and make sure each logical network has every possible target checked, because the targets you've manually logged into are never listed in that part of the UI.

So I really, really hope I'm wrong, because I'd like to move past this snag and on to the next one (which is that bond interfaces in 4.4.x will not allow you to attach additional networks... works great in 4.3, appears broken in 4.4.x). But there's no sense chasing that yet if iSCSI multipath isn't possible, which is looking likely. Has anyone had success running iSCSI in what is by far the most common setup out there, even though oVirt really doesn't seem to want to let you? This is driving me nuts; I've paved and rebuilt these hosts dozens of times now, trying different methods in the hope of getting multipath that persists.
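For reference, the manual per-host workaround described above (re-discovering and logging in to the second path by hand after every reboot) boils down to something like the following. This is a sketch only; the portal address and IQN are placeholders, not values from this thread:

```shell
# Discover targets on the second storage subnet, reachable only
# from its directly attached interface (placeholder portal):
iscsiadm -m discovery -t sendtargets -p 192.168.200.50:3260

# Log in to the discovered target (placeholder IQN):
iscsiadm -m node -T iqn.2020-07.example:storage.lun1 -p 192.168.200.50:3260 --login

# Verify that multipath now sees both paths:
multipath -ll
```

The complaint in the thread is exactly that the engine never persists this session, so the commands must be repeated on every host after every reboot.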

Hi Mark,

On 14.07.20 at 02:14, Mark R wrote:
I'm looking through quite a few bug reports and mailing list threads, but want to make sure I'm not missing some recent development. It appears that doing iSCSI with two separate, non-routed subnets is still not possible with 4.4.x. I have the dead-standard iSCSI setup with two separate switches, separate interfaces on hosts and storage, and separate subnets that have no gateway and are completely unreachable except from directly attached interfaces.
I haven't tested 4.4 yet, but AFAIK nothing has changed: oVirt iSCSI bonds don't work with separate, isolated subnets: https://bugzilla.redhat.com/show_bug.cgi?id=1474904

I don't use them, as multipathing generally works without oVirt bonds in my setup; I configure multipathd directly to use round robin, for example.

cu, Uwe

I second that. I've tirelessly talked about this and just given up; it's a basic feature whose absence keeps oVirt lagging behind.
On 18 Jul 2020, at 04:47, Uwe Laverenz <uwe@laverenz.de> wrote:

I don't use them, as multipathing generally works without oVirt bonds in my setup; I configure multipathd directly to use round robin, for example.
Thanks, Uwe. Am I understanding correctly that you're just letting your nodes attach to the iSCSI storage on their own by leaving "node.startup = automatic" in /etc/iscsi/iscsid.conf, so the hosts attach to all known targets as they boot, long before oVirt services ever attempt to connect them? I've considered flipping that to automatic; it's currently on "manual", as the idea was to let oVirt connect and manage the storage it wants.

As another poster mentioned, going the route of two separate iSCSI bonds in the "iSCSI Multipath" section does work when you're adding new storage domains. The aspect he talks about, where you connect both paths and save it, isn't possible if you import an existing storage domain. When importing, the UI won't expose the "Add" button that's available when creating a new domain, so you can't add redundant paths. You can import the storage, then edit it and discover/log in to the other path, but that does _not_ save to the database and will not persist across reboots or connect on other hosts you add to the cluster (you have to log in manually on each). You can't edit your iSCSI bonds and check the box for these manually logged-in targets either; they'll never populate in that part of the UI, so they can't be selected. I think it's just a UI issue, because some very easy fiddling in the database makes it work exactly as you'd expect (and as it does for domains you newly add instead of importing).

Sorry, rambling, but I am curious about your "node.startup" setting in iscsid.conf. If left at "automatic" (the default), are your hosts attaching all the disks as they boot, and oVirt doesn't mind that? It could be the path I'll take, as honestly I'd much prefer configuring the storage connections directly.

Thanks!
Mark
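For context, the setting being discussed is a single line in /etc/iscsi/iscsid.conf; a fragment for illustration:

```ini
# /etc/iscsi/iscsid.conf (fragment)
# automatic: iscsid logs in to all known targets when the service starts.
# manual:    sessions are only established on request (e.g. by VDSM).
node.startup = manual
```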

On 22.07.20 at 21:55, Mark R wrote:
Thanks, Uwe. Am I understanding correctly that you're just letting your nodes attach to the iSCSI storage on their own by leaving "node.startup = automatic" in /etc/iscsi/iscsid.conf so the hosts attach to all known targets as they boot, long before oVirt services ever attempt to connect them? I've considered flipping that to
No, I use oVirt to connect to the iSCSI targets; this works as expected. The thing I do not use is oVirt's iSCSI bonds. What I configure manually is multipathd, in order to use the round-robin policy.
As another poster mentioned, going the route of two separate iSCSI bonds in the "iSCSI Multipath" section does work when you're adding new storage domains. The aspect he talks about, where you connect both paths and save it, isn't possible if you import an existing storage domain. When importing, the UI won't expose the "Add" button that's available when creating a new domain, so you can't add redundant paths. You can import the storage, then edit it and discover/log in to the other path, but that does _not_ save to the database and will not persist across reboots or connect on other hosts you add to the cluster (you have to log in manually on each). You can't edit your iSCSI bonds and check the box for these manually logged-in targets either; they'll never populate in that part of the UI, so they can't be selected. I think it's just a UI issue, because some very easy fiddling in the database makes it work exactly as you'd expect (and as it does for domains you newly add instead of importing).
This sounds quite ugly, I wasn't aware of this.
Sorry, rambling, but I am curious about your "node.startup" setting in iscsid.conf. If left at "automatic" (the default), are your hosts attaching all the disks as they boot, and oVirt doesn't mind that? It could be the path I'll take, as honestly I'd much prefer configuring the storage connections directly.
As I said, the only thing I change is /etc/multipath.conf: https://lists.ovirt.org/pipermail/users/2017-July/083308.html

cu, Uwe
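The linked post has the details; as a rough sketch, such a multipathd override could look like the following. The vendor/product strings and values here are illustrative only and should come from your storage vendor's recommendations; note that a later message in this thread points out that on oVirt a drop-in under /etc/multipath/conf.d/ is safer, because VDSM manages /etc/multipath.conf itself:

```
# /etc/multipath/conf.d/storage.conf -- illustrative values only
devices {
    device {
        vendor               "EXAMPLE"
        product              "EXAMPLE-LUN"
        path_selector        "round-robin 0"
        path_grouping_policy multibus
        no_path_retry        16
    }
}
```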

I don't quite understand what multipath is, but when I deployed it in my iSCSI setup, only one IP address was specified when installing the hosted engine. In order to specify the second subnet, I went to the portal of the host where I was deploying the engine and added the second subnet in the Storage -> iSCSI Targets section. It should be noted that the storage is connected to the hosts directly. After that, when adding subsequent hosts through the engine, I saw through their administration panels that the Storage -> iSCSI section also showed two targets with different subnets, and the command "multipath -ll" on the host also showed normal operation of the two paths. I hope this helps you in any way.

I saw through their administration panels that in Storage -> iSCSI section there are also two targets with different subnets and the command on the host multipath -ll also showed the normal operation of the two targets.

Does that mean that multipath cannot create a logical LUN which uses the two subnets as concurrent paths, and so the LUN cannot survive the failure of its only IP path?

I think there's a misunderstanding behind the name. What happens is the following: when you deploy the hosted engine you can probably choose only one target and path, so it uses the "default" interface; this is also true when you run a scan/discovery.

As an example, take a standard iSCSI configuration (best practice is also having separate network switches, etc., which I show in this example, but it works even if you don't). You will probably have two interfaces, each in its own fault domain, so let's say we have eth3 as FD1 and eth4 as FD2.

Subnets:
FD1 = 192.168.100.0/24
FD2 = 192.168.200.0/24

You will then create a logical network for each of those fault domains, FD1 and FD2. Make sure they are only attached, are not marked as VM networks, and are not "Required". After that, you will have to attach those logical networks to the hosts and configure their IPs. For example, on host1:

We attach FD1 to eth3 and give it 192.168.100.11/24
We attach FD2 to eth4 and give it 192.168.200.11/24

Note that I did not specify any gateway; it's a simple Layer 2 configuration. You will have to do this for each host (you can use Ansible for automation, which makes it very simple).

After you have attached your first iSCSI storage domain and done the procedure above, you will see under the data center a new tab named "iSCSI Multipathing". The name is confusing, because it does not control the multipath configuration; that should be done by following your vendor's best practices and configuring it under "/etc/multipath/conf.d", because VDSM overrides /etc/multipath.conf.

In the "iSCSI Multipathing" tab you will need to add those two fault domains. For FD1, select the logical network we created earlier named "FD1" and then select only the targets related to it! Do the same for FD2. What happens behind the scenes is that VDSM configures iSCSI (oVirt/RHEL uses libiscsi).
You can observe that two interfaces were created, eth3 and eth4, with "iscsiadm -m iface". Then look at the sessions with "iscsiadm -m session". The easiest way is to just look at what was created in the "/var/lib/iscsi" folder: "find /var/lib/iscsi". You will see how VDSM configured iSCSI to interact with the targets; it tells it to use those specific interfaces for those specific targets. You can inspect a session with "iscsiadm -m session -r1 -P3" to get more details.

So what really happened is that VDSM takes care of making sure iSCSI works on each host; it will automatically log in to the targets using those interfaces. I do agree the name "Multipathing" is confusing.
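Roughly, a manual equivalent of what VDSM sets up can be sketched with iscsiadm. The iface names, portal addresses, and NICs below follow the FD1/FD2 example above and are assumptions for illustration, not VDSM's actual naming:

```shell
# Create an iSCSI iface bound to each storage NIC (names illustrative):
iscsiadm -m iface -I fd1 --op=new
iscsiadm -m iface -I fd1 --op=update -n iface.net_ifacename -v eth3
iscsiadm -m iface -I fd2 --op=new
iscsiadm -m iface -I fd2 --op=update -n iface.net_ifacename -v eth4

# Discover each portal only through its matching interface,
# so no connection is ever attempted across the unrouted subnets:
iscsiadm -m discovery -t sendtargets -p 192.168.100.50:3260 -I fd1
iscsiadm -m discovery -t sendtargets -p 192.168.200.50:3260 -I fd2

# Log in, then inspect what was created:
iscsiadm -m node --login
iscsiadm -m iface
iscsiadm -m session -P 3
find /var/lib/iscsi
```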
participants (6)
-
Erez Zarum
-
Mark R
-
Patrick Lomakin
-
tommy
-
Uwe Laverenz
-
Vinícius Ferrão