[Users] CentOS 6 and multiple iSCSI interfaces/multipath

Hi all,

I've recently started looking at oVirt to provide the next iteration of our virtualization infrastructure, as we have recently acquired some iSCSI storage. Up to now, we've been using KVM managed by libvirt/virt-manager locally on each of our hosts with direct-attached storage, which was obviously less than ideal.

In setting it all up, I've hit a bit of a snag. We've got four GbE network interfaces on our pilot host (running vdsm on CentOS 6.4) which are connected to our storage arrays (EqualLogic). I've created four interfaces with iscsiadm for these and bound them, but when setting up the disks in oVirt, I've not seen a way of telling it which interfaces to use. The node makes the connection to the target successfully, but it seems it's only connecting via the "default" iSCSI interface, and not making a connection via each interface that I've defined. Obviously this means I can't use multipath, and with only GbE interfaces it means I'm not getting the performance I should.

I've done some searching via Google, but I've not really found anything that helps. Perhaps I've missed something, but can anyone give me any pointers for getting this to work across multiple interfaces?

Thanks,
Martin
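For reference, the iface binding Martin describes is done with open-iscsi's iscsiadm. A minimal sketch per NIC; the iface name, portal address and target IQN below are illustrative, not from his actual setup:

  # create an iface record and bind it to a physical NIC
  iscsiadm -m iface -I iface-eth2 --op=new
  iscsiadm -m iface -I iface-eth2 --op=update -n iface.net_ifacename -v eth2

  # discover and log in to the target through that iface
  iscsiadm -m discovery -t sendtargets -p 10.10.1.10:3260 -I iface-eth2
  iscsiadm -m node -T iqn.2001-05.com.equallogic:example-volume -p 10.10.1.10:3260 -I iface-eth2 --login

Repeating the first two commands for each NIC gives one session per iface, which dm-multipath can then aggregate into a single device.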

Dan Yasny <dyasny@...> replied:

This is an old issue with EQL. Best practice is to put the different iSCSI paths on different subnets or VLANs, and to log in to every iSCSI portal when you create the storage domain. Alternatively, you can use bonds, of course.
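A sketch of what that setup looks like in practice, assuming two storage subnets with one portal each (all addresses here are made up for illustration):

  # eth2 on 10.10.1.0/24, eth3 on 10.10.2.0/24, one portal per subnet
  iscsiadm -m discovery -t sendtargets -p 10.10.1.10:3260
  iscsiadm -m discovery -t sendtargets -p 10.10.2.10:3260

  # log in to every discovered portal; the kernel routes each session
  # out of the NIC that sits on that portal's subnet
  iscsiadm -m node --loginall=all

  # both paths should now appear under a single dm-multipath device
  multipath -ll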

Gary Lloyd <g.lloyd@keele.ac.uk> replied:

Hi Dan,

I work with Martin. The problem we are having seems to be with how oVirt/vdsm manages iSCSI sessions, not with how our server is connected. I have tested iscsiadm manually on the same machine (CentOS 6.4), and we do indeed get multiple sessions from two ifaces, with MPIO working via dm-multipath, etc.

vdsm, however, does not appear to be logging into the targets using the ifaces we use with iscsiadm. I have tried adding this to /etc/vdsm/vdsm.conf, but it makes little difference:

  [irs]
  iscsi_default_ifaces = eth2,eth3

Thanks,
Gary
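To confirm this diagnosis on a host, open-iscsi's session mode shows which iface each session was created with. A quick check (the grep pattern is just one way to trim the print-level-1 output):

  iscsiadm -m session -P 1 | grep -E 'Target:|Iface Name:'

Sessions set up by vdsm would show "Iface Name: default" here, while sessions logged in through bound ifaces show the iface names instead.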

Dan Yasny replied:

Currently vdsm doesn't use the IFACE parameter at all; it simply tries to discover the target by IP, letting the kernel do the routing. So for multiple paths, you need to split the host NICs into different subnets, set up the portals on those subnets, and discover/log in on those separate IPs from the different subnets. The host kernel will take care of which NIC to use for each iSCSI session.
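For completeness, the subnet split Dan describes is plain CentOS 6 network configuration; nothing iSCSI-specific is needed on the host side. A minimal sketch with example addresses:

  # /etc/sysconfig/network-scripts/ifcfg-eth2
  DEVICE=eth2
  BOOTPROTO=static
  IPADDR=10.10.1.21
  NETMASK=255.255.255.0
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-eth3 -- same, but on the second subnet
  DEVICE=eth3
  BOOTPROTO=static
  IPADDR=10.10.2.21
  NETMASK=255.255.255.0
  ONBOOT=yes

With one portal per subnet, discovering and logging in against each portal IP leaves the kernel exactly one NIC that can reach it, so each session gets its own physical path without any iface binding.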
participants (3)
- Dan Yasny
- Gary Lloyd
- Martin Goldstone