
Hi,

Context here:
- 2 setups (2 datacenters) in oVirt 3.4.1 with CentOS 6.4 and 6.5 hosts
- connected to some LUNs over iSCSI on a dedicated physical network

Every host has two interfaces used for management and end-user LAN activity. Every host also has 4 additional NICs dedicated to the iSCSI network. Those 4 NICs were set up from the oVirt web GUI as a bond with a single IP address, connected to the SAN.

Everything is working fine. I just had to manually tweak a few points (MTU, other small things), but it is working.

Recently, our SAN dealer told us that using bonding in an iSCSI context is terrible, and that the recommendation is to use multipathing. My pre-oVirt experience agrees with that. Long story short, when setting up the hosts from oVirt it was so convenient to click, set up the bond and see it work that I did not pay further attention (and we seem to have no bottleneck yet).

Anyway, I dedicated a host to experimenting, and some things are not clear to me. I know how to set up NICs, iSCSI and multipath to present the host OS with a partition or a logical volume, using multipathing instead of bonding.

But in this precise case, what disturbs me is that many of the layers described above are managed by oVirt (mounting/unmounting of LVs, creation of bridges on top of bonded interfaces, managing the WWIDs across the cluster). And I see nothing related to multipath at the NIC level. Though I can set everything up fine on the host, this setup does not match what oVirt expects: oVirt expects a bridge named after the iSCSI network and able to connect to the SAN. My multipath setup offers access to the LUNs' partitions, which is not the same thing.

I saw that multipathing is discussed here: http://www.ovirt.org/Feature/iSCSI-Multipath

There I read:
Add an iSCSI Storage to the Data Center
Make sure the Data Center contains networks.
Go to the Data Center main tab and choose the specific Data Center
At the sub tab choose "iSCSI Bond"
The only tabs I see are "Storage/Logical Networks/Network QoS/Clusters/Permissions". In this datacenter, I have one iSCSI master storage domain, two iSCSI storage domains and one NFS export domain. What did I miss?
Press the "new" button to add a new iSCSI Bond
Configure the networks you want to add to the new iSCSI Bond.
Anyway, I'm not sure I understand the point of this wiki page and this implementation: it looks like a much higher-level kind of multipathing, over virtual networks, and not at all what I'm talking about above...?

Well, as you see, I need enlightenment.

--
Nicolas Ecarnot
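PS: To illustrate what I mean by "multipathing instead of bonding", here is a minimal sketch of the manual, non-oVirt approach I have in mind on the experiment host; the NIC names (eth2/eth3) and the portal address are only examples:

  # one open-iscsi iface per physical NIC, instead of one bond carrying everything
  iscsiadm -m iface -I iscsi-eth2 --op=new
  iscsiadm -m iface -I iscsi-eth2 --op=update -n iface.net_ifacename -v eth2
  iscsiadm -m iface -I iscsi-eth3 --op=new
  iscsiadm -m iface -I iscsi-eth3 --op=update -n iface.net_ifacename -v eth3

  # discover the targets through both ifaces, then log in: one session per NIC
  iscsiadm -m discovery -t sendtargets -p 10.0.0.10:3260 -I iscsi-eth2 -I iscsi-eth3
  iscsiadm -m node -L all

  # device-mapper multipath then aggregates the resulting paths into one device
  multipath -ll

This gives the host several independent paths to each LUN, which is exactly what I see nothing about at the NIC level in oVirt.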

Hi Nicolas,

Which DC level are you using? iSCSI multipath is supported only in a DC with a compatibility version of 3.4.

Regards,
Maor

On 09-06-2014 13:55, Maor Lipchuk wrote:
Hi Nicolas,
Which DC level are you using? iSCSI multipath is supported only in a DC with a compatibility version of 3.4.
Hi Maor,

Oops, you're right: both of my 3.4 datacenters are still using the 3.3 compatibility level. I migrated recently.

How safe or risky is it to increase this DC level?

Basically, you should upgrade your DC to 3.4, and then also upgrade the clusters you desire to 3.4. You might need to upgrade your hosts to be compatible with the cluster's emulated machines, or they might become non-operational if qemu-kvm does not support them. Either way, you can always ask for advice on the mailing list if you encounter any problem.

Regards,
Maor

On 06/09/2014 03:30 PM, Nicolas Ecarnot wrote:
Hi Maor,
Oops, you're right: both of my 3.4 datacenters are still using the 3.3 compatibility level. I migrated recently.
How safe or risky is it to increase this DC level?
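For reference, the same compatibility bump can presumably also be scripted against the engine's REST API instead of the web admin GUI. This is only a rough, untested sketch; the engine URL, password and IDs are placeholders, and the exact payload may differ between versions:

  # raise the cluster compatibility level first (the DC cannot be higher than its clusters)
  curl -k -u admin@internal:PASSWORD -X PUT \
       -H "Content-Type: application/xml" \
       -d '<cluster><version major="3" minor="4"/></cluster>' \
       https://engine.example.com/api/clusters/CLUSTER_ID

  # then raise the data center compatibility level
  curl -k -u admin@internal:PASSWORD -X PUT \
       -H "Content-Type: application/xml" \
       -d '<data_center><version major="3" minor="4"/></data_center>' \
       https://engine.example.com/api/datacenters/DC_ID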

On 09-06-2014 14:44, Maor Lipchuk wrote:
Basically, you should upgrade your DC to 3.4, and then also upgrade the clusters you desire to 3.4.
Well, that seems to have worked, except that I had to raise the cluster level first, and then the DC level.

Now I can see that the iSCSI multipath tab has appeared. But I confirm what I wrote below:
I saw that multipathing is discussed here: http://www.ovirt.org/Feature/iSCSI-Multipath
Add an iSCSI Storage to the Data Center
Make sure the Data Center contains networks.
Go to the Data Center main tab and choose the specific Data Center
At the sub tab choose "iSCSI Bond"
Press the "new" button to add a new iSCSI Bond
Configure the networks you want to add to the new iSCSI Bond.
Anyway, I'm not sure I understand the point of this wiki page and this implementation: it looks like a much higher-level kind of multipathing, over virtual networks, and not at all what I'm talking about above...?
What I am actually trying to find out is whether bonding interfaces (at a low level) for the iSCSI network is a bad thing, as my storage provider claims.

--
Nicolas Ecarnot
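PS: To get a concrete picture of what the current 4-NIC bond is actually doing on the experiment host, this is a sketch of the checks I am running (assuming the bond is named bond0; adapt to your naming):

  # bonding mode (balance-rr, 802.3ad/LACP, ...), LACP state and slave status
  cat /proc/net/bonding/bond0

  # how many iSCSI sessions exist and which iface/portal each one is bound to
  iscsiadm -m session -P 1 | grep -E "Iface Name|Current Portal"

With the bond there is typically a single session per target, riding whichever slave link the bond's hashing picks, whereas the multipath approach gives one session per NIC.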

On Mon, Jun 9, 2014 at 9:23 AM, Nicolas Ecarnot <nicolas@ecarnot.net> wrote:
Anyway, I'm not sure I understand the point of this wiki page and this implementation: it looks like a much higher-level kind of multipathing, over virtual networks, and not at all what I'm talking about above...?
What I am actually trying to find out is whether bonding interfaces (at a low level) for the iSCSI network is a bad thing, as my storage provider claims.
-- Nicolas Ecarnot
Hi Nicolas,

I think naming the managed iSCSI multipathing feature a "bond" might be a bit confusing. It's not an Ethernet/NIC bond, but a way to group networks and targets together, so it's not "bonding interfaces". Behind the scenes it creates iSCSI ifaces (/var/lib/iscsi/ifaces) and changes the way the iscsiadm calls are constructed to use those ifaces (instead of the default) to connect and log in to the targets.

Hope that helps.

-John
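PS: As a rough illustration of what that looks like on a host (the iface names, target IQN and portal below are made up, not literal vdsm output):

  # one iface record per selected network under /var/lib/iscsi/ifaces
  iscsiadm -m iface
  # default tcp,<empty>,<empty>,<empty>,<empty>
  # net1    tcp,<empty>,<empty>,eth2,<empty>
  # net2    tcp,<empty>,<empty>,eth3,<empty>

  # and the logins are then bound to those ifaces instead of the default one
  iscsiadm -m node -T iqn.2014-06.example:target1 -p 10.0.0.10:3260 -I net1 -l
  iscsiadm -m node -T iqn.2014-06.example:target1 -p 10.0.0.10:3260 -I net2 -l

The result is one session per selected network, and dm-multipath on the host sees them as separate paths to the same LUNs.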

Is there any chance of multipath working with direct LUNs instead of just storage domains? I've asked/checked a couple of times, but have not had much luck.

Thanks

Gary Lloyd
IT Services
Keele University

On 07.07.2014 10:17, Gary Lloyd wrote:
Is there any chance of multipath working with direct LUNs instead of just storage domains? I've asked/checked a couple of times, but have not had much luck.
Hi,

The best way to get features into oVirt is to create a "Bug" titled as an "RFE" (request for enhancement) here:

https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt

If you have any custom VDSM code, it would be cool to share it with the community; then you might not even be responsible for maintaining it in the future, but that's your decision to make.

HTH

--
Mit freundlichen Grüßen / Regards

Sven Kieske
Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen

Hi Sven,

Thanks. My colleague has already tried to submit the code; I think it was Itamar we spoke with, sometime in Oct/Nov 2013. I think someone decided that the functionality should be driven from the engine itself rather than being set on the VDSM nodes. I will see about putting in a feature request, though.

Cheers

Gary Lloyd
IT Services
Keele University
Participants (5): Gary Lloyd, John Taylor, Maor Lipchuk, Nicolas Ecarnot, Sven Kieske