[ovirt-users] iSCSI and multipath

Maor Lipchuk mlipchuk at redhat.com
Mon Jun 9 12:44:44 UTC 2014


Basically, you should upgrade your DC to 3.4, and then also upgrade the
clusters you desire to 3.4.

You might need to upgrade your hosts to be compatible with the cluster's
emulated machine type, or they might become non-operational if their
qemu-kvm does not support it.
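
For example, a rough way to check a host beforehand (a sketch only; the
rhel6.5.0 machine type is what a 3.4 cluster typically expects on EL6
hosts, so adjust to your actual versions):

    # list the machine types this host's qemu-kvm knows about
    /usr/libexec/qemu-kvm -M ?
    # a 3.4 cluster on EL6 usually needs rhel6.5.0 to appear in that list
    /usr/libexec/qemu-kvm -M ? | grep rhel6.5.0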

Either way, you can always ask for advice on the mailing list if you
encounter any problems.

Regards,
Maor

On 06/09/2014 03:30 PM, Nicolas Ecarnot wrote:
> On 09-06-2014 13:55, Maor Lipchuk wrote:
>> Hi Nicolas,
>>
>> Which DC level are you using?
>> iSCSI multipath is only supported in DCs with a compatibility
>> version of 3.4.
> 
> Hi Maor,
> 
> Oops, you're right: both my 3.4 datacenters are using the 3.3 level.
> I migrated recently.
> 
> How safe or risky is it to increase this DC level?
> 
>>
>> regards,
>> Maor
>>
>> On 06/09/2014 01:06 PM, Nicolas Ecarnot wrote:
>>> Hi,
>>>
>>> Context here:
>>> - 2 setups (2 datacenters) in oVirt 3.4.1 with CentOS 6.4 and 6.5 hosts
>>> - connected to some iSCSI LUNs on a dedicated physical network
>>>
>>> Every host has two interfaces used for management and end-user LAN
>>> activity. Every host also has 4 additional NICs dedicated to the iSCSI
>>> network.
>>>
>>> Those 4 NICs were set up from the oVirt web GUI as a bond with a
>>> single IP address and connected to the SAN.
>>>
>>> Everything is working fine; I just had to tweak a few things manually
>>> (MTU and other small details).
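>>>
>>> Roughly, what oVirt built (plus my MTU tweak) looks like this on each
>>> host (the address and bonding mode below are only illustrative):
>>>
>>>     # /etc/sysconfig/network-scripts/ifcfg-bond0
>>>     DEVICE=bond0
>>>     BONDING_OPTS="mode=4 miimon=100"
>>>     BOOTPROTO=none
>>>     IPADDR=10.0.100.11
>>>     NETMASK=255.255.255.0
>>>     MTU=9000
>>>     ONBOOT=yes
>>>     NM_CONTROLLED=no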
>>>
>>>
>>> Recently, our SAN vendor told us that using bonding in an iSCSI context
>>> is a bad practice, and that the recommendation is to use multipathing
>>> instead. My pre-oVirt experience agrees with that. Long story short:
>>> when setting up the hosts from oVirt, it was so convenient to click,
>>> set up the bonding and see it working that I did not pay further
>>> attention (and we seem to have hit no bottleneck yet).
>>>
>>> Anyway, I dedicated a host to experiments, but things are not clear to
>>> me. I know how to set up the NICs, iSCSI and multipath to present a
>>> partition or a logical volume to the host OS, using multipathing
>>> instead of bonding.
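>>>
>>> (Roughly what I mean, as a sketch only; the interface names and the
>>> portal address below are just examples:)
>>>
>>>     # one iSCSI iface per dedicated NIC
>>>     iscsiadm -m iface -I iscsi-eth2 -o new
>>>     iscsiadm -m iface -I iscsi-eth2 -o update -n iface.net_ifacename -v eth2
>>>     iscsiadm -m iface -I iscsi-eth3 -o new
>>>     iscsiadm -m iface -I iscsi-eth3 -o update -n iface.net_ifacename -v eth3
>>>     # discover the targets through each iface, log in, then check paths
>>>     iscsiadm -m discovery -t sendtargets -p 10.0.100.1 -I iscsi-eth2 -I iscsi-eth3
>>>     iscsiadm -m node -L all
>>>     multipath -ll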
>>>
>>> But in this precise case, what bothers me is that many of the layers
>>> described above are managed by oVirt (mounting/unmounting the LVs,
>>> creating bridges on top of bonded interfaces, managing the WWIDs
>>> across the cluster).
>>>
>>> And I see nothing related to multipath at the NIC level.
>>> Though I can set everything up fine on the host, this setup does not
>>> match what oVirt expects: oVirt expects a bridge named after the
>>> iSCSI network and able to connect to the SAN.
>>> My multipathing offers access to the partitions of the LUNs, which is
>>> not the same thing.
>>>
>>> I saw that multipathing is discussed here:
>>> http://www.ovirt.org/Feature/iSCSI-Multipath
>>>
>>> There I read:
>>>>     Add an iSCSI Storage to the Data Center
>>>>     Make sure the Data Center contains networks.
>>>>     Go to the Data Center main tab and choose the specific Data Center
>>>>     At the sub tab choose "iSCSI Bond"
>>>
>>> The only tabs I see are "Storage/Logical Networks/Network
>>> QoS/Clusters/Permissions".
>>>
>>> In this datacenter, I have one iSCSI master storage domain, two iSCSI
>>> storage domains and one NFS export domain.
>>>
>>> What did I miss?
>>>
>>>>     Press the "new" button to add a new iSCSI Bond
>>>>     Configure the networks you want to add to the new iSCSI Bond.
>>>
>>> Anyway, I'm not sure I understand the point of this wiki page and this
>>> implementation: it looks like multipathing at a much higher level, over
>>> virtual networks, and not at all what I'm talking about above...?
>>>
>>> Well, as you can see, I need some enlightenment.
>>>
> 



