[ovirt-users] oVirt 4.1 / iSCSI Multipathing

Vinícius Ferrão ferrao at if.ufrj.br
Fri Jul 21 18:56:05 UTC 2017


On 21 Jul 2017, at 15:12, Yaniv Kaul <ykaul at redhat.com> wrote:



On Wed, Jul 19, 2017 at 9:13 PM, Vinícius Ferrão <ferrao at if.ufrj.br> wrote:
Hello,

I skipped over this message entirely yesterday. So this is by design? Because the best practices for iSCSI MPIO, as far as I know, recommend two completely separate paths. If this can’t be achieved with oVirt, what’s the point of running MPIO?

With regular storage it is quite easy to achieve using 'iSCSI bonding'.
I think the Dell storage is a bit different and requires some more investigation - or experience with it.
 Y.

Yaniv, thank you for answering this. I’m really hoping that a solution can be found.

Actually I’m not running anything from Dell. My storage system is FreeNAS, which is pretty standard, and as far as I know iSCSI best practices dictate segregated networks for proper operation.

All the other major virtualization products support iSCSI this way: vSphere, XenServer and Hyper-V. So I was really surprised that oVirt (and even RHV; I requested a trial yesterday) does not implement iSCSI following the well-known best practices.

Here’s a picture of the architecture that I took from Google when searching for “MPIO best practices”: https://image.slidesharecdn.com/2010-12-06-midwest-reg-vmug-101206110506-phpapp01/95/nextgeneration-best-practices-for-vmware-and-storage-15-728.jpg?cb=1296301640

And as you can see, it’s segregated networks on a machine reaching the same target.

In my case, my datacenter has five hypervisor machines, each with two NICs dedicated to iSCSI. The two NICs connect to different converged Ethernet switches, and the iSCSI storage is connected the same way.

So it really does not make sense that the first NIC should be able to reach the second NIC’s target. In the case of a switch failure the cluster would go down anyway, so what’s the point of running MPIO? Right?
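
For reference, this is roughly what the segregated setup looks like on each hypervisor; a minimal sketch using standard open-iscsi commands, with hypothetical interface names, subnets and target IQN (the real values in my datacenter differ):

  # One iSCSI iface per NIC, each on its own isolated subnet:
  #   em1 -> 10.0.1.0/24 (switch A), em2 -> 10.0.2.0/24 (switch B)
  iscsiadm -m iface -I path-a --op=new
  iscsiadm -m iface -I path-a --op=update -n iface.net_ifacename -v em1
  iscsiadm -m iface -I path-b --op=new
  iscsiadm -m iface -I path-b --op=update -n iface.net_ifacename -v em2

  # Each iface discovers and logs in only through the portal it can actually reach:
  iscsiadm -m discovery -t sendtargets -p 10.0.1.100:3260 -I path-a
  iscsiadm -m discovery -t sendtargets -p 10.0.2.100:3260 -I path-b
  iscsiadm -m node -T iqn.2017-07.br.ufrj.if:freenas.target0 -p 10.0.1.100:3260 -I path-a --login
  iscsiadm -m node -T iqn.2017-07.br.ufrj.if:freenas.target0 -p 10.0.2.100:3260 -I path-b --login

With that, multipath sees two paths to the same LUN, one per fabric, even though the two subnets never talk to each other.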

Thanks once again,
V.



May we ask for a bug fix or a feature redesign on this?

MPIO is part of my datacenter; it was originally built for running XenServer, but I’m considering the move to oVirt. MPIO isn’t working right, and this could be a real no-go for me...

I’m willing to wait and hold my DC project if this can be fixed.

Any answer from the Red Hat folks?

Thanks,
V.

> On 18 Jul 2017, at 11:09, Uwe Laverenz <uwe at laverenz.de> wrote:
>
> Hi,
>
>
> On 17 Jul 2017, at 14:11, Devin Acosta wrote:
>
>> I am still troubleshooting the issue; I haven’t found any resolution at this point. I need to figure it out by this Friday, otherwise I need to look at Xen or another solution. iSCSI and oVirt seem problematic.
>
> The configuration of iSCSI multipathing via oVirt didn't work for me either. IIRC the underlying problem in my case was that I use totally isolated networks for each path.
>
> Workaround: to make round-robin work you have to enable it by editing "/etc/multipath.conf". Just add the 3 lines for the round-robin setting (see the comment in the file) and additionally add the "# VDSM PRIVATE" comment to keep vdsmd from overwriting your settings.
>
> My multipath.conf:
>
>
>> # VDSM REVISION 1.3
>> # VDSM PRIVATE
>> defaults {
>>    polling_interval            5
>>    no_path_retry               fail
>>    user_friendly_names         no
>>    flush_on_last_del           yes
>>    fast_io_fail_tmo            5
>>    dev_loss_tmo                30
>>    max_fds                     4096
>>    # 3 lines added manually for multipathing:
>>    path_selector               "round-robin 0"
>>    path_grouping_policy        multibus
>>    failback                    immediate
>> }
>> # Remove devices entries when overrides section is available.
>> devices {
>>    device {
>>        # These settings overrides built-in devices settings. It does not apply
>>        # to devices without built-in settings (these use the settings in the
>>        # "defaults" section), or to devices defined in the "devices" section.
>>        # Note: This is not available yet on Fedora 21. For more info see
>>        # https://bugzilla.redhat.com/1253799
>>        all_devs                yes
>>        no_path_retry           fail
>>    }
>> }
>
>
>
> To enable the settings:
>
>  systemctl restart multipathd
>
> See if it works:
>
>  multipath -ll
>
>
> HTH,
> Uwe

_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

