From my perspective, the oVirt developers ship a very general multipath.conf which, in their opinion, should work with as many arrays as possible.
So you should adapt this file to your array and run some tests: plug links in, pull them out, and so on.

If you want a full picture of the LUNs, you are better off using: iscsiadm -m session -P 3.
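
For example, while you pull and replug a link you can keep an eye on the path states with something like this (just a sketch; the WWID is taken from your output below):

    watch -n2 'multipath -ll 364817197b5dfd0e5538d959702249b1c'
    journalctl -f -u multipathd

The failed path should go to "faulty" while I/O keeps flowing on the remaining one, and come back to "active" once the link is restored.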

2017-03-28 18:25 GMT+02:00 Gianluca Cecchi <gianluca.cecchi@gmail.com>:

Hello, 
I'm configuring a hypervisor for iSCSI Dell PS Series storage.
It is a CentOS 7.3 + updates server.
The server has been already added to oVirt as a node, but without any storage domain configured yet.
It has access to one LUN that will become the storage domain.

Default oVirt generated multipath.conf is like this:

defaults {
    polling_interval            5
    no_path_retry               fail
    user_friendly_names         no
    flush_on_last_del           yes
    fast_io_fail_tmo            5
    dev_loss_tmo                30
    max_fds                     4096
}

devices {
    device {
        # These settings overrides built-in devices settings. It does not apply
        # to devices without built-in settings (these use the settings in the
        # "defaults" section), or to devices defined in the "devices" section.
        # Note: This is not available yet on Fedora 21. For more info see
        # https://bugzilla.redhat.com/1253799
        all_devs                yes
        no_path_retry           fail
    }
}


Apparently device-mapper-multipath has no built-in configuration for this vendor/model combination:

  Vendor: EQLOGIC  Model: 100E-00          Rev: 8.1 
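
One way to double-check is to search the merged configuration that multipathd actually uses (built-in entries show up there as well); just a sketch:

    multipathd show config | grep -B 2 -A 15 EQLOGIC

If nothing comes back, the array really is falling back to the defaults section.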

So, with the oVirt-provided configuration, a "show config" for multipath reports something like this at the end:

        polling_interval 5
        path_selector "service-time 0"
        path_grouping_policy "failover"
        path_checker "directio"
        rr_min_io_rq 1
        max_fds 4096
        rr_weight "uniform"
        failback "manual"
        features "0"

and the multipath layout looks like this:

[root@ov300 etc]# multipath -l
364817197b5dfd0e5538d959702249b1c dm-3 EQLOGIC ,100E-00         
size=1.0T features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| `- 7:0:0:0 sde 8:64 active undef  running
`-+- policy='service-time 0' prio=0 status=enabled
  `- 8:0:0:0 sdf 8:80 active undef  running
[root@ov300 etc]# 
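
With this failover layout only one path group is active at a time, so only one of the two disks should actually carry I/O. Once the LUN is in use, a quick sanity check (assuming the sysstat package is installed) would be:

    iostat -x 2 sde sdf

where only the active path should show traffic.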

Following recommendations from Dell here:
http://en.community.dell.com/techcenter/extras/m/white_papers/20442422

I should put these directives into the defaults section:

defaults {
    polling_interval            10
    path_selector               "round-robin 0"
    path_grouping_policy        multibus
    path_checker                tur
    rr_min_io_rq                10
    max_fds                     8192
    rr_weight                   priorities
    failback                    immediate
    features                    0
}

I'm trying to mix the EQL and oVirt recommendations to get the best of both for my use case,
and I arrived at this config (plus a blacklist section with the WWIDs of my internal HD and my flash drive, which is not relevant here):

# VDSM REVISION 1.3
# VDSM PRIVATE

defaults {
    polling_interval            5
    no_path_retry               fail
    user_friendly_names         no
    flush_on_last_del           yes
    fast_io_fail_tmo            5
    dev_loss_tmo                30
#    Default oVirt value overwritten
#    max_fds                     4096
#
    max_fds                     8192
}

devices {
    device {
        # These settings overrides built-in devices settings. It does not apply
        # to devices without built-in settings (these use the settings in the
        # "defaults" section), or to devices defined in the "devices" section.
        # Note: This is not available yet on Fedora 21. For more info see
        # https://bugzilla.redhat.com/1253799
        all_devs                yes
        no_path_retry           fail
    }
    device {
        vendor                  "EQLOGIC"
        product                 "100E-00"
#        Default EQL configuration overwritten by oVirt default
#        polling_interval        10
#
        path_selector           "round-robin 0"
        path_grouping_policy    multibus
        path_checker            tur
        rr_min_io_rq            10
        rr_weight               priorities
        failback                immediate
        features                "0"
    }
}
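
To make multipathd pick the file up without a reboot, something along these lines works on CentOS 7 (sketch):

    multipathd -k"reconfigure"   # tell the running daemon to re-read multipath.conf
    multipath -r                 # reload the existing maps
    multipath -ll                # verify the resulting layout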

After activating this config I have this multipath layout:

[root@ov300 etc]# multipath -l
364817197b5dfd0e5538d959702249b1c dm-3 EQLOGIC ,100E-00         
size=1.0T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 7:0:0:0 sde 8:64 active undef  running
  `- 8:0:0:0 sdf 8:80 active undef  running
[root@ov300 etc]# 

NOTE: at this moment the storage is not yet configured as a storage domain.
I also changed /etc/iscsi/iscsid.conf and the sysctl.d parameters as indicated in the EQL doc.
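
For reference, the sysctl part goes into a sysctl.d fragment and gets reloaded; the values below are only placeholders, the real ones are in the Dell paper:

    # /etc/sysctl.d/99-eql-iscsi.conf  (placeholder values, see the Dell paper)
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216

    sysctl --system     # reload all sysctl.d fragments

Note that iscsid.conf changes only take effect at (re)login, so existing sessions have to be logged out and back in (or the node records updated with iscsiadm -m node -o update) to pick them up.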

From an iSCSI initiator point of view:

[root@ov300 ~]# iscsiadm -m session
tcp: [1] 10.10.100.9:3260,1 iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910 (non-flash)
tcp: [2] 10.10.100.9:3260,1 iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910 (non-flash)
[root@ov300 ~]# 

[root@ov300 ~]# iscsiadm -m session -P 1
Target: iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910 (non-flash)
    Current Portal: 10.10.100.41:3260,1
    Persistent Portal: 10.10.100.9:3260,1
        **********
        Interface:
        **********
        Iface Name: ip1p1
        Iface Transport: tcp
        Iface Initiatorname: iqn.1994-05.com.redhat:f2d7fc1e2fc
        Iface IPaddress: 10.10.100.87
        Iface HWaddress: <empty>
        Iface Netdev: <empty>
        SID: 1
        iSCSI Connection State: LOGGED IN
        iSCSI Session State: LOGGED_IN
        Internal iscsid Session State: NO CHANGE
    Current Portal: 10.10.100.42:3260,1
    Persistent Portal: 10.10.100.9:3260,1
        **********
        Interface:
        **********
        Iface Name: ip1p2
        Iface Transport: tcp
        Iface Initiatorname: iqn.1994-05.com.redhat:f2d7fc1e2fc
        Iface IPaddress: 10.10.100.87
        Iface HWaddress: <empty>
        Iface Netdev: <empty>
        SID: 2
        iSCSI Connection State: LOGGED IN
        iSCSI Session State: LOGGED_IN
        Internal iscsid Session State: NO CHANGE
[root@ov300 ~]# 
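
Adding -P 3 also lists the attached SCSI devices, which makes it easy to map each session to its sdX path, e.g.:

    iscsiadm -m session -P 3 | egrep 'Target:|Current Portal:|Attached scsi disk'

should show sde behind one portal and sdf behind the other.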

Do you think it is ok?

Thanks for any comments,

Gianluca
