On Mon, Feb 1, 2021 at 1:55 PM Gianluca Cecchi
<gianluca.cecchi(a)gmail.com> wrote:
On Sat, Jan 30, 2021 at 6:05 PM Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
>
> So you created that extra conf with this content but it didn't work ?
> multipath -v4 could hint you why it was complaining.
>
>
> Best Regards,
> Strahil Nikolov
>
Ok, I missed the surrounding root section:
devices {
}
It seems that we need more examples in the multipath.conf file
installed by vdsm.
Apparently "multipathd show config" didn't complain...
Now I have added that too and it seems to work, thanks for pointing it out.
So in the end I have the default multipath.conf file installed by vdsm (so
without the "# PRIVATE" line)
and this in /etc/multipath/conf.d/eql.conf
devices {
device {
vendor "EQLOGIC"
product "100E-00"
Ben, why is this device missing from multipath builtin devices?
path_selector "round-robin 0"
path_grouping_policy multibus
path_checker tur
rr_min_io_rq 10
rr_weight priorities
failback immediate
features "0"
This is never needed, multipath generates this value.
Ben: please correct me if needed
no_path_retry 16
I don't think you need this, since you should inherit the value from vdsm's
multipath.conf, either from the "defaults" section or from the
"overrides" section.
You need to add no_path_retry here only if you want to use another value
and don't want to use vdsm's default.
Note that if you use your own value, you need to match it to the sanlock
io_timeout (see my note after the config below).
See this document for more info:
https://github.com/oVirt/vdsm/blob/master/doc/io-timeouts.md
}
}
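About matching the sanlock io_timeout: if I read io-timeouts.md correctly,
the vdsm defaults are tuned so that the multipath queuing time
(no_path_retry 16 * polling_interval 5 = 80 seconds) matches the sanlock
lease expiration time (8 * io_timeout, with the default io_timeout of 10
seconds). So if you pick a different no_path_retry you need to redo this
math against your sanlock io_timeout. Please correct me if I got any of
the numbers wrong.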
I recreated the initrd, rebooted the host and activated it without further
problems. And "multipathd show config" confirms the new settings.
Yes, this is the recommended way to configure multipath, thanks Strahil for the
good advice!
But I still see this:
# multipath -l
36090a0c8d04f21111fc4251c7c08d0a3 dm-13 EQLOGIC,100E-00
size=2.4T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 16:0:0:0 sdc 8:32 active undef running
`- 18:0:0:0 sde 8:64 active undef running
36090a0d88034667163b315f8c906b0ac dm-12 EQLOGIC,100E-00
size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 15:0:0:0 sdb 8:16 active undef running
`- 17:0:0:0 sdd 8:48 active undef running
which makes me think I'm not using the no_path_retry setting but
queue_if_no_path... I could be wrong anyway.
No, this is expected. What it means, if I understand multipath
behavior correctly,
is that the device queues I/O for no_path_retry * polling_interval seconds
when all paths have failed. After that the device will fail all pending and
new I/O until at least one path is recovered.
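For example, with no_path_retry 16 and a polling_interval of 5 seconds
(the vdsm default, if I remember correctly), that is about 16 * 5 = 80
seconds of queuing before I/O starts to fail.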
How can I verify it for sure from the config (without dropping the paths,
at least for the moment)?
Is there any option with the multipath and/or dmsetup commands?
"multipathd show config" -> find your device section, it will show the
current value of no_path_retry.
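For example (just a sketch, adjust the number of context lines as needed):

# multipathd show config | grep -A 15 'vendor "EQLOGIC"'

and look for the no_path_retry line in the matching device section.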
Nir