<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body>
<div>Please note that it's necessary to add the magic line '# VDSM PRIVATE' as the second line of /etc/multipath.conf. Otherwise, vdsm will overwrite your settings.</div>
<div>Thus, /etc/multipath.conf should start with the following two lines:</div>
<div># VDSM REVISION 1.3</div>
<div># VDSM PRIVATE</div>
<div><br>
</div>
<div>On Mon, 2016-05-30 at 22:09 +0300, Nir Soffer wrote:</div>
<blockquote type="cite">
<pre>But you may modify the multipath configuration on the host.
We now use this multipath configuration (/etc/multipath.conf):
# VDSM REVISION 1.3
defaults {
    polling_interval     5
    no_path_retry        fail
    user_friendly_names  no
    flush_on_last_del    yes
    fast_io_fail_tmo     5
    dev_loss_tmo         30
    max_fds              4096
    deferred_remove      yes
}

devices {
    device {
        all_devs         yes
        no_path_retry    fail
    }
}
This enforces failing of io requests on devices that would by default queue
such requests for a long or unlimited time. Queuing requests is very bad for
vdsm: it causes various commands to block for minutes during a storage outage,
failing various flows in vdsm and the UI.
See <a href="https://bugzilla.redhat.com/880738">https://bugzilla.redhat.com/880738</a>
However, in your case, using queuing may be the smoothest way to do the
switch from one storage to another.
You may try this setting:
devices {
    device {
        all_devs         yes
        no_path_retry    30
    }
}
This will queue io requests for 30 path-checker retries (roughly 150 seconds
with polling_interval 5) before failing.
Using this would normally be a bad idea with vdsm, since during a storage
outage vdsm may block for that long when no path is available, and it is not
designed for this behavior; but blocking from time to time for a short while
should be ok.
I think that modifying the configuration and reloading the multipathd service
should be enough to apply the new settings, but I'm not sure whether this
changes existing sessions or open devices.
</pre>
</blockquote>
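<div>For completeness: after editing /etc/multipath.conf, reloading multipathd should pick up the new settings. A sketch, assuming a systemd host (command spellings vary by distribution and multipath-tools version):</div>
<pre>
# Ask multipathd to re-read /etc/multipath.conf
systemctl reload multipathd

# Equivalently, via the multipathd interactive console:
multipathd -k'reconfigure'

# Verify the active settings on the multipath devices,
# e.g. whether the features line shows queue_if_no_path
multipath -ll
</pre>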
</body>
</html>