[ovirt-users] oVirt 4.0 and multipath.conf for HPE 3PAR. What do you advise?
Nir Soffer
nsoffer at redhat.com
Sun Aug 14 11:22:18 UTC 2016
On Sat, Aug 13, 2016 at 4:03 PM, <aleksey.maksimov at it-kb.ru> wrote:
> Hello, oVirt gurus!
>
> I installed oVirt 4.0 on several HP ProLiant DL360 G5 servers with QLogic/Emulex 4G dual-port HBAs.
> These servers have a multipath connection to an HP 3PAR 7200 storage system.
>
> Before installing oVirt on the servers, I set up the configuration file /etc/multipath.conf according to the vendor recommendations in the document "HPE 3PAR Red Hat Enterprise Linux and Oracle Linux Implementation Guide" (emr_na-c04448818-9.pdf):
> https://blog.it-kb.ru/2016/06/12/configuring-device-mapper-multipathing-dm-multipat-mpio-in-centos-linux-7-2-with-emulex-and-qlogic-fc-hba-connecting-over-san-storage-hp-3par-7200-3par-os-3-2-2/
>
> Before installing oVirt my multipath.conf was the following:
>
> ---> start of /etc/multipath.conf <-----
>
> defaults {
> polling_interval 10
This will cause delays in path checking; it is better to use the default from the vdsm configuration.
> user_friendly_names no
> find_multipaths yes
This ensures that devices with a single path will not be detected by oVirt unless
the device is listed in the "multipaths" section. This means you will have to
update multipath.conf manually on all hosts each time you want to add
a new device.
It is recommended to keep the default from the vdsm configuration.
> }
> blacklist {
> devnode "^cciss\/c[0-9]d[0-9]*"
Not sure why you need this (it blacklists local HP Smart Array cciss devices), but it seems harmless.
> }
> multipaths {
> multipath {
> wwid 360002ac000000000000000160000cec9
> alias 3par-vv2
> }
> multipath {
> wwid 360002ac000000000000000170000cec9
> alias 3par-vv1
> }
> }
> devices {
> device {
> vendor "3PARdata"
> product "VV"
> path_grouping_policy group_by_prio
> path_selector "round-robin 0"
> path_checker tur
> features "0"
> hardware_handler "1 alua"
> prio alua
> failback immediate
> rr_weight uniform
> no_path_retry 18
This means 18 retries, and with a polling interval of 10 seconds, a 180-second
timeout when all paths have become faulty. This will cause long timeouts in
various vdsm operations, leading to timeouts on the engine side, and it also
increases the chance of a host becoming non-operational because of delays
in storage monitoring.
It is recommended to use a small number of retries, such as 4, to avoid long
delays in vdsm.
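For comparison, with the polling_interval of 5 seconds that vdsm configures
(see the generated file below), no_path_retry 4 means roughly 4 * 5 = 20 seconds
of queuing before I/O fails when all paths are down, which vdsm and engine
monitoring can tolerate.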
> rr_min_io_rq 1
> detect_prio yes
> }
> }
> ---> end of /etc/multipath.conf <-----
>
> But after installing oVirt, the multipath.conf file was changed to:
>
> ---> start of /etc/multipath.conf <-----
> defaults {
> polling_interval 5
> no_path_retry fail
You can change this to a small number, like 4, to match the rest of the configuration.
> user_friendly_names no
> flush_on_last_del yes
> fast_io_fail_tmo 5
> dev_loss_tmo 30
> max_fds 4096
You should keep these values, unless the storage vendor has
a good reason to change them.
> }
> devices {
> device {
> all_devs yes
> no_path_retry fail
I would change this to:
no_path_retry 4
> }
> }
> ---> end of /etc/multipath.conf <-----
>
> Now I'm not sure that this configuration is optimal. What do you advise?
1. Add your changes to the file created by vdsm
2. Update no_path_retry to a small number (e.g. 4)
3. Add "# VDSM PRIVATE" as the second line - the first 2 lines should be:
# VDSM REVISION 1.2
# VDSM PRIVATE
With the "# VDSM PRIVATE" tag, vdsm will never overwrite multipath.conf.
You need to update this file on all hosts manually.
4. Copy multipath.conf to all hosts
5. Reload multipathd on all hosts (see the sketch after this list):
systemctl reload multipathd
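For example, a merged multipath.conf, based on the file created by vdsm plus your
3PARdata section from the vendor guide, could look roughly like this (only a sketch -
add back your blacklist and multipaths sections if you still need them):

# VDSM REVISION 1.2
# VDSM PRIVATE

defaults {
    polling_interval     5
    no_path_retry        4
    user_friendly_names  no
    flush_on_last_del    yes
    fast_io_fail_tmo     5
    dev_loss_tmo         30
    max_fds              4096
}

devices {
    device {
        # applies to all devices (from the vdsm-generated file)
        all_devs             yes
        no_path_retry        4
    }
    device {
        # 3PAR settings from the vendor implementation guide
        vendor               "3PARdata"
        product              "VV"
        path_grouping_policy group_by_prio
        path_selector        "round-robin 0"
        path_checker         tur
        features             "0"
        hardware_handler     "1 alua"
        prio                 alua
        failback             immediate
        rr_weight            uniform
        no_path_retry        4
        rr_min_io_rq         1
        detect_prio          yes
    }
}

Steps 4 and 5 can be scripted from a machine with ssh access to the hosts
(host1, host2, host3 are placeholders for your actual host names):

for host in host1 host2 host3; do
    scp /etc/multipath.conf root@$host:/etc/multipath.conf
    ssh root@$host systemctl reload multipathd
done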
Nir