oVirt 4.0 and multipath.conf for HPE 3PAR. What do you advise?

Hello, oVirt gurus!

I installed oVirt 4.0 on several HP ProLiant DL360 G5 servers with QLogic/Emulex 4G dual-port HBAs. These servers have a multipath connection to an HP 3PAR 7200 storage system.

Before installing oVirt on the servers, I set up the configuration file /etc/multipath.conf according to the vendor recommendations in "HPE 3PAR Red Hat Enterprise Linux and Oracle Linux Implementation Guide" (emr_na-c04448818-9.pdf): https://blog.it-kb.ru/2016/06/12/configuring-device-mapper-multipathing-dm-m...

Before installing oVirt my multipath.conf was:

---> start of /etc/multipath.conf <-----
defaults {
    polling_interval 10
    user_friendly_names no
    find_multipaths yes
}

blacklist {
    devnode "^cciss\/c[0-9]d[0-9]*"
}

multipaths {
    multipath {
        wwid 360002ac000000000000000160000cec9
        alias 3par-vv2
    }
    multipath {
        wwid 360002ac000000000000000170000cec9
        alias 3par-vv1
    }
}

devices {
    device {
        vendor "3PARdata"
        product "VV"
        path_grouping_policy group_by_prio
        path_selector "round-robin 0"
        path_checker tur
        features "0"
        hardware_handler "1 alua"
        prio alua
        failback immediate
        rr_weight uniform
        no_path_retry 18
        rr_min_io_rq 1
        detect_prio yes
    }
}
---> end of /etc/multipath.conf <-----

But after installing oVirt, multipath.conf was changed to:

---> start of /etc/multipath.conf <-----
defaults {
    polling_interval 5
    no_path_retry fail
    user_friendly_names no
    flush_on_last_del yes
    fast_io_fail_tmo 5
    dev_loss_tmo 30
    max_fds 4096
}

devices {
    device {
        all_devs yes
        no_path_retry fail
    }
}
---> end of /etc/multipath.conf <-----

Now I'm not sure that this configuration is optimal. What do you advise?

Changing multipath.conf by hand is not supported, since this file is managed by VDSM. It should work well for the oVirt use case. Configuration should be changed via the manager if needed, not by editing the file directly.

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road, Building A, 4th floor
Ra'anana, Israel 4350109
Tel: +972 (9) 7692306 / 8272306
Email: ydary@redhat.com
IRC: ydary

On Sat, Aug 13, 2016 at 4:03 PM, <aleksey.maksimov@it-kb.ru> wrote:

On Sat, Aug 13, 2016 at 4:03 PM, <aleksey.maksimov@it-kb.ru> wrote:
Hello, oVirt gurus!

I installed oVirt 4.0 on several HP ProLiant DL360 G5 servers with QLogic/Emulex 4G dual-port HBAs. These servers have a multipath connection to an HP 3PAR 7200 storage system.

Before installing oVirt on the servers, I set up /etc/multipath.conf according to the vendor recommendations in "HPE 3PAR Red Hat Enterprise Linux and Oracle Linux Implementation Guide" (emr_na-c04448818-9.pdf): https://blog.it-kb.ru/2016/06/12/configuring-device-mapper-multipathing-dm-m...

Before installing oVirt my multipath.conf was:
---> start of /etc/multipath.conf <-----
defaults {
    polling_interval 10
This will cause delays in path checking; better to use the default from the vdsm configuration.
    user_friendly_names no
    find_multipaths yes
This ensures that devices with a single path will not be detected by oVirt unless the device is listed in the "multipaths" section. That means you will have to update multipath.conf manually on all hosts each time you want to add a new device. It is recommended to keep the default from vdsm.conf.
}

blacklist {
    devnode "^cciss\/c[0-9]d[0-9]*"
Not sure why you need this, but it seems harmless.
}

multipaths {
    multipath {
        wwid 360002ac000000000000000160000cec9
        alias 3par-vv2
    }
    multipath {
        wwid 360002ac000000000000000170000cec9
        alias 3par-vv1
    }
}

devices {
    device {
        vendor "3PARdata"
        product "VV"
        path_grouping_policy group_by_prio
        path_selector "round-robin 0"
        path_checker tur
        features "0"
        hardware_handler "1 alua"
        prio alua
        failback immediate
        rr_weight uniform
        no_path_retry 18
This means 18 retries, and with a polling interval of 10 seconds, a 180-second timeout when all paths have become faulty. This will cause long timeouts in various vdsm operations, leading to timeouts on the engine side, and also increases the chance of a host becoming non-operational because of delays in storage monitoring. It is recommended to use a small number of retries, like 4, to avoid long delays in vdsm.
        rr_min_io_rq 1
        detect_prio yes
    }
}
---> end of /etc/multipath.conf <-----
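The retry/timeout arithmetic in the comment above can be checked quickly. This is a simplified sketch: the worst-case failover delay is roughly no_path_retry multiplied by polling_interval, though real multipathd timing also depends on the path checker.

```shell
# Rough worst-case delay before I/O fails when all paths are down:
# approximately no_path_retry * polling_interval (seconds).
echo "vendor config: $((18 * 10)) seconds"   # no_path_retry 18, polling_interval 10
echo "vdsm-style:    $((4 * 5)) seconds"     # no_path_retry 4, polling_interval 5
```

The 180-second figure is what causes the vdsm and engine timeouts described above; the smaller values keep the failure window short enough for storage monitoring.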
But after installing oVirt, multipath.conf was changed to:
---> start of /etc/multipath.conf <-----
defaults {
    polling_interval 5
    no_path_retry fail
You can change this to a small number like 4, to match the other configuration.
    user_friendly_names no
    flush_on_last_del yes
    fast_io_fail_tmo 5
    dev_loss_tmo 30
    max_fds 4096
You should keep these values, unless the storage vendor has a good reason to change them.
}

devices {
    device {
        all_devs yes
        no_path_retry fail
I would change this to:

    no_path_retry 4
    }
}
---> end of /etc/multipath.conf <-----
Now I'm not sure that this configuration is optimal. What do you advise?
1. Add your changes to the file created by vdsm.
2. Update no_path_retry to a small number (e.g. 4).
3. Add "# VDSM PRIVATE" on the second line, so the first two lines are:

   # VDSM REVISION 1.2
   # VDSM PRIVATE

   With the "# VDSM PRIVATE" tag, vdsm will never overwrite multipath.conf. You need to update this file on all hosts manually.
4. Copy multipath.conf to all hosts.
5. Reload multipathd on all hosts:

   systemctl reload multipathd

Nir
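The tagging step above can be sketched as a small shell session. This operates on a hypothetical temporary copy rather than the real /etc/multipath.conf, and the file contents are a minimal example, not a complete configuration:

```shell
# Sketch: tag a vdsm-generated multipath.conf as private so vdsm leaves it alone.
# Working on a temporary copy; on a real host the file is /etc/multipath.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
# VDSM REVISION 1.2
defaults {
    no_path_retry 4
}
EOF
# Insert "# VDSM PRIVATE" as the second line, right after the revision header
sed -i '1a # VDSM PRIVATE' "$conf"
head -n 2 "$conf"
# Then distribute to each host and reload multipathd, e.g.:
#   scp "$conf" root@host:/etc/multipath.conf
#   ssh root@host systemctl reload multipathd
rm -f "$conf"
```

The `head -n 2` check confirms the two header lines are in the order vdsm expects before the file is copied anywhere.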

Nir Soffer, thank you very much for your explanation. The trick with "# VDSM PRIVATE" works great.

14.08.2016, 14:22, "Nir Soffer" <nsoffer@redhat.com>:
participants (3)
- aleksey.maksimov@it-kb.ru
- Nir Soffer
- Yaniv Dary