Hello,
on my two original 4.1 hosts I got storage errors on RDBMS VMs when restoring or doing heavy I/O.
My storage domain is FC SAN based.
I solved the problem by putting these conservative settings into /etc/vdsm/vdsm.conf.d:

cat 50_thin_block_extension_rules.conf
[irs]

# Together with volume_utilization_chunk_mb, set the minimal free
# space before a thin provisioned block volume is extended. Use lower
# values to extend earlier.
volume_utilization_percent = 25

# Size of extension chunk in megabytes, and together with
# volume_utilization_percent, set the free space limit. Use higher
# values to extend in bigger chunks.
volume_utilization_chunk_mb = 4096
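A quick sanity check of what these numbers mean, assuming (as the config comments suggest) that the free-space watermark is chunk_mb * (100 - percent) / 100, so lowering the percent and raising the chunk both make extension happen earlier and in bigger steps:

```shell
# Hedged sketch, not taken from vdsm source: compute the assumed
# free-space watermark below which vdsm requests an LV extension.
percent=25
chunk_mb=4096
watermark_mb=$(( chunk_mb * (100 - percent) / 100 ))
echo "extend when free space < ${watermark_mb} MB, in ${chunk_mb} MB chunks"

# vdsm defaults (50% / 1024 MB) for comparison
default_watermark_mb=$(( 1024 * (100 - 50) / 100 ))
echo "default watermark: ${default_watermark_mb} MB"
```

With these settings the host should ask for more space while roughly 3 GB are still free, instead of the default ~512 MB, which leaves much more headroom under heavy I/O.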

Later I added a third host, and I wrongly assumed that an equivalent vdsm configuration would have been deployed by "New Host" from the GUI...
But that is not so.
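In the meantime the workaround seems to be recreating the drop-in on each new host by hand. A minimal sketch, assuming the drop-in only needs the two `[irs]` keys above and that vdsmd must be restarted to pick it up; the target directory is parameterised here for demonstration, on a real hypervisor it would be /etc/vdsm/vdsm.conf.d:

```shell
# Sketch: recreate the vdsm drop-in manually on a newly added host.
# TARGET_DIR defaults to a local path for demonstration; on the
# hypervisor itself it would be /etc/vdsm/vdsm.conf.d.
TARGET_DIR="${TARGET_DIR:-./vdsm.conf.d}"
mkdir -p "$TARGET_DIR"
cat > "$TARGET_DIR/50_thin_block_extension_rules.conf" <<'EOF'
[irs]
volume_utilization_percent = 25
volume_utilization_chunk_mb = 4096
EOF
# systemctl restart vdsmd   # uncomment when running on the host itself
```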
Yesterday, with a VM running on this third hypervisor, I got the same messages I had experienced before; several cycles of these

VM dbatest6 has recovered from paused back to up.
VM dbatest6 has been paused due to no Storage space error.
VM dbatest6 has been paused.

over a 2-hour period.

Two questions:
- Why isn't the hypervisor configuration, and in particular the vdsm one, aligned when adding a host? Is there any reason in general for having different configs on hosts of the same cluster?
- The host that was running the VM was not the SPM.
Which host is in charge of applying the volume extension settings when a VM's I/O load requires it because a thin provisioned disk is in use?
Based on what I saw yesterday, I presume it is not the SPM but the host running the VM...

Thanks,
Gianluca