Hello,
I'm going to debug some sporadic events I see on the iSCSI connection from my hypervisors.
Sometimes I get

May 31, 2018, 7:14:51 AM
Storage domain ovsd3750 experienced a high latency of 8.96043 seconds from host ov200. This may cause performance and functional issues. Please consult your Storage Administrator.

Jun 1, 2018, 5:26:25 AM
Storage domain ovsd3750 experienced a high latency of 8.26526 seconds from host ov301. This may cause performance and functional issues. Please consult your Storage Administrator.

Jun 2, 2018, 5:21:37 AM
VDSM ov200 command SpmStatusVDS failed: (-202, 'Sanlock resource read failure', 'IO timeout')
--> it seems to have no impact; after a few seconds the host becomes SPM again

Jun 3, 2018, 7:00:14 AM
Storage domain ovsd3750 experienced a high latency of 6.37818 seconds from host ov300. This may cause performance and functional issues. Please consult your Storage Administrator.

And yesterday a VM running on the ov301 node got paused for a few seconds.

Jun 4, 2018, 7:02:26 AM 
VM dbatest3 has been paused.

Jun 4, 2018, 7:02:26 AM 
VM dbatest3 has been paused due to storage I/O problem.

Jun 4, 2018, 7:02:40 AM 
VM dbatest3 has recovered from paused back to up.

Some questions:

- I'm investigating with the users, but in case it is indeed this VM causing the storage latency, what are my best options to avoid it?
Should I change the disk profile for this particular VM's disks? Or is there anything I can do globally?
Or any setting on the storage domain itself?
What is the best practice? Is there some predefined cap on storage access speed for top I/O-consuming VMs?
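For reference, this is roughly how I'm trying to find the top I/O consumer on the hypervisor side (the domain and device names below are just examples from my setup, and dm-2 is a guess at the multipath device):

```shell
# List running domains on the host (read-only connection, no auth needed)
virsh -r list

# Per-disk I/O counters for a suspect VM (vda is an example device name)
virsh -r domblkstat dbatest3 vda

# Aggregate I/O on the iSCSI multipath device, refreshed every 5 seconds
iostat -xm 5 /dev/dm-2
```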

- How many days of event history are kept by default, and how can I view them from the web admin GUI or by other means? Can I change this default, and how?
The only possibly related parameter I see with the engine-config command is:
EventProcessingPoolSize
(with a value of 10 at this time)
?
Any pointer to configuring the events' history retention settings?
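For the record, this is how I've been looking through engine-config on the engine host (the grep pattern is just my guess at relevant key names):

```shell
# Current value of the only parameter I found
engine-config -g EventProcessingPoolSize

# List all available keys, looking for anything event/audit related
engine-config --list | grep -i -e event -e audit

# If a retention key exists, I assume it would be set like this,
# followed by an engine restart (key name here is hypothetical):
# engine-config -s SomeRetentionKey=30
# systemctl restart ovirt-engine
```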

Thanks in advance,
Gianluca