On Mon, Jul 23, 2018 at 9:35 PM Ryan Bullock <rrb3942@gmail.com> wrote:

Hello All,

We recently stood up a new oVirt install backed by an iSCSI SAN and it has been working great, but there are a few quirks I am trying to iron out.

We have run into an issue where, when we fail over our SAN (for maintenance or otherwise), any VM with a Direct LUN gets paused and doesn’t resume. VMs without a Direct LUN never paused.


I guess the other VMs did get paused, but they were resumed
automatically by the system, so from your point of view, they did 
not "pause".

You can check the vdsm log to see whether the other VMs did pause and
resume. I'm not sure the engine UI reports all pause and resume events.
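
For example (the exact wording of the log messages varies between vdsm
versions, so treat the pattern as a rough filter, not an exact match):

# grep -iE 'pause|resume' /var/log/vdsm/vdsm.log

Run this on the host that was running the VMs during the failover.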
 

Digging through posts on this list and reading some bug reports, it seems like this is a known quirk with how oVirt handles Direct LUNs (it doesn't monitor the LUNs, so it won't resume the VM).


Right.

Can you file a bug for supporting this?

Vdsm does monitor multipath events for all LUNs, but they are used only
for reporting purposes, see:
https://ovirt.org/develop/release-management/features/storage/multipath-events/
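
These events come from device-mapper uevents; if you want to watch them
yourself on a host, something like the following should show them
(PATH_FAILED / PATH_REINSTATED are the DM_ACTION values I remember; see
the feature page above for what vdsm actually reports):

# udevadm monitor --udev --property --subsystem-match=block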

We could use these events to resume VMs using multipath devices that
became available again. This functionality will be even more important in
the next version, since we plan to move to a LUN-per-disk model.
 

To get the VMs to automatically restart, I have attached VM leases to them, and that seems to work fine. It's not as nice as a pause and resume, but it minimizes downtime.


Cool!
 

What I’m trying to understand is why the VMs with Direct LUNs paused while the ones without didn’t. My only speculation is that since the non-Direct disks use LVM on top of iSCSI, LVM is adding its own layer of timeouts that masks the outage?


I don't know of any additional retry mechanism in the data path for
LVM-based disks. I think we use the same multipath failover behavior.
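
One way to check is to compare the multipath maps behind a Direct LUN and
behind a storage domain LUN; the queueing policy shows up in the features
line of the output (for example "queue_if_no_path" while queueing is
enabled):

# multipath -ll <wwid-of-the-lun>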
 

My other question is: how can I keep my VMs with Direct LUNs from pausing during short outages? Can I add configuration to my multipath.conf for just the WWIDs of my Direct LUNs to increase ‘no_path_retry’ and prevent the VMs from pausing in the first place? I know that in general you don’t want to increase ‘no_path_retry’ because it can cause timeout issues with VDSM and SPM operations (LVM changes, etc.). But in the case of a Direct LUN, would it cause any problems?


You can add a drop-in multipath configuration that changes no_path_retry
for a specific device type, or for a specific multipath (by WWID).

Increasing no_path_retry will cause larger delays when vdsm tries to
access the LUNs via lvm commands, but the delay should occur only on
the first access when a LUN is not available.

Here is an example drop-in file:

# cat /etc/multipath/conf.d/my.conf
devices {
    device {
        vendor "my-vendor"
        product "my-product"
        # with the default 5 second polling interval, this queues I/O
        # for 60 seconds when no path is available, before failing
        no_path_retry 12
    }
}

multipaths {
    multipath {
        wwid "my-wwidr"
        no_path_retry 12
    }
}

See "man multipath.conf" for more info.

Nir