On Sat, Nov 3, 2018 at 1:36 PM Alan G <alan+ovirt@griff.me.uk> wrote:
I eventually figured out that shared targets were the problem. I'm now using three targets: one for BFS, one for hosted_storage and one for an additional storage domain. This seems to be working fine. However, I've noticed that the hosted_storage domain is only utilising a single path. Is there any way to get hosted_storage working with multipathing (MP)?

It should work the same way you got multiple paths for BFS and the
2nd storage domain.

Did you try to configure iSCSI multipathing? The setting should be available
as a subtab in the DC tab.
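
For example, you can check on the host how many paths the hosted_storage
LUN currently has (the WWID is the one from the getDeviceList output below):

    # paths currently used by the hosted_storage LUN
    multipath -ll 3600a098038304630662b4d612d736762

    # active iSCSI sessions - with iSCSI multipathing you should see one
    # session per portal
    iscsiadm -m session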

Simone can add specific details for hosted engine setup and iSCSI multipathing.
 

The output of getDeviceList is below

BFS - 3600a098038304631373f4d2f70305a6b
hosted_storage - 3600a098038304630662b4d612d736762
2nd data domain - 3600a098038304630662b4d612d736764

[
... 
    {
        "status": "used",
        "vendorID": "NETAPP",
        "GUID": "3600a098038304630662b4d612d736762",
        "capacity": "107374182400",
        "fwrev": "9300",
        "discard_zeroes_data": 0,
        "vgUUID": "CeFXY1-34gB-NJPP-tw18-nZWo-qAWu-6cx82z",
        "pathlist": [
            {
                "connection": "172.31.6.7",
                "iqn": "iqn.1992-08.com.netapp:sn.39d910dede8311e8a98a00a098d7cd76:vs.5",
                "portal": "1030",

Do you have an additional portal defined on the server side for this connection? (See the discovery example after the listing.)
 
                "port": "3260",
                "initiatorname": "default"
            }
        ],
        "pvsize": "106971529216",
        "discard_max_bytes": 0,
        "pathstatus": [
            {
                "capacity": "107374182400",
                "physdev": "sdc",
                "type": "iSCSI",
                "state": "active",
                "lun": "0"
            }
        ],
        "devtype": "iSCSI",
        "physicalblocksize": "4096",
        "pvUUID": "eejJVE-BTns-VgK0-s00D-t1sP-Fc4y-l60XUt",
        "serial": "SNETAPP_LUN_C-Mode_80F0f+Ma-sgb",
        "logicalblocksize": "512",
        "productID": "LUN C-Mode"
    },
    ...
]
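
To see whether the NetApp advertises more than one portal for this target,
something like this on the host should list them (the address is taken from
the pathlist above):

    # list the portals advertised by the target
    iscsiadm -m discovery -t sendtargets -p 172.31.6.7:3260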

Nir
 


---- On Sat, 03 Nov 2018 00:01:02 +0000 Nir Soffer <nsoffer@redhat.com> wrote ----

On Fri, 2 Nov 2018, 20:31 Alan G <alan+ovirt@griff.me.uk> wrote:
I'm setting up a lab with oVirt 4.2. All hosts are diskless and boot from a NetApp using iSCSI (boot from SAN, BFS). All storage domains are also iSCSI, on the same NetApp.

Whenever I put a host into maintenance, vdsm seems to try to unmount all iSCSI partitions, including the OS partition, causing the host to fail.

Is this a supported configuration?

This works (with some issues, see below) for FC, where all LUNs are always connected.

For iSCSI we don't have a way to prevent the disconnect, since we are not aware that you boot from one of the LUNs. I guess we could detect that and avoid the disconnect, but nobody has sent a patch to implement it.

It can work if you serve the boot LUNs from a different portal on the same server. The system will then create an additional iSCSI connection for the oVirt storage domains, and disconnecting from storage will not affect your boot LUN connection.

It can also work if your LUNs look like FC devices - to check this option, can you share the output of:

    vdsm-client Host getDeviceList

On one of the hosts?
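
If it helps, the same output can be narrowed down to the boot device, for
example with jq (the GUID here is the BFS boot LUN from the listing above;
adjust it to your boot device):

    # show only the boot LUN entry from getDeviceList (requires jq)
    vdsm-client Host getDeviceList | jq '.[] | select(.GUID == "3600a098038304631373f4d2f70305a6b")'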

Elad, did we test such setup?

You also need to blacklist the boot LUN in the vdsm configuration - this requires this patch for 4.2:
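
Roughly, assuming the patch adds a blacklist option under the [multipath]
section of /etc/vdsm/vdsm.conf like later vdsm versions do (the exact
section and option name depend on the patch), it would look something like:

    # /etc/vdsm/vdsm.conf - section and option name are an assumption based
    # on the later upstream setting; check the patch for the exact form.
    # The WWID is the BFS boot LUN from the listing above.
    [multipath]
    blacklist = 3600a098038304631373f4d2f70305a6b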

And add multipath configuration for the boot LUN with "no_path_retry queue", to avoid a read-only file system if you lose all paths to storage.
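
A minimal sketch of that multipath configuration, assuming a drop-in file
is used (the file name is just an example; the WWID is the BFS boot LUN
from the listing above):

    # /etc/multipath/conf.d/boot-lun.conf
    # Queue I/O instead of failing it when all paths to the boot LUN are
    # lost, so the root file system does not go read-only.
    multipaths {
        multipath {
            wwid "3600a098038304631373f4d2f70305a6b"
            no_path_retry queue
        }
    }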

Nir