For those still interested, the timeout issue doesn't occur on the multipath side, nor in oVirt, but in the iscsid configuration -

To shorten the timeout and fail a path faster, edit

/etc/iscsi/iscsid.conf

Change the value

node.session.timeo.replacement_timeout = 120

to something more useful; I changed it to "10".


Reloading iscsid's config does nothing; you'll have to restart the host for it to take effect.
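
For reference, the edited line in /etc/iscsi/iscsid.conf would simply read:

node.session.timeo.replacement_timeout = 10

As a possible alternative to a full host restart (untested here, so treat it as a sketch; <target-iqn> and <portal-ip> are placeholders), the stored node records can be updated and the sessions logged out and back in so the new value is applied at login:

# update the stored node record for a given target/portal
iscsiadm -m node -T <target-iqn> -p <portal-ip> -o update \
    -n node.session.timeo.replacement_timeout -v 10
# log the session out and back in to pick up the new value
iscsiadm -m node -T <target-iqn> -p <portal-ip> -u
iscsiadm -m node -T <target-iqn> -p <portal-ip> -l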

Alex



On 4 March 2013 17:11, Alex Leonhardt <alex.tuxx@gmail.com> wrote:
Ok, so I finally got this working. Does anyone know how to change the timeout for multipathd from, say, 120 seconds to ~10 seconds?

==> /var/log/messages <==
Mar  4 17:09:12 TESTHV01 kernel: session5: session recovery timed out after 120 secs
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline device
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline device
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline device
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline device
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline device
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline device
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline device
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] Unhandled error code
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] CDB: Read(10): 28 00 00 04 08 00 00 00 08 00
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline device
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] Unhandled error code
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00
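
For what it's worth, the "session recovery timed out after 120 secs" lines above are the iSCSI replacement timeout (node.session.timeo.replacement_timeout, default 120) rather than a multipathd setting as such. One way to check the value a given node record currently carries, as a sketch with <target-iqn> and <portal-ip> as placeholders:

iscsiadm -m node -T <target-iqn> -p <portal-ip> | grep replacement_timeout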


Alex




On 4 March 2013 15:35, Alex Leonhardt <alex.tuxx@gmail.com> wrote:
Hi,

I just tested this with the following config:


<target iqn.2013-02.local.vm:iscsi.lun1>
    <backing-store /vol/scsi.img>
            vendor_id ISCSI-MULTIPATH
            scsi_id MULTIPATHTEST
            scsi_sn 990000100001
            lun 1
    </backing-store>
</target>


However, upon discovery/login, the LUN ID was again:

1IET_0000100001
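
One way to check whether tgtd is actually exporting the scsi_id / scsi_sn from targets.conf (rather than falling back to the IET defaults) is to dump its running configuration; just a suggestion, not something re-run against the config above:

tgtadm --lld iscsi --mode target --op show

If the SCSI ID / SCSI SN listed there for lun 1 are still the defaults, the running target most likely never picked up the edited targets.conf.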


Alex




On 3 March 2013 18:34, Ayal Baron <abaron@redhat.com> wrote:


----- Original Message -----
>
>
>
>
>
> Hi there,
>
> I was doing some testing around oVirt and iSCSI and found an issue
> whereby, when you use "dd" to create "backing-stores" for iSCSI and
> you point oVirt at it to discover & log in, it thinks the LUN ID is
> the same although the target is different, and adds additional paths
> to the config (automagically?), bringing down the iSCSI storage
> domain.

There is no question about the behaviour; it's not a bug, that is the way multipathing works (it has nothing to do with oVirt). The GUID of a LUN has to be unique. multipathd, seeing the same LUN ID across multiple targets, assumes it's the same LUN with multiple paths; that's how you get redundancy and load balancing.
Why tgtd doesn't take care of this built in I could never grok, but what you need to do is edit your targets.conf and add the scsi_id and scsi_sn fields.

Example:
<target MasterBackup>
    allow-in-use yes
    <backing-store /dev/vg0/MasterBackup>
        lun 1
        scsi_id MasterBackup
        scsi_sn 444444444401
    </backing-store>
</target>
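
As a hedged sketch of the workflow around that (device path and names are illustrative): after editing targets.conf, reload the target definitions and then check, from the initiator side, which WWID multipathd will actually see:

# on the target host: reload targets.conf (may need --force or a tgtd
# restart if the target has active sessions)
tgt-admin --update ALL

# on the initiator, after logging out and back in: show the WWID that
# multipath uses to group paths (scsi_id may live in /sbin or /lib/udev
# depending on the distro)
/lib/udev/scsi_id --whitelisted --device=/dev/sdd

If two targets still report the same WWID there, multipathd will keep folding them into one multipath map.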

>
> See attached screenshot of what I got when trying to add a "new iscsi san
> storage domain" to oVirt. The Storage Domain is now down and I
> cannot get rid of the config (???). How do I force it to log out of
> the targets?
>
>
> Also, anyone know how to deal with the duplicate LUN ID issue ?
>
>
> Thanks
> Alex
>
>
>
>
>
> --
>
>
>
> | RHCE | Senior Systems Engineer | www.vcore.co |
> | www.vsearchcloud.com |
>
> _______________________________________________
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>



--

| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |



--

| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |



--
| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |