Ok, so I finally got this working. Does anyone know how to change the timeout for multipathd from, say, 120 seconds to ~10 seconds?

Alex
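The 120 seconds in the log below is open-iscsi's session recovery timeout rather than a multipathd setting. A minimal sketch of lowering it, assuming a stock open-iscsi setup (the portal address is a placeholder):

# /etc/iscsi/iscsid.conf -- applies to sessions logged in after the change
node.session.timeo.replacement_timeout = 10

# Push the new value into an existing node record (IQN from this thread,
# placeholder portal):
iscsiadm -m node -T iqn.2013-02.local.vm:iscsi.lun1 -p <portal-ip> \
  -o update -n node.session.timeo.replacement_timeout -v 10

# Verify on a running session:
cat /sys/class/iscsi_session/session*/recovery_tmo

How long multipathd itself keeps queueing I/O after all paths are gone is controlled separately by no_path_retry in /etc/multipath.conf.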
==> /var/log/messages <==
Mar 4 17:09:12 TESTHV01 kernel: session5: session recovery timed out after 120 secs
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline device
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline device
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline device
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline device
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline device
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline device
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline device
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] Unhandled error code
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] CDB: Read(10): 28 00 00 04 08 00 00 00 08 00
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline device
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] Unhandled error code
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00
On 4 March 2013 15:35, Alex Leonhardt <alex.tuxx@gmail.com> wrote:
Hi,

I just tested this with this config:

<target iqn.2013-02.local.vm:iscsi.lun1>
    <backing-store /vol/scsi.img>
        vendor_id ISCSI-MULTIPATH
        scsi_id MULTIPATHTEST
        scsi_sn 990000100001
        lun 1
    </backing-store>
</target>

However, upon discovery / login, the LUN ID was again: 1IET_0000100001

Alex
On 3 March 2013 18:34, Ayal Baron <abaron@redhat.com> wrote:
There is no question about the behaviour; it's not a bug, it's the way multipathing works (and has nothing to do with oVirt). The GUID of a LUN has to be unique. When multipathd sees the same LUN ID across multiple targets, it assumes it's the same LUN reachable over multiple paths; that is exactly how you get redundancy and load balancing.
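To see the identifier multipathd keys on, you can query each path device directly. A sketch; the device names are examples, and the scsi_id binary lives at /lib/udev/scsi_id on EL6:

# Print the WWID of each iSCSI path device (example device names):
/lib/udev/scsi_id --whitelisted --device=/dev/sdc
/lib/udev/scsi_id --whitelisted --device=/dev/sdd

# If both print the same ID, multipathd will fold them into one map:
multipath -ll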
----- Original Message -----
>
> Hi there,
>
> I was doing some testing around oVirt and iSCSI and found an issue:
> when you use "dd" to create backing-stores for iSCSI and point oVirt
> at them to discover & login, oVirt sees the same LUN ID even though
> the targets are different, and (automagically?) adds the extra
> targets as additional paths, bringing down the iSCSI storage domain.
Why tgtd doesn't take care of this out of the box I've never understood, but what you need to do is edit your targets.conf and add scsi_id and scsi_sn fields to each backing-store so every LUN gets a unique identity.
Example:
<target MasterBackup>
    allow-in-use yes
    <backing-store /dev/vg0/MasterBackup>
        lun 1
        scsi_id MasterBackup
        scsi_sn 444444444401
    </backing-store>
</target>
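After editing targets.conf, reload the definitions and confirm what tgtd now reports. A sketch; --force is needed if initiators are still connected, which is what allow-in-use above permits:

tgt-admin --update ALL --force

# Show the per-LUN SCSI ID / SN as initiators will see them:
tgtadm --lld iscsi --mode target --op show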
>
> See attached screenshot of what I got when trying to add a "new iscsi
> san storage domain" to oVirt. The storage domain is now down and I
> cannot get rid of the config (???). How do I force it to log out of
> the targets?
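To force the logout, iscsiadm can do it per node record. A sketch; the IQN and portal are placeholders for your target:

# List active sessions first:
iscsiadm -m session

# Log out of the target and remove its node record:
iscsiadm -m node -T <target-iqn> -p <portal-ip> -u
iscsiadm -m node -T <target-iqn> -p <portal-ip> -o delete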
>
>
> Also, anyone know how to deal with the duplicate LUN ID issue ?
>
>
> Thanks
> Alex
>
> --
> | RHCE | Senior Systems Engineer | www.vcore.co |
> | www.vsearchcloud.com |
>
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>