[Users] ovirt / 2 iscsi storage domains / same LUN IDs

Hi there,

I was doing some testing around oVirt and iSCSI and found an issue: when you use "dd" to create backing-stores for iSCSI and point oVirt at them to discover and log in, it thinks the LUN ID is the same even though the target is different, and adds the additional paths to the config (automagically?), bringing down the iSCSI storage domain.

See the attached screenshot of what I got when trying to add a "new iSCSI SAN storage domain" to oVirt. The storage domain is now down and I cannot get rid of the config. How do I force it to log out of the targets?

Also, does anyone know how to deal with the duplicate LUN ID issue?

Thanks,
Alex

--
| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |
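(In case it helps anyone hitting the same thing, a sketch of the manual cleanup from the hypervisor side; the IQN, portal and map name below are placeholders for whatever your own "iscsiadm -m session" and "multipath -ll" report, and this assumes no other storage domain still uses those sessions:)

# list active sessions to find the stale target's IQN and portal
iscsiadm -m session

# log out of the stale target and delete its node record
iscsiadm -m node -T iqn.2013-02.example:target1 -p 192.168.0.10:3260 -u
iscsiadm -m node -T iqn.2013-02.example:target1 -p 192.168.0.10:3260 -o delete

# flush the now-unused multipath map so the duplicate WWID disappears
multipath -f 1IET_00010001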

Another screenshot of how confused it can get.

Alex

FWIW, restarting the HV "resolves" the issue and brings the storage domain back up, but I don't know whether that's because that's where the target (and iSCSI initiator) runs, or whether vdsmd then clears its cache / routes to the target(s)?

Alex

OK, so it looks like oVirt is creating PVs, VGs, and LVs associated with an iSCSI disk ... now, when I try to add the other storage with the same LUN ID, this is what it looks like:

[root@TESTHV01 ~]# pvs
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  WARNING: Volume Group 7d0f78ff-aa25-4b64-a2ea-c8a65beda616 is not consistent
  PV                        VG                                   Fmt  Attr PSize   PFree
  /dev/mapper/1IET_00010001 7d0f78ff-aa25-4b64-a2ea-c8a65beda616 lvm2 a--   68.00g 54.12g
  /dev/sda2                 vg_root                              lvm2 a--   78.12g      0
  /dev/sda3                 vg_vol                               lvm2 a--  759.26g      0

[root@TESTHV01 ~]# vgs
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  WARNING: Inconsistent metadata found for VG 7d0f78ff-aa25-4b64-a2ea-c8a65beda616 - updating to use version 20
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  Automatic metadata correction failed
  Recovery of volume group "7d0f78ff-aa25-4b64-a2ea-c8a65beda616" failed.
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  VG      #PV #LV #SN Attr   VSize   VFree
  vg_root   1   2   0 wz--n-  78.12g    0
  vg_vol    1   1   0 wz--n- 759.26g    0

[root@TESTHV01 ~]# lvs
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  WARNING: Inconsistent metadata found for VG 7d0f78ff-aa25-4b64-a2ea-c8a65beda616 - updating to use version 20
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  Automatic metadata correction failed
  Recovery of volume group "7d0f78ff-aa25-4b64-a2ea-c8a65beda616" failed.
  Skipping volume group 7d0f78ff-aa25-4b64-a2ea-c8a65beda616
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  LV      VG      Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
  lv_root vg_root -wi-ao--  68.36g
  lv_swap vg_root -wi-ao--   9.77g
  lv_vol  vg_vol  -wi-ao-- 759.26g

versus, for comparison:

[root@TESTHV01 ~]# pvs
  PV                        VG                                   Fmt  Attr PSize   PFree
  /dev/mapper/1IET_00010001 7d0f78ff-aa25-4b64-a2ea-c8a65beda616 lvm2 a--   68.00g 54.12g
  /dev/sda2                 vg_root                              lvm2 a--   78.12g      0
  /dev/sda3                 vg_vol                               lvm2 a--  759.26g      0

[root@TESTHV01 ~]# vgs
  VG                                   #PV #LV #SN Attr   VSize   VFree
  7d0f78ff-aa25-4b64-a2ea-c8a65beda616   1   7   0 wz--n-  68.00g 54.12g
  vg_root                                1   2   0 wz--n-  78.12g      0
  vg_vol                                 1   1   0 wz--n- 759.26g      0

[root@TESTHV01 ~]# lvs
  LV                                   VG                                   Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
  fe6f8584-b6da-4ef0-8879-bf23022827d7 7d0f78ff-aa25-4b64-a2ea-c8a65beda616 -wi-----  10.00g
  ids                                  7d0f78ff-aa25-4b64-a2ea-c8a65beda616 -wi-a--- 128.00m
  inbox                                7d0f78ff-aa25-4b64-a2ea-c8a65beda616 -wi-a--- 128.00m
  leases                               7d0f78ff-aa25-4b64-a2ea-c8a65beda616 -wi-a---   2.00g
  master                               7d0f78ff-aa25-4b64-a2ea-c8a65beda616 -wi-ao--   1.00g
  metadata                             7d0f78ff-aa25-4b64-a2ea-c8a65beda616 -wi-a--- 512.00m
  outbox                               7d0f78ff-aa25-4b64-a2ea-c8a65beda616 -wi-a--- 128.00m
  lv_root                              vg_root                              -wi-ao--  68.36g
  lv_swap                              vg_root                              -wi-ao--   9.77g
  lv_vol                               vg_vol                               -wi-ao-- 759.26g

The oVirt node (HV) seems to use an LV created within the storage domain's VG as the disk for the VM.
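(As an aside, a quick way to see exactly what multipath has aggregated under that WWID and which iSCSI session each path belongs to; a sketch, device and session names will differ on your host:)

# show the multipath map and every path grouped under it
multipath -ll

# map each attached scsi disk back to its iSCSI session / target
iscsiadm -m session -P 3 | grep -E "Target:|Attached scsi disk"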
And this is what it looks like; various LVs are created for the "management of the storage domain" (I guess):

Disk /dev/sdb: 73.4 GB, 73400320000 bytes
255 heads, 63 sectors/track, 8923 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/1IET_00010001: 73.4 GB, 73400320000 bytes
255 heads, 63 sectors/track, 8923 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/7d0f78ff--aa25--4b64--a2ea--c8a65beda616-metadata: 536 MB, 536870912 bytes
255 heads, 63 sectors/track, 65 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/7d0f78ff--aa25--4b64--a2ea--c8a65beda616-ids: 134 MB, 134217728 bytes
255 heads, 63 sectors/track, 16 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/7d0f78ff--aa25--4b64--a2ea--c8a65beda616-leases: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/7d0f78ff--aa25--4b64--a2ea--c8a65beda616-inbox: 134 MB, 134217728 bytes
255 heads, 63 sectors/track, 16 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/7d0f78ff--aa25--4b64--a2ea--c8a65beda616-outbox: 134 MB, 134217728 bytes
255 heads, 63 sectors/track, 16 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/7d0f78ff--aa25--4b64--a2ea--c8a65beda616-master: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/7d0f78ff--aa25--4b64--a2ea--c8a65beda616-fe6f8584--b6da--4ef0--8879--bf23022827d7: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0000c91a

                                                                          Device Boot  Start   End  Blocks   Id  System
/dev/mapper/7d0f78ff--aa25--4b64--a2ea--c8a65beda616-fe6f8584--b6da--4ef0--8879   *         1    64   512000  83  Linux
Partition 1 does not end on cylinder boundary.
/dev/mapper/7d0f78ff--aa25--4b64--a2ea--c8a65beda616-fe6f8584--b6da--4ef0--8879            64  1306  9972736  8e  Linux LVM

Now I wonder whether there is a max on the number of iSCSI storage domains you can have? I'd think that depends on the device-mapper limits; is there a max?

Alex

The problem persists despite me trying to manually change the LUN ID on the target (scsi-target-utils installed on CentOS 6.3). Does anyone know how to make oVirt (or vdsm?) identify LUNs based on IQN + LUN ID? I'd think that would resolve the issue I'm having. Anyone?

Thanks,
Alex
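(For reference, a sketch of how to check what tgtd is actually advertising per LUN; tgtadm ships with scsi-target-utils, and the per-LUN "SCSI ID" / "SCSI SN" fields in its output are what I would expect to end up as the LUN ID on the initiator side:)

# dump all targets, their LUNs, and the SCSI ID / SN each LUN reports
tgtadm --lld iscsi --mode target --op show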

There is no question about the behaviour; it's not a bug, that is the way multipathing works (it has nothing to do with oVirt). The GUID of a LUN has to be unique. multipathd, seeing the same LUN ID across multiple targets, assumes that it's the same LUN with multiple paths, and that's how you get redundancy and load balancing. Why tgtd doesn't take care of this built in I could never grok, but what you need to do is edit your targets.conf and add the scsi_id and scsi_sn fields. Example:

<target MasterBackup>
    allow-in-use yes
    <backing-store /dev/vg0/MasterBackup>
        lun 1
        scsi_id MasterBackup
        scsi_sn 444444444401
    </backing-store>
</target>
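(A note on applying this with scsi-target-utils: the running tgtd usually needs to be told about the change, so something along these lines, assuming no initiator is logged in to the target while it is updated:)

# push the edited targets.conf into the running daemon
tgt-admin --update ALL

# or restart it outright (this drops existing sessions)
service tgtd restart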

Hi Ayal,

Thanks for that. I was thinking the same after I did some more testing last week, e.g. sharing an image file from a different iSCSI target and from the same iSCSI target: the LUN on the "same target" changed, while the one on the "other target" again came up the same, so I figured it had something to do with tgtd.

Thanks for the below, I was looking into those options to try next week :) ...

Thanks,
Alex

Hi,

I just tested this with this config:

<target iqn.2013-02.local.vm:iscsi.lun1>
    <backing-store /vol/scsi.img>
        vendor_id ISCSI-MULTIPATH
        scsi_id MULTIPATHTEST
        scsi_sn 990000100001
        lun 1
    </backing-store>
</target>

However, upon discovery / login, the LUN ID was again:

1IET_0000100001

Alex
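(A sketch of how to check from the initiator side whether the new scsi_id / scsi_sn are actually being reported, rather than going by what the UI shows; /dev/sdX is a placeholder for whichever SCSI device the session attached:)

# WWID as multipath builds it (this should match what oVirt displays as the LUN ID)
/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdX

# unit serial number page (0x80), i.e. where scsi_sn should show up
/lib/udev/scsi_id --whitelisted --page=0x80 --device=/dev/sdX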

OK, so I finally got this working. Does anyone know how to change the timeout for multipathd from, say, 120 seconds to ~10 seconds?

==> /var/log/messages <==
Mar 4 17:09:12 TESTHV01 kernel: session5: session recovery timed out after 120 secs
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline device
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline device
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline device
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline device
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline device
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline device
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline device
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] Unhandled error code
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] CDB: Read(10): 28 00 00 04 08 00 00 00 08 00
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline device
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] Unhandled error code
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
Mar 4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00

Alex
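(Before changing anything, a sketch of where to read the current 120-second value from; the IQN and portal below are placeholders:)

# effective recovery timeout of the live sessions
iscsiadm -m session -P 3 | grep -i "recovery timeout"

# value stored in a node record, which new logins will pick up
iscsiadm -m node -T iqn.2013-02.example:target1 -p 192.168.0.10:3260 | grep replacement_timeout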

For those still interested: the timeout issue doesn't occur on the multipath side, nor in oVirt, but on the iscsid config side of things. To shorten the timeout and fail a path faster, edit /etc/iscsi/iscsid.conf and change the value

node.session.timeo.replacement_timeout = 120

to something more useful; I changed it to "10". Reloading iscsid's config does nothing, you'll have to restart the host for it to work.

Alex
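(An alternative that may avoid the full host restart, offered only as a sketch since I have not verified it against vdsm-managed sessions: iscsid.conf only seeds newly discovered node records, so existing records can be updated in place and the sessions cycled; the IQN and portal are placeholders:)

# update the stored value for a node record (repeat per target, or omit -T/-p to update all records)
iscsiadm -m node -T iqn.2013-02.example:target1 -p 192.168.0.10:3260 \
    -o update -n node.session.timeo.replacement_timeout -v 10

# the new value only applies on the next login, so log out and back in
iscsiadm -m node -T iqn.2013-02.example:target1 -p 192.168.0.10:3260 -u
iscsiadm -m node -T iqn.2013-02.example:target1 -p 192.168.0.10:3260 -l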
participants (2)
- Alex Leonhardt
- Ayal Baron