[Users] iSCSI discovery not showing all LUNs - oVirt 3.1

Trey Dockendorf treydock at gmail.com
Fri Jul 6 20:56:37 UTC 2012


On Fri, Jul 6, 2012 at 8:07 AM, Itamar Heim <iheim at redhat.com> wrote:
> On 07/05/2012 06:08 PM, Trey Dockendorf wrote:
>>
>> I have a Promise M300i iSCSI with 2 LUNs.  A 2TB LUN with ID
>> 2260-0001-557c-af0a and a 4TB LUN with ID 22d9-0001-553e-4d6a.
>>
>> What's strange is that the very first time I ran discovery I saw
>> both LUNs.  I checked the 2TB LUN and the storage failed to add; I
>> don't have logs from that attempt, but when I went back to repeat
>> the process, only 1 LUN showed in the GUI (see attached image).  The
>> size it reports is also way off.
>>
>> Looking at VDSM logs, I get this output when doing the login to a target
>>
>> {'devList':
>>    [
>>      {'vendorID': 'Promise',
>>       'capacity': '2188028149760',
>>       'fwrev': '0227',
>>       'partitioned': False,
>>       'vgUUID': 'AZ1iMt-gzBD-2uug-xTih-1z0b-PqPy-xSP0A4',
>>       'pathlist': [
>>         {
>>           'initiatorname': 'default',
>>           'connection': '192.168.203.100',
>>           'iqn': 'iqn.1994-12.com.promise.xxx',
>>           'portal': '1',
>>           'password': '******',
>>           'port': '3260'
>>         }
>>        ],
>>        'logicalblocksize': '512',
>>        'pathstatus': [
>>         {
>>           'physdev': 'sde',
>>           'type': 'iSCSI',
>>           'state': 'active',
>>           'lun': '0'
>>         }
>>        ],
>>        'devtype': 'iSCSI',
>>        'physicalblocksize': '512',
>>        'pvUUID': 'v2N3ok-wrki-OQQn-1XFL-w69n-8wAF-rmCFWt',
>>        'serial':
>> 'SPromise_VTrak_M300i_000000000000000000000000F08989F89FFF6C42',
>>        'GUID': '222600001557caf0a',
>>        'productID': 'VTrak M300i'
>>      },
>>      {
>>        'vendorID': 'Promise',
>>        'capacity': '20246190096384',
>>        'fwrev': '0227',
>>        'partitioned': False,
>>        'vgUUID': '',
>>        'pathlist': [
>>         {
>>           'initiatorname': 'default',
>>           'connection': '192.168.203.100',
>>           'iqn': 'iqn.1994-12.com.promise.xxx',
>>           'portal': '1',
>>           'password': '******',
>>           'port': '3260'
>>         }
>>        ],
>>        'logicalblocksize': '2048',
>>        'pathstatus': [
>>         {
>>           'physdev': 'sdf',
>>           'type': 'iSCSI',
>>           'state': 'active',
>>           'lun': '1'
>>         }
>>        ],
>>        'devtype': 'iSCSI',
>>        'physicalblocksize': '2048',
>>        'pvUUID': '',
>>        'serial':
>> 'SPromise_VTrak_M300i_000000000000000000000000DA3FF8D8099662D7',
>>        'GUID': '222d90001553e4d6a',
>>        'productID': 'VTrak M300i'
>>      }
>>    ]
>> }
>>
>> In that output both LUNs are seen.  I couldn't tell from the code
>> what unit "capacity" is in, and the interface now shows only the LUN
>> with the "4d6a" GUID, listed as 18TB.
>>
>> I've attached the VDSM Logs from the point of selecting my datacenter
>> to after clicking "Login".  Any suggestions?
>>
>> node - vdsm-4.10.0-2.el6.x86_64
>>
>> Thanks
>> - Trey
>>
>>
>>
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
> The LUN you don't see is 'dirty', so vdsm filters it out.
> There are patches for showing all LUNs and just graying them out at
> the UI level, but those are post-oVirt 3.1.
> dd'ing zeros over the start of your LUN should bring it back.
>
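For anyone finding this thread later, I read the dd suggestion as
something like the sketch below.  It is destructive, so the device path
(this array's LUN in my case) needs double-checking before running it:

```shell
# Zero the first few MiB of the LUN so vdsm no longer sees stale
# metadata on it (DESTRUCTIVE -- verify the target device first!).
wipe_start() {
  dev="$1"; mib="${2:-10}"
  dd if=/dev/zero of="$dev" bs=1M count="$mib" conv=notrunc 2>/dev/null
}

# Example (deliberately commented out):
#   wipe_start /dev/mapper/222600001557caf0a
```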

I re-initialized the RAID array and attempted to add the storage
domain, which failed again.  This is the error in the web
interface:

"Error: Cannot attach Storage. Storage Domain doesn't exist."

I've attached a vdsm log snapshot covering the time from right before
clicking "Ok" until the error.  ovirt-engine is 3.1 and vdsm is
4.10.0-4.  Both engine and node are CentOS 6.2.

I attempted to run the failing command manually:

# /sbin/lvm pvcreate --config " devices { preferred_names =
[\"^/dev/mapper/\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [
\"a%1ATA_ST32000644NS_9WM7SV9Y|1ATA_ST32000644NS_9WM7ZXVC|222600001557caf0a%\",
\"r%.*%\" ] }  global {  locking_type=1  prioritise_write_locks=1
wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } "
--metadatasize 128m --metadatacopies 2 --metadataignore y
/dev/mapper/222600001557caf0a

Can't open /dev/mapper/222600001557caf0a exclusively.  Mounted filesystem?
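To rule out the LVM filter itself as the culprit, I double-checked that
the mapper name actually matches the accept regex from that --config (a
simple grep sanity check; the error above looks like an exclusive-open
problem, not a filter rejection):

```shell
# Accept pattern copied from the pvcreate --config filter above;
# confirm our device name matches it before blaming the filter.
accept='1ATA_ST32000644NS_9WM7SV9Y|1ATA_ST32000644NS_9WM7ZXVC|222600001557caf0a'
echo 222600001557caf0a | grep -Eq "$accept" && echo "filter accepts device"
```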

What's strange is that fuser shows nothing using that path, or the
/dev/dm-4 node it points to.  However, the underlying SCSI device from
dmesg (/dev/sde) does show usage:

# ls -la /dev/mapper/
total 0
drwxr-xr-x.  2 root root    180 Jul  6 15:23 .
drwxr-xr-x. 20 root root   4020 Jul  6 15:27 ..
lrwxrwxrwx.  1 root root      7 Jul  6 15:12
1ATA_ST32000644NS_9WM7SV9Y -> ../dm-2
lrwxrwxrwx.  1 root root      7 Jul  6 15:12
1ATA_ST32000644NS_9WM7ZXVC -> ../dm-3
lrwxrwxrwx.  1 root root      7 Jul  6 15:27 222600001557caf0a -> ../dm-4
crw-rw----.  1 root root 10, 58 Jul  6 15:11 control
lrwxrwxrwx.  1 root root      7 Jul  6 15:23
ef7e7c07--f144--4843--8526--4afd0ec33368-metadata -> ../dm-5
lrwxrwxrwx.  1 root root      7 Jul  6 15:11 vg_dhv01-lv_root -> ../dm-1
lrwxrwxrwx.  1 root root      7 Jul  6 15:11 vg_dhv01-lv_swap -> ../dm-0
[root at dhv01 ~]# fuser /dev/mapper/222600001557caf0a
[root at dhv01 ~]# fuser /dev/dm-4
[root at dhv01 ~]# dmesg
<SNIP>
scsi7 : iSCSI Initiator over TCP/IP
scsi 7:0:0:0: Direct-Access     Promise  VTrak M300i      0227 PQ: 0 ANSI: 4
sd 7:0:0:0: Attached scsi generic sg4 type 0
sd 7:0:0:0: [sde] 4273492480 512-byte logical blocks: (2.18 TB/1.98 TiB)
sd 7:0:0:0: [sde] Write Protect is off
sd 7:0:0:0: [sde] Mode Sense: 97 00 10 08
sd 7:0:0:0: [sde] Write cache: enabled, read cache: enabled, supports
DPO and FUA
 sde: unknown partition table
sd 7:0:0:0: [sde] Attached SCSI disk
ata1: hard resetting link
ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata1.00: configured for UDMA/133
ata1: EH complete
ata2: hard resetting link
ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata2.00: configured for UDMA/133
ata2: EH complete
ata3: hard resetting link
ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata3.00: configured for UDMA/133
ata3: EH complete
ata4: hard resetting link
ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata4.00: configured for UDMA/133
ata4: EH complete
ata5: soft resetting link
ata5: EH complete
ata6: soft resetting link
ata6: EH complete
device-mapper: table: 253:6: multipath: error getting device
device-mapper: ioctl: error adding target to table
device-mapper: table: 253:6: multipath: error getting device
device-mapper: ioctl: error adding target to table
device-mapper: table: 253:6: multipath: error getting device
device-mapper: ioctl: error adding target to table
device-mapper: table: 253:6: multipath: error getting device
device-mapper: ioctl: error adding target to table

[root at dhv01 ~]# fuser /dev/sde
/dev/sde:             1388
[root at dhv01 ~]# ps aux | grep 1388
root      1388  0.0  0.0 557684  6108 ?        SLl  15:12   0:00
/sbin/multipathd
root      6200  0.0  0.0 103228   888 pts/0    S+   15:48   0:00 grep 1388
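Since fuser came up empty on the mapper node, I started checking /sys
to see what's stacked on top of a dm device, since kernel-level holders
don't always show up in fuser.  A sketch (the sysfs-root argument is
only there so the function can be exercised against a fake tree):

```shell
# List the device-mapper holders of a block device; kernel-level
# users (e.g. a dm device stacked on top) won't appear in fuser.
holders() {
  sysroot="${2:-/sys}"   # overridable root, for testing on a fake tree
  ls "$sysroot/block/$1/holders" 2>/dev/null
}

# e.g. on the node:  holders dm-4   or   holders sde
```

On a hunch, the leftover
ef7e7c07--f144--4843--8526--4afd0ec33368-metadata LV (dm-5) from the
failed attempt may be what's sitting on dm-4 and blocking the exclusive
open; if so, deactivating it (or flushing a stale multipath map with
`multipath -f`) before retrying pvcreate might help, though I haven't
confirmed that.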


Any ideas on this?  I've seen a few mentions of this issue via Google,
but the only possible solution I found is behind a portal I don't have
access to: https://access.redhat.com/knowledge/ja/node/110203
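One more data point: "capacity" in the VDSM devList looks like plain
bytes.  Converting the two values explains both the 1.98 TiB figure
dmesg printed for sde and the 18TB the GUI shows for the other LUN
(which is why I said the size is way off for what should be a 4TB
array):

```shell
# Convert VDSM's 'capacity' (apparently bytes) to TiB for comparison
# with what dmesg and the GUI report.
bytes_to_tib() { awk -v b="$1" 'BEGIN { printf "%.2f", b / (1024 ^ 4) }'; }

bytes_to_tib 2188028149760; echo " TiB"    # the 2TB LUN  -> ~1.99 TiB
bytes_to_tib 20246190096384; echo " TiB"   # the dirty LUN -> ~18.41 TiB
```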

Thanks
- Trey
-------------- next part --------------
A non-text attachment was scrubbed...
Name: vdsm_add_iscsi.log
Type: application/octet-stream
Size: 6707 bytes
Desc: not available
URL: <http://lists.ovirt.org/pipermail/users/attachments/20120706/bdff3712/attachment-0001.obj>

