Yes, I agree. As I wrote in a previous message, setting scsi_id helped me.
For example:
cat /etc/tgt/targets.conf
<target iqn.2017-03.com.example.domain>
<backing-store /dev/md124>
scsi_id 00020001
</backing-store>
<backing-store /dev/md125>
scsi_id 00020002
</backing-store>
initiator-address 192.168.1.0/24
</target>
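To apply the change you have to reload the tgtd configuration. A quick check that the IDs really differ now, assuming a stock scsi-target-utils install (tgt-admin and tgtadm ship with it):

# re-read targets.conf (a plain tgtd restart works too):
tgt-admin --update ALL
# the per-LUN "SCSI ID" fields should now be distinct:
tgtadm --lld iscsi --mode target --op show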
--
Lukas Kaplan
2017-04-04 12:05 GMT+02:00 Yaniv Kaul <ykaul(a)redhat.com>:
On Sun, Apr 2, 2017 at 10:17 PM, Lukáš Kaplan <lkaplan(a)dragon.cz> wrote:
> I am using CentOS 7 and tgtd (scsi-target-utils)
>
Interesting - still doesn't explain it though. Certainly looks like some
tgtd issue (see
http://www.spinics.net/lists/linux-stgt/msg04392.html for
example). Try changing the scsi_id in targets.conf perhaps?
I recommend, btw, targetcli (LIO) instead. Fairly simple to set up.
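A minimal sketch of a LIO setup with targetcli, in case it helps (the backstore name, IQNs and backing device below are placeholders, not a tested config):

targetcli /backstores/block create name=store1 dev=/dev/md124
targetcli /iscsi create iqn.2017-03.com.example.domain:target1
targetcli /iscsi/iqn.2017-03.com.example.domain:target1/tpg1/luns create /backstores/block/store1
targetcli /iscsi/iqn.2017-03.com.example.domain:target1/tpg1/acls create iqn.1994-05.com.redhat:your-initiator
targetcli saveconfig

LIO generates a distinct serial for each backstore it creates, so the WWID collision described below should not come up.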
Y.
>
> # cat /etc/centos-release
> CentOS Linux release 7.3.1611 (Core)
>
> # tgtd -V
> 1.0.55
>
> # rpm -qi scsi-target-utils
> Name : scsi-target-utils
> Version : 1.0.55
> Release : 4.el7
> Architecture: x86_64
> ... etc....
>
> --
> Lukas Kaplan
>
>
>
> 2017-03-31 21:59 GMT+02:00 Yaniv Kaul <ykaul(a)redhat.com>:
>
>>
>>
>> On Fri, Mar 31, 2017 at 3:43 PM, Lukáš Kaplan <lkaplan(a)dragon.cz> wrote:
>>
>>> I solved this issue now.
>>>
>>> I thought until today that an iSCSI LUN ID (WWN or WWID) is globally
>>> unique. It is not true!
>>> If you power on two identical Linux machines and create an iSCSI target on
>>> each of them, their LUN IDs will be the same...
>>>
>>> 360000000000000000e00000000010001 - for first LUN
>>> 360000000000000000e00000000010002 - for second LUN etc
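>>>
>>> You can see the ID an initiator computes for a LUN with scsi_id (the
>>> /dev/sdX below is a placeholder for whatever device the iSCSI session
>>> created):
>>>
>>> /usr/lib/udev/scsi_id --whitelisted --device=/dev/sdX
>>>
>>> If two targets report the same ID here, multipath (and therefore oVirt)
>>> will treat them as two paths to one device.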
>>>
>>
>> How did you get to such a number with so many zeros? Usually there's
>> some vendor ID and so on in there...
>> What target are you using?
>> Y.
>>
>>
>>>
>>> You have to change the LUN ID manually (and take care of its uniqueness in
>>> your domain) in /etc/tgt/targets.conf, for example:
>>>
>>> <target iqn.2017-03.com.example.domain>
>>> <backing-store /dev/md124>
>>> scsi_id 00020001
>>> </backing-store>
>>> <backing-store /dev/md125>
>>> scsi_id 00020002
>>> </backing-store>
>>> initiator-address 192.168.1.0/24
>>> </target>
>>>
>>>
>>>
>>> --
>>> Lukas Kaplan
>>>
>>>
>>>
>>> 2017-03-31 9:06 GMT+02:00 Lukáš Kaplan <lkaplan(a)dragon.cz>:
>>>
>>>> Is it possible that the problem is conflicting LUN IDs?
>>>> I see that the first LUN from both storage servers has the same LUN ID,
>>>> 360000000000000000e00000000010001. One storage server is connected
>>>> to oVirt and the second is not, because of the described problem (oVirt
>>>> doesn't show the LUN after login and discovery).
>>>>
>>>> I am using tgtd as the iSCSI target server on both servers. Both have the same
>>>> configuration (same disks, md RAID6), but different IQN and IP address...
>>>>
>>>> --
>>>> Lukas Kaplan
>>>>
>>>> Dragon Internet a.s.
>>>>
>>>
>>>
>>>> 2017-03-29 12:12 GMT+02:00 Liron Aravot <laravot(a)redhat.com>:
>>>>
>>>>>
>>>>>
>>>>> On Wed, Mar 29, 2017 at 12:59 PM, Eduardo Mayoral <emayoral(a)arsys.es> wrote:
>>>>>
>>>>>> I had a similar problem; in my case it was related to multipath.
>>>>>> It was not masking the LUNs correctly, the host was seeing each LUN
>>>>>> multiple times (once per path), and I could not select the LUNs in
>>>>>> the oVirt interface.
>>>>>>
>>>>>> Once I configured multipath correctly, everything worked like a
>>>>>> charm.
>>>>>>
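>>>>>> A rough sketch of the kind of thing I mean in /etc/multipath.conf
>>>>>> (option values are examples; on oVirt hosts, vdsm manages this file, so
>>>>>> check before editing):
>>>>>>
>>>>>> defaults {
>>>>>>     user_friendly_names yes
>>>>>>     find_multipaths yes
>>>>>> }
>>>>>>
>>>>>> # apply and verify the paths are grouped under one WWID:
>>>>>> systemctl reload multipathd
>>>>>> multipath -ll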
>>>>>> Best regards,
>>>>>>
>>>>>> --
>>>>>>
>>>>>> Eduardo Mayoral.
>>>>>>
>>>>>> On 29/03/17 11:30, Lukáš Kaplan wrote:
>>>>>>
>>>>>> Hello all,
>>>>>>
>>>>>> I did all the steps as I described in my previous email, but no change. I
>>>>>> can't see any LUN after discovery and login on the new iSCSI storage.
>>>>>> (That storage is OK; if I connect it to another, older
>>>>>> oVirt environment, it works...)
>>>>>>
>>>>>> I have tried it on 3 new iSCSI targets already; all have the same problem...
>>>>>>
>>>>>> Can somebody help me, please?
>>>>>>
>>>>>> --
>>>>>> Lukas Kaplan
>>>>>>
>>>>>>
>>>>> Hi Lukas,
>>>>> If you try to perform the discovery yourself, do you see the LUNs?
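>>>>> For example (the portal address below is just an example):
>>>>>
>>>>> iscsiadm -m discovery -t sendtargets -p 10.53.1.201:3260
>>>>> iscsiadm -m node -T <target-iqn> -p 10.53.1.201:3260 --login
>>>>> lsscsi   # if installed; or: ls -l /dev/disk/by-path/ | grep iscsi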
>>>>>
>>>>>>
>>>>>>
>>>>>> 2017-03-27 16:22 GMT+02:00 Lukáš Kaplan <lkaplan(a)dragon.cz>:
>>>>>>
>>>>>>> I did following steps:
>>>>>>>
>>>>>>> - delete target on all initiators (ovirt nodes)
>>>>>>> iscsiadm -m node -T iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T -p 10.53.1.201:3260 -u
>>>>>>> iscsiadm -m node -T iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T -p 10.53.1.201:3260 -o delete
>>>>>>>
>>>>>>> - stop tgtd on target
>>>>>>> - fill storage with zeroes (dd if=/dev/zero of=/dev/md125 bs=4096 status=progress)
>>>>>>> - start tgtd
>>>>>>> - tried to connect to oVirt (Discovery=ok, Login=ok, but cannot see any LUN).
>>>>>>>
>>>>>>> === After that I ran this commands on one node: ===
>>>>>>>
>>>>>>> [root@fudi-cn1 ~]# iscsiadm -m session -o show
>>>>>>> tcp: [1] 10.53.0.10:3260,1 iqn.2017-03.cz.dragon.ovirt:ovirtengine (non-flash)
>>>>>>> tcp: [11] 10.53.0.201:3260,1 iqn.2017-03.cz.dragon.ovirt.fudi-sn1:10T (non-flash)
>>>>>>> tcp: [12] 10.53.1.201:3260,1 iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T (non-flash)
>>>>>>>
>>>>>>> [root@fudi-cn1 ~]# iscsiadm -m discoverydb -P1
>>>>>>> SENDTARGETS:
>>>>>>> DiscoveryAddress: 10.53.0.201,3260
>>>>>>> Target: iqn.2017-03.cz.dragon.ovirt:ovirtengine
>>>>>>> Portal: 10.53.0.201:3260,1
>>>>>>> Iface Name: default
>>>>>>> iSNS:
>>>>>>> No targets found.
>>>>>>> STATIC:
>>>>>>> Target: iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T
>>>>>>> Portal: 10.53.1.201:3260,1
>>>>>>> Iface Name: default
>>>>>>> Target: iqn.2017-03.cz.dragon.ovirt:ovirtengine
>>>>>>> Portal: 10.53.0.10:3260,1
>>>>>>> Iface Name: default
>>>>>>> Target: iqn.2017-03.cz.dragon.ovirt.fudi-sn1:10T
>>>>>>> Portal: 10.53.0.201:3260,1
>>>>>>> Iface Name: default
>>>>>>> FIRMWARE:
>>>>>>> No targets found.
>>>>>>>
>>>>>>> === On iscsi target: ===
>>>>>>> [root@fuvs-sn1 ~]# cat /proc/mdstat
>>>>>>> Personalities : [raid1] [raid6] [raid5] [raid4]
>>>>>>> md125 : active raid6 sdl1[11] sdk1[10] sdj1[9] sdi1[8] sdh1[7] sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[1] sda1[0]
>>>>>>> 9766302720 blocks super 1.2 level 6, 512k chunk, algorithm 2 [12/12] [UUUUUUUUUUUU]
>>>>>>> bitmap: 0/8 pages [0KB], 65536KB chunk
>>>>>>> ...etc...
>>>>>>>
>>>>>>>
>>>>>>> [root@fuvs-sn1 ~]# cat /etc/tgt/targets.conf
>>>>>>> default-driver iscsi
>>>>>>>
>>>>>>> <target iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T>
>>>>>>> # provided device as an iSCSI target
>>>>>>> backing-store /dev/md125
>>>>>>> # iSCSI initiator's IP address you allow to connect
>>>>>>> #initiator-address 10.53.0.0/23
>>>>>>> </target>
>>>>>>>
>>>>>>> --
>>>>>>> Lukas Kaplan
>>>>>>>
>>>>>>> 2017-03-25 12:36 GMT+01:00 Lukas Kaplan <lkaplan(a)dragon.cz>:
>>>>>>>
>>>>>>>> What could he mean by that mapping?
>>>>>>>>
>>>>>>>> Otherwise I can try overwriting the whole storage with zeros using dd.
>>>>>>>>
>>>>>>>> What do you think?
>>>>>>>>
>>>>>>>> Sent from my iPhone
>>>>>>>>
>>>>>>>> Begin forwarded message:
>>>>>>>>
>>>>>>>> *From:* Yaniv Kaul <ykaul(a)redhat.com>
>>>>>>>> *Date:* 24 March 2017 23:25:21 CET
>>>>>>>> *To:* Lukáš Kaplan <lkaplan(a)dragon.cz>
>>>>>>>> *Cc:* users <users(a)ovirt.org>
>>>>>>>> *Subject:* *Re: [ovirt-users] iSCSI Discovery cannot detect LUN*
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Fri, Mar 24, 2017 at 1:34 PM, Lukáš Kaplan <lkaplan(a)dragon.cz> wrote:
>>>>>>>>
>>>>>>>>> Hello all,
>>>>>>>>>
>>>>>>>>> please, do you have any experience with troubleshooting the addition of an
>>>>>>>>> iSCSI domain to oVirt 4.1.1?
>>>>>>>>>
>>>>>>>>> I am facing this issue now:
>>>>>>>>>
>>>>>>>>> 1) I have successfully installed an oVirt 4.1.1 environment with a
>>>>>>>>> self-hosted engine, 3 nodes and 3 storage domains (iSCSI master domain, iSCSI for the
>>>>>>>>> hosted engine, and an NFS ISO domain). Everything is working now.
>>>>>>>>>
>>>>>>>>> 2) But when I want to add a new iSCSI domain, I can discover it and I
>>>>>>>>> can log in, but I can't see any LUN on that storage. (I had the same problem in
>>>>>>>>> oVirt 4.1.0, so I upgraded to 4.1.1.)
>>>>>>>>>
>>>>>>>>
>>>>>>>> Are you sure mappings are correct?
>>>>>>>> Can you ensure the LUN is empty?
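>>>>>>>> For instance, from a host that sees the LUN (the device name is a
>>>>>>>> placeholder):
>>>>>>>>
>>>>>>>> blkid /dev/sdX    # no output usually means no known signature
>>>>>>>> dd if=/dev/sdX bs=1M count=1 | od -c | head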
>>>>>>>> Y.
>>>>>>>>
>>>>>>>>
>>>>>>>>>
>>>>>>>>> 3) Then I tried to add this storage to another oVirt environment
>>>>>>>>> (oVirt 3.6) and there is no problem: I can see the LUN on that storage and I
>>>>>>>>> can connect it to oVirt.
>>>>>>>>>
>>>>>>>>> I tried to examine vdsm.log, but it is very detailed and
>>>>>>>>> unreadable for me :-/
>>>>>>>>>
>>>>>>>>> Thank you in advance, have a nice day,
>>>>>>>>> --
>>>>>>>>> Lukas Kaplan
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>>
>>>
>>
>