[ovirt-users] [ansible/ansible] ovirt_storage_domains can not find iscsi disk during creation (state: present) (#25417)

victor 600833 victor600833 at gmail.com
Wed Jun 28 15:20:37 UTC 2017


*Here is our iSCSI target config (on the iSCSI server); we have 2 iSCSI
devices.*

/> ls
o- / ......................................................................... [...]
  o- backstores .............................................................. [...]
  | o- block .................................................. [Storage Objects: 0]
  | o- fileio ................................................. [Storage Objects: 4]
  | | o- MAISON-core2-iscsi-dsk01 ... [/MAISON/core2/iscsi/dsk01 (20.0GiB) write-thru activated]
  | | o- MAISON-core2-iscsi-dsk02 ... [/MAISON/core2/iscsi/dsk02 (20.0GiB) write-thru activated]
  | | o- dsk01 ............ [/MAISON/core2/iscsi/dsk01 (20.0GiB) write-thru deactivated]
  | | o- dsk02 ............ [/MAISON/core2/iscsi/dsk02 (20.0GiB) write-thru deactivated]
  | o- pscsi .................................................. [Storage Objects: 0]
  | o- ramdisk ................................................ [Storage Objects: 0]
  | o- user ................................................... [Storage Objects: 0]
  o- iscsi .............................................................. [Targets: 1]
  | o- iqn.2017-06.stet.iscsi.server:target ................................. [TPGs: 1]
  |   o- tpg1 ................................................. [no-gen-acls, no-auth]
  |     o- acls ............................................................ [ACLs: 2]
  |     | o- iqn.1994-05.com.redhat:287c2b12050 ....................... [Mapped LUNs: 2]
  |     | | o- mapped_lun0 .............. [lun0 fileio/MAISON-core2-iscsi-dsk01 (rw)]
  |     | | o- mapped_lun1 .............. [lun1 fileio/MAISON-core2-iscsi-dsk02 (rw)]
  |     | o- iqn.1994-05.com.redhat:f654257ba68 ....................... [Mapped LUNs: 2]
  |     |   o- mapped_lun0 .............. [lun0 fileio/MAISON-core2-iscsi-dsk01 (rw)]
  |     |   o- mapped_lun1 .............. [lun1 fileio/MAISON-core2-iscsi-dsk02 (rw)]
  |     o- luns ............................................................ [LUNs: 2]
  |     | o- lun0 ......... [fileio/MAISON-core2-iscsi-dsk01 (/MAISON/core2/iscsi/dsk01)]
  |     | o- lun1 ......... [fileio/MAISON-core2-iscsi-dsk02 (/MAISON/core2/iscsi/dsk02)]
  |     o- portals ...................................................... [Portals: 2]
  |       o- 10.10.1.129:3260 ................................................... [OK]
  |       o- 10.10.1.1:3260 ..................................................... [OK]
  o- loopback ........................................................... [Targets: 0]
  o- vhost .............................................................. [Targets: 0]
/>
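
*Note that the acls above only allow the two listed initiator IQNs. As a quick sanity check (a sketch, not something we ran in this report), the hypervisor's initiator name must match one of them:*

# On the hypervisor pdx1cr207; the output must match one of the ACL IQNs above
cat /etc/iscsi/initiatorname.iscsi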


*Here is our RHEV-H hypervisor, which has 3 NICs configured.*


*10.11.8.168/27 is used for management and NFS.*

*10.10.1.11/25 and 10.10.1.139/25 are dedicated to iSCSI. The two are kept in
different VLANs, so that 10.10.1.11/25 talks to 10.10.1.1 (iSCSI server side)
and 10.10.1.139/25 talks to 10.10.1.129 (server side).*


[root@pdx1cr207 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
   link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
   inet 127.0.0.1/8 scope host lo
      valid_lft forever preferred_lft forever
   inet6 ::1/128 scope host
      valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
   link/ether 52:54:00:1c:25:02 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
   link/ether 52:54:00:1c:25:02 brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond1 state UP qlen 1000
   link/ether 52:54:00:9a:7d:06 brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond1 state UP qlen 1000
   link/ether 52:54:00:9a:7d:06 brd ff:ff:ff:ff:ff:ff
6: eth4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
   link/ether 52:54:00:e5:b8:42 brd ff:ff:ff:ff:ff:ff
7: eth5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
   link/ether 52:54:00:e6:55:59 brd ff:ff:ff:ff:ff:ff
8: eth6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
   link/ether 52:54:00:9b:80:80 brd ff:ff:ff:ff:ff:ff
   inet 10.10.1.11/25 brd 10.10.1.127 scope global eth6
      valid_lft forever preferred_lft forever
   inet6 fe80::5054:ff:fe9b:8080/64 scope link
      valid_lft forever preferred_lft forever
9: eth7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
   link/ether 52:54:00:f7:5f:f8 brd ff:ff:ff:ff:ff:ff
   inet 10.10.1.139/25 brd 10.10.1.255 scope global eth7
      valid_lft forever preferred_lft forever
   inet6 fe80::5054:ff:fef7:5ff8/64 scope link
      valid_lft forever preferred_lft forever
10: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovirtmgmt state UP qlen 1000
   link/ether 52:54:00:1c:25:02 brd ff:ff:ff:ff:ff:ff
11: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
   link/ether 52:54:00:1c:25:02 brd ff:ff:ff:ff:ff:ff
   inet 10.11.8.168/27 brd 10.11.8.191 scope global ovirtmgmt
      valid_lft forever preferred_lft forever
12: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
   link/ether 52:54:00:9a:7d:06 brd ff:ff:ff:ff:ff:ff
   inet6 fe80::5054:ff:fe9a:7d06/64 scope link
      valid_lft forever preferred_lft forever
13: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
   link/ether 7a:79:ea:86:d6:5d brd ff:ff:ff:ff:ff:ff
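
*For reference, a quick way to check that each portal is reached over the intended iSCSI NIC rather than over ovirtmgmt (a sketch using the interface names above):*

ip route get 10.10.1.1       # should leave via eth6 (10.10.1.11)
ip route get 10.10.1.129     # should leave via eth7 (10.10.1.139)
ping -c 3 -I eth6 10.10.1.1
ping -c 3 -I eth7 10.10.1.129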


*Here is the playbook we used to reproduce the issue. It points to 10.10.1.1
(the iSCSI server).*

---
- hosts: rhevm2
  gather_facts: False
  remote_user: root
  tasks:
    - name: authentication
      ovirt_auth:
        url: https://rhevm2.res1/ovirt-engine/api
        username: admin@internal
        password: rootroot01
        insecure: True
    #
    - ovirt_storage_domains_facts:
        auth:
          url: https://rhevm2.res1/ovirt-engine/api
          token: "{{ ovirt_auth.token }}"
          insecure: True
        pattern: name=* and datacenter=default
    - debug:
        var: ovirt_storage_domains
    #
    - name: create iscsi_dsk02
      ovirt_storage_domains:
        auth:
          url: https://rhevm2.res1/ovirt-engine/api
          token: "{{ ovirt_auth.token }}"
          insecure: True
        name: iscsi_dsk02
        domain_function: data
        host: pdx1cr207
        data_center: prslog
        iscsi:
          target: iqn.2017-06.stet.iscsi.server:target
          address: 10.10.1.1
          lun_id: 3600140550738b53dd774303bfedac122
        # lun_id: 36001405ed330f17f8e74ca1a08b7bb04
        state: present
        destroy: true
      register: res_iscsi_dsk02
    - debug:
        var: res_iscsi_dsk02
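
*The lun_id values are the SCSI WWIDs that the LIO target generates for the fileio backstores. As a sketch (the device name /dev/sdb is hypothetical), they can be cross-checked on the hypervisor once it is logged in to the target:*

/usr/lib/udev/scsi_id --whitelisted --device=/dev/sdb
# expected: 3600140550738b53dd774303bfedac122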


*We ran the playbook and got this error message:*

[ansible@sichuan workspace]$ ansible-playbook ds_create6.yml -v
Using /home/papa/environments/workspace/inventories/env_x1rop/ansible.cfg as config file

PLAY [rhevm2] ************************************************************


TASK [authentication] ****************************************************

ok: [rhevm2] => {"ansible_facts": {"ovirt_auth": {"ca_file": null, "compress": true,
"insecure": true, "kerberos": false, "timeout": 0, "token":
"t1ibTsNmCh13nYhwpFpXJb5eK6YSIH-V4FdXGN2PKWq-g-F2ksAWjl51iYK0dB-2-yWHYvTGL6Xgo_2jtab2tA",
"url": "https://rhevm2.res1/ovirt-engine/api"}},
"changed": false}



TASK [ovirt_storage_domains_facts] ***************************************

ok: [rhevm2] => {"ansible_facts": {"ovirt_storage_domains": [{"available": 95563022336,
"committed": 0, "critical_space_action_blocker": 5, "data_centers":
[{"id": "5950a8cc-03df-013e-02a8-0000000001ff"}], "disk_profiles": [],
"disk_snapshots": [], "disks": [], "external_status": "ok", "href":
"/ovirt-engine/api/storagedomains/d2b84bb1-526a-4981-b6b4-f69b7e079d0d",
"id": "d2b84bb1-526a-4981-b6b4-f69b7e079d0d", "master": true, "name": "dm02",
"permissions": [], "storage": {"address": "10.11.8.190", "nfs_version": "v3",
"path": "/MAISON/nfs/domaines/dm02", "type": "nfs"}, "storage_connections": [],
"storage_format": "v3", "templates": [], "type": "data", "used": 9663676416,
"vms": [], "warning_low_space_indicator": 10, "wipe_after_delete": false}]},
"changed": false}



TASK [debug] *************************************************************

ok: [rhevm2] => {
   "ovirt_storage_domains": [
       {
           "available": 95563022336,
           "committed": 0,
           "critical_space_action_blocker": 5,
           "data_centers": [
               {
                   "id": "5950a8cc-03df-013e-02a8-0000000001ff"
               }
           ],
           "disk_profiles": [],
           "disk_snapshots": [],
           "disks": [],
           "external_status": "ok",
           "href":
"/ovirt-engine/api/storagedomains/d2b84bb1-526a-4981-b6b4-f69b7e079d0d",
           "id": "d2b84bb1-526a-4981-b6b4-f69b7e079d0d",
           "master": true,
           "name": "dm02",
           "permissions": [],
           "storage": {
               "address": "10.11.8.190",
               "nfs_version": "v3",
               "path": "/MAISON/nfs/domaines/dm02",
               "type": "nfs"
           },
           "storage_connections": [],
           "storage_format": "v3",
           "templates": [],
           "type": "data",
           "used": 9663676416,
           "vms": [],
           "warning_low_space_indicator": 10,
           "wipe_after_delete": false
       }
   ]
}

TASK [create iscsi_dsk02] ************************************************

An exception occurred during task execution. To see the full traceback, use -vvv.
The error was: Error: Fault reason is "Operation Failed". Fault detail is
"[Storage domain cannot be reached. Please ensure it is accessible from the
host(s).]". HTTP response code is 400.
fatal: [rhevm2]: FAILED! => {"changed": false, "failed": true, "msg": "Fault
reason is \"Operation Failed\". Fault detail is \"[Storage domain cannot be
reached. Please ensure it is accessible from the host(s).]\". HTTP response
code is 400."}
        to retry, use: --limit @/home/papa/environments/workspace/ds_create6.retry

PLAY RECAP ***************************************************************

rhevm2                     : ok=3    changed=0    unreachable=0    failed=1
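
*Since the engine claims the storage domain cannot be reached from the host, the discovery and login that vdsm performs can be replayed by hand from pdx1cr207 (a sketch, assuming iscsi-initiator-utils is installed):*

iscsiadm -m discovery -t sendtargets -p 10.10.1.1:3260
iscsiadm -m node -T iqn.2017-06.stet.iscsi.server:target -p 10.10.1.1:3260 --login
iscsiadm -m session -P 3    # the session and its attached SCSI disks should appear here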



*Here are the compressed vdsm and engine logs (attached):*

On Wed, Jun 28, 2017 at 3:28 PM, Yaniv Kaul <ykaul at redhat.com> wrote:

> Can you share vdsm log?
>
> On Wed, Jun 28, 2017 at 4:16 PM, victor 600833 <victor600833 at gmail.com> wrote:
>
>
>> Hi Ondra,
>>
>> As you pointed out in your previous mail.
>>
>> The iscsiadm discovery went through 10.11.8.190; it should have probed 10.10.1.1.
>> This may be the root cause.
>>
>>
>>
>> Victor,
>>
>>
>>
>>
>>
>> If you look at our playbook:
>>
>>   - name: create iscsi_dsk02
>>     ovirt_storage_domains:
>>       auth:
>>         url: https://rhevm2.res1/ovirt-engine/api
>>         token: "{{ ovirt_auth.token }}"
>>         insecure: True
>>       name: iscsi_dsk02
>>       domain_function: data
>>       host: pdx1cr207
>>       data_center: default
>>       iscsi:
>>         target: iqn.2017-06.stet.iscsi.server:target
>>         address: 10.10.1.1
>>         lun_id: 3600140550738b53dd774303bfedac122
>>       state: absent
>>       destroy: true
>>     register: res_iscsi_dsk02
>>   - debug:
>>       var: res_iscsi_dsk02
>>
>> On Wed, Jun 28, 2017 at 7:58 AM, Ondra Machacek <notifications at github.com> wrote:
>>
>>> It's using the host which you pass as the host parameter to the task
>>> for the iSCSI discovery.
>>>
>>> As Yaniv said, can you please share your issue on our mailing list:
>>>
>>> users at ovirt.org
>>>
>>> There are people with knowledge of iSCSI and they will help you.
>>>
>>> You are receiving this because you were mentioned.
>>> Reply to this email directly, view it on GitHub
>>> <https://github.com/ansible/ansible/issues/25417#issuecomment-311563987>,
>>> or mute the thread
>>> <https://github.com/notifications/unsubscribe-auth/AScoAyV364pDjqXtCPSNodvhEeLw3b_Sks5sIet0gaJpZM4NyeaB>
>>> .
>>>
>>
>>
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: engine.log.tgz
Type: application/x-gzip
Size: 764213 bytes
Desc: not available
URL: <http://lists.ovirt.org/pipermail/users/attachments/20170628/f2f30a75/attachment-0001.tgz>

