[Users] Unable to activate iSCSI domain after crash of host

Hello,
Fedora 19 with 3.3.3. Only one host configured. After a crash of the host I'm not able to activate the storage domain again. Any way to recover?
Gianluca

In engine.log:

2014-02-07 08:11:12,602 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-32) [60d513d1] HostName = ovnode03
2014-02-07 08:11:12,602 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-32) [60d513d1] Command HSMGetAllTasksStatusesVDS execution failed. Exception: IRSNonOperationalException: IRSGenericException: IRSErrorException: IRSNonOperationalException: Not SPM: ()
2014-02-07 08:11:12,613 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-32) [60d513d1] hostFromVds::selectedVds - ovnode03, spmStatus Unknown_Pool, storage pool ISCSI
2014-02-07 08:11:12,615 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (DefaultQuartzScheduler_Worker-32) [60d513d1] START, ConnectStoragePoolVDSCommand(HostName = ovnode03, HostId = b6f8f68f-4f9e-4c87-918b-aa1ff60f575a, storagePoolId = 546cd29c-7249-4733-8fd5-317cff38ed71, vds_spm_id = 1, masterDomainId = f741671e-6480-4d7b-b357-8cf6e8d2c0f1, masterVersion = 2), log id: 3e99a2c6
2014-02-07 08:11:15,747 INFO [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand] (ajp--127.0.0.1-8702-4) [465b0976] Lock Acquired to object EngineLock [exclusiveLocks= key: f741671e-6480-4d7b-b357-8cf6e8d2c0f1 value: STORAGE , sharedLocks= ]
2014-02-07 08:11:15,759 INFO [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand] (pool-6-thread-49) [465b0976] Running command: ActivateStorageDomainCommand internal: false. Entities affected : ID: f741671e-6480-4d7b-b357-8cf6e8d2c0f1 Type: Storage
2014-02-07 08:11:15,762 INFO [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand] (pool-6-thread-49) [465b0976] Lock freed to object EngineLock [exclusiveLocks= key: f741671e-6480-4d7b-b357-8cf6e8d2c0f1 value: STORAGE , sharedLocks= ]
2014-02-07 08:11:15,763 INFO [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand] (pool-6-thread-49) [465b0976] ActivateStorage Domain. Before Connect all hosts to pool. Time:2/7/14 8:11 AM
2014-02-07 08:11:15,765 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (pool-6-thread-49) [465b0976] START, ActivateStorageDomainVDSCommand( storagePoolId = 546cd29c-7249-4733-8fd5-317cff38ed71, ignoreFailoverLimit = false, storageDomainId = f741671e-6480-4d7b-b357-8cf6e8d2c0f1), log id: da4b270
2014-02-07 08:11:16,739 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (DefaultQuartzScheduler_Worker-32) [60d513d1] Command org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand return value
StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=304, mMessage=Cannot find master domain: 'spUUID=546cd29c-7249-4733-8fd5-317cff38ed71, msdUUID=f741671e-6480-4d7b-b357-8cf6e8d2c0f1']]
2014-02-07 08:11:16,740 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (DefaultQuartzScheduler_Worker-32) [60d513d1] HostName = ovnode03
2014-02-07 08:11:16,740 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (DefaultQuartzScheduler_Worker-32) [60d513d1] Command ConnectStoragePoolVDS execution failed. Exception: IRSNoMasterDomainException: IRSGenericException: IRSErrorException: IRSNoMasterDomainException: Cannot find master domain: 'spUUID=546cd29c-7249-4733-8fd5-317cff38ed71, msdUUID=f741671e-6480-4d7b-b357-8cf6e8d2c0f1'
2014-02-07 08:11:16,741 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (DefaultQuartzScheduler_Worker-32) [60d513d1] FINISH, ConnectStoragePoolVDSCommand, log id: 3e99a2c6

In vdsm.log I get:

Thread-85157::ERROR::2014-02-07 08:12:44,774::task::850::TaskManager.Task::(_setError) Task=`b9f3c2b5-18fa-4135-96f7-c152b5ffe675`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 857, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 1008, in connectStoragePool
    masterVersion, options)
  File "/usr/share/vdsm/storage/hsm.py", line 1062, in _connectStoragePool
    res = pool.connect(hostID, scsiKey, msdUUID, masterVersion)
  File "/usr/share/vdsm/storage/sp.py", line 699, in connect
    self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
  File "/usr/share/vdsm/storage/sp.py", line 1244, in __rebuild
    masterVersion=masterVersion)
  File "/usr/share/vdsm/storage/sp.py", line 1603, in getMasterDomain
    raise se.StoragePoolMasterNotFound(self.spUUID, msdUUID)
StoragePoolMasterNotFound: Cannot find master domain: 'spUUID=546cd29c-7249-4733-8fd5-317cff38ed71, msdUUID=f741671e-6480-4d7b-b357-8cf6e8d2c0f1'
Thread-85157::DEBUG::2014-02-07 08:12:44,774::task::869::TaskManager.Task::(_run) Task=`b9f3c2b5-18fa-4135-96f7-c152b5ffe675`::Task._run: b9f3c2b5-18fa-4135-96f7-c152b5ffe675 ('546cd29c-7249-4733-8fd5-317cff38ed71', 1, '546cd29c-7249-4733-8fd5-317cff38ed71', 'f741671e-6480-4d7b-b357-8cf6e8d2c0f1', 2) {} failed - stopping task
Thread-85157::DEBUG::2014-02-07 08:12:44,775::task::1194::TaskManager.Task::(stop) Task=`b9f3c2b5-18fa-4135-96f7-c152b5ffe675`::stopping in state preparing (force False)
Thread-85157::DEBUG::2014-02-07 08:12:44,775::task::974::TaskManager.Task::(_decref) Task=`b9f3c2b5-18fa-4135-96f7-c152b5ffe675`::ref 1 aborting True
Thread-85157::INFO::2014-02-07 08:12:44,775::task::1151::TaskManager.Task::(prepare) Task=`b9f3c2b5-18fa-4135-96f7-c152b5ffe675`::aborting: Task is aborted: 'Cannot find master domain' - code 304
Thread-85157::DEBUG::2014-02-07 08:12:44,775::task::1156::TaskManager.Task::(prepare) Task=`b9f3c2b5-18fa-4135-96f7-c152b5ffe675`::Prepare: aborted: Cannot find master domain
Thread-85157::DEBUG::2014-02-07 08:12:44,776::task::974::TaskManager.Task::(_decref) Task=`b9f3c2b5-18fa-4135-96f7-c152b5ffe675`::ref 0 aborting True
Thread-85157::DEBUG::2014-02-07 08:12:44,776::task::909::TaskManager.Task::(_doAbort) Task=`b9f3c2b5-18fa-4135-96f7-c152b5ffe675`::Task._doAbort: force False
Thread-85157::DEBUG::2014-02-07 08:12:44,776::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-85157::DEBUG::2014-02-07 08:12:44,776::task::579::TaskManager.Task::(_updateState) Task=`b9f3c2b5-18fa-4135-96f7-c152b5ffe675`::moving from state preparing -> state aborting
Thread-85157::DEBUG::2014-02-07 08:12:44,777::task::534::TaskManager.Task::(__state_aborting) Task=`b9f3c2b5-18fa-4135-96f7-c152b5ffe675`::_aborting: recover policy none
Thread-85157::DEBUG::2014-02-07 08:12:44,777::task::579::TaskManager.Task::(_updateState) Task=`b9f3c2b5-18fa-4135-96f7-c152b5ffe675`::moving from state aborting -> state failed
Thread-85157::DEBUG::2014-02-07 08:12:44,777::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-85157::DEBUG::2014-02-07 08:12:44,777::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-85157::ERROR::2014-02-07 08:12:44,777::dispatcher::67::Storage.Dispatcher.Protect::(run) {'status': {'message': "Cannot find master domain: 'spUUID=546cd29c-7249-4733-8fd5-317cff38ed71, msdUUID=f741671e-6480-4d7b-b357-8cf6e8d2c0f1'", 'code': 304}}

On Fri, Feb 7, 2014 at 9:53 AM, Dafna Ron wrote:
Already tried many times without results. One note: there was an initial problem when I configured the storage. At the first attempt I entered a wrong password. I fear the new one was not retained for some reason.

This is my debug on the host:

[root@ovnode03 ~]# iscsiadm -m discovery -t st -p 192.168.230.101 --discover
192.168.230.101:3260,1 iqn.2013-09.local.localdomain:c6iscsit.target11
[root@ovnode03 ~]# iscsiadm -m node iqn.2013-09.local.localdomain:c6iscsit.target11 -l
Logging in to [iface: default, target: iqn.2013-09.local.localdomain:c6iscsit.target11, portal: 192.168.230.101,3260] (multiple)
iscsiadm: Could not login to [iface: default, target: iqn.2013-09.local.localdomain:c6iscsit.target11, portal: 192.168.230.101,3260].
iscsiadm: initiator reported error (24 - iSCSI login failed due to authorization failure)
iscsiadm: Could not log into all portals

If I go in /var/lib/iscsi/send_targets/192.168.230.101,3260, st_config contains:

# BEGIN RECORD 6.2.0.873-17
discovery.startup = manual
discovery.type = sendtargets
discovery.sendtargets.address = 192.168.230.101
discovery.sendtargets.port = 3260
discovery.sendtargets.auth.authmethod = None
discovery.sendtargets.timeo.login_timeout = 15
discovery.sendtargets.use_discoveryd = No
discovery.sendtargets.discoveryd_poll_inval = 30
discovery.sendtargets.reopen_max = 5
discovery.sendtargets.timeo.auth_timeout = 45
discovery.sendtargets.timeo.active_timeout = 30
discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768
# END RECORD

Where is the CHAP password that was configured on the host actually recorded? Also, are the iscsi and iscsid services supposed to be enabled at startup, or is it vdsm that should take care of starting one or both of them? In my case I have this kind of config on the host: iscsi enabled but in failed state, iscsid active but disabled?

[root@ovnode03 192.168.230.101,3260]# systemctl status iscsi
iscsi.service - Login and scanning of iSCSI devices
   Loaded: loaded (/usr/lib/systemd/system/iscsi.service; enabled)
   Active: failed (Result: exit-code) since Fri 2014-02-07 08:41:17 CET; 1h 18min ago
     Docs: man:iscsid(8)
           man:iscsiadm(8)
  Process: 911 ExecStart=/sbin/iscsiadm -m node --loginall=automatic (code=exited, status=21)
  Process: 908 ExecStart=/usr/libexec/iscsi-mark-root-nodes (code=exited, status=0/SUCCESS)

Feb 07 08:41:17 ovnode03.localdomain.local systemd[1]: Starting Login and scanning of iSCSI devices...
Feb 07 08:41:17 ovnode03.localdomain.local systemd[1]: iscsi.service: main process exited, code=exited, status=21/n/a
Feb 07 08:41:17 ovnode03.localdomain.local systemd[1]: Failed to start Login and scanning of iSCSI devices.
Feb 07 08:41:17 ovnode03.localdomain.local systemd[1]: Unit iscsi.service entered failed state.

[root@ovnode03 192.168.230.101,3260]# systemctl status iscsid
iscsid.service - Open-iSCSI
   Loaded: loaded (/usr/lib/systemd/system/iscsid.service; disabled)
   Active: active (running) since Fri 2014-02-07 08:41:17 CET; 1h 19min ago
     Docs: man:iscsid(8)
           man:iscsiadm(8)
  Process: 875 ExecStart=/usr/sbin/iscsid (code=exited, status=0/SUCCESS)
 Main PID: 895 (iscsid)
   CGroup: name=systemd:/system/iscsid.service
           ├─894 /usr/sbin/iscsid
           └─895 /usr/sbin/iscsid

Feb 07 08:41:17 ovnode03.localdomain.local systemd[1]: Starting Open-iSCSI...
Feb 07 08:41:17 ovnode03.localdomain.local iscsid[875]: iSCSI logger with pid=894 started!
Feb 07 08:41:17 ovnode03.localdomain.local systemd[1]: Failed to read PID from file /var/run/iscsid.pid: Invalid argument
Feb 07 08:41:17 ovnode03.localdomain.local systemd[1]: Started Open-iSCSI.
Feb 07 08:41:18 ovnode03.localdomain.local iscsid[894]: iSCSI daemon with pid=895 started!
Feb 07 09:55:06 ovnode03.localdomain.local iscsid[894]: Login failed to authenticate with target iqn.2013-09.local.localdomain:c6iscsi...rget11
Feb 07 09:55:06 ovnode03.localdomain.local iscsid[894]: session 1 login rejected: Initiator failed authentication with target
Feb 07 09:55:06 ovnode03.localdomain.local iscsid[894]: Connection1:0 to [target: iqn.2013-09.local.localdomain:c6iscsit.target11, por...tdown.
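For what it's worth, open-iscsi keeps the per-node CHAP settings in the node records under /var/lib/iscsi/nodes/<iqn>/<portal>/, not in the send_targets record shown above (and as far as I understand vdsm normally writes them there itself when it connects the storage, so hand-editing is only a debugging aid). A minimal sketch of checking and re-setting them by hand, using the target and portal from this thread and a placeholder password:

# show the CHAP-related settings currently stored in the node record
iscsiadm -m node -T iqn.2013-09.local.localdomain:c6iscsit.target11 -p 192.168.230.101 | grep node.session.auth

# re-set them manually (placeholder credentials)
iscsiadm -m node -T iqn.2013-09.local.localdomain:c6iscsit.target11 -p 192.168.230.101 \
    --op=update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T iqn.2013-09.local.localdomain:c6iscsit.target11 -p 192.168.230.101 \
    --op=update -n node.session.auth.username -v ovirt
iscsiadm -m node -T iqn.2013-09.local.localdomain:c6iscsit.target11 -p 192.168.230.101 \
    --op=update -n node.session.auth.password -v MY_CHAP_PASSWORD

# then retry the login
iscsiadm -m node -T iqn.2013-09.local.localdomain:c6iscsit.target11 -p 192.168.230.101 --login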

On Fri, Feb 7, 2014 at 10:19 AM, Dafna Ron wrote:
can you try to restart ovirt-engine as well? Also, can you run vdsClient -s 0 getDeviceList on the host?
Restarted engine:

[root@ovirt ovirt-engine]# systemctl status ovirt-engine
ovirt-engine.service - oVirt Engine
   Loaded: loaded (/usr/lib/systemd/system/ovirt-engine.service; enabled)
   Active: active (running) since Fri 2014-02-07 10:23:16 CET; 45s ago
 Main PID: 18479 (ovirt-engine.py)
   CGroup: name=systemd:/system/ovirt-engine.service
           ├─18479 /usr/bin/python /usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py --redirect-output start
           └─18499 ovirt-engine -server -XX:+TieredCompilation -Xms1g -Xmx1g -XX:PermSize=256m -XX:MaxPermSize=256m -Djava.net.preferIPv4Sta...

Feb 07 10:23:16 ovirt.localdomain.local systemd[1]: Started oVirt Engine.

On the node:

[root@ovnode03 192.168.230.101,3260]# vdsClient -s 0 getDeviceList
[]
[root@ovnode03 192.168.230.101,3260]#

engine.log here:
https://drive.google.com/file/d/0BwoPbcrMv8mvYjliV19JclZha3c/edit?usp=sharin...

Thanks for viewing
Gianluca
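(An empty getDeviceList is consistent with the failed CHAP login above. Just as a sanity check, not something run in this thread, one could also confirm that the host has no active session toward that portal at all:

iscsiadm -m session -P 1   # lists active sessions and their state; "No active sessions" is expected here
lsblk                      # no iSCSI-backed disks should appear while the login keeps failing
)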

If you do not have an iptables or any other connectivity issue still existing from the host to the storage, then your host is simply not seeing any of the devices on the storage. It might be a problem with the access list (password or iqn), but the storage is not exposing the LUNs to the host. On 02/07/2014 09:29 AM, Gianluca Cecchi wrote:
-- Dafna Ron

On Fri, Feb 7, 2014 at 12:02 PM, Dafna Ron wrote:
On my iscsi target (CentOS 6.5 with sw iscsi target) I still have:

[root@c6iscsit ~]# tgtadm --lld iscsi --mode target --op show
Target 1: iqn.2013-09.local.localdomain:c6iscsit.target11
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET 00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET 00010001
            SCSI SN: beaf11
            Size: 53683 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/VG_ISCSI/ISCSI_OV01
            Backing store flags:
    Account information:
        ovirt
    ACL information:
        192.168.230.102
        192.168.230.103

My node has ip 192.168.230.102 and can ping it. No iptables rules:

[root@c6iscsit ~]# iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

And in targets.conf that is in place:

default-driver iscsi
<target iqn.2013-09.local.localdomain:c6iscsit.target11>
    backing-store /dev/VG_ISCSI/ISCSI_OV01
    incominguser ovirt my_ovirt_setup_pwd
    initiator-address 192.168.230.102
    initiator-address 192.168.230.103
</target>

So it seems ok to me, and discovery is ok from the ovirt node... Where are the CHAP user/password stored on the ovirt node?
Gianluca
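Since the target side only lists the account name, one thing that could be double-checked on the CentOS box is that the account really exists in tgtd with the expected password and is bound to the target. A minimal sketch with tgtadm; deleting and re-adding the account is an assumption on my part, not something from the thread, and the password is the same placeholder as in targets.conf:

# list accounts known to tgtd
tgtadm --lld iscsi --mode account --op show

# recreate the account with the password oVirt was given, then bind it to target 1
tgtadm --lld iscsi --mode account --op delete --user ovirt
tgtadm --lld iscsi --mode account --op new --user ovirt --password my_ovirt_setup_pwd
tgtadm --lld iscsi --mode account --op bind --tid 1 --user ovirt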

On Fri, Feb 7, 2014 at 2:02 PM, Dafna Ron wrote:
There is still the "historical" warning about getuid_callout no longer being a valid keyword on Fedora-based distros... but if I remember well it should not influence the output, apart from the warnings, correct?

[root@ovnode03 ~]# multipath -ll
Feb 07 14:09:49 | multipath.conf +5, invalid keyword: getuid_callout
Feb 07 14:09:49 | multipath.conf +18, invalid keyword: getuid_callout

I'm going to check the rdbms tables too...
Gianluca
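A side note on those warnings: as far as I know they are only cosmetic. On Fedora 19 the multipath-tools getuid_callout keyword was dropped in favour of uid_attribute, so the two offending lines in /etc/multipath.conf could simply be replaced. A sketch, assuming the usual scsi_id-based callout was what those lines contained:

# old style, no longer parsed and the source of the "invalid keyword" warning:
#   getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
# new-style equivalent:
uid_attribute ID_SERIAL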

On Fri, Feb 7, 2014 at 2:11 PM, Gianluca Cecchi wrote:
I'm going to check rdbms tables too...
Gianluca
it seems that the table is storage_server_connections, but the value seems (correctly, in my opinion) encrypted... how can I update it, eventually?

engine=# select * from storage_server_connections;

 id | connection | user_name | password | iqn | port | portal | storage_type | mount_options | vfs_type | nfs_version | nfs_timeo | nfs_retrans
----+------------+-----------+----------+-----+------+--------+--------------+---------------+----------+-------------+-----------+-------------
 6a5b159d-4c11-43cc-aa09-55c325de47b3 | 192.168.230.101 | ovirt | lf1mtw6jWq0tcO/jBeLtSdrx9WSMvLOJxMF/Z4UWsgKW10jYKXzkxG8iPgX9xMEcOhTJCeMNtC6EQES5Tq0MjHGPfuzigwL9nejZEZwtDvOFmKZtCBSGaKoOyjQpU8hfoqq7u47jvGE5VmVwDQ40p6goXWDHMWPxdCk2IzAOBsDlsnrJGmqLioRDjJQVya28cJsgzGoaLFHZMQD8bfW7ay3cQ6k8Hxlz99MKNpxxoV0fju1Blpfrqpa2bCSpQ5w0PrVHmJrW4eiBEd/Rg/XV497PGatAcwQr7hD5/uG/GLoqBbCMyR9S11Ot90aprL0Gd9cOlM4VngzCD/2JqFmvhA== | iqn.2013-09.local.localdomain:c6iscsit.target11 | 3260 | 1 | 3 | | | | |

Gianluca

On Fri, Feb 7, 2014 at 3:11 PM, Dafna Ron wrote:
what happens when you try to update from the UI? (edit the storage)
I think it's a cat-chasing-its-tail kind of problem... ;-)
If I go to Storage, select the storage domain OV01 and edit, I see everything empty, so I cannot edit it; probably because it has to authenticate first... or not?
Gianluca

well... that actually sounds like a bug to me - can you open it once we manage to find a solution?
did you get anything in multipath -ll?
one more question... did you try to put the host in maintenance and then activate it again?
On 02/07/2014 02:17 PM, Gianluca Cecchi wrote:
-- Dafna Ron

On Fri, Feb 7, 2014 at 3:48 PM, Dafna Ron wrote:
well... that actually sounds like a bug to me - can you open it once we manage to find a solution?
Yes, eventually I'll start from a clean config, replicate it, and in that case send full logs, once we have a solution
did you get anything in multipath -ll?
No. I got what I wrote before:

[root@ovnode03 ~]# multipath -ll
Feb 07 16:09:55 | multipath.conf +5, invalid keyword: getuid_callout
Feb 07 16:09:55 | multipath.conf +18, invalid keyword: getuid_callout
[root@ovnode03 ~]#
one more question... did you try to put the host in maintenance and then activate it again?
Yes, more than once, but with the same results. Tried just now and it fails the same way.
Gianluca

Where can I find the function that encrypts the iSCSI CHAP password and puts the encrypted value into the storage_server_connections table? So that I can try to reinsert it and verify.
Thanks,
Gianluca

----- Original Message -----
You can just put the plain password, it should work... If you want to encrypt, use:

echo -n 'PASSWORD' | openssl pkeyutl -encrypt -certin -inkey /etc/pki/ovirt-engine/certs/engine.cer | openssl enc -a | tr -d '\n'

But Dafna, isn't there a way in the UI to re-specify the password, so that it gets encrypted by the application?
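Putting the two halves together, a minimal sketch of writing the re-encrypted value back into the table (the psql invocation and the engine database/user names are assumptions here, the table and column names come from the queries earlier in the thread, and the CHAP password is a placeholder):

ENC=$(echo -n 'MY_CHAP_PASSWORD' \
      | openssl pkeyutl -encrypt -certin -inkey /etc/pki/ovirt-engine/certs/engine.cer \
      | openssl enc -a | tr -d '\n')

# hypothetical psql session against the engine database
psql -U engine engine <<SQL
update storage_server_connections
   set password = '${ENC}'
 where user_name = 'ovirt';
SQL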

On Mon, Feb 10, 2014 at 10:56 AM, Alon Bar-Lev <alonbl@redhat.com> wrote:
In my opinion, when I first defined the iSCSI domain and entered a wrong password, something was not correctly managed when I then used the correct one. In fact it seems to me there is no correspondence between the storage_domains table and the storage_server_connections table.

If I take a glusterfs domain named gv01 I see this:

engine=# select * from storage_server_connections where id=(select storage from storage_domains where storage_name='gv01');

 id | connection | user_name | password | iqn | port | portal | storage_type | mount_options | vfs_type | nfs_version | nfs_timeo | nfs_retrans
----+------------+-----------+----------+-----+------+--------+--------------+---------------+----------+-------------+-----------+-------------
 66663b6a-aff3-47fa-b7ca-8e809804cbe2 | ovnode01:gv01 | | | | | | 7 | | glusterfs | | |
(1 row)

Instead for this iSCSI domain named OV01:

engine=# select * from storage_server_connections where id=(select storage from storage_domains where storage_name='OV01');

 id | connection | user_name | password | iqn | port | portal | storage_type | mount_options | vfs_type | nfs_version | nfs_timeo | nfs_retrans
----+------------+-----------+----------+-----+------+--------+--------------+---------------+----------+-------------+-----------+-------------
(0 rows)

In particular:

engine=# select * from storage_domains where storage_name='OV01';

 id | storage | storage_name | storage_description | storage_comment | storage_pool_id | available_disk_size | used_disk_size | commited_disk_size | actual_images_size | status | storage_pool_name | storage_type | storage_domain_type | storage_domain_format_type | last_time_used_as_master | storage_domain_shared_status | recoverable
----+---------+--------------+---------------------+-----------------+-----------------+---------------------+----------------+--------------------+--------------------+--------+-------------------+--------------+---------------------+----------------------------+--------------------------+------------------------------+-------------
 f741671e-6480-4d7b-b357-8cf6e8d2c0f1 | uqe7UZ-PaBY-IiLj-XLAY-XoCZ-cmOk-cMJkeX | OV01 | | | 546cd29c-7249-4733-8fd5-317cff38ed71 | 44 | 5 | 10 | 1 | 4 | ISCSI | 3 | 0 | 3 | 0 | 2 | t
(1 row)

engine=# select * from storage_pool where id='546cd29c-7249-4733-8fd5-317cff38ed71';

 id | name | description | storage_pool_type | storage_pool_format_type | status | master_domain_version | spm_vds_id | compatibility_version | _create_date | _update_date | quota_enforcement_type | free_text_comment
----+------+-------------+-------------------+--------------------------+--------+-----------------------+------------+-----------------------+--------------+--------------+------------------------+-------------------
 546cd29c-7249-4733-8fd5-317cff38ed71 | ISCSI | | 3 | 3 | 4 | 2 | | 3.3 | 2014-02-05 11:46:50.797079+01 | 2014-02-05 23:53:18.864716+01 | 0 |
(1 row)

engine=# select * from storage_server_connections where user_name='ovirt';

 id | connection | user_name | password | iqn | port | portal | storage_type | mount_options | vfs_type | nfs_version | nfs_timeo | nfs_retrans
----+------------+-----------+----------+-----+------+--------+--------------+---------------+----------+-------------+-----------+-------------
 6a5b159d-4c11-43cc-aa09-55c325de47b3 | 192.168.230.101 | ovirt | rMlQVigk7Ah3vJHWqE5jv24vDwZEWd14EExWKLjVowXGNa4ptPZ1O/8uf0ubK8zuQ9/i6qeFh6a7tSahr9yHXF80XEinpo0REZKfa78wUHYLbl8BMnMqYA9TA521Ef0ELBXwB5jmEmdnhew8RRRTjou7ihnnQOX/BMpcjxI0Q8K2Cex+Blk6eoRAtLbKdSdQwbW8W/hhUCmrf94mNHlHPM9jv/HPApq3DU4iXCtbzQJMOXaQbMmYHORloILhAJnlTci59qj67sKkZm4BFUPEBS1K9QQZ0Lnkj/dkqenSeUyZ6MnFm20fI0qdJevqBq2Zl3kW5OZX6d+eIxRQTIYFUQ== | iqn.2013-09.local.localdomain:c6iscsit.target11 | 3260 | 1 | 3 | | | | |
(1 row)

If I run this update and then restart the engine and vdsmd on the host, I can get the iSCSI domain active again:

engine=# update storage_server_connections set id=(select storage from storage_domains where storage_name='OV01') where user_name='ovirt';
UPDATE 1

What do you think about it?
Gianluca
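As a quick sanity check after that kind of update, one could confirm that the domain now resolves to a connection row; note this relies on the id mapping the update just created, so it is only meaningful in this specific setup:

engine=# select s.storage_name, c.connection, c.iqn, c.user_name
         from storage_domains s
         join storage_server_connections c on c.id = s.storage
         where s.storage_name = 'OV01';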

On 02/10/2014 08:00 PM, Gianluca Cecchi wrote:
the problem is that the storage domain already exists but is non-operational, and we cannot edit a storage domain in any status other than active. So if the password changed during a storage issue, the LUNs are not visible to the host, the domain cannot recover to the active state, and we also cannot edit the password for the domain...
-- Dafna Ron