[ovirt-users] iSCSI Multipathing -> host inactive

Uwe Laverenz uwe at laverenz.de
Wed Aug 24 14:04:20 UTC 2016


Hi Elad,

thank you very much for clearing things up.

Initiator/iface 'a' tries to connect to target 'b' and vice versa. As 'a' 
and 'b' are in completely separate networks, this can never work as long 
as there is no routing between the networks.

So it seems the iSCSI bonding feature is not useful for my setup. I 
still wonder how and where this feature is supposed to be used.
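The full-mesh behaviour can be sketched in a few lines of Python (a minimal illustration, not vdsm code; the portal IPs and IQNs are taken from the logs quoted below, while the host-side addresses ending in .122 and the /24 prefixes are assumptions):

```python
# Sketch: an iSCSI bond attempts a login for every (iface, target) pair.
# Pairs whose portal is outside the iface's subnet time out unless routed.
from ipaddress import ip_interface

# Assumed host-side addressing (.122 is hypothetical); IQNs/portals from the logs.
ifaces = {
    "enp9s0f0": ip_interface("10.0.131.122/24"),
    "enp9s0f1": ip_interface("10.0.132.122/24"),
}
targets = {
    "iqn.2005-10.org.freenas.ctl:tgta": ip_interface("10.0.131.121/24"),
    "iqn.2005-10.org.freenas.ctl:tgtb": ip_interface("10.0.132.121/24"),
}

for iface, if_addr in ifaces.items():
    for iqn, portal in targets.items():
        # Only same-subnet pairs are reachable without routing.
        reachable = portal.ip in if_addr.network
        status = "ok" if reachable else "timeout (no route)"
        print(f"{iface} -> {iqn} @ {portal.ip}: {status}")
```

With two separate storage networks, exactly half of the four login attempts fail, which matches the two timeouts in the vdsm logs (enp9s0f0 -> tgtb and enp9s0f1 -> tgta).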

thank you,
Uwe

Am 24.08.2016 um 15:35 schrieb Elad Ben Aharon:
> Thanks.
>
> You're getting an iSCSI connection timeout [1], [2]. It means the host
> cannot connect to the targets from either iface: enp9s0f1 or iface: enp9s0f0.
>
> This causes the host to lose its connection to the storage, and its
> connection to the engine also becomes inactive. Therefore, the host
> changes its status to Non-responsive [3] and, since it's the SPM, the
> whole DC, with all its storage domains, becomes inactive.
>
>
> vdsm.log:
> [1]
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/hsm.py", line 2400, in connectStorageServer
>     conObj.connect()
>   File "/usr/share/vdsm/storage/storageServer.py", line 508, in connect
>     iscsi.addIscsiNode(self._iface, self._target, self._cred)
>   File "/usr/share/vdsm/storage/iscsi.py", line 204, in addIscsiNode
>     iscsiadm.node_login(iface.name <http://iface.name>, portalStr,
> target.iqn)
>   File "/usr/share/vdsm/storage/iscsiadm.py", line 336, in node_login
>     raise IscsiNodeError(rc, out, err)
> IscsiNodeError: (8, ['Logging in to [iface: enp9s0f0, target:
> iqn.2005-10.org.freenas.ctl:tgtb, portal: 10.0.132.121,3260]
> (multiple)'], ['iscsiadm: Could not login to [iface: enp9s0f0, target:
> iqn.2005-10.org.freenas.ctl:tgtb, portal: 10.0.132.121,3260].',
> 'iscsiadm: initiator reported error (8 - connection timed out)',
> 'iscsiadm: Could not log into all portals'])
>
>
>
> vdsm.log:
> [2]
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/hsm.py", line 2400, in connectStorageServer
>     conObj.connect()
>   File "/usr/share/vdsm/storage/storageServer.py", line 508, in connect
>     iscsi.addIscsiNode(self._iface, self._target, self._cred)
>   File "/usr/share/vdsm/storage/iscsi.py", line 204, in addIscsiNode
>     iscsiadm.node_login(iface.name <http://iface.name>, portalStr,
> target.iqn)
>   File "/usr/share/vdsm/storage/iscsiadm.py", line 336, in node_login
>     raise IscsiNodeError(rc, out, err)
> IscsiNodeError: (8, ['Logging in to [iface: enp9s0f1, target:
> iqn.2005-10.org.freenas.ctl:tgta, portal: 10.0.131.121,3260]
> (multiple)'], ['iscsiadm: Could not login to [iface: enp9s0f1, target:
> iqn.2005-10.org.freenas.ctl:tgta, portal: 10.0.131.121,3260].',
> 'iscsiadm: initiator reported error (8 - connection timed out)',
> 'iscsiadm: Could not log into all portals'])
>
>
> engine.log:
> [3]
>
>
> 2016-08-24 14:10:23,222 WARN
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-25) [15d1637f] Correlation ID: 15d1637f, Call Stack: null,
> Custom Event ID:
>  -1, Message: iSCSI bond 'iBond' was successfully created in Data Center
> 'Default' but some of the hosts encountered connection issues.
>
>
>
> 2016-08-24 14:10:23,208 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (org.ovirt.thread.pool-8-thread-25) [15d1637f] Command
> 'org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand' return value '
> ServerConnectionStatusReturnForXmlRpc:{status='StatusForXmlRpc
> [code=5022, message=Message timeout which can be caused by communication
> issues]'}
>
>
>
> On Wed, Aug 24, 2016 at 4:04 PM, Uwe Laverenz <uwe at laverenz.de> wrote:
>
>     Hi Elad,
>
>     I sent you a download message.
>
>     thank you,
>     Uwe
>     _______________________________________________
>     Users mailing list
>     Users at ovirt.org
>     http://lists.ovirt.org/mailman/listinfo/users
>
>
