<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Aug 24, 2016 at 6:15 PM, InterNetX - Juergen Gotteswinter <span dir="ltr"><<a href="mailto:juergen.gotteswinter@internetx.com" target="_blank">juergen.gotteswinter@internetx.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">iSCSI & Ovirt is an awful combination, no matter if multipathed or<br>
bonded. it's always gambling how long it will work and, when it fails, why<br>
it failed.<br></blockquote><div><br></div><div>I disagree. In most cases it's actually a lower-layer issue; most often, it's because multipathing was not configured (or not configured correctly). A quick sanity check is sketched below.</div><div> </div>
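<div>For example, a minimal check on the host could look like this (only a sketch, assuming the standard device-mapper-multipath and open-iscsi tooling; the expected number of paths depends on the storage layout):</div>
<div><br></div>
<div># list multipathed LUNs; each LUN should show one active path per iSCSI interface<br>
multipath -ll<br>
# show iSCSI sessions together with the iface each session is bound to<br>
iscsiadm -m session -P 3 | grep -E 'Iface Name|Current Portal|iSCSI Session State'<br></div>
<div><br></div>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">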
<br>
it's supersensitive to latency, and superfast at setting a host to<br>
inactive because the engine thinks something is wrong with it. In most<br>
cases there was no real reason for it.<br></blockquote><div><br></div><div>Did you open bugs for those issues? I'm not aware of 'no real reason' issues.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
We had this with several different hardware combinations: self-built<br>
filers based on FreeBSD/Illumos & ZFS, an Equallogic SAN, a Nexenta filer.<br>
<br>
Been there, done that, won't do it again.<br></blockquote><div><br></div><div>We've had good success and reliability with most enterprise-level storage, such as EMC, NetApp, and Dell filers.</div><div>When properly configured, of course.</div><div>Y.</div><div><br></div><div> </div>
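<div>(For what it's worth, the manual equivalent with plain iscsiadm is one iface per NIC, each logging in only to the portal on its own subnet. A minimal sketch, using the iface names and portals from the logs below, and assuming enp9s0f0 sits on the 10.0.131.x network and enp9s0f1 on 10.0.132.x; oVirt drives the same calls through VDSM, so this only illustrates the intended layout.)</div>
<div><br></div>
<div># bind one iSCSI iface to each NIC<br>
iscsiadm -m iface -I enp9s0f0 --op=new<br>
iscsiadm -m iface -I enp9s0f0 --op=update -n iface.net_ifacename -v enp9s0f0<br>
iscsiadm -m iface -I enp9s0f1 --op=new<br>
iscsiadm -m iface -I enp9s0f1 --op=update -n iface.net_ifacename -v enp9s0f1<br>
# discover and log in only on the matching subnet (NIC-to-subnet mapping assumed, see above)<br>
iscsiadm -m discovery -t sendtargets -p 10.0.131.121 -I enp9s0f0<br>
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:tgta -p 10.0.131.121 -I enp9s0f0 --login<br>
iscsiadm -m discovery -t sendtargets -p 10.0.132.121 -I enp9s0f1<br>
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:tgtb -p 10.0.132.121 -I enp9s0f1 --login<br></div>
<div><br></div>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">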
<div class="HOEnZb"><div class="h5"><br>
On 24.08.2016 at 16:04, Uwe Laverenz wrote:<br>
> Hi Elad,<br>
><br>
> thank you very much for clearing things up.<br>
><br>
> Initiator/iface 'a' tries to connect to target 'b' and vice versa. As 'a'<br>
> and 'b' are in completely separate networks, this can never work as long<br>
> as there is no routing between the networks.<br>
><br>
> So it seems the iSCSI-bonding feature is not useful for my setup. I<br>
> still wonder how and where this feature is supposed to be used.<br>
><br>
> thank you,<br>
> Uwe<br>
><br>
> On 24.08.2016 at 15:35, Elad Ben Aharon wrote:<br>
>> Thanks.<br>
>><br>
>> You're getting an iSCSI connection timeout [1], [2]. It means the host<br>
>> cannot connect to the targets from either iface enp9s0f1 or iface enp9s0f0.<br>
>><br>
>> This causes the host to lose its connection to the storage and, in<br>
>> addition, the connection to the engine becomes inactive. Therefore, the host<br>
>> changes its status to Non-responsive [3] and, since it's the SPM, the<br>
>> whole DC, with all its storage domains, becomes inactive.<br>
>><br>
>><br>
>> vdsm.log:<br>
>> [1]<br>
>> Traceback (most recent call last):<br>
>> File "/usr/share/vdsm/storage/hsm.<wbr>py", line 2400, in<br>
>> connectStorageServer<br>
>> conObj.connect()<br>
>> File "/usr/share/vdsm/storage/<wbr>storageServer.py", line 508, in connect<br>
>> iscsi.addIscsiNode(self._<wbr>iface, self._target, self._cred)<br>
>> File "/usr/share/vdsm/storage/<wbr>iscsi.py", line 204, in addIscsiNode<br>
>> iscsiadm.node_login(<a href="http://iface.name" rel="noreferrer" target="_blank">iface.name</a> <<a href="http://iface.name" rel="noreferrer" target="_blank">http://iface.name</a>>, portalStr,<br>
>> target.iqn)<br>
>> File "/usr/share/vdsm/storage/<wbr>iscsiadm.py", line 336, in node_login<br>
>> raise IscsiNodeError(rc, out, err)<br>
>> IscsiNodeError: (8, ['Logging in to [iface: enp9s0f0, target:<br>
>> iqn.2005-10.org.freenas.ctl:<wbr>tgtb, portal: 10.0.132.121,3260]<br>
>> (multiple)'], ['iscsiadm: Could not login to [iface: enp9s0f0, targ<br>
>> et: iqn.2005-10.org.freenas.ctl:<wbr>tgtb, portal: 10.0.132.121,3260].',<br>
>> 'iscsiadm: initiator reported error (8 - connection timed out)',<br>
>> 'iscsiadm: Could not log into all portals'])<br>
>><br>
>><br>
>><br>
>> vdsm.log:<br>
>> [2]<br>
>> Traceback (most recent call last):<br>
>> File "/usr/share/vdsm/storage/hsm.<wbr>py", line 2400, in<br>
>> connectStorageServer<br>
>> conObj.connect()<br>
>> File "/usr/share/vdsm/storage/<wbr>storageServer.py", line 508, in connect<br>
>> iscsi.addIscsiNode(self._<wbr>iface, self._target, self._cred)<br>
>> File "/usr/share/vdsm/storage/<wbr>iscsi.py", line 204, in addIscsiNode<br>
>> iscsiadm.node_login(<a href="http://iface.name" rel="noreferrer" target="_blank">iface.name</a> <<a href="http://iface.name" rel="noreferrer" target="_blank">http://iface.name</a>>, portalStr,<br>
>> target.iqn)<br>
>> File "/usr/share/vdsm/storage/<wbr>iscsiadm.py", line 336, in node_login<br>
>> raise IscsiNodeError(rc, out, err)<br>
>> IscsiNodeError: (8, ['Logging in to [iface: enp9s0f1, target:<br>
>> iqn.2005-10.org.freenas.ctl:<wbr>tgta, portal: 10.0.131.121,3260]<br>
>> (multiple)'], ['iscsiadm: Could not login to [iface: enp9s0f1, target:<br>
>> iqn.2005-10.org.freenas.ctl:<wbr>tgta, portal: 10.0.131.121,3260].',<br>
>> 'iscsiadm: initiator reported error (8 - connection timed out)',<br>
>> 'iscsiadm: Could not log into all portals'])<br>
>><br>
>><br>
>> engine.log:<br>
>> [3]<br>
>><br>
>><br>
>> 2016-08-24 14:10:23,222 WARN<br>
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]<br>
>> (default task-25) [15d1637f] Correlation ID: 15d1637f, Call Stack: null,<br>
>> Custom Event ID:<br>
>> -1, Message: iSCSI bond 'iBond' was successfully created in Data Center<br>
>> 'Default' but some of the hosts encountered connection issues.<br>
>><br>
>><br>
>><br>
>> 2016-08-24 14:10:23,208 INFO<br>
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]<br>
>><br>
>> (org.ovirt.thread.pool-8-thread-25) [15d1637f] Command<br>
>> 'org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand'<br>
>> return value '<br>
>> ServerConnectionStatusReturnForXmlRpc:{status='StatusForXmlRpc<br>
>> [code=5022, message=Message timeout which can be caused by communication<br>
>> issues]'}<br>
>><br>
>><br>
>><br>
>> On Wed, Aug 24, 2016 at 4:04 PM, Uwe Laverenz <<a href="mailto:uwe@laverenz.de">uwe@laverenz.de</a>> wrote:<br>
>><br>
>> Hi Elad,<br>
>><br>
>> I sent you a download message.<br>
>><br>
>> thank you,<br>
>> Uwe<br>
>><br>
>><br>
_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org">Users@ovirt.org</a><br>
<a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a><br>
</div></div></blockquote></div><br></div></div>