<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Aug 26, 2016 at 1:33 PM, InterNetX - Juergen Gotteswinter <span dir="ltr"><<a href="mailto:jg@internetx.com" target="_blank">jg@internetx.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><br>
>
> On 25.08.2016 at 15:53, Yaniv Kaul wrote:
> >
> >
> > On Wed, Aug 24, 2016 at 6:15 PM, InterNetX - Juergen Gotteswinter
> > <juergen.gotteswinter@internetx.com> wrote:
> >
> > iSCSI & oVirt is an awful combination, no matter if multipathed or
> > bonded. It's always gambling how long it will work, and when it fails,
> > why it failed.
> >
> >
> > I disagree. In most cases, it's actually a lower-layer issue. In most
> > cases, btw, it's because multipathing was not configured (or not
> > configured correctly).
> >
>
> Experience tells me it is like I said; this is something I have seen
> from 3.0 up to 3.6, in oVirt and - surprise - RHEV. Both act the same
> way. I am absolutely aware of multipath configurations; iSCSI
> multipathing is in very widespread use in our DC. But such problems are
> an exclusive oVirt/RHEV feature.

I don't think the resentful tone is appropriate for the oVirt community mailing list.
>
> >
> >
> > It's supersensitive to latency, and superfast with setting a host to
> > inactive because the engine thinks something is wrong with it. In most
> > cases there was no real reason for it.
> >
> >
> > Did you open bugs for those issues? I'm not aware of 'no real reason'
> > issues.
> >
>
> Support tickets for a RHEV installation. After support (even after massive
> escalation requests) kept telling me the same thing again and again, I gave
> up and we dropped the RHEV subscriptions to migrate the VMs to a different
> platform solution (still with an iSCSI backend). Problems gone.

I wish you peace of mind with your new platform solution.
From a (shallow!) search I've made on oVirt bugs, I could not find any oVirt issue you've reported or commented on.
I am aware of the request to set rp_filter correctly for setups with multiple interfaces in the same IP subnet.
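For reference, here is a minimal sketch of checking those sysctls on a host. This is not oVirt/VDSM code, and the interface names are only an assumption borrowed from the logs quoted further down in this thread:

#!/usr/bin/env python
# Minimal sketch, not part of oVirt/VDSM: print the reverse-path filter
# setting of the iSCSI interfaces. With several NICs in the same IP subnet,
# strict rp_filter (1) can drop replies that arrive on the "other" NIC;
# loose (2) or disabled (0) is what the rp_filter guidance is about.
# The interface names are assumptions taken from the logs quoted below.
IFACES = ["enp9s0f0", "enp9s0f1"]

for iface in IFACES:
    path = "/proc/sys/net/ipv4/conf/%s/rp_filter" % iface
    try:
        with open(path) as f:
            print("%s rp_filter=%s" % (iface, f.read().strip()))
    except IOError as exc:
        print("%s: cannot read %s (%s)" % (iface, path, exc))

Loose mode (2) or disabled (0) on the iSCSI interfaces is usually what that guidance boils down to; strict mode (1) silently drops replies that come back on the "wrong" interface of a multihomed subnet.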
>
> >
> >
> > We had this with several different hardware combinations: self-built
> > filers based on FreeBSD/Illumos & ZFS, an Equallogic SAN, a Nexenta
> > filer.
> >
> > Been there, done that, won't do it again.
> >
> >
> > We've had good success and reliability with most enterprise-level
> > storage, such as EMC, NetApp, Dell filers.
> > When properly configured, of course.
> > Y.
> >
>
> Dell Equallogic? I can't really believe that, since oVirt/RHEV and the
> Equallogic network configuration won't play nicely together (EQL wants
> all interfaces in the same subnet). And they only work as expected when
> their HIT Kit driver package is installed. Without it, path failover is
> like Russian roulette. But oVirt hates the HIT Kit, so this combo ends
> up in a huge mess, because oVirt makes changes to iSCSI, as does the
> HIT Kit -> kaboom, host not available.

Thanks - I'll look into this specific storage.
I'm aware it's unique in some cases, but I don't have experience with it specifically.
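For completeness, below is a rough sketch of the generic open-iscsi way of binding one session per NIC in a single subnet, which is roughly the job the HIT Kit automates. The NIC names, portal and IQN are placeholders borrowed from the logs later in this thread, not Equallogic values, and this is only an illustration of the approach, not how oVirt itself manages sessions:

#!/usr/bin/env python
# Rough sketch, not oVirt/VDSM code: create one open-iscsi "iface" per NIC
# and log in through each of them, which is the generic way to keep several
# initiator ports in the same subnet without a vendor driver package.
# NIC names, portal and IQN are placeholders taken from the logs below.
import subprocess

NICS = ["enp9s0f0", "enp9s0f1"]
PORTAL = "10.0.132.121:3260"
TARGET = "iqn.2005-10.org.freenas.ctl:tgtb"

def run(*args):
    print("+ " + " ".join(args))
    subprocess.check_call(args)

for nic in NICS:
    run("iscsiadm", "-m", "iface", "-I", nic, "--op=new")
    run("iscsiadm", "-m", "iface", "-I", nic, "--op=update",
        "-n", "iface.net_ifacename", "-v", nic)
    run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL, "-I", nic)
    run("iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "-I", nic, "--login")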
>
> There are several KB articles in the RHN, without a real solution.
>
>
> But, as you try to suggest between the lines, this must be the customer's
> misconfiguration. Yep, a typical support-killer answer. Same style as in
> RHN tickets; I am done with this.

A funny sentence I read yesterday:
Schrodinger's backup: "the condition of any backup is unknown until a restore is attempted."

In a sense, this is similar to a no-SPOF high-availability setup - in many cases you don't know whether it works well until it's needed.
There are simply many variables and components involved.
That was all I meant, nothing between the lines, and I apologize if I've given you a different impression.
Y.
>
> Thanks.
>
> >
> >
> >
> > On 24.08.2016 at 16:04, Uwe Laverenz wrote:
> > > Hi Elad,
> > >
> > > thank you very much for clearing things up.
> > >
> > > Initiator/iface 'a' tries to connect to target 'b' and vice versa. As
> > > 'a' and 'b' are in completely separate networks, this can never work
> > > as long as there is no routing between the networks.
> > >
> > > So it seems the iSCSI-bonding feature is not useful for my setup. I
> > > still wonder how and where this feature is supposed to be used?
> > >
> > > thank you,
> > > Uwe
> > >
> > > On 24.08.2016 at 15:35, Elad Ben Aharon wrote:
> > >> Thanks.
> > >>
> > >> You're getting an iSCSI connection timeout [1], [2]. It means the
> > >> host cannot connect to the targets from iface: enp9s0f1 nor iface:
> > >> enp9s0f0.
> > >>
> > >> This causes the host to lose its connection to the storage and also,
> > >> the connection to the engine becomes inactive. Therefore, the host
> > >> changes its status to Non-responsive [3], and since it's the SPM, the
> > >> whole DC, with all its storage domains, becomes inactive.
> > >>
> > >>
> > >> vdsm.log:
> > >> [1]
> > >> Traceback (most recent call last):
> > >>   File "/usr/share/vdsm/storage/hsm.py", line 2400, in connectStorageServer
> > >>     conObj.connect()
> > >>   File "/usr/share/vdsm/storage/storageServer.py", line 508, in connect
> > >>     iscsi.addIscsiNode(self._iface, self._target, self._cred)
> > >>   File "/usr/share/vdsm/storage/iscsi.py", line 204, in addIscsiNode
> > >>     iscsiadm.node_login(iface.name, portalStr, target.iqn)
> > >>   File "/usr/share/vdsm/storage/iscsiadm.py", line 336, in node_login
> > >>     raise IscsiNodeError(rc, out, err)
> > >> IscsiNodeError: (8, ['Logging in to [iface: enp9s0f0, target:
> > >> iqn.2005-10.org.freenas.ctl:tgtb, portal: 10.0.132.121,3260]
> > >> (multiple)'], ['iscsiadm: Could not login to [iface: enp9s0f0, target:
> > >> iqn.2005-10.org.freenas.ctl:tgtb, portal: 10.0.132.121,3260].',
> > >> 'iscsiadm: initiator reported error (8 - connection timed out)',
> > >> 'iscsiadm: Could not log into all portals'])
> > >>
> > >>
> > >>
> > >> vdsm.log:
> > >> [2]
> > >> Traceback (most recent call last):
> > >>   File "/usr/share/vdsm/storage/hsm.py", line 2400, in connectStorageServer
> > >>     conObj.connect()
> > >>   File "/usr/share/vdsm/storage/storageServer.py", line 508, in connect
> > >>     iscsi.addIscsiNode(self._iface, self._target, self._cred)
> > >>   File "/usr/share/vdsm/storage/iscsi.py", line 204, in addIscsiNode
> > >>     iscsiadm.node_login(iface.name, portalStr, target.iqn)
> > >>   File "/usr/share/vdsm/storage/iscsiadm.py", line 336, in node_login
> > >>     raise IscsiNodeError(rc, out, err)
> > >> IscsiNodeError: (8, ['Logging in to [iface: enp9s0f1, target:
> > >> iqn.2005-10.org.freenas.ctl:tgta, portal: 10.0.131.121,3260]
> > >> (multiple)'], ['iscsiadm: Could not login to [iface: enp9s0f1, target:
> > >> iqn.2005-10.org.freenas.ctl:tgta, portal: 10.0.131.121,3260].',
> > >> 'iscsiadm: initiator reported error (8 - connection timed out)',
> > >> 'iscsiadm: Could not log into all portals'])
> > >>
> > >>
> > >> engine.log:
> > >> [3]
> > >>
> > >>
> > >> 2016-08-24 14:10:23,222 WARN
> > >> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > >> (default task-25) [15d1637f] Correlation ID: 15d1637f, Call Stack: null,
> > >> Custom Event ID:
> > >> -1, Message: iSCSI bond 'iBond' was successfully created in Data Center
> > >> 'Default' but some of the hosts encountered connection issues.
> > >>
> > >>
> > >>
> > >> 2016-08-24 14:10:23,208 INFO
> > >> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> > >> (org.ovirt.thread.pool-8-thread-25) [15d1637f] Command
> > >> 'org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand'
> > >> return value '
> > >> ServerConnectionStatusReturnForXmlRpc:{status='StatusForXmlRpc
> > >> [code=5022, message=Message timeout which can be caused by communication
> > >> issues]'}
> > >>
> > >>
> > >>
> > >> On Wed, Aug 24, 2016 at 4:04 PM, Uwe Laverenz <uwe@laverenz.de> wrote:
> > >>
> > >> Hi Elad,
> > >>
> > >> I sent you a download message.
> > >>
> > >> thank you,
> > >> Uwe