> On 21 Jul 2017, at 15:12, Yaniv Kaul <ykaul@redhat.com> wrote:
>
> On Wed, Jul 19, 2017 at 9:13 PM, Vinícius Ferrão <ferrao@if.ufrj.br> wrote:
>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hello,<br class="">
<br class="">
I’ve skipped this message entirely yesterday. So this is per design? Because the best practices of iSCSI MPIO, as far as I know, recommends two completely separate paths. If this can’t be achieved with oVirt what’s the point of running MPIO?<br class="">
</blockquote>
<div class=""><br class="">
</div>
<div class="">With regular storage it is quite easy to achieve using 'iSCSI bonding'.</div>
<div class="">I think the Dell storage is a bit different and requires some more investigation - or experience with it.</div>
<div class=""> Y.</div>
</div>
</div>
</div>
</div>
</blockquote>
<div><br class="">
</div>
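For reference, the 'iSCSI bond' mentioned above can also be driven through the engine's REST API, not only from the Administration Portal. A rough sketch only: ENGINE_FQDN, PASSWORD, DATACENTER_ID, NETWORK_ID and CONNECTION_ID are placeholders, and the element layout follows my reading of the oVirt 4 REST API reference, so it should be verified against /ovirt-engine/api on your engine before use:

    curl -k -u 'admin@internal:PASSWORD' \
      -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
      -X POST \
      -d '<iscsi_bond>
            <name>iscsibond01</name>
            <networks><network id="NETWORK_ID"/></networks>
            <storage_connections><storage_connection id="CONNECTION_ID"/></storage_connections>
          </iscsi_bond>' \
      'https://ENGINE_FQDN/ovirt-engine/api/datacenters/DATACENTER_ID/iscsibonds'

As I understand it, the bond ties the chosen logical networks to the storage connections so that the hosts log in to each target over each of those networks.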
Yaniv, thank you for answering this. I'm really hoping that a solution can be found.

Actually, I'm not running anything from Dell. My storage system is FreeNAS, which is pretty standard, and as far as I know iSCSI best practices dictate segregated networks for proper operation.

All the other major virtualization products support iSCSI this way: vSphere, XenServer and Hyper-V. So I was really surprised that oVirt (and even RHV; I requested a trial yesterday) does not implement iSCSI according to these well-known best practices.

Here is a picture of the architecture that I found on Google when searching for "mpio best practices":
https://image.slidesharecdn.com/2010-12-06-midwest-reg-vmug-101206110506-phpapp01/95/nextgeneration-best-practices-for-vmware-and-storage-15-728.jpg?cb=1296301640

And as you can see, it shows segregated networks on a single machine reaching the same target.
<div><br class="">
</div>
<div>In my case, my datacenter has five Hypervisor Machines, with two NICs dedicated for iSCSI. Both NICs connect to different converged ethernet switches and the iStorage is connected the same way.</div>
<div><br class="">
</div>
<div>So it really does not make sense that a the first NIC can reach the second NIC target. In a case of a switch failure the cluster will go down anyway, so what’s the point of running MPIO? Right?</div>
<div><br class="">
</div>
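Outside of oVirt, this layout is normally expressed by binding each iSCSI session to its own interface, so each path stays on its own switch. A minimal sketch of what I mean, with hypothetical NIC names (ens1f0/ens1f1) and portal addresses (10.1.1.10 and 10.1.2.10 on the two separate subnets):

    # one iface record per dedicated NIC (names and addresses are examples)
    iscsiadm -m iface -I iface-a --op new
    iscsiadm -m iface -I iface-a --op update -n iface.net_ifacename -v ens1f0
    iscsiadm -m iface -I iface-b --op new
    iscsiadm -m iface -I iface-b --op update -n iface.net_ifacename -v ens1f1

    # discover and log in to each portal only through its own fabric
    iscsiadm -m discovery -t sendtargets -p 10.1.1.10 -I iface-a
    iscsiadm -m discovery -t sendtargets -p 10.1.2.10 -I iface-b
    iscsiadm -m node --login

multipathd then aggregates the two sessions into a single device with two paths, one per fabric, which is the behaviour I would expect oVirt's iSCSI multipathing to reproduce.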
Thanks once again,
V.
<div><br class="">
</div>
<blockquote type="cite" class="">
<div class="">
<div dir="ltr" class="">
<div class="gmail_extra">
<div class="gmail_quote">
<div class=""><br class="">
</div>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br class="">
May we ask for a bug fix or a feature redesign on this?<br class="">
<br class="">
MPIO is part of my datacenter, and it was originally build for running XenServer, but I’m considering the move to oVirt. MPIO isn’t working right and this can be a great no-go for me...<br class="">
<br class="">
I’m willing to wait and hold my DC project if this can be fixed.<br class="">
<br class="">
Any answer from the redhat folks?<br class="">
<br class="">
Thanks,<br class="">
V.<br class="">
<div class="HOEnZb">
<div class="h5"><br class="">
> On 18 Jul 2017, at 11:09, Uwe Laverenz <<a href="mailto:uwe@laverenz.de" class="">uwe@laverenz.de</a>> wrote:<br class="">
><br class="">
> Hi,<br class="">
><br class="">
><br class="">
> Am 17.07.2017 um 14:11 schrieb Devin Acosta:<br class="">
><br class="">
>> I am still troubleshooting the issue, I haven’t found any resolution to my issue at this point yet. I need to figure out by this Friday otherwise I need to look at Xen or another solution. iSCSI and oVIRT seems problematic.<br class="">
><br class="">
> The configuration of iSCSI-Multipathing via OVirt didn't work for me either. IIRC the underlying problem in my case was that I use totally isolated networks for each path.<br class="">
><br class="">
> Workaround: to make round robin work you have to enable it by editing "/etc/multipath.conf". Just add the 3 lines for the round robin setting (see comment in the file) and additionally add the "# VDSM PRIVATE" comment to keep vdsmd from overwriting your settings.<br class="">
><br class="">
> My multipath.conf:<br class="">
><br class="">
><br class="">
>>>> # VDSM REVISION 1.3
>>>> # VDSM PRIVATE
>>>> defaults {
>>>>     polling_interval        5
>>>>     no_path_retry           fail
>>>>     user_friendly_names     no
>>>>     flush_on_last_del       yes
>>>>     fast_io_fail_tmo        5
>>>>     dev_loss_tmo            30
>>>>     max_fds                 4096
>>>>     # 3 lines added manually for multipathing:
>>>>     path_selector           "round-robin 0"
>>>>     path_grouping_policy    multibus
>>>>     failback                immediate
>>>> }
>>>> # Remove devices entries when overrides section is available.
>>>> devices {
>>>>     device {
>>>>         # These settings overrides built-in devices settings. It does not apply
>>>>         # to devices without built-in settings (these use the settings in the
>>>>         # "defaults" section), or to devices defined in the "devices" section.
>>>>         # Note: This is not available yet on Fedora 21. For more info see
>>>>         # https://bugzilla.redhat.com/1253799
>>>>         all_devs                yes
>>>>         no_path_retry           fail
>>>>     }
>>>> }
>>>
>>> To enable the settings:
>>>
>>>     systemctl restart multipathd
>>>
>>> See if it works:
>>>
>>>     multipath -ll
>>>
>>> HTH,
>>> Uwe