<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div class="moz-cite-prefix">Hello,<br>
<br>
[<font color="#ff0000">Unusual setup</font>]<br>
Last week, I finally managed to get an oVirt 4.2.1.7 setup working
with iSCSI multipathing on both hosts and guests, connected to a Dell
EqualLogic SAN which provides one single virtual IP - my hosts
have two dedicated NICs for iSCSI, but <font color="#ff0000">on
the same VLAN</font>. Torture tests showed good resilience.<br>
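<br>
For the record, here is a minimal sketch of how this single-portal,
dual-NIC binding can be expressed with open-iscsi on a host (the
interface names, NIC names and the portal address are made up for the
example):<br>
<pre>
# One iSCSI iface per dedicated NIC (names and IP are examples)
iscsiadm -m iface -I iface-a -o new
iscsiadm -m iface -I iface-a -o update -n iface.net_ifacename -v eth2
iscsiadm -m iface -I iface-b -o new
iscsiadm -m iface -I iface-b -o update -n iface.net_ifacename -v eth3

# Discover the single virtual IP through both ifaces, then log in;
# this yields two sessions that dm-multipath can aggregate
iscsiadm -m discovery -t sendtargets -p 10.10.10.100:3260 -I iface-a -I iface-b
iscsiadm -m node -L all
</pre>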
<br>
[<font color="#009900">Classical setup</font>]<br>
But this year we plan to create at least two additional DCs and
to connect their hosts to a "classical" SAN, i.e. one which provides TWO
IPs <font color="#009900">on segregated VLANs (not routed)</font>,
and we'd like to use the same iSCSI multipathing feature.<br>
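<br>
For this classical setup, what I would expect to run on each host is
something like the sketch below - each NIC only ever talks to the
portal in its own VLAN (the subnets, NIC names and portal IPs are
illustrative):<br>
<pre>
# NIC on iSCSI VLAN A discovers and logs into portal A only
iscsiadm -m iface -I iscsi-vlan-a -o new
iscsiadm -m iface -I iscsi-vlan-a -o update -n iface.net_ifacename -v eth2
iscsiadm -m discovery -t sendtargets -p 192.168.10.10:3260 -I iscsi-vlan-a

# NIC on iSCSI VLAN B discovers and logs into portal B only
iscsiadm -m iface -I iscsi-vlan-b -o new
iscsiadm -m iface -I iscsi-vlan-b -o update -n iface.net_ifacename -v eth3
iscsiadm -m discovery -t sendtargets -p 192.168.20.10:3260 -I iscsi-vlan-b

# Log in; no route between the two VLANs is needed at any point
iscsiadm -m node -L all
</pre>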
<br>
The discussion below could lead one to think that oVirt needs the two
iSCSI VLANs to be routed, allowing the hosts in one VLAN to access
resources in the other.<br>
As Vinícius explained, this is not a best practice, to say the
least.<br>
<br>
Searching through the mailing list archive, I found no answer to
Vinícius's question.<br>
<br>
May a Red Hat storage and/or network expert enlighten us on these
points?<br>
<br>
Regards,<br>
<br>
-- <br>
Nicolas Ecarnot<br>
<br>
On 21/07/2017 at 20:56, Vinícius Ferrão wrote:<br>
</div>
<blockquote type="cite"
cite="mid:54FF808B-215E-441D-9864-38DBDD9F32E8@if.ufrj.br">
<div class=""><br class="">
</div>
<div>
<blockquote type="cite" class="">
<div class="">On 21 Jul 2017, at 15:12, Yaniv Kaul <<a
href="mailto:ykaul@redhat.com" class=""
moz-do-not-send="true">ykaul@redhat.com</a>> wrote:</div>
<br class="Apple-interchange-newline">
<div class="">
<div dir="ltr" class=""><br class="">
<div class="gmail_extra"><br class="">
<div class="gmail_quote">On Wed, Jul 19, 2017 at 9:13
PM, Vinícius Ferrão <span dir="ltr" class="">
<<a href="mailto:ferrao@if.ufrj.br"
target="_blank" class="" moz-do-not-send="true">ferrao@if.ufrj.br</a>></span>
wrote:<br class="">
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
Hello,<br class="">
<br class="">
I skipped this message entirely yesterday. So
this is by design? Because the best practices of
iSCSI MPIO, as far as I know, recommend two
completely separate paths. If this can’t be achieved
with oVirt, what’s the point of running MPIO?<br class="">
</blockquote>
<div class=""><br class="">
</div>
<div class="">With regular storage it is quite easy to
achieve using 'iSCSI bonding'.</div>
<div class="">I think the Dell storage is a bit
different and requires some more investigation - or
experience with it.</div>
<div class=""> Y.</div>
</div>
</div>
</div>
</div>
</blockquote>
<div><br class="">
</div>
<div>Yaniv, thank you for answering this. I’m really hoping that
a solution can be found.</div>
<div><br class="">
</div>
<div>Actually I’m not running anything from Dell. My storage
system is FreeNAS, which is pretty standard, and, as far as I
know, iSCSI best practice dictates segregated networks for proper
operation.</div>
<div><br class="">
</div>
<div>All other major virtualization products support iSCSI this
way: vSphere, XenServer and Hyper-V. So I was really surprised
that oVirt (and even RHV; I requested a trial yesterday) does
not implement iSCSI according to the well-known best practices.</div>
<div><br class="">
</div>
<div>Here’s a picture of the architecture that I took from
Google when searching for “mpio best practices”:
<a
href="https://image.slidesharecdn.com/2010-12-06-midwest-reg-vmug-101206110506-phpapp01/95/nextgeneration-best-practices-for-vmware-and-storage-15-728.jpg?cb=1296301640"
class="" moz-do-not-send="true">
https://image.slidesharecdn.com/2010-12-06-midwest-reg-vmug-101206110506-phpapp01/95/nextgeneration-best-practices-for-vmware-and-storage-15-728.jpg?cb=1296301640</a></div>
<div><br class="">
</div>
<div>And as you can see, it shows segregated networks on a machine
reaching the same target.</div>
<div><br class="">
</div>
<div>In my case, my datacenter has five hypervisor machines,
each with two NICs dedicated to iSCSI. The two NICs connect to
different converged Ethernet switches, and the storage is
connected the same way.</div>
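<div><br class="">
</div>
<div>For reference, once both sessions are up, a minimal dm-multipath
configuration along these lines is what I would expect on the hosts
(a sketch only; the values are illustrative, not something oVirt
generated):</div>
<pre>
# /etc/multipath.conf -- minimal example, values are illustrative
defaults {
    find_multipaths      yes
    path_grouping_policy multibus    # spread I/O across both paths
    failback             immediate   # return to a restored path at once
    no_path_retry        4           # queue briefly if all paths fail
}
</pre>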
<div><br class="">
</div>
<div>So it really does not make sense that the first NIC can
reach the second NIC’s target. If the paths are not independent,
a switch failure will take the cluster down anyway, so what’s the
point of running MPIO? Right?</div>
<div><br class="">
</div>
<div>Thanks once again,</div>
<div>V.</div>
</div>
</blockquote>
</body>
</html>