Hi Marcin,
Hmm, so if you are using multipath with VDSM, you have to manually
edit vdsm.conf to put in the right IP every time the active
controller switches? That sort of defeats the purpose of multipath...
That was the issue we were having: we'd spin up another host, it would
connect to the SAN, which would then rebalance the disks among
controllers, and all our other hosts would lose their connection to the
active controller and pause all of their VMs. It's the "Device is not on
preferred path" issue that is common on the MD3x00 line. We had the
same errors with VMware, but VMware was able to automatically switch to
the active path.
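
(For anyone else running into the "preferred path" flapping on these
arrays: the usual approach is to let dm-multipath handle the RDAC
failover rather than chasing the active controller by hand. Below is a
rough sketch of the kind of multipath.conf devices stanza typically used
for the MD3xxx family; the vendor/product strings and the retry value
are illustrative, so check them against your array's documentation
before using anything like this.)

    devices {
        device {
            # Match the MD3xxx RDAC arrays; adjust the product string to your model
            vendor                  "DELL"
            product                 "MD3"
            # Group paths by controller priority and use the RDAC handler/checker
            path_grouping_policy    group_by_prio
            prio                    rdac
            path_checker            rdac
            hardware_handler        "1 rdac"
            # Fail back to the preferred controller as soon as it returns
            failback                immediate
            # Queue I/O briefly instead of erroring out during a controller switch
            no_path_retry           30
        }
    }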
On 2017-03-26 05:42 PM, Marcin Kruk wrote:
But on the Dell MD32x00 you have two controllers. The trick is
that you have to maintain a link to both controllers, so the best option
is to use multipath, as Yaniv said. Otherwise you get error
notifications from the array.
The problem is with the iSCSI target.
After a server reboot, VDSM tries to connect to the target which was
previously set, but it could be inactive.
So in that case you have to remember to edit the configuration in
vdsm.conf, because vdsm.conf does not accept a target with multiple IP addresses.
2017-03-26 9:40 GMT+02:00 Yaniv Kaul <ykaul@redhat.com>:
On Sat, Mar 25, 2017 at 9:20 AM, Charles Tassell <ctassell@gmail.com> wrote:
Hi Everyone,
I'm about to set up an oVirt cluster with two hosts hitting a
Linux storage server. Since the Linux box can provide the
storage in pretty much any form, I'm wondering which option is
"best." Our primary focus is on reliability, with performance
being a close second. Since we will only be using a single
storage server, I was thinking NFS would probably beat out
GlusterFS, and that NFSv4 would be a better choice than
NFSv3. I had assumed that iSCSI would be better
performance-wise, but from what I'm seeing online that might
not be the case.
NFS 4.2 is better than NFS 3 in the sense that you'll get DISCARD
support, which is nice.
Gluster probably requires 3 servers.
In most cases, I don't think people see a difference in
performance between NFS and iSCSI. The theory is that block
storage is faster, but in practice most don't reach the limits
where it really matters.
Our servers will be using a 1G network backbone for regular
traffic and a dedicated 10G backbone with LACP for redundancy
and extra bandwidth for storage traffic, if that makes a
difference.
LACP often (especially with NFS) does not provide extra
bandwidth, as the (single) NFS connection tends to stick to a
single physical link.
It's one of the reasons I personally prefer iSCSI with multipathing.
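To give a concrete idea (the interface names and portal address below
are only placeholders, and oVirt/VDSM normally drives the iSCSI logins
itself, so treat this as an OS-level sketch rather than a recipe): the
point of iSCSI multipathing is to bind one session to each physical NIC
and let dm-multipath spread and fail over I/O across them, e.g.

    # one iSCSI iface per storage NIC (names and NICs are examples)
    iscsiadm -m iface -I storage0 --op=new
    iscsiadm -m iface -I storage0 --op=update -n iface.net_ifacename -v eth2
    iscsiadm -m iface -I storage1 --op=new
    iscsiadm -m iface -I storage1 --op=update -n iface.net_ifacename -v eth3

    # discover and log in through both ifaces, then verify the paths
    iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260 -I storage0 -I storage1
    iscsiadm -m node -L all
    multipath -ll

That gives you two independent sessions per LUN instead of one TCP
connection pinned to a single link by the LACP hash.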
I'll probably try to do some performance benchmarks with 2-3
options, but the reliability issue is a little harder to test
for. Has anyone had any particularly bad experiences with a
particular storage option? We have been using iSCSI with a
Dell MD3x00 SAN and have run into a bunch of issues with the
multipath setup, but that won't be a problem with the new SAN
since it's only got a single controller interface.
A single controller is not very reliable. If reliability is your
primary concern, I suggest ensuring there is no single point of
failure - or at least that you are aware of all of them (does the
storage server have a redundant power supply? Connected to two power
sources? Of course, in some scenarios that's overkill and perhaps not
practical, but you should be aware of your weak spots).
I'd stick with what you are most comfortable managing - creating,
backing up, extending, verifying health, etc.
Y.
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users