Conceptually, MPIO means having two dedicated paths to your iSCSI storage. That is why you don't put iSCSI inside an LACP bond: the bond presents a single IP over what is effectively one path, so MPIO has nothing to work with.
The right approach is two subnets, completely segregated with no routing between them. You can use the same switch; you won't get redundancy at the switch level, but you will on the connections, and you get two paths with load balancing between them.
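As a rough sketch of what that looks like on the initiator side (interface names, subnets, and portal addresses below are invented for illustration, and this assumes the target answers on one IP per subnet), you bind one open-iscsi iface record to each NIC:

    # One iface record per physical NIC (eth0/eth1 are example names)
    iscsiadm -m iface -I iface-a --op=new
    iscsiadm -m iface -I iface-a --op=update -n iface.net_ifacename -v eth0
    iscsiadm -m iface -I iface-b --op=new
    iscsiadm -m iface -I iface-b --op=update -n iface.net_ifacename -v eth1

    # Discover and log in through each subnet's portal separately
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10 -I iface-a
    iscsiadm -m discovery -t sendtargets -p 192.168.2.10 -I iface-b
    iscsiadm -m node --login

That gives the initiator two independent sessions, one per subnet, which dm-multipath can then balance across.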
But to be honest, I never understood how oVirt handles MPIO. The iSCSI Multipath button in the interface requires that every target be reachable on every path, which makes no sense to me. In the past I opened a ticket about this but simply gave up, and ended up using XenServer for this specific case, which is exactly what I had been trying to avoid.
On 17 Jan 2019, at 22:14, John Florian <jflorian(a)doubledog.org>
wrote:
I have an existing 4.2 setup with 2 hosts, both with a quad-gbit NIC, and a QNAP TS-569 Pro NAS with twin gbit NICs and five 7k2 drives. At present I have 5 VLANs, each with its own subnet:
my "main" net (VLAN 1, 172.16.7.0/24)
ovirtmgmt (VLAN 100, 192.168.100.0/24)
four storage nets (VLANs 101-104, 192.168.101.0/24 - 192.168.104.0/24)
On the NAS, I enslaved both NICs into an 802.3ad LAG and then bound an IP address for each of the four storage nets (an iproute2 sketch of the same follows the list), giving me:
bond0.101@bond0: 192.168.101.101
bond0.102@bond0: 192.168.102.102
bond0.103@bond0: 192.168.103.103
bond0.104@bond0: 192.168.104.104
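For reference, in iproute2 terms that amounts to roughly the following (NIC names are assumptions on my part; the QNAP UI does the equivalent behind the scenes):

    # 802.3ad LAG from the two NAS NICs (eth0/eth1 are assumed names)
    ip link add bond0 type bond mode 802.3ad
    ip link set eth0 down && ip link set eth0 master bond0
    ip link set eth1 down && ip link set eth1 master bond0
    ip link set bond0 up

    # One VLAN sub-interface per storage net, each with its own address
    ip link add link bond0 name bond0.101 type vlan id 101
    ip addr add 192.168.101.101/24 dev bond0.101
    ip link set bond0.101 up
    # ...and likewise for VLANs 102-104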
The hosts are similar, but with all four NICs enslaved into an 802.3ad LAG:
Host 1:
bond0.101@bond0: 192.168.101.203
bond0.102@bond0: 192.168.102.203
bond0.103@bond0: 192.168.103.203
bond0.104@bond0: 192.168.104.203
Host 2:
bond0.101@bond0: 192.168.101.204
bond0.102@bond0: 192.168.102.204
bond0.103@bond0: 192.168.103.204
bond0.104@bond0: 192.168.104.204
I believe my performance could be better, though. While running bonnie++ on a VM, the NAS reports top disk throughput around 70 MB/s and the network (both NICs combined) topping out around 90 MB/s. I suspect I'm being hurt by the load balancing across the NICs. I've played with various load-balancing options for the LAGs (src-dst-ip and src-dst-mac) but with little difference in effect; the knobs involved are sketched just after the bonnie report below. Watching the resource monitor on the NAS, I can see that one NIC almost exclusively handles transmits while the other almost exclusively handles receives. Here's the bonnie report:
    Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    unamed           4G   267  97 75284  21 22775   8   718  97 43559   7 189.5   8
    Latency             69048us     754ms     898ms   61246us     311ms    1126ms
                        ------Sequential Create------ --------Random Create--------
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16  6789  60 +++++ +++ 24948  75 14792  86 +++++ +++ 18163  51
    Latency             33937us    1132us    1299us     528us      22us     458us
I keep seeing MPIO mentioned for iSCSI deployments, and now I'm trying to get my head around how best to set it up, or even whether it would help here. I only have one switch (a Catalyst 3750G) in this small setup, so fault tolerance at that level isn't a goal.
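From what I've gathered so far, the generic Linux pieces underneath MPIO are per-NIC iscsiadm iface bindings plus dm-multipath tying the resulting sessions together; a minimal /etc/multipath.conf fragment (illustrative values only, nothing QNAP- or oVirt-specific) seems to look like:

    # /etc/multipath.conf -- illustrative fragment only
    defaults {
        user_friendly_names yes
        path_grouping_policy multibus         # spread I/O over all paths
        path_selector        "round-robin 0"  # alternate requests per path
    }

    # After logging in on each path, 'multipath -ll' should show a single
    # device with one path per iSCSI session.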
So... what would the recommendation be? I've never done MPIO before, but at least I know where it lives in the web UI.
--
John Florian