So just to make sure I follow: the recommendation is to keep iSCSI off the LACP bond entirely and instead use two completely segregated, non-routed subnets, one per dedicated NIC, so that multipathing can balance across the two paths?
As for how to get there, whatever exactly that might look like, I'm also having trouble figuring that out.  I figured I would transform the setup described below into one where each host keeps two NICs in the LACP bond for ovirtmgmt and my main net, with the other two NICs pulled out of the bond and dedicated to iSCSI, each on its own non-routed storage subnet.
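On the iSCSI side, I'm picturing the end state looking roughly like this on each host (interface names, portal addresses and iface names below are only illustrative, and I realize the engine/VDSM would normally drive these sessions rather than me running them by hand):

    # one iSCSI interface per dedicated NIC (names are placeholders)
    iscsiadm -m iface -o new -I storage-a
    iscsiadm -m iface -o update -I storage-a -n iface.net_ifacename -v ens1f0
    iscsiadm -m iface -o new -I storage-b
    iscsiadm -m iface -o update -I storage-b -n iface.net_ifacename -v ens1f1

    # discover and log in through each portal on its own subnet
    iscsiadm -m discovery -t sendtargets -p 192.168.101.101:3260 -I storage-a
    iscsiadm -m discovery -t sendtargets -p 192.168.102.102:3260 -I storage-b
    iscsiadm -m node --loginall=all

    # dm-multipath should then show two active paths per LUN
    multipath -ll

If that's roughly right, then I presume the iSCSI Multipath bond in the UI is what ties the two logical networks to the storage domain.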
To get there, though, I need to remove the 4 existing logical storage nets from my hosts, pull 2 NICs out of the existing bond, and so on.  But when I've attempted that, things get into a funky state where the hosts become non-operational because the old storage nets are "required".  I unchecked that setting, thinking that was the right path, but I could never get much further toward the new setup because the existing storage domain has all the old connections and I see no way to "forget" them, at least through the engine -- I didn't try to fight it behind its back with iscsiadm to do session logouts.  Somewhere in all this mess I got into a Catch-22 where I couldn't do anything with the old SD because no host was suitable, and no host could be made suitable because the SD couldn't be connected.  I tried all sorts of things of varying levels of scariness, but wound up putting everything back the way it was for now, since I clearly need some further advice.
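For what it's worth, the behind-its-back route I chickened out of would look roughly like this on each host (the target IQN and portal are placeholders, and I assume the host would need to be in maintenance first so VDSM isn't fighting me):

    # see which portals the host is currently logged in to
    iscsiadm -m session

    # log out of a stale portal/target pair (placeholders; repeat per old storage net)
    iscsiadm -m node -T iqn.2004-04.com.qnap:ts-569.storage -p 192.168.103.103:3260 --logout

    # remove the node record so it doesn't get re-logged-in later
    iscsiadm -m node -T iqn.2004-04.com.qnap:ts-569.storage -p 192.168.103.103:3260 -o delete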

One option that struck me as a possibility, but exceeded my risk aversion threshold, was to remove the storage domain entirely and create a new one pointing to the same LUNs.  Is that what I need to do to forget the old connections?  Is that safe for all my existing logical disks, etc.?  Does the engine just see a group of LUNs with oVirt "things" on them and magically reconstruct it all from what's there?  I'm guessing that's the case, because I have recreated an engine before and know that all the critical bits live in the SD, but I just want to be sure I don't commit to something really boneheaded.

On 1/17/19 7:43 PM, Vinícius Ferrão wrote:
MPIO, by concept, is when you have two dedicated paths for iSCSI.

So you don’t put iSCSI inside LACP, because it won’t do the MPIO magic; it’s still a single path with a single IP.

The right approach is two subnets, completely segregated, without routing.  You can use the same switch; it won’t be redundant at the switch level, but it will be at the connection level, and you get two paths with load balancing between them.

But to be honest, I never understood how oVirt handles MPIO.  The iSCSI Multipath button in the interface requires that all portals, on the different paths, be reachable, which doesn’t make sense to my understanding.  In the past I opened a ticket about this but simply gave up, and ended up using XenServer for this specific case, which I was trying to avoid.


On 17 Jan 2019, at 22:14, John Florian <jflorian@doubledog.org> wrote:

I have an existing 4.2 setup with 2 hosts, each with a quad-port gigabit NIC, and a QNAP TS-569 Pro NAS with twin gigabit NICs and five 7k2 drives.  At present I have the following VLANs, each with its own subnet:

  1. my "main" net (VLAN 1, 172.16.7.0/24)
  2. ovirtmgmt (VLAN 100, 192.168.100.0/24)
  3. four storage nets (VLANs 101-104, 192.168.101.0/24 - 192.168.104.0/24)

On the NAS, I enslaved both NICs into an 802.3ad LAG and then bound an IP address for each of the four storage nets, giving me the interfaces below (see the note after the list):

  • bond0.101@bond0: 192.168.101.101
  • bond0.102@bond0: 192.168.102.102
  • bond0.103@bond0: 192.168.103.103
  • bond0.104@bond0: 192.168.104.104
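(Those names come straight from ip link; in plain-Linux terms each entry is just a VLAN sub-interface on top of the bond, roughly equivalent to the following, though the QNAP UI is what actually sets it up:)

    # VLAN 101 sub-interface on the LAG, carrying one storage subnet
    ip link add link bond0 name bond0.101 type vlan id 101
    ip addr add 192.168.101.101/24 dev bond0.101
    ip link set bond0.101 up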

The hosts are similar, but with all four NICs enslaved into an 802.3ad LAG:

Host 1:

  • bond0.101@bond0: 192.168.101.203
  • bond0.102@bond0: 192.168.102.203
  • bond0.103@bond0: 192.168.103.203
  • bond0.104@bond0: 192.168.104.203

Host 2:

  • bond0.101@bond0: 192.168.101.204
  • bond0.102@bond0: 192.168.102.204
  • bond0.103@bond0: 192.168.103.204
  • bond0.104@bond0: 192.168.104.204

I believe my performance could be better, though.  While running bonnie++ in a VM, the NAS reports top disk throughput around 70 MB/s and the network (both NICs) topping out around 90 MB/s.  I suspect I'm being hurt by the load balancing across the NICs.  I've played with various load-balancing options for the LAGs (src-dst-ip and src-dst-mac) but with little difference in effect.  Watching the resource monitor on the NAS, I can see that one NIC does almost exclusively transmits while the other does almost exclusively receives.  Here's the bonnie++ report, with a note after it on the bond hashing I've been poking at:

Bonnie++ Benchmark results:

Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
unamed           4G   267  97 75284  21 22775   8   718  97 43559   7 189.5   8
Latency             69048us    754ms     898ms   61246us    311ms    1126ms
Version  1.97       ------Sequential Create------ --------Random Create--------
unamed              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  6789  60 +++++ +++ 24948  75 14792  86 +++++ +++ 18163  51
Latency             33937us    1132us    1299us     528us      22us     458us
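For reference, this is roughly how I've been checking and changing the hash policy on a host's bond (a sketch only; bond0 is my bond, layer3+4 is just one candidate policy, and it only affects what the host transmits -- the switch and the NAS hash their own directions).  Even with layer3+4, any single TCP connection still rides one link, which I gather is exactly why MPIO keeps coming up:

    # show the LACP transmit hash policy currently in effect on the bond
    grep "Transmit Hash Policy" /proc/net/bonding/bond0

    # hash on IP+port so multiple flows can spread across the slaves
    # (persist it via the bond's BONDING_OPTS rather than sysfs alone)
    echo layer3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy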


I keep seeing MPIO mentioned for iSCSI deployments, and now I'm trying to get my head around how best to set that up, or even whether it would help here.  I only have one switch (a Catalyst 3750G) in this small setup, so fault tolerance at that level isn't a goal.

So... what would the recommendation be?  I've never done MPIO before, but at least I know where it lives in the web UI.



-- 
John Florian