need network design advice for iSCSI

I have an existing 4.2 setup with 2 hosts, both with a quad-gbit NIC, and a QNAP TS-569 Pro NAS with twin gbit NICs and five 7k2 drives. At present I have these VLANs, each with its own subnet:

1. my "main" net (VLAN 1, 172.16.7.0/24)
2. ovirtmgmt (VLAN 100, 192.168.100.0/24)
3. four storage nets (VLANs 101-104, 192.168.101.0/24 - 192.168.104.0/24)

On the NAS, I enslaved both NICs into an 802.3ad LAG and then bound an IP address for each of the four storage nets, giving me:

* bond0.101@bond0: 192.168.101.101
* bond0.102@bond0: 192.168.102.102
* bond0.103@bond0: 192.168.103.103
* bond0.104@bond0: 192.168.104.104

The hosts are similar, but with all four NICs enslaved into an 802.3ad LAG:

Host 1:

* bond0.101@bond0: 192.168.101.203
* bond0.102@bond0: 192.168.102.203
* bond0.103@bond0: 192.168.103.203
* bond0.104@bond0: 192.168.104.203

Host 2:

* bond0.101@bond0: 192.168.101.204
* bond0.102@bond0: 192.168.102.204
* bond0.103@bond0: 192.168.103.204
* bond0.104@bond0: 192.168.104.204

I believe my performance could be better, though. While running bonnie++ on a VM, the NAS reports top disk throughput around 70MB/s and the network (both NICs) topping out around 90MB/s. I suspect I'm being hurt by the load balancing across the NICs. I've played with various load-balancing options for the LAGs (src-dst-ip and src-dst-mac) but with little difference in effect. Watching the resource monitor on the NAS, I can see that one NIC almost exclusively handles transmits while the other almost exclusively handles receives. Here's the bonnie report (my apologies to those reading plain text here):

Bonnie++ benchmark results (version 1.97, machine "unamed", 4G test size):

    Sequential Output:  Per Char    267 K/sec (97% CPU), latency 69048us
                        Block     75284 K/sec (21% CPU), latency 754ms
                        Rewrite   22775 K/sec ( 8% CPU), latency 898ms
    Sequential Input:   Per Char    718 K/sec (97% CPU), latency 61246us
                        Block     43559 K/sec ( 7% CPU), latency 311ms
    Random Seeks:       189.5 /sec ( 8% CPU), latency 1126ms

    Sequential Create (16 files): Create  6789/sec (60% CPU, lat 33937us), Read +++++ (lat 1132us), Delete 24948/sec (75% CPU, lat 1299us)
    Random Create (16 files):     Create 14792/sec (86% CPU, lat 528us),   Read +++++ (lat 22us),   Delete 18163/sec (51% CPU, lat 458us)

I keep seeing MPIO mentioned for iSCSI deployments and now I'm trying to get my head around how best to set that up, or even to know whether it would be helpful. I only have one switch (a Catalyst 3750g) in this small setup, so fault tolerance at that level isn't a goal.

So... what would the recommendation be? I've never done MPIO before but know where it's at in the web UI at least.

-- John Florian
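A side note on why the LAG tops out around a single NIC's worth of bandwidth: with 802.3ad, each flow (for example one iSCSI TCP connection) is hashed onto exactly one slave, so a single session can never exceed 1 Gbit/s no matter how many links are in the bond, and each end chooses its transmit hash independently, which also explains one NIC doing mostly TX while the other does mostly RX. A quick diagnostic sketch follows (bond0 and the slave names eth0-eth3 are assumptions; adjust to the real interfaces):

    # Which transmit hash policy is the bond using?
    cat /sys/class/net/bond0/bonding/xmit_hash_policy
    grep -i "hash policy" /proc/net/bonding/bond0

    # Per-slave byte counters; with a single iSCSI session, expect one
    # slave to carry nearly all of the traffic:
    for s in eth0 eth1 eth2 eth3; do
        printf '%s rx=%s tx=%s\n' "$s" \
            "$(cat /sys/class/net/$s/statistics/rx_bytes)" \
            "$(cat /sys/class/net/$s/statistics/tx_bytes)"
    done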

MPIO, conceptually, is when you have two dedicated paths for iSCSI. So you don't put iSCSI inside LACP, because it won't do the MPIO magic: it's still a single path with a single IP.

The right approach is two subnets, completely segregated, with no routing between them. You can use the same switch; it won't be redundant at the switch level, but it will be at the connection level, and you get two paths for multipath to load-balance across.

But to be honest, I never understood how oVirt handles MPIO. The iSCSI Multipath button in the interface requires that all portals, on different paths, be reachable, which doesn't make sense to me. In the past I opened a ticket about this but I simply gave up, and ended up using XenServer for this specific case, which I was trying to avoid.

Sent from my iPhone
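For illustration, here is roughly what "two dedicated paths" looks like on the initiator side when set up by hand. This is only a sketch: the iface names and NIC names are placeholders, the portal IPs are taken from the addressing in this thread, and in an oVirt cluster VDSM normally drives these logins itself when the storage domain is attached.

    # Bind one iSCSI interface to each dedicated storage NIC
    # (storage1/storage2 and eth2/eth3 are placeholder names):
    iscsiadm -m iface -I storage1 --op=new
    iscsiadm -m iface -I storage1 --op=update -n iface.net_ifacename -v eth2
    iscsiadm -m iface -I storage2 --op=new
    iscsiadm -m iface -I storage2 --op=update -n iface.net_ifacename -v eth3

    # Discover the target through each portal and log in over both
    # ifaces, which yields two sessions for dm-multipath to combine:
    iscsiadm -m discovery -t sendtargets -p 192.168.101.101 -I storage1
    iscsiadm -m discovery -t sendtargets -p 192.168.102.102 -I storage2
    iscsiadm -m node -L all

The point is that each session rides its own NIC and its own subnet end to end; the aggregation happens in dm-multipath on top of the two block-device paths, not in the network layer.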

So just to make sure I follow:

* I will want a distinct VLAN and IP address for each NIC acting as an iSCSI initiator.
* In the middle, the switch would be configured with basic access ports, without any LACP.
* Do I want the same for the target? The QNAP docs say that for MPIO I would want to use their port trunking feature and a single IP for both NICs on that end, which confuses me, as it seems to contradict the idea of two (or more) completely independent channels.

As for how to get there, whatever exactly that might look like, I'm also having trouble figuring that out. I figured I would transform the setup described in my original message into one where each host has:

* 2 NICs bonded with LACP for my ovirtmgmt and "main" net
* 1 NIC for my 1st storage net
* 1 NIC for my 2nd storage net

To get there, though, I need to remove the 4 existing logical storage nets from my hosts, pull 2 NICs out of the existing bond, and so on. But when I've attempted that, I get things into a funky state where the hosts become non-operational because the old storage nets are "required". I unchecked that setting, thinking that to be the right path, but I could never get much further towards the new setup because the existing storage domain has all the old connections and I see no way to "forget" them, at least through the engine -- I didn't try to fight it behind its back with iscsiadm to do session logouts. Somewhere in all this mess I got into a Catch-22 where I couldn't do anything with the old SD because no host was suitable, and no host could be made suitable because the SD couldn't be connected. I tried all sorts of things of varying levels of scariness but wound up putting things back the way they were for now, since I clearly need some further advice.

One option that struck me as a possibility, but exceeded my risk-aversion threshold, was to remove the storage domain entirely and create a new one pointing to the same LUNs. Is that what I need to do to forget the old connections? Is that safe for all my existing logical disks, etc.? Does the engine just see a group of LUNs with oVirt "things" and magically reconstruct it all from what's there? I'm guessing that's the case because I have recreated an engine before and know that all the critical bits live in the SD, but I just want to be sure I don't commit to something really boneheaded.
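For the record, the "behind its back" session cleanup mentioned above would be done with iscsiadm on each host. A hypothetical sketch only (the target IQN and portal below are placeholders, and VDSM expects to own these sessions, so manual logouts are at your own risk and best done with the host in maintenance):

    # Show the sessions the host currently has open:
    iscsiadm -m session

    # Log out of a stale session and remove its node record
    # (placeholder IQN/portal):
    iscsiadm -m node -T iqn.2004-04.com.qnap:ts-569pro.example -p 192.168.103.103:3260 -u
    iscsiadm -m node -T iqn.2004-04.com.qnap:ts-569pro.example -p 192.168.103.103:3260 -o delete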
-- John Florian

Hi John,

On 20.01.19 at 18:32, John Florian wrote:
As for how to get there, whatever exactly that might look like, I'm also having troubles figuring that out. I figured I would transform the setup described below into one where each host has:
* 2 NICs bonded with LACP for my ovirtmgmt and "main" net
* 1 NIC for my 1st storage net
* 1 NIC for my 2nd storage net
This is exactly the setup I use. I have run this successfully with CentOS/LIO and FreeNAS iSCSI targets with good performance.

In short:

- 2 separate, isolated networks for iSCSI, with dedicated adapters on hosts and iSCSI target
- jumbo frames enabled
- no VLAN config needed on hosts, untagged VLANs on switch
- do _not_ use LACP; let multipathd handle failovers

Same experience as Vinícius: what did _not_ work for me is the iSCSI bonding in oVirt. It seems to require that all storage IPs be reachable from all other IPs, which is not the case in every setup.

To get multipathing to work I use multipath directly:
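The multipath configuration that followed here did not survive in the archive. As a rough idea of what "using multipath directly" can look like, a minimal /etc/multipath.conf fragment is sketched below; the vendor/product strings are placeholders for whatever the target actually reports (check with multipath -ll), and since VDSM generates and manages /etc/multipath.conf on oVirt hosts, local overrides generally need to be protected (e.g. with the "# VDSM PRIVATE" marker) or they will be overwritten:

    # Sketch only -- vendor/product are placeholders:
    devices {
        device {
            vendor                "QNAP"
            product               "iSCSI Storage"
            path_grouping_policy  multibus        # use both paths at once
            path_selector         "round-robin 0"
            failback              immediate
            no_path_retry         16
        }
    }

With path_grouping_policy set to multibus, both sessions carry I/O simultaneously, which is what can push throughput past a single NIC; a plain failover policy would only use the second path after the first one fails.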
I will post a bonnie++ result later. If you need more details please let me know.

cu,
Uwe

Shani, can you help here with iSCSI bonding? Thanks.

Hi,

I'm not familiar with network construction, so I guess I'm not the right address for that :)

Regards,
Shani Leviim

Hello people, in the past Maor Lipchuk (from RH) tried very hard to help me and Uwe, but we were unable to converge on a solution.

This was discussed a year ago and, to my understanding, it is still an oVirt bug. As of today, if you simply "DuckDuckGo" for "ovirt iscsi multipath not working", the third link points to this bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1474904

It is the one I mentioned, and it's extremely similar to John Florian's case, which was my case too.

@John, take a look at the bugzilla link and see if the desired topology matches your case.

Regards,

Okay, both the BZ and ML posts are interesting and helpful. I'm kind of surprised there seems to be so much trouble and confusion for what I would have thought to be a very common setup. Are most people using something else?

I think this gives me what I need for my next stab at doing this, but I'm still puzzled on how to tear down what I have in oVirt so that I can redo it. Specifically, I didn't see how to delete the existing iSCSI connections. I've read that this can only be done through the REST API. I have managed to redo the interfaces on my hosts so that everything is now on just 2 NICs each, leaving 2 NICs free for a foothold on a new setup. From all of my experimentation, it would appear that my only option is to create a new storage domain and export/import each disk volume one by one. Maybe there's a migration option I have yet to see, but I don't see any way around creating a new storage domain here.
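For anyone who lands here later: the REST API route referred to above would presumably go through the engine's storageconnections collection, roughly as sketched below (untested in this thread; the engine FQDN, password and connection ID are placeholders, and a connection can only be deleted once no storage domain references it any more):

    # List the storage connections known to the engine:
    curl -k -u admin@internal:PASSWORD \
         -H 'Accept: application/xml' \
         https://engine.example.com/ovirt-engine/api/storageconnections

    # Delete a stale connection by its ID (placeholder):
    curl -k -u admin@internal:PASSWORD -X DELETE \
         https://engine.example.com/ovirt-engine/api/storageconnections/<connection-id>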

Hello, another guy with what appears to be the same problem: https://bugzilla.redhat.com/show_bug.cgi?id=1588741

PS: Uwe, I'm CCing you.

Sent from my iPhone

Hi,

On Monday, 21.01.2019, 06:43 +0100, Uwe Laverenz wrote:
I will post a bonnie++ result later. If you need more details please let me know.
Attached are the results of the smallest setup (my home lab): the storage server is an HP N40L with 16GB RAM, 4x2TB WD RE as RAID10, running CentOS 7 with LIO as the iSCSI target, with 2 Gigabit networks (jumbo frames: MTU 9000).

cu,
Uwe
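Since jumbo frames come up both here and earlier in the thread, a generic way to set and verify an end-to-end 9000-byte MTU on a dedicated storage NIC is sketched below (the interface name and target IP are placeholders; on oVirt hosts the MTU is normally set on the logical network in the engine rather than by hand):

    # Set the MTU on the storage NIC (placeholder name):
    ip link set dev eth2 mtu 9000

    # Verify that full-size frames reach the target unfragmented;
    # 8972 = 9000 - 20 (IP header) - 8 (ICMP header):
    ping -M do -s 8972 -c 3 192.168.101.101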
participants (5)
- Eitan Raviv
- John Florian
- Shani Leviim
- Uwe Laverenz
- Vinícius Ferrão