Multipath handling in oVirt

Hello,

I'm starting over on this subject because I wanted to clarify what the oVirt way of managing multipathing is.

(Here I will talk only about the data/iSCSI/SAN/LUN/you-name-it networks.)

According to what I see in the host network setup, one can assign *ONE* data network to an interface or to a group of interfaces. That implies that if my host has two data-dedicated interfaces, I can:
- either group them using bonding (and oVirt is handy for that in the host network setup), then assign the data logical network to this bond,
- or give each NIC a different IP in a different VLAN, then define two different data networks and assign one to each NIC. I have never tried this second setup and don't know where it leads.

First, could the oVirt storage experts comment on the above and confirm it is correct?

Then, like many users here, our hardware is the following:
- Hosts: Dell PowerEdge, mostly blades (M610, M620, M630), or rack servers
- SANs: Equallogic PS4xxx and PS6xxx

Equallogic's recommendation is that bonding is evil for iSCSI access; to them, multipath is the only true way. After reading tons of docs and talking to Dell support, everything tells me to use at least two different NICs with different IPs, not bonded (using the same subnet for both is discouraged but workable).

How can oVirt handle that?

-- Nicolas ECARNOT
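To make the second option more concrete, here is a rough sketch of how the two interfaces could end up configured on a host: one non-bonded NIC per VLAN, each with its own IP in its own subnet. All NIC names, VLAN IDs and addresses below are hypothetical, and under oVirt you would not write these files by hand anyway; VDSM generates the equivalent configuration once the two logical networks are assigned in Setup Host Networks. The sketch is only meant to visualize the end state.

    # /etc/sysconfig/network-scripts/ifcfg-em1.20 -- first iSCSI path (hypothetical NIC/VLAN/addresses)
    DEVICE=em1.20
    VLAN=yes
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=10.10.20.11
    PREFIX=24
    MTU=9000        # only if every hop on the iSCSI path (switches, SAN) uses jumbo frames
    DEFROUTE=no     # the default route stays on ovirtmgmt, not on the storage VLANs

    # /etc/sysconfig/network-scripts/ifcfg-em2.30 -- second iSCSI path, separate VLAN and subnet
    DEVICE=em2.30
    VLAN=yes
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=10.10.30.11
    PREFIX=24
    MTU=9000
    DEFROUTE=no

Each logical network then carries its own iSCSI sessions, and dm-multipath on the host combines the resulting paths to the same LUNs.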

Nicolas,

Take a look at
http://www.ovirt.org/documentation/admin-guide/chap-Storage/#configuring-iscsi-multipathing

The recommended way is to use different VLANs. Equallogic has to be connected to the different VLANs as well.
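One hedged sanity check before wiring this into the Data Center's iSCSI multipathing configuration: verify from a host that each storage VLAN really reaches a target portal. The interface names and portal addresses below are examples only and follow the hypothetical layout sketched earlier.

    # bind each ping to one iSCSI interface so the traffic actually travels over that VLAN
    ping -c 3 -I em1.20 10.10.20.100
    ping -c 3 -I em2.30 10.10.30.100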

Hi,

As for me personally, I have such a config: compute nodes with 4x1G NICs, storage nodes with 2x1G NICs, and 2 switches (not stackable). All servers run CentOS 7.x (7.3 at this moment).

On the compute nodes I have a bond of nic1 and nic2 (attached to different switches) for the mgmt and VM networks, and the other two NICs, nic3 and nic4, without bonding (also attached to different switches). On the storage nodes I have no bonding, and nic1 and nic2 are connected to different switches.

I have two networks for iSCSI: 10.0.2.0/24 and 10.0.3.0/24. nic1 of the storage and nic3 of the computes are connected to one network; nic2 of the storage and nic4 of the computes to the other one.

In the web UI I've created the networks iSCSI1 and iSCSI2 for nic3 and nic4, and also created the iSCSI multipathing. To have active/active links with double the bandwidth, I've added 'path_grouping_policy "multibus"' to the defaults section of /etc/multipath.conf.

After all of that, I get 200+ MB/sec throughput to the storage (like RAID0 with 2 SATA HDDs) and I can lose one NIC/link/switch without stopping VMs.

[root@compute02 ~]# multipath -ll
360014052f28c9a600000000000000000 dm-6 LIO-ORG ,ClusterLunHDD
size=902G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 6:0:0:0 sdc 8:32 active ready running
  `- 8:0:0:0 sdf 8:80 active ready running
36001405551a9610d09b4ff9aa836b906 dm-40 LIO-ORG ,SSD_DOMAIN
size=915G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:0 sde 8:64 active ready running
  `- 9:0:0:0 sdh 8:112 active ready running
360014055eb8d30a91044649bda9ee620 dm-5 LIO-ORG ,ClusterLunSSD
size=135G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 6:0:0:1 sdd 8:48 active ready running
  `- 8:0:0:1 sdg 8:96 active ready running

[root@compute02 ~]# iscsiadm -m session
tcp: [1] 10.0.3.200:3260,1 iqn.2015-09.lab.lnx-san:storage (non-flash)
tcp: [2] 10.0.3.203:3260,1 iqn.2016-10.local.ntu:storage3 (non-flash)
tcp: [3] 10.0.3.200:3260,1 iqn.2015-09.lab.lnx-san:storage (non-flash)
tcp: [4] 10.0.3.203:3260,1 iqn.2016-10.local.ntu:storage3 (non-flash)

[root@compute02 ~]# ip route show | head -4
default via 10.0.1.1 dev ovirtmgmt
10.0.1.0/24 dev ovirtmgmt proto kernel scope link src 10.0.1.102
10.0.2.0/24 dev enp5s0.2 proto kernel scope link src 10.0.2.102
10.0.3.0/24 dev enp2s0.3 proto kernel scope link src 10.0.3.102

[root@compute02 ~]# brctl show ovirtmgmt
bridge name     bridge id               STP enabled     interfaces
ovirtmgmt       8000.000475b4f262       no              bond0.1001

[root@compute02 ~]# cat /proc/net/bonding/bond0 | grep "Bonding\|Slave Interface"
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: fault-tolerance (active-backup)
Slave Interface: enp4s6
Slave Interface: enp6s0
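For context, a minimal sketch of what that multipath.conf change might look like. The surrounding contents of the file are whatever VDSM generated on the host, and VDSM tends to regenerate /etc/multipath.conf unless it carries its private marker (a "# VDSM PRIVATE" line near the top in recent releases, "# RHEV PRIVATE" in older ones), so double-check how your version handles local edits before touching it.

    # /etc/multipath.conf -- sketch only; merge into the VDSM-generated file rather than replacing it
    # VDSM PRIVATE   (marker telling VDSM to keep local edits; exact convention depends on the version)

    defaults {
        # keep the options VDSM already put here and add/override only this one:
        path_grouping_policy    "multibus"   # all paths in one group => active/active over both iSCSI VLANs
    }

Whether multibus is the right policy is array-specific; Dell/Equallogic publish their own recommended multipath settings, so treat the above strictly as a transcription of Yura's change, not as a general recommendation.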

Hello,

Before replying further, may I ask you, Yura, to post a screenshot of your iSCSI multipathing setup in the web GUI?

And also the same for the network setup of a host?

Thank you.

-- Nicolas ECARNOT

Here you are:

iSCSI multipathing <https://dl.dropboxusercontent.com/u/106774860/iSCSI_multipathing.png>

network setup of a host <https://dl.dropboxusercontent.com/u/106774860/host_network.png>

Thank you Yura.

To Yaniv and Pavel: yes, this indeed points to the oVirt iSCSI multipathing feature.

I would be curious to see (on Yura's hosts, for instance) how the oVirt iSCSI multipathing translates into CLI terms (multipath -ll, iscsiadm -m session -P3, dmsetup table, ...).

Yura's setup seems perfectly fitted to oVirt (2 NICs, 2 VLANs, 2 targets in different VLANs, iSCSI multipathing), but I'm trying to see how I could make this work with our Equallogic, which presents one and only one virtual IP (thus one target VLAN)...

-- Nicolas ECARNOT
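For anyone who wants to make the same comparison on their own hosts, the commands mentioned above only read state, so they are safe to run on a production hypervisor. The grep filter is just a convenience to pick out the interesting lines; the exact field names may vary slightly between iscsi-initiator-utils versions, and the session/device names in the output are of course specific to each setup.

    # which iSCSI sessions exist, and through which local interface/IP each one runs
    iscsiadm -m session -P3 | grep -E "Target:|Current Portal|Iface Name|Iface IPaddress|Attached scsi disk"

    # how device-mapper grouped those paths per LUN
    multipath -ll

    # the raw device-mapper tables behind the multipath devices
    dmsetup table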
participants (3)
- Nicolas Ecarnot
- Pavel Gashev
- Yura Poltoratskiy