On Tue, Mar 28, 2017 at 6:36 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:


Why not build the VLAN configuration using oVirt?

iSCSI to a DELL PS Series Storage array... no bonding in their minds.... ;-(
 
You can try to
ifdown eth3.100 and eth4.100,
configure ovirtmgmt on top of the bond via oVirt,
and ifup eth3.100 and eth4.100 again.

But that is ugly and you have to test whether it works.
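In other words, the suggestion amounts to something like this (untested):

ifdown eth3.100
ifdown eth4.100
# move ovirtmgmt on top of the bond from the "Setup Host Networks" dialog
ifup eth3.100
ifup eth4.100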



I will see as soon as I try to add an oVirt network using p1p1 and p1p2.




So in the end there is no way to configure this, neither from the web GUI nor by manually setting some parameter behind the oVirt curtains.
Furthermore I discovered that for iSCSI oVirt creates its own interface configuration files (understandable), even if they are already in place (in the usual location, /var/lib/iscsi/ifaces).
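For reference, the iface files oVirt writes there look more or less like this (a sketch; names taken from my host):

# /var/lib/iscsi/ifaces/p1p1.100 (sketch)
iface.iscsi_ifacename = p1p1.100
iface.net_ifacename = p1p1.100
iface.transport_name = tcp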

I abandoned the idea of having both the bonds and the iSCSI setup on the same pair of interfaces.
In the meantime I found a way to get a "correct" (I hope) configuration for multipath iSCSI with a Dell PS array, all through the oVirt GUI.
It possibly relies on a sort of bug, but please don't fix it if that is the case.... ;-)
I would like to share it and get comments on it.

This is also a sort of follow up to this February thread and other ones:
http://lists.ovirt.org/pipermail/users/2017-February/079349.html

The shared files posted by Yura are not available any more, so I don't know if my configuration matches his, as detailed here:
http://lists.ovirt.org/pipermail/users/2017-February/079355.html

In the end I have this config:

- Host network interfaces:
https://drive.google.com/file/d/0BwoPbcrMv8mvV1QxbnBaakstME0/view?usp=sharing

- Setup Host Networks page
https://drive.google.com/file/d/0BwoPbcrMv8mvWUpHTlBwWWJxeVE/view?usp=sharing

- iSCSI Multipathing under datacenter pane
https://drive.google.com/file/d/0BwoPbcrMv8mvQjgyR2lfcmFiQUU/view?usp=sharing

- iscsi1 network
https://drive.google.com/file/d/0BwoPbcrMv8mvYzlubnpWdFJIRlk/view?usp=sharing

- iscsi2 network
https://drive.google.com/file/d/0BwoPbcrMv8mvQjdOSmxnSTFaSUU/view?usp=sharing

It seems oVirt doesn't check the "network subnet" the adapters are on, but only the VLAN ID.

The requirements are:
- your iSCSI network has to be on a VLAN (a typical configuration, I think): say it is 100
- you configure the switch port of one network card (say p1p1) so that it carries VLAN 100 tagged;
  on the oVirt host OS you will have an iscsi1 network, with the p1p1.100 interface configured by oVirt for the iSCSI connection
- you configure the switch port of the other network card (say p1p2) so that its native VLAN is 100;
  since the packets are not tagged, they will actually transit on VLAN 100;
  on the oVirt host OS you will have an iscsi2 network (not tagged), with the p1p2 interface configured by oVirt for the iSCSI connection
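On the host this ends up as something like the following ifcfg files (a sketch: the addresses are the ones assigned in my case, and the /24 netmask is an assumption):

# /etc/sysconfig/network-scripts/ifcfg-p1p1.100 (sketch)
DEVICE=p1p1.100
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.10.100.87
NETMASK=255.255.255.0
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-p1p2 (sketch)
DEVICE=p1p2
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.10.100.88
NETMASK=255.255.255.0
NM_CONTROLLED=no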

With this workaround you initially create the iSCSI storage domain, and it will do the discovery with only one path (for example the tagged iscsi1 p1p1.100 path) to the unique IP portal (10.10.100.9 in my case).
Then you configure the iSCSI multipathing under the datacenter pane, using iscsi1 and iscsi2.
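For the record, the manual equivalent of what oVirt does under the hood should be something like this (a sketch, not needed when going through the GUI):

iscsiadm -m iface -I p1p1.100 --op new
iscsiadm -m iface -I p1p1.100 --op update -n iface.net_ifacename -v p1p1.100
iscsiadm -m iface -I p1p2 --op new
iscsiadm -m iface -I p1p2 --op update -n iface.net_ifacename -v p1p2
iscsiadm -m discovery -t sendtargets -p 10.10.100.9:3260 -I p1p1.100 -I p1p2
iscsiadm -m node -L all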

My multipath custom configuration is found here:
http://lists.ovirt.org/pipermail/users/2017-March/080898.html
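(For completeness, the relevant part is a device section for the EQLOGIC array; roughly along these lines, but see the link above for the exact content I used:)

devices {
    device {
        vendor "EQLOGIC"
        product "100E-00"
        path_grouping_policy multibus
        path_selector "round-robin 0"
        failback immediate
    }
}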

oVirt already sets this in /etc/sysctl.d/vdsm.conf:
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2

I created the /etc/sysctl.d/50-iscsi.conf configuration like this (note that the dot in the VLAN interface name p1p1.100 has to be written as a slash, because sysctl treats dots as separators):

net.ipv4.conf.p1p1/100.arp_announce=2
# p1p2 not on vlan, using /etc/sysctl.d/vdsm.conf configuration
#

net.ipv4.conf.p1p1/100.arp_ignore=1
# p1p2 not on vlan, using /etc/sysctl.d/vdsm.conf configuration
#

net.ipv4.conf.p1p1/100.rp_filter=2
# p1p2 not on vlan
net.ipv4.conf.p1p2.rp_filter=2
#
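To apply it without rebooting and verify the result, something like:

sysctl -p /etc/sysctl.d/50-iscsi.conf
cat /proc/sys/net/ipv4/conf/p1p1.100/rp_filter
cat /proc/sys/net/ipv4/conf/p1p2/rp_filter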

At the OS level, with the host and the storage domain up, I have:

[root@ov300 ~]# iscsiadm -m node
10.10.100.9:3260,1 iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910
10.10.100.9:3260,1 iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910
[root@ov300 ~]# 

[root@ov300 ~]# iscsiadm -m session
tcp: [4] 10.10.100.9:3260,1 iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910 (non-flash)
tcp: [5] 10.10.100.9:3260,1 iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910 (non-flash)
[root@ov300 ~]# 

[root@ov300 ~]# iscsiadm -m session -P 1
Target: iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910 (non-flash)
        Current Portal: 10.10.100.41:3260,1
        Persistent Portal: 10.10.100.9:3260,1
                **********
                Interface:
                **********
                Iface Name: p1p1.100
                Iface Transport: tcp
                Iface Initiatorname: iqn.1994-05.com.redhat:f2d7fc1e2fc
                Iface IPaddress: 10.10.100.87
                Iface HWaddress: <empty>
                Iface Netdev: p1p1.100
                SID: 4
                iSCSI Connection State: LOGGED IN
                iSCSI Session State: LOGGED_IN
                Internal iscsid Session State: NO CHANGE
        Current Portal: 10.10.100.42:3260,1
        Persistent Portal: 10.10.100.9:3260,1
                **********
                Interface:
                **********
                Iface Name: p1p2
                Iface Transport: tcp
                Iface Initiatorname: iqn.1994-05.com.redhat:f2d7fc1e2fc
                Iface IPaddress: 10.10.100.88
                Iface HWaddress: <empty>
                Iface Netdev: p1p2
                SID: 5
                iSCSI Connection State: LOGGED IN
                iSCSI Session State: LOGGED_IN
                Internal iscsid Session State: NO CHANGE
[root@ov300 ~]# 

[root@ov300 ~]# multipath -l
364817197b5dfd0e5538d959702249b1c dm-2 EQLOGIC ,100E-00         
size=1.0T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 12:0:0:0 sde 8:64 active undef  running
  `- 13:0:0:0 sdf 8:80 active undef  running
[root@ov300 ~]# 

I was only able to put the VM VLANs on the 1 Gbit bond, so I decided, as this will be a test cluster, to create the live migration VLAN 187 on only one of the 10 Gbit/s interfaces, and another VLAN, 162, used for NFS, on the other 10 Gbit/s interface.

Cheers,
Gianluca