[ovirt-users] iSCSI Multipath issues

Vinícius Ferrão ferrao at if.ufrj.br
Tue Jul 25 15:25:40 UTC 2017


Bug opened here:
https://bugzilla.redhat.com/show_bug.cgi?id=1474904

Thanks,
V.

On 25 Jul 2017, at 12:08, Vinícius Ferrão <ferrao at if.ufrj.br> wrote:

Hello Maor,

Thanks for answering and looking deeper into this case. You’re welcome to connect to my machine, since it’s reachable over the internet. I’ll be opening a ticket shortly. Just to post an update here:

I’ve done what you asked, but since I’m running a Self-Hosted Engine, I lost the connection to the HE. Here’s the CLI output:



Last login: Thu Jul 20 02:43:50 2017 from 172.31.2.3

 node status: OK
 See `nodectl check` for more information

Admin Console: https://192.168.11.3:9090/ or https://192.168.12.3:9090/ or https://146.164.37.103:9090/

[root@ovirt3 ~]# iscsiadm -m session -u
Logging out of session [sid: 1, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-he, portal: 192.168.12.14,3260]
Logging out of session [sid: 4, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal: 192.168.12.14,3260]
Logging out of session [sid: 7, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal: 192.168.12.14,3260]
Logging out of session [sid: 5, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal: 192.168.11.14,3260]
Logging out of session [sid: 6, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal: 192.168.11.14,3260]
Logout of [sid: 1, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-he, portal: 192.168.12.14,3260] successful.
Logout of [sid: 4, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal: 192.168.12.14,3260] successful.
Logout of [sid: 7, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal: 192.168.12.14,3260] successful.
Logout of [sid: 5, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal: 192.168.11.14,3260] successful.
Logout of [sid: 6, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal: 192.168.11.14,3260] successful.
[root@ovirt3 ~]# service iscsid stop
Redirecting to /bin/systemctl stop  iscsid.service
Warning: Stopping iscsid.service, but it can still be activated by:
 iscsid.socket

[root@ovirt3 ~]# mv /var/lib/iscsi/ifaces/* /tmp/ifaces

[root@ovirt3 ~]# service iscsid start
Redirecting to /bin/systemctl start  iscsid.service

And finally:

[root@ovirt3 ~]# hosted-engine --vm-status
.
.
.

It just hangs.
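One generic way to check whether the hosted-engine HA services themselves are still alive on the host while --vm-status hangs (a suggested troubleshooting step, not something that was run in this thread):

  # check the HA agent and broker that hosted-engine --vm-status talks to
  systemctl status ovirt-ha-agent ovirt-ha-broker
  # look at the broker's most recent log messages
  journalctl -u ovirt-ha-broker -n 50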

Thanks,
V.

On 25 Jul 2017, at 05:54, Maor Lipchuk <mlipchuk at redhat.com> wrote:

Hi Vinícius,

I was trying to reproduce your scenario and also encountered this
issue, so please disregard my last comment. Can you please open a bug
on that so we can investigate it properly?

Thanks,
Maor


On Tue, Jul 25, 2017 at 11:26 AM, Maor Lipchuk <mlipchuk at redhat.com> wrote:
Hi Vinícius,

For some reason it looks like both of your network interfaces are connected to the same IPs.

Based on the VDSM logs:
    u'connectionParams':[
       {
          u'netIfaceName':u'eno3.11',
          u'connection':u'192.168.11.14',
       },
       {
          u'netIfaceName':u'eno3.11',
          u'connection':u'192.168.12.14',
       },
       {
          u'netIfaceName':u'eno4.12',
          u'connection':u'192.168.11.14',
       },
       {
          u'netIfaceName':u'eno4.12',
          u'connection':u'192.168.12.14',
       }
    ],
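A quick way to see the same mapping directly on the host (a suggested check, not something from the original mail) is to list, for each logged-in session, which iface it uses and which portal it is connected to:

  # per session: target, bound iface, and the portal actually in use
  iscsiadm -m session -P 1 | grep -E 'Target:|Iface Name:|Current Portal:'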

Can you try to reconnect to the iSCSI storage domain after
re-initializing iscsiadm on your host?

1. Move your iSCSI storage domain to maintenance in oVirt by deactivating it

2. On your VDSM host, log out of the open iSCSI sessions that are
related to this storage domain.
If that is your only iSCSI storage domain, log out of all sessions:
 "iscsiadm -m session -u"

3. Stop the iscsid service:
 "service iscsid stop"

4. Move the network interface definitions configured in iscsiadm to a
temporary folder:
  mv /var/lib/iscsi/ifaces/* /tmp/ifaces

5. Start the iscsid service:
 "service iscsid start"

Regards,
Maor and Benny

On Wed, Jul 19, 2017 at 1:01 PM, Uwe Laverenz <uwe at laverenz.de> wrote:
Hi,


On 19 Jul 2017, at 04:52, Vinícius Ferrão wrote:

I’m joining the crowd with iSCSI Multipath issues on oVirt here. I’m
trying to enable the feature without success too.

Here’s what I’ve done, step-by-step.

1. Installed oVirt Node 4.1.3 with the following network settings:

eno1 and eno2 on a 802.3ad (LACP) Bond, creating a bond0 interface.
eno3 with 9216 MTU.
eno4 with 9216 MTU.
vlan11 on eno3 with 9216 MTU and fixed IP addresses.
vlan12 on eno4 with 9216 MTU and fixed IP addresses.

eno3 and eno4 are my iSCSI MPIO interfaces, completely segregated, on
different switches.
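A quick way to double-check the segregation and the jumbo-frame path from the host (a suggested verification, not part of the original setup; the 9188-byte payload is the 9216 MTU minus 28 bytes of IP/ICMP headers, and the portal IPs are the ones from this thread):

  # each portal should answer only over its own VLAN interface, with DF set and a jumbo-sized payload
  ping -c 2 -M do -s 9188 -I eno3.11 192.168.11.14
  ping -c 2 -M do -s 9188 -I eno4.12 192.168.12.14
  # the cross combinations should fail if the networks are really isolated
  ping -c 2 -I eno3.11 192.168.12.14
  ping -c 2 -I eno4.12 192.168.11.14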


This is the point: the oVirt implementation of iSCSI bonding assumes that
all network interfaces in the bond can connect to/reach all targets, including
those in the other net(s). The fact that you use separate, isolated networks
means that this is not the case in your setup (and not in mine).

I am not sure if this is a bug, a design flaw or a feature, but as a result
oVirt's iSCSI bonding does not work for us.

Please see my mail from yesterday for a workaround.

cu,
Uwe
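For readers who do not have that mail at hand, the usual manual alternative in this situation looks roughly like the sketch below: skip the iSCSI bond in the UI, bind one iscsiadm iface to each NIC, and log in only to the portal that NIC can actually reach. The iface names here are made up, the target and portals are taken from this thread, and this may well differ from the workaround referenced above; doing this by hand can also conflict with connections managed by VDSM.

  # create one iface record per iSCSI NIC and bind it to the VLAN interface
  iscsiadm -m iface -I vlan11 --op new
  iscsiadm -m iface -I vlan11 --op update -n iface.net_ifacename -v eno3.11
  iscsiadm -m iface -I vlan12 --op new
  iscsiadm -m iface -I vlan12 --op update -n iface.net_ifacename -v eno4.12

  # discover and log in only on the portal each interface can reach
  iscsiadm -m discovery -t sendtargets -p 192.168.11.14:3260 -I vlan11
  iscsiadm -m discovery -t sendtargets -p 192.168.12.14:3260 -I vlan12
  iscsiadm -m node -T iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio -p 192.168.11.14:3260 -I vlan11 -l
  iscsiadm -m node -T iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio -p 192.168.12.14:3260 -I vlan12 -l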
_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
