On Fri, Jun 22, 2018 at 6:48 PM Bernhard Dick <bernhard(a)bdick.de> wrote:
Am 22.06.2018 um 17:38 schrieb Nir Soffer:
> On Fri, Jun 22, 2018 at 6:22 PM Bernhard Dick <bernhard(a)bdick.de> wrote:
> I've a problem creating an iSCSI storage domain. My hosts are running
> the current ovirt 4.2 engine-ng
>
>
> What is engine-ng?
Sorry, I mixed it up. It is oVirt node-ng.
>
> version. I can detect and log in to the
> iSCSI targets, but I cannot see any LUNs (on the LUNs > Targets page).
> That happens with our storage and with a Linux based iSCSI target which
> I created for testing purposes.
>
>
> A Linux based iSCSI target works fine; we use it a lot in our testing
> environments.
>
> Can you share the output of these commands on the host connected
> to the storage server?
>
> lsblk
NAME                                                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                   8:0    0   64G  0 disk
  sda1                                                8:1    0    1G  0 part /boot
  sda2                                                8:2    0   63G  0 part
    onn-pool00_tmeta                                253:0    0    1G  0 lvm
      onn-pool00-tpool                              253:2    0   44G  0 lvm
        onn-ovirt--node--ng--4.2.3.1--0.20180530.0+1 253:3   0   17G  0 lvm  /
        onn-pool00                                  253:12   0   44G  0 lvm
        onn-var_log_audit                           253:13   0    2G  0 lvm  /var/log/audit
        onn-var_log                                 253:14   0    8G  0 lvm  /var/log
        onn-var                                     253:15   0   15G  0 lvm  /var
        onn-tmp                                     253:16   0    1G  0 lvm  /tmp
        onn-home                                    253:17   0    1G  0 lvm  /home
        onn-root                                    253:18   0   17G  0 lvm
        onn-ovirt--node--ng--4.2.2--0.20180430.0+1  253:19   0   17G  0 lvm
        onn-var_crash                               253:20   0   10G  0 lvm
    onn-pool00_tdata                                253:1    0   44G  0 lvm
      onn-pool00-tpool                              253:2    0   44G  0 lvm
        onn-ovirt--node--ng--4.2.3.1--0.20180530.0+1 253:3   0   17G  0 lvm  /
        onn-pool00                                  253:12   0   44G  0 lvm
        onn-var_log_audit                           253:13   0    2G  0 lvm  /var/log/audit
        onn-var_log                                 253:14   0    8G  0 lvm  /var/log
        onn-var                                     253:15   0   15G  0 lvm  /var
        onn-tmp                                     253:16   0    1G  0 lvm  /tmp
        onn-home                                    253:17   0    1G  0 lvm  /home
        onn-root                                    253:18   0   17G  0 lvm
        onn-ovirt--node--ng--4.2.2--0.20180430.0+1  253:19   0   17G  0 lvm
        onn-var_crash                               253:20   0   10G  0 lvm
    onn-swap                                        253:4    0  6.4G  0 lvm  [SWAP]
sdb                                                   8:16   0  256G  0 disk
  gluster_vg_sdb-gluster_thinpool_sdb_tmeta         253:5    0    1G  0 lvm
    gluster_vg_sdb-gluster_thinpool_sdb-tpool       253:7    0  129G  0 lvm
      gluster_vg_sdb-gluster_thinpool_sdb           253:8    0  129G  0 lvm
      gluster_vg_sdb-gluster_lv_data                253:10   0   64G  0 lvm  /gluster_bricks/data
      gluster_vg_sdb-gluster_lv_vmstore             253:11   0   64G  0 lvm  /gluster_bricks/vmstore
  gluster_vg_sdb-gluster_thinpool_sdb_tdata         253:6    0  129G  0 lvm
    gluster_vg_sdb-gluster_thinpool_sdb-tpool       253:7    0  129G  0 lvm
      gluster_vg_sdb-gluster_thinpool_sdb           253:8    0  129G  0 lvm
      gluster_vg_sdb-gluster_lv_data                253:10   0   64G  0 lvm  /gluster_bricks/data
      gluster_vg_sdb-gluster_lv_vmstore             253:11   0   64G  0 lvm  /gluster_bricks/vmstore
  gluster_vg_sdb-gluster_lv_engine                  253:9    0  100G  0 lvm  /gluster_bricks/engine
sdc                                                   8:32   0  500G  0 disk
sdd                                                   8:48   0    1G  0 disk
sr0                                                  11:0    1  1.1G  0 rom
Is sdc your LUN?
Here, sdc is from our storage and sdd is from the Linux based target.
> multipath -ll
No output.
You don't have any multipath devices. oVirt block storage uses
only multipath devices, so you will not see any devices
on the engine side.
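To see why that is, something like this should show which devices multipath
rejects because of the blacklist (a sketch; the exact wording of the messages
depends on your multipath version):

multipath -v3 2>&1 | grep -i blacklist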
> cat /etc/multipath.conf
# VDSM REVISION 1.3
# VDSM PRIVATE
# VDSM REVISION 1.5
You are mixing several versions here. Is this a 1.3 or a 1.5 file?
# This file is managed by vdsm.
# [...]
defaults {
    # [...]
    polling_interval        5
    # [...]
    no_path_retry           4
According to this setting, this is a 1.5 file.
    # [...]
    user_friendly_names     no
    # [...]
    flush_on_last_del       yes
    # [...]
    fast_io_fail_tmo        5
    # [...]
    dev_loss_tmo            30
    # [...]
    max_fds                 4096
}
# Remove devices entries when overrides section is available.
devices {
    device {
        # [...]
        all_devs            yes
        no_path_retry       4
    }
}
# [...]
# inserted by blacklist_all_disks.sh
blacklist {
    devnode "*"
}
This is your issue - why do you blacklist all devices?
Based on the lsblk output, I think you are running a hyperconverged setup,
which wrongly blacklisted all multipath devices instead of only the local
devices used by Gluster.
To fix this:
1. Remove the wrong multipath blacklist.
2. Find the WWIDs of the local devices used by Gluster;
   these are /dev/sda and /dev/sdb (see the sketch after this list).
3. Add a blacklist for these specific devices:
blacklist {
    wwid XXX-YYY
    wwid YYY-ZZZ
}
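For step 2, one way to look up the WWIDs is the scsi_id udev helper (a sketch,
assuming the helper's usual EL7 location; the device names are the ones from
your lsblk output):

/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sda
/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdb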
With this you should be able to access all LUNs from the storage
server (assuming you configured the storage so the host can see them).
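After changing the blacklist, something along these lines should apply and
verify the change on the host (a sketch; the devices and LUNs you see depend
on your storage):

systemctl reload multipathd      # re-read multipath.conf and the drop-in files
multipath -ll                    # the iSCSI LUNs should now appear as multipath devices
vdsm-client Host getDeviceList   # and vdsm should report them as well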
Finally, it is recommended to use a drop-in configuration file for
local changes, and *never* touch /etc/multipath.conf, so vdsm is
able to manage this file.
This is done by putting your changes in:
/etc/multipath/conf.d/local.conf
Example:
$ cat /etc/multipath/conf.d/local.conf
# Local multipath configuration for host XXX
# Blacklist the boot device and the devices used for gluster storage.
blacklist {
    wwid XXX-YYY
    wwid YYY-ZZZ
}
You probably want to back up these files and have a script to
deploy them to the hosts in case you need to restore the setup.
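For example, a trivial deployment sketch, keeping one saved copy per host
(the host names and the backup directory are placeholders):

for host in host1 host2 host3; do
    scp "backup/$host-local.conf" "root@$host:/etc/multipath/conf.d/local.conf"
done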
Once you have a proper drop-in configuration, you can use
the standard vdsm multipath configuration by removing the line
# VDSM PRIVATE
and running:
vdsm-tool configure --force --module multipath
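For example (a sketch; you can of course remove the line with any editor
instead of sed):

sed -i '/^# VDSM PRIVATE$/d' /etc/multipath.conf
vdsm-tool configure --force --module multipath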
In EL7.6 we expect to have a fix for this issue, automatically
blacklisting local devices.
See
https://bugzilla.redhat.com/1593459
> vdsm-client Host getDeviceList
[]
Expected in this configuration.
> Nir
>
> When I log on to the oVirt hosts I see that they are connected with the
> target LUNs (dmesg is telling that there are iSCSI devices being found
> and they are getting assigned to devices in /dev/sdX). Writing to and
> reading from the devices (also across hosts) works. Do you have some
> advice on how to troubleshoot this?
>
> Regards
> Bernhard
--
Dipl.-Inf. Bernhard Dick
Auf dem Anger 24
DE-46485 Wesel
www.BernhardDick.de
jabber: bernhard(a)jabber.bdick.de
Tel : +49.2812068620
Mobil : +49.1747607927
FAX : +49.2812068621
USt-IdNr.: DE274728845