
Hi,
I've a problem creating an iSCSI storage domain. My hosts are running the current oVirt 4.2 engine-ng version. I can detect and log in to the iSCSI targets, but I cannot see any LUNs (on the LUNs > Targets page). That happens with our storage and with a Linux-based iSCSI target which I created for testing purposes. When I log on to the oVirt hosts I see that they are connected to the target LUNs (dmesg reports iSCSI devices being found and assigned to devices in /dev/sdX). Writing to and reading from the devices (also across hosts) works. Do you have some advice on how to troubleshoot this?
Regards
Bernhard
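PS: by "detect and log in" I mean the usual host-side iSCSI steps, roughly as below (the portal address and target IQN are placeholders, not our real ones):

# discover the targets offered by the portal
iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260
# log in to one of the discovered targets
iscsiadm -m node -T iqn.2018-06.example:target1 -p 192.0.2.10:3260 --login
# list the active sessions and the attached /dev/sdX devices
iscsiadm -m session -P 3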

On 06/22/2018 10:20 AM, Bernhard Dick wrote:
Hi,
I've a problem creating an iSCSI storage domain. My hosts are running the current oVirt 4.2 engine-ng version. I can detect and log in to the iSCSI targets, but I cannot see any LUNs (on the LUNs > Targets page). That happens with our storage and with a Linux-based iSCSI target which I created for testing purposes. When I log on to the oVirt hosts I see that they are connected to the target LUNs (dmesg reports iSCSI devices being found and assigned to devices in /dev/sdX). Writing to and reading from the devices (also across hosts) works. Do you have some advice on how to troubleshoot this?
Stating the obvious... you're not LUN masking them out? Normally, you'd create an access mask that allows the oVirt hypervisors to see the LUNs. But without that, maybe your default security policy is to prevent all (?).
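If your Linux test target is an LIO/targetcli target, that access mask is the per-TPG ACL list. A rough sketch (the target and initiator IQNs below are placeholders):

# allow a specific initiator (the oVirt host) to see the LUNs of this target
targetcli /iscsi/iqn.2018-06.example:target1/tpg1/acls create iqn.1994-05.com.redhat:ovirt-host1
# or, for a throwaway test target only, disable per-initiator ACLs entirely (demo mode)
targetcli /iscsi/iqn.2018-06.example:target1/tpg1 set attribute generate_node_acls=1
targetcli saveconfig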

On Fri, Jun 22, 2018 at 6:22 PM Bernhard Dick <bernhard@bdick.de> wrote:
Hi,
I've a problem creating an iSCSI storage domain. My hosts are running the current oVirt 4.2 engine-ng
What is engine-ng?
version. I can detect and log in to the iSCSI targets, but I cannot see any LUNs (on the LUNs > Targets page). That happens with our storage and with a Linux-based iSCSI target which I created for testing purposes.
A Linux-based iSCSI target works fine; we use it a lot in our testing environments.
Can you share the output of these commands on the host connected to the storage server?
lsblk
multipath -ll
cat /etc/multipath.conf
vdsm-client Host getDeviceList
Nir
When I log on to the oVirt hosts I see that they are connected to the
target LUNs (dmesg reports iSCSI devices being found and assigned to devices in /dev/sdX). Writing to and reading from the devices (also across hosts) works. Do you have some advice on how to troubleshoot this?
Regards
Bernhard

On 22.06.2018 at 17:38, Nir Soffer wrote:
On Fri, Jun 22, 2018 at 6:22 PM Bernhard Dick <bernhard@bdick.de> wrote:
I've a problem creating an iSCSI storage domain. My hosts are running the current oVirt 4.2 engine-ng
What is engine-ng?
Sorry, I mixed it up. It is oVirt node-ng.
version. I can detect and log in to the iSCSI targets, but I cannot see any LUNs (on the LUNs > Targets page). That happens with our storage and with a Linux-based iSCSI target which I created for testing purposes.
A Linux-based iSCSI target works fine; we use it a lot in our testing environments.
Can you share the output of these commands on the host connected to the storage server?
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 64G 0 disk
sda1 8:1 0 1G 0 part /boot
sda2 8:2 0 63G 0 part
onn-pool00_tmeta 253:0 0 1G 0 lvm
onn-pool00-tpool 253:2 0 44G 0 lvm
onn-ovirt--node--ng--4.2.3.1--0.20180530.0+1 253:3 0 17G 0 lvm /
onn-pool00 253:12 0 44G 0 lvm
onn-var_log_audit 253:13 0 2G 0 lvm /var/log/audit
onn-var_log 253:14 0 8G 0 lvm /var/log
onn-var 253:15 0 15G 0 lvm /var
onn-tmp 253:16 0 1G 0 lvm /tmp
onn-home 253:17 0 1G 0 lvm /home
onn-root 253:18 0 17G 0 lvm
onn-ovirt--node--ng--4.2.2--0.20180430.0+1 253:19 0 17G 0 lvm
onn-var_crash 253:20 0 10G 0 lvm
onn-pool00_tdata 253:1 0 44G 0 lvm
onn-pool00-tpool 253:2 0 44G 0 lvm
onn-ovirt--node--ng--4.2.3.1--0.20180530.0+1 253:3 0 17G 0 lvm /
onn-pool00 253:12 0 44G 0 lvm
onn-var_log_audit 253:13 0 2G 0 lvm /var/log/audit
onn-var_log 253:14 0 8G 0 lvm /var/log
onn-var 253:15 0 15G 0 lvm /var
onn-tmp 253:16 0 1G 0 lvm /tmp
onn-home 253:17 0 1G 0 lvm /home
onn-root 253:18 0 17G 0 lvm
onn-ovirt--node--ng--4.2.2--0.20180430.0+1 253:19 0 17G 0 lvm
onn-var_crash 253:20 0 10G 0 lvm
onn-swap 253:4 0 6.4G 0 lvm [SWAP]
sdb 8:16 0 256G 0 disk
gluster_vg_sdb-gluster_thinpool_sdb_tmeta 253:5 0 1G 0 lvm
gluster_vg_sdb-gluster_thinpool_sdb-tpool 253:7 0 129G 0 lvm
gluster_vg_sdb-gluster_thinpool_sdb 253:8 0 129G 0 lvm
gluster_vg_sdb-gluster_lv_data 253:10 0 64G 0 lvm /gluster_bricks/data
gluster_vg_sdb-gluster_lv_vmstore 253:11 0 64G 0 lvm /gluster_bricks/vmstore
gluster_vg_sdb-gluster_thinpool_sdb_tdata 253:6 0 129G 0 lvm
gluster_vg_sdb-gluster_thinpool_sdb-tpool 253:7 0 129G 0 lvm
gluster_vg_sdb-gluster_thinpool_sdb 253:8 0 129G 0 lvm
gluster_vg_sdb-gluster_lv_data 253:10 0 64G 0 lvm /gluster_bricks/data
gluster_vg_sdb-gluster_lv_vmstore 253:11 0 64G 0 lvm /gluster_bricks/vmstore
gluster_vg_sdb-gluster_lv_engine 253:9 0 100G 0 lvm /gluster_bricks/engine
sdc 8:32 0 500G 0 disk
sdd 8:48 0 1G 0 disk
sr0 11:0 1 1.1G 0 rom

Here sdc is from the storage, sdd is from the Linux-based target.
multipath -ll
(no output)

cat /etc/multipath.conf
# VDSM REVISION 1.3
# VDSM PRIVATE
# VDSM REVISION 1.5
# This file is managed by vdsm.
# [...]
defaults {
    # [...]
    polling_interval 5
    # [...]
    no_path_retry 4
    # [...]
    user_friendly_names no
    # [...]
    flush_on_last_del yes
    # [...]
    fast_io_fail_tmo 5
    # [...]
    dev_loss_tmo 30
    # [...]
    max_fds 4096
}
# Remove devices entries when overrides section is available.
devices {
    device {
        # [...]
        all_devs yes
        no_path_retry 4
    }
}
# [...]
# inserted by blacklist_all_disks.sh
blacklist {
    devnode "*"
}
vdsm-client Host getDeviceList
[]
Nir
When I log on to the oVirt hosts I see that they are connected to the target LUNs (dmesg reports iSCSI devices being found and assigned to devices in /dev/sdX). Writing to and reading from the devices (also across hosts) works. Do you have some advice on how to troubleshoot this?
Regards
Bernhard
--
Dipl.-Inf. Bernhard Dick
Auf dem Anger 24
DE-46485 Wesel
www.BernhardDick.de
jabber: bernhard@jabber.bdick.de
Tel   : +49.2812068620
Mobil : +49.1747607927
FAX   : +49.2812068621
USt-IdNr.: DE274728845

On Fri, Jun 22, 2018 at 6:48 PM Bernhard Dick <bernhard@bdick.de> wrote:
On 22.06.2018 at 17:38, Nir Soffer wrote:
On Fri, Jun 22, 2018 at 6:22 PM Bernhard Dick <bernhard@bdick.de> wrote:
I've a problem creating an iSCSI storage domain. My hosts are running the current oVirt 4.2 engine-ng
What is engine-ng?
Sorry, I mixed it up. It is oVirt node-ng.
version. I can detect and log in to the iSCSI targets, but I cannot see any LUNs (on the LUNs > Targets page). That happens with our storage and with a Linux-based iSCSI target which I created for testing purposes.
A Linux-based iSCSI target works fine; we use it a lot in our testing environments.
Can you share the output of these commands on the host connected to the storage server?
lsblk
[... lsblk output as above ...]
sdc 8:32 0 500G 0 disk
sdd 8:48 0 1G 0 disk
sr0 11:0 1 1.1G 0 rom
Is sdc your LUN?
Here sdc is from the storage, sdd is from the Linux-based target.
multipath -ll
(no output)
You don't have any multipath devices. oVirt block storage uses only multipath devices, which means that you will not see any devices on the engine side.
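In case it helps, a couple of commands that can show why a disk is not being picked up as a multipath device (just a sketch, using /dev/sdc from your lsblk output):

# print the multipath configuration that is actually in effect, including the blacklist section
multipathd show config | grep -B 2 -A 10 blacklist
# verbose run for one of the iSCSI disks; the output states whether the device is blacklisted
multipath -v3 /dev/sdc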
cat /etc/multipath.conf
# VDSM REVISION 1.3
# VDSM PRIVATE
# VDSM REVISION 1.5
You are mixing several versions here. Is this a 1.3 or a 1.5 file?
# This file is managed by vdsm.
# [...]
defaults {
    # [...]
    polling_interval 5
    # [...]
    no_path_retry 4
According to this, it is a 1.5 version.
    # [...]
    user_friendly_names no
    # [...]
    flush_on_last_del yes
    # [...]
    fast_io_fail_tmo 5
    # [...]
    dev_loss_tmo 30
    # [...]
    max_fds 4096
}
# Remove devices entries when overrides section is available.
devices {
    device {
        # [...]
        all_devs yes
        no_path_retry 4
    }
}
# [...]
# inserted by blacklist_all_disks.sh
blacklist {
    devnode "*"
}
This is your issue - why do you blacklist all devices?

By the lsblk output I think you are running a hyperconverged setup, which wrongly disabled all multipath devices instead of only the local devices used by Gluster.

To fix this:
1. Remove the wrong multipath blacklist.
2. Find the WWIDs of the local devices used by Gluster; these are /dev/sda and /dev/sdb (one way to do this is sketched below).
3. Add a blacklist for these specific devices:

blacklist {
    wwid XXX-YYY
    wwid YYY-ZZZ
}

With this you should be able to access all LUNs from the storage server (assuming you configured the storage so the host can see them).

Finally, it is recommended to use a drop-in configuration file for local changes and *never* touch /etc/multipath.conf, so vdsm is able to manage this file. This is done by putting your changes in /etc/multipath/conf.d/local.conf.

Example:

$ cat /etc/multipath/conf.d/local.conf
# Local multipath configuration for host XXX
# blacklist boot device and devices used for gluster storage.
blacklist {
    wwid XXX-YYY
    wwid YYY-ZZZ
}

You probably want to back up these files and have a script to deploy them to the hosts if you need to restore the setup.

Once you have a proper drop-in configuration, you can use the standard vdsm multipath configuration by removing the line

# VDSM PRIVATE

and running:

vdsm-tool configure --force --module multipath

In EL7.6 we expect to have a fix for this issue, blacklisting local devices automatically. See https://bugzilla.redhat.com/1593459
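One way to find the WWIDs of the local disks, and to check the result afterwards, is sketched below (device names taken from the lsblk output above; treat this as a sketch and adjust as needed):

# show the WWN reported for each block device
lsblk -o NAME,WWN,TYPE,SIZE
# or query the local disks directly
/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sda
/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdb
# after fixing the blacklist, reload the multipath maps and check that the
# iSCSI LUNs (sdc, sdd) now show up as multipath devices
multipath -r
multipath -ll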
vdsm-client Host getDeviceList
[]
Expected in this configuration.
Nir
When I log on to the oVirt hosts I see that they are connected to the target LUNs (dmesg reports iSCSI devices being found and assigned to devices in /dev/sdX). Writing to and reading from the devices (also across hosts) works. Do you have some advice on how to troubleshoot this?
Regards
Bernhard

Hi,
On 22.06.2018 at 18:12, Nir Soffer wrote:
On Fri, Jun 22, 2018 at 6:48 PM Bernhard Dick <bernhard@bdick.de> wrote:
On 22.06.2018 at 17:38, Nir Soffer wrote:
> On Fri, Jun 22, 2018 at 6:22 PM Bernhard Dick <bernhard@bdick.de> wrote: [...]
Is sdc your LUN?
Here sdc is from the storage, sdd is from the Linux-based target.
> multipath -ll
> (no output)
You don't have any multipath devices. oVirt block storage uses only multipath devices, which means that you will not see any devices on the engine side.
> cat /etc/multipath.conf
> # VDSM REVISION 1.3
> # VDSM PRIVATE
> # VDSM REVISION 1.5
You are mixing several versions here. Is this a 1.3 or a 1.5 file?
Hmm, I didn't touch the file. Maybe something went wrong during the update procedures.
# This file is managed by vdsm.
# [...]
defaults {
    # [...]
    polling_interval 5
    # [...]
    no_path_retry 4
According to this, it is a 1.5 version.
    # [...]
    user_friendly_names no
    # [...]
    flush_on_last_del yes
    # [...]
    fast_io_fail_tmo 5
    # [...]
    dev_loss_tmo 30
    # [...]
    max_fds 4096
}
# Remove devices entries when overrides section is available.
devices {
    device {
        # [...]
        all_devs yes
        no_path_retry 4
    }
}
# [...]
# inserted by blacklist_all_disks.sh
blacklist {
    devnode "*"
}
This is your issue - why do you blacklist all devices?
By the lsblk output I think you are running a hyperconverged setup, which wrongly disabled all multipath devices instead of only the local devices used by Gluster.
To fix this:
1. Remove the wrong multipath blacklist.
2. Find the WWIDs of the local devices used by Gluster; these are /dev/sda and /dev/sdb.
3. Add a blacklist for these specific devices:
blacklist {
    wwid XXX-YYY
    wwid YYY-ZZZ
}
With this you should be able to access all LUNs from the storage server (assuming you configured the storage so the host can see them). Finally, it is recommended to use a drop-in configuration file for local changes, and *never* touch /etc/multipath.conf, so vdsm is able to manage this file.
This is done by putting your changes in: /etc/multipath/conf.d/local.conf
Example:
$ cat /etc/multipath/conf.d/local.conf
# Local multipath configuration for host XXX
# blacklist boot device and devices used for gluster storage.
blacklist {
    wwid XXX-YYY
    wwid YYY-ZZZ
}
You probably want to back up these files and have a script to deploy them to the hosts if you need to restore the setup.
Once you have a proper drop-in configuration, you can use the standard vdsm multipath configuration by removing the line
# VDSM PRIVATE
And running:
vdsm-tool configure --force --module multipath
That solved it. Blacklisting the local drives, however, does not really seem to work. I assume that is because the local drives are virtio storage drives in my case (as it is a testing environment based on virtual hosts) and they have type-0x80 WWIDs of the form "0QEMU QEMU HARDDISK drive-scsi1". Thanks for your help!
Regards
Bernhard
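PS: maybe matching the QEMU disks by vendor/product, or by a wwid regular expression, works instead of the literal WWIDs; an untested sketch for the drop-in file:

# /etc/multipath/conf.d/local.conf (untested sketch, adjust to the actual local disks)
blacklist {
    # match the QEMU-emulated local disks by vendor/product ...
    device {
        vendor "QEMU"
        product "QEMU HARDDISK"
    }
    # ... or match their type-0x80 identifiers with a wwid regex
    wwid "0QEMU.*"
}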
In EL7.6 we expect to have a fix for this issue, blacklisting local devices automatically. See https://bugzilla.redhat.com/1593459
> vdsm-client Host getDeviceList
> []
Expected in this configuration.
> Nir
>
> When I log on to the oVirt hosts I see that they are connected to the target LUNs (dmesg reports iSCSI devices being found and assigned to devices in /dev/sdX). Writing to and reading from the devices (also across hosts) works. Do you have some advice on how to troubleshoot this?
>
> Regards
> Bernhard
--
Dipl.-Inf. Bernhard Dick
Auf dem Anger 24
DE-46485 Wesel
www.BernhardDick.de
jabber: bernhard@jabber.bdick.de
Tel   : +49.2812068620
Mobil : +49.1747607927
FAX   : +49.2812068621
USt-IdNr.: DE274728845
participants (3)
- Bernhard Dick
- Christopher Cox
- Nir Soffer