iSCSI Domain & LVM

Hello all,

I have a question regarding this note in the oVirt storage documentation:

**Important:** If you are using block storage and you intend to deploy virtual machines on raw devices or direct LUNs and to manage them with the Logical Volume Manager, you must create a filter to hide the guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption.

What exactly does "you must create a filter to hide the guest logical volumes" mean? I assume I have to set a filter in lvm.conf on all oVirt hosts, but I am not sure what to filter.

What I have already seen is that after creating the iSCSI domain and cloning/moving/creating virtual machines on it, new PVs and LVs become visible on the oVirt hosts, named with object UUIDs (output of the "pvs" and "lvs" commands). Is this expected behavior, and do I have to filter exactly these by allowing only local disks to be scanned for PVs/LVs? Or do I have to set up the filter to allow only local disks plus iSCSI disks (in my case /dev/sd?) to be scanned for PVs/LVs?

I also noticed that after detaching and removing the iSCSI domain I still have the UUID-named PVs. They all show up with "input/output error" in the output of "pvs" and stay there until I reboot the oVirt hosts.

On my iSCSI target system I have already set the correct LVM filters, so that "targetcli" is happy after a reboot.

Thank you!

Happy new year,
Robert

Hey,

From https://ovirt.org/blog/2017/12/lvm-configuration-the-easy-way.html:

"""
Why is a solution required? Because scanning and activating other logical volumes may cause data corruption, slow boot, and other issues.

The solution is configuring an LVM filter on each host, which allows LVM on a host to scan only the logical volumes required directly by the host.

To achieve this, we have introduced a vdsm-tool command, config-lvm-filter, that will configure the host for you.

The new command, vdsm-tool config-lvm-filter, analyzes the current LVM configuration to decide whether a filter should be configured. Then, if the LVM filter has yet to be configured, the command generates an LVM filter option for this host, and adds the option to the LVM configuration.
"""

Is that what you're looking for?

We should improve the docs here. Would you mind filing a documentation bug on https://github.com/oVirt/ovirt-site/issues/new ?

Best wishes,
Greg
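The LVM filter in question is the "filter" option in the devices section of /etc/lvm/lvm.conf: a whitelist of the devices the host itself needs, with everything else rejected. A minimal sketch of such a filter, with /dev/sda2 as a placeholder for a host's local PV (the real entry generated for one of the hosts in this thread appears further down), might look like:

  devices {
      # accept only the host's own PV device(s); reject everything else
      filter = [ "a|^/dev/sda2$|", "r|.*|" ]
  }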

Hello Greg,

this is what I was looking for. After running "vdsm-tool config-lvm-filter" on all hosts (and rebooting them), the PVs, VGs and LVs from the iSCSI domain were no longer visible to local LVM on the oVirt hosts.

Additionally I ran the following tests:
- Cloning + running a VM on the iSCSI domain
- Detaching + (re-)attaching the iSCSI domain
- Detaching, removing + (re-)importing the iSCSI domain
- Creating a new iSCSI domain (I needed to use "force operation" because it was created on the same iSCSI target)

All tests were successful.

As you asked, I filed a bug: <https://github.com/oVirt/ovirt-site/issues/1857>

Thank you.

Best regards,
Robert
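A quick way to double-check this state on a host (standard LVM commands; the exact filter value differs per host) might be:

  # show the LVM filter currently in effect (lvmconfig reads the merged configuration)
  lvmconfig devices/filter

  # after a reboot, only the host's own PVs/VGs/LVs should be listed
  pvs
  lvs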

Hello,

I'm running my oVirt Node NG based hosts off iSCSI root. This is what "vdsm-tool config-lvm-filter" suggests. Is this correct?

[root@myhostname ~]# vdsm-tool config-lvm-filter
Analyzing host...
Found these mounted logical volumes on this host:

  logical volume:  /dev/mapper/onn_myhostname--iscsi-home
  mountpoint:      /home
  devices:         /dev/mapper/mpath-myhostname-disk1p2

  logical volume:  /dev/mapper/onn_myhostname--iscsi-ovirt--node--ng--4.2.7.1--0.20181209.0+1
  mountpoint:      /
  devices:         /dev/mapper/mpath-myhostname-disk1p2

  logical volume:  /dev/mapper/onn_myhostname--iscsi-swap
  mountpoint:      [SWAP]
  devices:         /dev/mapper/mpath-myhostname-disk1p2

  logical volume:  /dev/mapper/onn_myhostname--iscsi-tmp
  mountpoint:      /tmp
  devices:         /dev/mapper/mpath-myhostname-disk1p2

  logical volume:  /dev/mapper/onn_myhostname--iscsi-var
  mountpoint:      /var
  devices:         /dev/mapper/mpath-myhostname-disk1p2

  logical volume:  /dev/mapper/onn_myhostname--iscsi-var_crash
  mountpoint:      /var/crash
  devices:         /dev/mapper/mpath-myhostname-disk1p2

  logical volume:  /dev/mapper/onn_myhostname--iscsi-var_log
  mountpoint:      /var/log
  devices:         /dev/mapper/mpath-myhostname-disk1p2

  logical volume:  /dev/mapper/onn_myhostname--iscsi-var_log_audit
  mountpoint:      /var/log/audit
  devices:         /dev/mapper/mpath-myhostname-disk1p2

This is the recommended LVM filter for this host:

  filter = [ "a|^/dev/mapper/mpath-myhostname-disk1p2$|", "r|.*|" ]

This filter allows LVM to access the local devices used by the hypervisor, but not shared storage owned by Vdsm. If you add a new device to the volume group, you will need to edit the filter manually.

Configure LVM filter? [yes,NO]

Yuval, is /dev/mapper/mpath-myhostname-disk1p2 expected on a node system?

Ralf, can you share the output of:

  lsblk
  multipath -ll
  multipathd show paths format "%d %P"

Nir

Hello,

I cannot tell if this is expected on a Node NG system. I worked hard to get it up and running like this. Two diskless hosts (EPYC 2x16 cores, 256 GB RAM) boot via iSCSI iBFT from the onboard Gigabit NIC; the initial ramdisk establishes multipathing, and that is what I get (and want). So I have redundant connections (1x1 GbE, 2x10 GbE as a bond) to my storage (Ubuntu box, 1x EPYC 16 cores, 128 GB RAM, currently 8 disks, 2x NVMe SSD, with ZFS exporting NFS 4.2 and targetcli-fb iSCSI targets). All disk images on NFS 4.2 and via iSCSI are thin-provisioned, and the sparse files grow and shrink when discarding in the VMs/host via fstrim.

My attempts to do this as UEFI boot were stopped by the Node NG installer partitioning, which refused to set up a UEFI FAT boot partition on the already accessible iSCSI targets, so the systems do legacy BIOS boot now.

This works happily even if I unplug one of the Ethernet cables.

[root@myhostname ~]# multipath -ll
mpath-myhostname-disk1 (36001405a26254e2bfd34b179d6e98ba4) dm-0 LIO-ORG ,myhostname-disk
size=50G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
| `- 3:0:0:0 sdb 8:16 active ready running
|-+- policy='queue-length 0' prio=50 status=enabled
| `- 4:0:0:0 sdc 8:32 active ready running
`-+- policy='queue-length 0' prio=50 status=enabled
  `- 0:0:0:0 sda 8:0 active ready running

See attached lsblk.

Looks like the LVM filter suggested by vdsm-tool is correct.

Try to configure it and see if your hosts boot correctly. If not, you will have to change the filter to match your setup.

But a multipath device named "mpath-myhostname-disk1" is alarming. I would expect to see the device as 36001405a26254e2bfd34b179d6e98ba4 in lsblk. Maybe this is OK with the way your hosts are configured.

Can you also share your /etc/multipath.conf, and any files under /etc/multipath/conf.d/?

Also check that vdsm does not report mpath-myhostname-disk1 or /dev/mapper/36001405a26254e2bfd34b179d6e98ba4 in the output of:

  vdsm-client Host getDeviceList

Finally, the devices used for booting the host should have a special, more robust multipath configuration that will queue I/O forever instead of failing. Otherwise your host root file system can become read-only if you lose all paths to storage at the same time. The only way to recover from this is to reboot the host.

See these bugs for more info on the needed setup:
https://bugzilla.redhat.com/show_bug.cgi?id=1436415
https://bugzilla.redhat.com/show_bug.cgi?id=1435335

To use a proper setup for your boot multipath, you need this patch, which is not available yet in 4.2:
https://gerrit.ovirt.org/c/93301/

Nir
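For illustration, one way to scope this kind of override to the boot LUN only, and keep it out of the vdsm-managed /etc/multipath.conf, is a drop-in under /etc/multipath/conf.d/ (a sketch; the file name is arbitrary, and the wwid and alias are the ones already shown in this thread):

  # /etc/multipath/conf.d/host-boot-lun.conf   (example file name)
  multipaths {
      multipath {
          # WWID of the iSCSI LUN this host boots from
          wwid 36001405a26254e2bfd34b179d6e98ba4
          alias mpath-myhostname-disk1
          # queue I/O forever on path loss instead of failing,
          # so the root filesystem does not go read-only
          no_path_retry queue
      }
  }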

Hello,

I manually renamed them via multipath.conf:

multipaths {
    multipath {
        wwid 36001405a26254e2bfd34b179d6e98ba4
        alias mpath-myhostname-disk1
    }
}

Before I configured multipath (at first I installed without it, before I knew about the "mpath" parameter in setup!) I had the problem of a read-only root a few times, and the hints and settings (for iscsid.conf) I found did not help. That's why I had to tweak the dracut ramdisk to set up multipath and understand the LVM activation and so on of ovirt-node-ng the hard way.

So you suggest to add

  no_path_retry queue

to the above config, according to your statements in https://bugzilla.redhat.com/show_bug.cgi?id=1435335 ? I cannot access https://bugzilla.redhat.com/show_bug.cgi?id=1436415

And yes, the disk gets listed and is also shown in the GUI. How can I filter this out? I think https://gerrit.ovirt.org/c/93301/ shows a way to do this. Will this be in 4.3?

So far thanks for your good hints.

This is the output of "vdsm-client Host getDeviceList":

[
  {
    "status": "used",
    "vendorID": "LIO-ORG",
    "GUID": "mpath-myhostname-disk1",
    "capacity": "53687091200",
    "fwrev": "4.0",
    "discard_zeroes_data": 0,
    "vgUUID": "",
    "pathlist": [
      { "initiatorname": "default", "connection": "172.16.1.3", "iqn": "iqn.2018-01.com.fqdn:storage01.myhostname-disk1", "portal": "1", "user": "myhostname", "password": "l3tm31scs1-2018", "port": "3260" },
      { "initiatorname": "ovirtmgmt", "connection": "192.168.1.3", "iqn": "iqn.2018-01.com.fqdn:storage01.myhostname-disk1", "portal": "1", "user": "myhostname", "password": "l3tm31scs1-2018", "port": "3260" },
      { "initiatorname": "ovirtmgmt", "connection": "192.168.1.3", "iqn": "iqn.2018-01.com.fqdn:storage01.myhostname-disk1", "portal": "1", "user": "myhostname", "password": "l3tm31scs1-2018", "port": "3260" }
    ],
    "pvsize": "",
    "discard_max_bytes": 4194304,
    "pathstatus": [
      { "capacity": "53687091200", "physdev": "sda", "type": "iSCSI", "state": "active", "lun": "0" },
      { "capacity": "53687091200", "physdev": "sdb", "type": "iSCSI", "state": "active", "lun": "0" },
      { "capacity": "53687091200", "physdev": "sdc", "type": "iSCSI", "state": "active", "lun": "0" }
    ],
    "devtype": "iSCSI",
    "physicalblocksize": "512",
    "pvUUID": "",
    "serial": "SLIO-ORG_myhostname-disk_a26254e2-bfd3-4b17-9d6e-98ba4ab45902",
    "logicalblocksize": "512",
    "productID": "myhostname-disk"
  }
]

Hello,

I see I already have:

devices {
    device {
        vendor "LIO-ORG"
        hardware_handler "1 alua"
        features "1 queue_if_no_path"
        path_grouping_policy "failover"
        path_selector "queue-length 0"
        failback immediate
        path_checker directio
        #path_checker tur
        prio alua
        prio_args exclusive_pref_bit
        #fast_io_fail_tmo 25
        no_path_retry queue
    }
}

Which should result in the same behaviour, correct?

On Tue, Jan 8, 2019 at 6:11 PM Ralf Schenk <rs@databay.de> wrote:
Hello,
I see I already have:
devices {
    device {
        vendor "LIO-ORG"
        hardware_handler "1 alua"
        features "1 queue_if_no_path"
        path_grouping_policy "failover"
        path_selector "queue-length 0"
        failback immediate
        path_checker directio
        #path_checker tur
        prio alua
        prio_args exclusive_pref_bit
        #fast_io_fail_tmo 25
        no_path_retry queue
    }
}
Which should result in the same behaviour, correct?
Yes, but this is very bad for vdsm if vdsm tries to manage such devices. It is better to put this configuration only for the boot device used by the host, and not for any LIO-ORG device.
On 08.01.2019 at 16:48, Ralf Schenk wrote:
Hello,
I manually renamed them via multipath.conf
multipaths {
    multipath {
        wwid 36001405a26254e2bfd34b179d6e98ba4
        alias mpath-myhostname-disk1
So here is a better place for no_path_retry queue
    }
}
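Applied to the quoted entry above, that suggestion would make it read roughly like this (a sketch; wwid and alias are the ones already used in this thread):

  multipaths {
      multipath {
          wwid 36001405a26254e2bfd34b179d6e98ba4
          alias mpath-myhostname-disk1
          # queue I/O for the boot LUN instead of failing on path loss
          no_path_retry queue
      }
  }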

On Tue, Jan 8, 2019 at 5:49 PM Ralf Schenk <rs@databay.de> wrote: ...
multipaths {
    multipath {
        wwid 36001405a26254e2bfd34b179d6e98ba4
        alias mpath-myhostname-disk1
    }
}
...
So you suggest to add
"no_path_retry queue"
to the above config, according to your statements in https://bugzilla.redhat.com/show_bug.cgi?id=1435335 ?
Yes.

...

And yes, the disk gets listed and is also shown in the GUI. How can I filter this out? I think https://gerrit.ovirt.org/c/93301/ shows a way to do this. Will this be in 4.3?
Yes, it is available, but I'm not sure using 4.3 at this point is a good idea. It would be safer to apply this small patch to 4.2.

Nir

Hello,

thanks for your help. I implemented your suggestions regarding the multipath config ("no_path_retry queue" only on the root wwid) and also the patch from https://gerrit.ovirt.org/c/93301/, and now my root on iSCSI is filtered from the GUI and hopefully rock-solid stable.

Bye
participants (4)
- Greg Sheremeta
- Nir Soffer
- Ralf Schenk
- tehnic@take3.ro