Hello,
I manually renamed them via multipath.conf:

multipaths {
    multipath {
        wwid  36001405a26254e2bfd34b179d6e98ba4
        alias mpath-myhostname-disk1
    }
}
Before I configured multipath (I initially installed without it, before I
knew about the "mpath" parameter in setup!) I hit the read-only root
problem a few times, and the hints and settings (for iscsid.conf) I found
didn't help. That's why I had to tweak the dracut ramdisk to set up
multipath and learn LVM activation and so on of ovirt-ng the hard way.
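Roughly, the ramdisk tweak amounted to rebuilding the initramfs with the
multipath dracut module once /etc/multipath.conf was in place (a sketch
from memory; the exact invocation on node-ng may differ):

# rebuild the initramfs so it picks up the multipath module and config
dracut --force --add multipath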
So you suggest adding
"no_path_retry queue"
to the above config, according to your statements in
https://bugzilla.redhat.com/show_bug.cgi?id=1435335 ?
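I.e. something like this (just my understanding, not applied yet):

multipaths {
    multipath {
        wwid          36001405a26254e2bfd34b179d6e98ba4
        alias         mpath-myhostname-disk1
        no_path_retry queue
    }
}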
I cannot access
https://bugzilla.redhat.com/show_bug.cgi?id=1436415
And yes, the disks get listed (see the "vdsm-client Host getDeviceList"
output below) and are also shown in the GUI. How can I filter this out?
I think https://gerrit.ovirt.org/c/93301/ shows a way to do this. Will
this be in 4.3?
Thanks so far for your good hints.
[
    {
        "status": "used",
        "vendorID": "LIO-ORG",
        "GUID": "mpath-myhostname-disk1",
        "capacity": "53687091200",
        "fwrev": "4.0",
        "discard_zeroes_data": 0,
        "vgUUID": "",
        "pathlist": [
            {
                "initiatorname": "default",
                "connection": "172.16.1.3",
                "iqn": "iqn.2018-01.com.fqdn:storage01.myhostname-disk1",
                "portal": "1",
                "user": "myhostname",
                "password": "l3tm31scs1-2018",
                "port": "3260"
            },
            {
                "initiatorname": "ovirtmgmt",
                "connection": "192.168.1.3",
                "iqn": "iqn.2018-01.com.fqdn:storage01.myhostname-disk1",
                "portal": "1",
                "user": "myhostname",
                "password": "l3tm31scs1-2018",
                "port": "3260"
            },
            {
                "initiatorname": "ovirtmgmt",
                "connection": "192.168.1.3",
                "iqn": "iqn.2018-01.com.fqdn:storage01.myhostname-disk1",
                "portal": "1",
                "user": "myhostname",
                "password": "l3tm31scs1-2018",
                "port": "3260"
            }
        ],
        "pvsize": "",
        "discard_max_bytes": 4194304,
        "pathstatus": [
            {
                "capacity": "53687091200",
                "physdev": "sda",
                "type": "iSCSI",
                "state": "active",
                "lun": "0"
            },
            {
                "capacity": "53687091200",
                "physdev": "sdb",
                "type": "iSCSI",
                "state": "active",
                "lun": "0"
            },
            {
                "capacity": "53687091200",
                "physdev": "sdc",
                "type": "iSCSI",
                "state": "active",
                "lun": "0"
            }
        ],
        "devtype": "iSCSI",
        "physicalblocksize": "512",
        "pvUUID": "",
        "serial": "SLIO-ORG_myhostname-disk_a26254e2-bfd3-4b17-9d6e-98ba4ab45902",
        "logicalblocksize": "512",
        "productID": "myhostname-disk"
    }
]
On 08.01.2019 at 16:19, Nir Soffer wrote:
On Tue, Jan 8, 2019 at 4:57 PM Ralf Schenk <rs@databay.de> wrote:
Hello,
I cannot tell if this is expected on a node-ng system. I worked hard
to get it up and running like this. Two diskless hosts (EPYC 2x16
core, 256 GB RAM) boot via iSCSI iBFT from the onboard Gigabit NIC;
the initial ramdisk establishes multipathing, and that's what I get
(and want). So I've got redundant connections (1x1 GbE, 2x10 GbE as a
bond) to my storage (Ubuntu box, 1xEPYC 16 core, 128 GB RAM, currently
8 disks, 2xNVMe SSD with ZFS, exporting NFS 4.2 and targetcli-fb iSCSI
targets). All disk images on NFS 4.2 and via iSCSI are
thin-provisioned, and the sparse files grow and shrink when discarding
in the VMs/host via fstrim.
My attempts to do this as UEFI boot were stopped by the node-ng
installer partitioning, which refused to set up a UEFI FAT boot
partition on the already accessible iSCSI targets. So the systems do
legacy BIOS boot now.
This works happily even if I unplug one of the Ethernet cables.
[root@myhostname ~]# multipath -ll
mpath-myhostname-disk1 (36001405a26254e2bfd34b179d6e98ba4) dm-0 LIO-ORG ,myhostname-disk
size=50G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
| `- 3:0:0:0 sdb 8:16 active ready running
|-+- policy='queue-length 0' prio=50 status=enabled
| `- 4:0:0:0 sdc 8:32 active ready running
`-+- policy='queue-length 0' prio=50 status=enabled
`- 0:0:0:0 sda 8:0 active ready running
See attached lsblk.
Looks like the lvm filter suggested by vdsm-tool is correct.
Try to configure it and see if your hosts boot correctly. If not, you
will have to change the filter to match your setup.
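For example, just re-run the tool and answer "yes" at the prompt; it
writes the filter into /etc/lvm/lvm.conf for you (the elided output is
the same analysis as quoted below):

# vdsm-tool config-lvm-filter
...
Configure LVM filter? [yes,NO]: yes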
But a multipath device named "mpath-myhostname" is alarming. I would
expect to see the device as 36001405a26254e2bfd34b179d6e98ba4 in lsblk.
Maybe this is ok with the way your hosts are configured.
Can you also share your /etc/multipath.conf, and any files under
/etc/multipath/conf.d/?
Also check that vdsm does not report mpath-myhostname or
/dev/mapper/36001405a26254e2bfd34b179d6e98ba4 in the output of
vdsm-client Host getDeviceList
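For example, a quick check like this should come back empty (assuming
the alias or WWID would appear verbatim in the JSON output):

vdsm-client Host getDeviceList | grep -E 'mpath-myhostname|36001405a26254e2bfd34b179d6e98ba4'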
Finally, the devices used for booting the host should have a special,
more robust multipath configuration that will queue I/O forever instead
of failing. Otherwise your host root file system can become read-only
if you lose all paths to storage at the same time. The only way to
recover from this is to reboot the host.
See these bugs for more info on the needed setup:
https://bugzilla.redhat.com/show_bug.cgi?id=1436415
https://bugzilla.redhat.com/show_bug.cgi?id=1435335
To use the proper setup for your boot multipath, you need this patch,
which is not yet available in 4.2:
https://gerrit.ovirt.org/c/93301/
Nir
On 08.01.2019 at 14:46, Nir Soffer wrote:
> On Tue, Jan 8, 2019 at 1:29 PM Ralf Schenk <rs@databay.de> wrote:
>
> Hello,
>
> I'm running my oVirt-Node-NG based hosts off iSCSI root. That
> is what "vdsm-tool config-lvm-filter" suggests. Is this correct?
>
> [root@myhostname ~]# vdsm-tool config-lvm-filter
> Analyzing host...
> Found these mounted logical volumes on this host:
>
> logical volume: /dev/mapper/onn_myhostname--iscsi-home
> mountpoint: /home
> devices: /dev/mapper/mpath-myhostname-disk1p2
>
> logical volume:
> /dev/mapper/onn_myhostname--iscsi-ovirt--node--ng--4.2.7.1--0.20181209.0+1
> mountpoint: /
> devices: /dev/mapper/mpath-myhostname-disk1p2
>
> logical volume: /dev/mapper/onn_myhostname--iscsi-swap
> mountpoint: [SWAP]
> devices: /dev/mapper/mpath-myhostname-disk1p2
>
> logical volume: /dev/mapper/onn_myhostname--iscsi-tmp
> mountpoint: /tmp
> devices: /dev/mapper/mpath-myhostname-disk1p2
>
> logical volume: /dev/mapper/onn_myhostname--iscsi-var
> mountpoint: /var
> devices: /dev/mapper/mpath-myhostname-disk1p2
>
> logical volume: /dev/mapper/onn_myhostname--iscsi-var_crash
> mountpoint: /var/crash
> devices: /dev/mapper/mpath-myhostname-disk1p2
>
> logical volume: /dev/mapper/onn_myhostname--iscsi-var_log
> mountpoint: /var/log
> devices: /dev/mapper/mpath-myhostname-disk1p2
>
> logical volume:
> /dev/mapper/onn_myhostname--iscsi-var_log_audit
> mountpoint: /var/log/audit
> devices: /dev/mapper/mpath-myhostname-disk1p2
>
> This is the recommended LVM filter for this host:
>
> filter = [ "a|^/dev/mapper/mpath-myhostname-disk1p2$|",
> "r|.*|" ]
>
> This filter allows LVM to access the local devices used by the
> hypervisor, but not shared storage owned by Vdsm. If you add
> a new
> device to the volume group, you will need to edit the filter
> manually.
>
> Configure LVM filter? [yes,NO]
>
>
> Yoval, is /dev/mapper/mpath-myhostname-disk1p2 expected on node
> system?
>
> Ralf, can you share the output of:
> lsblk
> multipath -ll
> multipathd show paths format "%d %P"
>
> Nir
>
>
>
>
> On 05.01.2019 at 19:34, tehnic@take3.ro wrote:
>> Hello Greg,
>>
>> This is what I was looking for.
>>
>> After running "vdsm-tool config-lvm-filter" on all hosts (and
>> rebooting them), all PVs, VGs and LVs from the iSCSI domain were no
>> longer visible to local LVM on the oVirt hosts.
>>
>> Additionally I made the following tests:
>> - Cloning + running a VM on the iSCSI domain
>> - Detaching + (re-)attaching of the iSCSI domain
>> - Detaching, removing + (re-)importing of the iSCSI domain
>> - Creating a new iSCSI domain (well, I needed to use "force
>>   operation" because it was created on the same iSCSI target)
>>
>> All tests were successful.
>>
>> As you wished, I filed a bug:
>> <https://github.com/oVirt/ovirt-site/issues/1857>
>> Thank you.
>>
>> Best regards,
>> Robert
--
Ralf Schenk
phone +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail rs@databay.de

Databay AG
Jens-Otto-Krag-Straße 11
D-52146 Würselen
www.databay.de <http://www.databay.de>

Registered office/court of registration: Aachen • HRB 8437 • VAT ID: DE 210844202
Management board: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm. Philipp Hermanns
Chairman of the supervisory board: Wilhelm Dohmen