By the way, why do you use multipath for local storage like an EVO NVMe?
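If it is only used locally (for example as a Gluster brick), one option is to blacklist it in /etc/multipath.conf so multipathd never builds a map on top of it. A minimal sketch, using the wwid that shows up in your multipathd log further down:

    blacklist {
        wwid "eui.0025385991b1e27a"
    }

Then flush the existing map (multipath -f eui.0025385991b1e27a, the map must be unused) or simply reboot. Keep in mind that, if I recall correctly, oVirt's VDSM rewrites /etc/multipath.conf unless it is marked private (a "# VDSM PRIVATE" line near the top), so check that before relying on local edits.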
Happy New Year !
Best Regards,
Strahil Nikolov

On Dec 31, 2019 21:51, Strahil <hunter86_bg(a)yahoo.com> wrote:
You can check
https://access.redhat.com/solutions/2437991 &
https://access.redhat.com/solutions/3014361
You have 2 options:
1. Set a udev rule like this one (replace NETAPP with your storage vendor; see below for how to reload it without a reboot):

    ACTION!="add|change", GOTO="rule_end"
    ENV{ID_VENDOR}=="NETAPP*", RUN+="/bin/sh -c 'echo 4096 > /sys%p/queue/max_sectors_kb'"
    LABEL="rule_end"
2. Set max_sectors_kb in the devices section of multipath.conf (see the sketch below).

You will need to stop LVM and then flush the device map for the new option to take effect (rebooting is faster).
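To make both options concrete, here are rough sketches (the file name, vendor/product patterns and names in angle brackets are placeholders, adjust them to your hardware):

For option 1, drop the rule into something like /etc/udev/rules.d/99-max-sectors.rules and reload it without rebooting:

    # re-read the rules and re-run them for all block devices
    udevadm control --reload-rules
    udevadm trigger --subsystem-match=block

For option 2, the devices section could look roughly like this:

    devices {
        device {
            vendor          "NETAPP"
            product         ".*"
            max_sectors_kb  4096
        }
    }

and to apply it without a reboot, deactivate whatever VG sits on the affected map, flush the map and restart multipathd, e.g.:

    vgchange -an <vg_on_that_map>    # placeholder VG name
    multipath -f <map_name>          # e.g. the eui.* map from the output below
    systemctl restart multipathd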
Good Luck & Happy New Year.
Best Regards,
Strahil Nikolov
On Dec 31, 2019 17:53, Stefan Wolf <shb256(a)gmail.com> wrote:
>
> Hi all,
>
> I have 4 nodes running with the current oVirt.
> I have a problem on only one host, even after a fresh installation.
> I installed the latest image.
> Then I added the node to the cluster.
> Everything was working fine.
> After this I configured the network.
> BUT, after a restart the host does not come up again.
> I get this error every 5 seconds: blk_cloned_rq_check_limits: over max size limit
>
> I can continue with Ctrl-D,
> or I can log in with the root password to fix the problem, but I don't know what the problem is or where it comes from.
>
> I have also changed the SAS disk to NVMe storage, but I changed this on every host, and the problem exists only on this one host.
>
> I found this: https://lists.centos.org/pipermail/centos/2017-December/167727.html
> The output is:
> [root@kvm380 ~]# ./test.sh
> Sys Block Node      : Device                                              max_sectors_kb  max_hw_sectors_kb
> /sys/block/dm-0     : onn_kvm380-pool00_tmeta                             256             4096
> /sys/block/dm-1     : onn_kvm380-pool00_tdata                             256             4096
> /sys/block/dm-10    : onn_kvm380-var                                      256             4096
> /sys/block/dm-11    : onn_kvm380-tmp                                      256             4096
> /sys/block/dm-12    : onn_kvm380-home                                     256             4096
> /sys/block/dm-13    : onn_kvm380-var_crash                                256             4096
> /sys/block/dm-2     : onn_kvm380-pool00-tpool                             256             4096
> /sys/block/dm-3     : onn_kvm380-ovirt--node--ng--4.3.7--0.20191121.0+1   256             4096
> /sys/block/dm-4     : onn_kvm380-swap                                     256             4096
> /sys/block/dm-5     : eui.0025385991b1e27a                                512             2048
> /sys/block/dm-6     : eui.0025385991b1e27a1                               512             2048
> /sys/block/dm-7     : onn_kvm380-pool00                                   256             4096
> /sys/block/dm-8     : onn_kvm380-var_log_audit                            256             4096
> /sys/block/dm-9     : onn_kvm380-var_log                                  256             4096
> cat: /sys/block/nvme0n1/device/vendor: No such file or directory
> /sys/block/nvme0n1  : Samsung SSD 970 EVO 1TB                             512             2048
> /sys/block/sda      : HP LOGICAL VOLUME                                   256             4096
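> (For reference, the linked test.sh essentially just walks /sys/block and prints max_sectors_kb next to max_hw_sectors_kb for each device; this is a rough equivalent, not the exact script:)
>
>     for d in /sys/block/dm-* /sys/block/sd* /sys/block/nvme*; do
>         [ -e "$d/queue/max_sectors_kb" ] || continue
>         # use the device-mapper name where available, otherwise the sysfs name
>         name=$(cat "$d/dm/name" 2>/dev/null || basename "$d")
>         printf '%-20s : %-50s %-6s %s\n' "$d" "$name" \
>             "$(cat "$d/queue/max_sectors_kb")" "$(cat "$d/queue/max_hw_sectors_kb")"
>     done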
>
> Is the NVMe not starting correctly?
> [root@kvm380 ~]# systemctl status multipathd
> ● multipathd.service - Device-Mapper Multipath Device Controller
> Loaded: loaded (/usr/lib/systemd/system/multipathd.service; enabled; vendor preset: enabled)
> Active: active (running) since Tue 2019-12-31 16:16:32 CET; 31min ago
> Process: 1919 ExecStart=/sbin/multipathd (code=exited, status=0/SUCCESS)
> Process: 1916 ExecStartPre=/sbin/multipath -A (code=exited, status=0/SUCCESS)
> Process: 1911 ExecStartPre=/sbin/modprobe dm-multipath (code=exited, status=0/SUCCESS)
> Main PID: 1921 (multipathd)
> Tasks: 7
> CGroup: /system.slice/multipathd.service
> └─1921 /sbin/multipathd
>
> Dec 31 16:47:58 kvm380.durchhalten.intern multipathd[1921]: nvme0n1: mark as failed
> Dec 31 16:47:58 kvm380.durchhalten.intern multipathd[1921]: eui.0025385991b1e27a: Entering recovery mode: max_retries=4
> Dec 31 16:47:58 kvm380.durchhalten.intern multipathd[1921]: eui.0025385991b1e27a: remaining active paths: 0
> Dec 31 16:48:02 kvm380.durchhalten.intern multipathd[1921]: 259:0: reinstated
> Dec 31 16:48:02 kvm380.durchhalten.intern multipathd[1921]: eui.0025385991b1e27a: queue_if_no_path enabled
> Dec 31 16:48:02 kvm380.durchhalten.intern multipathd[1921]: eui.0025385991b1e27a: Recovered to normal mode
> Dec 31 16:48:02 kvm380.durchhalten.intern multipathd[1921]: eui.0025385991b1e27a: remaining active paths: 1
> Dec 31 16:48:03 kvm380.durchhalten.intern multipathd[1921]: nvme0n1: mark as failed
> Dec 31 16:48:03 kvm380.durchhalten.intern multipathd[1921]: eui.0025385991b1e27a: Entering recovery mode: max_retries=4
> Dec 31 16:48:03 kvm380.durchhalten.intern multipathd[1921]: eui.0025385991b1e27a: remaining active paths: 0
>
> Why is it marked as failed?
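> (For completeness, the path state above can also be inspected with the standard multipath-tools commands, e.g.:)
>
>     multipath -ll
>     multipathd -k"show paths"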
>
> If I create a new volume with Cockpit and use it for Gluster bricks, everything is fine, until the next reboot.
>
>
> Maybe someone can point me in the right direction.
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/MHHFFWAY5T5...