
Hi! I had already rebooted the first node, so I tried this on the second node. After cleaning up with dmsetup I ran the ansible script again. It claimed success, but multipathd was still checking for the paths. I meant to do 'systemctl reload multipathd' but did a restart instead (too quick fingers). Anyway, it worked. There was some kind of hiccup because of that, as the engine seemed to re-activate the node, but no hosts went down, so in the end it worked. Thanks for the help, Juhani

On Fri, Mar 5, 2021 at 1:46 PM Vojtech Juranek <vjuranek@redhat.com> wrote:
On Friday, 5 March 2021 12:16:56 CET Juhani Rautiainen wrote:
Hi!
The ansible script fails and the reason seems to be those stale DM links. We are currently still on 4.3.10, as I wanted to do this change before the upgrade to 4.4. We have SHE and it is currently on a 3PAR disk. When we upgrade we can do the SHE change at the same time, as I didn't want to do two SHE restores (first in 4.3 and then in 4.4). As there wasn't any hint on how to remove those stale DM links, the best solution is probably to put the nodes in maintenance and reboot them.
you can remove the stale links by
dmsetup remove -f /dev/mapper/<name>
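To find the stale maps first, a minimal sketch (the WWID below is just the one from your logs, used as an example):

# list device-mapper maps backed by the multipath target
dmsetup ls --target multipath
# maps whose paths are all gone show up as faulty/failed here
multipath -ll
# then force-remove the stale map by name
dmsetup remove -f /dev/mapper/360002ac00000000000000272000057b9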
BTW, I did unzone those disks after I had removed them from the oVirt UI. I mean, you can't remove them while they are still in use by oVirt, can you?
If the LUN is on a target where no other LUNs are used by oVirt, the multipath devices should be removed and vdsm should be logged out from the target (at least on recent oVirt releases). If you still use other LUNs from the storage server on the host (as part of other storage domains), there's no point in removing the LUN, as it will be discovered again when vdsm rescans the storage in various flows. So the flow has to be:
1. remove the storage domain using the LUN
2. unzone the LUN on the storage server
3. remove the corresponding multipath devices from the hosts
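For step 3, a minimal manual sketch (the WWID and sd* names are the example ones from your logs; the ansible playbook from BZ #1310330 automates this):

# flush the now-unzoned multipath map
multipath -f 360002ac00000000000000272000057b9
# delete the underlying SCSI paths so they are not rediscovered as failed
echo 1 > /sys/block/sdr/device/delete
echo 1 > /sys/block/sdh/device/delete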
There could be some improvements, as proposed by Nir under BZ #1310330, such as blacklisting removed devices, but this makes everything more complex. It may be considered and implemented in the future, but unfortunately it is not available right now.
Thanks, Juhani
On Fri, Mar 5, 2021 at 11:02 AM Vojtech Juranek <vjuranek@redhat.com> wrote:
On Friday, 5 March 2021 09:02:51 CET Juhani Rautiainen wrote:
Hi!
We are running oVirt 4.3.10 and we are migrating from 3PAR to Dell SC. I've managed to move the disks off one LUN and removed it (put it into maintenance, detached it and removed it). Now multipathd seems to spam the logs about the missing disks. How can I stop this?
You can remove the multipath devices either manually or with an ansible playbook; please try the one attached to
https://bugzilla.redhat.com/1310330
see
https://bugzilla.redhat.com/show_bug.cgi?id=1310330#c56
for how to use it. You also have to change `hosts` to the group of hosts you want to test it on and remove `connection: local`.
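A rough example of invoking it, assuming the playbook from that comment is saved as remove_mpath_device.yml and takes the LUN WWID as a variable (file, inventory and variable names here are placeholders, check the BZ comment for the actual ones):

# file name, inventory path and "lun" variable are placeholders
ansible-playbook -i /etc/ansible/hosts remove_mpath_device.yml -e "lun=360002ac00000000000000272000057b9"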
If it fails, please report back. It can fail when there are stale DM links, see
bugzilla.redhat.com/1928041
This was fixed recently and should be in the next oVirt release.
Alternatively, you can reboot the host.
Mar 5 09:58:58 ovirt01 multipathd: 360002ac00000000000000272000057b9: sdr - tur checker reports path is down
Mar 5 09:58:59 ovirt01 multipathd: 360002ac00000000000000272000057b9: sdh - tur checker reports path is down
And so on. Any idea how I can quiet this? And why didn't oVirt do this automatically?
To be able to remove the multipath devices, the LUN has to be unzoned first on the storage server, and this cannot be done by oVirt, as oVirt doesn't manage the storage server. It has to be done by the administrator of the storage server, who then has to remove the multipath devices from the hosts, e.g. by using the ansible script mentioned above.
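A quick way to verify the cleanup on each host afterwards (the WWID is again the example one from the logs):

# should return nothing once the device is gone
multipath -ll | grep 360002ac00000000000000272000057b9
# the removed WWID should no longer appear among the remaining maps
dmsetup ls --target multipath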
Thanks, Juhani