On Fri, Apr 23, 2021 at 5:19 PM Ryan Chewning <ryan_chewning(a)trimble.com> wrote:
On Fri, Apr 23, 2021 at 6:25 AM Vojtech Juranek <vjuranek(a)redhat.com> wrote:
>
> On Friday, 23 April 2021 02:44:43 CEST Ryan Chewning wrote:
> > Hi List,
> >
> > We need to add and remove directly mapped LUNs to multiple VMs in our
> > Non-Production environment. The environment is backed by an iSCSI SAN. In
> > testing when removing a directly mapped LUN it doesn't remove the
> > underlying multipath and devices. Several questions.
> >
> > 1) Is this the expected behavior?
>
> yes, before removing multipath devices, you need to unzone the LUN on the storage
> server. As oVirt doesn't manage the storage server in the case of iSCSI, this has to be
> done by the storage server admin, so oVirt cannot manage the whole flow.
>
Thank you for the information. Perhaps you can expand, then, on how the volumes are
picked up once they are mapped from the storage system? Traditionally, when mapping storage
from an iSCSI or Fibre Channel array, we have to initiate a LIP or an iSCSI login. How is it
that oVirt doesn't need to do this?
> > 2) Are we supposed to go to each KVM host and manually remove the
> > underlying multipath devices?
>
> oVirt provides ansible script for it:
>
>
> https://github.com/oVirt/ovirt-ansible-collection/blob/master/examples/remove_mpath_device.yml
>
> Usage is as follows:
>
> ansible-playbook --extra-vars "lun=<LUN_ID>" remove_mpath_device.yml
We'll look into this. At least in our Non-Production environment, when we take down a
development environment or refresh the data, there are at least 14 volumes that have to be
removed and re-added.
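For a batch like that, a minimal sketch of driving the playbook in a loop, assuming remove_mpath_device.yml has been copied locally; the WWIDs below are made-up placeholders, not real LUN IDs:

```shell
#!/bin/sh
# Placeholder LUN WWIDs; replace with the real IDs of the volumes being retired.
LUNS="36001405aaaabbbb1 36001405aaaabbbb2"

for lun in $LUNS; do
    # Dry run: print each command; drop the leading "echo" to execute for real.
    echo ansible-playbook --extra-vars "lun=$lun" remove_mpath_device.yml
done
```

The dry-run echo makes it easy to review the generated commands before touching any host.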
>
>
> > 3) Is there a technical reason that oVirt doesn't do this as part of the
> > steps to removing the storage?
>
> as mentioned above, oVirt doesn't manage iSCSI server and cannot unzone LUN
> from the server. For managed storage oVirt does that.
I understand oVirt is not able to unzone the LUN, as that is managed on the storage
system. However, oVirt does create the multipath device and the underlying block devices.
Not really; this is a common misunderstanding about how oVirt manages
storage.
oVirt does not have the concept of adding or removing a LUN. It is no
coincidence that the oVirt UI does not have a LUNs tab: oVirt simply does
not manage LUNs.
oVirt logs in to the iSCSI target, and the result of this login is that
multipath devices are created for all LUNs exposed by the target. This is
done automatically by the system, mostly because oVirt configures multipath
to grab all SCSI (and similar) devices on the system.
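A sketch of the login flow oVirt drives under the hood; the portal address and target IQN here are made-up placeholders:

```shell
#!/bin/sh
# Placeholder portal and target; in oVirt these come from the storage domain
# or direct-LUN configuration entered in the UI.
PORTAL="192.168.1.50:3260"
TARGET="iqn.2021-04.com.example:nonprod"

# Discover targets behind the portal, then log in. After login the kernel
# creates one sd* node per LUN per path, and multipathd (configured by oVirt
# to grab all devices) assembles them into /dev/mapper/<wwid> maps.
discover="iscsiadm -m discovery -t sendtargets -p $PORTAL"
login="iscsiadm -m node -T $TARGET -p $PORTAL --login"
echo "$discover"
echo "$login"
```

This is why no manual LIP or login is needed per LUN: one session login surfaces every LUN the array has zoned to the host.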
At this point oVirt does not know which devices will be discovered. On
the host, vdsm reports the LUNs to the oVirt engine. The admin may add LUNs
to storage domains, or attach them to VMs (direct LUN). LUNs that are not
used by oVirt remain visible on the host and are not managed by oVirt.
Since vdsm does not know which LUNs are expected, it performs a SCSI rescan
in many flows to make sure all LUNs are visible on the host. For example,
after resizing a LUN on the server (not controlled by oVirt), the new
size may not be visible on the host until the next SCSI rescan.
Another example is a new LUN added on the server.
When a LUN is removed from a storage domain or from a VM, oVirt does
not remove it from the host. For example, you can remove a LUN from a
storage domain and then add it as a direct LUN to a VM, or the other way
around.
We expected that to be cleaned up when a LUN is deleted.
We don't have the concept of deleting a LUN in oVirt. This is done on
the server by the storage admin, outside of oVirt.
It would be nice if oVirt had a way to remove a specific LUN from the
system using the UI, but this feature was never implemented. What we
have now is the ansible script, which should make this easy enough.
Note that the ansible script is not a complete solution. If you remove
the LUN from the host before un-zoning it on the server side, the
automatic SCSI rescan in oVirt will discover the LUN and add it back right
after you removed it.
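So the safe ordering matters. A minimal sketch, assuming the storage admin has already un-zoned the LUN on the array; the WWID is a placeholder, and the commands are printed rather than executed:

```shell
#!/bin/sh
# Step 1 (outside oVirt): storage admin un-zones the LUN on the array.
# Step 2: only then remove the multipath map on each host via the playbook.
WWID="36001405aaaabbbb1"   # placeholder LUN WWID

echo ansible-playbook --extra-vars "lun=$WWID" remove_mpath_device.yml
# Roughly what the playbook does per host: flush the now-unreachable map.
echo "multipath -f $WWID"
```

Done in this order, the next automatic rescan finds nothing to re-add, because the array no longer exposes the LUN to the host.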
Nir