On Thu, Mar 31, 2022 at 3:13 PM Gianluca Cecchi
<gianluca.cecchi(a)gmail.com> wrote:
> On Thu, Mar 31, 2022 at 1:30 PM Nir Soffer <nsoffer(a)redhat.com> wrote:
> >
> > Removing a storage domain requires moving the storage domain to maintenance
> > and detaching it. In this state oVirt does not use the domain, so it is
> > safe to remove the lvs and vg on any host in the cluster.
> >
> > But if you remove the storage domain in engine with:
> >
> > [x] Format Domain, i.e. Storage Content will be lost!
> >
> > vdsm will remove all the lvs and the vg for you.
> >
> > If you forgot to format the domain when removing it, removing manually
> > is fine.
> >
> > Nir
> >
> Thanks for answering, Nir.
> In fact I think I didn't select to format the domain, so the LVM structure
> remained in place (I did it some time ago...)
> When you write "vdsm will remove all the lvs and the vg for you", how does
> vdsm act in this case, and how does it keep the nodes' view of the LVM
> structures consistent, with no cluster LVM in place?
oVirt has its own clustered lvm solution, using sanlock.
In oVirt only the SPM host creates, extends, deletes, or changes tags on
logical volumes. Other hosts only consume the logical volumes by activating
them for running vms or performing storage operations.
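
Under the hood vdsm runs plain lvm commands with an explicit --config
filter limited to the domain's luns. Roughly this shape (the wwid and
uuids are placeholders, not the exact vdsm command line):

    # activate one logical volume of a storage domain, seeing only its lun
    lvchange --config 'devices { filter = ["a|^/dev/mapper/<lun-wwid>$|", "r|.*|"] }' \
        --activate y <sd-uuid>/<lv-uuid>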
> I presume it is lvmlockd using sanlock as external lock manager,
lvmlockd is not involved. When oVirt was created, lvmlockd supported
only dlm, which does not scale for oVirt's use case, so oVirt uses sanlock
directly to manage cluster locks.
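
If you are curious what sanlock is doing on a host, you can list the
lockspaces (one per storage domain) and the resources currently held:

    # show the lockspaces and resources sanlock holds on this host
    sanlock client status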
> but how can I run LVM commands mimicking what vdsm probably does?
> Or is it automagic and I need only to run the LVM commands above without
> worrying about it?
There is no magic, but you don't need to mimic what vdsm is doing.
> When I manually remove LVs, VG and PV on the first node, what to do
> on other nodes? Simply a
> vgscan --config 'devices { filter = ["a|.*|" ] }'
Don't run this on ovirt hosts, the host should not scan all vgs without
a filter.
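
If you only want to inspect that lun, you can query it with an explicit
filter instead of scanning everything, for example (the wwid is a
placeholder):

    # look only at the removed domain's lun, without scanning other devices
    pvs --config 'devices { filter = ["a|^/dev/mapper/<lun-wwid>$|", "r|.*|"] }' \
        /dev/mapper/<lun-wwid>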
> or what?
When you remove a storage domain in engine, even without formatting it, no
host is using the logical volumes. Vdsm on all hosts can see the vg, but
never activates the logical volumes.
You can remove the vg on any host, since you are the only user of this vg.
Vdsm on other hosts can see the vg, but since it does not use the vg, it is
not affected.
The vg metadata is stored on one pv. When you remove a vg, lvm clears
the metadata on this pv. Other pvs cannot be affected by this change.
The only risk is trying to modify the same vg from multiple hosts at the
same time, which can corrupt the vg metadata.
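
As a rough sketch, not the exact vdsm flow, the cleanup on one host could
look like this, where <vg-name> and <lun-wwid> are placeholders you must
verify first with vgs/pvs:

    # example only - run on one host, verify the vg name and lun wwid first
    filter='devices { filter = ["a|^/dev/mapper/<lun-wwid>$|", "r|.*|"] }'
    vgchange --config "$filter" --activate n <vg-name>   # deactivate leftover lvs, if any
    lvremove --config "$filter" -y <vg-name>             # remove all lvs in the vg
    vgremove --config "$filter" <vg-name>
    pvremove --config "$filter" /dev/mapper/<lun-wwid>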
Regarding removing the vg on other nodes - you don't need to do anything.
On the host, the vg is hidden since you use an lvm filter. Vdsm can see the
vg since it uses its own lvm filter that includes all the luns on the system.
Vdsm will see the change the next time it runs pvs, vgs, or lvs.
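
By the way, if you want to check that the host lvm filter is configured as
recommended, vdsm ships a helper for that:

    # analyze the current lvm filter and suggest the recommended one
    vdsm-tool config-lvm-filter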
Nir