On 08/30/2016 04:47 PM, Nir Soffer wrote:
On Tue, Aug 30, 2016 at 3:51 PM, Rik Theys
<Rik.Theys(a)esat.kuleuven.be> wrote:
> While rebooting one of the hosts in an oVirt cluster, I noticed that
> thin_check is run on the thin pool devices of one of the VMs whose
> disk is assigned to this host.
>
> That seems strange to me. I would expect the host to stay clear of any
> VM disks.
We expect the same thing, but unfortunately systemd and lvm try to
auto-activate stuff. This may be a good idea for a desktop system, but
it is probably a bad idea for a server, and in particular for a hypervisor.
We don't have a solution yet, but you can try these:
1. disable lvmetad service
systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket
Edit /etc/lvm/lvm.conf:
use_lvmetad = 0
2. disable lvm auto activation
Edit /etc/lvm/lvm.conf:
auto_activation_volume_list = []
3. both 1 and 2
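Putting the two workarounds together, the whole procedure might look like
this (a sketch for an EL7-style host; the use of dracut to rebuild the
initramfs and the exact lvm.conf default lines are assumptions):

```shell
# Stop lvmetad and mask it so it cannot be socket-activated again
systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket

# Turn off the lvmetad cache in lvm.conf (the setting line is
# uncommented by default, so a simple substitution works)
sed -i 's/^\([[:space:]]*\)use_lvmetad = 1/\1use_lvmetad = 0/' /etc/lvm/lvm.conf

# auto_activation_volume_list is commented out by default, so add it
# by hand in the activation section:
#     auto_activation_volume_list = []

# Rebuild the initramfs so early boot also sees the new settings,
# then reboot and check that no guest LVs show up
dracut -f
lvs -o vg_name,lv_name,lv_attr
```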
I've now applied both of the above, regenerated the initramfs, and
rebooted, and the host no longer lists the LVs of the VM. Since I have
rebooted the host before without hitting this issue, I'm not sure a
single reboot is enough to conclude it is fully fixed.
You mention that there's no solution yet. Does that mean the above
settings are not 100% certain to avoid this behaviour?
I was thinking of setting a global_filter in /etc/lvm/lvm.conf to only
include the PVs for the hypervisor disks (on which the OS is installed),
so that the system lvm commands only touch those. Since vdsm uses its
own lvm.conf, this should be OK for vdsm?
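For example, something like the following fragment (a sketch; the device
path is a placeholder and must be replaced with the actual PV(s) of the
OS disks on the host):

```
# /etc/lvm/lvm.conf, devices section
devices {
    # Accept only the PV holding the hypervisor OS, reject everything else
    global_filter = [ "a|^/dev/sda2$|", "r|.*|" ]
}
```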
Regards,
Rik
--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440 - B-3001 Leuven-Heverlee
+32(0)16/32.11.07
----------------------------------------------------------------
<<Any errors in spelling, tact or fact are transmission errors>>