[ovirt-users] thin_check run on VM disk by host on startup ?!

Nir Soffer nsoffer at redhat.com
Wed Aug 31 09:47:06 UTC 2016


On Wed, Aug 31, 2016 at 10:43 AM, Rik Theys <Rik.Theys at esat.kuleuven.be> wrote:
> On 08/30/2016 04:47 PM, Nir Soffer wrote:
>> On Tue, Aug 30, 2016 at 3:51 PM, Rik Theys <Rik.Theys at esat.kuleuven.be> wrote:
>>> While rebooting one of the hosts in an oVirt cluster, I noticed that
>>> thin_check is run on the thin pool devices of one of the VMs whose
>>> disk is assigned to that host.
>>>
>>> That seems strange to me. I would expect the host to stay clear of any
>>> VM disks.
>>
>> We expect the same thing, but unfortunately systemd and lvm try to
>> auto-activate stuff. This may be a good idea for a desktop system,
>> but it is probably a bad idea for a server, and in particular for a
>> hypervisor.
>>
>> We don't have a solution yet, but you can try these:
>>
>> 1. disable lvmetad service
>>
>>     systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
>>     systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket
>>
>> Edit /etc/lvm/lvm.conf:
>>
>>     use_lvmetad = 0
>>
>> 2. disable lvm auto activation
>>
>> Edit /etc/lvm/lvm.conf:
>>
>>     auto_activation_volume_list = []
>>
>> 3. both 1 and 2
>>
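
For reference, here is how applying both changes and verifying the result
might look on a host. This is only a sketch, not a tested procedure, and
on older lvm2 releases "lvmconfig" is spelled "lvm dumpconfig":

    # stop lvmetad now and prevent socket activation from restarting it
    systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
    systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket

    # after editing /etc/lvm/lvm.conf, confirm the values lvm actually
    # sees for both settings
    lvmconfig global/use_lvmetad activation/auto_activation_volume_list

If both edits took effect, the last command should report something like
"use_lvmetad=0" and "auto_activation_volume_list=[]".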
>
> I've now applied both of the above, regenerated the initramfs, and
> rebooted, and the host no longer lists the LVs of the VM. Since I have
> rebooted the host before without seeing this issue, I'm not sure a
> single reboot is enough to conclude it has been fully fixed.
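
Regenerating the initramfs is indeed needed here, since dracut copies
lvm.conf into the initramfs, and early boot would otherwise keep using
the old settings. A minimal sketch, assuming an el7 host using dracut:

    # rebuild the initramfs for the running kernel so early boot also
    # sees the updated lvm.conf
    dracut -f /boot/initramfs-$(uname -r).img $(uname -r)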
>
> You mention that there's no solution yet. Does that mean the above
> settings are not 100% certain to avoid this behaviour?

Yes. These settings were suggested by the lvm developers, but we have not
tested them yet, and of course have not integrated them into vdsm
deployment. That will require modifying lvm.conf during configuration and
verifying that lvm.conf is configured correctly when vdsm starts.
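
To sketch the idea, such a startup check could look something like the
following. This is not vdsm code, just an illustration of the kind of
verification meant above; the exact check and error handling are
assumptions:

    # hypothetical startup check: refuse to start if lvm.conf does not
    # carry the expected setting ("lvmconfig" is "lvm dumpconfig" on
    # older lvm2 releases)
    if [ "$(lvmconfig global/use_lvmetad)" != "use_lvmetad=0" ]; then
        echo "lvm.conf is not configured as expected: use_lvmetad must be 0" >&2
        exit 1
    fi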


