thin_check run on VM disk by host on startup ?!

Hi,

While rebooting one of the hosts in an oVirt cluster, I noticed that thin_check is run on the thin pool devices of one of the VMs whose disk is assigned to it. That seems strange to me. I would expect the host to stay clear of any VM disks.

When I look at the 'lvs' output on the host, it seems to have activated the VM's volume group that has the thin pool in it. It has also activated one other volume group (and its LVs) that is _not_ a thin pool. All other VM disks are shown as LVs with their UUID. See the output below.

Is this expected behaviour? I would hope/expect that the host never touches any VM disks. Am I supposed to configure an LVM filter myself to prevent this issue?

We had a thin pool break completely on a VM a while ago and I never determined the root cause (it was a test VM). If the host changed something on the disk while the VM was running on the other host, this might have been the root cause.

Regards,

Rik

--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440 - B-3001 Leuven-Heverlee
+32(0)16/32.11.07
----------------------------------------------------------------
<<Any errors in spelling, tact or fact are transmission errors>>

On 08/30/2016 02:51 PM, Rik Theys wrote:
While rebooting one of the hosts in an oVirt cluster, I noticed that thin_check is run on the thin pool devices of one of the VMs whose disk is assigned to it.
That seems strange to me. I would expect the host to stay clear of any VM disks.
We had a thin pool break completely on a VM a while ago and I never determined the root cause (it was a test VM). If the host changed something on the disk while the VM was running on the other host, this might have been the root cause.
I just rebooted the affected VM and indeed the system fails to activate the thin pool now :-(. When I try to activate it I get:

Check of pool maildata/pool0 failed: (status:1). Manual repair required!
0 logical volume(s) in volume group "maildata" now active

Mvg,
Rik
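(If anyone hits the same "Manual repair required" error: the usual recovery path, sketched here and not verified on the system from this thread, is an automatic thin pool metadata repair with the pool deactivated; maildata/pool0 is the pool named in the error above.)

    lvchange -an maildata/pool0        # make sure the pool and its thin volumes are inactive
    lvconvert --repair maildata/pool0  # rebuild the pool metadata onto a spare metadata LV
    lvchange -ay maildata/pool0        # then try activating the pool again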

On Tue, Aug 30, 2016 at 3:51 PM, Rik Theys <Rik.Theys@esat.kuleuven.be> wrote:
Hi,
While rebooting one of the hosts in an oVirt cluster, I noticed that thin_check is run on the thin pool devices of one of the VMs whose disk is assigned to it.
That seems strange to me. I would expect the host to stay clear of any VM disks.
We expect the same thing, but unfortunately systemd and lvm try to auto activate stuff. This may be a good idea for a desktop system, but it is probably a bad idea for a server, and in particular for a hypervisor.

We don't have a solution yet, but you can try these:

1. disable lvmetad service

systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket

Edit /etc/lvm/lvm.conf:

use_lvmetad = 0

2. disable lvm auto activation

Edit /etc/lvm/lvm.conf:

auto_activation_volume_list = []

3. both 1 and 2

Currently we don't touch lvm.conf, and instead override it using the --config option on the command line when running lvm commands from vdsm. But this does not help with the automatic activation at startup, or with lvm peeking into LVs owned by VMs. We are probably going to introduce the changes above in the future.
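(To illustrate the --config override mentioned above: a per-command filter override in the style vdsm uses looks roughly like the line below; the multipath device name is only a placeholder, not taken from this thread. Because such settings apply to a single invocation only, they cannot stop the automatic activation that happens at boot.)

    lvs --config 'devices { filter = [ "a|^/dev/mapper/36001405deadbeef.*|", "r|.*|" ] }'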
When I look at the 'lvs' output on the host, it seems to have activated the VM's volume group that has the thin pool in it. It has also activated one other volume group (and its LVs) that is _not_ a thin pool. All other VM disks are shown as LVs with their UUID. See the output below.
Is this expected behaviour? I would hope/expect that the host will never touch any VM disks. Am I supposed to configure an LVM filter myself to prevent this issue?
We had a thin pool break completely on a VM a while ago and I never determined the root cause (it was a test VM). If the host changed something on the disk while the VM was running on the other host, this might have been the root cause.
Regards,
Rik
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
05a0fc6e-b43a-47b4-8979-92458dc1c76b a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 10.00g
08d25785-5b52-4795-863c-222b3416ed8d a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 3.00g
0a9d0272-afc5-4d2e-87e4-ce32312352ee a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 40.00g
0bc1cdc9-edfc-4192-9017-b6d55c01195d a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 17.00g
181e6a61-0d4b-41c9-97a0-ef6b6d2e85e4 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 400.00g
1913b207-d229-4874-9af0-c7e019aea51d a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 128.00m
1adf2b74-60c2-4c16-8c92-e7da81642bc6 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 5.00g
1e64f92f-d266-4746-ada6-0f7f20b158a6 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 65.00g
22a0bc66-7c74-486a-9be3-bb8112c6fc9e a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 9.00g
2563b835-58be-4a4e-ac02-96556c5e8c1c a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 9.00g
27edac09-6438-4be4-b930-8834e7eecad5 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 5.00g
2b57d8e1-d304-47a3-aa2f-7f44804f5c66 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 15.00g
2f073f9b-4e46-4d7b-9012-05a26fede87d a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 36.00g
310ef6e4-0759-4f1a-916f-6c757e15feb5 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 5.00g
3125ac02-64cb-433b-a1c1-7105a11a1d1c a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 36.00g
315e9ad9-7787-40b6-8b43-d766b08708e2 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 64.00g
32e4e88d-5482-40f6-a906-ad8907d773ce a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 10.00g
32f0948b-324f-4e73-937a-fef25abd9bdc a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 32.00g
334e0961-2dad-4d58-86d9-bfec3a4e83e4 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 11.00g
36c4a274-cbbe-4a03-95e5-8b98ffde0b1b a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 1.17t
36d95661-308f-42e6-a887-ddd4d4ed04e9 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 250.00g
39d88a73-4e59-44a7-b773-5bc446817415 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 4.00g
3aeb1c19-7113-4a4b-9e27-849af904ed41 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 32.00g
43ac529e-4295-4e77-ae06-963104950921 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 33.00g
462f1cc4-e3a7-4fcd-b46c-7f356bfc5589 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 10.00g
479fc9a4-592c-4287-b8fe-db7600360676 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 1.00t
4d05ce38-2d33-4f01-ae33-973ff8eeb897 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 4.00g
4d307170-7d45-4bb9-9885-68aeacee5f33 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 24.00g
4dca97ab-25de-4b69-9094-bf02698abf0e a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 8.00g
54013d0f-fd47-4c91-aa31-f6dd2d4eeaad a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 64.00g
55a2a2aa-88f2-4aca-bbfd-052f6be3079d a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 4.00g
57131782-4eda-4fbd-aeaa-af49d12a79e9 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 14.00g
5884b686-6a2e-4166-9727-303097a349fe a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 32.00g
6164a2b2-e84b-4bf9-abab-f7bc19e1099d a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 10.00g
62c8a0be-4d7f-4297-af1c-5c9d794f4ba0 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 500.00g
67499042-4c19-425c-9106-b3c382edbec1 a797e417-adeb-4611-b4cf-10844132eef4 -wi-a----- 1.00t
6ba95bb2-04ed-4512-95f4-34e38168f9e9 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 22.00g
6ebf91c2-2051-4e2b-b1c3-57350834f236 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 100.00g
70c834a0-05f4-469f-877c-24101c7abf25 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 6.00g
71401df8-19dd-45d0-be33-f07937fe042d a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 6.00g
73d5eb12-88e3-44a2-a791-9b7ad5b77953 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 10.00g
74e0a4ea-d873-4381-85a8-122cd4d2e961 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 260.00g
776901e3-bf78-4cab-aac6-b02588a7b1e1 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 48.00g
86d608ac-1a24-451e-9a29-1b7cbc13c654 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 4.00g
88fbb53f-5019-4cc5-aca6-686731c43f67 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 14.00g
8dc43564-9a9d-4600-a715-0b62c4d0308f a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 5.00g
90167fb2-0941-4cf3-ae56-7cefca7909dd a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 16.00g
99a367c1-ce88-45c7-ac21-aa82c7e9b86e a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 4.00g
9d0fa9df-4b14-46cc-91a4-e66155230b9e a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 10.00g
9d31be10-59c7-4994-9c72-c57608c1b1f0 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 250.00g
9f6dbd0b-78ef-4b84-8467-04e5bd05abfd a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 7.00g
9fb1bdee-0e84-46df-89e2-f05afccd3867 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 70.00g
a0db7a2a-4923-4873-81a0-2f6b7037e643 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 31.00g
a270bad3-cabb-4c14-bf71-347ee148e343 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 50.00g
a9bfdea5-ce26-42d2-af9c-2a6f02df0d2d a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 16.00g
b5efe389-6535-412a-a4ed-f3f6147a6dae a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 10.00g
b6c6e718-5c16-4918-b47b-a2c936228a08 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 15.00g
baca4655-c58a-413f-a6c7-b39add9eecb8 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 16.00g
bf378ca0-373a-4a9c-85cd-9983a3004ab7 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 100.00g
c360aaaf-68a4-450d-acac-ff474a579f6c a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 100.00g
c77e9e8c-40a2-4db0-822e-7a089cc77b8a a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 128.00m
cb634231-8bb3-4725-89b7-90c33046bc7c a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 35.25g
cfca3f6e-5c46-4e65-96a5-37cede02a3b5 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 256.00g
d22ff40f-4ce2-4f3a-8c35-07ca72ceb91d a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 16.00g
d6ed59ff-6f43-4ebf-b83d-683acd9013c9 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 16.00g
d7357da8-1899-462f-b4a9-d7cf9d6f77c2 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 5.00g
d9dbfb34-0889-42d8-9c31-bad4948f9ca5 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 33.00g
dc057b04-446b-48d4-b559-bb85f38139ca a797e417-adeb-4611-b4cf-10844132eef4 -wi-ao---- 24.00g
e5d4a078-cd51-4a5d-9f58-29ef253821f9 a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 12.00g
eca39fab-60a8-4384-8196-32b55f24107b a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 64.00g
ed23275c-d5e8-4f7c-a795-52bf59c93d4a a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 4.00g
ee5eab01-88cb-4a4b-bcd3-46e26ae47feb a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 64.00g
f0676f9b-c942-4ebc-8614-d79b437d667a a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 18.00g
f778d555-cf2c-4f47-83fe-507ab4b0e96d a797e417-adeb-4611-b4cf-10844132eef4 -wi------- 15.00g
ids a797e417-adeb-4611-b4cf-10844132eef4 -wi-a----- 128.00m
inbox a797e417-adeb-4611-b4cf-10844132eef4 -wi-a----- 128.00m
leases a797e417-adeb-4611-b4cf-10844132eef4 -wi-a----- 2.00g
master a797e417-adeb-4611-b4cf-10844132eef4 -wi-a----- 1.00g
metadata a797e417-adeb-4611-b4cf-10844132eef4 -wi-a----- 512.00m
outbox a797e417-adeb-4611-b4cf-10844132eef4 -wi-a----- 128.00m
24d78600-22f4-44f7-987b-fbd866736249 a7ba2db3-517c-408a-8b27-ea45989d6416 -wi------- 37.50g
27c6e31e-dad2-4df1-a76d-ee337f1766b1 a7ba2db3-517c-408a-8b27-ea45989d6416 -wi------- 8.00g
28506d83-8d12-43ea-aa71-7f4a93486aa9 a7ba2db3-517c-408a-8b27-ea45989d6416 -wi------- 1.17t
292b02b2-dd4d-43b0-9799-f8f663cff6ef a7ba2db3-517c-408a-8b27-ea45989d6416 -wi------- 75.00g
3d39299a-1912-4a91-9c4a-dcaaa056627a a7ba2db3-517c-408a-8b27-ea45989d6416 -wi------- 32.00g
64daff8f-e92a-4823-879f-78bc8314d2e6 a7ba2db3-517c-408a-8b27-ea45989d6416 -wi------- 128.00m
6a3eaf14-21f8-4517-9eef-be274ce07857 a7ba2db3-517c-408a-8b27-ea45989d6416 -wi------- 7.00g
6c1136cb-4525-4cb5-b705-f81265878355 a7ba2db3-517c-408a-8b27-ea45989d6416 -wi------- 19.00g
ab9dfe0d-72a6-4d94-9206-27198f8dc579 a7ba2db3-517c-408a-8b27-ea45989d6416 -wi------- 128.00m
cb59ea43-4418-4da3-86e1-77feaf2637b1 a7ba2db3-517c-408a-8b27-ea45989d6416 -wi------- 3.00g
cc3dca9d-b1a7-4b05-8f53-7d9da0655be5 a7ba2db3-517c-408a-8b27-ea45989d6416 -wi------- 431.00g
d2274e23-dc3f-49e3-b5f4-2b3eada29937 a7ba2db3-517c-408a-8b27-ea45989d6416 -wi------- 4.00g
d57e7183-0f57-42bb-bb14-dd7bf7e320cb a7ba2db3-517c-408a-8b27-ea45989d6416 -wi------- 6.00g
f7545aa2-e953-4f6c-bd62-1b1fe33bc66e a7ba2db3-517c-408a-8b27-ea45989d6416 -wi------- 7.00g
fa1d8c9c-9874-4fad-b059-a0c60053dcfb a7ba2db3-517c-408a-8b27-ea45989d6416 -wi------- 16.00g
ids a7ba2db3-517c-408a-8b27-ea45989d6416 -wi-a----- 128.00m
inbox a7ba2db3-517c-408a-8b27-ea45989d6416 -wi-a----- 128.00m
leases a7ba2db3-517c-408a-8b27-ea45989d6416 -wi-a----- 2.00g
master a7ba2db3-517c-408a-8b27-ea45989d6416 -wi-a----- 1.00g
metadata a7ba2db3-517c-408a-8b27-ea45989d6416 -wi-a----- 512.00m
outbox a7ba2db3-517c-408a-8b27-ea45989d6416 -wi-a----- 128.00m
imap maildata Vwi---tz-- 750.00g pool0
ldap maildata Vwi---tz-- 3.00g pool0
log maildata Vwi---tz-- 10.00g pool0
mailconfig maildata Vwi---tz-- 2.00g pool0
pool0 maildata twi---tz-- 1020.00g
postfix maildata Vwi---tz-- 10.00g pool0
root vg_amazone -wi-ao---- 32.00g
swap vg_amazone -wi-ao---- 64.00g
phplogs vg_logs -wi-a----- 12.00g
wwwlogs vg_logs -wi-a----- 12.00g

On 08/30/2016 04:47 PM, Nir Soffer wrote:
On Tue, Aug 30, 2016 at 3:51 PM, Rik Theys <Rik.Theys@esat.kuleuven.be> wrote:
While rebooting one of the hosts in an oVirt cluster, I noticed that thin_check is run on the thin pool devices of one of the VMs whose disk is assigned to it.
That seems strange to me. I would expect the host to stay clear of any VM disks.
We expect the same thing, but unfortunately systemd and lvm try to auto activate stuff. This may be a good idea for a desktop system, but it is probably a bad idea for a server, and in particular for a hypervisor.
We don't have a solution yet, but you can try these:
1. disable lvmetad service
systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket
Edit /etc/lvm/lvm.conf:
use_lvmetad = 0
2. disable lvm auto activation
Edit /etc/lvm/lvm.conf:
auto_activation_volume_list = []
3. both 1 and 2
I've now applied both of the above, regenerated the initramfs, and rebooted, and the host no longer lists the LVs of the VM. Since I have rebooted the host before without hitting this issue, I'm not sure a single reboot is enough to conclude it has fully fixed the problem.

You mention that there's no solution yet. Does that mean the above settings are not 100% certain to avoid this behaviour?

I was thinking of setting a global_filter in /etc/lvm/lvm.conf to only include the PVs for the hypervisor disks (on which the OS is installed), so that the system lvm commands only touch those. Since vdsm is using its own lvm.conf, this should be OK for vdsm?

Regards,

Rik
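(For reference, the apply-and-reboot sequence on an EL7-based host would look roughly like this; dracut as the initramfs tool is an assumption, and the lvm.conf values are the ones listed above.)

    # after setting use_lvmetad = 0 and auto_activation_volume_list = [] in /etc/lvm/lvm.conf
    systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
    systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket
    dracut -f      # rebuild the initramfs so the early-boot copy of lvm.conf matches
    reboot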

On 08/31/2016 09:43 AM, Rik Theys wrote:
On 08/30/2016 04:47 PM, Nir Soffer wrote:
On Tue, Aug 30, 2016 at 3:51 PM, Rik Theys <Rik.Theys@esat.kuleuven.be> wrote:
While rebooting one of the hosts in an oVirt cluster, I noticed that thin_check is run on the thin pool devices of one of the VMs whose disk is assigned to it.
That seems strange to me. I would expect the host to stay clear of any VM disks.
We expect the same thing, but unfortunately systemd and lvm try to auto activate stuff. This may be a good idea for a desktop system, but it is probably a bad idea for a server, and in particular for a hypervisor.
We don't have a solution yet, but you can try these:
1. disable lvmetad service
systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket
Edit /etc/lvm/lvm.conf:
use_lvmetad = 0
2. disable lvm auto activation
Edit /etc/lvm/lvm.conf:
auto_activation_volume_list = []
3. both 1 and 2
I've now applied both of the above and regenerated the initramfs and rebooted and the host no longer lists the LV's of the VM. Since I rebooted the host before without this issue, I'm not sure a single reboot is enough to conclude it has fully fixed the issue.
You mention that there's no solution yet. Does that mean the above settings are not 100% certain to avoid this behaviour?
I was thinking of setting a global_filter in /etc/lvm/lvm.conf to only include the PV's for the hypervisor disks (on which the OS is installed) so the system lvm commands only touches those. Since vdsm is using its own lvm.conf this should be OK for vdsm?
This does not seem to work. The host cannot be activated, as it can't find its volume group(s). To be able to use global_filter in /etc/lvm/lvm.conf, I believe it should be overridden in vdsm's lvm.conf to revert back to the default.

I've moved my filter from global_filter to filter and that seems to work. When lvmetad is disabled, I believe this should have the same effect as global_filter? The comments in /etc/lvm/lvm.conf indicate that udev might also ignore the filter setting?

Rik

On Wed, Aug 31, 2016 at 11:07 AM, Rik Theys <Rik.Theys@esat.kuleuven.be> wrote:
On 08/31/2016 09:43 AM, Rik Theys wrote:
On 08/30/2016 04:47 PM, Nir Soffer wrote:
On Tue, Aug 30, 2016 at 3:51 PM, Rik Theys <Rik.Theys@esat.kuleuven.be> wrote:
While rebooting one of the hosts in an oVirt cluster, I noticed that thin_check is run on the thin pool devices of one of the VMs whose disk is assigned to it.
That seems strange to me. I would expect the host to stay clear of any VM disks.
We expect the same thing, but unfortunately systemd and lvm try to auto activate stuff. This may be a good idea for a desktop system, but it is probably a bad idea for a server, and in particular for a hypervisor.
We don't have a solution yet, but you can try these:
1. disable lvmetad service
systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket
Edit /etc/lvm/lvm.conf:
use_lvmetad = 0
2. disable lvm auto activation
Edit /etc/lvm/lvm.conf:
auto_activation_volume_list = []
3. both 1 and 2
I've now applied both of the above and regenerated the initramfs and rebooted and the host no longer lists the LV's of the VM. Since I rebooted the host before without this issue, I'm not sure a single reboot is enough to conclude it has fully fixed the issue.
You mention that there's no solution yet. Does that mean the above settings are not 100% certain to avoid this behaviour?
I was thinking of setting a global_filter in /etc/lvm/lvm.conf to only include the PV's for the hypervisor disks (on which the OS is installed) so the system lvm commands only touches those. Since vdsm is using its own lvm.conf this should be OK for vdsm?
This does not seem to work. The host can not be activated as it can't find his volume group(s). To be able to use the global_filter in /etc/lvm/lvm.conf, I believe it should be overridden in vdsm's lvm.conf to revert back to the default.
I've moved my filter from global_filter to filter and that seems to work. When lvmetad is disabled I believe this should have the same effect as global_filter? The comments in /etc/lvm/lvm.conf indicate also udev might ignore the filter setting?
Right, global_filter exists so you can override the filter used from the command line, for example to hide certain devices from vdsm. This is why we are using filter in vdsm, leaving global_filter for the administrator.

Can you explain why you need global_filter or filter for the hypervisor disks?

Do you have any issue with the current settings, disabling auto activation and lvmetad?

Nir
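(A sketch of the split described here, with a made-up device path: global_filter in /etc/lvm/lvm.conf is the administrator-owned knob and hides a device from every LVM user on the host, vdsm included, while the regular filter stays available for vdsm to override per command.)

    # /etc/lvm/lvm.conf on the host
    devices {
        # hide this device from all LVM commands on the host, including vdsm
        global_filter = [ "r|^/dev/mapper/do-not-touch-me$|" ]
    }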

Hi,

On 08/31/2016 11:51 AM, Nir Soffer wrote:
On Wed, Aug 31, 2016 at 11:07 AM, Rik Theys <Rik.Theys@esat.kuleuven.be> wrote:
On 08/31/2016 09:43 AM, Rik Theys wrote:
On 08/30/2016 04:47 PM, Nir Soffer wrote:
On Tue, Aug 30, 2016 at 3:51 PM, Rik Theys <Rik.Theys@esat.kuleuven.be> wrote:
While rebooting one of the hosts in an oVirt cluster, I noticed that thin_check is run on the thin pool devices of one of the VMs whose disk is assigned to it.
That seems strange to me. I would expect the host to stay clear of any VM disks.
We expect the same thing, but unfortunately systemd and lvm try to auto activate stuff. This may be a good idea for a desktop system, but it is probably a bad idea for a server, and in particular for a hypervisor.
We don't have a solution yet, but you can try these:
1. disable lvmetad service
systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket
Edit /etc/lvm/lvm.conf:
use_lvmetad = 0
2. disable lvm auto activation
Edit /etc/lvm/lvm.conf:
auto_activation_volume_list = []
3. both 1 and 2
I've now applied both of the above and regenerated the initramfs and rebooted and the host no longer lists the LV's of the VM. Since I rebooted the host before without this issue, I'm not sure a single reboot is enough to conclude it has fully fixed the issue.
You mention that there's no solution yet. Does that mean the above settings are not 100% certain to avoid this behaviour?
I was thinking of setting a global_filter in /etc/lvm/lvm.conf to only include the PV's for the hypervisor disks (on which the OS is installed) so the system lvm commands only touches those. Since vdsm is using its own lvm.conf this should be OK for vdsm?
This does not seem to work. The host can not be activated as it can't find his volume group(s). To be able to use the global_filter in /etc/lvm/lvm.conf, I believe it should be overridden in vdsm's lvm.conf to revert back to the default.
I've moved my filter from global_filter to filter and that seems to work. When lvmetad is disabled I believe this should have the same effect as global_filter? The comments in /etc/lvm/lvm.conf indicate also udev might ignore the filter setting?
Right, global_filter exist so you can override filter used from the command line.
For example, hiding certain devices from vdsm. This is why we are using filter in vdsm, leaving global_filter for the administrator.
Can you explain why do you need global_filter or filter for the hypervisor disks?
Based on the comment in /etc/lvm/lvm.conf regarding global_filter, I concluded that not only lvmetad but also udev might perform actions on the devices, and I wanted to prevent that.

I've now set the following in /etc/lvm/lvm.conf:

use_lvmetad = 0
auto_activation_volume_list = []
filter = [ "a|/dev/sda5|", "r|.*|" ]

On other systems I have kept the default filter.
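(A quick sanity check for such a filter, assuming the host OS really does live only on /dev/sda5: plain LVM commands on the host should now see nothing but the hypervisor's own volume group.)

    pvs    # should list only /dev/sda5
    vgs    # should list only the hypervisor's own volume group(s)
    lvs    # no VM or storage-domain LVs should appear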
Do you have any issue with the current settings, disabling auto activation and lvmetad?
Keeping those two disabled also seems to work. The oVirt LVs do show up in 'lvs' output but are not activated. Because I wanted to be absolutely sure the VM LVs were not touched, I added the filter on some of our hosts.

Regards,

Rik

On Wed, Aug 31, 2016 at 2:30 PM, Rik Theys <Rik.Theys@esat.kuleuven.be> wrote:
Hi,
On 08/31/2016 11:51 AM, Nir Soffer wrote:
On Wed, Aug 31, 2016 at 11:07 AM, Rik Theys <Rik.Theys@esat.kuleuven.be> wrote:
On 08/31/2016 09:43 AM, Rik Theys wrote:
On 08/30/2016 04:47 PM, Nir Soffer wrote:
On Tue, Aug 30, 2016 at 3:51 PM, Rik Theys <Rik.Theys@esat.kuleuven.be> wrote:
While rebooting one of the hosts in an oVirt cluster, I noticed that thin_check is run on the thin pool devices of one of the VMs whose disk is assigned to it.
That seems strange to me. I would expect the host to stay clear of any VM disks.
We expect the same thing, but unfortunately systemd and lvm try to auto activate stuff. This may be a good idea for a desktop system, but it is probably a bad idea for a server, and in particular for a hypervisor.
We don't have a solution yet, but you can try these:
1. disable lvmetad service
systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket
Edit /etc/lvm/lvm.conf:
use_lvmetad = 0
2. disable lvm auto activation
Edit /etc/lvm/lvm.conf:
auto_activation_volume_list = []
3. both 1 and 2
I've now applied both of the above and regenerated the initramfs and rebooted and the host no longer lists the LV's of the VM. Since I rebooted the host before without this issue, I'm not sure a single reboot is enough to conclude it has fully fixed the issue.
You mention that there's no solution yet. Does that mean the above settings are not 100% certain to avoid this behaviour?
I was thinking of setting a global_filter in /etc/lvm/lvm.conf to only include the PV's for the hypervisor disks (on which the OS is installed) so the system lvm commands only touches those. Since vdsm is using its own lvm.conf this should be OK for vdsm?
This does not seem to work. The host can not be activated as it can't find his volume group(s). To be able to use the global_filter in /etc/lvm/lvm.conf, I believe it should be overridden in vdsm's lvm.conf to revert back to the default.
I've moved my filter from global_filter to filter and that seems to work. When lvmetad is disabled I believe this should have the same effect as global_filter? The comments in /etc/lvm/lvm.conf indicate also udev might ignore the filter setting?
Right, global_filter exist so you can override filter used from the command line.
For example, hiding certain devices from vdsm. This is why we are using filter in vdsm, leaving global_filter for the administrator.
Can you explain why do you need global_filter or filter for the hypervisor disks?
Based on the comment in /etc/lvm/lvm.conf regarding global_filter I concluded that not only lvmetad but also udev might perform action on the devices and I wanted to prevent that.
I've now set the following settings in /etc/lvm/lvm.conf:
use_lvmetad = 0
auto_activation_volume_list = []
filter = [ "a|/dev/sda5|", "r|.*|" ]
Better use /dev/disk/by-uuid/ to select the specific device, without depending on device order.
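(For example, a filter anchored to a stable symlink instead of the sdX ordering; the UUID below is a placeholder, and whether /dev/disk/by-uuid, by-id or by-path is the right directory depends on how the PV shows up on that host.)

    filter = [ "a|^/dev/disk/by-uuid/c0ffee00-0000-4000-8000-000000000000$|", "r|.*|" ]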
On other systems I have kept the default filter.
Do you have any issue with the current settings, disabling auto activation and lvmetad?
Keeping those two disabled also seems to work. The ovirt LV's do show up in 'lvs' output but are not activated.
Good
I wanted to be absolutely sure the VM LV's were not touched, I added the filter on some of our hosts.
The only problem with this filter is that it may break if you change the host in some way, like booting from another disk.

It would be nice if you could file a bug for this and mention the configuration that fixes the issue; we certainly need to improve the way we configure lvm.

Nir

Hi,

On 08/31/2016 02:04 PM, Nir Soffer wrote:
On Wed, Aug 31, 2016 at 2:30 PM, Rik Theys <Rik.Theys@esat.kuleuven.be> wrote:
Hi,
On 08/31/2016 11:51 AM, Nir Soffer wrote:
On Wed, Aug 31, 2016 at 11:07 AM, Rik Theys <Rik.Theys@esat.kuleuven.be> wrote:
On 08/31/2016 09:43 AM, Rik Theys wrote:
On 08/30/2016 04:47 PM, Nir Soffer wrote:
On Tue, Aug 30, 2016 at 3:51 PM, Rik Theys <Rik.Theys@esat.kuleuven.be> wrote:
While rebooting one of the hosts in an oVirt cluster, I noticed that thin_check is run on the thin pool devices of one of the VMs whose disk is assigned to it.
That seems strange to me. I would expect the host to stay clear of any VM disks.
We expect the same thing, but unfortunately systemd and lvm try to auto activate stuff. This may be a good idea for a desktop system, but it is probably a bad idea for a server, and in particular for a hypervisor.
We don't have a solution yet, but you can try these:
1. disable lvmetad service
systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket
Edit /etc/lvm/lvm.conf:
use_lvmetad = 0
2. disable lvm auto activation
Edit /etc/lvm/lvm.conf:
auto_activation_volume_list = []
3. both 1 and 2
I've now applied both of the above and regenerated the initramfs and rebooted and the host no longer lists the LV's of the VM. Since I rebooted the host before without this issue, I'm not sure a single reboot is enough to conclude it has fully fixed the issue.
You mention that there's no solution yet. Does that mean the above settings are not 100% certain to avoid this behaviour?
I was thinking of setting a global_filter in /etc/lvm/lvm.conf to only include the PV's for the hypervisor disks (on which the OS is installed) so the system lvm commands only touches those. Since vdsm is using its own lvm.conf this should be OK for vdsm?
This does not seem to work. The host can not be activated as it can't find his volume group(s). To be able to use the global_filter in /etc/lvm/lvm.conf, I believe it should be overridden in vdsm's lvm.conf to revert back to the default.
I've moved my filter from global_filter to filter and that seems to work. When lvmetad is disabled I believe this should have the same effect as global_filter? The comments in /etc/lvm/lvm.conf indicate also udev might ignore the filter setting?
Right, global_filter exist so you can override filter used from the command line.
For example, hiding certain devices from vdsm. This is why we are using filter in vdsm, leaving global_filter for the administrator.
Can you explain why do you need global_filter or filter for the hypervisor disks?
Based on the comment in /etc/lvm/lvm.conf regarding global_filter I concluded that not only lvmetad but also udev might perform action on the devices and I wanted to prevent that.
I've now set the following settings in /etc/lvm/lvm.conf:
use_lvmetad = 0
auto_activation_volume_list = []
filter = [ "a|/dev/sda5|", "r|.*|" ]
Better use /dev/disk/by-uuid/ to select the specific device, without depending on device order.
On other systems I have kept the default filter.
Do you have any issue with the current settings, disabling auto activation and lvmetad?
Keeping those two disabled also seems to work. The ovirt LV's do show up in 'lvs' output but are not activated.
Good
When vdsm runs the lvchange command to activate the LV of a VM (so it can boot it), will LVM still try to scan the new LV for PVs (and thin pools, etc.)? Is this also prevented by the auto_activation_volume_list parameter in this case?
I wanted to be absolutely sure the VM LV's were not touched, I added the filter on some of our hosts.
The only problem with this filter is it may break if you change the host in some way, like boot from another disk.
I am aware of this issue, but in this case I would rather mess with a hypervisor that no longer boots because it needs an updated lvm.conf than try to fix corrupted VMs because the host was accessing the disks of the VM while it was running on another node.
It would be nice if you file a bug for this, and mention the configuration that fixes this issue, we certainly need to improve the way we configure lvm.
Against which component of oVirt should I file this bug?

Regards,

Rik

On Wed, Aug 31, 2016 at 3:15 PM, Rik Theys <Rik.Theys@esat.kuleuven.be> wrote:
Hi,
On 08/31/2016 02:04 PM, Nir Soffer wrote:
On Wed, Aug 31, 2016 at 2:30 PM, Rik Theys <Rik.Theys@esat.kuleuven.be> wrote:
Hi,
On 08/31/2016 11:51 AM, Nir Soffer wrote:
On Wed, Aug 31, 2016 at 11:07 AM, Rik Theys <Rik.Theys@esat.kuleuven.be> wrote:
On 08/31/2016 09:43 AM, Rik Theys wrote:
On 08/30/2016 04:47 PM, Nir Soffer wrote:
On Tue, Aug 30, 2016 at 3:51 PM, Rik Theys <Rik.Theys@esat.kuleuven.be> wrote:
While rebooting one of the hosts in an oVirt cluster, I noticed that thin_check is run on the thin pool devices of one of the VMs whose disk is assigned to it.
That seems strange to me. I would expect the host to stay clear of any VM disks.
We expect the same thing, but unfortunately systemd and lvm try to auto activate stuff. This may be a good idea for a desktop system, but it is probably a bad idea for a server, and in particular for a hypervisor.
We don't have a solution yet, but you can try these:
1. disable lvmetad service
systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket
Edit /etc/lvm/lvm.conf:
use_lvmetad = 0
2. disable lvm auto activation
Edit /etc/lvm/lvm.conf:
auto_activation_volume_list = []
3. both 1 and 2
I've now applied both of the above and regenerated the initramfs and rebooted and the host no longer lists the LV's of the VM. Since I rebooted the host before without this issue, I'm not sure a single reboot is enough to conclude it has fully fixed the issue.
You mention that there's no solution yet. Does that mean the above settings are not 100% certain to avoid this behaviour?
I was thinking of setting a global_filter in /etc/lvm/lvm.conf to only include the PV's for the hypervisor disks (on which the OS is installed) so the system lvm commands only touches those. Since vdsm is using its own lvm.conf this should be OK for vdsm?
This does not seem to work. The host can not be activated as it can't find his volume group(s). To be able to use the global_filter in /etc/lvm/lvm.conf, I believe it should be overridden in vdsm's lvm.conf to revert back to the default.
I've moved my filter from global_filter to filter and that seems to work. When lvmetad is disabled I believe this should have the same effect as global_filter? The comments in /etc/lvm/lvm.conf indicate also udev might ignore the filter setting?
Right, global_filter exist so you can override filter used from the command line.
For example, hiding certain devices from vdsm. This is why we are using filter in vdsm, leaving global_filter for the administrator.
Can you explain why do you need global_filter or filter for the hypervisor disks?
Based on the comment in /etc/lvm/lvm.conf regarding global_filter I concluded that not only lvmetad but also udev might perform action on the devices and I wanted to prevent that.
I've now set the following settings in /etc/lvm/lvm.conf:
use_lvmetad = 0
auto_activation_volume_list = []
filter = [ "a|/dev/sda5|", "r|.*|" ]
Better use /dev/disk/by-uuid/ to select the specific device, without depending on device order.
On other systems I have kept the default filter.
Do you have any issue with the current settings, disabling auto activation and lvmetad?
Keeping those two disabled also seems to work. The ovirt LV's do show up in 'lvs' output but are not activated.
Good
When vdsm runs the lvchange command to activate the LV of a VM (so it can boot it), will LVM still try to scan the new LV for PV's (and thin pools, etc)? Is this also prevented by the auto_activation_volume_list parameter in this case?
I don't know, but I don't see why lvm would scan the LV for PVs and thin pools; this should happen only on the guest. We never had reports about such an issue.
I wanted to be absolutely sure the VM LV's were not touched, I added the filter on some of our hosts.
The only problem with this filter is it may break if you change the host in some way, like boot from another disk.
I am aware of this issue, but in this case I would rather mess with a hypervisor that no longer boots because it needs an updated lvm.conf than try to fix corrupted VM's because the host was accessing the disks of the VM while it was running on another node.
It would be nice if you file a bug for this, and mention the configuration that fixes this issue, we certainly need to improve the way we configure lvm.
Against which component of oVirt should I file this bug?
vdsm, Core, Storage team

Thanks,
Nir

On Wed, Aug 31, 2016 at 10:43 AM, Rik Theys <Rik.Theys@esat.kuleuven.be> wrote:
On 08/30/2016 04:47 PM, Nir Soffer wrote:
On Tue, Aug 30, 2016 at 3:51 PM, Rik Theys <Rik.Theys@esat.kuleuven.be> wrote:
While rebooting one of the hosts in an oVirt cluster, I noticed that thin_check is run on the thin pool devices of one of the VMs whose disk is assigned to it.
That seems strange to me. I would expect the host to stay clear of any VM disks.
We expect the same thing, but unfortunately systemd and lvm try to auto activate stuff. This may be a good idea for a desktop system, but it is probably a bad idea for a server, and in particular for a hypervisor.
We don't have a solution yet, but you can try these:
1. disable lvmetad service
systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket
Edit /etc/lvm/lvm.conf:
use_lvmetad = 0
2. disable lvm auto activation
Edit /etc/lvm/lvm.conf:
auto_activation_volume_list = []
3. both 1 and 2
I've now applied both of the above and regenerated the initramfs and rebooted and the host no longer lists the LV's of the VM. Since I rebooted the host before without this issue, I'm not sure a single reboot is enough to conclude it has fully fixed the issue.
You mention that there's no solution yet. Does that mean the above settings are not 100% certain to avoid this behaviour?
Yes, these settings were suggested by the lvm developers, but we have not tested them yet, and of course have not integrated them into the vdsm deployment. This will require modifying lvm.conf during configuration and verifying that lvm.conf is configured correctly when starting vdsm.
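(One possible shape for such a startup check, purely as a sketch and not something vdsm actually ships, would be to verify the host configuration before starting the daemon:)

    # warn if the host still has lvmetad or LVM auto-activation enabled (sketch only)
    grep -Eq '^\s*use_lvmetad\s*=\s*0' /etc/lvm/lvm.conf || echo "WARNING: use_lvmetad is not 0"
    grep -Eq '^\s*auto_activation_volume_list\s*=\s*\[\s*\]' /etc/lvm/lvm.conf || echo "WARNING: auto activation not disabled"
    systemctl is-enabled lvm2-lvmetad.socket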
Participants (2):
- Nir Soffer
- Rik Theys