best way to remove SAN lun

Hello,
currently I have a cluster of 3 hosts where each one has FC SAN connectivity to 4 LUNs: 3 are already configured as storage domains (1TB, 2TB, 4TB), one is free, not allocated.
See here for a screenshot:
https://drive.google.com/file/d/0BwoPbcrMv8mvRVZZMTlNcTQ5MGs/view?usp=sharing

At the moment the command "multipath -l" run on the hosts shows all 4 LUNs.

Now I want to do 2 things at the storage array level:
- remove the 2TB storage domain LUN
- remove the 20Gb LUN not yet allocated

What is the correct workflow, supposing I have already emptied the 2TB domain of VM disks and such?
Select the 2TB SD, then the Datacenter subtab, then "maintenance", "detach" and at the end "remove"?

I think I continue to see 4 LUNs at this point, correct?

Now do I proceed with removal of the LUN at the storage array level?

Should I select an SD line and then "Scan disks" to refresh the SAN and see in multipath only 2 of them at the end?
Or is any manual command needed at host level before removal from the array?

Thanks in advance

Gianluca

On Tue, Feb 21, 2017 at 10:19 AM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
Hello, currently I have a cluster of 3 hosts where each one has FC SAN connectivity to 4 LUNs: 3 are already configured as storage domains (1TB, 2TB, 4TB), one is free, not allocated. See here for screenshot: https://drive.google.com/file/d/0BwoPbcrMv8mvRVZZMTlNcTQ5MGs/view?usp=sharing
At the moment the command "multipath -l" run on hosts shows all the 4 LUNs.
Now I want to do 2 things at storage array level:
- remove the 2TB storage domain LUN
- remove the 20Gb LUN not yet allocated
What is the correct workflow, supposing I have already emptied the 2TB domain of VM disks and such? Select the 2TB SD, then the Datacenter subtab, then "maintenance", "detach" and at the end "remove"?
Yes, these should be your first steps.
I think I continue to see 4 LUNs at this point, correct?
Yes.
Now I proceed with removal of lun at storage array level?
Should I select an SD line and then "Scan disks" to refresh the SAN and see in multipath only 2 of them at the end? Or any manual command at host level before removal from the array?
After removing the storage domains you should be able to remove the luns. I am not extremely familiar with the multipath and low-level scsi commands but I would try the scan disks button and if the luns are not gone from your host you can manually remove them. I think that involves removing the device from multipath (multipath -d) and deleting it from the scsi subsystem.
Thanks in advance
Hope this helped you.

On Tue, Feb 21, 2017 at 5:12 PM, Adam Litke <alitke@redhat.com> wrote:
On Tue, Feb 21, 2017 at 10:19 AM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
Hello, currently I have a cluster of 3 hosts where each one has FC SAN connectivity to 4 LUNs: 3 are already configured as storage domains (1TB, 2TB, 4TB), one is free, not allocated. See here for screenshot: https://drive.google.com/file/d/0BwoPbcrMv8mvRVZZMTlNcTQ5MGs/view?usp=sharing
At the moment the command "multipath -l" run on hosts shows all the 4 LUNs.
Now I want to do 2 things at storage array level:
- remove the 2TB storage domain LUN
- remove the 20Gb LUN not yet allocated
What is the correct workflow, supposing I have already emptied the 2TB domain of VM disks and such? Select the 2TB SD, then the Datacenter subtab, then "maintenance", "detach" and at the end "remove"?
Yes, these should be your first steps.
I think I continue to see 4 LUNs at this point, correct?
Yes.
Now I proceed with removal of lun at storage array level?
Should I select an SD line and then "Scan disks" to refresh the SAN and see in multipath only 2 of them at the end? Or any manual command at host level before removal from the array?
After removing the storage domains you should be able to remove the luns. I am not extremely familiar with the multipath and low-level scsi commands but I would try the scan disks button and if the luns are not gone from your host you can manually remove them. I think that involves removing the device from multipath (multipath -d) and deleting it from the scsi subsystem.
Thanks in advance
Hope this helped you.
Hello, the "Scan Disks" seems related to the particular storage domain selected in storage tab, not overall FC SAN connectivity... If I then select "manage domain", it still shows the now missing disks with an exclamation mark aside I try to follow standard RH EL 7 way for removal: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/htm... I can remove at os level the 20Gb lun that was never used in oVirt, but for the previous 2Tb storage domain related LUN I get the error that is in use.... [root@ovmsrv06 ~]# multipath -f 3600a0b80002999020000cd3c5501458f Feb 21 17:25:58 | 3600a0b80002999020000cd3c5501458f: map in use Feb 21 17:25:58 | failed to remove multipath map 3600a0b80002999020000cd3c5501458f [root@ovmsrv06 ~]# [root@ovmsrv06 ~]# fuser /dev/mapper/3600a0b80002999020000cd3c5501458f [root@ovmsrv06 ~]# [root@ovmsrv06 ~]# ll /dev/mapper/3600a0b80002999020000cd3c5501458f lrwxrwxrwx. 1 root root 7 Feb 21 17:25 /dev/mapper/3600a0b80002999020000cd3c5501458f -> ../dm-4 [root@ovmsrv06 ~]# fuser /dev/dm-4 [root@ovmsrv06 ~]# Strange thing is that vgs command returns differrent value on th three hosts [root@ovmsrv05 vdsm]# vgs VG #PV #LV #SN Attr VSize VFree 922b5269-ab56-4c4d-838f-49d33427e2ab 1 22 0 wz--n- 4.00t 3.49t cl_ovmsrv05 1 3 0 wz--n- 67.33g 0 [root@ovmsrv05 vdsm]# [root@ovmsrv06 ~]# vgs VG #PV #LV #SN Attr VSize VFree 922b5269-ab56-4c4d-838f-49d33427e2ab 1 22 0 wz--n- 4.00t 3.49t cl 1 3 0 wz--n- 67.33g 0 [root@ovmsrv06 ~]# [root@ovmsrv07 vdsm]# vgs VG #PV #LV #SN Attr VSize VFree 900b1853-e192-4661-a0f9-7c7c396f6f49 1 10 0 wz--n- 2.00t 1.76t 922b5269-ab56-4c4d-838f-49d33427e2ab 1 27 0 wz--n- 4.00t 3.34t cl 1 3 0 wz--n- 67.33g 0 [root@ovmsrv07 vdsm]# So no host as a VG the 1TB storage domain and In particular ovmsrv07 has a VG of 2TB that I suspect was the previosu storage domain [root@ovmsrv07 vdsm]# lvs 900b1853-e192-4661-a0f9-7c7c396f6f49 LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert 35b8834e-a429-4223-b293-51d562b6def4 900b1853-e192-4661-a0f9-7c7c396f6f49 -wi------- 128.00m 7ed43974-1039-4a68-a8b3-321e7594fe4c 900b1853-e192-4661-a0f9-7c7c396f6f49 -wi------- 240.00g d7f6be37-0f6c-43e3-b0af-a511fc59c842 900b1853-e192-4661-a0f9-7c7c396f6f49 -wi------- 128.00m ids 900b1853-e192-4661-a0f9-7c7c396f6f49 -wi-a----- 128.00m inbox 900b1853-e192-4661-a0f9-7c7c396f6f49 -wi-a----- 128.00m leases 900b1853-e192-4661-a0f9-7c7c396f6f49 -wi-a----- 2.00g master 900b1853-e192-4661-a0f9-7c7c396f6f49 -wi-a----- 1.00g metadata 900b1853-e192-4661-a0f9-7c7c396f6f49 -wi-a----- 512.00m outbox 900b1853-e192-4661-a0f9-7c7c396f6f49 -wi-a----- 128.00m xleases 900b1853-e192-4661-a0f9-7c7c396f6f49 -wi-a----- 1.00g [root@ovmsrv07 vdsm]# Gianluca

On Tue, Feb 21, 2017 at 6:12 PM, Adam Litke <alitke@redhat.com> wrote:
On Tue, Feb 21, 2017 at 10:19 AM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
Hello, currently I have a cluster of 3 hosts where each one has FC SAN connectivity to 4 LUNs: 3 are already configured as storage domains (1TB, 2TB, 4TB), one is free, not allocated. See here for screenshot:
https://drive.google.com/file/d/0BwoPbcrMv8mvRVZZMTlNcTQ5MGs/view?usp=sharing
At the moment the command "multipath -l" run on hosts shows all the 4 LUNs.
Now I want to do 2 things at storage array level:
- remove the 2TB storage domain LUN
- remove the 20Gb LUN not yet allocated
What is the correct workflow, supposing I have already emptied the 2TB domain of VM disks and such? Select the 2TB SD, then the Datacenter subtab, then "maintenance", "detach" and at the end "remove"?
Yes, these should be your first steps.
I think I continue to see 4 LUNs at this point, correct?
We do not manage devices, so this must be performed manually by the system administrator. To remove devices, you have to:

1. On the storage server, unzone the devices so the host(s) will not see them. This must be done before you remove the multipath device and paths, otherwise vdsm will discover the device again during the periodic scsi/fc rescans.

2. On all hosts, remove the multipath device and the underlying scsi devices, as explained here: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/htm...

Nir
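For illustration only, a minimal sketch of step 2 on a single host, assuming the LUN has already been unzoned on the array. The WWID and the sdX path names below are just examples (they happen to be the ones that appear later in this thread); replace them with the values reported by "multipath -ll" on each host:

multipath -ll 3600a0b80002999020000cd3c5501458f   # note the sdX paths that make up the map
multipath -f 3600a0b80002999020000cd3c5501458f    # flush the multipath map (fails if it is still in use)
for dev in sdb sdh sdg sdn                        # the sdX paths listed above, per host
do
  blockdev --flushbufs /dev/$dev                  # flush any buffered I/O for the path
  echo 1 > /sys/block/$dev/device/delete          # remove the scsi device
done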
Yes.
Now I proceed with removal of lun at storage array level?
Should I select an SD line and then "Scan disks" to refresh the SAN and see in multipath only 2 of them at the end? Or any manual command at host level before removal from the array?
After removing the storage domains you should be able to remove the luns. I am not extremely familiar with the multipath and low-level scsi commands but I would try the scan disks button and if the luns are not gone from your host you can manually remove them. I think that involves removing the device from multipath (multipath -d) and deleting it from the scsi subsystem.
Thanks in advance
Hope this helped you.


On Tue, Feb 21, 2017 at 5:42 PM, Andrea Ghelardi <a.ghelardi@iontrading.com> wrote:
Hello Gianluca,
I have ISCSI SAN here and not FC but process should be very similar.
Correct operation is
Storage maintenance -> detach -> remove
Once you have removed the storage, Ovirt will release the sanlock.
Note that your server will still “see” the device and multipath will still list it as active.
You can now unmap LUN using your SAN manager.
If your multipath is correctly configured, device will “fail” but nothing else (multipath and server won’t hang).
You will be able to perform basic list operations via server shell without experiencing freeze or locks.*
Yes, but /var/log/messages fills up with multipath errors. This is not desirable
Ovirt will continue to scan new devices so you won’t be able to manually remove them until you unmap devices from SAN manager.
If you need to remove devices manually, it is advised to follow this guide.
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/removing_devices.html
* If you do face command freeze etc, re-enable LUN mappings and check your multipath.conf
Yes, but the problem is that I receive an error for the LUN previously configured as a storage domain. I presume some stale devices of the kind "dm-??" have remained somewhere. I could reboot one of the 3 hosts and verified that the layout is now correct on it. But I can't easily reboot the remaining 2 hosts just now, so it would be better to find what forces the multipath device to stay in use....

Gianluca

The id of the removed storage domain was 900b1853-e192-4661-a0f9-7c7c396f6f49, and on the not yet rebooted hosts I get this. The multipath device was 3600a0b80002999020000cd3c5501458f.

With the pvs command I get these errors related to them:

[root@ovmsrv06 ~]# pvs
  /dev/mapper/3600a0b80002999020000cd3c5501458f: read failed after 0 of 4096 at 0: Input/output error
  /dev/mapper/3600a0b80002999020000cd3c5501458f: read failed after 0 of 4096 at 2199023190016: Input/output error
  /dev/mapper/3600a0b80002999020000cd3c5501458f: read failed after 0 of 4096 at 2199023247360: Input/output error
  /dev/mapper/3600a0b80002999020000cd3c5501458f: read failed after 0 of 4096 at 4096: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/metadata: read failed after 0 of 4096 at 0: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/metadata: read failed after 0 of 4096 at 536805376: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/metadata: read failed after 0 of 4096 at 536862720: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/metadata: read failed after 0 of 4096 at 4096: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/outbox: read failed after 0 of 4096 at 0: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/outbox: read failed after 0 of 4096 at 134152192: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/outbox: read failed after 0 of 4096 at 134209536: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/outbox: read failed after 0 of 4096 at 4096: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/xleases: read failed after 0 of 4096 at 0: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/xleases: read failed after 0 of 4096 at 1073676288: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/xleases: read failed after 0 of 4096 at 1073733632: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/xleases: read failed after 0 of 4096 at 4096: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/leases: read failed after 0 of 4096 at 0: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/leases: read failed after 0 of 4096 at 2147418112: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/leases: read failed after 0 of 4096 at 2147475456: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/leases: read failed after 0 of 4096 at 4096: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/ids: read failed after 0 of 4096 at 0: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/ids: read failed after 0 of 4096 at 134152192: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/ids: read failed after 0 of 4096 at 134209536: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/ids: read failed after 0 of 4096 at 4096: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/inbox: read failed after 0 of 4096 at 0: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/inbox: read failed after 0 of 4096 at 134152192: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/inbox: read failed after 0 of 4096 at 134209536: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/inbox: read failed after 0 of 4096 at 4096: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/master: read failed after 0 of 4096 at 0: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/master: read failed after 0 of 4096 at 1073676288: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/master: read failed after 0 of 4096 at 1073733632: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/master: read failed after 0 of 4096 at 4096: Input/output error
  PV                                            VG                                   Fmt  Attr PSize  PFree
  /dev/cciss/c0d0p2                             cl                                   lvm2 a--  67.33g     0
  /dev/mapper/3600a0b8000299aa80000d08b55014119 922b5269-ab56-4c4d-838f-49d33427e2ab lvm2 a--   4.00t 3.49t
[root@ovmsrv06 ~]#

How to clean?

Gianluca

On Tue, Feb 21, 2017 at 7:06 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
The id of the removed storage domain was 900b1853-e192-4661-a0f9-7c7c396f6f49, and on the not yet rebooted hosts I get this. The multipath device was 3600a0b80002999020000cd3c5501458f.

With the pvs command I get these errors related to them:

[root@ovmsrv06 ~]# pvs
  /dev/mapper/3600a0b80002999020000cd3c5501458f: read failed after 0 of 4096 at 0: Input/output error
  /dev/mapper/3600a0b80002999020000cd3c5501458f: read failed after 0 of 4096 at 2199023190016: Input/output error
  /dev/mapper/3600a0b80002999020000cd3c5501458f: read failed after 0 of 4096 at 2199023247360: Input/output error
  /dev/mapper/3600a0b80002999020000cd3c5501458f: read failed after 0 of 4096 at 4096: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/metadata: read failed after 0 of 4096 at 0: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/metadata: read failed after 0 of 4096 at 536805376: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/metadata: read failed after 0 of 4096 at 536862720: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/metadata: read failed after 0 of 4096 at 4096: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/outbox: read failed after 0 of 4096 at 0: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/outbox: read failed after 0 of 4096 at 134152192: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/outbox: read failed after 0 of 4096 at 134209536: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/outbox: read failed after 0 of 4096 at 4096: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/xleases: read failed after 0 of 4096 at 0: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/xleases: read failed after 0 of 4096 at 1073676288: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/xleases: read failed after 0 of 4096 at 1073733632: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/xleases: read failed after 0 of 4096 at 4096: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/leases: read failed after 0 of 4096 at 0: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/leases: read failed after 0 of 4096 at 2147418112: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/leases: read failed after 0 of 4096 at 2147475456: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/leases: read failed after 0 of 4096 at 4096: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/ids: read failed after 0 of 4096 at 0: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/ids: read failed after 0 of 4096 at 134152192: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/ids: read failed after 0 of 4096 at 134209536: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/ids: read failed after 0 of 4096 at 4096: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/inbox: read failed after 0 of 4096 at 0: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/inbox: read failed after 0 of 4096 at 134152192: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/inbox: read failed after 0 of 4096 at 134209536: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/inbox: read failed after 0 of 4096 at 4096: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/master: read failed after 0 of 4096 at 0: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/master: read failed after 0 of 4096 at 1073676288: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/master: read failed after 0 of 4096 at 1073733632: Input/output error
  /dev/900b1853-e192-4661-a0f9-7c7c396f6f49/master: read failed after 0 of 4096 at 4096: Input/output error
  PV                                            VG                                   Fmt  Attr PSize  PFree
  /dev/cciss/c0d0p2                             cl                                   lvm2 a--  67.33g     0
  /dev/mapper/3600a0b8000299aa80000d08b55014119 922b5269-ab56-4c4d-838f-49d33427e2ab lvm2 a--   4.00t 3.49t
[root@ovmsrv06 ~]#

How to clean?
This is caused by active lvs on the removed storage domain that were not deactivated during the removal. This is a very old known issue.

You have to remove the stale device mapper entries - you can see the devices using:

dmsetup status

Then you can remove the mapping using:

dmsetup remove device-name

Once you removed the stale lvs, you will be able to remove the multipath device and the underlying paths, and lvm will not complain about read errors.

Nir
Gianluca

On Tue, Feb 21, 2017 at 6:10 PM, Nir Soffer <nsoffer@redhat.com> wrote:
This is caused by active lvs on the removed storage domain that were not deactivated during the removal. This is a very old known issue.
You have to remove the stale device mapper entries - you can see the devices using:
dmsetup status
Then you can remove the mapping using:
dmsetup remove device-name
Once you removed the stale lvs, you will be able to remove the multipath device and the underlying paths, and lvm will not complain about read errors.
Nir
OK Nir, thanks for advising.

So this is what I ran with success on the 2 hosts:

[root@ovmsrv05 vdsm]# for dev in $(dmsetup status | grep 900b1853--e192--4661--a0f9--7c7c396f6f49 | cut -d ":" -f 1)
do
  dmsetup remove $dev
done
[root@ovmsrv05 vdsm]#

and now I can run:

[root@ovmsrv05 vdsm]# multipath -f 3600a0b80002999020000cd3c5501458f
[root@ovmsrv05 vdsm]#

Also, with related names depending on host, the previous maps to single devices were for example in ovmsrv05:

3600a0b80002999020000cd3c5501458f dm-4 IBM ,1814 FAStT
size=2.0T features='2 pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='service-time 0' prio=0 status=enabled
| |- 0:0:0:2 sdb 8:16  failed undef running
| `- 1:0:0:2 sdh 8:112 failed undef running
`-+- policy='service-time 0' prio=0 status=enabled
  |- 0:0:1:2 sdg 8:96  failed undef running
  `- 1:0:1:2 sdn 8:208 failed undef running

And removal of the single path devices:

[root@ovmsrv05 root]# for dev in sdb sdh sdg sdn
do
  echo 1 > /sys/block/${dev}/device/delete
done
[root@ovmsrv05 vdsm]#

All clean now... ;-)

Thanks again,
Gianluca

On Tue, Feb 21, 2017 at 7:25 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Tue, Feb 21, 2017 at 6:10 PM, Nir Soffer <nsoffer@redhat.com> wrote:
This is caused by active lvs on the removed storage domain that were not deactivated during the removal. This is a very old known issue.
You have to remove the stale device mapper entries - you can see the devices using:
dmsetup status
Then you can remove the mapping using:
dmsetup remove device-name
Once you removed the stale lvs, you will be able to remove the multipath device and the underlying paths, and lvm will not complain about read errors.
Nir
OK Nir, thanks for advising.
So what I run with success on the 2 hosts
[root@ovmsrv05 vdsm]# for dev in $(dmsetup status | grep 900b1853--e192--4661--a0f9--7c7c396f6f49 | cut -d ":" -f 1)
do
  dmsetup remove $dev
done
[root@ovmsrv05 vdsm]#
and now I can run
[root@ovmsrv05 vdsm]# multipath -f 3600a0b80002999020000cd3c5501458f
[root@ovmsrv05 vdsm]#
Also, with related names depending on host,
previous maps to single devices were for example in ovmsrv05:
3600a0b80002999020000cd3c5501458f dm-4 IBM ,1814 FAStT
size=2.0T features='2 pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='service-time 0' prio=0 status=enabled
| |- 0:0:0:2 sdb 8:16  failed undef running
| `- 1:0:0:2 sdh 8:112 failed undef running
`-+- policy='service-time 0' prio=0 status=enabled
  |- 0:0:1:2 sdg 8:96  failed undef running
  `- 1:0:1:2 sdn 8:208 failed undef running
And removal of single path devices:
[root@ovmsrv05 root]# for dev in sdb sdh sdg sdn
do
  echo 1 > /sys/block/${dev}/device/delete
done
[root@ovmsrv05 vdsm]#
All clean now... ;-)
Great! I think we should have a script doing all these steps.

Nir
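A rough sketch of what such a per-host script could look like, stitched together only from the commands shown earlier in this thread. The storage domain UUID, the WWID and the sdX names are the examples used above and would have to be discovered per host; this is an illustration, not an official tool, and it assumes the storage domain has already been removed in oVirt and the LUN unzoned on the array:

#!/bin/bash
# Illustrative sketch only - run on each host after the storage domain has
# been removed in oVirt and the LUN has been unzoned on the storage array.
SD_UUID="900b1853-e192-4661-a0f9-7c7c396f6f49"   # example from this thread
WWID="3600a0b80002999020000cd3c5501458f"         # example from this thread
PATHS="sdb sdh sdg sdn"                          # per host, from 'multipath -ll $WWID'

# 1. remove stale device-mapper entries left over from the storage domain LVs
#    (device-mapper names double every '-' that appears in the VG name)
for dev in $(dmsetup status | grep "${SD_UUID//-/--}" | cut -d ":" -f 1); do
    dmsetup remove "$dev"
done

# 2. flush the now unused multipath map
multipath -f "$WWID"

# 3. delete the underlying scsi path devices
for dev in $PATHS; do
    echo 1 > "/sys/block/$dev/device/delete"
done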

Hello,

Not sure it is the same issue, but we have had a "major" issue recently in our production system when removing an iSCSI volume from oVirt, and then removing it from the SAN. The issue being that each host was still trying to access the SAN volume regularly, in spite of it not being completely removed from oVirt. This led to a massive increase of error logs, which completely filled the /var/log partition, which snowballed into crashing vdsm and other nasty consequences.

Anyway, the solution was to manually logout from the SAN (on each host) with iscsiadm and manually remove the iscsi targets (again on each host). It was not difficult once the problem was found, because currently we only have 3 hosts in this cluster, but I'm wondering what would happen if we had hundreds of hosts?

Maybe I'm being naive, but shouldn't this be "oVirt job"? Is there an RFE still waiting to be included on this subject, or should I write one?

cordialement, regards,

Nelson LAMEIRAS
Ingénieur Systèmes et Réseaux / Systems and Networks engineer
Tel: +33 5 32 09 09 70
nelson.lameiras@lyra-network.com
www.lyra-network.com | www.payzen.eu
Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE

----- Original Message -----
From: "Nir Soffer" <nsoffer@redhat.com>
To: "Gianluca Cecchi" <gianluca.cecchi@gmail.com>, "Adam Litke" <alitke@redhat.com>
Cc: "users" <users@ovirt.org>
Sent: Tuesday, February 21, 2017 6:32:18 PM
Subject: Re: [ovirt-users] best way to remove SAN lun

On Tue, Feb 21, 2017 at 7:25 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Tue, Feb 21, 2017 at 6:10 PM, Nir Soffer <nsoffer@redhat.com> wrote:
This is caused by active lvs on the removed storage domain that were not deactivated during the removal. This is a very old known issue.
You have to remove the stale device mapper entries - you can see the devices using:
dmsetup status
Then you can remove the mapping using:
dmsetup remove device-name
Once you removed the stale lvs, you will be able to remove the multipath device and the underlying paths, and lvm will not complain about read errors.
Nir
OK Nir, thanks for advising.
So what I run with success on the 2 hosts
[root@ovmsrv05 vdsm]# for dev in $(dmsetup status | grep 900b1853--e192--4661--a0f9--7c7c396f6f49 | cut -d ":" -f 1)
do
  dmsetup remove $dev
done
[root@ovmsrv05 vdsm]#
and now I can run
[root@ovmsrv05 vdsm]# multipath -f 3600a0b80002999020000cd3c5501458f
[root@ovmsrv05 vdsm]#
Also, with related names depending on host,
previous maps to single devices were for example in ovmsrv05:
3600a0b80002999020000cd3c5501458f dm-4 IBM ,1814 FAStT
size=2.0T features='2 pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='service-time 0' prio=0 status=enabled
| |- 0:0:0:2 sdb 8:16  failed undef running
| `- 1:0:0:2 sdh 8:112 failed undef running
`-+- policy='service-time 0' prio=0 status=enabled
  |- 0:0:1:2 sdg 8:96  failed undef running
  `- 1:0:1:2 sdn 8:208 failed undef running
And removal of single path devices:
[root@ovmsrv05 root]# for dev in sdb sdh sdg sdn
do
  echo 1 > /sys/block/${dev}/device/delete
done
[root@ovmsrv05 vdsm]#
All clean now... ;-)
Great! I think we should have a script doing all these steps.

Nir

On Wed, Feb 22, 2017 at 9:03 AM, Nelson Lameiras <nelson.lameiras@lyra-network.com> wrote:
Hello,
Not sure it is the same issue, but we have had a "major" issue recently in our production system when removing a ISCSI volume from oVirt, and then removing it from SAN.
What version? OS version?

The order must be:

1. remove the LUN from the storage domain - this will be available in the next 4.1 release; in older versions you have to remove the storage domain
2. unzone the LUN on the server
3. remove the multipath devices and the paths on the nodes
The issue being that each host was still trying to access regularly to the SAN volume in spite of not being completely removed from oVirt.
What do you mean by "not being completely removed"? Who was accessing the volume?
This led to an massive increase of error logs, which filled completely /var/log partition,
Which log was full with errors?
which snowballed into crashing vdsm and other nasty consequences.
You should have big enough /var/log to avoid such issues.
Anyway, the solution was to manually logout from SAN (in each host) with iscsiadm and manually remove iscsi targets (again in each host). It was not difficult once the problem was found because currently we only have 3 hosts in this cluster, but I'm wondering what would happen if we had hundreds of hosts ?
Maybe I'm being naive but shouldn't this be "oVirt job" ? Is there a RFE still waiting to be included on this subject or should I write one ?
We have an RFE for this here: https://bugzilla.redhat.com/1310330

But you must understand that ovirt does not control your storage server; you are responsible for adding devices on the storage server, and removing them. We are only consuming the devices.

Even if we provide a way to remove devices on all hosts, you will have to remove the device on the storage server before removing it from the hosts. If not, ovirt will find the removed devices again in the next scsi rescan, and we do a lot of these to support automatic discovery of new devices or resized devices.

Nir
cordialement, regards,
Nelson LAMEIRAS
Ingénieur Systèmes et Réseaux / Systems and Networks engineer
Tel: +33 5 32 09 09 70
nelson.lameiras@lyra-network.com
www.lyra-network.com | www.payzen.eu
Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE
----- Original Message -----
From: "Nir Soffer" <nsoffer@redhat.com>
To: "Gianluca Cecchi" <gianluca.cecchi@gmail.com>, "Adam Litke" <alitke@redhat.com>
Cc: "users" <users@ovirt.org>
Sent: Tuesday, February 21, 2017 6:32:18 PM
Subject: Re: [ovirt-users] best way to remove SAN lun
On Tue, Feb 21, 2017 at 7:25 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Tue, Feb 21, 2017 at 6:10 PM, Nir Soffer <nsoffer@redhat.com> wrote:
This is caused by active lvs on the removed storage domain that were not deactivated during the removal. This is a very old known issue.
You have to remove the stale device mapper entries - you can see the devices using:
dmsetup status
Then you can remove the mapping using:
dmsetup remove device-name
Once you removed the stale lvs, you will be able to remove the multipath device and the underlying paths, and lvm will not complain about read errors.
Nir
OK Nir, thanks for advising.
So what I run with success on the 2 hosts
[root@ovmsrv05 vdsm]# for dev in $(dmsetup status | grep 900b1853--e192--4661--a0f9--7c7c396f6f49 | cut -d ":" -f 1)
do
  dmsetup remove $dev
done
[root@ovmsrv05 vdsm]#
and now I can run
[root@ovmsrv05 vdsm]# multipath -f 3600a0b80002999020000cd3c5501458f
[root@ovmsrv05 vdsm]#
Also, with related names depending on host,
previous maps to single devices were for example in ovmsrv05:
3600a0b80002999020000cd3c5501458f dm-4 IBM ,1814 FAStT
size=2.0T features='2 pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='service-time 0' prio=0 status=enabled
| |- 0:0:0:2 sdb 8:16  failed undef running
| `- 1:0:0:2 sdh 8:112 failed undef running
`-+- policy='service-time 0' prio=0 status=enabled
  |- 0:0:1:2 sdg 8:96  failed undef running
  `- 1:0:1:2 sdn 8:208 failed undef running
And removal of single path devices:
[root@ovmsrv05 root]# for dev in sdb sdh sdg sdn
do
  echo 1 > /sys/block/${dev}/device/delete
done
[root@ovmsrv05 vdsm]#
All clean now... ;-)
Great!
I think we should have a script doing all these steps.
Nir

On Wed, Feb 22, 2017 at 9:27 AM Nir Soffer <nsoffer@redhat.com> wrote:
On Wed, Feb 22, 2017 at 9:03 AM, Nelson Lameiras <nelson.lameiras@lyra-network.com> wrote:
Hello,
Not sure it is the same issue, but we have had a "major" issue recently in our production system when removing a ISCSI volume from oVirt, and then removing it from SAN.
What version? OS version?
The order must be:
1. remove the LUN from the storage domain - this will be available in the next 4.1 release; in older versions you have to remove the storage domain
2. unzone the LUN on the server
3. remove the multipath devices and the paths on the nodes
The issue being that each host was still trying to access regularly to the SAN volume in spite of not being completely removed from oVirt.
What do you mean by "not being completely removed"?
Who was accessing the volume?
This led to an massive increase of error logs, which filled completely /var/log partition,
Which log was full with errors?
which snowballed into crashing vdsm and other nasty consequences.
You should have big enough /var/log to avoid such issues.
- Log rotation should be set up better, so it does not consume excessive amounts of space. I'm seeing /etc/vdsm/logrotate/vdsm - not sure why it's not under /etc/logrotate.d. Looking at the file, it seems there's a 15M limit and 100 files, which translates to 1.5GB - and it is supposed to be compressed (not sure XZ is a good choice - it's very CPU intensive). Others (Gluster?) do not seem to have a size limit, just weekly rotation. Need to look at other components as well.

- At least on ovirt-node, we'd like to separate some directories onto different partitions, so that for example core dumps (which should be limited as well) on /var/core do not fill the same partition as /var/log and thus render the host unusable. And again, looking at the file, we have a 'size 0' on /var/log/core/*.dump - and 'rotate 1' - not sure what that means - but it should not be in /var/log/core, but /var/core, I reckon.

Y.
Anyway, the solution was to manually logout from SAN (in each host) with
iscsiadm and manually remove iscsi targets (again in each host). It was not difficult once the problem was found because currently we only have 3 hosts in this cluster, but I'm wondering what would happen if we had hundreds of hosts ?
Maybe I'm being naive but shouldn't this be "oVirt job" ? Is there a RFE
still waiting to be included on this subject or should I write one ?
We have RFE for this here: https://bugzilla.redhat.com/1310330
But you must understand that ovirt does not control your storage server, you are responsible to add devices on the storage server, and remove them. We are only consuming the devices.
Even if we provide a way to remove devices on all hosts, you will have to remove the device on the storage server before removing it from the hosts. If not, ovirt will find the removed devices again in the next scsi rescan, and we do a lot of these to support automatic discovery of new devices or resized devices.
Nir
cordialement, regards,
Nelson LAMEIRAS
Ingénieur Systèmes et Réseaux / Systems and Networks engineer
Tel: +33 5 32 09 09 70
nelson.lameiras@lyra-network.com
www.lyra-network.com | www.payzen.eu
Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE
----- Original Message -----
From: "Nir Soffer" <nsoffer@redhat.com>
To: "Gianluca Cecchi" <gianluca.cecchi@gmail.com>, "Adam Litke" <alitke@redhat.com>
Cc: "users" <users@ovirt.org>
Sent: Tuesday, February 21, 2017 6:32:18 PM
Subject: Re: [ovirt-users] best way to remove SAN lun
On Tue, Feb 21, 2017 at 7:25 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Tue, Feb 21, 2017 at 6:10 PM, Nir Soffer <nsoffer@redhat.com> wrote:
This is caused by active lvs on the removed storage domain that were not deactivated during the removal. This is a very old known issue.
You have to remove the stale device mapper entries - you can see the devices using:
dmsetup status
Then you can remove the mapping using:
dmsetup remove device-name
Once you removed the stale lvs, you will be able to remove the multipath device and the underlying paths, and lvm will not complain about read errors.
Nir
OK Nir, thanks for advising.
So what I run with success on the 2 hosts
[root@ovmsrv05 vdsm]# for dev in $(dmsetup status | grep 900b1853--e192--4661--a0f9--7c7c396f6f49 | cut -d ":" -f 1)
do
  dmsetup remove $dev
done
[root@ovmsrv05 vdsm]#
and now I can run
[root@ovmsrv05 vdsm]# multipath -f 3600a0b80002999020000cd3c5501458f
[root@ovmsrv05 vdsm]#
Also, with related names depending on host,
previous maps to single devices were for example in ovmsrv05:
3600a0b80002999020000cd3c5501458f dm-4 IBM ,1814 FAStT
size=2.0T features='2 pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='service-time 0' prio=0 status=enabled
| |- 0:0:0:2 sdb 8:16  failed undef running
| `- 1:0:0:2 sdh 8:112 failed undef running
`-+- policy='service-time 0' prio=0 status=enabled
  |- 0:0:1:2 sdg 8:96  failed undef running
  `- 1:0:1:2 sdn 8:208 failed undef running
And removal of single path devices:
[root@ovmsrv05 root]# for dev in sdb sdh sdg sdn
do
  echo 1 > /sys/block/${dev}/device/delete
done
[root@ovmsrv05 vdsm]#
All clean now... ;-)
Great!
I think we should have a script doing all these steps.
Nir

Hello Nir,

I think I was not clear in my explanations, let me try again: we have an oVirt 4.0.5.5 cluster with multiple hosts (CentOS 7.2). In this cluster, we added a SAN volume (iscsi) a few months ago directly in the GUI. Later we had to remove a DATA volume (SAN iscsi). Below are the steps we have taken:

1- we migrated all disks outside the volume (oVirt)
2- we put the volume on maintenance (oVirt)
3- we detached the volume (oVirt)
4- we removed/destroyed the volume (oVirt)

In the SAN:
5- we put it offline on the SAN
6- we deleted it from the SAN

We thought this would be enough, but later we had a serious incident when the log partition went full (partially our fault): /var/log/messages was continuously logging that it was still trying to reach the SAN volumes (we have since taken care of the log space issue => more aggressive logrotate, etc).

The real solution was to add two more steps, using a shell on ALL hosts:

4a - logout from the SAN: iscsiadm -m node --logout -T iqn.XXXXXXXX
4b - remove the iscsi targets: rm -fr /var/lib/iscsi/nodes/iqn.XXXXXXXXX

This effectively solved our problem, but was fastidious since we had to do it manually on all hosts (imagine if we had hundreds of hosts... a sketch of scripting these two steps across hosts appears at the end of this thread).

So my question was: shouldn't it be oVirt's job to "logout" and "remove iscsi targets" automatically when a volume is removed from oVirt? Maybe not, and I'm missing something?

cordialement, regards,

Nelson LAMEIRAS
Ingénieur Systèmes et Réseaux / Systems and Networks engineer
Tel: +33 5 32 09 09 70
nelson.lameiras@lyra-network.com
www.lyra-network.com | www.payzen.eu
Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE

----- Original Message -----
From: "Nir Soffer" <nsoffer@redhat.com>
To: "Nelson Lameiras" <nelson.lameiras@lyra-network.com>
Cc: "Gianluca Cecchi" <gianluca.cecchi@gmail.com>, "Adam Litke" <alitke@redhat.com>, "users" <users@ovirt.org>
Sent: Wednesday, February 22, 2017 8:27:26 AM
Subject: Re: [ovirt-users] best way to remove SAN lun

On Wed, Feb 22, 2017 at 9:03 AM, Nelson Lameiras <nelson.lameiras@lyra-network.com> wrote:
Hello,
Not sure it is the same issue, but we have had a "major" issue recently in our production system when removing a ISCSI volume from oVirt, and then removing it from SAN.
What version? OS version?

The order must be:

1. remove the LUN from the storage domain - this will be available in the next 4.1 release; in older versions you have to remove the storage domain
2. unzone the LUN on the server
3. remove the multipath devices and the paths on the nodes
The issue being that each host was still trying to access regularly to the SAN volume in spite of not being completely removed from oVirt.
What do you mean by "not being completely removed"? Who was accessing the volume?
This led to an massive increase of error logs, which filled completely /var/log partition,
Which log was full with errors?
which snowballed into crashing vdsm and other nasty consequences.
You should have big enough /var/log to avoid such issues.
Anyway, the solution was to manually logout from SAN (in each host) with iscsiadm and manually remove iscsi targets (again in each host). It was not difficult once the problem was found because currently we only have 3 hosts in this cluster, but I'm wondering what would happen if we had hundreds of hosts ?
Maybe I'm being naive but shouldn't this be "oVirt job" ? Is there a RFE still waiting to be included on this subject or should I write one ?
We have an RFE for this here: https://bugzilla.redhat.com/1310330

But you must understand that ovirt does not control your storage server; you are responsible for adding devices on the storage server, and removing them. We are only consuming the devices.

Even if we provide a way to remove devices on all hosts, you will have to remove the device on the storage server before removing it from the hosts. If not, ovirt will find the removed devices again in the next scsi rescan, and we do a lot of these to support automatic discovery of new devices or resized devices.

Nir
cordialement, regards,
Nelson LAMEIRAS
Ingénieur Systèmes et Réseaux / Systems and Networks engineer
Tel: +33 5 32 09 09 70
nelson.lameiras@lyra-network.com
www.lyra-network.com | www.payzen.eu
Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE
----- Original Message -----
From: "Nir Soffer" <nsoffer@redhat.com>
To: "Gianluca Cecchi" <gianluca.cecchi@gmail.com>, "Adam Litke" <alitke@redhat.com>
Cc: "users" <users@ovirt.org>
Sent: Tuesday, February 21, 2017 6:32:18 PM
Subject: Re: [ovirt-users] best way to remove SAN lun
On Tue, Feb 21, 2017 at 7:25 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Tue, Feb 21, 2017 at 6:10 PM, Nir Soffer <nsoffer@redhat.com> wrote:
This is caused by active lvs on the removed storage domain that were not deactivated during the removal. This is a very old known issue.
You have to remove the stale device mapper entries - you can see the devices using:
dmsetup status
Then you can remove the mapping using:
dmsetup remove device-name
Once you removed the stale lvs, you will be able to remove the multipath device and the underlying paths, and lvm will not complain about read errors.
Nir
OK Nir, thanks for advising.
So what I run with success on the 2 hosts
[root@ovmsrv05 vdsm]# for dev in $(dmsetup status | grep 900b1853--e192--4661--a0f9--7c7c396f6f49 | cut -d ":" -f 1)
do
  dmsetup remove $dev
done
[root@ovmsrv05 vdsm]#
and now I can run
[root@ovmsrv05 vdsm]# multipath -f 3600a0b80002999020000cd3c5501458f
[root@ovmsrv05 vdsm]#
Also, with related names depending on host,
previous maps to single devices were for example in ovmsrv05:
3600a0b80002999020000cd3c5501458f dm-4 IBM ,1814 FAStT
size=2.0T features='2 pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='service-time 0' prio=0 status=enabled
| |- 0:0:0:2 sdb 8:16  failed undef running
| `- 1:0:0:2 sdh 8:112 failed undef running
`-+- policy='service-time 0' prio=0 status=enabled
  |- 0:0:1:2 sdg 8:96  failed undef running
  `- 1:0:1:2 sdn 8:208 failed undef running
And removal of single path devices:
[root@ovmsrv05 root]# for dev in sdb sdh sdg sdn
do
  echo 1 > /sys/block/${dev}/device/delete
done
[root@ovmsrv05 vdsm]#
All clean now... ;-)
Great!
I think we should have a script doing all these steps.
Nir
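Referring back to the two manual steps Nelson describes above (the iscsiadm logout and the removal of the iscsi node records), here is a rough sketch of how they could be run across many hosts over ssh. It is not from the thread: the host list and the target iqn are placeholders, and passwordless ssh as root is assumed:

#!/bin/bash
# Rough sketch only: run the two manual cleanup steps on every host.
# HOSTS and TARGET_IQN are placeholders; assumes passwordless ssh as root.
HOSTS="host1 host2 host3"
TARGET_IQN="iqn.XXXXXXXX"

for h in $HOSTS; do
    echo "== cleaning up $TARGET_IQN on $h =="
    # log out of the iscsi session for this target
    ssh "root@$h" "iscsiadm -m node --logout -T $TARGET_IQN"
    # remove the stored node records so the target is not logged into again
    ssh "root@$h" "rm -fr /var/lib/iscsi/nodes/$TARGET_IQN"
done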
participants (6)
- Adam Litke
- Andrea Ghelardi
- Gianluca Cecchi
- Nelson Lameiras
- Nir Soffer
- Yaniv Kaul