Clean up an FC LUN to reuse it

Hi all,

I tried to install the Hosted-Engine on an FC LUN. The first attempt failed at the end because of a sanlock issue, so I fixed that and re-deployed from scratch until I hit this new issue:

[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Physical device initialization failed. Please check that the device is empty and accessible by the host.]". HTTP response code is 400.

Beforehand, I had found that LUN [2] was marked as used:

[ INFO ] ok: [localhost] The following luns have been found on the requested target:
[1] 36006016039a142001792ac5b1b9e513a 1024GiB DGC VRAID status: free, paths: 8 active
[2] 36006016039a142006755b25b7505d26b 100GiB DGC VRAID status: used, paths: 8 active

So that explained the failure. Following the admin guide, I cleaned up the LUN with

# dd if=/dev/zero of=/dev/mapper/36006016039a142006755b25b7505d26b bs=1M count=200 oflag=direct

and re-deployed, but it complained again that the device was not empty even though it had just been wiped: the script still reported the LUN status as "used".

Finally, I rebooted the host, and this time the script considered the LUN status as free. So somewhere in the ansible script there is a bit of code that keeps the old status of the LUN from before the dd, until the host is rebooted.

Is there another way to refresh the LUN status after the dd?

Nathanaël Blanchet
Supervision réseau
Pôle Infrastructures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet@abes.fr
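A possible way to refresh the device state without rebooting, offered only as a sketch rather than anything from the admin guide: it assumes the stale "used" status comes from LVM/multipath metadata cached on the host, and the device path is the LUN quoted above.

# pvscan --cache    # drop the stale lvmetad cache and rescan physical volumes
# vgscan            # rescan for volume groups
# multipath -r      # reload the multipath maps
# partprobe /dev/mapper/36006016039a142006755b25b7505d26b    # have the kernel re-read the wiped device

Whether this is enough to make the deploy script report the LUN as free is untested here.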

On Wed, Oct 3, 2018 at 12:24 PM Nathanaël Blanchet <blanchet@abes.fr> wrote:
Hi all,
I tried to install the Hosted-Engine on an FC LUN, and the first time it failed at the end because of a sanlock issue. So I fixed it and re-deployed from scratch until this new issue: [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Physical device initialization failed. Please check that the device is empty and accessible by the host.]". HTTP response code is 400.
Before, I found that the lun [2] was used:
[ INFO ] ok: [localhost] The following luns have been found on the requested target:
[1] 36006016039a142001792ac5b1b9e513a 1024GiB DGC VRAID status: free, paths: 8 active
[2] 36006016039a142006755b25b7505d26b 100GiB DGC VRAID status: used, paths: 8 active
So that explained the failure.
Following the admin guide, I cleaned up the lun using
# dd if=/dev/zero of=/dev/mapper/36006016039a142006755b25b7505d26b bs=1M count=200 oflag=direct
And re-deployed, but it complained again that the device was not empty even though it had just been wiped: the script still reported the LUN status as "used".
Finally, I rebooted the host, and this time the script considered the LUN status as free.
So somewhere in the ansible script there is a bit of code that keeps the old status of the LUN from before the dd, until the host is rebooted.
Hello,

for removing a volume I usually remove the volume group (vgremove) and then the physical volume label (pvremove). At that point the device is marked as clean.

Luca

--
"It is absurd to employ men of excellent intelligence to do calculations that could be entrusted to anyone if machines were used"
Gottfried Wilhelm von Leibnitz, Philosopher and Mathematician (1646-1716)

"The Internet is the largest library in the world. The problem is that all the books are scattered on the floor"
John Allen Paulos, Mathematician (1945-living)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <lorenzetto.luca@gmail.com>
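A minimal sketch of that sequence, assuming the leftover volume group is still visible to the host; the VG name below is only a placeholder, and the device path is the LUN from this thread.

# pvs -o pv_name,vg_name /dev/mapper/36006016039a142006755b25b7505d26b    # show which VG still sits on the LUN
# vgremove <leftover_vg>    # placeholder name: remove the leftover volume group and its LVs
# pvremove /dev/mapper/36006016039a142006755b25b7505d26b    # wipe the PV label so the device shows up as clean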

Hi Luca,

Great idea! But a dd wipes more than an lvremove does... Still, it could mean that the deploy script looks for a particular VG which stays visible after the dd; rebooting forces the vgscan not to find that particular VG any more. So vgremove really is the right way to clean up a LUN, and it should be the official way in the admin guide (see also the check sketched after this message).

On 03/10/2018 at 13:23, Luca 'remix_tj' Lorenzetto wrote:
On Wed, Oct 3, 2018 at 12:24 PM Nathanaël Blanchet <blanchet@abes.fr> wrote:
Hi all,
I tried to install the Hosted-Engine on an FC LUN, and the first time it failed at the end because of a sanlock issue. So I fixed it and re-deployed from scratch until this new issue: [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Physical device initialization failed. Please check that the device is empty and accessible by the host.]". HTTP response code is 400.
Before, I found that the lun [2] was used:
[ INFO ] ok: [localhost] The following luns have been found on the requested target:
[1] 36006016039a142001792ac5b1b9e513a 1024GiB DGC VRAID status: free, paths: 8 active
[2] 36006016039a142006755b25b7505d26b 100GiB DGC VRAID status: used, paths: 8 active
So that explained the failure.
Following the admin guide, I cleaned up the lun using
# dd if=/dev/zero of=/dev/mapper/36006016039a142006755b25b7505d26b bs=1M count=200 oflag=direct
And re-deployed, but it complained again that the device was not empty even though it had just been wiped: the script still reported the LUN status as "used".
Finally, I rebooted the host, and this time the script considered the LUN status as free.
So somewhere in the ansible script there is a bit of code that keeps the old status of the LUN from before the dd, until the host is rebooted.
Hello,
for removing a volume I usually remove the volume group (vgremove) and then the physical volume label (pvremove). At that point the device is marked as clean.
Luca
--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastructures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet@abes.fr
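The check referred to above: before re-running the deploy, one untested way to verify that the LUN no longer carries LVM metadata, and to make the host forget it without a reboot (the device path is the LUN from this thread):

# pvs -o pv_name,vg_name /dev/mapper/36006016039a142006755b25b7505d26b    # should no longer find a PV/VG once the LUN is clean
# wipefs -a /dev/mapper/36006016039a142006755b25b7505d26b    # alternative to dd: erase LVM/filesystem signatures
# pvscan --cache    # refresh the host's cached LVM view of the device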
participants (2):
- Luca 'remix_tj' Lorenzetto
- Nathanaël Blanchet