<div dir="ltr">Try to deactivate this LV:<div><br></div><div>lvchange -an /dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-47f0-85ff-08480476ea68</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Jan 30, 2018 at 5:25 PM, Gianluca Cecchi <span dir="ltr"><<a href="mailto:gianluca.cecchi@gmail.com" target="_blank">gianluca.cecchi@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span class="">On Tue, Jan 30, 2018 at 3:14 PM, Elad Ben Aharon <span dir="ltr"><<a href="mailto:ebenahar@redhat.com" target="_blank">ebenahar@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">In a case of disk migration failure with leftover LV on the destination domain, lvremove is what needed. Also, make sure to remove the image directory on the destination domain (located under /rhev/data-center/%spuuid%/%sd<wbr>uuid%/images/)</div><div class="gmail_extra"><br></div></blockquote><div><br></div></span><div>Ok, exactly the 2 steps I've done after noticing the broken link inside the image directory. I suspected something to be done at rdbms level.</div><div>Thanks for confirmation.</div><div>The bad was that the VM crashed because of this error.</div><div><br></div><div>Not a big problem in my case, as this is a test env.</div><div>I had to move many disk images from SAN to SAN, about 200Gb size each, and so I powered off the VMs and then moved the disks.</div><div>I didn't get anymore the problem at destination, but I did get sometimes the opposite: unable to remove the LV at source:</div><div><br></div><div>command HSMGetAllTasksStatusesVDS failed: Cannot remove Logical Volume: (['Cannot remove Logical Volume: (u\'f7b481c8-b744-43d4-9faa-<wbr>0e494a308490\', "[\'6d23ec3e-79fb-47f0-85ff-<wbr>08480476ea68\']")'],)<br></div><div><br></div><div>and I see that the apparently the LV remains active/open (ao flag) at source...</div><div>Even after putting the source storage domain, by now empty, into maintenance, at host side I get this kid of thing:</div><div><br></div><div> # lvs f7b481c8-b744-43d4-9faa-<wbr>0e494a308490</div><div> LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert</div><div> 45e20ac8-f3c6-462b-a2c3-<wbr>30f0c8262f84 f7b481c8-b744-43d4-9faa-<wbr>0e494a308490 -wi-ao---- 200.00g </div><div> 4d27c0e1-546b-48ba-80fb-<wbr>95dcd87b5583 f7b481c8-b744-43d4-9faa-<wbr>0e494a308490 -wi-ao---- 380.00g </div><div> 675d5f06-320a-4236-8d57-<wbr>9ff7cc7eb200 f7b481c8-b744-43d4-9faa-<wbr>0e494a308490 -wi------- 50.00g </div><div> 6d23ec3e-79fb-47f0-85ff-<wbr>08480476ea68 f7b481c8-b744-43d4-9faa-<wbr>0e494a308490 -wi-ao---- 200.00g </div><div> 823a79f2-b09a-4665-9dfa-<wbr>8ccd2850225f f7b481c8-b744-43d4-9faa-<wbr>0e494a308490 -wi------- 128.00m </div><div> 863062c3-b3b0-4494-ad2f-<wbr>76a1c29c069a f7b481c8-b744-43d4-9faa-<wbr>0e494a308490 -wi------- 128.00m </div><div> a2d5f043-55c2-4701-ac8c-<wbr>ccb796a799d1 f7b481c8-b744-43d4-9faa-<wbr>0e494a308490 -wi-ao---- 200.00g </div><div> e26eac74-4445-4f3b-b2f5-<wbr>ef048ae6d91b f7b481c8-b744-43d4-9faa-<wbr>0e494a308490 -wi-ao---- 200.00g </div><div> ids f7b481c8-b744-43d4-9faa-<wbr>0e494a308490 -wi-a----- 128.00m </div><div> inbox f7b481c8-b744-43d4-9faa-<wbr>0e494a308490 -wi-a----- 128.00m </div><div> leases f7b481c8-b744-43d4-9faa-<wbr>0e494a308490 -wi-a----- 2.00g 
>       master                               f7b481c8-b744-43d4-9faa-0e494a308490 -wi-------   1.00g
>       metadata                             f7b481c8-b744-43d4-9faa-0e494a308490 -wi-a----- 512.00m
>       outbox                               f7b481c8-b744-43d4-9faa-0e494a308490 -wi-a----- 128.00m
>       xleases                              f7b481c8-b744-43d4-9faa-0e494a308490 -wi-a-----   1.00g
>
> For some hours (I don't get it anymore now, perhaps a consequence of putting the SD into maintenance) the lvs command also exhibited this kind of output/error:
>
>     WARNING: PV lp71QQ-9ozC-l2Yx-o9mS-ikpg-c8ft-DfUTyK on /dev/c0097b1a-a387-4ffa-a62b-f9e6972197ef/e26eac74-4445-4f3b-b2f5-ef048ae6d91b was already found on /dev/f7b481c8-b744-43d4-9faa-0e494a308490/e26eac74-4445-4f3b-b2f5-ef048ae6d91b.
>     WARNING: PV nRe20Y-XuD7-aFSl-INgl-WlkO-JGhK-zekrOl on /dev/c0097b1a-a387-4ffa-a62b-f9e6972197ef/45e20ac8-f3c6-462b-a2c3-30f0c8262f84 was already found on /dev/f7b481c8-b744-43d4-9faa-0e494a308490/45e20ac8-f3c6-462b-a2c3-30f0c8262f84.
>     WARNING: PV NjcwGm-Rf1H-NV9p-eCjU-qmF8-WCkB-CdzLmN on /dev/c0097b1a-a387-4ffa-a62b-f9e6972197ef/6d23ec3e-79fb-47f0-85ff-08480476ea68 was already found on /dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-47f0-85ff-08480476ea68.
>     WARNING: PV 0dmwuU-1v6k-8weJ-yEg3-4Oup-w4Bp-GlNeRf on /dev/c0097b1a-a387-4ffa-a62b-f9e6972197ef/a2d5f043-55c2-4701-ac8c-ccb796a799d1 was already found on /dev/f7b481c8-b744-43d4-9faa-0e494a308490/a2d5f043-55c2-4701-ac8c-ccb796a799d1.
>     WARNING: PV lp71QQ-9ozC-l2Yx-o9mS-ikpg-c8ft-DfUTyK prefers device /dev/f7b481c8-b744-43d4-9faa-0e494a308490/e26eac74-4445-4f3b-b2f5-ef048ae6d91b because device is used by LV.
>     WARNING: PV nRe20Y-XuD7-aFSl-INgl-WlkO-JGhK-zekrOl prefers device /dev/f7b481c8-b744-43d4-9faa-0e494a308490/45e20ac8-f3c6-462b-a2c3-30f0c8262f84 because device is used by LV.
>     WARNING: PV NjcwGm-Rf1H-NV9p-eCjU-qmF8-WCkB-CdzLmN prefers device /dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-47f0-85ff-08480476ea68 because device is used by LV.
>     WARNING: PV 0dmwuU-1v6k-8weJ-yEg3-4Oup-w4Bp-GlNeRf prefers device /dev/f7b481c8-b744-43d4-9faa-0e494a308490/a2d5f043-55c2-4701-ac8c-ccb796a799d1 because device is used by LV.
>
> I have been able to power on the VMs with the new underlying storage domain active.
>
> Any hint on how to clean this up? I can shut down and restart this host (which at the moment is the only one running), but an answer could help in other similar scenarios too...
>
> Gianluca
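Something along these lines should work once nothing references the LV anymore. This is only an untested sketch: the VG/LV names are the ones from your output above, while %spuuid%, %sduuid% and %imguuid% in the /rhev path are placeholders for the real storage pool, storage domain and image UUIDs.

    # deactivate the leftover LV so it is no longer active/open on the host
    lvchange -an /dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-47f0-85ff-08480476ea68

    # check the attr column: the "a" (active) and "o" (open) flags should be gone
    lvs -o lv_name,lv_attr f7b481c8-b744-43d4-9faa-0e494a308490

    # then remove the LV and, if there is a leftover image directory
    # (as in the destination-leftover case discussed earlier), remove it too
    lvremove f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-47f0-85ff-08480476ea68
    rm -rf /rhev/data-center/%spuuid%/%sduuid%/images/%imguuid%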