[ovirt-users] leftover of disk moving operation

Gianluca Cecchi gianluca.cecchi at gmail.com
Tue Jan 30 15:25:53 UTC 2018


On Tue, Jan 30, 2018 at 3:14 PM, Elad Ben Aharon <ebenahar at redhat.com>
wrote:

> In a case of disk migration failure with leftover LV on the destination
> domain, lvremove is what needed. Also, make sure to remove the image
> directory on the destination domain (located under
> /rhev/data-center/%spuuid%/%sduuid%/images/)
>
>
Ok, exactly the two steps I did after noticing the broken link inside the
image directory; I had suspected something might also need to be done at the
RDBMS level. Thanks for the confirmation (the commands are sketched below for
reference).
The bad part was that the VM crashed because of this error.
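
For anyone hitting the same failure, the destination-side cleanup was roughly
the following (the UUIDs are placeholders, adjust them to your own pool,
domain and image, and double-check with lvs before removing anything):

 # remove the leftover LV from the destination storage domain
 lvremove -f <dest_sd_uuid>/<leftover_lv_uuid>
 # remove the leftover image directory under the data-center mount
 rm -rf /rhev/data-center/<sp_uuid>/<dest_sd_uuid>/images/<image_group_uuid>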

Not a big problem in my case, as this is a test environment.
I had to move many disk images from SAN to SAN, about 200 GB each, so I
powered off the VMs and then moved the disks.
I didn't hit the problem at the destination anymore, but I sometimes got the
opposite: the LV could not be removed at the source:

command HSMGetAllTasksStatusesVDS failed: Cannot remove Logical Volume:
(['Cannot remove Logical Volume:
(u\'f7b481c8-b744-43d4-9faa-0e494a308490\',
"[\'6d23ec3e-79fb-47f0-85ff-08480476ea68\']")'],)

and I see that apparently the LV remains active/open ("ao" flag) at the
source...
Even after putting the source storage domain, by now empty, into
maintenance, on the host side I get this kind of output:

 # lvs f7b481c8-b744-43d4-9faa-0e494a308490
  LV                                   VG                                   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  45e20ac8-f3c6-462b-a2c3-30f0c8262f84 f7b481c8-b744-43d4-9faa-0e494a308490 -wi-ao---- 200.00g
  4d27c0e1-546b-48ba-80fb-95dcd87b5583 f7b481c8-b744-43d4-9faa-0e494a308490 -wi-ao---- 380.00g
  675d5f06-320a-4236-8d57-9ff7cc7eb200 f7b481c8-b744-43d4-9faa-0e494a308490 -wi-------  50.00g
  6d23ec3e-79fb-47f0-85ff-08480476ea68 f7b481c8-b744-43d4-9faa-0e494a308490 -wi-ao---- 200.00g
  823a79f2-b09a-4665-9dfa-8ccd2850225f f7b481c8-b744-43d4-9faa-0e494a308490 -wi------- 128.00m
  863062c3-b3b0-4494-ad2f-76a1c29c069a f7b481c8-b744-43d4-9faa-0e494a308490 -wi------- 128.00m
  a2d5f043-55c2-4701-ac8c-ccb796a799d1 f7b481c8-b744-43d4-9faa-0e494a308490 -wi-ao---- 200.00g
  e26eac74-4445-4f3b-b2f5-ef048ae6d91b f7b481c8-b744-43d4-9faa-0e494a308490 -wi-ao---- 200.00g
  ids                                  f7b481c8-b744-43d4-9faa-0e494a308490 -wi-a----- 128.00m
  inbox                                f7b481c8-b744-43d4-9faa-0e494a308490 -wi-a----- 128.00m
  leases                               f7b481c8-b744-43d4-9faa-0e494a308490 -wi-a-----   2.00g
  master                               f7b481c8-b744-43d4-9faa-0e494a308490 -wi-------   1.00g
  metadata                             f7b481c8-b744-43d4-9faa-0e494a308490 -wi-a----- 512.00m
  outbox                               f7b481c8-b744-43d4-9faa-0e494a308490 -wi-a----- 128.00m
  xleases                              f7b481c8-b744-43d4-9faa-0e494a308490 -wi-a-----   1.00g


and for some hours (I no longer see it, perhaps as a consequence of putting
the SD into maintenance) the lvs command also exhibited this kind of
output/error:


  WARNING: PV lp71QQ-9ozC-l2Yx-o9mS-ikpg-c8ft-DfUTyK on
/dev/c0097b1a-a387-4ffa-a62b-f9e6972197ef/e26eac74-4445-4f3b-b2f5-ef048ae6d91b
was already found on
/dev/f7b481c8-b744-43d4-9faa-0e494a308490/e26eac74-4445-4f3b-b2f5-ef048ae6d91b.
  WARNING: PV nRe20Y-XuD7-aFSl-INgl-WlkO-JGhK-zekrOl on
/dev/c0097b1a-a387-4ffa-a62b-f9e6972197ef/45e20ac8-f3c6-462b-a2c3-30f0c8262f84
was already found on
/dev/f7b481c8-b744-43d4-9faa-0e494a308490/45e20ac8-f3c6-462b-a2c3-30f0c8262f84.
  WARNING: PV NjcwGm-Rf1H-NV9p-eCjU-qmF8-WCkB-CdzLmN on
/dev/c0097b1a-a387-4ffa-a62b-f9e6972197ef/6d23ec3e-79fb-47f0-85ff-08480476ea68
was already found on
/dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-47f0-85ff-08480476ea68.
  WARNING: PV 0dmwuU-1v6k-8weJ-yEg3-4Oup-w4Bp-GlNeRf on
/dev/c0097b1a-a387-4ffa-a62b-f9e6972197ef/a2d5f043-55c2-4701-ac8c-ccb796a799d1
was already found on
/dev/f7b481c8-b744-43d4-9faa-0e494a308490/a2d5f043-55c2-4701-ac8c-ccb796a799d1.
  WARNING: PV lp71QQ-9ozC-l2Yx-o9mS-ikpg-c8ft-DfUTyK prefers device
/dev/f7b481c8-b744-43d4-9faa-0e494a308490/e26eac74-4445-4f3b-b2f5-ef048ae6d91b
because device is used by LV.
  WARNING: PV nRe20Y-XuD7-aFSl-INgl-WlkO-JGhK-zekrOl prefers device
/dev/f7b481c8-b744-43d4-9faa-0e494a308490/45e20ac8-f3c6-462b-a2c3-30f0c8262f84
because device is used by LV.
  WARNING: PV NjcwGm-Rf1H-NV9p-eCjU-qmF8-WCkB-CdzLmN prefers device
/dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-47f0-85ff-08480476ea68
because device is used by LV.
  WARNING: PV 0dmwuU-1v6k-8weJ-yEg3-4Oup-w4Bp-GlNeRf prefers device
/dev/f7b481c8-b744-43d4-9faa-0e494a308490/a2d5f043-55c2-4701-ac8c-ccb796a799d1
because device is used by LV.
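
If I understand these warnings correctly, they appear because some guests use
LVM inside their disks, so after the copy the host's LVM scan sees the same
guest PV UUID on both the source and the destination image LV. If acceptable
in your setup, a host-side LVM filter that only scans the real multipath
devices might avoid the duplicate-PV messages; a rough example for
/etc/lvm/lvm.conf (the accept pattern is only illustrative and depends on
your device naming):

 devices {
     # accept only multipath devices, reject everything else (adapt the pattern)
     global_filter = [ "a|^/dev/mapper/mpath.*|", "r|.*|" ]
 }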

I have been able to power on VMs with the new underlying storage domain
active.

Any hint on how to clean this up? I can shut down and restart this host
(which at the moment is the only one running), but a cleaner procedure would
also be of help in other similar scenarios...
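
What I was thinking of trying first, before a full host reboot (plain LVM
commands, so please correct me if this is unsafe while vdsm is running; I
would first make sure nothing still holds the devices open, e.g. with
dmsetup info -c):

 # deactivate the leftover LV that could not be removed
 # (VG/LV names taken from the lvs output above)
 lvchange -an f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-47f0-85ff-08480476ea68
 # then retry the removal
 lvremove f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-47f0-85ff-08480476ea68
 # repeat for the other LVs still showing the "ao" attribute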

Gianluca