Many thanks to Nicolas who saved my life!
When the export of the disks (base + snapshots) had finished, I managed to
boot up the VM in libvirt/KVM with the top disk snapshot as the main disk.
Then I believed reimporting the VM was the last thing to do, but the
integrated virt-v2v doesn't support importing a VM with external
snapshots, so when the import process finished, I couldn't boot up the VM.
I had to merge the snapshots with qemu tools:
qemu-img rebase -b base.raw snap2.qcow2
qemu-img commit snap2.qcow2
Then I attached the base image of each disk to the libvirt VM before
reimporting it, choosing "preallocated" for raw disks.
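For anyone hitting the same issue, here is a minimal sketch of that merge, assuming a chain of base.raw <- snap1.qcow2 <- snap2.qcow2 (the file names are only illustrative):

# Confirm the chain before touching anything.
qemu-img info --backing-chain snap2.qcow2

# Safe rebase: copies the data of any intermediate snapshot into snap2.qcow2
# so that base.raw becomes its direct backing file.
qemu-img rebase -b base.raw snap2.qcow2

# Fold the top layer back into base.raw.
qemu-img commit snap2.qcow2

# base.raw now holds the flattened, current state of the disk.
qemu-img info base.raw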
This is a manual method, but it was first necessary to find the disk ID
in LVM thanks to ovirt-shell: list disks --query "name=hortensia*"
--show-all.
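For the record, that looks roughly like this (the engine URL and user below are placeholders, not my actual values):

# Connect to the engine API with ovirt-shell (ovirt-engine-cli),
# then run the query at the oVirt shell prompt.
ovirt-shell -c -l "https://engine.example.com/ovirt-engine/api" -u admin@internal
list disks --query "name=hortensia*" --show-all

The output includes the disk and image IDs needed to locate the corresponding logical volumes on the LUN.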
Once I had found the volume group ID corresponding to the VM, I had to
activate all the logical volumes with lvchange -ay /dev/... and then get
the qcow2 information with qemu-img info --backing-chain.
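Concretely, using the IDs from the error message quoted below as an example (and assuming the usual oVirt block-storage layout, where the VG is named after the storage domain UUID, the LVs after the volume UUIDs, and each volume LV carries an IU_<imageID> tag; double-check this on your version):

# Find the LVs belonging to the image, via the oVirt LV tags.
lvs -o lv_name,lv_tags 961ea94a-aced-4dd0-a9f0-266ce1810177 | grep 4a95614e-bf1d-407c-aa72-2df414abcb7a

# Activate them so qemu-img can read them.
lvchange -ay /dev/961ea94a-aced-4dd0-a9f0-266ce1810177/a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b

# Walk the chain to see which volume is the active (top) layer.
qemu-img info --backing-chain /dev/961ea94a-aced-4dd0-a9f0-266ce1810177/a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b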
*In this specific disaster, is there something that can be done within
oVirt itself instead of exporting/reimporting, knowing that the VM disks
on the LUN are intact and that the root cause is that the references to
some disks are broken in the database?*
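(I am not familiar with the engine database schema, so the following is only a guess at where such references could be inspected; the table and column names are assumptions to double-check, not something I have verified:)

# On the engine host, look at how the volumes of one image are recorded;
# the "images" table and its image_group_id/parentid/imagestatus columns
# are assumed names here.
sudo -u postgres psql engine -c "select image_guid, parentid, imagestatus from images where image_group_id = '4a95614e-bf1d-407c-aa72-2df414abcb7a';"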
On 06/12/2017 at 11:30, Nicolas Ecarnot wrote:
On 06/12/2017 at 11:21, Nathanaël Blanchet wrote:
> Hi all,
>
> I'm about to lose one very important VM. I shut down this VM for
> maintenance and then I moved the four disks to a newly created LUN.
> This VM has 2 snapshots.
>
> After the successful move, the VM refuses to start with this message:
> Bad volume specification {u'index': 0, u'domainID':
> u'961ea94a-aced-4dd0-a9f0-266ce1810177', 'reqsize': '0', u'format':
> u'cow', u'bootOrder': u'1', u'discard': False, u'volumeID':
> u'a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b', 'apparentsize': '2147483648',
> u'imageID': u'4a95614e-bf1d-407c-aa72-2df414abcb7a', u'specParams': {},
> u'readonly': u'false', u'iface': u'virtio', u'optional': u'false',
> u'deviceId': u'4a95614e-bf1d-407c-aa72-2df414abcb7a', 'truesize':
> '2147483648', u'poolID': u'48ca3019-9dbf-4ef3-98e9-08105d396350',
> u'device': u'disk', u'shared': u'false', u'propagateErrors': u'off',
> u'type': u'disk'}.
>
> I tried to merge the snapshots, export, clone from snapshot, copy
> disks, or deactivate disks, and every action fails as soon as a disk is
> involved.
>
> I began to dd the LV group to get a new VM intended for a standalone
> libvirt/kvm; the VM more or less boots up, but it is an outdated version
> from before the first snapshot. There are a lot of disks when doing an
> "lvs | grep 961ea94a", supposedly the disk snapshots. Which of them must
> I choose to get the last state of the VM before shutdown? I'm not used
> to dealing with snapshots via virsh/libvirt, so some help would be much
> appreciated.
>
> Is there some unknown command to recover this VM into oVirt?
>
> Thank you in advance.
Besides specific oVirt answers, did you try to get information about
the snapshot tree with qemu-img info --backing-chain on the appropriate
/dev/... logical volume?
As you know how to dd from LVs, you could extract every needed snapshot
file and rebuild your VM outside of oVirt.
Then take the time to re-import it later, safely.
--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastructures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr