[ovirt-users] critical production issue for a vm
Donny Davis
donny at fortnebula.com
Thu Dec 7 13:32:09 UTC 2017
This is just a shot in the dark, but have you tried to use the disk copy
feature? You can copy the disks back to where they were and try starting
the VM.
On Thu, Dec 7, 2017 at 7:48 AM, Nathanaël Blanchet <blanchet at abes.fr> wrote:
>
>
> On 06/12/2017 at 15:56, Maor Lipchuk wrote:
>
>
>
> On Wed, Dec 6, 2017 at 12:30 PM, Nicolas Ecarnot <nicolas at ecarnot.net>
> wrote:
>
>> On 06/12/2017 at 11:21, Nathanaël Blanchet wrote:
>>
>>> Hi all,
>>>
>>> I'm about to lose a very important VM. I shut this VM down for
>>> maintenance and then moved its four disks to a newly created LUN. The VM
>>> has 2 snapshots.
>>>
>>> After the successful move, the VM refuses to start with this message:
>>>
>>> Bad volume specification {u'index': 0, u'domainID':
>>> u'961ea94a-aced-4dd0-a9f0-266ce1810177', 'reqsize': '0', u'format':
>>> u'cow', u'bootOrder': u'1', u'discard': False, u'volumeID':
>>> u'a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b', 'apparentsize': '2147483648',
>>> u'imageID': u'4a95614e-bf1d-407c-aa72-2df414abcb7a',
>>> u'specParams': {}, u'readonly': u'false', u'iface': u'virtio', u'optional':
>>> u'false', u'deviceId': u'4a95614e-bf1d-407c-aa72-2df414abcb7a',
>>> 'truesize': '2147483648', u'poolID':
>>> u'48ca3019-9dbf-4ef3-98e9-08105d396350', u'device': u'disk', u'shared':
>>> u'false', u'propagateErrors': u'off', u'type': u'disk'}.
>>>
>>> I tried to merge the snapshots, export, clone from snapshot, copy disks,
>>> or deactivate disks, and every disk-related action fails.
>>>
>>> I began to dd the LVs to build a new VM intended for a standalone
>>> libvirt/kvm host. The VM more or less boots up, but it is an outdated
>>> version from before the first snapshot. "lvs | grep 961ea94a" lists a lot
>>> of LVs, presumably the disks' snapshots. Which of them must I choose to
>>> get the last state of the VM before shutdown? I'm not used to dealing
>>> with snapshots in virsh/libvirt, so any help would be much appreciated.
>>>
>>
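Not an authoritative answer, but on oVirt block storage each LV carries tags such as IU_<imageID> (the image it belongs to) and PU_<parentVolumeID> (its parent in the snapshot chain), visible via `lvs -o lv_name,lv_tags`. The leaf, i.e. the most recent volume, is the one that no other volume names as its parent. A minimal sketch of that selection, using made-up volume IDs rather than the real ones from this storage domain:

```python
# Sketch: identify the leaf (most recent) volume of an oVirt image
# from LVM PU_ tags. The leaf is the volume that no other volume
# names as its parent. The sample chain below is illustrative,
# not taken from the actual storage domain in this thread.

def find_leaf(volumes):
    """volumes: dict mapping volumeID -> parent volumeID (PU_ tag value)."""
    parents = set(volumes.values())
    leaves = [v for v in volumes if v not in parents]
    if len(leaves) != 1:
        raise ValueError("chain is broken or has multiple heads: %s" % leaves)
    return leaves[0]

# Hypothetical chain: base <- snap1 <- snap2 (the leaf).
# A base volume's PU_ tag is the all-zero UUID.
chain = {
    "base-vol": "00000000-0000-0000-0000-000000000000",
    "snap1-vol": "base-vol",
    "snap2-vol": "snap1-vol",
}
print(find_leaf(chain))  # snap2-vol
```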
> The disks which you want to copy should contain the entire volume chain.
> Based on the log you mentioned, it looks like this image is problematic:
>
> storage domain ID: 961ea94a-aced-4dd0-a9f0-266ce1810177
> imageID: 4a95614e-bf1d-407c-aa72-2df414abcb7a
> volumeID: a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b
>
> What happens if you deactivate this image and then try to run the VM?
> Does it start?
>
> I already tried what you suggest, but the result is the same; moreover,
> this disk is part of a volume group, so I can't boot the VM without it.
>
>>
>>> Is there some lesser-known command to recover this VM in oVirt?
>>>
>>> Thank you in advance.
>>>
>>>
>
>>
>>> _______________________________________________
>>> Users mailing list
>>> Users at ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>> Besides specific oVirt answers, did you try to get information about the
>> snapshot tree with qemu-img info --backing-chain on the appropriate
>> /dev/... logical volume?
>> As you know how to dd from LVs, you could extract every needed snapshot
>> file and rebuild your VM outside of oVirt.
>> Then take your time to re-import it later, safely.
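Building on the qemu-img suggestion above: `qemu-img info --backing-chain --output=json <device>` emits a JSON array of layers, first element being the active (leaf) one. A minimal sketch of ordering the chain from that output, with hand-written sample data standing in for real output (the /dev/vg/* paths are hypothetical):

```python
import json

# Sketch: order a snapshot chain from the JSON that
# `qemu-img info --backing-chain --output=json <device>` produces.
# The sample below is a hand-written stand-in for real output;
# the first array element is the active (leaf) layer.
sample = json.dumps([
    {"filename": "/dev/vg/snap2", "format": "qcow2",
     "backing-filename": "/dev/vg/snap1"},
    {"filename": "/dev/vg/snap1", "format": "qcow2",
     "backing-filename": "/dev/vg/base"},
    {"filename": "/dev/vg/base", "format": "qcow2"},
])

def chain_order(info_json):
    """Return filenames from the active layer down to the base image."""
    return [layer["filename"] for layer in json.loads(info_json)]

for name in chain_order(sample):
    print(name)
```

Knowing the order tells you which LVs to dd out and how to rebase or `qemu-img convert` them into a single standalone image.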
>>
>> --
>> Nicolas ECARNOT
>>
>
>
> --
> Nathanaël Blanchet
>
> Network supervision
> Pôle Infrastructures Informatiques
> 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5
> Tél. 33 (0)4 67 54 84 55
> Fax 33 (0)4 67 54 84 14
> blanchet at abes.fr
>
>
>
>