[Users] SOLVED: Recovering VMs when the engine is lost

Itamar Heim iheim at redhat.com
Thu Nov 29 10:52:48 UTC 2012


On 11/28/2012 07:36 AM, Joern Ott wrote:
> Hello Itamar and Ayal,
>
> I tried to recover the VMs by using the first method (converting the master into an export domain) but after the changes, I couldn't add the export domain as it was "an incompatible version". Therefore I explored the other way of restoring:
> - First, I copied the master/vms folder from the master storage to my local host (and renamed all the .ovf files to .xml so I could double-click them and view them in an XML browser on my stupid Windows box)
> - As we store the ovirt NFS export folder on /data/ovirt, I connected to all nodes and moved the $UUID folder from /data/ovirt to /data/old.
> - I added all the CPU nodes and NFS storage domains to the new cluster
> - I recreated the virtual machines based on the specs in the XML files and our internal VM specifications, with exactly the same MAC addresses (we specify our own) and disk sizes, and with the disks residing on the same storage as the old disks
> - Looking at the XML files of the new VMs, I figured out the folders and file names of the new disks and then moved the old disks over:
> cd /data/ovirt/$UUID/images/$FOLDERUUID
> mv /data/old/images/$OLDFOLDERUUID/$OLDDISKUUID ./$DISKUUID
> mv /data/old/images/$OLDFOLDERUUID/$OLDDISKUUID.meta ./$DISKUUID.meta
> mv /data/old/images/$OLDFOLDERUUID/$OLDDISKUUID.lease ./$DISKUUID.lease
> - After doing this, the new VM starts nicely using the disks with all the content on them
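The relink steps above can be sketched as a small shell function; the directory layout and the .meta/.lease companion files follow the commands quoted here, while the UUIDs are placeholders you would read out of the old and new OVF/XML files:

```shell
#!/bin/sh
# Relink an old disk image (data file, .meta, .lease) under the UUIDs of a
# freshly created VM disk.  All paths and UUIDs are placeholders taken from
# the old and new OVF/XML files.
relink_disk() {
    new_dir=$1   # /data/ovirt/$UUID/images/$FOLDERUUID of the NEW disk
    old_dir=$2   # /data/old/images/$OLDFOLDERUUID of the OLD disk
    new_uuid=$3  # volume UUID of the NEW disk
    old_uuid=$4  # volume UUID of the OLD disk
    # Move the image itself plus its .meta and .lease companions.
    for ext in "" .meta .lease; do
        mv "$old_dir/$old_uuid$ext" "$new_dir/$new_uuid$ext"
    done
}
```

After running this for each disk, the new VM should pick up the old contents on its next start, exactly as described above.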
>
> Of course, you can also choose other storage domains for the new disks, but then you have to rsync -avS the sparse files over to the new storage, and this takes hours for 600G disks.
>
> Caveats: There were a few machines I had actually created from a template. In the XML files for those machines, we had the wrong MAC addresses for the NICs. I assume these were the ones generated during the template clone and not the final ones I set after cloning the VMs. But as we have nice host specifications as YAML files for every host, I relied more on them than on the XML.
>
> In the end, it took me around 3 minutes of manual work to recreate a VM, move the disks over and start it. Automating that with a bit of YAML and XML parsing magic and recreating the VMs automatically via script could bring it down to under 1 minute per VM (except when moving disks from storage to storage).
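The XML-parsing part of such a script could look roughly like this. It assumes the saved OVF files reference each disk via an ovf:fileRef="imagegroupUUID/volumeUUID" attribute; that attribute name is an assumption about the OVF layout of this oVirt era, so verify it against your own files first:

```shell
#!/bin/sh
# Pull the imagegroup/volume UUID pairs for each disk out of a saved OVF/XML
# file.  ASSUMPTION: disks are referenced as ovf:fileRef="<group>/<volume>";
# check a real OVF from your setup before relying on this.
disk_refs() {
    grep -o 'ovf:fileRef="[^"]*"' "$1" | sed 's/ovf:fileRef="//; s/"$//'
}
```

Each line of output ("groupUUID/volumeUUID") can then be fed to the relink step for that disk.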
>
> Kind regards
> Jörn
>
>> -----Original Message-----
>> From: Itamar Heim [mailto:iheim at redhat.com]
>> Sent: Freitag, 23. November 2012 15:34
>> To: Joern Ott
>> Cc: users at ovirt.org; Ayal Baron
>> Subject: Re: [Users] Recovering VMs when the engine is lost
>>
>> On 11/23/2012 04:16 PM, Joern Ott wrote:
>>> Hello Itamar,
>>>
>>>> -----Original Message-----
>>>> From: Itamar Heim [mailto:iheim at redhat.com]
>>>> Sent: Freitag, 23. November 2012 14:11
>>>> To: Joern Ott
>>>> Cc: users at ovirt.org; Ayal Baron
>>>> Subject: Re: [Users] Recovering VMs when the engine is lost
>>>>
>>>> On 11/23/2012 02:58 PM, Joern Ott wrote:
>>>>> Hey Itamar,
>>>>>
>>>>> this is an NFS storage.
>>>>
>>>> we don't currently support importing an existing data storage domain.
>>>> so you need to create a new nfs data domain.
>>>>
>> i suggest two 'hacky' options after that:
>> 1. less hacky - convert the current nfs data domain to an export domain,
>> and import the VMs from it.
>>>
>>> Is there any info on how to do this? I didn't find much info on export
>>> domains on the wiki. The last time I tried to move thin disks with the ovirt GUI
>>> from one storage to another, they were expanded and filled up the
>>> destination. In the database, the disks were shown to be on the destination
>>> host and were marked as "invalid", so this procedure also has some risks.
>>
>> check this one, and please wikify for others if you find it helpful:
>> http://comments.gmane.org/gmane.comp.emulators.ovirt.user/4428
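For reference, the conversion hinted at in that thread comes down to editing the domain's dom_md/metadata file on the NFS share. The sketch below is an assumption based on the oVirt 3.x metadata layout (CLASS and POOL_UUID fields); double-check the field names against your own metadata file and keep a backup before touching it:

```shell
#!/bin/sh
# Rough sketch of turning a data domain's metadata into export-domain form.
# ASSUMPTION: field names (CLASS=Data -> CLASS=Backup, POOL_UUID cleared)
# match the oVirt 3.x dom_md/metadata layout -- verify against your file,
# and keep a backup copy first.
convert_to_export() {
    md=$1   # path to <domainUUID>/dom_md/metadata on the NFS share
    cp "$md" "$md.bak"
    sed -i 's/^CLASS=Data$/CLASS=Backup/; s/^POOL_UUID=.*/POOL_UUID=/' "$md"
}
```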
>>
>>>
>>>> 2. more hacky - recreate the VMs on the new nfs data domain with the
>>>> exact same details, and copy the old disks over the newly created disks.
>>>>
>>>> the benefit of the latter is that you may be able to move the data instead
>>>> of doing a full-blown copy, while in option 1 (which is much simpler/less
>>>> error prone), you need to import all VMs again.
>>>
>>> The advantage of this would be speed, as most disks are thin-provisioned.
>>> So I think I will go for this option.
>>
>> just to make it clear - this is risky, and you should know what you are doing
>> when copying/moving files over the new layout, which must be identical to
>> previous definitions.
>>
>>>>
>>>> there is current work to detect disks from a storage domain:
>>>> http://wiki.ovirt.org/wiki/Features/Domain_Scan
>>>>
>>>> when it is ready (and even better, when importing an existing storage
>>>> domain is supported), this will be easier.
>>>>
>>>>>
>>>>> KR
>>>>> Jörn
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Itamar Heim [mailto:iheim at redhat.com]
>>>>>> Sent: Freitag, 23. November 2012 10:32
>>>>>> To: Joern Ott
>>>>>> Cc: users at ovirt.org; Ayal Baron
>>>>>> Subject: Re: [Users] Recovering VMs when the engine is lost
>>>>>>
>>>>>> On 11/23/2012 11:03 AM, Joern Ott wrote:
>>>>>>> Hello everybody,
>>>>>>>
>>>>>>> I managed to re-install the server the oVirt engine was running on
>>>>>>> in my test lab. All the CPU nodes and storage domains are still there
>>>>>>> and the VMs are still running without problems. Is there a way I can
>>>>>>> attach the still existing nodes to the new oVirt engine and
>>>>>>> recover/import the VMs?
>>>>>>>
>>>>>>> Alternatively, is there a way I can import the VMs' disks? I still
>>>>>>> know how the data center and clusters were set up and the specs
>>>>>>> for the VMs are also documented, so I'd like to re-attach the
>>>>>>> disks, which would save me some time compared to the re-installs.
>>>>>>>
>>>>>>
>>>>>> is this an iscsi or nfs data storage domain?
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>> Kind regards
>>> Jörn
>>>
>>
>>
>
>

while hacky, i wonder if worth wiki-fying.



