[Kimchi-devel] [RFC] Guest cloning
Aline Manera
alinefm at linux.vnet.ibm.com
Mon Oct 13 16:13:58 UTC 2014
On 10/10/2014 06:34 AM, Yu Xin Huo wrote:
> When the user clicks on the clone button below, a request is sent to
> the server. The server should pre-check whether all relevant storage
> pools have enough space to copy the VM volumes.
>
>
> If the pre-check fails, respond with a message indicating the VM
> volumes that need to be re-assigned to another pool. The UI then pops
> up the dialog below.
> Once the user has selected the pools and clicked the 'Clone' button,
> re-send the request with [{disk1: Pool-A},{disk2: Pool-B},{disk3: Pool-C}]
>
There may be some time between the user clicking "Clone" and getting the
response about pool capacity.
What will be shown to the user during that time? A loading icon somewhere?
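
Also, just to make sure we are talking about the same payload: the
re-sent request could look something like the sketch below. This is only
an illustration; the "target_pools" key and the disk/pool names are
placeholders, nothing is decided yet.

    import json

    # Hypothetical shape of the re-sent request body: maps each disk that
    # failed the pre-check to the pool the user picked in the dialog.
    clone_request = {'target_pools': {'disk1': 'Pool-A',
                                      'disk2': 'Pool-B',
                                      'disk3': 'Pool-C'}}
    print(json.dumps(clone_request, indent=2))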
>
> Once the cloning process is triggered.
>
>
I think we can display a new VM box with a loading icon on its "Livetitle"
saying "Clone in progress..." or something like that.
I don't think we need to display the progress messages to the user. At
least, I would only care about the new cloned VM.
> On 10/3/2014 2:05 AM, Crístian Viana wrote:
>> Hi everyone,
>>
>> I'm presenting here my proposal for the feature "Guest cloning" which
>> is expected to be implemented for Kimchi 1.4.
>>
>>
>> Description
>>
>> Cloning a guest means creating a new guest with a copy of the settings
>> and data of the original guest. All data described by its XML will be
>> copied completely, with the following exceptions:
>>
>> * name: the new guest will have an automatically generated name. We
>> can append "-clone<n>" to the original guest's name, where <n> is
>> related to the number of clones created from that guest. For
>> example, cloning a guest named "myfedora" will create a new guest
>> named "myfedora-clone1"; if another clone for that same guest is
>> requested, it will be named "myfedora-clone2".
>> * uuid: the new guest will have an automatically generated UUID. We
>> can create a random UUID for every cloned guest.
>> * devices/interface/mac: the new guest will have an automatically
>> generated MAC address for every network interface. We can create
>> random MAC addresses for every cloned guest.
>> * devices/disk: the new guest will have copies of the original
>> guest's disks. Depending on the storage pool type of each disk, a
>> different procedure may be used to copy that disk:
>>
>> * DIR, NFS, Logical: the disk file will be copied to a new file
>> with a modified name (e.g. "disk.img" -> "disk-clone1.img")
>> on the same storage pool.
>> * SCSI, iSCSI: the volume data will be copied as a new disk
>> file on the storage pool "default".
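
A rough sketch of how the auto-generated name, UUID and MAC could be
produced, and of how a DIR/NFS/Logical volume could be copied inside the
same pool with libvirt's createXMLFrom (virStorageVolCreateXMLFrom).
This is only to illustrate the idea, not actual Kimchi code; the pool and
volume names below are examples.

    import random
    import uuid

    import libvirt


    def clone_name(original, existing_names):
        # Append "-clone<n>", using the first <n> not taken yet.
        n = 1
        while '%s-clone%d' % (original, n) in existing_names:
            n += 1
        return '%s-clone%d' % (original, n)


    def random_mac():
        # 52:54:00 is the locally administered prefix used by QEMU/KVM.
        return '52:54:00:%02x:%02x:%02x' % tuple(
            random.randint(0, 0xff) for _ in range(3))


    new_name = clone_name('myfedora', ['myfedora', 'myfedora-clone1'])
    new_uuid = str(uuid.uuid4())
    new_mac = random_mac()

    # Copying a volume within the same (DIR/NFS/Logical) pool:
    conn = libvirt.open('qemu:///system')
    pool = conn.storagePoolLookupByName('default')
    orig = pool.storageVolLookupByName('disk.img')
    vol_xml = """
    <volume>
      <name>disk-clone1.img</name>
      <capacity>%d</capacity>
    </volume>""" % orig.info()[1]  # info()[1] is the capacity in bytes
    pool.createXMLFrom(vol_xml, orig, 0)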
>>
>>
>> REST API
>>
>> Only one new REST command will be added.
>>
>>
>> Syntax
>>
>> POST /vms/<vm-name>/clone
>>
>>
>> Parameters:
>>
>> None.
>>
>>
>> Return:
>>
>> An asynchronous Task whose "target_uri" contains "/vms/<new-vm-name>".
>> As expected with any Task, the cloning process can be tracked by
>> checking the corresponding task's status.
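
From the client side, triggering the clone and following the Task could
look like the sketch below. This is just an illustration using the
"requests" library; host, port and authentication should be adjusted as
needed, and I'm assuming the usual /tasks/<id> lookup.

    import time

    import requests

    base = 'https://localhost:8001'

    # POST /vms/<vm-name>/clone returns the new Task resource.
    task = requests.post(base + '/vms/myfedora/clone', verify=False).json()

    # Poll the Task until it leaves the "running" state.
    while task['status'] == 'running':
        time.sleep(1)
        task = requests.get(base + '/tasks/%s' % task['id'],
                            verify=False).json()

    # On success, 'target_uri' points to "/vms/<new-vm-name>".
    print(task['status'], task['target_uri'])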
>>
>>
>> Discussion
>>
>> I think the most challenging part of this feature is how to deal with
>> different types of disks while not prompting the user with any input.
>> There are a lot of possibilities and a lot of things that can go wrong
>> during the disk copies, but we still need to do whatever is easiest for
>> the user. For example, do we really have to create the new disks in
>> the same storage pool as the original disks? If that's not possible
>> (e.g. no available space), should we create them in another pool with
>> available space? Should we ask for any input from the user (e.g. "Would
>> you like to create the new disk on the same storage pool or on a
>> different one?")? What about the *SCSI pool types: is it OK to copy the
>> volume data to a different storage pool (i.e. "default") like I'm proposing
>> here? I couldn't think of a way to add a new volume in an existing pool
>> of those types. How about making the *SCSI volumes shareable between
>> the original and the new VMs? I don't like that approach because then
>> both VMs will use the same disk, whatever is changed in one VM is also
>> changed in the other one, and that's not a clone for me, that's a
>> "hardlink".
>>
>> Any feedback is welcome!
>>
>> Best regards,
>> Crístian.
>>
>>