[Kimchi-devel] [RFC] Guest cloning

Yu Xin Huo huoyuxin at linux.vnet.ibm.com
Tue Oct 14 10:29:50 UTC 2014


Eclipse can run a long task in the background; 'Run in Background' works 
just like Kimchi's async tasks.
It uses a centralized task manager to manage those tasks.

Currently, Kimchi's long tasks are tracked separately by each feature.
The problem is that if the user is not waiting on the original location, 
they have to assume that the task completed if no error occurred, and 
they have to remember which object they were operating on.
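A minimal sketch of what such a centralized tracker could look like
against the REST API (the GET /tasks collection and the 'id', 'status'
and 'target_uri' field names are assumptions for illustration, not the
current API):

    import time

    import requests

    BASE = "https://localhost:8001"  # hypothetical Kimchi URL

    def poll_all_tasks(interval=2):
        """Poll the (assumed) /tasks collection and report every task in
        one central place, instead of each feature tracking its own."""
        while True:
            tasks = requests.get(BASE + "/tasks", verify=False).json()
            for task in tasks:
                # Report each task's id, state and target resource.
                print("task %s [%s] -> %s"
                      % (task["id"], task["status"], task["target_uri"]))
            time.sleep(interval)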




On 10/14/2014 2:47 PM, simonjin wrote:
> On 10/14/2014 12:15 AM, Aline Manera wrote:
>>
>> On 10/10/2014 07:25 AM, Yu Xin Huo wrote:
>>> I am also thinking of adding a centralized area in the header to 
>>> hold all asynchronous tasks.
>>
>> Hrm... Tasks are an internal concept (from a development point of 
>> view). The user doesn't know what they are.
>>
> It'd be good to have this feature to let the user know what's going on 
> in Kimchi, and even better to have a progress bar (UI) for each running 
> task, like guest clone/migrate.
>
> -Simon
>>>
>>>
>>> On 10/3/2014 2:05 AM, Crístian Viana wrote:
>>>> Hi everyone,
>>>>
>>>> I'm presenting here my proposal for the feature "Guest cloning" which
>>>> is expected to be implemented for Kimchi 1.4.
>>>>
>>>>
>>>>     Description
>>>>
>>>> Cloning a guest means creating a new guest with a copy of the settings
>>>> and data of the original guest. All data described by its XML will be
>>>> copied completely, with the following exceptions:
>>>>
>>>>   * name: the new guest will have an automatically generated name.
>>>>     We can append "-clone<n>" to the original guest's name, where
>>>>     <n> is related to the number of clones created from that guest.
>>>>     For example, cloning a guest named "myfedora" will create a new
>>>>     guest named "myfedora-clone1"; if another clone for that same
>>>>     guest is requested, it will be named "myfedora-clone2".
>>>>   * uuid: the new guest will have an automatically generated UUID.
>>>>     We can create a random UUID for every cloned guest.
>>>>   * devices/interface/mac: the new guest will have an automatically
>>>>     generated MAC address for every network interface. We can
>>>>     create random MAC addresses for every cloned guest.
>>>>   * devices/disk: the new guest will have copies of the original
>>>>     guest's disks. Depending on the storage pool type of each disk,
>>>>     a different procedure may be used to copy that disk:
>>>>
>>>>       * DIR, NFS, Logical: the disk file will be copied to a new
>>>>         file with a modified name (e.g. "disk.img" ->
>>>>         "disk-clone1.img") on the same storage pool.
>>>>       * SCSI, iSCSI: the volume data will be copied as a new disk
>>>>         file on the storage pool "default".
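A minimal Python sketch of the name, UUID, MAC and disk-name generation
rules listed above (the helper names and the clone-counting scheme are
illustrative assumptions, not the final implementation):

    import os
    import random
    import uuid

    def clone_name(original, existing_names):
        """Append -clone<n>, using the first n not already taken, e.g.
        "myfedora" -> "myfedora-clone1", then "myfedora-clone2"."""
        n = 1
        while "%s-clone%d" % (original, n) in existing_names:
            n += 1
        return "%s-clone%d" % (original, n)

    def clone_uuid():
        # A random (version 4) UUID for every cloned guest.
        return str(uuid.uuid4())

    def clone_mac():
        # Random MAC under QEMU/KVM's locally administered 52:54:00 prefix.
        return "52:54:00:%02x:%02x:%02x" % tuple(
            random.randint(0, 255) for _ in range(3))

    def clone_disk_path(path, n):
        """Rename a disk copy on the same pool, e.g. "disk.img" ->
        "disk-clone1.img"."""
        root, ext = os.path.splitext(path)
        return "%s-clone%d%s" % (root, n, ext)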
>>>>
>>>>
>>>>     REST API
>>>>
>>>> Only one new REST command will be added.
>>>>
>>>>
>>>>         Syntax
>>>>
>>>> POST /vms/<vm-name>/clone
>>>>
>>>>
>>>>         Parameters:
>>>>
>>>> None.
>>>>
>>>>
>>>>         Return:
>>>>
>>>> An asynchronous Task with "target_uri" containing 
>>>> "/vms/<new-vm-name>".
>>>> As expected with any Task, the cloning process can be tracked by
>>>> checking the corresponding task's status.
>>>>
>>>>
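To make the flow concrete, a short usage sketch (the "running" status
value and the /tasks/<id> path are assumptions; the clone URI follows
the syntax above):

    import time

    import requests

    BASE = "https://localhost:8001"  # hypothetical Kimchi URL

    # Kick off the clone; the response is an asynchronous Task.
    task = requests.post(BASE + "/vms/myfedora/clone", verify=False).json()

    # Track the cloning by checking the task's status until it settles.
    while task["status"] == "running":
        time.sleep(1)
        task = requests.get(BASE + "/tasks/" + str(task["id"]),
                            verify=False).json()

    print("new guest:", task["target_uri"])  # e.g. "/vms/myfedora-clone1"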
>>>>     Discussion
>>>>
>>>> I think the most challenging part of this feature is how to deal with
>>>> different types of disks without prompting the user for any input.
>>>> There are a lot of possibilities and a lot of things that can go wrong
>>>> during the disk copies, but we still need to do whatever is easiest for
>>>> the user. For example, do we really have to create the new disks in
>>>> the same storage pool as the original disks? If that's not possible
>>>> (e.g. no available space), should we create them in another pool with
>>>> available space? Should we ask the user for input (e.g. "Would you
>>>> like to create the new disk on the same storage pool or on a different
>>>> one?")? What about the *SCSI pool types: is it OK to copy the volume
>>>> data to a different storage pool (i.e. "default") like I'm proposing
>>>> here? I couldn't think of a way to add a new volume in an existing pool
>>>> of those types. How about making the *SCSI volumes shareable between
>>>> the original and the new VMs? I don't like that approach because then
>>>> both VMs would use the same disk: whatever is changed in one VM is also
>>>> changed in the other, and that's not a clone to me, that's a
>>>> "hardlink".
>>>>
>>>> Any feedback is welcome!
>>>>
>>>> Best regards,
>>>> Crístian.
>>>>
>>>>
>>>> _______________________________________________
>>>> Kimchi-devel mailing list
>>>> Kimchi-devel at ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/kimchi-devel
>
>
> -- 
> Yun Tong Jin, Simon
> Linux Technology Center, Open Virtualization project
> IBM Systems & Technology Group
> jinyt at cn.ibm.com, Phone: 824549654


