[PATCH v3 0/4] CDROM Management

From: Daniel Henrique Barboza <danielhb@linux.vnet.ibm.com>

This patch set implements host storage device management. It implements full
CDROM device add, remove and update functionality, and basic disk functionality.

To test this contribution:

- get info on all storages (cdroms and disks) of a VM:
curl -u <user> -H "Content-Type: application/json" -H "Accept: application/json" http://localhost:8000/vms/<vm_name>/storages -X GET

- add a new cdrom to a VM:
curl -u <user> -H "Content-Type: application/json" -H "Accept: application/json" http://localhost:8000/vms/<vm_name>/storages -X POST -d '{"type": "cdrom", "path": "<path_to_iso>"}'

- get info on a specific cdrom device:
curl -u <user> -H "Content-Type: application/json" -H "Accept: application/json" http://localhost:8000/vms/<vm_name>/storages/<dev_name> -X GET

- delete a cdrom device from a VM:
curl -u <user> -H "Content-Type: application/json" -H "Accept: application/json" http://localhost:8000/vms/<vm_name>/storages/<dev_name> -X DELETE

- update a cdrom device of a VM:
curl -u <user> -H "Content-Type: application/json" -H "Accept: application/json" http://localhost:8000/vms/<vm_name>/storages/<dev_name> -X PUT -d '{"path": "<path_to_iso>"}'

Changelog:

V3:
- Use lxml to parse the XML information in model/vms.py
- Address comments/suggestions from the ML in other files

V2:
- Add devices to mockmodel and add test cases to the REST API
- Assign a name automatically to new devices if not passed by the user
- Fix minor errors

Rodrigo Trujillo (4):
  CDROM Management: Add storage sub-collection and sub-resource to guest resource
  CDROM Management: Update controller and API.json for guest storages
  CDROM Management: Devices management model implementation
  CDROM Management: Guest vm storage devices mockmodel and rest api test cases

 docs/API.md                       |  21 +++++
 src/kimchi/API.json               |  33 +++++++
 src/kimchi/control/vm/storages.py |  49 ++++++++++
 src/kimchi/mockmodel.py           |  66 ++++++++++++++
 src/kimchi/model/vms.py           | 184 +++++++++++++++++++++++++++++++++++++-
 src/kimchi/xmlutils.py            |   5 ++
 tests/test_rest.py                |  74 +++++++++++++++
 7 files changed, 428 insertions(+), 4 deletions(-)
 create mode 100644 src/kimchi/control/vm/storages.py

-- 
1.8.3.1
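For reference, the curl calls above can also be driven programmatically. This is a minimal Python sketch of the same five operations; the base URL, VM name, device name, and iso path are placeholders, and only the endpoint layout is taken from the commands above:

```python
# Minimal sketch of the /vms/<vm>/storages REST calls shown above.
# BASE and the argument values are placeholders, not real deployments.
BASE = "http://localhost:8000"


def storage_requests(vm_name, dev_name, iso_path):
    """Return (method, url, body) tuples mirroring the curl examples."""
    coll = "%s/vms/%s/storages" % (BASE, vm_name)
    res = "%s/%s" % (coll, dev_name)
    return [
        ("GET", coll, None),                                  # list storages
        ("POST", coll, {"type": "cdrom", "path": iso_path}),  # attach cdrom
        ("GET", res, None),                                   # lookup device
        ("PUT", res, {"path": iso_path}),                     # change media
        ("DELETE", res, None),                                # remove device
    ]


if __name__ == "__main__":
    for method, url, body in storage_requests("myvm", "hdb", "/tmp/a.iso"):
        print(method, url, body)
```

Any HTTP client (curl, requests, urllib) can then issue these tuples against a running Kimchi server with basic auth, as in the curl examples.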

From: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>

This patch changes API.md with the new storage sub-collection/sub-resource information.

Signed-off-by: Royce Lv <lvroyce@linux.vnet.ibm.com>
Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
---
 docs/API.md | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/docs/API.md b/docs/API.md
index 48a293f..b1a49cd 100644
--- a/docs/API.md
+++ b/docs/API.md
@@ -115,6 +115,27 @@ Represents a snapshot of the Virtual Machine's primary monitor.
 * **GET**: Redirect to the latest screenshot of a Virtual Machine in PNG format
 
+
+### Sub-collection: Virtual Machine storages
+**URI:** /vms/*:name*/storages
+* **GET**: Retrieve a summarized list of all storages of specified guest
+* **POST**: Attach a new storage or virtual drive to specified virtual machine.
+    * dev: The name of the storage in the vm.
+    * type: The type of the storage (cdrom, disk).
+    * path: Path of cdrom iso or disk.
+
+### Sub-resource: storage
+**URI:** /vms/*:name*/storages/*:dev*
+* **GET**: Retrieve storage information
+    * dev: The name of the storage in the vm.
+    * type: The type of the storage (cdrom, disk).
+    * path: Path of cdrom iso or disk.
+* **PUT**: Update storage information
+    * path: Path of cdrom iso or disk.
+* **DELETE**: Remove the storage. Simulate eject a cdrom
+
+
+
 ### Collection: Templates
 
 **URI:** /templates
--
1.8.3.1
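As a side note on the fields documented above: a GET on a storage sub-resource returns the dev/type/path triple. This is a hedged sketch of that shape; the concrete device name and path are invented for illustration:

```python
import json

# Hypothetical example of the documented storage sub-resource fields.
# The device name and iso path here are invented, not from a real guest.
storage_info = {
    "dev": "hdb",             # the name of the storage in the vm
    "type": "cdrom",          # cdrom or disk
    "path": "/isos/f19.iso",  # path of cdrom iso or disk
}

# The REST layer would serve this dict as JSON:
payload = json.dumps(storage_info, sort_keys=True)
```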

Thanks Rodrigo and Daniel for helping on this task; some comments below.

After considering CDROM management a second time: the common use cases for a CDROM are 'insert' and 'eject'. That is, the CDROM device stays there and only the media in it changes. For Kimchi, that means PUTting the path of the CDROM. I think it is rare to add a new CDROM device or remove one. For hard disks, though, I agree that we want device add and remove.

What do you think?

On 02/12/2014 10:38, Daniel Barboza wrote:
From: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
This patch changes API.md with new storage sub-collection/sub-resource information.
Signed-off-by: Royce Lv <lvroyce@linux.vnet.ibm.com>
Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
---
 docs/API.md | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/docs/API.md b/docs/API.md
index 48a293f..b1a49cd 100644
--- a/docs/API.md
+++ b/docs/API.md
@@ -115,6 +115,27 @@ Represents a snapshot of the Virtual Machine's primary monitor.
* **GET**: Redirect to the latest screenshot of a Virtual Machine in PNG format
+
+### Sub-collection: Virtual Machine storages
+**URI:** /vms/*:name*/storages
+* **GET**: Retrieve a summarized list of all storages of specified guest
+* **POST**: Attach a new storage or virtual drive to specified virtual machine.
+    * dev: The name of the storage in the vm.
+    * type: The type of the storage (cdrom, disk).
+    * path: Path of cdrom iso or disk.
+
+### Sub-resource: storage
+**URI:** /vms/*:name*/storages/*:dev*
+* **GET**: Retrieve storage information
+    * dev: The name of the storage in the vm.
+    * type: The type of the storage (cdrom, disk).
+    * path: Path of cdrom iso or disk.
+* **PUT**: Update storage information
+    * path: Path of cdrom iso or disk.
+* **DELETE**: Remove the storage. Simulate eject a cdrom

I think for DELETE we mean to delete the device, so eject will be simulated by PUT '' to this device.

+
+
+
 ### Collection: Templates
**URI:** /templates

Hello Royce,

On 02/12/2014 01:46 AM, Royce Lv wrote:
Thanks Rodrigo and Daniel to help on this task, some comments below.
After considering CDROM manage for second time,common usecases for CDROM are 'insert' and 'eject'. It means the CDROM device is there, just change the media in it.
For Kimchi that is to PUT the path of CDROM. Basically, I think it is rare to add a new CDROM device or remove a CDROM device. But I agree with for hard disks we want devices add or remove.
What do you think?
I agree that the "eject" function is better represented by a PUT with a blank path. I'll make the required adjustments to reflect this.

About the removal of a CDROM device: I agree that it is kind of rare too, but shouldn't we support it anyway? I am a little worried about removing this support only to have someone miss it later down the road and ask to add it back again.

I've read your comments about supporting CDROM only and leaving hard disks out. I haven't participated in the RFC or any discussion about the design of this feature, but at this point, and I beg your pardon if I sound "lazy", it is easier and faster to just leave the hard disk support in this contribution than to chop up the patches and make it CDROM only. If we realize that the hard disk support provided in these patches is incomplete, we can complement it with further patches in the future.

How does that sound to you?
On 02/12/2014 10:38, Daniel Barboza wrote:
From: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
This patch changes API.md with new storage sub-collection/sub-resource information.
Signed-off-by: Royce Lv <lvroyce@linux.vnet.ibm.com>
Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
---
 docs/API.md | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/docs/API.md b/docs/API.md
index 48a293f..b1a49cd 100644
--- a/docs/API.md
+++ b/docs/API.md
@@ -115,6 +115,27 @@ Represents a snapshot of the Virtual Machine's primary monitor.
* **GET**: Redirect to the latest screenshot of a Virtual Machine in PNG format
+
+### Sub-collection: Virtual Machine storages
+**URI:** /vms/*:name*/storages
+* **GET**: Retrieve a summarized list of all storages of specified guest
+* **POST**: Attach a new storage or virtual drive to specified virtual machine.
+    * dev: The name of the storage in the vm.
+    * type: The type of the storage (cdrom, disk).
+    * path: Path of cdrom iso or disk.
+
+### Sub-resource: storage
+**URI:** /vms/*:name*/storages/*:dev*
+* **GET**: Retrieve storage information
+    * dev: The name of the storage in the vm.
+    * type: The type of the storage (cdrom, disk).
+    * path: Path of cdrom iso or disk.
+* **PUT**: Update storage information
+    * path: Path of cdrom iso or disk.
+* **DELETE**: Remove the storage. Simulate eject a cdrom

I think for DELETE we mean to delete the device, so eject will be simulated by PUT '' to this device.

+
+
+
 ### Collection: Templates
**URI:** /templates

UPDATE: I've talked with Aline and she made a good case for why I should strip the hard disk support out of this patch. I wasn't aware that this support is a separate item assigned to someone else. Version 4 of this contribution will have cdrom support only.

On 02/12/2014 09:24 AM, Daniel H Barboza wrote:
Hello Royce,
On 02/12/2014 01:46 AM, Royce Lv wrote:
Thanks Rodrigo and Daniel to help on this task, some comments below.
After considering CDROM manage for second time,common usecases for CDROM are 'insert' and 'eject'. It means the CDROM device is there, just change the media in it.
For Kimchi that is to PUT the path of CDROM. Basically, I think it is rare to add a new CDROM device or remove a CDROM device. But I agree with for hard disks we want devices add or remove.
What do you think?
I agree that the "eject" function is better represented by a PUT with blank path. I'll make the required adjustments to reflect this.
About the removal of a CDROM device, I agree that it is kind of rare too. But shouldn't we support it anyway? I am a little worried about removing this support just to later in the road someone miss it and then ask to add it again.
I've read your comments about providing support to CDROM only and forget about hard disks. I haven't participated in the RFC or any discussion about the design of this feature, but at this point, and I beg your pardon if I sound "lazy", it is easier and faster to just leave hard disk support in this contribution than to chop the patches and make it CDROM only. If we realize that the hard disk support provided in these patches are incomplete, we can just complement it with further patches in the future.
How does that sound to you?
On 02/12/2014 10:38, Daniel Barboza wrote:
From: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
This patch changes API.md with new storage sub-collection/sub-resource information.
Signed-off-by: Royce Lv <lvroyce@linux.vnet.ibm.com>
Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
---
 docs/API.md | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/docs/API.md b/docs/API.md
index 48a293f..b1a49cd 100644
--- a/docs/API.md
+++ b/docs/API.md
@@ -115,6 +115,27 @@ Represents a snapshot of the Virtual Machine's primary monitor.
* **GET**: Redirect to the latest screenshot of a Virtual Machine in PNG format
+
+### Sub-collection: Virtual Machine storages
+**URI:** /vms/*:name*/storages
+* **GET**: Retrieve a summarized list of all storages of specified guest
+* **POST**: Attach a new storage or virtual drive to specified virtual machine.
+    * dev: The name of the storage in the vm.
+    * type: The type of the storage (cdrom, disk).
+    * path: Path of cdrom iso or disk.
+
+### Sub-resource: storage
+**URI:** /vms/*:name*/storages/*:dev*
+* **GET**: Retrieve storage information
+    * dev: The name of the storage in the vm.
+    * type: The type of the storage (cdrom, disk).
+    * path: Path of cdrom iso or disk.
+* **PUT**: Update storage information
+    * path: Path of cdrom iso or disk.
+* **DELETE**: Remove the storage. Simulate eject a cdrom

I think for DELETE we mean to delete the device, so eject will be simulated by PUT '' to this device.

+
+
+
 ### Collection: Templates
**URI:** /templates
_______________________________________________
Kimchi-devel mailing list
Kimchi-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/kimchi-devel

On 02/12/2014 19:24, Daniel H Barboza wrote:
Hello Royce,
On 02/12/2014 01:46 AM, Royce Lv wrote:
Thanks Rodrigo and Daniel to help on this task, some comments below.
After considering CDROM manage for second time,common usecases for CDROM are 'insert' and 'eject'. It means the CDROM device is there, just change the media in it.
For Kimchi that is to PUT the path of CDROM. Basically, I think it is rare to add a new CDROM device or remove a CDROM device. But I agree with for hard disks we want devices add or remove.
What do you think?
I agree that the "eject" function is better represented by a PUT with blank path. I'll make the required adjustments to reflect this.
About the removal of a CDROM device, I agree that it is kind of rare too. But shouldn't we support it anyway? I am a little worried about removing this support just to later in the road someone miss it and then ask to add it again.
OK
I've read your comments about providing support to CDROM only and forget about hard disks. I haven't participated in the RFC or any discussion about the design of this feature, but at this point, and I beg your pardon if I sound "lazy", it is easier and faster to just leave hard disk support in this contribution than to chop the patches and make it CDROM only. If we realize that the hard disk support provided in these patches are incomplete, we can just complement it with further patches in the future.

I'm OK with the inner logic being incomplete as long as we do not officially declare support for it; we can complete it in the future. I mean that, at least from the REST API, we have to prevent users from calling it: if wrongly used, it could do actual damage to their system.
How does that sound to you?
On 02/12/2014 10:38, Daniel Barboza wrote:
From: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
This patch changes API.md with new storage sub-collection/sub-resource information.
Signed-off-by: Royce Lv <lvroyce@linux.vnet.ibm.com>
Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
---
 docs/API.md | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/docs/API.md b/docs/API.md
index 48a293f..b1a49cd 100644
--- a/docs/API.md
+++ b/docs/API.md
@@ -115,6 +115,27 @@ Represents a snapshot of the Virtual Machine's primary monitor.
* **GET**: Redirect to the latest screenshot of a Virtual Machine in PNG format
+
+### Sub-collection: Virtual Machine storages
+**URI:** /vms/*:name*/storages
+* **GET**: Retrieve a summarized list of all storages of specified guest
+* **POST**: Attach a new storage or virtual drive to specified virtual machine.
+    * dev: The name of the storage in the vm.
+    * type: The type of the storage (cdrom, disk).
+    * path: Path of cdrom iso or disk.
+
+### Sub-resource: storage
+**URI:** /vms/*:name*/storages/*:dev*
+* **GET**: Retrieve storage information
+    * dev: The name of the storage in the vm.
+    * type: The type of the storage (cdrom, disk).
+    * path: Path of cdrom iso or disk.
+* **PUT**: Update storage information
+    * path: Path of cdrom iso or disk.
+* **DELETE**: Remove the storage. Simulate eject a cdrom

I think for DELETE we mean to delete the device, so eject will be simulated by PUT '' to this device.

+
+
+
 ### Collection: Templates
**URI:** /templates

On 02/12/2014 01:46 AM, Royce Lv wrote:
Thanks Rodrigo and Daniel to help on this task, some comments below.
After considering CDROM manage for second time,common usecases for CDROM are 'insert' and 'eject'. It means the CDROM device is there, just change the media in it.
For Kimchi that is to PUT the path of CDROM. Basically, I think it is rare to add a new CDROM device or remove a CDROM device. But I agree with for hard disks we want devices add or remove.
What do you think?
Royce, we had a long discussion about the empty cdrom case and we could not find a use case for it. You can check the scrum meeting log here: http://kimchi-project.github.io/kimchi/meetings/kimchi.scrum.2014-02-12-13.0...

So we decided not to allow an empty path. The operations available for a cdrom are add/remove and update, and to update you need to specify a valid path.
On 02/12/2014 10:38, Daniel Barboza wrote:
From: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
This patch changes API.md with new storage sub-collection/sub-resource information.
Signed-off-by: Royce Lv <lvroyce@linux.vnet.ibm.com>
Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
---
 docs/API.md | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/docs/API.md b/docs/API.md
index 48a293f..b1a49cd 100644
--- a/docs/API.md
+++ b/docs/API.md
@@ -115,6 +115,27 @@ Represents a snapshot of the Virtual Machine's primary monitor.
* **GET**: Redirect to the latest screenshot of a Virtual Machine in PNG format
+
+### Sub-collection: Virtual Machine storages
+**URI:** /vms/*:name*/storages
+* **GET**: Retrieve a summarized list of all storages of specified guest
+* **POST**: Attach a new storage or virtual drive to specified virtual machine.
+    * dev: The name of the storage in the vm.
+    * type: The type of the storage (cdrom, disk).
+    * path: Path of cdrom iso or disk.
+
+### Sub-resource: storage
+**URI:** /vms/*:name*/storages/*:dev*
+* **GET**: Retrieve storage information
+    * dev: The name of the storage in the vm.
+    * type: The type of the storage (cdrom, disk).
+    * path: Path of cdrom iso or disk.
+* **PUT**: Update storage information
+    * path: Path of cdrom iso or disk.
+* **DELETE**: Remove the storage. Simulate eject a cdrom

I think for DELETE we mean to delete the device, so eject will be simulated by PUT '' to this device.

+
+
+
 ### Collection: Templates
**URI:** /templates

From: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>

This patch adds the control classes for guest storages.

Signed-off-by: Royce Lv <lvroyce@linux.vnet.ibm.com>
Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>

API.json: adding virtio device pattern

Signed-off-by: Daniel Henrique Barboza <danielhb@linux.vnet.ibm.com>
---
 src/kimchi/API.json               | 33 ++++++++++++++++++++++++++
 src/kimchi/control/vm/storages.py | 49 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 82 insertions(+)
 create mode 100644 src/kimchi/control/vm/storages.py

diff --git a/src/kimchi/API.json b/src/kimchi/API.json
index f025661..8ea9de7 100644
--- a/src/kimchi/API.json
+++ b/src/kimchi/API.json
@@ -296,6 +296,39 @@
         },
         "additionalProperties": false
     },
+    "storages_create": {
+        "type": "object",
+        "properties": {
+            "dev": {
+                "description": "The storage device name",
+                "type": "string",
+                "pattern": "^hd[b-z]|vd[a-z]$"
+            },
+            "type": {
+                "description": "The storage type",
+                "type": "string",
+                "pattern": "^cdrom|disk$",
+                "required": true
+            },
+            "path": {
+                "description": "Path of iso image file or disk mount point",
+                "type": "string",
+                "pattern": "^((/)|(http)[s]?:|[t]?(ftp)[s]?:)+.*$",
+                "required": true
+            }
+        }
+    },
+    "storage_update": {
+        "type": "object",
+        "properties": {
+            "path": {
+                "description": "Path of iso image file or disk mount point",
+                "type": "string",
+                "pattern": "^((/)|(http)[s]?:|[t]?(ftp)[s]?:)+.*$",
+                "required": true
+            }
+        }
+    },
     "template_update": {
         "type": "object",
         "properties": {

diff --git a/src/kimchi/control/vm/storages.py b/src/kimchi/control/vm/storages.py
new file mode 100644
index 0000000..953620f
--- /dev/null
+++ b/src/kimchi/control/vm/storages.py
@@ -0,0 +1,49 @@
+#
+# Project Kimchi
+#
+# Copyright IBM, Corp. 2013
+#
+# Authors:
+#  ShaoHe Feng <shaohef@linux.vnet.ibm.com>
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+from kimchi.control.base import Collection, Resource
+from kimchi.control.utils import UrlSubNode
+
+
+@UrlSubNode("storages")
+class Storages(Collection):
+    def __init__(self, model, vm):
+        super(Storages, self).__init__(model)
+        self.resource = Storage
+        self.vm = vm
+        self.resource_args = [self.vm, ]
+        self.model_args = [self.vm, ]
+
+
+class Storage(Resource):
+    def __init__(self, model, vm, ident):
+        super(Storage, self).__init__(model, ident)
+        self.vm = vm
+        self.ident = ident
+        self.info = {}
+        self.model_args = [self.vm, self.ident]
+        self.uri_fmt = '/vms/%s/storages/%s'
+        self.update_params = ['path']
+
+    @property
+    def data(self):
+        return self.info
--
1.8.3.1
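One detail worth double-checking in the `storages_create` schema above: JSON Schema `pattern` is not implicitly anchored (implementations typically apply it with `re.search` semantics), and in `^hd[b-z]|vd[a-z]$` the alternation binds loosely, so `^` anchors only the hd branch and `$` only the vd branch. A quick sketch with Python's `re` (the device names below are just test strings) shows the effect:

```python
import re

# The "dev" pattern from storages_create, applied with search semantics
# as JSON Schema validators commonly do. Note the loose alternation:
# '^' binds only to the hd branch and '$' only to the vd branch.
DEV_PATTERN = r"^hd[b-z]|vd[a-z]$"


def dev_ok(name):
    """Return True if name matches the schema's dev pattern."""
    return re.search(DEV_PATTERN, name) is not None


assert dev_ok("hdb")      # matches the anchored hd branch
assert dev_ok("vda")      # matches the anchored vd branch
assert not dev_ok("hda")  # hda is excluded by [b-z]
assert dev_ok("xvda")     # slips through: 'vd[a-z]$' matches the suffix

# A fully anchored form would close that hole:
STRICT = r"^(hd[b-z]|vd[a-z])$"
assert re.search(STRICT, "xvda") is None
assert re.search(STRICT, "vda") is not None
```

If the looser matches are unintended, grouping the alternation as in `STRICT` (and similarly for `^cdrom|disk$`) may be worth considering in a follow-up.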

On 02/12/2014 10:38, Daniel Barboza wrote:
From: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
This patch adds the control classes for guest storages
Signed-off-by: Royce Lv <lvroyce@linux.vnet.ibm.com>
Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
API.json: adding virtio device pattern
Signed-off-by: Daniel Henrique Barboza <danielhb@linux.vnet.ibm.com>
---
 src/kimchi/API.json               | 33 ++++++++++++++++++++++++++
 src/kimchi/control/vm/storages.py | 49 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 82 insertions(+)
 create mode 100644 src/kimchi/control/vm/storages.py
diff --git a/src/kimchi/API.json b/src/kimchi/API.json
index f025661..8ea9de7 100644
--- a/src/kimchi/API.json
+++ b/src/kimchi/API.json
@@ -296,6 +296,39 @@
         },
         "additionalProperties": false
     },
+    "storages_create": {
+        "type": "object",
+        "properties": {
+            "dev": {
+                "description": "The storage device name",
+                "type": "string",
+                "pattern": "^hd[b-z]|vd[a-z]$"
+            },
+            "type": {
+                "description": "The storage type",
+                "type": "string",
+                "pattern": "^cdrom|disk$",

As I mentioned, shall we consider excluding cdrom from this?

+                "required": true
+            },
+            "path": {
+                "description": "Path of iso image file or disk mount point",
+                "type": "string",
+                "pattern": "^((/)|(http)[s]?:|[t]?(ftp)[s]?:)+.*$",
+                "required": true
+            }
+        }
+    },
+    "storage_update": {
+        "type": "object",
+        "properties": {
+            "path": {
+                "description": "Path of iso image file or disk mount point",
+                "type": "string",
+                "pattern": "^((/)|(http)[s]?:|[t]?(ftp)[s]?:)+.*$",
+                "required": true
+            }
+        }
+    },
     "template_update": {
         "type": "object",
         "properties": {

diff --git a/src/kimchi/control/vm/storages.py b/src/kimchi/control/vm/storages.py
new file mode 100644
index 0000000..953620f
--- /dev/null
+++ b/src/kimchi/control/vm/storages.py
@@ -0,0 +1,49 @@
+#
+# Project Kimchi
+#
+# Copyright IBM, Corp. 2013
+#
+# Authors:
+#  ShaoHe Feng <shaohef@linux.vnet.ibm.com>

Author:)

+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+from kimchi.control.base import Collection, Resource
+from kimchi.control.utils import UrlSubNode
+
+
+@UrlSubNode("storages")
+class Storages(Collection):
+    def __init__(self, model, vm):
+        super(Storages, self).__init__(model)
+        self.resource = Storage
+        self.vm = vm
+        self.resource_args = [self.vm, ]
+        self.model_args = [self.vm, ]
+
+
+class Storage(Resource):
+    def __init__(self, model, vm, ident):
+        super(Storage, self).__init__(model, ident)
+        self.vm = vm
+        self.ident = ident
+        self.info = {}
+        self.model_args = [self.vm, self.ident]
+        self.uri_fmt = '/vms/%s/storages/%s'
+        self.update_params = ['path']
+
+    @property
+    def data(self):
+        return self.info

On 02/12/2014 10:38, Daniel Barboza wrote:
From: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
This patch adds the control classes for guest storages
Signed-off-by: Royce Lv <lvroyce@linux.vnet.ibm.com>
Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
API.json: adding virtio device pattern
Signed-off-by: Daniel Henrique Barboza <danielhb@linux.vnet.ibm.com>
---
 src/kimchi/API.json               | 33 ++++++++++++++++++++++++++
 src/kimchi/control/vm/storages.py | 49 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 82 insertions(+)
 create mode 100644 src/kimchi/control/vm/storages.py
diff --git a/src/kimchi/API.json b/src/kimchi/API.json
index f025661..8ea9de7 100644
--- a/src/kimchi/API.json
+++ b/src/kimchi/API.json
@@ -296,6 +296,39 @@
         },
         "additionalProperties": false
     },
+    "storages_create": {
+        "type": "object",
+        "properties": {
+            "dev": {
+                "description": "The storage device name",
+                "type": "string",
+                "pattern": "^hd[b-z]|vd[a-z]$"

vd* means it is a virtio disk; cdrom just supports emulation.

+            },
+            "type": {
+                "description": "The storage type",
+                "type": "string",
+                "pattern": "^cdrom|disk$",

I think disk is out of cdrom scope; let's put it in another feature.

+                "required": true
+            },
+            "path": {
+                "description": "Path of iso image file or disk mount point",
+                "type": "string",
+                "pattern": "^((/)|(http)[s]?:|[t]?(ftp)[s]?:)+.*$",
+                "required": true
+            }
+        }
+    },
+    "storage_update": {
+        "type": "object",
+        "properties": {
+            "path": {
+                "description": "Path of iso image file or disk mount point",
+                "type": "string",
+                "pattern": "^((/)|(http)[s]?:|[t]?(ftp)[s]?:)+.*$",
+                "required": true
+            }
+        }
+    },
     "template_update": {
         "type": "object",
         "properties": {

diff --git a/src/kimchi/control/vm/storages.py b/src/kimchi/control/vm/storages.py
new file mode 100644
index 0000000..953620f
--- /dev/null
+++ b/src/kimchi/control/vm/storages.py
@@ -0,0 +1,49 @@
+#
+# Project Kimchi
+#
+# Copyright IBM, Corp. 2013
+#
+# Authors:
+#  ShaoHe Feng <shaohef@linux.vnet.ibm.com>
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+from kimchi.control.base import Collection, Resource
+from kimchi.control.utils import UrlSubNode
+
+
+@UrlSubNode("storages")
+class Storages(Collection):
+    def __init__(self, model, vm):
+        super(Storages, self).__init__(model)
+        self.resource = Storage
+        self.vm = vm
+        self.resource_args = [self.vm, ]
+        self.model_args = [self.vm, ]
+
+
+class Storage(Resource):
+    def __init__(self, model, vm, ident):
+        super(Storage, self).__init__(model, ident)
+        self.vm = vm
+        self.ident = ident
+        self.info = {}
+        self.model_args = [self.vm, self.ident]
+        self.uri_fmt = '/vms/%s/storages/%s'
+        self.update_params = ['path']
+
+    @property
+    def data(self):
+        return self.info

From: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com> This patch adds the CREATE, LOOKUP, UPDATE and DELETE functionalities to guest storage devices support. Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com> Using lxml and objectify to parse the XML info. Pep8 adjustments in model/vms.py Signed-off-by: Daniel Henrique Barboza <danielhb@linux.vnet.ibm.com> --- src/kimchi/model/vms.py | 184 ++++++++++++++++++++++++++++++++++++++++++++++-- src/kimchi/xmlutils.py | 5 ++ 2 files changed, 185 insertions(+), 4 deletions(-) diff --git a/src/kimchi/model/vms.py b/src/kimchi/model/vms.py index 1244eeb..3c2a661 100644 --- a/src/kimchi/model/vms.py +++ b/src/kimchi/model/vms.py @@ -20,13 +20,17 @@ # License along with this library; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA +import libvirt import os +import socket +import string import time +import urlparse import uuid -from xml.etree import ElementTree -import libvirt from cherrypy.process.plugins import BackgroundTask +from lxml import etree, objectify +from xml.etree import ElementTree from kimchi import vnc from kimchi import xmlutils @@ -36,7 +40,8 @@ from kimchi.model.config import CapabilitiesModel from kimchi.model.templates import TemplateModel from kimchi.model.utils import get_vm_name from kimchi.screenshot import VMScreenshot -from kimchi.utils import run_setfacl_set_attr, template_name_from_uri +from kimchi.utils import kimchi_log, run_setfacl_set_attr +from kimchi.utils import template_name_from_uri DOM_STATE_MAP = {0: 'nostate', @@ -47,6 +52,10 @@ DOM_STATE_MAP = {0: 'nostate', 5: 'shutoff', 6: 'crashed'} +DEV_TYPE_SRC_ATTR_MAP = {'file': 'file', + 'block': 'dev', + 'dir': 'dir'} + GUESTS_STATS_INTERVAL = 5 VM_STATIC_UPDATE_PARAMS = {'name': './name'} VM_LIVE_UPDATE_PARAMS = {} @@ -222,7 +231,7 @@ class VMModel(object): dom = self.get_vm(name, self.conn) dom = self._static_vm_update(dom, params) 
         self._live_vm_update(dom, params)
-        return dom.name().decode('utf-8')
+        return dom.name()

     def _static_vm_update(self, dom, params):
         state = DOM_STATE_MAP[dom.info()[0]]
@@ -453,3 +462,170 @@ class LibvirtVMScreenshot(VMScreenshot):
             stream.finish()
         finally:
             os.close(fd)
+
+
+class StoragesModel(object):
+    def __init__(self, **kargs):
+        self.conn = kargs['conn']
+
+    def _get_storage_xml(self, params):
+        storage = """
+            <disk type='%(src_type)s' device='%(dev_type)s'>
+              <driver name='qemu' type='raw'/>%(source)s
+              <target dev='%(dev)s' bus='ide'/>
+            </disk>
+        """
+        info = {}
+        info['dev'] = params.get('dev')
+        info['dev_type'] = params.get('type')
+        info['src_type'] = params.get('src_type')
+
+        # Working with url paths
+        if info['src_type'] == 'network':
+            net = {}
+            output = urlparse.urlparse(params.get('path'))
+            net['protocol'] = output.scheme
+            net['port'] = output.port or socket.getservbyname(output.scheme)
+            net['hostname'] = output.hostname
+            net['url_path'] = output.path
+            info['source'] = """
+              <source protocol='%(protocol)s' name='%(url_path)s'>
+                <host name='%(hostname)s' port='%(port)s'/>
+              </source>""" % net
+        else:
+            # Fixing source attribute
+            info['source'] = """
+              <source %s='%s'/>""" % (DEV_TYPE_SRC_ATTR_MAP[info['src_type']],
+                                      params.get('path'))
+        return storage % info
+
+    def create(self, vm_name, params):
+        # TODO: Check if device name is already in use
+        path = params.get('path')
+
+        # Checking if path exists, if not a url
+        if path.startswith('/'):
+            if not os.path.exists(path):
+                msg = "Path specified for device is not valid"
+                raise InvalidParameter(msg)
+            elif path.endswith('.iso') or os.path.isfile(path):
+                params['src_type'] = 'file'
+            elif os.path.isdir(path):
+                params['src_type'] = 'dir'
+            else:
+                params['src_type'] = 'block'
+        else:
+            params['src_type'] = 'network'
+
+        # Use device name passed or pick next
+        dev_name = params.get('dev')
+        if not dev_name:
+            params['dev'] = self._get_storage_device_name(vm_name)
+
+        # Add device to VM
+        dev_xml = self._get_storage_xml(params)
+        try:
+            conn = self.conn.get()
+            dom = conn.lookupByName(vm_name)
+            dom.attachDeviceFlags(dev_xml, libvirt.VIR_DOMAIN_AFFECT_CURRENT)
+        except Exception as e:
+            msg = 'Was not possible to attach storage device: %s' % e.message
+            kimchi_log.error(msg)
+            raise OperationFailed(e.message)
+        return params['dev']
+
+    def _get_storage_device_name(self, vm_name):
+        dev_list = [dev for dev in self.get_list(vm_name)
+                    if dev.startswith('hd')]
+        if len(dev_list) == 0:
+            return 'hda'
+        dev_list.sort()
+        last_dev = dev_list.pop()
+        # TODO: Improve to device names "greater than" hdz
+        next_dev_letter_pos = string.ascii_lowercase.index(last_dev[2]) + 1
+        return 'hd' + string.ascii_lowercase[next_dev_letter_pos]
+
+    def get_list(self, vm_name):
+        dom = VMModel.get_vm(vm_name, self.conn)
+        xml = dom.XMLDesc(0)
+        devices = objectify.fromstring(xml).devices
+        storages = [disk.target.attrib['dev']
+                    for disk in devices.xpath("./disk[@device='disk']")]
+        storages += [disk.target.attrib['dev']
+                     for disk in devices.xpath("./disk[@device='cdrom']")]
+        return storages
+
+
+class StorageModel(object):
+    def __init__(self, **kargs):
+        self.conn = kargs['conn']
+        self.kargs = kargs
+
+    def _get_device_xml(self, vm_name, dev_name):
+        # Get VM xml and then devices xml
+        dom = VMModel.get_vm(vm_name, self.conn)
+        xml = dom.XMLDesc(0)
+        devices = objectify.fromstring(xml).devices
+        disk = devices.xpath("./disk/target[@dev='%s']/.." % dev_name)
+        return disk[0]
+
+    def lookup(self, vm_name, dev_name):
+        # Retrieve disk xml and format return dict
+        disk = self._get_device_xml(vm_name, dev_name)
+        if disk is None:
+            raise NotFoundError('The storage device "%s" does not exist in '
+                                'the guest "%s"' % (dev_name, vm_name))
+        source = disk.source
+        path = ""
+        if source is not None:
+            src_type = disk.attrib['type']
+            if src_type == 'network':
+                host = source.host
+                path = (source.attrib['protocol'] + '://' +
+                        host.attrib['name'] + ':' +
+                        host.attrib['port'] + source.attrib['name'])
+            else:
+                path = source.attrib[DEV_TYPE_SRC_ATTR_MAP[src_type]]
+        dev_type = disk.attrib['device']
+        return {'dev': dev_name,
+                'type': dev_type,
+                'path': path}
+
+    def delete(self, vm_name, dev_name):
+        # Get storage device xml
+        disk = self._get_device_xml(vm_name, dev_name)
+        if disk is None:
+            raise NotFoundError('The storage device "%s" does not exist in '
+                                'the guest "%s"' % (dev_name, vm_name))
+        try:
+            conn = self.conn.get()
+            dom = conn.lookupByName(vm_name)
+            dom.detachDeviceFlags(etree.tostring(disk),
+                                  libvirt.VIR_DOMAIN_AFFECT_CURRENT)
+        except Exception as e:
+            msg = 'Was not possible to detach storage device: %s' % e.message
+            kimchi_log.error(msg)
+            raise OperationFailed(e.message)
+
+    def update(self, vm_name, dev_name, params):
+        info = self.lookup(vm_name, dev_name)
+        backup_params = info.copy()
+        try:
+            self.delete(vm_name, dev_name)
+        except Exception as e:
+            msg = 'Was not possible to update storage device: %s' % e.message
+            kimchi_log.error(msg)
+            raise OperationFailed(e.message)
+
+        info.update(params)
+        kargs = {'conn': self.conn}
+        stgModel = StoragesModel(**kargs)
+        try:
+            dev_name = stgModel.create(vm_name, info)
+            return dev_name
+        except Exception as e:
+            # Restoring previous device
+            dev_name = stgModel.create(vm_name, backup_params)
+            msg = 'Was not possible to update storage device: %s' % e.message
+            kimchi_log.error(msg)
+            raise OperationFailed(e.message)
diff --git a/src/kimchi/xmlutils.py b/src/kimchi/xmlutils.py
index 51ff0ec..176ceb1 100644
--- a/src/kimchi/xmlutils.py
+++ b/src/kimchi/xmlutils.py
@@ -40,3 +40,8 @@ def xml_item_update(xml, xpath, value):
     item = root.find(xpath)
     item.text = value
     return ElementTree.tostring(root, encoding="utf-8")
+
+def xml_get_child(xml, xpath):
+    root = ElementTree.fromstring(xml)
+    item = root.find(xpath)
+    return ElementTree.tostring(item, encoding="utf-8")
-- 
1.8.3.1
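The source-type detection inlined in StoragesModel.create() above can be sketched as a standalone helper (classify_src_type is a hypothetical name; the patch does not factor it out): absolute paths are inspected on the host filesystem, anything else is treated as a network URL.

```python
import os

def classify_src_type(path):
    """Mirror the patch's detection logic (sketch)."""
    if not path.startswith('/'):
        return 'network'
    if not os.path.exists(path):
        raise ValueError("Path specified for device is not valid")
    if path.endswith('.iso') or os.path.isfile(path):
        return 'file'
    if os.path.isdir(path):
        return 'dir'
    # exists, but is neither a regular file nor a directory
    return 'block'

classify_src_type('http://myserver.com/myiso.iso')  # 'network'
classify_src_type('/')                              # 'dir'
```

Note that, as in the patch, the `.iso` suffix check only matters for paths that exist; a nonexistent absolute path always raises.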
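For the network branch, _get_storage_xml() derives protocol, host, port, and path from the URL, defaulting the port via socket.getservbyname() when the URL omits one. A small sketch of that extraction (Python 3 module name `urllib.parse`; the patch uses the Python 2 `urlparse` module):

```python
import socket
from urllib.parse import urlparse

def network_source_info(path):
    out = urlparse(path)
    return {'protocol': out.scheme,
            # fall back to the scheme's well-known port (e.g. http -> 80)
            'port': out.port or socket.getservbyname(out.scheme),
            'hostname': out.hostname,
            'url_path': out.path}

network_source_info('http://myserver.com/isos/boot.iso')
```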

On 2014年02月12日 10:38, Daniel Barboza wrote:
From: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
This patch adds the CREATE, LOOKUP, UPDATE and DELETE functionalities to guest storage devices support.
Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
Using lxml and objectify to parse the XML info. Pep8 adjustments in model/vms.py
Signed-off-by: Daniel Henrique Barboza <danielhb@linux.vnet.ibm.com> --- src/kimchi/model/vms.py | 184 ++++++++++++++++++++++++++++++++++++++++++++++-- src/kimchi/xmlutils.py | 5 ++ 2 files changed, 185 insertions(+), 4 deletions(-)
diff --git a/src/kimchi/model/vms.py b/src/kimchi/model/vms.py
index 1244eeb..3c2a661 100644
--- a/src/kimchi/model/vms.py
+++ b/src/kimchi/model/vms.py
@@ -20,13 +20,17 @@
 # License along with this library; if not, write to the Free Software
 # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

+import libvirt
 import os
+import socket
+import string
 import time
+import urlparse
 import uuid
-from xml.etree import ElementTree

-import libvirt
 from cherrypy.process.plugins import BackgroundTask
+from lxml import etree, objectify
+from xml.etree import ElementTree

 from kimchi import vnc
 from kimchi import xmlutils
@@ -36,7 +40,8 @@
 from kimchi.model.config import CapabilitiesModel
 from kimchi.model.templates import TemplateModel
 from kimchi.model.utils import get_vm_name
 from kimchi.screenshot import VMScreenshot
-from kimchi.utils import run_setfacl_set_attr, template_name_from_uri
+from kimchi.utils import kimchi_log, run_setfacl_set_attr
+from kimchi.utils import template_name_from_uri

 DOM_STATE_MAP = {0: 'nostate',
@@ -47,6 +52,10 @@ DOM_STATE_MAP = {0: 'nostate',
                  5: 'shutoff',
                  6: 'crashed'}

+DEV_TYPE_SRC_ATTR_MAP = {'file': 'file',
+                         'block': 'dev',
+                         'dir': 'dir'}
+
 GUESTS_STATS_INTERVAL = 5
 VM_STATIC_UPDATE_PARAMS = {'name': './name'}
 VM_LIVE_UPDATE_PARAMS = {}
@@ -222,7 +231,7 @@ class VMModel(object):
         dom = self.get_vm(name, self.conn)
         dom = self._static_vm_update(dom, params)
         self._live_vm_update(dom, params)
-        return dom.name().decode('utf-8')
+        return dom.name()
+    def _get_storage_xml(self, params):
+        storage = """
+            <disk type='%(src_type)s' device='%(dev_type)s'>
+              <driver name='qemu' type='raw'/>%(source)s
+              <target dev='%(dev)s' bus='ide'/>
+            </disk>
+        """

To keep consistency, may I suggest we use lxml.builder and lxml etree.

+            info['source'] = """
+              <source protocol='%(protocol)s' name='%(url_path)s'>
+                <host name='%(hostname)s' port='%(port)s'/>
+              </source>""" % net

Here too.

+            elif os.path.isdir(path):
+                params['src_type'] = 'dir'

REF:
http://wiki.qemu.org/download/qemu-doc.html#disk_005fimages_005ffat_005fimag...
http://en.wikibooks.org/wiki/QEMU/Devices/Storage
The 'dir' type is used to share a directory between a host (Linux) and a guest (Windows) without using Samba or NFS, or to create a boot floppy. So I don't think it is related to CDROM management, and it will cause unexpected results. Let's get rid of it temporarily, until we start to support this feature.

+            else:
+                params['src_type'] = 'block'

Does it mean every other type is classified as block? I think only a block device should be used. Also, I suggest we only support the same use case as the template cdrom, that is, a file or a valid file link. The use of host block devices may be out of our scope.

+        else:
+            params['src_type'] = 'network'

Would it be better if we validated the source file as we did in osinfo.py?

    response = urllib2.urlopen(request)
    data = response.read()
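The automatic device naming in _get_storage_device_name() picks the letter after the highest existing hdX target; gaps are never reused and, as the patch's own TODO notes, it breaks past hdz. A sketch of that strategy as a free function (hypothetical name):

```python
import string

def next_ide_dev(existing_devs):
    """Pick the next 'hdX' target name the way the patch does (sketch)."""
    hd_devs = sorted(d for d in existing_devs if d.startswith('hd'))
    if not hd_devs:
        return 'hda'
    last = hd_devs[-1]
    # IndexError past 'hdz' -- the patch carries the same TODO
    next_pos = string.ascii_lowercase.index(last[2]) + 1
    return 'hd' + string.ascii_lowercase[next_pos]

next_ide_dev([])               # 'hda'
next_ide_dev(['hda', 'hdc'])   # 'hdd', not 'hdb': gaps are skipped
```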

On 02/12/2014 07:08 AM, Royce Lv wrote:
On 2014年02月12日 10:38, Daniel Barboza wrote:
To keep consistency, may I suggest we use lxml.builder and lxml etree.

I tried, but couldn't come up with code using lxml.builder and/or objectify which is simpler and more readable than this one.
If anyone wants to jump in and provide such code, please do :)
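StorageModel.update() in the patch is a small compensating transaction: look up the current device config, detach it, attach the merged config, and re-attach the backup if the new attach fails. The shape of that flow, with a hypothetical in-memory stand-in for the libvirt-backed models:

```python
class FakeStore(object):
    """Hypothetical stand-in for the libvirt-backed storage models."""
    def __init__(self):
        self.devices = {'hdx': {'dev': 'hdx', 'type': 'cdrom', 'path': '/a.iso'}}
        self.fail_create = False

    def lookup(self, dev):
        return dict(self.devices[dev])

    def delete(self, dev):
        del self.devices[dev]

    def create(self, info):
        if self.fail_create:           # simulate a failed attach once
            self.fail_create = False
            raise RuntimeError('attach failed')
        self.devices[info['dev']] = info
        return info['dev']

def update_device(store, dev, params):
    backup = store.lookup(dev)         # remember the current config
    store.delete(dev)                  # detach the old device
    info = dict(backup, **params)      # merge in the requested changes
    try:
        return store.create(info)      # attach the updated device
    except Exception:
        store.create(backup)           # roll back: restore the old device
        raise
```

The rollback is best-effort, as in the patch: if restoring the backup also fails, the device is simply lost.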

On 02/12/2014 03:32 PM, Daniel H Barboza wrote:
On 02/12/2014 07:08 AM, Royce Lv wrote:
On 2014年02月12日 10:38, Daniel Barboza wrote:
I tried, but couldn't come up with code using lxml.builder and/or objectify which is simpler and more readable than this one.
If anyone wants to jump in and provide such code, please do :)

import lxml.etree as ET
from lxml.builder import E

host = E.host()
host.set('name', 'localhost')
host.set('port', '80')
source = E.source(host)
source.set('protocol', 'http')
source.set('name', '/my-iso.iso')
print ET.tostring(source)
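Extending that element-builder idea to the whole <disk> element the patch emits: a sketch using the stdlib xml.etree.ElementTree instead of lxml.builder (same approach, no lxml dependency; function name and signature are hypothetical):

```python
import xml.etree.ElementTree as ET

def storage_disk_xml(src_type, dev_type, dev, source_attrs):
    # <disk type='...' device='...'>
    disk = ET.Element('disk', type=src_type, device=dev_type)
    #   <driver name='qemu' type='raw'/>
    ET.SubElement(disk, 'driver', name='qemu', type='raw')
    #   <source file='...'/> (or dev=/dir=/protocol=, per source type)
    ET.SubElement(disk, 'source', **source_attrs)
    #   <target dev='...' bus='ide'/>
    ET.SubElement(disk, 'target', dev=dev, bus='ide')
    return ET.tostring(disk)

xml = storage_disk_xml('file', 'cdrom', 'hdc', {'file': '/tmp/my.iso'})
```

Building the tree and serializing it avoids the string-template quoting and escaping pitfalls the hand-written templates carry.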
_______________________________________________ Kimchi-devel mailing list Kimchi-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/kimchi-devel

On 12-02-2014 00:38, Daniel Barboza wrote:

+import libvirt
 import os
+import socket
+import string
 import time
+import urlparse
 import uuid
-from xml.etree import ElementTree

-import libvirt
 from cherrypy.process.plugins import BackgroundTask
+from lxml import etree, objectify
+from xml.etree import ElementTree

The import order is defined as: 1) standard libraries; 2) third party libraries; 3) Kimchi libraries. So libvirt should be in the second group, as it was before, and the XML packages should be in the first group (they're Python standard libraries, right?).

+def xml_get_child(xml, xpath):
+    root = ElementTree.fromstring(xml)
+    item = root.find(xpath)
+    return ElementTree.tostring(item, encoding="utf-8")

Top-level elements (like the function "xml_get_child") should have two blank lines between themselves.

From: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>

This patch implements the mockmodel class to simulate lookup, add, remove and update of devices in a guest vm. Also, it adds test cases to test the rest API.

Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>

Minor changes/improvements based on ML feedback

Signed-off-by: Daniel Henrique Barboza <danielhb@linux.vnet.ibm.com>
---
 src/kimchi/mockmodel.py | 66 +++++++++++++++++++++++++++++++++++++++
 tests/test_rest.py      | 74 +++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 140 insertions(+)

diff --git a/src/kimchi/mockmodel.py b/src/kimchi/mockmodel.py
index 441c0e4..9b7233b 100644
--- a/src/kimchi/mockmodel.py
+++ b/src/kimchi/mockmodel.py
@@ -542,6 +542,51 @@ class MockModel(object):
     def networks_get_list(self):
         return sorted(self._mock_networks.keys())

+    def storages_create(self, vm_name, params):
+        path = params.get('path')
+        if path.startswith('/') and not os.path.exists(path):
+            msg = "Path specified for device is not valid"
+            raise InvalidParameter(msg)
+
+        dom = self._get_vm(vm_name)
+        dev = params.get('dev', None)
+        if dev and dev in self.storages_get_list(vm_name):
+            raise OperationFailed('Device name already in use.')
+        if not dev:
+            raise OperationFailed('Must specify a device name.')
+        vmdev = MockVMStorageDevice(params)
+        dom.storagedevices[params['dev']] = vmdev
+        return params['dev']
+
+    def storages_get_list(self, vm_name):
+        dom = self._get_vm(vm_name)
+        return dom.storagedevices.keys()
+
+    def storage_lookup(self, vm_name, dev_name):
+        dom = self._get_vm(vm_name)
+        if dev_name not in self.storages_get_list(vm_name):
+            msg = 'The storage device "%s" does not exist in the guest "%s"' \
+                  % (dev_name, vm_name)
+            raise NotFoundError(msg)
+        return dom.storagedevices.get(dev_name).info
+
+    def storage_delete(self, vm_name, dev_name):
+        dom = self._get_vm(vm_name)
+        if dev_name not in self.storages_get_list(vm_name):
+            msg = 'The storage device "%s" does not exist in the guest "%s"' \
+                  % (dev_name, vm_name)
+            raise NotFoundError(msg)
+        dom.storagedevices.pop(dev_name)
+
+    def storage_update(self, vm_name, dev_name, params):
+        try:
+            dom = self._get_vm(vm_name)
+            dom.storagedevices[dev_name].info.update(params)
+        except Exception as e:
+            msg = 'Was not possible to update storage device: %s' % e.message
+            raise OperationFailed(e.message)
+        return dev_name
+
     def vmifaces_create(self, vm, params):
         if (params["type"] == "network" and
                 params["network"] not in self.networks_get_list()):
@@ -720,6 +765,24 @@ class MockVMTemplate(VMTemplate):
         return disk_paths


+class MockVMStorageDevice(object):
+    def __init__(self, params):
+        # Defaults
+        if params['dev'] == 'hda':
+            self.info = {'dev': params.get('dev'),
+                         'type': 'disk',
+                         'path': '/tmp/myimage.img'}
+        elif params['dev'] == 'hdc':
+            self.info = {'dev': params.get('dev'),
+                         'type': 'cdrom',
+                         'path': ''}
+        # New devices
+        else:
+            self.info = {'dev': params.get('dev'),
+                         'type': params.get('type'),
+                         'path': params.get('path')}
+
+
 class MockVMIface(object):
     counter = 0

@@ -745,6 +808,9 @@ class MockVM(object):
         self.disk_paths = []
         self.networks = template_info['networks']
         ifaces = [MockVMIface(net) for net in self.networks]
+        default_devices = [{'dev': 'hda'}, {'dev': 'hdc'}]
+        self.storagedevices = dict([(dev['dev'], MockVMStorageDevice(dev))
+                                    for dev in default_devices])
         self.ifaces = dict([(iface.info['mac'], iface) for iface in ifaces])
         self.info = {'state': 'shutoff',
                      'stats': "{'cpu_utilization': 20, 'net_throughput' : 35, \
diff --git a/tests/test_rest.py b/tests/test_rest.py
index 69b8316..1dae45e 100644
--- a/tests/test_rest.py
+++ b/tests/test_rest.py
@@ -341,6 +341,80 @@ class RestTests(unittest.TestCase):
         resp = self.request('/templates/test', '{}', 'DELETE')
         self.assertEquals(204, resp.status)

+    def test_vm_storage_devices(self):
+
+        with RollbackContext() as rollback:
+            # Create a template as a base for our VMs
+            req = json.dumps({'name': 'test', 'cdrom': '/nonexistent.iso'})
+            resp = self.request('/templates', req, 'POST')
+            self.assertEquals(201, resp.status)
+            # Delete the template
+            rollback.prependDefer(self.request,
+                                  '/templates/test', '{}', 'DELETE')
+
+            # Create a VM with default args
+            req = json.dumps({'name': 'test-vm',
+                              'template': '/templates/test'})
+            resp = self.request('/vms', req, 'POST')
+            self.assertEquals(201, resp.status)
+            # Delete the VM
+            rollback.prependDefer(self.request,
+                                  '/vms/test-vm', '{}', 'DELETE')
+
+            # Attach a storage disk
+            req = json.dumps({'dev': 'hdx',
+                              'type': 'disk',
+                              'path': '/tmp'})
+            resp = self.request('/vms/test-vm/storages', req, 'POST')
+            self.assertEquals(201, resp.status)
+            # Delete the disk
+            rollback.prependDefer(self.request,
+                                  '/vms/test-vm/storages/hdx', '{}', 'DELETE')
+
+            # Detach storage disk
+            resp = self.request('/vms/test-vm/storages/hdx', '{}', 'DELETE')
+            self.assertEquals(204, resp.status)
+
+            # Detach storage disk that does not exist
+            resp = self.request('/vms/test-vm/storages/hdx', '{}', 'DELETE')
+            self.assertEquals(404, resp.status)
+
+            # Attach cdrom with nonexistent iso
+            req = json.dumps({'dev': 'hdx',
+                              'type': 'cdrom',
+                              'path': '/tmp/nonexistent.iso'})
+            resp = self.request('/vms/test-vm/storages', req, 'POST')
+            self.assertEquals(400, resp.status)
+
+            # Attach a cdrom with existent dev name
+            open('/tmp/existent.iso', 'w').close()
+            req = json.dumps({'dev': 'hdx',
+                              'type': 'cdrom',
+                              'path': '/tmp/existent.iso'})
+            resp = self.request('/vms/test-vm/storages', req, 'POST')
+            self.assertEquals(201, resp.status)
+            # Delete the file and cdrom
+            rollback.prependDefer(self.request,
+                                  '/vms/test-vm/storages/hdx', '{}', 'DELETE')
+            os.remove('/tmp/existent.iso')
+
+            # Change path of storage cdrom
+            req = json.dumps({'path': 'http://myserver.com/myiso.iso'})
+            resp = self.request('/vms/test-vm/storages/hdx', req, 'PUT')
+            self.assertEquals(200, resp.status)
+
+            # Test GET
+            devs = json.loads(self.request('/vms/test-vm/storages').read())
self.assertEquals(3, len(devs)) + + # Detach storage cdrom + resp = self.request('/vms/test-vm/storages/hdx', '{}', 'DELETE') + self.assertEquals(204, resp.status) + + # Test GET + devs = json.loads(self.request('/vms/test-vm/storages').read()) + self.assertEquals(2, len(devs)) + def test_vm_iface(self): with RollbackContext() as rollback: -- 1.8.3.1
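For readers skimming the diff: the mockmodel's storage bookkeeping reduces to a dict keyed by device name, pre-seeded with the two default devices (hda, hdc). A standalone sketch of that pattern (simplified, with hypothetical class names, not the actual kimchi code):

```python
class StorageDevice(object):
    """Holds the info dict that storage_lookup returns (hypothetical)."""
    def __init__(self, params):
        self.info = {'dev': params.get('dev'),
                     'type': params.get('type'),
                     'path': params.get('path')}


class FakeVM(object):
    """Sketch of the per-VM device dict the mockmodel keeps (hypothetical)."""
    def __init__(self):
        # Every mock VM starts with the same two default devices
        self.storagedevices = {
            'hda': StorageDevice({'dev': 'hda', 'type': 'disk',
                                  'path': '/tmp/myimage.img'}),
            'hdc': StorageDevice({'dev': 'hdc', 'type': 'cdrom', 'path': ''}),
        }

    def storages_create(self, params):
        dev = params.get('dev')
        if not dev:
            raise ValueError('Must specify a device name.')
        if dev in self.storagedevices:
            raise ValueError('Device name already in use.')
        self.storagedevices[dev] = StorageDevice(params)
        return dev

    def storage_delete(self, dev):
        if dev not in self.storagedevices:
            raise KeyError(dev)
        self.storagedevices.pop(dev)


vm = FakeVM()
vm.storages_create({'dev': 'hdx', 'type': 'cdrom', 'path': '/tmp/x.iso'})
print(sorted(vm.storagedevices))  # ['hda', 'hdc', 'hdx']
vm.storage_delete('hdx')
print(sorted(vm.storagedevices))  # ['hda', 'hdc']
```

Everything the REST layer does with /vms/<vm>/storages in this patch boils down to membership tests and inserts/pops on that dict, which is why the 404/400 branches in the tests are cheap to simulate.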

And the model test? On 02/12/2014 10:38, Daniel Barboza wrote:
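The test case in this patch leans on RollbackContext/prependDefer to guarantee cleanup of templates, VMs and devices even when an assertion fails mid-test. A minimal re-implementation sketch of that deferred-cleanup pattern (my own simplified version, not kimchi's actual rollbackcontext module):

```python
class RollbackContext(object):
    """Run registered cleanups on exit, in reverse registration order
    (LIFO), whether the block exits normally or via an exception."""
    def __init__(self):
        self._cleanups = []

    def prependDefer(self, fn, *args, **kwargs):
        # Prepend so the most recently registered cleanup runs first
        self._cleanups.insert(0, (fn, args, kwargs))

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        for fn, args, kwargs in self._cleanups:
            fn(*args, **kwargs)
        return False  # do not swallow exceptions from the with-block


log = []
with RollbackContext() as rollback:
    rollback.prependDefer(log.append, 'delete template')  # registered first
    rollback.prependDefer(log.append, 'delete vm')        # registered second
print(log)  # ['delete vm', 'delete template']
```

The LIFO order matters: the VM must be deleted before the template it was created from, which is why each test registers its cleanup immediately after the resource is created.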