[PATCH 0/8 - v5] Multiple disks in Templates, objectstore update, max vcpu and topology support

V5:
 - Include script to update old templates in the objectstore
 - Remove dependency on and use of the 'storagepool' tag in Templates
 - Fix backend and test problems found

V4:
 - Include patch with support for max vcpu and other changes in the vm backend to support cpu hot plug in the future
 - Fix vm update problem (offline update was not atomic)

V3:
 - Address Aline's comments
   * fix backend, API.md and API.json
   * fix test cases and add a new test

V2:
 - Rebase over the latest new UI patches

V1:
 - Initial version

Rodrigo Trujillo (8):
  Fix Template backend create/update for multiple disks
  Enable creating guests with multiple disks from different pools
  UI - Implement multiple disks support in Template edit window
  Update VCPU by using libvirt function
  Test: Fix and add tests related to disks in templates
  Remove 'storagepool' references from Templates backend code
  Add script to upgrade objectstore to support new Template disks schema
  Fix tests and backend after removing 'storagepool' from templates

 src/wok/plugins/kimchi/API.json | 43 ++--
 src/wok/plugins/kimchi/control/templates.py | 1 -
 src/wok/plugins/kimchi/docs/API.md | 11 +-
 src/wok/plugins/kimchi/i18n.py | 5 +
 src/wok/plugins/kimchi/mockmodel.py | 60 +++--
 src/wok/plugins/kimchi/model/storagepools.py | 10 +-
 src/wok/plugins/kimchi/model/templates.py | 85 +++----
 src/wok/plugins/kimchi/model/vms.py | 130 +++++++++--
 src/wok/plugins/kimchi/osinfo.py | 14 +-
 src/wok/plugins/kimchi/root.py | 2 +
 src/wok/plugins/kimchi/template.conf | 7 +-
 src/wok/plugins/kimchi/tests/test_model.py | 65 +++---
 src/wok/plugins/kimchi/tests/test_rest.py | 41 ++--
 src/wok/plugins/kimchi/tests/test_template.py | 76 +++++--
 src/wok/plugins/kimchi/tests/test_vmtemplate.py | 9 +-
 .../kimchi/ui/js/src/kimchi.template_edit_main.js | 246 ++++++++++-----------
 .../kimchi/ui/pages/template-edit.html.tmpl | 5 +-
 src/wok/plugins/kimchi/utils.py | 49 +++-
 src/wok/plugins/kimchi/vmtemplate.py | 110 ++++++---
 19 files changed, 605 insertions(+), 364 deletions(-)

--
2.1.0
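For reference, the schema change that the objectstore upgrade script (patch 7) applies to existing template records can be sketched as follows; the field values are illustrative, not taken from a real store. The template-level 'storagepool' entry disappears and every disk carries its own pool reference:

    # Old objectstore entry: one 'storagepool' per template, disks carry only
    # index/size/format.
    old_template = {
        'storagepool': '/plugins/kimchi/storagepools/default',
        'disks': [{'index': 0, 'size': 10, 'format': 'qcow2'}],
    }

    # New entry: 'storagepool' is dropped and each disk references its pool.
    new_template = {
        'disks': [{'index': 0, 'size': 10, 'format': 'qcow2',
                   'pool': {'name': '/plugins/kimchi/storagepools/default',
                            'type': 'dir'}}],
    }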

This patch fixes the backend code in order to allow create templates with multiple disks and update them correctly. Backend supports adding multiple disks with a json like: "disks": [{"size": 3, "format": "qcow"}, {"size": 5, "pool":"/plugins/kimchi/storagepools/poolX"}, {"size": 7, "format": "raw", "pool": "/plugins/kimchi/storagepools/poolY"} ] If any information is missing, Kimchi will use values from template.conf. Kimchi will expose template disk data like: "disks": [{"index" : ... , "size" : ... , "format" : ... , "pool" : { "name" : ... , "type" : ... } , }, ... ] Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com> --- src/wok/plugins/kimchi/API.json | 31 ++++++++++++++++++++++++++++++- src/wok/plugins/kimchi/docs/API.md | 8 ++++++++ src/wok/plugins/kimchi/model/templates.py | 9 +++++---- src/wok/plugins/kimchi/osinfo.py | 14 +++++++------- src/wok/plugins/kimchi/template.conf | 7 ++++--- src/wok/plugins/kimchi/vmtemplate.py | 26 +++++++++++++++++++++----- 6 files changed, 75 insertions(+), 20 deletions(-) diff --git a/src/wok/plugins/kimchi/API.json b/src/wok/plugins/kimchi/API.json index 7addfc7..265ae36 100644 --- a/src/wok/plugins/kimchi/API.json +++ b/src/wok/plugins/kimchi/API.json @@ -477,6 +477,12 @@ "type": "integer", "minimum": 0 }, + "format": { + "description": "Type of the image of the disk", + "type": "string", + "pattern": "^(bochs|cloop|cow|dmg|qcow|qcow2|qed|raw|vmdk|vpc)$", + "error": "KCHTMPL0027E" + }, "size": { "description": "Size (GB) of the disk", "type": "number", @@ -488,8 +494,19 @@ "type": "string", "pattern": "^/.+$", "error": "KCHTMPL0023E" + }, + "pool": { + "description": "Storage pool information", + "type": "object", + "properties": { + "name": { + "description": "Location (URI) of the storage pool", + "type": "string", + "pattern": "^/plugins/kimchi/storagepools/[^/]+/?$", + "error": "KCHTMPL0015E" + } + } } - } }, "minItems": 1, @@ -660,6 +677,18 @@ "type": "string", "pattern": "^(bochs|cloop|cow|dmg|qcow|qcow2|qed|raw|vmdk|vpc)$", "error": "KCHTMPL0027E" + }, + "pool": { + "description": "Storage pool information", + "type": "object", + "properties": { + "name": { + "description": "Location (URI) of the storage pool", + "type": "string", + "pattern": "^/plugins/kimchi/storagepools/[^/]+/?$", + "error": "KCHTMPL0015E" + } + } } } }, diff --git a/src/wok/plugins/kimchi/docs/API.md b/src/wok/plugins/kimchi/docs/API.md index 1e7d8ca..8ef48d6 100644 --- a/src/wok/plugins/kimchi/docs/API.md +++ b/src/wok/plugins/kimchi/docs/API.md @@ -289,6 +289,9 @@ Represents a snapshot of the Virtual Machine's primary monitor. (either *size* or *volume* must be specified): * index: The device index * size: The device size in GB + * format: Format of the image. Valid formats: bochs, cloop, cow, dmg, qcow, qcow2, qed, raw, vmdk, vpc + * pool: Storage pool information + * name: URI of the storagepool where disk will be created * base: Base image of this disk * graphics *(optional)*: The graphics paramenters of this template @@ -388,6 +391,9 @@ A interface represents available network interface on VM. * size: The device size in GB * volume: A volume name that contains the initial disk contents * format: Format of the image. Valid formats: bochs, cloop, cow, dmg, qcow, qcow2, qed, raw, vmdk, vpc. + * pool: Information about the pool where disk or volume will be created + * name: URI of the storagepool + * type: Type of the storagepool (dir, nfs, scsci, iscsi, etc) * graphics: A dict of graphics paramenters of this template * type: The type of graphics. 
It can be VNC or spice or None. * vnc: Graphical display using the Virtual Network @@ -422,6 +428,8 @@ A interface represents available network interface on VM. * size: The device size in GB * volume: A volume name that contains the initial disk contents * format: Format of the image. Valid formats: bochs, cloop, cow, dmg, qcow, qcow2, qed, raw, vmdk, vpc. + * pool: Storage pool information + * name: URI of the storagepool where template allocates vm disk. * graphics *(optional)*: A dict of graphics paramenters of this template * type: The type of graphics. It can be VNC or spice or None. * vnc: Graphical display using the Virtual Network diff --git a/src/wok/plugins/kimchi/model/templates.py b/src/wok/plugins/kimchi/model/templates.py index 47b2c9e..bac6e47 100644 --- a/src/wok/plugins/kimchi/model/templates.py +++ b/src/wok/plugins/kimchi/model/templates.py @@ -234,8 +234,9 @@ class LibvirtVMTemplate(VMTemplate): self.conn = conn VMTemplate.__init__(self, args, scan) - def _storage_validate(self): - pool_uri = self.info['storagepool'] + def _storage_validate(self, pool_uri=None): + if pool_uri is None: + pool_uri = self.info['storagepool'] pool_name = pool_name_from_uri(pool_uri) try: conn = self.conn.get() @@ -278,8 +279,8 @@ class LibvirtVMTemplate(VMTemplate): xml = pool.XMLDesc(0) return xpath_get_text(xml, "/pool/target/path")[0] - def _get_storage_type(self): - pool = self._storage_validate() + def _get_storage_type(self, pool_uri=None): + pool = self._storage_validate(pool_uri) xml = pool.XMLDesc(0) return xpath_get_text(xml, "/pool/@type")[0] diff --git a/src/wok/plugins/kimchi/osinfo.py b/src/wok/plugins/kimchi/osinfo.py index 30ecd4f..1891398 100644 --- a/src/wok/plugins/kimchi/osinfo.py +++ b/src/wok/plugins/kimchi/osinfo.py @@ -118,8 +118,8 @@ def _get_tmpl_defaults(): The default values should be like below: {'main': {'networks': ['default'], 'memory': '1024'}, - 'storage': {'pool': 'default', - 'disk.0': {'format': 'qcow2', 'size': '10'}}, + 'storage': { 'disk.0': {'format': 'qcow2', 'size': '10', + 'pool': '/plugins/kimchi/storagepools/default'}}, 'processor': {'cpus': '1'}, 'graphics': {'type': 'spice', 'listen': '127.0.0.1'}} """ @@ -127,8 +127,8 @@ def _get_tmpl_defaults(): tmpl_defaults = defaultdict(dict) tmpl_defaults['main']['networks'] = ['default'] tmpl_defaults['main']['memory'] = _get_default_template_mem() - tmpl_defaults['storage']['pool'] = 'default' - tmpl_defaults['storage']['disk.0'] = {'size': 10, 'format': 'qcow2'} + tmpl_defaults['storage']['disk.0'] = {'size': 10, 'format': 'qcow2', + 'pool': 'default'} tmpl_defaults['processor']['cpus'] = 1 tmpl_defaults['graphics'] = {'type': 'vnc', 'listen': '127.0.0.1'} @@ -150,14 +150,14 @@ def _get_tmpl_defaults(): main_section = default_config.pop('main') defaults.update(main_section) - # Parse storage section to get storage pool and disks values + # Parse storage section to get disks values storage_section = default_config.pop('storage') - defaults['storagepool'] = '/plugins/kimchi/storagepools/' + \ - storage_section.pop('pool') defaults['disks'] = [] for disk in storage_section.keys(): data = storage_section[disk] data['index'] = int(disk.split('.')[1]) + data['pool'] = {"name": '/plugins/kimchi/storagepools/' + + storage_section[disk].pop('pool')} defaults['disks'].append(data) # Parse processor section to get cpus and cpu_topology values diff --git a/src/wok/plugins/kimchi/template.conf b/src/wok/plugins/kimchi/template.conf index f3615e6..37fadb8 100644 --- a/src/wok/plugins/kimchi/template.conf +++ 
b/src/wok/plugins/kimchi/template.conf @@ -11,11 +11,9 @@ #networks = default, [storage] -# Storage pool used to handle the guest disk -#pool = default # Specify multiple [[disk.X]] sub-sections to add multiples disks to guest -# All the disk files will be created in the same storage pool as set above +# Each disk files will be created in respective storage pool set [[disk.0]] # Disk size in GB #size = 10 @@ -23,6 +21,9 @@ # Disk format #format = qcow2 +# Storage pool used to handle the guest disk +#pool = default + [graphics] # Graphics type # Valid options: vnc | spice diff --git a/src/wok/plugins/kimchi/vmtemplate.py b/src/wok/plugins/kimchi/vmtemplate.py index 3097b66..c7635b0 100644 --- a/src/wok/plugins/kimchi/vmtemplate.py +++ b/src/wok/plugins/kimchi/vmtemplate.py @@ -76,14 +76,30 @@ class VMTemplate(object): graphics.update(graph_args) args['graphics'] = graphics - # Merge disks dict + # Provide compatibility with old template version and sets data to + # correct output format if necessary + def _fix_disk_compatibilities(disks): + for i, disk in enumerate(disks): + if 'pool' not in disk: + disk['pool'] = {'name': self.info['storagepool']} + if (isinstance(disk['pool'], str) or + isinstance(disk['pool'], unicode)): + disk['pool'] = {'name': disk['pool']} + disk['pool']['type'] = \ + self._get_storage_type(disk['pool']['name']) + + if self.info.get('disks') is not None: + _fix_disk_compatibilities(self.info['disks']) + if args.get('disks') is not None: + _fix_disk_compatibilities(args['disks']) + + # Merge disks dict with disks provided by user default_disk = self.info['disks'][0] for i, d in enumerate(args.get('disks', [])): disk = dict(default_disk) + disk['index'] = i disk.update(d) - - # Assign right disk format to logical and [i]scsi storagepools - if self._get_storage_type() in ['logical', 'iscsi', 'scsi']: + if disk['pool']['type'] in ['logical', 'iscsi', 'scsi']: disk['format'] = 'raw' args['disks'][i] = disk @@ -401,7 +417,7 @@ class VMTemplate(object): def _get_storage_path(self): return '' - def _get_storage_type(self): + def _get_storage_type(self, pool=None): return '' def _get_volume_path(self): -- 2.1.0
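The request body described in the commit message above can be exercised with a short sketch. The template name, ISO path and pool URIs below are assumptions, and the snippet only builds and prints the JSON that would be POSTed to /plugins/kimchi/templates (the backend also accepts each 'pool' as an object of the form {'name': <pool URI>}):

    import json

    # Three disks spread over the default pool, poolX and poolY; fields that
    # are omitted (e.g. the pool of the first disk) fall back to the defaults
    # read from template.conf.
    payload = {
        'name': 'multi-disk-template',                   # hypothetical name
        'cdrom': '/var/lib/libvirt/images/distro.iso',   # assumed ISO path
        'disks': [
            {'size': 3, 'format': 'qcow'},
            {'size': 5, 'pool': '/plugins/kimchi/storagepools/poolX'},
            {'size': 7, 'format': 'raw',
             'pool': '/plugins/kimchi/storagepools/poolY'},
        ],
    }
    print(json.dumps(payload, indent=4))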

This patch changes the Kimchi backend in order to support creation of guests from Templates that have multiple disks from different storage pools. Before, it was only possible to create a guest with one, two or more disks from only one storagepool. In order to add a disk from another pool, user would have to edit the guest and add it. Now users can do this in creation time, using pre-configured Template. Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com> --- src/wok/plugins/kimchi/i18n.py | 1 + src/wok/plugins/kimchi/model/templates.py | 21 +++++++++++++++------ src/wok/plugins/kimchi/model/vms.py | 18 ++++++------------ src/wok/plugins/kimchi/vmtemplate.py | 30 ++++++++++++++++-------------- 4 files changed, 38 insertions(+), 32 deletions(-) diff --git a/src/wok/plugins/kimchi/i18n.py b/src/wok/plugins/kimchi/i18n.py index b3414e2..a01c3de 100644 --- a/src/wok/plugins/kimchi/i18n.py +++ b/src/wok/plugins/kimchi/i18n.py @@ -128,6 +128,7 @@ messages = { "KCHVM0069E": _("Password field must be a string."), "KCHVM0070E": _("Error creating local host ssh rsa key of user 'root'."), "KCHVM0071E": _("Memory value %(mem)s must be aligned to %(alignment)sMiB."), + "KCHVM0072E": _("Template given has multiple disks assigned to different types of storage pools, conflicting with storage pool provided."), "KCHVMHDEV0001E": _("VM %(vmid)s does not contain directly assigned host device %(dev_name)s."), "KCHVMHDEV0002E": _("The host device %(dev_name)s is not allowed to directly assign to VM."), diff --git a/src/wok/plugins/kimchi/model/templates.py b/src/wok/plugins/kimchi/model/templates.py index bac6e47..eae2714 100644 --- a/src/wok/plugins/kimchi/model/templates.py +++ b/src/wok/plugins/kimchi/model/templates.py @@ -147,6 +147,16 @@ class TemplateModel(object): params = session.get('template', name) if overrides: params.update(overrides) + if 'storagepool' in params: + libvVMT = LibvirtVMTemplate(params, conn=conn) + poolType = libvVMT._get_storage_type(params['storagepool']) + for i, disk in enumerate(params['disks']): + if disk['pool']['type'] != poolType: + raise InvalidOperation('KCHVM0072E') + else: + params['disks'][i]['pool']['name'] = \ + params['storagepool'] + return LibvirtVMTemplate(params, False, conn) def lookup(self, name): @@ -274,8 +284,8 @@ class LibvirtVMTemplate(VMTemplate): raise InvalidParameter("KCHTMPL0007E", {'network': name, 'template': self.name}) - def _get_storage_path(self): - pool = self._storage_validate() + def _get_storage_path(self, pool_uri=None): + pool = self._storage_validate(pool_uri) xml = pool.XMLDesc(0) return xpath_get_text(xml, "/pool/target/path")[0] @@ -285,7 +295,7 @@ class LibvirtVMTemplate(VMTemplate): return xpath_get_text(xml, "/pool/@type")[0] def _get_volume_path(self, pool, vol): - pool = self._storage_validate() + pool = self._storage_validate(pool) try: return pool.storageVolLookupByName(vol).path() except: @@ -293,12 +303,11 @@ class LibvirtVMTemplate(VMTemplate): 'pool': pool}) def fork_vm_storage(self, vm_uuid): - # Provision storage: - # TODO: Rebase on the storage API once upstream - pool = self._storage_validate() + # Provision storages: vol_list = self.to_volume_list(vm_uuid) try: for v in vol_list: + pool = self._storage_validate(v['pool']) # outgoing text to libvirt, encode('utf-8') pool.createXML(v['xml'].encode('utf-8'), 0) except libvirt.libvirtError as e: diff --git a/src/wok/plugins/kimchi/model/vms.py b/src/wok/plugins/kimchi/model/vms.py index 366f1b4..ebe716e 100644 --- a/src/wok/plugins/kimchi/model/vms.py 
+++ b/src/wok/plugins/kimchi/model/vms.py @@ -151,28 +151,22 @@ class VMsModel(object): wok_log.error('Error trying to update database with guest ' 'icon information due error: %s', e.message) - # If storagepool is SCSI, volumes will be LUNs and must be passed by - # the user from UI or manually. - cb('Provisioning storage for new VM') - vol_list = [] - if t._get_storage_type() not in ["iscsi", "scsi"]: - vol_list = t.fork_vm_storage(vm_uuid) + cb('Provisioning storages for new VM') + vol_list = t.fork_vm_storage(vm_uuid) graphics = params.get('graphics', {}) stream_protocols = self.caps.libvirt_stream_protocols xml = t.to_vm_xml(name, vm_uuid, libvirt_stream_protocols=stream_protocols, - graphics=graphics, - volumes=vol_list) + graphics=graphics) cb('Defining new VM') try: conn.defineXML(xml.encode('utf-8')) except libvirt.libvirtError as e: - if t._get_storage_type() not in READONLY_POOL_TYPE: - for v in vol_list: - vol = conn.storageVolLookupByPath(v['path']) - vol.delete(0) + for v in vol_list: + vol = conn.storageVolLookupByPath(v['path']) + vol.delete(0) raise OperationFailed("KCHVM0007E", {'name': name, 'err': e.get_error_message()}) diff --git a/src/wok/plugins/kimchi/vmtemplate.py b/src/wok/plugins/kimchi/vmtemplate.py index c7635b0..69cc9b5 100644 --- a/src/wok/plugins/kimchi/vmtemplate.py +++ b/src/wok/plugins/kimchi/vmtemplate.py @@ -184,11 +184,6 @@ class VMTemplate(object): return xml def _get_disks_xml(self, vm_uuid): - # Current implementation just allows to create disk in one single - # storage pool, so we cannot mix the types (scsi volumes vs img file) - storage_type = self._get_storage_type() - storage_path = self._get_storage_path() - base_disk_params = {'type': 'disk', 'disk': 'file', 'bus': self.info['disk_bus']} logical_disk_params = {'format': 'raw'} @@ -199,37 +194,44 @@ class VMTemplate(object): 'format': 'raw', 'bus': 'scsi'} disks_xml = '' - pool_name = pool_name_from_uri(self.info['storagepool']) for index, disk in enumerate(self.info['disks']): params = dict(base_disk_params) params['format'] = disk['format'] - params.update(locals().get('%s_disk_params' % storage_type, {})) + params.update(locals().get('%s_disk_params' % + disk['pool']['type'], {})) params['index'] = index volume = disk.get('volume') if volume is not None: - params['path'] = self._get_volume_path(pool_name, volume) + params['path'] = self._get_volume_path(disk['pool']['name'], + volume) else: - volume = "%s-%s.img" % (vm_uuid, params['index']) - params['path'] = os.path.join(storage_path, volume) + img = "%s-%s.img" % (vm_uuid, params['index']) + storage_path = self._get_storage_path(disk['pool']['name']) + params['path'] = os.path.join(storage_path, img) disks_xml += get_disk_xml(params)[1] return unicode(disks_xml, 'utf-8') def to_volume_list(self, vm_uuid): - storage_path = self._get_storage_path() ret = [] for i, d in enumerate(self.info['disks']): + # Create only .img. 
If storagepool is (i)SCSI, volumes will be LUNs + if d['pool']['type'] in ["iscsi", "scsi"]: + continue + index = d.get('index', i) volume = "%s-%s.img" % (vm_uuid, index) + storage_path = self._get_storage_path(d['pool']['name']) info = {'name': volume, 'capacity': d['size'], 'format': d['format'], - 'path': '%s/%s' % (storage_path, volume)} + 'path': '%s/%s' % (storage_path, volume), + 'pool': d['pool']['name']} - if 'logical' == self._get_storage_type() or \ + if 'logical' == d['pool']['type'] or \ info['format'] not in ['qcow2', 'raw']: info['allocation'] = info['capacity'] else: @@ -414,7 +416,7 @@ class VMTemplate(object): def fork_vm_storage(self, vm_uuid): pass - def _get_storage_path(self): + def _get_storage_path(self, pool_uri=None): return '' def _get_storage_type(self, pool=None): -- 2.1.0

- This patch implements necessary changes in frontend to allow users to edit and add/remove extra disks to a chosen Template. - This patch also refactor the UI code in order to avoid lots of backend calls, improving performance. - Template disks data saved in objectstore have new fields: * 'size' is always recorded, even for ISCSI/SCSI types; * 'storagepool type' is always recorded; * example: "disks":[ { "index": 0, "format": "raw", "volume": "unit:0:0:2", "pool": { "name": "/plugins/kimchi/storagepools/test-iscsi", "type": "iscsi" } "size": 7 }, { "index": 1, "format": "qcow2", "pool": { "name": "/plugins/kimchi/storagepools/default", "type": "dir" } "size": 3 } ] Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com> --- .../kimchi/ui/js/src/kimchi.template_edit_main.js | 246 ++++++++++----------- .../kimchi/ui/pages/template-edit.html.tmpl | 5 +- 2 files changed, 123 insertions(+), 128 deletions(-) diff --git a/src/wok/plugins/kimchi/ui/js/src/kimchi.template_edit_main.js b/src/wok/plugins/kimchi/ui/js/src/kimchi.template_edit_main.js index 5b77892..10f11ed 100644 --- a/src/wok/plugins/kimchi/ui/js/src/kimchi.template_edit_main.js +++ b/src/wok/plugins/kimchi/ui/js/src/kimchi.template_edit_main.js @@ -22,7 +22,6 @@ kimchi.template_edit_main = function() { var origNetworks; var templateDiskSize; $('#template-name', templateEditMain).val(kimchi.selectedTemplate); - //templateEditMain.tabs(); $('#edit-template-tabs a[data-toggle="tab"]').on('shown.bs.tab', function (e) { var target = $(this).attr('href'); @@ -32,7 +31,7 @@ kimchi.template_edit_main = function() { }); var initTemplate = function(template) { - origDisks = template.disks; + origDisks = template.disks; origPool = template.storagepool; origNetworks = template.networks; for(var i=0;i<template.disks.length;i++){ @@ -68,134 +67,132 @@ kimchi.template_edit_main = function() { return false; } enableSpice(); + var initStorage = function(result) { - var scsipools = {}; + // Gather storagepools data + var storagePoolsInfo = new Object(); + $.each(result, function(index, pool) { + if (pool.state === 'active' && pool.type != 'kimchi-iso') { + if (pool.type === 'iscsi' || pool.type === 'scsi') { + volumes = new Object(); + kimchi.listStorageVolumes(pool.name, function(vols) { + $.each(vols, function(i, vol) { + storagePoolsInfo[pool.name + "/" + vol.name] = { + 'type' : pool.type, + 'volSize': vol.capacity / Math.pow(1024, 3)}; + }); + }, null, true); + } else { + storagePoolsInfo[pool.name] = { 'type' : pool.type }; + } + } + }); + var addStorageItem = function(storageData) { var thisName = storageData.storageName; + // Compatibility with old versions + if (storageData.storageVolume) { + storageData.storageDisk = storagePoolsInfo[thisName].volSize; + } + if (!storageData.storageType) { + storageData.storageType = storagePoolsInfo[thisName].type; + } + var nodeStorage = $.parseHTML(wok.substitute($('#template-storage-pool-tmpl').html(), storageData)); $('.template-tab-body', '#form-template-storage').append(nodeStorage); + var storageRow = '#storageRow' + storageData.storageIndex; + var storageOptions = ''; - var scsiOptions = ''; - $('#selectStorageName').find('option').remove(); - $.each(result, function(index, storageEntities) { - if((storageEntities.state === 'active') && (storageEntities.type != 'kimchi-iso')) { - if(storageEntities.type === 'iscsi' || storageEntities.type === 'scsi') { - kimchi.listStorageVolumes(storageEntities.name, function(currentVolume) { - $.each(currentVolume, function(indexSCSI, scsiEntities) 
{ - var tmpPath = storageEntities.name + '/' + scsiEntities.name; - var isSlected = tmpPath === thisName ? ' selected' : ''; - scsiOptions += '<option' + isSlected + '>' + tmpPath + '</option>'; - }); - $('#selectStorageName').append(scsiOptions); - }, function() {}); - } else { - var isSlected = storageEntities.name === thisName ? ' selected' : ''; - storageOptions += '<option' + isSlected + '>' + storageEntities.name + '</option>'; - } - } + $.each(storagePoolsInfo, function(poolName, value) { + storageOptions += '<option value="' + poolName + '">' + poolName + '</option>'; }); - $('#selectStorageName').append(storageOptions); - $('select','#form-template-storage').selectpicker(); + + $(storageRow + ' #selectStorageName').append(storageOptions); + $(storageRow + ' #selectStorageName').val(storageData.storageName); + $(storageRow + ' #selectStorageName').selectpicker(); + + if (storageData.storageType === 'iscsi' || storageData.storageType === 'scsi') { + $(storageRow + ' .template-storage-disk').attr('readonly', true).prop('disabled', true); + $(storageRow + ' #diskFormat').val('raw'); + $(storageRow + ' #diskFormat').prop('disabled', true).change(); + } else if (storageData.storageType === 'logical') { + $(storageRow + ' #diskFormat').val('raw'); + $(storageRow + ' #diskFormat').prop('disabled', true).change(); + } // Set disk format if (isImageBasedTemplate()) { - $('#diskFormat').val('qcow2'); - $('#diskFormat').prop('disabled', 'disabled'); + $(storageRow + ' #diskFormat').val('qcow2'); + $(storageRow + ' #diskFormat').prop('disabled', 'disabled'); } else { - $('#diskFormat').val(storageData.storageDiskFormat); - $('#diskFormat').on('change', function() { - $('.template-storage-disk-format').val($(this).val()); + $(storageRow + ' #diskFormat').val(storageData.storageDiskFormat); + $(storageRow + ' #diskFormat').on('change', function() { + $(storageRow + ' .template-storage-disk-format').val($(this).val()); }); } + $(storageRow + ' #diskFormat').selectpicker(); - $('#selectStorageName').change(function() { - var selectedItem = $(this).parent().parent(); - var tempStorageNameFull = $(this).val(); - var tempName = tempStorageNameFull.split('/'); - var tempStorageName = tempName[0]; - $('.template-storage-name').val(tempStorageNameFull); - kimchi.getStoragePool(tempStorageName, function(info) { - tempType = info.type; - selectedItem.find('.template-storage-type').val(tempType); - if (tempType === 'iscsi' || tempType === 'scsi') { - kimchi.getStoragePoolVolume(tempStorageName, tempName[tempName.length-1], function(info) { - volSize = info.capacity / Math.pow(1024, 3); - $('.template-storage-disk', selectedItem).attr('readonly', true).val(volSize); - if (!isImageBasedTemplate()) { - $('#diskFormat').val('raw'); - $('#diskFormat').prop('disabled', true).change(); - } - }); - } else if (tempType === 'logical') { - $('.template-storage-disk', selectedItem).attr('readonly', false); - if (!isImageBasedTemplate()) { - $('#diskFormat').val('raw'); - $('#diskFormat').prop('disabled', true).change(); - } - } else { - $('.template-storage-disk', selectedItem).attr('readonly', false); - if ($('#diskFormat').prop('disabled') == true && - !isImageBasedTemplate()) { - $('#diskFormat').val('qcow2'); - $('#diskFormat').prop('disabled', false).change(); - } - } - }); + $('.delete', '#form-template-storage').on( "click",function(event) { + event.preventDefault(); + $(this).parent().parent().remove(); }); - }; - if ((origDisks && origDisks.length) && (origPool && origPool.length)) { - splitPool = 
origPool.split('/'); - var defaultPool = splitPool[splitPool.length-1]; - var defaultType; - - kimchi.getStoragePool(defaultPool, function(info) { - defaultType = info.type; - $.each(origDisks, function(index, diskEntities) { - var storageNodeData = { - viewMode : '', - editMode : 'hide', - storageName : defaultPool, - storageType : defaultType, - storageDisk : diskEntities.size, - storageDiskFormat : diskEntities.format ? diskEntities.format : 'qcow2' + $(storageRow + ' #selectStorageName').change(function() { + var poolType = storagePoolsInfo[$(this).val()].type; + $(storageRow + ' .template-storage-name').val($(this).val()); + $(storageRow + ' .template-storage-type').val(poolType); + if (poolType === 'iscsi' || poolType === 'scsi') { + $(storageRow + ' .template-storage-disk').attr('readonly', true).prop('disabled', true).val(storagePoolsInfo[$(this).val()].volSize); + if (!isImageBasedTemplate()) { + $(storageRow + ' #diskFormat').val('raw').prop('disabled', true).change(); } - - if (diskEntities.volume) { - kimchi.getStoragePoolVolume(defaultPool, diskEntities.volume, function(info) { - var volSize = info.capacity / Math.pow(1024, 3); - var nodeData = storageNodeData - nodeData.storageName = defaultPool + '/' + diskEntities.volume; - nodeData.storageDisk = volSize; - addStorageItem(nodeData); - $('.template-storage-disk').attr('readonly', true); - $('#diskFormat').val('raw'); - $('#diskFormat').prop('disabled', true).change(); - }); - } else if (defaultType === 'logical') { - addStorageItem(storageNodeData); - $('#diskFormat').val('raw'); - $('#diskFormat').prop('disabled', true).change(); - } else { - addStorageItem(storageNodeData); + } else if (poolType === 'logical') { + $(storageRow + ' .template-storage-disk').attr('readonly', false).prop('disabled', false); + if (!isImageBasedTemplate()) { + $(storageRow + ' #diskFormat').val('raw').prop('disabled', true).change(); } - }); + } else { + $(storageRow + ' .template-storage-disk').attr('readonly', false).prop('disabled', false); + if ($(storageRow + ' #diskFormat').prop('disabled') == true && !isImageBasedTemplate()) { + $(storageRow + ' #diskFormat').val('qcow2').prop('disabled', false).change(); + } + } + $(storageRow + ' #diskFormat').selectpicker('refresh'); + }); + }; // End of addStorageItem funtion + + if ((origDisks && origDisks.length) && (origPool && origPool.length)) { + origDisks.sort(function(a, b){return a.index-b.index}); + $.each(origDisks, function(index, diskEntities) { + var defaultPool = diskEntities.pool.name ? diskEntities.pool.name.split('/').pop() : origPool.split('/').pop() + var storageNodeData = { + storageIndex : diskEntities.index, + storageName : diskEntities.volume ? defaultPool + '/' + diskEntities.volume : defaultPool, + storageType : diskEntities.pool.type, + storageDisk : diskEntities.size, + storageDiskFormat : diskEntities.format ? 
diskEntities.format : 'qcow2', + storageVolume : diskEntities.volume + } + addStorageItem(storageNodeData); }); } + var storageID = origDisks.length -1; $('#template-edit-storage-add-button').on("click", function(event) { event.preventDefault(); + storageID = storageID + 1; var storageNodeData = { - viewMode : 'hide', - editMode : '', - storageName : 'null', + storageName : 'default', storageType : 'dir', - storageDisk : '10' + storageDisk : '10', + storageDiskFormat : 'qcow2', + storageIndex : storageID } addStorageItem(storageNodeData); }); }; + var initInterface = function(result) { var networkItemNum = 0; var addInterfaceItem = function(networkData) { @@ -235,6 +232,7 @@ kimchi.template_edit_main = function() { }); }); }; + var initProcessor = function(){ var setCPUValue = function(){ if(!$('#cores').hasClass("invalid-field")&&$('#cores').val()!=""){ @@ -278,36 +276,30 @@ kimchi.template_edit_main = function() { }; kimchi.retrieveTemplate(kimchi.selectedTemplate, initTemplate); - $('#tmpl-edit-button-save').on('click', function() { - var editableFields = [ 'name', 'memory', 'disks', 'graphics']; + var editableFields = [ 'name', 'memory', 'graphics']; var data = {}; - //Fix me: Only support one storage pool now - var storages = $('.template-tab-body .item', '#form-template-storage'); - var tempName = $('.template-storage-name', storages).val(); - var tmpItem = $('#form-template-storage .item'); - tempName = tempName.split('/'); - var tempNameHead = tempName[0]; - var tempNameTail = tempNameHead; - if($('.template-storage-type', tmpItem).val() === 'iscsi' || $('.template-storage-type', tmpItem).val() == 'scsi') { - tempNameTail = tempName[tempName.length-1]; - } - tempName = '/plugins/kimchi/storagepools/' + tempNameHead; - data['storagepool'] = tempName; - $.each(editableFields, function(i, field) { - /* Support only 1 disk at this moment */ - if (field == 'disks') { - if($('.template-storage-type', tmpItem).val() === 'iscsi' || $('.template-storage-type', tmpItem).val() == 'scsi') { - origDisks[0]['size'] && delete origDisks[0]['size']; - origDisks[0]['volume'] = tempNameTail; - } else { - origDisks[0]['volume'] && delete origDisks[0]['volume']; - origDisks[0].size = Number($('.template-storage-disk', tmpItem).val()); - } - origDisks[0].format = $('.template-storage-disk-format', tmpItem).val(); - data[field] = origDisks; + var disks = $('.template-tab-body .item', '#form-template-storage'); + var disksForUpdate = new Array(); + $.each(disks, function(index, diskEntity) { + var newDisk = { + 'index' : index, + 'pool' : '/plugins/kimchi/storagepools/' + $(diskEntity).find('.template-storage-name').val(), + 'size' : Number($(diskEntity).find('.template-storage-disk').val()), + 'format' : $(diskEntity).find('.template-storage-disk-format').val() + }; + + var storageType = $(diskEntity).find('.template-storage-type').val(); + if(storageType === 'iscsi' || storageType === 'scsi') { + newDisk['volume'] = newDisk['pool'].split('/').pop(); + newDisk['pool'] = newDisk['pool'].slice(0, newDisk['pool'].lastIndexOf('/')); } - else if (field == 'graphics') { + disksForUpdate.push(newDisk); + }); + data.disks = disksForUpdate; + + $.each(editableFields, function(i, field) { + if (field == 'graphics') { var type = $('#form-template-general [name="' + field + '"]').val(); data[field] = {'type': type}; } diff --git a/src/wok/plugins/kimchi/ui/pages/template-edit.html.tmpl b/src/wok/plugins/kimchi/ui/pages/template-edit.html.tmpl index 59a2cfa..dc6b762 100644 --- 
a/src/wok/plugins/kimchi/ui/pages/template-edit.html.tmpl +++ b/src/wok/plugins/kimchi/ui/pages/template-edit.html.tmpl @@ -156,7 +156,7 @@ kimchi.template_edit_main(); </script> <script id="template-storage-pool-tmpl" type="text/html"> - <div class='item'> + <div id="storageRow{storageIndex}" class='item'> <span class="template-storage-cell storage-pool"> <input class="template-storage-name" value={storageName} type="text" style="display:none" /> <select id="selectStorageName"></select> @@ -182,6 +182,9 @@ <option value="vpc">vpc</option> </select> </span> + <span class="pull-right"> + <button class="delete btn-primary btn"><i class="fa fa-minus-circle"></i> $_("Delete")</button> + </span> </div> </script> <script id="template-interface-tmpl" type="text/html"> -- 2.1.0
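On save, the edit window now builds one entry per storage row instead of a single template-wide pool; for (i)SCSI rows the selected "pool/volume" string is split back into a pool URI plus a volume name, as the JavaScript above does. The resulting PUT body, using the same illustrative values as the commit message, would look like:

    # Disks array sent by the Template edit window (values are illustrative).
    disks_for_update = [
        {'index': 0, 'size': 7, 'format': 'raw',
         # (i)SCSI row: "test-iscsi/unit:0:0:2" is split so that 'pool' keeps
         # the URI and 'volume' keeps the LUN name.
         'pool': '/plugins/kimchi/storagepools/test-iscsi',
         'volume': 'unit:0:0:2'},
        {'index': 1, 'size': 3, 'format': 'qcow2',
         'pool': '/plugins/kimchi/storagepools/default'},
    ]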

Currently, the VCPU count is updated by editing the configuration XML (tag: <vcpu>). However, in some cases, editing that tag will only update the maximum VCPU count, not the current one. For example, if the VM has the following XML tag: <vcpu current='2'>8</vcpu> and the user updates the VCPU value to 10, Kimchi will update the XML tag to: <vcpu current='2'>10</vcpu> Use the libvirt function "setVcpusFlags" to update the current and the maximum VCPU values to the one specified by the user. * This patch also changes vcpu setting when guest is being created. Now 'current' is always set and is the total of cpus from the Template and maxVcpu is the >= 255 or (sockets * cores * threads) in PPC. In x86 it is the value of libvirt getMaxVcpu. * Change NUMA management. Sets all cpus to 0. There is not impact to the guest. Signed-off-by: Aline Manera <alinefm@linux.vnet.ibm.com> Signed-off-by: Crístian Deives <cristiandeives@gmail.com> Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com> Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com> --- src/wok/plugins/kimchi/i18n.py | 3 + src/wok/plugins/kimchi/mockmodel.py | 52 ++++++-------- src/wok/plugins/kimchi/model/vms.py | 116 ++++++++++++++++++++++++++---- src/wok/plugins/kimchi/tests/test_rest.py | 10 ++- src/wok/plugins/kimchi/vmtemplate.py | 25 ++++++- 5 files changed, 157 insertions(+), 49 deletions(-) diff --git a/src/wok/plugins/kimchi/i18n.py b/src/wok/plugins/kimchi/i18n.py index a01c3de..7154a62 100644 --- a/src/wok/plugins/kimchi/i18n.py +++ b/src/wok/plugins/kimchi/i18n.py @@ -129,6 +129,9 @@ messages = { "KCHVM0070E": _("Error creating local host ssh rsa key of user 'root'."), "KCHVM0071E": _("Memory value %(mem)s must be aligned to %(alignment)sMiB."), "KCHVM0072E": _("Template given has multiple disks assigned to different types of storage pools, conflicting with storage pool provided."), + "KCHVM0073E": _("Unable to update the following parameters while the VM is offline: %(params)s"), + "KCHVM0074E": _("Unable to update the following parameters while the VM is online: %(params)s"), + "KCHVM0075E": _("Cannot change VCPU value because '%(vm)s' has a topology defined - sockets: %(sockets)s, cores: %(cores)s, threads: %(threads)s."), "KCHVMHDEV0001E": _("VM %(vmid)s does not contain directly assigned host device %(dev_name)s."), "KCHVMHDEV0002E": _("The host device %(dev_name)s is not allowed to directly assign to VM."), diff --git a/src/wok/plugins/kimchi/mockmodel.py b/src/wok/plugins/kimchi/mockmodel.py index 9186f78..c679378 100644 --- a/src/wok/plugins/kimchi/mockmodel.py +++ b/src/wok/plugins/kimchi/mockmodel.py @@ -21,6 +21,8 @@ import libvirt import lxml.etree as ET import os import time + +from collections import defaultdict from lxml import objectify from lxml.builder import E @@ -60,10 +62,9 @@ storagevolumes.VALID_RAW_CONTENT = ['dos/mbr boot sector', class MockModel(Model): - _mock_vms = {} + _mock_vms = defaultdict(list) _mock_snapshots = {} _XMLDesc = libvirt.virDomain.XMLDesc - _defineXML = libvirt.virConnect.defineXML _undefineDomain = libvirt.virDomain.undefine _libvirt_get_vol_path = LibvirtVMTemplate._get_volume_path @@ -81,7 +82,6 @@ class MockModel(Model): cpuinfo.get_topo_capabilities = MockModel.get_topo_capabilities vmifaces.getDHCPLeases = MockModel.getDHCPLeases - libvirt.virConnect.defineXML = MockModel.domainDefineXML libvirt.virDomain.XMLDesc = MockModel.domainXMLDesc libvirt.virDomain.undefine = MockModel.undefineDomain libvirt.virDomain.attachDeviceFlags = 
MockModel.attachDeviceFlags @@ -124,7 +124,7 @@ class MockModel(Model): imageinfo.probe_image = self._probe_image def reset(self): - MockModel._mock_vms = {} + MockModel._mock_vms = defaultdict(list) MockModel._mock_snapshots = {} if hasattr(self, 'objstore'): @@ -157,21 +157,14 @@ class MockModel(Model): return ET.fromstring(xml) @staticmethod - def domainDefineXML(conn, xml): - name = objectify.fromstring(xml).name.text - try: - dom = conn.lookupByName(name) - if not dom.isActive(): - MockModel._mock_vms[name] = xml - except: - pass - - return MockModel._defineXML(conn, xml) - - @staticmethod def domainXMLDesc(dom, flags=0): - return MockModel._mock_vms.get(dom.name(), - MockModel._XMLDesc(dom, flags)) + xml = MockModel._XMLDesc(dom, flags) + root = objectify.fromstring(xml) + + for dev_xml in MockModel._mock_vms.get(dom.name(), []): + dev = objectify.fromstring(dev_xml) + root.devices.append(dev) + return ET.tostring(root, encoding="utf-8") @staticmethod def undefineDomain(dom): @@ -182,12 +175,7 @@ class MockModel(Model): @staticmethod def attachDeviceFlags(dom, xml, flags=0): - old_xml = dom.XMLDesc(libvirt.VIR_DOMAIN_XML_SECURE) - root = objectify.fromstring(old_xml) - dev = objectify.fromstring(xml) - root.devices.append(dev) - - MockModel._mock_vms[dom.name()] = ET.tostring(root, encoding="utf-8") + MockModel._mock_vms[dom.name()].append(xml) @staticmethod def _get_device_node(dom, xml): @@ -213,16 +201,18 @@ class MockModel(Model): @staticmethod def detachDeviceFlags(dom, xml, flags=0): - root, dev = MockModel._get_device_node(dom, xml) - root.devices.remove(dev) - - MockModel._mock_vms[dom.name()] = ET.tostring(root, encoding="utf-8") + node = ET.fromstring(xml) + xml = ET.tostring(node, encoding="utf-8", pretty_print=True) + if xml in MockModel._mock_vms[dom.name()]: + MockModel._mock_vms[dom.name()].remove(xml) @staticmethod def updateDeviceFlags(dom, xml, flags=0): - root, old_dev = MockModel._get_device_node(dom, xml) - root.devices.replace(old_dev, objectify.fromstring(xml)) - MockModel._mock_vms[dom.name()] = ET.tostring(root, encoding="utf-8") + _, old_dev = MockModel._get_device_node(dom, xml) + old_xml = ET.tostring(old_dev, encoding="utf-8", pretty_print=True) + if old_xml in MockModel._mock_vms[dom.name()]: + MockModel._mock_vms[dom.name()].remove(old_xml) + MockModel._mock_vms[dom.name()].append(xml) @staticmethod def volResize(vol, size, flags=0): diff --git a/src/wok/plugins/kimchi/model/vms.py b/src/wok/plugins/kimchi/model/vms.py index ebe716e..4835adb 100644 --- a/src/wok/plugins/kimchi/model/vms.py +++ b/src/wok/plugins/kimchi/model/vms.py @@ -50,6 +50,7 @@ from wok.plugins.kimchi.config import READONLY_POOL_TYPE, get_kimchi_version from wok.plugins.kimchi.kvmusertests import UserTests from wok.plugins.kimchi.osinfo import PPC_MEM_ALIGN from wok.plugins.kimchi.model.config import CapabilitiesModel +from wok.plugins.kimchi.model.cpuinfo import CPUInfoModel from wok.plugins.kimchi.model.featuretests import FeatureTests from wok.plugins.kimchi.model.templates import TemplateModel from wok.plugins.kimchi.model.utils import get_ascii_nonascii_name, get_vm_name @@ -71,10 +72,16 @@ DOM_STATE_MAP = {0: 'nostate', 6: 'crashed', 7: 'pmsuspended'} -VM_STATIC_UPDATE_PARAMS = {'name': './name', - 'cpus': './vcpu'} +VM_STATIC_UPDATE_PARAMS = {'name': './name', 'cpus': './vcpu'} + VM_LIVE_UPDATE_PARAMS = {} +# update parameters which are updatable when the VM is online +VM_ONLINE_UPDATE_PARAMS = ['graphics', 'groups', 'memory', 'users'] +# update parameters which are updatable 
when the VM is offline +VM_OFFLINE_UPDATE_PARAMS = ['cpus', 'graphics', 'groups', 'memory', 'name', + 'users'] + XPATH_DOMAIN_DISK = "/domain/devices/disk[@device='disk']/source/@file" XPATH_DOMAIN_DISK_BY_FILE = "./devices/disk[@device='disk']/source[@file='%s']" XPATH_DOMAIN_NAME = '/domain/name' @@ -84,8 +91,10 @@ XPATH_DOMAIN_MAC_BY_ADDRESS = "./devices/interface[@type='network']/"\ XPATH_DOMAIN_MEMORY = '/domain/memory' XPATH_DOMAIN_MEMORY_UNIT = '/domain/memory/@unit' XPATH_DOMAIN_UUID = '/domain/uuid' +XPATH_DOMAIN_DEV_CPU_ID = '/domain/devices/spapr-cpu-socket/@id' XPATH_NUMA_CELL = './cpu/numa/cell' +XPATH_TOPOLOGY = './cpu/topology' # key: VM name; value: lock object vm_locks = {} @@ -98,6 +107,17 @@ class VMsModel(object): self.caps = CapabilitiesModel(**kargs) self.task = TaskModel(**kargs) + def _get_host_maxcpu(self): + if os.uname()[4] in ['ppc', 'ppc64', 'ppc64le']: + cpu_model = CPUInfoModel(conn=self.conn) + max_vcpu_val = (cpu_model.cores_available * + cpu_model.threads_per_core) + if max_vcpu_val > 255: + max_vcpu_val = 255 + else: + max_vcpu_val = self.conn.get().getMaxVcpus('kvm') + return max_vcpu_val + def create(self, params): t_name = template_name_from_uri(params['template']) vm_list = self.get_list() @@ -158,7 +178,8 @@ class VMsModel(object): stream_protocols = self.caps.libvirt_stream_protocols xml = t.to_vm_xml(name, vm_uuid, libvirt_stream_protocols=stream_protocols, - graphics=graphics) + graphics=graphics, + max_vcpus=self._get_host_maxcpu()) cb('Defining new VM') try: @@ -225,6 +246,30 @@ class VMModel(object): self.vmsnapshots = cls(**kargs) self.stats = {} + def has_topology(self, dom): + xml = dom.XMLDesc(0) + sockets = xpath_get_text(xml, XPATH_TOPOLOGY + '/@sockets') + cores = xpath_get_text(xml, XPATH_TOPOLOGY + '/@cores') + threads = xpath_get_text(xml, XPATH_TOPOLOGY + '/@threads') + return sockets and cores and threads + + def get_vm_max_sockets(self, dom): + return int(xpath_get_text(dom.XMLDesc(0), + XPATH_TOPOLOGY + '/@sockets')[0]) + + def get_vm_sockets(self, dom): + current_vcpu = dom.vcpusFlags(libvirt.VIR_DOMAIN_AFFECT_CURRENT) + return (current_vcpu / self.get_vm_cores(dom) / + self.get_vm_threads(dom)) + + def get_vm_cores(self, dom): + return int(xpath_get_text(dom.XMLDesc(0), + XPATH_TOPOLOGY + '/@cores')[0]) + + def get_vm_threads(self, dom): + return int(xpath_get_text(dom.XMLDesc(0), + XPATH_TOPOLOGY + '/@threads')[0]) + def update(self, name, params): lock = vm_locks.get(name) if lock is None: @@ -241,8 +286,30 @@ class VMModel(object): 'alignment': str(PPC_MEM_ALIGN)}) dom = self.get_vm(name, self.conn) - vm_name, dom = self._static_vm_update(name, dom, params) + if DOM_STATE_MAP[dom.info()[0]] == 'shutoff': + ext_params = set(params.keys()) - set(VM_OFFLINE_UPDATE_PARAMS) + if len(ext_params) > 0: + raise InvalidParameter('KCHVM0073E', + {'params': ', '.join(ext_params)}) + else: + ext_params = set(params.keys()) - set(VM_ONLINE_UPDATE_PARAMS) + if len(ext_params) > 0: + raise InvalidParameter('KCHVM0074E', + {'params': ', '.join(ext_params)}) + + if 'cpus' in params and DOM_STATE_MAP[dom.info()[0]] == 'shutoff': + # user cannot change vcpu if topology is defined. 
+ curr_vcpu = dom.vcpusFlags(libvirt.VIR_DOMAIN_AFFECT_CURRENT) + if self.has_topology(dom) and curr_vcpu != params['cpus']: + raise InvalidOperation( + 'KCHVM0075E', + {'vm': dom.name(), + 'sockets': self.get_vm_sockets(dom), + 'cores': self.get_vm_cores(dom), + 'threads': self.get_vm_threads(dom)}) + self._live_vm_update(dom, params) + vm_name, dom = self._static_vm_update(name, dom, params) return vm_name def clone(self, name): @@ -707,23 +774,34 @@ class VMModel(object): params['name'], nonascii_name = get_ascii_nonascii_name(name) for key, val in params.items(): + change_numa = True if key in VM_STATIC_UPDATE_PARAMS: if type(val) == int: val = str(val) xpath = VM_STATIC_UPDATE_PARAMS[key] - new_xml = xml_item_update(new_xml, xpath, val) + attrib = None + if key == 'cpus': + if self.has_topology(dom) or dom.isActive(): + change_numa = False + continue + # Update maxvcpu firstly + new_xml = xml_item_update(new_xml, xpath, + str(self._get_host_maxcpu()), + attrib) + # Update current vcpu + attrib = 'current' + new_xml = xml_item_update(new_xml, xpath, val, attrib) # Updating memory and NUMA if necessary, if vm is offline if not dom.isActive(): if 'memory' in params: new_xml = self._update_memory_config(new_xml, params) - elif 'cpus' in params and \ + elif 'cpus' in params and change_numa and \ (xpath_get_text(new_xml, XPATH_NUMA_CELL + '/@memory') != []): - vcpus = params['cpus'] new_xml = xml_item_update( new_xml, XPATH_NUMA_CELL, - value='0-' + str(vcpus - 1) if vcpus > 1 else '0', + value='0', attr='cpus') if 'graphics' in params: @@ -758,6 +836,8 @@ class VMModel(object): raise OperationFailed("KCHVM0008E", {'name': vm_name, 'err': e.get_error_message()}) + if name is not None: + vm_name = name return (nonascii_name if nonascii_name is not None else vm_name, dom) def _update_memory_config(self, xml, params): @@ -769,21 +849,20 @@ class VMModel(object): vcpus = params.get('cpus') if numa_mem == []: if vcpus is None: - vcpus = int(xpath_get_text(xml, - VM_STATIC_UPDATE_PARAMS['cpus'])[0]) + vcpus = int(xpath_get_text(xml, 'vcpu')[0]) cpu = root.find('./cpu') if cpu is None: - cpu = get_cpu_xml(vcpus, params['memory'] << 10) + cpu = get_cpu_xml(0, params['memory'] << 10) root.insert(0, ET.fromstring(cpu)) else: - numa_element = get_numa_xml(vcpus, params['memory'] << 10) + numa_element = get_numa_xml(0, params['memory'] << 10) cpu.insert(0, ET.fromstring(numa_element)) else: if vcpus is not None: xml = xml_item_update( xml, XPATH_NUMA_CELL, - value='0-' + str(vcpus - 1) if vcpus > 1 else '0', + value='0', attr='cpus') root = ET.fromstring(xml_item_update(xml, XPATH_NUMA_CELL, str(params['memory'] << 10), @@ -847,6 +926,17 @@ class VMModel(object): return new_xml return ET.tostring(root, encoding="utf-8") + def _get_host_maxcpu(self): + if os.uname()[4] in ['ppc', 'ppc64', 'ppc64le']: + cpu_model = CPUInfoModel(conn=self.conn) + max_vcpu_val = (cpu_model.cores_available * + cpu_model.threads_per_core) + if max_vcpu_val > 255: + max_vcpu_val = 255 + else: + max_vcpu_val = self.conn.get().getMaxVcpus('kvm') + return max_vcpu_val + def _live_vm_update(self, dom, params): self._vm_update_access_metadata(dom, params) if 'memory' in params and dom.isActive(): diff --git a/src/wok/plugins/kimchi/tests/test_rest.py b/src/wok/plugins/kimchi/tests/test_rest.py index 544f2e6..9fa8c8d 100644 --- a/src/wok/plugins/kimchi/tests/test_rest.py +++ b/src/wok/plugins/kimchi/tests/test_rest.py @@ -144,6 +144,10 @@ class RestTests(unittest.TestCase): vm = 
json.loads(self.request('/plugins/kimchi/vms/vm-1').read()) self.assertEquals('vm-1', vm['name']) + req = json.dumps({'cpus': 3}) + resp = self.request('/plugins/kimchi/vms/vm-1', req, 'PUT') + self.assertEquals(200, resp.status) + resp = self.request('/plugins/kimchi/vms/vm-1/start', '{}', 'POST') self.assertEquals(200, resp.status) @@ -155,9 +159,10 @@ class RestTests(unittest.TestCase): resp = self.request('/plugins/kimchi/vms/vm-1', req, 'PUT') self.assertEquals(400, resp.status) - req = json.dumps({'cpus': 3}) + # Unable to do CPU hotplug + req = json.dumps({'cpus': 5}) resp = self.request('/plugins/kimchi/vms/vm-1', req, 'PUT') - self.assertEquals(200, resp.status) + self.assertEquals(400, resp.status) # Check if there is support to memory hotplug, once vm is running resp = self.request('/plugins/kimchi/config/capabilities').read() @@ -171,6 +176,7 @@ class RestTests(unittest.TestCase): req = json.dumps({"graphics": {'passwd': "abcdef"}}) resp = self.request('/plugins/kimchi/vms/vm-1', req, 'PUT') + self.assertEquals(200, resp.status) info = json.loads(resp.read()) self.assertEquals('abcdef', info["graphics"]["passwd"]) self.assertEquals(None, info["graphics"]["passwdValidTo"]) diff --git a/src/wok/plugins/kimchi/vmtemplate.py b/src/wok/plugins/kimchi/vmtemplate.py index 69cc9b5..08adf4c 100644 --- a/src/wok/plugins/kimchi/vmtemplate.py +++ b/src/wok/plugins/kimchi/vmtemplate.py @@ -316,7 +316,7 @@ class VMTemplate(object): cpu_info = self.info.get('cpu_info') if cpu_info is not None: cpu_topo = cpu_info.get('topology') - return get_cpu_xml(self.info.get('cpus'), + return get_cpu_xml(0, self.info.get('memory') << 10, cpu_topo) @@ -329,7 +329,6 @@ class VMTemplate(object): params['qemu-namespace'] = '' params['cdroms'] = '' params['qemu-stream-cmdline'] = '' - params['cpu_info'] = self._get_cpu_xml() params['disks'] = self._get_disks_xml(vm_uuid) params['serial'] = get_serial_xml(params) @@ -340,6 +339,8 @@ class VMTemplate(object): libvirt_stream_protocols = kwargs.get('libvirt_stream_protocols', []) cdrom_xml = self._get_cdrom_xml(libvirt_stream_protocols) + max_vcpus = kwargs.get('max_vcpus', 1) + if not urlparse.urlparse(self.info.get('cdrom', "")).scheme in \ libvirt_stream_protocols and \ params.get('iso_stream', False): @@ -362,6 +363,24 @@ class VMTemplate(object): if distro == "IBM_PowerKVM": params['slots'] = 32 + cpu_topo = self.info.get('cpu_info').get('topology') + if (cpu_topo is not None): + sockets = int(max_vcpus / (cpu_topo['cores'] * + cpu_topo['threads'])) + self.info['cpu_info']['topology']['sockets'] = sockets + + # Reduce maxvcpu to fit number of sockets if necessary + total_max_vcpu = sockets * cpu_topo['cores'] * cpu_topo['threads'] + if total_max_vcpu != max_vcpus: + max_vcpus = total_max_vcpu + + params['vcpus'] = "<vcpu current='%s'>%d</vcpu>" % \ + (params['cpus'], max_vcpus) + else: + params['vcpus'] = "<vcpu current='%s'>%d</vcpu>" % \ + (params['cpus'], max_vcpus) + params['cpu_info'] = self._get_cpu_xml() + xml = """ <domain type='%(domain)s'> %(qemu-stream-cmdline)s @@ -369,7 +388,7 @@ class VMTemplate(object): <uuid>%(uuid)s</uuid> <maxMemory slots='%(slots)s' unit='KiB'>%(max_memory)s</maxMemory> <memory unit='MiB'>%(memory)s</memory> - <vcpu>%(cpus)s</vcpu> + %(vcpus)s %(cpu_info)s <os> <type arch='%(arch)s'>hvm</type> -- 2.1.0
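The commit message above names libvirt's setVcpusFlags as the way to keep both the maximum and the current VCPU count in line with the requested value. A minimal sketch of that call sequence, assuming a local qemu:///system connection and a shut-off guest named 'vm-1' (both assumptions):

    import libvirt

    conn = libvirt.open('qemu:///system')   # assumed hypervisor URI
    dom = conn.lookupByName('vm-1')         # hypothetical guest name

    # Raise the maximum first, then set the current count; both changes go to
    # the persistent configuration, which is the only place the maximum can be
    # changed while the guest is offline.
    dom.setVcpusFlags(8, libvirt.VIR_DOMAIN_AFFECT_CONFIG |
                         libvirt.VIR_DOMAIN_VCPU_MAXIMUM)
    dom.setVcpusFlags(4, libvirt.VIR_DOMAIN_AFFECT_CONFIG |
                         libvirt.VIR_DOMAIN_VCPU_CURRENT)

With a resulting <vcpu current='4'>8</vcpu> element, additional VCPUs can later be hot plugged up to the maximum, which is the future use case the cover letter mentions.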

This patch fixes problems in test cases after change the way Kimchi updates the disks in the Templates. Now disks have the storagepool assigned in its data. Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com> --- src/wok/plugins/kimchi/mockmodel.py | 8 +++- src/wok/plugins/kimchi/tests/test_model.py | 18 ++++++-- src/wok/plugins/kimchi/tests/test_rest.py | 15 ++++--- src/wok/plugins/kimchi/tests/test_template.py | 59 ++++++++++++++++++++------- 4 files changed, 75 insertions(+), 25 deletions(-) diff --git a/src/wok/plugins/kimchi/mockmodel.py b/src/wok/plugins/kimchi/mockmodel.py index c679378..2e58724 100644 --- a/src/wok/plugins/kimchi/mockmodel.py +++ b/src/wok/plugins/kimchi/mockmodel.py @@ -48,18 +48,20 @@ from wok.plugins.kimchi.model import storagevolumes from wok.plugins.kimchi.model.templates import LibvirtVMTemplate from wok.plugins.kimchi.model.users import PAMUsersModel from wok.plugins.kimchi.model.groups import PAMGroupsModel +from wok.plugins.kimchi.utils import pool_name_from_uri from wok.plugins.kimchi.vmtemplate import VMTemplate fake_user = {'root': 'letmein!'} mockmodel_defaults = { - 'storagepool': '/plugins/kimchi/storagepools/default-pool', 'domain': 'test', 'arch': 'i686' } storagevolumes.VALID_RAW_CONTENT = ['dos/mbr boot sector', 'x86 boot sector', 'data', 'empty'] +DEFAULT_POOL = '/plugins/kimchi/storagepools/default-pool' + class MockModel(Model): _mock_vms = defaultdict(list) @@ -73,6 +75,7 @@ class MockModel(Model): # test:///default driver defaults = dict(osinfo.defaults) defaults.update(mockmodel_defaults) + defaults['disks'][0]['pool'] = DEFAULT_POOL osinfo.defaults = dict(defaults) self._mock_vgs = MockVolumeGroups() @@ -238,7 +241,8 @@ class MockModel(Model): def _probe_image(self, path): return ('unknown', 'unknown') - def _get_volume_path(self, pool, vol): + def _get_volume_path(self, pool_uri, vol): + pool = pool_name_from_uri(pool_uri) pool_info = self.storagepool_lookup(pool) if pool_info['type'] == 'scsi': return self._mock_storagevolumes.scsi_volumes[vol]['path'] diff --git a/src/wok/plugins/kimchi/tests/test_model.py b/src/wok/plugins/kimchi/tests/test_model.py index 65d97f2..c5407d6 100644 --- a/src/wok/plugins/kimchi/tests/test_model.py +++ b/src/wok/plugins/kimchi/tests/test_model.py @@ -76,6 +76,16 @@ def tearDownModule(): shutil.rmtree(TMP_DIR) +def _setDiskPoolDefault(): + osinfo.defaults['disks'][0]['pool'] = \ + '/plugins/kimchi/storagepools/default' + + +def _setDiskPoolDefaultTest(): + osinfo.defaults['disks'][0]['pool'] = \ + '/plugins/kimchi/storagepools/default-pool' + + class ModelTests(unittest.TestCase): def setUp(self): self.tmp_store = '/tmp/kimchi-store-test' @@ -266,8 +276,8 @@ class ModelTests(unittest.TestCase): "networks": ["default"], "memory": 1024, "folder": [], "icon": "images/icon-vm.png", "os_distro": "unknown", "os_version": "unknown", - "disks": [{"base": vol_path, "size": 10}], - "storagepool": "/plugins/kimchi/storagepools/default"} + "disks": [{"base": vol_path, "size": 10, "pool": { + "name": "/plugins/kimchi/storagepools/default"}}]} with inst.objstore as session: session.store('template', tmpl_name, tmpl_info, @@ -1046,11 +1056,13 @@ class ModelTests(unittest.TestCase): 'name': 'test', 'disks': [], 'cdrom': UBUNTU_ISO, - 'storagepool': '/plugins/kimchi/storagepools/default-pool', 'domain': 'test', 'arch': 'i686' } + _setDiskPoolDefaultTest() + rollback.prependDefer(_setDiskPoolDefault) + inst.templates_create(params) rollback.prependDefer(inst.template_delete, 'test') diff --git 
a/src/wok/plugins/kimchi/tests/test_rest.py b/src/wok/plugins/kimchi/tests/test_rest.py index 9fa8c8d..2947df2 100644 --- a/src/wok/plugins/kimchi/tests/test_rest.py +++ b/src/wok/plugins/kimchi/tests/test_rest.py @@ -226,6 +226,8 @@ class RestTests(unittest.TestCase): vm = json.loads( self.request('/plugins/kimchi/vms/∨м-црdαtеd', req).read() ) + # Memory was hot plugged + params['memory'] += 1024 for key in params.keys(): self.assertEquals(params[key], vm[key]) @@ -895,7 +897,7 @@ class RestTests(unittest.TestCase): # Test template not changed after vm customise its pool t = json.loads(self.request('/plugins/kimchi/templates/test').read()) - self.assertEquals(t['storagepool'], + self.assertEquals(t['disks'][0]['pool']['name'], '/plugins/kimchi/storagepools/default-pool') # Verify the volume was created @@ -928,9 +930,9 @@ class RestTests(unittest.TestCase): # Create template fails because SCSI volume is missing tmpl_params = { - 'name': 'test_fc_pool', 'cdrom': fake_iso, - 'storagepool': '/plugins/kimchi/storagepools/scsi_fc_pool' - } + 'name': 'test_fc_pool', 'cdrom': fake_iso, 'disks': [{'pool': { + 'name': '/plugins/kimchi/storagepools/scsi_fc_pool'}}]} + req = json.dumps(tmpl_params) resp = self.request('/plugins/kimchi/templates', req, 'POST') self.assertEquals(400, resp.status) @@ -940,8 +942,9 @@ class RestTests(unittest.TestCase): '/plugins/kimchi/storagepools/scsi_fc_pool/storagevolumes' ) lun_name = json.loads(resp.read())[0]['name'] - - tmpl_params['disks'] = [{'index': 0, 'volume': lun_name}] + pool_name = tmpl_params['disks'][0]['pool']['name'] + tmpl_params['disks'] = [{'index': 0, 'volume': lun_name, + 'pool': {'name': pool_name}}] req = json.dumps(tmpl_params) resp = self.request('/plugins/kimchi/templates', req, 'POST') self.assertEquals(201, resp.status) diff --git a/src/wok/plugins/kimchi/tests/test_template.py b/src/wok/plugins/kimchi/tests/test_template.py index 3f026f3..7bd856e 100644 --- a/src/wok/plugins/kimchi/tests/test_template.py +++ b/src/wok/plugins/kimchi/tests/test_template.py @@ -25,7 +25,6 @@ from functools import partial from tests.utils import get_free_port, patch_auth, request, run_server -from wok.plugins.kimchi import osinfo from wok.plugins.kimchi.config import READONLY_POOL_TYPE from wok.plugins.kimchi.mockmodel import MockModel @@ -37,6 +36,8 @@ port = None ssl_port = None cherrypy_port = None +DEFAULT_POOL = u'/plugins/kimchi/storagepools/default-pool' + def setUpModule(): global test_server, model, host, port, ssl_port, cherrypy_port @@ -68,8 +69,7 @@ class TemplateTests(unittest.TestCase): # Create a template without cdrom and disk specified fails with 400 t = {'name': 'test', 'os_distro': 'ImagineOS', - 'os_version': '1.0', 'memory': 1024, 'cpus': 1, - 'storagepool': '/plugins/kimchi/storagepools/alt'} + 'os_version': '1.0', 'memory': 1024, 'cpus': 1} req = json.dumps(t) resp = self.request('/plugins/kimchi/templates', req, 'POST') self.assertEquals(400, resp.status) @@ -82,16 +82,18 @@ class TemplateTests(unittest.TestCase): # Verify the template keys = ['name', 'icon', 'invalid', 'os_distro', 'os_version', 'cpus', - 'memory', 'cdrom', 'disks', 'storagepool', 'networks', + 'memory', 'cdrom', 'disks', 'networks', 'folder', 'graphics', 'cpu_info'] tmpl = json.loads( self.request('/plugins/kimchi/templates/test').read() ) self.assertEquals(sorted(tmpl.keys()), sorted(keys)) - # Verify if default disk format was configured - default_disk_format = osinfo.defaults['disks'][0]['format'] - self.assertEquals(tmpl['disks'][0]['format'], 
default_disk_format) + disk_keys = ['index', 'pool', 'size', 'format'] + disk_pool_keys = ['name', 'type'] + self.assertEquals(sorted(tmpl['disks'][0].keys()), sorted(disk_keys)) + self.assertEquals(sorted(tmpl['disks'][0]['pool'].keys()), + sorted(disk_pool_keys)) # Clone a template resp = self.request('/plugins/kimchi/templates/test/clone', '{}', @@ -212,13 +214,20 @@ class TemplateTests(unittest.TestCase): self.assertEquals(update_tmpl['cdrom'], cdrom_data['cdrom']) # Update disks - disk_data = {'disks': [{'index': 0, 'size': 10, 'format': 'raw'}, - {'index': 1, 'size': 20, 'format': 'raw'}]} + disk_data = {'disks': [{'index': 0, 'size': 10, 'format': 'raw', + 'pool': DEFAULT_POOL}, + {'index': 1, 'size': 20, 'format': 'qcow2', + 'pool': DEFAULT_POOL}]} resp = self.request(new_tmpl_uri, json.dumps(disk_data), 'PUT') self.assertEquals(200, resp.status) resp = self.request(new_tmpl_uri) self.assertEquals(200, resp.status) updated_tmpl = json.loads(resp.read()) + disk_data['disks'][0]['pool'] = {'name': DEFAULT_POOL, + 'type': 'dir'} + + disk_data['disks'][1]['pool'] = {'name': DEFAULT_POOL, + 'type': 'dir'} self.assertEquals(updated_tmpl['disks'], disk_data['disks']) # For all supported types, edit the template and check if @@ -227,13 +236,15 @@ class TemplateTests(unittest.TestCase): 'qed', 'raw', 'vmdk', 'vpc'] for disk_type in disk_types: disk_data = {'disks': [{'index': 0, 'format': disk_type, - 'size': 10}]} + 'size': 10, 'pool': DEFAULT_POOL}]} resp = self.request(new_tmpl_uri, json.dumps(disk_data), 'PUT') self.assertEquals(200, resp.status) resp = self.request(new_tmpl_uri) self.assertEquals(200, resp.status) updated_tmpl = json.loads(resp.read()) + disk_data['disks'][0]['pool'] = {u'name': DEFAULT_POOL, + u'type': u'dir'} self.assertEquals(updated_tmpl['disks'], disk_data['disks']) # Update folder @@ -338,16 +349,34 @@ class TemplateTests(unittest.TestCase): vols = json.loads(resp.read()) if len(vols) > 0: vol = vols[0]['name'] - req = json.dumps({'storagepool': pool_uri, - 'disks': [{'volume': vol}]}) + req = json.dumps({'disks': [{'volume': vol, + 'pool': {'name': pool_uri}}]}) else: - req = json.dumps({'storagepool': pool_uri}) + req = json.dumps({'disks': [{'pool': {'name': pool_uri}}]}) if req is not None: resp = self.request('/plugins/kimchi/templates/test', req, 'PUT') self.assertEquals(200, resp.status) + # Test disk template update with different pool + pool_uri = u'/plugins/kimchi/storagepools/kīмсhīUnitTestDirPool' + disk_data = {'disks': [{'size': 5, 'format': 'qcow2', + 'pool': pool_uri}]} + req = json.dumps(disk_data) + resp = self.request('/plugins/kimchi/templates/test', req, 'PUT') + self.assertEquals(200, resp.status) + del(disk_data['disks'][0]['pool']) + disk_data['disks'][0]['index'] = 0 + disk_data['disks'][0]['pool'] = {u'name': pool_uri, + u'type': u'dir'} + tmpl = json.loads( + self.request('/plugins/kimchi/templates/test').read()) + self.assertEquals(sorted(disk_data['disks'][0].keys()), + sorted(tmpl['disks'][0].keys())) + self.assertEquals(sorted(disk_data['disks'][0].values()), + sorted(tmpl['disks'][0].values())) + def test_tmpl_integrity(self): # Create a network and a pool for testing template integrity net = {'name': u'nat-network', 'connection': 'nat'} @@ -362,7 +391,9 @@ class TemplateTests(unittest.TestCase): # Create a template using the custom network and pool t = {'name': 'test', 'cdrom': '/tmp/mock.iso', 'networks': ['nat-network'], - 'storagepool': '/plugins/kimchi/storagepools/dir-pool'} + 'disks': [{'pool': { + 'name': 
'/plugins/kimchi/storagepools/dir-pool'}, + 'size': 2}]} req = json.dumps(t) resp = self.request('/plugins/kimchi/templates', req, 'POST') self.assertEquals(201, resp.status) -- 2.1.0
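For reference, the assertions above expect each template disk to be exposed by the REST API roughly in the following shape (values taken from the default-pool update test; the pool 'type' is filled in by the backend):

  "disks": [{"index": 0, "size": 10, "format": "raw",
             "pool": {"name": "/plugins/kimchi/storagepools/default-pool",
                      "type": "dir"}}]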

It is not necessary to pass the parameter 'storagepool' when create a new template or update one. User is going to set a pool/name for each disk instead. This patch remove the legacy code related to 'storagepool' in templates. It also fixes minor issues found and improve some checkings. Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com> --- src/wok/plugins/kimchi/API.json | 12 ---- src/wok/plugins/kimchi/control/templates.py | 1 - src/wok/plugins/kimchi/docs/API.md | 3 - src/wok/plugins/kimchi/i18n.py | 3 +- src/wok/plugins/kimchi/model/storagepools.py | 10 +-- src/wok/plugins/kimchi/model/templates.py | 103 ++++++++++++++------------- src/wok/plugins/kimchi/utils.py | 2 +- src/wok/plugins/kimchi/vmtemplate.py | 52 ++++++-------- 8 files changed, 83 insertions(+), 103 deletions(-) diff --git a/src/wok/plugins/kimchi/API.json b/src/wok/plugins/kimchi/API.json index 265ae36..2b64d07 100644 --- a/src/wok/plugins/kimchi/API.json +++ b/src/wok/plugins/kimchi/API.json @@ -512,12 +512,6 @@ "minItems": 1, "uniqueItems": true }, - "storagepool": { - "description": "Location of the storage pool", - "type": "string", - "pattern": "^/plugins/kimchi/storagepools/[^/]+/?$", - "error": "KCHTMPL0015E" - }, "networks": { "description": "list of which networks will be assigned to the new VM.", "type": "array", @@ -695,12 +689,6 @@ "minItems": 1, "uniqueItems": true }, - "storagepool": { - "description": "Location of the storage pool", - "type": "string", - "pattern": "^/plugins/kimchi/storagepools/[^/]+/?$", - "error": "KCHTMPL0015E" - }, "networks": { "description": "list of which networks will be assigned to the new VM.", "type": "array", diff --git a/src/wok/plugins/kimchi/control/templates.py b/src/wok/plugins/kimchi/control/templates.py index fc58815..4cd70c2 100644 --- a/src/wok/plugins/kimchi/control/templates.py +++ b/src/wok/plugins/kimchi/control/templates.py @@ -50,7 +50,6 @@ class Template(Resource): 'memory': self.info['memory'], 'cdrom': self.info.get('cdrom', None), 'disks': self.info['disks'], - 'storagepool': self.info['storagepool'], 'networks': self.info['networks'], 'folder': self.info.get('folder', []), 'graphics': self.info['graphics'], diff --git a/src/wok/plugins/kimchi/docs/API.md b/src/wok/plugins/kimchi/docs/API.md index 8ef48d6..1c4ee50 100644 --- a/src/wok/plugins/kimchi/docs/API.md +++ b/src/wok/plugins/kimchi/docs/API.md @@ -281,8 +281,6 @@ Represents a snapshot of the Virtual Machine's primary monitor. * memory *(optional)*: The amount of memory assigned to the VM. Default is 1024M. * cdrom *(optional)*: A volume name or URI to an ISO image. - * storagepool *(optional)*: URI of the storagepool. - Default is '/storagepools/default' * networks *(optional)*: list of networks will be assigned to the new VM. Default is '[default]' * disks *(optional)*: An array of requested disks with the following optional fields @@ -420,7 +418,6 @@ A interface represents available network interface on VM. * cpus: The number of CPUs assigned to the VM * memory: The amount of memory assigned to the VM * cdrom: A volume name or URI to an ISO image - * storagepool: URI of the storagepool where template allocates vm storage. * networks *(optional)*: list of networks will be assigned to the new VM. 
* disks: An array of requested disks with the following optional fields (either *size* or *volume* must be specified): diff --git a/src/wok/plugins/kimchi/i18n.py b/src/wok/plugins/kimchi/i18n.py index 7154a62..cf67085 100644 --- a/src/wok/plugins/kimchi/i18n.py +++ b/src/wok/plugins/kimchi/i18n.py @@ -128,7 +128,6 @@ messages = { "KCHVM0069E": _("Password field must be a string."), "KCHVM0070E": _("Error creating local host ssh rsa key of user 'root'."), "KCHVM0071E": _("Memory value %(mem)s must be aligned to %(alignment)sMiB."), - "KCHVM0072E": _("Template given has multiple disks assigned to different types of storage pools, conflicting with storage pool provided."), "KCHVM0073E": _("Unable to update the following parameters while the VM is offline: %(params)s"), "KCHVM0074E": _("Unable to update the following parameters while the VM is online: %(params)s"), "KCHVM0075E": _("Cannot change VCPU value because '%(vm)s' has a topology defined - sockets: %(sockets)s, cores: %(cores)s, threads: %(threads)s."), @@ -179,6 +178,8 @@ messages = { "KCHTMPL0025E": _("When specifying CPU topology, VCPUs must be a product of sockets, cores, and threads."), "KCHTMPL0026E": _("When specifying CPU topology, each element must be an integer greater than zero."), "KCHTMPL0027E": _("Invalid disk image format. Valid formats: bochs, cloop, cow, dmg, qcow, qcow2, qed, raw, vmdk, vpc."), + "KCHTMPL0028E": _("When setting template disks, following parameters are required: 'index', 'pool name', 'format', 'size' or 'volume' (for scsi/iscsi pools)"), + "KCHTMPL0029E": _("Disk format must be 'raw', for logical, iscsi, and scsi pools."), "KCHPOOL0001E": _("Storage pool %(name)s already exists"), "KCHPOOL0002E": _("Storage pool %(name)s does not exist"), diff --git a/src/wok/plugins/kimchi/model/storagepools.py b/src/wok/plugins/kimchi/model/storagepools.py index ec866c5..ddfa7fa 100644 --- a/src/wok/plugins/kimchi/model/storagepools.py +++ b/src/wok/plugins/kimchi/model/storagepools.py @@ -34,7 +34,6 @@ from wok.plugins.kimchi.model.host import DeviceModel from wok.plugins.kimchi.model.libvirtstoragepool import StoragePoolDef from wok.plugins.kimchi.osinfo import defaults as tmpl_defaults from wok.plugins.kimchi.scan import Scanner -from wok.plugins.kimchi.utils import pool_name_from_uri ISO_POOL_NAME = u'kimchi_isos' @@ -72,7 +71,7 @@ class StoragePoolsModel(object): def _check_default_pools(self): pools = {} - default_pool = tmpl_defaults['storagepool'] + default_pool = tmpl_defaults['disks'][0]['pool']['name'] default_pool = default_pool.split('/')[-1] pools[default_pool] = {} @@ -446,9 +445,10 @@ class StoragePoolModel(object): templates = session.get_list('template') for tmpl in templates: t_info = session.get('template', tmpl) - t_pool = pool_name_from_uri(t_info['storagepool']) - if t_pool == pool_name: - return True + for disk in t_info['disks']: + t_pool = disk['pool']['name'] + if t_pool == pool_name: + return True return False def deactivate(self, name): diff --git a/src/wok/plugins/kimchi/model/templates.py b/src/wok/plugins/kimchi/model/templates.py index eae2714..7b1412b 100644 --- a/src/wok/plugins/kimchi/model/templates.py +++ b/src/wok/plugins/kimchi/model/templates.py @@ -22,7 +22,7 @@ import libvirt import os import stat -from wok.exception import InvalidOperation, InvalidParameter +from wok.exception import InvalidOperation, InvalidParameter, MissingParameter from wok.exception import NotFoundError, OperationFailed from wok.utils import probe_file_permission_as_user, run_setfacl_set_attr from 
wok.xmlutils.utils import xpath_get_text @@ -34,6 +34,43 @@ from wok.plugins.kimchi.utils import pool_name_from_uri from wok.plugins.kimchi.vmtemplate import VMTemplate +def _check_disks_params(params, conn, objstore): + if 'disks' not in params: + return + + basic_disk = ['index', 'format', 'pool', 'size'] + ro_disk = ['index', 'format', 'pool', 'volume'] + base_disk = ['index', 'base', 'pool'] + for disk in params['disks']: + keys = sorted(disk.keys()) + if ((keys == sorted(basic_disk)) or (keys == sorted(ro_disk)) or + (keys == sorted(base_disk))): + pass + else: + raise MissingParameter('KCHTMPL0028E') + + if disk.get('pool', {}).get('name') is None: + raise MissingParameter('KCHTMPL0028E') + + pool_uri = disk['pool']['name'] + try: + pool_name = pool_name_from_uri(pool_uri) + conn.get().storagePoolLookupByName(pool_name.encode("utf-8")) + except Exception: + raise InvalidParameter("KCHTMPL0004E", + {'pool': pool_uri, + 'template': pool_name}) + if 'volume' in disk: + kwargs = {'conn': conn, 'objstore': objstore} + storagevolumes = __import__( + "wok.plugins.kimchi.model.storagevolumes", fromlist=['']) + pool_volumes = storagevolumes.StorageVolumesModel( + **kwargs).get_list(pool_name) + if disk['volume'] not in pool_volumes: + raise InvalidParameter("KCHTMPL0019E", {'pool': pool_name, + 'volume': disk['volume']}) + + class TemplatesModel(object): def __init__(self, **kargs): self.objstore = kargs['objstore'] @@ -54,6 +91,9 @@ class TemplatesModel(object): {'filename': iso, 'user': user, 'err': excp}) + # Check disks information + _check_disks_params(params, self.conn, self.objstore) + cpu_info = params.get('cpu_info') if cpu_info: topology = cpu_info.get('topology') @@ -71,19 +111,6 @@ class TemplatesModel(object): check_topology(params['cpus'], topology) conn = self.conn.get() - pool_uri = params.get(u'storagepool', '') - if pool_uri: - try: - pool_name = pool_name_from_uri(pool_uri) - pool = conn.storagePoolLookupByName(pool_name.encode("utf-8")) - except Exception: - raise InvalidParameter("KCHTMPL0004E", {'pool': pool_uri, - 'template': name}) - - tmp_volumes = [disk['volume'] for disk in params.get('disks', []) - if 'volume' in disk] - self.template_volume_validate(tmp_volumes, pool) - for net_name in params.get(u'networks', []): try: conn.networkLookupByName(net_name.encode('utf-8')) @@ -112,27 +139,22 @@ class TemplatesModel(object): with self.objstore as session: return session.get_list('template') - def template_volume_validate(self, tmp_volumes, pool): + def template_volume_validate(self, volume, pool): kwargs = {'conn': self.conn, 'objstore': self.objstore} pool_type = xpath_get_text(pool.XMLDesc(0), "/pool/@type")[0] pool_name = unicode(pool.name(), 'utf-8') - # as we discussion, we do not mix disks from 2 different types of - # storage pools, for instance: we do not create a template with 2 - # disks, where one comes from a SCSI pool and other is a .img in - # a DIR pool. 
if pool_type in ['iscsi', 'scsi']: - if not tmp_volumes: + if not volume: raise InvalidParameter("KCHTMPL0018E") storagevolumes = __import__( "wok.plugins.kimchi.model.storagevolumes", fromlist=['']) pool_volumes = storagevolumes.StorageVolumesModel( **kwargs).get_list(pool_name) - vols = set(tmp_volumes) - set(pool_volumes) - if vols: + if volume not in pool_volumes: raise InvalidParameter("KCHTMPL0019E", {'pool': pool_name, - 'volume': vols}) + 'volume': volume}) class TemplateModel(object): @@ -146,17 +168,14 @@ class TemplateModel(object): with objstore as session: params = session.get('template', name) if overrides: - params.update(overrides) - if 'storagepool' in params: - libvVMT = LibvirtVMTemplate(params, conn=conn) - poolType = libvVMT._get_storage_type(params['storagepool']) + if 'storagepool' in overrides: for i, disk in enumerate(params['disks']): - if disk['pool']['type'] != poolType: - raise InvalidOperation('KCHVM0072E') - else: - params['disks'][i]['pool']['name'] = \ - params['storagepool'] + params['disks'][i]['pool']['name'] = \ + overrides['storagepool'] + del overrides['storagepool'] + params.update(overrides) + _check_disks_params(params, conn, objstore) return LibvirtVMTemplate(params, False, conn) def lookup(self, name): @@ -193,27 +212,15 @@ class TemplateModel(object): new_t = copy.copy(old_t) new_t.update(params) + # Check disks information + _check_disks_params(new_t, self.conn, self.objstore) + if not self._validate_updated_cpu_params(new_t): raise InvalidParameter('KCHTMPL0025E') - ident = name - - conn = self.conn.get() - pool_uri = new_t.get(u'storagepool', '') - - if pool_uri: - try: - pool_name = pool_name_from_uri(pool_uri) - pool = conn.storagePoolLookupByName(pool_name.encode("utf-8")) - except Exception: - raise InvalidParameter("KCHTMPL0004E", {'pool': pool_uri, - 'template': name}) - tmp_volumes = [disk['volume'] for disk in new_t.get('disks', []) - if 'volume' in disk] - self.templates.template_volume_validate(tmp_volumes, pool) - for net_name in params.get(u'networks', []): try: + conn = self.conn.get() conn.networkLookupByName(net_name.encode('utf-8')) except Exception: raise InvalidParameter("KCHTMPL0003E", {'network': net_name, diff --git a/src/wok/plugins/kimchi/utils.py b/src/wok/plugins/kimchi/utils.py index 31def2e..a5f5e97 100644 --- a/src/wok/plugins/kimchi/utils.py +++ b/src/wok/plugins/kimchi/utils.py @@ -38,7 +38,7 @@ def _uri_to_name(collection, uri): expr = '/plugins/kimchi/%s/(.*?)$' % collection m = re.match(expr, uri) if not m: - raise InvalidParameter("KCHUTILS0001E", {'uri': uri}) + raise InvalidParameter("WOKUTILS0001E", {'uri': uri}) return m.group(1) diff --git a/src/wok/plugins/kimchi/vmtemplate.py b/src/wok/plugins/kimchi/vmtemplate.py index 08adf4c..62d0e1a 100644 --- a/src/wok/plugins/kimchi/vmtemplate.py +++ b/src/wok/plugins/kimchi/vmtemplate.py @@ -76,36 +76,23 @@ class VMTemplate(object): graphics.update(graph_args) args['graphics'] = graphics - # Provide compatibility with old template version and sets data to - # correct output format if necessary - def _fix_disk_compatibilities(disks): - for i, disk in enumerate(disks): - if 'pool' not in disk: - disk['pool'] = {'name': self.info['storagepool']} - if (isinstance(disk['pool'], str) or - isinstance(disk['pool'], unicode)): - disk['pool'] = {'name': disk['pool']} - disk['pool']['type'] = \ - self._get_storage_type(disk['pool']['name']) - - if self.info.get('disks') is not None: - _fix_disk_compatibilities(self.info['disks']) - if args.get('disks') is not None: - 
_fix_disk_compatibilities(args['disks']) - - # Merge disks dict with disks provided by user - default_disk = self.info['disks'][0] - for i, d in enumerate(args.get('disks', [])): - disk = dict(default_disk) - disk['index'] = i - disk.update(d) - if disk['pool']['type'] in ['logical', 'iscsi', 'scsi']: - disk['format'] = 'raw' - args['disks'][i] = disk - # Override template values according to 'args' self.info.update(args) + # Find pool type for each disk + disks = self.info.get('disks') + for disk in disks: + pool_name = disk.get('pool', {}).get('name') + if pool_name is None: + raise MissingParameter('KCHTMPL0029E') + + pool_type = self._get_storage_type(disk['pool']['name']) + disk['pool']['type'] = pool_type + + if pool_type in ['logical', 'iscsi', 'scsi']: + if disk['format'] != 'raw': + raise InvalidParameter('KCHTMPL0029E') + def _get_os_info(self, args, scan): distro = version = 'unknown' @@ -197,9 +184,9 @@ class VMTemplate(object): for index, disk in enumerate(self.info['disks']): params = dict(base_disk_params) params['format'] = disk['format'] + params['index'] = index params.update(locals().get('%s_disk_params' % disk['pool']['type'], {})) - params['index'] = index volume = disk.get('volume') if volume is not None: @@ -459,10 +446,11 @@ class VMTemplate(object): invalid['networks'] = invalid_networks # validate storagepools integrity - pool_uri = self.info['storagepool'] - pool_name = pool_name_from_uri(pool_uri) - if pool_name not in self._get_all_storagepools_name(): - invalid['storagepools'] = [pool_name] + for disk in self.info['disks']: + pool_uri = disk['pool']['name'] + pool_name = pool_name_from_uri(pool_uri) + if pool_name not in self._get_all_storagepools_name(): + invalid['storagepools'] = [pool_name] # validate iso integrity # FIXME when we support multiples cdrom devices -- 2.1.0
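With the legacy 'storagepool' parameter gone, every disk in a create or update request names its own pool. A minimal create payload accepted by the new _check_disks_params() would look roughly like this (the pool URI and ISO path are illustrative; 'format' must be 'raw' when the pool is of type logical, iscsi or scsi):

  {"name": "test", "cdrom": "/path/to/image.iso",
   "disks": [{"index": 0, "size": 10, "format": "qcow2",
              "pool": {"name": "/plugins/kimchi/storagepools/default"}}]}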

In Kimchi 2.0, Templates recorded in objectstore will not have 'storagepool' anymore. Instead, each disk in disks field will have a new entry: "pool": {"name": ..., "type": ...} This change is necessary in order to support old objectstore in new Kimchi version, which has lots of changes in Template backend. Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com> --- src/wok/plugins/kimchi/root.py | 2 ++ src/wok/plugins/kimchi/utils.py | 47 +++++++++++++++++++++++++++++++++++++++++ 2 files changed, 49 insertions(+) diff --git a/src/wok/plugins/kimchi/root.py b/src/wok/plugins/kimchi/root.py index 6af053e..4fbe0fc 100644 --- a/src/wok/plugins/kimchi/root.py +++ b/src/wok/plugins/kimchi/root.py @@ -27,6 +27,7 @@ from wok.plugins.kimchi.control import sub_nodes from wok.plugins.kimchi.model import model as kimchiModel from wok.plugins.kimchi.utils import upgrade_objectstore_data from wok.plugins.kimchi.utils import upgrade_objectstore_schema +from wok.plugins.kimchi.utils import upgrade_objectstore_template_disks from wok.root import WokRoot @@ -71,6 +72,7 @@ class KimchiRoot(WokRoot): upgrade_objectstore_data('icon', 'images', 'plugins/kimchi/') upgrade_objectstore_data('storagepool', '/storagepools', '/plugins/kimchi') + upgrade_objectstore_template_disks(self.model.conn) def get_custom_conf(self): return config.KimchiConfig() diff --git a/src/wok/plugins/kimchi/utils.py b/src/wok/plugins/kimchi/utils.py index a5f5e97..0997faa 100644 --- a/src/wok/plugins/kimchi/utils.py +++ b/src/wok/plugins/kimchi/utils.py @@ -30,6 +30,7 @@ from urlparse import urlparse from wok.exception import InvalidParameter, OperationFailed from wok.plugins.kimchi import config from wok.utils import wok_log +from wok.xmlutils.utils import xpath_get_text MAX_REDIRECTION_ALLOWED = 5 @@ -171,3 +172,49 @@ def upgrade_objectstore_data(item, old_uri, new_uri): if conn: conn.close() wok_log.info("%d '%s' entries upgraded in objectstore.", total, item) + + +def upgrade_objectstore_template_disks(libv_conn): + """ + Upgrade the value of a given JSON's item of all Templates. + Removes 'storagepool' entry and adds + 'pool: { name: ..., type: ... }' + """ + total = 0 + try: + conn = sqlite3.connect(config.get_object_store(), timeout=10) + cursor = conn.cursor() + sql = "SELECT id, json FROM objects WHERE type='template'" + cursor.execute(sql) + for row in cursor.fetchall(): + template = json.loads(row[1]) + + # Get pool info + pool_uri = template['storagepool'] + pool_name = pool_name_from_uri(pool_uri) + pool = libv_conn.get().storagePoolLookupByName( + pool_name.encode("utf-8")) + pool_type = xpath_get_text(pool.XMLDesc(0), "/pool/@type")[0] + + # Update json + new_disks = [] + for disk in template['disks']: + disk['pool'] = {'name': pool_uri, + 'type': pool_type} + new_disks.append(disk) + template['disks'] = new_disks + del template['storagepool'] + + sql = "UPDATE objects SET json=? WHERE id=?" + cursor.execute(sql, (json.dumps(template), row[0])) + conn.commit() + total += 1 + except sqlite3.Error, e: + if conn: + conn.rollback() + raise OperationFailed("KCHUTILS0006E") + wok_log.error("Error while upgrading objectstore data:", e.args[0]) + finally: + if conn: + conn.close() + wok_log.info("%d 'template' entries upgraded in objectstore.", total) -- 2.1.0
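In practice, upgrade_objectstore_template_disks() rewrites each stored template record along these lines (template name, size and format are illustrative; 'dir' assumes the old 'storagepool' pointed at a directory pool):

  before: {"name": "t1", "storagepool": "/plugins/kimchi/storagepools/default",
           "disks": [{"index": 0, "size": 10, "format": "qcow2"}], ...}

  after:  {"name": "t1",
           "disks": [{"index": 0, "size": 10, "format": "qcow2",
                      "pool": {"name": "/plugins/kimchi/storagepools/default",
                               "type": "dir"}}], ...}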

Backend and tests need some adjustments to perform all tasks and tests after remove support to 'storagepool' in Templates. Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com> Signed-off-by: Aline Manera <alinefm@linux.vnet.ibm.com> --- src/wok/plugins/kimchi/mockmodel.py | 2 +- src/wok/plugins/kimchi/model/templates.py | 80 ++++++------------------- src/wok/plugins/kimchi/tests/test_model.py | 59 +++++++++--------- src/wok/plugins/kimchi/tests/test_rest.py | 20 ++++--- src/wok/plugins/kimchi/tests/test_template.py | 33 ++++++---- src/wok/plugins/kimchi/tests/test_vmtemplate.py | 9 ++- src/wok/plugins/kimchi/vmtemplate.py | 37 +++++++++--- 7 files changed, 119 insertions(+), 121 deletions(-) diff --git a/src/wok/plugins/kimchi/mockmodel.py b/src/wok/plugins/kimchi/mockmodel.py index 2e58724..8263207 100644 --- a/src/wok/plugins/kimchi/mockmodel.py +++ b/src/wok/plugins/kimchi/mockmodel.py @@ -75,7 +75,7 @@ class MockModel(Model): # test:///default driver defaults = dict(osinfo.defaults) defaults.update(mockmodel_defaults) - defaults['disks'][0]['pool'] = DEFAULT_POOL + defaults['disks'][0]['pool'] = {'name': DEFAULT_POOL} osinfo.defaults = dict(defaults) self._mock_vgs = MockVolumeGroups() diff --git a/src/wok/plugins/kimchi/model/templates.py b/src/wok/plugins/kimchi/model/templates.py index 7b1412b..84cdd02 100644 --- a/src/wok/plugins/kimchi/model/templates.py +++ b/src/wok/plugins/kimchi/model/templates.py @@ -22,7 +22,7 @@ import libvirt import os import stat -from wok.exception import InvalidOperation, InvalidParameter, MissingParameter +from wok.exception import InvalidOperation, InvalidParameter from wok.exception import NotFoundError, OperationFailed from wok.utils import probe_file_permission_as_user, run_setfacl_set_attr from wok.xmlutils.utils import xpath_get_text @@ -34,43 +34,6 @@ from wok.plugins.kimchi.utils import pool_name_from_uri from wok.plugins.kimchi.vmtemplate import VMTemplate -def _check_disks_params(params, conn, objstore): - if 'disks' not in params: - return - - basic_disk = ['index', 'format', 'pool', 'size'] - ro_disk = ['index', 'format', 'pool', 'volume'] - base_disk = ['index', 'base', 'pool'] - for disk in params['disks']: - keys = sorted(disk.keys()) - if ((keys == sorted(basic_disk)) or (keys == sorted(ro_disk)) or - (keys == sorted(base_disk))): - pass - else: - raise MissingParameter('KCHTMPL0028E') - - if disk.get('pool', {}).get('name') is None: - raise MissingParameter('KCHTMPL0028E') - - pool_uri = disk['pool']['name'] - try: - pool_name = pool_name_from_uri(pool_uri) - conn.get().storagePoolLookupByName(pool_name.encode("utf-8")) - except Exception: - raise InvalidParameter("KCHTMPL0004E", - {'pool': pool_uri, - 'template': pool_name}) - if 'volume' in disk: - kwargs = {'conn': conn, 'objstore': objstore} - storagevolumes = __import__( - "wok.plugins.kimchi.model.storagevolumes", fromlist=['']) - pool_volumes = storagevolumes.StorageVolumesModel( - **kwargs).get_list(pool_name) - if disk['volume'] not in pool_volumes: - raise InvalidParameter("KCHTMPL0019E", {'pool': pool_name, - 'volume': disk['volume']}) - - class TemplatesModel(object): def __init__(self, **kargs): self.objstore = kargs['objstore'] @@ -91,9 +54,6 @@ class TemplatesModel(object): {'filename': iso, 'user': user, 'err': excp}) - # Check disks information - _check_disks_params(params, self.conn, self.objstore) - cpu_info = params.get('cpu_info') if cpu_info: topology = cpu_info.get('topology') @@ -121,6 +81,15 @@ class TemplatesModel(object): # Checkings will be 
done while creating this class, so any exception # will be raised here t = LibvirtVMTemplate(params, scan=True, conn=self.conn) + + # Validate volumes + for disk in t.info.get('disks'): + volume = disk.get('volume') + # volume can be None + if 'volume' in disk.keys(): + self.template_volume_validate(volume, disk['pool']) + + # Store template on objectstore name = params['name'] try: with self.objstore as session: @@ -141,10 +110,8 @@ class TemplatesModel(object): def template_volume_validate(self, volume, pool): kwargs = {'conn': self.conn, 'objstore': self.objstore} - pool_type = xpath_get_text(pool.XMLDesc(0), "/pool/@type")[0] - pool_name = unicode(pool.name(), 'utf-8') - - if pool_type in ['iscsi', 'scsi']: + pool_name = pool_name_from_uri(pool['name']) + if pool['type'] in ['iscsi', 'scsi']: if not volume: raise InvalidParameter("KCHTMPL0018E") @@ -164,18 +131,14 @@ class TemplateModel(object): self.templates = TemplatesModel(**kargs) @staticmethod - def get_template(name, objstore, conn, overrides=None): + def get_template(name, objstore, conn, overrides={}): with objstore as session: params = session.get('template', name) - if overrides: - if 'storagepool' in overrides: - for i, disk in enumerate(params['disks']): - params['disks'][i]['pool']['name'] = \ - overrides['storagepool'] - del overrides['storagepool'] - params.update(overrides) - - _check_disks_params(params, conn, objstore) + if overrides and 'storagepool' in overrides: + for i, disk in enumerate(params['disks']): + params['disks'][i]['pool']['name'] = overrides['storagepool'] + del overrides['storagepool'] + params.update(overrides) return LibvirtVMTemplate(params, False, conn) def lookup(self, name): @@ -212,9 +175,6 @@ class TemplateModel(object): new_t = copy.copy(old_t) new_t.update(params) - # Check disks information - _check_disks_params(new_t, self.conn, self.objstore) - if not self._validate_updated_cpu_params(new_t): raise InvalidParameter('KCHTMPL0025E') @@ -251,9 +211,7 @@ class LibvirtVMTemplate(VMTemplate): self.conn = conn VMTemplate.__init__(self, args, scan) - def _storage_validate(self, pool_uri=None): - if pool_uri is None: - pool_uri = self.info['storagepool'] + def _storage_validate(self, pool_uri): pool_name = pool_name_from_uri(pool_uri) try: conn = self.conn.get() diff --git a/src/wok/plugins/kimchi/tests/test_model.py b/src/wok/plugins/kimchi/tests/test_model.py index c5407d6..caacd91 100644 --- a/src/wok/plugins/kimchi/tests/test_model.py +++ b/src/wok/plugins/kimchi/tests/test_model.py @@ -39,7 +39,6 @@ from wok.xmlutils.utils import xpath_get_text from wok.plugins.kimchi import netinfo from wok.plugins.kimchi import osinfo -from wok.plugins.kimchi.config import get_kimchi_version from wok.plugins.kimchi.config import kimchiPaths as paths from wok.plugins.kimchi.model import model from wok.plugins.kimchi.model.libvirtconnection import LibvirtConnection @@ -77,13 +76,13 @@ def tearDownModule(): def _setDiskPoolDefault(): - osinfo.defaults['disks'][0]['pool'] = \ - '/plugins/kimchi/storagepools/default' + osinfo.defaults['disks'][0]['pool'] = { + 'name': '/plugins/kimchi/storagepools/default'} def _setDiskPoolDefaultTest(): - osinfo.defaults['disks'][0]['pool'] = \ - '/plugins/kimchi/storagepools/default-pool' + osinfo.defaults['disks'][0]['pool'] = { + 'name': '/plugins/kimchi/storagepools/default-pool'} class ModelTests(unittest.TestCase): @@ -138,8 +137,9 @@ class ModelTests(unittest.TestCase): vol = inst.storagevolume_lookup(u'default', vol_params['name']) params = {'name': 'test', 'disks': 
[{'base': vol['path'], - 'size': 1}], - 'cdrom': UBUNTU_ISO} + 'size': 1, 'pool': { + 'name': '/plugins/kimchi/storagepools/default'}}], + 'cdrom': UBUNTU_ISO} inst.templates_create(params) rollback.prependDefer(inst.template_delete, 'test') @@ -264,24 +264,20 @@ class ModelTests(unittest.TestCase): self.assertEquals('finished', inst.task_lookup(task_id)['status']) vol_path = inst.storagevolume_lookup('default', vol)['path'] - # Hack the model objstore to add a new template - # It is needed as the image file must be a bootable image when - # using model - # As it is difficult to create one on test runtime, inject a - # template with an empty image file to the objstore to test the - # feature + # Create template based on IMG file tmpl_name = "img-tmpl" - tmpl_info = {"cpus": 1, "cdrom": "", + pool_uri = "/plugins/kimchi/storagepools/default" + tmpl_info = {"cpus": 1, "cdrom": "", "name": tmpl_name, "graphics": {"type": "vnc", "listen": "127.0.0.1"}, "networks": ["default"], "memory": 1024, "folder": [], "icon": "images/icon-vm.png", "os_distro": "unknown", "os_version": "unknown", - "disks": [{"base": vol_path, "size": 10, "pool": { - "name": "/plugins/kimchi/storagepools/default"}}]} + "disks": [{"base": vol_path, "size": 10, + "format": "qcow2", + "pool": {"name": pool_uri}}]} - with inst.objstore as session: - session.store('template', tmpl_name, tmpl_info, - get_kimchi_version()) + inst.templates_create(tmpl_info) + rollback.prependDefer(inst.template_delete, tmpl_name) params = {'name': 'kimchi-vm', 'template': '/plugins/kimchi/templates/img-tmpl'} @@ -606,8 +602,9 @@ class ModelTests(unittest.TestCase): inst = model.Model(objstore_loc=self.tmp_store) with RollbackContext() as rollback: - params = {'name': 'test', 'disks': [{'size': 1}], - 'cdrom': UBUNTU_ISO} + params = {'name': 'test', 'disks': [{'size': 1, 'pool': { + 'name': '/plugins/kimchi/storagepools/default'}}], + 'cdrom': UBUNTU_ISO} inst.templates_create(params) rollback.prependDefer(inst.template_delete, 'test') @@ -664,8 +661,9 @@ class ModelTests(unittest.TestCase): self._create_template_conf_with_disk_format('vmdk') rollback.prependDefer(self._restore_template_conf_file) - params = {'name': 'test', 'disks': [{'size': 1}], - 'cdrom': UBUNTU_ISO} + params = {'name': 'test', 'disks': [{'size': 1, 'pool': { + 'name': '/plugins/kimchi/storagepools/default'}}], + 'cdrom': UBUNTU_ISO} inst.templates_create(params) rollback.prependDefer(inst.template_delete, 'test') @@ -690,9 +688,10 @@ class ModelTests(unittest.TestCase): self._create_template_conf_with_disk_format(default_vol) rollback.prependDefer(self._restore_template_conf_file) - params = {'name': 'test', - 'disks': [{'size': 1, 'format': user_vol}], - 'cdrom': UBUNTU_ISO} + params = {'name': 'test', 'disks': [{ + 'size': 1, 'format': user_vol, + 'pool': {'name': '/plugins/kimchi/storagepools/default'}}], + 'cdrom': UBUNTU_ISO} inst.templates_create(params) rollback.prependDefer(inst.template_delete, 'test') @@ -715,8 +714,9 @@ class ModelTests(unittest.TestCase): self._create_template_conf_with_disk_format(None) rollback.prependDefer(self._restore_template_conf_file) - params = {'name': 'test', - 'disks': [{'size': 1}], 'cdrom': UBUNTU_ISO} + params = {'name': 'test', 'disks': [{'size': 1, 'pool': { + 'name': '/plugins/kimchi/storagepools/default'}}], + 'cdrom': UBUNTU_ISO} inst.templates_create(params) rollback.prependDefer(inst.template_delete, 'test') @@ -770,7 +770,8 @@ class ModelTests(unittest.TestCase): # only supports this format orig_params = {'name': 'test', 
'memory': 1024, 'cpus': 1, 'cdrom': UBUNTU_ISO, - 'disks': [{'size': 1, 'format': 'qcow2'}]} + 'disks': [{'size': 1, 'format': 'qcow2', 'pool': { + 'name': '/plugins/kimchi/storagepools/default'}}]} inst.templates_create(orig_params) with RollbackContext() as rollback: diff --git a/src/wok/plugins/kimchi/tests/test_rest.py b/src/wok/plugins/kimchi/tests/test_rest.py index 2947df2..0305e82 100644 --- a/src/wok/plugins/kimchi/tests/test_rest.py +++ b/src/wok/plugins/kimchi/tests/test_rest.py @@ -18,6 +18,7 @@ # License along with this library; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA +import copy import json import os import re @@ -47,6 +48,9 @@ ssl_port = None cherrypy_port = None fake_iso = '/tmp/fake.iso' +DISKS = [{'size': 10, 'format': 'qcow2', 'index': 0, 'pool': { + 'name': '/plugins/kimchi/storagepools/default-pool'}}] + def setUpModule(): global test_server, model, host, port, ssl_port, cherrypy_port @@ -267,7 +271,7 @@ class RestTests(unittest.TestCase): def test_vm_lifecycle(self): # Create a Template - req = json.dumps({'name': 'test', 'disks': [{'size': 1}], + req = json.dumps({'name': 'test', 'disks': DISKS, 'icon': 'plugins/kimchi/images/icon-debian.png', 'cdrom': fake_iso}) resp = self.request('/plugins/kimchi/templates', req, 'POST') @@ -291,7 +295,7 @@ class RestTests(unittest.TestCase): + '%s-0.img' resp = self.request(vol_uri % vm['uuid']) vol = json.loads(resp.read()) - self.assertEquals(1 << 30, vol['capacity']) + self.assertEquals(10 << 30, vol['capacity']) self.assertEquals(['test-vm'], vol['used_by']) # verify if poweroff command returns correct status @@ -868,7 +872,7 @@ class RestTests(unittest.TestCase): def test_vm_customise_storage(self): # Create a Template req = json.dumps({'name': 'test', 'cdrom': fake_iso, - 'disks': [{'size': 1}]}) + 'disks': DISKS}) resp = self.request('/plugins/kimchi/templates', req, 'POST') self.assertEquals(201, resp.status) @@ -905,7 +909,7 @@ class RestTests(unittest.TestCase): % vm_info['uuid'] resp = self.request(vol_uri) vol = json.loads(resp.read()) - self.assertEquals(1 << 30, vol['capacity']) + self.assertEquals(10 << 30, vol['capacity']) # Delete the VM resp = self.request('/plugins/kimchi/vms/test-vm', '{}', 'DELETE') @@ -943,8 +947,8 @@ class RestTests(unittest.TestCase): ) lun_name = json.loads(resp.read())[0]['name'] pool_name = tmpl_params['disks'][0]['pool']['name'] - tmpl_params['disks'] = [{'index': 0, 'volume': lun_name, - 'pool': {'name': pool_name}}] + tmpl_params['disks'] = [{'index': 0, 'volume': lun_name, 'pool': { + 'name': pool_name}, 'format': 'raw'}] req = json.dumps(tmpl_params) resp = self.request('/plugins/kimchi/templates', req, 'POST') self.assertEquals(201, resp.status) @@ -1016,7 +1020,9 @@ class RestTests(unittest.TestCase): # Create a Template mock_base = '/tmp/mock.img' open(mock_base, 'w').close() - req = json.dumps({'name': 'test', 'disks': [{'base': mock_base}]}) + disks = copy.deepcopy(DISKS) + disks[0]['base'] = mock_base + req = json.dumps({'name': 'test', 'disks': disks}) resp = self.request('/plugins/kimchi/templates', req, 'POST') self.assertEquals(201, resp.status) diff --git a/src/wok/plugins/kimchi/tests/test_template.py b/src/wok/plugins/kimchi/tests/test_template.py index 7bd856e..0cc5ac8 100644 --- a/src/wok/plugins/kimchi/tests/test_template.py +++ b/src/wok/plugins/kimchi/tests/test_template.py @@ -120,16 +120,18 @@ class TemplateTests(unittest.TestCase): # Create an image based template open('/tmp/mock.img', 
'w').close() - t = {'name': 'test_img_template', - 'disks': [{'base': '/tmp/mock.img'}]} + t = {'name': 'test_img_template', 'disks': [{ + 'base': '/tmp/mock.img', 'format': 'qcow2', + 'pool': {'name': DEFAULT_POOL}, 'size': 1}]} req = json.dumps(t) resp = self.request('/plugins/kimchi/templates', req, 'POST') self.assertEquals(201, resp.status) os.remove('/tmp/mock.img') # Test disk format - t = {'name': 'test-format', 'cdrom': '/tmp/mock.iso', - 'disks': [{'index': 0, 'size': 10, 'format': 'vmdk'}]} + t = {'name': 'test-format', 'cdrom': '/tmp/mock.iso', 'disks': [{ + 'index': 0, 'size': 10, 'format': 'vmdk', 'pool': { + 'name': DEFAULT_POOL}}]} req = json.dumps(t) resp = self.request('/plugins/kimchi/templates', req, 'POST') self.assertEquals(201, resp.status) @@ -214,10 +216,11 @@ class TemplateTests(unittest.TestCase): self.assertEquals(update_tmpl['cdrom'], cdrom_data['cdrom']) # Update disks - disk_data = {'disks': [{'index': 0, 'size': 10, 'format': 'raw', - 'pool': DEFAULT_POOL}, + disk_data = {'disks': [{'index': 0, 'size': 10, + 'format': 'raw', 'pool': { + 'name': DEFAULT_POOL}}, {'index': 1, 'size': 20, 'format': 'qcow2', - 'pool': DEFAULT_POOL}]} + 'pool': {'name': DEFAULT_POOL}}]} resp = self.request(new_tmpl_uri, json.dumps(disk_data), 'PUT') self.assertEquals(200, resp.status) resp = self.request(new_tmpl_uri) @@ -236,7 +239,7 @@ class TemplateTests(unittest.TestCase): 'qed', 'raw', 'vmdk', 'vpc'] for disk_type in disk_types: disk_data = {'disks': [{'index': 0, 'format': disk_type, - 'size': 10, 'pool': DEFAULT_POOL}]} + 'size': 10, 'pool': {'name': DEFAULT_POOL}}]} resp = self.request(new_tmpl_uri, json.dumps(disk_data), 'PUT') self.assertEquals(200, resp.status) @@ -350,9 +353,14 @@ class TemplateTests(unittest.TestCase): if len(vols) > 0: vol = vols[0]['name'] req = json.dumps({'disks': [{'volume': vol, - 'pool': {'name': pool_uri}}]}) + 'pool': {'name': pool_uri}, + 'format': 'raw'}]}) + elif pool['type'] == 'logical': + req = json.dumps({'disks': [{'pool': {'name': pool_uri}, + 'format': 'raw', 'size': 10}]}) else: - req = json.dumps({'disks': [{'pool': {'name': pool_uri}}]}) + req = json.dumps({'disks': [{'pool': {'name': pool_uri}, + 'format': 'qcow2', 'size': 10}]}) if req is not None: resp = self.request('/plugins/kimchi/templates/test', req, @@ -362,7 +370,7 @@ class TemplateTests(unittest.TestCase): # Test disk template update with different pool pool_uri = u'/plugins/kimchi/storagepools/kīмсhīUnitTestDirPool' disk_data = {'disks': [{'size': 5, 'format': 'qcow2', - 'pool': pool_uri}]} + 'pool': {'name': pool_uri}}]} req = json.dumps(disk_data) resp = self.request('/plugins/kimchi/templates/test', req, 'PUT') self.assertEquals(200, resp.status) @@ -393,7 +401,8 @@ class TemplateTests(unittest.TestCase): 'networks': ['nat-network'], 'disks': [{'pool': { 'name': '/plugins/kimchi/storagepools/dir-pool'}, - 'size': 2}]} + 'size': 2, + 'format': 'qcow2'}]} req = json.dumps(t) resp = self.request('/plugins/kimchi/templates', req, 'POST') self.assertEquals(201, resp.status) diff --git a/src/wok/plugins/kimchi/tests/test_vmtemplate.py b/src/wok/plugins/kimchi/tests/test_vmtemplate.py index 0bca215..79e3b52 100644 --- a/src/wok/plugins/kimchi/tests/test_vmtemplate.py +++ b/src/wok/plugins/kimchi/tests/test_vmtemplate.py @@ -26,6 +26,11 @@ from wok.xmlutils.utils import xpath_get_text from wok.plugins.kimchi.osinfo import get_template_default from wok.plugins.kimchi.vmtemplate import VMTemplate +DISKS = [{'size': 10, 'format': 'raw', 'index': 0, 'pool': {'name': + 
'/plugins/kimchi/storagepools/default-pool'}}, + {'size': 5, 'format': 'qcow2', 'index': 1, 'pool': {'name': + '/plugins/kimchi/storagepools/default-pool'}}] + class VMTemplateTests(unittest.TestCase): def setUp(self): @@ -53,7 +58,7 @@ class VMTemplateTests(unittest.TestCase): def test_construct_overrides(self): graphics = {'type': 'spice', 'listen': '127.0.0.1'} - args = {'name': 'test', 'disks': [{'size': 10}, {'size': 20}], + args = {'name': 'test', 'disks': DISKS, 'graphics': graphics, "cdrom": self.iso} t = VMTemplate(args) self.assertEquals(2, len(t.info['disks'])) @@ -62,7 +67,7 @@ class VMTemplateTests(unittest.TestCase): def test_specified_graphics(self): # Test specified listen graphics = {'type': 'vnc', 'listen': '127.0.0.1'} - args = {'name': 'test', 'disks': [{'size': 10}, {'size': 20}], + args = {'name': 'test', 'disks': DISKS, 'graphics': graphics, 'cdrom': self.iso} t = VMTemplate(args) self.assertEquals(graphics, t.info['graphics']) diff --git a/src/wok/plugins/kimchi/vmtemplate.py b/src/wok/plugins/kimchi/vmtemplate.py index 62d0e1a..b90f221 100644 --- a/src/wok/plugins/kimchi/vmtemplate.py +++ b/src/wok/plugins/kimchi/vmtemplate.py @@ -76,23 +76,40 @@ class VMTemplate(object): graphics.update(graph_args) args['graphics'] = graphics + default_disk = self.info['disks'][0] # Override template values according to 'args' self.info.update(args) - - # Find pool type for each disk disks = self.info.get('disks') - for disk in disks: - pool_name = disk.get('pool', {}).get('name') - if pool_name is None: - raise MissingParameter('KCHTMPL0029E') + + basic_disk = ['index', 'format', 'pool', 'size'] + ro_disk = ['index', 'format', 'pool', 'volume'] + base_disk = ['index', 'base', 'pool', 'size', 'format'] + + for index, disk in enumerate(disks): + disk_info = dict(default_disk) pool_type = self._get_storage_type(disk['pool']['name']) - disk['pool']['type'] = pool_type + if pool_type in ['iscsi', 'scsi']: + disk_info = {'index': 0, 'format': 'raw', 'volume': None} + + disk_info.update(disk) + pool_name = disk_info.get('pool', {}).get('name') + if pool_name is None: + raise MissingParameter('KCHTMPL0028E') + + keys = sorted(disk_info.keys()) + if ((keys != sorted(basic_disk)) and (keys != sorted(ro_disk)) and + (keys != sorted(base_disk))): + raise MissingParameter('KCHTMPL0028E') if pool_type in ['logical', 'iscsi', 'scsi']: - if disk['format'] != 'raw': + if disk_info['format'] != 'raw': raise InvalidParameter('KCHTMPL0029E') + disk_info['pool']['type'] = pool_type + disk_info['index'] = disk_info.get('index', index) + self.info['disks'][index] = disk_info + def _get_os_info(self, args, scan): distro = version = 'unknown' @@ -406,7 +423,9 @@ class VMTemplate(object): return xml def validate(self): - self._storage_validate() + for disk in self.info.get('disks'): + pool_uri = disk.get('pool', {}).get('name') + self._storage_validate(pool_uri) self._network_validate() self._iso_validate() -- 2.1.0
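After this patch, VMTemplate itself normalizes and validates the disk list, so each entry has to match one of three shapes, sketched below (the SCSI volume name is hypothetical; the other values mirror the tests above):

  # file-backed disk created on a dir/nfs pool
  {"index": 0, "size": 10, "format": "qcow2",
   "pool": {"name": "/plugins/kimchi/storagepools/default"}}

  # pre-existing LUN on an iscsi/scsi pool ('format' must be 'raw')
  {"index": 0, "volume": "unit:0:0:1", "format": "raw",
   "pool": {"name": "/plugins/kimchi/storagepools/scsi_fc_pool"}}

  # disk based on an existing image file
  {"index": 0, "base": "/tmp/mock.img", "size": 1, "format": "qcow2",
   "pool": {"name": "/plugins/kimchi/storagepools/default"}}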

Please ignore this patchset. I found a problem in the UI and will send a new version.

Rodrigo Trujillo