[PATCH 0/5] [Memory HotPlug] Implements backend of memory device hotplug

This patchset implements the backend part of the memory hotplug
functionality, including:

- a new feature test to check if libvirt supports memory devices
- changes to the way memory is assigned or updated in the guest XML:
  * memory is now set in a basic NUMA node
- a maxMemory element in the XML:
  * equal to the total memory of the host
  * sets the memory device slots; the total number of slots is equal to
    maxMemory minus the memory assigned (1 slot == 1 GB)
- a new VM device, the memory device:
  * by default, a memory device has 1 GB
  * the user can add a memory device with the machine running or offline
- memory devices are selected according to their position in the XML
  (the slot): 0, 1, 2, etc.

URLs:
- http://localhost:8010/vms/<VM>/memdevices
- http://localhost:8010/vms/<VM>/memdevices/<MEM-DEV>

Rodrigo Trujillo (5):
  [Memory HotPlug] Feature test to check support to memory devices
  [Memory HotPlug] Add maxMemory and numa configuration to guest xml
  [Memory HotPlug] Memory device control and model classes
  [Memory HotPlug] Add parameters checking and documentation to memory
    device API
  [Memory HotPlug] Fix VM offline memory update and fix slots assignment

 docs/API.md                         | 16 +++++++
 src/kimchi/API.json                 | 14 +++++-
 src/kimchi/control/vm/memdevices.py | 47 +++++++++++++++++++
 src/kimchi/i18n.py                  |  5 ++
 src/kimchi/model/config.py          |  3 ++
 src/kimchi/model/featuretests.py    | 43 +++++++++++++++++
 src/kimchi/model/vmmemdevices.py    | 94 +++++++++++++++++++++++++++++++++++++
 src/kimchi/model/vms.py             | 77 +++++++++++++++++++++++++-----
 src/kimchi/vmtemplate.py            | 40 ++++++++++++----
 9 files changed, 316 insertions(+), 23 deletions(-)
 create mode 100644 src/kimchi/control/vm/memdevices.py
 create mode 100644 src/kimchi/model/vmmemdevices.py

--
2.1.0
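For reviewers who want to exercise the new endpoints, here is a minimal
client sketch (the VM name 'myvm', the use of python-requests and the
response shapes are illustrative, not part of the patchset;
authentication and TLS setup are omitted):

    import requests

    base = 'http://localhost:8010/vms/myvm/memdevices'

    # Attach two 1 GB memory devices ('amount' is validated in patch 4/5)
    resp = requests.post(base, json={'amount': 2})
    print(resp.status_code)

    # List the devices: each ident is the device's slot position in the XML
    for slot in requests.get(base).json():
        print(requests.get(base + '/' + slot).json())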

This patch adds a new feature test to check if the host libvirt version
supports memory device attachment. This support is provided since
libvirt 1.2.14 and is the basis for making memory hotplug work. When
libvirt does not support memory devices, it fails when we try to attach
one, raising libvirt.libvirtError:
"unsupported configuration: unknown device type 'memory'"

Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
---
 src/kimchi/model/config.py       |  3 +++
 src/kimchi/model/featuretests.py | 43 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 46 insertions(+)

diff --git a/src/kimchi/model/config.py b/src/kimchi/model/config.py
index e567ffc..5fca1a2 100644
--- a/src/kimchi/model/config.py
+++ b/src/kimchi/model/config.py
@@ -55,6 +55,7 @@ class CapabilitiesModel(object):
         self.fc_host_support = False
         self.metadata_support = False
         self.kernel_vfio = False
+        self.mem_hotplug_support = False
 
         # Subscribe function to set host capabilities to be run when cherrypy
         # server is up
@@ -91,6 +92,7 @@ class CapabilitiesModel(object):
         self.fc_host_support = FeatureTests.libvirt_support_fc_host(conn)
         self.metadata_support = FeatureTests.has_metadata_support(conn)
         self.kernel_vfio = FeatureTests.kernel_support_vfio()
+        self.mem_hotplug_support = FeatureTests.has_mem_hotplug_support(conn)
 
         self.libvirt_stream_protocols = []
         for p in ['http', 'https', 'ftp', 'ftps', 'tftp']:
@@ -139,6 +141,7 @@ class CapabilitiesModel(object):
             'auth': kconfig.get("authentication", "method"),
             'kernel_vfio': self.kernel_vfio,
             'nm_running': FeatureTests.is_nm_running(),
+            'mem_hotplug_support': self.mem_hotplug_support
         }
 
diff --git a/src/kimchi/model/featuretests.py b/src/kimchi/model/featuretests.py
index 9400151..86e918a 100644
--- a/src/kimchi/model/featuretests.py
+++ b/src/kimchi/model/featuretests.py
@@ -63,6 +63,25 @@ SIMPLE_VM_XML = """
   </os>
 </domain>"""
 
+MAXMEM_VM_XML = """
+<domain type='%(domain)s'>
+  <name>%(name)s</name>
+  <maxMemory slots='1' unit='KiB'>20480</maxMemory>
+  <memory unit='KiB'>10240</memory>
+  <os>
+    <type arch='%(arch)s'>hvm</type>
+    <boot dev='hd'/>
+  </os>
+</domain>"""
+
+DEV_MEM_XML = """
+<memory model='dimm'>
+  <target>
+    <size unit='KiB'>10240</size>
+    <node>0</node>
+  </target>
+</memory>"""
+
 SCSI_FC_XML = """
 <pool type='scsi'>
   <name>%(name)s</name>
@@ -207,3 +226,27 @@ class FeatureTests(object):
             return False
 
         return True
+
+    @staticmethod
+    def has_mem_hotplug_support(conn):
+        '''
+        A memory device can be hot-plugged or hot-unplugged since libvirt
+        version 1.2.14.
+        '''
+        with RollbackContext() as rollback:
+            FeatureTests.disable_libvirt_error_logging()
+            rollback.prependDefer(FeatureTests.enable_libvirt_error_logging)
+            conn_type = conn.getType().lower()
+            domain_type = 'test' if conn_type == 'test' else 'kvm'
+            arch = 'i686' if conn_type == 'test' else platform.machine()
+            arch = 'ppc64' if arch == 'ppc64le' else arch
+            dom = conn.defineXML(MAXMEM_VM_XML % {'name': FEATURETEST_VM_NAME,
+                                                  'domain': domain_type,
+                                                  'arch': arch})
+            rollback.prependDefer(dom.undefine)
+            try:
+                dom.attachDeviceFlags(DEV_MEM_XML,
+                                      libvirt.VIR_DOMAIN_MEM_CONFIG)
+                return True
+            except libvirt.libvirtError:
+                return False
--
2.1.0
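The same failure mode can be reproduced outside Kimchi with a few lines
of the libvirt Python bindings. A minimal sketch, assuming a guest that
is already defined with a <maxMemory> element like MAXMEM_VM_XML above
(the guest name here is illustrative):

    import libvirt

    DIMM_XML = """<memory model='dimm'>
      <target><size unit='KiB'>10240</size><node>0</node></target>
    </memory>"""

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('some-guest')  # illustrative guest name
    try:
        dom.attachDeviceFlags(DIMM_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
        print('memory devices supported')
    except libvirt.libvirtError as e:
        # On libvirt < 1.2.14 this prints:
        # "unsupported configuration: unknown device type 'memory'"
        print('not supported:', e.get_error_message())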

In order to support memory hotplug, the guest must have maxMemory and
NUMA configured in its XML.
For maxMemory, this patch changes the XML generation at guest creation
time, adding a maxMemory equal to the host's total memory, with as many
memory slots as the integer number of GiB that fit inside maxMemory (by
design, users will only be allowed to add memory in chunks of 1 GB).
For NUMA, this patch adds the simplest configuration possible, creating
only one node with all vcpus and the memory set in the template.

Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
---
 src/kimchi/model/vms.py  |  8 ++++++++
 src/kimchi/vmtemplate.py | 29 +++++++++++++++++++----------
 2 files changed, 27 insertions(+), 10 deletions(-)

diff --git a/src/kimchi/model/vms.py b/src/kimchi/model/vms.py
index f0182b9..dc7f91f 100644
--- a/src/kimchi/model/vms.py
+++ b/src/kimchi/model/vms.py
@@ -94,6 +94,14 @@ class VMsModel(object):
         if pool_uri:
             vm_overrides['storagepool'] = pool_uri
             vm_overrides['fc_host_support'] = self.caps.fc_host_support
+
+        # Set the maxMemory and slots parameter values.
+        # getInfo() reports memory in MiB, so an integer division by 1024
+        # gives the maximum number of slots to use in chunks of 1 GB.
+        vm_overrides['max_memory'] = self.conn.get().getInfo()[1] * 1024
+        vm_overrides['slots'] = self.conn.get().getInfo()[1] / 1024
+
         t = TemplateModel.get_template(t_name, self.objstore, self.conn,
                                        vm_overrides)
 
diff --git a/src/kimchi/vmtemplate.py b/src/kimchi/vmtemplate.py
index e047228..ffb56be 100644
--- a/src/kimchi/vmtemplate.py
+++ b/src/kimchi/vmtemplate.py
@@ -270,17 +270,25 @@ class VMTemplate(object):
         return input_output
 
     def _get_cpu_xml(self):
-
+        # NUMA configuration, necessary for memory hotplug
+        cpus = self.info.get('cpus') - 1
+        xml = E.cpu(E.numa(E.cell(
+            id='0',
+            cpus='0-' + str(cpus) if cpus > 0 else '0',
+            memory=str(self.info.get('memory')),
+            unit='MiB')))
+
+        # Include CPU topology, if provided
         cpu_info = self.info.get('cpu_info')
-        if cpu_info is None:
-            return ""
-        cpu_topo = cpu_info.get('topology')
-        if cpu_topo is None:
-            return ""
-        return etree.tostring(E.cpu(E.topology(
-            sockets=str(cpu_topo['sockets']),
-            cores=str(cpu_topo['cores']),
-            threads=str(cpu_topo['threads']))))
+        if cpu_info is not None:
+            cpu_topo = cpu_info.get('topology')
+            if cpu_topo is not None:
+                xml.insert(0, E.topology(
+                    sockets=str(cpu_topo['sockets']),
+                    cores=str(cpu_topo['cores']),
+                    threads=str(cpu_topo['threads'])))
+
+        return etree.tostring(xml)
 
     def to_vm_xml(self, vm_name, vm_uuid, **kwargs):
         params = dict(self.info)
@@ -313,6 +321,7 @@ class VMTemplate(object):
           %(qemu-stream-cmdline)s
           <name>%(name)s</name>
           <uuid>%(uuid)s</uuid>
+          <maxMemory slots='%(slots)s' unit='KiB'>%(max_memory)s</maxMemory>
          <memory unit='MiB'>%(memory)s</memory>
          <vcpu>%(cpus)s</vcpu>
          %(cpu_info)s
--
2.1.0
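For illustration, on a host with 16 GiB of RAM and a template asking for
2 vCPUs and 1024 MiB of memory, the relevant part of the generated guest
XML would look roughly like this (values are an example, not output from
a real run; 16384 MiB of host memory gives max_memory = 16777216 KiB and
16 slots):

    <maxMemory slots='16' unit='KiB'>16777216</maxMemory>
    <memory unit='MiB'>1024</memory>
    <vcpu>2</vcpu>
    <cpu>
      <numa>
        <cell id='0' cpus='0-1' memory='1024' unit='MiB'/>
      </numa>
    </cpu>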

This patch creates the new collection and resource for memory devices.
They are subnodes of a given VM resource, which means the user uses the
VM URL/API to add or fetch the memory devices, such as:

http://<HOST>/vms/<YOUR_VM>/memdevices (GET)        -> list memory devices
http://<HOST>/vms/<YOUR_VM>/memdevices (POST)       -> add a memory device
http://<HOST>/vms/<YOUR_VM>/memdevices/<SLOT> (GET) -> retrieve the memory
                                                       device in SLOT

This patch does not add support to DELETE or UPDATE a memory device.

Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
---
 src/kimchi/control/vm/memdevices.py | 47 +++++++++++++++++++
 src/kimchi/i18n.py                  |  2 +
 src/kimchi/model/vmmemdevices.py    | 94 +++++++++++++++++++++++++++++++++++++
 3 files changed, 143 insertions(+)
 create mode 100644 src/kimchi/control/vm/memdevices.py
 create mode 100644 src/kimchi/model/vmmemdevices.py

diff --git a/src/kimchi/control/vm/memdevices.py b/src/kimchi/control/vm/memdevices.py
new file mode 100644
index 0000000..2d96104
--- /dev/null
+++ b/src/kimchi/control/vm/memdevices.py
@@ -0,0 +1,47 @@
+#
+# Project Kimchi
+#
+# Copyright IBM, Corp. 2015
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+from kimchi.control.base import Collection, Resource
+from kimchi.control.utils import UrlSubNode
+
+
+@UrlSubNode("memdevices")
+class VMMemDevices(Collection):
+    def __init__(self, model, vm):
+        super(VMMemDevices, self).__init__(model)
+        self.admin_methods = ['GET', 'POST']
+        self.resource = VMMemDevice
+        self.vm = vm
+        self.resource_args = [self.vm, ]
+        self.model_args = [self.vm, ]
+
+
+class VMMemDevice(Resource):
+    def __init__(self, model, vm, ident):
+        super(VMMemDevice, self).__init__(model, ident)
+        self.admin_methods = ['GET']
+        self.vm = vm
+        self.ident = ident
+        self.info = {}
+        self.model_args = [self.vm, self.ident]
+        self.uri_fmt = '/vms/%s/memdevices/%s'
+
+    @property
+    def data(self):
+        return self.info
diff --git a/src/kimchi/i18n.py b/src/kimchi/i18n.py
index 9f169ab..7085530 100644
--- a/src/kimchi/i18n.py
+++ b/src/kimchi/i18n.py
@@ -340,4 +340,6 @@ messages = {
     "KCHCPUINF0002E": _("Invalid vCPU/topology combination."),
     "KCHCPUINF0003E": _("This host (or current configuration) does not allow CPU topology."),
 
+    "KCHMEMDEV0001E": _("Host's libvirt version does not support memory devices. Libvirt must be >= 1.2.14"),
+    "KCHMEMDEV0002E": _("Error attaching memory device. Details: %(error)s"),
 }
diff --git a/src/kimchi/model/vmmemdevices.py b/src/kimchi/model/vmmemdevices.py
new file mode 100644
index 0000000..016aa8d
--- /dev/null
+++ b/src/kimchi/model/vmmemdevices.py
@@ -0,0 +1,94 @@
+#
+# Project Kimchi
+#
+# Copyright IBM, Corp. 2015
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+from libvirt import VIR_DOMAIN_MEM_CONFIG, VIR_DOMAIN_MEM_LIVE
+from lxml import etree, objectify
+from lxml.builder import E
+
+
+from kimchi.exception import InvalidOperation, OperationFailed
+from kimchi.model.config import CapabilitiesModel
+
+
+def _get_all_memory_devices(conn, vm_name):
+    # Return all memory devices from a given guest
+    dom = conn.get().lookupByName(vm_name)
+    devs = objectify.fromstring(dom.XMLDesc(0)).devices
+    return devs.findall('memory')
+
+
+class VMMemDevicesModel(object):
+    def __init__(self, **kargs):
+        self.conn = kargs['conn']
+        self.objstore = kargs['objstore']
+        self.caps = CapabilitiesModel(**kargs)
+
+    def create(self, vm_name, params):
+        # Check if the host supports memory devices
+        if not self.caps.mem_hotplug_support:
+            raise InvalidOperation("KCHMEMDEV0001E")
+
+        dom = self.conn.get().lookupByName(vm_name)
+        # Set flags, depending on whether the machine is running or not
+        flags = VIR_DOMAIN_MEM_CONFIG
+        if dom.isActive():
+            flags = flags | VIR_DOMAIN_MEM_LIVE
+
+        # Create the memory device XML
+        mem_dev_xml = etree.tostring(
+            E.memory(
+                E.target(
+                    E.size('1', unit='GiB'),
+                    E.node('0')),
+                model='dimm'
+            )
+        )
+        # Add chunks of 1 GB of memory
+        for i in range(params['amount']):
+            try:
+                dom.attachDeviceFlags(mem_dev_xml, flags)
+            except Exception as e:
+                raise OperationFailed("KCHMEMDEV0002E", {'error': e.message})
+        return len(_get_all_memory_devices(self.conn, vm_name)) - 1
+
+    def get_list(self, vm_name):
+        # Memory devices do not have an ID or anything similar in the XML,
+        # so let's return the positions of all memory devices in the XML.
+        return [str(i) for i in range(len(
+            _get_all_memory_devices(self.conn, vm_name)))]
+
+
+class VMMemDeviceModel(object):
+    def __init__(self, **kargs):
+        self.conn = kargs['conn']
+        self.objstore = kargs['objstore']
+        self.caps = CapabilitiesModel(**kargs)
+
+    def lookup(self, vm_name, dev_slot):
+        # We consider the memory device position in the XML, which can
+        # also be considered the slot position
+        try:
+            memdev = _get_all_memory_devices(self.conn, vm_name)[int(dev_slot)]
+            return {
+                'slot': dev_slot,
+                'size': str(memdev.xpath('./target/size')[0] / 1024 / 1024) +
+                        'G',
+                'node': str(memdev.xpath('./target/node')[0])
+            }
+        except IndexError:
+            return {}
--
2.1.0
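To make the slot-as-identifier scheme concrete, a guest with two
attached devices would behave roughly like this (response bodies are
illustrative):

    GET /vms/myvm/memdevices
    -> ["0", "1"]

    GET /vms/myvm/memdevices/0
    -> {"slot": "0", "size": "1G", "node": "0"}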

This patch adds the necessary restrictions to the new API, avoiding and
warning the user about basic mistakes. The restriction is basically that
the "amount" parameter must be passed and must be an integer.

Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
---
 docs/API.md         | 16 ++++++++++++++++
 src/kimchi/API.json | 14 +++++++++++++-
 src/kimchi/i18n.py  |  2 ++
 3 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/docs/API.md b/docs/API.md
index 71f2539..4f05dc0 100644
--- a/docs/API.md
+++ b/docs/API.md
@@ -223,6 +223,22 @@ Represents a snapshot of the Virtual Machine's primary monitor.
 **URI:** /vms/*:name*/snapshots/current
 * **GET**: Retrieve current snapshot information for the virtual machine.
 
+### Sub-collection: Virtual Machine Memory Devices
+**URI:** /vms/*:name*/memdevices
+* **GET**: Retrieve a list of all memory devices attached to the given
+  virtual machine.
+* **POST**: Attach new 1 GB memory devices to the virtual machine. Attaches
+  as many as the amount given, and only if there are slots available.
+    * amount: An integer representing the total GB to be attached to the VM.
+
+### Sub-resource: memdevice
+**URI:** /vms/*:name*/memdevices/*:memdevice*
+* **GET**: Retrieve the memory device information from the VM XML.
+  ":memdevice" is the slot or position in the XML, starting from 0.
+    * slot: The slot or position in the VM XML.
+    * node: The NUMA node the device is associated with. '0' by default.
+    * size: The total memory of the device.
+
+
 ### Collection: Templates
 
 **URI:** /templates
diff --git a/src/kimchi/API.json b/src/kimchi/API.json
index a6330ae..569f8a8 100644
--- a/src/kimchi/API.json
+++ b/src/kimchi/API.json
@@ -831,6 +831,18 @@
                 }
             },
             "error": "KCHAPI0001E"
-        }
+        },
+        "vmmemdevices_create": {
+            "type": "object",
+            "error": "KCHMEMDEV0003E",
+            "properties": {
+                "amount": {
+                    "description": "Amount of memory in GB to be attached to VM.",
+                    "type": "integer",
+                    "required": true,
+                    "error": "KCHMEMDEV0004E"
+                }
+            }
+        }
     }
 }
diff --git a/src/kimchi/i18n.py b/src/kimchi/i18n.py
index 7085530..e6e00b8 100644
--- a/src/kimchi/i18n.py
+++ b/src/kimchi/i18n.py
@@ -342,4 +342,6 @@ messages = {
     "KCHMEMDEV0001E": _("Host's libvirt version does not support memory devices. Libvirt must be >= 1.2.14"),
     "KCHMEMDEV0002E": _("Error attaching memory device. Details: %(error)s"),
+    "KCHMEMDEV0003E": _("Parameter 'amount' is required."),
+    "KCHMEMDEV0004E": _("Amount of memory (GB) to be attached is required and must be an integer."),
 }
--
2.1.0
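With these checks in place, a malformed request is rejected before it
reaches libvirt; for example (the response shown is illustrative of the
messages above):

    POST /vms/myvm/memdevices {"amount": "two"}
    -> 400 Bad Request
       "Amount of memory (GB) to be attached is required and must be
        an integer."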

This patch changes the current memory update process (the static update
allowed when the guest is offline). Now the update creates the new XML
elements that allow memory hotplug, if necessary (maxMemory, cpu, numa,
node, etc.). It also introduces checks to avoid errors if the user sets
more memory than is allowed in the server.
The memory device slot counting and assignment were changed to avoid
errors, such as negative or zero slots. The slot count is now updated as
the static memory is changed.

Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
---
 src/kimchi/i18n.py       |  1 +
 src/kimchi/model/vms.py  | 79 +++++++++++++++++++++++++++++++++++++-----------
 src/kimchi/vmtemplate.py | 11 +++++++
 3 files changed, 74 insertions(+), 17 deletions(-)

diff --git a/src/kimchi/i18n.py b/src/kimchi/i18n.py
index e6e00b8..1779597 100644
--- a/src/kimchi/i18n.py
+++ b/src/kimchi/i18n.py
@@ -111,6 +111,7 @@ messages = {
     "KCHVM0038E": _("Unable to suspend VM '%(name)s'. Details: %(err)s"),
     "KCHVM0039E": _("Cannot resume VM '%(name)s' because it is not paused."),
     "KCHVM0040E": _("Unable to resume VM '%(name)s'. Details: %(err)s"),
+    "KCHVM0041E": _("Memory assigned is higher than the maximum allowed in the host."),
 
     "KCHVMHDEV0001E": _("VM %(vmid)s does not contain directly assigned host device %(dev_name)s."),
     "KCHVMHDEV0002E": _("The host device %(dev_name)s is not allowed to directly assign to VM."),
diff --git a/src/kimchi/model/vms.py b/src/kimchi/model/vms.py
index dc7f91f..2c3a667 100644
--- a/src/kimchi/model/vms.py
+++ b/src/kimchi/model/vms.py
@@ -59,8 +59,7 @@ DOM_STATE_MAP = {0: 'nostate',
                  7: 'pmsuspended'}
 
 VM_STATIC_UPDATE_PARAMS = {'name': './name',
-                           'cpus': './vcpu',
-                           'memory': './memory'}
+                           'cpus': './vcpu'}
 VM_LIVE_UPDATE_PARAMS = {}
 
 XPATH_DOMAIN_DISK = "/domain/devices/disk[@device='disk']/source/@file"
@@ -73,6 +72,7 @@ XPATH_DOMAIN_MEMORY = '/domain/memory'
 XPATH_DOMAIN_MEMORY_UNIT = '/domain/memory/@unit'
 XPATH_DOMAIN_UUID = '/domain/uuid'
 
+XPATH_NUMA_CELL = './cpu/numa/cell'
 
 class VMsModel(object):
     def __init__(self, **kargs):
@@ -95,12 +95,9 @@ class VMsModel(object):
             vm_overrides['storagepool'] = pool_uri
             vm_overrides['fc_host_support'] = self.caps.fc_host_support
 
-        # Set the maxMemory and slots parameter values.
-        # getInfo() reports memory in MiB, so an integer division by 1024
-        # gives the maximum number of slots to use in chunks of 1 GB.
+        # Set the maxMemory of the VM, which will be equal to the host
+        # memory. Host memory comes in MiB, so transform to KiB.
         vm_overrides['max_memory'] = self.conn.get().getInfo()[1] * 1024
-        vm_overrides['slots'] = self.conn.get().getInfo()[1] / 1024
 
         t = TemplateModel.get_template(t_name, self.objstore, self.conn,
                                        vm_overrides)
@@ -659,15 +656,15 @@ class VMModel(object):
         for key, val in params.items():
             if key in VM_STATIC_UPDATE_PARAMS:
-                if key == 'memory':
-                    # Libvirt saves memory in KiB. Retrieved XML has memory
-                    # in KiB too, so the new value must be in KiB here.
-                    val = val * 1024
                 if type(val) == int:
                     val = str(val)
                 xpath = VM_STATIC_UPDATE_PARAMS[key]
                 new_xml = xml_item_update(new_xml, xpath, val)
 
+        # Update memory, adding NUMA if necessary
+        if 'memory' in params:
+            new_xml = self._update_memory(new_xml, params)
+
         if 'graphics' in params:
             new_xml = self._update_graphics(dom, new_xml, params)
 
@@ -695,12 +692,7 @@ class VMModel(object):
                     # Undefine old vm, only if name is going to change
                     dom.undefine()
 
-                root = ET.fromstring(new_xml)
-                currentMem = root.find('.currentMemory')
-                if currentMem is not None:
-                    root.remove(currentMem)
-
-                dom = conn.defineXML(ET.tostring(root, encoding="utf-8"))
+                dom = conn.defineXML(new_xml)
                 if 'name' in params:
                     self._redefine_snapshots(dom, snapshots_info)
             except libvirt.libvirtError as e:
@@ -712,6 +704,59 @@ class VMModel(object):
         return dom
 
+    def _update_memory(self, xml, params):
+        # Check if NUMA memory is already configured; if not, check if the
+        # CPU element is already configured (topology). Then add the NUMA
+        # element as appropriate.
+        root = ET.fromstring(xml)
+        numa_mem = xpath_get_text(xml, XPATH_NUMA_CELL + '/@memory')
+        if numa_mem == []:
+            vcpus = int(xpath_get_text(xml,
+                                       VM_STATIC_UPDATE_PARAMS['cpus'])[0])
+            numa_xml = E.numa(E.cell(
+                id='0',
+                cpus='0-' + str(vcpus - 1) if vcpus > 1 else '0',
+                unit='KiB'))
+            cpu = root.find('./cpu')
+            if cpu is None:
+                root.insert(0, E.cpu(numa_xml))
+            else:
+                cpu.insert(0, numa_xml)
+
+        # Remove currentMemory and memory, automatically set later by libvirt
+        currentMem = root.find('.currentMemory')
+        if currentMem is not None:
+            root.remove(currentMem)
+        memory = root.find('.memory')
+        if memory is not None:
+            root.remove(memory)
+
+        # Update/add maxMemory accordingly
+        if self.caps.mem_hotplug_support:
+            maxMem = root.find('.maxMemory')
+            host_mem = self.conn.get().getInfo()[1]
+            slots = (host_mem - params['memory']) / 1024
+            # Libvirt does not accept slots <= 0
+            if slots < 0:
+                raise OperationFailed("KCHVM0041E")
+            elif slots == 0:
+                slots = 1
+            if maxMem is None:
+                max_mem_xml = E.maxMemory(
+                    str(host_mem * 1024),
+                    unit='KiB',
+                    slots=str(slots))
+                root.insert(0, max_mem_xml)
+                new_xml = ET.tostring(root, encoding="utf-8")
+            else:
+                # Update slots only
+                new_xml = xml_item_update(ET.tostring(root, encoding="utf-8"),
+                                          './maxMemory',
+                                          str(slots),
+                                          attr='slots')
+        else:
+            new_xml = ET.tostring(root, encoding="utf-8")
+
+        # Finally, update the memory value in the NUMA cell
+        new_xml = xml_item_update(new_xml, XPATH_NUMA_CELL,
+                                  str(params['memory'] * 1024), attr='memory')
+        return new_xml
+
     def _live_vm_update(self, dom, params):
         self._vm_update_access_metadata(dom, params)
 
diff --git a/src/kimchi/vmtemplate.py b/src/kimchi/vmtemplate.py
index ffb56be..f7dd5e4 100644
--- a/src/kimchi/vmtemplate.py
+++ b/src/kimchi/vmtemplate.py
@@ -316,6 +316,17 @@ class VMTemplate(object):
         else:
             params['cdroms'] = cdrom_xml
 
+        # Set the maximum number of slots to avoid errors when hotplugging
+        # memory. The number of slots is the number of 1 GB chunks that fit
+        # inside the host's max_memory minus the memory assigned to the VM.
+        params['slots'] = ((params['max_memory'] / 1024) -
+                           params['memory']) / 1024
+
+        if params['slots'] < 0:
+            raise OperationFailed("KCHVM0041E")
+        elif params['slots'] == 0:
+            params['slots'] = 1
+
         xml = """
         <domain type='%(domain)s'>
           %(qemu-stream-cmdline)s
--
2.1.0
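A worked example of the new slot accounting (numbers are illustrative):

    host_mem = 16384                     # getInfo()[1], in MiB
    new_mem = 4096                       # 'memory' param, in MiB
    slots = (host_mem - new_mem) / 1024  # -> 12 one-GB slots
    # slots < 0  -> KCHVM0041E: guest memory exceeds host memory
    # slots == 0 -> bumped to 1, since libvirt rejects a slot count of 0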

This patchset breaks some tests when running "make check" on Fedora 21.

Also, why do we need to create the subresource "memdevices"? Why can't
we hotplug memory by setting a new value to the VM 'memory', as in:

PUT /vms/myvm {'memory': 1000}

Thanks, I am going to fix this.

Hi Cristian,

"Why create memdevices?" This is all about the way libvirt and memory
hotplug work. Let me try to explain:

- It is possible to set two elements in the guest XML: <memory> and
<currentMemory>. The first is the maximum amount of memory allocated to
the guest when libvirt starts it; the second is the memory the guest
will "see". You can increase and decrease currentMemory with the guest
running, but you are not actually hot-adding memory: you are using the
guest's ballooning module.

- On the other side, we can attach (hotplug) a memory device to the
guest (like attaching a network device), which is different from using
ballooning, and the values can be higher than <memory>. In fact, this is
the only way to increase <memory> on running guests.

Rodrigo Trujillo

On 27-05-2015 12:39, Rodrigo Trujillo wrote:
Hi Cristian,
"why create memdevides ?" this is all about the way libvirt and memory hotplug works. Let me try to explain: - it is possible to set in guest xml two elements: <memory> and <currentmemory>. The first is the max amount of memory allocated to guest when libvirt starts it, the second is the memory the guest will "see". You can increase and decrease currentmemory with guest running, but you are not actually hotadding memory, you are using the ballooning module of the guest.
Changes to a VM's XML configuration are not applied to a running system, so changing the memory value via XML on a running VM isn't expected to update the VM memory, only after reboot.
- On the other side, we can attach (hotplug) a memory device to the guest (like attaching a network device), which is different from using ballooning, and the values can be higher than <memory>. In fact, this is the only way to increase <memory> on running guests.
It's not. You can use the function "dom.setMemory[Flags]", which seems like a much easier way of implementing this feature. And even if you really need to attach a memory device to a VM, wouldn't it be simpler for the user to just set a memory value? Then Kimchi will add/remove devices as needed.

On 05/27/2015 02:19 PM, Crístian Deives wrote:
On 27-05-2015 12:39, Rodrigo Trujillo wrote:
Hi Cristian,
"why create memdevides ?" this is all about the way libvirt and memory hotplug works. Let me try to explain: - it is possible to set in guest xml two elements: <memory> and <currentmemory>. The first is the max amount of memory allocated to guest when libvirt starts it, the second is the memory the guest will "see". You can increase and decrease currentmemory with guest running, but you are not actually hotadding memory, you are using the ballooning module of the guest.
Changes to a VM's XML configuration are not applied to a running system, so changing the memory value via XML on a running VM isn't expected to update the VM memory, only after reboot.
Yes, yes, you are correct. Actually, I mentioned <memory> and <currentMemory> but did not say that we have to use "setMemoryFlags" and "setMaxMemory" to update these values in a running system. Changing the XML is not an option for memory hotplug.
- On the other side, we can attach (hotplug) a memory device to the guest (like attaching a network device), which is different from using ballooning, and the values can be higher than <memory>. In fact, this is the only way to increase <memory> on running guests.
It's not. You can use the function "dom.setMemory[Flags]", which seems like a much easier way of implementing this feature.
Yes, we can use setMemory, but you can only increase the running guest's memory up to the value set in <memory>. And yes, this is way easier to implement, but we would have to set <memory> to a high value and <currentMemory> to the value the user wants to start the guest with, so the user would have an interval between <currentMemory> and <memory> to increase. Today, Kimchi uses <memory> and <currentMemory> as the same value. I think you are mixing the concepts (I did this too): the feature I am proposing is memory hotplug, like installing a new memory module in a physical machine. This way you can increase the guest memory to a value higher than the one previously set in <memory>. In other words, we don't want to use ballooning.
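To make the difference concrete, the two mechanisms look roughly like
this in the libvirt Python bindings (a sketch; the guest name and sizes
are illustrative):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('myvm')  # illustrative guest name

    # Ballooning: changes what the guest currently sees, but is capped
    # by the <memory> value the guest was started with
    dom.setMemoryFlags(2 * 1024 * 1024,  # new size in KiB
                       libvirt.VIR_DOMAIN_AFFECT_LIVE)

    # Memory hotplug: attaches a new dimm and can grow the guest past
    # its original <memory> (needs <maxMemory>, NUMA, libvirt >= 1.2.14)
    dimm = """<memory model='dimm'>
      <target><size unit='GiB'>1</size><node>0</node></target>
    </memory>"""
    dom.attachDeviceFlags(dimm, libvirt.VIR_DOMAIN_AFFECT_LIVE |
                                libvirt.VIR_DOMAIN_AFFECT_CONFIG)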
And even if you really need to attach a memory device to a VM, wouldn't it be simpler for the user to just set a memory value? Then Kimchi will add/remove devices as needed.
This can be done in the front-end.