[PATCH 0/6 - V2][Memory HotPlug] Implements backend of memory device hotplug

V2:
 - Fix errors in tests and add a test for the number of slots
 - Fix other minor issues with libvirt < 1.2.14

V1:
This patchset implements the backend part of the memory hotplug
functionality, including:
 - a new feature test to check whether libvirt supports memory devices
 - changes to the way memory is assigned or updated in the guest XML:
   * memory is now set in a basic NUMA node
 - a maxMemory element in the XML:
   * equal to the total memory of the host
   * sets the memory device slots; the total number of slots equals
     maxMemory minus the assigned memory (1 slot == 1 GB)
 - a new VM device, the memory device:
   * by default, a memory device has 1 GB
   * the user can add a memory device with the machine running or offline
 - memory devices are selected according to their position (slot) in the
   XML: 0, 1, 2, etc.

URLs:
 - http://localhost:8010/vms/<VM>/memdevices
 - http://localhost:8010/vms/<VM>/memdevices/<MEM-DEV>

Rodrigo Trujillo (6):
  [Memory HotPlug] Feature test to check support to memory devices
  [Memory HotPlug] Add maxMemory and numa configuration to guest xml
  [Memory HotPlug] Memory device control and model classes
  [Memory HotPlug] Add parameters checking and documentation to memory device API
  [Memory HotPlug] Fix VM offline memory update and fix slots assignment
  [Memory HotPlug] Fix test and adds slot test

 docs/API.md                         | 16 +++++++
 src/kimchi/API.json                 | 14 +++++-
 src/kimchi/control/vm/memdevices.py | 47 +++++++++++++++++++
 src/kimchi/i18n.py                  |  5 ++
 src/kimchi/mockmodel.py             |  7 +++
 src/kimchi/model/config.py          |  3 ++
 src/kimchi/model/featuretests.py    | 43 +++++++++++++++++
 src/kimchi/model/vmmemdevices.py    | 94 +++++++++++++++++++++++++++++++++++++
 src/kimchi/model/vms.py             | 85 ++++++++++++++++++++++++++++-----
 src/kimchi/vmtemplate.py            | 40 ++++++++++++----
 tests/test_model.py                 |  2 +-
 tests/test_rest.py                  |  4 +-
 tests/test_vmtemplate.py            |  5 +-
 13 files changed, 338 insertions(+), 27 deletions(-)
 create mode 100644 src/kimchi/control/vm/memdevices.py
 create mode 100644 src/kimchi/model/vmmemdevices.py

-- 
2.1.0

This patch adds a new feature test to check if the host libvirt version
supports memory device attachment. This support is provided since libvirt
1.2.14 and is the base to make memory hotplug work. When libvirt does not
support memory devices, it is going to fail when we try to attach one,
raising libvirt.libvirtError: "unsupported configuration: unknown device
type 'memory'"

Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
---
 src/kimchi/model/config.py       |  3 +++
 src/kimchi/model/featuretests.py | 43 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 46 insertions(+)

diff --git a/src/kimchi/model/config.py b/src/kimchi/model/config.py
index 687531b..8735beb 100644
--- a/src/kimchi/model/config.py
+++ b/src/kimchi/model/config.py
@@ -55,6 +55,7 @@ class CapabilitiesModel(object):
         self.fc_host_support = False
         self.metadata_support = False
         self.kernel_vfio = False
+        self.mem_hotplug_support = False
 
         # Subscribe function to set host capabilities to be run when cherrypy
         # server is up
@@ -91,6 +92,7 @@ class CapabilitiesModel(object):
         self.fc_host_support = FeatureTests.libvirt_support_fc_host(conn)
         self.metadata_support = FeatureTests.has_metadata_support(conn)
         self.kernel_vfio = FeatureTests.kernel_support_vfio()
+        self.mem_hotplug_support = FeatureTests.has_mem_hotplug_support(conn)
 
         self.libvirt_stream_protocols = []
         for p in ['http', 'https', 'ftp', 'ftps', 'tftp']:
@@ -139,6 +141,7 @@ class CapabilitiesModel(object):
                 'auth': kconfig.get("authentication", "method"),
                 'kernel_vfio': self.kernel_vfio,
                 'nm_running': FeatureTests.is_nm_running(),
+                'mem_hotplug_support': self.mem_hotplug_support
                 }
 
diff --git a/src/kimchi/model/featuretests.py b/src/kimchi/model/featuretests.py
index 9400151..86e918a 100644
--- a/src/kimchi/model/featuretests.py
+++ b/src/kimchi/model/featuretests.py
@@ -63,6 +63,25 @@ SIMPLE_VM_XML = """
   </os>
 </domain>"""
 
+MAXMEM_VM_XML = """
+<domain type='%(domain)s'>
+  <name>%(name)s</name>
+  <maxMemory slots='1' unit='KiB'>20480</maxMemory>
+  <memory unit='KiB'>10240</memory>
+  <os>
+    <type arch='%(arch)s'>hvm</type>
+    <boot dev='hd'/>
+  </os>
+</domain>"""
+
+DEV_MEM_XML = """
+<memory model='dimm'>
+  <target>
+    <size unit='KiB'>10240</size>
+    <node>0</node>
+  </target>
+</memory>"""
+
 SCSI_FC_XML = """
 <pool type='scsi'>
   <name>%(name)s</name>
@@ -207,3 +226,27 @@ class FeatureTests(object):
                 return False
 
         return True
+
+    @staticmethod
+    def has_mem_hotplug_support(conn):
+        '''
+        A memory device can be hot-plugged or hot-unplugged since libvirt
+        version 1.2.14.
+        '''
+        with RollbackContext() as rollback:
+            FeatureTests.disable_libvirt_error_logging()
+            rollback.prependDefer(FeatureTests.enable_libvirt_error_logging)
+            conn_type = conn.getType().lower()
+            domain_type = 'test' if conn_type == 'test' else 'kvm'
+            arch = 'i686' if conn_type == 'test' else platform.machine()
+            arch = 'ppc64' if arch == 'ppc64le' else arch
+            dom = conn.defineXML(MAXMEM_VM_XML % {'name': FEATURETEST_VM_NAME,
+                                                  'domain': domain_type,
+                                                  'arch': arch})
+            rollback.prependDefer(dom.undefine)
+            try:
+                dom.attachDeviceFlags(DEV_MEM_XML,
+                                      libvirt.VIR_DOMAIN_MEM_CONFIG)
+                return True
+            except libvirt.libvirtError:
+                return False
-- 
2.1.0
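The feature test probes support by actually attaching a device rather than comparing version numbers, which is more robust since distributions may backport features. For illustration only, the bare version comparison would look like the sketch below; the helper names are mine, and the packed integer form (major*1000000 + minor*1000 + release) is the scheme libvirt's virConnectGetLibVersion uses:

```python
# Hypothetical helper: compare a libvirt version string against the
# minimum release (1.2.14) that supports memory device hotplug.

MIN_MEM_HOTPLUG_VERSION = 1 * 1000000 + 2 * 1000 + 14  # 1.2.14

def parse_libvirt_version(version_string):
    """Turn 'major.minor.release' into libvirt's packed integer form."""
    major, minor, release = (int(p) for p in version_string.split('.'))
    return major * 1000000 + minor * 1000 + release

def version_supports_mem_hotplug(version_string):
    return parse_libvirt_version(version_string) >= MIN_MEM_HOTPLUG_VERSION

print(version_supports_mem_hotplug('1.2.13'))  # False
print(version_supports_mem_hotplug('1.2.14'))  # True
```

A check like this would miss backported support, which is exactly why the patch attaches a throwaway device instead.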

Reviewed-by: Aline Manera <alinefm@linux.vnet.ibm.com>

In order to support memory hotplug, the guest must have maxMemory and NUMA
configured in its XML. For maxMemory, this patch changes XML generation at
guest creation time, setting maxMemory equal to the host's total memory and
the number of memory slots to the integer number of GiB that fit inside
maxMemory (by design, users will be allowed to add only memory chunks of
1 GB). For NUMA, this patch adds the simplest configuration possible,
creating a single node with all vCPUs and the memory set in the template.

Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
---
 src/kimchi/model/vms.py  |  8 ++++++++
 src/kimchi/vmtemplate.py | 29 +++++++++++++++++++----------
 2 files changed, 27 insertions(+), 10 deletions(-)

diff --git a/src/kimchi/model/vms.py b/src/kimchi/model/vms.py
index f0182b9..dc7f91f 100644
--- a/src/kimchi/model/vms.py
+++ b/src/kimchi/model/vms.py
@@ -94,6 +94,14 @@ class VMsModel(object):
         if pool_uri:
             vm_overrides['storagepool'] = pool_uri
             vm_overrides['fc_host_support'] = self.caps.fc_host_support
+
+        # Setting maxMemory and slots parameter values.
+        # getInfo memory value comes in MiB, so integer division by 1024
+        # gives the maximum number of slots to use in chunks of 1 GB
+        vm_overrides['max_memory'] = self.conn.get().getInfo()[1] * 1024
+        vm_overrides['slots'] = self.conn.get().getInfo()[1] / 1024
+
         t = TemplateModel.get_template(t_name, self.objstore, self.conn,
                                        vm_overrides)

diff --git a/src/kimchi/vmtemplate.py b/src/kimchi/vmtemplate.py
index e047228..a20098d 100644
--- a/src/kimchi/vmtemplate.py
+++ b/src/kimchi/vmtemplate.py
@@ -270,17 +270,25 @@ class VMTemplate(object):
         return input_output
 
     def _get_cpu_xml(self):
-
+        # NUMA configuration. Necessary for memory hotplug
+        cpus = self.info.get('cpus') - 1
+        xml = E.cpu(E.numa(E.cell(
+            id='0',
+            cpus='0-' + str(cpus) if cpus > 0 else '0',
+            memory=str(self.info.get('memory') << 10),
+            unit='KiB')))
+
+        # Include CPU topology, if provided
         cpu_info = self.info.get('cpu_info')
-        if cpu_info is None:
-            return ""
-        cpu_topo = cpu_info.get('topology')
-        if cpu_topo is None:
-            return ""
-        return etree.tostring(E.cpu(E.topology(
-            sockets=str(cpu_topo['sockets']),
-            cores=str(cpu_topo['cores']),
-            threads=str(cpu_topo['threads']))))
+        if cpu_info is not None:
+            cpu_topo = cpu_info.get('topology')
+            if cpu_topo is not None:
+                xml.insert(0, E.topology(
+                    sockets=str(cpu_topo['sockets']),
+                    cores=str(cpu_topo['cores']),
+                    threads=str(cpu_topo['threads'])))
+
+        return etree.tostring(xml)
 
     def to_vm_xml(self, vm_name, vm_uuid, **kwargs):
         params = dict(self.info)
@@ -313,6 +321,7 @@ class VMTemplate(object):
             %(qemu-stream-cmdline)s
             <name>%(name)s</name>
             <uuid>%(uuid)s</uuid>
+            <maxMemory slots='%(slots)s' unit='KiB'>%(max_memory)s</maxMemory>
             <memory unit='MiB'>%(memory)s</memory>
             <vcpu>%(cpus)s</vcpu>
             %(cpu_info)s
-- 
2.1.0
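The maxMemory/slots arithmetic in the vms.py hunk can be sketched as a standalone function (the function name is mine; getInfo()[1] reports the host memory in MiB):

```python
def max_memory_and_slots(host_memory_mib):
    """Mirror the patch's arithmetic: maxMemory is expressed in KiB,
    and one hotplug slot is reserved per whole GiB of host memory
    (integer division)."""
    max_memory_kib = host_memory_mib * 1024
    slots = host_memory_mib // 1024  # 1 slot per 1 GiB chunk
    return max_memory_kib, slots

# e.g. a 16 GiB host:
print(max_memory_and_slots(16384))  # (16777216, 16)
```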

On 28/05/2015 10:59, Rodrigo Trujillo wrote:
In order to support memory hotplug, guest must have maxMemory and NUMA configured in the xml. For maxMemory, this patch changes xml generation at guest creation time, adding maxMemory equals to host total memory and memory slots as the integer number of GiB that fit inside maxMemory (by design users will be allowed to add only memory chunks of 1GB). For NUMA, this patch adds the simplest configuration possible, creating only one node with all vcpus and memory set in the template.
Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com> --- src/kimchi/model/vms.py | 8 ++++++++ src/kimchi/vmtemplate.py | 29 +++++++++++++++++++---------- 2 files changed, 27 insertions(+), 10 deletions(-)
diff --git a/src/kimchi/model/vms.py b/src/kimchi/model/vms.py index f0182b9..dc7f91f 100644 --- a/src/kimchi/model/vms.py +++ b/src/kimchi/model/vms.py @@ -94,6 +94,14 @@ class VMsModel(object): if pool_uri: vm_overrides['storagepool'] = pool_uri vm_overrides['fc_host_support'] = self.caps.fc_host_support +
+ # Setting maxMemory and slots parameter values + # getInfo memory value comes in MiB, so dividing by 1024 integer, + # gives the interger maximum number of slots to use in chunks of + # 1 GB + vm_overrides['max_memory'] = self.conn.get().getInfo()[1] * 1024 + vm_overrides['slots'] = self.conn.get().getInfo()[1] / 1024 +
As those values are the same for every guest and will never change (at least for now), I suggest inserting them into the 'defaults' in osinfo.py.
t = TemplateModel.get_template(t_name, self.objstore, self.conn, vm_overrides)
diff --git a/src/kimchi/vmtemplate.py b/src/kimchi/vmtemplate.py index e047228..a20098d 100644 --- a/src/kimchi/vmtemplate.py +++ b/src/kimchi/vmtemplate.py @@ -270,17 +270,25 @@ class VMTemplate(object): return input_output
def _get_cpu_xml(self):
- + # NUMA configuration. Necessary for memory hotplug + cpus = self.info.get('cpus') - 1
Just curious about the count above. What is it for?

On 06/01/2015 09:24 AM, Aline Manera wrote:
On 28/05/2015 10:59, Rodrigo Trujillo wrote:
In order to support memory hotplug, guest must have maxMemory and NUMA configured in the xml. For maxMemory, this patch changes xml generation at guest creation time, adding maxMemory equals to host total memory and memory slots as the integer number of GiB that fit inside maxMemory (by design users will be allowed to add only memory chunks of 1GB). For NUMA, this patch adds the simplest configuration possible, creating only one node with all vcpus and memory set in the template.
Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com> --- src/kimchi/model/vms.py | 8 ++++++++ src/kimchi/vmtemplate.py | 29 +++++++++++++++++++---------- 2 files changed, 27 insertions(+), 10 deletions(-)
diff --git a/src/kimchi/model/vms.py b/src/kimchi/model/vms.py index f0182b9..dc7f91f 100644 --- a/src/kimchi/model/vms.py +++ b/src/kimchi/model/vms.py @@ -94,6 +94,14 @@ class VMsModel(object): if pool_uri: vm_overrides['storagepool'] = pool_uri vm_overrides['fc_host_support'] = self.caps.fc_host_support +
+ # Setting maxMemory and slots parameter values + # getInfo memory value comes in MiB, so dividing by 1024 integer, + # gives the interger maximum number of slots to use in chunks of + # 1 GB + vm_overrides['max_memory'] = self.conn.get().getInfo()[1] * 1024 + vm_overrides['slots'] = self.conn.get().getInfo()[1] / 1024 +
As those values are the same to every guest and will never change (at least by now) I suggest insert it to the 'defaults' in the osinfo.py
Yes, I can move it there. Just one doubt: is osinfo instantiated only once? I coded this in the VM create function to catch the case where the host has its memory increased on the fly.
t = TemplateModel.get_template(t_name, self.objstore, self.conn, vm_overrides)
diff --git a/src/kimchi/vmtemplate.py b/src/kimchi/vmtemplate.py index e047228..a20098d 100644 --- a/src/kimchi/vmtemplate.py +++ b/src/kimchi/vmtemplate.py @@ -270,17 +270,25 @@ class VMTemplate(object): return input_output
def _get_cpu_xml(self):
- + # NUMA configuration. Necessary for memory hotplug + cpus = self.info.get('cpus') - 1
Just curious about the count above. What is it for?
I am getting the total number of CPUs to set below in the NUMA node. We are going to have only one node, so all CPUs must be set in that node, such as "cpus=0-3" if the guest has 4 CPUs, or "cpus=0" if the guest has 1 CPU. Actually, I am setting the CPU IDs, starting from zero up to X-1.
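The rule described above can be written as a small helper (the name is mine, not part of the patch):

```python
def numa_cpus_attr(vcpus):
    """Build the <cell cpus='...'> value: CPU IDs run from 0 to
    vcpus - 1, collapsing to '0' for a single-vCPU guest."""
    return '0' if vcpus <= 1 else '0-%d' % (vcpus - 1)

print(numa_cpus_attr(1))  # '0'
print(numa_cpus_attr(4))  # '0-3'
```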
_______________________________________________ Kimchi-devel mailing list Kimchi-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/kimchi-devel

On 01/06/2015 16:33, Rodrigo Trujillo wrote:
On 06/01/2015 09:24 AM, Aline Manera wrote:
On 28/05/2015 10:59, Rodrigo Trujillo wrote:
In order to support memory hotplug, guest must have maxMemory and NUMA configured in the xml. For maxMemory, this patch changes xml generation at guest creation time, adding maxMemory equals to host total memory and memory slots as the integer number of GiB that fit inside maxMemory (by design users will be allowed to add only memory chunks of 1GB). For NUMA, this patch adds the simplest configuration possible, creating only one node with all vcpus and memory set in the template.
Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com> --- src/kimchi/model/vms.py | 8 ++++++++ src/kimchi/vmtemplate.py | 29 +++++++++++++++++++---------- 2 files changed, 27 insertions(+), 10 deletions(-)
diff --git a/src/kimchi/model/vms.py b/src/kimchi/model/vms.py index f0182b9..dc7f91f 100644 --- a/src/kimchi/model/vms.py +++ b/src/kimchi/model/vms.py @@ -94,6 +94,14 @@ class VMsModel(object): if pool_uri: vm_overrides['storagepool'] = pool_uri vm_overrides['fc_host_support'] = self.caps.fc_host_support +
+ # Setting maxMemory and slots parameter values + # getInfo memory value comes in MiB, so dividing by 1024 integer, + # gives the interger maximum number of slots to use in chunks of + # 1 GB + vm_overrides['max_memory'] = self.conn.get().getInfo()[1] * 1024 + vm_overrides['slots'] = self.conn.get().getInfo()[1] / 1024 +
As those values are the same to every guest and will never change (at least by now) I suggest insert it to the 'defaults' in the osinfo.py
Yes, I can move to there. Just a doubt, is osinfo instantiated once ? Because I coded in VM create function to catch the case where the host had its memory increased on-the-fly.
The 'defaults' in osinfo is a dict, built on server start-up. To make it "refreshable" you will need to change osinfo.lookup(), which is called on every Template creation.
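A minimal sketch of what such a refreshable lookup could look like; all names here (_STATIC_DEFAULTS, lookup_defaults, the host-memory stand-in) are invented for illustration and not from osinfo.py:

```python
# Static values stay in the module-level dict; host-dependent ones
# (max_memory, slots) are recomputed on every call, so an on-the-fly
# memory upgrade on the host is picked up by the next lookup.

_STATIC_DEFAULTS = {'memory': 1024, 'cpus': 1}

def _host_memory_mib():
    # Stand-in for self.conn.get().getInfo()[1]; fixed for this sketch.
    return 8192

def lookup_defaults():
    defaults = dict(_STATIC_DEFAULTS)
    host_mib = _host_memory_mib()
    defaults['max_memory'] = host_mib * 1024  # KiB
    defaults['slots'] = host_mib // 1024      # 1 GiB slots
    return defaults

print(lookup_defaults()['slots'])  # 8
```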
t = TemplateModel.get_template(t_name, self.objstore, self.conn, vm_overrides)
diff --git a/src/kimchi/vmtemplate.py b/src/kimchi/vmtemplate.py index e047228..a20098d 100644 --- a/src/kimchi/vmtemplate.py +++ b/src/kimchi/vmtemplate.py @@ -270,17 +270,25 @@ class VMTemplate(object): return input_output
def _get_cpu_xml(self):
- + # NUMA configuration. Necessary for memory hotplug + cpus = self.info.get('cpus') - 1
Just curious about the count above. What is it for?
I am getting the total number of CPUS to set below in the NUMA node. We are going to have only one node, so, all cpus must be set in that node, such as "cpus=0-3", if guest has 4 cpus ; or "cpus=0" if the guest has 1 cpu. Actually I am setting the "cpu ID", starting from zero to X-1.
Thanks for the explanation.

This patch creates a new collection and resource for memory devices. They
are subnodes of a given VM resource, which means the user uses the VM
URL/API to add or fetch the memory devices, such as:

  http://<HOST>/vms/<YOUR_VM>/memdevices (GET)        -> list memory devices
  http://<HOST>/vms/<YOUR_VM>/memdevices (POST)       -> add a memory device
  http://<HOST>/vms/<YOUR_VM>/memdevices/<SLOT> (GET) -> retrieve the memory
                                                         device in SLOT

This patch does not add support to DELETE or UPDATE a memory device.

Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
---
 src/kimchi/control/vm/memdevices.py | 47 +++++++++++++++++++
 src/kimchi/i18n.py                  |  2 +
 src/kimchi/model/vmmemdevices.py    | 94 +++++++++++++++++++++++++++++++++++++
 3 files changed, 143 insertions(+)
 create mode 100644 src/kimchi/control/vm/memdevices.py
 create mode 100644 src/kimchi/model/vmmemdevices.py

diff --git a/src/kimchi/control/vm/memdevices.py b/src/kimchi/control/vm/memdevices.py
new file mode 100644
index 0000000..2d96104
--- /dev/null
+++ b/src/kimchi/control/vm/memdevices.py
@@ -0,0 +1,47 @@
+#
+# Project Kimchi
+#
+# Copyright IBM, Corp. 2015
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+from kimchi.control.base import Collection, Resource
+from kimchi.control.utils import UrlSubNode
+
+
+@UrlSubNode("memdevices")
+class VMMemDevices(Collection):
+    def __init__(self, model, vm):
+        super(VMMemDevices, self).__init__(model)
+        self.admin_methods = ['GET', 'POST']
+        self.resource = VMMemDevice
+        self.vm = vm
+        self.resource_args = [self.vm, ]
+        self.model_args = [self.vm, ]
+
+
+class VMMemDevice(Resource):
+    def __init__(self, model, vm, ident):
+        super(VMMemDevice, self).__init__(model, ident)
+        self.admin_methods = ['GET']
+        self.vm = vm
+        self.ident = ident
+        self.info = {}
+        self.model_args = [self.vm, self.ident]
+        self.uri_fmt = '/vms/%s/memdevices/%s'
+
+    @property
+    def data(self):
+        return self.info
diff --git a/src/kimchi/i18n.py b/src/kimchi/i18n.py
index 9f169ab..7085530 100644
--- a/src/kimchi/i18n.py
+++ b/src/kimchi/i18n.py
@@ -340,4 +340,6 @@ messages = {
     "KCHCPUINF0002E": _("Invalid vCPU/topology combination."),
     "KCHCPUINF0003E": _("This host (or current configuration) does not allow CPU topology."),
 
+    "KCHMEMDEV0001E": _("Host's libvirt version does not support memory devices. Libvirt must be >= 1.2.14"),
+    "KCHMEMDEV0002E": _("Error attaching memory device. Details: %(error)s"),
 }
diff --git a/src/kimchi/model/vmmemdevices.py b/src/kimchi/model/vmmemdevices.py
new file mode 100644
index 0000000..c92678d
--- /dev/null
+++ b/src/kimchi/model/vmmemdevices.py
@@ -0,0 +1,94 @@
+#
+# Project Kimchi
+#
+# Copyright IBM, Corp. 2015
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+from libvirt import VIR_DOMAIN_MEM_CONFIG, VIR_DOMAIN_MEM_LIVE
+from lxml import etree, objectify
+from lxml.builder import E
+
+
+from kimchi.exception import InvalidOperation, OperationFailed
+from kimchi.model.config import CapabilitiesModel
+
+
+def _get_all_memory_devices(conn, vm_name):
+    # Return all memory devices from a given guest
+    dom = conn.get().lookupByName(vm_name)
+    devs = objectify.fromstring(dom.XMLDesc(0)).devices
+    return devs.findall('memory')
+
+
+class VMMemDevicesModel(object):
+    def __init__(self, **kargs):
+        self.conn = kargs['conn']
+        self.objstore = kargs['objstore']
+        self.caps = CapabilitiesModel(**kargs)
+
+    def create(self, vm_name, params):
+        # Check if host supports memory device
+        if not self.caps.mem_hotplug_support:
+            raise InvalidOperation("KCHMEMDEV0001E")
+
+        dom = self.conn.get().lookupByName(vm_name)
+        # Setting flags, depending if machine is running or not
+        flags = VIR_DOMAIN_MEM_CONFIG
+        if dom.isActive():
+            flags = flags | VIR_DOMAIN_MEM_LIVE
+
+        # Create memory device xml
+        mem_dev_xml = etree.tostring(
+            E.memory(
+                E.target(
+                    E.size('1', unit='GiB'),
+                    E.node('0')),
+                model='dimm'
+            )
+        )
+        # Add chunks of 1G of memory
+        for i in range(params['amount']):
+            try:
+                dom.attachDeviceFlags(mem_dev_xml, flags)
+            except Exception as e:
+                raise OperationFailed("KCHMEMDEV0002E", {'error': e.message})
+        return len(_get_all_memory_devices(self.conn, vm_name)) - 1
+
+    def get_list(self, vm_name):
+        # Memory devices do not have an ID or something similar in the XML, so
+        # lets return the positions of all memory devices in the XML.
+        return [str(i) for i in range(len(
+            _get_all_memory_devices(self.conn, vm_name)))]
+
+
+class VMMemDeviceModel(object):
+    def __init__(self, **kargs):
+        self.conn = kargs['conn']
+        self.objstore = kargs['objstore']
+        self.caps = CapabilitiesModel(**kargs)
+
+    def lookup(self, vm_name, dev_slot):
+        # We are considering the memory device position in the XML, which
+        # can also be considered the slot position
+        try:
+            memdev = _get_all_memory_devices(self.conn, vm_name)[int(dev_slot)]
+            return {
+                'slot': dev_slot,
+                'size': str(memdev.xpath('./target/size')[0] >> 20) + 'G',
+                'node': str(memdev.xpath('./target/node')[0])
+            }
+        except IndexError:
+            return {}
-- 
2.1.0
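The lookup() conversion in the patch can be illustrated with the standard library's xml.etree.ElementTree instead of lxml.objectify (a deliberate swap to keep the sketch dependency-free); the sample device XML, helper name, and KiB size value are mine, assuming libvirt reports <size> normalized to KiB in the live domain XML:

```python
import xml.etree.ElementTree as ET

SAMPLE_MEM_DEV = """
<memory model='dimm'>
  <target>
    <size unit='KiB'>1048576</size>
    <node>0</node>
  </target>
</memory>"""

def memdev_info(dev_xml, slot):
    """Derive the fields returned by VMMemDeviceModel.lookup() for one
    <memory> device element at a given slot (XML position)."""
    dev = ET.fromstring(dev_xml)
    size_kib = int(dev.findtext('./target/size'))
    return {
        'slot': str(slot),
        'size': str(size_kib >> 20) + 'G',  # KiB -> GiB (2**20 KiB per GiB)
        'node': dev.findtext('./target/node'),
    }

print(memdev_info(SAMPLE_MEM_DEV, 0))  # {'slot': '0', 'size': '1G', 'node': '0'}
```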

On 28/05/2015 10:59, Rodrigo Trujillo wrote:
This patch creates new colletion and resource of memory devices. They are subnodes of a given VM resource, that means the user is going to use the VM url/api to add or fetch the memory devices, such as: http://<HOST>/vms/<YOUR_VM>/memdevices (GET) -> to list memory devs http://<HOST>/vms/<YOUR_VM>/memdevices (POST) -> to add a memory dev http://<HOST>/vms/<YOUR_VM>/memdevices/<SLOT> (GET) -> to retrieve a given memory device in SLOT This patch does not give support do DELETE or UPDATE a memory device
As I said before, unless we change the UI to look the same as what we did for network interfaces (which I think will only add more complexity), I don't think this API will work well for us. The user should be able to specify only the amount of memory for the guest, and the backend should identify how many devices to add and do the operation. Otherwise, we will push backend logic into the UI, and if a request fails we will need to do a rollback on the UI as well.
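The suggestion above, sketched under the patch's 1 GiB chunk design (function name mine; the patch itself takes an explicit 'amount' parameter instead):

```python
import math

CHUNK_MIB = 1024  # each memory device is 1 GiB, per the patch's design

def devices_to_attach(current_mib, requested_mib):
    """Number of 1 GiB devices needed to grow current_mib up to
    requested_mib; zero if no growth is requested."""
    delta = requested_mib - current_mib
    if delta <= 0:
        return 0
    return int(math.ceil(delta / float(CHUNK_MIB)))

print(devices_to_attach(2048, 6144))  # 4
```

With a helper like this, the UI would send only the desired total memory and the backend would derive the device count, keeping rollback logic out of the UI.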
self.conn.get().lookupByName(vm_name) + # Setting flags, depending if machine is running or not + flags = VIR_DOMAIN_MEM_CONFIG + if dom.isActive(): + flags = flags | VIR_DOMAIN_MEM_LIVE + + # Create memory device xml + mem_dev_xml = etree.tostring( + E.memory( + E.target( + E.size('1', unit='GiB'), + E.node('0')), + model='dimm' + ) + ) + # Add chunks of 1G of memory + for i in range(params['amount']): + try: + dom.attachDeviceFlags(mem_dev_xml, flags) + except Exception as e: + raise OperationFailed("KCHMEMDEV0002E", {'error': e.message}) + return len(_get_all_memory_devices(self.conn, vm_name)) - 1 + + def get_list(self, vm_name): + # Memory devices do not have an ID or something similar in the XML, so + # lets return the positions of all memory devices in the XML. + return [str(i) for i in range(len( + _get_all_memory_devices(self.conn, vm_name)))] + + +class VMMemDeviceModel(object): + def __init__(self, **kargs): + self.conn = kargs['conn'] + self.objstore = kargs['objstore'] + self.caps = CapabilitiesModel(**kargs) + + def lookup(self, vm_name, dev_slot): + # We are considering the memory device position in the XML, which + # can also be considered the slot position + try: + memdev = _get_all_memory_devices(self.conn, vm_name)[int(dev_slot)] + return { + 'slot': dev_slot, + 'size': str(memdev.xpath('./target/size')[0] >> 20) + 'G', + 'node': str(memdev.xpath('./target/node')[0]) + } + except IndexError: + return {}
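The device XML the patch builds with lxml's E builder serializes to a single `<memory model='dimm'>` element with a 1 GiB target on NUMA node 0. A minimal stand-alone sketch of the same structure using only the standard library (function name is hypothetical, not part of the patch):

```python
import xml.etree.ElementTree as ET

def make_mem_dev_xml(size_gib=1, node=0):
    # Mirrors the lxml.builder.E call in vmmemdevices.py: one DIMM
    # memory device targeting a NUMA node, size given in GiB.
    memory = ET.Element('memory', model='dimm')
    target = ET.SubElement(memory, 'target')
    size = ET.SubElement(target, 'size', unit='GiB')
    size.text = str(size_gib)
    node_el = ET.SubElement(target, 'node')
    node_el.text = str(node)
    return ET.tostring(memory, encoding='unicode')

print(make_mem_dev_xml())
```

The same string is attached once per requested gigabyte via `attachDeviceFlags()`, with `VIR_DOMAIN_MEM_LIVE` added to the flags only when the domain is running.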

On 06/01/2015 09:30 AM, Aline Manera wrote:
On 28/05/2015 10:59, Rodrigo Trujillo wrote:
This patch creates a new collection and resource for memory devices. They are subnodes of a given VM resource, which means the user uses the VM URL/API to add or fetch the memory devices, such as: http://<HOST>/vms/<YOUR_VM>/memdevices (GET) -> to list memory devs http://<HOST>/vms/<YOUR_VM>/memdevices (POST) -> to add a memory dev http://<HOST>/vms/<YOUR_VM>/memdevices/<SLOT> (GET) -> to retrieve the memory device in a given SLOT This patch does not add support to DELETE or UPDATE a memory device
As I said before, unless we change the UI to look like what we did for network interfaces (which I think will only add more complexity), I don't think this API will work well for us.
The user should be able to just specify the amount of memory for the guest, and let the backend identify how many devices should be added and do the operation. Otherwise, we will insert backend logic into the UI, and if a request fails we will also need to do a rollback on the UI.
I explained what I thought in the first email, and indeed, I based my approach on how the interfaces were coded. But we can move the complexity to VM update, no problem. I just want to make sure that we can keep this API and make use of it
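The backend-side computation discussed above — the user supplies only a total amount of memory and the backend derives how many 1 GiB DIMMs to attach — could be sketched like this (hypothetical helper, not part of the patch; assumes both figures are in MiB):

```python
def devices_to_attach(current_mem_mib, requested_mem_mib, chunk_mib=1024):
    # How many 1 GiB memory devices must be hotplugged to grow the
    # guest from current_mem_mib to requested_mem_mib. Shrinking or
    # no-op requests yield zero devices.
    delta = requested_mem_mib - current_mem_mib
    if delta <= 0:
        return 0
    if delta % chunk_mib:
        raise ValueError("requested memory must grow in 1 GiB increments")
    return delta // chunk_mib

print(devices_to_attach(2048, 5120))  # grow by 3 GiB -> 3 devices
```

With a helper like this the UI only sends the new total, and any rollback on failure stays in the backend, as the review suggests.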
_______________________________________________ Kimchi-devel mailing list Kimchi-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/kimchi-devel

On 01/06/2015 16:36, Rodrigo Trujillo wrote:
On 06/01/2015 09:30 AM, Aline Manera wrote:
On 28/05/2015 10:59, Rodrigo Trujillo wrote:
This patch creates a new collection and resource for memory devices. They are subnodes of a given VM resource, which means the user uses the VM URL/API to add or fetch the memory devices, such as: http://<HOST>/vms/<YOUR_VM>/memdevices (GET) -> to list memory devs http://<HOST>/vms/<YOUR_VM>/memdevices (POST) -> to add a memory dev http://<HOST>/vms/<YOUR_VM>/memdevices/<SLOT> (GET) -> to retrieve the memory device in a given SLOT This patch does not add support to DELETE or UPDATE a memory device
As I said before, unless we change the UI to look like what we did for network interfaces (which I think will only add more complexity), I don't think this API will work well for us.
The user should be able to just specify the amount of memory for the guest, and let the backend identify how many devices should be added and do the operation. Otherwise, we will insert backend logic into the UI, and if a request fails we will also need to do a rollback on the UI.
I explained what I thought in the first email, and indeed, I based my approach on how the interfaces were coded. But we can move the complexity to VM update, no problem.
I just want to make sure that we can keep this API and make use of it
Do you mean having 2 APIs to do the same thing? I don't like that, for maintenance reasons. I'd say move the code you did for create/delete into functions inside VM.model() and call them when needed.

This patch adds the necessary restrictions to the new API, avoiding and warning the user about basic mistakes. The restriction is basically: the "amount" parameter must be passed and must be an integer. Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com> --- docs/API.md | 16 ++++++++++++++++ src/kimchi/API.json | 14 +++++++++++++- src/kimchi/i18n.py | 2 ++ 3 files changed, 31 insertions(+), 1 deletion(-) diff --git a/docs/API.md b/docs/API.md index 71f2539..4f05dc0 100644 --- a/docs/API.md +++ b/docs/API.md @@ -223,6 +223,22 @@ Represents a snapshot of the Virtual Machine's primary monitor. **URI:** /vms/*:name*/snapshots/current * **GET**: Retrieve current snapshot information for the virtual machine. +### Sub-collection: Virtual Machine Memory Device +**URI:** /vms/*:name*/memdevices +* **GET**: Retrieve a list of all devices attached to a given virtual machine. +* **POST**: Attach new 1GB memory devices to the virtual machine. Attach as many + as the amount given, and only if there are slots available. + * amount: An integer representing the total number of GB to be attached to the VM. +### Sub-resource: memdevice +**URI:** /vms/*:name*/memdevices/*:memdevice* +* **GET**: Retrieve the memory device information from the VM xml. + ":memdevice" is the slot or position in the xml, starting from 0. + * slot: The slot or position in the VM xml. + * node: The NUMA node the device is associated with. '0' by default. + * size: The total memory of the device. 
+ + ### Collection: Templates **URI:** /templates diff --git a/src/kimchi/API.json b/src/kimchi/API.json index a6330ae..569f8a8 100644 --- a/src/kimchi/API.json +++ b/src/kimchi/API.json @@ -831,6 +831,18 @@ } }, "error": "KCHAPI0001E" - } + }, + "vmmemdevices_create": { + "type": "object", + "error": "KCHMEMDEV0003E", + "properties": { + "amount": { + "description": "Amount of memory in GB to be attached to VM.", + "type": "integer", + "required": true, + "error": "KCHMEMDEV0004E" + } + } + } } } diff --git a/src/kimchi/i18n.py b/src/kimchi/i18n.py index 7085530..e6e00b8 100644 --- a/src/kimchi/i18n.py +++ b/src/kimchi/i18n.py @@ -342,4 +342,6 @@ messages = { "KCHMEMDEV0001E": _("Host's libvirt version does not support memory devices. Libvirt must be >= 1.2.14"), "KCHMEMDEV0002E": _("Error attaching memory device. Details: %(error)s"), + "KCHMEMDEV0003E": _("Parameter 'amount' is required."), + "KCHMEMDEV0004E": _("Amount of memory (GB) to be attached is required and must be an integer."), } -- 2.1.0
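The API.json schema above enforces a single required integer field. A minimal stand-alone sketch of the equivalent check (hypothetical helper name; the real validation is done by Kimchi's JSON-schema machinery, which maps failures to the KCHMEMDEV0003E/0004E messages):

```python
def validate_memdevices_create(params):
    # 'amount' is required -> KCHMEMDEV0003E
    if 'amount' not in params:
        raise ValueError("KCHMEMDEV0003E: Parameter 'amount' is required.")
    amount = params['amount']
    # 'amount' must be an integer (bool is excluded: in Python,
    # isinstance(True, int) is True) -> KCHMEMDEV0004E
    if not isinstance(amount, int) or isinstance(amount, bool):
        raise ValueError("KCHMEMDEV0004E: 'amount' must be an integer.")
    return amount
```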

This patch changes the current memory update process, the static update, allowed when the guest is offline. Now, the update creates the new xml elements that allow memory hotplug, if necessary (MAXMEMORY, CPU, NUMA, NODE, etc.). It also introduces some checks to avoid errors if the user sets more memory than allowed on the server. The memory device slots count and assignment were changed to avoid errors, like negative or zero slots. The slots count is updated as the static memory is changed. Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com> --- src/kimchi/i18n.py | 1 + src/kimchi/model/vms.py | 87 ++++++++++++++++++++++++++++++++++++++---------- src/kimchi/vmtemplate.py | 11 ++++++ 3 files changed, 82 insertions(+), 17 deletions(-) diff --git a/src/kimchi/i18n.py b/src/kimchi/i18n.py index e6e00b8..1779597 100644 --- a/src/kimchi/i18n.py +++ b/src/kimchi/i18n.py @@ -111,6 +111,7 @@ messages = { "KCHVM0038E": _("Unable to suspend VM '%(name)s'. Details: %(err)s"), "KCHVM0039E": _("Cannot resume VM '%(name)s' because it is not paused."), "KCHVM0040E": _("Unable to resume VM '%(name)s'. 
Details: %(err)s"), + "KCHVM0041E": _("Memory assigned is higher then the maximum allowed in the host."), "KCHVMHDEV0001E": _("VM %(vmid)s does not contain directly assigned host device %(dev_name)s."), "KCHVMHDEV0002E": _("The host device %(dev_name)s is not allowed to directly assign to VM."), diff --git a/src/kimchi/model/vms.py b/src/kimchi/model/vms.py index dc7f91f..5834a65 100644 --- a/src/kimchi/model/vms.py +++ b/src/kimchi/model/vms.py @@ -59,8 +59,7 @@ DOM_STATE_MAP = {0: 'nostate', 7: 'pmsuspended'} VM_STATIC_UPDATE_PARAMS = {'name': './name', - 'cpus': './vcpu', - 'memory': './memory'} + 'cpus': './vcpu'} VM_LIVE_UPDATE_PARAMS = {} XPATH_DOMAIN_DISK = "/domain/devices/disk[@device='disk']/source/@file" @@ -73,6 +72,8 @@ XPATH_DOMAIN_MEMORY = '/domain/memory' XPATH_DOMAIN_MEMORY_UNIT = '/domain/memory/@unit' XPATH_DOMAIN_UUID = '/domain/uuid' +XPATH_NUMA_CELL = './cpu/numa/cell' + class VMsModel(object): def __init__(self, **kargs): @@ -95,12 +96,9 @@ class VMsModel(object): vm_overrides['storagepool'] = pool_uri vm_overrides['fc_host_support'] = self.caps.fc_host_support - # Setting maxMemory and slots parameter values - # getInfo memory value comes in MiB, so dividing by 1024 integer, - # gives the interger maximum number of slots to use in chunks of - # 1 GB + # Setting maxMemory of the VM, which will be equal the Host memory. + # Host memory comes in MiB, so transform in KiB vm_overrides['max_memory'] = self.conn.get().getInfo()[1] * 1024 - vm_overrides['slots'] = self.conn.get().getInfo()[1] / 1024 t = TemplateModel.get_template(t_name, self.objstore, self.conn, vm_overrides) @@ -659,15 +657,15 @@ class VMModel(object): for key, val in params.items(): if key in VM_STATIC_UPDATE_PARAMS: - if key == 'memory': - # Libvirt saves memory in KiB. 
Retrieved xml has memory - # in KiB too, so new valeu must be in KiB here - val = val * 1024 if type(val) == int: val = str(val) xpath = VM_STATIC_UPDATE_PARAMS[key] new_xml = xml_item_update(new_xml, xpath, val) + # Updating memory and adds NUMA if necessary + if 'memory' in params: + new_xml = self._update_memory(new_xml, params) + if 'graphics' in params: new_xml = self._update_graphics(dom, new_xml, params) @@ -695,12 +693,7 @@ class VMModel(object): # Undefine old vm, only if name is going to change dom.undefine() - root = ET.fromstring(new_xml) - currentMem = root.find('.currentMemory') - if currentMem is not None: - root.remove(currentMem) - - dom = conn.defineXML(ET.tostring(root, encoding="utf-8")) + dom = conn.defineXML(new_xml) if 'name' in params: self._redefine_snapshots(dom, snapshots_info) except libvirt.libvirtError as e: @@ -712,6 +705,66 @@ class VMModel(object): 'err': e.get_error_message()}) return dom + def _update_memory(self, xml, params): + # Checks if NUMA memory is already configured, if not, checks if CPU + # element is already configured (topology). 
Then add NUMA element as + # apropriated + root = ET.fromstring(xml) + numa_mem = xpath_get_text(xml, XPATH_NUMA_CELL + '/@memory') + if numa_mem == []: + vcpus = xpath_get_text(xml, VM_STATIC_UPDATE_PARAMS['cpus']) + numa_element = E.numa(E.cell( + id='0', + cpus='0-' + str(vcpus - 1) if vcpus > 1 else '0', + memory=str(params['memory'] << 10), + unit='KiB')) + cpu = root.find('./cpu') + if cpu is None: + root.insert(0, E.cpu(numa_element)) + else: + cpu.insert(0, numa_element) + else: + root = ET.fromstring(xml_item_update(xml, XPATH_NUMA_CELL, + str(params['memory'] << 10), + attr='memory')) + + # Remove currentMemory, automatically set later by libvirt + currentMem = root.find('.currentMemory') + if currentMem is not None: + root.remove(currentMem) + + memory = root.find('.memory') + # Update/Adds maxMemory accordingly + if not self.caps.mem_hotplug_support: + if memory is not None: + memory.text = str(params['memory'] << 10) + else: + if memory is not None: + root.remove(memory) + maxMem = root.find('.maxMemory') + host_mem = self.conn.get().getInfo()[1] + slots = (host_mem - params['memory']) / 1024 + # Libvirt does not accepts slots <= 1 + if slots < 0: + raise OperationFailed("KCHVM0041E") + elif slots == 0: + slots = 1 + if maxMem is None: + max_mem_xml = E.maxMemory( + str(host_mem * 1024), + unit='Kib', + slots=str(slots)) + root.insert(0, max_mem_xml) + new_xml = ET.tostring(root, encoding="utf-8") + else: + # Update slots only + new_xml = xml_item_update(ET.tostring(root, encoding="utf-8"), + './maxMemory', + str(slots), + attr='slots') + return new_xml + return ET.tostring(root, encoding="utf-8") + def _live_vm_update(self, dom, params): self._vm_update_access_metadata(dom, params) diff --git a/src/kimchi/vmtemplate.py b/src/kimchi/vmtemplate.py index a20098d..00de7c2 100644 --- a/src/kimchi/vmtemplate.py +++ b/src/kimchi/vmtemplate.py @@ -316,6 +316,17 @@ class VMTemplate(object): else: params['cdroms'] = cdrom_xml + # Setting maximum number of slots 
to avoid errors when hotplug memory + # Number of slots are the numbers of chunks of 1GB that fit inside + # the max_memory of the host minus memory assigned to the VM + params['slots'] = ((params['max_memory'] / 1024) - + params['memory']) / 1024 + + if params['slots'] < 0: + raise OperationFailed("KCHVM0041E") + elif params['slots'] == 0: + params['slots'] = 1 + xml = """ <domain type='%(domain)s'> %(qemu-stream-cmdline)s -- 2.1.0
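The slot arithmetic added in vms.py and vmtemplate.py boils down to one calculation: the number of free 1 GiB chunks between the VM's assigned memory and the host total, rejected when negative and clamped to at least 1 because libvirt does not accept a slots count below 1. A sketch (hypothetical helper name; assumes both figures are in MiB, as returned by `conn.getInfo()[1]`):

```python
def compute_slots(host_mem_mib, vm_mem_mib):
    # Each slot holds one 1 GiB DIMM; slots = free host memory in GiB.
    slots = (host_mem_mib - vm_mem_mib) // 1024
    if slots < 0:
        # Mirrors KCHVM0041E: VM memory exceeds the host maximum.
        raise ValueError("KCHVM0041E: memory higher than the host maximum")
    # libvirt rejects slots < 1, so clamp the zero case.
    return max(slots, 1)

print(compute_slots(16384, 2048))  # 14 GiB free -> 14 slots
```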

On 28/05/2015 10:59, Rodrigo Trujillo wrote:
This patch changes the current memory update process, the static update, allowed when the guest is offline. Now, the update creates the new xml elements that allow memory hotplug, if necessary (MAXMEMORY, CPU, NUMA, NODE, etc.). It also introduces some checks to avoid errors if the user sets more memory than allowed on the server. The memory device slots count and assignment were changed to avoid errors, like negative or zero slots. The slots count is updated as the static memory is changed.
Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com> --- src/kimchi/i18n.py | 1 + src/kimchi/model/vms.py | 87 ++++++++++++++++++++++++++++++++++++++---------- src/kimchi/vmtemplate.py | 11 ++++++ 3 files changed, 82 insertions(+), 17 deletions(-)
diff --git a/src/kimchi/i18n.py b/src/kimchi/i18n.py index e6e00b8..1779597 100644 --- a/src/kimchi/i18n.py +++ b/src/kimchi/i18n.py @@ -111,6 +111,7 @@ messages = { "KCHVM0038E": _("Unable to suspend VM '%(name)s'. Details: %(err)s"), "KCHVM0039E": _("Cannot resume VM '%(name)s' because it is not paused."), "KCHVM0040E": _("Unable to resume VM '%(name)s'. Details: %(err)s"), + "KCHVM0041E": _("Memory assigned is higher than the maximum allowed in the host."),
"KCHVMHDEV0001E": _("VM %(vmid)s does not contain directly assigned host device %(dev_name)s."), "KCHVMHDEV0002E": _("The host device %(dev_name)s is not allowed to directly assign to VM."), diff --git a/src/kimchi/model/vms.py b/src/kimchi/model/vms.py index dc7f91f..5834a65 100644 --- a/src/kimchi/model/vms.py +++ b/src/kimchi/model/vms.py @@ -59,8 +59,7 @@ DOM_STATE_MAP = {0: 'nostate', 7: 'pmsuspended'}
VM_STATIC_UPDATE_PARAMS = {'name': './name', - 'cpus': './vcpu', - 'memory': './memory'} + 'cpus': './vcpu'} VM_LIVE_UPDATE_PARAMS = {}
XPATH_DOMAIN_DISK = "/domain/devices/disk[@device='disk']/source/@file" @@ -73,6 +72,8 @@ XPATH_DOMAIN_MEMORY = '/domain/memory' XPATH_DOMAIN_MEMORY_UNIT = '/domain/memory/@unit' XPATH_DOMAIN_UUID = '/domain/uuid'
+XPATH_NUMA_CELL = './cpu/numa/cell' +
class VMsModel(object): def __init__(self, **kargs): @@ -95,12 +96,9 @@ class VMsModel(object): vm_overrides['storagepool'] = pool_uri vm_overrides['fc_host_support'] = self.caps.fc_host_support
- # Setting maxMemory and slots parameter values - # getInfo memory value comes in MiB, so dividing by 1024 integer, - # gives the interger maximum number of slots to use in chunks of - # 1 GB + # Setting maxMemory of the VM, which will be equal the Host memory. + # Host memory comes in MiB, so transform in KiB vm_overrides['max_memory'] = self.conn.get().getInfo()[1] * 1024 - vm_overrides['slots'] = self.conn.get().getInfo()[1] / 1024
t = TemplateModel.get_template(t_name, self.objstore, self.conn, vm_overrides) @@ -659,15 +657,15 @@ class VMModel(object):
for key, val in params.items(): if key in VM_STATIC_UPDATE_PARAMS: - if key == 'memory': - # Libvirt saves memory in KiB. Retrieved xml has memory - # in KiB too, so new valeu must be in KiB here - val = val * 1024 if type(val) == int: val = str(val) xpath = VM_STATIC_UPDATE_PARAMS[key] new_xml = xml_item_update(new_xml, xpath, val)
+ # Updating memory and adds NUMA if necessary + if 'memory' in params: + new_xml = self._update_memory(new_xml, params) + if 'graphics' in params: new_xml = self._update_graphics(dom, new_xml, params)
@@ -695,12 +693,7 @@ class VMModel(object): # Undefine old vm, only if name is going to change dom.undefine()
- root = ET.fromstring(new_xml) - currentMem = root.find('.currentMemory') - if currentMem is not None: - root.remove(currentMem) - - dom = conn.defineXML(ET.tostring(root, encoding="utf-8")) + dom = conn.defineXML(new_xml) if 'name' in params: self._redefine_snapshots(dom, snapshots_info) except libvirt.libvirtError as e: @@ -712,6 +705,66 @@ class VMModel(object): 'err': e.get_error_message()}) return dom
+ def _update_memory(self, xml, params): + # Checks if NUMA memory is already configured, if not, checks if CPU + # element is already configured (topology). Then add NUMA element as + appropriate + root = ET.fromstring(xml) + numa_mem = xpath_get_text(xml, XPATH_NUMA_CELL + '/@memory') + if numa_mem == []: + vcpus = xpath_get_text(xml, VM_STATIC_UPDATE_PARAMS['cpus'])
+ numa_element = E.numa(E.cell( + id='0', + cpus='0-' + str(vcpus - 1) if vcpus > 1 else '0', + memory=str(params['memory'] << 10), + unit='KiB'))
The above code is the same as in the vmtemplate.py file. We have the xmlutils module to handle all the XML manipulation. I suggest adding it there and reusing it everywhere.
+ cpu = root.find('./cpu') + if cpu is None: + root.insert(0, E.cpu(numa_element)) + else: + cpu.insert(0, numa_element) + else: + root = ET.fromstring(xml_item_update(xml, XPATH_NUMA_CELL, + str(params['memory'] << 10), + attr='memory')) + + # Remove currentMemory, automatically set later by libvirt + currentMem = root.find('.currentMemory') + if currentMem is not None: + root.remove(currentMem) + + memory = root.find('.memory') + # Update/Adds maxMemory accordingly + if not self.caps.mem_hotplug_support: + if memory is not None: + memory.text = str(params['memory'] << 10) + else: + if memory is not None: + root.remove(memory) + maxMem = root.find('.maxMemory') + host_mem = self.conn.get().getInfo()[1] + slots = (host_mem - params['memory']) / 1024 + # Libvirt does not accepts slots <= 1 + if slots < 0: + raise OperationFailed("KCHVM0041E") + elif slots == 0: + slots = 1 + if maxMem is None: + max_mem_xml = E.maxMemory( + str(host_mem * 1024), + unit='Kib', + slots=str(slots)) + root.insert(0, max_mem_xml) + new_xml = ET.tostring(root, encoding="utf-8") + else: + # Update slots only + new_xml = xml_item_update(ET.tostring(root, encoding="utf-8"), + './maxMemory', + str(slots), + attr='slots') + return new_xml + return ET.tostring(root, encoding="utf-8") + def _live_vm_update(self, dom, params): self._vm_update_access_metadata(dom, params)
diff --git a/src/kimchi/vmtemplate.py b/src/kimchi/vmtemplate.py index a20098d..00de7c2 100644 --- a/src/kimchi/vmtemplate.py +++ b/src/kimchi/vmtemplate.py @@ -316,6 +316,17 @@ class VMTemplate(object): else: params['cdroms'] = cdrom_xml
+ # Setting maximum number of slots to avoid errors when hotplug memory + # Number of slots are the numbers of chunks of 1GB that fit inside + # the max_memory of the host minus memory assigned to the VM + params['slots'] = ((params['max_memory'] / 1024) - + params['memory']) / 1024 + + if params['slots'] < 0: + raise OperationFailed("KCHVM0041E") + elif params['slots'] == 0: + params['slots'] = 1 + xml = """ <domain type='%(domain)s'> %(qemu-stream-cmdline)s

On 06/01/2015 09:36 AM, Aline Manera wrote:
On 28/05/2015 10:59, Rodrigo Trujillo wrote:
This patch changes the current memory update process, i.e. the static update allowed when the guest is offline. The update now creates the new XML elements required to hotplug memory when necessary (maxMemory, cpu, numa, cell, etc.). It also introduces some tests to avoid errors when the user sets more memory than is allowed on the server. The memory device slot counting and assignment were changed to avoid errors such as negative or zero slots; the slot count is now updated as the static memory changes.
Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
---
 src/kimchi/i18n.py       |  1 +
 src/kimchi/model/vms.py  | 87 ++++++++++++++++++++++++++++++++++++++----------
 src/kimchi/vmtemplate.py | 11 ++++++
 3 files changed, 82 insertions(+), 17 deletions(-)
diff --git a/src/kimchi/i18n.py b/src/kimchi/i18n.py
index e6e00b8..1779597 100644
--- a/src/kimchi/i18n.py
+++ b/src/kimchi/i18n.py
@@ -111,6 +111,7 @@ messages = {
     "KCHVM0038E": _("Unable to suspend VM '%(name)s'. Details: %(err)s"),
     "KCHVM0039E": _("Cannot resume VM '%(name)s' because it is not paused."),
     "KCHVM0040E": _("Unable to resume VM '%(name)s'. Details: %(err)s"),
+    "KCHVM0041E": _("Memory assigned is higher than the maximum allowed in the host."),
     "KCHVMHDEV0001E": _("VM %(vmid)s does not contain directly assigned host device %(dev_name)s."),
     "KCHVMHDEV0002E": _("The host device %(dev_name)s is not allowed to directly assign to VM."),

diff --git a/src/kimchi/model/vms.py b/src/kimchi/model/vms.py
index dc7f91f..5834a65 100644
--- a/src/kimchi/model/vms.py
+++ b/src/kimchi/model/vms.py
@@ -59,8 +59,7 @@
 DOM_STATE_MAP = {0: 'nostate',
                  7: 'pmsuspended'}
 VM_STATIC_UPDATE_PARAMS = {'name': './name',
-                           'cpus': './vcpu',
-                           'memory': './memory'}
+                           'cpus': './vcpu'}

 VM_LIVE_UPDATE_PARAMS = {}
 XPATH_DOMAIN_DISK = "/domain/devices/disk[@device='disk']/source/@file"
@@ -73,6 +72,8 @@
 XPATH_DOMAIN_MEMORY = '/domain/memory'
 XPATH_DOMAIN_MEMORY_UNIT = '/domain/memory/@unit'
 XPATH_DOMAIN_UUID = '/domain/uuid'
+XPATH_NUMA_CELL = './cpu/numa/cell'
+
 class VMsModel(object):
     def __init__(self, **kargs):
@@ -95,12 +96,9 @@ class VMsModel(object):
         vm_overrides['storagepool'] = pool_uri
         vm_overrides['fc_host_support'] = self.caps.fc_host_support
-        # Setting maxMemory and slots parameter values
-        # getInfo memory value comes in MiB, so dividing by 1024 integer,
-        # gives the interger maximum number of slots to use in chunks of
-        # 1 GB
+        # Set the maxMemory of the VM, which will be equal to the host
+        # memory. Host memory comes in MiB, so convert it to KiB
         vm_overrides['max_memory'] = self.conn.get().getInfo()[1] * 1024
-        vm_overrides['slots'] = self.conn.get().getInfo()[1] / 1024
         t = TemplateModel.get_template(t_name, self.objstore, self.conn,
                                        vm_overrides)
@@ -659,15 +657,15 @@ class VMModel(object):
         for key, val in params.items():
             if key in VM_STATIC_UPDATE_PARAMS:
-                if key == 'memory':
-                    # Libvirt saves memory in KiB. Retrieved xml has memory
-                    # in KiB too, so new valeu must be in KiB here
-                    val = val * 1024
                 if type(val) == int:
                     val = str(val)
                 xpath = VM_STATIC_UPDATE_PARAMS[key]
                 new_xml = xml_item_update(new_xml, xpath, val)
+        # Update memory and add NUMA if necessary
+        if 'memory' in params:
+            new_xml = self._update_memory(new_xml, params)
+
         if 'graphics' in params:
             new_xml = self._update_graphics(dom, new_xml, params)
@@ -695,12 +693,7 @@ class VMModel(object):
             # Undefine old vm, only if name is going to change
             dom.undefine()

-            root = ET.fromstring(new_xml)
-            currentMem = root.find('.currentMemory')
-            if currentMem is not None:
-                root.remove(currentMem)
-
-            dom = conn.defineXML(ET.tostring(root, encoding="utf-8"))
+            dom = conn.defineXML(new_xml)
             if 'name' in params:
                 self._redefine_snapshots(dom, snapshots_info)
         except libvirt.libvirtError as e:
@@ -712,6 +705,66 @@ class VMModel(object):
                                   'err': e.get_error_message()})
         return dom
+    def _update_memory(self, xml, params):
+        # Check whether NUMA memory is already configured; if not, check
+        # whether the CPU element (topology) is already configured, then
+        # add the NUMA element as appropriate
+        root = ET.fromstring(xml)
+        numa_mem = xpath_get_text(xml, XPATH_NUMA_CELL + '/@memory')
+        if numa_mem == []:
+            vcpus = int(xpath_get_text(xml,
+                                       VM_STATIC_UPDATE_PARAMS['cpus'])[0])
+            numa_element = E.numa(E.cell(
+                id='0',
+                cpus='0-' + str(vcpus - 1) if vcpus > 1 else '0',
+                memory=str(params['memory'] << 10),
+                unit='KiB'))
The above code is the same as in the vmtemplate.py file. We have the xmlutils module to handle all the XML manipulation; I suggest adding this code there and reusing it everywhere.
Sure, thanks
+            cpu = root.find('./cpu')
+            if cpu is None:
+                root.insert(0, E.cpu(numa_element))
+            else:
+                cpu.insert(0, numa_element)
+        else:
+            root = ET.fromstring(xml_item_update(xml, XPATH_NUMA_CELL,
+                                                 str(params['memory'] << 10),
+                                                 attr='memory'))
+
+        # Remove currentMemory; it is automatically set later by libvirt
+        currentMem = root.find('.currentMemory')
+        if currentMem is not None:
+            root.remove(currentMem)
+
+        memory = root.find('.memory')
+        # Update/add maxMemory accordingly
+        if not self.caps.mem_hotplug_support:
+            if memory is not None:
+                memory.text = str(params['memory'] << 10)
+        else:
+            if memory is not None:
+                root.remove(memory)
+            maxMem = root.find('.maxMemory')
+            host_mem = self.conn.get().getInfo()[1]
+            slots = (host_mem - params['memory']) / 1024
+            # Libvirt does not accept slots <= 0
+            if slots < 0:
+                raise OperationFailed("KCHVM0041E")
+            elif slots == 0:
+                slots = 1
+            if maxMem is None:
+                max_mem_xml = E.maxMemory(
+                    str(host_mem * 1024),
+                    unit='KiB',
+                    slots=str(slots))
+                root.insert(0, max_mem_xml)
+                new_xml = ET.tostring(root, encoding="utf-8")
+            else:
+                # Update slots only
+                new_xml = xml_item_update(ET.tostring(root, encoding="utf-8"),
+                                          './maxMemory',
+                                          str(slots),
+                                          attr='slots')
+            return new_xml
+        return ET.tostring(root, encoding="utf-8")
+
     def _live_vm_update(self, dom, params):
         self._vm_update_access_metadata(dom, params)
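As an aside, the single NUMA cell built by the hunk above can be sketched with the standard library's ElementTree. This is a simplified, hypothetical stand-in for the lxml `E` builder the patch uses; `numa_cell_xml` is not a Kimchi function:

```python
import xml.etree.ElementTree as ET

def numa_cell_xml(memory_mib, vcpus):
    """Build the <cpu><numa><cell/></numa></cpu> fragment the patch adds.

    Hypothetical helper for illustration only. Memory is stored in KiB,
    hence the << 10 shift from MiB.
    """
    cell = ET.Element('cell', {
        'id': '0',
        'cpus': '0-%d' % (vcpus - 1) if vcpus > 1 else '0',
        'memory': str(memory_mib << 10),  # MiB -> KiB
        'unit': 'KiB',
    })
    cpu = ET.Element('cpu')
    numa = ET.SubElement(cpu, 'numa')
    numa.append(cell)
    return ET.tostring(cpu).decode()

# A 1 GiB guest with 2 vCPUs puts all CPUs into a single NUMA cell
print(numa_cell_xml(1024, 2))
```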
diff --git a/src/kimchi/vmtemplate.py b/src/kimchi/vmtemplate.py
index a20098d..00de7c2 100644
--- a/src/kimchi/vmtemplate.py
+++ b/src/kimchi/vmtemplate.py
@@ -316,6 +316,17 @@ class VMTemplate(object):
         else:
             params['cdroms'] = cdrom_xml

+        # Set the maximum number of slots to avoid errors when hotplugging
+        # memory. The number of slots is the number of 1 GB chunks that fit
+        # in the host max_memory minus the memory already assigned to the VM
+        params['slots'] = ((params['max_memory'] / 1024) -
+                           params['memory']) / 1024
+
+        if params['slots'] < 0:
+            raise OperationFailed("KCHVM0041E")
+        elif params['slots'] == 0:
+            params['slots'] = 1
+
         xml = """
         <domain type='%(domain)s'>
           %(qemu-stream-cmdline)s
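The slot arithmetic in the hunk above can be illustrated with a small standalone sketch (Python 3 floor division; `compute_slots` is a hypothetical helper, not part of the patch):

```python
def compute_slots(max_memory_kib, memory_mib):
    """Mirror the slot arithmetic above (hypothetical standalone helper).

    Slots are the 1 GiB chunks left between the host's max memory and the
    memory already assigned to the VM; libvirt rejects slots <= 0, so the
    result is clamped to at least 1.
    """
    slots = (max_memory_kib // 1024 - memory_mib) // 1024
    if slots < 0:
        raise ValueError("memory assigned is higher than the host maximum")
    return max(slots, 1)

# 3 GiB host max and a 1 GiB guest -> 2 slots (the test_vmtemplate case)
print(compute_slots(3072 << 10, 1024))  # -> 2
```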
_______________________________________________ Kimchi-devel mailing list Kimchi-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/kimchi-devel

This patch fixes the issues caused by the previous changes in the guest XML, such as the use of the NUMA and maxMemory elements. It includes a slot number checking test as well. This patch also changes MockModel to raise libvirtError in order to set mem_hotplug_support correctly.

Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
---
 src/kimchi/mockmodel.py  | 7 +++++++
 tests/test_model.py      | 2 +-
 tests/test_rest.py       | 4 ++--
 tests/test_vmtemplate.py | 5 ++++-
 4 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/src/kimchi/mockmodel.py b/src/kimchi/mockmodel.py
index c81cabb..ab9f517 100644
--- a/src/kimchi/mockmodel.py
+++ b/src/kimchi/mockmodel.py
@@ -182,6 +182,13 @@ class MockModel(Model):
         old_xml = dom.XMLDesc(libvirt.VIR_DOMAIN_XML_SECURE)
         root = objectify.fromstring(old_xml)
         dev = objectify.fromstring(xml)
+        # Libvirt < 1.2.14 does not support memory devices, so force raise the
+        # error, in order to get Capabilities.mem_hotplug_support correctly in
+        # test mode
+        version = 1000000*1 + 1000*2 + 14
+        if dev.tag == 'memory' and libvirt.getVersion() < version:
+            raise libvirt.libvirtError('virDomainAttachDeviceFlags() failed',
+                                       dom=dom)
         root.devices.append(dev)
         MockModel._mock_vms[dom.name()] = ET.tostring(root, encoding="utf-8")

diff --git a/tests/test_model.py b/tests/test_model.py
index 88c020e..7b6153a 100644
--- a/tests/test_model.py
+++ b/tests/test_model.py
@@ -600,7 +600,7 @@ class ModelTests(unittest.TestCase):
         inst = model.Model(None, objstore_loc=self.tmp_store)

-        orig_params = {'name': 'test', 'memory': '1024', 'cpus': '1',
+        orig_params = {'name': 'test', 'memory': 1024, 'cpus': 1,
                        'cdrom': UBUNTU_ISO}
         inst.templates_create(orig_params)

diff --git a/tests/test_rest.py b/tests/test_rest.py
index 914b602..64b3414 100644
--- a/tests/test_rest.py
+++ b/tests/test_rest.py
@@ -201,7 +201,7 @@ class RestTests(unittest.TestCase):
         resp = self.request('/vms/vm-1', req, 'PUT')
         self.assertEquals(400, resp.status)

-        params = {'name': u'∨м-црdαtеd', 'cpus': 5, 'memory': 4096}
+        params = {'name': u'∨м-црdαtеd', 'cpus': 5, 'memory': 3072}
         req = json.dumps(params)
         resp = self.request('/vms/vm-1', req, 'PUT')
         self.assertEquals(303, resp.status)
@@ -1059,7 +1059,7 @@ class RestTests(unittest.TestCase):
         keys = [u'libvirt_stream_protocols', u'qemu_stream', u'qemu_spice',
                 u'screenshot', u'system_report_tool', u'update_tool',
                 u'repo_mngt_tool', u'federation', u'kernel_vfio', u'auth',
-                u'nm_running']
+                u'nm_running', u'mem_hotplug_support']
         self.assertEquals(sorted(keys), sorted(conf.keys()))

     def test_peers(self):

diff --git a/tests/test_vmtemplate.py b/tests/test_vmtemplate.py
index b504fbc..7304220 100644
--- a/tests/test_vmtemplate.py
+++ b/tests/test_vmtemplate.py
@@ -83,7 +83,8 @@ class VMTemplateTests(unittest.TestCase):
     def test_to_xml(self):
         graphics = {'type': 'spice', 'listen': '127.0.0.1'}
         vm_uuid = str(uuid.uuid4()).replace('-', '')
-        t = VMTemplate({'name': 'test-template', 'cdrom': self.iso})
+        t = VMTemplate({'name': 'test-template', 'cdrom': self.iso,
+                        'max_memory': 3072 << 10})
         xml = t.to_vm_xml('test-vm', vm_uuid, graphics=graphics)
         self.assertEquals(vm_uuid, xpath_get_text(xml, "/domain/uuid")[0])
         self.assertEquals('test-vm', xpath_get_text(xml, "/domain/name")[0])
@@ -91,6 +92,8 @@ class VMTemplateTests(unittest.TestCase):
         self.assertEquals(graphics['type'], xpath_get_text(xml, expr)[0])
         expr = "/domain/devices/graphics/@listen"
         self.assertEquals(graphics['listen'], xpath_get_text(xml, expr)[0])
+        expr = "/domain/maxMemory/@slots"
+        self.assertEquals('2', xpath_get_text(xml, expr)[0])

     def test_arg_merging(self):
         """
--
2.1.0
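For reference, the `version = 1000000*1 + 1000*2 + 14` expression in the MockModel hunk follows libvirt's integer version encoding. A minimal sketch (the helper name is hypothetical):

```python
def libvirt_version_number(major, minor, micro):
    # libvirt encodes a release as major * 1,000,000 + minor * 1,000 + micro
    return 1000000 * major + 1000 * minor + micro

print(libvirt_version_number(1, 2, 14))  # -> 1002014, the memory-device cutoff
```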

On 28/05/2015 10:59, Rodrigo Trujillo wrote:
This patch fixes the issues caused by the previous changes in the guest XML, such as the use of the NUMA and maxMemory elements. It includes a slot number checking test as well. This patch also changes MockModel to raise libvirtError in order to set mem_hotplug_support correctly.
Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
---
 src/kimchi/mockmodel.py  | 7 +++++++
 tests/test_model.py      | 2 +-
 tests/test_rest.py       | 4 ++--
 tests/test_vmtemplate.py | 5 ++++-
 4 files changed, 14 insertions(+), 4 deletions(-)
diff --git a/src/kimchi/mockmodel.py b/src/kimchi/mockmodel.py
index c81cabb..ab9f517 100644
--- a/src/kimchi/mockmodel.py
+++ b/src/kimchi/mockmodel.py
@@ -182,6 +182,13 @@ class MockModel(Model):
         old_xml = dom.XMLDesc(libvirt.VIR_DOMAIN_XML_SECURE)
         root = objectify.fromstring(old_xml)
         dev = objectify.fromstring(xml)
+        # Libvirt < 1.2.14 does not support memory devices, so force raise the
+        # error, in order to get Capabilities.mem_hotplug_support correctly in
+        # test mode
+        version = 1000000*1 + 1000*2 + 14
Why are you checking the libvirt version if we have a feature test for it?

On 06/01/2015 09:41 AM, Aline Manera wrote:
On 28/05/2015 10:59, Rodrigo Trujillo wrote:
This patch fixes the issues caused by the previous changes in the guest XML, such as the use of the NUMA and maxMemory elements. It includes a slot number checking test as well. This patch also changes MockModel to raise libvirtError in order to set mem_hotplug_support correctly.
Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
---
 src/kimchi/mockmodel.py  | 7 +++++++
 tests/test_model.py      | 2 +-
 tests/test_rest.py       | 4 ++--
 tests/test_vmtemplate.py | 5 ++++-
 4 files changed, 14 insertions(+), 4 deletions(-)
diff --git a/src/kimchi/mockmodel.py b/src/kimchi/mockmodel.py
index c81cabb..ab9f517 100644
--- a/src/kimchi/mockmodel.py
+++ b/src/kimchi/mockmodel.py
@@ -182,6 +182,13 @@ class MockModel(Model):
         old_xml = dom.XMLDesc(libvirt.VIR_DOMAIN_XML_SECURE)
         root = objectify.fromstring(old_xml)
         dev = objectify.fromstring(xml)
+        # Libvirt < 1.2.14 does not support memory devices, so force raise the
+        # error, in order to get Capabilities.mem_hotplug_support correctly in
+        # test mode
+        version = 1000000*1 + 1000*2 + 14
Why are you checking the libvirt version if we have a feature test for it?
Oh, I should have written this in the comments: I use dom.attachDeviceFlags in the feature test. When we run the tests, it uses the MockModel and calls a mock "attachDeviceFlags" function. This snippet changes that function so that it handles the feature test properly, because previously it never raised the libvirt error and always returned True.
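The behavior described here can be sketched with toy classes (all names are hypothetical; this only mirrors the idea, not Kimchi's actual MockModel or feature-test code):

```python
class FakeLibvirtError(Exception):
    """Stand-in for libvirt.libvirtError (toy example only)."""

class FakeDomain:
    """Toy domain mimicking the mock attachDeviceFlags behavior."""
    def __init__(self, libvirt_version):
        self.libvirt_version = libvirt_version

    def attachDeviceFlags(self, dev_tag, flags=0):
        # Mirror the MockModel change: refuse memory devices when the
        # (fake) libvirt version is older than 1.2.14
        min_version = 1000000 * 1 + 1000 * 2 + 14
        if dev_tag == 'memory' and self.libvirt_version < min_version:
            raise FakeLibvirtError('virDomainAttachDeviceFlags() failed')

def mem_hotplug_supported(dom):
    # Feature test: try to attach a memory device and report the outcome
    try:
        dom.attachDeviceFlags('memory')
    except FakeLibvirtError:
        return False
    return True

print(mem_hotplug_supported(FakeDomain(1002013)))  # libvirt 1.2.13 -> False
print(mem_hotplug_supported(FakeDomain(1002014)))  # libvirt 1.2.14 -> True
```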

On 01/06/2015 16:49, Rodrigo Trujillo wrote:
On 06/01/2015 09:41 AM, Aline Manera wrote:
On 28/05/2015 10:59, Rodrigo Trujillo wrote:
This patch fixes the issues caused by the previous changes in the guest XML, such as the use of the NUMA and maxMemory elements. It includes a slot number checking test as well. This patch also changes MockModel to raise libvirtError in order to set mem_hotplug_support correctly.
Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo@linux.vnet.ibm.com>
---
 src/kimchi/mockmodel.py  | 7 +++++++
 tests/test_model.py      | 2 +-
 tests/test_rest.py       | 4 ++--
 tests/test_vmtemplate.py | 5 ++++-
 4 files changed, 14 insertions(+), 4 deletions(-)
diff --git a/src/kimchi/mockmodel.py b/src/kimchi/mockmodel.py
index c81cabb..ab9f517 100644
--- a/src/kimchi/mockmodel.py
+++ b/src/kimchi/mockmodel.py
@@ -182,6 +182,13 @@ class MockModel(Model):
         old_xml = dom.XMLDesc(libvirt.VIR_DOMAIN_XML_SECURE)
         root = objectify.fromstring(old_xml)
         dev = objectify.fromstring(xml)
+        # Libvirt < 1.2.14 does not support memory devices, so force raise the
+        # error, in order to get Capabilities.mem_hotplug_support correctly in
+        # test mode
+        version = 1000000*1 + 1000*2 + 14
Why are you checking the libvirt version if we have a feature test for it?
Oh, I should have written this in the comments: I use dom.attachDeviceFlags in the feature test. When we run the tests, it uses the MockModel and calls a mock "attachDeviceFlags" function. This snippet changes that function so that it handles the feature test properly, because previously it never raised the libvirt error and always returned True.
I think we override the attachDeviceFlags method because it is not supported by the libvirt test driver. So in this case we can assume a single return value, for example True, which means the user will be able to do memory hotplug in the mock environment.

On 28/05/2015 10:59, Rodrigo Trujillo wrote:
V2
- Fix errors in tests and add a test for the number of slots
- Fix other minor issues with libvirt < 1.2.14
V1
This patchset implements the backend part of the memory hotplug functionality, including:
- a new feature test to check whether libvirt supports memory devices
- changes to the way memory is assigned or updated in the guest XML:
  * memory is now set in a basic NUMA node
- a maxMemory element in the XML:
  * equal to the total memory of the host
  * it also sets the memory device slots; the total number of slots is equal to the maxMemory minus the assigned memory (1 slot == 1 GB)
- a new VM device, the memory device:
  * by default, a memory device has 1 GB
  * the user can add a memory device with the machine running or offline
- memory devices are selected according to their position (slot) in the XML: 0, 1, 2, etc.
URL: - http://localhost:8010/vms/<VM>/memdevices - http://localhost:8010/vms/<VM>/memdevices/<MEM-DEV>
I understand why you created the API as above, but from a user perspective I just want to increase the guest memory, and it should be transparent to the user. I'd expect the API to use PUT /vms/<name> {memory: 2048}, and then the backend would do the work. Otherwise we will push backend logic into the frontend, as the user would set the memory to 2048 and the UI would have to calculate how many memory devices should be added. And what if one of those requests fails? The UI would need a callback to revert the successful requests.

Hi Aline, thank you for the review. See my answers below and in the next emails.

Rodrigo

On 06/01/2015 09:18 AM, Aline Manera wrote:
On 28/05/2015 10:59, Rodrigo Trujillo wrote:
V2
- Fix errors in tests and add a test for the number of slots
- Fix other minor issues with libvirt < 1.2.14
V1
This patchset implements the backend part of the memory hotplug functionality, including:
- a new feature test to check whether libvirt supports memory devices
- changes to the way memory is assigned or updated in the guest XML:
  * memory is now set in a basic NUMA node
- a maxMemory element in the XML:
  * equal to the total memory of the host
  * it also sets the memory device slots; the total number of slots is equal to the maxMemory minus the assigned memory (1 slot == 1 GB)
- a new VM device, the memory device:
  * by default, a memory device has 1 GB
  * the user can add a memory device with the machine running or offline
- memory devices are selected according to their position (slot) in the XML: 0, 1, 2, etc.
URL: - http://localhost:8010/vms/<VM>/memdevices - http://localhost:8010/vms/<VM>/memdevices/<MEM-DEV>
I understand why you created the API as above, but from a user perspective I just want to increase the guest memory, and it should be transparent to the user.
I'd expect the API to use PUT /vms/<name> {memory: 2048}
I developed this feature thinking of two environments: VM running and VM stopped. If the VM is stopped, the user or UI would use "PUT /vms/<name> {memory: 2048}". If the VM is running, the user or UI would use "POST /vms/<name>/memdevices {amount: 2}". On the UI side no calculation would be needed: just check whether the VM is running, use different elements, and make a different call; note that if the VM is running it will only accept whole-GiB amounts. There is no problem in using "PUT /vms/<name> {memory: 2048}" to simplify the user's life; we just need to think a little more about the "intelligence" needed to handle the memory value passed. The 'vm update' function can check whether the VM is running or not and make use of the memdevice API. For this to work correctly, I believe I would also have to code the "memdevice remove" function, or only allow increasing memory while the VM is running. What do you think?
And then the backend would do the work; otherwise we will push backend logic into the frontend, as the user would set the memory to 2048 and the UI would have to calculate how many memory devices should be added. And what if one of those requests fails? The UI would need a callback to revert the successful requests.
Hmm, actually what I have thought is to not allow the user to set a memory number in the UI if the guest is running. Instead, show the current memory in a disabled label and add a second label with the number of GB to add, like:

    ---------  ------
    Memory    | 2048 |
    Add (GB)  |    0 |  ( + ) ( - )
    ---------  ------

The UI would do only one more request for the memory. But that is just what I have thought; we can keep the UI as is and add the handling in the update function.

On 01/06/2015 16:24, Rodrigo Trujillo wrote:
Hi Aline, thank you for the review.
See my answers in below and in next emails.
Rodrigo
On 06/01/2015 09:18 AM, Aline Manera wrote:
On 28/05/2015 10:59, Rodrigo Trujillo wrote:
V2
- Fix errors in tests and add a test for the number of slots
- Fix other minor issues with libvirt < 1.2.14
V1
This patchset implements the backend part of the memory hotplug functionality, including:
- a new feature test to check whether libvirt supports memory devices
- changes to the way memory is assigned or updated in the guest XML:
  * memory is now set in a basic NUMA node
- a maxMemory element in the XML:
  * equal to the total memory of the host
  * it also sets the memory device slots; the total number of slots is equal to the maxMemory minus the assigned memory (1 slot == 1 GB)
- a new VM device, the memory device:
  * by default, a memory device has 1 GB
  * the user can add a memory device with the machine running or offline
- memory devices are selected according to their position (slot) in the XML: 0, 1, 2, etc.
URLs:
- http://localhost:8010/vms/<VM>/memdevices
- http://localhost:8010/vms/<VM>/memdevices/<MEM-DEV>
I understand why you created the API that way, but from a user perspective I just want to increase the guest memory, and it should be transparent to the user.
I'd expect the API to be PUT /vms/<name> {memory: 2048}
I developed this feature with two scenarios in mind: VM running and VM stopped. If the VM is stopped, the user or UI would use "PUT /vms/<name> {memory: 2048}". If the VM is running, the user or UI would use "POST /vms/<name>/memdevices {amount: 2}". On the UI side no calculation would be needed: just check whether the VM is running, use different elements, and make a different call. Note that if the VM is running, it will only accept whole amounts of GiB.
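The running/stopped split described here could be dispatched as below; `memory_request` is a hypothetical helper, with the endpoint paths taken from the cover letter:

```python
def memory_request(vm_name, running, *, new_total_mib=None, add_gib=None):
    """Return the (method, path, body) triple the UI would send."""
    if running:
        # Hotplug path: add whole-GiB memory devices to the live guest.
        return ("POST", "/vms/%s/memdevices" % vm_name, {"amount": add_gib})
    # Offline path: simply update the configured memory value.
    return ("PUT", "/vms/%s" % vm_name, {"memory": new_total_mib})


print(memory_request("myvm", True, add_gib=2))
print(memory_request("myvm", False, new_total_mib=2048))
```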
There is no problem with using "PUT /vms/<name> {memory: 2048}" to simplify the user's life; we just need to think a little more about the "intelligence" needed to handle the memory value passed. The 'vm update' function can check whether the VM is running or not and make use of the memdevice API.
Yes, I have thought about that approach. But instead of keeping the memdevice API, it would turn into multiple methods inside VMModel(). So let's say my guest is running and I send the request PUT /vms/<name> {memory: 2048}. You will need to get the current memory (let's say 1024), take the difference to 2048, and add as many 1GB memory devices as needed.
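Aline's suggestion (diff the requested total against the current memory and add 1 GB devices to cover the gap) could look roughly like this; `devices_to_add` is an illustrative name, not an existing Kimchi function:

```python
SLOT_MIB = 1024  # each memory device is 1 GiB


def devices_to_add(current_mib, requested_mib):
    """How many 1 GiB devices a PUT /vms/<name> {memory: N} implies."""
    delta = requested_mib - current_mib
    if delta < 0:
        raise ValueError("cannot shrink the memory of a running guest")
    if delta % SLOT_MIB:
        raise ValueError("running guests accept only whole-GiB increments")
    return delta // SLOT_MIB


print(devices_to_add(1024, 2048))
```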
For this to work correctly, I believe I would have to code the "memdevice remove" function as well, or only allow increasing memory while the VM is running.
Only increasing the memory while the guest is running is enough for now.
What do you think?
And then the backend would do the work; otherwise we would push backend logic into the frontend, since the user would set the memory to 2048 and the UI would have to calculate how many memory devices to add. What if one of those requests fails? The UI would need to roll back the successful requests.
Hmm, actually what I have thought is to not allow the user to set a memory value in the UI if the guest is running. Instead, show the current memory in a disabled label and add a second label with the number of GB to add, like:
           ---------             ------
 Memory   |  2048  |   Add (GB) |  0   |   ( + ) ( - )
           ---------             ------
The UI would only make one more request for the memory. But that is just what I had thought; we can keep the UI as is and add the handling in the update function.
About the UI, my idea was very close to yours. But the + icon would automatically increase the disabled label. So in the beginning it shows 1024; after clicking + it becomes 2048, and so on. The same for the minus icon. On the "Save" button, the request is sent to the server.
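The +/- stepper behaviour Aline describes (the disabled label itself moves in 1 GB steps, and a single request goes out on Save) might be modelled like this; the class name and the lower bound are assumptions for the sketch:

```python
class MemoryStepper:
    """Sketch of the proposed widget logic (assumed, not Kimchi UI code)."""

    STEP_MIB = 1024  # each click on + or - moves one GiB

    def __init__(self, current_mib):
        self.value = current_mib

    def plus(self):
        self.value += self.STEP_MIB

    def minus(self):
        # never step below one slot's worth of memory
        self.value = max(self.STEP_MIB, self.value - self.STEP_MIB)

    def save_payload(self):
        # a single PUT /vms/<name> body is sent when "Save" is clicked
        return {"memory": self.value}


s = MemoryStepper(1024)
s.plus()
print(s.save_payload())
```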
Participants (2): Aline Manera, Rodrigo Trujillo