On 01/06/2015 16:24, Rodrigo Trujillo wrote:
Hi Aline, thank you for the review.
See my answers below and in the next emails.
Rodrigo
On 06/01/2015 09:18 AM, Aline Manera wrote:
>
>
> On 28/05/2015 10:59, Rodrigo Trujillo wrote:
>> V2
>> - Fix errors in tests and add a test for the slots number
>> - Fix other minor issues with Libvirt < 1.2.14
>>
>> V1
>> This patchset implements the backend part of memory hotplug
>> functionality,
>> including:
>> - new feature test to check if libvirt supports memory devices
>> - changes the way that memory is assigned or updated in the guest xml:
>> * memory is now set in a basic NUMA node
>> - includes maxMemory element in the XML:
>> * which is equal to the total memory of the host
>> * sets memory device slots. The total number of slots is equal to the
>> maxMemory minus the memory assigned (1 slot == 1 GB)
>> - creates a new VM device, the memory device:
>> * by default, a memory device will have 1GB
>> * user can add the memory device with machine running or offline
>> - memory devices are selected according to their position in the xml
>> (the slot):
>> 0, 1, 2, etc.
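For illustration, the slot arithmetic described in the list above could be sketched as follows (function and variable names are made up here, not Kimchi's actual code):

```python
GIB_IN_MIB = 1024

def available_slots(max_memory_mib, assigned_mib):
    """Slots left for hotplug: maxMemory minus assigned memory, 1 slot == 1 GiB.

    Illustrative only; values are in MiB as in the API examples below.
    """
    return (max_memory_mib - assigned_mib) // GIB_IN_MIB

# e.g. a host with 8 GiB total and a guest assigned 2 GiB
# leaves (8192 - 2048) // 1024 == 6 slots
```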
>
>> URL:
>> - http://localhost:8010/vms/<VM>/memdevices
>> - http://localhost:8010/vms/<VM>/memdevices/<MEM-DEV>
>
> I understand why you created the API like above, but from a user
> perspective I just want to increase the guest memory and it should be
> transparent to user.
>
> I'd expect the API to use the PUT /vms/<name> {memory: 2048}
I developed this feature thinking of two scenarios: VM running and VM
stopped.
If the VM is stopped, then the user or UI would have to use "PUT
/vms/<name> {memory: 2048}".
If the VM is running, then the user or UI would have to use "POST
/vms/<name>/memdevices {amount: 2}".
From the UI side, no calculation would be needed: just check whether the
VM is running, use different elements and make a different call. Notice
that if the VM is running it will only accept integer amounts of GiB.
There is no problem in using "PUT /vms/<name> {memory: 2048}" to
simplify the user's life; we just need to think a little bit more about
the "intelligence" to handle the memory number passed.
The 'vm update' function can check whether the VM is running or not and
make use of the memdevice API.
Yeah, I have thought about it that way.
But instead of keeping the memdevice API, it would turn into multiple
methods inside VMModel().
So let's say my guest is running and I send the request:
PUT /vms/<name> {memory: 2048}
You will need to get the current memory (let's say 1024), compute the
difference to 2048, and add as many 1GB memory devices as needed.
In order for this to work correctly, I believe I would have to code the
"memdevice remove" function as well, or only allow increasing memory
when the VM is running.
Only increasing the memory when the guest is running is enough for now.
What do you think ?
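To make the discussion concrete, here is a minimal sketch of the update path we are describing: direct update when the guest is offline, whole-GiB memory-device hotplug (increase-only) when it is running. All names here are hypothetical, not Kimchi's actual code.

```python
GIB_IN_MIB = 1024

def update_memory(vm, new_memory_mib):
    """Sketch: route a memory change depending on guest state.

    vm is a plain dict standing in for the real VM object.
    Returns the number of 1 GiB memory devices hot-added.
    """
    if not vm['running']:
        # offline guest: plain "PUT /vms/<name> {memory: ...}" semantics,
        # the XML is simply rewritten with the new value
        vm['memory'] = new_memory_mib
        return 0

    delta = new_memory_mib - vm['memory']
    if delta < 0:
        raise ValueError('cannot shrink a running guest')
    if delta % GIB_IN_MIB:
        raise ValueError('running guests take whole-GiB increments only')

    devices = delta // GIB_IN_MIB
    # running guest: equivalent to
    #   POST /vms/<name>/memdevices {amount: devices}
    vm['memory'] = new_memory_mib
    return devices
```

With this shape, the memdevice logic stays internal to the model and the user only ever sees the single PUT request.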
>
> And then the backend would do the work; otherwise we would insert some
> backend logic into the frontend, as the user would set the memory to
> 2048 and the UI would have to calculate how many memory devices should
> be added. What if one of those requests fails? The UI would need to do
> a callback to revert the successful requests.
Hmm, actually what I have thought is to not allow the user to set a
number as memory in the UI if the guest is running. Instead, show the
current memory in a disabled label, and add a second label with the
number of GB to add, like:
          --------           ---
Memory   | 2048 |  Add (GB) | 0 |  ( + ) ( - )
          --------           ---
The UI would make only one more request, for the memory.
But that is what I have thought; we can keep the UI as is and add the
handling in the update function.
About the UI, I thought of something very close to yours.
But the + icon would automatically increase the disabled label. So in
the beginning it is 1024; after clicking + it turns into 2048, and so on.
The same for the minus icon.
And on "Save" button, the request is sent to server.
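The +/- behavior you describe could be modeled like this (purely illustrative; the class and field names are invented, and the real UI is JavaScript, not Python):

```python
GIB_IN_MIB = 1024

class MemorySpinner:
    """Model of the disabled memory label with + / - controls."""

    def __init__(self, current_mib):
        self.initial = current_mib    # guest's current memory
        self.displayed = current_mib  # value shown in the disabled label

    def plus(self):
        self.displayed += GIB_IN_MIB

    def minus(self):
        # never drop below the guest's current memory (increase-only)
        if self.displayed - GIB_IN_MIB >= self.initial:
            self.displayed -= GIB_IN_MIB

    def save_payload(self):
        # single request sent on "Save": PUT /vms/<name> {memory: displayed}
        return {'memory': self.displayed}
```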
>>
>> Rodrigo Trujillo (6):
>> [Memory HotPlug] Feature test to check support to memory devices
>> [Memory HotPlug] Add maxMemory and numa configuration to guest xml
>> [Memory HotPlug] Memory device control and model classes
>> [Memory HotPlug] Add parameters checking and documentation to
>> memory
>> device API
>> [Memory HotPlug] Fix VM offline memory update and fix slots
>> assignment
>> [Memory HotPlug] Fix test and adds slot test
>>
>> docs/API.md | 16 +++++++
>> src/kimchi/API.json | 14 +++++-
>> src/kimchi/control/vm/memdevices.py | 47 +++++++++++++++++++
>> src/kimchi/i18n.py | 5 ++
>> src/kimchi/mockmodel.py | 7 +++
>> src/kimchi/model/config.py | 3 ++
>> src/kimchi/model/featuretests.py | 43 +++++++++++++++++
>> src/kimchi/model/vmmemdevices.py | 94 +++++++++++++++++++++++++++++++++++++
>> src/kimchi/model/vms.py | 85 ++++++++++++++++++++++++++++-----
>> src/kimchi/vmtemplate.py | 40 ++++++++++++----
>> tests/test_model.py | 2 +-
>> tests/test_rest.py | 4 +-
>> tests/test_vmtemplate.py | 5 +-
>> 13 files changed, 338 insertions(+), 27 deletions(-)
>> create mode 100644 src/kimchi/control/vm/memdevices.py
>> create mode 100644 src/kimchi/model/vmmemdevices.py
>>
>
_______________________________________________
Kimchi-devel mailing list
Kimchi-devel(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/kimchi-devel