[PATCH 0/3] Use lxml.etree instead of libxml2
by Aline Manera
That way we can remove the libxml2-python package as a Kimchi dependency and
make XML manipulation consistent across the whole code base.
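For illustration only (this snippet is not from the patch itself), the lxml.etree API covers both parsing and XPath queries, which is what lets a single library replace both xml.etree and libxml2:

```python
from lxml import etree

# Parse a small XML snippet and query it, the way xmlutils/utils.py
# can once it relies on lxml.etree alone.
xml = "<disk type='file' device='cdrom'><target dev='hdc'/></disk>"
root = etree.fromstring(xml)
print(root.get('device'))           # cdrom
print(root.xpath('./target/@dev'))  # ['hdc']
```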
Aline Manera (3):
Use lxml.etree on xmlutils/utils.py instead of xml.etree and libxml2
Use lxml.etree on gen-index.py script instead of libxml2
Remove libxml2-python as Kimchi dependency
contrib/DEBIAN/control.in | 3 +--
contrib/kimchi.spec.fedora.in | 3 +--
contrib/kimchi.spec.suse.in | 3 +--
docs/README.md | 30 ++++++++++++++----------------
src/kimchi/xmlutils/utils.py | 18 ++++++------------
ui/pages/help/gen-index.py | 11 +++++------
6 files changed, 28 insertions(+), 40 deletions(-)
--
1.9.3
10 years, 2 months
[PATCH 0/8 V2] Create a common function to generate guest disk XML
by Aline Manera
V1 -> V2:
- Make xmlutils/disk.py independent of model instances
----
This patch set moves common functions related to guest disk XML to
xmlutils/disk.py. That way model/vmstorages.py and vmtemplate.py can make use
of it and we make sure the logic is the same in both cases.
It is the first step and only changes the way vmtemplate.py generates the guest
CDROM XML. But in the next patches all the guest disk generation in vmtemplate.py
will do the same (I preferred to do it in parts to avoid a huge patch set).
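As a rough sketch of the shared helper (the function name and signature here are illustrative, not necessarily the patch's exact get_disk_xml() interface):

```python
from lxml import etree
from lxml.builder import E

def get_disk_xml(dev, path, device='cdrom', bus='ide'):
    # Illustrative only: build the guest disk element in one place, so
    # that model/vmstorages.py and vmtemplate.py share the same logic.
    disk = E.disk(E.source(file=path),
                  E.target(dev=dev, bus=bus),
                  type='file', device=device)
    return etree.tostring(disk, pretty_print=True)

print(get_disk_xml('hdc', '/tmp/image.iso'))
```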
Aline Manera (8):
Move _get_storage_xml() to xmlutils/disk.py
Move vmdisks.py functions to xmlutils/disk.py
Remove 'bus' parameter from /vms/<name>/storages documentation
Update get_disk_xml() to get the device name according to bus and
index values
Remove ignore_src parameter from get_disk_xml()
Get disk type according to file path on get_disk_xml()
Check QEMU stream DNS capability when attaching new disk to guest
Update vmtemplate.py to use get_disk_xml() while generating CDROM XML
docs/API.md | 1 -
src/kimchi/model/storagevolumes.py | 8 +-
src/kimchi/model/vmstorages.py | 152 ++++++++++++-----------------------
src/kimchi/vmdisks.py | 75 ------------------
src/kimchi/vmtemplate.py | 74 ++++++-----------
src/kimchi/xmlutils/disk.py | 158 +++++++++++++++++++++++++++++++++++++
tests/test_model.py | 11 ++-
7 files changed, 244 insertions(+), 235 deletions(-)
delete mode 100644 src/kimchi/vmdisks.py
create mode 100644 src/kimchi/xmlutils/disk.py
--
1.9.3
10 years, 2 months
[PATCH] Remove pyc files on make clean
by Christy Perez
An old file left behind a .pyc file, which caused problems. Manually
removing all the .pyc files took care of it, so I figured we could
add this to 'make clean' for convenience.
Signed-off-by: Christy Perez <christy(a)linux.vnet.ibm.com>
---
Makefile.am | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/Makefile.am b/Makefile.am
index ae2cdf0..ec88787 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -118,4 +118,4 @@ VERSION:
clean-local:
rm -rf mo rpm
-CLEANFILES = kimchi.spec
+CLEANFILES = kimchi.spec `find "$(top_srcdir)" -type f -name "*.pyc" -print`
--
1.9.3
10 years, 2 months
[PATCH v6 0/2] Backend support for templates with sockets, cores, and threads
by Christy Perez
Adding the ability to specify a CPU topology for a guest. The first
patch is the backend, and the second is the translations.
There will be a follow-on patchset that works with the UI to gather user requirements
and then come up with the appropriate topology based on the guest OS and host
capabilities.
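As a sketch of the constraint involved (assuming libvirt's <cpu><topology/> element; the helper below is illustrative, not the patch's code):

```python
from lxml import etree
from lxml.builder import E

def cpu_topology_xml(sockets, cores, threads):
    # The invariant the patch validates: the guest's vCPU count must
    # equal sockets * cores * threads.
    vcpus = sockets * cores * threads
    cpu = E.cpu(E.topology(sockets=str(sockets),
                           cores=str(cores),
                           threads=str(threads)))
    return vcpus, etree.tostring(cpu)

vcpus, xml = cpu_topology_xml(2, 2, 2)
print(vcpus)  # 8
```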
v5->v6:
- Change cpu_info to empty dict instead of empty string if not specified
- Split translations into different commit
v4->v5:
- Fix format issues
- Remove if check for cpu_info in control, and set cpu_info
to the empty string in model if None
- Add a format requirements error for topology.
- Add new error for topology in API.json
- Update po files
v3->v4:
- Remove the unused cpu_ elements from common_spec
- Pass new_t into validate function to reduce complexity
- Rearrange code to decrease indents in _get_cpu_xml
v2->v3:
- Set vcpus based on topology, if specified.
- Move the update cpu+topology validation out to a function
for readability
- Add a minimum value of 1 for topology values
- Leave new English error msg as empty string
- Update the API documentation on cpu defaults
v1->v2:
- Added a check to make sure that vcpus = sockets*cores*threads
- Set individual topology params to required in API.json
- Change the topology object types from string to integer
- Always return cpu_info from templates lookup()
- Removed check for cpu_info in to_vm_xml
- Build cpu_info xml using lxml.builder instead of string
- CPU and topology verification on template update
Christy Perez (2):
Backend support for templates with sockets, cores, and threads
Translations for new cpu_info messages
docs/API.md | 13 ++++++-
po/de_DE.po | 76 +++++++++++++++++++++++++++++++++++----
po/en_US.po | 76 +++++++++++++++++++++++++++++++++++----
po/es_ES.po | 76 +++++++++++++++++++++++++++++++++++----
po/fr_FR.po | 76 +++++++++++++++++++++++++++++++++++----
po/it_IT.po | 76 +++++++++++++++++++++++++++++++++++----
po/ja_JP.po | 76 +++++++++++++++++++++++++++++++++++----
po/kimchi.pot | 76 +++++++++++++++++++++++++++++++++++----
po/ko_KR.po | 76 +++++++++++++++++++++++++++++++++++----
po/pt_BR.po | 75 ++++++++++++++++++++++++++++++++++----
po/ru_RU.po | 76 +++++++++++++++++++++++++++++++++++----
po/zh_CN.po | 79 +++++++++++++++++++++++++++++++++++------
po/zh_TW.po | 76 +++++++++++++++++++++++++++++++++++----
src/kimchi/API.json | 36 +++++++++++++++++--
src/kimchi/control/templates.py | 31 ++++++++--------
src/kimchi/i18n.py | 2 ++
src/kimchi/model/templates.py | 32 +++++++++++++++++
src/kimchi/osinfo.py | 5 ++-
src/kimchi/vmtemplate.py | 16 +++++++++
19 files changed, 941 insertions(+), 108 deletions(-)
--
1.9.3
10 years, 2 months
[PATCH v2] AsyncTask: Improve continuous status feedback
by Zhou Zheng Sheng
One of the advantages of AsyncTask is that the background task can call
"_status_cb()" multiple times to report the interim progress. However,
this is not properly implemented in "_status_cb()". The correct handling
should be as follows.
cb(message, True)
Means task finished successfully.
cb(message, False)
Means task failed.
cb(message)
cb(message, None)
Means task is ongoing and the invocation is to provide progress feedback.
The current implementation fails to distinguish "cb(message, False)" and
"cb(message)". This patch fixes the problem and adds a new test case.
This patch also extracts the duplicated "wait_task()" helpers from the tests
into tests/utils.py, adding a regular status report to let developers know it
is waiting for some task rather than being stuck on something. There are also
two methods in the tests that start an AsyncTask but do not wait for it to
finish; this can interfere with other tests, so the patch adds the missing
"wait_task()" invocations for them.
v2:
Extract the common "wait_task()" method to tests/utils.py to avoid duplicated code.
Signed-off-by: Zhou Zheng Sheng <zhshzhou(a)linux.vnet.ibm.com>
---
src/kimchi/asynctask.py | 6 ++----
tests/test_mockmodel.py | 2 ++
tests/test_model.py | 36 +++++++++++++++++++++++-------------
tests/test_rest.py | 25 +++++++++++--------------
tests/utils.py | 15 +++++++++++++++
5 files changed, 53 insertions(+), 31 deletions(-)
diff --git a/src/kimchi/asynctask.py b/src/kimchi/asynctask.py
index 99e7a64..b5673b2 100644
--- a/src/kimchi/asynctask.py
+++ b/src/kimchi/asynctask.py
@@ -49,10 +49,8 @@ class AsyncTask(object):
self._save_helper()
return
- if success:
- self.status = 'finished'
- else:
- self.status = 'failed'
+ if success is not None:
+ self.status = 'finished' if success else 'failed'
self.message = message
self._save_helper()
diff --git a/tests/test_mockmodel.py b/tests/test_mockmodel.py
index 7319531..eeb6715 100644
--- a/tests/test_mockmodel.py
+++ b/tests/test_mockmodel.py
@@ -26,6 +26,7 @@ import unittest
import kimchi.mockmodel
from utils import get_free_port, patch_auth, request, run_server
+from utils import wait_task
from kimchi.control.base import Collection, Resource
@@ -198,3 +199,4 @@ class MockModelTests(unittest.TestCase):
task = model.host_swupdate()
task_params = [u'id', u'message', u'status', u'target_uri']
self.assertEquals(sorted(task_params), sorted(task.keys()))
+ wait_task(model.task_lookup, task['id'])
diff --git a/tests/test_model.py b/tests/test_model.py
index d9bbe9e..896540d 100644
--- a/tests/test_model.py
+++ b/tests/test_model.py
@@ -43,6 +43,7 @@ from kimchi.iscsi import TargetClient
from kimchi.model import model
from kimchi.rollbackcontext import RollbackContext
from kimchi.utils import add_task
+from utils import wait_task
invalid_repository_urls = ['www.fedora.org', # missing protocol
@@ -127,7 +128,7 @@ class ModelTests(unittest.TestCase):
'format': 'qcow2'}
task_id = inst.storagevolumes_create('default', params)['id']
rollback.prependDefer(inst.storagevolume_delete, 'default', vol)
- self._wait_task(inst, task_id)
+ wait_task(inst.task_lookup, task_id)
self.assertEquals('finished', inst.task_lookup(task_id)['status'])
vol_path = inst.storagevolume_lookup('default', vol)['path']
@@ -285,8 +286,9 @@ class ModelTests(unittest.TestCase):
'capacity': 1024,
'allocation': 512,
'format': 'qcow2'}
- inst.storagevolumes_create(pool, params)
+ task_id = inst.storagevolumes_create(pool, params)['id']
rollback.prependDefer(inst.storagevolume_delete, pool, vol)
+ wait_task(inst.task_lookup, task_id)
vm_name = 'kimchi-cdrom'
params = {'name': 'test', 'disks': [], 'cdrom': self.kimchi_iso}
@@ -558,7 +560,7 @@ class ModelTests(unittest.TestCase):
params['name'] = vol
task_id = inst.storagevolumes_create(pool, params)['id']
rollback.prependDefer(inst.storagevolume_delete, pool, vol)
- self._wait_task(inst, task_id)
+ wait_task(inst.task_lookup, task_id)
self.assertEquals('finished', inst.task_lookup(task_id)['status'])
fd, path = tempfile.mkstemp(dir=path)
@@ -599,7 +601,7 @@ class ModelTests(unittest.TestCase):
taskid = task_response['id']
vol_name = task_response['target_uri'].split('/')[-1]
self.assertEquals('COPYING', vol_name)
- self._wait_task(inst, taskid, timeout=60)
+ wait_task(inst.task_lookup, taskid, timeout=60)
self.assertEquals('finished', inst.task_lookup(taskid)['status'])
vol_path = os.path.join(args['path'], vol_name)
self.assertTrue(os.path.isfile(vol_path))
@@ -1120,10 +1122,17 @@ class ModelTests(unittest.TestCase):
except:
cb("Exception raised", False)
+ def continuous_ops(cb, params):
+ cb("step 1 OK")
+ time.sleep(2)
+ cb("step 2 OK")
+ time.sleep(2)
+ cb("step 3 OK", params.get('result', True))
+
inst = model.Model('test:///default',
objstore_loc=self.tmp_store)
taskid = add_task('', quick_op, inst.objstore, 'Hello')
- self._wait_task(inst, taskid)
+ wait_task(inst.task_lookup, taskid)
self.assertEquals(1, taskid)
self.assertEquals('finished', inst.task_lookup(taskid)['status'])
self.assertEquals('Hello', inst.task_lookup(taskid)['message'])
@@ -1134,16 +1143,22 @@ class ModelTests(unittest.TestCase):
self.assertEquals(2, taskid)
self.assertEquals('running', inst.task_lookup(taskid)['status'])
self.assertEquals('OK', inst.task_lookup(taskid)['message'])
- self._wait_task(inst, taskid)
+ wait_task(inst.task_lookup, taskid)
self.assertEquals('failed', inst.task_lookup(taskid)['status'])
self.assertEquals('It was not meant to be',
inst.task_lookup(taskid)['message'])
taskid = add_task('', abnormal_op, inst.objstore, {})
- self._wait_task(inst, taskid)
+ wait_task(inst.task_lookup, taskid)
self.assertEquals('Exception raised',
inst.task_lookup(taskid)['message'])
self.assertEquals('failed', inst.task_lookup(taskid)['status'])
+ taskid = add_task('', continuous_ops, inst.objstore,
+ {'result': True})
+ self.assertEquals('running', inst.task_lookup(taskid)['status'])
+ wait_task(inst.task_lookup, taskid, timeout=10)
+ self.assertEquals('finished', inst.task_lookup(taskid)['status'])
+
# This wrapper function is needed due to the new backend messaging in
# vm model. vm_poweroff and vm_delete raise exception if vm is not found.
# These functions are called after vm has been deleted if test finishes
@@ -1247,7 +1262,7 @@ class ModelTests(unittest.TestCase):
task = inst.debugreports_create({'name': reportName})
rollback.prependDefer(inst.debugreport_delete, tmp_name)
taskid = task['id']
- self._wait_task(inst, taskid, timeout)
+ wait_task(inst.task_lookup, taskid, timeout)
self.assertEquals('finished',
inst.task_lookup(taskid)['status'],
"It is not necessary an error. "
@@ -1265,11 +1280,6 @@ class ModelTests(unittest.TestCase):
if 'debugreport tool not found' not in e.message:
raise e
- def _wait_task(self, model, taskid, timeout=5):
- for i in range(0, timeout):
- if model.task_lookup(taskid)['status'] == 'running':
- time.sleep(1)
-
def test_get_distros(self):
inst = model.Model('test:///default',
objstore_loc=self.tmp_store)
diff --git a/tests/test_rest.py b/tests/test_rest.py
index 8de0a9c..60dce2f 100644
--- a/tests/test_rest.py
+++ b/tests/test_rest.py
@@ -38,7 +38,7 @@ import kimchi.server
from kimchi.config import paths
from kimchi.rollbackcontext import RollbackContext
from utils import get_free_port, patch_auth, request
-from utils import run_server
+from utils import run_server, wait_task
test_server = None
@@ -1041,7 +1041,7 @@ class RestTests(unittest.TestCase):
task = json.loads(resp.read())
vol_name = task['target_uri'].split('/')[-1]
self.assertEquals('anyurl.wor.kz', vol_name)
- self._wait_task(task['id'])
+ wait_task(self._task_lookup, task['id'])
task = json.loads(self.request('/tasks/%s' % task['id']).read())
self.assertEquals('finished', task['status'])
resp = self.request('/storagepools/pool-1/storagevolumes/%s' %
@@ -1069,7 +1069,7 @@ class RestTests(unittest.TestCase):
req, 'POST')
self.assertEquals(202, resp.status)
task_id = json.loads(resp.read())['id']
- self._wait_task(task_id)
+ wait_task(self._task_lookup, task_id)
status = json.loads(self.request('/tasks/%s' % task_id).read())
self.assertEquals('finished', status['status'])
@@ -1531,11 +1531,8 @@ class RestTests(unittest.TestCase):
'DELETE')
self.assertEquals(204, resp.status)
- def _wait_task(self, taskid, timeout=5):
- for i in range(0, timeout):
- task = json.loads(self.request('/tasks/%s' % taskid).read())
- if task['status'] == 'running':
- time.sleep(1)
+ def _task_lookup(self, taskid):
+ return json.loads(self.request('/tasks/%s' % taskid).read())
def test_tasks(self):
id1 = model.add_task('/tasks/1', self._async_op)
@@ -1550,12 +1547,12 @@ class RestTests(unittest.TestCase):
tasks = json.loads(self.request('/tasks').read())
tasks_ids = [int(t['id']) for t in tasks]
self.assertEquals(set([id1, id2, id3]) - set(tasks_ids), set([]))
- self._wait_task(id2)
+ wait_task(self._task_lookup, id2)
foo2 = json.loads(self.request('/tasks/%s' % id2).read())
keys = ['id', 'status', 'message', 'target_uri']
self.assertEquals(sorted(keys), sorted(foo2.keys()))
self.assertEquals('failed', foo2['status'])
- self._wait_task(id3)
+ wait_task(self._task_lookup, id3)
foo3 = json.loads(self.request('/tasks/%s' % id3).read())
self.assertEquals('in progress', foo3['message'])
self.assertEquals('running', foo3['status'])
@@ -1717,7 +1714,7 @@ class RestTests(unittest.TestCase):
task = json.loads(resp.read())
# make sure the debugreport doesn't exist until the
# the task is finished
- self._wait_task(task['id'])
+ wait_task(self._task_lookup, task['id'])
rollback.prependDefer(self._report_delete, 'report2')
resp = request(host, ssl_port, '/debugreports/report1')
debugreport = json.loads(resp.read())
@@ -1736,7 +1733,7 @@ class RestTests(unittest.TestCase):
task = json.loads(resp.read())
# make sure the debugreport doesn't exist until the
# the task is finished
- self._wait_task(task['id'], 20)
+ wait_task(self._task_lookup, task['id'], 20)
rollback.prependDefer(self._report_delete, 'report1')
resp = request(host, ssl_port, '/debugreports/report1')
debugreport = json.loads(resp.read())
@@ -1928,7 +1925,7 @@ class RestTests(unittest.TestCase):
self.assertEquals(r.status_code, 202)
task = r.json()
- self._wait_task(task['id'])
+ wait_task(self._task_lookup, task['id'])
resp = self.request('/storagepools/default/storagevolumes/%s' %
task['target_uri'].split('/')[-1])
self.assertEquals(200, resp.status)
@@ -1948,7 +1945,7 @@ class RestTests(unittest.TestCase):
self.assertEquals(r.status_code, 202)
task = r.json()
- self._wait_task(task['id'], 15)
+ wait_task(self._task_lookup, task['id'], 15)
resp = self.request('/storagepools/default/storagevolumes/%s' %
task['target_uri'].split('/')[-1])
diff --git a/tests/utils.py b/tests/utils.py
index 140bb1d..9133904 100644
--- a/tests/utils.py
+++ b/tests/utils.py
@@ -25,6 +25,7 @@ import json
import os
import socket
import sys
+import time
import threading
import unittest
@@ -36,6 +37,7 @@ import kimchi.mockmodel
import kimchi.server
from kimchi.config import paths
from kimchi.exception import OperationFailed
+from kimchi.utils import kimchi_log
_ports = {}
@@ -195,3 +197,16 @@ def patch_auth(sudo=True):
def normalize_xml(xml_str):
return etree.tostring(etree.fromstring(xml_str,
etree.XMLParser(remove_blank_text=True)))
+
+
+def wait_task(task_lookup, taskid, timeout=10):
+ for i in range(0, timeout):
+ task_info = task_lookup(taskid)
+ if task_info['status'] == "running":
+ kimchi_log.info("Waiting task %s, message: %s",
+ taskid, task_info['message'])
+ time.sleep(1)
+ else:
+ return
+ kimchi_log.error("Timeout while process long-run task, "
+ "try to increase timeout value.")
--
1.9.3
10 years, 2 months
[v2] UI: Clone Guest
by huoyuxin@linux.vnet.ibm.com
From: Yu Xin Huo <huoyuxin(a)linux.vnet.ibm.com>
Signed-off-by: Yu Xin Huo <huoyuxin(a)linux.vnet.ibm.com>
---
ui/css/theme-default/list.css | 18 ++++++++++
ui/js/src/kimchi.api.js | 17 ++++++++-
ui/js/src/kimchi.guest_main.js | 73 +++++++++++++++++++++++++++++++++++----
ui/pages/guest.html.tmpl | 6 +++-
4 files changed, 103 insertions(+), 11 deletions(-)
diff --git a/ui/css/theme-default/list.css b/ui/css/theme-default/list.css
index 8ffee69..fc3017b 100644
--- a/ui/css/theme-default/list.css
+++ b/ui/css/theme-default/list.css
@@ -275,3 +275,21 @@
text-shadow: -1px -1px 1px #ccc, 1px 1px 1px #fff;
padding-left: 10px;
}
+
+.guest-clone {
+ margin: 10px;
+}
+
+.guest-clone .icon {
+ background: url('../../images/theme-default/loading.gif') no-repeat;
+ display: inline-block;
+ width: 20px;
+ height: 20px;
+ vertical-align: middle;
+}
+
+.guest-clone .text {
+ color: #666666;
+ margin-left: 5px;
+ text-shadow: -1px -1px 1px #CCCCCC, 1px 1px 1px #FFFFFF;
+}
diff --git a/ui/js/src/kimchi.api.js b/ui/js/src/kimchi.api.js
index 5895a07..2f90219 100644
--- a/ui/js/src/kimchi.api.js
+++ b/ui/js/src/kimchi.api.js
@@ -695,10 +695,10 @@ var kimchi = {
}, 2000);
break;
case 'finished':
- suc(result);
+ suc && suc(result);
break;
case 'failed':
- err(result);
+ err && err(result);
break;
default:
break;
@@ -1233,5 +1233,18 @@ var kimchi = {
success : suc,
error : err
});
+ },
+
+ cloneGuest: function(vm, suc, err) {
+ kimchi.requestJSON({
+ url : kimchi.url + 'vms/'+encodeURIComponent(vm)+"/clone",
+ type : 'POST',
+ contentType : 'application/json',
+ dataType : 'json',
+ success : suc,
+ error : err ? err : function(data) {
+ kimchi.message.error(data.responseJSON.reason);
+ }
+ });
}
};
diff --git a/ui/js/src/kimchi.guest_main.js b/ui/js/src/kimchi.guest_main.js
index dbe8753..ecc3b7a 100644
--- a/ui/js/src/kimchi.guest_main.js
+++ b/ui/js/src/kimchi.guest_main.js
@@ -15,6 +15,34 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
+kimchi.sampleGuestObject = {
+ "name": "",
+ "uuid": "",
+ "state": "shutoff",
+ "persistent": true,
+ "icon": null,
+ "cpus": 0,
+ "memory": 0,
+ "stats": {
+ "net_throughput": 0,
+ "io_throughput_peak": 100,
+ "cpu_utilization": 0,
+ "io_throughput": 0,
+ "net_throughput_peak": 100
+ },
+ "screenshot": null,
+ "graphics": {
+ "passwd": null,
+ "passwdValidTo": null,
+ "type": "vnc",
+ "port": null,
+ "listen": "127.0.0.1"
+ },
+ "users": [],
+ "groups": [],
+ "access": "full"
+};
+
kimchi.vmstart = function(event) {
var button=$(this);
@@ -173,8 +201,24 @@ kimchi.listVmsAuto = function() {
if (kimchi.vmTimeout) {
clearTimeout(kimchi.vmTimeout);
}
+ var getCloningGuests = function(){
+ var guests = [];
+ kimchi.getTasksByFilter('status=running&target_uri='+encodeURIComponent('^/vms/*'), function(tasks) {
+ for(var i=0;i<tasks.length;i++){
+ var guestUri = tasks[i].target_uri;
+ var guestName = guestUri.substring(guestUri.lastIndexOf('/')+1, guestUri.length);
+ guests.push($.extend({}, kimchi.sampleGuestObject, {name: guestName, isCloning: true}));
+ if(kimchi.trackingTasks.indexOf(tasks[i].id)==-1)
+ kimchi.trackTask(tasks[i].id, null, function(err){
+ kimchi.message.error(err.message);
+ }, null);
+ }
+ }, null, true);
+ return guests;
+ };
kimchi.listVMs(function(result, textStatus, jqXHR) {
if (result && textStatus=="success") {
+ result = getCloningGuests().concat(result);
if(result.length) {
var listHtml = '';
var guestTemplate = kimchi.guestTemplate;
@@ -233,14 +277,16 @@ kimchi.createGuestLi = function(vmObject, prevScreenImage, openMenu) {
imgLoad.attr('src',load_src);
//Link the stopped tile to the start action, the running tile to open the console
- if (vmRunningBool) {
- liveTile.off("click", kimchi.vmstart);
- liveTile.on("click", kimchi.openVmConsole);
- }
- else {
- liveTile.off("click", kimchi.openVmConsole);
- liveTile.on("click", kimchi.vmstart);
- liveTile.hover(function(event){$(this).find('.overlay').show()}, function(event){$(this).find('.overlay').hide()});
+ if(!vmObject.isCloning){
+ if (vmRunningBool) {
+ liveTile.off("click", kimchi.vmstart);
+ liveTile.on("click", kimchi.openVmConsole);
+ }
+ else {
+ liveTile.off("click", kimchi.openVmConsole);
+ liveTile.on("click", kimchi.vmstart);
+ liveTile.hover(function(event){$(this).find('.overlay').show()}, function(event){$(this).find('.overlay').hide()});
+ }
}
@@ -257,6 +303,7 @@ kimchi.createGuestLi = function(vmObject, prevScreenImage, openMenu) {
//Setup the VM Actions
var guestActions=result.find("div[name=guest-actions]");
guestActions.find(".shutoff-disabled").prop('disabled', !vmRunningBool );
+ guestActions.find(".running-disabled").prop('disabled', vmRunningBool );
if (vmRunningBool) {
guestActions.find(".running-hidden").hide();
@@ -286,6 +333,11 @@ kimchi.createGuestLi = function(vmObject, prevScreenImage, openMenu) {
}
guestActions.find("[name=vm-edit]").on({click : kimchi.vmedit});
guestActions.find("[name=vm-delete]").on({click : kimchi.vmdelete});
+ guestActions.find("[name=vm-clone]").click(function(){
+ kimchi.cloneGuest($(this).closest('li[name=guest]').attr("id"), function(data){
+ kimchi.listVmsAuto();
+ });
+ });
//Maintain menu open state
var actionMenu=guestActions.find("div[name=actionmenu]");
@@ -293,6 +345,11 @@ kimchi.createGuestLi = function(vmObject, prevScreenImage, openMenu) {
$('.popover', actionMenu).toggle();
}
+ if(vmObject.isCloning){
+ guestActions.children().hide();
+ result.find('.guest-clone').removeClass('hide-content');
+ }
+
return result;
};
diff --git a/ui/pages/guest.html.tmpl b/ui/pages/guest.html.tmpl
index 43fb350..74206fd 100644
--- a/ui/pages/guest.html.tmpl
+++ b/ui/pages/guest.html.tmpl
@@ -26,6 +26,9 @@
<div class="guest-general">
<h2 class="title" title="{name}">{name}</h2>
</div>
+ <div class="guest-clone hide-content">
+ <span class="icon"></span><span class="text">$_("Cloning")...</span>
+ </div>
</div>
<div name="cpu_utilization" class="sortable">
<div class="circleGauge"></div>
@@ -56,7 +59,8 @@
<span class="text">$_("Actions")</span><span class="arrow"></span>
<div class="popover actionsheet right-side" style="width: 250px">
<button class="button-big shutoff-disabled" name="vm-console" ><span class="text">$_("Connect")</span></button>
- <button class="button-big running-disabled" name="vm-edit"><span class="text">$_("Edit")</span></button>
+ <button class="button-big running-disabled" name="vm-clone"><span class="text">$_("Clone")</span></button>
+ <button class="button-big" name="vm-edit"><span class="text">$_("Edit")</span></button>
<button class="button-big shutoff-hidden" name="vm-reset"><span class="text">$_("Reset")</span></button>
<button class="button-big shutoff-hidden" name="vm-shutdown"><span class="text">$_("Shut Down")</span></button>
<button class="button-big running-hidden" name="vm-start"><span class="text">$_("Start")</span></button>
--
1.7.1
10 years, 2 months
Re: [Kimchi-devel] [PATCH 3/5] AsyncTask: Improve continuous status feedback
by Aline Manera
On 10/28/2014 04:48 AM, Zhou Zheng Sheng wrote:
> One of the advantages of AsyncTask is that the background task can call
> "_status_cb()" multiple times to report the interim progress. However,
> this is not properly implemented in "_status_cb()". The correct handling
> should be as follows.
>
> cb(message, True)
> Means task finished successfully.
>
> cb(message, False)
> Means task failed.
>
> cb(message)
> cb(message, None)
> Means task is ongoing and the invocation is to provide progress feedback.
>
> The current implementation fails to distinguish "cb(message, False)" and
> "cb(message)". This patch fixes the problem and adds a new test case.
>
> This patch also improves the "_wait_task()" methods in the tests, adding a
> regular status report to let developers know it is waiting for some task
> rather than being stuck on something. There are also two methods in the tests
> that start an AsyncTask but do not wait for it to finish; this can interfere
> with other tests, so the patch adds the missing "_wait_task()" invocations for them.
>
> Signed-off-by: Zhou Zheng Sheng <zhshzhou(a)linux.vnet.ibm.com>
> ---
> src/kimchi/asynctask.py | 4 ++--
> tests/test_mockmodel.py | 14 ++++++++++++++
> tests/test_model.py | 31 ++++++++++++++++++++++++++-----
> tests/test_rest.py | 7 +++++++
> 4 files changed, 49 insertions(+), 7 deletions(-)
>
> diff --git a/src/kimchi/asynctask.py b/src/kimchi/asynctask.py
> index 99e7a64..f8e55c0 100644
> --- a/src/kimchi/asynctask.py
> +++ b/src/kimchi/asynctask.py
> @@ -49,9 +49,9 @@ class AsyncTask(object):
> self._save_helper()
> return
>
> - if success:
> + if success == True:
> self.status = 'finished'
> - else:
> + elif success ==False:
> self.status = 'failed'
> self.message = message
> self._save_helper()
> diff --git a/tests/test_mockmodel.py b/tests/test_mockmodel.py
> index 8de7ce9..3158c48 100644
> --- a/tests/test_mockmodel.py
> +++ b/tests/test_mockmodel.py
> @@ -27,6 +27,7 @@ import unittest
> import kimchi.mockmodel
> from utils import get_free_port, patch_auth, request, run_server
> from kimchi.control.base import Collection, Resource
> +from kimchi.utils import kimchi_log
>
>
> test_server = None
> @@ -174,3 +175,16 @@ class MockModelTests(unittest.TestCase):
> task = model.host_swupdate()
> task_params = [u'id', u'message', u'status', u'target_uri']
> self.assertEquals(sorted(task_params), sorted(task.keys()))
> + self._wait_task(model, task['id'])
> +
> + def _wait_task(self, model, taskid, timeout=5):
> + for i in range(0, timeout):
> + task_info = model.task_lookup(taskid)
> + if task_info['status'] == "running":
> + kimchi_log.info("Waiting task %s, message: %s",
> + task_info['id'], task_info['message'])
> + time.sleep(1)
> + else:
> + return
> + kimchi_log.error("Timeout while process long-run task, "
> + "try to increase timeout value.")
We could move it to tests/utils.py to avoid duplicating code when a Task
needs to be watched.
> diff --git a/tests/test_model.py b/tests/test_model.py
> index e407fe5..c0e8cb5 100644
> --- a/tests/test_model.py
> +++ b/tests/test_model.py
> @@ -42,7 +42,7 @@ from kimchi.exception import InvalidParameter, NotFoundError, OperationFailed
> from kimchi.iscsi import TargetClient
> from kimchi.model import model
> from kimchi.rollbackcontext import RollbackContext
> -from kimchi.utils import add_task
> +from kimchi.utils import add_task, kimchi_log
>
>
> invalid_repository_urls = ['www.fedora.org', # missing protocol
> @@ -285,8 +285,9 @@ class ModelTests(unittest.TestCase):
> 'capacity': 1024,
> 'allocation': 512,
> 'format': 'qcow2'}
> - inst.storagevolumes_create(pool, params)
> + task_id = inst.storagevolumes_create(pool, params)['id']
> rollback.prependDefer(inst.storagevolume_delete, pool, vol)
> + self._wait_task(inst, task_id)
>
> vm_name = 'kimchi-cdrom'
> params = {'name': 'test', 'disks': [], 'cdrom': self.kimchi_iso}
> @@ -1107,6 +1108,13 @@ class ModelTests(unittest.TestCase):
> except:
> cb("Exception raised", False)
>
> + def continuous_ops(cb, params):
> + cb("step 1 OK")
> + time.sleep(2)
> + cb("step 2 OK")
> + time.sleep(2)
> + cb("step 3 OK", params.get('result', True))
> +
> inst = model.Model('test:///default',
> objstore_loc=self.tmp_store)
> taskid = add_task('', quick_op, inst.objstore, 'Hello')
> @@ -1131,6 +1139,12 @@ class ModelTests(unittest.TestCase):
> inst.task_lookup(taskid)['message'])
> self.assertEquals('failed', inst.task_lookup(taskid)['status'])
>
> + taskid = add_task('', continuous_ops, inst.objstore,
> + {'result': True})
> + self.assertEquals('running', inst.task_lookup(taskid)['status'])
> + self._wait_task(inst, taskid, timeout=10)
> + self.assertEquals('finished', inst.task_lookup(taskid)['status'])
> +
> # This wrapper function is needed due to the new backend messaging in
> # vm model. vm_poweroff and vm_delete raise exception if vm is not found.
> # These functions are called after vm has been deleted if test finishes
> @@ -1253,9 +1267,16 @@ class ModelTests(unittest.TestCase):
> raise e
>
> def _wait_task(self, model, taskid, timeout=5):
> - for i in range(0, timeout):
> - if model.task_lookup(taskid)['status'] == 'running':
> - time.sleep(1)
> + for i in range(0, timeout):
> + task_info = model.task_lookup(taskid)
> + if task_info['status'] == "running":
> + kimchi_log.info("Waiting task %s, message: %s",
> + taskid, task_info['message'])
> + time.sleep(1)
> + else:
> + return
> + kimchi_log.error("Timeout while process long-run task, "
> + "try to increase timeout value.")
>
> def test_get_distros(self):
> inst = model.Model('test:///default',
> diff --git a/tests/test_rest.py b/tests/test_rest.py
> index 8de0a9c..649790b 100644
> --- a/tests/test_rest.py
> +++ b/tests/test_rest.py
> @@ -37,6 +37,7 @@ import kimchi.mockmodel
> import kimchi.server
> from kimchi.config import paths
> from kimchi.rollbackcontext import RollbackContext
> +from kimchi.utils import kimchi_log
> from utils import get_free_port, patch_auth, request
> from utils import run_server
>
> @@ -1535,7 +1536,13 @@ class RestTests(unittest.TestCase):
> for i in range(0, timeout):
> task = json.loads(self.request('/tasks/%s' % taskid).read())
> if task['status'] == 'running':
> + kimchi_log.info("Waiting task %d, message: %s",
> + taskid, task['message'])
> time.sleep(1)
> + else:
> + return
> + kimchi_log.error("Timeout while process long-run task, "
> + "try to increase timeout value.")
>
> def test_tasks(self):
> id1 = model.add_task('/tasks/1', self._async_op)
10 years, 2 months
vol_path = inst.storagevolume_lookup('default', vol)['path']
@@ -285,8 +286,9 @@ class ModelTests(unittest.TestCase):
'capacity': 1024,
'allocation': 512,
'format': 'qcow2'}
- inst.storagevolumes_create(pool, params)
+ task_id = inst.storagevolumes_create(pool, params)['id']
rollback.prependDefer(inst.storagevolume_delete, pool, vol)
+ wait_task(inst.task_lookup, task_id)
vm_name = 'kimchi-cdrom'
params = {'name': 'test', 'disks': [], 'cdrom': self.kimchi_iso}
@@ -558,7 +560,7 @@ class ModelTests(unittest.TestCase):
params['name'] = vol
task_id = inst.storagevolumes_create(pool, params)['id']
rollback.prependDefer(inst.storagevolume_delete, pool, vol)
- self._wait_task(inst, task_id)
+ wait_task(inst.task_lookup, task_id)
self.assertEquals('finished', inst.task_lookup(task_id)['status'])
fd, path = tempfile.mkstemp(dir=path)
@@ -599,7 +601,7 @@ class ModelTests(unittest.TestCase):
taskid = task_response['id']
vol_name = task_response['target_uri'].split('/')[-1]
self.assertEquals('COPYING', vol_name)
- self._wait_task(inst, taskid, timeout=60)
+ wait_task(inst.task_lookup, taskid, timeout=60)
self.assertEquals('finished', inst.task_lookup(taskid)['status'])
vol_path = os.path.join(args['path'], vol_name)
self.assertTrue(os.path.isfile(vol_path))
@@ -1120,10 +1122,17 @@ class ModelTests(unittest.TestCase):
except:
cb("Exception raised", False)
+ def continuous_ops(cb, params):
+ cb("step 1 OK")
+ time.sleep(2)
+ cb("step 2 OK")
+ time.sleep(2)
+ cb("step 3 OK", params.get('result', True))
+
inst = model.Model('test:///default',
objstore_loc=self.tmp_store)
taskid = add_task('', quick_op, inst.objstore, 'Hello')
- self._wait_task(inst, taskid)
+ wait_task(inst.task_lookup, taskid)
self.assertEquals(1, taskid)
self.assertEquals('finished', inst.task_lookup(taskid)['status'])
self.assertEquals('Hello', inst.task_lookup(taskid)['message'])
@@ -1134,16 +1143,22 @@ class ModelTests(unittest.TestCase):
self.assertEquals(2, taskid)
self.assertEquals('running', inst.task_lookup(taskid)['status'])
self.assertEquals('OK', inst.task_lookup(taskid)['message'])
- self._wait_task(inst, taskid)
+ wait_task(inst.task_lookup, taskid)
self.assertEquals('failed', inst.task_lookup(taskid)['status'])
self.assertEquals('It was not meant to be',
inst.task_lookup(taskid)['message'])
taskid = add_task('', abnormal_op, inst.objstore, {})
- self._wait_task(inst, taskid)
+ wait_task(inst.task_lookup, taskid)
self.assertEquals('Exception raised',
inst.task_lookup(taskid)['message'])
self.assertEquals('failed', inst.task_lookup(taskid)['status'])
+ taskid = add_task('', continuous_ops, inst.objstore,
+ {'result': True})
+ self.assertEquals('running', inst.task_lookup(taskid)['status'])
+ wait_task(inst.task_lookup, taskid, timeout=10)
+ self.assertEquals('finished', inst.task_lookup(taskid)['status'])
+
# This wrapper function is needed due to the new backend messaging in
# vm model. vm_poweroff and vm_delete raise exception if vm is not found.
# These functions are called after vm has been deleted if test finishes
@@ -1247,7 +1262,7 @@ class ModelTests(unittest.TestCase):
task = inst.debugreports_create({'name': reportName})
rollback.prependDefer(inst.debugreport_delete, tmp_name)
taskid = task['id']
- self._wait_task(inst, taskid, timeout)
+ wait_task(inst.task_lookup, taskid, timeout)
self.assertEquals('finished',
inst.task_lookup(taskid)['status'],
"It is not necessary an error. "
@@ -1265,11 +1280,6 @@ class ModelTests(unittest.TestCase):
if 'debugreport tool not found' not in e.message:
raise e
- def _wait_task(self, model, taskid, timeout=5):
- for i in range(0, timeout):
- if model.task_lookup(taskid)['status'] == 'running':
- time.sleep(1)
-
def test_get_distros(self):
inst = model.Model('test:///default',
objstore_loc=self.tmp_store)
diff --git a/tests/test_rest.py b/tests/test_rest.py
index 8de0a9c..60dce2f 100644
--- a/tests/test_rest.py
+++ b/tests/test_rest.py
@@ -38,7 +38,7 @@ import kimchi.server
from kimchi.config import paths
from kimchi.rollbackcontext import RollbackContext
from utils import get_free_port, patch_auth, request
-from utils import run_server
+from utils import run_server, wait_task
test_server = None
@@ -1041,7 +1041,7 @@ class RestTests(unittest.TestCase):
task = json.loads(resp.read())
vol_name = task['target_uri'].split('/')[-1]
self.assertEquals('anyurl.wor.kz', vol_name)
- self._wait_task(task['id'])
+ wait_task(self._task_lookup, task['id'])
task = json.loads(self.request('/tasks/%s' % task['id']).read())
self.assertEquals('finished', task['status'])
resp = self.request('/storagepools/pool-1/storagevolumes/%s' %
@@ -1069,7 +1069,7 @@ class RestTests(unittest.TestCase):
req, 'POST')
self.assertEquals(202, resp.status)
task_id = json.loads(resp.read())['id']
- self._wait_task(task_id)
+ wait_task(self._task_lookup, task_id)
status = json.loads(self.request('/tasks/%s' % task_id).read())
self.assertEquals('finished', status['status'])
@@ -1531,11 +1531,8 @@ class RestTests(unittest.TestCase):
'DELETE')
self.assertEquals(204, resp.status)
- def _wait_task(self, taskid, timeout=5):
- for i in range(0, timeout):
- task = json.loads(self.request('/tasks/%s' % taskid).read())
- if task['status'] == 'running':
- time.sleep(1)
+ def _task_lookup(self, taskid):
+ return json.loads(self.request('/tasks/%s' % taskid).read())
def test_tasks(self):
id1 = model.add_task('/tasks/1', self._async_op)
@@ -1550,12 +1547,12 @@ class RestTests(unittest.TestCase):
tasks = json.loads(self.request('/tasks').read())
tasks_ids = [int(t['id']) for t in tasks]
self.assertEquals(set([id1, id2, id3]) - set(tasks_ids), set([]))
- self._wait_task(id2)
+ wait_task(self._task_lookup, id2)
foo2 = json.loads(self.request('/tasks/%s' % id2).read())
keys = ['id', 'status', 'message', 'target_uri']
self.assertEquals(sorted(keys), sorted(foo2.keys()))
self.assertEquals('failed', foo2['status'])
- self._wait_task(id3)
+ wait_task(self._task_lookup, id3)
foo3 = json.loads(self.request('/tasks/%s' % id3).read())
self.assertEquals('in progress', foo3['message'])
self.assertEquals('running', foo3['status'])
@@ -1717,7 +1714,7 @@ class RestTests(unittest.TestCase):
task = json.loads(resp.read())
# make sure the debugreport doesn't exist until the
# the task is finished
- self._wait_task(task['id'])
+ wait_task(self._task_lookup, task['id'])
rollback.prependDefer(self._report_delete, 'report2')
resp = request(host, ssl_port, '/debugreports/report1')
debugreport = json.loads(resp.read())
@@ -1736,7 +1733,7 @@ class RestTests(unittest.TestCase):
task = json.loads(resp.read())
# make sure the debugreport doesn't exist until the
# the task is finished
- self._wait_task(task['id'], 20)
+ wait_task(self._task_lookup, task['id'], 20)
rollback.prependDefer(self._report_delete, 'report1')
resp = request(host, ssl_port, '/debugreports/report1')
debugreport = json.loads(resp.read())
@@ -1928,7 +1925,7 @@ class RestTests(unittest.TestCase):
self.assertEquals(r.status_code, 202)
task = r.json()
- self._wait_task(task['id'])
+ wait_task(self._task_lookup, task['id'])
resp = self.request('/storagepools/default/storagevolumes/%s' %
task['target_uri'].split('/')[-1])
self.assertEquals(200, resp.status)
@@ -1948,7 +1945,7 @@ class RestTests(unittest.TestCase):
self.assertEquals(r.status_code, 202)
task = r.json()
- self._wait_task(task['id'], 15)
+ wait_task(self._task_lookup, task['id'], 15)
resp = self.request('/storagepools/default/storagevolumes/%s' %
task['target_uri'].split('/')[-1])
diff --git a/tests/utils.py b/tests/utils.py
index 140bb1d..9133904 100644
--- a/tests/utils.py
+++ b/tests/utils.py
@@ -25,6 +25,7 @@ import json
import os
import socket
import sys
+import time
import threading
import unittest
@@ -36,6 +37,7 @@ import kimchi.mockmodel
import kimchi.server
from kimchi.config import paths
from kimchi.exception import OperationFailed
+from kimchi.utils import kimchi_log
_ports = {}
@@ -195,3 +197,16 @@ def patch_auth(sudo=True):
def normalize_xml(xml_str):
return etree.tostring(etree.fromstring(xml_str,
etree.XMLParser(remove_blank_text=True)))
+
+
+def wait_task(task_lookup, taskid, timeout=10):
+ for i in range(0, timeout):
+ task_info = task_lookup(taskid)
+ if task_info['status'] == "running":
+ kimchi_log.info("Waiting for task %s, message: %s",
+ taskid, task_info['message'])
+ time.sleep(1)
+ else:
+ return
+ kimchi_log.error("Timeout while processing long-running task; "
+ "try to increase the timeout value.")
--
1.9.3
10 years, 2 months
[PATCH] Number of CPUs in Host's Basic Information.
by Paulo Vital
Add support for reporting the number of online CPUs present in the
host system.
Also update the English help page to describe the new information.
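The patch reads the count from psutil.NUM_CPUS (the attribute in the psutil version targeted here; newer psutil releases deprecate it in favor of psutil.cpu_count()). A dependency-free sketch of the same lookup, using only the standard library — note that multiprocessing.cpu_count() reports the logical CPU count, which may differ from psutil's notion of online CPUs:

```python
import multiprocessing

def get_host_info():
    # Sketch of the new field added to the host lookup (illustrative
    # names; the real change is one line in HostModel.lookup()).
    res = {}
    res['num_cpu'] = multiprocessing.cpu_count()
    return res

info = get_host_info()
assert info['num_cpu'] >= 1
```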
Signed-off-by: Paulo Vital <pvital(a)linux.vnet.ibm.com>
---
src/kimchi/model/host.py | 1 +
ui/pages/help/en_US/host.dita | 4 ++--
ui/pages/tabs/host.html.tmpl | 4 ++++
3 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/src/kimchi/model/host.py b/src/kimchi/model/host.py
index 1bc3ca2..ca197c1 100644
--- a/src/kimchi/model/host.py
+++ b/src/kimchi/model/host.py
@@ -89,6 +89,7 @@ class HostModel(object):
res['cpu'] = line.split(':')[1].strip()
break
+ res['num_cpu'] = psutil.NUM_CPUS
res['memory'] = psutil.TOTAL_PHYMEM
# Include IBM PowerKVM name to supported distro names
diff --git a/ui/pages/help/en_US/host.dita b/ui/pages/help/en_US/host.dita
index 335c51c..0dcb670 100644
--- a/ui/pages/help/en_US/host.dita
+++ b/ui/pages/help/en_US/host.dita
@@ -24,8 +24,8 @@ to the host system, if it is not already connected.</li>
<dlentry>
<dt>Basic information</dt>
<dd>This section displays the host operating system distribution,
-version, and code name, as well as the processor type and amount of
-memory in GB.</dd>
+version, and code name, as well as the processor type, the number of
+online CPUs and amount of memory in GB.</dd>
</dlentry><dlentry>
<dt>System statistics</dt>
<dd>This section displays graphs to show statistics for CPU, memory,
diff --git a/ui/pages/tabs/host.html.tmpl b/ui/pages/tabs/host.html.tmpl
index 8641962..57c7cee 100644
--- a/ui/pages/tabs/host.html.tmpl
+++ b/ui/pages/tabs/host.html.tmpl
@@ -75,6 +75,10 @@
<div class="section-value">{cpu}</div>
</div>
<div class="section-row">
+ <div class="section-label">$_("CPU(s)")</div>
+ <div class="section-value">{num_cpu}</div>
+ </div>
+ <div class="section-row">
<div class="section-label">$_("Memory")</div>
<div class="section-value">{memory}</div>
</div>
--
1.9.3
10 years, 2 months
[PATCH v7 0/3] Backend support for templates with sockets, cores, and threads
by Christy Perez
v6->v7
- Add unit tests
- Rebase
v5->v6:
- Change cpu_info to empty dict instead of empty string if not specified
- Split translations into different commit
v4->v5:
- Fix format issues
- Remove if check for cpu_info in control, and set cpu_info
to the empty string in model if None
- Add a format-requirements error for the topology fields.
- Add new error for topology in API.json
- Update po files
v3->v4:
- Remove the unused cpu_ elements from common_spec
- Pass new_t into validate function to reduce complexity
- Rearrange code to decrease indents in _get_cpu_xml
v2->v3:
- Set vcpus based on topology, if specified.
- Move the cpu+topology update validation out to a function
for readability
- Add a minimum value of 1 for topology values
- Leave new English error msg as empty string
- Update the API documentation on cpu defaults
v1->v2:
- Added a check to make sure that vcpus = sockets*cores*threads
- Set individual topology params to required in API.json
- Change the topology object types from string to integer
- Always return cpu_info from templates lookup()
- Removed check for cpu_info in to_vm_xml
- Build cpu_info xml using lxml.builder instead of string
- CPU and topology verification on template update
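The vcpus consistency rule introduced in v1->v2 can be sketched as follows (a hypothetical helper with illustrative names — per the diffstat, the actual validation lives in src/kimchi/model/templates.py):

```python
def validate_topology(vcpus, sockets, cores, threads):
    # Each topology value must be a positive integer (the series adds a
    # minimum of 1), and their product must equal the vcpu count.
    for value in (sockets, cores, threads):
        if not isinstance(value, int) or value < 1:
            return False
    return vcpus == sockets * cores * threads

assert validate_topology(8, sockets=2, cores=2, threads=2)
assert not validate_topology(6, sockets=2, cores=2, threads=2)
```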
Christy Perez (3):
Backend support for templates with sockets, cores, and threads
model test for updating template
Translations for new cpu_info messages
docs/API.md | 13 ++++++-
po/de_DE.po | 76 +++++++++++++++++++++++++++++++++++----
po/en_US.po | 76 +++++++++++++++++++++++++++++++++++----
po/es_ES.po | 76 +++++++++++++++++++++++++++++++++++----
po/fr_FR.po | 76 +++++++++++++++++++++++++++++++++++----
po/it_IT.po | 76 +++++++++++++++++++++++++++++++++++----
po/ja_JP.po | 76 +++++++++++++++++++++++++++++++++++----
po/kimchi.pot | 76 +++++++++++++++++++++++++++++++++++----
po/ko_KR.po | 76 +++++++++++++++++++++++++++++++++++----
po/pt_BR.po | 75 ++++++++++++++++++++++++++++++++++----
po/ru_RU.po | 76 +++++++++++++++++++++++++++++++++++----
po/zh_CN.po | 79 +++++++++++++++++++++++++++++++++++------
po/zh_TW.po | 76 +++++++++++++++++++++++++++++++++++----
src/kimchi/API.json | 36 +++++++++++++++++--
src/kimchi/control/templates.py | 31 ++++++++--------
src/kimchi/i18n.py | 2 ++
src/kimchi/model/templates.py | 32 +++++++++++++++++
src/kimchi/osinfo.py | 5 ++-
src/kimchi/vmtemplate.py | 16 +++++++++
tests/test_model.py | 25 +++++++++++++
20 files changed, 966 insertions(+), 108 deletions(-)
--
1.9.3
10 years, 2 months