[PATCH 0/2 - v2] Implement front-end support to add multiple disks in Template
by Rodrigo Trujillo
V2 - Rebase over latest new UI patches
V1 - Initial version
Rodrigo Trujillo (2):
Allow listStorageVolumes ajax call to be synchronized
UI - Implement multiple disks support in Template edit window
src/wok/plugins/kimchi/ui/js/src/kimchi.api.js | 7 +-
.../ui/js/src/kimchi.guest_storage_add.main.js | 2 +-
.../kimchi/ui/js/src/kimchi.storage_main.js | 2 +-
.../kimchi/ui/js/src/kimchi.template_edit_main.js | 246 ++++++++++-----------
.../kimchi/ui/pages/template-edit.html.tmpl | 5 +-
5 files changed, 130 insertions(+), 132 deletions(-)
--
2.1.0
9 years, 1 month
RFC - #728 - Processor Info in s390 architecture
by Suresh Babu Angadi
Hi Team,
With the current implementation of gingerbase, the processor information
is empty for the s390 architecture.
Output of /plugins/gingerbase/host on one of the LPARs running KVM for IBM z
Systems:
{
    "os_distro": "KVM for IBM z Systems",
    "os_codename": "Z",
    "os_version": "1.1.1",
    "cpus": 3,
    "memory": 4000559104
}
Note: cpu_model is not fetched for the s390x architecture.
Additional information that could be provided for the benefit of end users:
Architecture details: "architecture": "x86 or s390x or ppc, etc."
CPU details (for x86 and s390x): show the online CPU count and the offline
CPU count (the plan is to use lscpu for x86 and s390x; it is not clear yet
how to fetch these details on Power PC). Display offline CPUs only if any
processors are offline (see the parsing sketch after the virtualization
details below).
"cpus": {"online": <integer(online_cpus_count)>,
         "offline": <integer(offline_cpus_count)>}
Processor type (s390x): shared and dedicated CPUs.
"cpus": {"online": <integer(online_cpus_count)>,
         "offline": <integer(offline_cpus_count)>,
         "shared": <integer(shared_cpus_count)>,
         "dedicated": <integer(dedicated_cpus_count)>}
Host name (for all architectures): currently the IP address is displayed in
the new UI (below the Kimchi logo). Displaying the host name would be more
meaningful.
Memory details (s390x specific): in the s390x architecture memory can be
online or offline, so the memory field would be split into total online and
total offline memory (see the parsing sketch below).
Example:
[root@ ~]# lsmem
Address Range                          Size (MB)  State    Removable  Device
===============================================================================
0x0000000000000000-0x000000000fffffff        256  online   no         0
0x0000000010000000-0x000000002fffffff        512  online   yes        1-2
0x0000000030000000-0x000000007fffffff       1280  online   no         3-7
0x0000000080000000-0x00000000ffffffff       2048  offline  -          8-15
Memory device size  : 256 MB
Memory block size   : 256 MB
Total online memory : 2048 MB
Total offline memory: 2048 MB
"memory": {"online": <total_online_memory>, "offline": <total_offline_memory>}
Virtualization details (s390x specific): provide the hypervisor and
hypervisor vendor when virtualized (for example, hypervisor: PR/SM,
hypervisor vendor: IBM), plus the LPAR details (LPAR name and LPAR ID).
"virtualization": {"hypervisor": "PR/SM",
                   "hypervisor_vendor": "IBM",
                   "lpar_name": "LP6",
                   "lpar_number": "22"}
So the output JSON of /plugins/gingerbase/host would look like the following.
For x86:
{
    "os_distro": "Fedora",
    "architecture": "x86_64",
    "cpu_model": "Intel(R) Core(TM) i5-3320M CPU @ 2.60GHz",
    "cpus": {"online": 3, "offline": 2},
    "os_version": "21",
    "os_codename": "Twenty One",
    "memory": 7933894656,
    "host": <host_name>
}
For Power:
The only change is the addition of the "architecture" and "host" fields,
unless we decide to show offline CPUs.
For s390x:
{
    "os_distro": "KVM for IBM z Systems",
    "os_codename": "Z",
    "os_version": "1.1.1",
    "architecture": "s390x",
    "cpu_model": "IBM / 2827 / 743 H43",
    "cpus": {"online": 2,
             "offline": 2,
             "shared": 2,
             "dedicated": 0},
    "memory": {"online": 2147483628,
               "offline": 2147483628},
    "virtualization": {"hypervisor": "PR/SM",
                       "hypervisor_vendor": "IBM",
                       "lpar_name": "LP6",
                       "lpar_number": "22"},
    "host": "zfwpc160"
}
--
Regards,
Suresh Babu Angadi
9 years, 1 month
[RFC] UI - Live migration UI design
by Daniel Henrique Barboza
Hi,
The backend for this feature is almost complete and the API is defined, so
let's talk about the UI.
This is the API (note: the upstream version is outdated and will be updated
in an upcoming patch):
**URI:** /plugins/kimchi/vms/*:name*
* **POST**: *See Virtual Machine Actions*
**Actions (POST):**
* migrate: Migrate a virtual machine to a remote server; only live
migration without block migration is supported.
* remote_host: IP address or hostname of the remote server.
* user *(optional)*: User to log in with on the remote server.
* password *(optional)*: Password of that user on the remote server.
This API will return a task ID for the UI to track its progress, pretty much
like it is done with the 'clone' feature (a request sketch follows below).
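For illustration only (not part of the RFC), a minimal sketch of what the
request could look like from a Python client, assuming the action follows
the same POST /plugins/kimchi/vms/<name>/<action> pattern used by 'clone';
the server URL, VM name and credentials below are made up, and
authentication handling is omitted:

import requests

WOK_URL = 'https://localhost:8001'    # hypothetical Wok/Kimchi server
VM_NAME = 'my-guest'                  # hypothetical guest name

# Assuming POST /plugins/kimchi/vms/<name>/migrate returns a task resource
# whose progress the UI can poll, just like 'clone' does.
resp = requests.post(
    '%s/plugins/kimchi/vms/%s/migrate' % (WOK_URL, VM_NAME),
    json={
        'remote_host': 'kvm-host2.example.com',  # required
        'user': 'root',                          # optional
        'password': 'secret',                    # optional
    },
    headers={'Accept': 'application/json'},
    verify=False)  # test setups commonly use self-signed certificates

task = resp.json()
print(task.get('id'), task.get('status'))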
This is how I imagine the UI:
- a button called "migrate" in the same submenu as clone VM.
- when pressed, a new window appears with the following content:
--------
"Disclaimer: this process cannot be stopped after it has started, can take a
long time to complete, and will turn off the current VM when it is over."
* text field to input the remote host, labeled "remote server"
* checkbox with the text: "Delete this VM when the migration is completed"
Text: "The following fields are optional. Fill them in if you want Kimchi to
set up a password-less SSH session between the local host and the
remote host. The setup process will only be successful if the user
has 'SUDO ALL' permission on the remote machine."
* text field to input the username on the remote host
* password field to input the password of that user on the remote host
* "Cancel" and "Start" buttons at the bottom
-------------
This is the workflow/behavior I would expect of it:
- clicking "Cancel" at any time will dismiss the window and nothing happens;
- clicking "Start" with an empty remote host field will issue an error such
as "remote host field cannot be blank";
- clicking "Start" with the remote host field filled will start the process;
- clicking "Start" with the remote host and only one of the user or password
fields filled will raise the error "both user and password fields must be
filled";
- if, **and only if**, the user checks the "Delete this VM ..." checkbox,
the UI will delete the VM after the migration process is completed by using
the proper API (I believe it is DELETE /vms/name).
This is all I have for the UI of this feature at the moment. I'll update this
RFC if required.
Please provide comments and suggestions. The final backend for this feature
will be submitted to the ML at the start of next week.
Let me know if there are any doubts about how Kimchi's live migration backend
works.
Daniel
9 years, 1 month
[PATCH 1/2] Allow listStorageVolumes ajax call to be synchronized
by Rodrigo Trujillo
This patch changes the kimchi.listStorageVolumes function to receive a new
parameter, 'sync', which allows the call to be made synchronously.
Signed-off-by: Rodrigo Trujillo <rodrigo.trujillo(a)linux.vnet.ibm.com>
---
src/wok/plugins/kimchi/ui/js/src/kimchi.api.js | 7 ++++---
src/wok/plugins/kimchi/ui/js/src/kimchi.guest_storage_add.main.js | 2 +-
src/wok/plugins/kimchi/ui/js/src/kimchi.storage_main.js | 2 +-
3 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/src/wok/plugins/kimchi/ui/js/src/kimchi.api.js b/src/wok/plugins/kimchi/ui/js/src/kimchi.api.js
index b1acdb9..d5e9ddd 100644
--- a/src/wok/plugins/kimchi/ui/js/src/kimchi.api.js
+++ b/src/wok/plugins/kimchi/ui/js/src/kimchi.api.js
@@ -393,12 +393,13 @@ var kimchi = {
});
},
- listStorageVolumes : function(poolName, suc, err) {
+ listStorageVolumes : function(poolName, suc, err, sync) {
$.ajax({
url : 'plugins/kimchi/storagepools/' + encodeURIComponent(poolName) + '/storagevolumes',
type : 'GET',
contentType : 'application/json',
dataType : 'json',
+ async : !sync,
success : suc,
error : err
});
@@ -452,7 +453,7 @@ var kimchi = {
return;
}
suc(isos, true);
- }, err);
+ }, err, false);
} else if (status === "running") {
if (deepScanHandler.stop) {
return;
@@ -463,7 +464,7 @@ var kimchi = {
}
suc(isos, false);
setTimeout(monitorTask, 2000);
- }, err);
+ }, err, false);
} else if (status === "failed") {
if (deepScanHandler.stop) {
return;
diff --git a/src/wok/plugins/kimchi/ui/js/src/kimchi.guest_storage_add.main.js b/src/wok/plugins/kimchi/ui/js/src/kimchi.guest_storage_add.main.js
index ec07cf0..d6606ee 100644
--- a/src/wok/plugins/kimchi/ui/js/src/kimchi.guest_storage_add.main.js
+++ b/src/wok/plugins/kimchi/ui/js/src/kimchi.guest_storage_add.main.js
@@ -94,7 +94,7 @@ kimchi.guest_storage_add_main = function() {
}
}
$('#guest-disk').selectMenu("setData", options);
- });
+ }, null, false);
});
diff --git a/src/wok/plugins/kimchi/ui/js/src/kimchi.storage_main.js b/src/wok/plugins/kimchi/ui/js/src/kimchi.storage_main.js
index 40a43f6..cba8fb4 100644
--- a/src/wok/plugins/kimchi/ui/js/src/kimchi.storage_main.js
+++ b/src/wok/plugins/kimchi/ui/js/src/kimchi.storage_main.js
@@ -282,7 +282,7 @@ kimchi.doListVolumes = function(poolObj) {
slide.slideDown('slow');
}, function(err) {
wok.message.error(err.responseJSON.reason);
- });
+ }, false);
}
kimchi.initLogicalPoolExtend = function() {
--
2.1.0
9 years, 1 month
[PATCH 0/4] Ginger Base stabilization
by Aline Manera
Aline Manera (4):
Remove Kimchi references from Ginger Base
Update gingerbase po files
Update COPYING file for Ginger Base
Fix make check-local for Ginger Base
src/wok/plugins/gingerbase/COPYING | 10 +-
src/wok/plugins/gingerbase/model/host.py | 4 +-
src/wok/plugins/gingerbase/model/model.py | 4 +-
src/wok/plugins/gingerbase/po/Makevars | 2 +-
src/wok/plugins/gingerbase/po/de_DE.po | 1687 +-----------------
src/wok/plugins/gingerbase/po/en_US.po | 1884 +++----------------
src/wok/plugins/gingerbase/po/es_ES.po | 1708 +-----------------
src/wok/plugins/gingerbase/po/fr_FR.po | 1737 +-----------------
src/wok/plugins/gingerbase/po/gingerbase.pot | 1885 +++-----------------
src/wok/plugins/gingerbase/po/it_IT.po | 1673 +----------------
src/wok/plugins/gingerbase/po/ja_JP.po | 1683 +----------------
src/wok/plugins/gingerbase/po/ko_KR.po | 1613 +----------------
src/wok/plugins/gingerbase/po/pt_BR.po | 1749 +-----------------
src/wok/plugins/gingerbase/po/ru_RU.po | 1603 +----------------
src/wok/plugins/gingerbase/po/zh_CN.po | 1644 ++---------------
src/wok/plugins/gingerbase/po/zh_TW.po | 1572 +---------------
src/wok/plugins/gingerbase/swupdate.py | 6 +-
src/wok/plugins/gingerbase/tests/test_host.py | 4 +-
src/wok/plugins/gingerbase/tests/test_model.py | 2 +-
src/wok/plugins/gingerbase/tests/utils.py | 6 +-
.../plugins/gingerbase/ui/js/src/gingerbase.api.js | 14 +-
.../gingerbase/ui/js/src/gingerbase.host.js | 126 +-
.../gingerbase/ui/js/src/gingerbase.main.js | 10 +-
.../ui/js/src/gingerbase.report_add_main.js | 10 +-
.../ui/js/src/gingerbase.report_rename_main.js | 10 +-
.../ui/js/src/gingerbase.repository_add_main.js | 6 +-
.../ui/js/src/gingerbase.repository_edit_main.js | 12 +-
src/wok/plugins/gingerbase/ui/pages/host.html.tmpl | 4 +-
.../gingerbase/ui/pages/report-add.html.tmpl | 2 +-
.../gingerbase/ui/pages/report-rename.html.tmpl | 2 +-
.../gingerbase/ui/pages/repository-add.html.tmpl | 2 +-
.../gingerbase/ui/pages/repository-edit.html.tmpl | 2 +-
32 files changed, 1366 insertions(+), 19310 deletions(-)
--
2.5.0
9 years, 1 month
[PATCH 0/3] Some fixes for Kimchi
by Aline Manera
Aline Manera (3):
Fix issue #734: Update test case as libvirt issue was fixed
Fix test case: Delete directory associated to the storage pool
Kimchi: Add m4/pkg.m4 to .gitignore
src/wok/plugins/kimchi/.gitignore | 1 +
src/wok/plugins/kimchi/tests/test_model_storagepool.py | 6 ++++--
.../plugins/kimchi/tests/test_model_storagevolume.py | 17 +++++------------
3 files changed, 10 insertions(+), 14 deletions(-)
--
2.5.0
9 years, 1 month
[PATCH 0/2] Check memory alignment to 256 MiB and number of slots in PPC
by Rodrigo Trujillo
This patch set makes the necessary changes to check and enforce, on PPC, that:
- memory and maxMemory are aligned to 256 MiB
- the number of memory slots is <= 32
A sketch of these two rules is shown below.
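This is not the patch itself, just a minimal sketch of the two rules being
enforced; whether the real code rounds values or rejects them is up to the
patch, and the helper names are hypothetical:

PPC_MEM_ALIGN = 256   # MiB
PPC_MAX_SLOTS = 32

def align_memory_ppc(mem_mib):
    """Round a memory value (in MiB) up to the next 256 MiB multiple."""
    remainder = mem_mib % PPC_MEM_ALIGN
    return mem_mib + (PPC_MEM_ALIGN - remainder) if remainder else mem_mib

def clamp_slots_ppc(slots):
    """Cap the number of memory slots at 32 on PowerPC."""
    return min(slots, PPC_MAX_SLOTS)

# Example: 1000 MiB is not aligned and would become 1024 MiB.
assert align_memory_ppc(1000) == 1024
assert align_memory_ppc(2048) == 2048
assert clamp_slots_ppc(40) == 32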
Rodrigo Trujillo (2):
Check memory alignment in PowerPC to 256MiB
Check and align number of memory slot to 32 in PowerPC
src/wok/plugins/kimchi/i18n.py | 1 +
src/wok/plugins/kimchi/model/vms.py | 33 +++++++++++++++++++++++++++++++++
src/wok/plugins/kimchi/osinfo.py | 4 ++++
src/wok/plugins/kimchi/vmtemplate.py | 7 ++++++-
4 files changed, 44 insertions(+), 1 deletion(-)
--
2.1.0
9 years, 1 month
[PATCH 0/2] UI fixes for guest creation/cloning
by Aline Manera
Aline Manera (2):
Bug fix: Properly define kimchi.trackTask()
Bug fix: Properly get the new guest name while creating or cloning a
guest
src/wok/plugins/kimchi/ui/js/src/kimchi.api.js | 26 ++++++++++++++++++++++
.../plugins/kimchi/ui/js/src/kimchi.guest_main.js | 4 ++--
2 files changed, 28 insertions(+), 2 deletions(-)
--
2.5.0
9 years, 1 month
[PATCH] Bug fix: Use qxl video model for Fedora 22+ guests
by Aline Manera
After creating a Fedora 23 guest, the login screen was not displayed
due to the wrong video model being configured by Kimchi. Since the qxl video
model is the default starting with Fedora 22, update Kimchi to set it for
every Fedora guest at version 22 or newer.
For reference: https://github.com/kimchi-project/kimchi/issues/647
Signed-off-by: Aline Manera <alinefm(a)linux.vnet.ibm.com>
---
src/wok/plugins/kimchi/osinfo.py | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/src/wok/plugins/kimchi/osinfo.py b/src/wok/plugins/kimchi/osinfo.py
index 75a21ff..7f8ace9 100644
--- a/src/wok/plugins/kimchi/osinfo.py
+++ b/src/wok/plugins/kimchi/osinfo.py
@@ -216,7 +216,10 @@ def lookup(distro, version):
params.update(template_specs[arch]['old'])
# Get custom specifications
- params.update(custom_specs.get(distro, {}).get(version, {}))
+ specs = custom_specs.get(distro, {})
+ for v, config in specs.iteritems():
+ if LooseVersion(version) >= LooseVersion(v):
+ params.update(config)
if distro in icon_available_distros:
params['icon'] = 'plugins/kimchi/images/icon-%s.png' % distro
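For illustration only (not part of the patch), the version comparison the new
loop relies on behaves as follows, assuming osinfo.py imports LooseVersion
from distutils.version:

from distutils.version import LooseVersion

# Custom specs defined for a given version now apply to that version and
# every newer one, instead of only to an exact match.
assert LooseVersion('23') >= LooseVersion('22')        # Fedora 23 gets qxl
assert LooseVersion('22') >= LooseVersion('22')        # Fedora 22 gets qxl
assert not (LooseVersion('21') >= LooseVersion('22'))  # Fedora 21 is unchanged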
--
2.5.0
9 years, 1 month