[PATCH] [Kimchi] Fixed "Add a Storage Device to VM" modal behavior when guest is running
by sguimaraes943@gmail.com
From: Samuel Guimarães <sguimaraes943(a)gmail.com>
This patch fixes an issue when trying to attach a storage device when a guest is running. The combo box was showing the correct option but the disk panel was hidden.
Samuel Guimarães (1):
Fixed "Add a Storage Device to VM" modal behavior when guest is running
ui/js/src/kimchi.guest_storage_add.main.js | 38 ++++++++++++++++--------------
ui/pages/guest-storage-add.html.tmpl | 2 +-
2 files changed, 21 insertions(+), 19 deletions(-)
--
1.9.3
8 years, 8 months
[PATCH] [Kimchi] Fixed Memory and Max Memory values when adding a new Guest
by sguimaraes943@gmail.com
From: Samuel Guimarães <sguimaraes943(a)gmail.com>
Fixed Memory and Max Memory values when adding a new Guest
Samuel Guimarães (1):
Fixed Memory and Max Memory values when adding a new Guest
ui/pages/guest-add.html.tmpl | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
--
1.9.3
Unable to activate PCI devices
by Samuel Henrique De Oliveira Guimaraes
Hi team,
I was trying to find a fix for enabling PCI devices when editing a guest (and for the input filter as well), but so far I'm unable to test it with any VM that I try to run:
"KCHVMHDEV0003E: No IOMMU groups found. Host PCI pass through needs IOMMU group to function correctly. Please enable Intel VT-d or AMD IOMMU in your BIOS, then verify the Kernel is compiled with IOMMU support. For Intel CPU, add 'intel_iommu=on' to GRUB_CMDLINE_LINUX parameter in /etc/default/grub file. For AMD CPU, add 'iommu=pt iommu=1'."
Is this correct?
Thanks,
Samuel
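[Editorial note: as a quick sanity check for the error above, one can verify whether the kernel has actually populated any IOMMU groups. This is a hypothetical shell snippet, not part of the original mail; the helper name is made up.]

```shell
# Hypothetical helper: does a sysfs directory contain any IOMMU groups?
has_iommu_groups() {
    dir="${1:-/sys/kernel/iommu_groups}"
    [ -d "$dir" ] && [ -n "$(ls -A "$dir" 2>/dev/null)" ]
}

# On a host with VT-d/AMD-Vi enabled this lists the group numbers;
# on a host triggering KCHVMHDEV0003E the directory is empty or missing.
if has_iommu_groups; then
    ls /sys/kernel/iommu_groups
else
    echo "No IOMMU groups found"
fi
```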
Re: [Kimchi-devel] [ginger-dev-list] [PATCH V3] [Gingerbase 0/4] Issue #9 - Update selected packages.
by Paulo Ricardo Paz Vital
On Fri, Mar 11, 2016 at 06:55:02PM -0300, Aline Manera wrote:
>
>
> On 03/11/2016 04:48 PM, Aline Manera wrote:
> >
> >
> >On 03/10/2016 05:51 PM, Paulo Ricardo Paz Vital wrote:
> >>
> >>On 03/10/2016 04:23 PM, Daniel Henrique Barboza wrote:
> >>>I've tested the patch in a real environment (my dev PC) and noticed
> >>>the following behavior:
> >>>
> >>>- pick a single package to be updated. In this case I've chosen the
> >>>'libv4l' package:
> >>>
> >>>[danielhb@arthas gingerbase]$ curl -k -u root -H "Content-Type:
> >>>application/json" -H "Accept: application/json"
> >>>'https://localhost:8001/plugins/gingerbase/host/packagesupdate/libv4l'
> >>>-X GET
> >>>Enter host password for user 'root':
> >>>{
> >>> "depends":[
> >>> "libv4l"
> >>> ],
> >>> "version":"1.10.0-2.fc23",
> >>> "arch":"i686",
> >>> "repository":"updates",
> >>> "package_name":"libv4l"
> >>>}
> >>>
> >>>[danielhb@arthas gingerbase]$ curl -k -u root -H "Content-Type:
> >>>application/json" -H "Accept: application/json"
> >>>'https://localhost:8001/plugins/gingerbase/host/packagesupdate/libv4l/upgrade'
> >>>
> >>>-X POST -d'{}'
> >>>Enter host password for user 'root':
> >>>{
> >>> "status":"running",
> >>> "message":"",
> >>> "id":"1",
> >>>"target_uri":"/plugins/gingerbase/host/packagesupdate/libv4l/upgrade"
> >>>}[danielhb@arthas gingerbase]$
> >>>
> >>>
> >>>- after the update process, the lookup was still returning package
> >>>info,
> >>>as if it
> >>>was still available for updating:
> >>>
> >>>[danielhb@arthas gingerbase]$ curl -k -u root -H "Content-Type:
> >>>application/json" -H "Accept: application/json"
> >>>'https://localhost:8001/plugins/gingerbase/host/packagesupdate/libv4l'
> >>>-X GET
> >>>Enter host password for user 'root':
> >>>{
> >>> "depends":[
> >>> "libv4l"
> >>> ],
> >>> "version":"1.10.0-2.fc23",
> >>> "arch":"i686",
> >>> "repository":"updates",
> >>> "package_name":"libv4l"
> >>>}
> >>>
> >>>- after that I've listed all the packages to be updated, wondering if
> >>>there was some
> >>>sort of refresh problem after the single update:
> >>>
> >>>}[danielhb@arthas gingerbase]$ curl -k -u root -H "Content-Type:
> >>>application/json" -H "Accept: application/json"
> >>>'https://localhost:8001/plugins/gingerbase/host/packagesupdate/' -X GET
> >>>(... long output)
> >>>
> >>>
> >>>- and finally tried the lookup of the libv4l package again:
> >>>
> >>>][danielhb@arthas gingerbase]$ curl -k -u root -H "Content-Type:
> >>>application/json" -H "Accept: application/json"
> >>>'https://localhost:8001/plugins/gingerbase/host/packagesupdate/libv4l'
> >>>-X GET
> >>>Enter host password for user 'root':
> >>>{
> >>> "reason":"GGBPKGUPD0002E: Package libv4l is not marked to be
> >>>updated.",
> >>> "code":"404 Not Found",
> >>> "call_stack":"Traceback (most recent call last):\n File
> >>>\"/usr/lib/python2.7/site-packages/cherrypy/_cprequest.py\", line 670,
> >>>in respond\n response.body = self.handler()\n File
> >>>\"/usr/lib/python2.7/site-packages/cherrypy/lib/encoding.py\", line
> >>>217,
> >>>in __call__\n self.body = self.oldhandler(*args, **kwargs)\n File
> >>>\"/usr/lib/python2.7/site-packages/cherrypy/_cpdispatch.py\", line 61,
> >>>in __call__\n return self.callable(*self.args, **self.kwargs)\n
> >>>File
> >>>\"/home/danielhb/kimchi/wok_ginger/src/wok/control/base.py\", line 210,
> >>>in index\n raise cherrypy.HTTPError(404, e.message)\nHTTPError:
> >>>(404,
> >>>u'GGBPKGUPD0002E: Package libv4l is not marked to be updated.')\n"
> >>>}[
> >>>
> >>>
> >>>My conclusion is that, after the single package update, we need to
> >>>list all the packages again to update/refresh some internal model
> >>>info.
> >>>
> >>Yes, this happens because the lookup of a single package returns the
> >>dictionary from the internal list populated by the first overall lookup.
> >>
> >>>In my opinion the package list needs to be updated after every single
> >>>update.
> >>>Perhaps there is a way of simply removing the updated package(s) from
> >>>this list instead of refreshing the whole list again, an operation that
> >>>is now even slower than before due to all the additional info being
> >>>fetched.
> >>>
> >>Ok, in V4 I'll add something to remove the package and its dependencies
> >>from the internal list of packages.
> >
> >My suggestion is to save a timestamp when a request to get_list() is
> >made.
> >Then, on lookup(), verify that the timestamp does not exceed X seconds;
> >if it does, grab the information from yum again for the given resource.
> >
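[Editorial note: the timestamp idea above could be sketched roughly like this. The class and parameter names are hypothetical; the two fetch callables stand in for the actual yum queries.]

```python
import time

class CachedUpdates:
    """Sketch of the suggestion: cache get_list() results with a timestamp
    and re-fetch a package's info on lookup() if the cache is too old."""

    def __init__(self, fetch_all, fetch_one, max_age=60):
        self._fetch_all = fetch_all   # stand-in for scanning yum for all updates
        self._fetch_one = fetch_one   # stand-in for querying yum for one package
        self._max_age = max_age       # "X seconds" from the mail
        self._cache = {}
        self._stamp = None

    def get_list(self):
        self._cache = self._fetch_all()
        self._stamp = time.monotonic()
        return list(self._cache)

    def lookup(self, name):
        expired = (self._stamp is None or
                   time.monotonic() - self._stamp > self._max_age)
        if expired or name not in self._cache:
            # Cache too old (or resource unknown): go back to yum
            # for this one resource only.
            self._cache[name] = self._fetch_one(name)
        return self._cache[name]
```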
>
> Instead of doing that, it would be better not to cache the information on
> get_list().
> get_list() should return only the list of package names, and lookup() should
> grab the detailed information for each package.
>
> As you are calling a new command on lookup() for a given package with this
> patch (to get the list of dependencies), I believe this command you are
> using also returns the information cached on get_list() - repository,
> version, etc.
The way PackagesUpdate works is different from what you described here.
PackagesUpdate.get_list() returns the same result as
SoftwareUpdate.getUpdates(), which is, basically, the cached dictionary (called
SoftwareUpdate._packages) that contains all packages eligible for update and
their information (package_name, version, arch, repository and, now, the list
of dependencies). All this information is collected by the private method
SoftwareUpdate._scanUpdates(), which is executed only once, when the first
PackagesUpdate.get_list() is requested by the user.
PackageUpdate.lookup() returns the information of one package by calling
SoftwareUpdate.getUpdate(<package_name>), which is nothing more than the
information of that specific package cached in SoftwareUpdate._packages.
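[Editorial note: a minimal, simplified sketch of the caching behavior described above, showing why the lookup still returned package info after the upgrade. Class and method names follow the mail; the scan body and the exception are fabricated stand-ins for the real yum/dnf code.]

```python
class SoftwareUpdate:
    def __init__(self):
        self._packages = {}   # cache: package_name -> info dict
        self._scanned = False

    def _scanUpdates(self):
        # Real code shells out to yum/dnf here; we fake one entry.
        self._packages = {
            'libv4l': {'package_name': 'libv4l',
                       'version': '1.10.0-2.fc23',
                       'arch': 'i686',
                       'repository': 'updates',
                       'depends': ['libv4l']},
        }
        self._scanned = True

    def getUpdates(self):
        if not self._scanned:      # the scan happens only once,
            self._scanUpdates()    # on the first get_list()
        return list(self._packages.values())

    def getUpdate(self, name):
        # Served straight from the cache: after an upgrade, the entry
        # remains here until something refreshes or removes it.
        if name not in self._packages:
            raise KeyError('GGBPKGUPD0002E: Package %s is not marked '
                           'to be updated.' % name)
        return self._packages[name]
```

Nothing in getUpdate() re-checks yum, so upgrading libv4l out-of-band leaves the stale entry in place — the behavior Daniel observed.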
>
> That said, I'd suggest you change get_list() to return only the package
> names,
> and improve your command output parsing on lookup() to grab the information
> previously cached on get_list().
>
Ok, interesting idea, but doesn't this change how the UI and the user see the
information - I mean, how the API should work?
Following your idea, when the user accesses the Updates tab on the UI, he/she
will see only the list of packages. Today, it also shows the new version
number of each package, plus the arch and the repository that provides the
package.
> If you fail to parse the command output on lookup(), that means the package
> is not marked to be upgraded, so raise an error.
>
> Something like below on lookup():
>
> out = run_command(<command to get the dependencies>)
> name = get_name_from_out()
> if name is None:
> # package is not marked to be upgraded
> raise NotFound()
>
> deps = get_deps_from_out()
> ...
>
> return info
>
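[Editorial note: a runnable version of the sketch above, with the command execution left out and a fabricated output format — the real package-manager output and helper names would differ.]

```python
class NotFound(Exception):
    """Stand-in for wok's 404 error."""

def lookup_from_output(out):
    # Following the sketch: if we cannot parse a package name from the
    # command output, the package is not marked to be upgraded.
    lines = [l for l in out.splitlines() if l.strip()]
    if not lines:
        raise NotFound('package is not marked to be upgraded')
    # Fabricated format: first line "name version arch repo",
    # remaining lines are the dependencies.
    name, version, arch, repo = lines[0].split()
    deps = [l.strip() for l in lines[1:]]
    return {'package_name': name, 'version': version, 'arch': arch,
            'repository': repo, 'depends': deps}
```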
> >>>Thanks,
> >>>
> >>>
> >>>Daniel
> >>>
> >>>
> >>>On 03/09/2016 04:01 PM, pvital(a)linux.vnet.ibm.com wrote:
> >>>>From: Paulo Vital <pvital(a)linux.vnet.ibm.com>
> >>>>
> >>>>V2 - V3:
> >>>> * Modified _get_dnf_info_requires() of DnfUpdate class to
> >>>>prevent
> >>>>the
> >>>> addition of the package in the dependencies list, and how to
> >>>>process the
> >>>> name of the dependencies to be added.
> >>>>
> >>>>V2:
> >>>>
> >>>>This patch-set is the V2 of the feature to add backend support to
> >>>>select one or
> >>>>more packages to be updated by host package manager.
> >>>>
> >>>>All modifications for V2 were based on the feedback provided by the
> >>>>mail thread
> >>>>"[RFC] Gingerbase Issue#9: Update selected packages."
> >>>>
> >>>>To test, use:
> >>>>
> >>>># List all packages selected by host package manager, as eligible to
> >>>>be updated
> >>>>$ curl -k -u test -H "Content-Type: application/json" -H \
> >>>>"Accept: application/json" \
> >>>>'https://localhost:8001/plugins/gingerbase/host/packagesupdate' -X GET
> >>>>Enter host password for user 'test':
> >>>>[
> >>>> {
> >>>> "depends":[
> >>>> "libudisks2",
> >>>> "udisks2"
> >>>> ],
> >>>> "version":"2.1.7-1.fc23",
> >>>> "arch":"x86_64",
> >>>> "repository":"updates",
> >>>> "package_name":"libudisks2"
> >>>> },
> >>>> {
> >>>> "depends":[
> >>>> "glusterfs-client-xlators",
> >>>> "glusterfs",
> >>>> "glusterfs-fuse",
> >>>> "glusterfs-libs",
> >>>> "glusterfs-api"
> >>>> ],
> >>>> "version":"3.7.8-2.fc23",
> >>>> "arch":"x86_64",
> >>>> "repository":"updates",
> >>>> "package_name":"glusterfs-client-xlators"
> >>>> },
> >>>> {
> >>>> "depends":[
> >>>> "parted"
> >>>> ],
> >>>> "version":"3.2-16.fc23",
> >>>> "arch":"x86_64",
> >>>> "repository":"updates",
> >>>> "package_name":"parted"
> >>>> },
> >>>>
> >>>> < result cut, due to a large number of packages >
> >>>>
> >>>> {
> >>>> "depends":[
> >>>> "openssl",
> >>>> "openssl-libs"
> >>>> ],
> >>>> "version":"1:1.0.2g-2.fc23",
> >>>> "arch":"x86_64",
> >>>> "repository":"updates",
> >>>> "package_name":"openssl"
> >>>> },
> >>>> {
> >>>> "depends":[
> >>>> "gnome-shell"
> >>>> ],
> >>>> "version":"3.18.4-1.fc23",
> >>>> "arch":"x86_64",
> >>>> "repository":"updates",
> >>>> "package_name":"gnome-shell"
> >>>> },
> >>>> {
> >>>> "depends":[
> >>>> "openssl",
> >>>> "openssl-libs"
> >>>> ],
> >>>> "version":"1:1.0.2g-2.fc23",
> >>>> "arch":"i686",
> >>>> "repository":"updates",
> >>>> "package_name":"openssl-libs"
> >>>> },
> >>>>
> >>>> < result cut, due to a large number of packages >
> >>>>
> >>>>]
> >>>>
> >>>># Print information about one selected package
> >>>>$ curl -k -u test -H "Content-Type: application/json" -H \
> >>>>"Accept: application/json" \
> >>>>'https://localhost:8001/plugins/gingerbase/host/packagesupdate/openssl'
> >>>>-X
> >>>>GET
> >>>>Enter host password for user 'test':
> >>>>{
> >>>> "depends":[
> >>>> "openssl",
> >>>> "openssl-libs"
> >>>> ],
> >>>> "version":"1:1.0.2g-2.fc23",
> >>>> "arch":"x86_64",
> >>>> "repository":"updates",
> >>>> "package_name":"openssl"
> >>>>}
> >>>>
> >>>># Select only one package to be updated. All packages that were
> >>>>listed as
> >>>># eligible to be update and are dependencies of this package will
> >>>>also be
> >>>># updated automatically.
> >>>>$ curl -k -u test -H "Content-Type: application/json" -H \
> >>>>"Accept: application/json" \
> >>>>'https://localhost:8001/plugins/gingerbase/host/packagesupdate/openssl/upg...'
> >>>>
> >>>>\
> >>>>-X POST -d '{}'
> >>>>Enter host password for user 'test':
> >>>>{
> >>>> "status":"running",
> >>>> "message":"",
> >>>> "id":"1",
> >>>>"target_uri":"/plugins/gingerbase/host/packagesupdate/openssl/upgrade"
> >>>>}
> >>>>
> >>>>
> >>>>Paulo Vital (4):
> >>>> Issue #9 - Reorganize Package Update classes.
> >>>> Issue #9 - List dependencies of packages to update
> >>>> Issue #9 - Add support to update selected packages
> >>>> Issue #9 - Test cases to update selected packages
> >>>>
> >>>> control/host.py | 36 ++--------
> >>>> control/packagesupdate.py | 53 ++++++++++++++
> >>>> docs/API.md | 18 +++--
> >>>> mockmodel.py | 39 ++++++++---
> >>>> model/host.py | 43 ------------
> >>>> model/packagesupdate.py | 116 +++++++++++++++++++++++++++++++
> >>>> swupdate.py | 172
> >>>>+++++++++++++++++++++++++++++++++++++++++-----
> >>>> tests/test_host.py | 37 ++++++++--
> >>>> yumparser.py | 16 ++---
> >>>> 9 files changed, 404 insertions(+), 126 deletions(-)
> >>>> create mode 100644 control/packagesupdate.py
> >>>> create mode 100644 model/packagesupdate.py
> >>>>
> >>>>--
> >>>>2.5.0
> >>>>
> >
>
--
Paulo Ricardo Paz Vital
IBM Linux Technology Center
http://www.ibm.com
[PATCH] [Kimchi] Removing disks.py because it was moved to gingerbase
by Jose Ricardo Ziviani
- I didn't find any place using this module so I'm removing the file to
avoid any confusion.
Signed-off-by: Jose Ricardo Ziviani <joserz(a)linux.vnet.ibm.com>
---
disks.py | 297 ---------------------------------------------------------------
1 file changed, 297 deletions(-)
delete mode 100644 disks.py
diff --git a/disks.py b/disks.py
deleted file mode 100644
index 52a793c..0000000
--- a/disks.py
+++ /dev/null
@@ -1,297 +0,0 @@
-#
-# Project Kimchi
-#
-# Copyright IBM Corp, 2015-2016
-#
-# This library is free software; you can redistribute it and/or
-# modify it under the terms of the GNU Lesser General Public
-# License as published by the Free Software Foundation; either
-# version 2.1 of the License, or (at your option) any later version.
-#
-# This library is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# Lesser General Public License for more details.
-#
-# You should have received a copy of the GNU Lesser General Public
-# License along with this library; if not, write to the Free Software
-# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
-
-import os.path
-import re
-from parted import Device as PDevice
-from parted import Disk as PDisk
-
-from wok.exception import OperationFailed
-from wok.utils import run_command, wok_log
-
-
-def _get_dev_node_path(maj_min):
- """ Returns device node path given the device number 'major:min' """
-
- dm_name = "/sys/dev/block/%s/dm/name" % maj_min
- if os.path.exists(dm_name):
- with open(dm_name) as dm_f:
- content = dm_f.read().rstrip('\n')
- return "/dev/mapper/" + content
-
- uevent = "/sys/dev/block/%s/uevent" % maj_min
- with open(uevent) as ueventf:
- content = ueventf.read()
-
- data = dict(re.findall(r'(\S+)=(".*?"|\S+)', content.replace("\n", " ")))
-
- return "/dev/%s" % data["DEVNAME"]
-
-
-def _get_lsblk_devs(keys, devs=[]):
- out, err, rc = run_command(["lsblk", "-Pbo"] + [','.join(keys)] + devs)
- if rc != 0:
- raise OperationFailed("KCHDISKS0001E", {'err': err})
-
- return _parse_lsblk_output(out, keys)
-
-
-def _get_dev_major_min(name):
- maj_min = None
-
- keys = ["NAME", "MAJ:MIN"]
- dev_list = _get_lsblk_devs(keys)
-
- for dev in dev_list:
- if dev['name'].split()[0] == name:
- maj_min = dev['maj:min']
- break
- else:
- raise OperationFailed("KCHDISKS0002E", {'device': name})
-
- return maj_min
-
-
-def _is_dev_leaf(devNodePath):
- try:
- # By default, lsblk prints a device information followed by children
- # device information
- childrenCount = len(
- _get_lsblk_devs(["NAME"], [devNodePath])) - 1
- except OperationFailed as e:
- # lsblk is known to fail on multipath devices
- # Assume these devices contain children
- wok_log.error(
- "Error getting device info for %s: %s", devNodePath, e)
- return False
-
- return childrenCount == 0
-
-
-def _is_dev_extended_partition(devType, devNodePath):
- if devType != 'part':
- return False
- diskPath = devNodePath.rstrip('0123456789')
- device = PDevice(diskPath)
- try:
- extended_part = PDisk(device).getExtendedPartition()
- except NotImplementedError as e:
- wok_log.warning(
- "Error getting extended partition info for dev %s type %s: %s",
- devNodePath, devType, e.message)
- # Treate disk with unsupported partiton table as if it does not
- # contain extended partitions.
- return False
- if extended_part and extended_part.path == devNodePath:
- return True
- return False
-
-
-def _parse_lsblk_output(output, keys):
- # output is on format key="value",
- # where key can be NAME, TYPE, FSTYPE, SIZE, MOUNTPOINT, etc
- lines = output.rstrip("\n").split("\n")
- r = []
- for line in lines:
- d = {}
- for key in keys:
- expression = r"%s=\".*?\"" % key
- match = re.search(expression, line)
- field = match.group()
- k, v = field.split('=', 1)
- d[k.lower()] = v[1:-1]
- r.append(d)
- return r
-
-
-def _get_vgname(devNodePath):
- """ Return volume group name of a physical volume. If the device node path
- is not a physical volume, return empty string. """
- out, err, rc = run_command(["pvs", "--unbuffered", "--nameprefixes",
- "--noheadings", "-o", "vg_name", devNodePath])
- if rc != 0:
- return ""
-
- return re.findall(r"LVM2_VG_NAME='([^\']*)'", out)[0]
-
-
-def _is_available(name, devtype, fstype, mountpoint, majmin):
- devNodePath = _get_dev_node_path(majmin)
- # Only list unmounted and unformated and leaf and (partition or disk)
- # leaf means a partition, a disk has no partition, or a disk not held
- # by any multipath device. Physical volume belongs to no volume group
- # is also listed. Extended partitions should not be listed.
- if (devtype in ['part', 'disk', 'mpath'] and
- fstype in ['', 'LVM2_member'] and
- mountpoint == "" and
- _get_vgname(devNodePath) == "" and
- _is_dev_leaf(devNodePath) and
- not _is_dev_extended_partition(devtype, devNodePath)):
- return True
- return False
-
-
-def get_partitions_names(check=False):
- names = set()
- keys = ["NAME", "TYPE", "FSTYPE", "MOUNTPOINT", "MAJ:MIN"]
- # output is on format key="value",
- # where key can be NAME, TYPE, FSTYPE, MOUNTPOINT
- for dev in _get_lsblk_devs(keys):
- # split()[0] to avoid the second part of the name, after the
- # whiteline
- name = dev['name'].split()[0]
- if check and not _is_available(name, dev['type'], dev['fstype'],
- dev['mountpoint'], dev['maj:min']):
- continue
- names.add(name)
-
- return list(names)
-
-
-def get_partition_details(name):
- majmin = _get_dev_major_min(name)
- dev_path = _get_dev_node_path(majmin)
-
- keys = ["TYPE", "FSTYPE", "SIZE", "MOUNTPOINT"]
- try:
- dev = _get_lsblk_devs(keys, [dev_path])[0]
- except OperationFailed as e:
- wok_log.error(
- "Error getting partition info for %s: %s", name, e)
- return {}
-
- dev['available'] = _is_available(name, dev['type'], dev['fstype'],
- dev['mountpoint'], majmin)
- if dev['mountpoint']:
- # Sometimes the mountpoint comes with [SWAP] or other
- # info which is not an actual mount point. Filtering it
- regexp = re.compile(r"\[.*\]")
- if regexp.search(dev['mountpoint']) is not None:
- dev['mountpoint'] = ''
- dev['path'] = dev_path
- dev['name'] = name
- return dev
-
-
-def vgs():
- """
- lists all volume groups in the system. All size units are in bytes.
-
- [{'vgname': 'vgtest', 'size': 999653638144L, 'free': 0}]
- """
- cmd = ['vgs',
- '--units',
- 'b',
- '--nosuffix',
- '--noheading',
- '--unbuffered',
- '--options',
- 'vg_name,vg_size,vg_free']
-
- out, err, rc = run_command(cmd)
- if rc != 0:
- raise OperationFailed("KCHLVM0001E", {'err': err})
-
- if not out:
- return []
-
- # remove blank spaces and create a list of VGs
- vgs = map(lambda v: v.strip(), out.strip('\n').split('\n'))
-
- # create a dict based on data retrieved from vgs
- return map(lambda l: {'vgname': l[0],
- 'size': long(l[1]),
- 'free': long(l[2])},
- [fields.split() for fields in vgs])
-
-
-def lvs(vgname=None):
- """
- lists all logical volumes found in the system. It can be filtered by
- the volume group. All size units are in bytes.
-
- [{'lvname': 'lva', 'path': '/dev/vgtest/lva', 'size': 12345L},
- {'lvname': 'lvb', 'path': '/dev/vgtest/lvb', 'size': 12345L}]
- """
- cmd = ['lvs',
- '--units',
- 'b',
- '--nosuffix',
- '--noheading',
- '--unbuffered',
- '--options',
- 'lv_name,lv_path,lv_size,vg_name']
-
- out, err, rc = run_command(cmd)
- if rc != 0:
- raise OperationFailed("KCHLVM0001E", {'err': err})
-
- if not out:
- return []
-
- # remove blank spaces and create a list of LVs filtered by vgname, if
- # provided
- lvs = filter(lambda f: vgname is None or vgname in f,
- map(lambda v: v.strip(), out.strip('\n').split('\n')))
-
- # create a dict based on data retrieved from lvs
- return map(lambda l: {'lvname': l[0],
- 'path': l[1],
- 'size': long(l[2])},
- [fields.split() for fields in lvs])
-
-
-def pvs(vgname=None):
- """
- lists all physical volumes in the system. It can be filtered by the
- volume group. All size units are in bytes.
-
- [{'pvname': '/dev/sda3',
- 'size': 469502001152L,
- 'uuid': 'kkon5B-vnFI-eKHn-I5cG-Hj0C-uGx0-xqZrXI'},
- {'pvname': '/dev/sda2',
- 'size': 21470642176L,
- 'uuid': 'CyBzhK-cQFl-gWqr-fyWC-A50Y-LMxu-iHiJq4'}]
- """
- cmd = ['pvs',
- '--units',
- 'b',
- '--nosuffix',
- '--noheading',
- '--unbuffered',
- '--options',
- 'pv_name,pv_size,pv_uuid,vg_name']
-
- out, err, rc = run_command(cmd)
- if rc != 0:
- raise OperationFailed("KCHLVM0001E", {'err': err})
-
- if not out:
- return []
-
- # remove blank spaces and create a list of PVs filtered by vgname, if
- # provided
- pvs = filter(lambda f: vgname is None or vgname in f,
- map(lambda v: v.strip(), out.strip('\n').split('\n')))
-
- # create a dict based on data retrieved from pvs
- return map(lambda l: {'pvname': l[0],
- 'size': long(l[1]),
- 'uuid': l[2]},
- [fields.split() for fields in pvs])
--
1.9.1
[PATCH] [Kimchi 0/4] Fix issues on web serial console
by Jose Ricardo Ziviani
This patchset fixes some issues found in the web serial console code
and includes some small improvements.
Jose Ricardo Ziviani (4):
Move unix socket files from /tmp to /run/wok
Improve log messages printed by the serial console
Make serial console timeout configurable
Check if guest is listening to serial before connecting to it
config.py.in | 3 ++
i18n.py | 2 +
model/vms.py | 26 ++++++++--
serialconsole.py | 115 ++++++++++++++++++++++++++++++---------------
ui/serial/html/serial.html | 4 --
5 files changed, 105 insertions(+), 45 deletions(-)
--
1.9.1
Storage Volume and selected disks
by Samuel Henrique De Oliveira Guimaraes
Hi team,
In the final design spec for Storage Management we have two view modes for disks, like we have in Guests and Templates. It also features a filter input and a sort button, as well as a new drop-down button for Volume management, but there isn't any UI feedback for Volume selection. On the other hand, the old mockups for this screen feature a checkbox and a green outline when the user selects a volume in gallery view mode. Here are my proposals:
Keep the checkbox in the list/table view and hide it in Gallery view;
Use a green outline in Gallery view, like the style used when selecting a Template in the modal window;
Disable the Action drop-down button when nothing is selected.
Questions:
What are the formats to convert the files to?
Regarding volume management, where is the full spec for volume actions such as create, delete, clone, resize and wipe? Should we append these actions to the Activate/Deactivate and Undefine dropdown?
Thanks,
Samuel
[PATCH] [Kimchi] Fix issue #849: Get network name for bridged networks
by Aline Manera
After attaching a bridged network to a guest, if you close the dialog
and open it again, the network field was displayed blank. So fix it.
This patch also fixes the KeyError issue reported in #849.
Signed-off-by: Aline Manera <alinefm(a)linux.vnet.ibm.com>
---
mockmodel.py | 3 +--
model/vmifaces.py | 28 ++++++++++++----------------
2 files changed, 13 insertions(+), 18 deletions(-)
diff --git a/mockmodel.py b/mockmodel.py
index a765770..cbeeb5d 100644
--- a/mockmodel.py
+++ b/mockmodel.py
@@ -34,7 +34,6 @@ from wok.xmlutils.utils import xml_item_update
from wok.plugins.kimchi import imageinfo
from wok.plugins.kimchi import osinfo
from wok.plugins.kimchi.model import cpuinfo
-from wok.plugins.kimchi.model import vmifaces
from wok.plugins.kimchi.model.groups import PAMGroupsModel
from wok.plugins.kimchi.model.host import DeviceModel
from wok.plugins.kimchi.model.host import DevicesModel
@@ -86,7 +85,7 @@ class MockModel(Model):
self._mock_storagevolumes = MockStorageVolumes()
cpuinfo.get_topo_capabilities = MockModel.get_topo_capabilities
- vmifaces.getDHCPLeases = MockModel.getDHCPLeases
+ libvirt.virNetwork.DHCPLeases = MockModel.getDHCPLeases
libvirt.virDomain.XMLDesc = MockModel.domainXMLDesc
libvirt.virDomain.undefine = MockModel.undefineDomain
libvirt.virDomain.attachDeviceFlags = MockModel.attachDeviceFlags
diff --git a/model/vmifaces.py b/model/vmifaces.py
index 72f717b..0ec89b6 100644
--- a/model/vmifaces.py
+++ b/model/vmifaces.py
@@ -29,14 +29,6 @@ from wok.plugins.kimchi.model.vms import DOM_STATE_MAP, VMModel
from wok.plugins.kimchi.xmlutils.interface import get_iface_xml
-def getDHCPLeases(net, mac):
- try:
- leases = net.DHCPLeases(mac)
- return leases
- except libvirt.libvirtError:
- return []
-
-
class VMIfacesModel(object):
def __init__(self, **kargs):
self.conn = kargs['conn']
@@ -134,10 +126,9 @@ class VMIfaceModel(object):
info['type'] = iface.attrib['type']
info['mac'] = iface.mac.get('address')
+ info['network'] = iface.source.get('network')
if iface.find("model") is not None:
info['model'] = iface.model.get('type')
- if info['type'] == 'network':
- info['network'] = iface.source.get('network')
if info['type'] == 'bridge':
info['bridge'] = iface.source.get('bridge')
info['ips'] = self._get_ips(vm, info['mac'], info['network'])
@@ -159,14 +150,19 @@ class VMIfaceModel(object):
# First check the ARP cache.
with open('/proc/net/arp') as f:
ips = [line.split()[0] for line in f.xreadlines() if mac in line]
+
# Some ifaces may be inactive, so if the ARP cache didn't have them,
# and they happen to be assigned via DHCP, we can check there too.
- net = conn.networkLookupByName(network)
- leases = getDHCPLeases(net, mac)
- for lease in leases:
- ip = lease.get('ipaddr')
- if ip not in ips:
- ips.append(ip)
+ try:
+ # Some type of interfaces may not have a network associated with
+ net = conn.networkLookupByName(network)
+ leases = net.DHCPLeases(mac)
+ for lease in leases:
+ ip = lease.get('ipaddr')
+ if ip not in ips:
+ ips.append(ip)
+ except libvirt.libvirtError:
+ pass
return ips
--
2.5.0
[PATCH] [Kimchi] Issue #859 - Fix error when adding new disk to VM.
by pvital@linux.vnet.ibm.com
From: Paulo Vital <pvital(a)linux.vnet.ibm.com>
Remove hard-coded reference to the storage pool when attaching a new disk to a
guest.
Signed-off-by: Paulo Vital <pvital(a)linux.vnet.ibm.com>
---
ui/js/src/kimchi.guest_storage_add.main.js | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/ui/js/src/kimchi.guest_storage_add.main.js b/ui/js/src/kimchi.guest_storage_add.main.js
index 55bc3ac..3f416e7 100644
--- a/ui/js/src/kimchi.guest_storage_add.main.js
+++ b/ui/js/src/kimchi.guest_storage_add.main.js
@@ -322,7 +322,7 @@ kimchi.guest_storage_add_main = function() {
}
var createVol = function(settings, addVolSettings) {
- kimchi.createVolumeWithCapacity('default', {
+ kimchi.createVolumeWithCapacity(settings['pool'], {
name: settings['vol'],
format: settings['format'],
capacity: settings['capacity']
--
2.5.0