[WIP 0/4] Authorization utility modules
by Shu Ming
The following patches provide modules to manage the groups of users.
PAM modules are also provided to check whether a user belongs to a
specific group.
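The actual check lives in the C PAM module; purely for illustration, the
same group-membership test can be expressed in Python with the standard
grp and pwd modules (the user and group names below are only examples):

import grp
import pwd

def user_in_group(username, groupname):
    # True if the user is a supplementary member of the group, or the
    # group is the user's primary group.
    group = grp.getgrnam(groupname)
    if username in group.gr_mem:
        return True
    return pwd.getpwnam(username).pw_gid == group.gr_gid

# e.g. user_in_group('alice', 'superadmin')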
Shu Ming (4):
A module to manage the groups of the users
A module to do PAM account management
An example to demo how pam_members.so is used
Configuration files for pam services
src/kimchi/pam_members.c | 103 ++++++++++++++++++++++++++++++++++++++++++++++
src/kimchi/pamauth_acc.py | 51 +++++++++++++++++++++++
src/kimchi/rolegroups.py | 36 ++++++++++++++++
src/kimchi/superadmin | 5 +++
src/kimchi/vmadmin | 5 +++
src/kimchi/vmuser | 5 +++
6 files changed, 205 insertions(+)
create mode 100644 src/kimchi/pam_members.c
create mode 100644 src/kimchi/pamauth_acc.py
create mode 100644 src/kimchi/rolegroups.py
create mode 100644 src/kimchi/superadmin
create mode 100644 src/kimchi/vmadmin
create mode 100644 src/kimchi/vmuser
--
1.8.1.4
Fwd: [project-kimchi] NFS img permission problem
by Royce Lv
Guys,
While testing the kimchi NFS feature, I filed two issues related to
NFS image permissions:
1. Volume creation fails because the storage pool permissions are not
configured to allow
writing (https://github.com/kimchi-project/kimchi/issues/261).
2. A VM with a volume cannot be started: root users are mapped to
nobody, so the image it creates cannot be accessed by the libvirt-qemu
(on Ubuntu) user (https://github.com/kimchi-project/kimchi/issues/267).
After discussing this with Mark Wu, we would like to propose the
following to resolve these two problems:
1. To allow creation: export with all_squash (gid = kimchi group id) and
give the group write permission. With the planned nfs-pool prevalidation
(a trial mount with a timeout in a separate process), we can also check
that the gid and permissions are right; a rough sketch of that check is
included below. This will save us from future trouble.
2. To allow the qemu process (started by libvirt) to access the image,
add the uid running the qemu process ('qemu' on Fedora, 'libvirt-qemu'
on Ubuntu) to the 'kimchi' group so it has write access to the image.
I am also investigating other possibilities, such as using storage pool
permissions.
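A minimal sketch of the prevalidation idea, assuming we shell out to
mount/umount and that a plain write test is enough to prove the squashed
gid has access (the function and path names are made up):

import os
import subprocess
import tempfile

def probe_nfs_export(server, export, timeout=10):
    # Try to mount the export with a timeout; a hang counts as failure.
    mnt = tempfile.mkdtemp()
    try:
        subprocess.check_call(['timeout', str(timeout), 'mount', '-t', 'nfs',
                               '%s:%s' % (server, export), mnt])
    except subprocess.CalledProcessError:
        os.rmdir(mnt)
        return False
    try:
        # Check that the squashed uid/gid really has write permission.
        probe = os.path.join(mnt, '.kimchi_probe')
        with open(probe, 'w'):
            pass
        os.unlink(probe)
        return True
    except OSError:
        return False
    finally:
        subprocess.call(['umount', mnt])
        os.rmdir(mnt)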
Welcome thoughts on it!
[RFC] Template cloning
by Sheldon
template cloning:
<https://github.com/kimchi-project/kimchi/wiki/template-cloning-and-custom...>
1. Clone a template from a template:
The user may clone a template from an existing template, giving it a
different name. Later he can customize parts of the cloned template,
saving the effort of creating a completely new template. For example,
he can update the network of the cloned template to get a new,
different template.
2. We should also allow the user to clone a template from a VM.
My only concern is the image volume: a VM may have no CDROM attribute,
so does the template need to copy the image volume?
For VM clone, can we make the VM image a backing file (or we can call
it a base image)? Then we can create two new images, one for this VM
and one for the new VM. Both new images are read/write snapshots of
the original image: any changes to the new images will not be
reflected in the original image. A sketch of this is below.
http://libvirt.org/formatstorage.html#StorageVolBacking
http://wiki.qemu.org/Documentation/CreateSnapshot
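A rough sketch of the backing-file approach, assuming qcow2 overlays
created with qemu-img (the paths and helper name are only examples;
newer qemu-img versions may also want the backing format passed via -F):

import subprocess

def create_overlay(base_image, overlay_image):
    # The overlay is a read/write snapshot; writes never touch the base.
    subprocess.check_call(['qemu-img', 'create', '-f', 'qcow2',
                           '-b', base_image, overlay_image])

# One overlay for the original VM, one for the clone:
# create_overlay('/var/lib/kimchi/vm1.img', '/var/lib/kimchi/vm1-overlay.qcow2')
# create_overlay('/var/lib/kimchi/vm1.img', '/var/lib/kimchi/vm1-clone.qcow2')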
--
Thanks and best regards!
Sheldon Feng <shaohef(a)linux.vnet.ibm.com>
IBM Linux Technology Center
[PATCH] Bug fix #309 - network: Unable to create vlan tagged on Ubuntu v2
by Ramon Medeiros
Changes:
v1: Put the rollback function above the raise; otherwise, that line would never run.
Ramon Medeiros (1):
Bug fix #309 - network: Unable to create vlan tagged on Ubuntu
src/kimchi/model.py | 21 +++++++++++----------
1 file changed, 11 insertions(+), 10 deletions(-)
--
1.8.3.1
Discussion: 312 - LVM VG is left when removing logical storage pool
by Ramon Medeiros
Well,
you can also test this using virsh. Libvirt does not remove the VG and
the PV when you delete a logical storage pool.
The question is: is it better to fix libvirt, submitting a patch to
them, or to just remove the VG and PV in Kimchi?
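If we go the Kimchi route, a minimal sketch could be to shell out to the
LVM tools right after the pool is undefined (the function and argument
names are hypothetical):

import subprocess

def cleanup_logical_pool(vg_name, pv_devices):
    # Remove the volume group left behind by libvirt, then wipe the
    # physical volume labels from the devices that backed it.
    subprocess.check_call(['vgremove', '-f', vg_name])
    for dev in pv_devices:
        subprocess.check_call(['pvremove', '-y', dev])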
--
Ramon Nunes Medeiros
Software Engineer - Linux Technology Center Brazil
IBM Systems & Technology Group
Phone : +55 19 2132 7878
ramonn(a)br.ibm.com
[RFC] Host's repositories support
by Paulo Ricardo Paz Vital
Hello guys.
This is the RFC for the Host's repositories support (task "Register YUM,
apt, zypper repos" of 1.2 sprint 3).
The support will be provided by adding to control/host.py a new
collection (Repositories), responsible for managing all repositories
on the system, and a new resource (Repository), responsible for
operating on a specific repository. All of this management will be
transparent to the host's package management system and will be
provided by one of the new back end classes.
The back end provides 4 new classes (a sketch of the run-time backend selection follows the list):
1) Repositories (object): Class to represent and operate on
repositories from Kimchi's perspective. It is transparent to the
host's package management system and can execute all necessary
operations: add a repository, list all repositories, get information
about one repository, update a repository, enable or disable a
repository, and remove a repository. At run time this class loads the
class needed to work with the host's package manager: YumRepo for
YUM-based systems, AptRepo for APT-based systems and ZypperRepo for
Zypper-based systems;
2) YumRepo (object): Class to represent and operate on YUM
repositories. Loaded only on systems that support YUM (e.g. Fedora
and RHEL), it is responsible for connecting to, collecting and
providing information about the YUM repositories on the system. It is
also responsible for creating/deleting the files on disk that keep the
repositories on the system after disconnection.
3) AptRepo (object): Class to represent and operate on APT
repositories. Loaded only on systems that support APT (e.g. Debian
and Ubuntu), it is responsible for connecting to, collecting and
providing information about the APT repositories on the system. It is
also responsible for creating/deleting the files on disk that keep the
repositories on the system after disconnection.
4) ZypperRepo (object): Class to represent and operate on Zypper
repositories. Loaded only on systems that support Zypper (e.g.
openSUSE), it is responsible for connecting to, collecting and
providing information about the Zypper repositories on the system. It
is also responsible for creating/deleting the files on disk that keep
the repositories on the system after disconnection.
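A hypothetical sketch of the run-time backend selection mentioned above;
the detection by package-manager binary and the method names are
assumptions, and YumRepo/AptRepo/ZypperRepo stand for the classes
described in items 2-4:

import os

class Repositories(object):
    def __init__(self):
        # Pick the backend that matches the host's package manager.
        if os.path.exists('/usr/bin/yum'):
            self._backend = YumRepo()
        elif os.path.exists('/usr/bin/apt-get'):
            self._backend = AptRepo()
        elif os.path.exists('/usr/bin/zypper'):
            self._backend = ZypperRepo()
        else:
            raise NotImplementedError('No supported package manager found')

    def get_list(self):
        return self._backend.get_list()

    def add(self, params):
        return self._backend.add(params)

    def enable(self, repo_id):
        return self._backend.enable(repo_id)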
Below I present the REST API for the repositories support.
### Collection: Host Repositories
**URI:** /host/repositories
**Methods:**
* **GET**: Retrieve a summarized list of all repositories available
* **POST**: Add a new repository
* repo_id : Unique repository name for each repository, one word.
* repo_name: Human-readable string describing the repository.
* baseurl: URL of the repodata directory when "is_mirror" is false.
Otherwise, it is the URL of the mirror system for YUM. Can be an
http://, ftp:// or file:// URL.
* is_mirror *(optional)*: Treat the given baseurl URI as a mirror
list, instead of using it as the baseurl in the repository configuration.
* url_args *(optional)*: Arguments to be passed to baseurl, like the
list of APT repositories provided by the same baseurl.
* enabled *(optional)*: Indicates whether the repository should be
included as a package source:
* false: Do not include the repository.
* true: Include the repository.
* gpgcheck *(optional)*: Indicates whether a GPG signature check
should be performed on the packages retrieved from the repository:
* false: Do not check GPG signature
* true: Check GPG signature
* gpgkey *(optional)*: URL pointing to the ASCII-armored GPG key
file for the repository. This option is used if yum needs a public key
to verify a package and the required key hasn't been imported into the
RPM database.
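For example, adding a repository through this collection might look like
the following (a sketch only: the host, port, use of the requests
library and the repository data are all assumptions):

import requests

new_repo = {
    'repo_id': 'fedora-updates',
    'repo_name': 'Fedora 20 Updates',
    'baseurl': 'http://dl.fedoraproject.org/pub/fedora/linux/updates/20/x86_64/',
    'enabled': True,
    'gpgcheck': True,
}
resp = requests.post('http://localhost:8000/host/repositories', json=new_repo)
resp.raise_for_status()
print(resp.json())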
### Resource: Repository
**URI:** /host/repositories/*:repo-id*
**Methods:**
* **GET**: Retrieve the full description of a Repository
* repo_id : Unique repository name for each repository, one word.
* repo_type: Indicates which type of repository this is:
* yum: Indicates it is a YUM repository.
* deb: Indicates it is an APT repository.
* zyp: Indicates it is a Zypper repository.
* repo_name: Human-readable string describing the repository.
* baseurl: URL of the repodata directory when "is_mirror" is false.
Otherwise, it is the URL of the mirror system for YUM. Can be an
http://, ftp:// or file:// URL.
* url_args *(optional)*: Arguments to be passed to baseurl, like the
list of APT repositories provided by the same baseurl.
* enabled *(optional)*: Indicates whether the repository should be
included as a package source:
* false: Do not include the repository.
* true: Include the repository.
* gpgcheck *(optional)*: Indicates whether a GPG signature check
should be performed on the packages retrieved from the repository:
* false: Do not check GPG signature
* true: Check GPG signature
* gpgkey *(optional)*: URL pointing to the ASCII-armored GPG key
file for the repository. This option is used if yum needs a public key
to verify a package and the required key hasn't been imported into the
RPM database.
* **DELETE**: Remove the Repository
* **POST**: *See Repository Actions*
* **PUT**: Update the parameters of an existing Repository
* repo_id *(optional)*: Unique repository name for each repository,
one word.
* repo_name *(optional)*: Human-readable string describing the
repository.
* baseurl *(optional)*: URL of the repodata directory when
"is_mirror" is false. Otherwise, it is the URL of the mirror system for
YUM. Can be an http://, ftp:// or file:// URL.
* is_mirror *(optional)*: Treat the given baseurl URI as a mirror
list, instead of using it as the baseurl in the repository configuration.
* url_args *(optional)*: Arguments to be passed to baseurl, like the
list of APT repositories provided by the same baseurl.
* enabled *(optional)*: Indicates whether the repository should be
included as a package source:
* false: Do not include the repository.
* true: Include the repository.
* gpgcheck *(optional)*: Indicates whether a GPG signature check
should be performed on the packages retrieved from the repository:
* false: Do not check GPG signature
* true: Check GPG signature
* gpgkey *(optional)*: URL pointing to the ASCII-armored GPG key
file for the repository. This option is used if yum needs a public key
to verify a package and the required key hasn't been imported into the
RPM database.
**Actions (POST):**
* enable: Enable the Repository as package source
* disable: Disable the Repository as package source
[PATCH 0/3 RFC] Refactor exception
by Aline Manera
From: Aline Manera <alinefm(a)br.ibm.com>
Hi all,
This is the RFC for the refactor exception task.
Sheldon has already done something related to that some time ago.
He also created a wiki for that:
- https://github.com/kimchi-project/kimchi/wiki/refactor-exception
The only change I made to his approach is translating the message in the
backend and sending it to the UI.
That way we don't need to duplicate the messages on the backend and the UI
(as explained on the wiki).
The steps to get it done:
1) Change the cherrypy error handler to send the exception data to the UI
2) Create a common Exception class (KimchiError) to translate the error
message and properly set the parameters (a small sketch follows this list)
3) Update the UI to show the message received from the backend to the user
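A minimal sketch of what item 2 could look like, assuming gettext-based
translation; the attribute names and the stand-in message catalogue are
assumptions, not the final API:

import gettext

# Stand-in for the proposed __messages__.py catalogue.
MESSAGES = {
    'VMBACKERRO01': "VMBACKERRO01: Failed to start virtual machine %(name)s",
}

class KimchiError(Exception):
    def __init__(self, code, args=None):
        self.code = code
        params = args or {}
        # Translate on the backend and fill in the parameters, so the
        # UI only has to display the resulting string.
        self.message = gettext.gettext(MESSAGES[code]) % params
        super(KimchiError, self).__init__(self.message)

# raise KimchiError('VMBACKERRO01', {'name': 'fedora-vm'})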
I would also like to create a file on the backend, __messages__.py, which
will contain all the messages used on the backend.
For example:
# __messages__.py
VM_START_FAILED = "<code>: Failed to start virtual machine %(name)s"
The constant names should be: <resource>_<action>_<error>
The <code> needs to be a value that is unique across all messages.
It is needed so that we can identify the error independently of the
message language.
That way a user running Kimchi in German can easily post his error on the
mailing list or github, and we can identify where the problem is by the
message code.
I will also add this code to the UI messages.
For the code, we can build it like: <resource><BACK|UI><type-of-error><ident>
And on backend:
VMBACKINFO01
VMBACKERRO01
VMBACKWARN01
And on UI:
VMUIINFO01
VMUIERRO01
VMUIWARN01
So we will also need to:
4) Create a __messages__ file with all messages used on the backend
5) Update the build process to add the messages in this file to the .po files
6) Update the UI messages to add a code to them
I made a small patch set with my proposal.
This patch set does not include items 4, 5 and 6.
In fact, I used an existing message for the test (as you can notice in the
patches).
Any comments/suggestions are welcome. =)
Aline Manera (2):
refactor exception: Create a common Exception to translate error
messages
refactor exception: Example of use
ShaoHe Feng (1):
refactor exception: Add a kimchi special error handler
src/kimchi/control/base.py | 5 ++++-
src/kimchi/exception.py | 29 ++++++++++++++++++++++-------
src/kimchi/model/vms.py | 4 +++-
src/kimchi/template.py | 14 ++++++++++++++
ui/js/src/kimchi.guest_main.js | 4 ++--
5 files changed, 45 insertions(+), 11 deletions(-)
--
1.7.10.4
[PATCH V3] Avoid useless libvirt error log produced by featuretest
by Aline Manera
From: Aline Manera <alinefm(a)br.ibm.com>
I am sending the next version of the apporc patch as he may be out for the
Chinese holidays.
v2 -> v3:
1. Any error in the feature tests will be logged into the kimchi error log
file. For that, silence the cherrypy screen log to avoid displaying the
message on screen.
v1 -> v2:
1. Only hide the protocol type error; leave other error messages as they are. (Thanks Cristian)
2. Unregister the error handler when necessary. (Thanks Cristian)
3. Keep the kimchi log "*** Running feature tests ***", etc. (Thanks Cristian)
4. Move the libvirt error handler function inside of Featuretest. (Thanks Aline)
v1:
Avoid useless libvirt error log produced by featuretest
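For reference, registering a libvirt error handler from Python looks
roughly like this (a sketch assuming libvirt-python; what the handler
does with the error, and where it is unregistered, is up to the patch):

import libvirt

def _quiet_error_handler(ctx, error):
    # 'error' is a tuple describing the libvirt error; send it to the
    # kimchi error log (or drop it) instead of letting libvirt print
    # it to stderr.
    pass

# Registered before the feature tests run and unregistered afterwards.
libvirt.registerErrorHandler(_quiet_error_handler, None)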
apporc (1):
Avoid useless libvirt error log produced by featuretests
src/kimchi/featuretests.py | 31 +++++++++++++++++++++++++++++--
1 file changed, 29 insertions(+), 2 deletions(-)
--
1.7.10.4
RFC: "Add disk to existing logical storage pool"
by Daniel H Barboza
Hello!
I am almost ready to deliver a first version of this feature, but now I
am having doubts about how it should work.
My implementation would use vgextend to add a disk to an existing LVM
pool; a rough sketch is below. The issue is that, in this process, all
existing data on the added partition will be deleted. Is this ok? We can
warn the user about it, of course, but I am not sure if this is the
intended design.
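A rough sketch of what that would mean in practice, assuming we shell
out to the LVM tools (the function name and arguments are hypothetical):

import subprocess

def extend_logical_pool(vg_name, device):
    # pvcreate wipes the device and labels it as an LVM physical
    # volume, so any existing data on it is lost.
    subprocess.check_call(['pvcreate', '-f', device])
    subprocess.check_call(['vgextend', vg_name, device])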
I spoke with Aline a while ago and she agreed that the implementation
would work similarly to what we have today when creating a new LVM
pool: we simply "do not care" about the potential data loss when adding
the disk to an existing pool.
Any thoughts?
thanks!