[Users] Quota for VMs created from templates
by Mitja Mihelič
Hi!
We are running engine version 3.3.0 on CentOS6 and we have come across a
problem, possibly a bug.
When a user creates a VM from a template, the template's quota is
assigned to the VM.
Here is the setup:
- quota is set to Enforced on the data center
- quota is created for template purposes (TemplateQuota)
- a template is created from a sealed VM with TemplateQuota assigned to it
- quota is created for a user, the user is set as its consumer
- the user creates a VM from the mentioned template and leaves the quota
unchanged
- the created VM consumes the user's storage quota but does not consume
their memory and CPU quota
This way a user can create and run an arbitrary number of VMs as long as
they stay within their storage quota.
No errors are reported in the logs.
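The mismatch is easy to spot mechanically. A minimal sketch in plain Python (the field names such as compute_quota are illustrative assumptions, not the engine's actual schema) that flags VMs whose compute quota is not the quota of the user who created them:

```python
# Hedged sketch: flag VMs showing the symptom above, i.e. a VM created from
# a template whose CPU/memory quota is still the template's quota rather
# than the creating user's quota. Field names are illustrative assumptions.
def find_mismatched_vms(vms):
    """Return names of VMs whose compute quota is not the creator's quota."""
    return [vm["name"] for vm in vms
            if vm["compute_quota"] != vm["creator_quota"]]

vms = [
    {"name": "vm1", "compute_quota": "TemplateQuota", "creator_quota": "UserQuota"},
    {"name": "vm2", "compute_quota": "UserQuota", "creator_quota": "UserQuota"},
]
print(find_mismatched_vms(vms))  # -> ['vm1']
```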
Kind regards,
Mitja Mihelic
--
Mitja Mihelič
ARNES, Tehnološki park 18, p.p. 7, SI-1001 Ljubljana, Slovenia
tel: +386 1 479 8877, fax: +386 1 479 88 78
[Users] A security issue in ovirt 3.2 on fedora - setup leaves a world-writable /etc/sysconfig/nfs
by Yedidyah Bar David
Hi all,
The setup code of ovirt releases 3.2 and older set wrong permissions for
several files, depending on options chosen during setup/upgrade.
The potentially-affected files are:
/etc/sysconfig/nfs
/etc/sysconfig/iptables
/etc/httpd/conf/httpd.conf.BACKUP*
/etc/httpd/conf.d/ssl.conf.BACKUP*
files under the ISO domain copied there by setup, by default under /var/lib/exports
The bug does not affect systems with linux kernels older than 3.1,
which included undocumented behavior that prevented it from occurring. Here is a
link to the kernel patch that dropped this undocumented behavior:
https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/comm...
In practice this means that among the distributions supported by ovirt,
only ovirt on Fedora is affected.
ovirt 3.3 is not affected as-is. Systems upgraded from 3.2 might still have
the files with wrong permissions, and should follow the actions below.
How to fix an existing system?
Change the permissions of existing files by running the following commands
as root:
chmod 600 /etc/sysconfig/nfs
chmod 600 /etc/sysconfig/iptables
chmod 600 /etc/httpd/conf/httpd.conf.BACKUP*
chmod 600 /etc/httpd/conf.d/ssl.conf.BACKUP*
find /var/lib/exports -perm /002 -type f -exec chmod 600 {} \;
For the last command, replace "/var/lib/exports" with the appropriate directory,
if you did not accept the default during setup and configured another directory to
hold the ISO domain.
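To double-check that the fix took, here is a small shell sketch (assuming the default /var/lib/exports path) that reports any remaining world-writable files:

```shell
# Hedged sketch: report any remaining world-writable files under the ISO
# domain directory. /var/lib/exports is the setup default; adjust it if
# you configured another directory, as noted above.
ISO_DIR=/var/lib/exports
bad=$(find "$ISO_DIR" -perm /002 -type f 2>/dev/null | wc -l)
if [ "$bad" -eq 0 ]; then
    echo "OK: no world-writable files under $ISO_DIR"
else
    echo "WARNING: $bad world-writable file(s) remain under $ISO_DIR"
fi
```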
A fix for this bug was pushed to gerrit:
http://gerrit.ovirt.org/19557
Security implications:
This bug allows local users to escalate their privileges by editing /etc/sysconfig/nfs
and waiting until the nfs service is started or stopped (e.g. on a reboot of the machine).
--
Didi
[Users] Ovirt 3.3 - How to write a network plugin
by Benoit ML
Hello,
I've seen that oVirt can support plugins ... so is there any documentation
about writing a plugin?
Can we write it in Python?
The reason is that we don't want to install openstack/neutron for network
management here ... it's too complicated and too heavy. In my opinion, it was
the wrong design choice for the openvswitch implementation.
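For Python customization without neutron, VDSM hooks are one existing route: plain executables dropped into /usr/libexec/vdsm/hooks/<event>/ (e.g. before_vm_start) that VDSM calls with the libvirt domain XML. A hedged sketch of the XML-editing part as a pure function (in a real hook you would obtain and write back the DOM via VDSM's hooking module, hooking.read_domxml()/write_domxml(); the MTU tweak is just an illustrative example):

```python
# Hedged sketch: the kind of domain-XML edit a VDSM network hook performs,
# written as a pure function so it can be tested standalone. In a real hook
# the DOM comes from VDSM's `hooking` module instead of a string argument.
import xml.dom.minidom

def set_iface_mtu(domxml_str, mtu):
    """Append an <mtu size='...'/> element to every <interface> element."""
    dom = xml.dom.minidom.parseString(domxml_str)
    for iface in dom.getElementsByTagName("interface"):
        m = dom.createElement("mtu")
        m.setAttribute("size", str(mtu))
        iface.appendChild(m)
    return dom.toxml()

xml_in = "<domain><devices><interface type='bridge'/></devices></domain>"
print(set_iface_mtu(xml_in, 9000))
```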
Thank you in advance.
Regards,
--
Benoit
[Users] Low cost OOB management device
by Mario Giammarco
Hello,
I would like to try oVirt power management. I have no iLO, so I would like to
buy a power management device.
Since this is just a test, I would like to stay in the 100 euro range.
Is that possible?
Thanks,
Mario
[Users] ovirt 3.3 and ilo2 for power fencing?
by Gianluca Cecchi
Hello,
is this combination supported?
I'm trying to configure Power Management, but I get this:
Power Management test failed for Host f18ovn03. Parse error: Ignoring
unknown option 'option=status'. Unable to connect/login to fencing
device.
What would be the correct setup?
Any way to test from command line too?
The hw is an HP blade BL685 g1 with iLO2 mgmt
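For command-line testing, the fence-agents package ships fence_ilo2, which can be run directly against the management processor; a hedged sketch (the address, login, and password are placeholders for your iLO2 details):

```shell
# Hedged sketch: query power state through iLO2 from the shell using the
# fence-agents CLI. "-o status" only reads the power state, so it should be
# safe to run against a live blade; address and credentials are placeholders.
fence_ilo2 -a ilo-f18ovn03.example.com -l admin -p secret -o status
```

If this works from the shell but not from the engine, the problem is likely in the options the engine passes to the agent rather than in connectivity.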
Thanks in advance.
Gianluca
[Users] upgrade bug(?) 3.1->3.3
by Jim Kinney
I just upgraded from engine 3.1 to 3.3 on my standalone centos 6.4 system.
The 3.3 engine-setup apparently did NOT include the requisite iptables
rules to open the ports SPICE needs to function.
I did a yum update for the ovirt-engine-setup to get the latest tool. Then
I ran engine-setup to let the process upgrade as required.
I added the missing ports from a second system I've not yet upgraded and
all is well. Just thought someone might want to look at a possible issue.
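For anyone hitting the same thing, a hedged sketch of the workaround on CentOS 6; the port range below (5634:6166) is the display-console range cited in some oVirt/RHEV documentation and is an assumption here; verify it against the iptables rules on a working host before relying on it:

```shell
# Hedged sketch: open the SPICE/VNC console port range and persist the rule
# (CentOS 6 style). The 5634:6166 range is an assumption; check a working
# host's /etc/sysconfig/iptables for the authoritative range.
iptables -I INPUT -p tcp --dport 5634:6166 -j ACCEPT
service iptables save
```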
--
James P. Kinney III
Every time you stop a school, you will have to build a jail. What you gain
at one end you lose at the other. It's like feeding a dog on his own tail.
It won't fatten the dog.
- Speech 11/23/1900 Mark Twain
http://heretothereideas.blogspot.com/
[Users] Disk migration among glusterfs volumes
by Mario Giammarco
Hello,
I have a vm disk in glusterfs volume X. Now I have a replicated gluster
volume Y. I would like to move the vm disk to it, but when I go to the "move"
menu I cannot choose the target and I see a warning: "some disks cannot be moved".
How can I move it?
Thanks,
Mario
[Users] Storage error after NFS reload
by Juan Pablo Lorier
Hi,
I've made a change to the host where I run NFS for the ISO images, and after
reloading the NFS config, oVirt started printing this to the console and the
log every few seconds:
<11>vdsm Storage.DomainMonitorThread ERROR Error while collecting domain
0d22a7d7-a7e0-4c17-b132-5764b7940a70 monitoring information
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/domainMonitor.py", line 186, in _monitorDomain
    self.domain.selftest()
  File "/usr/share/vdsm/storage/sdc.py", line 49, in __getattr__
    return getattr(self.getRealDomain(), attrName)
  File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
    return self._cache._realProduce(self._sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 121, in _realProduce
    domain = self._findDomain(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 152, in _findDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: (u'0d22a7d7-a7e0-4c17-b132-5764b7940a70',)
Is there a way to stop this from happening without having to bring down
the host?
Regards,
[Users] oVirt Roadmap feature requests - Summary
by Itamar Heim
So, there was a lot of input following my question for 3.4 planning on "what
do you want next"[1].
I've aggregated the replies here for easier processing going forward:
Infra
- ability to give permissions to group of objects (namely, VMs)
- support users/groups in oVirt DB as an authentication provider
- support keystone as an authentication provider
- kerberos login to webadmin/user portal
- stateless host (pxe boot, no local disks)
- detect updates and update host accordingly
- embedded pxe/tftp boot server, setup to deploy the nodes
- snmp & snmp trap monitoring
Virt
- template versions (ability to update template of a pool)
- pci passthrough for devices like Telephone modem, PCI-express cards,
Graphic cards
- usbhost hook as supported feature (hardware license tokens)
- ability to provide libvirt features directly[2]
- dependencies between VMs (start db vm before webapp vm)
- Allow to clone a VM without snapshot/template
- Integrate v2v into engine
- oVirt guest agent for Ubuntu/openSUSE/SLES/Debian
- compatibility with OVF 2.0.1
- solaris as supported guest OS
Network
- private network between VMs
- Network NAT for VMs (with built-in DHCP).
- openvswitch support (in 3.3, via neutron?)
- nic ordering
Storage
- import existing data domain
- live merge (delete snapshot)
- ability to resize a storage domain lun
- multiple storage domains (local/gluster/FC)
- ISO and export domains on FC/iSCSI
- Upload ISOs within the GUI to ISO domain
- Use an existing share for ISO domain
- A backup and restore option
- iSCSI EqualLogic SAN support or use standard iscsi tools/configuration
- ATA over ethernet (AoE) as an optional storage transport
- DR
- DRBD, either with CLVM or perhaps GFS
- live disk resize
- disk ordering
SLA
- positive/negative affinity between group of VMs
- power saving policy moving hosts to sleep
- time based policy (SLA during the day, power saving at night)
- collect and show power consumption of hosts
- allow to give weights to prefer "faster" hosts to other hosts
- application level HA (monitor the service inside the VM)
Other
- Zabbix monitoring
[1] http://lists.ovirt.org/pipermail/users/2013-August/015807.html
[2] http://lists.ovirt.org/pipermail/engine-devel/2013-August/005364.html