I saw that openldap is now listed as a provider when invoking
engine-manage-domains, and I'm eager to find more information about this.
Does anyone know if there is any updated documentation floating around?
Found this: http://www.ovirt.org/LDAP_Quick_Start
But the article seems only half-finished.
Commands of the form:
bash oat_oem ...
are to be run on the hypervisor host side, correct?
Where can I find these packages for CentOS 6?
Thanks in advance
PS: I took the time to correct a typo in the client section, where it said
"Yum Install oat server package from fedora19 repository" instead of
"Yum Install oat client package from fedora19 repository".
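For what it's worth, the provider list is managed with engine-manage-domains on the engine host (not the hypervisors). A rough sketch from memory of how I'd poke at it; the exact flag spellings should be checked against `engine-manage-domains --help` on your engine before relying on them:

```shell
# Hypothetical sketch: flag names are from memory, verify with --help first.
# Guarded so it degrades gracefully on a machine without the engine installed.
if command -v engine-manage-domains >/dev/null 2>&1; then
    engine-manage-domains -action=list
    # Adding an OpenLDAP domain would then look roughly like:
    #   engine-manage-domains -action=add -domain=example.com \
    #       -provider=openldap -user=admin -interactive
else
    echo "engine-manage-domains not found; run this on the oVirt engine host"
fi
```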
- at vm.start() --vm-os-boot doesn't send the order of devices #921464
- rephrase status command help
- add option to retrieve system summary #854369
- accept IP address as FQ argument rather than string #886067
- fix broken pipe
- Bad error message when trying to create a new Role #908284
- add flag --dont-validate-cert-chain #915231
- collection-based-options could be passed in 2 ways #859684
- make NO_SUCH_ACTION error a bit more clear
- ovirt-cli DistributionNotFound exception on f18 #881011
- ovirt-shell misleading help for command "connect" #907943
- show event -id accept strings instead of numeric values #886786
- Use vncviewer passwordFile instead of passwdInput
More details can be found at .
RedHat, ENG-Virtualization R&D
affected version: oVirt 3.3 (ManagementNode)
how to reproduce:
set up two data centers with local storage, and add a master data storage
domain to each of them.
add another data center with local storage, and try to add another
master storage domain.
expected result: the data domain gets created just like the 2 before
actual result: ovirt management complains that the data domain
already exists, which is not true.
This behaviour was introduced with version 3.3; we didn't see it
in version 3.2, regardless of node versions.
I would first like to know whether anybody else can reproduce this.
If so, I will create a Bugzilla entry.
To try to improve 3.4 planning over the wiki approach used for 3.3, I've
placed the items I collected on the users list last time into a Google doc.
Now, the main thing each item needs is a requirements owner, a devel owner
and a testing owner (the devel owner is what's really needed to make it
happen, but all are important).
Then we need an oVirt BZ for each item, and for some a feature page.
I also added columns indicating whether the item will require an API design
review and a GUI design review.
This list is of course just a starting point; items on it still need to
get ownership. I expect more items will be added as well, as long as they
have owners, etc.
The doc is public read-only; please request read-write access to be able
to edit it.
Feel free to ask questions, etc.
I think that one of the best things about Proxmox is the ability to
connect to any of the hosts via the web and be able to administer the whole
infrastructure from there. This avoids the single point of failure that
the engine represents in the oVirt platform today.
I agree with Jakub that being able to mix Intel and AMD hosts would be
great, so we could use all of the servers in a DC; the engine could
migrate VMs to the same kind of processor if available, and if not,
migrate to any available host and warn about the performance degradation.
I'm still a rookie with oVirt, so at this time I don't know all the
features well enough to make further comments. I think you have a great
product, and the best part is the speed at which it's improving.
A great week at linuxcon/cloudopen/kvmforum/ovirt conference at
Edinburgh, and some other nice feedback on oVirt, meriting a special
edition of this update.
Feel free to chime in with your feedback as well.
It is sometimes hard to remember that we only released oVirt 3.0 last year,
and that it takes time to get traction.
For example, see slide 3 in the presentation I gave on oVirt Updates
for the clear trend in adoption (via total users mailing list addresses).
Similarly, in Livnat's oVirt-intro session at CloudOpen, with ~75 people
in the room, almost all raised their hands at her question "who knows/has
We had a plethora of topics (same link as ), but I wanted to highlight:
During Livnat's talk, a question was raised wrt SUSE support.
Asking for more details, we got ~"I'm running oVirt in testing and RHEV
in production. with 200 SUSE 10/11 servers. I just want the guest-agent
to have their ip address in the gui".
So first of all, good to know SUSE runs as a guest without issues.
Also, the guest-agent itself is just a Python script that should simply
work; only some packaging is required, so we're looking to revamp this
on build.opensuse.org, and hope some SUSE users will help us with
closing and testing this one.
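For anyone curious what that one requested feature involves: at its core the agent just enumerates the guest's addresses and reports them to the engine. A minimal, hypothetical sketch of the collection half, using only the standard library (the function name is mine, not the agent's):

```python
import socket

def collect_ipv4_addresses():
    """Gather the guest's IPv4 addresses - roughly the data
    ovirt-guest-agent reports so they show up in the GUI
    (illustrative only, not the agent's actual code)."""
    addrs = set()
    try:
        # getaddrinfo on the local hostname yields the configured addresses
        for info in socket.getaddrinfo(socket.gethostname(), None,
                                       socket.AF_INET):
            addrs.add(info[4][0])
    except socket.gaierror:
        # hostname not resolvable: report nothing rather than crash
        pass
    return sorted(addrs)
```

The real agent also ships the result over a virtio-serial channel to VDSM; the sketch only shows the gathering side.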
In related SUSE news, I just saw this posted "After looking into oVirt
it looks absolutely fantastic. Might look into seeing if I can help with
porting this to openSUSE. What kind of work is involved in the porting
of the application like this?"
Leonardo from the Eldorado research center in Brazil gave a lecture on
their work to add PPC support to oVirt. Paul Mackerras (KVM PPC
developer) from IBM and Alexander Graf attended and gave a lot of
feedback. There is also potential interest from other PPC vendors, for
whom support should hopefully require mostly config-level changes.
Keele university presented their path to oVirt. Always nice to hear how
our project is used, and we actually asked them a lot of questions on
why/how they use it the way they do. They also did a case study with
Dave Neary, something I hope to see more of from oVirt community members.
Also, always nice to hear: in an irc chat on #ovirt: "I looked at ovirt
about 4 months ago and when I came back a few days ago I was blown away
at how far it had come! The devs have done an awesome job".
All KVM Forum/oVirt session slides (and YouTube recordings) should be
available:
- oVirt Updates session by Itamar Heim
I would like to propose that the community join in creating a group for testing oVirt
releases and oVirt bug fixes. I suggest creating an ovirt-qe mailing list and setting
it as the default QE assignee for oVirt bugs.
The list could be used for coordinating testing efforts, getting notified about new oVirt
bugs, planning test days, proposing test cases, discussing Jenkins job implementation, and so on.
Even a small group of people testing milestone releases would also help us achieve better release testing.
What do you think?
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
I have a few VMs whose root LVM file systems are corrupted because of
improper shutdown. I experienced an issue described here which caused a node
But now I'm experiencing the same problem with other VMs. A DB VM was in
single-user mode when it was powered off. I created a snapshot to clone it.
I powered the CentOS 6 VM back on; it could not execute the mount command
on boot and dropped me to a root maintenance prompt.
Running fsck comes back with way too many errors, many of them about files
that shouldn't even have been open: countless inode issues and cloned
multiply-claimed blocks. Running fsck -y seems to fix the file system and
it comes back as clean, but upon restart the VM loads the file system as
read-only, and when trying to remount it as rw, the mount command throws an
error.
I don't know if it's the new version of ovirt-engine. I'm afraid to shut
down any VM using the oVirt UI, as they don't always come back up. I have
tried repairing the file system with a live CD and such, to no avail. Given
that a lot of the corrupt files are plain static HTML or archived tar
files, I assume it has something to do with oVirt; the only data that
should be getting corrupted is live application data (open files).
Please advise on how to proceed. I can provide whatever logs you may need.
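One general precaution while you debug this, independent of what oVirt is doing: run e2fsck in no-change mode first, so you can see the damage before -y rewrites anything. A demonstration on a throwaway image file (assuming e2fsprogs is installed; on a real VM you would point it at the unmounted root device from a rescue environment instead, and take an image backup before any -y run):

```shell
# Build a tiny scratch ext2 image and check it read-only: in -n mode
# e2fsck reports problems but never modifies the filesystem.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=4 status=none
mke2fs -q -F "$img"          # -F: operate on a regular file, not a device
e2fsck -n -f "$img"          # -n: answer "no" to all fixes; -f: force check
echo "exit status: $?"       # 0 means the filesystem is clean
rm -f "$img"
```

The same -n/-f invocation against your corrupted root LV gives you a full damage report to attach to a bug, without making the recovery situation any worse.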