
Hi Itamar,

On Thu, Oct 31, 2013 at 7:44 PM, Itamar Heim <iheim@redhat.com> wrote:
> please check your spam folder and flag as not spam...
Yes, that turned out to be the problem, and I've reported it as not spam...
>> - My oVirt Live distro is installed on disk, and I'm able to get it up and running with an engine-cleanup, some manual rm's, and an engine-setup. However, when I reboot, the "local_host" host is in a failed state and refuses to be resurrected no matter what. Is there a way to get around that somehow?
> logs may help.
I was using an older version of oVirt Live where these bugs were present. With the newer version, the host does come up on reboot, provided I manually put it in maintenance before rebooting. I haven't tried rebooting without putting it in maintenance.
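For what it's worth, that maintenance step can also be scripted against the engine's REST API (the /api/hosts/<id>/deactivate action), which makes it harder to forget before a reboot. This is only a sketch: the engine URL, host ID, and credentials are made-up placeholders, and a real call would also need TLS certificate handling.

```python
# Sketch: put a host into maintenance via the oVirt engine REST API
# before rebooting. Engine URL, host ID, and credentials below are
# hypothetical placeholders, not values from this thread.
import base64
import urllib.request

def deactivate_request(engine_url, host_id, user, password):
    """Build (but do not send) the POST that asks the engine to move
    a host into maintenance via /api/hosts/<id>/deactivate."""
    url = "%s/api/hosts/%s/deactivate" % (engine_url.rstrip("/"), host_id)
    req = urllib.request.Request(url, data=b"<action/>", method="POST")
    req.add_header("Content-Type", "application/xml")
    # The engine's REST API accepts HTTP basic auth
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    return req

req = deactivate_request("https://engine.example.com",
                         "00000000-aaaa-bbbb-cccc-000000000000",
                         "admin@internal", "secret")
print(req.full_url)
# urllib.request.urlopen(req) would actually send it; omitted here.
```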
>> - My second question is: what do I have to tweak to make all the cores available on the "local_host" host? At the moment, engine-setup automatically sets it to have only 1 core, while it actually has 8.
> that shouldn't happen - it should auto-learn from the host. what does vdsClient -s 0 getVdsCaps return?
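(Aside for the archives: getVdsCaps prints key = value pairs, so the CPU topology fields can be pulled out with a small script. The sample text below is an assumption about the exact line format vdsClient prints, so adjust the parsing as needed.)

```python
# Sketch: extract core counts from `vdsClient -s 0 getVdsCaps` output.
# The sample below mimics the "key = 'value'" lines; the exact
# formatting is an assumption.
import re

def parse_caps(text):
    """Parse "key = 'value'" lines into a dict of strings."""
    caps = {}
    for line in text.splitlines():
        m = re.match(r"\s*(\w+)\s*=\s*'?([^']*)'?\s*$", line)
        if m:
            caps[m.group(1)] = m.group(2)
    return caps

sample = """
cpuSockets = '1'
cpuCores = '8'
cpuThreads = '8'
"""
caps = parse_caps(sample)
print(caps["cpuCores"])  # this is what the engine should auto-learn
```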
The newest version of oVirt Live also seems to have this bug resolved. However, in my environment it seems to have a new, crippling bug: every virtual machine I create hangs indefinitely, either at the gPXE stage or at the "Booting from CD" stage.

I extracted the qemu-kvm command line, and I was able to figure out that if I remove the -uuid option, the machines boot rather than hanging. It seems unlikely that the -uuid option itself is causing the hang; more likely, something that option makes qemu-kvm do is causing it. The engine does not appear to be aware that the machine is hung - to it, everything looks perfectly sane, and there are no logs to report. I can send you an strace of the qemu-kvm process if you think that would help.

Thanks!
iordan
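P.S. For anyone who wants to repeat the -uuid experiment: here's a minimal sketch of rebuilding an extracted qemu-kvm command line without the option and its argument. The command line shown is a shortened, made-up example, not the one from my environment.

```python
# Sketch: drop a flag and its argument (e.g. -uuid <value>) from an
# extracted qemu-kvm argv before re-running it by hand.
def strip_option(argv, opt):
    """Return argv with `opt` and the argument that follows it removed."""
    out = []
    skip = False
    for arg in argv:
        if skip:           # this is the argument of the dropped option
            skip = False
            continue
        if arg == opt:     # drop the option itself, flag the next arg
            skip = True
            continue
        out.append(arg)
    return out

# Shortened, hypothetical command line for illustration only
cmd = ["qemu-kvm", "-m", "1024",
       "-uuid", "00000000-0000-0000-0000-000000000000",
       "-boot", "d"]
print(strip_option(cmd, "-uuid"))
```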