
Hello,

Please cc me on any reply, because I tried to subscribe to this list twice and did not receive any confirmation, so I assume I'm not subscribed.

I'm using oVirt Live as a test framework for some Android SPICE client work I'm doing, and I have a couple of questions:

- My oVirt Live distro is installed on-disk, and I'm able to get it up and running with an engine-cleanup, some manual rm's, and an engine-setup. However, when I reboot, the "local_host" host is in a failed state and refuses to be resurrected no matter what. Is there a way to get around that somehow?

- My second question is, where do I have to tweak to make all the cores available on the "local_host" host? At the moment, engine-setup automatically sets it to have only 1 core, and it has 8.

Any help is much appreciated, and thanks in advance!

iordan

--
The conscious mind has only one thread of execution.
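For reference, the reset cycle described above boils down to roughly the following on the oVirt Live host (the exact files removed by the "manual rm's" were not specified, so that step is only a placeholder):

    engine-cleanup    # tear down the existing engine configuration
    # ... manual removal of leftover files (not specified above) ...
    engine-setup      # re-run setup to bring the all-in-one engine back up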

On 10/30/2013 11:20 PM, i iordanov wrote:
> Hello,
> Please cc me on any reply, because I tried to subscribe to this list twice and did not receive any confirmation, so I assume I'm not subscribed.

please check your spam folder and flag as not spam...

> I'm using oVirt Live as a test framework for some Android SPICE client work I'm doing, and I have a couple of questions:
>
> - My oVirt Live distro is installed on-disk, and I'm able to get it up and running with an engine-cleanup, some manual rm's, and an engine-setup. However, when I reboot, the "local_host" host is in a failed state and refuses to be resurrected no matter what. Is there a way to get around that somehow?

logs may help.

> - My second question is, where do I have to tweak to make all the cores available on the "local_host" host? At the moment, engine-setup automatically sets it to have only 1 core, and it has 8.

that shouldn't happen - it should auto-learn from the host. What does vdsClient -s 0 getVdsCaps return?

thanks,
Itamar

> Any help is much appreciated, and thanks in advance!
> iordan
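For reference, the capabilities query mentioned above can be run on the host and filtered for the CPU topology fields, roughly like this (field names quoted from memory, so treat them as approximate):

    vdsClient -s 0 getVdsCaps | grep -iE 'cpuCores|cpuSockets|cpuThreads'
    # should report the real topology, e.g. cpuCores = '8',
    # rather than the single core shown in the engine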

Hi Itamar,

On Thu, Oct 31, 2013 at 7:44 PM, Itamar Heim <iheim@redhat.com> wrote:
> please check your spam folder and flag as not spam...

Yes, that turned out to be the problem, and I've reported it as not spam...

>> - My oVirt Live distro is installed on-disk, and I'm able to get it up and running with an engine-cleanup, some manual rm's, and an engine-setup. However, when I reboot, the "local_host" host is in a failed state and refuses to be resurrected no matter what. Is there a way to get around that somehow?
> logs may help.

I was using an older version of oVirt Live where these bugs were present; the host does come up on reboot now if I manually put it in maintenance before the reboot. I haven't tried not putting it in maintenance.

>> - My second question is, where do I have to tweak to make all the cores available on the "local_host" host? At the moment, engine-setup automatically sets it to have only 1 core, and it has 8.
> that shouldn't happen - it should auto-learn from the host. What does vdsClient -s 0 getVdsCaps return?

The newest version of oVirt Live also seems to have this bug resolved.

However, in my environment it seems to have a new crippling bug. All virtual machines I create hang indefinitely at the gPXE stage or at the "Booting from CD" stage. I extracted the qemu-kvm command line, and I was able to figure out that if I remove the -uuid option, the machines proceed rather than hanging. It seems unlikely that it's simply the -uuid option that's causing the hang; more likely something the option causes qemu-kvm to do is causing it. The engine does not appear to be aware that the machine is in a hung state - to it, everything seems perfectly sane. There are no logs to report. I can send you an strace of the qemu-kvm process if you think that'll help.

Thanks!
iordan
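As an illustration of the kind of minimal comparison that could isolate the option (this is not the actual extracted command line, just a hypothetical reduction with placeholder paths and values):

    # hangs at the boot menu in the affected environment
    qemu-kvm -m 1024 -smp 1 -uuid "$(uuidgen)" -cdrom /path/to/boot.iso -vnc :1

    # the identical invocation without -uuid boots normally
    qemu-kvm -m 1024 -smp 1 -cdrom /path/to/boot.iso -vnc :1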

On Fri, Nov 01, 2013 at 01:03:45PM -0400, i iordanov wrote:
> Hi Itamar,
>
> On Thu, Oct 31, 2013 at 7:44 PM, Itamar Heim <iheim@redhat.com> wrote:
>> please check your spam folder and flag as not spam...
>
> Yes, that turned out to be the problem, and I've reported it as not spam...
>
>>> - My oVirt Live distro is installed on-disk, and I'm able to get it up and running with an engine-cleanup, some manual rm's, and an engine-setup. However, when I reboot, the "local_host" host is in a failed state and refuses to be resurrected no matter what. Is there a way to get around that somehow?
>> logs may help.
>
> I was using an older version of oVirt Live where these bugs were present; the host does come up on reboot now if I manually put it in maintenance before the reboot. I haven't tried not putting it in maintenance.
>
>>> - My second question is, where do I have to tweak to make all the cores available on the "local_host" host? At the moment, engine-setup automatically sets it to have only 1 core, and it has 8.
>> that shouldn't happen - it should auto-learn from the host. What does vdsClient -s 0 getVdsCaps return?
>
> The newest version of oVirt Live also seems to have this bug resolved.
>
> However, in my environment it seems to have a new crippling bug. All virtual machines I create hang indefinitely at the gPXE stage or at the "Booting from CD" stage. I extracted the qemu-kvm command line, and I was able to figure out that if I remove the -uuid option, the machines proceed rather than hanging. It seems unlikely that it's simply the -uuid option that's causing the hang; more likely something the option causes qemu-kvm to do is causing it.

How are you sure that -uuid is the trigger for this qemu bug? Could you copy here the shortest command line that reproduces the bug?

> The engine does not appear to be aware that the machine is in a hung state - to it, everything seems perfectly sane. There are no logs to report.

That's where our new watchdog feature comes into play ;-)

> I can send you an strace of the qemu-kvm process if you think that'll help.

I suppose that a qemu mailing list could provide more help. What is your host kernel and exact qemu version?

Dan
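For completeness, the version details asked for above can be gathered with something like the following (package names assume an rpm-based host such as oVirt Live, and may differ):

    uname -r          # host kernel version
    rpm -q qemu-kvm   # exact qemu package version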

Hi Dan,

I reproduced this bug within oVirt Live 1.1. One difference from a traditional setup in my case was that I was running the oVirt Live distro within a VM with nested virtualization enabled. I don't think that this is the cause, because I was able to run VMs successfully when I removed the -uuid option, but it's worth mentioning.

On Sat, Nov 2, 2013 at 11:24 AM, Dan Kenigsberg <danken@redhat.com> wrote:
> How are you sure that -uuid is the trigger for this qemu bug? Could you copy here the shortest command line that reproduces the bug?

When I get a chance, I will get that VM running again and will give you the shortest command line that reproduces the hang.

> I suppose that a qemu mailing list could provide more help. What is your host kernel and exact qemu version?

I'll get that for you in the follow-up as well.

Thanks!
iordan

--
The conscious mind has only one thread of execution.
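As an aside on the nested-virtualization setup mentioned above, a quick way to confirm whether the outer host actually exposes nested KVM to the guest is something like the following (Intel paths shown; on AMD the module is kvm_amd and the CPU flag is svm; this is a generic sketch, not oVirt-specific guidance):

    # on the outer (physical) host: should print Y or 1 if nesting is enabled
    cat /sys/module/kvm_intel/parameters/nested

    # inside the oVirt Live VM: the vmx flag must be visible for KVM to work
    grep -c vmx /proc/cpuinfo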

Hi Itamar,

Just a follow-up to our previous conversation. I believe all my troubles were due to running the VMs inside an oVirt node which was itself a VM (with nested virtualization). I reworked my setup so that the node is now installed directly onto hardware, and everything is operating lightning fast with no hangups.

Thanks for the quick reply and take care!

Cheers,
iordan

On Sat, Nov 2, 2013 at 10:46 AM, i iordanov <iiordanov@gmail.com> wrote:
> Hi Dan,
>
> I reproduced this bug within oVirt Live 1.1. One difference from a traditional setup in my case was that I was running the oVirt Live distro within a VM with nested virtualization enabled. I don't think that this is the cause, because I was able to run VMs successfully when I removed the -uuid option, but it's worth mentioning.
>
> On Sat, Nov 2, 2013 at 11:24 AM, Dan Kenigsberg <danken@redhat.com> wrote:
>> How are you sure that -uuid is the trigger for this qemu bug? Could you copy here the shortest command line that reproduces the bug?
>
> When I get a chance, I will get that VM running again and will give you the shortest command line that reproduces the hang.
>
>> I suppose that a qemu mailing list could provide more help. What is your host kernel and exact qemu version?
>
> I'll get that for you in the follow-up as well.
>
> Thanks!
> iordan
>
> --
> The conscious mind has only one thread of execution.

--
The conscious mind has only one thread of execution.
participants (3):
- Dan Kenigsberg
- i iordanov
- Itamar Heim