
Hi list!

I've been playing lately with nested VMs in a configuration where I have a VM for the ovirt-engine and another to be used as a host (both created with virt-manager).

After adding the virtualized host to the engine, I could start VMs, but their boot process was always hanging or kernel panicking. The CPU type I had set for the virtualized host was Sandy Bridge.

After some trial and error, I changed the CPU type of the virtualized host to Conroe (as well as the cluster CPU type), and suddenly I can run VMs through installation and normal usage without any kind of trouble.

My question is: any similar experiences? Or is it possible that in the meantime some yum update to qemu-kvm fooled me?

Best,
Toni
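For anyone reproducing this setup: before blaming the guest CPU model, it's worth confirming that nested virtualization is actually enabled on the physical host. A minimal sketch using the standard KVM sysfs parameter (kvm_intel checked first, kvm_amd as a fallback):

```shell
# Report whether nested virtualization is enabled on this host.
# "Y" or "1" means the kvm module was loaded with nested support on.
nested_param() {
  for f in /sys/module/kvm_intel/parameters/nested \
           /sys/module/kvm_amd/parameters/nested; do
    if [ -r "$f" ]; then
      cat "$f"
      return 0
    fi
  done
  echo "unknown"   # no kvm module loaded (or not running on bare metal)
}

nested_param
```

If it prints N or 0, reloading the module with `nested=1` (e.g. via an options file in /etc/modprobe.d) turns it on; that step is omitted here since it needs root and a module reload with no VMs running.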

----- Original Message -----
From: "Antoni Segura Puimedon" <asegurap@redhat.com>
To: users@ovirt.org
Sent: Thursday, August 16, 2012 10:04:40 AM
Subject: [Users] Nested kvms
Gal, is it a bug that you already solved (the vendor parsing issue)? I just can't find it in gerrit.
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

On 16/08/2012 10:12, Igor Lvovsky wrote:
Gal, is it a bug that you already solved (the vendor parsing issue)? I just can't find it in gerrit.
Do you mean this one? http://gerrit.ovirt.org/5035 (I can't verify it right now, as gerrit.ovirt is not responding).
Gal.

----- Original Message -----
From: "Gal Hammer" <ghammer@redhat.com>
To: "Igor Lvovsky" <ilvovsky@redhat.com>
Cc: apuimedo@redhat.com, users@ovirt.org
Sent: Thursday, August 16, 2012 11:02:27 AM
Subject: Re: [Users] Nested kvms
Do you mean this one? http://gerrit.ovirt.org/5035 (I can't verify it right now, as gerrit.ovirt is not responding).
Gal.
Yes, I think it is. Toni, did you have this patch in your vdsm when you tried nested hosts?
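If the vendor-parsing patch is in question, one way to check is to look at what vdsm actually reports to the engine. A hedged sketch (vdsClient shipped with vdsm of this era; the exact field names are an assumption based on typical getVdsCaps output):

```shell
# Print the CPU fields vdsm reports to the engine -- the values that a
# vendor-parsing bug would skew. Falls back gracefully off an oVirt host.
report_cpu_caps() {
  if command -v vdsClient >/dev/null 2>&1; then
    vdsClient -s 0 getVdsCaps 2>/dev/null | grep -iE 'cpu(Model|Flags|Cores)' \
      || echo "vdsm not responding"
  else
    echo "vdsClient not available on this machine"
  fi
}

report_cpu_caps
```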

On Thu, 2012-08-16 at 07:17 -0400, Igor Lvovsky wrote:
Yes, I think it is. Toni, did you have this patch in your vdsm when you tried nested hosts?
I was using the vdsm from the RPMs in build SI13.2.
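To tell whether a given build includes a patch, checking the installed package version is the first step. A trivial sketch, assuming an RPM-based host (as oVirt hosts were):

```shell
# Report the installed vdsm package version, or say why we can't.
installed_vdsm() {
  if command -v rpm >/dev/null 2>&1; then
    rpm -q vdsm 2>/dev/null || echo "vdsm not installed"
  else
    echo "rpm not available on this machine"
  fi
}

installed_vdsm
```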

I have also been working with a nested kvm topology. However, I'm using AMD. I posted a thread to this group months ago about running the node and the engine on the same physical box.
When selecting the VM setup I'm using "kvm", not "qemu". When I select "Copy host CPU configuration" within virt-manager, it sees Opteron_G3. The physical CPU is an AMD Phenom II X4 965.
The virtual host is running Mint (Ubuntu 12.04) with the Cinnamon desktop.
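virt-manager's "Copy host CPU configuration" just copies what libvirt detected for the host, so when the copied model looks surprising (Opteron_G3 for a Phenom, or Sandy Bridge for a Xeon), the place to look is libvirt's view of the CPU. A sketch, assuming virsh is installed:

```shell
# Show the host <cpu> section from libvirt's capabilities XML -- the
# model/vendor that "Copy host CPU configuration" copies into a guest.
host_cpu_caps() {
  if command -v virsh >/dev/null 2>&1; then
    caps=$(virsh capabilities 2>/dev/null | sed -n '/<cpu>/,/<\/cpu>/p')
    if [ -n "$caps" ]; then
      printf '%s\n' "$caps"
    else
      echo "libvirt not responding"
    fi
  else
    echo "virsh not available on this machine"
  fi
}

host_cpu_caps
```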

On Thu, 2012-08-16 at 06:17 -0500, Brent Bolin wrote:
When selecting the VM setup I'm using "kvm", not "qemu". When I select "Copy host CPU configuration" within virt-manager, it sees Opteron_G3. The physical CPU is an AMD Phenom II X4 965.
I also used "Copy host CPU configuration"; from a Xeon it went to Sandy Bridge, which AFAIK could very well be the correct Xeon generation of the host.
participants (4)
- Antoni Segura Puimedon
- Brent Bolin
- Gal Hammer
- Igor Lvovsky