
Hi everyone,
From reading through the mailing list, it does appear that it's possible to have the oVirt nodes/hosts be VMware virtual machines, once I enable the appropriate settings on the VMware side. All seems to have gone well: I can see the hosts in the oVirt interface, but when I attempt to create and start a VM it never gets past printing the SeaBIOS version and the machine UUID to the screen/console. It doesn't appear to try to boot from the hard disk or an ISO that I've attached.
Has anyone else encountered similar behaviour? Are there additional debug logs I can look at or enable to help further diagnose what is happening?
Thanks,
Mark

On 11 May 2017, at 19:52, Mark Duggan <mduggan@gmail.com> wrote:
Hi everyone,
From reading through the mailing list, it does appear that it's possible to have the ovirt nodes/hosts be VMware virtual machines, once I enable the appropriate settings on the VMware side. All seems to have gone well, I can see the hosts in the ovirt interface, but when I attempt to create and start a VM it never gets past printing the SeaBios version and the machine UUID to the screen/console. It doesn't appear to try to boot from the hard disk or an ISO that I've attached.
Has anyone else encountered similar behaviour?
I wouldn't think you can even get that far. It may work with full emulation (non-KVM), but we more or less enforce KVM in oVirt, so some changes are likely needed. Of course, even if you succeed it's going to be hopelessly slow (or maybe it is indeed working and just runs very slowly).
Nested on a KVM hypervisor runs OK.
Thanks,
michal
Are there additional debug logs I can look at or enable to help further diagnose what is happening?
Thanks
Mark
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
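A minimal sketch for checking Michal's point on the host itself, assuming a stock CentOS 7 node with libvirt installed (generic commands, not specific to this setup):

# a non-zero count means VT-x/AMD-V is visible inside the ESXi guest
grep -cE 'vmx|svm' /proc/cpuinfo
# /dev/kvm only exists if the kvm and kvm_intel (or kvm_amd) modules loaded successfully
ls -l /dev/kvm
lsmod | grep kvm
# libvirt's own capability self-check (shipped with libvirt on EL7)
virt-host-validate qemu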

Michal,
I certainly seem to be able to get that far; I can provide screen grabs if you think it'd be useful.
I'm OK with hopelessly slow, for now. It's really just to POC the interface and workflows. I'm hoping to get my hands on a couple of servers soon so that I can do a more full-blooded test.
Mark
On May 12, 2017 07:06, "Michal Skrivanek" <michal.skrivanek@redhat.com> wrote:
On 11 May 2017, at 19:52, Mark Duggan <mduggan@gmail.com> wrote:
Hi everyone,
From reading through the mailing list, it does appear that it's possible to have the ovirt nodes/hosts be VMware virtual machines, once I enable the appropriate settings on the VMware side. All seems to have gone well, I can see the hosts in the ovirt interface, but when I attempt to create and start a VM it never gets past printing the SeaBios version and the machine UUID to the screen/console. It doesn't appear to try to boot from the hard disk or an ISO that I've attached.
Has anyone else encountered similar behaviour?
I wouldn’t think you can even get that far. It may work with full emulation (non-kvm) but we kind of enforce it in oVirt so some changes are likely needed. Of course even if you succeed it’s going to be hopelessly slow. (or maybe it is indeed working and just runs very slow)
Nested on a KVM hypervisor runs ok
Thanks, michal
Are there additional debug logs I can look at or enable to help further diagnose what is happening?
Thanks
Mark
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

On 12 May 2017, at 13:16, Mark Duggan <mduggan@gmail.com> wrote:
Michal
I certainly seem to be able to get that far; I can provide screen grabs if you think it'd be useful.
I'm OK with hopelessly slow, for now. It's really just to POC the interface and workflows. I'm hoping to get my hands on a couple of servers soon so that I can do a more full-blooded test.
Hi Mark,
ok. how long did you wait for anything to happen?
did you install any vdsm hooks on the host?
how does the VM XML look? (you can see it dumped in vdsm.log or with "virsh -r dumpxml <vm>")
Thanks,
michal
Mark
On May 12, 2017 07:06, "Michal Skrivanek" <michal.skrivanek@redhat.com> wrote:
On 11 May 2017, at 19:52, Mark Duggan <mduggan@gmail.com> wrote:
Hi everyone,
From reading through the mailing list, it does appear that it's possible to have the oVirt nodes/hosts be VMware virtual machines, once I enable the appropriate settings on the VMware side. All seems to have gone well, I can see the hosts in the oVirt interface, but when I attempt to create and start a VM it never gets past printing the SeaBIOS version and the machine UUID to the screen/console. It doesn't appear to try to boot from the hard disk or an ISO that I've attached.
Has anyone else encountered similar behaviour?
I wouldn't think you can even get that far. It may work with full emulation (non-KVM), but we more or less enforce KVM in oVirt, so some changes are likely needed. Of course, even if you succeed it's going to be hopelessly slow (or maybe it is indeed working and just runs very slowly).
Nested on a KVM hypervisor runs OK.
Thanks,
michal
Are there additional debug logs I can look at or enable to help further diagnose what is happening?
Thanks
Mark
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
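The information Michal asks for can be gathered roughly as follows on the oVirt host; a sketch assuming default oVirt/libvirt paths, with "myvm" as a placeholder VM name:

# list the installed vdsm hooks
rpm -qa 'vdsm-hook*'
# read-only dump of the running VM's libvirt definition;
# domain type='kvm' vs type='qemu' shows whether hardware acceleration is actually in use
virsh -r list --all
virsh -r dumpxml myvm | grep -iE '<domain type|emulator'
# vdsm also logs the generated domain XML when the VM starts
grep -i '<domain' /var/log/vdsm/vdsm.log | tail -n 5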

On Fri, May 12, 2017 at 1:06 PM, Michal Skrivanek <michal.skrivanek@redhat.com> wrote:
On 11 May 2017, at 19:52, Mark Duggan <mduggan@gmail.com> wrote:
Hi everyone,
From reading through the mailing list, it does appear that it's possible to have the ovirt nodes/hosts be VMware virtual machines, once I enable the appropriate settings on the VMware side. All seems to have gone well, I can see the hosts in the ovirt interface, but when I attempt to create and start a VM it never gets past printing the SeaBios version and the machine UUID to the screen/console. It doesn't appear to try to boot from the hard disk or an ISO that I've attached.
Has anyone else encountered similar behaviour?
I wouldn’t think you can even get that far. It may work with full emulation (non-kvm) but we kind of enforce it in oVirt so some changes are likely needed. Of course even if you succeed it’s going to be hopelessly slow. (or maybe it is indeed working and just runs very slow)
Nested on a KVM hypervisor runs ok
Thanks, michal
In the past I was able to get an Openstack Icehouse environment running inside vSphere 5.x for a POC (on powerful physical servers), and the performance of nested VMs inside the virtual compute nodes was acceptable.
More recently I configured a standalone ESXi 6.0 U2 server on a NUC6 with 32 GB of RAM and 2 SSD disks, and on it I now have 2 kinds of environments running. I just verified they are still on after some months in which I abandoned them to their destiny... ;-)

1) an ESXi VM acting as a single oVirt host (4.1.1 final or pre, I don't remember) with self-hosted engine (which itself becomes an L2 VM) and also another VM (CentOS 6.8).
See here a screenshot of the web admin GUI with a SPICE console open after connecting to the engine:
https://drive.google.com/file/d/0BwoPbcrMv8mvanpTUnFuZ2FURms/view?usp=sharing

2) a virtual oVirt Gluster environment based on 4.0.5 with 3 virtual hosts (with one as arbiter node, if I remember correctly).

On this second environment I have ovirt01, ovirt02 and ovirt03 VMs:

[root@ovirt02 ~]# hosted-engine --vm-status

--== Host 1 status ==--
Status up-to-date : True
Hostname : ovirt01.localdomain.local
Host ID : 1
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3042
stopped : False
Local maintenance : False
crc32 : 2041d7b6
Host timestamp : 15340856
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=15340856 (Fri May 12 14:59:17 2017)
host-id=1
score=3042
maintenance=False
state=EngineDown
stopped=False

--== Host 2 status ==--
Status up-to-date : True
Hostname : 192.168.150.103
Host ID : 2
Engine status : {"health": "good", "vm": "up", "detail": "up"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 27a80001
Host timestamp : 15340760
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=15340760 (Fri May 12 14:59:11 2017)
host-id=2
score=3400
maintenance=False
state=EngineUp
stopped=False

--== Host 3 status ==--
Status up-to-date : True
Hostname : ovirt03.localdomain.local
Host ID : 3
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 2986
stopped : False
Local maintenance : False
crc32 : 98aed4ec
Host timestamp : 15340475
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=15340475 (Fri May 12 14:59:22 2017)
host-id=3
score=2986
maintenance=False
state=EngineDown
stopped=False

[root@ovirt02 ~]#

The virtual node ovirt02 has the hosted engine VM running on it. It has been some months since I last came back to it, but it seems it is still up... ;-)

[root@ovirt02 ~]# uptime
15:02:18 up 177 days, 13:26, 1 user, load average: 2.04, 1.46, 1.22

[root@ovirt02 ~]# free
              total       used       free     shared  buff/cache  available
Mem:       12288324    6941068    3977644     595204     1369612    4340808
Swap:       5242876    2980672    2262204
[root@ovirt02 ~]#

[root@ovirt02 ~]# ps -ef | grep qemu-kvm
qemu 18982 1 8 2016 ? 14-20:33:44 /usr/libexec/qemu-kvm -name HostedEngine -S -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off

The first node (used for the deploy, with hostname ovirt01 and with the name hosted_engine_1 inside the oVirt web admin GUI) has another 3 L2 VMs running:

[root@ovirt01 ~]# ps -ef | grep qemu-kvm
qemu 125069 1 1 15:01 ? 00:00:11 /usr/libexec/qemu-kvm -name atomic2 -S -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off
qemu 125186 1 2 15:02 ? 00:00:18 /usr/libexec/qemu-kvm -name centos6 -S -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off
qemu 125329 1 1 15:02 ? 00:00:06 /usr/libexec/qemu-kvm -name cirros3 -S -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off

I also tested live migration with success.

Furthermore, all 3 ESXi VMs that are the 3 oVirt hypervisors still have a VMware snapshot in place, because I was making a test with the idea of reverting after preliminary testing, and this adds further load... see here some screenshots:

ESXi with its 3 VMs that are the 3 oVirt hypervisors
https://drive.google.com/file/d/0BwoPbcrMv8mvWEtwM3otLU5uUkU/view?usp=sharing

oVirt Engine web admin portal with one L2 VM console open
https://drive.google.com/file/d/0BwoPbcrMv8mvS2I1eEREclBqSU0/view?usp=sharing

oVirt Engine web admin Hosts tab
https://drive.google.com/file/d/0BwoPbcrMv8mvWGcxV0xDUGpINlU/view?usp=sharing

oVirt Engine Gluster data domain
https://drive.google.com/file/d/0BwoPbcrMv8mvVkxMa1R2eGRfV2s/view?usp=sharing

Let me see if I can find the configuration settings I set up for it, because some months have gone by and I had little time to follow it since...

In the meantime, what is the version of your ESXi environment? The settings to put in place changed from version 5 to version 6. And what particular settings have you already configured for the ESXi VMs you plan to use as oVirt hypervisors?

Gianluca
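One quick way to tell whether nested guests like these are actually executing, rather than sitting at the BIOS like the VM in the original report, is to watch their vCPU time advance; a rough sketch, with "centos6" standing in for any L2 guest name and assuming a libvirt recent enough to provide domstats (EL7.2+):

virsh -r domstats centos6 --vcpu | grep 'vcpu.*time'
sleep 10
# the time counters should have increased if the guest is really running
virsh -r domstats centos6 --vcpu | grep 'vcpu.*time'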

Thanks Gianluca,

So I installed the engine into a separate VM and didn't go down the hosted-engine path, although if I were to look at this with physical hosts, that seems like a really good approach.

To answer Michal's question from earlier, the nested VM inside the oVirt hypervisors has been up for 23+ hours and it has not progressed past the BIOS. Also, with respect to the vdsm hooks, here's a list. Dumpxml attached (hopefully with identifying information removed):

vdsm-hook-nestedvt.noarch
vdsm-hook-vmfex-dev.noarch
vdsm-hook-allocate_net.noarch
vdsm-hook-checkimages.noarch
vdsm-hook-checkips.x86_64
vdsm-hook-diskunmap.noarch
vdsm-hook-ethtool-options.noarch
vdsm-hook-extnet.noarch
vdsm-hook-extra-ipv4-addrs.x86_64
vdsm-hook-fakesriov.x86_64
vdsm-hook-fakevmstats.noarch
vdsm-hook-faqemu.noarch
vdsm-hook-fcoe.noarch
vdsm-hook-fileinject.noarch
vdsm-hook-floppy.noarch
vdsm-hook-hostusb.noarch
vdsm-hook-httpsisoboot.noarch
vdsm-hook-hugepages.noarch
vdsm-hook-ipv6.noarch
vdsm-hook-isolatedprivatevlan.noarch
vdsm-hook-localdisk.noarch
vdsm-hook-macbind.noarch
vdsm-hook-macspoof.noarch
vdsm-hook-noipspoof.noarch
vdsm-hook-numa.noarch
vdsm-hook-openstacknet.noarch
vdsm-hook-pincpu.noarch
vdsm-hook-promisc.noarch
vdsm-hook-qemucmdline.noarch
vdsm-hook-qos.noarch
vdsm-hook-scratchpad.noarch
vdsm-hook-smbios.noarch
vdsm-hook-spiceoptions.noarch
vdsm-hook-vhostmd.noarch
vdsm-hook-vmdisk.noarch
vdsm-hook-vmfex.noarch

I'm running ESXi 5.5. For the hypervisor VMs I've set the "Expose hardware assisted virtualization to the guest OS" option.

Hypervisor VMs are running CentOS 7.3.

[image: Inline images 1]
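Given that the guest stalls right after SeaBIOS, it may also be worth confirming on the host that QEMU really got KVM acceleration and checking the per-VM logs; a sketch with "myvm" as a placeholder VM name and standard EL7/oVirt log paths assumed:

# accel=kvm means hardware acceleration was granted; accel=tcg would mean full emulation
ps -ef | grep [q]emu-kvm | grep -o 'accel=[a-z]*'
# the per-VM QEMU log sometimes shows boot/device errors
tail -n 50 /var/log/libvirt/qemu/myvm.log
# boot order and devices actually handed to the guest
virsh -r dumpxml myvm | grep -iE 'boot|<disk|cdrom'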

Just bumping this in case anyone has any ideas as to what I might be able to do to potentially get this to work.

On Tue, May 16, 2017 at 3:17 PM, Mark Duggan <mduggan@gmail.com> wrote:
Just bumping this in case anyone has any ideas as to what I might be able to do to potentially get this to work.
Hello,
I found some information about my lab in 2013. I have verified I was using 5.1, so not exactly 5.5.
For that lab, where I ran Openstack Grizzly (and then Icehouse in a second step), I was able to use nested VMs inside the virtual hypervisors (based on QEMU/KVM).
At that time I followed these two guides from virtuallyghetto for 5.0 and 5.1, which are still available. It seems I didn't find a specific update for 5.5, so it could be that the guide for 5.1 is still OK for 5.5. Can you verify whether you followed the same steps in configuring vSphere? Note that the guides are for nesting Hyper-V, but I used the same guidelines to get a nested QEMU/KVM-based hypervisor setup.

The guide for 5.0:
http://www.virtuallyghetto.com/2011/07/how-to-enable-support-for-nested-64bi...

The guide for 5.1:
http://www.virtuallyghetto.com/2012/08/how-to-enable-nested-esxi-other.html

One important new setup step for the 5.1 version was step 3) in the section <Nesting "Other" Hypervisors>:

Step 3 - You will need to add one additional .vmx parameter which tells the underlying guest OS (Hyper-V) that it is not running as a virtual guest (which in fact it really is). The parameter is hypervisor.cpuid.v0 = FALSE

HIH,
Gianluca
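For reference, the per-VM .vmx entries those guides describe would look roughly like the two lines below; vhv.enable is the ESXi 5.1+ per-VM switch (on 5.0 the older host-wide vhv.allow = "TRUE" in /etc/vmware/config was used instead). This is a sketch of the documented parameters, not something verified against 5.5:

vhv.enable = "TRUE"
hypervisor.cpuid.v0 = "FALSE"

With the VM powered off, these go into the virtual hypervisor's .vmx file (or the equivalent "Expose hardware assisted virtualization to the guest OS" checkbox in the vSphere Web Client).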
participants (3):
- Gianluca Cecchi
- Mark Duggan
- Michal Skrivanek