On Fri, May 12, 2017 at 1:06 PM, Michal Skrivanek <michal.skrivanek@redhat.com> wrote:
> On 11 May 2017, at 19:52, Mark Duggan <mduggan@gmail.com> wrote:
>
> Hi everyone,
>
> From reading through the mailing list, it does appear that it's possible to have the oVirt nodes/hosts be VMware virtual machines, once I enable the appropriate settings on the VMware side. All seems to have gone well: I can see the hosts in the oVirt interface, but when I attempt to create and start a VM it never gets past printing the SeaBIOS version and the machine UUID to the screen/console. It doesn't appear to try to boot from the hard disk or an ISO that I've attached.
>
> Has anyone else encountered similar behaviour?
I wouldn’t think you can even get that far.
It may work with full emulation (non-KVM), but we more or less enforce KVM in oVirt, so some changes are likely needed.
Of course, even if you succeed it's going to be hopelessly slow (or maybe it is indeed working and just runs very slowly).
Nested virtualization on a KVM hypervisor runs OK.
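If you want to check whether KVM is usable at all inside the VMware guest, something along these lines should tell you (a rough sketch; vmx is the Intel flag, svm the AMD one):

  # is the hardware virtualization extension exposed to the guest at all?
  grep -cE 'vmx|svm' /proc/cpuinfo
  # are the kvm modules loaded, and does the device node exist?
  lsmod | grep kvm
  ls -l /dev/kvm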
Thanks,
michal

In the past I was able to get an OpenStack Icehouse environment running inside vSphere 5.x for a POC (on powerful physical servers), and the performance of nested VMs inside the virtual compute nodes was acceptable.

More recently I configured a standalone ESXi 6.0 U2 server on a NUC6 with 32 GB of RAM and 2 SSD disks, and on it I now have 2 kinds of environments running (just verified they are still up after some months in which I abandoned them to their destiny... ;-)

1) an ESXi VM acting as a single oVirt host (4.1.1 final or pre, I don't remember) with self-hosted engine (which itself becomes an L2 VM) and also another VM (CentOS 6.8).
See here a screenshot of the web admin GUI with a SPICE console open after connecting to the engine:

2) a virtual oVirt Gluster environment based on 4.0.5 with 3 virtual hosts (with one as arbiter node, if I remember correctly)

On this second environment I have the ovirt01, ovirt02 and ovirt03 VMs:

[root@ovirt02 ~]# hosted-engine --vm-status

--== Host 1 status ==--

Status up-to-date              : True
Hostname                       : ovirt01.localdomain.local
Host ID                        : 1
Engine status                  : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                          : 3042
stopped                        : False
Local maintenance              : False
crc32                          : 2041d7b6
Host timestamp                 : 15340856
Extra metadata (valid at timestamp):
        metadata_parse_version=1
        metadata_feature_version=1
        timestamp=15340856 (Fri May 12 14:59:17 2017)
        host-id=1
        score=3042
        maintenance=False
        state=EngineDown
        stopped=False

--== Host 2 status ==--

Status up-to-date              : True
Hostname                       : 192.168.150.103
Host ID                        : 2
Engine status                  : {"health": "good", "vm": "up", "detail": "up"}
Score                          : 3400
stopped                        : False
Local maintenance              : False
crc32                          : 27a80001
Host timestamp                 : 15340760
Extra metadata (valid at timestamp):
        metadata_parse_version=1
        metadata_feature_version=1
        timestamp=15340760 (Fri May 12 14:59:11 2017)
        host-id=2
        score=3400
        maintenance=False
        state=EngineUp
        stopped=False

--== Host 3 status ==--

Status up-to-date              : True
Hostname                       : ovirt03.localdomain.local
Host ID                        : 3
Engine status                  : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                          : 2986
stopped                        : False
Local maintenance              : False
crc32                          : 98aed4ec
Host timestamp                 : 15340475
Extra metadata (valid at timestamp):
        metadata_parse_version=1
        metadata_feature_version=1
        timestamp=15340475 (Fri May 12 14:59:22 2017)
        host-id=3
        score=2986
        maintenance=False
        state=EngineDown
        stopped=False

[root@ovirt02 ~]#

The virtual node ovirt02 has the hosted engine VM running on it.

It has been some months since I last came back to it, but it seems it is still up... ;-)

[root@ovirt02 ~]# uptime
 15:02:18 up 177 days, 13:26,  1 user,  load average: 2.04, 1.46, 1.22
[root@ovirt02 ~]# free
              total        used        free      shared  buff/cache   available
Mem:       12288324     6941068     3977644      595204     1369612     4340808
Swap:       5242876     2980672     2262204
[root@ovirt02 ~]#
[root@ovirt02 ~]# ps -ef|grep qemu-kvm
qemu      18982      1  8  2016 ?        14-20:33:44 /usr/libexec/qemu-kvm -name HostedEngine -S -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off

The first node (used for the deploy, with hostname ovirt01 and with the name hosted_engine_1 inside the oVirt web admin GUI) has 3 other L2 VMs running:

[root@ovirt01 ~]# ps -ef|grep qemu-kvm
qemu     125069      1  1 15:01 ?        00:00:11 /usr/libexec/qemu-kvm -name atomic2 -S -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off
qemu     125186      1  2 15:02 ?        00:00:18 /usr/libexec/qemu-kvm -name centos6 -S -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off
qemu     125329      1  1 15:02 ?        00:00:06 /usr/libexec/qemu-kvm -name cirros3 -S -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off

(note the accel=kvm in the command lines: these L2 guests really run with KVM acceleration, not full emulation)

I also tested live migration with success.

Furthermore, all the 3 ESXi VMs that are the 3 oVirt hypervisors still have a VMware snapshot in place, because I was making a test with the idea of reverting after preliminary testing, and this adds further load...

See here some screenshots:
- ESXi with its 3 VMs that are the 3 oVirt hypervisors
- oVirt Engine web admin portal with one L2 VM console open
- oVirt Engine web admin Hosts tab
- oVirt Engine Gluster data domain

Let me see and find the configuration settings I set up for it, because some months have gone by and I have had little time to follow it since...

In the meantime, what is the version of your ESXi environment? Because the settings to put in place changed from version 5 to version 6.
What particular settings have you already configured for the ESXi VMs you plan to use as oVirt hypervisors?
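From what I remember (to be verified, since it has been a while), the key part is exposing hardware assisted virtualization to the ESXi guests. On ESXi 6.x (and 5.1+) it is a per-VM setting, either the "Expose hardware assisted virtualization to the guest OS" checkbox in the CPU section of the VM settings in the vSphere Web Client, or directly in the .vmx file of each VM that will act as an oVirt host:

  vhv.enable = "TRUE"

On ESXi 5.0 it was instead a host-wide option (vhv.allow = "TRUE" in /etc/vmware/config), which is one reason the ESXi version matters. After powering the VM on again, grep vmx /proc/cpuinfo inside the guest should show the flag.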
Gianluca