Just bumping this in case anyone has any ideas on what I could do to get this working.
On 12 May 2017 at 13:57, Mark Duggan <mduggan(a)gmail.com> wrote:
Thanks Gianluca,
So I installed the engine into a separate VM and didn't go down the
hosted-engine path, although if I were doing this with physical hosts,
that seems like a really good approach.
To answer Michal's question from earlier: the nested VM inside the oVirt
hypervisors has been up for 23+ hours and has not progressed past the
BIOS.
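In case it's relevant: is this the right way to verify, inside the CentOS
hypervisor VMs, that the VT extensions are actually exposed and KVM is usable?
These are the checks I know of (assuming Intel CPUs, so the flag is vmx and the
module is kvm_intel):

grep -c vmx /proc/cpuinfo      # VT-x exposed to the hypervisor VM by ESXi?
lsmod | grep kvm               # kvm and kvm_intel modules loaded?
ls -l /dev/kvm                 # device node present for qemu-kvm to use?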
Also, with respect to the vdsm hooks, here's a list (a command to reproduce it
is sketched after the list).
Dumpxml attached (hopefully with identifying information removed)
vdsm-hook-nestedvt.noarch
vdsm-hook-vmfex-dev.noarch
vdsm-hook-allocate_net.noarch
vdsm-hook-checkimages.noarch
vdsm-hook-checkips.x86_64
vdsm-hook-diskunmap.noarch
vdsm-hook-ethtool-options.noarch
vdsm-hook-extnet.noarch
vdsm-hook-extra-ipv4-addrs.x86_64
vdsm-hook-fakesriov.x86_64
vdsm-hook-fakevmstats.noarch
vdsm-hook-faqemu.noarch
vdsm-hook-fcoe.noarch
vdsm-hook-fileinject.noarch
vdsm-hook-floppy.noarch
vdsm-hook-hostusb.noarch
vdsm-hook-httpsisoboot.noarch
vdsm-hook-hugepages.noarch
vdsm-hook-ipv6.noarch
vdsm-hook-isolatedprivatevlan.noarch
vdsm-hook-localdisk.noarch
vdsm-hook-macbind.noarch
vdsm-hook-macspoof.noarch
vdsm-hook-noipspoof.noarch
vdsm-hook-numa.noarch
vdsm-hook-openstacknet.noarch
vdsm-hook-pincpu.noarch
vdsm-hook-promisc.noarch
vdsm-hook-qemucmdline.noarch
vdsm-hook-qos.noarch
vdsm-hook-scratchpad.noarch
vdsm-hook-smbios.noarch
vdsm-hook-spiceoptions.noarch
vdsm-hook-vhostmd.noarch
vdsm-hook-vmdisk.noarch
vdsm-hook-vmfex.noarch
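(For reference, a list in the name.arch format above can be generated with
something like the following; the query format string is my assumption, not
necessarily the exact command originally used:)

rpm -qa 'vdsm-hook-*' --qf '%{NAME}.%{ARCH}\n' | sort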
I'm running ESXi 5.5. For the hypervisor VMs I've enabled the "Expose hardware
assisted virtualization to the guest OS" option.
The hypervisor VMs are running CentOS 7.3.
[image: Inline images 1]
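For what it's worth, my understanding (an assumption on my part, not confirmed
against the VMware docs) is that this checkbox maps to the following line in the
VM's .vmx file on ESXi 5.1 and later:

vhv.enable = "TRUE"

On ESXi 5.0 I believe the equivalent was a host-wide vhv.allow = "TRUE" in
/etc/vmware/config.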
On 12 May 2017 at 09:36, Gianluca Cecchi <gianluca.cecchi(a)gmail.com>
wrote:
>
>
> On Fri, May 12, 2017 at 1:06 PM, Michal Skrivanek <
> michal.skrivanek(a)redhat.com> wrote:
>
>>
>> > On 11 May 2017, at 19:52, Mark Duggan <mduggan(a)gmail.com> wrote:
>> >
>> > Hi everyone,
>> >
>> > From reading through the mailing list, it does appear that it's
>> > possible to have the oVirt nodes/hosts be VMware virtual machines, once I
>> > enable the appropriate settings on the VMware side. All seems to have gone
>> > well; I can see the hosts in the oVirt interface, but when I attempt to
>> > create and start a VM, it never gets past printing the SeaBIOS version and
>> > the machine UUID to the screen/console. It doesn't appear to try to boot
>> > from the hard disk or from an ISO that I've attached.
>> >
>> > Has anyone else encountered similar behaviour?
>>
>> I wouldn’t think you can even get that far.
>> It may work with full emulation (non-KVM), but we more or less enforce KVM
>> in oVirt, so some changes are likely needed.
>> Of course, even if you succeed it’s going to be hopelessly slow (or
>> maybe it is indeed working and just runs very slowly).
>>
>> Nested oVirt on a KVM hypervisor runs OK.
>>
>> Thanks,
>> michal
>>
>>
> In the past I was able to get an OpenStack Icehouse environment running
> inside vSphere 5.x for a PoC (on powerful physical servers), and the
> performance of nested VMs inside the virtual compute nodes was acceptable.
> More recently I configured a standalone ESXi 6.0 U2 server on a NUC6 with
> 32 GB of RAM and 2 SSD disks, and on it I currently have 2 kinds of
> environments running (just verified they are still up after some months of
> abandoning them to their destiny... ;-)
>
> 1) an ESXi VM acting as a single oVirt host (4.1.1 final or pre, I don't
> remember) with self-hosted engine (which itself becomes an L2 VM) and also
> another VM (CentOS 6.8).
> Here is a screenshot of the web admin GUI with a SPICE console open
> after connecting to the engine:
>
> https://drive.google.com/file/d/0BwoPbcrMv8mvanpTUnFuZ2FURms/view?usp=sharing
>
> 2) a virtual oVirt Gluster environment based on 4.0.5 with 3 virtual
> hosts (with one acting as the arbiter node, if I remember correctly)
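>
> (To picture the layout: a replica 3 arbiter 1 volume of that kind is normally
> created more or less as below; the volume name and brick paths here are just
> placeholders, I'd have to check the real ones on the hosts:)
>
> # volume name and brick paths below are placeholders, not the real ones
> gluster volume create data replica 3 arbiter 1 \
>     ovirt01.localdomain.local:/gluster/data/brick \
>     ovirt02.localdomain.local:/gluster/data/brick \
>     ovirt03.localdomain.local:/gluster/data/brick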
>
> On this second environment I have the ovirt01, ovirt02 and ovirt03 VMs:
>
> [root@ovirt02 ~]# hosted-engine --vm-status
>
>
> --== Host 1 status ==--
>
> Status up-to-date : True
> Hostname : ovirt01.localdomain.local
> Host ID : 1
> Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 3042
> stopped : False
> Local maintenance : False
> crc32 : 2041d7b6
> Host timestamp : 15340856
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=15340856 (Fri May 12 14:59:17 2017)
> host-id=1
> score=3042
> maintenance=False
> state=EngineDown
> stopped=False
>
>
> --== Host 2 status ==--
>
> Status up-to-date : True
> Hostname : 192.168.150.103
> Host ID : 2
> Engine status : {"health": "good", "vm": "up", "detail": "up"}
> Score : 3400
> stopped : False
> Local maintenance : False
> crc32 : 27a80001
> Host timestamp : 15340760
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=15340760 (Fri May 12 14:59:11 2017)
> host-id=2
> score=3400
> maintenance=False
> state=EngineUp
> stopped=False
>
>
> --== Host 3 status ==--
>
> Status up-to-date : True
> Hostname : ovirt03.localdomain.local
> Host ID : 3
> Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 2986
> stopped : False
> Local maintenance : False
> crc32 : 98aed4ec
> Host timestamp : 15340475
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=15340475 (Fri May 12 14:59:22 2017)
> host-id=3
> score=2986
> maintenance=False
> state=EngineDown
> stopped=False
> [root@ovirt02 ~]#
>
> The virtual node ovirt02 currently has the hosted engine VM running on it.
> I hadn't come back to it for some months, but it seems it is still up... ;-)
>
>
> [root@ovirt02 ~]# uptime
> 15:02:18 up 177 days, 13:26, 1 user, load average: 2.04, 1.46, 1.22
>
> [root@ovirt02 ~]# free
>               total        used        free      shared  buff/cache   available
> Mem:       12288324     6941068     3977644      595204     1369612     4340808
> Swap:       5242876     2980672     2262204
> [root@ovirt02 ~]#
>
> [root@ovirt02 ~]# ps -ef|grep qemu-kvm
> qemu 18982 1 8 2016 ? 14-20:33:44 /usr/libexec/qemu-kvm -name HostedEngine -S -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off
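>
> (Side note, given Michal's comment about emulation: the accel=kvm above is the
> important part. A quick way to double-check that a given domain really runs
> with KVM rather than plain emulation is a read-only virsh query, for example:)
>
> virsh -r list --all
> virsh -r dumpxml HostedEngine | grep 'domain type'    # should show type='kvm'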
>
> The first node (used for the deploy, with hostname ovirt01 and named
> hosted_engine_1 inside the oVirt web admin GUI) has 3 other L2 VMs running:
> [root@ovirt01 ~]# ps -ef|grep qemu-kvm
> qemu 125069 1 1 15:01 ? 00:00:11 /usr/libexec/qemu-kvm -name atomic2 -S -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off
> qemu 125186 1 2 15:02 ? 00:00:18 /usr/libexec/qemu-kvm -name centos6 -S -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off
> qemu 125329 1 1 15:02 ? 00:00:06 /usr/libexec/qemu-kvm -name cirros3 -S -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off
>
> I also tested live migration with success.
>
> Furthermore, all 3 ESXi VMs that act as the oVirt hypervisors still have
> a VMware snapshot in place, because I was testing with the idea of
> reverting after the preliminary tests, and this adds further load...
> Here are some screenshots:
>
> ESXi with its 3 VMs that are the 3 oVirt hypervisors
>
> https://drive.google.com/file/d/0BwoPbcrMv8mvWEtwM3otLU5uUkU/view?usp=sharing
>
> oVirt Engine web admin portal with one L2 VM console open
>
> https://drive.google.com/file/d/0BwoPbcrMv8mvS2I1eEREclBqSU0/view?usp=sharing
>
> oVirt Engine web admin Hosts tab
>
> https://drive.google.com/file/d/0BwoPbcrMv8mvWGcxV0xDUGpINlU/view?usp=sharing
>
> oVirt Engine Gluster data domain
>
> https://drive.google.com/file/d/0BwoPbcrMv8mvVkxMa1R2eGRfV2s/view?usp=sharing
>
>
> Let me dig up the configuration settings I put in place for it, because
> some months have gone by and I had little time to follow it since...
>
> In the meantime, what version is your ESXi environment? The settings to
> put in place changed from version 5 to version 6.
> Which settings have you already configured for the ESXi VMs you plan
> to use as oVirt hypervisors?
>
> Gianluca
>