Just a report of the final success in updating one of my home labs, composed of a single host, from 4.3.7 to 4.4.

hw: Intel NUC6i5SY with 32GB of RAM and 2 SSD disks (250GB and 500GB)

source sw: oVirt 4.3.7 single host with CentOS 7 OS and storage provided by the host itself via NFS (not officially supported, but working, apart from when a shutdown was needed)
Two main VMs to migrate to the new environment: Fedora 30 and Slackware Current (just not to forget the first love... ;-)

Exported the VMs to an export storage domain served from an external USB disk.
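For reference, such an export domain is just an NFS share from the host; a rough sketch, assuming the USB disk is mounted at /mnt/usb-export (path and device are placeholders):

mkdir -p /mnt/usb-export
mount /dev/sdc1 /mnt/usb-export        # device name is a placeholder
chown 36:36 /mnt/usb-export            # vdsm:kvm ownership expected by oVirt
echo '/mnt/usb-export *(rw)' >> /etc/exports
exportfs -r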

dest sw: oVirt Node NG 4.4 configured with the HCI single host wizard, installed on the 250GB disk. I pre-cleaned the disks (dd over the first 100MB of each disk) before installing, because during the beta/rc phase I noticed the installer was not smart enough to clean up pre-existing configurations.
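Something along these lines, with the device names below being placeholders to adapt:

# wipe the first 100MB of each disk to drop old partition tables / LVM / Gluster metadata
dd if=/dev/zero of=/dev/sda bs=1M count=100 oflag=direct
dd if=/dev/zero of=/dev/sdb bs=1M count=100 oflag=direct
# wipefs -a /dev/sdX is an alternative if you only need to clear the signatures
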
I had problems in the first run, but after an engine cleanup and redeploy it went OK.
See here for more details:
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/6QODLB6J5Z74YCVF6C3TLQPF4KK7RKB5/
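
In short, the recovery on the node was roughly this (commands from memory, double check before running):

# remove the leftovers of the failed hosted engine deployment
ovirt-hosted-engine-cleanup
# then redeploy, either again from the cockpit HCI wizard or from the CLI
hosted-engine --deploy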

I configured the gluster domains in the wizard on the whole second disk.
I then imported the 2 VMs without problems. Before starting them up I changed their inherited "BIOS Type" from Legacy to "Default Cluster" and they both booted fine.
Once on 4.4 I was able to update the Fedora VM from 30 to 31 and then to 32, and also refresh the slackware-current one, which was about a month behind the latest current.
With "Default Cluster" BIOS type the VMs start with the following options:
" -machine pc-q35-rhel8.1.0,accel=kvm,usb=off,dump-guest-core=off -cpu Broadwell-noTSX"

I was then able to download a CentOS 8 cloud image from the predefined ovirt-image-repository storage domain and convert it to a template.
Created a VM from this template, and cloud-init was able to inject the ssh public key and set the timezone.
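What gets set through the VM's Initial Run settings corresponds more or less to a cloud-init payload like this (the key and timezone values are placeholders):

#cloud-config
timezone: Europe/Rome
ssh_authorized_keys:
  - ssh-rsa AAAA...placeholder... me@laptop
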
Changed the BIOS type of this VM to the cluster default ("Default Cluster") as above, getting a warning from oVirt but with no problem inside the VM, and also changed the disk interface from VirtIO to VirtIO-SCSI without issues.
I then enabled incremental backup at the engine level and also on the disk of this CentOS 8 VM, so that I can run some tests in this regard.
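The disk side is just the "Enable Incremental Backup" checkbox on the disk; the backups themselves then go through the REST API, roughly like this according to my reading of the 4.4 API documentation (IDs, credentials and hostname are placeholders, so double check against the docs):

# start a backup of one disk of the VM (full the first time, incremental on later runs
# when a from_checkpoint_id is passed in the body)
curl -k -u admin@internal:password \
  -H "Content-Type: application/xml" \
  -d '<backup><disks><disk id="DISK_UUID"/></disks></backup>' \
  https://engine.example.com/ovirt-engine/api/vms/VM_UUID/backups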

Also, I was able to successfully test the ovirt-ansible-shutdown-env Ansible role to perform a clean shutdown of the whole environment, one of the things that was a little cumbersome in my previous unsupported setup.
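The playbook is basically just the role plus the engine connection details; a minimal sketch, with variable names as I recall them from the role's README (verify against the upstream docs, and note the role name may differ slightly depending on how it was installed):

- hosts: localhost
  connection: local
  gather_facts: false
  vars:
    engine_url: https://engine.example.com/ovirt-engine/api
    engine_user: admin@internal
    engine_password: "{{ vault_engine_password }}"
    engine_cafile: /etc/pki/ovirt-engine/ca.pem
  roles:
    - oVirt.shutdown-env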

Right after the install I noticed that in the CentOS 8 based ovirt-node-ng setup intel_pstate was the default scaling driver, but my CPU was almost always crying, with cores at 2.6GHz (and a temperature around 90 degrees), even though I tried to set up conservative profiles and nothing was running apart from the engine VM.
The NUC is under my working desk and I don't always need full performance from it...
So I modified these files (note also the rhgb and quiet omissions...) and rebooted:

- /etc/default/grub
GRUB_CMDLINE_LINUX='crashkernel=auto resume=/dev/mapper/onn-swap rd.lvm.lv=onn/ovirt-node-ng-4.4.0-0.20200521.0+1 rd.lvm.lv=onn/swap intel_pstate=disable'

- /boot/grub2/grub.cfg
set default_kernelopts="root=UUID=85212719-8feb-43aa-9819-2820d4672795 ro crashkernel=auto ipv6.disable=1 intel_pstate=disable "

- /boot/loader/entries/ovirt-node-ng-4.4.0-0.20200521.0+1-4.18.0-147.8.1.el8_1.x86_64.conf
options intel_pstate=disable boot=UUID=b717ab4f-ca71-469a-8836-ff92cebc7650 crashkernel=auto rd.lvm.lv=onn/swap root=/dev/onn/ovirt-node-ng-4.4.0-0.20200521.0+1 resume=/dev/mapper/onn-swap rootflags=discard rd.lvm.lv=onn/ovirt-node-ng-4.4.0-0.20200521.0+1 img.bootid=ovirt-node-ng-4.4.0-0.20200521.0+1 null

- /boot/efi/EFI/centos/grub.cfg
set default_kernelopts="root=/dev/mapper/onn-root ro crashkernel=auto resume=/dev/mapper/onn-swap rd.lvm.lv=onn/root rd.lvm.lv=onn/swap intel_pstate=disable "

And now, with the default/old scaling driver, everything is quite silent and still working fine for my needs, with the web admin UI quite usable. I updated my two VMs as described above after completing these changes.
Right now, for example, I have the hosted engine running + the Slackware VM + the CentOS 8 VM, with:

[root@ovirt01 ~]# cat /proc/cpuinfo | grep Hz
model name : Intel(R) Core(TM) i5-6260U CPU @ 1.80GHz
cpu MHz : 648.431
model name : Intel(R) Core(TM) i5-6260U CPU @ 1.80GHz
cpu MHz : 628.307
model name : Intel(R) Core(TM) i5-6260U CPU @ 1.80GHz
cpu MHz : 648.859
model name : Intel(R) Core(TM) i5-6260U CPU @ 1.80GHz
cpu MHz : 663.792
[root@ovirt01 ~]#

and
[root@ovirt01 ~]# cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
powersave
powersave
powersave
powersave
[root@ovirt01 g.cecchi]#
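
The driver actually in use can be double checked with:

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver

which with intel_pstate=disable should report acpi-cpufreq instead of intel_pstate.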

Thanks to all who helped throughout...

Cheers,
Gianluca