oVirt 4.4 - starting just-installed host from oVirt console fails
by David Johnson
Good afternoon all,
I recently had to rebuild my cluster due to a self-inflicted error.
I have finally managed to get the oVirt host software installed and
communicating on all hosts.
The first host installed and started cleanly. However, after installation
the second host is failing to start. Prior to my cluster crash, this host
was running well in the cluster.
During the downtime, we applied microcode and BIOS updates as part of the
recovery process.
I have reviewed this chain:
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/N3PPT34GBRLP...
and reached a dead end.
Based on what I see (in the long stream of logs and info following), it
looks like I should change the cluster CPU type from Cascadelake to Haswell
to restore normal operation.
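For reference, a minimal sketch of what that change could look like through the oVirt Python SDK. The engine URL, credentials, cluster name and the exact CPU type string below are placeholders; the valid strings are whatever the Cluster edit dialog's CPU Type drop-down offers, and the cluster's hosts usually need to be in maintenance before the change applies.

    # Hypothetical sketch (ovirtsdk4): lower the cluster CPU type so a Haswell
    # host satisfies it. All names and credentials below are placeholders.
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',
        insecure=True,  # use ca_file=... in a real environment
    )

    clusters_service = connection.system_service().clusters_service()
    cluster = clusters_service.list(search='name=Default')[0]

    clusters_service.cluster_service(cluster.id).update(
        types.Cluster(
            # Assumed string -- copy the exact value from the CPU Type drop-down
            cpu=types.Cpu(type='Intel Haswell Family'),
        ),
    )

    connection.close()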
The long involved stuff:
The Engine reports:
Host CPU type is not compatible with Cluster Properties.
The host CPU does not match the Cluster CPU Type and is running in a
degraded mode. It is missing the following CPU flags:
model_Cascadelake-Server-noTSX. Please update the host CPU microcode or
change the Cluster CPU Type.
The Cluster definition is:
[screenshot of the Cluster definition omitted]
*lscpu returns:*
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
BIOS Vendor ID: Intel(R) Corporation
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
BIOS Model name: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
Stepping: 2
CPU MHz: 3300.000
CPU max MHz: 3300.0000
CPU min MHz: 1200.0000
BogoMIPS: 4988.45
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 30720K
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall
nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl
xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl
vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic
movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm
cpuid_fault epb invpcid_single pti intel_ppin ssbd ibrs ibpb stibp
tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2
smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat
pln pts md_clear flush_l1d
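For completeness, a small sketch that compares the host flags against a few of the AVX-512 features a Cascadelake-Server model expects. The feature list is an assumed subset of QEMU's Cascadelake-Server definition, not authoritative.

    # Rough check: does this host expose the AVX-512 flags a Cascadelake-Server
    # model needs? The set below is an assumed subset, not the full definition.
    EXPECTED = {"avx512f", "avx512dq", "avx512cd", "avx512bw", "avx512vl"}

    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                break

    missing = sorted(EXPECTED - flags)
    print("missing Cascadelake-era flags:", missing or "none")
    # On this E5-2680 v3 (Haswell) all of them come back missing, which matches
    # the engine's complaint.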
*cpuid returns:*
CPU 0:
vendor_id = "GenuineIntel"
version information (1/eax):
processor type = primary processor (0)
family = 0x6 (6)
model = 0xf (15)
stepping id = 0x2 (2)
extended family = 0x0 (0)
extended model = 0x3 (3)
(family synth) = 0x6 (6)
(model synth) = 0x3f (63)
(simple synth) = Intel (unknown type) (Haswell C1/M1/R2) {Haswell},
22nm
*virsh domcapabilities returns:*
<domainCapabilities>
<path>/usr/libexec/qemu-kvm</path>
<domain>kvm</domain>
<machine>pc-i440fx-rhel7.6.0</machine>
<arch>x86_64</arch>
<vcpu max='240'/>
<iothreads supported='yes'/>
<os supported='yes'>
<enum name='firmware'/>
<loader supported='yes'>
<value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
<enum name='type'>
<value>rom</value>
<value>pflash</value>
</enum>
<enum name='readonly'>
<value>yes</value>
<value>no</value>
</enum>
<enum name='secure'>
<value>no</value>
</enum>
</loader>
</os>
<cpu>
<mode name='host-passthrough' supported='yes'>
<enum name='hostPassthroughMigratable'>
<value>on</value>
<value>off</value>
</enum>
</mode>
<mode name='maximum' supported='yes'>
<enum name='maximumMigratable'>
<value>on</value>
<value>off</value>
</enum>
</mode>
<mode name='host-model' supported='yes'>
<model fallback='forbid'>Haswell-noTSX-IBRS</model>
<vendor>Intel</vendor>
<feature policy='require' name='vme'/>
<feature policy='require' name='ss'/>
<feature policy='require' name='vmx'/>
<feature policy='require' name='pdcm'/>
<feature policy='require' name='f16c'/>
<feature policy='require' name='rdrand'/>
<feature policy='require' name='hypervisor'/>
<feature policy='require' name='arat'/>
<feature policy='require' name='tsc_adjust'/>
<feature policy='require' name='umip'/>
<feature policy='require' name='md-clear'/>
<feature policy='require' name='stibp'/>
<feature policy='require' name='arch-capabilities'/>
<feature policy='require' name='ssbd'/>
<feature policy='require' name='xsaveopt'/>
<feature policy='require' name='pdpe1gb'/>
<feature policy='require' name='abm'/>
<feature policy='require' name='invtsc'/>
<feature policy='require' name='ibpb'/>
<feature policy='require' name='ibrs'/>
<feature policy='require' name='amd-stibp'/>
<feature policy='require' name='amd-ssbd'/>
<feature policy='require' name='skip-l1dfl-vmentry'/>
<feature policy='require' name='pschange-mc-no'/>
</mode>
<mode name='custom' supported='yes'>
<model usable='yes'>qemu64</model>
<model usable='yes'>qemu32</model>
<model usable='no'>phenom</model>
<model usable='yes'>pentium3</model>
<model usable='yes'>pentium2</model>
<model usable='yes'>pentium</model>
<model usable='yes'>n270</model>
<model usable='yes'>kvm64</model>
<model usable='yes'>kvm32</model>
<model usable='yes'>coreduo</model>
<model usable='yes'>core2duo</model>
<model usable='no'>athlon</model>
<model usable='yes'>Westmere-IBRS</model>
<model usable='yes'>Westmere</model>
<model usable='no'>Snowridge</model>
<model usable='no'>Skylake-Server-noTSX-IBRS</model>
<model usable='no'>Skylake-Server-IBRS</model>
<model usable='no'>Skylake-Server</model>
<model usable='no'>Skylake-Client-noTSX-IBRS</model>
<model usable='no'>Skylake-Client-IBRS</model>
<model usable='no'>Skylake-Client</model>
<model usable='yes'>SandyBridge-IBRS</model>
<model usable='yes'>SandyBridge</model>
<model usable='yes'>Penryn</model>
<model usable='no'>Opteron_G5</model>
<model usable='no'>Opteron_G4</model>
<model usable='no'>Opteron_G3</model>
<model usable='yes'>Opteron_G2</model>
<model usable='yes'>Opteron_G1</model>
<model usable='yes'>Nehalem-IBRS</model>
<model usable='yes'>Nehalem</model>
<model usable='yes'>IvyBridge-IBRS</model>
<model usable='yes'>IvyBridge</model>
<model usable='no'>Icelake-Server-noTSX</model>
<model usable='no'>Icelake-Server</model>
<model usable='no' deprecated='yes'>Icelake-Client-noTSX</model>
<model usable='no' deprecated='yes'>Icelake-Client</model>
<model usable='yes'>Haswell-noTSX-IBRS</model>
<model usable='yes'>Haswell-noTSX</model>
<model usable='no'>Haswell-IBRS</model>
<model usable='no'>Haswell</model>
<model usable='no'>EPYC-Rome</model>
<model usable='no'>EPYC-Milan</model>
<model usable='no'>EPYC-IBPB</model>
<model usable='no'>EPYC</model>
<model usable='no'>Dhyana</model>
<model usable='no'>Cooperlake</model>
<model usable='yes'>Conroe</model>
<model usable='no'>Cascadelake-Server-noTSX</model>
<model usable='no'>Cascadelake-Server</model>
<model usable='no'>Broadwell-noTSX-IBRS</model>
<model usable='no'>Broadwell-noTSX</model>
<model usable='no'>Broadwell-IBRS</model>
<model usable='no'>Broadwell</model>
<model usable='yes'>486</model>
</mode>
</cpu>
<memoryBacking supported='yes'>
<enum name='sourceType'>
<value>file</value>
<value>anonymous</value>
<value>memfd</value>
</enum>
</memoryBacking>
<devices>
<disk supported='yes'>
<enum name='diskDevice'>
<value>disk</value>
<value>cdrom</value>
<value>floppy</value>
<value>lun</value>
</enum>
<enum name='bus'>
<value>ide</value>
<value>fdc</value>
<value>scsi</value>
<value>virtio</value>
<value>usb</value>
<value>sata</value>
</enum>
<enum name='model'>
<value>virtio</value>
<value>virtio-transitional</value>
<value>virtio-non-transitional</value>
</enum>
</disk>
<graphics supported='yes'>
<enum name='type'>
<value>vnc</value>
<value>spice</value>
<value>egl-headless</value>
</enum>
</graphics>
<video supported='yes'>
<enum name='modelType'>
<value>vga</value>
<value>cirrus</value>
<value>qxl</value>
<value>virtio</value>
<value>none</value>
<value>bochs</value>
<value>ramfb</value>
</enum>
</video>
<hostdev supported='yes'>
<enum name='mode'>
<value>subsystem</value>
</enum>
<enum name='startupPolicy'>
<value>default</value>
<value>mandatory</value>
<value>requisite</value>
<value>optional</value>
</enum>
<enum name='subsysType'>
<value>usb</value>
<value>pci</value>
<value>scsi</value>
</enum>
<enum name='capsType'/>
<enum name='pciBackend'/>
</hostdev>
<rng supported='yes'>
<enum name='model'>
<value>virtio</value>
<value>virtio-transitional</value>
<value>virtio-non-transitional</value>
</enum>
<enum name='backendModel'>
<value>random</value>
<value>egd</value>
<value>builtin</value>
</enum>
</rng>
<filesystem supported='yes'>
<enum name='driverType'>
<value>path</value>
<value>handle</value>
<value>virtiofs</value>
</enum>
</filesystem>
<tpm supported='yes'>
<enum name='model'>
<value>tpm-tis</value>
<value>tpm-crb</value>
</enum>
<enum name='backendModel'>
<value>passthrough</value>
<value>emulator</value>
</enum>
</tpm>
</devices>
<features>
<gic supported='no'/>
<vmcoreinfo supported='yes'/>
<genid supported='yes'/>
<backingStoreInput supported='yes'/>
<backup supported='yes'/>
<sev supported='no'/>
</features>
</domainCapabilities>
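To pull just the usable models out of that wall of XML, a short sketch like this works (it shells out to virsh, so it assumes libvirt is running locally and the command is run as root, as in the output above):

    # Sketch: list the custom CPU models libvirt reports as usable on this host,
    # parsed from the same `virsh domcapabilities` XML shown above.
    import subprocess
    import xml.etree.ElementTree as ET

    xml_text = subprocess.check_output(["virsh", "domcapabilities"],
                                       universal_newlines=True)
    root = ET.fromstring(xml_text)

    for model in root.findall("./cpu/mode[@name='custom']/model"):
        if model.get("usable") == "yes":
            print(model.text)
    # Here the newest usable Intel model is Haswell-noTSX-IBRS; every Skylake
    # and Cascadelake model is usable='no', consistent with the engine warning.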
Please advise.
Preferred RHEL Based Distro For oVirt
by Clint Boggio
Good Day All;
I am inquiring about which RHEL-based distros are currently preferred and which are currently supported. I know the oVirt project is a Red Hat entity, so RHEL and CentOS Stream are the base offering. Would it be, or is it, feasible for Rocky 8.x or Alma 8.x to be the base OS for an oVirt deployment, seeing as they are both RHEL clones?
How confident is the user community in the stability of CentOS Stream for production use, compared to Alma or Rocky?
failed to mount hosted engine gluster storage - how to debug?
by diego.ercolani@ssis.sm
Hello, I have an issue that is probably related to my particular implementation, but I think some checks are missing.
Here is the story.
I have a cluster of two nodes on 4.4.10.3 with an upgraded kernel, as the CPU (Ryzen 5) suffers from an incompatibility issue with the kernel provided by the 4.4.10.x series.
On each node there are three glusterfs "partitions" in replica mode: one for the hosted_engine, the other two for user data.
The third node was an old i3 workstation used only to provide the arbiter partition to the glusterfs cluster.
I installed a new server (Ryzen processor) with 4.5.0, successfully installed glusterfs 10.1, and, after removing the bricks provided by the old i3, added arbiter bricks on glusterfs 10.1 while the replica bricks remain on 8.6.
I successfully imported the new node into the oVirt engine (after updating the engine to 4.5).
The problem is that ovirt-ha-broker doesn't start, complaining that it is not possible to connect the storage (I suppose the hosted_engine storage), so I did some digging, which I show here:
####
1. The node seems to be correctly configured:
[root@ovirt-node3 devices]# vdsm-tool validate-config
SUCCESS: ssl configured to true. No conflicts
[root@ovirt-node3 devices]# vdsm-tool configure
Checking configuration status...
libvirt is already configured for vdsm
SUCCESS: ssl configured to true. No conflicts
sanlock is configured for vdsm
Managed volume database is already configured
lvm is configured for vdsm
Current revision of multipath.conf detected, preserving
Running configure...
Done configuring modules to VDSM.
[root@ovirt-node3 devices]# vdsm-tool validate-config
SUCCESS: ssl configured to true. No conflicts
####
2. The node refuses to mount the storage via hosted-engine (same error appears in broker.log):
[root@ovirt-node3 devices]# hosted-engine --connect-storage
Traceback (most recent call last):
File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_setup/connect_storage_server.py", line 30, in <module>
timeout=ohostedcons.Const.STORAGE_SERVER_TIMEOUT,
File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/client/client.py", line 312, in connect_storage_server
sserver.connect_storage_server(timeout=timeout)
File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/storage_server.py", line 451, in connect_storage_server
'Connection to storage server failed'
RuntimeError: Connection to storage server failed
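For what it's worth, the traceback shows the call path the setup tool takes, so a hedged sketch to re-run just that step with verbose logging would be something like the following (the timeout value is a guess, and the extra logging may or may not surface more detail than broker.log already does):

    # Sketch: repeat the same connect_storage_server() call that
    # `hosted-engine --connect-storage` makes, with DEBUG logging enabled so
    # the underlying connectStorageServer failure is easier to see.
    import logging
    logging.basicConfig(level=logging.DEBUG)

    from ovirt_hosted_engine_ha.client import client  # module seen in the traceback

    client.HAClient().connect_storage_server(timeout=120)  # timeout is an assumption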
#####
3. Manually mounting the glusterfs volume works correctly:
[root@ovirt-node3 devices]# grep storage /etc/ovirt-hosted-engine/hosted-engine.conf
storage=ovirt-node2.ovirt:/gveng
# The following are used only for iSCSI storage
[root@ovirt-node3 devices]#
[root@ovirt-node3 devices]# mount -t glusterfs ovirt-node2.ovirt:/gveng /mnt/tmp/
[root@ovirt-node3 devices]# ls -l /mnt/tmp
total 0
drwxr-xr-x. 6 vdsm kvm 64 Dec 15 19:04 7b8f1cc9-e3de-401f-b97f-8c281ca30482
What else should I check? Thank you, and sorry for the long message.
Diego
moVirt delisted from Google Play
by Filip Krepinsky
Hi all,
Unfortunately, we decided to delist moVirt from Google Play. As you might
have noticed, the app has not been maintained for some time and also the
main repository https://github.com/oVirt/moVirt has been archived. It is
not possible for us to maintain a store presence due to new requirements for
Google APIs (and thus our libraries, which are obsolete at this point), app
interoperability requirements, and, in general, the expectations of our users.
For users who still wish to use moVirt, you can keep your current
application installed or download an APK (
https://github.com/oVirt/moVirt/releases/tag/v2.1) and install moVirt
manually. Just keep in mind that the app might not behave properly (mostly
on newer versions of Android).
I hope moVirt has been helpful with managing your envs :)
Best regards,
Filip
Dedicated Migration Network
by Clint Boggio
Good Day All;
I am in the process of assembling a new oVirt cluster, and I am wondering whether there is any benefit to having a set of NICs dedicated solely to VM migration. If so, would those NICs (bonded) be on their own VLAN and subnet, or would they share a subnet with the Gluster storage or the management network? Right now I'm planning on the following architecture (a rough SDK sketch of assigning the migration role to a network follows the list).
1. Management = 10G X2 Bond per hypervisor host (10.66.0.0/24)
2. Gluster/iSCSI Storage = 10G X2 Bond per hypervisor host (10.244.0.0/24)
3. VM Production Network(s) = 10G X2 Bond per hypervisor host (No IP Range)
4. ?? Possible Dedicated VM Migration
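For context, here is a rough sketch (oVirt Python SDK) of what assigning the migration role to a separate logical network could look like. The engine URL, credentials, cluster name and network name are placeholders, and the usages list replaces the network's current roles in that cluster.

    # Hypothetical sketch (ovirtsdk4): mark an already-attached logical network
    # as the cluster's migration network. All names below are placeholders.
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',
        insecure=True,
    )

    clusters_service = connection.system_service().clusters_service()
    cluster = clusters_service.list(search='name=Default')[0]
    cluster_nets = clusters_service.cluster_service(cluster.id).networks_service()

    # Find the logical network intended for migration traffic
    net = next(n for n in cluster_nets.list() if n.name == 'migration')

    # Give it the migration role (this list replaces the existing usages, so
    # include types.NetworkUsage.VM etc. if the network also carries VM traffic)
    cluster_nets.network_service(net.id).update(
        types.Network(usages=[types.NetworkUsage.MIGRATION]),
    )

    connection.close()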
Thank you very much; any input would be greatly appreciated.
"Retrieval of iSCSI targets failed" during hosted engine deployment on oVirt node 4.5
by pat@patfruth.com
I have freshly installed ovirt node 4.5 from the iso download here;
https://resources.ovirt.org/pub/ovirt-4.5/iso/ovirt-node-ng-installer/4.5...
output from 'rpm -qa | grep ovirt' shows
ovirt-hosted-engine-setup-2.6.3-1.el8.noarch
ovirt-imageio-daemon-2.4.3-1.el8.x86_64
python38-ovirt-engine-sdk4-4.5.1-1.el8.x86_64
ovirt-imageio-common-2.4.3-1.el8.x86_64
ovirt-node-ng-image-update-placeholder-4.5.0.3-1.el8.noarch
ovirt-openvswitch-2.15-3.el8.noarch
python38-ovirt-imageio-client-2.4.3-1.el8.x86_64
ovirt-openvswitch-ipsec-2.15-3.el8.noarch
ovirt-openvswitch-ovn-common-2.15-3.el8.noarch
ovirt-openvswitch-ovn-host-2.15-3.el8.noarch
centos-release-ovirt45-8.7-1.el8s.noarch
ovirt-provider-ovn-driver-1.2.36-1.el8.noarch
ovirt-host-dependencies-4.5.0-3.el8.x86_64
ovirt-release-host-node-4.5.0.3-1.el8.x86_64
ovirt-ansible-collection-2.0.3-1.el8.noarch
ovirt-vmconsole-1.0.9-1.el8.noarch
python3-ovirt-engine-sdk4-4.5.1-1.el8.x86_64
ovirt-node-ng-nodectl-4.4.2-1.el8.noarch
ovirt-openvswitch-ovn-2.15-3.el8.noarch
ovirt-hosted-engine-ha-2.5.0-1.el8.noarch
ovirt-host-4.5.0-3.el8.x86_64
ovirt-engine-appliance-4.5-20220511122240.1.el8.x86_64
ovirt-vmconsole-host-1.0.9-1.el8.noarch
ovirt-python-openvswitch-2.15-3.el8.noarch
python38-ovirt-imageio-common-2.4.3-1.el8.x86_64
python3-ovirt-node-ng-nodectl-4.4.2-1.el8.noarch
cockpit-ovirt-dashboard-0.16.0-1.el8.noarch
python3-ovirt-setup-lib-1.3.3-1.el8.noarch
ovirt-imageio-client-2.4.3-1.el8.x86_64
During the hosted engine deployment process, I get through Step 3 (the "Prepare VM" step) successfully.
On Step 4 (Storage Settings), I set:
- Storage type = iSCSI
- Portal IP address = my iSCSI target's IP address
- Accept the default Portal port number, which is already set to 3260
- Leave username & password blank (as I have no CHAP configured on the iSCSI target system)
When I click the "Retrieve Target List" button there is a brief pause, followed by a red error message that says "Retrieval of iSCSI targets failed".
Upon reviewing the files in /var/log/ovirt-hosted-engine-setup on the oVirt node, I find a new log file named ovirt-hosted-engine-setup-ansible-iscsi_discover-20220614084053-in517x.log
The messages near the end of the log file are as follows:
------ snip ------
.....
2022-06-14 08:41:03,510-0600 INFO ansible task start {'status': 'OK', 'ansible_type': 'task', 'ansible_playbook': '/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_task': 'ovirt.ovirt.hosted_engine_setup : iSCSI discover'}
2022-06-14 08:41:03,511-0600 DEBUG ansible on_any args TASK: ovirt.ovirt.hosted_engine_setup : iSCSI discover kwargs is_conditional:False
2022-06-14 08:41:03,511-0600 DEBUG ansible on_any args localhost TASK: ovirt.ovirt.hosted_engine_setup : iSCSI discover kwargs
2022-06-14 08:41:06,430-0600 DEBUG var changed: host "localhost" var "ansible_play_hosts" type "<class 'list'>" value: "[]"
2022-06-14 08:41:06,430-0600 DEBUG var changed: host "localhost" var "ansible_play_batch" type "<class 'list'>" value: "[]"
2022-06-14 08:41:06,430-0600 DEBUG var changed: host "localhost" var "play_hosts" type "<class 'list'>" value: "[]"
2022-06-14 08:41:06,431-0600 ERROR ansible failed {
"ansible_host": "localhost",
"ansible_playbook": "/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
"ansible_result": {
"_ansible_no_log": false,
"changed": false,
"exception": "Traceback (most recent call last):\n File \"/tmp/ansible_ovirt_host_payload_ky4zlp1s/ansible_ovirt_host_payload.zip/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_host.py\", line 638, in main\nTypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'\n",
"invocation": {
"module_args": {
"activate": true,
"address": null,
"check_upgrade": true,
"cluster": null,
"comment": null,
"enroll_certificate": false,
"fetch_nested": false,
"force": false,
"hosted_engine": null,
"id": null,
"iscsi": {
"address": "192.168.1.2",
"password": "",
"port": null,
"username": ""
},
"kdump_integration": null,
"kernel_params": null,
"name": "ovirt-node01.internal.net",
"nested_attributes": [],
"override_display": null,
"override_iptables": null,
"password": null,
"poll_interval": 3,
"power_management_enabled": null,
"public_key": false,
"reboot_after_installation": null,
"reboot_after_upgrade": true,
"spm_priority": null,
"ssh_port": null,
"state": "iscsidiscover",
"timeout": 600,
"vgpu_placement": null,
"wait": true
}
},
"msg": "int() argument must be a string, a bytes-like object or a number, not 'NoneType'"
},
"ansible_task": "iSCSI discover",
"ansible_type": "task",
"status": "FAILED",
"task_duration": 3
}
2022-06-14 08:41:06,431-0600 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7f9c76744370> kwargs ignore_errors:None
2022-06-14 08:41:06,432-0600 INFO ansible stats {
"ansible_playbook": "/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
"ansible_playbook_duration": "00:11 Minutes",
"ansible_result": "type: <class 'dict'>\nstr: {'localhost': {'ok': 5, 'failures': 1, 'unreachable': 0, 'changed': 0, 'skipped': 0, 'rescued': 0, 'ignored': 0}}",
"ansible_type": "finish",
"status": "FAILED"
}
2022-06-14 08:41:06,432-0600 INFO SUMMARY:
Duration Task Name
-------- --------
[ < 1 sec ] Execute just a specific set of steps
[ 00:01 ] Force facts gathering
[ 00:03 ] Obtain SSO token using username/password credentials
[ 00:03 ] Fetch host facts
[ FAILED ] iSCSI discover
2022-06-14 08:41:06,432-0600 DEBUG ansible on_any args <ansible.executor.stats.AggregateStats object at 0x7f9c793444c0> kwargs
------ snip ------
The error suggests that the iSCSI portal port number (which defaults to 3260 in the UI) is not being properly passed into the Python module ovirt/ovirt/plugins/modules/ovirt_host.py.
Looking at the code at line 638 of ovirt_host.py, found here:
https://github.com/oVirt/ovirt-ansible-collection/blob/2.0.3-1/plugins/mo...
I see:
.....
    elif state == 'iscsidiscover':
        host_id = get_id_by_name(hosts_service, module.params['name'])
        iscsi_param = module.params['iscsi']
        iscsi_targets = hosts_service.service(host_id).discover_iscsi(
            iscsi=otypes.IscsiDetails(
                port=int(iscsi_param.get('port', 3260)),          <---- line 638
                username=iscsi_param.get('username'),
                password=iscsi_param.get('password'),
                address=iscsi_param.get('address'),
                portal=iscsi_param.get('portal'),
            ),
        )
        ret = {
            'changed': False,
            'id': host_id,
            'iscsi_targets': [iscsi.target for iscsi in iscsi_targets],
            'iscsi_targets_struct': [get_dict_of_struct(
                struct=iscsi,
                connection=connection,
                fetch_nested=module.params.get('fetch_nested'),
                attributes=module.params.get('nested_attributes'),
            ) for iscsi in iscsi_targets],
        }
.....
I'm not a Python expert, so I can't tell whether this logic is correct, but the module_args above show "port": null, so int() is apparently being handed None rather than the 3260 default.
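A quick illustration of the suspected behaviour (the dictionary mirrors the module_args in the log above; the `or 3260` guard at the end is only a hypothetical workaround, not the project's actual fix):

    # dict.get() only falls back to the default when the key is absent,
    # not when the key is present with a value of None.
    iscsi_param = {"address": "192.168.1.2", "port": None,
                   "username": "", "password": ""}

    print(iscsi_param.get("port", 3260))   # prints None: the key exists
    # int(iscsi_param.get("port", 3260))   # raises the TypeError seen in the log

    print(int(iscsi_param.get("port") or 3260))  # hypothetical guard -> 3260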
Looking at the Git history for this code, it looks like the last change affecting iSCSI was made in May 2021:
https://github.com/oVirt/ovirt-ansible-collection/commit/1c4c18d844a69b82...
By Martin Necas - https://github.com/mnecas
I gotta believe I'm not the first one to try setting up the oVirt 4.5 hosted engine with iSCSI storage.
Is anyone else out there using iSCSI storage with oVirt 4.5.0.3 yet?
How did you get it working?
Issue adding host to oVirt 4.4 cluster
by David Johnson
Good morning all,
I am attempting to add a host to my oVirt 4.4 cluster. The installation of
the first host went smoothly, but the installation of the second host
stalled.
Currently, the second host is stuck in the Installing state, doing nothing. The
installation failed due to a failure to register the host certificate.
I cannot change the state of the host or retry the installation to capture
logs.
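In case it helps anyone suggest a path forward, here is a hedged sketch of trying the same "maintenance + reinstall" through the Python SDK instead of the UI. The engine URL and credentials are placeholders, and the engine may still refuse these calls while the host is mid-install.

    # Hypothetical sketch (ovirtsdk4): inspect the stuck host and attempt to
    # push it back through maintenance and a reinstall from the API.
    import ovirtsdk4 as sdk

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',
        insecure=True,
    )

    hosts_service = connection.system_service().hosts_service()
    host = hosts_service.list(search='name=ovirt-host-04')[0]
    host_service = hosts_service.host_service(host.id)
    print('current status:', host.status)

    host_service.deactivate()  # move to maintenance, if the engine allows it
    host_service.install()     # re-run host deploy; may need ssh/root credentials

    connection.close()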
Warnings from the host that are visible on the oVirt console are:
- Power Management is not configured for this Host. Enable Power
Management
- Host has no default route.
- The host CPU does not match the Cluster CPU Type and is running in a
degraded mode. It is missing the following CPU flags: vmx,
model_Cascadelake-Server-noTSX. Please update the host CPU microcode or
change the Cluster CPU Type.
The error message generated at the last attempt to install the host from
the oVirt console is:
- Failed to enroll certificate for host ovirt-host-04 (User:
admin@internal-authz).
Please advise.