I think you may be right here. I decided to just start over and use the actual
ovirt-node installation media, rather than the CentOS Stream installation media. Hopefully
that gets the software side situated. Thanks for the pointers.
________________________________
From: Strahil Nikolov <hunter86_bg(a)yahoo.com>
Sent: Sunday, January 23, 2022 5:46 PM
To: Robert Tongue <phunyguy(a)neverserio.us>; users <users(a)ovirt.org>
Subject: Re: [ovirt-users] Failed HostedEngine Deployment
yum downgrade qemu-kvm-block-gluster-6.0.0-33.el8s libvirt-daemon-driver-qemu-6.0.0-33.el8s \
  qemu-kvm-common-6.0.0-33.el8s qemu-kvm-hw-usbredir-6.0.0-33.el8s qemu-kvm-ui-opengl-6.0.0-33.el8s \
  qemu-kvm-block-rbd-6.0.0-33.el8s qemu-img-6.0.0-33.el8s qemu-kvm-6.0.0-33.el8s \
  qemu-kvm-block-curl-6.0.0-33.el8s qemu-kvm-block-ssh-6.0.0-33.el8s qemu-kvm-ui-spice-6.0.0-33.el8s \
  ipxe-roms-qemu-6.0.0-33.el8s qemu-kvm-core-6.0.0-33.el8s qemu-kvm-docs-6.0.0-33.el8s \
  qemu-kvm-block-6.0.0-33.el8s
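To confirm the downgrade took effect, something along these lines should do:
# rpm -q qemu-kvm qemu-kvm-core qemu-img
Each of those should then report the 6.0.0-33.el8s build.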
Best Regards,
Strahil Nikolov
On Sun, Jan 23, 2022 at 22:47, Robert Tongue
<phunyguy(a)neverserio.us> wrote:
Ahh, I did some repoquery commands and can see that a good number of the qemu* packages are
coming from appstream rather than ovirt-4.4-centos-stream-advanced-virtualization.
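For reference, a check along these lines shows which repo each installed package came from
(assuming a dnf new enough to support the from_repo queryformat tag):
# dnf repoquery --installed --qf '%{name}-%{version}-%{release} %{from_repo}' 'qemu*'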
What's the recommended fix?
________________________________
From: Strahil Nikolov <hunter86_bg(a)yahoo.com>
Sent: Sunday, January 23, 2022 3:41 PM
To: users <users(a)ovirt.org>; Robert Tongue <phunyguy(a)neverserio.us>
Subject: Re: [ovirt-users] Failed HostedEngine Deployment
I've seen this.
Ensure that all qemu-related packages are coming from the centos-advanced-virtualization repo
(6.0.0-33.el8s.x86_64).
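A quick way to verify is something like:
# dnf info --installed 'qemu*' | grep -E 'Name|From repo'
The 'From repo' lines should all point at the advanced-virtualization repo, not appstream.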
There is a known issue with the latest packages in CentOS Stream.
Also, you can set the following alias on the hypervisors:
alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
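With that in place, you can check the VM state on the host without being prompted for
credentials, e.g.:
# virsh list --all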
Best Regards,
Strahil Nikolov
On Sunday, 23 January 2022 at 21:14:20 GMT+2, Robert Tongue
<phunyguy(a)neverserio.us> wrote:
Greetings oVirt people,
I am having a problem with the hosted-engine deployment, and unfortunately after a weekend
spent trying to get this far, I am finally stuck, and cannot figure out how to fix this.
I am starting with 1 host, and will have 4 when this is finished. Storage is GlusterFS,
hyperconverged, but I am managing that myself outside of oVirt. It's a single-node
GlusterFS volume, which I will expand out across the other nodes as well. I get all the
way through the initial hosted-engine deployment (via the cockpit interface) pre-storage,
then get most of the way through the storage portion of it. It fails at the final stage,
starting the HostedEngine VM after copying the VM disk to shared storage.
This is where it gets weird.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
"Engine VM IP address is while the engine's he_fqdn ovirt.deleted.domain resolves
to 192.168.x.x. If you are using DHCP, check your DHCP reservation configuration"}
I've masked out the domain and IP for obvious reasons. However, I think this
deployment error isn't really the root cause of the failure; it's just where the process
happens to be when it fails. The HostedEngine VM is starting, but not actually booting. I
was able to change the VNC password with `hosted-engine --add-console-password` and view
the local console with that; however, it just displays "The guest has not initialized
the display (yet)".
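(For the record, the console was viewed with a standard VNC client pointed at the host,
e.g. something like `remote-viewer vnc://node1:5900` -- the exact port may differ on your setup.)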
I also did:
# hosted-engine --console
The engine VM is running on this host
Escape character is ^]
Yet that doesn't go any further, nor does it accept any input. The VM does not respond on
the network. I am thinking it's not even making it to the initial BIOS screen and
booting at all. What would cause that?
Here is the GlusterFS volume info for clarity.
# gluster volume info storage
Volume Name: storage
Type: Distribute
Volume ID: e9544310-8890-43e3-b49c-6e8c7472dbbb
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: node1:/var/glusterfs/storage/1
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
network.ping-timeout: 5
performance.client-io-threads: on
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1024
cluster.locking-scheme: full
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
performance.strict-o-direct: on
network.remote-dio: disable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 58
model name : Intel(R) Xeon(R) CPU E3-1280 V2 @ 3.60GHz
stepping : 9
microcode : 0x21
cpu MHz : 4000.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush
dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon
pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor
ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt
tsc_deadline_timer xsave avx f16c rdrand lahf_lm cpuid_fault epb pti ssbd ibrs ibpb stibp
tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts
md_clear flush_l1d
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit
srbds
bogomips : 7199.86
clflush size : 64
cache_alignment: 64
address sizes : 36 bits physical, 48 bits virtual
power management:
[ plus 7 more ]
Thanks for any insight that can be provided.
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement:
https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JZQYGXQP5DO...