Re: Failed HostedEngine Deployment
by Strahil Nikolov
yum downgrade qemu-kvm-block-gluster-6.0.0-33.el8s libvirt-daemon-driver-qemu-6.0.0-33.el8s qemu-kvm-common-6.0.0-33.el8s qemu-kvm-hw-usbredir-6.0.0-33.el8s qemu-kvm-ui-opengl-6.0.0-33.el8s qemu-kvm-block-rbd-6.0.0-33.el8s qemu-img-6.0.0-33.el8s qemu-kvm-6.0.0-33.el8s qemu-kvm-block-curl-6.0.0-33.el8s qemu-kvm-block-ssh-6.0.0-33.el8s qemu-kvm-ui-spice-6.0.0-33.el8s ipxe-roms-qemu-6.0.0-33.el8s qemu-kvm-core-6.0.0-33.el8s qemu-kvm-docs-6.0.0-33.el8s qemu-kvm-block-6.0.0-33.el8s
Best Regards,
Strahil Nikolov
On Sun, Jan 23, 2022 at 22:47, Robert Tongue <phunyguy(a)neverserio.us> wrote:
Ahh, I did some repoquery commands and can see that a good bit of the qemu* packages are coming from appstream rather than ovirt-4.4-centos-stream-advanced-virtualization.
What's the recommended fix?

From: Strahil Nikolov <hunter86_bg(a)yahoo.com>
Sent: Sunday, January 23, 2022 3:41 PM
To: users <users(a)ovirt.org>; Robert Tongue <phunyguy(a)neverserio.us>
Subject: Re: [ovirt-users] Failed HostedEngine Deployment

I've seen this.
Ensure that all qemu-related packages are coming from centos-advanced-virtualization repo (6.0.0-33.el8s.x86_64).
There is a known issue with the latest packages in the CentOS Stream.
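A quick way to verify where the installed packages come from (a hedged sketch; adjust the package glob to your system) is:

# dnf list installed 'qemu*'
# dnf info qemu-kvm | grep -i 'from repo'

The repo column/field should point at ovirt-4.4-centos-stream-advanced-virtualization rather than appstream.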
Also, you can set the following alias on the hypervisors:
alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
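Once the alias is in place, a minimal sanity check (a sketch, assuming the default HostedEngine domain name) would be:

# virsh list --all
# virsh dumpxml HostedEngine | head

which should list the HostedEngine domain and its state without prompting for credentials.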
Best Regards,
Strahil Nikolov
On Sunday, 23 January 2022 at 21:14:20 GMT+2, Robert Tongue <phunyguy(a)neverserio.us> wrote:
Greetings oVirt people,
I am having a problem with the hosted-engine deployment, and unfortunately after a weekend spent trying to get this far, I am finally stuck, and cannot figure out how to fix this.
I am starting with 1 host, and will have 4 when this is finished. Storage is GlusterFS, hyperconverged, but I am managing that myself outside of oVirt. It's a single-node GlusterFS volume, which I will expand out across the other 4 nodes as well. I get all the way through the initial hosted-engine deployment (via the cockpit interface) pre-storage, then get most of the way through the storage portion of it. It fails at starting the HostedEngine VM in its final state after copying the VM disk to shared storage.
This is where it gets weird.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Engine VM IP address is while the engine's he_fqdn ovirt.deleted.domain resolves to 192.168.x.x. If you are using DHCP, check your DHCP reservation configuration"}
I've masked out the domain and IP for obvious reasons. However I think this deployment error isn't really the reason for the failure, it's just where it is at when it fails. The HostedEngine VM is starting, but not actually booting. I was able to change the VNC password with `hosted-engine --add-console-password`, and see the local console display with that, however it just displays "The guest has not initialized the display (yet)".
I also did:
# hosted-engine --console
The engine VM is running on this host
Escape character is ^]
Yet that doesn't move any further, nor allow any input. The VM does not respond on the network. I am thinking it's just not making it to the initial BIOS screen and booting at all. What would cause that?
Here is the glusterfs volume for clarity.
# gluster volume info storage
Volume Name: storage
Type: Distribute
Volume ID: e9544310-8890-43e3-b49c-6e8c7472dbbb
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: node1:/var/glusterfs/storage/1
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
network.ping-timeout: 5
performance.client-io-threads: on
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1024
cluster.locking-scheme: full
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
performance.strict-o-direct: on
network.remote-dio: disable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 58
model name : Intel(R) Xeon(R) CPU E3-1280 V2 @ 3.60GHz
stepping : 9
microcode : 0x21
cpu MHz : 4000.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer xsave avx f16c rdrand lahf_lm cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts md_clear flush_l1d
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit srbds
bogomips : 7199.86
clflush size : 64
cache_alignment: 64
address sizes : 36 bits physical, 48 bits virtual
power management:
[ plus 7 more ]
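For completeness, a few further checks that should show whether the VM process is actually alive (a sketch only; virsh is used read-only here and the default HostedEngine domain name is assumed):

# hosted-engine --vm-status
# ps aux | grep [q]emu-kvm
# virsh -r list --all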
Thanks for any insight that can be provided.
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/JZQYGXQP5DO...
2 years, 10 months
Major problems after upgrading 2 (of 4) Red Hat hosts to 4.4.10
by David White
I have a Hyperconverged cluster with 4 hosts.
Gluster is replicated across 2 hosts, and a 3rd host is an arbiter node.
The 4th host is compute only.
I updated the compute-only node, as well as the arbiter node, early this morning. I didn't touch either of the actual storage nodes. That said, I forgot to upgrade the engine.
oVirt Manager thinks that all but 1 of the hosts in the cluster are unhealthy. However, all 4 hosts are online. oVirt Manager (Engine) also keeps deactivating at least 1, if not 2 of the 3 (total) bricks behind each volume.
Even though the Engine thinks that only 1 host is healthy, VMs are clearly running on some of the other hosts. However, in troubleshooting, some of the customer VMs were turned off, and oVirt is refusing to start those VMs, because it only recognizes that 1 of the hosts is healthy -- and that host's resources are maxed out.
This afternoon, I went ahead and upgraded (and rebooted) the Engine VM, so it is now up-to-date. Unfortunately, that didn't resolve the issue. So I took one of the "unhealthy" hosts which didn't have any VMs on it (which was the host that is our compute-only server hosting no gluster data), and I used oVirt to "reinstall" the oVirt software. That didn't resolve the issue for that host.
How can I troubleshoot this? I need:
- To figure out why oVirt keeps trying to deactivate volumes
- From the command line, `gluster peer status` shows all nodes connected, and all volumes appear to be healthy
- More importantly, I need to get these VMs that are currently down back online. Is there a way to somehow force oVirt to launch the VMs on the "unhealthy" nodes?
What logs should I be looking at? Any help would be greatly appreciated.
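For what it's worth, the usual starting points (assuming default log locations) are /var/log/ovirt-engine/engine.log on the engine VM, /var/log/vdsm/vdsm.log on each host, and the glusterd/brick logs under /var/log/glusterfs/ on the storage nodes. On the Gluster side, a quick health check would be something like:

# gluster volume status
# gluster volume heal <volname> info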
Sent with ProtonMail Secure Email.
2 years, 10 months
Re: Failed HostedEngine Deployment
by Strahil Nikolov
I've seen this.
Ensure that all qemu-related packages are coming from centos-advanced-virtualization repo (6.0.0-33.el8s.x86_64).
There is a known issue with the latest packages in the CentOS Stream.
Also, you can set the following alias on the hypervisors:
alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
Best Regards,
Strahil Nikolov
On Sunday, 23 January 2022 at 21:14:20 GMT+2, Robert Tongue <phunyguy(a)neverserio.us> wrote:
Greetings oVirt people,
I am having a problem with the hosted-engine deployment, and unfortunately after a weekend spent trying to get this far, I am finally stuck, and cannot figure out how to fix this.
I am starting with 1 host, and will have 4 when this is finished. Storage is GlusterFS, hyperconverged, but I am managing that myself outside of oVirt. It's a single-node GlusterFS volume, which I will expand out across the other 4 nodes as well. I get all the way through the initial hosted-engine deployment (via the cockpit interface) pre-storage, then get most of the way through the storage portion of it. It fails at starting the HostedEngine VM in its final state after copying the VM disk to shared storage.
This is where it gets weird.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Engine VM IP address is while the engine's he_fqdn ovirt.deleted.domain resolves to 192.168.x.x. If you are using DHCP, check your DHCP reservation configuration"}
I've masked out the domain and IP for obvious reasons. However I think this deployment error isn't really the reason for the failure, it's just where it is at when it fails. The HostedEngine VM is starting, but not actually booting. I was able to change the VNC password with `hosted-engine --add-console-password`, and see the local console display with that, however it just displays "The guest has not initialized the display (yet)".
I also did:
# hosted-engine --console
The engine VM is running on this host
Escape character is ^]
Yet that doesn't move any further, nor allow any input. The VM does not respond on the network. I am thinking it's just not making it to the initial BIOS screen and booting at all. What would cause that?
Here is the glusterfs volume for clarity.
# gluster volume info storage
Volume Name: storage
Type: Distribute
Volume ID: e9544310-8890-43e3-b49c-6e8c7472dbbb
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: node1:/var/glusterfs/storage/1
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
network.ping-timeout: 5
performance.client-io-threads: on
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1024
cluster.locking-scheme: full
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
performance.strict-o-direct: on
network.remote-dio: disable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 58
model name : Intel(R) Xeon(R) CPU E3-1280 V2 @ 3.60GHz
stepping : 9
microcode : 0x21
cpu MHz : 4000.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer xsave avx f16c rdrand lahf_lm cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts md_clear flush_l1d
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit srbds
bogomips : 7199.86
clflush size : 64
cache_alignment: 64
address sizes : 36 bits physical, 48 bits virtual
power management:
[ plus 7 more ]
Thanks for any insight that can be provided.
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/JZQYGXQP5DO...
2 years, 10 months
Failed HostedEngine Deployment
by Robert Tongue
Greetings oVirt people,
I am having a problem with the hosted-engine deployment, and unfortunately after a weekend spent trying to get this far, I am finally stuck, and cannot figure out how to fix this.
I am starting with 1 host, and will have 4 when this is finished. Storage is GlusterFS, hyperconverged, but I am managing that myself outside of oVirt. It's a single-node GlusterFS volume, which I will expand out across the other 4 nodes as well. I get all the way through the initial hosted-engine deployment (via the cockpit interface) pre-storage, then get most of the way through the storage portion of it. It fails at starting the HostedEngine VM in its final state after copying the VM disk to shared storage.
This is where it gets weird.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Engine VM IP address is while the engine's he_fqdn ovirt.deleted.domain resolves to 192.168.x.x. If you are using DHCP, check your DHCP reservation configuration"}
I've masked out the domain and IP for obvious reasons. However I think this deployment error isn't really the reason for the failure, it's just where it is at when it fails. The HostedEngine VM is starting, but not actually booting. I was able to change the VNC password with `hosted-engine --add-console-password`, and see the local console display with that, however it just displays "The guest has not initialized the display (yet)".
I also did:
# hosted-engine --console
The engine VM is running on this host
Escape character is ^]
Yet that doesn't move any further, nor allow any input. The VM does not respond on the network. I am thinking it's just not making it to the initial BIOS screen and booting at all. What would cause that?
Here is the glusterfs volume for clarity.
# gluster volume info storage
Volume Name: storage
Type: Distribute
Volume ID: e9544310-8890-43e3-b49c-6e8c7472dbbb
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: node1:/var/glusterfs/storage/1
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
network.ping-timeout: 5
performance.client-io-threads: on
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1024
cluster.locking-scheme: full
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
performance.strict-o-direct: on
network.remote-dio: disable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 58
model name : Intel(R) Xeon(R) CPU E3-1280 V2 @ 3.60GHz
stepping : 9
microcode : 0x21
cpu MHz : 4000.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer xsave avx f16c rdrand lahf_lm cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts md_clear flush_l1d
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit srbds
bogomips : 7199.86
clflush size : 64
cache_alignment: 64
address sizes : 36 bits physical, 48 bits virtual
power management:
[ plus 7 more ]
Thanks for any insight that can be provided.
2 years, 10 months
How does the Stream based management engine transition to RHV?
by Thomas Hoberg
In recent days, I've been trying to validate the transition from CentOS 8 to Alma, Rocky, Oracle and perhaps soon Liberty Linux for existing HCI clusters.
I am using nested virtualization on a VMware Workstation host, because I understand snapshotting and linked clones much better on VMware, even if I've tested nested virtualization to some degree with oVirt as well. It makes moving back and forth between distros and restarting failed oVirt deployments much easier and more reliable than ovirt-hosted-engine-cleanup.
Installing oVirt 4.10 on TrueCentOS systems, which had been freshly switched to Alma, Rocky and Oracle, went relatively well, apart from Oracle pushing UEK kernels, which break VDO (and some Python 2 mishaps).
I'm still testing transitioning pre-existing TrueCentOS HCI glusters to Alma, Rocky and Oracle.
While that solves the issue of having the hosts running a mature OS which is downstream of RHEL, there is still an issue with the management engine being based on the upstream Stream release: it doesn't have the vulnerability management baked in, which is required even for lab use in an enterprise.
So I'd like to ask our Red Hat friends here: how does this work when releases of oVirt transition to RHV? Do you backport oVirt changes from Stream to RHEL? When bugs are found in that process, are they then fed back into oVirt or into the oVirt-to-RHV process?
2 years, 10 months
CentOS 8.4 Linux hosts from 4.4.8 to Rocky Linux 4.4.10
by Gianluca Cecchi
Hello,
after updating the external engine from CentOS 8.4 and 4.4.8 to Rocky Linux
8.5 and 4.4.9 as outlined here:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YUDJRC22SQP...
I went further and updated also the hosts.
Environment is with an external engine and 3 CentOS Linux 8.4 hosts in
4.4.8 with iSCSI storage domain.
Preliminarily I upgraded the engine to 4.4.10 (not yet the just released
async) without problems.
Then, one host at a time:
. put host into maintenance from web admin UI
Management --> Maintenance
. In a terminal on the host, set the proxy for my environment's needs
export https_proxy=http://my_proxy:my_proxy_port
export http_proxy=http://my_proxy:my_proxy_port (not sure if this
is necessary...)
. in the same terminal execute migration script
./migrate2rocky.sh -r
. executed Management --> SSH Management --> SSH Restart from web admin ui
the host comes on in maintenance mode
. selected Installation --> Check for Upgrade but the host is detected as
already updated
. for further security and to be sure that all upgrade steps are applied I
executed
Installation --> Reinstall
I deselected
- activate host after install
- reboot host after install
It went ok so
. executed Management --> SSH Management --> SSH Restart from web admin ui
the host comes on in maintenance mode
. Management --> Activate
. Empty another host moving its VMs to the just updated host and continue
in the same way, also electing as new SPM the updated host
All went smoothly and without VMs disruption.
Let's see how it goes next days with the light workload I have on this
testing environment.
Currently the async 1 release of 4.4.10 is not yet picked up by the engine-upgrade-check
command. I'm going to retry applying it again during the next few days.
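For reference, the engine-side minor-update sequence I follow (a sketch of the standard documented steps, run on the engine machine) is:

# engine-upgrade-check
# dnf update ovirt\*setup\*
# engine-setup
# dnf update

followed by a reboot of the engine if a new kernel was pulled in.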
Gianluca
2 years, 10 months
mdadm vs. JBOD
by jonas@rabe.ch
Hi,
We are currently building a three-node hyper-converged cluster based on oVirt Node and Gluster. While discussing the different storage layouts, we couldn't reach a final decision.
Currently our servers are equipped as follows:
- servers 1 & 2:
- Two 800GB disks for OS
- 100GB RAID1 used as LVM PV for OS
- Nine 7.68TB disks for Gluster
- 60TB RAID 5 used as LVM PV for Gluster
- server 3
- Two 800GB disks for OS & Gluster
- 100GB RAID 1 used as LVM PV for OS
- 700GB RAID 1 used as LVM PV for Gluster
Unfortunately I couldn't find much information about mdadm on this topic. The hyper-convergence guides ([1], [2]) seem to assume that there is either a hardware RAID in place or JBOD is used. Is there some documentation available on what to consider when using mdadm? Or would it be more sensible to just use JBOD and then add redundancy on the LVM or Gluster level?
If choosing to go with mdadm, what option should I choose in the bricks wizard screen (RAID 5 or JBOD)?
[1]: https://ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyp...
[2]: https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infr...
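For illustration, a hand-built brick on top of mdadm (a sketch only; the device names, RAID layout and LV size below are placeholders, not recommendations) could look like:

# mdadm --create /dev/md0 --level=5 --raid-devices=9 /dev/sd[b-j]
# pvcreate /dev/md0
# vgcreate gluster_vg /dev/md0
# lvcreate -n gluster_lv -L 50T gluster_vg
# mkfs.xfs -i size=512 /dev/gluster_vg/gluster_lv

In that setup the deployment wizard only ever sees a single block device, so presumably JBOD is the closer match in the bricks screen, but I'd appreciate confirmation.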
2 years, 10 months
oVirt 4.4.10 is now generally available
by Sandro Bonazzola
oVirt 4.4.10 is now generally available
The oVirt project is excited to announce the general availability of oVirt
4.4.10, as of January 18th, 2022.
This release unleashes an altogether more powerful and flexible open source
virtualization solution that encompasses hundreds of individual changes and
a wide range of enhancements across the engine, storage, network, user
interface, and analytics, as compared to oVirt 4.3.
Important notes before you install / upgrade
Please note that oVirt 4.4 only supports clusters and data centers with
compatibility version 4.2 and above. If clusters or data centers are
running with an older compatibility version, you need to upgrade them to at
least 4.2 (4.3 is recommended).
Please note that in RHEL 8 / CentOS 8 several devices that worked on EL7
are no longer supported.
For example, the megaraid_sas driver is removed. If you use Enterprise
Linux 8 hosts you can try to provide the necessary drivers for the
deprecated hardware using the DUD method (See the users’ mailing list
thread on this at
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/NDSVUZSESOXE...
)
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt <https://ovirt.org/community/about.html>.
What’s new in oVirt 4.4.10 Release?
This update is the ninth in a series of stabilization updates to the 4.4
series.
This release is available now on x86_64 architecture for:
- Red Hat Enterprise Linux 8.5 (or similar)
- CentOS Stream 8
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures for:
- Red Hat Enterprise Linux 8.5 (or similar)
- oVirt Node NG (based on CentOS Stream 8)
- CentOS Stream 8
Some of the RFEs with high user impact are listed below:
- Bug 2012135 <https://bugzilla.redhat.com/show_bug.cgi?id=2012135> - [RFE] Provide option to unmap multiple LUNs using ovirt_remove_stale_lun ansible role
Some of the Bugs with high user impact are listed below:
- Bug 1710323 <https://bugzilla.redhat.com/show_bug.cgi?id=1710323> - Microsoft failover cluster is not working with FC direct LUN on Windows 2016 server and Windows 2019
- Bug 2023224 <https://bugzilla.redhat.com/show_bug.cgi?id=2023224> - multipath -f fails with "map in use" error while removing the LUNs using "ovirt_remove_stale_lun"
- Bug 2027260 <https://bugzilla.redhat.com/show_bug.cgi?id=2027260> - Cold backup fail in various ways - backup is reported ready before add_bitmap jobs complete
- Bug 1897114 <https://bugzilla.redhat.com/show_bug.cgi?id=1897114> - Add additional logging information to be able to understand why host is stuck in Unassigned state
- Bug 1978655 <https://bugzilla.redhat.com/show_bug.cgi?id=1978655> - ELK integration fails due to missing configuration parameters
- Bug 2032919 <https://bugzilla.redhat.com/show_bug.cgi?id=2032919> - Unable to add EL 7 host into oVirt Engine in clusters 4.2/4.3
oVirt Node has been updated, including:
- oVirt 4.4.10: https://www.ovirt.org/release/4.4.10/
- CentOS Stream 8 latest updates
- Full list of changes:
--- ovirt-node-ng-image-4.4.9.3.manifest-rpm 2021-12-15 15:40:13.501764699
+0100
+++ ovirt-node-ng-image-4.4.10.manifest-rpm 2022-01-19 08:09:36.668868583
+0100
@@ -2,7 +2,6 @@
-ModemManager-glib-1.18.2-1.el8.x86_64
-NetworkManager-1.36.0-0.2.el8.x86_64
-NetworkManager-config-server-1.36.0-0.2.el8.noarch
-NetworkManager-libnm-1.36.0-0.2.el8.x86_64
-NetworkManager-ovs-1.36.0-0.2.el8.x86_64
-NetworkManager-team-1.36.0-0.2.el8.x86_64
-NetworkManager-tui-1.36.0-0.2.el8.x86_64
+NetworkManager-1.36.0-0.3.el8.x86_64
+NetworkManager-config-server-1.36.0-0.3.el8.noarch
+NetworkManager-libnm-1.36.0-0.3.el8.x86_64
+NetworkManager-ovs-1.36.0-0.3.el8.x86_64
+NetworkManager-team-1.36.0-0.3.el8.x86_64
+NetworkManager-tui-1.36.0-0.3.el8.x86_64
@@ -23 +22 @@
-alsa-lib-1.2.5-4.el8.x86_64
+alsa-lib-1.2.6.1-1.el8.x86_64
@@ -29,2 +28,2 @@
-augeas-1.12.0-6.el8.x86_64
-augeas-libs-1.12.0-6.el8.x86_64
+augeas-1.12.0-7.el8.x86_64
+augeas-libs-1.12.0-7.el8.x86_64
@@ -34 +33 @@
-autofs-5.1.4-74.el8.x86_64
+autofs-5.1.4-77.el8.x86_64
@@ -44 +43 @@
-binutils-2.30-111.el8.x86_64
+binutils-2.30-112.el8.x86_64
@@ -46 +45 @@
-blivet-data-3.4.0-7.el8.noarch
+blivet-data-3.4.0-8.el8.noarch
@@ -58 +57 @@
-ceph-common-16.2.6-1.el8s.x86_64
+ceph-common-16.2.7-1.el8s.x86_64
@@ -63,6 +62,6 @@
-clevis-15-4.el8.x86_64
-clevis-dracut-15-4.el8.x86_64
-clevis-luks-15-4.el8.x86_64
-clevis-systemd-15-4.el8.x86_64
-cockpit-258-1.el8.x86_64
-cockpit-bridge-258-1.el8.x86_64
+clevis-15-6.el8.x86_64
+clevis-dracut-15-6.el8.x86_64
+clevis-luks-15-6.el8.x86_64
+clevis-systemd-15-6.el8.x86_64
+cockpit-260-1.el8.x86_64
+cockpit-bridge-260-1.el8.x86_64
@@ -70,3 +69,3 @@
-cockpit-storaged-255-1.el8.noarch
-cockpit-system-258-1.el8.noarch
-cockpit-ws-258-1.el8.x86_64
+cockpit-storaged-259-1.el8.noarch
+cockpit-system-260-1.el8.noarch
+cockpit-ws-260-1.el8.x86_64
@@ -116 +115 @@
-dmidecode-3.2-10.el8.x86_64
+dmidecode-3.3-1.el8.x86_64
@@ -193 +192 @@
-fwupd-1.5.9-1.el8_4.x86_64
+fwupd-1.7.1-1.el8.x86_64
@@ -196 +195 @@
-gdb-headless-8.2-17.el8.x86_64
+gdb-headless-8.2-18.el8.x86_64
@@ -206,4 +205,3 @@
-glibc-2.28-174.el8.x86_64
-glibc-common-2.28-174.el8.x86_64
-glibc-gconv-extra-2.28-174.el8.x86_64
-glibc-langpack-en-2.28-174.el8.x86_64
+glibc-2.28-181.el8.x86_64
+glibc-common-2.28-181.el8.x86_64
+glibc-langpack-en-2.28-181.el8.x86_64
@@ -244 +242 @@
-gssproxy-0.8.0-19.el8.x86_64
+gssproxy-0.8.0-20.el8.x86_64
@@ -253 +251 @@
-hwdata-0.314-8.10.el8.noarch
+hwdata-0.314-8.11.el8.noarch
@@ -260,4 +258,4 @@
-ipa-client-4.9.6-6.module_el8.5.0+947+fabc681e.x86_64
-ipa-client-common-4.9.6-6.module_el8.5.0+947+fabc681e.noarch
-ipa-common-4.9.6-6.module_el8.5.0+947+fabc681e.noarch
-ipa-selinux-4.9.6-6.module_el8.5.0+947+fabc681e.noarch
+ipa-client-4.9.8-2.module_el8.6.0+1054+cdb51b28.x86_64
+ipa-client-common-4.9.8-2.module_el8.6.0+1054+cdb51b28.noarch
+ipa-common-4.9.8-2.module_el8.6.0+1054+cdb51b28.noarch
+ipa-selinux-4.9.8-2.module_el8.6.0+1054+cdb51b28.noarch
@@ -275 +273 @@
-iputils-20180629-7.el8.x86_64
+iputils-20180629-8.el8.x86_64
@@ -303,6 +301,6 @@
-kernel-4.18.0-348.2.1.el8_5.x86_64
-kernel-core-4.18.0-348.2.1.el8_5.x86_64
-kernel-modules-4.18.0-348.2.1.el8_5.x86_64
-kernel-tools-4.18.0-348.2.1.el8_5.x86_64
-kernel-tools-libs-4.18.0-348.2.1.el8_5.x86_64
-kexec-tools-2.0.20-63.el8.x86_64
+kernel-4.18.0-358.el8.x86_64
+kernel-core-4.18.0-358.el8.x86_64
+kernel-modules-4.18.0-358.el8.x86_64
+kernel-tools-4.18.0-358.el8.x86_64
+kernel-tools-libs-4.18.0-358.el8.x86_64
+kexec-tools-2.0.20-67.el8.x86_64
@@ -312 +310 @@
-kmod-kvdo-6.2.5.72-81.el8.x86_64
+kmod-kvdo-6.2.6.3-82.el8.x86_64
@@ -359 +357 @@
-libcephfs2-16.2.6-1.el8s.x86_64
+libcephfs2-16.2.7-1.el8s.x86_64
@@ -381 +379 @@
-libgcc-8.5.0-6.el8.x86_64
+libgcc-8.5.0-7.el8.x86_64
@@ -393 +391 @@
-libgomp-8.5.0-6.el8.x86_64
+libgomp-8.5.0-7.el8.x86_64
@@ -401 +399 @@
-libibverbs-37.1-1.el8.x86_64
+libibverbs-37.2-1.el8.x86_64
@@ -405 +403 @@
-libipa_hbac-2.5.2-2.el8_5.1.x86_64
+libipa_hbac-2.6.1-2.el8.x86_64
@@ -418 +415,0 @@
-libmbim-1.26.0-2.el8.x86_64
@@ -434 +431 @@
-libosinfo-1.9.0-1.el8.x86_64
+libosinfo-1.9.0-2.el8.x86_64
@@ -441 +437,0 @@
-libpmemobj-1.6.1-1.el8.x86_64
@@ -447 +442,0 @@
-libqmi-1.30.2-1.el8.x86_64
@@ -449,3 +444,3 @@
-librados2-16.2.6-1.el8s.x86_64
-libradosstriper1-16.2.6-1.el8s.x86_64
-librbd1-16.2.6-1.el8s.x86_64
+librados2-16.2.7-1.el8s.x86_64
+libradosstriper1-16.2.7-1.el8s.x86_64
+librbd1-16.2.7-1.el8s.x86_64
@@ -453 +448 @@
-librdmacm-37.1-1.el8.x86_64
+librdmacm-37.2-1.el8.x86_64
@@ -463 +458 @@
-librgw2-16.2.6-1.el8s.x86_64
+librgw2-16.2.7-1.el8s.x86_64
@@ -477,7 +472,7 @@
-libsss_autofs-2.5.2-2.el8_5.1.x86_64
-libsss_certmap-2.5.2-2.el8_5.1.x86_64
-libsss_idmap-2.5.2-2.el8_5.1.x86_64
-libsss_nss_idmap-2.5.2-2.el8_5.1.x86_64
-libsss_simpleifp-2.5.2-2.el8_5.1.x86_64
-libsss_sudo-2.5.2-2.el8_5.1.x86_64
-libstdc++-8.5.0-6.el8.x86_64
+libsss_autofs-2.6.1-2.el8.x86_64
+libsss_certmap-2.6.1-2.el8.x86_64
+libsss_idmap-2.6.1-2.el8.x86_64
+libsss_nss_idmap-2.6.1-2.el8.x86_64
+libsss_simpleifp-2.6.1-2.el8.x86_64
+libsss_sudo-2.6.1-2.el8.x86_64
+libstdc++-8.5.0-7.el8.x86_64
@@ -536 +531 @@
-libwbclient-4.14.5-2.el8.x86_64
+libwbclient-4.15.3-0.el8.x86_64
@@ -570 +565 @@
-mdadm-4.2-rc2.el8.x86_64
+mdadm-4.2-rc3.el8.x86_64
@@ -598,4 +593,4 @@
-net-snmp-5.8-23.el8.x86_64
-net-snmp-agent-libs-5.8-23.el8.x86_64
-net-snmp-libs-5.8-23.el8.x86_64
-net-snmp-utils-5.8-23.el8.x86_64
+net-snmp-5.8-24.el8.x86_64
+net-snmp-agent-libs-5.8-24.el8.x86_64
+net-snmp-libs-5.8-24.el8.x86_64
+net-snmp-utils-5.8-24.el8.x86_64
@@ -609,2 +604,2 @@
-nmstate-1.2.0-0.1.alpha2.el8.x86_64
-nmstate-plugin-ovsdb-1.2.0-0.1.alpha2.el8.noarch
+nmstate-1.2.0-1.el8.x86_64
+nmstate-plugin-ovsdb-1.2.0-1.el8.noarch
@@ -640 +635 @@
-osinfo-db-20210903-1.el8.noarch
+osinfo-db-20211216-1.el8.noarch
@@ -643 +638 @@
-ovirt-ansible-collection-1.6.5-1.el8.noarch
+ovirt-ansible-collection-1.6.6-1.el8.noarch
@@ -646 +641 @@
-ovirt-hosted-engine-ha-2.4.9-1.el8.noarch
+ovirt-hosted-engine-ha-2.4.10-1.el8.noarch
@@ -651 +646 @@
-ovirt-node-ng-image-update-placeholder-4.4.9.3-1.el8.noarch
+ovirt-node-ng-image-update-placeholder-4.4.10-1.el8.noarch
@@ -659,2 +654,2 @@
-ovirt-release-host-node-4.4.9.3-1.el8.noarch
-ovirt-release44-4.4.9.3-1.el8.noarch
+ovirt-release-host-node-4.4.10-1.el8.noarch
+ovirt-release44-4.4.10-1.el8.noarch
@@ -667,3 +662,3 @@
-pacemaker-cluster-libs-2.1.2-1.el8.x86_64
-pacemaker-libs-2.1.2-1.el8.x86_64
-pacemaker-schemas-2.1.2-1.el8.noarch
+pacemaker-cluster-libs-2.1.2-2.el8.x86_64
+pacemaker-libs-2.1.2-2.el8.x86_64
+pacemaker-schemas-2.1.2-2.el8.noarch
@@ -677 +672 @@
-pcsc-lite-1.8.23-4.1.el8_4.x86_64
+pcsc-lite-1.9.5-1.el8.x86_64
@@ -679 +674 @@
-pcsc-lite-libs-1.8.23-4.1.el8_4.x86_64
+pcsc-lite-libs-1.9.5-1.el8.x86_64
@@ -747 +742 @@
-python3-blivet-3.4.0-7.el8.noarch
+python3-blivet-3.4.0-8.el8.noarch
@@ -750,3 +745,3 @@
-python3-ceph-argparse-16.2.6-1.el8s.x86_64
-python3-ceph-common-16.2.6-1.el8s.x86_64
-python3-cephfs-16.2.6-1.el8s.x86_64
+python3-ceph-argparse-16.2.7-1.el8s.x86_64
+python3-ceph-common-16.2.7-1.el8s.x86_64
+python3-cephfs-16.2.7-1.el8s.x86_64
@@ -755 +750,2 @@
-python3-cloud-what-1.28.21-3.el8.x86_64
+python3-click-6.7-8.el8.noarch
+python3-cloud-what-1.28.24-1.el8.x86_64
@@ -787,2 +783,2 @@
-python3-ipaclient-4.9.6-6.module_el8.5.0+947+fabc681e.noarch
-python3-ipalib-4.9.6-6.module_el8.5.0+947+fabc681e.noarch
+python3-ipaclient-4.9.8-2.module_el8.6.0+1054+cdb51b28.noarch
+python3-ipalib-4.9.8-2.module_el8.6.0+1054+cdb51b28.noarch
@@ -797,2 +793,2 @@
-python3-libipa_hbac-2.5.2-2.el8_5.1.x86_64
-python3-libnmstate-1.2.0-0.1.alpha2.el8.noarch
+python3-libipa_hbac-2.6.1-2.el8.x86_64
+python3-libnmstate-1.2.0-1.el8.noarch
@@ -805 +801 @@
-python3-linux-procfs-0.6.3-4.el8.noarch
+python3-linux-procfs-0.7.0-1.el8.noarch
@@ -837 +833 @@
-python3-perf-4.18.0-348.2.1.el8_5.x86_64
+python3-perf-4.18.0-358.el8.x86_64
@@ -860,2 +856,2 @@
-python3-rados-16.2.6-1.el8s.x86_64
-python3-rbd-16.2.6-1.el8s.x86_64
+python3-rados-16.2.7-1.el8s.x86_64
+python3-rbd-16.2.7-1.el8s.x86_64
@@ -865 +861 @@
-python3-rgw-16.2.6-1.el8s.x86_64
+python3-rgw-16.2.7-1.el8s.x86_64
@@ -867 +863 @@
-python3-rpm-4.14.3-19.el8.x86_64
+python3-rpm-4.14.3-21.el8.x86_64
@@ -875,3 +871,3 @@
-python3-sss-2.5.2-2.el8_5.1.x86_64
-python3-sss-murmur-2.5.2-2.el8_5.1.x86_64
-python3-sssdconfig-2.5.2-2.el8_5.1.noarch
+python3-sss-2.6.1-2.el8.x86_64
+python3-sss-murmur-2.6.1-2.el8.x86_64
+python3-sssdconfig-2.6.1-2.el8.noarch
@@ -879 +875 @@
-python3-subscription-manager-rhsm-1.28.21-3.el8.x86_64
+python3-subscription-manager-rhsm-1.28.24-1.el8.x86_64
@@ -881 +877 @@
-python3-syspurpose-1.28.21-3.el8.x86_64
+python3-syspurpose-1.28.24-1.el8.x86_64
@@ -914,4 +910,4 @@
-rpm-4.14.3-19.el8.x86_64
-rpm-build-libs-4.14.3-19.el8.x86_64
-rpm-libs-4.14.3-19.el8.x86_64
-rpm-plugin-selinux-4.14.3-19.el8.x86_64
+rpm-4.14.3-21.el8.x86_64
+rpm-build-libs-4.14.3-21.el8.x86_64
+rpm-libs-4.14.3-21.el8.x86_64
+rpm-plugin-selinux-4.14.3-21.el8.x86_64
@@ -925,3 +921,3 @@
-samba-client-libs-4.14.5-2.el8.x86_64
-samba-common-4.14.5-2.el8.noarch
-samba-common-libs-4.14.5-2.el8.x86_64
+samba-client-libs-4.15.3-0.el8.x86_64
+samba-common-4.15.3-0.el8.noarch
+samba-common-libs-4.15.3-0.el8.x86_64
@@ -931,2 +927,2 @@
-sbd-1.5.0-2.el8.x86_64
-scap-security-guide-0.1.57-5.el8.noarch
+sbd-1.5.1-1.el8.x86_64
+scap-security-guide-0.1.59-1.el8.noarch
@@ -937,2 +933,2 @@
-selinux-policy-3.14.3-85.el8.noarch
-selinux-policy-targeted-3.14.3-85.el8.noarch
+selinux-policy-3.14.3-86.el8.noarch
+selinux-policy-targeted-3.14.3-86.el8.noarch
@@ -950 +946 @@
-sos-4.2-7.el8.noarch
+sos-4.2-11.el8.noarch
@@ -956,9 +952,9 @@
-sssd-client-2.5.2-2.el8_5.1.x86_64
-sssd-common-2.5.2-2.el8_5.1.x86_64
-sssd-common-pac-2.5.2-2.el8_5.1.x86_64
-sssd-dbus-2.5.2-2.el8_5.1.x86_64
-sssd-ipa-2.5.2-2.el8_5.1.x86_64
-sssd-kcm-2.5.2-2.el8_5.1.x86_64
-sssd-krb5-common-2.5.2-2.el8_5.1.x86_64
-sssd-tools-2.5.2-2.el8_5.1.x86_64
-subscription-manager-rhsm-certificates-1.28.21-3.el8.x86_64
+sssd-client-2.6.1-2.el8.x86_64
+sssd-common-2.6.1-2.el8.x86_64
+sssd-common-pac-2.6.1-2.el8.x86_64
+sssd-dbus-2.6.1-2.el8.x86_64
+sssd-ipa-2.6.1-2.el8.x86_64
+sssd-kcm-2.6.1-2.el8.x86_64
+sssd-krb5-common-2.6.1-2.el8.x86_64
+sssd-tools-2.6.1-2.el8.x86_64
+subscription-manager-rhsm-certificates-1.28.24-1.el8.x86_64
@@ -976,5 +972,5 @@
-systemd-239-51.el8.x86_64
-systemd-container-239-51.el8.x86_64
-systemd-libs-239-51.el8.x86_64
-systemd-pam-239-51.el8.x86_64
-systemd-udev-239-51.el8.x86_64
+systemd-239-51.el8_5.2.x86_64
+systemd-container-239-51.el8_5.2.x86_64
+systemd-libs-239-51.el8_5.2.x86_64
+systemd-pam-239-51.el8_5.2.x86_64
+systemd-udev-239-51.el8_5.2.x86_64
@@ -982 +978 @@
-tcpdump-4.9.3-2.el8.x86_64
+tcpdump-4.9.3-3.el8.x86_64
@@ -993 +989 @@
-usbredir-0.8.0-1.el8.x86_64
+usbredir-0.12.0-1.el8.x86_64
@@ -997,11 +993,11 @@
-vdsm-4.40.90.4-1.el8.x86_64
-vdsm-api-4.40.90.4-1.el8.noarch
-vdsm-client-4.40.90.4-1.el8.noarch
-vdsm-common-4.40.90.4-1.el8.noarch
-vdsm-gluster-4.40.90.4-1.el8.x86_64
-vdsm-http-4.40.90.4-1.el8.noarch
-vdsm-jsonrpc-4.40.90.4-1.el8.noarch
-vdsm-network-4.40.90.4-1.el8.x86_64
-vdsm-python-4.40.90.4-1.el8.noarch
-vdsm-yajsonrpc-4.40.90.4-1.el8.noarch
-vim-minimal-8.0.1763-16.el8_5.2.x86_64
+vdsm-4.40.100.2-1.el8.x86_64
+vdsm-api-4.40.100.2-1.el8.noarch
+vdsm-client-4.40.100.2-1.el8.noarch
+vdsm-common-4.40.100.2-1.el8.noarch
+vdsm-gluster-4.40.100.2-1.el8.x86_64
+vdsm-http-4.40.100.2-1.el8.noarch
+vdsm-jsonrpc-4.40.100.2-1.el8.noarch
+vdsm-network-4.40.100.2-1.el8.x86_64
+vdsm-python-4.40.100.2-1.el8.noarch
+vdsm-yajsonrpc-4.40.100.2-1.el8.noarch
+vim-minimal-8.0.1763-16.el8_5.3.x86_64
@@ -1011 +1007 @@
-virt-what-1.18-12.el8.x86_64
+virt-what-1.18-13.el8.x86_64
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Additional resources:
- Read more about the oVirt 4.4.10 release highlights: https://www.ovirt.org/release/4.4.10/
- Get more oVirt project updates on Twitter: https://twitter.com/ovirt
- Check out the latest project news on the oVirt blog: https://blogs.ovirt.org/
[1] https://www.ovirt.org/release/4.4.10/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
2 years, 10 months
Re: support of AMD EPYC 3rd Genneration Milan
by Sandro Bonazzola
On Sat, Jan 8, 2022 at 18:47, samuel.xhu(a)horebdata.cn <samuel.xhu(a)horebdata.cn> wrote:
> Hello, oVirt experts,
>
> Does oVirt now support the use of AMD EPYC 3rd Generation Milan CPUs? And
> if yes, from which version?
>
>
AMD EPYC Milan CPU should be supported by Advanced Virtualization since
June 2021, included in oVirt Node 4.4.6.1 and newer.
Within oVirt I think it's recognized as "EPYC" without real distinction
between Milan, Rome or others.
+Arik Hadas <ahadas(a)redhat.com> ?
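A quick way to see what the host actually reports (a hedged sketch, assuming vdsm is running on the host) is:

# vdsm-client Host getCapabilities | grep -i cpuModel
# virsh -r domcapabilities | grep -i epyc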
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
2 years, 10 months