Wait for the engine to come up on the target vm
by Vladimir Belov
I'm trying to deploy oVirt with a self-hosted engine, but at the last step I get an engine startup error.
[ INFO ] TASK [Wait for the engine to come up on the target VM]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed": true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta": "0:00:00.181846", "end": "2022-03-28 15:41:28.853150", "rc": 0, "start": "2022-03-28 15:41:28.671304", "stderr": "", "stderr_lines": [], "stdout": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\": \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5537 (Mon Mar 28 15:41:20 2022)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=5537 (Mon Mar 28 15:41:20 2022)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\", \"hostname\": \"v2.test.ru\", \"host-id\": 1, \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\": \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\": false, \"maintenance\": false, \"crc32\": \"4d2eeaea\", \"local_conf_timestamp\": 5537, \"host-ts\": 5537}, \"global_maintenance\": false}", "stdout_lines": ["{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\": \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5537 (Mon Mar 28 15:41:20 2022)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=5537 (Mon Mar 28 15:41:20 2022)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\", \"hostname\": \"v2.test.ru\", \"host-id\": 1, \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\": \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\": false, \"maintenance\": false, \"crc32\": \"4d2eeaea\", \"local_conf_timestamp\": 5537, \"host-ts\": 5537}, \"global_maintenance\": false}"]}
After the installation completes, the engine status is as follows:
Engine status: {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}
After reading the vdsm logs, I found that the QEMU guest agent on the engine VM is not connecting for some reason.
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 5400, in qemuGuestAgentShutdown
self._dom.shutdownFlags(libvirt.VIR_DOMAIN_SHUTDOWN_GUEST_AGENT)
File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 98, in f
ret = attr(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper
ret = f(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper
return func(inst, *args, **kwargs)
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2517, in shutdownFlags
if ret == -1: raise libvirtError ('virDomainShutdownFlags() failed', dom=self)
libvirtError: Guest agent is not responding: QEMU guest agent is not connected
During the installation phase, qemu-guest-agent on the guest VM is running.
Setting a temporary password (hosted-engine --add-console-password --password) and connecting via VNC also failed.
Connecting with "hosted-engine --console" also fails:
The engine VM is running on this host
Connected to HostedEngine domain
Escaping character: ^]
error: internal error: character device <null> not found
The network settings are configured using static addressing, without DHCP.
My suspicion is that the engine VM is getting an IP address that does not match its entry in /etc/hosts, but I don't know how to confirm or fix that. Any help is welcome; I can provide whatever logs are needed. Thanks
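For reference, these are the kinds of checks that should confirm or rule that out from the host (the engine FQDN below is a placeholder, and /ovirt-engine/services/health is, as far as I understand, the page the liveliness check probes):
getent hosts engine.test.ru          # what the host resolves the engine FQDN to
ping -c1 engine.test.ru              # is that address actually reachable
curl -s http://engine.test.ru/ovirt-engine/services/health   # the page the liveliness check polls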
2 years, 8 months
Gluster storage and TRIM VDO
by Oleh Horbachov
Hello everyone. I have a Gluster distributed-replicated cluster deployed, used as storage for oVirt, with each brick backed by VDO on a raw disk. When I discard via 'fstrim -av', the storage hangs for a few seconds and the connection is lost. Does anyone know the best practices for using TRIM with VDO in the context of oVirt?
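For context, a rough alternative to running a single 'fstrim -av' would be to discard one brick mount at a time with a pause in between (the mount points below are just placeholders):
for mnt in /gluster/brick1 /gluster/brick2 /gluster/brick3; do
    fstrim -v "$mnt"    # trim a single brick filesystem
    sleep 60            # give VDO/Gluster time to settle before the next one
done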
ovirt - v4.4.10
gluster - v8.6
2 years, 8 months
Mac addresses pool issues
by Nicolas MAIRE
Hi,
We're encountering some issues on one of our production clusters running oVirt 4.2. We had an incident with the engine's database a few weeks back that we were able to recover from; however, since then we've been having a bunch of weird issues, mostly around MAC addresses.
It started with the engine being unable to find a free MAC when creating a VM, even though there are significantly fewer virtual interfaces (around 250) than the total number of MACs in the default pool (default configuration, so 65536 addresses). It then escalated into the engine creating duplicate MACs (despite the pool not allowing them), and now we can't even modify the pool or remove VMs (deleting the attached vNICs fails). So we're stuck with a cluster whose running VMs are fine as long as we don't touch them, but on which we can't create new VMs or modify the existing ones.
In the engine's log we can see an "Unable to initialize MAC pool due to existing duplicates (Failed with error MAC_POOL_INITIALIZATION_FAILED and code 5010)" error from when we tried to reconfigure the pool this morning (full error stack here: https://pastebin.com/6bKMfbLn). Now, whenever we try to delete a VM or reconfigure the pool, we get a 'Pool for id="58ca604b-017d-0374-0220-00000000014e" does not exist' error (full error stack here: https://pastebin.com/Huy91iig). However, if we check the engine's mac_pools table, we can see that it is there:
engine=# select * from mac_pools;
id | name | description | allow_duplicate_mac_addresses | default_pool
--------------------------------------+---------+------------------+-------------------------------+--------------
58ca604b-017d-0374-0220-00000000014e | Default | Default MAC pool | f | t
(1 row)
engine=# select * from mac_pool_ranges;
mac_pool_id | from_mac | to_mac
--------------------------------------+-------------------+-------------------
58ca604b-017d-0374-0220-00000000014e | 56:6f:1a:1a:00:00 | 56:6f:1a:1a:ff:ff
(1 row)
I found this bugzilla that seems to somehow apply: https://bugzilla.redhat.com/show_bug.cgi?id=1554180. However, I don't really know how to "reinitialize engine", especially considering that the MAC pool was not configured to allow duplicate MACs to begin with, and I have no idea what the impact of that reinitialization would be on the current VMs.
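For completeness, a query along these lines (assuming the usual engine schema, where vNIC MAC addresses live in the vm_interface table) should list which addresses ended up duplicated:
engine=# select mac_addr, count(*) from vm_interface group by mac_addr having count(*) > 1;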
I'm quite new to oVirt (only been using it for one year) so any help would be greatly appreciated.
2 years, 8 months
How to create User in web ui?
by jihwahn1018@naver.com
Hello,
According to the oVirt guide, users need to be created with ovirt-aaa-jdbc-tool, and only users created that way can then be added in the web UI.
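For reference, the CLI flow the guide describes looks roughly like this (run on the engine host against the default internal profile; the exact attributes and options may differ between versions):
ovirt-aaa-jdbc-tool user add newuser --attribute=firstName=New --attribute=lastName=User
ovirt-aaa-jdbc-tool user password-reset newuser --password-valid-to="2030-01-01 00:00:00Z"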
Is there any way to create a user directly in the web UI?
And if not, is there a reason why creating users in the web UI is not allowed?
Thank you.
2 years, 8 months
Duplicate nameserver in Host causing unassigned state when adding. possible bug?
by ravi k
Hello all,
We are running oVirt 4.3.10.4-1.0.22.el7, and yesterday I ran into an interesting issue, possibly a bug. While adding a host, the operation failed and the host went into the 'Unassigned' state.
I saw the below error in the engine log.
/var/log/ovirt-engine/engine.log
2022-04-07 15:17:07,739+04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CollectVdsNetworkDataAfterInstallationVDSCommand] (EE-ManagedThreadFactory-engine-Thread-24723) [4917a348] HostName = olvsrv005u
2022-04-07 15:17:07,739+04 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CollectVdsNetworkDataAfterInstallationVDSCommand] (EE-ManagedThreadFactory-engine-Thread-24723) [4917a348] Failed in 'CollectVdsNetworkDataAfterInstallationVDS' method, for vds: 'olvsrv005u'; host: '10.119.6.232': CallableStatementCallback; SQL [{call insertnameserver(?, ?, ?)}ERROR: duplicate key value violates unique constraint "name_server_pkey"
Detail: Key (dns_resolver_configuration_id, address)=(459b68e6-b684-4cf6-8834-755249a6bd3a, 10.119.10.212) already exists.
Where: SQL statement "INSERT INTO
name_server(
address,
position,
dns_resolver_configuration_id)
VALUES (
v_address,
v_position,
v_dns_resolver_configuration_id)"
PL/pgSQL function insertnameserver(uuid,character varying,smallint) line 3 at SQL statement; nested exception is org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "name_server_pkey"
Detail: Key (dns_resolver_configuration_id, address)=(459b68e6-b684-4cf6-8834-755249a6bd3a, 10.119.10.212) already exists.
Then I checked the resolv.conf on the host
[root@olvsrv005u ~]# cat /etc/resolv.conf
# Version: 1.00
search uat.abc.com
nameserver 10.119.10.212
nameserver 10.119.10.212
Granted, there is no point in having a duplicate nameserver, and it was not affecting the host's functionality. However, it was making the host addition fail, most likely because inserting the host's DNS configuration into the engine DB hits the unique constraint on the duplicate entry.
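For anyone hitting the same thing, a generic dedupe of the file would be something like the following (hypothetical one-liner; note that resolv.conf is often regenerated by NetworkManager, so the DNS settings should also be fixed at the source):
awk '!seen[$0]++' /etc/resolv.conf > /tmp/resolv.conf.dedup
# review /tmp/resolv.conf.dedup, then copy it back over /etc/resolv.conf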
To test this, I commented out the duplicate entry and tried again. The host was then added successfully.
2022-04-07 15:33:37,301+04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoAsyncVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-39) [] START, GetHardwareInfoAsyncVDSCommand(HostName = olvsrv005u, VdsIdA
ndVdsVDSCommandParametersBase:{hostId='459b68e6-b684-4cf6-8834-755249a6bd3a', vds='Host[olvsrv005u,459b68e6-b684-4cf6-8834-755249a6bd3a]'}), log id: 52e7ec52
2022-04-07 15:33:37,301+04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoAsyncVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-39) [] FINISH, GetHardwareInfoAsyncVDSCommand, return: , log id: 52e7ec52
2022-04-07 15:33:37,356+04 INFO [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-39) [3de72cb7] Running command: SetNonOperationalVdsCommand internal: true. Entities affected :
ID: 459b68e6-b684-4cf6-8834-755249a6bd3a Type: VDS
2022-04-07 15:33:37,360+04 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-39) [3de72cb7] START, SetVdsStatusVDSCommand(HostName = olvsrv005u, SetVdsStatusVDSCommandPa
rameters:{hostId='459b68e6-b684-4cf6-8834-755249a6bd3a', status='NonOperational', nonOperationalReason='NETWORK_UNREACHABLE', stopSpmFailureLogged='false', maintenanceReason='null'}), log id: 1bfc90a3
2022-04-07 15:33:37,363+04 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-39) [3de72cb7] FINISH, SetVdsStatusVDSCommand, return: , log id: 1bfc90a3
2022-04-07 15:33:37,404+04 ERROR [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-39) [3de72cb7] Host 'olvsrv005u' is set to Non-Operational, it is missing the following networks
Should I raise this as a bug? My opinion is that it should be one: if a duplicate nameserver doesn't break the host's functionality, adding the host should not fail because of it.
Regards,
Ravi
2 years, 8 months
10Gbps iSCSI Bonding issue on HPE Gen10 server
by michael.li@hactlsolutions.com
Hi support,
I have an issue configuring the 10Gbps iSCSI ports on an HPE Gen10 server with oVirt 4.4.10. The behavior is as follows:
1. Both 10Gbps ports are running at 10000Mb/s and are up (verified with ethtool).
2. I configure iSCSI active-backup bonding (bond0) in the oVirt Manager.
3. When I reboot the server, one of the ports is down (no carrier).
Temporary solution: I manually bring the port up with "nmcli conn up <port name>" to restore the bond. Does anyone know how to resolve this permanently? Let me know what information is needed to investigate.
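A possible stop-gap would be a NetworkManager dispatcher script that re-runs that nmcli command automatically after boot; a minimal sketch (the path, delay, and check are assumptions, the interface names are the ones from the config below):
#!/bin/bash
# /etc/NetworkManager/dispatcher.d/99-reup-bond-slave  (make it executable)
# NetworkManager calls dispatcher scripts with: $1 = interface, $2 = action
if [ "$1" = "bond0" ] && [ "$2" = "up" ]; then
    sleep 5
    # if the second slave came up without carrier, try activating it again
    ip link show ens4f1np1 | grep -q 'state UP' || nmcli conn up ens4f1np1
fi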
I enclose my configuration as below.
[root@ovrphv04 network-scripts]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: ens1f1np1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0
Slave Interface: ens1f1np1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: f4:03:43:e7:da:38
Slave queue ID: 0
Slave Interface: ens4f1np1
MII Status: down
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: f4:03:43:e7:d7:68
Slave queue ID: 0
[root@ovrphv04 network-scripts]# ip link |grep ens1f1np1
7: ens1f1np1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
[root@ovrphv04 network-scripts]# ip link |grep ens4f1np1
10: ens4f1np1: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 9000 qdisc mq master bond0 state DOWN mode DEFAULT group default qlen 1000
[root@ovrphv04 network-scripts]# cat ifcfg-ens1f1np1
TYPE=Ethernet
MTU=9000
SRIOV_TOTAL_VFS=0
NAME=ens1f1np1
UUID=6072cd16-a45a-433e-bc9e-817557706fb2
DEVICE=ens1f1np1
ONBOOT=yes
LLDP=no
ETHTOOL_OPTS="speed 10000 duplex full autoneg on"
MASTER_UUID=77e39d1b-cf14-406f-87f6-fe954fde40f0
MASTER=bond0
SLAVE=yes
[root@ovrphv04 network-scripts]# cat ifcfg-ens4f1np1
TYPE=Ethernet
MTU=9000
SRIOV_TOTAL_VFS=0
NAME=ens4f1np1
UUID=cf6b28e7-8fdb-472f-bbe9-0eecc95c5c3c
DEVICE=ens4f1np1
ONBOOT=yes
LLDP=no
ETHTOOL_OPTS="speed 10000 duplex full autoneg on"
MASTER_UUID=77e39d1b-cf14-406f-87f6-fe954fde40f0
MASTER=bond0
SLAVE=yes
[root@ovrphv04 network-scripts]# cat ifcfg-bond0
BONDING_OPTS="mode=active-backup miimon=100"
TYPE=Bond
BONDING_MASTER=yes
HWADDR=
MTU=9000
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=172.26.14.139
PREFIX=24
DEFROUTE=yes
DHCP_CLIENT_ID=mac
IPV4_FAILURE_FATAL=no
IPV6_DISABLED=yes
IPV6INIT=no
DHCPV6_DUID=ll
DHCPV6_IAID=mac
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME=bond0
UUID=77e39d1b-cf14-406f-87f6-fe954fde40f0
DEVICE=bond0
ONBOOT=yes
AUTOCONNECT_SLAVES=yes
LLDP=no
2 years, 8 months
Re: [Ext.] Re: Re: Storage Design Question - GlusterFS
by Strahil Nikolov
That output only shows settings that differ from the defaults. As I don't know the default value off-hand (you can see it with 'gluster volume set help'), check it explicitly with 'gluster volume get <VOLUME> cluster.lookup-optimize'.
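For example, against the 'data' volume quoted below, that would be (the second command just pulls the default out of the full help output):
gluster volume get data cluster.lookup-optimize
gluster volume set help | grep -A 3 lookup-optimize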
Best Regards,
Strahil Nikolov
On Wed, Apr 6, 2022 at 16:50, Mohamed Roushdy <mohamedroushdy(a)peopleintouch.com> wrote:
Well, I think it’s not configured. Here’s the config:
Volume Name: data
Type: Replicate
Volume ID: 06cac4b9-3d6a-410b-93e6-4f876bbfcdd1
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: server1:/gluster/data/brick1
Brick2: server2:/gluster/data/brick1
Brick3: server3:/gluster/data/brick1
Options Reconfigured:
features.shard-block-size: 512MB
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
From: Strahil Nikolov <hunter86_bg(a)yahoo.com>
Sent: Wednesday, April 6, 2022 1:42 PM
To: Mohamed Roushdy <mohamedroushdy(a)peopleintouch.com>; users(a)ovirt.org
Subject: [Ext.] Re: [ovirt-users] Re: Storage Design Question - GlusterFS
Can you check of the volume had 'cluster.lookup-optimize' enabled (on)?
Most probably it caused your problems .
For details check [1] [2]
Best Regards,
Strahil Nikolov
[1] https://access.redhat.com/solutions/5896761
[2] https://lists.ovirt.org/archives/list/users@ovirt.org/thread/BEYIM56H2Q3P...
On Wed, Apr 6, 2022 at 10:42, mohamedroushdy(a)peopleintouch.com wrote:
Thanks a lot for your answer. The thing is, we had a data loss disaster last month, and that's why we've decided to deploy a completely new environment for that purpose. Let me tell you what happened. We are using oVirt 4.1 with Gluster 3.8 in production, and the cluster contains five hosts, three of them storage servers. I added two more hosts and, manually via the CLI (as I couldn't do it via the GUI), created a new brick on one of the newly added nodes, then expanded the volume with "replica 4", since it refused to expand without changing the replica count. The action was successful for both the engine volume and the data volume; however, a few minutes later we found that the disk images of the existing VMs had shrunk and lost all of their data, and we had to recover from backup. We can't afford another downtime (another disaster, I mean), which is why I'm so meticulous about what to do next. So, is there a good guide around for the architecture and sizing?
Thank you,
2 years, 8 months
oVirt 4.5.0 Beta is now available for testing
by Sandro Bonazzola
oVirt 4.5.0 Beta is now available for testing
The oVirt project is excited to announce the availability of the oVirt 4.5.0
Beta release for testing, as of April 6th, 2022.
This release unleashes an altogether more powerful and flexible open source
virtualization solution that encompasses hundreds of individual changes and
a wide range of enhancements across the engine, storage, network, user
interface, and analytics on top of oVirt 4.4.
Important notes before you try it
Please note this is a Beta release.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production, and it is not feature
complete.
In particular, please note that upgrades from 4.4 to this beta, and future
upgrades from this beta to the final 4.5 release, are not supported.
Some of the features included in oVirt 4.5.0 Beta require content that will
be available in RHEL 8.6 and derivatives which are currently included in
CentOS Stream 8.
Known issues
After the BETA compose finished, a few issues have been found:
- Cluster upgrade is not working, fix in progress:
  https://github.com/oVirt/ovirt-ansible-collection/pull/474
- Hosted Engine setup fails on Keycloak configuration, fix in progress:
  https://github.com/oVirt/ovirt-ansible-collection/pull/475
An async beta refresh will be issued as soon as the fixes are ready.
Documentation
Be sure to follow instructions for oVirt 4.5!
- If you want to try oVirt as quickly as possible, follow the instructions on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt <https://ovirt.org/community/about.html>.
What’s new in oVirt 4.5.0 Beta Release?
This release is available now on x86_64 architecture for:
- CentOS Stream 8
- RHEL 8.6 Beta and derivatives
This release supports Hypervisor Hosts on x86_64:
- oVirt Node NG (based on CentOS Stream 8)
- CentOS Stream 8
- RHEL 8.6 Beta and derivatives
Builds are also available for ppc64le and aarch64.
Experimental builds for CentOS Stream 9 are also provided for Hypervisor Hosts.
Security fixes included in oVirt 4.5 compared to latest oVirt 4.4.10:
- CVE-2021-33502 <https://access.redhat.com/security/cve/CVE-2021-33502> [moderate] ovirt-web-ui: normalize-url: ReDoS for data URLs
- CVE-2022-0207 <https://access.redhat.com/security/cve/CVE-2022-0207> [low] vdsm: disclosure of sensitive values in log files
- CVE-2022-24302 <https://access.redhat.com/security/cve/CVE-2022-24302> [moderate] python-paramiko: Race condition in the write_private_key_file function
- CVE-2021-41182 <https://access.redhat.com/security/cve/CVE-2021-41182> [moderate] jquery-ui: XSS in the altField option of the datepicker widget [ovirt-engine]
- CVE-2021-41183 <https://access.redhat.com/security/cve/CVE-2021-41183> [moderate] jquery-ui: XSS in *Text options of the datepicker widget [ovirt-engine]
- CVE-2021-41184 <https://access.redhat.com/security/cve/CVE-2021-41184> [moderate] jquery-ui: XSS in the 'of' option of the .position() util [ovirt-engine]
- CVE-2021-33502 <https://access.redhat.com/security/cve/CVE-2021-33502> [moderate] ovirt-engine-ui-extensions: normalize-url: ReDoS for data URLs
- CVE-2021-23425 <https://access.redhat.com/security/cve/CVE-2021-23425> [moderate] ovirt-engine-ui-extensions: nodejs-trim-off-newlines: ReDoS via string processing
- CVE-2021-3807 <https://access.redhat.com/security/cve/CVE-2021-3807> [moderate] ovirt-engine-ui-extensions: nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes
oVirt Node includes updated packages with fixes for:
- CVE-2022-25235 <https://access.redhat.com/security/cve/CVE-2022-25235> [important] expat: Malformed 2- and 3-byte UTF-8 sequences can lead to arbitrary code execution
- CVE-2022-25315 <https://access.redhat.com/security/cve/CVE-2022-25315> [important] expat: Integer overflow in storeRawNames()
- CVE-2022-25236 <https://access.redhat.com/security/cve/CVE-2022-25236> [important] expat: Namespace-separator characters in "xmlns[:prefix]" attribute values can lead to arbitrary code execution
- CVE-2021-4083 <https://access.redhat.com/security/cve/CVE-2021-4083> [important] kernel: fget: check that the fd still exists after getting a ref to it
- CVE-2022-0847 <https://access.redhat.com/security/cve/CVE-2022-0847> [important] kernel: improper initialization of the "flags" member of the new pipe_buffer
- CVE-2022-0778 <https://access.redhat.com/security/cve/CVE-2022-0778> [important] openssl: Infinite loop in BN_mod_sqrt() reachable when parsing certificates
Some of the RFEs with high user impact are listed below:
- 1782077 [RFE] More Flexible RHV CPU Allocation Policy with HyperThreading
- 1922977 [RFE] VM shared disks are not part of the OVF_STORE
- 977778 [RFE] - Mechanism for converting disks for non-running VMS
- 1926625 [RFE] How to enable HTTP Strict Transport Security (HSTS) on Apache HTTPD for Red Hat Virtualization Manager
- 2029830 [RFE] Hosted engine should accept OpenSCAP profile name instead of bool
- 1933555 [RFE] Release python-ovirt-engine-sdk4 package on RHEL 9
- 1782056 [RFE] Integration of built-in ipsec feature in RHV/RHHI-V with OVN
- 1849169 [RFE] add virtualCPUs/physicalCPUs ratio property to evenly_distributed policy
- 1624015 [RFE] Expose Console Options and Console invocation via API
- 1998255 [RFE] [UI] Add search box for vNIC Profiles in RHVM WebUI on the main vNIC profiles tab
- 1987121 [RFE] Support enabling nVidia Unified Memory on mdev vGPU
- 2058177 [RFE] Include the package nvme-cli on virtualization hosts
- 1927985 [RFE] Speed up export-to-OVA on NFS by aligning loopback device offset
- 2021217 [RFE] Windows 2022 support
- 1975720 [RFE] Huge VMs require an additional migration parameter
- 1979277 [RFE] Migration - Apply automatic CPU and NUMA pinning based on the migrated host
- 1986775 [RFE] introduce support for CentOS Stream 9 on oVirt releases
- 1990462 [RFE] Add user name and password to ELK integration
- 1838089 [RFE] Please allow placing domain in maintenance mode with suspended VM
- 1913389 [CBT] [RFE] Provide VDSM with more information on scratch disks
Some of the Bugs with high user impact are listed below:
- 1986732 ovirt-ha services cannot set the LocalMaintenance mode in the storage metadata and are in a restart loop
- 2035051 removing nfs-utils cause ovirt-engine removal due to cinderlib dep tree
- 2054745 Setting SD to maintenance fails and turns the SD to inactive mode as a result
- 1878724 vdsm-tool configure is failing with error "dependency job for libvirtd.service failed"
- 2024698 build failure in copr
- 2057958 oVirt Node 4.5 el9 iso doesn't boot anymore
- 2043146 Expired /etc/pki/vdsm/libvirt-vnc/server-cert.pem certificate is skipped during Enroll Certificate
- 2040402 unable to use --log-size=0 option
- 1932149 [blocked] Create hosted_storage with the correct storage_format based on the Data-Center level of the backup
- 2066285 Copying from an Image domain to a Managed Block Storage domain fails
- 1986485 otopi uses deprecated API platform.linux_distribution which has been removed in Python 3.7 and later.
- 1986728 ovirt-log-collector uses deprecated API platform.linux_distribution which has been removed in Python 3.7 and later.
- 2066628 [UI] UI exception when updating or enabling number of VFs via the webadmin
- 1986731 ovirt-engine uses deprecated API platform.linux_distribution which has been removed in Python 3.7 and later.
- 2024161 Penalizing score by 1000 due to cpu load is not canceled after load decreasing to 0
- 2019869 Local disk is not bootable when the VM is imported from OVA
- 1857815 Copying HE-VM's disk to iSCSI storage volume, during deployment of 4.4 HE takes too long.
- 2065152 Implicit CPU pinning for NUMA VMs destroyed because of invalid CPU policy
- 2069658 Unable to deploy HE, couldn't resolve module/action 'firewalld'.
- 2070184 Hybrid Backup - Specifying disks for backup doesn't work, backup is created for all disks
oVirt Node has been updated, including:
- oVirt 4.5.0 beta: https://www.ovirt.org/release/4.5.0/
- GlusterFS 10.1: https://docs.gluster.org/en/main/release-notes/10.1/
- RDO OpenStack Yoga pre-release: https://releases.openstack.org/yoga/index.html
- OVS 2.15
- Ansible Core 2.12.3: https://github.com/ansible/ansible/blob/stable-2.12/changelogs/CHANGELOG-...
- CentOS Stream 8 latest updates
- Full list of changes compared to oVirt Node 4.5.0 Alpha:
ovirt-node-ng-image-4.5.0_alpha
ovirt-node-ng-image-4.5.0_beta
NetworkManager 1.36.0-2.el8
1.37.2-1.el8
NetworkManager-config-server 1.36.0-2.el8
1.37.2-1.el8
NetworkManager-libnm 1.36.0-2.el8
1.37.2-1.el8
NetworkManager-ovs 1.36.0-2.el8
1.37.2-1.el8
NetworkManager-team 1.36.0-2.el8
1.37.2-1.el8
NetworkManager-tui 1.36.0-2.el8
1.37.2-1.el8
ansible 2.9.27-2.el8
ansible-collection-ansible-netcommon
2.2.0-3.2.el8
ansible-collection-ansible-posix
1.3.0-1.2.el8
ansible-collection-ansible-utils
2.3.0-2.2.el8
ansible-core
2.12.3-1.el8
audispd-plugins 3.0.7-2.el8
3.0.7-3.el8
audit 3.0.7-2.el8
3.0.7-3.el8
audit-libs 3.0.7-2.el8
3.0.7-3.el8
centos-gpg-keys 8-4.el8
8-6.el8
centos-release-gluster10
1.0-1.el8s
centos-release-ovirt45 8.6-1.el8
8.6-3.el8s
centos-stream-repos 8-4.el8
8-6.el8
cockpit-ovirt-dashboard 0.15.1-1.el8
0.16.0-1.el8
dmidecode 3.3-1.el8
3.3-4.el8
dnf 4.7.0-7.el8
4.7.0-8.el8
dnf-data 4.7.0-7.el8
4.7.0-8.el8
dnf-plugins-core 4.0.21-10.el8
4.0.21-11.el8
emacs-filesystem
26.1-7.el8
expat 2.2.5-5.el8
2.2.5-8.el8
gdisk 1.0.3-8.el8
1.0.3-9.el8
git
2.31.1-2.el8
git-core-doc
2.31.1-2.el8
glibc 2.28-189.el8
2.28-196.el8
glibc-common 2.28-189.el8
2.28-196.el8
glibc-langpack-en 2.28-189.el8
2.28-196.el8
gluster-ansible-cluster 1.0.1-2.el8
1.0-4.el8
gluster-ansible-features 1.0.5-6.el8
1.0.5-12.el8
gluster-ansible-infra 1.0.4-15.el8
1.0.4-20.el8
gluster-ansible-maintenance 1.0.1-10.el8
1.0.1-12.el8
gluster-ansible-repositories 1.0.1-3.el8
1.0.1-5.el8
gluster-ansible-roles 1.0.5-21.el8
1.0.5-26.el8
ipa-client 4.9.8-2.module_el8.6.0+1054+cdb51b28
4.9.8-6.module_el8.6.0+1104+ba556574
ipa-client-common 4.9.8-2.module_el8.6.0+1054+cdb51b28
4.9.8-6.module_el8.6.0+1104+ba556574
ipa-common 4.9.8-2.module_el8.6.0+1054+cdb51b28
4.9.8-6.module_el8.6.0+1104+ba556574
ipa-selinux 4.9.8-2.module_el8.6.0+1054+cdb51b28
4.9.8-6.module_el8.6.0+1104+ba556574
iproute 5.15.0-3.el8
5.15.0-4.el8
iproute-tc 5.15.0-3.el8
5.15.0-4.el8
iputils 20180629-9.el8
20180629-10.el8
kernel 4.18.0-365.el8
4.18.0-373.el8
kernel-core 4.18.0-365.el8
4.18.0-373.el8
kernel-modules 4.18.0-365.el8
4.18.0-373.el8
kernel-tools 4.18.0-365.el8
4.18.0-373.el8
kernel-tools-libs 4.18.0-365.el8
4.18.0-373.el8
libbabeltrace 1.5.4-3.el8
1.5.4-4.el8
libblkid 2.32.1-34.el8
2.32.1-35.el8
libdnf 0.63.0-7.el8
0.63.0-8.el8
libfdisk 2.32.1-34.el8
2.32.1-35.el8
libipa_hbac 2.6.1-2.el8
2.6.2-3.el8
libmount 2.32.1-34.el8
2.32.1-35.el8
libnfsidmap 2.3.3-50.el8
2.3.3-51.el8
libsmartcols 2.32.1-34.el8
2.32.1-35.el8
libsss_autofs 2.6.1-2.el8
2.6.2-3.el8
libsss_certmap 2.6.1-2.el8
2.6.2-3.el8
libsss_idmap 2.6.1-2.el8
2.6.2-3.el8
libsss_nss_idmap 2.6.1-2.el8
2.6.2-3.el8
libsss_simpleifp 2.6.1-2.el8
2.6.2-3.el8
libsss_sudo 2.6.1-2.el8
2.6.2-3.el8
libuuid 2.32.1-34.el8
2.32.1-35.el8
libwbclient 4.15.5-4.el8
4.15.5-5.el8
nfs-utils 2.3.3-50.el8
2.3.3-51.el8
nmstate 1.2.1-1.el8
1.3.0-0.alpha.20220310.el8
nmstate-plugin-ovsdb 1.2.1-1.el8
1.3.0-0.alpha.20220310.el8
openssl 1.1.1k-5.el8_5
1.1.1k-6.el8
openssl-libs 1.1.1k-5.el8_5
1.1.1k-6.el8
ovirt-ansible-collection 2.0.0-0.6.BETA.el8
2.0.1-1.el8
ovirt-host 4.5.0-1.el8
4.5.0-2.el8
ovirt-host-dependencies 4.5.0-1.el8
4.5.0-2.el8
ovirt-hosted-engine-setup 2.6.1-1.el8
2.6.3-1.el8
ovirt-imageio-client 2.4.1-1.el8
2.4.3-1.el8
ovirt-imageio-common 2.4.1-1.el8
2.4.3-1.el8
ovirt-imageio-daemon 2.4.1-1.el8
2.4.3-1.el8
ovirt-node-ng-image-update-placeholder 4.5.0-2.el8
4.5.0-3.el8
ovirt-node-ng-nodectl 4.4.1-1.el8
4.4.2-1.el8
ovirt-release-host-node 4.5.0-2.el8
4.5.0-3.el8
perl-Error
0.17025-2.el8
perl-Git
2.31.1-2.el8
perl-TermReadKey
2.37-7.el8
platform-python 3.6.8-45.el8
3.6.8-46.el8
procps-ng 3.3.15-6.el8
3.3.15-7.el8
python3-audit 3.0.7-2.el8
3.0.7-3.el8
python3-click 6.7-8.el8
python3-daemon 2.2.4-3.el8
2.2.4-3.2.el8
python3-dnf 4.7.0-7.el8
4.7.0-8.el8
python3-dnf-plugin-versionlock 4.0.21-10.el8
4.0.21-11.el8
python3-dnf-plugins-core 4.0.21-10.el8
4.0.21-11.el8
python3-docutils 0.14-12.module_el8.5.0+761+faacb0fb
0.14-12.2.el8
python3-hawkey 0.63.0-7.el8
0.63.0-8.el8
python3-ipaclient 4.9.8-2.module_el8.6.0+1054+cdb51b28
4.9.8-6.module_el8.6.0+1104+ba556574
python3-ipalib 4.9.8-2.module_el8.6.0+1054+cdb51b28
4.9.8-6.module_el8.6.0+1104+ba556574
python3-jmespath 0.9.0-11.el8
0.9.0-11.1.el8
python3-libdnf 0.63.0-7.el8
0.63.0-8.el8
python3-libipa_hbac 2.6.1-2.el8
2.6.2-3.el8
python3-libnmstate 1.2.1-1.el8
1.3.0-0.alpha.20220310.el8
python3-libs 3.6.8-45.el8
3.6.8-46.el8
python3-lockfile 0.12.2-1.el8
0.12.2-1.2.el8
python3-netaddr 0.7.19-8.el8
0.7.19-8.1.1.el8
python3-ovirt-engine-sdk4 4.5.0-1.el8
4.5.1-1.el8
python3-ovirt-node-ng-nodectl 4.4.1-1.el8
4.4.2-1.el8
python3-pbr 5.5.1-1.el8
5.5.1-7.2.el8
python3-perf 4.18.0-365.el8
4.18.0-373.el8
python3-pexpect 4.7.0-4.el8
4.7.0-4.3.el8
python3-ptyprocess 0.5.2-4.el8
0.5.2-4.2.el8
python3-pycurl 7.43.0.2-4.el8
7.43.0.2-4.1.el8
python3-sss 2.6.1-2.el8
2.6.2-3.el8
python3-sss-murmur 2.6.1-2.el8
2.6.2-3.el8
python3-sssdconfig 2.6.1-2.el8
2.6.2-3.el8
python3-syspurpose 1.28.25-1.el8
1.28.28-1.el8
python3-tenacity 6.2.0-1.el8
6.3.1-1.el8
python38
3.8.12-1.module_el8.6.0+929+89303463
python38-asn1crypto
1.2.0-3.module_el8.5.0+742+dbad1979
python38-babel
2.7.0-11.module_el8.6.0+929+89303463
python38-cffi
1.13.2-3.module_el8.5.0+742+dbad1979
python38-cryptography
2.8-3.module_el8.5.0+742+dbad1979
python38-idna
2.8-6.module_el8.5.0+742+dbad1979
python38-jinja2
2.10.3-5.module_el8.6.0+929+89303463
python38-jmespath
0.9.0-11.1.el8
python38-libs
3.8.12-1.module_el8.6.0+929+89303463
python38-markupsafe
1.1.1-6.module_el8.5.0+742+dbad1979
python38-netaddr
0.7.19-8.1.1.el8
python38-ovirt-engine-sdk4
4.5.1-1.el8
python38-ovirt-imageio-client
2.4.3-1.el8
python38-ovirt-imageio-common
2.4.3-1.el8
python38-passlib
1.7.0-5.1.el8
python38-pip-wheel
19.3.1-5.module_el8.6.0+960+f11a9b17
python38-ply
3.11-10.module_el8.5.0+742+dbad1979
python38-pycparser
2.19-3.module_el8.5.0+742+dbad1979
python38-pycurl
7.43.0.2-4.1.el8
python38-pytz
2019.3-3.module_el8.5.0+742+dbad1979
python38-pyyaml
5.4.1-1.module_el8.6.0+929+89303463
python38-resolvelib
0.5.4-5.el8
python38-setuptools
41.6.0-5.module_el8.6.0+929+89303463
python38-setuptools-wheel
41.6.0-5.module_el8.6.0+929+89303463
python38-six
1.12.0-10.module_el8.5.0+742+dbad1979
rsyslog 8.2102.0-7.el8
8.2102.0-8.el8
rsyslog-elasticsearch 8.2102.0-7.el8
8.2102.0-8.el8
rsyslog-mmjsonparse 8.2102.0-7.el8
8.2102.0-8.el8
rsyslog-mmnormalize 8.2102.0-7.el8
8.2102.0-8.el8
rsyslog-openssl 8.2102.0-7.el8
8.2102.0-8.el8
samba-client-libs 4.15.5-4.el8
4.15.5-5.el8
samba-common 4.15.5-4.el8
4.15.5-5.el8
samba-common-libs 4.15.5-4.el8
4.15.5-5.el8
selinux-policy 3.14.3-93.el8
3.14.3-95.el8
selinux-policy-targeted 3.14.3-93.el8
3.14.3-95.el8
sssd-client 2.6.1-2.el8
2.6.2-3.el8
sssd-common 2.6.1-2.el8
2.6.2-3.el8
sssd-common-pac 2.6.1-2.el8
2.6.2-3.el8
sssd-dbus 2.6.1-2.el8
2.6.2-3.el8
sssd-ipa 2.6.1-2.el8
2.6.2-3.el8
sssd-kcm 2.6.1-2.el8
2.6.2-3.el8
sssd-krb5-common 2.6.1-2.el8
2.6.2-3.el8
sssd-tools 2.6.1-2.el8
2.6.2-3.el8
tzdata 2021e-1.el8
2022a-1.el8
util-linux 2.32.1-34.el8
2.32.1-35.el8
vdsm 4.50.0.10-1.el8
4.50.0.11-1.el8
vdsm-api 4.50.0.10-1.el8
4.50.0.11-1.el8
vdsm-client 4.50.0.10-1.el8
4.50.0.11-1.el8
vdsm-common 4.50.0.10-1.el8
4.50.0.11-1.el8
vdsm-gluster 4.50.0.10-1.el8
4.50.0.11-1.el8
vdsm-http 4.50.0.10-1.el8
4.50.0.11-1.el8
vdsm-jsonrpc 4.50.0.10-1.el8
4.50.0.11-1.el8
vdsm-network 4.50.0.10-1.el8
4.50.0.11-1.el8
vdsm-python 4.50.0.10-1.el8
4.50.0.11-1.el8
vdsm-yajsonrpc 4.50.0.10-1.el8
4.50.0.11-1.el8
virt-install 3.2.0-3.el8
3.2.0-4.el8
virt-manager-common 3.2.0-3.el8
3.2.0-4.el8
yum 4.7.0-7.el8
4.7.0-8.el8
oVirt Appliance has been updated, including:
- oVirt 4.5.0 beta: https://www.ovirt.org/release/4.5.0/
- GlusterFS 10.1: https://docs.gluster.org/en/main/release-notes/10.1/
- RDO OpenStack Yoga pre-release: https://releases.openstack.org/yoga/index.html
- OVS 2.15
- Ansible Core 2.12.3: https://github.com/ansible/ansible/blob/stable-2.12/changelogs/CHANGELOG-...
- CentOS Stream 8 latest updates
- Full list of changes compared to oVirt Appliance 4.5.0 Alpha:
ovirt-engine-appliance-manifest-rpm-el8.4.5.0_alpha
ovirt-engine-appliance-manifest-rpm-el8.4.5.0_beta
NetworkManager 1.36.0-2.el8
1.37.2-1.el8
NetworkManager-libnm 1.36.0-2.el8
1.37.2-1.el8
NetworkManager-team 1.36.0-2.el8
1.37.2-1.el8
NetworkManager-tui 1.36.0-2.el8
1.37.2-1.el8
ansible
2.9.27-2.el8
ansible-collection-ansible-netcommon
2.2.0-3.2.el8
ansible-collection-ansible-posix
1.3.0-1.2.el8
ansible-collection-ansible-utils
2.3.0-2.2.el8
ansible-core
2.12.3-1.el8
ansible-runner
2.1.3-1.el8
ansible-runner-service
1.0.7-1.el8
audit 3.0.7-2.el8
3.0.7-3.el8
audit-libs 3.0.7-2.el8
3.0.7-3.el8
centos-gpg-keys 8-4.el8
8-6.el8
centos-release-gluster10
1.0-1.el8s
centos-release-ovirt45 8.6-1.el8s
8.6-3.el8s
centos-stream-repos 8-4.el8
8-6.el8
dmidecode 3.3-1.el8
3.3-4.el8
dnf 4.7.0-7.el8
4.7.0-8.el8
dnf-data 4.7.0-7.el8
4.7.0-8.el8
dnf-plugins-core 4.0.21-10.el8
4.0.21-11.el8
expat 2.2.5-5.el8
2.2.5-8.el8
gdisk 1.0.3-8.el8
1.0.3-9.el8
git
2.31.1-2.el8
git-core-doc
2.31.1-2.el8
glibc 2.28-189.el8
2.28-196.el8
glibc-common 2.28-189.el8
2.28-196.el8
glibc-gconv-extra 2.28-189.el8
2.28-196.el8
glibc-langpack-en 2.28-189.el8
2.28-196.el8
iproute 5.15.0-3.el8
5.15.0-4.el8
iputils 20180629-9.el8
20180629-10.el8
java-1.8.0-openjdk-headless 1.8.0.322.b06-2.el8_5
1.8.0.322.b06-11.el8
java-11-openjdk-headless 11.0.14.0.9-2.el8_5
11.0.14.1.1-6.el8
kernel 4.18.0-365.el8
4.18.0-373.el8
kernel-core 4.18.0-365.el8
4.18.0-373.el8
kernel-modules 4.18.0-365.el8
4.18.0-373.el8
kernel-tools 4.18.0-365.el8
4.18.0-373.el8
kernel-tools-libs 4.18.0-365.el8
4.18.0-373.el8
libbabeltrace 1.5.4-3.el8
1.5.4-4.el8
libblkid 2.32.1-34.el8
2.32.1-35.el8
libdnf 0.63.0-7.el8
0.63.0-8.el8
libfdisk 2.32.1-34.el8
2.32.1-35.el8
libmount 2.32.1-34.el8
2.32.1-35.el8
libnfsidmap 2.3.3-50.el8
2.3.3-51.el8
libqb
1.0.3-12.el8
libsmartcols 2.32.1-34.el8
2.32.1-35.el8
libsss_certmap 2.6.1-2.el8
2.6.2-3.el8
libsss_idmap 2.6.1-2.el8
2.6.2-3.el8
libsss_nss_idmap 2.6.1-2.el8
2.6.2-3.el8
libuuid 2.32.1-34.el8
2.32.1-35.el8
nfs-utils 2.3.3-50.el8
2.3.3-51.el8
nodejs 10.23.1-1.module_el8.4.0+645+9ce14ba2
14.17.5-1.module_el8.6.0+939+4802ccb9
nodejs-docs
14.17.5-1.module_el8.6.0+939+4802ccb9
nodejs-full-i18n 10.23.1-1.module_el8.4.0+645+9ce14ba2
14.17.5-1.module_el8.6.0+939+4802ccb9
npm 6.14.10-1.10.23.1.1.module_el8.4.0+645+9ce14ba2
6.14.14-1.14.17.5.1.module_el8.6.0+939+4802ccb9
openssl 1.1.1k-5.el8_5
1.1.1k-6.el8
openssl-libs 1.1.1k-5.el8_5
1.1.1k-6.el8
openstack-java-cinder-client 3.2.9-1.el8
3.2.9-9.el8
openstack-java-cinder-model 3.2.9-1.el8
3.2.9-9.el8
openstack-java-client 3.2.9-1.el8
3.2.9-9.el8
openstack-java-glance-client 3.2.9-1.el8
3.2.9-9.el8
openstack-java-glance-model 3.2.9-1.el8
3.2.9-9.el8
openstack-java-keystone-client 3.2.9-1.el8
3.2.9-9.el8
openstack-java-keystone-model 3.2.9-1.el8
3.2.9-9.el8
openstack-java-quantum-client 3.2.9-1.el8
3.2.9-9.el8
openstack-java-quantum-model 3.2.9-1.el8
3.2.9-9.el8
openstack-java-resteasy-connector 3.2.9-1.el8
3.2.9-9.el8
ovirt-ansible-collection 2.0.0-0.6.BETA.el8
2.0.1-1.el8
ovirt-engine 4.5.0-1.el8
4.5.0.1-1.el8
ovirt-engine-backend 4.5.0-1.el8
4.5.0.1-1.el8
ovirt-engine-dbscripts 4.5.0-1.el8
4.5.0.1-1.el8
ovirt-engine-dwh 4.5.0-1.el8
4.5.2-1.el8
ovirt-engine-dwh-grafana-integration-setup 4.5.0-1.el8
4.5.2-1.el8
ovirt-engine-dwh-setup 4.5.0-1.el8
4.5.2-1.el8
ovirt-engine-keycloak-setup
15.0.2-1.el8
ovirt-engine-metrics 1.4.4-1.el8
1.6.0-1.el8
ovirt-engine-restapi 4.5.0-1.el8
4.5.0.1-1.el8
ovirt-engine-setup 4.5.0-1.el8
4.5.0.1-1.el8
ovirt-engine-setup-base 4.5.0-1.el8
4.5.0.1-1.el8
ovirt-engine-setup-plugin-cinderlib 4.5.0-1.el8
4.5.0.1-1.el8
ovirt-engine-setup-plugin-imageio 4.5.0-1.el8
4.5.0.1-1.el8
ovirt-engine-setup-plugin-ovirt-engine 4.5.0-1.el8
4.5.0.1-1.el8
ovirt-engine-setup-plugin-ovirt-engine-common 4.5.0-1.el8
4.5.0.1-1.el8
ovirt-engine-setup-plugin-vmconsole-proxy-helper 4.5.0-1.el8
4.5.0.1-1.el8
ovirt-engine-setup-plugin-websocket-proxy 4.5.0-1.el8
4.5.0.1-1.el8
ovirt-engine-tools 4.5.0-1.el8
4.5.0.1-1.el8
ovirt-engine-tools-backup 4.5.0-1.el8
4.5.0.1-1.el8
ovirt-engine-ui-extensions 1.3.1-1.el8
1.3.2-1.el8
ovirt-engine-vmconsole-proxy-helper 4.5.0-1.el8
4.5.0.1-1.el8
ovirt-engine-webadmin-portal 4.5.0-1.el8
4.5.0.1-1.el8
ovirt-engine-websocket-proxy 4.5.0-1.el8
4.5.0.1-1.el8
ovirt-imageio-client 2.4.1-1.el8
2.4.3-1.el8
ovirt-imageio-common 2.4.1-1.el8
2.4.3-1.el8
ovirt-imageio-daemon 2.4.1-1.el8
2.4.3-1.el8
ovirt-web-ui 1.8.0-1.el8
1.8.1-1.el8
perl-Error
0.17025-2.el8
perl-Git
2.31.1-2.el8
perl-TermReadKey
2.37-7.el8
platform-python 3.6.8-45.el8
3.6.8-46.el8
platform-python-devel 3.6.8-45.el8
3.6.8-46.el8
procps-ng 3.3.15-6.el8
3.3.15-7.el8
protobuf
3.11.2-2.el8
python3-aniso8601
8.1.0-1.el8
python3-ansible-runner
1.4.6-3.el8
python3-audit 3.0.7-2.el8
3.0.7-3.el8
python3-click
6.7-8.el8
python3-daemon 2.2.4-3.el8
2.2.4-3.2.el8
python3-dataclasses
0.8-1.el8
python3-dnf 4.7.0-7.el8
4.7.0-8.el8
python3-dnf-plugin-versionlock 4.0.21-10.el8
4.0.21-11.el8
python3-dnf-plugins-core 4.0.21-10.el8
4.0.21-11.el8
python3-docutils
0.14-12.module_el8.5.0+761+faacb0fb
0.14-12.2.el8
python3-flask
1.1.2-4.el8
python3-flask-restful
0.3.9-2.el8
python3-hawkey 0.63.0-7.el8
0.63.0-8.el8
python3-itsdangerous
0.24-14.el8
python3-jmespath 0.9.0-11.el8
0.9.0-11.1.el8
python3-libdnf 0.63.0-7.el8
0.63.0-8.el8
python3-libs 3.6.8-45.el8
3.6.8-46.el8
python3-lockfile 0.12.2-1.el8
0.12.2-1.2.el8
python3-mod_wsgi
4.6.4-4.el8
python3-netaddr 0.7.19-8.el8
0.7.19-8.1.1.el8
python3-notario
0.0.16-4.el8
python3-ntlm-auth
1.5.0-1.el8
python3-ovirt-engine-lib 4.5.0-1.el8
4.5.0.1-1.el8
python3-ovirt-engine-sdk4 4.5.0-1.el8
4.5.1-1.el8
python3-paramiko 2.7.2-2.el8
2.7.2-3.el8
python3-pbr 5.5.1-1.el8
5.5.1-7.2.el8
python3-perf 4.18.0-365.el8
4.18.0-373.el8
python3-pexpect
4.7.0-4.el8
python3-psycopg2 2.7.5-7.el8
2.8.6-3.el8
python3-ptyprocess
0.5.2-4.el8
python3-pycurl 7.43.0.2-4.el8
7.43.0.2-4.1.el8
python3-requests_ntlm
1.1.0-9.el8
python3-sqlalchemy 1.4.18-1.1.el8
1.4.31-1.el8
python3-syspurpose 1.28.25-1.el8
1.28.28-1.el8
python3-tenacity 6.2.0-1.el8
6.3.1-1.el8
python3-tkinter 3.6.8-45.el8
3.6.8-46.el8
python3-werkzeug
2.0.1-2.el8
python3-winrm
0.4.1-1.el8
python3-xmltodict
0.12.0-6.el8
python38
3.8.12-1.module_el8.6.0+929+89303463
python38-ansible-runner
2.1.3-1.el8
python38-asn1crypto
1.2.0-3.module_el8.5.0+742+dbad1979
python38-babel
2.7.0-11.module_el8.6.0+929+89303463
python38-cffi
1.13.2-3.module_el8.5.0+742+dbad1979
python38-cryptography
2.8-3.module_el8.5.0+742+dbad1979
python38-daemon
2.2.4-3.2.el8
python38-docutils
0.14-12.2.el8
python38-idna
2.8-6.module_el8.5.0+742+dbad1979
python38-jinja2
2.10.3-5.module_el8.6.0+929+89303463
python38-jmespath
0.9.0-11.1.el8
python38-libs
3.8.12-1.module_el8.6.0+929+89303463
python38-lockfile
0.12.2-1.2.el8
python38-markupsafe
1.1.1-6.module_el8.5.0+742+dbad1979
python38-netaddr
0.7.19-8.1.1.el8
python38-ovirt-engine-sdk4
4.5.1-1.el8
python38-ovirt-imageio-client
2.4.3-1.el8
python38-ovirt-imageio-common
2.4.3-1.el8
python38-passlib
1.7.0-5.1.el8
python38-pexpect
4.7.0-4.3.el8
python38-pip
19.3.1-5.module_el8.6.0+960+f11a9b17
python38-pip-wheel
19.3.1-5.module_el8.6.0+960+f11a9b17
python38-ply
3.11-10.module_el8.5.0+742+dbad1979
python38-ptyprocess
0.5.2-4.2.el8
python38-pycparser
2.19-3.module_el8.5.0+742+dbad1979
python38-pycurl
7.43.0.2-4.1.el8
python38-pytz
2019.3-3.module_el8.5.0+742+dbad1979
python38-pyyaml
5.4.1-1.module_el8.6.0+929+89303463
python38-resolvelib
0.5.4-5.el8
python38-setuptools
41.6.0-5.module_el8.6.0+929+89303463
python38-setuptools-wheel
41.6.0-5.module_el8.6.0+929+89303463
python38-six
1.12.0-10.module_el8.5.0+742+dbad1979
python38-tkinter
3.8.12-1.module_el8.6.0+929+89303463
rhel-system-roles 1.15.1-1.el8
1.16.1-1.el8
rng-tools
6.14-4.git.b2b7934e.el8
rsyslog 8.2102.0-7.el8
8.2102.0-8.el8
rsyslog-elasticsearch 8.2102.0-7.el8
8.2102.0-8.el8
rsyslog-gnutls
8.2102.0-8.el8
rsyslog-mmjsonparse 8.2102.0-7.el8
8.2102.0-8.el8
rsyslog-mmnormalize 8.2102.0-7.el8
8.2102.0-8.el8
selinux-policy 3.14.3-93.el8
3.14.3-95.el8
selinux-policy-targeted 3.14.3-93.el8
3.14.3-95.el8
sssd-client 2.6.1-2.el8
2.6.2-3.el8
sssd-common 2.6.1-2.el8
2.6.2-3.el8
sssd-kcm 2.6.1-2.el8
2.6.2-3.el8
tzdata 2021e-1.el8
2022a-1.el8
tzdata-java 2021e-1.el8
2022a-1.el8
usbguard
1.0.0-9.el8
usbguard-selinux
1.0.0-9.el8
util-linux 2.32.1-34.el8
2.32.1-35.el8
yum 4.7.0-7.el8
4.7.0-8.el8
yum-utils 4.0.21-10.el8
4.0.21-11.el8
See the release notes for installation instructions and a list of new
features and bugs fixed.
Additional resources:
- Read more about the oVirt 4.5.0 release highlights: https://www.ovirt.org/release/4.5.0/
- Get more oVirt project updates on Twitter: https://twitter.com/ovirt
- Check out the latest project news on the oVirt blog: https://blogs.ovirt.org/
oVirt 4.5.0 Beta test day - April 7th 2022
If you're willing to help test the release during the test days, please
join the oVirt development mailing list at
https://lists.ovirt.org/archives/list/devel@ovirt.org/ and report your
feedback there.
Please join the trello board at
https://trello.com/b/3FZ7gdhM/ovirt-450-test-day for sharing what you're
going to test so others can focus on different areas not covered by your
test.
If you don't want to register with Trello, please share on the oVirt
development mailing list and we'll add it to the board.
The board is publicly visible also to non-registered users.
Instructions for installing oVirt 4.5.0 Beta for testing have been added to
the release page
https://www.ovirt.org/release/4.5.0/.
Professional Services, Integrators and Backup vendors: please run a test
session against your additional services, integrated solutions, downstream
rebuilds, and backup solutions on this beta release, and report issues as
soon as possible.
If you're not listed here:
https://ovirt.org/community/user-stories/users-and-providers.html
consider adding your company there.
Do you want to contribute to getting ready for this release?
Read more about oVirt community at https://ovirt.org/community/ and join
the oVirt developers https://ovirt.org/develop/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
2 years, 8 months
Storage Design Question - GlusterFS
by mohamedroushdy@peopleintouch.com
Hello,
We are designing a new production cluster, and I have two questions about GlusterFS, please:
1- This quote from the documentation is not clear to me: "Creating additional data domains in the same data center as the self-hosted engine storage domain is highly recommended. If you deploy the self-hosted engine in a data center with only one active data storage domain, and that storage domain is corrupted, you will not be able to add new storage domains or remove the corrupted storage domain; you will have to redeploy the self-hosted engine." Does this mean that we should have more than one "Data" domain? Or is this paragraph talking about sharing a single domain for both the hosted engine and the VMs?
2- The documentation mentions that only Replica 1 or Replica 3 are supported, but is this only for the hosted-engine volume, or also for data volumes? So the maximum is only three bricks? And what if I want to expand capacity later?
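For question 2, in case it clarifies what I mean by expanding: I assume growth would be done by adding bricks in sets of three (one per replica), turning the volume into distributed-replicate, along these lines with placeholder host and brick names (please correct me if that's wrong):
gluster volume add-brick datavol server4:/gluster/data/brick1 server5:/gluster/data/brick1 server6:/gluster/data/brick1
gluster volume rebalance datavol start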
Thank you,
2 years, 8 months