[ANN] oVirt 4.4.4 is now generally available
by Sandro Bonazzola
oVirt 4.4.4 is now generally available
The oVirt project is excited to announce the general availability of oVirt
4.4.4, as of December 21st, 2020.
This release unleashes an altogether more powerful and flexible open source
virtualization solution that encompasses hundreds of individual changes and
a wide range of enhancements across the engine, storage, network, user
interface, and analytics, as compared to oVirt 4.3.
Important notes before you install / upgrade
Please note that oVirt 4.4 only supports clusters and data centers with
compatibility version 4.2 and above. If clusters or data centers are
running with an older compatibility version, you need to upgrade them to at
least 4.2 (4.3 is recommended).
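As a quick pre-upgrade check, the compatibility level of every cluster can be
read through the Python SDK (python3-ovirt-engine-sdk4). The sketch below is
illustrative only; the engine URL and credentials are placeholders:

import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',                          # placeholder
    password='password',                                # placeholder
    ca_file='/etc/pki/ovirt-engine/ca.pem',             # placeholder
)
try:
    clusters_service = connection.system_service().clusters_service()
    for cluster in clusters_service.list():
        level = (cluster.version.major, cluster.version.minor)
        if level < (4, 2):
            # These clusters must be raised to at least 4.2 before the upgrade.
            print('Cluster %s is still at %d.%d' % (cluster.name, level[0], level[1]))
finally:
    connection.close()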
Please note that in RHEL 8 / CentOS 8 several devices that worked on EL7
are no longer supported.
For example, the megaraid_sas driver is removed. If you use Enterprise
Linux 8 hosts you can try to provide the necessary drivers for the
deprecated hardware using the DUD method (See the users’ mailing list
thread on this at
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/NDSVUZSESOXE...
)
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions
  on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see
  the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide
  <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt
  <https://ovirt.org/community/about.html>.
What’s new in oVirt 4.4.4 Release?
This update is the fourth in a series of stabilization updates to the 4.4
series.
This release is available now on x86_64 architecture for:
- Red Hat Enterprise Linux 8.3
- CentOS Linux (or similar) 8.3
- CentOS Stream (tech preview)
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures for:
- Red Hat Enterprise Linux 8.3
- CentOS Linux (or similar) 8.3
- oVirt Node (based on CentOS Linux 8.3)
- CentOS Stream (tech preview)
oVirt Node and Appliance have been updated, including:
- oVirt 4.4.4: https://www.ovirt.org/release/4.4.4/
- Ansible 2.9.16:
  https://github.com/ansible/ansible/blob/stable-2.9/changelogs/CHANGELOG-v...
- CentOS Linux 8 (2011):
  https://lists.centos.org/pipermail/centos-announce/2020-December/048207.html
- Advanced Virtualization 8.3
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
Additional resources:
- Read more about the oVirt 4.4.4 release highlights:
  https://www.ovirt.org/release/4.4.4/
- Get more oVirt project updates on Twitter: https://twitter.com/ovirt
- Check out the latest project news on the oVirt blog:
  https://blogs.ovirt.org/
[1] https://www.ovirt.org/release/4.4.4/
[2] https://resources.ovirt.org/pub/ovirt-4.4/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
2 years, 10 months
encrypted GENEVE traffic
by Pavel Nakonechnyi
Dear oVirt Community,
From my understanding, oVirt does not support Open vSwitch IPsec tunneling for GENEVE traffic (described at http://docs.openvswitch.org/en/latest/howto/ipsec/ and http://docs.openvswitch.org/en/latest/tutorials/ipsec/).
Are there plans to introduce such support? (or explicitly not to..)
Is it possible to somehow manually configure such tunneling for existing virtual networks? (even in a limited way)
Alternatively, is it possible to deploy oVirt on top of tunneled interfaces (e.g. via VXLAN or IPsec)? This would allow all management traffic to be encrypted.
Such a requirement arises when deploying oVirt on third-party premises with an untrusted network.
Thanks in advance for any clarifications. :)
--
WBR, Pavel
+32478910884
2 years, 10 months
Re: Constantly XFS in memory corruption inside VMs
by Strahil Nikolov
Damn...
You are using EFI boot. Does this happen only to EFI machines?
Did you notice if only EL 8 is affected?
Best Regards,
Strahil Nikolov
On Sunday, November 29, 2020 at 19:36:09 GMT+2, Vinícius Ferrão <ferrao(a)versatushpc.com.br> wrote:
Yes!
I have a live VM right now that will be dead on a reboot:
[root@kontainerscomk ~]# cat /etc/*release
NAME="Red Hat Enterprise Linux"
VERSION="8.3 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.3"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.3 (Ootpa)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8.3:GA"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.3
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.3"
Red Hat Enterprise Linux release 8.3 (Ootpa)
Red Hat Enterprise Linux release 8.3 (Ootpa)
[root@kontainerscomk ~]# sysctl -a | grep dirty
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 30
vm.dirty_writeback_centisecs = 500
vm.dirtytime_expire_seconds = 43200
[root@kontainerscomk ~]# xfs_db -r /dev/dm-0
xfs_db: /dev/dm-0 is not a valid XFS filesystem (unexpected SB magic number 0xa82a0000)
Use -F to force a read attempt.
[root@kontainerscomk ~]# xfs_db -r /dev/dm-0 -F
xfs_db: /dev/dm-0 is not a valid XFS filesystem (unexpected SB magic number 0xa82a0000)
xfs_db: size check failed
xfs_db: V1 inodes unsupported. Please try an older xfsprogs.
[root@kontainerscomk ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Thu Nov 19 22:40:39 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/rhel-root / xfs defaults 0 0
UUID=ad84d1ea-c9cc-4b22-8338-d1a6b2c7d27e /boot xfs defaults 0 0
UUID=4642-2FF6 /boot/efi vfat umask=0077,shortname=winnt 0 2
/dev/mapper/rhel-swap none swap defaults 0 0
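For quick triage of additional disks, here is a minimal Python sketch (standard
library only) that reads the first four bytes of a device or image and compares
them against the XFS superblock magic "XFSB" (0x58465342); the 0xa82a0000 value
reported by xfs_db above is clearly something else. The path argument is a
placeholder and must point at the filesystem itself, not a partition table:

import sys

XFS_MAGIC = b'XFSB'  # 0x58465342, the on-disk XFS superblock magic

path = sys.argv[1]   # e.g. /dev/dm-0 or a disk image file
with open(path, 'rb') as f:
    magic = f.read(4)

if magic == XFS_MAGIC:
    print('%s: superblock 0 carries the XFS magic' % path)
else:
    print('%s: unexpected magic 0x%s (not XFSB)' % (path, magic.hex()))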
Thanks,
-----Original Message-----
From: Strahil Nikolov <hunter86_bg(a)yahoo.com>
Sent: Sunday, November 29, 2020 2:33 PM
To: Vinícius Ferrão <ferrao(a)versatushpc.com.br>
Cc: users <users(a)ovirt.org>
Subject: Re: [ovirt-users] Re: Constantly XFS in memory corruption inside VMs
Can you check the output on the VM that was affected:
# cat /etc/*release
# sysctl -a | grep dirty
Best Regards,
Strahil Nikolov
On Sunday, November 29, 2020 at 19:07:48 GMT+2, Vinícius Ferrão via Users <users(a)ovirt.org> wrote:
Hi Strahil.
I’m not using barrier options on mount. These are the default settings from the CentOS install.
I have some additional findings: there is a large number of discarded packets on the switch on the hypervisor interfaces.
Discards are OK as far as I know; I hope TCP handles this and does the proper retransmissions, but I wonder whether this may be related or not. Our storage is over NFS. My general expertise is with iSCSI and I’ve never seen this kind of issue with iSCSI, not that I’m aware of.
In other clusters, I’ve seen a high number of discards with iSCSI on XenServer 7.2 but there’s no corruption on the VMs there...
Thanks,
Sent from my iPhone
> On 29 Nov 2020, at 04:00, Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
>
> Are you using "nobarrier" mount options in the VM ?
>
> If yes, can you try to remove the "nobarrrier" option.
>
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Saturday, November 28, 2020 at 19:25:48 GMT+2, Vinícius Ferrão <ferrao(a)versatushpc.com.br> wrote:
>
>
>
>
>
> Hi Strahil,
>
> I moved a running VM to another host, rebooted it, and no corruption was found. If there's any corruption it may be silent corruption... I've seen cases where the VM was new, just installed; I ran dnf -y update to get the updated packages, rebooted, and boom, XFS corruption. So perhaps the migration process isn't the one to blame.
>
> But, in fact, I remember a VM that went down while being moved, and when I rebooted it, it was corrupted. But this may not be related; it was perhaps already in an inconsistent state.
>
> Anyway, here's the mount options:
>
> Host1:
> 192.168.10.14:/mnt/pool0/ovirt/vm on /rhev/data-center/mnt/192.168.10.14:_mnt_pool0_ovirt_vm type nfs4 (rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,soft,nosharecache,proto=tcp,timeo=100,retrans=3,sec=sys,clientaddr=192.168.10.1,local_lock=none,addr=192.168.10.14)
>
> Host2:
> 192.168.10.14:/mnt/pool0/ovirt/vm on /rhev/data-center/mnt/192.168.10.14:_mnt_pool0_ovirt_vm type nfs4 (rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,soft,nosharecache,proto=tcp,timeo=100,retrans=3,sec=sys,clientaddr=192.168.10.1,local_lock=none,addr=192.168.10.14)
>
> The options are the default ones. I didn't change anything when configuring this cluster.
>
> Thanks.
>
>
>
> -----Original Message-----
> From: Strahil Nikolov <hunter86_bg(a)yahoo.com>
> Sent: Saturday, November 28, 2020 1:54 PM
> To: users <users(a)ovirt.org>; Vinícius Ferrão
> <ferrao(a)versatushpc.com.br>
> Subject: Re: [ovirt-users] Constantly XFS in memory corruption inside
> VMs
>
> Can you check with a test VM whether this happens after a Virtual Machine migration?
>
> What are your mount options for the storage domain?
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Saturday, November 28, 2020 at 18:25:15 GMT+2, Vinícius Ferrão via Users <users(a)ovirt.org> wrote:
>
>
>
>
>
>
>
>
> Hello,
>
>
>
> I’m trying to discover why an oVirt 4.4.3 Cluster with two hosts and NFS shared storage on TrueNAS 12.0 is constantly getting XFS corruption inside the VMs.
>
>
>
> For random reasons VMs get corrupted, sometimes halting, sometimes just silently corrupted, and after a reboot the system is unable to boot due to “corruption of in-memory data detected”. Sometimes the corrupted data is “all zeroes”, sometimes there's data there. In extreme cases XFS superblock 0 gets corrupted and the system cannot even detect an XFS partition anymore, since the XFS magic key is corrupted in the first blocks of the virtual disk.
>
>
>
> This has been happening for a month now. We had to roll back to some backups, and I no longer trust the state of the VMs.
>
>
>
> Using xfs_db I can see that some VMs have corrupted superblocks even while the VM is up. One in particular had sb0 corrupted, so I knew that when a reboot kicked in the machine would be gone, and that's exactly what happened.
>
>
>
> The other day I was just installing a new CentOS 8 VM for random reasons, and after running dnf -y update and rebooting, the VM was corrupted and needed XFS repair. That was an extreme case.
>
>
>
> So, I've looked at the TrueNAS logs, and there's apparently nothing wrong with the system. No errors logged in dmesg, nothing in /var/log/messages, and no errors on the “zpools”, not even after scrub operations. We've been monitoring the switch, a Catalyst 2960X, and all its interfaces. There are no “up and down” events and zero errors on any interface (we have 4x port LACP on the TrueNAS side and 2x port LACP on each host); everything seems to be fine. The only metric that I was unable to get is “dropped packets”, but I don't know whether this can be an issue or not.
>
>
>
> Finally, on oVirt, I can't find anything either. I looked at /var/log/messages and /var/log/sanlock.log but found nothing suspicious.
>
>
>
> Is anyone out there experiencing this? Our VMs are mainly CentOS 7/8 with XFS; there are 3 Windows VMs that do not seem to be affected, but everything else is affected.
>
>
>
> Thanks all.
>
>
>
2 years, 10 months
upload_disk.py - CLI Upload Disk
by Jorge Visentini
Hi All.
I'm using version 4.4.4 (latest stable version -
ovirt-node-ng-installer-4.4.4-2020122111.el8.iso)
I tried using the upload_disk.py script, but I don't think I know how to
use it.
When I try to use it, these errors occur:
python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py disk01-SO.qcow2 --disk-format qcow2 --sd-name ISOs

usage: upload_disk.py [-h] -c CONFIG [--debug] [--logfile LOGFILE]
                      [--disk-format {raw,qcow2}] [--disk-sparse]
                      [--enable-backup] --sd-name SD_NAME [--use-proxy]
                      [--max-workers MAX_WORKERS] [--buffer-size BUFFER_SIZE]
                      filename
upload_disk.py: error: the following arguments are required: -c/--config
Using the upload_disk.py help:
python3 upload_disk.py --help

  -c CONFIG, --config CONFIG
                        Use engine connection details from [CONFIG] section
                        in ~/.config/ovirt.conf.
Is this CONFIG the API access configuration for authentication? Analyzing
the script, I did not find this information.
Does this new version work differently or am I doing something wrong?
In the sdk_4.3 version of upload_disk.py I had to change the script to add
the access information, but it worked.
[root@engineteste01 ~]# python3 upload_disk.py disk01-SO.qcow2
Checking image...
Disk format: qcow2
Disk content type: data
Connecting...
Creating disk...
Creating transfer session...
Uploading image...
Uploaded 20.42%
Uploaded 45.07%
Uploaded 68.89%
Uploaded 94.45%
Uploaded 2.99g in 42.17 seconds (72.61m/s)
Finalizing transfer session...
Upload completed successfully
[root@engineteste01 ~]#
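For reference, here is a rough sketch of the kind of [CONFIG] section the -c
option seems to expect in ~/.config/ovirt.conf, written with configparser.
The exact key names are an assumption, so check the helpers module shipped
next to upload_disk.py if the script complains about missing keys:

import configparser
import os

config = configparser.ConfigParser()
config['engine1'] = {
    'engine_url': 'https://engine.example.com',   # placeholder engine FQDN
    'username': 'admin@internal',                 # placeholder
    'password': 'password',                       # placeholder
    'cafile': '/etc/pki/ovirt-engine/ca.pem',     # placeholder CA bundle
}

path = os.path.expanduser('~/.config/ovirt.conf')
os.makedirs(os.path.dirname(path), exist_ok=True)
with open(path, 'w') as f:
    config.write(f)

With such a section in place, the invocation from the usage text above would
look something like:
python3 upload_disk.py -c engine1 --disk-format qcow2 --sd-name ISOs disk01-SO.qcow2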
Thank you all!!
--
Att,
Jorge Visentini
+55 55 98432-9868
2 years, 11 months
Windows 7 vm lost network connection under heavy network load
by Joey Ma
Hi folks,
Happy holidays.
I'm having an urgent problem. :smile:
I've installed oVirt 4.4.2 on CentOS 8.2 and then created several Windows 7
VMs for stress testing. I found that heavy network load would leave the
e1000 network card unable to receive packets; it seemed totally blocked.
In the meantime, packet sending was fine.
Only re-enabling the network card restores the network. Has anyone else had
this problem? Looking forward to your insights. Much appreciated.
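While debugging, the manual workaround can be approximated at the oVirt level
by hot-unplugging and re-plugging the VM's NIC through the Python SDK. This is
only a sketch; the VM name, NIC name and connection details are placeholders,
and whether it matches the in-guest "disable/enable" workaround closely enough
is untested:

import time
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',                          # placeholder
    password='password',                                # placeholder
    ca_file='/etc/pki/ovirt-engine/ca.pem',             # placeholder
)
try:
    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=win7-test')[0]   # placeholder VM name
    nics_service = vms_service.vm_service(vm.id).nics_service()
    nic = next(n for n in nics_service.list() if n.name == 'nic1')  # placeholder NIC
    nic_service = nics_service.nic_service(nic.id)
    nic_service.deactivate()   # hot-unplug the vNIC
    time.sleep(5)
    nic_service.activate()     # plug it back in
finally:
    connection.close()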
Best regards,
Joey
2 years, 11 months
Cannot upgrade cluster to v4.5 (All hosts are CentOS 8.3.2011)
by Gilboa Davara
Hello all,
I'm more or less finished building a new oVirt-over-GlusterFS cluster with
3 fairly beefy servers.
Nodes were fully upgraded to CentOS Linux release 8.3.2011 before they
joined the cluster.
Looking at the cluster view in the WebUI, I get an exclamation mark with
the following message: "Upgrade cluster compatibility level".
When I try to upgrade the cluster, 2 of the 3 hosts go into maintenance and
reboot, but once the procedure is complete, the cluster version remains the
same.
Looking at the host vdsm logs, I see that once the engine refreshes their
capabilities, all hosts return 4.2-4.4 and not 4.5.
E.g.
'supportedENGINEs': ['4.2', '4.3', '4.4'], 'clusterLevels': ['4.2', '4.3',
'4.4']
I assume I should be seeing 4.5 after the upgrade, no?
Am I missing something?
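For what it's worth, the cluster compatibility level can also be raised
explicitly through the Python SDK rather than the WebUI. The sketch below
uses placeholder names and credentials, and it will still fail if the hosts
do not actually advertise cluster level 4.5:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',                          # placeholder
    password='password',                                # placeholder
    ca_file='/etc/pki/ovirt-engine/ca.pem',             # placeholder
)
try:
    clusters_service = connection.system_service().clusters_service()
    cluster = clusters_service.list(search='name=Default')[0]  # placeholder cluster
    cluster_service = clusters_service.cluster_service(cluster.id)
    # Request compatibility level 4.5 for the cluster.
    cluster_service.update(types.Cluster(version=types.Version(major=4, minor=5)))
finally:
    connection.close()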
Thanks,
- Gilboa
2 years, 11 months
Best Practice? Affinity Rules Enforcement Manager or High Availability?
by souvaliotimaria@mail.com
Hello everyone,
Not sure if I should ask this here as it seems to be a pretty obvious question but here it is.
What is the best solution for making your VMs automatically boot up on another working host when something goes wrong (a Gluster problem, a non-responsive host, etc.)? Would you enable the Affinity Manager and enforce some policies, or would you set the VMs you want as Highly Available?
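For reference, the high-availability side of the question looks roughly like
this through the Python SDK; it is only a sketch, and the VM name, priority
and credentials are placeholders:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',                          # placeholder
    password='password',                                # placeholder
    ca_file='/etc/pki/ovirt-engine/ca.pem',             # placeholder
)
try:
    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=myvm')[0]         # placeholder VM name
    # Mark the VM as highly available so the engine restarts it elsewhere.
    vms_service.vm_service(vm.id).update(
        types.Vm(high_availability=types.HighAvailability(enabled=True, priority=1)),
    )
finally:
    connection.close()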
Thank you very much for your time!
Best regards,
Maria Souvalioti
2 years, 11 months
Re: Breaking up a oVirt cluster on storage domain boundary.
by Strahil Nikolov
> Can I migrate storage domains, and thus all the VMs within that
> storage domain?
>
>
>
> Or will I need to build new cluster, with new storage domains, and
> migrate the VMs?
>
>
Actually, you can create a new cluster and ensure that the storage
domains are accessible from that new cluster.
Then, to migrate, you just need to power off the VM, Edit -> change
cluster, network, etc., and power it up.
It will start on the hosts in the new cluster and then you just need to
verify that the application is working properly.
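A rough Python SDK sketch of that flow (the VM name, target cluster and
credentials are placeholders):

import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',                          # placeholder
    password='password',                                # placeholder
    ca_file='/etc/pki/ovirt-engine/ca.pem',             # placeholder
)
try:
    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=myvm')[0]         # placeholder VM name
    vm_service = vms_service.vm_service(vm.id)

    vm_service.stop()                                    # power off the VM
    while vm_service.get().status != types.VmStatus.DOWN:
        time.sleep(5)

    clusters_service = connection.system_service().clusters_service()
    new_cluster = clusters_service.list(search='name=NewCluster')[0]  # placeholder
    vm_service.update(types.Vm(cluster=types.Cluster(id=new_cluster.id)))

    vm_service.start()                                   # power on in the new cluster
finally:
    connection.close()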
Best Regards,
Strahil Nikolov
2 years, 11 months