[ANN] oVirt 4.4.4 Sixth Release Candidate is now available for testing
by Lev Veyde
The oVirt Project is pleased to announce the availability of oVirt 4.4.4
Sixth Release Candidate for testing, as of December 17th, 2020.
This update is the fourth in a series of stabilization updates to the 4.4
series.
How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
Note: if you already performed these steps while upgrading from 4.4.1 to
4.4.2 GA, you do not need to repeat them when upgrading from 4.4.2 GA or
later; they only need to be done once.
Due to Bug 1837864 <https://bugzilla.redhat.com/show_bug.cgi?id=1837864> -
Host enters emergency mode after upgrading to latest build:
if you have your root file system on a multipath device on your hosts,
be aware that after upgrading from 4.4.1 to 4.4.4 the host may enter
emergency mode.
To prevent this, be sure to upgrade oVirt Engine first, then do the
following on your hosts (a condensed command sketch follows the steps):
1. Remove the current lvm filter while still on 4.4.1, or in emergency mode
(if rebooted).
2. Reboot.
3. Upgrade to 4.4.4 (redeploy in case of already being on 4.4.4).
4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in
place.
5. Only if not using oVirt Node: run "dracut --force --add multipath" to
rebuild the initramfs with the correct filter configuration.
6. Reboot.
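A condensed command-level sketch of the host-side steps (illustrative only,
not taken from the release notes; it assumes the old filter lives in
/etc/lvm/lvm.conf):

  # step 1: while still on 4.4.1 (or from emergency mode), comment out the old filter
  sed -i 's/^\( *\)filter =/\1# filter =/' /etc/lvm/lvm.conf
  # step 2: reboot
  reboot
  # step 3: upgrade the host to 4.4.4 from the engine (redeploy if already on 4.4.4)
  # step 4: confirm a new filter is in place
  vdsm-tool config-lvm-filter
  # step 5: only if NOT using oVirt Node, rebuild the initramfs with the new filter
  dracut --force --add multipath
  # step 6: reboot again
  reboot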
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions
on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see
the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide
<https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt
<https://ovirt.org/community/about.html>.
Important notes before you try it
Please note this is a pre-release build.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
Installation instructions
For installation instructions and additional information, please refer to:
https://ovirt.org/documentation/
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.3 or newer
* CentOS Linux (or similar) 8.3 or newer
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.3 or newer
* CentOS Linux (or similar) 8.3 or newer
* oVirt Node 4.4 based on CentOS Linux 8.3 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
Additional Resources:
* Read more about the oVirt 4.4.4 release highlights:
http://www.ovirt.org/release/4.4.4/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.4/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
<https://www.redhat.com>
lev(a)redhat.com | lveyde(a)redhat.com
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
Network Teamd support
by Carlos C
Hi folks,
Does oVirt 4.4.4 support, or will it support, network teaming (teamd), or only bonding?
regards
Carlos
Cannot connect GlusterFS storage to oVirt
by Ariez Ahito
Hi guys, I have installed the oVirt 4.4 hosted engine and a separate GlusterFS storage.
Now, during hosted-engine deployment, I choose
STORAGE TYPE: gluster
Storage connection: 10.33.50.33/VOL1
Mount Option:
and when I try to connect, it gives me this error:
[ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Problem while trying to mount target]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Problem while trying to mount target]\". HTTP response code is 400."}
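When the engine only reports "Problem while trying to mount target", it
usually helps to first check whether the volume mounts by hand from the host
running the deployment (a generic GlusterFS check, nothing oVirt-specific;
the mount point is just an example):

  mkdir -p /mnt/voltest
  mount -t glusterfs 10.33.50.33:/VOL1 /mnt/voltest
  # on failure, the gluster client log usually names the real problem:
  less /var/log/glusterfs/mnt-voltest.log
  umount /mnt/voltest

If the manual mount works, the next things to look at are the mount options
passed in the dialog and the gluster volume settings.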
Move self hosted engine to a different gluster volume
by ralf@os-s.de
Hi,
I apparently successfully upgraded a hyperconverged self-hosted setup from 4.3 to 4.4. During this process the self-hosted engine required a new gluster volume (/engine-new), for which I used temporary storage. Is it possible to move the SHE back to the original volume (/engine)?
What steps would be needed? Could I just do:
1. global maintenance
2. stop engine and SHE guest
3. copy all files from glusterfs /engine-new to /engine
4. use hosted-engine --set-shared-config storage <server1>:/engine
   hosted-engine --set-shared-config mnt_options backup-volfile-servers=<server2>:<server3>
5. disable maintenance
Or are additional steps required?
Kind regards,
Ralf
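Not a verified procedure, but Ralf's proposed sequence could look roughly
like this as commands on one of the hosts (<server1>-<server3> and the
gluster mount paths are placeholders, and the engine VM has to stay down
throughout):

  hosted-engine --set-maintenance --mode=global
  hosted-engine --vm-shutdown
  # copy the engine storage between the two (mounted) gluster volumes
  rsync -aX /rhev/data-center/mnt/glusterSD/<server1>:_engine-new/ \
            /rhev/data-center/mnt/glusterSD/<server1>:_engine/
  hosted-engine --set-shared-config storage <server1>:/engine
  hosted-engine --set-shared-config mnt_options backup-volfile-servers=<server2>:<server3>
  hosted-engine --set-maintenance --mode=none

Whether additional steps (e.g. updating the hosted-storage domain in the
engine itself) are required is exactly the open question here.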
Increase the initial size of LVM snapshots to prevent VMs from freezing
by Gal Villaret
Hi all,
Lately, I have been encountering an issue where VMs freeze during backups.
From what I can gather, this happens because some of the VMs perform large writes during the backup window and the snapshot volumes do not grow fast enough.
I use iSCSI storage with all VM disks preallocated.
Is there a configuration value I can change in order to increase the initial size of snapshots, and perhaps also the watermark that triggers snapshot extension?
Thanks.
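If I remember correctly, vdsm has two tunables for exactly this on block
storage, in the [irs] section of /etc/vdsm/vdsm.conf on each host; the
option names and defaults below are from memory, so verify them against
your vdsm version first:

  # /etc/vdsm/vdsm.conf -- assumed defaults shown
  [irs]
  # extend a thin/COW LV once it is this percent full; lower it to extend earlier
  volume_utilization_percent = 50
  # how many MiB to add on each extension; raise it for write-heavy VMs
  volume_utilization_chunk_mb = 1024

  # apply the change on the host
  systemctl restart vdsmd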
Bad CPU type after CentOS 8.3
by Lionel Caignec
Hi,
I've just upgraded one host to the latest CentOS release, and after reboot oVirt said "Host CPU type is not compatible with Cluster Properties."
Looking at the server, I can see the CPU is detected as Skylake (cat /sys/devices/cpu/caps/pmu_name).
My CPU is an Intel(R) Xeon(R) Gold 5220R CPU @ 2.20GHz, so according to ark it is a Cascade Lake server part.
When I installed my new oVirt environment months ago, I configured the cluster as "Secure Intel Cascadelake Server Family" and all was fine.
Can anyone help me?
Environment:
OS: CentOS 8.3 (for the manager and the host in error)
ovirt-engine 4.4.3.12-1.el8
Host with error: vdsm.x86_64 4.40.35.1-1.el8
Working host: vdsm.x86_64 4.40.26.3-1.el8
I don't know what to do, and I don't want to try updating another host if it fails as well...
Sorry if I posted my message in the wrong place.
Lionel Caignec.
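One way to see what the upgraded host actually reports to the engine is to
dump the vdsm capabilities; the exact JSON key names are an assumption from
memory:

  vdsm-client Host getCapabilities | grep -E '"cpuModel"|"cpuFlags"'
  # compare the reported model/flags with what the
  # "Secure Intel Cascadelake Server Family" cluster CPU type expects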
How to unlock disk images?
by thomas@hoberg.net
On one oVirt 4.3 farm I have three locked images I'd like to clear out.
One is an ISO image that somehow never completed its transfer due to a slow network. It occupies little space, but in the GUI it sticks out and irritates. I guess it would just take an update somewhere in the PostgreSQL database to unlock it and make it deletable, but since the schema isn't documented I'd rather ask here: how do I unlock the image?
Two are left-overs from a snapshot that somehow never completed, one for the disk and another for the RAM part. I don't know how my colleague managed to get into that state, but impatience/concurrency probably was a factor; a transient failure of a node could have been another.
In any case, the snapshot operation has logically been going on for weeks without any real activity, has survived several restarts (after patches) of all nodes and the ME, and shows no sign of disappearing voluntarily.
Again, I'd assume that I need to clear out the snapshot job, unlock the images and then delete what's left. Some easy SQL and most likely a management engine restart afterwards... if you knew what you were doing (or there was an option in the GUI).
So how do I list/delete snapshot jobs that aren't really running any more?
And how do I unlock the images so I can delete them?
Thanks for your help!
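For the unlocking part, the engine ships a supported helper, so no
hand-written SQL should be needed; a sketch from memory (run on the engine
machine, the IDs are placeholders):

  cd /usr/share/ovirt-engine/setup/dbutils
  # list locked entities first
  ./unlock_entity.sh -q -t all
  # unlock a stuck disk image or snapshot by its ID
  ./unlock_entity.sh -t disk <disk-id>
  ./unlock_entity.sh -t snapshot <snapshot-id>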
Illegal disk status
by Daniel Menzel
Hi,
we have a problem with some VMs which cannot be started anymore due to
an illegal disk status of a snapshot.
What happened (most likely)? We tried to snapshot those VMs some days ago,
but the storage domain didn't have enough free space left. Yesterday we
shut those VMs down, and from then on they wouldn't start anymore.
What have I tried so far?
1. Via the web interface I tried to remove the snapshot - didn't work.
2. Searched the internet. Found (among other stuff) this:
https://bugzilla.redhat.com/show_bug.cgi?id=1649129
3. Via /vdsm-tool dump-volume-chains/ I managed to list those 5
snapshots (see below).
The output for one machine was:
image: 2d707743-4a9e-40bb-b223-83e3be672dfe
- 9ae6ea73-94b4-4588-9a6b-ea7a58ef93c9
status: OK, voltype: INTERNAL, format: RAW, legality:
LEGAL, type: PREALLOCATED, capacity: 32212254720, truesize: 32212254720
- f7d2c014-e8f5-4413-bfc5-4aa1426cb1e2
status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE, capacity: 32212254720, truesize: 29073408
So my idea was to follow the said bugzilla thread and update the volume,
but I didn't manage to find input for the /job_id/ and /generation/.
So my question is: does anyone have an idea on how to (force) remove a
given snapshot via vdsm-{tool|client}?
Thanks in advance!
Daniel
--
Daniel Menzel
Geschäftsführer
Menzel IT GmbH
Charlottenburger Str. 33a
13086 Berlin
+49 (0) 30 / 5130 444 - 00
daniel.menzel(a)menzel-it.net
https://menzel-it.net
Geschäftsführer: Daniel Menzel, Josefin Menzel
Unternehmenssitz: Berlin
Handelsregister: Amtsgericht Charlottenburg
Handelsregister-Nummer: HRB 149835 B
USt-ID: DE 309 226 751
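Regarding the missing /job_id/ and /generation/: as far as I recall the
workaround in that bugzilla, job_id is simply a fresh UUID you generate
yourself, and generation is read from Volume getInfo. A sketch
reconstructed from memory, to be checked against the vdsm API schema of
your version before use (the pool/domain IDs are placeholders, and the VM
must be down):

  # read the current generation of the illegal volume
  vdsm-client Volume getInfo storagepoolID=<sp-id> storagedomainID=<sd-id> \
      imageID=2d707743-4a9e-40bb-b223-83e3be672dfe \
      volumeID=f7d2c014-e8f5-4413-bfc5-4aa1426cb1e2
  # then mark it legal again, using a freshly generated job_id
  vdsm-client SDM update_volume job_id=$(uuidgen) vol_info='{
      "sd_id": "<sd-id>", "img_id": "2d707743-4a9e-40bb-b223-83e3be672dfe",
      "vol_id": "f7d2c014-e8f5-4413-bfc5-4aa1426cb1e2",
      "generation": <generation-from-getInfo>,
      "vol_props": {"legality": "LEGAL"}}'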
Nodes install Python3 every day
by jb
Hello,
I noticed a strange thing in the logs. All nodes (oVirt 4.4.3.12-1.el8)
appear to install Python3 again every day after checking for updates. The
GUI log shows these entries:
13.12.2020, 11:39 Check for update of host onode1.example.org.
Gathering Facts.
13.12.2020, 11:39 Check for update of host onode1.example.org.
include_tasks.
13.12.2020, 11:39 Check for update of host onode1.example.org.
Detect host operating system.
13.12.2020, 11:39 Check for update of host onode1.example.org.
Fetch installed packages.
13.12.2020, 11:39 Check for update of host onode1.example.org.
Check if vdsm is preinstalled.
13.12.2020, 11:39 Check for update of host onode1.example.org.
Parse operating system release.
13.12.2020, 11:39 Check for update of host onode1.example.org.
Detect if host is a prebuilt image.
13.12.2020, 11:39 Check for update of host onode1.example.org.
Install Python3 for CentOS/RHEL8 hosts.
13.12.2020, 11:39 Check for update of host onode1.example.org.
Set facts.
/var/log/ovirt-engine/ansible-runner-service.log shows:
2020-12-13 11:39:20,849 - runner_service.services.playbook - DEBUG -
cb_event_handler event_data={'uuid':
'7c4b039d-6212-4b52-95fd-40d85036ed98', 'counter': 33, 'stdout':
'ok: [onode1.example.org]', 'start_line': 31, 'end_line': 32,
'runner_ident': '72737578-3d2f-11eb-b955-00163e33f845', 'event':
'runner_on_ok', 'pid': 603696, 'created':
'2020-12-13T10:39:20.847869', 'parent_uuid':
'00163e33-f845-ee64-acee-000000000013', 'event_data': {'playbook':
'ovirt-host-check-upgrade.yml', 'playbook_uuid':
'0eb5c935-9f17-4b07-961e-7e0a866dd5ed', 'play': 'all', 'play_uuid':
'00163e33-f845-ee64-acee-000000000008', 'play_pattern': 'all',
'task': 'Install Python3 for CentOS/RHEL8 hosts', 'task_uuid':
'00163e33-f845-ee64-acee-000000000013', 'task_action': 'yum',
'task_args': '', 'task_path':
'/usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-host-deploy-facts/tasks/main.yml:20',
'role': 'ovirt-host-deploy-facts', 'host': 'onode1.example.org',
'remote_addr': 'onode1.example.org', 'res': {'msg': 'Nothing to do',
'changed': False, 'results': [], 'rc': 0, 'invocation':
{'module_args': {'name': ['python3'], 'state': 'present',
'allow_downgrade': False, 'autoremove': False, 'bugfix': False,
'disable_gpg_check': False, 'disable_plugin': [], 'disablerepo': [],
'download_only': False, 'enable_plugin': [], 'enablerepo': [],
'exclude': [], 'installroot': '/', 'install_repoquery': True,
'install_weak_deps': True, 'security': False, 'skip_broken': False,
'update_cache': False, 'update_only': False, 'validate_certs': True,
'lock_timeout': 30, 'conf_file': None, 'disable_excludes': None,
'download_dir': None, 'list': None, 'releasever': None}},
'_ansible_no_log': False}, 'start': '2020-12-13T10:39:19.872585',
'end': '2020-12-13T10:39:20.847636', 'duration': 0.975051,
'event_loop': None, 'uuid': '7c4b039d-6212-4b52-95fd-40d85036ed98'}}
Is this a bug?
Best regards
Jonathan
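One detail worth noting in the quoted event: the result is 'msg': 'Nothing
to do' with 'changed': False, so the task appears to be an idempotent "make
sure python3 is present" check rather than an actual daily reinstall. That
can be confirmed on a node from the package transaction history:

  # no new transactions for python3 would mean nothing is really being reinstalled
  dnf history list python3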
Re: CentOS 8 is dead
by marcel d'heureuse
So, I think we should keep the live system on oVirt 4.3 to be sure it still works after 2021?
With which distribution do you get 10 years of support? CentOS 7 has support up to June 2024.
Has someone started to evaluate Gentoo?
marcel
Am 8. Dezember 2020 21:15:48 MEZ schrieb "Vinícius Ferrão via Users" <users(a)ovirt.org>:
>CentOS Stream is unstable at best.
>
>I’ve used it recently and it was just a mess. There’s no binary
>compatibility with the current point release and there’s no version
>pinning. So it will be really difficult to keep track of things.
>
>I’m really curious how oVirt will handle this.
>
>From: Wesley Stewart <wstewart3(a)gmail.com>
>Sent: Tuesday, December 8, 2020 4:56 PM
>To: Strahil Nikolov <hunter86_bg(a)yahoo.com>
>Cc: users <users(a)ovirt.org>
>Subject: [ovirt-users] Re: CentOS 8 is dead
>
>This is a little concerning.
>
>But it seems pretty easy to convert:
>https://www.centos.org/centos-stream/
>
>However I would be curious to see if someone tests this with having an
>active ovirt node!
>
>On Tue, Dec 8, 2020 at 2:39 PM Strahil Nikolov via Users
><users(a)ovirt.org<mailto:users@ovirt.org>> wrote:
>Hello All,
>
>I'm really worried about the following news:
>https://blog.centos.org/2020/12/future-is-centos-stream/
>
>Did anyone tried to port oVirt to SLES/openSUSE or any Debian-based
>distro ?
>
>Best Regards,
>Strahil Nikolov
>_______________________________________________
>Users mailing list -- users(a)ovirt.org<mailto:users@ovirt.org>
>To unsubscribe send an email to
>users-leave(a)ovirt.org<mailto:users-leave@ovirt.org>
>Privacy Statement: https://www.ovirt.org/privacy-policy.html
>oVirt Code of Conduct:
>https://www.ovirt.org/community/about/community-guidelines/
>List Archives:
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/HZC4D4OSYL6...
--
This message was sent from my Android device with K-9 Mail.