[ANN] oVirt 4.4.3 Seventh Release Candidate is now available for testing
by Lev Veyde
The oVirt Project is pleased to announce the availability of oVirt 4.4.3
Seventh Release Candidate for testing, as of October 29th, 2020.
This update is the third in a series of stabilization updates to the 4.4
series.
How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
Note: Upgrading from 4.4.2 GA should not require re-doing these steps if they were already performed while upgrading from 4.4.1 to 4.4.2 GA; they only need to be done once.
Due to Bug 1837864 <https://bugzilla.redhat.com/show_bug.cgi?id=1837864> -
Host enter emergency mode after upgrading to latest build
If the root file system on your hosts is on a multipath device, be aware that after upgrading from 4.4.1 to 4.4.3 the host may enter emergency mode.
To prevent this, be sure to upgrade oVirt Engine first, then on each of your hosts (a command sketch follows these steps):
1. Remove the current LVM filter while still on 4.4.1, or in emergency mode (if already rebooted).
2. Reboot.
3. Upgrade to 4.4.3 (redeploy if already on 4.4.3).
4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in place.
5. Only if not using oVirt Node: run "dracut --force --add multipath" to rebuild the initramfs with the correct filter configuration.
6. Reboot.
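For reference, a minimal command sketch of steps 1, 4 and 5 on a non-oVirt-Node host (the sed line is only an illustration of commenting out the vdsm-generated filter in /etc/lvm/lvm.conf; review the file rather than relying on it blindly):

  # step 1: back up lvm.conf and comment out the existing vdsm-generated filter line
  cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.bak
  sed -i 's/^\( *\)filter = /\1# filter = /' /etc/lvm/lvm.conf
  # step 4 (after upgrading to 4.4.3): confirm a new filter is in place
  vdsm-tool config-lvm-filter
  # step 5 (only if not using oVirt Node): rebuild the initramfs with the correct filter
  dracut --force --add multipath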
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt <https://ovirt.org/community/about.html>.
Important notes before you try it
Please note this is a pre-release build.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
You may hit the following dependency issue:
Problem: cannot install the best update candidate for package
ovirt-engine-metrics-1.4.1.1-1.el8.noarch
- nothing provides rhel-system-roles >= 1.0-19 needed by
ovirt-engine-metrics-1.4.2-1.el8.noarch
In order to get rhel-system-roles >= 1.0-19 you need the https://buildlogs.centos.org/centos/8/virt/x86_64/ovirt-44/ repo, since that package can be promoted to the release repo only at 4.4.3 GA.
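A minimal sketch of enabling that repo with dnf (assumes dnf-plugins-core is installed; the name of the generated repo file will vary):

  dnf install -y dnf-plugins-core
  dnf config-manager --add-repo https://buildlogs.centos.org/centos/8/virt/x86_64/ovirt-44/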
Installation instructions
For installation instructions and additional information please refer to:
https://ovirt.org/documentation/
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.2 or newer
* CentOS Linux (or similar) 8.2 or newer
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.2 or newer
* CentOS Linux (or similar) 8.2 or newer
* oVirt Node 4.4 based on CentOS Linux 8.2 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
Additional Resources:
* Read more about the oVirt 4.4.3 release highlights:
http://www.ovirt.org/release/4.4.3/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.3/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
<https://www.redhat.com>
lev(a)redhat.com | lveyde(a)redhat.com
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
Hosted Engine install via cockpit - proxy issue
by simon@justconnect.ie
I am installing oVirt in a closed environment where internet access is controlled by proxies.
This works until the hosted engine install via cockpit, where it fails to complete because it appears to require internet access to the repository.
The only workaround I have found is to ssh onto the engine ‘mid install’ and add the proxy address to /etc/dnf/dnf.conf. After doing this the install is successful.
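A minimal sketch of the kind of line added to /etc/dnf/dnf.conf (the proxy host and port are placeholders):

  [main]
  proxy=http://proxy.example.com:3128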
Am I missing something or does this type of install require unfettered internet access?
when configuring multi path logical network selection area is empty, hence not able to configure multipathing
by dhanaraj.ramesh@yahoo.com
Hi team,
I have a 4-node cluster. On each node I configured 2 dedicated 10 GbE NICs, each on its own subnet (NIC 1 = 10.10.10.0/24, NIC 2 = 10.10.20.0/24), and on the array side I configured 2 targets on 10.10.10.0/24 and another 2 targets on 10.10.20.0/24. Without any errors I can see all four paths and am able to mount the iSCSI LUNs on all 4 nodes. However, when I try to configure multipathing at the Data Center level I can see all the paths but not the logical networks; the selection area stays empty, although I configured logical network labels for both NICs with dedicated names, ISCSI1 and ISCSI2. These logical networks are visible and green at the host network level with no errors; they are just L2 networks with IP configuration.
Am I missing something here? What else should I do to enable multipathing?
SPICE proxy behind nginx reverse proxy
by Colin Coe
Hi all
As per $SUBJECT, I have a SPICE proxy behind a reverse proxy which all
external VDI users are forced to use.
We've only started doing this in the last week or so, but I'm now getting heaps of reports of SPICE sessions "freezing". The testing that I've done shows that a SPICE session that is left unattended for 10-15 minutes hangs or freezes. By this I mean that you can't interact with the VM (using SPICE) via mouse or keyboard. Restarting the SPICE session fixes the problem.
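A symptom like this often points at the reverse proxy closing idle connections. A minimal nginx sketch of the timeout directives that may need raising in the location that fronts the SPICE proxy (the upstream address and values are placeholders, not taken from our setup):

  location / {
      # placeholder upstream for the existing SPICE proxy
      proxy_pass http://spice-proxy.internal:3128;
      # raise idle timeouts past the observed 10-15 minute freeze window
      proxy_read_timeout    3600s;
      proxy_send_timeout    3600s;
      proxy_connect_timeout 60s;
  }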
Is anyone else doing this? If so, have you noticed the SPICE session
freezes?
Thanks in advance
4 years
Re: vdsm with NFS storage reboot or shutdown more than 15 minutes. with error failed to unmount /rhev/data-center/mnt/172.18.81.14:_home_nfs_data: Device or resource busy
by Strahil Nikolov
When you set a host to maintenance from the oVirt API/UI, one of the tasks is to unmount any shared storage (including the NFS storage you have). Then rebooting should work like a charm.
Why did you reboot without putting the node in maintenance?
P.S.: Do not confuse rebooting with fencing - the latter kills the node ungracefully in order to safely start HA VMs on another node.
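For reference, a minimal sketch of moving a host to maintenance via the oVirt REST API before rebooting (the engine FQDN, credentials and host ID are placeholders):

  # deactivate (move to maintenance); wait until the host reports "maintenance" before rebooting
  curl -k -u 'admin@internal:PASSWORD' \
       -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
       -X POST -d '<action/>' \
       'https://engine.example.com/ovirt-engine/api/hosts/HOST_ID/deactivate'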
Best Regards,
Strahil Nikolov
On Tuesday, 27 October 2020, 10:27:01 GMT+2, lifuqiong(a)sunyainfo.com <lifuqiong(a)sunyainfo.com> wrote:
Hi everyone:
Description of problem:
When exec "reboot" or "shutdown -h 0" cmd on vdsm server, the vdsm server will reboot or shutdown more than 30 minutes. the screen shows '[FAILED] Failed unmouting /rhev/data-center/mnt/172.18.81.41:_home_nfs_data'.
other messages may be useful: [] watchdog: watchdog0: watchdog did not stop! []systemd-shutdown[5594]: Failed to unmount /rhev/data-center/mnt/172.18.81.14:_home_nfs_data: Device or resource busy
[]systemd-shutdown[1]: Failed to wait for process: Protocol error
[]systemd-shutdown[5595]: Failed to remount '/' read-only: Device or resource busy
[]systemd-shutdown[1]: Failed to wait for process: Protocol error
dracut Warning: Killing all remaining processes
dracut Warning: Killing all remaining processes
Version-Release number of selected component (if applicable):
Software Version:4.2.8.2-1.el7
OS: CentOS Linux release 7.5.1804 (Core)
How reproducible:
100%
Steps to Reproduce:
1. My test environment is one oVirt engine (172.17.81.17) with 4 vdsm servers. Execute the "reboot" command on one of the vdsm servers (172.17.99.105); the server takes more than 30 minutes to reboot.
ovirt-engine: 172.17.81.17/16
vdsm: 172.17.99.105/16
nfs server: 172.17.81.14/16
Actual results:
As above, the server takes more than 30 minutes to reboot.
Expected results:
The server reboots in a short time.
What I have done:
I captured packets on the NFS server while vdsm was rebooting and found that vdsm keeps sending NFS packets to the NFS server in a loop. Below are some logs from when I rebooted vdsm 172.17.99.105 at 2020-10-26 22:12:34. Some conclusions:
1. vdsm.log says: 2020-10-26 22:12:34,461+0800 ERROR (check/loop) [storage.Monitor] Error checking path /rhev/data-center/mnt/172.18.81.14:_home_nfs_data/02c4c6ea-7ca9-40f1-a1d0-f1636bc1824e/dom_md/metadata
2. sanlock.log says: 2020-10-26 22:13:05 1454 [3301]: s1 delta_renew read timeout 10 sec offset 0 /rhev/data-center/mnt/172.18.81.14:_home_nfs_data/02c4c6ea-7ca9-40f1-a1d0-f1636bc1824e/dom_md/ids
3. There are no other messages relevant to this issue. The logs are in the attachment. I would very much appreciate it if anyone can help me. Thank you.
Migrated disk from NFS to iSCSI - Unable to Boot
by Wesley Stewart
This is a new one.
I migrated from an NFS share to an iSCSI share on a small single-node oVirt system (currently running 4.3.10).
After migrating a disk (Virtual Machine -> Disk -> Move), I was unable to boot from it. The console tells me "No bootable device". This is a CentOS 7 guest.
I booted into a CentOS 7 ISO and tried a few things:
fdisk -l shows me a 40GB disk (/dev/sda).
fsck -f tells me "bad magic number in superblock".
lvdisplay and pvdisplay show nothing.
Even if I can't boot from the drive, I would love to recover a couple of documents from it if possible. Does anyone have any suggestions? I am running out of ideas.
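A minimal recovery sketch from the rescue environment, assuming the guest used LVM; the VG/LV names and mount point below are assumptions:

  # check whether a partition table survived the move
  fdisk -l /dev/sda
  # rescan for LVM physical volumes / volume groups / logical volumes
  pvscan
  vgscan
  lvscan
  # if a VG shows up, activate it and try a read-only mount of the root LV
  vgchange -ay
  mkdir -p /mnt/recover
  mount -o ro /dev/centos/root /mnt/recover    # LV path is an assumption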
oVirt 4.4 Upgrade issue with pki for libvirt-vnc
by lee.hanel@gmail.com
Greetings,
After reverting the ovirt_disk module to ovirt_disk_28, I'm able to get past that step; however, now I'm running into a new issue.
When it tries to start the VM after moving it from local storage to the hosted storage, I get the following errors:
2020-10-27 21:42:17,334+0000 ERROR (vm/9562a74e) [virt.vm] (vmId='9562a74e-2e6c-433b-ac0a-75a2acc7398d') The vm start process failed (vm:872)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 802, in _startUnderlyingVm
self._run()
File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 2615, in _run
dom.createWithFlags(flags)
File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
ret = f(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94, in wrapper
return func(inst, *args, **kwargs)
File "/usr/lib64/python3.6/site-packages/libvirt.py", line 1265, in createWithFlags
if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
libvirt.libvirtError: internal error: process exited while connecting to monitor: 2020-10-27T21:42:16.133517Z qemu-kvm: -object tls-creds-x509,id=vnc-tls-creds0,dir=/etc/pki/vdsm/libvirt-vnc,endpoint=server,verify-peer=no: Cannot load certificate '/etc/pki/vdsm/libvirt-vnc/server-cert.pem' & key '/etc/pki/vdsm/libvirt-vnc/server-key.pem': Error while reading file.
2020-10-27 21:42:17,335+0000 INFO (vm/9562a74e) [virt.vm] (vmId='9562a74e-2e6c-433b-ac0a-75a2acc7398d') Changed state to Down: internal error: process exited while connecting to monitor: 2020-10-27T21:42:16.133517Z qemu-kvm: -object tls-creds-x509,id=vnc-tls-creds0,dir=/etc/pki/vdsm/libvirt-vnc,endpoint=server,verify-peer=no: Cannot load certificate '/etc/pki/vdsm/libvirt-vnc/server-cert.pem' & key '/etc/pki/vdsm/libvirt-vnc/server-key.pem': Error while reading file. (code=1) (vm:1636)
The permissions on the files appear to be correct.
https://bugzilla.redhat.com/show_bug.cgi?id=1634742 appears similar, but I took the added precaution of completely removing the vdsm packages, /etc/pki/vdsm and /etc/libvirt.
Does anyone have any additional troubleshooting steps?
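A few checks that might help narrow down the "Error while reading file" on the VNC certificates (paths are taken from the log above; the expected ownership and RSA key type are assumptions based on a typical vdsm host):

  # qemu runs as an unprivileged user, so it needs read access to the key, not just root
  ls -lZ /etc/pki/vdsm/libvirt-vnc/
  # confirm the cert and key parse and actually match each other (assumes an RSA key)
  openssl x509 -noout -modulus -in /etc/pki/vdsm/libvirt-vnc/server-cert.pem | openssl md5
  openssl rsa  -noout -modulus -in /etc/pki/vdsm/libvirt-vnc/server-key.pem  | openssl md5
  # check the certificate validity dates
  openssl x509 -noout -dates -in /etc/pki/vdsm/libvirt-vnc/server-cert.pem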
Gluster Domain Storage full
by suporte@logicworks.pt
Hello,
I'm running oVirt Version 4.3.4.3-1.el7.
I have a small GlusterFS Domain storage brick on a dedicated filesystem serving only one VM.
The VM filled all the Domain storage.
The Linux filesystem has 4.1G available and is 100% used; the mounted brick has 0GB available and is 100% used.
I cannot do anything with this disk; for example, if I try to move it to another Gluster Domain Storage I get the message:
Error while executing action: Cannot move Virtual Disk. Low disk space on Storage Domain
Any idea?
Thanks
--
Jose Ferradeira
http://www.logicworks.pt
oVirt 4.4 Upgrade issue?
by lee.hanel@gmail.com
Greetings,
I'm trying to perform an upgrade from 4.3 to 4.4 using the hosted engine option of https://github.com/ovirt/ovirt-ansible-collection.
Unfortunately, when it goes to create the hosted engine disk images I get the following:
[Cannot move Virtual Disk. The operation is not supported for HOSTED_ENGINE_METADATA disks.]
This appears to be related to https://bugzilla.redhat.com/show_bug.cgi?id=1883817, BUT I've manually applied that patch. Shouldn't it be creating a NEW disk image instead of trying to move the existing one?
Any help would be appreciated.
Thanks,
Lee