engine storage fail after upgrade
by ozmen62@hotmail.com
Hi,
After an upgrade from 4.3 to 4.4, some errors pop up on the engine.
It becomes unavailable for 2-3 minutes several times a day and then comes back.
After some research on the system I found some logs.
On the hosted_storage domain there are two events:
1- Failed to update VMs/Templates OVF data for Storage Domain hosted_storage in Data Center XXX
2- Failed to update OVF disks 9cbb34d0-06b0-4ce7-a3fa-7dfed689c442, OVF data isn't updated on those OVF stores (Data Center XXX, Storage Domain hosted_storage).
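For context, one way to inspect the OVF_STORE disks these events refer to is the oVirt Python SDK; a minimal sketch, assuming python3-ovirt-engine-sdk4 is installed, with URL and credentials as placeholders (nothing below comes from the reported environment):

import ovirtsdk4 as sdk

# List the OVF_STORE disks and their current status.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='***',
    insecure=True,  # or pass ca_file='/etc/pki/ovirt-engine/ca.pem'
)
try:
    for disk in connection.system_service().disks_service().list(search='name=OVF_STORE'):
        print(disk.id, disk.alias, disk.status)
finally:
    connection.close()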
Is there any idea how I can fix this?
3 years, 12 months
FW: oVirt 4.4 and Active directory
by Latchezar Filtchev
Hello ,
I think I resolved this issue. It is the dig response when resolving the domain name!
On CentOS 7 (bind-utils-9.11.4-16.P2.el7_8.6.x86_64; Windows AD level 2008R2), in my case dig returns an answer with:
;; ANSWER SECTION:
mb118.local. 600 IN A 192.168.1.7
The IP address returned is the address of the DC.
On CentOS 8 (bind-utils-9.11.20-5.el8.x86_64; same Domain Controller), dig returns an answer without an ;; ANSWER SECTION, i.e. the IP address of the DC cannot be identified.
The solution is to add the directive '+nocookie' after '+tcp' in the file /usr/share/ovirt-engine-extension-aaa-ldap/setup/plugins/ovirt-engine-extension-aaa-ldap/ldap/common.py.
The section starts at line 144:
    @staticmethod
    def _resolver(plugin, record, what):
        rc, stdout, stderr = plugin.execute(
            args=(
                (
                    plugin.command.get('dig'),
                    '+noall',
                    '+answer',
                    '+tcp',
                    '+nocookie',  # added: skip the EDNS COOKIE option, which 2008R2-era AD DNS mishandles
                    what,
                    record
                )
            ),
        )
        return stdout
With this change, execution of ovirt-engine-extension-aaa-ldap-setup completes successfully and joins a fresh install of oVirt 4.4 to Active Directory.
If the AD level is 2016, the '+nocookie' change is not needed.
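For anyone who wants to confirm the DNS behaviour before patching, the two dig invocations can be compared directly; a minimal sketch, assuming dig from bind-utils is on PATH and that the Global Catalog SRV record is _gc._tcp.<forest> (mb118.local is the forest name from this thread):

import subprocess

# Run dig against the AD DNS with and without '+nocookie' and show the answers.
RECORD = '_gc._tcp.mb118.local'  # assumed Global Catalog SRV record name
for extra in ([], ['+nocookie']):
    cmd = ['dig', '+noall', '+answer', '+tcp', *extra, RECORD, 'SRV']
    out = subprocess.run(cmd, capture_output=True, text=True).stdout.strip()
    print(' '.join(cmd))
    print(out if out else '(empty ANSWER SECTION)')

On a 2008R2-level domain the first invocation should come back empty and the second should list the DCs; on 2016 both should answer.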
Happy holidays to all of you!
Stay safe!
Thank you!
Best,
Latcho
From: Latchezar Filtchev
Sent: Tuesday, November 24, 2020 10:31 AM
To: users(a)ovirt.org
Subject: oVirt 4.4 and Active directory
Hello All,
Fresh standalone installation of oVirt 4.3 (CentOS 7). Execution of ovirt-engine-extension-aaa-ldap-setup completes normally and the DC is connected to AD (domain functional level: Windows Server 2008).
On the same hardware, fresh standalone installation of oVirt 4.4.
Installation of the engine completed with a warning:
2020-11-23 14:50:46,159+0200 WARNING otopi.plugins.ovirt_engine_common.base.network.hostname hostname._validateFQDNresolvability:308 Failed to resolve 44-8.mb118.local using DNS, it can be resolved only locally
Despite the warning, the engine portal is resolvable after installation.
Execution of ovirt-engine-extension-aaa-ldap-setup ends with:
[ INFO ] Stage: Environment customization
Welcome to LDAP extension configuration program
Available LDAP implementations:
1 - 389ds
2 - 389ds RFC-2307 Schema
3 - Active Directory
4 - IBM Security Directory Server
5 - IBM Security Directory Server RFC-2307 Schema
6 - IPA
7 - Novell eDirectory RFC-2307 Schema
8 - OpenLDAP RFC-2307 Schema
9 - OpenLDAP Standard Schema
10 - Oracle Unified Directory RFC-2307 Schema
11 - RFC-2307 Schema (Generic)
12 - RHDS
13 - RHDS RFC-2307 Schema
14 - iPlanet
Please select: 3
Please enter Active Directory Forest name: mb118.local
[ INFO ] Resolving Global Catalog SRV record for mb118.local
[WARNING] Cannot resolve Global Catalog SRV record for mb118.local. Please check you have entered correct Active Directory forest name and check that forest is resolvable by your system DNS servers
[ ERROR ] Failed to execute stage 'Environment customization': Active Directory forest is not resolvable, please make sure you've entered correct forest name. If for some reason you can't use forest and you need some special configuration instead, please refer to examples directory provided by ovirt-engine-extension-aaa-ldap package.
[ INFO ] Stage: Clean up
Log file is available at /tmp/ovirt-engine-extension-aaa-ldap-setup-20201123113909-bj749k.log:
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
Can someone advise on this?
Thank you!
Best,
Latcho
3 years, 12 months
Ovirt VM import issue
by Deekshith
Hi Team ,
We are not able to import a virtual machine from an OVA file into oVirt.
Kindly help us.
Regards
Deekshith
3 years, 12 months
Unable to live migrate a VM from 4.4.2 to 4.4.3 CentOS Linux host
by Gianluca Cecchi
Hello,
I was able to update an external CentOS Linux 8.2 standalone engine from
4.4.2 to 4.4.3 (see the dedicated thread).
Then I was able to put one 4.4.2 host (CentOS Linux 8.2 based, not oVirt
Node NG) into maintenance and run:
[root@ov301 ~]# dnf update
Last metadata expiration check: 0:27:11 ago on Wed 11 Nov 2020 08:48:04 PM CET.
Dependencies resolved.
==========================================================================================================
 Package                    Arch    Version                               Repository                 Size
==========================================================================================================
Installing:
 kernel                     x86_64  4.18.0-193.28.1.el8_2                 BaseOS                    2.8 M
 kernel-core                x86_64  4.18.0-193.28.1.el8_2                 BaseOS                     28 M
 kernel-modules             x86_64  4.18.0-193.28.1.el8_2                 BaseOS                     23 M
 ovirt-ansible-collection   noarch  1.2.1-1.el8                           ovirt-4.4                 276 k
     replacing ovirt-ansible-engine-setup.noarch 1.2.4-1.el8
     replacing ovirt-ansible-hosted-engine-setup.noarch 1.1.8-1.el8
Upgrading:
 ansible                    noarch  2.9.15-2.el8                          ovirt-4.4-centos-ovirt44   17 M
 bpftool                    x86_64  4.18.0-193.28.1.el8_2                 BaseOS                    3.4 M
 cockpit-ovirt-dashboard    noarch  0.14.13-1.el8                         ovirt-4.4                 3.5 M
 ioprocess                  x86_64  1.4.2-1.el8                           ovirt-4.4                  37 k
 kernel-tools               x86_64  4.18.0-193.28.1.el8_2                 BaseOS                    3.0 M
 kernel-tools-libs          x86_64  4.18.0-193.28.1.el8_2                 BaseOS                    2.8 M
 libiscsi                   x86_64  1.18.0-8.module_el8.2.0+524+f765f7e0  AppStream                  89 k
 nftables                   x86_64  1:0.9.3-12.el8_2.1                    BaseOS                    311 k
 ovirt-hosted-engine-ha     noarch  2.4.5-1.el8                           ovirt-4.4                 325 k
 ovirt-hosted-engine-setup  noarch  2.4.8-1.el8                           ovirt-4.4                 227 k
 ovirt-imageio-client       x86_64  2.1.1-1.el8                           ovirt-4.4                  21 k
 ovirt-imageio-common       x86_64  2.1.1-1.el8                           ovirt-4.4                 155 k
 ovirt-imageio-daemon       x86_64  2.1.1-1.el8                           ovirt-4.4                  15 k
 ovirt-provider-ovn-driver  noarch  1.2.32-1.el8                          ovirt-4.4                  27 k
 ovirt-release44            noarch  4.4.3-1.el8                           ovirt-4.4                  17 k
 python3-ioprocess          x86_64  1.4.2-1.el8                           ovirt-4.4                  33 k
 python3-nftables           x86_64  1:0.9.3-12.el8_2.1                    BaseOS                     25 k
 python3-ovirt-engine-sdk4  x86_64  4.4.6-1.el8                           ovirt-4.4                 560 k
 python3-perf               x86_64  4.18.0-193.28.1.el8_2                 BaseOS                    2.9 M
 python3-pyasn1             noarch  0.4.6-3.el8                           ovirt-4.4-centos-opstools 140 k
 python3-pyasn1-modules     noarch  0.4.6-3.el8                           ovirt-4.4-centos-opstools 151 k
 qemu-img                   x86_64  15:4.2.0-29.el8.6                     ovirt-4.4-advanced-virtualization 1.0 M
 qemu-kvm                   x86_64  15:4.2.0-29.el8.6                     ovirt-4.4-advanced-virtualization 118 k
 qemu-kvm-block-curl        x86_64  15:4.2.0-29.el8.6                     ovirt-4.4-advanced-virtualization 129 k
 qemu-kvm-block-gluster     x86_64  15:4.2.0-29.el8.6                     ovirt-4.4-advanced-virtualization 131 k
 qemu-kvm-block-iscsi       x86_64  15:4.2.0-29.el8.6                     ovirt-4.4-advanced-virtualization 136 k
 qemu-kvm-block-rbd         x86_64  15:4.2.0-29.el8.6                     ovirt-4.4-advanced-virtualization 130 k
 qemu-kvm-block-ssh         x86_64  15:4.2.0-29.el8.6                     ovirt-4.4-advanced-virtualization 131 k
 qemu-kvm-common            x86_64  15:4.2.0-29.el8.6                     ovirt-4.4-advanced-virtualization 1.2 M
 qemu-kvm-core              x86_64  15:4.2.0-29.el8.6                     ovirt-4.4-advanced-virtualization 3.4 M
 selinux-policy             noarch  3.14.3-41.el8_2.8                     BaseOS                    615 k
 selinux-policy-targeted    noarch  3.14.3-41.el8_2.8                     BaseOS                     15 M
 spice-server               x86_64  0.14.2-1.el8_2.1                      AppStream                 404 k
 tzdata                     noarch  2020d-1.el8                           BaseOS                    471 k
 vdsm                       x86_64  4.40.35.1-1.el8                       ovirt-4.4                 1.4 M
 vdsm-api                   noarch  4.40.35.1-1.el8                       ovirt-4.4                 106 k
 vdsm-client                noarch  4.40.35.1-1.el8                       ovirt-4.4                  24 k
 vdsm-common                noarch  4.40.35.1-1.el8                       ovirt-4.4                 136 k
 vdsm-hook-ethtool-options  noarch  4.40.35.1-1.el8                       ovirt-4.4                 9.8 k
 vdsm-hook-fcoe             noarch  4.40.35.1-1.el8                       ovirt-4.4                  10 k
 vdsm-hook-openstacknet     noarch  4.40.35.1-1.el8                       ovirt-4.4                  18 k
 vdsm-hook-vhostmd          noarch  4.40.35.1-1.el8                       ovirt-4.4                  17 k
 vdsm-hook-vmfex-dev        noarch  4.40.35.1-1.el8                       ovirt-4.4                  11 k
 vdsm-http                  noarch  4.40.35.1-1.el8                       ovirt-4.4                  15 k
 vdsm-jsonrpc               noarch  4.40.35.1-1.el8                       ovirt-4.4                  31 k
 vdsm-network               x86_64  4.40.35.1-1.el8                       ovirt-4.4                 331 k
 vdsm-python                noarch  4.40.35.1-1.el8                       ovirt-4.4                 1.3 M
 vdsm-yajsonrpc             noarch  4.40.35.1-1.el8                       ovirt-4.4                  40 k
Installing dependencies:
 NetworkManager-ovs         x86_64  1:1.22.14-1.el8                       ovirt-4.4-copr:copr.fedorainfracloud.org:networkmanager:NetworkManager-1.22  144 k

Transaction Summary
==========================================================================================================
Install   5 Packages
Upgrade  48 Packages

Total download size: 116 M
After reboot I can activate the host (strangely, I see many pop-up
messages about "finished activating host"), and the host is shown as:
OS Version: RHEL - 8.2 - 2.2004.0.2.el8
OS Description: CentOS Linux 8 (Core)
Kernel Version: 4.18.0 - 193.28.1.el8_2.x86_64
KVM Version: 4.2.0 - 29.el8.6
LIBVIRT Version: libvirt-6.0.0-25.2.el8
VDSM Version: vdsm-4.40.35.1-1.el8
SPICE Version: 0.14.2 - 1.el8_2.1
GlusterFS Version: [N/A]
CEPH Version: librbd1-12.2.7-9.el8
Open vSwitch Version: [N/A]
Nmstate Version: nmstate-0.2.10-1.el8
Kernel Features: MDS: (Vulnerable: Clear CPU buffers attempted, no
microcode; SMT vulnerable), L1TF: (Mitigation: PTE Inversion; VMX:
conditional cache flushes, SMT vulnerable), SRBDS: (Not affected),
MELTDOWN: (Mitigation: PTI), SPECTRE_V1: (Mitigation: usercopy/swapgs
barriers and __user pointer sanitization), SPECTRE_V2: (Mitigation: Full
generic retpoline, IBPB: conditional, IBRS_FW, STIBP: conditional, RSB
filling), ITLB_MULTIHIT: (KVM: Mitigation: Split huge pages),
TSX_ASYNC_ABORT: (Not affected), SPEC_STORE_BYPASS: (Mitigation:
Speculative Store Bypass disabled via prctl and seccomp)
VNC Encryption: Disabled
FIPS mode enabled: Disabled
while another host, still on 4.4.2, shows:
OS Version: RHEL - 8.2 - 2.2004.0.2.el8
OS Description: CentOS Linux 8 (Core)
Kernel Version: 4.18.0 - 193.19.1.el8_2.x86_64
KVM Version: 4.2.0 - 29.el8.3
LIBVIRT Version: libvirt-6.0.0-25.2.el8
VDSM Version: vdsm-4.40.26.3-1.el8
SPICE Version: 0.14.2 - 1.el8
GlusterFS Version: [N/A]
CEPH Version: librbd1-12.2.7-9.el8
Open vSwitch Version: [N/A]
Nmstate Version: nmstate-0.2.10-1.el8
Kernel Features: MDS: (Vulnerable: Clear CPU buffers attempted, no
microcode; SMT vulnerable), L1TF: (Mitigation: PTE Inversion; VMX:
conditional cache flushes, SMT vulnerable), SRBDS: (Not affected),
MELTDOWN: (Mitigation: PTI), SPECTRE_V1: (Mitigation: usercopy/swapgs
barriers and __user pointer sanitization), SPECTRE_V2: (Mitigation: Full
generic retpoline, IBPB: conditional, IBRS_FW, STIBP: conditional, RSB
filling), ITLB_MULTIHIT: (KVM: Mitigation: Split huge pages),
TSX_ASYNC_ABORT: (Not affected), SPEC_STORE_BYPASS: (Mitigation:
Speculative Store Bypass disabled via prctl and seccomp)
VNC Encryption: Disabled
FIPS mode enabled: Disabled
But if I try to move VMs away from the 4.4.2 host to the 4.4.3 one, I get
an error:
Failed to migrate VM c8client to Host ov301 . Trying to migrate to another
Host.
(btw: there is no other active host; there is a ov300 host that is in
maintenance)
No available host was found to migrate VM c8client to.
It seems the root error in engine.log is:
2020-11-11 21:44:42,487+01 ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-11) [] Migration of VM 'c8client' to host 'ov301'
failed: VM destroyed during the startup.
On the target host, in /var/log/libvirt/qemu/c8client.log, I see:
2020-11-11 20:44:40.981+0000: shutting down, reason=failed
In the target host's vdsm.log:
2020-11-11 21:44:39,958+0100 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC
call VM.migrationCreate took more than 1.00
seconds to succeed: 1.97 (__init__:316)
2020-11-11 21:44:40,230+0100 INFO (periodic/3) [vdsm.api] START
repoStats(domains=()) from=internal,
task_id=cb51fd4a-09d3-4d77-821b-391da2467487 (api:48)
2020-11-11 21:44:40,231+0100 INFO (periodic/3) [vdsm.api] FINISH repoStats
return={'fa33df49-b09d-4f86-9719-ede649542c21': {'code': 0, 'lastCheck':
'4.1', 'delay': '0.000836715', 'valid': True, 'version': 4, 'acquired':
True, 'actual': True}} from=internal,
task_id=cb51fd4a-09d3-4d77-821b-391da2467487 (api:54)
2020-11-11 21:44:41,929+0100 INFO (jsonrpc/5) [api.virt] START
destroy(gracefulAttempts=1) from=::ffff:10.4.192.32,52266,
vmId=c95da734-7ed1-4caa-bacb-3fa24f4efb56 (api:48)
2020-11-11 21:44:41,930+0100 INFO (jsonrpc/5) [virt.vm]
(vmId='c95da734-7ed1-4caa-bacb-3fa24f4efb56') Release VM resources (vm:4666)
2020-11-11 21:44:41,930+0100 INFO (jsonrpc/5) [virt.vm]
(vmId='c95da734-7ed1-4caa-bacb-3fa24f4efb56') Stopping connection
(guestagent:444)
2020-11-11 21:44:41,930+0100 INFO (jsonrpc/5) [vdsm.api] START
teardownImage(sdUUID='fa33df49-b09d-4f86-9719-ede649542c21',
spUUID='ef17cad6-7724-4cd8-96e3-9af6e529db51',
imgUUID='ff10a405-cc61-4d00-a83f-3ee04b19f381', volUUID=None)
from=::ffff:10.4.192.32,52266, task_id=177461c0-83d6-4c90-9c5c-3cc8ee9150c7
(api:48)
It seems that the OVN configuration was not preserved during the host
update.
Right now all my active VMs have at least one vNIC on OVN, so I cannot
test the scenario of migrating a VM without an OVN-based vNIC.
In fact, on the engine I see only the currently active 4.4.2 host (ov200) and
another host that is in maintenance (it is still on 4.3.10; I wanted to
update it to 4.4.2 but then realized 4.4.3 was already out...):
[root@ovmgr1 ovirt-engine]# ovn-sbctl show
Chassis "6a46b802-5a50-4df5-b1af-e73f58a57164"
hostname: "ov200.mydomain"
Encap geneve
ip: "10.4.192.32"
options: {csum="true"}
Port_Binding "2ae7391b-4297-4247-a315-99312f6392e6"
Port_Binding "c1ec60a4-b4f3-4cb5-8985-43c086156e83"
Port_Binding "174b69f8-00ed-4e25-96fc-7db11ea8a8b9"
Port_Binding "66359e79-56c4-47e0-8196-2241706329f6"
Port_Binding "ccbd6188-78eb-437b-9df9-9929e272974b"
Chassis "ddecf0da-4708-4f93-958b-6af365a5eeca"
hostname: "ov300.mydomain"
Encap geneve
ip: "10.4.192.33"
options: {csum="true"}
[root@ovmgr1 ovirt-engine]#
Any hint about why the OVN config was lost for ov301, and about the correct
procedure to restore it and make it persist across future updates?
NOTE: this cluster was on 4.3.10; when I updated it to 4.4.2 I noticed that
the OVN config was not retained and I had to run on the hosts:
[root@ov200 ~]# vdsm-tool ovn-config engine_ip ov200_ip_on_mgmt
Using default PKI files
Created symlink
/etc/systemd/system/multi-user.target.wants/openvswitch.service →
/usr/lib/systemd/system/openvswitch.service.
Created symlink
/etc/systemd/system/multi-user.target.wants/ovn-controller.service →
/usr/lib/systemd/system/ovn-controller.service.
[root@ov200 ~]#
Now it seems the problem persists...
Why do I have to run this after each update?
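Until that's answered, a hedged workaround sketch: after each host upgrade, check whether ovn-controller still points at the engine and re-apply vdsm-tool ovn-config if not. The ssl:<engine>:6642 ovn-remote value is an assumption about what vdsm-tool writes, and the two IPs are placeholders:

import subprocess

ENGINE_IP = '10.4.192.1'  # placeholder: engine address
HOST_IP = '10.4.192.34'   # placeholder: this host's address on the mgmt network

def ovs_external_id(key):
    # Read a key from the Open_vSwitch table's external_ids, where
    # vdsm-tool ovn-config stores the OVN settings.
    try:
        out = subprocess.run(
            ['ovs-vsctl', 'get', 'Open_vSwitch', '.', 'external_ids:' + key],
            capture_output=True, text=True, check=True)
        return out.stdout.strip().strip('"')
    except subprocess.CalledProcessError:
        return None  # key missing: OVN not configured on this host

if ovs_external_id('ovn-remote') != 'ssl:%s:6642' % ENGINE_IP:
    # Re-apply the OVN configuration, as done manually above.
    subprocess.run(['vdsm-tool', 'ovn-config', ENGINE_IP, HOST_IP], check=True)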
Gianluca
3 years, 12 months
Get Host Capabilities failed: Internal JSON-RPC error: {'reason': 'internal error: Duplicate key'}
by tommy
Hi everyone,
I got this error in my oVirt env:
VDSM ooengh1.tltd.com command Get Host Capabilities failed: Internal
JSON-RPC error: {'reason': 'internal error: Duplicate key'}
The systemctl status message is:
Dec 23 20:48:48 ooengh1.tltd.com vdsm[2431]: ERROR Internal server error
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request
    res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in _dynamicMethod
    result = fn(*methodArgs)
  File "<string>", line 2, in getCapabilities
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1371, in getCapabilities
    c = caps.get()
  File "/usr/lib/python2.7/site-packages/vdsm/host/caps.py", line 93, in get
    machinetype.compatible_cpu_models())
  File "/usr/lib/python2.7/site-packages/vdsm/common/cache.py", line 43, in __call__
    value = self.func(*args)
  File "/usr/lib/python2.7/site-packages/vdsm/machinetype.py", line 142, in compatible_cpu_models
    all_models = domain_cpu_models(c, arch, cpu_mode)
  File "/usr/lib/python2.7/site-packages/vdsm/machinetype.py", line 97, in domain_cpu_models
    domcaps = conn.getDomainCapabilities(None, arch, None, virt_type, 0)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 94, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3844, in getDomainCapabilities
    if ret is None: raise libvirtError('virConnectGetDomainCapabilities() failed', conn=self)
libvirtError: internal error: Duplicate key
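If it helps to narrow this down, the failing libvirt call can be reproduced outside vdsm; a minimal sketch, assuming the libvirt Python bindings are installed on the host, and using 'x86_64'/'kvm' as stand-ins for the arch and virt type vdsm passes:

import libvirt

# Call virConnectGetDomainCapabilities directly, as vdsm's machinetype.py does.
conn = libvirt.open('qemu:///system')
try:
    print(conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0))
finally:
    conn.close()

If this raises the same 'Duplicate key' error, the problem is in libvirt's domain capabilities output rather than in vdsm itself.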
Can anyone help me?
Thanks!
3 years, 12 months
Upgrade to 4.4.4
by Jonathan Baecker
Hello,
I'm running an upgrade from 4.4.3 to the latest 4.4.4 on a 3-node self-hosted
cluster. The engine upgrade went fine and now I'm on the host
upgrades. When I check for updates there, it shows only
*ovirt-node-ng-image-update-4.4.4-1.el8.noarch.rpm*. For that I have run
manual updates on each host, with maintenance mode -> yum update -> reboot.
When I now run *cat /etc/redhat-release* on the engine it shows:
*CentOS Linux release 8.3.2011*
But on my nodes it still shows:
*CentOS Linux release 8.2.2004 (Core)*
How can this be?
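On oVirt Node hosts the image update is applied as a new image layer rather than as ordinary package updates, so one hedged check is whether a 4.4.4 layer actually landed; a minimal sketch wrapping nodectl, assuming it is run on each node:

import subprocess

# 'nodectl info' lists the installed/current ovirt-node-ng image layers;
# if no 4.4.4 layer appears, the image update was not applied on this node.
print(subprocess.run(['nodectl', 'info'], capture_output=True, text=True).stdout)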
Best regards
Jonathan
3 years, 12 months