Rootless Podman container not displaying in oVirt Manager
by David White
I deployed a rootless Podman container on a RHEL 8 guest on Saturday (3 days ago).
At the time, I remember seeing some SELinux AVC "denied" messages related to qemu-guest-agent and podman. I didn't have time to look into it further, but I made a mental note to come back to it, because it really smelled like a bug to me.
So I came back to it this afternoon, and now I see nothing when I run `ausearch -m AVC`.
I restarted the `qemu-guest-agent` service with systemctl and ran `ausearch -m AVC` again, hoping to see some results, but I still don't.
I really wish that I had at least copied the AVC message I saw on Saturday for later investigation, but I fully expected to be able to find that information again today.
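In case it helps track those records down: a sketch of where the old AVC entries may have gone, assuming auditd's default log rotation (the sample audit line and its comm value below are hypothetical, just to illustrate what to grep for):

```shell
# ausearch normally reads only /var/log/audit/audit.log, which auditd
# rotates; Saturday's records may now live in a rotated file.
# On the host (hedged; adjust the date to your Saturday):
#   ausearch -m AVC -ts 05/29/2021                        # search from a start date
#   ausearch -m AVC --input /var/log/audit/audit.log.1    # search a rotated log
# A hypothetical AVC record, to show the field identifying the denied process:
line='type=AVC msg=audit(1622289600.123:456): avc:  denied  { read } for pid=1234 comm="qemu-ga" scontext=system_u:system_r:virt_qemu_ga_t:s0'
echo "$line" | grep -o 'comm="[^"]*"'
```

If the rotated logs have already aged out, the record is simply gone, which would explain the empty `ausearch` result.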
Regardless, I have a rootless container running on the guest VM.
When I login to the oVirt Manager and navigate to the VM -> Containers, I don't see anything listed.
On Saturday, I thought this was a bug with selinux and qemu-guest-agent.
But now, I have no idea.
Any thoughts?
3 years, 5 months
Booting VMs from RHEL ISOs fails
by David White
Ever since I deployed oVirt a couple months ago, I've been unable to boot any VMs from a RHEL ISO.
Ubuntu works fine, as does CentOS.
I've tried multiple RHEL 8 ISOs on multiple VMs.
I've destroyed and re-uploaded the ISOs, and I've also destroyed and re-created the VMs.
Every time I try to boot a VM from a RHEL 8 ISO, the console just tells me that "No boot device" was found.
Can anyone think of any reason why other ISOs would work, when RHEL ISOs do not work? How can I troubleshoot this further?
I really need to get a server up and running with Podman.
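One low-effort check, in case it helps: "No boot device" can simply mean the image was corrupted during download or upload. A sketch of verifying the ISO before uploading it again (the filename is a placeholder; compare against the SHA-256 published for your exact ISO on access.redhat.com):

```shell
# On your workstation (hedged; placeholder filename):
#   sha256sum rhel-8.4-x86_64-dvd.iso
# and compare the digest with the value listed on access.redhat.com.
# Demonstrating the mechanics on a stand-in file:
printf 'stand-in iso contents' > /tmp/demo.iso
actual=$(sha256sum /tmp/demo.iso | awk '{print $1}')
echo "hash length: ${#actual}"   # a SHA-256 digest is 64 hex characters
```

If the checksum matches, the next things I would look at are whether the VM's firmware type (legacy BIOS vs UEFI) suits the ISO, and whether the CD-ROM is actually first in the VM's boot order.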
4.3 engine cert
by KSNull Zero
Hello!
oVirt 4.3 engine.cer is about to expire.
What is the proper way to renew it so that there is no impact on the running hosts and workloads?
Thank you.
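Not an authoritative answer, but as far as I recall, re-running engine-setup on a 4.3 engine offers to renew the PKI when certificates are close to expiry. Before that, it is worth confirming the actual expiry date; a sketch (the engine.cer path is the usual oVirt location, adjust if yours differs — the runnable part below uses a throwaway self-signed cert purely to demonstrate the command):

```shell
# On the engine host (hedged; usual path for an oVirt engine):
#   openssl x509 -enddate -noout -in /etc/pki/ovirt-engine/certs/engine.cer
# Demonstrating the same check on a throwaway self-signed certificate:
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout /tmp/demo.key -out /tmp/demo.pem -days 30 2>/dev/null
openssl x509 -enddate -noout -in /tmp/demo.pem   # prints a notAfter= line
```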
oVirt Node 4.4.5 fails to upgrade to 4.4.6
by Guillaume Pavese
Maybe my problem is partly linked to an issue Jayme reported earlier, but the resolution that worked for him did not work for me.
I first upgraded my self-hosted engine from 4.4.5 to 4.4.6, then upgraded it to CentOS Stream and rebooted.
Then I tried to upgrade the cluster (3 oVirt Node hosts on 4.4.5), but it failed at the first host.
They are all ovirt-node hosts, originally installed with 4.4.5.
In Host Event Logs I saw :
...
Update of host ps-inf-prd-kvm-fr-510.hostics.fr.
Upgrade packages
Update of host ps-inf-prd-kvm-fr-510.hostics.fr.
Check if image was updated.
Update of host ps-inf-prd-kvm-fr-510.hostics.fr.
Check if image was updated.
Update of host ps-inf-prd-kvm-fr-510.hostics.fr.
Check if image-updated file exists.
Failed to upgrade Host ps-inf-prd-kvm-fr-510.hostics.fr (User:
gpav(a)hostics.fr).
ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch was installed according to yum.
I tried reinstalling it but got "Error in POSTIN scriptlet" errors:
Downloading Packages:
[SKIPPED] ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch.rpm: Already downloaded
...
Running scriptlet: ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
Reinstalling : ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
Running scriptlet: ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
warning: %post(ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch) scriptlet failed, exit status 1
Error in POSTIN scriptlet in rpm package ovirt-node-ng-image-update
---
Reinstalled:
ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
nodectl still showed it was on 4.4.5:
[root@ps-inf-prd-kvm-fr-510 ~]# nodectl info
bootloader:
default: ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64)
...
current_layer: ovirt-node-ng-4.4.5.1-0.20210323.0+1
I tried to upgrade the host again from oVirt, and this time there was no error and the host rebooted.
However, it did not become active after rebooting, and nodectl still showed 4.4.5 installed. Similar symptoms to the OP.
So I removed ovirt-node-ng-image-update, then reinstalled it and got no
error this time.
nodectl info now showed the 4.4.6 image as the bootloader default, although the current layer was still 4.4.5:
[root@ps-inf-prd-kvm-fr-510 yum.repos.d]# nodectl info
bootloader:
default: ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64)
...
current_layer: ovirt-node-ng-4.4.5.1-0.20210323.0+1
However, after the reboot the host was still shown as "unresponsive".
After marking it as "manually rebooted", putting it into maintenance mode and trying to activate it, the host was automatically fenced, and it was still unresponsive after this new reboot.
I put it into maintenance mode again and tried to reinstall it with "Deploy Hosted Engine" selected.
However, it failed: "Task Stop services failed to execute."
In
/var/log/ovirt-engine/host-deploy/ovirt-host-deploy-ansible-20210602082519-ps-inf-prd-kvm-fr-510.hostics.fr-0565d681-9406-4fa7-a444-7ee34804579c.log
:
"msg" : "Unable to stop service vdsmd.service: Job for vdsmd.service
canceled.\n", "failed" : true,
"msg" : "Unable to stop service supervdsmd.service: Job for
supervdsmd.service canceled.\n", failed" : true,
"stderr" : "Error: ServiceOperationError: _systemctlStop failed\nb'Job for
vdsmd.service canceled.\\n' ",
"stderr_lines" : [ "Error: ServiceOperationError: _systemctlStop failed",
"b'Job for vdsmd.service canceled.\\n' " ],
If I try it on the host, I get:
[root@ps-inf-prd-kvm-fr-510 ~]# systemctl stop vdsmd
Job for vdsmd.service canceled.
[root@ps-inf-prd-kvm-fr-510 ~]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: disabled)
Active: deactivating (stop-sigterm) since Wed 2021-06-02 08:49:21 CEST; 7s ago
Process: 54037 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
...
Jun 02 08:47:34 ps-inf-prd-kvm-fr-510.hostics.fr vdsm[54100]: WARN Failed
to retrieve Hosted Engine HA info, is Hosted Engine setup finished?
...
Jun 02 08:48:31 ps-inf-prd-kvm-fr-510.hostics.fr vdsm[54100]: WARN Worker
blocked: <Worker name=jsonrpc/4 running <Task <JsonRpcTask {'jsonrpc':
'2.0', 'method': 'StoragePool.connectStorageServer', 'params': {'storage>
File:
"/usr/lib64/python3.6/threading.py", line 884, in _bootstrap
self._bootstrap_inner()
Manually stopping vdsmd a second time then seems to work...
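For what it's worth, "Job for vdsmd.service canceled" usually means systemd had a conflicting job queued for the unit, not that the stop operation itself failed — which would fit stopping it working on the second try. A sketch of how one might inspect and work around that (unit names are taken from the deploy log above; the loop below only illustrates the stop order, it does not touch systemd):

```shell
# On the host (hedged):
#   systemctl list-jobs                 # see which queued job conflicts
#   systemctl stop supervdsmd vdsmd     # stop both units in one transaction
# Illustration of the intended stop order only:
for unit in supervdsmd vdsmd; do
  echo "stop: $unit"
done
```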
I tried rebooting again, but restarting the install always fails at the same spot.
What should I try to get this host back up?
Guillaume Pavese
Systems and Network Engineer
Interactiv-Group
ovirt 4 live migration problem
by david
engine 4.4.4.7-1.el8
Can someone explain what the problem is? When I try to migrate some VMs to another host in the cluster, the migration status reaches 99 percent and this error message appears:
+-------------------------+
Failed to migrate VM scom1-a66 to Host kvm3
No available host was found to migrate VM scom1-a66 to.
Trying to migrate to another Host.
+-------------------------+
I have attached engine.log; the last problematic correlation ID is d448f6ad-c549-4f72-b3da-bed927f32b23.
I have a two-node cluster.
The servers in the cluster (kvm4 and kvm3) have different hardware configurations and KVM versions.
I have also attached virsh capabilities and vdsm.log from both of them.
The strangest thing is that the logs give no reason at all why the migration failed; at least I didn't find anything.
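Since the two hosts differ in kernel, KVM, and libvirt versions, one thing worth ruling out is a CPU feature present on the source but missing on the destination; the engine can reject the destination at the very end of a migration for that. A sketch of comparing the flag sets (the flag lists below are stand-ins for illustration; on the real hosts take them from /proc/cpuinfo):

```shell
# On each host (hedged):  grep -m1 '^flags' /proc/cpuinfo
# Stand-in flag lists for the two hosts:
kvm4_flags='fpu vme sse sse2 avx avx2'
kvm3_flags='fpu vme sse sse2 avx'
# Flags present on kvm4 that kvm3 is missing:
comm -23 <(echo "$kvm4_flags" | tr ' ' '\n' | sort) \
         <(echo "$kvm3_flags" | tr ' ' '\n' | sort)
```

Any flag printed by such a diff on the real hosts would be a candidate for why one direction of migration fails while the other works.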
kvm4
====
OS Version: RHEL - 8.3 - 1.2011.el8
Kernel Version: 4.18.0 - 240.1.1.el8_3.x86_64
KVM Version: 5.1.0 - 14.el8.1
LIBVIRT Version: libvirt-6.6.0-7.1.el8
VDSM Version: vdsm-4.40.40-1.el8
SPICE Version: 0.14.3 - 3.el8
kvm3
====
OS Version: RHEL - 8.4 - 1.2105.el8
Kernel Version: 4.18.0 - 305.3.1.el8.x86_64
KVM Version: 5.1.0 - 20.el8
LIBVIRT Version: libvirt-6.6.0-13.el8
VDSM Version: vdsm-4.40.60.7-1.el8
SPICE Version: 0.14.3 - 4.el8
I am installing ovirt engine 4.3.10
by ken@everheartpartners.com
I am getting this error message while running the hosted-engine setup on CentOS 7.9:
[ INFO ] TASK [ovirt.hosted_engine_setup : Validate selected bridge interface if management bridge does not exists]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The selected network interface is not valid"}
I have two interfaces
enp6s0
enp11s0
enp11s0 is the public network.
enp6s0 is the storage network to the NetApp.
Any idea how to resolve this?
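For what it's worth, that validation typically rejects an interface that is down or already enslaved to a bridge or bond. A sketch of checking the NIC state before re-running the deploy (the `ip -br link` output below is simulated for illustration; run the real commands on the host, using the device names from the report above):

```shell
# On the host (hedged):
#   ip -br link show enp6s0 enp11s0
#   nmcli device status
# Parsing a simulated `ip -br link` output for illustration:
sample='enp11s0          UP             aa:bb:cc:dd:ee:02 <BROADCAST,MULTICAST,UP,LOWER_UP>
enp6s0           DOWN           aa:bb:cc:dd:ee:01 <BROADCAST,MULTICAST>'
echo "$sample" | awk '$2 != "UP" {print $1 " is not usable for the bridge"}'
```

If the interface you selected shows anything other than UP, bringing it up (or picking the other NIC) would be my first attempt before rerunning the setup.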
Fedora CoreOS
by lejeczek
Hi guys.
From what I gather there is no oVirt for Fedora CoreOS, but I thought I should ask here at the source: is there an oVirt for that OS, and if not as of now, are there any plans or discussions to make that a reality?
many thanks, L.
centos stream 8 ovirt 4.4.6 with intel silver 4216 deployment problem
by Anatoliy Radchenko
Hi,
I cannot deploy a hyperconverged 4.4.6 replica-3 setup on CentOS Stream 8 with the 4216 processor.
Deployment with the Intel 4210 is OK.
Deployment on CentOS 8.4 is OK.
The only difference in /proc/cpuinfo is the microcode:
CentOS 8: 0x5003006
CentOS 8 Stream: 0x5003102
I tried installing on CentOS 8 and successfully upgraded to CentOS Stream 8, but the engine cannot change the cluster compatibility version from 4.5 to 4.6, with the error:
"Cannot change Cluster compatibility version where there are no hosts in the Cluster which support that version"
and the log says:
"Host moved to Non-Operational state as host CPU type is not supported in this cluster compatibility version or is not supported at all"
Any ideas?
Thanks in advance.
Best regards.
PS: in case it's needed:
dell poweredge R440
cat /proc/cpuinfo:
processor : 31
vendor_id : GenuineIntel
cpu family : 6
model : 85
model name : Intel(R) Xeon(R) Silver 4216 CPU @ 2.10GHz
stepping : 7
microcode : 0x5003102
cpu MHz : 2445.408
cache size : 22528 KB
physical id : 0
siblings : 32
core id : 12
cpu cores : 16
apicid : 25
initial apicid : 25
fpu : yes
fpu_exception : yes
cpuid level : 22
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb
rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology
nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx
est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe
popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm
3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd
mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid
ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a
avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw
avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total
cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear
flush_l1d arch_capabilities
bugs : spectre_v1 spectre_v2 spec_store_bypass swapgs taa itlb_multihit
bogomips : 4200.00
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
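One hedged observation: I believe the newer microcode (0x5003102) comes from the Intel release that disabled TSX on Cascade Lake CPUs, which removes the hle and rtm CPU flags — and indeed neither flag appears in the flags list above. If the 4.6 cluster CPU type expects those flags, the host would be rejected exactly as in the log message. A quick way to check (the flags string below is a stand-in excerpt; on the host grep /proc/cpuinfo):

```shell
# On the host (hedged):  grep -m1 '^flags' /proc/cpuinfo
# Stand-in excerpt of the flags line shown above:
flags='... avx512f avx512dq ... avx512_vnni md_clear flush_l1d arch_capabilities'
for f in hle rtm; do
  grep -qw "$f" <<<"$flags" || echo "$f missing"
done
```

If both flags are missing on the Stream hosts but present on CentOS 8.4 with the older microcode, that would explain why the cluster compatibility bump fails only on Stream.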
rpm -qa|grep ovirt:
ovirt-ansible-collection-1.4.2-1.el8.noarch
ovirt-hosted-engine-ha-2.4.7-1.el8.noarch
ovirt-openvswitch-ovn-2.11-0.2020061801.el8.noarch
ovirt-host-dependencies-4.4.6-1.el8.x86_64
cockpit-ovirt-dashboard-0.15.0-1.el8.noarch
ovirt-openvswitch-2.11-0.2020061801.el8.noarch
ovirt-openvswitch-ovn-host-2.11-0.2020061801.el8.noarch
ovirt-imageio-common-2.1.1-1.el8.x86_64
python3-ovirt-engine-sdk4-4.4.12-1.el8.x86_64
ovirt-vmconsole-1.0.9-1.el8.noarch
ovirt-imageio-client-2.1.1-1.el8.x86_64
python3-ovirt-setup-lib-1.3.2-1.el8.noarch
ovirt-release44-4.4.6.3-1.el8.noarch
ovirt-provider-ovn-driver-1.2.33-1.el8.noarch
ovirt-host-4.4.6-1.el8.x86_64
ovirt-vmconsole-host-1.0.9-1.el8.noarch
ovirt-python-openvswitch-2.11-0.2020061801.el8.noarch
ovirt-openvswitch-ovn-common-2.11-0.2020061801.el8.noarch
ovirt-imageio-daemon-2.1.1-1.el8.x86_64
ovirt-hosted-engine-setup-2.5.0-1.el8.noarch
--
_____________________________________
Radchenko Anatolii
via Manoppello, 83 - 00132 Roma
tel. 06 96044328
cel. 329 6030076