VM causes CPU blocks and forces reboot of host
by dominik.drazyk@blackrack.pl
Hello,
I recently migrated our customer's cluster to newer hardware (CentOS 8 Stream, 4 hypervisor nodes, 3 of them hosting GlusterFS with 5x 6TB SSDs as JBOD, replica 3). About a month after the switch we started encountering frequent VM locks that require a host reboot to clear. Affected VMs cannot be powered down from the oVirt UI, and even when oVirt does manage to power them down, they cannot be booted again because the OS disk is reported as in use. Once I reboot the host, the VMs can be started and everything works fine.
In the vdsm log I see the following error:
2023-05-11 19:33:12,339+0200 ERROR (qgapoller/1) [virt.periodic.Operation] <bound method QemuGuestAgentPoller._poller
of <vdsm.virt.qemuguestagent.QemuGuestAgentPoller object at 0x7f553aa3e470>> operation failed (periodic:187)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/virt/periodic.py", line 185, in __call__
self._func()
File "/usr/lib/python3.6/site-packages/vdsm/virt/qemuguestagent.py", line 476, in _poller
vm_id, self._qga_call_get_vcpus(vm_obj))
File "/usr/lib/python3.6/site-packages/vdsm/virt/qemuguestagent.py", line 797, in _qga_call_get_vcpus
if 'online' in vcpus:
TypeError: argument of type 'NoneType' is not iterable
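As far as I can tell this traceback is a symptom rather than the cause: the guest agent poller got None back from the stuck VM and then crashed on the 'in' check. A quick, read-only way to see how libvirt itself views the affected domain (the VM name is a placeholder):
# list all domains and their states as libvirt reports them
virsh -r list --all
# show the state of the affected VM and the reason libvirt gives for it
virsh -r domstate <vm-name> --reason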
/var/log/messages reports:
May 11 19:35:15 kernel: task:CPU 7/KVM state:D stack: 0 pid: 7065 ppid: 1 flags: 0x80000182
May 11 19:35:15 kernel: Call Trace:
May 11 19:35:15 kernel: __schedule+0x2d1/0x870
May 11 19:35:15 kernel: schedule+0x55/0xf0
May 11 19:35:15 kernel: schedule_preempt_disabled+0xa/0x10
May 11 19:35:15 kernel: rwsem_down_read_slowpath+0x26e/0x3f0
May 11 19:35:15 kernel: down_read+0x95/0xa0
May 11 19:35:15 kernel: get_user_pages_unlocked+0x66/0x2a0
May 11 19:35:15 kernel: hva_to_pfn+0xf5/0x430 [kvm]
May 11 19:35:15 kernel: kvm_faultin_pfn+0x95/0x2e0 [kvm]
May 11 19:35:15 kernel: ? select_task_rq_fair+0x355/0x990
May 11 19:35:15 kernel: ? sched_clock+0x5/0x10
May 11 19:35:15 kernel: ? sched_clock_cpu+0xc/0xb0
May 11 19:35:15 kernel: direct_page_fault+0x3b4/0x860 [kvm]
May 11 19:35:15 kernel: kvm_mmu_page_fault+0x114/0x680 [kvm]
May 11 19:35:15 kernel: ? vmx_vmexit+0x9f/0x70d [kvm_intel]
May 11 19:35:15 kernel: ? vmx_vmexit+0xae/0x70d [kvm_intel]
May 11 19:35:15 kernel: ? gfn_to_pfn_cache_invalidate_start+0x190/0x190 [kvm]
May 11 19:35:15 kernel: vmx_handle_exit+0x177/0x770 [kvm_intel]
May 11 19:35:15 kernel: ? gfn_to_pfn_cache_invalidate_start+0x190/0x190 [kvm]
May 11 19:35:15 kernel: vcpu_enter_guest+0xafd/0x18e0 [kvm]
May 11 19:35:15 kernel: ? hrtimer_try_to_cancel+0x7b/0x100
May 11 19:35:15 kernel: kvm_arch_vcpu_ioctl_run+0x112/0x600 [kvm]
May 11 19:35:15 kernel: kvm_vcpu_ioctl+0x2c9/0x640 [kvm]
May 11 19:35:15 kernel: ? pollwake+0x74/0xa0
May 11 19:35:15 kernel: ? wake_up_q+0x70/0x70
May 11 19:35:15 kernel: ? __wake_up_common+0x7a/0x190
May 11 19:35:15 kernel: do_vfs_ioctl+0xa4/0x690
May 11 19:35:15 kernel: ksys_ioctl+0x64/0xa0
May 11 19:35:15 kernel: __x64_sys_ioctl+0x16/0x20
May 11 19:35:15 kernel: do_syscall_64+0x5b/0x1b0
May 11 19:35:15 kernel: entry_SYSCALL_64_after_hwframe+0x61/0xc6
May 11 19:35:15 kernel: RIP: 0033:0x7faf1a1387cb
May 11 19:35:15 kernel: Code: Unable to access opcode bytes at RIP 0x7faf1a1387a1.
May 11 19:35:15 kernel: RSP: 002b:00007fa6f5ffa6e8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
May 11 19:35:15 kernel: RAX: ffffffffffffffda RBX: 000055be52e7bcf0 RCX: 00007faf1a1387cb
May 11 19:35:15 kernel: RDX: 0000000000000000 RSI: 000000000000ae80 RDI: 0000000000000027
May 11 19:35:15 kernel: RBP: 0000000000000000 R08: 000055be5158c6a8 R09: 00000007d9e95a00
May 11 19:35:15 kernel: R10: 0000000000000002 R11: 0000000000000246 R12: 0000000000000000
May 11 19:35:15 kernel: R13: 000055be515bcfc0 R14: 00007fffec958800 R15: 00007faf1d6c6000
May 11 19:35:15 kernel: INFO: task worker:714626 blocked for more than 120 seconds.
May 11 19:35:15 kernel: Not tainted 4.18.0-489.el8.x86_64 #1
May 11 19:35:15 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
May 11 19:35:15 kernel: task:worker state:D stack: 0 pid:714626 ppid: 1 flags:0x00000180
May 11 19:35:15 kernel: Call Trace:
May 11 19:35:15 kernel: __schedule+0x2d1/0x870
May 11 19:35:15 kernel: schedule+0x55/0xf0
May 11 19:35:15 kernel: schedule_preempt_disabled+0xa/0x10
May 11 19:35:15 kernel: rwsem_down_read_slowpath+0x26e/0x3f0
May 11 19:35:15 kernel: down_read+0x95/0xa0
May 11 19:35:15 kernel: do_madvise.part.30+0x2c3/0xa40
May 11 19:35:15 kernel: ? syscall_trace_enter+0x1ff/0x2d0
May 11 19:35:15 kernel: ? __x64_sys_madvise+0x26/0x30
May 11 19:35:15 kernel: __x64_sys_madvise+0x26/0x30
May 11 19:35:15 kernel: do_syscall_64+0x5b/0x1b0
May 11 19:35:15 kernel: entry_SYSCALL_64_after_hwframe+0x61/0xc6
May 11 19:35:15 kernel: RIP: 0033:0x7faf1a138a4b
May 11 19:35:15 kernel: Code: Unable to access opcode bytes at RIP 0x7faf1a138a21.
May 11 19:35:15 kernel: RSP: 002b:00007faf151ea7f8 EFLAGS: 00000206 ORIG_RAX: 000000000000001c
May 11 19:35:15 kernel: RAX: ffffffffffffffda RBX: 00007faf149eb000 RCX: 00007faf1a138a4b
May 11 19:35:15 kernel: RDX: 0000000000000004 RSI: 00000000007fb000 RDI: 00007faf149eb000
May 11 19:35:15 kernel: RBP: 0000000000000000 R08: 00000007faf080ba R09: 00000000ffffffff
May 11 19:35:15 kernel: R10: 00007faf151ea760 R11: 0000000000000206 R12: 00007faf15aec48e
May 11 19:35:15 kernel: R13: 00007faf15aec48f R14: 00007faf151eb700 R15: 00007faf151ea8c0
May 11 19:35:15 kernel: INFO: task worker:714628 blocked for more than 120 seconds.
May 11 19:35:15 kernel: Not tainted 4.18.0-489.el8.x86_64 #1
May 11 19:35:15 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Installed VDSM packages:
vdsm-api-4.50.3.4-1.el8.noarch
vdsm-network-4.50.3.4-1.el8.x86_64
vdsm-yajsonrpc-4.50.3.4-1.el8.noarch
vdsm-http-4.50.3.4-1.el8.noarch
vdsm-client-4.50.3.4-1.el8.noarch
vdsm-4.50.3.4-1.el8.x86_64
vdsm-gluster-4.50.3.4-1.el8.x86_64
vdsm-python-4.50.3.4-1.el8.noarch
vdsm-jsonrpc-4.50.3.4-1.el8.noarch
vdsm-common-4.50.3.4-1.el8.noarch
Libvirt:
libvirt-client-8.0.0-14.module_el8.8.0+1257+0c3374ae.x86_64
libvirt-daemon-driver-nodedev-8.0.0-14.module_el8.8.0+1257+0c3374ae.x86_64
libvirt-daemon-driver-storage-logical-8.0.0-14.module_el8.8.0+1257+0c3374ae.x86_64
libvirt-8.0.0-14.module_el8.8.0+1257+0c3374ae.x86_64
libvirt-daemon-driver-network-8.0.0-14.module_el8.8.0+1257+0c3374ae.x86_64
libvirt-daemon-driver-qemu-8.0.0-14.module_el8.8.0+1257+0c3374ae.x86_64
libvirt-daemon-driver-storage-scsi-8.0.0-14.module_el8.8.0+1257+0c3374ae.x86_64
libvirt-daemon-driver-storage-core-8.0.0-14.module_el8.8.0+1257+0c3374ae.x86_64
libvirt-daemon-config-network-8.0.0-14.module_el8.8.0+1257+0c3374ae.x86_64
libvirt-daemon-driver-storage-iscsi-8.0.0-14.module_el8.8.0+1257+0c3374ae.x86_64
libvirt-daemon-driver-storage-rbd-8.0.0-14.module_el8.8.0+1257+0c3374ae.x86_64
libvirt-daemon-driver-storage-8.0.0-14.module_el8.8.0+1257+0c3374ae.x86_64
libvirt-libs-8.0.0-14.module_el8.8.0+1257+0c3374ae.x86_64
libvirt-daemon-8.0.0-14.module_el8.8.0+1257+0c3374ae.x86_64
libvirt-daemon-config-nwfilter-8.0.0-14.module_el8.8.0+1257+0c3374ae.x86_64
libvirt-daemon-driver-secret-8.0.0-14.module_el8.8.0+1257+0c3374ae.x86_64
libvirt-daemon-driver-storage-disk-8.0.0-14.module_el8.8.0+1257+0c3374ae.x86_64
libvirt-daemon-driver-storage-mpath-8.0.0-14.module_el8.8.0+1257+0c3374ae.x86_64
libvirt-daemon-driver-storage-gluster-8.0.0-14.module_el8.8.0+1257+0c3374ae.x86_64
python3-libvirt-8.0.0-2.module_el8.7.0+1218+f626c2ff.x86_64
libvirt-daemon-driver-nwfilter-8.0.0-14.module_el8.8.0+1257+0c3374ae.x86_64
libvirt-lock-sanlock-8.0.0-14.module_el8.8.0+1257+0c3374ae.x86_64
libvirt-daemon-driver-interface-8.0.0-14.module_el8.8.0+1257+0c3374ae.x86_64
libvirt-daemon-driver-storage-iscsi-direct-8.0.0-14.module_el8.8.0+1257+0c3374ae.x86_64
libvirt-daemon-kvm-8.0.0-14.module_el8.8.0+1257+0c3374ae.x86_64
While the VMs are locked they do not respond on the network, and I cannot use the VNC console (or any other console) to check what is happening from the VM's perspective. The host cannot even list its running processes. There are plenty of resources left; each host runs about 30-35 VMs.
At first I thought it might be related to GlusterFS (I use Gluster on other clusters and it usually works fine), so we migrated all VMs back to the old NFS storage. The problem came back today on two hosts. I do not see such issues on another cluster that runs Rocky 8.6 with hyperconverged GlusterFS, so as a last resort I'll be migrating from CentOS 8 Stream to Rocky 8.
Has anyone observed such issues with oVirt hosts on CentOS 8 Stream? Any form of help is welcome, as I'm running out of ideas.
Patching issue on RHEL 8.7-8.8
by jamesroark@gmail.com
After applying the RHEL updates for the recent 8.7 to 8.8 upgrade, I started getting package problems on my oVirt engine and hosts.
Perhaps someone can help me figure out what is going on.
Environment:
oVirt 4.5.4-1.el8
Stand alone engine - Red Hat Enterprise Linux release 8.7 (Ootpa)
RHEL hosts - Red Hat Enterprise Linux release 8.7 (Ootpa)
# yum update
Updating Subscription Management repositories.
Last metadata expiration check: 3:01:05 ago on Fri 19 May 2023 09:43:55 AM EDT.
Error:
Problem: installed package centos-stream-release-8.6-1.el8.noarch obsoletes redhat-release < 9 provided by redhat-release-8.8-0.8.el8.x86_64
- cannot install the best update candidate for package redhat-release-8.7-0.3.el8.x86_64
- problem with installed package centos-stream-release-8.6-1.el8.noarch
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
# yum update --nobest
Updating Subscription Management repositories.
Last metadata expiration check: 3:01:25 ago on Fri 19 May 2023 09:43:55 AM EDT.
Dependencies resolved.
Problem: installed package centos-stream-release-8.6-1.el8.noarch obsoletes redhat-release < 9 provided by redhat-release-8.8-0.8.el8.x86_64
- cannot install the best update candidate for package redhat-release-8.7-0.3.el8.x86_64
- problem with installed package centos-stream-release-8.6-1.el8.noarch
Nothing to do.
Complete!
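In case it helps with diagnosis: the conflict is that centos-stream-release (which presumably came in with the oVirt repository setup) obsoletes redhat-release < 9, so the 8.8 redhat-release update can never be installed. A couple of diagnostic commands (a sketch, not a recommended fix) to see what depends on the conflicting package and to update around it:
# show which installed packages depend on the conflicting release package
rpm -q --whatrequires centos-stream-release
# retry the update while leaving the RHEL release package alone
dnf update --exclude=redhat-release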
I've applied the suggestions from https://ovirt.org/download/install_on_rhel.html.
When I try to "Install nightly oVirt master snapshot" as suggested, I also get an error:
# dnf copr enable -y ovirt/ovirt-master-snapshot centos-stream-8
Updating Subscription Management repositories.
Enabling a Copr repository. Please note that this repository is not part
of the main distribution, and quality may vary.
The Fedora Project does not exercise any power over the contents of
this repository beyond the rules outlined in the Copr FAQ at
<https://docs.pagure.org/copr.copr/user_documentation.html#what-i-can-buil...>,
and packages are not held to any quality or security level.
Please do not file bug reports about these packages in Fedora
Bugzilla. In case of problems, contact the owner of this repository.
Error: Failed to connect to https://copr.fedorainfracloud.org/coprs/ovirt/ovirt-master-snapshot/repo/...: Error
Ovirt 4.5.0 Support (End of Life Date)
by masood.ahmed@digital.mod.uk
Hi,
I am working on a project and we would like to utilise oVirt 4.5 as the platform for our enterprise infrastructure.
We have arrived at this juncture because of a range of considerations related to hardware and software availability and compatibility.
My question is whether there is ongoing vendor/professional support for oVirt moving forward, as one of my colleagues spoke with Red Hat, who stated that they are going to end their support for the product.
We will not gain security accreditation for our project unless professional support is available; community-only support is not sufficient.
Any ideas of firms that offer a support contract for oVirt 4.5.0?
thanks
Constant error in engine log and oVirt dashboard
by eevans@digitaldatatechs.com
My setup:
CentOS 7 with Gluster 9.6 managed outside of the engine; it includes 3 servers plus the engine host.
The engine is standalone but controls the 3 hosts serving the managed Gluster volumes. One of the hosts manages the Gluster volume. I have used this setup for years and have never seen this on the engine host, but I have on the Gluster volume servers.
engine.log: 2023-05-22 15:58:33,020-04 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler6) [235e97d6] FINISH, GlusterServersListVDSCommand, return: [192.168.254.252/24:CONNECTED, kvm04.digitaldatatechs.com:CONNECTED, kvm02.digitaldatatechs.com:CONNECTED], log id: 513632a0
2023-05-22 15:58:33,032-04 INFO [org.ovirt.engine.core.vdsbroker.gluster.AddGlusterServerVDSCommand] (DefaultQuartzScheduler6) [235e97d6] START, AddGlusterServerVDSCommand(HostName = kvm02, AddGlusterServerVDSParameters:{hostId='41b5df10-ed0a-4d6a-b375-c77ae9b681b6'}), log id: 6b6b5c97
2023-05-22 15:58:36,237-04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler6) [235e97d6] EVENT_ID: GLUSTER_SERVER_ADD_FAILED(4,436), Failed to add host kvm02 into Cluster ovirt_cluster. Add host failed: rc=107 out=() err=['Probe returned with Transport endpoint is not connected']
This shows up about every 90 seconds. There are no other issues: the host is up and the volume is up.
I triple-checked firewalld and even disabled it as a test. SELinux is disabled.
I'm not sure what I'm missing. Everything looks great in Cockpit.
Any help is appreciated.
Thanks in advance.
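For reference, the rc=107 "Transport endpoint is not connected" in the log above comes from the gluster peer probe that the engine keeps retrying, so the Gluster-side checks would look something like this (run on the affected nodes; the hostname is a placeholder):
# confirm glusterd is running and the peers all show as connected
systemctl status glusterd
gluster peer status
gluster pool list
# confirm the gluster management port is reachable from the probing host
nc -zv kvm02.example.com 24007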
Ovirt 4.5 deploy failed
by Selçuk N
Hello,
I'm trying to install oVirt 4.5 and I get the following error.
FAILED! => {"attempts": 50, "changed": false, "msg": "Error during SSO
authentication access_denied : Cannot authenticate user Invalid user
credentials."}
The node and hosted-engine VM passwords are identical, and there is no
problem with them.
Here are detailed logs.
What am I doing wrong? Thank you. Regards
2023-05-18 18:49:25,924+0000 DEBUG ansible on_any args localhost TASK:
ovirt.ovirt.hosted_engine_setup : include_tasks kwargs
2023-05-18 18:49:26,601+0000 INFO ansible ok {'status': 'OK',
'ansible_type': 'task', 'ansible_playbook':
'/usr/share/ovirt-hosted-engine-setup/he_ansible/trigger_role.yml',
'ansible_host': 'localhost', 'ansible_task': '', 'task_duration': 1}
2023-05-18 18:49:26,601+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f9707aa5580> kwargs
2023-05-18 18:49:26,644+0000 DEBUG ansible on_any args
/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/auth_sso.yml
(args={} vars={}): [localhost] kwargs
2023-05-18 18:49:27,317+0000 INFO ansible task start {'status': 'OK',
'ansible_type': 'task', 'ansible_playbook':
'/usr/share/ovirt-hosted-engine-setup/he_ansible/trigger_role.yml',
'ansible_task': 'ovirt.ovirt.hosted_engine_setup : Obtain SSO token using
username/password credentials'}
2023-05-18 18:49:27,317+0000 DEBUG ansible on_any args TASK:
ovirt.ovirt.hosted_engine_setup : Obtain SSO token using username/password
credentials kwargs is_conditional:False
2023-05-18 18:49:27,318+0000 DEBUG ansible on_any args localhost TASK:
ovirt.ovirt.hosted_engine_setup : Obtain SSO token using username/password
credentials kwargs
2023-05-18 18:49:29,447+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f970781b430> kwargs
2023-05-18 18:49:40,175+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f97079b9190> kwargs
2023-05-18 18:49:50,869+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f97079cb460> kwargs
2023-05-18 18:50:01,565+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f97079b9190> kwargs
2023-05-18 18:50:12,258+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f9707864b50> kwargs
2023-05-18 18:50:22,932+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f97079b9b80> kwargs
2023-05-18 18:50:33,638+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f9709cb0370> kwargs
2023-05-18 18:50:44,319+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f97079b9190> kwargs
2023-05-18 18:50:54,990+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f9709cb0370> kwargs
2023-05-18 18:51:05,683+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f97079b9b80> kwargs
2023-05-18 18:51:16,375+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f9709cb0370> kwargs
2023-05-18 18:51:27,062+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f97079b9190> kwargs
2023-05-18 18:51:37,766+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f9709cb0a60> kwargs
2023-05-18 18:51:48,422+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f97077c3670> kwargs
2023-05-18 18:51:59,125+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f97078c2a30> kwargs
2023-05-18 18:52:09,821+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f9707aa5730> kwargs
2023-05-18 18:52:20,489+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f9707a41ac0> kwargs
2023-05-18 18:52:31,188+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f970781b700> kwargs
2023-05-18 18:52:41,873+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f9707894580> kwargs
2023-05-18 18:52:52,559+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f9707894760> kwargs
2023-05-18 18:53:03,260+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f97079f3d00> kwargs
2023-05-18 18:53:13,923+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f9707aa14f0> kwargs
2023-05-18 18:53:24,622+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f9707a01310> kwargs
2023-05-18 18:53:35,311+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f97076b0700> kwargs
2023-05-18 18:53:45,964+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f9707aa14f0> kwargs
2023-05-18 18:53:56,661+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f970771afa0> kwargs
2023-05-18 18:54:07,346+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f9707aa15e0> kwargs
2023-05-18 18:54:18,018+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f9707aa5730> kwargs
2023-05-18 18:54:28,715+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f9707aa5c40> kwargs
2023-05-18 18:54:39,387+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f97076f8cd0> kwargs
2023-05-18 18:54:50,067+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f9707aa17c0> kwargs
2023-05-18 18:55:00,773+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f9707aa5730> kwargs
2023-05-18 18:55:11,426+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f9707a36c40> kwargs
2023-05-18 18:55:22,121+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f9707aa5c40> kwargs
2023-05-18 18:55:32,809+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f97077b5880> kwargs
2023-05-18 18:55:43,463+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f97075dc0d0> kwargs
2023-05-18 18:55:54,157+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f9707aa14f0> kwargs
2023-05-18 18:56:04,841+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f9707aa5c40> kwargs
2023-05-18 18:56:15,500+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f97079b9250> kwargs
2023-05-18 18:56:26,193+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f97076b05e0> kwargs
2023-05-18 18:56:36,875+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f970787c790> kwargs
2023-05-18 18:56:47,557+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f970769ba00> kwargs
2023-05-18 18:56:58,257+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f9707aa14f0> kwargs
2023-05-18 18:57:08,912+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f97076f8c10> kwargs
2023-05-18 18:57:19,612+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f97076f8040> kwargs
2023-05-18 18:57:30,302+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f97077b5160> kwargs
2023-05-18 18:57:40,969+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f970769b160> kwargs
2023-05-18 18:57:51,664+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f9707516b80> kwargs
2023-05-18 18:58:02,347+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f97073f1c40> kwargs
2023-05-18 18:58:13,021+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f970769bb20> kwargs
2023-05-18 18:58:24,784+0000 DEBUG var changed: host "localhost" var
"ansible_failed_task" type "<class 'dict'>" value: "{
"action": "ovirt_auth",
"any_errors_fatal": false,
"args": {
"_ansible_check_mode": false,
"_ansible_debug": false,
"_ansible_diff": false,
"_ansible_keep_remote_files": false,
"_ansible_module_name": "ovirt_auth",
"_ansible_no_log": false,
"_ansible_remote_tmp": "~/.ansible/tmp",
"_ansible_selinux_special_fs": [
"fuse",
"nfs",
"vboxsf",
"ramfs",
"9p",
"vfat"
],
"_ansible_shell_executable": "/bin/sh",
"_ansible_socket": null,
"_ansible_string_conversion_action": "warn",
"_ansible_syslog_facility": "LOG_USER",
"_ansible_tmpdir":
"/root/.ansible/tmp/ansible-tmp-1684436303.0466125-36573-187227954552032/",
"_ansible_verbosity": 0,
"_ansible_version": "2.13.5",
"insecure": true
},
"async": 0,
"async_val": 0,
"become": false,
"become_exe": null,
"become_flags": null,
"become_method": "sudo",
"become_user": null,
"changed_when": [],
"check_mode": false,
"collections": [
"ovirt.ovirt",
"ansible.builtin"
],
"connection": "local",
"debugger": null,
"delay": 10,
"delegate_facts": null,
"delegate_to": null,
"diff": false,
"environment": [
{
"OVIRT_PASSWORD": "**FILTERED**",
"OVIRT_URL": "https://eng.xxxx.net/ovirt-engine/api",
"OVIRT_USERNAME": "admin@internal"
}
],
"failed_when": [],
"finalized": true,
"ignore_errors": null,
"ignore_unreachable": null,
"loop": null,
"loop_control": null,
"loop_with": null,
"module_defaults": [],
"name": "Obtain SSO token using username/password credentials",
"no_log": null,
"notify": null,
"poll": 15,
"port": null,
"register": "ovirt_sso_auth",
"remote_user": null,
"retries": 50,
"run_once": null,
"squashed": true,
"tags": [
"never",
"bootstrap_local_vm",
"never"
],
"throttle": 0,
"timeout": 0,
"until": [
"ovirt_sso_auth is succeeded"
],
"uuid": "1866daab-9d24-cf71-0985-00000000181c",
"vars": {},
"when": []
}"
2023-05-18 18:58:24,784+0000 DEBUG var changed: host "localhost" var
"ansible_failed_result" type "<class 'dict'>" value: "{
"_ansible_no_log": null,
"_ansible_parsed": true,
"attempts": 50,
"changed": false,
"exception": "Traceback (most recent call last):\n File
\"/tmp/ansible_ovirt_auth_payload_x_h__rn9/ansible_ovirt_auth_payload.zip/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_auth.py\",
line 287, in main\n File
\"/usr/lib64/python3.6/site-packages/ovirtsdk4/__init__.py\", line 382, in
authenticate\n self._sso_token = self._get_access_token()\n File
\"/usr/lib64/python3.6/site-packages/ovirtsdk4/__init__.py\", line 627, in
_get_access_token\n sso_error[1]\novirtsdk4.AuthError: Error during SSO
authentication access_denied : Cannot authenticate user Invalid user
credentials.\n",
"failed": true,
"invocation": {
"module_args": {
"ca_file": null,
"compress": true,
"headers": null,
"hostname": null,
"insecure": true,
"kerberos": false,
"ovirt_auth": null,
"password": null,
"state": "present",
"timeout": 0,
"token": null,
"url": null,
"username": null
}
},
"msg": "Error during SSO authentication access_denied : Cannot
authenticate user Invalid user credentials."
}"
2023-05-18 18:58:24,784+0000 ERROR ansible failed {
"ansible_host": "localhost",
"ansible_playbook":
"/usr/share/ovirt-hosted-engine-setup/he_ansible/trigger_role.yml",
"ansible_result": {
"_ansible_no_log": null,
"attempts": 50,
"changed": false,
"exception": "Traceback (most recent call last):\n File
\"/tmp/ansible_ovirt_auth_payload_x_h__rn9/ansible_ovirt_auth_payload.zip/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_auth.py\",
line 287, in main\n File
\"/usr/lib64/python3.6/site-packages/ovirtsdk4/__init__.py\", line 382, in
authenticate\n self._sso_token = self._get_access_token()\n File
\"/usr/lib64/python3.6/site-packages/ovirtsdk4/__init__.py\", line 627, in
_get_access_token\n sso_error[1]\novirtsdk4.AuthError: Error during SSO
authentication access_denied : Cannot authenticate user Invalid user
credentials.\n",
"invocation": {
"module_args": {
"ca_file": null,
"compress": true,
"headers": null,
"hostname": null,
"insecure": true,
"kerberos": false,
"ovirt_auth": null,
"password": null,
"state": "present",
"timeout": 0,
"token": null,
"url": null,
"username": null
}
},
"msg": "Error during SSO authentication access_denied : Cannot
authenticate user Invalid user credentials."
},
"ansible_task": "Obtain SSO token using username/password credentials",
"ansible_type": "task",
"status": "FAILED",
"task_duration": 538
}
2023-05-18 18:58:24,784+0000 DEBUG ansible on_any args
<ansible.executor.task_result.TaskResult object at 0x7f97073f1520> kwargs
ignore_errors:None
2023-05-18 18:58:25,487+0000 DEBUG var changed: host "localhost" var
"ovirt_sso_auth" type "<class 'dict'>" value: "{
"attempts": 50,
"changed": false,
"exception": "Traceback (most recent call last):\n File
\"/tmp/ansible_ovirt_auth_payload_x_h__rn9/ansible_ovirt_auth_payload.zip/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_auth.py\",
line 287, in main\n File
\"/usr/lib64/python3.6/site-packages/ovirtsdk4/__init__.py\", line 382, in
authenticate\n self._sso_token = self._get_access_token()\n File
\"/usr/lib64/python3.6/site-packages/ovirtsdk4/__init__.py\", line 627, in
_get_access_token\n sso_error[1]\novirtsdk4.AuthError: Error during SSO
authentication access_denied : Cannot authenticate user Invalid user
credentials.\n",
"failed": true,
"msg": "Error during SSO authentication access_denied : Cannot
authenticate user Invalid user credentials."
}"
2023-05-18 18:58:25,487+0000 INFO ansible task start {'status': 'OK',
'ansible_type': 'task', 'ansible_playbook':
'/usr/share/ovirt-hosted-engine-setup/he_ansible/trigger_role.yml',
'ansible_task': 'ovirt.ovirt.hosted_engine_setup : Sync on engine machine'}
2023-05-18 18:58:25,487+0000 DEBUG ansible on_any args TASK:
ovirt.ovirt.hosted_engine_setup : Sync on engine machine kwargs
is_conditional:False
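From the log, the failing task is ovirt_auth logging in as admin@internal against the bootstrap engine VM, and it retried 50 times before giving up. One check worth trying (a sketch, assuming the bootstrap engine VM can be reached over SSH while the deploy is retrying) is to inspect and reset the admin@internal password directly on the engine; the validity date below is arbitrary:
# on the bootstrap engine VM: inspect the admin account and reset its password
ovirt-aaa-jdbc-tool user show admin
ovirt-aaa-jdbc-tool user password-reset admin --password-valid-to="2030-01-01 00:00:00Z"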
oVirt Node
by skhurtsilava@cellfie.ge
Hello guys,
I installed oVirt Node 4.4 and I want to deploy the Hosted Engine, but I get this error:
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Ensure the resolved address resolves only on the selected interface]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "hostname 'ovirt.bee.moitel.local' doesn't uniquely match the interface 'ens192' selected for the management bridge; it matches also interface with IP ['fe80::9a5b:2039:fe49:5252', '192.168.222.1', 'fd00:1234:5678:900::1']. Please make sure that the hostname got from the interface for the management network resolves only there.\n"}
How can I fix this error?
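From the error message, the deploy wants the hostname to resolve only to the address on the management interface (ens192), but it currently also resolves to a link-local and another local address. A sketch of the check and the likely fix (the ens192 address below is a placeholder):
# see every address the hostname currently resolves to
getent hosts ovirt.bee.moitel.local
getent ahosts ovirt.bee.moitel.local
# pin the name to the management interface address only
echo "<ens192-address> ovirt.bee.moitel.local" >> /etc/hosts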
Re: Basic authentication to Rest api not working 4.5.4
by kishorekumar.goli@gmail.com
Thanks, Alexei, for the response.
I see that the httpd configuration has been updated to use OAuth; the following is now in /etc/httpd/conf.d/internalsso-openidc.conf:
<LocationMatch ^/ovirt-engine/api($|/)>
AuthType oauth20
Require valid-user
</LocationMatch>
I don't see any release notes about the removal of basic authentication in 4.5.x, so I wanted to know whether this is mentioned anywhere in the documentation.
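For reference, the REST API can still be reached with an OAuth token obtained from the engine SSO endpoint and sent as a Bearer header; a sketch with placeholder engine FQDN and credentials:
# request an access token from the engine SSO service
curl -sk -H "Accept: application/json" \
  'https://ENGINE_FQDN/ovirt-engine/sso/oauth/token?grant_type=password&username=admin@internal&password=PASSWORD&scope=ovirt-app-api'
# use the returned access_token value as a Bearer token against the API
curl -sk -H "Accept: application/xml" \
  -H "Authorization: Bearer ACCESS_TOKEN" \
  https://ENGINE_FQDN/ovirt-engine/api/hosts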
Basic authentication to Rest api not working 4.5.4
by kishorekumar.goli@gmail.com
We are facing an issue while using basic authentication: we get a 401 Unauthorized error. It was working in previous versions.
parameters used:
curl -vvk -u "admin:admin" -H "Content-type: application/xml" -X GET https://<ovirt_gui>/ovirt-engine/api/hosts/
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html>
<head>
<title>401 Unauthorized</title>
</head>
<body>
<h1>Unauthorized</h1>
<p>This server could not verify that you
are authorized to access the document
requested. Either you supplied the wrong
credentials (e.g., bad password), or your
browser doesn't understand how to supply
the credentials required.</p>
</body>
</html>