Change Hosted Engine VM MAC address
by Sergei Panchenko
Good morning, colleagues!
Due to some network issues I need to change the HostedEngine VM MAC address.
An additional difficulty is that the HE web interface is unreachable over the network (caused by those same network issues).
Is there any way to change the HE vNIC's MAC address from the command line (either on the host where the HE VM runs, or inside the HE VM)?
Thanks in advance, Sergei.
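(One possible approach from the host's shell, sketched under the assumption that the HE VM is defined in libvirt as 'HostedEngine'. Note that ovirt-ha-agent manages the HE VM's real configuration from the OVF store, so a plain virsh edit may be reverted; the new MAC below is hypothetical.)

```shell
# Step 1 (on the host): dump the domain XML, e.g.
#   virsh -c qemu:///system dumpxml HostedEngine > /tmp/he.xml
# A minimal fake dump stands in here so the edit itself can be shown:
cat > /tmp/he.xml <<'EOF'
<interface type='bridge'>
  <mac address='00:16:3e:11:22:33'/>
  <source bridge='ovirtmgmt'/>
</interface>
EOF
# Step 2: rewrite the vNIC MAC in place (new address is a made-up example):
sed -i "s|<mac address='[^']*'/>|<mac address='00:16:3e:aa:bb:cc'/>|" /tmp/he.xml
grep "mac address" /tmp/he.xml
# Step 3 (untested): virsh -c qemu:///system define /tmp/he.xml, then restart the VM.
```

Whether the change survives an HA-agent restart would need testing; the persistent HE config may have to be updated via the engine once it is reachable again.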
1 year, 2 months
Need help again, new issue with VM import
by Michaal R
I have a VM that's a little over 4TB in size that won't import for whatever reason. I've tried changing the export from the streaming disk format to a flat vmdk, thinking that might be it, but it didn't work. I've gone through the OVF and don't see anything that stands out. Same for the logs: I can see where it's having issues importing the 4TB drive, but I can't decipher the error messages well enough to know how to fix it. The drives had chkdsk run on them before they were exported, and the VM was cleanly shut down, so I don't understand what the issue is. I'm not even sure at this point how to run a filesystem repair on the vmdk, in case the export corrupted something. And I can't export it directly from ESXi either, for some reason; each attempt fails with an unspecified error.
Below is a snip from the import log where it's reading the drives. It has one of the first errors:
[ 2.492229] scsi host0: Virtio SCSI HBA
[ 2.501365] scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
[ 2.504080] scsi 0:0:1:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
[ 2.506035] scsi 0:0:2:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
[ 2.540737] sd 0:0:0:0: Power-on or device reset occurred
[ 2.540790] sd 0:0:1:0: Power-on or device reset occurred
[ 2.540899] sd 0:0:2:0: Power-on or device reset occurred
[ 2.541130] sd 0:0:0:0: [sda] 8589934592 512-byte logical blocks: (4.40 TB/4.00 TiB)
[ 2.541168] sd 0:0:2:0: [sdc] 8388608 512-byte logical blocks: (4.29 GB/4.00 GiB)
[ 2.541319] sd 0:0:2:0: [sdc] Write Protect is off
[ 2.541391] sd 0:0:0:0: [sda] Write Protect is off
[ 2.541592] sd 0:0:2:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 2.542019] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 2.544060] sd 0:0:2:0: [sdc] Attached SCSI disk
[ 2.545294] sd 0:0:1:0: [sdb] 251658240 512-byte logical blocks: (129 GB/120 GiB)
[ 2.545437] sd 0:0:1:0: [sdb] Write Protect is off
[ 2.545966] sd 0:0:1:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 2.551250] sdb: sdb1 sdb2
[ 2.556698] sd 0:0:1:0: [sdb] Attached SCSI disk
qemu-nbd: Disconnect client, due to: Failed to send reply: reading from file failed: Invalid argument
qemu-nbd: Disconnect client, due to: Failed to send reply: reading from file failed: Invalid argument
qemu-nbd: Disconnect client, due to: Failed to send reply: reading from file failed: Invalid argument
[ 2.618883] sd 0:0:0:0: [sda] tag#157 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
[ 2.618890] sd 0:0:0:0: [sda] tag#157 Sense Key : Aborted Command [current]
[ 2.618892] sd 0:0:0:0: [sda] tag#157 Add. Sense: I/O process terminated
[ 2.618894] sd 0:0:0:0: [sda] tag#157 CDB: Read(16) 88 00 00 00 00 01 ff ff ff f8 00 00 00 08 00 00
[ 2.618897] I/O error, dev sda, sector 8589934584 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2
supermin: internal insmod virtio
[ 2.618902] Buffer I/O error on dev sda, logical block 1073741823, async page read
[ 2.618927] Alternate GPT is invalid, using primary GPT.
The rest of the log is peppered with Buffer I/O error entries on /dev/sda (the 4TB vmdk).
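(One detail worth noting from the log above, sketched as quick shell arithmetic: the failing Read(16) covers exactly the last 8 sectors of the 4 TiB disk. That would fit a source image that is truncated slightly short of its nominal virtual size, which qemu-nbd then reports as "Invalid argument". This is an inference, not a confirmed diagnosis; comparing the vmdk's actual file size against its virtual size with `qemu-img info` could verify it.)

```shell
# Figures from the kernel log above: the disk is 8589934592 512-byte blocks,
# and the failing Read(16) CDB starts at LBA 0x1fffffff8 for 8 blocks.
total_blocks=8589934592
lba=$(( 16#1fffffff8 ))
len=8
[ "$lba" -eq 8589934584 ]                 # matches the I/O error sector in the log
[ $(( lba + len )) -eq "$total_blocks" ]  # the failed read is the disk's final 4 KiB
echo "failed read covers the last $(( len * 512 )) bytes of the disk"
```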
Could someone please help? I think I've been looking at these logs and trying to fix the drive for so long I've gone logic blind and can't see the answer right in front of my eyes.
Here's a link to the logs pulled from the host: https://www.dropbox.com/scl/fi/570t279s0k3pfgafuvv01/felicity-import-1.18...
1 year, 2 months
Re: [External] : Re: can hosted engine deploy use local repository mirrors instead of internet ones?
by iucounu@gmail.com
Hi Marcos,
>
> The dnsmasq service running on the KVM host manages the IP assignment during the first
> deployment phase.
> How did you deploy your KVM host? Which configurations have you done on it before running
> the hosted-engine --deploy?
> Also, what is your full hosted-engine deployment command?
>
I deployed the KVM hosts using the guide at:
https://www.ovirt.org/documentation/installing_ovirt_as_a_self-hosted_eng...
(section 4.2 Installing Enterprise Linux Hosts)
and
https://www.ovirt.org/download/install_on_rhel.html
Most of the oVirt packages were pulled in by the installation of the ovirt-engine-appliance. The KVM hosts are running EL 9.3. I've tried on two different EL hosts: one with the standard ovirt-45 repos, and the other with the nightly builds enabled. I observed the same issues on both. I haven't changed any specific settings, such as networking or storage.
The full command to deploy the engine VM was done via:
hosted-engine --deploy --4
I have just set up GlusterFS as a temporary storage option for now (I have yet to run the deploy again), though I'm not sure how to get oVirt to use it. As mentioned, I don't know whether this or the networking is causing the failure.
In case it is important, I notice that the virtnetworkd.socket systemd unit gets killed during the deployment and has to be restarted, otherwise the deployment fails prematurely. The cleanup also masks all the libvirtd systemd units, and these have to be manually unmasked; several (virtnetworkd.socket, virtqemud.socket and virtstoraged.socket) also need to be manually restarted before the deployment is run again, or the deploy will fail when trying to communicate with them.
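(For anyone hitting the same thing, the recovery steps described above can be sketched as a small loop; printed as a dry run so nothing is executed by accident.)

```shell
# Unit names as observed above; remove the 'echo' to actually run the commands.
for unit in virtnetworkd.socket virtqemud.socket virtstoraged.socket; do
  echo systemctl unmask "$unit"
  echo systemctl restart "$unit"
done
```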
Thanks very much for the help; let me know if any further information is needed.
Cam
>
> -----Original Message-----
> From: iucounu(a)gmail.com <iucounu(a)gmail.com>
> Sent: Tuesday, January 23, 2024 12:49 PM
> To: users(a)ovirt.org
> Subject: [ovirt-users] Re: [External] : Re: can hosted engine deploy use local repository
> mirrors instead of internet ones?
>
> Thanks very much for the reply, Marcos. I tried another deployment, just to see if "wait
> for the host to be up" would time out, and I saw a couple of errors in the log:
>
> From the ovirt-hosted-engine-setup-ansible-final_clean log, it mentions that the VM IP is
> undefined:
>
> 2024-01-23 12:50:19,554+0000 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils
> ansible_utils._process_output:109 {'msg': "The task includes an option with
> an undefined variable. The error was: 'local_vm_ip' is undefined.
> 'local_vm_ip' is undefined\n\nThe error appears to be in
> '/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/sync_on_engine_machine.yml':
> line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax
> problem.\n\nThe offending line appears to be:\n\n---\n- name: Set the name for add_host\n
> ^ here\n", '_ansible_no_log': False}
>
> In the ovirt-hosted-engine-setup log, it mentions not being able to get the storage pool:
>
>
> 2024-01-23 12:50:35,787+0000 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils
> ansible_utils._process_output:109 {'changed': True, 'stdout': '',
> 'stderr': "error: failed to get pool 'localvmvy8whst5'\nerror:
> Storage pool not found: no storage pool with matching name
> 'localvmvy8whst5'", 'rc': 1, 'cmd': ['virsh',
> '-c', 'qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf',
> 'pool-destroy', 'localvmvy8whst5'], 'start': '2024-01-23
> 12:50:35.558666', 'end': '2024-01-23 12:50:35.611808',
> 'delta': '0:00:00.053142', 'msg': 'non-zero return code',
> 'invocation': {'module_args': {'_raw_params': 'virsh -c
> qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf pool-destroy
> localvmvy8whst5', '_uses_shell': False, 'stdin_add_newline': True,
> 'strip_empty_ends': True, 'argv': None, 'chdir': None,
> 'executable': None, 'creates': None, 'removes': None,
> 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["error:
> failed to get pool 'localvmvy8whst5'", "error: Storage pool not
> found: no storage pool with matching name 'localvmvy8whst5'"],
> '_ansible_no_log': None}
>
> I set the IP to the one I have assigned in DNS, but when I attach to the console of the VM
> (which is still running, though the disk image has been deleted) via virsh, it shows me a
> completely different IP: In hosted-engine --deploy, I set a 10.0.0.x address, however, it
> shows a 192.168.1.x address on the VM. Do I need to set this somewhere else, e.g., with
> '--ansible-extra-vars=he_ipv4_subnet_prefix='?
>
> As for the storage pool, is that for later VM deployment? The deploy script did not ask me
> for a storage location. If I need to specify this, where do I do this?
>
> Thanks again for any help,
>
> Kind regards,
>
> Cam
>
> PS: is there a simple way to have the answers saved so I don't have to keep running
> through all the questions every time I try a deployment?
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org Privacy Statement:
> https://urldefense.com/v3/__https://www.ovirt.org/privacy-policy.html__;!...
> oVirt Code of Conduct:
> https://urldefense.com/v3/__https://www.ovirt.org/community/about/communi...
> List Archives:
> https://urldefense.com/v3/__https://lists.ovirt.org/archives/list/users@o...
1 year, 2 months
Re: [External] : Re: can hosted engine deploy use local repository mirrors instead of internet ones?
by iucounu@gmail.com
Thanks very much for the reply, Marcos. I tried another deployment, just to see if "wait for the host to be up" would time out, and I saw a couple of errors in the log:
From the ovirt-hosted-engine-setup-ansible-final_clean log, it mentions that the VM IP is undefined:
2024-01-23 12:50:19,554+0000 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 {'msg': "The task includes an option with an undefined variable. The error was: 'local_vm_ip' is undefined. 'local_vm_ip' is undefined\n\nThe error appears to be in '/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/sync_on_engine_machine.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: Set the name for add_host\n ^ here\n", '_ansible_no_log': False}
In the ovirt-hosted-engine-setup log, it mentions not being able to get the storage pool:
2024-01-23 12:50:35,787+0000 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 {'changed': True, 'stdout': '', 'stderr': "error: failed to get pool 'localvmvy8whst5'\nerror: Storage pool not found: no storage pool with matching name 'localvmvy8whst5'", 'rc': 1, 'cmd': ['virsh', '-c', 'qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf', 'pool-destroy', 'localvmvy8whst5'], 'start': '2024-01-23 12:50:35.558666', 'end': '2024-01-23 12:50:35.611808', 'delta': '0:00:00.053142', 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf pool-destroy localvmvy8whst5', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["error: failed to get pool 'localvmvy8whst5'", "error: Storage pool not found: no storage pool with matching name 'localvmvy8whst5'"], '_ansible_no_log': None}
I set the IP to the one I have assigned in DNS, but when I attach to the console of the VM (which is still running, though the disk image has been deleted) via virsh, it shows me a completely different IP: In hosted-engine --deploy, I set a 10.0.0.x address, however, it shows a 192.168.1.x address on the VM. Do I need to set this somewhere else, e.g., with '--ansible-extra-vars=he_ipv4_subnet_prefix='?
As for the storage pool, is that for later VM deployment? The deploy script did not ask me for a storage location. If I need to specify this, where do I do this?
Thanks again for any help,
Kind regards,
Cam
PS: is there a simple way to have the answers saved so I don't have to keep running through all the questions every time I try a deployment?
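(A sketch of what a reusable answer file might look like; my understanding is that hosted-engine saves one per run under /var/lib/ovirt-hosted-engine-setup/answers/ and can replay it via --config-append, though I haven't verified this on 4.5. Keys and values below are hypothetical examples.)

```shell
# Hypothetical answer-file fragment, written to a temp path for illustration.
answers=/tmp/he-answers.conf
cat > "$answers" <<'EOF'
[environment:default]
OVEHOSTED_NETWORK/fqdn=str:engine.example.com
OVEHOSTED_NETWORK/bridgeName=str:ovirtmgmt
EOF
grep -c '=str:' "$answers"   # counts the seeded keys
# replay (untested): hosted-engine --deploy --4 --config-append="$answers"
```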
1 year, 2 months
Apple Mac Pro 2013 install hangs with oVirt Node installer 4.5 but ok with 4.3.10
by john@alwayson.net.au
This hardware (6 Core Xeon E5-1650v2) successfully runs Fedora 39 and oVirt 4.3.10 (CentOS 7) but freezes immediately when attempting to boot from either of the latest node installer ISOs:
ovirt-node-ng-installer-latest-el8.iso
ovirt-node-ng-installer-latest-el9.iso
I suspect it will require kernel parameter tweaks applied to the ISO installer image prior to booting.
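(If it helps anyone else experimenting: parameters can also be tried at the boot menu without remastering the ISO, by editing the kernel line, e.g. pressing 'e' under UEFI. 'nomodeset' is only a common first guess for early graphical freezes, not Mac-Pro-specific advice; the stage2 argument below is left as a placeholder.)

```
# Example edited kernel line at the installer boot menu (existing arguments
# kept, 'nomodeset' appended as a trial):
linux /images/pxeboot/vmlinuz inst.stage2=... quiet nomodeset
```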
Any suggestions would be appreciated.
Thanks
1 year, 2 months
Upgrade host from 4.5.4 to 4.5.5 failed
by Jacob M. Nielsen
Upgrading the host from the oVirt Manager failed with "Install Failed".
Here is the log:
2024-01-23 23:08:38 CET - TASK [ovirt-host-upgrade : Upgrade packages] ***********************************
2024-01-23 23:11:47 CET - {
"uuid" : "a7ac6634-4644-4416-a818-183b42b3c925",
"counter" : 191,
"stdout" : "",
"start_line" : 186,
"end_line" : 186,
"runner_ident" : "0b7adac9-6ff4-4934-a9c1-ad0785877978",
"event" : "runner_on_failed",
"pid" : 135486,
"created" : "2024-01-23T22:11:45.029493",
"parent_uuid" : "000c2930-20d1-ae10-b6a7-000000000042",
"event_data" : {
"playbook" : "ovirt-host-upgrade.yml",
"playbook_uuid" : "89cebab6-f73e-4a2f-bb30-b8de7e04367a",
"play" : "all",
"play_uuid" : "000c2930-20d1-ae10-b6a7-000000000002",
"play_pattern" : "all",
"task" : "Upgrade packages",
"task_uuid" : "000c2930-20d1-ae10-b6a7-000000000042",
"task_action" : "ansible.builtin.yum",
"task_args" : "",
"task_path" : "/usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-host-upgrade/tasks/main.yml:69",
"role" : "ovirt-host-upgrade",
"host" : "10.63.0.6",
"remote_addr" : "10.63.0.6",
"res" : {
"results" : [ {
"failed" : true,
"msg" : "Failed to validate GPG signature for ovirt-node-ng-image-update-4.5.5-1.el8.noarch: Public key for ovirt-node-ng-image-update-4.5.5-1.el8.noarch.rpm is not installed",
"invocation" : {
"module_args" : {
I then tried running the yum update on the host manually and got this result:
Last metadata expiration check: 0:55:29 ago on 2024-01-23T22:43:18 CET.
Dependencies resolved.
===========================================================================================================================================
Package Architecture Version Repository Size
===========================================================================================================================================
Installing:
ovirt-node-ng-image-update noarch 4.5.5-1.el8 ovirt-45-upstream 1.3 G
replacing ovirt-node-ng-image-update-placeholder.noarch 4.5.4-1.el8
Transaction Summary
===========================================================================================================================================
Install 1 Package
Total download size: 1.3 G
Is this ok [y/N]: y
Downloading Packages:
ovirt-node-ng-image-update-4.5.5-1.el8.noarch.rpm 7.3 MB/s | 1.3 GB 02:59
-------------------------------------------------------------------------------------------------------------------------------------------
Total 7.2 MB/s | 1.3 GB 03:02
oVirt upstream for CentOS Stream 8 - oVirt 4.5 2.8 MB/s | 2.9 kB 00:00
Importing GPG key 0xFE590CB7:
Userid : "oVirt <infra(a)ovirt.org>"
Fingerprint: 31A5 D783 7FAD 7CB2 86CD 3469 AB8C 4F9D FE59 0CB7
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-oVirt-4.5
Is this ok [y/N]: y
Key imported successfully
Import of key(s) didn't help, wrong key(s)?
Public key for ovirt-node-ng-image-update-4.5.5-1.el8.noarch.rpm is not installed. Failing package is: ovirt-node-ng-image-update-4.5.5-1.el8.noarch
GPG Keys are configured as: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oVirt-4.5
The downloaded packages were saved in cache until the next successful transaction.
You can remove cached packages by executing 'yum clean packages'.
Error: GPG check FAILED
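(One observation on the output above, sketched in shell: the key dnf imported really is the configured oVirt 4.5 key, since a short GPG key ID is just the last 8 hex digits of the fingerprint. That suggests the import worked and the rpm is signed with some other key; comparing the signing key ID shown by `rpm -qpi` on the downloaded rpm against the installed gpg-pubkey packages might confirm that, though it's an assumption.)

```shell
# Fingerprint and key ID exactly as printed by dnf above.
fp="31A5 D783 7FAD 7CB2 86CD 3469 AB8C 4F9D FE59 0CB7"
short=$(printf '%s' "$fp" | tr -d ' ' | tail -c 8)
[ "$short" = "FE590CB7" ] && echo "imported key matches RPM-GPG-KEY-oVirt-4.5"
```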
Has anyone seen this and found a solution?
1 year, 2 months
oVirt nodes with local storage
by Wild Star
On the weekend I upgraded my self-hosted oVirt engine to 4.5.6. All went well with that!
I also spotted that there was an update for all my 4.5.4 nodes, but something has changed with the repo overnight, because none of my nodes see updates anymore, though the update remains available here … https://resources.ovirt.org/pub/ovirt-4.5/iso/ovirt-node-ng-installer/.
Yesterday, I attempted to update just one of my nodes and ran into this snag… “Local storage domains were found on the same filesystem as / ! Please migrate the data to a new LV before upgrading, or you will lose the VMs”.
I’ve always stored my VMs in a separate /DATA directory off the root filesystem, and shared the local storage using NFS. I know it’s not ideal, but with good frequent regular backups, it has served me well for many years.
A Google search revealed that others have had similar local storage issues, and turned up suggested ways to mitigate them; the approach in the oVirt documentation found here… https://www.ovirt.org/documentation/upgrade_guide/index.html#Upgrading_hy..., is not my preferred fix.
In the past node updates with my local storage were not a problem and were easy peasy! From some of the discussions I saw at Red Hat, I have deduced (maybe wrongly) that there was an issue that necessitated a fix, which introduced a required check for local storage during the upgrade process.
I probably should move all the local storage off the oVirt nodes, but at this time, that is easier said than done.
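(Not advice so much as a sketch of the mitigation the installer message asks for, with entirely hypothetical VG/LV names and sizes, printed as a dry run; the real steps would need the storage domain in maintenance and verified backups first.)

```shell
# Dry-run sketch: give the local storage its own LV so it is no longer on /.
# Remove the 'echo' prefixes only after checking names, sizes, and backups.
echo lvcreate -n data -L 500G myvg
echo mkfs.xfs /dev/myvg/data
echo mount /dev/myvg/data /mnt/newdata
echo rsync -aHAX /DATA/ /mnt/newdata/
echo "then point /etc/fstab at the new LV for /DATA and re-export it via NFS"
```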
I’m posting here only to see if there are perhaps other ideas or perspectives I may not have thought of and should consider with my local storage.
Thank you all in advance, much appreciated, and of course, thank you all for supporting oVirt!
1 year, 2 months
Odd behavior after upgrade to 4.5
by Seann G. Clark
All,
I just recently upgraded from 4.3 to 4.5, and while most of the upgrade went smoothly and I didn't see any major issues, I have noticed a few things that are causing problems.
The one piece that I had to sneak in to make the install succeed was pushing my CA and intermediate CA certs into the engine after it came up the first time, but before it fully installed; otherwise restoring the backup would fail. This addresses using a custom Apache cert for the hosted engine instead of the default self-signed CA.
The biggest issue is with keycloak. To make the upgrade succeed I had to skip keycloak installation, after doing some digging related to this error:
"Failed to execute stage 'Misc configuration': 'OVESETUP_OVN/ovirtProviderOvnSecret'" I don't actively use OVN in my environment right now, and I think it is all pretty much in the default install state.
This is the same error I get when I loop back to install keycloak now that the engine is up. This is what is in the logs for the last few lines preceding the error and failure event:
2024-01-19 19:09:32,602+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine.network.ovirtproviderovn plugin.execute:921 execute-output: ('ovn-sbctl', 'set-connection', 'pssl:6642:[::]') stdout:
2024-01-19 19:09:32,602+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine.network.ovirtproviderovn plugin.execute:926 execute-output: ('ovn-sbctl', 'set-connection', 'pssl:6642:[::]') stderr:
2024-01-19 19:09:32,626+0000 DEBUG otopi.context context._executeMethod:127 Stage misc METHOD otopi.plugins.ovirt_engine_setup.ovirt_engine.network.ovirtproviderovn.Plugin._misc_configure_ovn_timeout
2024-01-19 19:09:32,627+0000 INFO otopi.plugins.ovirt_engine_setup.ovirt_engine.network.ovirtproviderovn ovirtproviderovn._misc_configure_ovn_timeout:1076 Updating OVN timeout configuration
2024-01-19 19:09:32,628+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine.network.ovirtproviderovn plugin.executeRaw:813 execute: ('ovn-sbctl', 'set', 'connection', '.', 'inactivity_probe=60000'), executable='None', cwd='None', env=None
2024-01-19 19:09:32,652+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine.network.ovirtproviderovn plugin.executeRaw:863 execute-result: ('ovn-sbctl', 'set', 'connection', '.', 'inactivity_probe=60000'), rc=0
2024-01-19 19:09:32,653+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine.network.ovirtproviderovn plugin.execute:921 execute-output: ('ovn-sbctl', 'set', 'connection', '.', 'inactivity_probe=60000') stdout:
2024-01-19 19:09:32,653+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine.network.ovirtproviderovn plugin.execute:926 execute-output: ('ovn-sbctl', 'set', 'connection', '.', 'inactivity_probe=60000') stderr:
2024-01-19 19:09:32,677+0000 DEBUG otopi.context context._executeMethod:127 Stage misc METHOD otopi.plugins.ovirt_engine_setup.ovirt_engine.network.ovirtproviderovn.Plugin._misc_configure_provider
2024-01-19 19:09:32,678+0000 DEBUG otopi.context context._executeMethod:145 method exception
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/otopi/context.py", line 132, in _executeMethod
method['method']()
File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/network/ovirtproviderovn.py", line 1113, in _misc_configure_provider
self._configure_ovirt_provider_ovn()
File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/network/ovirtproviderovn.py", line 796, in _configure_ovirt_provider_ovn
content = self._create_config_content()
File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/network/ovirtproviderovn.py", line 761, in _create_config_content
OvnEnv.OVIRT_PROVIDER_OVN_SECRET
KeyError: 'OVESETUP_OVN/ovirtProviderOvnSecret'
2024-01-19 19:09:32,682+0000 ERROR otopi.context context._executeMethod:154 Failed to execute stage 'Misc configuration': 'OVESETUP_OVN/ovirtProviderOvnSecret'
2024-01-19 19:09:32,683+0000 DEBUG otopi.transaction transaction.abort:124 aborting 'DNF Transaction'
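(For what it's worth, the KeyError means the otopi environment never had the OVESETUP_OVN/ovirtProviderOvnSecret value populated, presumably because the provider-configuration stages were skipped earlier. One possible workaround, an assumption on my part rather than a confirmed fix, is seeding the key through an answer file; the key name is taken from the traceback, the 'str:' prefix is otopi's usual value convention, and the value itself is a placeholder.)

```shell
# Sketch: seed the missing key via an answer file (value is a placeholder).
answers=/tmp/ovn-answers.conf
cat > "$answers" <<'EOF'
[environment:default]
OVESETUP_OVN/ovirtProviderOvnSecret=str:REPLACE_WITH_A_SECRET
EOF
grep -q 'ovirtProviderOvnSecret' "$answers" && echo "key seeded"
# then (untested): engine-setup --config-append="$answers"
```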
I have spent a fair amount of time researching this but haven't found a specific root cause or solution for this issue.
Thank you in advance,
Seann
1 year, 2 months
Cannot update oVirt Node
by Gianluca Amato
Hello,
I am trying to update my oVirt installation to 4.5.5, but while I had no
problems upgrading the self-hosted ovirt-engine, I am not able to upgrade
any node (running oVirt Node NG 4.5.4 or older). I just click Upgrade in
the oVirt Manager, and I get a bunch of events telling me that everything
was OK and the upgrade succeeded, but in reality nothing happened on the
nodes: the result of "nodectl info" before and after the upgrade process
is exactly the same.
I gave a look to the engine.log and to the relevant ovirt-host-mgmt-ansible
log in the ovirt-engine VM, but I cannot find the problem. Do any of you
have any ideas ? I am attaching some relevant files.
Note that the upgrade consistently fails in the same way on other nodes,
which have other versions of oVirt Node NG (always in the 4.5 family).
Thanks for your help
--gianluca
1 year, 2 months
hosted-engine --deploy: No module named 'he_ansible'
by iucounu@gmail.com
Hi,
I'm trying to install hosted engine on a server (4.5.5-1.el9) and it reports "No module named 'he_ansible'". I have what could be the he_ansible module under /usr/share/ovirt-hosted-engine-setup:
/usr/share/ovirt-hosted-engine-setup/he_ansible
/usr/share/ovirt-hosted-engine-setup/he_ansible/__pycache__
/usr/share/ovirt-hosted-engine-setup/he_ansible/ansible.cfg
/usr/share/ovirt-hosted-engine-setup/he_ansible/callback_plugins
/usr/share/ovirt-hosted-engine-setup/he_ansible/constants.py
/usr/share/ovirt-hosted-engine-setup/he_ansible/trigger_role.yml
/usr/share/ovirt-hosted-engine-setup/he_ansible/__pycache__/constants.cpython-39.opt-1.pyc
/usr/share/ovirt-hosted-engine-setup/he_ansible/__pycache__/constants.cpython-39.pyc
/usr/share/ovirt-hosted-engine-setup/he_ansible/callback_plugins/1_otopi_json.py
/usr/share/ovirt-hosted-engine-setup/he_ansible/callback_plugins/2_ovirt_logger.py
Is there a missing include/path somewhere?
Thanks for any help.
Full error:
bash# hosted-engine --deploy
<string>:1: DeprecationWarning: distro.linux_distribution() is deprecated. It should only be used as a compatibility shim with Python's platform.linux_distribution(). Please use distro.id(), distro.version() and distro.name() instead.
<string>:1: DeprecationWarning: distro.linux_distribution() is deprecated. It should only be used as a compatibility shim with Python's platform.linux_distribution(). Please use distro.id(), distro.version() and distro.name() instead.
***L:ERROR Internal error: No module named 'he_ansible'
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/otopi/main.py", line 141, in execute
self.context.loadPlugins()
File "/usr/lib/python3.9/site-packages/otopi/context.py", line 803, in loadPlugins
self._loadPluginGroups(plugindir, needgroups, loadedgroups)
File "/usr/lib/python3.9/site-packages/otopi/context.py", line 112, in _loadPluginGroups
self._loadPlugins(path, path, groupname)
File "/usr/lib/python3.9/site-packages/otopi/context.py", line 69, in _loadPlugins
self._loadPlugins(base, d, groupname)
File "/usr/lib/python3.9/site-packages/otopi/context.py", line 95, in _loadPlugins
util.loadModule(
File "/usr/lib/python3.9/site-packages/otopi/util.py", line 110, in loadModule
spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/__init__.py", line 25, in <module>
from . import misc
File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/misc.py", line 34, in <module>
from ovirt_hosted_engine_setup import ansible_utils
File "/usr/lib/python3.9/site-packages/ovirt_hosted_engine_setup/ansible_utils.py", line 33, in <module>
from he_ansible.constants import AnsibleCallback
ModuleNotFoundError: No module named 'he_ansible'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/otopi/__main__.py", line 88, in main
installer.execute()
File "/usr/lib/python3.9/site-packages/otopi/main.py", line 143, in execute
util.raiseExceptionInformation(
File "/usr/lib/python3.9/site-packages/otopi/util.py", line 85, in raiseExceptionInformation
raise info[1].with_traceback(info[2])
File "/usr/lib/python3.9/site-packages/otopi/main.py", line 141, in execute
self.context.loadPlugins()
File "/usr/lib/python3.9/site-packages/otopi/context.py", line 803, in loadPlugins
self._loadPluginGroups(plugindir, needgroups, loadedgroups)
File "/usr/lib/python3.9/site-packages/otopi/context.py", line 112, in _loadPluginGroups
self._loadPlugins(path, path, groupname)
File "/usr/lib/python3.9/site-packages/otopi/context.py", line 69, in _loadPlugins
self._loadPlugins(base, d, groupname)
File "/usr/lib/python3.9/site-packages/otopi/context.py", line 95, in _loadPlugins
util.loadModule(
File "/usr/lib/python3.9/site-packages/otopi/util.py", line 110, in loadModule
spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/__init__.py", line 25, in <module>
from . import misc
File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/misc.py", line 34, in <module>
from ovirt_hosted_engine_setup import ansible_utils
File "/usr/lib/python3.9/site-packages/ovirt_hosted_engine_setup/ansible_utils.py", line 33, in <module>
from he_ansible.constants import AnsibleCallback
otopi.main.PluginLoadException: No module named 'he_ansible'
root@lonovirt1 /u/cmcl bash#
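(A diagnostic sketch for the traceback above: the files exist, so the import failure smells like a sys.path problem, e.g. a packaging mismatch between the ovirt-hosted-engine-setup components. Whether forcing the directory onto PYTHONPATH helps is an assumption worth testing; the mechanics are shown with a throwaway module standing in for the real one.)

```shell
# 1) Does the interpreter see the real directory? (prints True or False)
python3 -c "import sys; print('/usr/share/ovirt-hosted-engine-setup' in sys.path)"
# 2) Demonstrate the PYTHONPATH mechanics with a throwaway stand-in module:
mkdir -p /tmp/share/he_ansible
touch /tmp/share/he_ansible/__init__.py
printf 'ANSWER = 42\n' > /tmp/share/he_ansible/constants.py
PYTHONPATH=/tmp/share python3 -c "from he_ansible.constants import ANSWER; print(ANSWER)"
# 3) If that pattern works, one thing to try (untested):
#    PYTHONPATH=/usr/share/ovirt-hosted-engine-setup hosted-engine --deploy
```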
1 year, 2 months