Failed to read or parse '/etc/pki/ovirt-engine/keys/engine.p12'
by Frank Wall
Hi,
I was trying to restore an oVirt Engine backup into a new Hosted Engine
appliance (as part of an upgrade), but this failed with the following
error:
--== PKI CONFIGURATION ==--
[WARNING] Failed to read or parse
'/etc/pki/ovirt-engine/keys/engine.p12'
Perhaps it was changed since last Setup.
Error was:
Error outputting keys and certificates
80EBCC44677F0000:error:0308010C:digital envelope
routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global
default library context, Algorithm (RC2-40-CBC : 0)
It looks like this is related to OpenSSL 3 requiring the legacy provider
to read the old, RC2-encrypted engine cert/key.
Is there any way to work around this? Or would it be possible
to repackage the existing PKCS#12 file with modern encryption (on
the old Engine)?
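In case repackaging is the way to go, this is roughly what I had in mind
(untested; the temporary paths are placeholders, and presumably the new
export password has to match the one the engine is already configured with):
# cp /etc/pki/ovirt-engine/keys/engine.p12 /root/engine.p12.bak
# openssl pkcs12 -in /root/engine.p12.bak -legacy -nodes -out /root/engine-all.pem
(the -legacy switch only exists on OpenSSL 3; the old engine's OpenSSL 1.1
can read the RC2-encrypted file without it)
# openssl pkcs12 -export -in /root/engine-all.pem \
      -keypbe AES-256-CBC -certpbe AES-256-CBC \
      -out /etc/pki/ovirt-engine/keys/engine.p12
# shred -u /root/engine-all.pem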
Regards
- Frank
1 year, 6 months
[ansible]attach vdisk to vm
by Pietro Pesce
Hello everyone,
I created a playbook to create and attach a vdisk (from a direct LUN) to a VM; the first block works. I want to attach the created vdisk to a second VM. How can I do that?
---
# Add Fibre Channel disk
- name: Create disk
  ovirt.ovirt.ovirt_disk:
    auth: "{{ ovirt_auth }}"
    name: "{{ item.0 }}"
    host: "{{ host }}"
    shareable: True
    interface: virtio_scsi
    vm_name: "{{ hostname }}"
    scsi_passthrough: disabled
    logical_unit:
      id: "{{ item.1 }}"
      storage_type: fcp
  loop: "{{ disk_name | zip(lun) | list }}"
## Add disk to second node
#- name: Create disk
#  ovirt.ovirt.ovirt_disk:
#    auth: "{{ ovirt_auth }}"
#    vm_name: "{{ hostname2 }}"
#    name: "{{ item.0 }}"
#    host: "{{ host }}"
#    interface: virtio_scsi
#    logical_unit:
#      id: "{{ item.1 }}"
#      storage_type: fcp
#  loop: "{{ disk_name | zip(lun) | list }}"
thanks
1 year, 6 months
engine setup fails: error: The system may not be provisioned according to the playbook results
by neeldey427@gmail.com
I'm trying to set up the engine, but I am getting the same error:
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Remove temporary entry in /etc/hosts for the local VM]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Destroy local storage-pool localvm3a2r5z0y]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Undefine local storage-pool localvm3a2r5z0y]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Destroy local storage-pool 9ef860a6-ee88-4aa6-94ac-a429a90ebec8]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Undefine local storage-pool 9ef860a6-ee88-4aa6-94ac-a429a90ebec8]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The system may not be provisioned according to the playbook results: please check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
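The full deploy log under /var/log/ovirt-hosted-engine-setup/ is quite large;
if specific parts are useful, this is how I would pull out the failing tasks
(the exact log file name differs per run):
# ls -t /var/log/ovirt-hosted-engine-setup/*.log | head -n 3
# grep -nE 'ERROR|FAILED|fatal' /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-*.log | tail -n 40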
Please let me know if you need more information in this regard or contents from any of the log files.
Any & all suggestions on how to fix/troubleshoot this are much appreciated.
1 year, 6 months
engine setup fails: error creating bridge interface virbr0: File exists - ?
by lejeczek
Hi guys.
I'm trying to set up the engine on the latest stable oVirt
Node (in a VM), so a clean, vanilla-default system.
-> $ hosted-engine --deploy --4
...
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Activate
default libvirt network]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false,
"cmd": ["virsh", "net-start", "default"], "delta":
"0:00:00.042134", "end": "2023-05-11 11:08:59.248405",
"msg": "non-zero return code", "rc": 1, "start": "2023-05-11
11:08:59.206271", "stderr": "error: Failed to start network
default\nerror: error creating bridge interface virbr0: File
exists", "stderr_lines": ["error: Failed to start network
default", "error: error creating bridge interface virbr0:
File exists"], "stdout": "", "stdout_lines": []}
[ ERROR ] Failed to execute stage 'Closing up': Failed
getting local_vm_dir
...
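For what it's worth, the error suggests a bridge named virbr0 already exists
when libvirt tries to create it; this is the kind of check/cleanup I would try
next (assuming nothing else is using that bridge):
# ip link show virbr0
# virsh net-list --all
and, if virbr0 turns out to be a stale leftover with nothing attached to it:
# ip link set virbr0 down
# ip link delete virbr0
# virsh net-start default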
Any & all suggestions on how to fix/troubleshoot this are
much appreciated.
many thanks, L.
1 year, 6 months
Migration failed after upgrade engine from 4.3 to 4.4
by Emmanuel Ferrandi
Hi !
When I try to migrate a running VM (regardless of the OS) from one
hypervisor to another, the VM is immediately shut down with this error
message:
Migration failed: Admin shut down from the engine (VM: VM, Source:
HP11).
The oVirt engine has been upgraded from version 4.3 to version 4.4.
Some nodes are in version 4.3 and others in version 4.4.
Here are the oVirt versions for selected hypervisors:
* HP11 : 4.4
* HP5 : 4.4
* HP6 : 4.3
Here are the migration attempts I tried with a running VM
(source HP > destination HP):
* HP6 > HP5 : OK
* HP6 > HP11 : OK
* HP5 > HP11 : OK
* HP5 > HP6 : OK
* HP11 > HP5 : *NOK*
* HP11 > HP6 : OK
As mentioned above, migrating a VM between two oVirt versions is not a
problem.
Migrating between the two HPs that are both on 4.4 works only in one
direction (HP5 to HP11) and fails the other way around.
I already tried to reinstall both HPs in version 4.4 but without success.
Here are the logs on the HP5 concerning the VM:
/var/log/vdsm/vdsm.log:2023-05-11 14:32:56,303+0200 INFO
(jsonrpc/3) [api.virt] START destroy(gracefulAttempts=1)
from=::ffff:172.20.3.250,37534, flow_id=43364065,
vmId=d14f75cd-1cb1-440b-9780-6b6ee78149ac (api:48)
/var/log/vdsm/vdsm.log:2023-05-11 14:32:56,303+0200 INFO
(jsonrpc/3) [api] FINISH destroy error=Virtual machine does not
exist: {'vmId': 'd14f75cd-1cb1-440b-9780-6b6ee78149ac'} (api:129)
/var/log/vdsm/vdsm.log:2023-05-11 14:32:56,303+0200 INFO
(jsonrpc/3) [api.virt] FINISH destroy return={'status': {'code': 1,
'message': "Virtual machine does not exist: {'vmId':
'd14f75cd-1cb1-440b-9780-6b6ee78149ac'}"}}
from=::ffff:172.20.3.250,37534, flow_id=43364065,
vmId=d14f75cd-1cb1-440b-9780-6b6ee78149ac (api:54)
/var/log/libvirt/qemu/VM.log:2023-03-24 14:56:51.474+0000:
initiating migration
/var/log/libvirt/qemu/VM.log:2023-03-24 14:56:54.342+0000:
shutting down, reason=migrated
/var/log/libvirt/qemu/VM.log:2023-03-24T14:56:54.870528Z qemu-kvm:
terminating on signal 15 from pid 4379 (<unknown process>)
Here are the logs on the engine concerning the VM:
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:53,333+02 INFO
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default
task-18197) [3f672d7f-f617-47a2-b0e9-c521656e8c01] START,
MigrateVDSCommand(
MigrateVDSCommandParameters:{hostId='6817e182-f163-4a44-9ad6-53156b8bb5a0',
vmId='d14f75cd-1cb1-440b-9780-6b6ee78149ac', srcHost='HP11',
dstVdsId='d2481de5-5ad2-4d06-9545-d5628cb87bcb',
dstHost='HP5:54321', migrationMethod='ONLINE',
tunnelMigration='false', migrationDowntime='0', autoConverge='true',
migrateCompressed='false', migrateEncrypted='null',
consoleAddress='null', maxBandwidth='256', enableGuestEvents='true',
maxIncomingMigrations='2', maxOutgoingMigrations='2',
convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
stalling=[{limit=1, action={name=setDowntime, params=[150]}},
{limit=2, action={name=setDowntime, params=[200]}}, {limit=3,
action={name=setDowntime, params=[300]}}, {limit=4,
action={name=setDowntime, params=[400]}}, {limit=6,
action={name=setDowntime, params=[500]}}, {limit=-1,
action={name=abort, params=[]}}]]', dstQemu='192.168.1.1'}), log id:
6a3507d0
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:53,334+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(default task-18197) [3f672d7f-f617-47a2-b0e9-c521656e8c01] START,
MigrateBrokerVDSCommand(HostName = HP11,
MigrateVDSCommandParameters:{hostId='6817e182-f163-4a44-9ad6-53156b8bb5a0',
vmId='d14f75cd-1cb1-440b-9780-6b6ee78149ac', srcHost='HP11',
dstVdsId='d2481de5-5ad2-4d06-9545-d5628cb87bcb',
dstHost='HP5:54321', migrationMethod='ONLINE',
tunnelMigration='false', migrationDowntime='0', autoConverge='true',
migrateCompressed='false', migrateEncrypted='null',
consoleAddress='null', maxBandwidth='256', enableGuestEvents='true',
maxIncomingMigrations='2', maxOutgoingMigrations='2',
convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
stalling=[{limit=1, action={name=setDowntime, params=[150]}},
{limit=2, action={name=setDowntime, params=[200]}}, {limit=3,
action={name=setDowntime, params=[300]}}, {limit=4,
action={name=setDowntime, params=[400]}}, {limit=6,
action={name=setDowntime, params=[500]}}, {limit=-1,
action={name=abort, params=[]}}]]', dstQemu='192.168.1.1'}), log id:
f254f72
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,246+02 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-9) [3f0e966d] VM
'd14f75cd-1cb1-440b-9780-6b6ee78149ac' was reported as Down on VDS
'6817e182-f163-4a44-9ad6-53156b8bb5a0'(HP11)
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,296+02 INFO
[org.ovirt.engine.core.bll.SaveVmExternalDataCommand]
(ForkJoinPool-1-worker-9) [43364065] Running command:
SaveVmExternalDataCommand internal: true. Entities affected : ID:
d14f75cd-1cb1-440b-9780-6b6ee78149ac Type: VM
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,299+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-1-worker-9) [43364065] START,
DestroyVDSCommand(HostName = HP11,
DestroyVmVDSCommandParameters:{hostId='6817e182-f163-4a44-9ad6-53156b8bb5a0',
vmId='d14f75cd-1cb1-440b-9780-6b6ee78149ac', secondsToWait='0',
gracefully='false', reason='', ignoreNoVm='true'}), log id: 20bf3f27
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,303+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-1-worker-9) [43364065] Failed to destroy VM
'd14f75cd-1cb1-440b-9780-6b6ee78149ac' because VM does not exist,
ignoring
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,303+02 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-9) [43364065] VM
'd14f75cd-1cb1-440b-9780-6b6ee78149ac'(VM) moved from
'MigratingFrom' --> 'Down'
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,303+02 INFO
[org.ovirt.engine.core.vdsbroker.DestroyVmVDSCommand]
(ForkJoinPool-1-worker-9) [43364065] START, DestroyVmVDSCommand(
DestroyVmVDSCommandParameters:{hostId='d2481de5-5ad2-4d06-9545-d5628cb87bcb',
vmId='d14f75cd-1cb1-440b-9780-6b6ee78149ac', secondsToWait='0',
gracefully='false', reason='', ignoreNoVm='true'}), log id: 1734109b
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,303+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-1-worker-9) [43364065] START,
DestroyVDSCommand(HostName = HP5,
DestroyVmVDSCommandParameters:{hostId='d2481de5-5ad2-4d06-9545-d5628cb87bcb',
vmId='d14f75cd-1cb1-440b-9780-6b6ee78149ac', secondsToWait='0',
gracefully='false', reason='', ignoreNoVm='true'}), log id: 2679b538
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,307+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-1-worker-9) [43364065] Failed to destroy VM
'd14f75cd-1cb1-440b-9780-6b6ee78149ac' because VM does not exist,
ignoring
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,310+02 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-9) [43364065] Stopped migrating VM:
'd14f75cd-1cb1-440b-9780-6b6ee78149ac'(VM) on VDS:
'd2481de5-5ad2-4d06-9545-d5628cb87bcb'
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,329+02 INFO
[org.ovirt.engine.core.bll.MigrateVmToServerCommand]
(ForkJoinPool-1-worker-9) [43364065] Lock freed to object
'EngineLock:{exclusiveLocks='[d14f75cd-1cb1-440b-9780-6b6ee78149ac=VM]',
sharedLocks=''}'
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,333+02 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-9) [43364065] VM
'd14f75cd-1cb1-440b-9780-6b6ee78149ac' was reported as Down on VDS
'6817e182-f163-4a44-9ad6-53156b8bb5a0'(HP11)
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,333+02 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-9) [43364065] VM
'd14f75cd-1cb1-440b-9780-6b6ee78149ac'(VM) was unexpectedly detected
as 'Down' on VDS '6817e182-f163-4a44-9ad6-53156b8bb5a0'(HP11)
(expected on 'null')
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,333+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-1-worker-9) [43364065] START,
DestroyVDSCommand(HostName = HP11,
DestroyVmVDSCommandParameters:{hostId='6817e182-f163-4a44-9ad6-53156b8bb5a0',
vmId='d14f75cd-1cb1-440b-9780-6b6ee78149ac', secondsToWait='0',
gracefully='false', reason='', ignoreNoVm='true'}), log id: 6a04ab1
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,358+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-1-worker-9) [43364065] Failed to destroy VM
'd14f75cd-1cb1-440b-9780-6b6ee78149ac' because VM does not exist,
ignoring
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,358+02 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-9) [43364065] VM
'd14f75cd-1cb1-440b-9780-6b6ee78149ac'(VM) was unexpectedly detected
as 'Down' on VDS '6817e182-f163-4a44-9ad6-53156b8bb5a0'(HP11)
(expected on 'null')
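If exact package versions are relevant, I can collect them like this
(hostnames as above; package names assumed for the EL8-based 4.4 hosts):
# for h in HP5 HP11; do echo "== $h =="; ssh root@"$h" 'rpm -q vdsm libvirt-daemon qemu-kvm'; done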
Has anyone ever encountered this kind of problem following an oVirt
cluster update?
Thanks,
--
Emmanuel
1 year, 6 months
barely started - cannot import name 'Callable' from 'collections'
by lejeczek
Hi guys.
I've barely started, trying to deploy my first oVirt and I get:
...
Please indicate the gateway IP address [10.3.1.254]:
[ INFO ] Checking available network interfaces:
[ ERROR ] b'[WARNING]: Skipping plugin
(/usr/share/ovirt-hosted-engine-\n'
[ ERROR ]
b'setup/he_ansible/callback_plugins/2_ovirt_logger.py),
cannot load: cannot\n'
[ ERROR ] b"import name 'Callable' from 'collections'\n"
[ ERROR ] b'(/usr/lib64/python3.11/collections/__init__.py)\n'
[ ERROR ] b"ERROR! Unexpected Exception, this is probably a
bug: cannot import name 'Callable' from 'collections'
(/usr/lib64/python3.11/collections/__init__.py)\n"
[ ERROR ] Failed to execute stage 'Environment
customization': Failed executing ansible-playbook
[ INFO ] Stage: Clean up
[ INFO ] Cleaning temporary resources
[ ERROR ] b'[WARNING]: Skipping plugin
(/usr/share/ovirt-hosted-engine-\n'
[ ERROR ]
b'setup/he_ansible/callback_plugins/2_ovirt_logger.py),
cannot load: cannot\n'
[ ERROR ] b"import name 'Callable' from 'collections'\n"
[ ERROR ] b'(/usr/lib64/python3.11/collections/__init__.py)\n'
[ ERROR ] b"ERROR! Unexpected Exception, this is probably a
bug: cannot import name 'Callable' from 'collections'
(/usr/lib64/python3.11/collections/__init__.py)\n"
[ ERROR ] Failed to execute stage 'Clean up': Failed
executing ansible-playbook
[ INFO ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20230509193552.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed
Log file is located at
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20230509193544-s72umf.log
CentOS Stream 9 with
ovirt-engine-setup-base-4.5.3.1-1.el9.noarch
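For what it's worth, the failing import reproduces outside the installer, so
it looks like a plain Python 3.11 incompatibility in the callback plugin
(Callable was removed from the top-level collections module in Python 3.10;
it lives in collections.abc):
# python3.11 -c 'from collections import Callable'
(fails with the same "cannot import name 'Callable'" error as above)
# python3.11 -c 'from collections.abc import Callable'
(works - Callable has been in collections.abc since Python 3.3)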
Any & every suggestion on what is breaking here and how to
troubleshoot/fix it is much appreciated.
thanks, L.
1 year, 6 months
Please help connecting to serial console - <urlopen error [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:897)>
by morgan cox
Hi.
I have an oVirt 4.4.10 system - it's a standalone setup (not hosted engine) - ovirt-vmconsole-proxy-sshd is running on the engine.
Presently I just cannot connect via port 2222.
I have never been able to connect via the serial console and need help connecting; I'm pretty sure it's due to not using the right key/cert.
I have tried to follow something like this -> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/...
i.e. I created a serial console key and added it to my user in the oVirt UI (as my user 'mcox').
however if I try
# ssh -t -i /home/mcox/.ssh/ -p 2222 ng2-ovirt-mgmt1 -l ovirt-vmconsole
I get
ovirt-vmconsole@10.168.69.200: Permission denied (publickey).
In the ovirt-vmconsole-proxy-sshd logs I get:
"May 10 15:54:37 ng2-ovirt-mgmt1 ovirt-vmconsole[301584]: 2023-05-10 15:54:37,221+0000 ovirt-vmconsole-list: ERROR main:265 Error: <urlopen error [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:897)>
May 10 15:54:37 ng2-ovirt-mgmt1 ovirt-vmconsole-proxy-keys[301580]: ERROR Key list execution failed rc=1
May 10 15:54:37 ng2-ovirt-mgmt1 sshd[301578]: AuthorizedKeysCommand /usr/libexec/ovirt-vmconsole-proxy-keys ovirt-vmconsole failed, status 1
May 10 15:54:37 ng2-ovirt-mgmt1 ovirt-vmconsole[301589]: 2023-05-10 15:54:37,543+0000 ovirt-vmconsole-list: ERROR main:265 Error: <urlopen error [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:897)>
May 10 15:54:37 ng2-ovirt-mgmt1 ovirt-vmconsole-proxy-keys[301585]: ERROR Key list execution failed rc=1
May 10 15:54:37 ng2-ovirt-mgmt1 sshd[301578]: AuthorizedKeysCommand /usr/libexec/ovirt-vmconsole-proxy-keys ovirt-vmconsole failed, status 1
May 10 15:54:37 ng2-ovirt-mgmt1 sshd[301578]: Connection closed by authenticating user ovirt-vmconsole 10.88.1.105 port 52858 [preauth]"
Am I using the right key?
I have tried to troubleshoot this; if I run
# /usr/libexec/ovirt-vmconsole-proxy-keys list
ERROR: Internal error
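Since WRONG_VERSION_NUMBER usually means the client received a non-TLS answer,
a check like the following might at least show whether the engine answers with
TLS where the proxy expects it (host as above; I'm assuming the proxy talks to
the engine on 443):
# openssl s_client -connect ng2-ovirt-mgmt1:443 </dev/null 2>/dev/null | head -n 5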
If it helps, here is /usr/share/ovirt-vmconsole/ovirt-vmconsole-proxy/ovirt-vmconsole-proxy-sshd/sshd_config:
------------
AllowAgentForwarding no
#AllowStreamLocalForwarding no
AllowTcpForwarding no
AllowUsers ovirt-vmconsole
AuthorizedKeysCommand /usr/libexec/ovirt-vmconsole-proxy-keys
AuthorizedKeysCommandUser ovirt-vmconsole
ChallengeResponseAuthentication no
ClientAliveCountMax 3
ClientAliveInterval 10
GSSAPIAuthentication no
HostCertificate /etc/pki/ovirt-vmconsole/proxy-ssh_host_rsa-cert.pub
HostKey /etc/pki/ovirt-vmconsole/proxy-ssh_host_rsa
HostbasedAuthentication no
KbdInteractiveAuthentication no
KerberosAuthentication no
PasswordAuthentication no
#PermitUserRC no
PidFile /dev/null
Port 2222
Protocol 2
PubkeyAuthentication yes
TrustedUserCAKeys /etc/pki/ovirt-vmconsole/ca.pub
X11Forwarding no
------------
1 year, 6 months
no snapshot function in VM portal
by Christoph Köhler
Hi!
On a fresh version 4.5.4-1.el8 install, no snapshot operations are
possible in the VM portal - neither for a user_vm_manager nor for a
super_user. We have imported VMs from 4.3 with existing snapshots.
These are listed for the users in the snapshot box, but no operation is
possible on them either.
Does anyone have a hint?
Greetings!
Chris
1 year, 6 months
Ovirt node disk broken
by marcel d'heureuse
Hi all,
On a single server the node disk is broken and I have to replace it.
The GlusterFS configuration is also gone, but the GlusterFS directories still exist on a separate, working disk.
Do I have a chance to install oVirt Node on a new disk, get the old GlusterFS running again, and recover the 7 VMs including the hosted engine?
I could also deploy a new hosted engine, but could I then import the existing Gluster storage from the old installation?
I have installed the node and mounted the gluster_bricks, but I have not found any command to make the volumes available again (roughly what I was expecting is sketched below).
It is an oVirt 4.3.10 installation.
Is there any chance to save some time? The data inside the VMs would be good to keep.
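This is roughly what I was expecting to be able to do once the bricks are
mounted (the volume name is a placeholder; since the old /var/lib/glusterd
config is gone, glusterd presumably knows nothing about these volumes yet):
# gluster peer status
# gluster volume list
# gluster volume start <old-volume-name>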
Br
Marcel
1 year, 6 months
hosted setup failed with error on 'Initialize lockspace volume'
by destfinal@googlemail.com
Hi,
I am trying to set up a self-hosted oVirt cluster. All goes well until, towards the end, the setup fails at the task
'Initialize lockspace volume'
with the following error message:
{
"attempts":5,
"changed":true,
"cmd":[
"hosted-engine",
"--reinitialize-lockspace",
"--force"
],
"delta":"0:00:00.108243",
"end":"2023-05-03 22:50:53.930482",
"msg":"non-zero return code",
"rc":1,
"start":"2023-05-03 22:50:53.822239",
"stderr":"Traceback (most recent call last):\n File \"/usr/lib64/python3.9/runpy.py\", line 197, in _run_module_as_main\n return _run_code(code, main_globals, None,\n File \"/usr/lib64/python3.9/runpy.py\", line 87, in _run_code\n exec(code, run_globals)\n File \"/usr/lib/python3.9/site-packages/ovirt_hosted_engine_setup/reinitialize_lockspace.py\", line 30, in <module>\n ha_cli.reset_lockspace(force)\n File \"/usr/lib/python3.9/site-packages/ovirt_hosted_engine_ha/client/client.py\", line 286, in reset_lockspace\n stats = broker.get_stats_from_storage()\n File \"/usr/lib/python3.9/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py\", line 148, in get_stats_from_storage\n result = self._proxy.get_stats()\n File \"/usr/lib64/python3.9/xmlrpc/client.py\", line 1122, in __call__\n return self.__send(self.__name, args)\n File \"/usr/lib64/python3.9/xmlrpc/client.py\", line 1464, in __request\n response = self.__transport.request(\n File \"/usr/lib64/py
thon3.9/xmlrpc/client.py\", line 1166, in request\n return self.single_request(host, handler, request_body, verbose)\n File \"/usr/lib64/python3.9/xmlrpc/client.py\", line 1178, in single_request\n http_conn = self.send_request(host, handler, request_body, verbose)\n File \"/usr/lib64/python3.9/xmlrpc/client.py\", line 1291, in send_request\n self.send_content(connection, request_body)\n File \"/usr/lib64/python3.9/xmlrpc/client.py\", line 1321, in send_content\n connection.endheaders(request_body)\n File \"/usr/lib64/python3.9/http/client.py\", line 1280, in endheaders\n self._send_output(message_body, encode_chunked=encode_chunked)\n File \"/usr/lib64/python3.9/http/client.py\", line 1040, in _send_output\n self.send(msg)\n File \"/usr/lib64/python3.9/http/client.py\", line 980, in send\n self.connect()\n File \"/usr/lib/python3.9/site-packages/ovirt_hosted_engine_ha/lib/unixrpc.py\", line 76, in connect\n self.sock.connect(base64.b16decode(self.host
))\nFileNotFoundError: [Errno 2] No such file or directory",
"stderr_lines":[
"Traceback (most recent call last):",
" File \"/usr/lib64/python3.9/runpy.py\", line 197, in _run_module_as_main",
" return _run_code(code, main_globals, None,",
" File \"/usr/lib64/python3.9/runpy.py\", line 87, in _run_code",
" exec(code, run_globals)",
" File \"/usr/lib/python3.9/site-packages/ovirt_hosted_engine_setup/reinitialize_lockspace.py\", line 30, in <module>",
" ha_cli.reset_lockspace(force)",
" File \"/usr/lib/python3.9/site-packages/ovirt_hosted_engine_ha/client/client.py\", line 286, in reset_lockspace",
" stats = broker.get_stats_from_storage()",
" File \"/usr/lib/python3.9/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py\", line 148, in get_stats_from_storage",
" result = self._proxy.get_stats()",
" File \"/usr/lib64/python3.9/xmlrpc/client.py\", line 1122, in __call__",
" return self.__send(self.__name, args)",
" File \"/usr/lib64/python3.9/xmlrpc/client.py\", line 1464, in __request",
" response = self.__transport.request(",
" File \"/usr/lib64/python3.9/xmlrpc/client.py\", line 1166, in request",
" return self.single_request(host, handler, request_body, verbose)",
" File \"/usr/lib64/python3.9/xmlrpc/client.py\", line 1178, in single_request",
" http_conn = self.send_request(host, handler, request_body, verbose)",
" File \"/usr/lib64/python3.9/xmlrpc/client.py\", line 1291, in send_request",
" self.send_content(connection, request_body)",
" File \"/usr/lib64/python3.9/xmlrpc/client.py\", line 1321, in send_content",
" connection.endheaders(request_body)",
" File \"/usr/lib64/python3.9/http/client.py\", line 1280, in endheaders",
" self._send_output(message_body, encode_chunked=encode_chunked)",
" File \"/usr/lib64/python3.9/http/client.py\", line 1040, in _send_output",
" self.send(msg)",
" File \"/usr/lib64/python3.9/http/client.py\", line 980, in send",
" self.connect()",
" File \"/usr/lib/python3.9/site-packages/ovirt_hosted_engine_ha/lib/unixrpc.py\", line 76, in connect",
" self.sock.connect(base64.b16decode(self.host))",
"FileNotFoundError: [Errno 2] No such file or directory"
],
"stdout":"",
"stdout_lines":[
]
}
The underlying storage is NFS on top of Ceph (which is not listed in the oVirt documentation). Does this ring a bell for anybody?
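The traceback ends with the HA client failing to connect to a local unix
socket, so the next thing I plan to look at is the HA broker on the host
(service names from the ovirt-hosted-engine-ha packages; time window arbitrary):
# systemctl status ovirt-ha-broker ovirt-ha-agent
# journalctl -u ovirt-ha-broker --since "1 hour ago"
# hosted-engine --vm-status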
Please let me know if you need more information in this regard or contents from any of the log files.
Thanks
1 year, 6 months