oVirt Node
by skhurtsilava@cellfie.ge
Hello guys,
I installed oVirt Node 4.4 and I want to deploy the Hosted Engine, but I get this error:
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Ensure the resolved address resolves only on the selected interface]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "hostname 'ovirt.bee.moitel.local' doesn't uniquely match the interface 'ens192' selected for the management bridge; it matches also interface with IP ['fe80::9a5b:2039:fe49:5252', '192.168.222.1', 'fd00:1234:5678:900::1']. Please make sure that the hostname got from the interface for the management network resolves only there.\n"}
How can I fix this error?
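For reference, a quick way to compare what the FQDN resolves to against what the chosen interface actually carries (a diagnostic sketch, not oVirt tooling; the defaults below are placeholders so it runs anywhere, substitute your own FQDN and NIC):

```shell
#!/bin/sh
# Placeholders: pass your engine FQDN (e.g. ovirt.bee.moitel.local)
# and your management NIC (e.g. ens192) as arguments.
fqdn=${1:-localhost}
iface=${2:-lo}
echo "Addresses '$fqdn' resolves to:"
# getent follows nsswitch.conf, so this checks /etc/hosts before DNS
getent ahosts "$fqdn" | awk '{print $1}' | sort -u
echo "Addresses configured on $iface:"
ip -brief addr show dev "$iface"
```

If the first list contains anything beyond the management interface's address (here the deploy check also matched 192.168.222.1 and the link-local addresses), trim the extra /etc/hosts entries or DNS records until the FQDN maps only to that interface.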
1 year, 10 months
Re: Basic authentication to Rest api not working 4.5.4
by kishorekumar.goli@gmail.com
Thanks, Alexei, for the response.
I see the httpd configuration has been updated to use OAuth; the following is now in /etc/httpd/conf.d/internalsso-openidc.conf:
<LocationMatch ^/ovirt-engine/api($|/)>
    AuthType oauth20
    Require valid-user
</LocationMatch>
I don't see any release notes about the removal of basic authentication in 4.5.x, so I wanted to know whether this is mentioned anywhere in the documentation.
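For anyone else landing here, a sketch of the token-based flow that matches the oauth20 configuration above. The host, user, and password are placeholders, and the token endpoint shown is the engine SSO one; please double-check the exact parameters against your engine's REST API documentation:

```shell
# 1. Obtain a bearer token from the engine SSO service
#    (placeholders: <ovirt_gui>, admin@internal, secret)
curl -sk -d "grant_type=password" \
     -d "scope=ovirt-app-api" \
     -d "username=admin@internal" \
     -d "password=secret" \
     https://<ovirt_gui>/ovirt-engine/sso/oauth/token
# 2. Use the "access_token" value from the JSON response
curl -vk -H "Accept: application/xml" \
     -H "Authorization: Bearer <access_token>" \
     https://<ovirt_gui>/ovirt-engine/api/hosts/
```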
1 year, 10 months
Basic authentication to Rest api not working 4.5.4
by kishorekumar.goli@gmail.com
We are facing an issue while using basic authentication: we get a 401 Unauthorized error. It was working in previous versions.
Command used:
curl -vvk -u "admin:admin" -H "Content-type: application/xml" -X GET https://<ovirt_gui>/ovirt-engine/api/hosts/
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html>
<head>
<title>401 Unauthorized</title>
</head>
<body>
<h1>Unauthorized</h1>
<p>This server could not verify that you
are authorized to access the document
requested. Either you supplied the wrong
credentials (e.g., bad password), or your
browser doesn't understand how to supply
the credentials required.</p>
</body>
</html>
1 year, 10 months
Failed to read or parse '/etc/pki/ovirt-engine/keys/engine.p12'
by Frank Wall
Hi,
I was trying to restore an oVirt Engine backup into a new Hosted Engine
appliance (as part of an upgrade), but this failed with the following
error:
--== PKI CONFIGURATION ==--
[WARNING] Failed to read or parse
'/etc/pki/ovirt-engine/keys/engine.p12'
Perhaps it was changed since last Setup.
Error was:
Error outputting keys and certificates
80EBCC44677F0000:error:0308010C:digital envelope
routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global
default library context, Algorithm (RC2-40-CBC : 0)
It looks like this is related to OpenSSL requiring legacy mode
to use the old Engine cert/key.
Is there any way to work around this? Or would it be possible
to repackage the existing PKCS#12 file with new encryption (on
the old Engine)?
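The repackaging idea can be sketched like this. The paths and the password are placeholders (not the real oVirt values), and the throwaway key/cert pair only stands in for the old engine.p12; on a real legacy-encrypted bundle you would add -legacy to the extraction step, which requires the OpenSSL legacy provider:

```shell
#!/bin/sh
# Sketch: re-encrypt a PKCS#12 bundle so it no longer needs RC2-40-CBC.
set -e
work=$(mktemp -d)
# Stand-in for the old engine.p12: a throwaway key+cert pair.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=engine.example.test" \
    -keyout "$work/key.pem" -out "$work/cert.pem" 2>/dev/null
openssl pkcs12 -export -in "$work/cert.pem" -inkey "$work/key.pem" \
    -out "$work/engine.p12" -passout pass:mypass
# Step 1: extract key and certs (add -legacy here for a real old bundle).
openssl pkcs12 -in "$work/engine.p12" -nodes -passin pass:mypass \
    -out "$work/engine.pem"
# Step 2: re-export with the current default (AES-based) encryption.
openssl pkcs12 -export -in "$work/engine.pem" \
    -out "$work/engine-new.p12" -passout pass:mypass
# The new bundle now parses without legacy mode.
openssl pkcs12 -in "$work/engine-new.p12" -passin pass:mypass -noout \
    && echo repackaged
```

The re-exported file would then replace engine.p12 on the new appliance; whether the restore tooling accepts it is the open question.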
Regards
- Frank
1 year, 10 months
[ansible]attach vdisk to vm
by Pietro Pesce
Hello everyone,
I created a playbook to create and attach a vdisk (from a direct LUN) to a VM; the first block works. Now I want to attach the created vdisk to a second VM. How can I do that?
---
# Add fibre channel disk
- name: Create disk
  ovirt.ovirt.ovirt_disk:
    auth: "{{ ovirt_auth }}"
    name: "{{ item.0 }}"
    host: "{{ host }}"
    shareable: true
    interface: virtio_scsi
    vm_name: "{{ hostname }}"
    scsi_passthrough: disabled
    logical_unit:
      id: "{{ item.1 }}"
      storage_type: fcp
  loop: "{{ disk_name | zip(lun) | list }}"
## Add disk second node
#- name: Create disk
#  ovirt.ovirt.ovirt_disk:
#    auth: "{{ ovirt_auth }}"
#    vm_name: "{{ hostname2 }}"
#    name: "{{ item.0 }}"
#    host: "{{ host }}"
#    interface: virtio_scsi
#    logical_unit:
#      id: "{{ item.1 }}"
#      storage_type: fcp
#  loop: "{{ disk_name | zip(lun) | list }}"
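A sketch of what the second block could look like, assuming ovirt.ovirt.ovirt_disk attaches an existing disk when given a matching name plus a vm_name (untested; hostname2 is assumed to be defined like hostname). Since the disk is created with shareable: true, no new logical_unit should be needed:

```yaml
# Attach the already-created shared disk to the second VM by name
- name: Attach shared disk to second VM
  ovirt.ovirt.ovirt_disk:
    auth: "{{ ovirt_auth }}"
    name: "{{ item.0 }}"
    vm_name: "{{ hostname2 }}"
    interface: virtio_scsi
  loop: "{{ disk_name | zip(lun) | list }}"
```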
thanks
1 year, 10 months
engine setup fails: error: The system may not be provisioned according to the playbook results
by neeldey427@gmail.com
I'm trying to set up the engine, but I keep getting the same error:
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Remove temporary entry in /etc/hosts for the local VM]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Destroy local storage-pool localvm3a2r5z0y]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Undefine local storage-pool localvm3a2r5z0y]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Destroy local storage-pool 9ef860a6-ee88-4aa6-94ac-a429a90ebec8]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Undefine local storage-pool 9ef860a6-ee88-4aa6-94ac-a429a90ebec8]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The system may not be provisioned according to the playbook results: please check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
Please let me know if you need more information in this regard or contents from any of the log files.
Any & all suggestions on how to fix/troubleshoot this are much appreciated.
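When the deploy ends with this generic message, the real failure is usually recorded earlier in the setup log. A quick way to surface it (this is the log location as shipped by ovirt-hosted-engine-setup; adjust if your install differs):

```shell
# Show the last errors recorded by the most recent deploy attempt
grep -iE 'ERROR|fatal' \
    /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-*.log \
    | tail -n 20
```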
1 year, 10 months
engine setup fails: error creating bridge interface virbr0: File exists - ?
by lejeczek
Hi guys.
I'm trying to set up the engine on the latest stable oVirt
Node (in a VM), so a clean, vanilla-default system.
-> $ hosted-engine --deploy --4
...
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Activate
default libvirt network]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false,
"cmd": ["virsh", "net-start", "default"], "delta":
"0:00:00.042134", "end": "2023-05-11 11:08:59.248405",
"msg": "non-zero return code", "rc": 1, "start": "2023-05-11
11:08:59.206271", "stderr": "error: Failed to start network
default\nerror: error creating bridge interface virbr0: File
exists", "stderr_lines": ["error: Failed to start network
default", "error: error creating bridge interface virbr0:
File exists"], "stdout": "", "stdout_lines": []}
[ ERROR ] Failed to execute stage 'Closing up': Failed
getting local_vm_dir
...
Any & all suggestions on how to fix/troubleshoot this are
much appreciated.
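In case it helps the discussion: a common cause of "File exists" here is a stale virbr0 bridge left over from a previous deploy attempt. A hedged cleanup sketch (needs root; verify nothing else is using the bridge before deleting it):

```shell
# Is there already a virbr0 bridge?
ip -brief link show virbr0
# If so, remove it and let libvirt recreate it
ip link set virbr0 down
ip link delete virbr0
virsh net-start default
```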
many thanks, L.
1 year, 10 months
Migration failed after upgrade engine from 4.3 to 4.4
by Emmanuel Ferrandi
Hi!
When I try to migrate a powered-on VM (regardless of OS) from one
hypervisor to another, the VM is immediately shut down with this error
message:
Migration failed: Admin shut down from the engine (VM: VM, Source:
HP11).
The oVirt engine has been upgraded from version 4.3 to version 4.4.
Some nodes are in version 4.3 and others in version 4.4.
Here are the oVirt versions for selected hypervisors:
* HP11 : 4.4
* HP5 : 4.4
* HP6 : 4.3
Here are the migration attempts I tried with a powered-on VM (from HP > to HP):
* HP6 > HP5 : OK
* HP6 > HP11 : OK
* HP5 > HP11 : OK
* HP5 > HP6 : OK
* HP11 > HP5 : *NOK*
* HP11 > HP6 : OK
As mentioned above, migrating a VM between two different oVirt
versions is not a problem.
Migrating a VM between two HPs on the same 4.4 version works
only in one direction (HP5 to HP11) and doesn't work the other way.
I already tried reinstalling both HPs with version 4.4, but without success.
Here are the logs on the HP5 concerning the VM:
/var/log/vdsm/vdsm.log:2023-05-11 14:32:56,303+0200 INFO
(jsonrpc/3) [api.virt] START destroy(gracefulAttempts=1)
from=::ffff:172.20.3.250,37534, flow_id=43364065,
vmId=d14f75cd-1cb1-440b-9780-6b6ee78149ac (api:48)
/var/log/vdsm/vdsm.log:2023-05-11 14:32:56,303+0200 INFO
(jsonrpc/3) [api] FINISH destroy error=Virtual machine does not
exist: {'vmId': 'd14f75cd-1cb1-440b-9780-6b6ee78149ac'} (api:129)
/var/log/vdsm/vdsm.log:2023-05-11 14:32:56,303+0200 INFO
(jsonrpc/3) [api.virt] FINISH destroy return={'status': {'code': 1,
'message': "Virtual machine does not exist: {'vmId':
'd14f75cd-1cb1-440b-9780-6b6ee78149ac'}"}}
from=::ffff:172.20.3.250,37534, flow_id=43364065,
vmId=d14f75cd-1cb1-440b-9780-6b6ee78149ac (api:54)
/var/log/libvirt/qemu/VM.log:2023-03-24 14:56:51.474+0000:
initiating migration
/var/log/libvirt/qemu/VM.log:2023-03-24 14:56:54.342+0000:
shutting down, reason=migrated
/var/log/libvirt/qemu/VM.log:2023-03-24T14:56:54.870528Z qemu-kvm:
terminating on signal 15 from pid 4379 (<unknown process>)
Here are the logs on the engine concerning the VM:
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:53,333+02 INFO
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default
task-18197) [3f672d7f-f617-47a2-b0e9-c521656e8c01] START,
MigrateVDSCommand(
MigrateVDSCommandParameters:{hostId='6817e182-f163-4a44-9ad6-53156b8bb5a0',
vmId='d14f75cd-1cb1-440b-9780-6b6ee78149ac', srcHost='HP11',
dstVdsId='d2481de5-5ad2-4d06-9545-d5628cb87bcb',
dstHost='HP5:54321', migrationMethod='ONLINE',
tunnelMigration='false', migrationDowntime='0', autoConverge='true',
migrateCompressed='false', migrateEncrypted='null',
consoleAddress='null', maxBandwidth='256', enableGuestEvents='true',
maxIncomingMigrations='2', maxOutgoingMigrations='2',
convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
stalling=[{limit=1, action={name=setDowntime, params=[150]}},
{limit=2, action={name=setDowntime, params=[200]}}, {limit=3,
action={name=setDowntime, params=[300]}}, {limit=4,
action={name=setDowntime, params=[400]}}, {limit=6,
action={name=setDowntime, params=[500]}}, {limit=-1,
action={name=abort, params=[]}}]]', dstQemu='192.168.1.1'}), log id:
6a3507d0
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:53,334+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(default task-18197) [3f672d7f-f617-47a2-b0e9-c521656e8c01] START,
MigrateBrokerVDSCommand(HostName = HP11,
MigrateVDSCommandParameters:{hostId='6817e182-f163-4a44-9ad6-53156b8bb5a0',
vmId='d14f75cd-1cb1-440b-9780-6b6ee78149ac', srcHost='HP11',
dstVdsId='d2481de5-5ad2-4d06-9545-d5628cb87bcb',
dstHost='HP5:54321', migrationMethod='ONLINE',
tunnelMigration='false', migrationDowntime='0', autoConverge='true',
migrateCompressed='false', migrateEncrypted='null',
consoleAddress='null', maxBandwidth='256', enableGuestEvents='true',
maxIncomingMigrations='2', maxOutgoingMigrations='2',
convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
stalling=[{limit=1, action={name=setDowntime, params=[150]}},
{limit=2, action={name=setDowntime, params=[200]}}, {limit=3,
action={name=setDowntime, params=[300]}}, {limit=4,
action={name=setDowntime, params=[400]}}, {limit=6,
action={name=setDowntime, params=[500]}}, {limit=-1,
action={name=abort, params=[]}}]]', dstQemu='192.168.1.1'}), log id:
f254f72
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,246+02 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-9) [3f0e966d] VM
'd14f75cd-1cb1-440b-9780-6b6ee78149ac' was reported as Down on VDS
'6817e182-f163-4a44-9ad6-53156b8bb5a0'(HP11)
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,296+02 INFO
[org.ovirt.engine.core.bll.SaveVmExternalDataCommand]
(ForkJoinPool-1-worker-9) [43364065] Running command:
SaveVmExternalDataCommand internal: true. Entities affected : ID:
d14f75cd-1cb1-440b-9780-6b6ee78149ac Type: VM
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,299+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-1-worker-9) [43364065] START,
DestroyVDSCommand(HostName = HP11,
DestroyVmVDSCommandParameters:{hostId='6817e182-f163-4a44-9ad6-53156b8bb5a0',
vmId='d14f75cd-1cb1-440b-9780-6b6ee78149ac', secondsToWait='0',
gracefully='false', reason='', ignoreNoVm='true'}), log id: 20bf3f27
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,303+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-1-worker-9) [43364065] Failed to destroy VM
'd14f75cd-1cb1-440b-9780-6b6ee78149ac' because VM does not exist,
ignoring
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,303+02 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-9) [43364065] VM
'd14f75cd-1cb1-440b-9780-6b6ee78149ac'(VM) moved from
'MigratingFrom' --> 'Down'
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,303+02 INFO
[org.ovirt.engine.core.vdsbroker.DestroyVmVDSCommand]
(ForkJoinPool-1-worker-9) [43364065] START, DestroyVmVDSCommand(
DestroyVmVDSCommandParameters:{hostId='d2481de5-5ad2-4d06-9545-d5628cb87bcb',
vmId='d14f75cd-1cb1-440b-9780-6b6ee78149ac', secondsToWait='0',
gracefully='false', reason='', ignoreNoVm='true'}), log id: 1734109b
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,303+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-1-worker-9) [43364065] START,
DestroyVDSCommand(HostName = HP5,
DestroyVmVDSCommandParameters:{hostId='d2481de5-5ad2-4d06-9545-d5628cb87bcb',
vmId='d14f75cd-1cb1-440b-9780-6b6ee78149ac', secondsToWait='0',
gracefully='false', reason='', ignoreNoVm='true'}), log id: 2679b538
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,307+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-1-worker-9) [43364065] Failed to destroy VM
'd14f75cd-1cb1-440b-9780-6b6ee78149ac' because VM does not exist,
ignoring
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,310+02 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-9) [43364065] Stopped migrating VM:
'd14f75cd-1cb1-440b-9780-6b6ee78149ac'(VM) on VDS:
'd2481de5-5ad2-4d06-9545-d5628cb87bcb'
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,329+02 INFO
[org.ovirt.engine.core.bll.MigrateVmToServerCommand]
(ForkJoinPool-1-worker-9) [43364065] Lock freed to object
'EngineLock:{exclusiveLocks='[d14f75cd-1cb1-440b-9780-6b6ee78149ac=VM]',
sharedLocks=''}'
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,333+02 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-9) [43364065] VM
'd14f75cd-1cb1-440b-9780-6b6ee78149ac' was reported as Down on VDS
'6817e182-f163-4a44-9ad6-53156b8bb5a0'(HP11)
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,333+02 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-9) [43364065] VM
'd14f75cd-1cb1-440b-9780-6b6ee78149ac'(VM) was unexpectedly detected
as 'Down' on VDS '6817e182-f163-4a44-9ad6-53156b8bb5a0'(HP11)
(expected on 'null')
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,333+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-1-worker-9) [43364065] START,
DestroyVDSCommand(HostName = HP11,
DestroyVmVDSCommandParameters:{hostId='6817e182-f163-4a44-9ad6-53156b8bb5a0',
vmId='d14f75cd-1cb1-440b-9780-6b6ee78149ac', secondsToWait='0',
gracefully='false', reason='', ignoreNoVm='true'}), log id: 6a04ab1
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,358+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-1-worker-9) [43364065] Failed to destroy VM
'd14f75cd-1cb1-440b-9780-6b6ee78149ac' because VM does not exist,
ignoring
/var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,358+02 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-9) [43364065] VM
'd14f75cd-1cb1-440b-9780-6b6ee78149ac'(VM) was unexpectedly detected
as 'Down' on VDS '6817e182-f163-4a44-9ad6-53156b8bb5a0'(HP11)
(expected on 'null')
Has anyone ever encountered this kind of problem following an oVirt
cluster update?
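Not an answer, but a diagnostic sketch based on the parameters visible in the engine log (dstHost='HP5:54321', dstQemu='192.168.1.1'): from HP11, the failing direction, it may be worth verifying the migration path to HP5. The hostname and address below are the ones from the log; adjust to your environment:

```shell
# Reachability of the migration-network address advertised for HP5
ping -c 3 192.168.1.1
# vdsm port on the destination host
nc -zv HP5 54321
```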
Thanks,
--
Emmanuel
1 year, 10 months