oVirt 4.3 ssh passwordless setup guide
by Morris, Roy
Hello,
Does anyone have a guide or how-to on setting up oVirt with passwordless SSH? I want to do this in a production environment to improve security, but I have never done this before and want to build a test environment to try it out first.
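For reference, the core of key-based (passwordless) SSH is small enough to try in a lab first; a minimal sketch, assuming a throwaway test host (the hostname below is a placeholder, not a real oVirt name):
```
# Generate a key pair and push the public key to the test host.
ssh-keygen -t rsa -b 4096
ssh-copy-id root@test-host.example
# Confirm the key is used and no password is requested:
ssh -o PasswordAuthentication=no root@test-host.example
# Only once keys work everywhere, consider setting
# "PasswordAuthentication no" in /etc/ssh/sshd_config and reloading sshd.
```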
Best regards,
Roy Morris
4 years, 5 months
Upgrade from 4.3 to 4.4 fails with db or user ovirt_engine_history already exists
by wilderink@triplon.nl
Currently, our upgrade to 4.4 fails with error:
FATAL: Existing database 'ovirt_engine_history' or user 'ovirt_engine_history' found and temporary ones created
We have upgraded the running 4.3 installation to the latest version and also use the latest packages for the upgrade on the new CentOS 8.2 installation. The backup was made following the Hosted Engine upgrade steps in the manual, using: `engine-backup --scope=all --mode=backup --file=backup.bck --log=backuplog.log`
The upgrade is performed after copying the backup.bck file to the new server and using `hosted-engine --deploy --restore-from-file=backup.bck`
After creating the Engine VM, the installation process hangs when the backup is restored. We have tried it several times, using both complete and partial backups.
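The "Please clean up everything and try again" in the FATAL message presumably refers to the leftover DWH database and role on the engine VM; a hedged cleanup sketch before retrying, assuming the DWH database lives in the engine VM's local PostgreSQL (not a remote one):
```
# Assumption: the leftover objects are in the engine VM's local PostgreSQL.
# Drop them so a retried restore can provision fresh ones.
su - postgres -c "psql -c \"DROP DATABASE IF EXISTS ovirt_engine_history;\""
su - postgres -c "psql -c \"DROP USER IF EXISTS ovirt_engine_history;\""
```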
Old/current oVirt version: 4.3.10.4-1.el7
New version: 4.4.1.8
ovirt-ansible-hosted-engine-setup: 1.1.6
Did anyone get the same error while upgrading an existing installation?
Thanks!
Ansible error log on the host:
2020-07-15 12:34:09,361+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 TASK [ovirt.hosted_engine_setup : Run engine-backup]
2020-07-15 12:35:28,778+0200 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:103 {'msg': 'non-zero return code',
'cmd': 'engine-backup --mode=restore --log=/var/log/ovirt-engine/setup/restore-backup-$(date -u +%Y%m%d%H%M%S).log --file=/root/engine_backup --provision-all-databases --restore-permissions',
'stdout': "Start of engine-backup with mode 'restore'\nscope: all\narchive file: /root/engine_backup\nlog file: /var/log/ovirt-engine/setup/restore-backup-20200715103410.log\nPreparing to restore:\n- Unpacking file '/root/engine_backup'\nRestoring:\n- Files\n------------------------------------------------------------------------------\nPlease note:\n\nOperating system is different from the one used during backup.\nCurrent operating system: centos8\nOperating system at backup: centos7\n\nApache httpd configuration will not be restored.\nYou will be asked about it on the next engine-setup run.\n------------------------------------------------------------------------------\nProvisioning PostgreSQL users/databases:\n- user 'engine', database 'engine'\n- extra user 'ovirt_engine_history' having grants on database engine, created with a random password\n- user 'ovirt_engine_history', database 'ovirt_engine_history'",
'stderr': "FATAL: Existing database 'ovirt_engine_history' or user 'ovirt_engine_history' found and temporary ones created - Please clean up everything and try again",
'rc': 1, 'start': '2020-07-15 12:34:10.824630', 'end': '2020-07-15 12:35:28.488261', 'delta': '0:01:17.663631', 'changed': True,
'invocation': {'module_args': {'_raw_params': 'engine-backup --mode=restore --log=/var/log/ovirt-engine/setup/restore-backup-$(date -u +%Y%m%d%H%M%S).log --file=/root/engine_backup --provision-all-databases --restore-permissions', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}},
'stdout_lines': ["Start of engine-backup with mode 'restore'", 'scope: all', 'archive file: /root/engine_backup', 'log file: /var/log/ovirt-engine/setup/restore-backup-20200715103410.log', 'Preparing to restore:', "- Unpacking file '/root/engine_backup'", 'Restoring:', '- Files', '------------------------------------------------------------------------------', 'Please note:', '', 'Operating system is different from the one used during backup.', 'Current operating system: centos8', 'Operating system at backup: centos7', '', 'Apache httpd configuration will not be restored.', 'You will be asked about it on the next engine-setup run.', '------------------------------------------------------------------------------', 'Provisioning PostgreSQL users/databases:', "- user 'engine', database 'engine'", "- extra user 'ovirt_engine_history' having grants on database engine, created with a random password", "- user 'ovirt_engine_history', database 'ovirt_engine_history'"],
'stderr_lines': ["FATAL: Existing database 'ovirt_engine_history' or user 'ovirt_engine_history' found and temporary ones created - Please clean up everything and try again"],
'_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_host': 'ovirt-management.dc1.triplon', 'ansible_port': None, 'ansible_user': 'root'}}
2020-07-15 12:35:28,879+0200 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:107 fatal: [localhost -> ovirt-management.dc1.triplon]: FAILED! => {"changed": true,
"cmd": "engine-backup --mode=restore --log=/var/log/ovirt-engine/setup/restore-backup-$(date -u +%Y%m%d%H%M%S).log --file=/root/engine_backup --provision-all-databases --restore-permissions",
"delta": "0:01:17.663631", "end": "2020-07-15 12:35:28.488261", "msg": "non-zero return code", "rc": 1, "start": "2020-07-15 12:34:10.824630",
"stderr": "FATAL: Existing database 'ovirt_engine_history' or user 'ovirt_engine_history' found and temporary ones created - Please clean up everything and try again",
"stderr_lines": ["FATAL: Existing database 'ovirt_engine_history' or user 'ovirt_engine_history' found and temporary ones created - Please clean up everything and try again"],
"stdout": "Start of engine-backup with mode 'restore'\nscope: all\narchive file: /root/engine_backup\nlog file: /var/log/ovirt-engine/setup/restore-backup-20200715103410.log\nPreparing to restore:\n- Unpacking file '/root/engine_backup'\nRestoring:\n- Files\n------------------------------------------------------------------------------\nPlease note:\n\nOperating system is different from the one used during backup.\nCurrent operating system: centos8\nOperating system at backup: centos7\n\nApache httpd configuration will not be restored.\nYou will be asked about it on the next engine-setup run.\n------------------------------------------------------------------------------\nProvisioning PostgreSQL users/databases:\n- user 'engine', database 'engine'\n- extra user 'ovirt_engine_history' having grants on database engine, created with a random password\n- user 'ovirt_engine_history', database 'ovirt_engine_history'",
"stdout_lines": ["Start of engine-backup with mode 'restore'", "scope: all", "archive file: /root/engine_backup", "log file: /var/log/ovirt-engine/setup/restore-backup-20200715103410.log", "Preparing to restore:", "- Unpacking file '/root/engine_backup'", "Restoring:", "- Files", "------------------------------------------------------------------------------", "Please note:", "", "Operating system is different from the one used during backup.", "Current operating system: centos8", "Operating system at backup: centos7", "", "Apache httpd configuration will not be restored.", "You will be asked about it on the next engine-setup run.", "------------------------------------------------------------------------------", "Provisioning PostgreSQL users/databases:", "- user 'engine', database 'engine'", "- extra user 'ovirt_engine_history' having grants on database engine, created with a random password", "- user 'ovirt_engine_history', database 'ovirt_engine_history'"]}
The restore-backup-xx.log file on the created Hosted Engine (retrieved by Ansible after the failure):
2020-07-15 15:57:36 6437: Start of engine-backup mode restore scope all file /root/engine_backup
2020-07-15 15:57:36 6437: OUTPUT: Start of engine-backup with mode 'restore'
2020-07-15 15:57:36 6437: OUTPUT: scope: all
2020-07-15 15:57:36 6437: OUTPUT: archive file: /root/engine_backup
2020-07-15 15:57:36 6437: OUTPUT: log file: /var/log/ovirt-engine/setup/restore-backup-20200715135736.log
2020-07-15 15:57:36 6437: OUTPUT: Preparing to restore:
2020-07-15 15:57:36 6437: OUTPUT: - Unpacking file '/root/engine_backup'
2020-07-15 15:57:36 6437: Opening tarball /root/engine_backup to /tmp/engine-backup.5tvwGDx3qs
2020-07-15 15:57:37 6437: Verifying hash
2020-07-15 15:57:37 6437: Verifying version
2020-07-15 15:57:37 6437: Reading config
2020-07-15 15:57:37 6437: OUTPUT: Restoring:
2020-07-15 15:57:37 6437: OUTPUT: - Files
2020-07-15 15:57:37 6437: Restoring files
2020-07-15 15:57:38 6437: Reloading configuration
2020-07-15 15:57:38 6437: OUTPUT: Provisioning PostgreSQL users/databases:
2020-07-15 15:57:38 6437: provisionDB: user engine host localhost port 5432 database engine secured False secured_host_validation False
2020-07-15 15:57:38 6437: OUTPUT: - user 'engine', database 'engine'
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files: /etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf, /etc/ovirt-engine-setup.conf.d/10-packaging.conf, /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf, /tmp/engine-backup.5tvwGDx3qs/pg-provision-answer-file
Log file: /var/log/ovirt-engine/setup/ovirt-engine-provisiondb-20200715155739-3o20a7.log
Version: otopi-1.9.2 (otopi-1.9.2-1.el8)
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment customization
[ INFO ] Stage: Setup validation
[ INFO ] Stage: Transaction setup
[ INFO ] Stage: Misc configuration (early)
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Initializing PostgreSQL
[ INFO ] Creating PostgreSQL 'engine' database
[ INFO ] Configuring PostgreSQL
[ INFO ] Install selinux module /usr/share/ovirt-engine/selinux/ansible-runner-service.cil
[ INFO ] Stage: Transaction commit
[ INFO ] Stage: Closing up
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-provisiondb-20200715155739-3o20a7.log
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ INFO ] Execution of provisiondb completed successfully
2020-07-15 15:58:14 6437: OUTPUT: - extra user 'ovirt_engine_history' having grants on database engine, created with a random password
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files: /etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf, /etc/ovirt-engine-setup.conf.d/10-packaging.conf, /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf, /tmp/engine-backup.5tvwGDx3qs/pg-provision-answer-file
Log file: /var/log/ovirt-engine/setup/ovirt-engine-provisiondb-20200715155815-j4ba3l.log
Version: otopi-1.9.2 (otopi-1.9.2-1.el8)
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment customization
[ INFO ] Stage: Setup validation
[ INFO ] Stage: Transaction setup
[ INFO ] Stage: Misc configuration (early)
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Install selinux module /usr/share/ovirt-engine/selinux/ansible-runner-service.cil
[ INFO ] Stage: Transaction commit
[ INFO ] Stage: Closing up
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-provisiondb-20200715155815-j4ba3l.log
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ INFO ] Execution of provisiondb completed successfully
2020-07-15 15:58:46 6437: provisionDB: user ovirt_engine_history host localhost port 5432 database ovirt_engine_history secured False secured_host_validation False
2020-07-15 15:58:46 6437: OUTPUT: - user 'ovirt_engine_history', database 'ovirt_engine_history'
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files: /etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf, /etc/ovirt-engine-setup.conf.d/10-packaging.conf, /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf, /tmp/engine-backup.5tvwGDx3qs/pg-provision-answer-file
Log file: /var/log/ovirt-engine/setup/ovirt-engine-provisiondb-20200715155847-qk7ipy.log
Version: otopi-1.9.2 (otopi-1.9.2-1.el8)
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment customization
[ INFO ] Stage: Setup validation
[ INFO ] Stage: Transaction setup
[ INFO ] Stage: Misc configuration (early)
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Creating PostgreSQL 'ovirt_engine_history' database
[ INFO ] Configuring PostgreSQL
[ ERROR ] Failed to execute stage 'Misc configuration': Existing resources found, new ones created:
database ovirt_engine_history_20200715155850 user ovirt_engine_history_20200715155850
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-provisiondb-20200715155847-qk7ipy.log
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of provisiondb failed
2020-07-15 15:58:52 6437: FATAL: Existing database 'ovirt_engine_history' or user 'ovirt_engine_history' found and temporary ones created - Please clean up everything and try again
4 years, 5 months
Hosted Engine Ovirt 4.3.8
by Vijay Sachdeva
Hi Everyone,
Does anyone have any idea why the hosted engine setup is stuck at “Wait for host to be up”?
The deployment has been running for 4 hours and is stuck. Any help, please!
Thanks
Vijay Sachdeva
4 years, 5 months
oVirt 4.4 Engine data migration & GUI notes
by Andrei Verovski
Hi !
I have installed a new oVirt 4.4 Engine (as a separate entity, not a hosted engine) and migrated the data from the old 4.3 installation.
Everything went smoothly.
Q: Which PostgreSQL password is now active - the new one I entered during install, or the old one that may have migrated with the data?
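One way to answer this empirically is to try each candidate password against the restored database; a hedged sketch, assuming the default 'engine' user and database names:
```
# -h localhost forces password authentication instead of peer auth;
# 'candidate-password' is a placeholder for each password to test.
PGPASSWORD='candidate-password' psql -U engine -h localhost -d engine -c 'SELECT 1;'
```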
I noticed GUI changes right away on the dashboard.
The icons at the top (data centres, clusters, hosts, etc.) are, simply put, too large. This serves no useful purpose and consumes screen real estate for no reason.
Decreasing their size by two-thirds would make the first page more informative at first glance, without extra scrolling.
with best regards
Andrei
4 years, 5 months
HostedEngine start error: could not find capabilities for arch=x86_64 domaintype=kvm
by Gianluca Cecchi
Hello,
I'm importing a VM into a new oVirt environment that is on 4.3.10. The new host is the same as the previous one (the only difference is that it is on 4.3.10 while the old one was on 4.3.9).
This VM acted as a single-host HCI environment (with 4.3.9). I imported the storage domain containing the VM and its disks, and the VM boots OK.
Now its Hosted Engine VM (which is therefore a nested VM) doesn't start.
It was working when I stopped/detached it, so I would like to cross-check and verify whether I missed something configuring the new environment, or whether something changed from 4.3.9 to 4.3.10.
In vdsm.log I see this reason:
2020-07-23 14:38:20,489+0200 ERROR (vm/9233dc8b) [virt.vm] (vmId='9233dc8b-7a4d-4fac-8551-dc7407f36548') The vm start process failed (vm:934)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 868, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2892, in _run
    dom = self._connection.defineXML(self._domain.xml)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 94, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3743, in defineXML
    if ret is None:raise libvirtError('virDomainDefineXML() failed', conn=self)
libvirtError: invalid argument: could not find capabilities for arch=x86_64 domaintype=kvm
The relevant parts of the generated XML in vdsm.log seem to be:
2020-07-23 14:38:20,487+0200 INFO (vm/9233dc8b) [virt.vm] (vmId='9233dc8b-7a4d-4fac-8551-dc7407f36548') <?xml version="1.0" encoding="utf-8"?><domain type="kvm" xmlns:ns0="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
  <name>HostedEngine</name>
  <uuid>9233dc8b-7a4d-4fac-8551-dc7407f36548</uuid>
  <memory>16777216</memory>
  <currentMemory>16777216</currentMemory>
  <iothreads>1</iothreads>
  <maxMemory slots="16">67108864</maxMemory>
  <vcpu current="2">32</vcpu>
  <sysinfo type="smbios">
    <system>
      <entry name="manufacturer">oVirt</entry>
      <entry name="product">oVirt Node</entry>
      <entry name="version">7-7.1908.0.el7.centos</entry>
      <entry name="serial">704e1338-5dec-46f8-a186-8c2080dcefd8</entry>
      <entry name="uuid">9233dc8b-7a4d-4fac-8551-dc7407f36548</entry>
    </system>
  </sysinfo>
  <clock adjustment="0" offset="variable">
    <timer name="rtc" tickpolicy="catchup"/>
    <timer name="pit" tickpolicy="delay"/>
    <timer name="hpet" present="no"/>
  </clock>
  <features>
    <acpi/>
  </features>
  <cpu match="exact">
    <model>Skylake-Server</model>
    <feature name="spec-ctrl" policy="require"/>
    <feature name="ssbd" policy="require"/>
    <feature name="md-clear" policy="require"/>
    <topology cores="2" sockets="16" threads="1"/>
    <numa>
      <cell cpus="0,1" id="0" memory="16777216"/>
    </numa>
  </cpu>
  <cputune/>
  ...
  </metadata>
  <on_poweroff>destroy</on_poweroff><on_reboot>destroy</on_reboot><on_crash>destroy</on_crash></domain>
(vm:2890)
Any insights?
The configuration of the new 4.3.10 physical oVirt environment that contains the VM has this cluster CPU type: Intel Skylake Server IBRS SSBD MDS Family.
Do I have to change it to anything else? I don't remember how it was set in 4.3.9.
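For anyone hitting the same libvirtError: it generally means libvirt on the nested host does not expose a kvm domain for x86_64. Some hedged checks, assuming Intel hardware:
```
# On the physical host: is nested virtualization enabled for level-1 guests?
cat /sys/module/kvm_intel/parameters/nested   # expect Y or 1
# Inside the nested host: is KVM actually available to libvirt?
ls -l /dev/kvm
lsmod | grep kvm
virsh -r capabilities | grep "domain type='kvm'"   # should match if kvm is exposed
```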
Thanks in advance,
Gianluca
4 years, 5 months
Re: [moVirt] New ovirt 4.4.0.3-1.el8 leaves disks in illegal state on all snapshot actions
by Tomas Jelinek
Forwarding to the correct list.
On Wed, Jul 22, 2020 at 5:15 PM Henri Aanstoot <fash(a)fash.nu> wrote:
> Hi all,
>
> I've got a two-node setup, image-based installs.
> When doing OVA exports or generic snapshots, things seem in order.
> Removing snapshots shows the warning 'disk in illegal state'.
>
> Mouse hover shows: please do not shut down before successfully removing the snapshot.
>
>
> ovirt-engine log
> 2020-07-22 16:40:37,549+02 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedExecutorService-commandCoordinator-Thread-2)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] EVENT_ID:
> VDS_BROKER_COMMAND_FAILURE(10,802), VDSM node2.lab command MergeVDS failed:
> Merge failed
> 2020-07-22 16:40:37,549+02 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand]
> (EE-ManagedExecutorService-commandCoordinator-Thread-2)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Command 'MergeVDSCommand(HostName =
> node2.lab,
> MergeVDSCommandParameters:{hostId='02df5213-1243-4671-a1c6-6489d7146319',
> vmId='64c25543-bef7-4fdd-8204-6507046f5a34',
> storagePoolId='5a4ea80c-b3b2-11ea-a890-00163e3cb866',
> storageDomainId='9a12f1b2-5378-46cc-964d-3575695e823f',
> imageGroupId='3f7ac8d8-f1ab-4c7a-91cc-f34d0b8a1cb8',
> imageId='c757e740-9013-4ae0-901d-316932f4af0e',
> baseImageId='ebe50730-dec3-4f29-8a38-9ae7c59f2aef',
> topImageId='c757e740-9013-4ae0-901d-316932f4af0e', bandwidth='0'})'
> execution failed: VDSGenericException: VDSErrorException: Failed to
> MergeVDS, error = Merge failed, code = 52
> 2020-07-22 16:40:37,549+02 ERROR [org.ovirt.engine.core.bll.MergeCommand]
> (EE-ManagedExecutorService-commandCoordinator-Thread-2)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Engine exception thrown while
> sending merge command: org.ovirt.engine.core.common.errors.EngineException:
> EngineException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to MergeVDS, error = Merge
> failed, code = 52 (Failed with error mergeErr and code 52)
> Caused by: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to MergeVDS, error = Merge
> failed, code = 52
> <driver name='qemu' error_policy='report'/>
> <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='threads'/>
> 2020-07-22 16:40:39,659+02 ERROR
> [org.ovirt.engine.core.bll.MergeStatusCommand]
> (EE-ManagedExecutorService-commandCoordinator-Thread-3)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Failed to live merge. Top volume
> c757e740-9013-4ae0-901d-316932f4af0e is still in qemu chain
> [ebe50730-dec3-4f29-8a38-9ae7c59f2aef, c757e740-9013-4ae0-901d-316932f4af0e]
> 2020-07-22 16:40:41,524+02 ERROR
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-58)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Command id:
> 'e0b2bce7-afe0-4955-ae46-38bcb8719852 failed child command status for step
> 'MERGE_STATUS'
> 2020-07-22 16:40:42,597+02 ERROR
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-53)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Merging of snapshot
> 'ef8f7e06-e48c-4a8c-983c-64e3d4ebfcf9' images
> 'ebe50730-dec3-4f29-8a38-9ae7c59f2aef'..'c757e740-9013-4ae0-901d-316932f4af0e'
> failed. Images have been marked illegal and can no longer be previewed or
> reverted to. Please retry Live Merge on the snapshot to complete the
> operation.
> 2020-07-22 16:40:42,603+02 ERROR
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-53)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Ending command
> 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand'
> with failure.
> 2020-07-22 16:40:43,679+02 ERROR
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-15)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Ending command
> 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand' with failure.
> 2020-07-22 16:40:43,774+02 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-15)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] EVENT_ID:
> USER_REMOVE_SNAPSHOT_FINISHED_FAILURE(357), Failed to delete snapshot
> 'Auto-generated for Export To OVA' for VM 'Adhoc'.
>
>
> VDSM on hypervisor
> 2020-07-22 14:14:30,220+0200 ERROR (jsonrpc/5) [virt.vm] (vmId='14283e6d-c3f0-4011-b90f-a1272f0fbc10') Live merge failed (job: e59c54d9-b8d3-44d0-9147-9dd40dff57b9) (vm:5381)
>     if ret == -1: raise libvirtError('virDomainBlockCommit() failed', dom=self)
> libvirt.libvirtError: internal error: qemu block name 'json:{"backing": {"driver": "qcow2", "file": {"driver": "file", "filename": "/rhev/data-center/mnt/10.12.0.9:_exports_data/9a12f1b2-5378-46cc-964d-3575695e823f/images/3206de41-ccdc-4f2d-a968-5e4da6c2ca3e/bb3aed4b-fc41-456a-9c18-1409a9aa6d14"}}, "driver": "qcow2", "file": {"driver": "file", "filename": "/rhev/data-center/mnt/10.12.0.9:_exports_data/9a12f1b2-5378-46cc-964d-3575695e823f/images/3206de41-ccdc-4f2d-a968-5e4da6c2ca3e/3995b256-2afb-4853-9360-33d0c12e5fd1"}}' doesn't match expected '/rhev/data-center/mnt/10.12.0.9:_exports_data/9a12f1b2-5378-46cc-964d-3575695e823f/images/3206de41-ccdc-4f2d-a968-5e4da6c2ca3e/3995b256-2afb-4853-9360-33d0c12e5fd1'
> 2020-07-22 14:14:30,234+0200 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call VM.merge failed (error 52) in 0.17 seconds (__init__:312)
>
> 2020-07-22 14:17:28,798+0200 INFO (jsonrpc/2) [api] FINISH getStats
> error=Virtual machine does not exist: {'vmId':
> '698d486c-edbf-4e28-a199-31a2e27bd808'} (api:129)
> 2020-07-22 14:17:28,798+0200 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC
> call VM.getStats failed (error 1) in 0.00 seconds (__init__:312)
>
> Also in the log:
> INFO (jsonrpc/1) [api.virt] FINISH getStats return={'status': {'code': 1, 'message': "Virtual machine does not exist:
> But it is there and accessible.
>
> Any advice here?
> Henri
>
>
> ovirt 4.4.0.3-1.el8
>
> OS Version:
> RHEL - 8 - 1.1911.0.9.el8
> OS Description:
> oVirt Node 4.4.0
> Kernel Version:
> 4.18.0 - 147.8.1.el8_1.x86_64
> KVM Version:
> 4.1.0 - 23.el8.1
> LIBVIRT Version:
> libvirt-5.6.0-10.el8
> VDSM Version:
> vdsm-4.40.16-1.el8
> SPICE Version:
> 0.14.2 - 1.el8
> GlusterFS Version:
> glusterfs-7.5-1.el8
> CEPH Version:
> librbd1-12.2.7-9.el8
> Open vSwitch Version:
> openvswitch-2.11.1-5.el8
> Nmstate Version:
> nmstate-0.2.10-1.el8
>
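For anyone debugging a similar failed live merge, the volume that vdsm still reports in the chain can be inspected directly; a hedged sketch using the IDs from the engine log above (parameter names as used by vdsm-client; double-check against your version):
```
# Inspect the top volume that is "still in qemu chain" (IDs from the log).
vdsm-client Volume getInfo \
    storagepoolID=5a4ea80c-b3b2-11ea-a890-00163e3cb866 \
    storagedomainID=9a12f1b2-5378-46cc-964d-3575695e823f \
    imageID=3f7ac8d8-f1ab-4c7a-91cc-f34d0b8a1cb8 \
    volumeID=c757e740-9013-4ae0-901d-316932f4af0e
# Compare with the block chain libvirt reports for the running VM
# (<vm-name> is a placeholder):
virsh -r domblklist <vm-name>
```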
4 years, 5 months
oVirt Network problem.
by Emil Dumitrache
Hello,
I am new to oVirt.
I have a problem and I don't know how to deal with it: I added another network besides the management network, and after that the node did not respond to the oVirt manager.
I had to change the DNS to the new IP of the new network so that the oVirt manager could manage it.
Now I have an error:
Out-of-sync: Default route: host true, DC false.
I checked on the node: the second network has DEFROUTE=NO and the management network has it set to yes. I don't know why the manager is seeing that.
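The out-of-sync flag compares the host's actual default route with the "Default Route" role assigned in the data center, so it is worth confirming which NIC really carries it; a hedged check (EL7-style paths assumed):
```
# Which interface carries the default route right now?
ip route show default
# What do the ifcfg files claim? (EL7 path; node layouts may differ.)
grep -H DEFROUTE /etc/sysconfig/network-scripts/ifcfg-*
```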
Also, the management IP does not respond to ping or anything.
Could someone please give some advice to a newbie?
The setup is: a standalone server with CentOS 7 and the oVirt manager installed, and a node with the oVirt Node OS.
Thank you.
4 years, 5 months
ovirt+SDN
by Marco Mangione
Hello,
Is anyone using oVirt with an SDN controller?
4 years, 5 months
Re: Hosted Engine 4.4.1
by Vijay Sachdeva
Can anyone please help me out with this?
Vijay Sachdeva
From: Vijay Sachdeva <vijay.sachdeva(a)indiqus.com>
Date: Wednesday, 22 July 2020 at 8:24 PM
To: Florian Schmid via Users <users(a)ovirt.org>
Subject: Hosted Engine 4.4.1
Hello Everyone,
The "Wait for host to be up" task has been stuck for hours, and when I checked the engine log I found the following:
2020-07-22 16:50:35,717+02 ERROR [org.ovirt.engine.core.sso.utils.SsoUtils] (default task-1) [] OAuthException access_denied: Cannot authenticate user 'None@N/A': No valid profile found in credentials..
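In case it helps to narrow this down: while the task loops, the engine-side logs can show whether the host ever tries to register; a hedged sketch, assuming typical 4.4 log locations on the engine VM:
```
# Reach the local engine VM from the host during deploy...
hosted-engine --console
# ...then, inside the engine VM, follow the engine and server logs:
tail -f /var/log/ovirt-engine/engine.log /var/log/ovirt-engine/server.log
```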
Has anyone faced such an issue? If so, please help me out!
Thanks
Vijay Sachdeva
4 years, 5 months