Re: Help with problem in oVirt 4.4.
by Nir Soffer
On Tue, Aug 18, 2020 at 6:07 AM Rodrigo Mauriz <rmauriz(a)gmail.com> wrote:
Hi Rodrigo, I'm moving the discussion to ovirt users list, which is the right
place to discuss this:
https://lists.ovirt.org/archives/list/users%40ovirt.org/
> When I try to upload an ISO file by:
>
> Storage -> Storage Domains -> Disks
> I get an error:
> "Connection to ovirt-imageio-proxy service failed. Make sure service is installed, configured, and ovirt-engine certificate is registered as valid CA in browser."
>
> I reviewed and noted that the services that perform this task were not installed on the ovirt-engine host:
> • ovirt-imageio-proxy
This package does not exist and is not needed in oVirt 4.4.
> • ovirt-imageio-daemon
> • ovirt-imageio-common
These packages are needed, and they are installed when you install ovirt-engine.
The ovirt-imageio service is configured automatically when you run engine-setup.
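If you want to verify on the engine machine, something like this should be enough
(a sketch; the unit name and port are the usual 4.4 defaults, adjust if yours differ):
# rpm -q ovirt-imageio-daemon ovirt-imageio-common
# systemctl status ovirt-imageio
# ss -ltn | grep 54323    (the engine-side imageio service normally listens here)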
> Example:
> #systemctl status ovirt-imageio-proxy (not installed)
>
> I tried to install it but it throws an error:
>
> #yum install ovirt-imageio-proxy engine-setup
>
> Updating Subscription Management repositories.
> Unable to read consumer identity
> This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
> Last metadata expiration check: 0:41:46 ago on Fri 07 Aug 2020 10:20:42 AM -04.
> No match for argument: ovirt-imageio-proxy
> No match for argument: engine-setup
> Error: Unable to find a match: ovirt-imageio-proxy engine-setup
This is expected; ovirt-imageio-proxy is not needed and does not exist in oVirt 4.4.
> The same, for the engine-iso-uploader
I think the iso uploader was deprecated a long time ago, and it is not
needed to upload images.
> I asked a person who works at Red Hat and they told me that these apps (services) are not released yet.
Uploading images from the UI was available in oVirt 4.4.0, but it has been fully
functional since oVirt 4.4.1.
We know about one issue - if you replaced the apache certificates with 3rd-party
certificates, uploading via the ovirt-imageio service on the engine host (acting as a proxy)
will not work:
https://bugzilla.redhat.com/1866745
https://bugzilla.redhat.com/1862107
If you are using the engine-generated certificates, upload and download should work.
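If you want to check which certificate the engine-side imageio service actually presents,
something like this works (a sketch; port 54323 is an assumption based on the usual
engine-side default, and the FQDN is an example):
# openssl s_client -connect engine.example.com:54323 -showcerts </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject
If the issuer is not a CA trusted by your browser, the connection test in the UI is
likely to fail.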
There may be more info about the failure in the engine log. Please share the part
of the log covering the time when you tried to upload the image.
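Something like this on the engine host should pull out the relevant entries
(a sketch; the log path is the default one):
# grep -iE 'imageio|transfer' /var/log/ovirt-engine/engine.log | tail -n 50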
Nir
Help remove snapshot
by Magnus Isaksson
Hi!
I would need some help removing a snapshot that looks like this.
How do I remove it manually, or fix the snapshot definition?
-----------------
Error while executing action:
TUN_SALDC01:
* The requested snapshot has an invalid parent, either fix the snapshot definition or remove it manually to complete the process.
-----------------
And when I look at the disk on that snapshot it shows as this:
[screenshot attachment]
And general info:
[screenshot attachment]
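Would checking the volume chain directly on the host help? This is what I can run
(a sketch; the path is only an example of where image volumes live on a host, the
real one depends on the storage domain):
# qemu-img info --backing-chain /rhev/data-center/<pool-id>/<domain-id>/images/<disk-id>/<volume-id>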
How to disable DHCP in oVirt network
by Adam Xu
Hi everyone
Recently, when I installed OKD on oVirt, the installation failed because the VM
automatically acquired an IP address from DHCP before the installer assigned one.
Our DHCP is served by a layer 3 switch. I tried to disable DHCP using a network
filter, but there is no entry related to blocking DHCP. Is there any way for
oVirt to disable DHCP?
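For reference, the closest thing I can think of is a custom libvirt nwfilter that
drops DHCP replies. This is only a sketch, and I have not verified that the engine
will let a custom filter be attached to a vNIC profile:
# cat > no-dhcp.xml <<'EOF'
<filter name='no-dhcp' chain='ipv4'>
  <!-- drop DHCP server replies (UDP 67 -> 68) before they reach the VM -->
  <rule action='drop' direction='in' priority='500'>
    <udp srcportstart='67' dstportstart='68'/>
  </rule>
</filter>
EOF
# virsh nwfilter-define no-dhcp.xml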
--
Adam Xu
Migration not working
by Juan Pablo Lorier
Hi,
I'm having issues with migration to some hosts. I have a 4 node cluster
and I tried updating to see if that fixes it, but it's still failing. The
reason is not clear in the engine log.
2020-08-14 09:45:32,322-03 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-429) [364e7777] EVENT_ID: VM_MIGRATION_START_SYSTEM_INITIATED(67), Migration initiated by system (VM: medialist2-videoteca, Source: virt2.tnu.com.uy, Destination: virt3.tnu.com.uy, Reason: ).
2020-08-14 09:45:32,336-03 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-429) [535a8bdb] Lock Acquired to object 'EngineLock:{exclusiveLocks='[aac74685-c969-4761-923f-043400176edf=VM]', sharedLocks=''}'
2020-08-14 09:45:32,355-03 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-429) [535a8bdb] Running command: MigrateVmToServerCommand internal: true. Entities affected : ID: aac74685-c969-4761-923f-043400176edf Type: VMAction group MIGRATE_VM with role type USER
2020-08-14 09:45:32,383-03 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-429) [535a8bdb] START, MigrateVDSCommand( MigrateVDSCommandParameters:{hostId='634f3f64-8945-470c-b31c-b8d4c73109e6', vmId='aac74685-c969-4761-923f-043400176edf', srcHost='virt2.tnu.com.uy', dstVdsId='f9e441a0-a4e6-4548-8c6f-83a958120f02', dstHost='virt4.tnu.com.uy:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='125', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]', dstQemu='172.16.100.45'}), log id: 1db80717
2020-08-14 09:45:32,385-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (default task-429) [535a8bdb] START, MigrateBrokerVDSCommand(HostName = virt2.tnu.com.uy, MigrateVDSCommandParameters:{hostId='634f3f64-8945-470c-b31c-b8d4c73109e6', vmId='aac74685-c969-4761-923f-043400176edf', srcHost='virt2.tnu.com.uy', dstVdsId='f9e441a0-a4e6-4548-8c6f-83a958120f02', dstHost='virt4.tnu.com.uy:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='125', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]', dstQemu='172.16.100.45'}), log id: 4085b503
2020-08-14 09:45:32,433-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (default task-429) [535a8bdb] FINISH, MigrateBrokerVDSCommand, return: , log id: 4085b503
2020-08-14 09:45:32,437-03 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-429) [535a8bdb] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 1db80717
2020-08-14 09:45:32,446-03 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-429) [535a8bdb] EVENT_ID: VM_MIGRATION_START_SYSTEM_INITIATED(67), Migration initiated by system (VM: reverse_proxy, Source: virt2.tnu.com.uy, Destination: virt4.tnu.com.uy, Reason: ).
2020-08-14 09:45:32,462-03 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-429) [535a8bdb] EVENT_ID: USER_VDS_MAINTENANCE_WITHOUT_REASON(620), Host virt2.tnu.com.uy was switched to Maintenance mode by jplorier(a)tnu.com.uy.
2020-08-14 09:45:32,681-03 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-14) [7667d149-886e-406b-9cd6-9b7472fa9944] Command 'MaintenanceNumberOfVdss' id: 'db97a747-115b-4f01-aa3d-6277080e6a44' child commands '[]' executions were completed, status 'SUCCEEDED'
2020-08-14 09:45:33,024-03 INFO [org.ovirt.engine.core.vdsbroker.VdsManager] (EE-ManagedThreadFactory-engineScheduled-Thread-62) [] Received first domain report for host virt2.tnu.com.uy
2020-08-14 09:45:33,686-03 INFO [org.ovirt.engine.core.bll.MaintenanceNumberOfVdssCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-88) [7667d149-886e-406b-9cd6-9b7472fa9944] Ending command 'org.ovirt.engine.core.bll.MaintenanceNumberOfVdssCommand' successfully.
2020-08-14 09:45:35,137-03 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-96) [] VM 'fd854ebe-144f-455f-bf0c-48167d7aae9f'(medialist2-videoteca) moved from 'MigratingFrom' --> 'Up'
2020-08-14 09:45:35,137-03 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-96) [] Adding VM 'fd854ebe-144f-455f-bf0c-48167d7aae9f'(medialist2-videoteca) to re-run list
2020-08-14 09:45:35,144-03 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (EE-ManagedThreadFactory-engineScheduled-Thread-96) [] Rerun VM 'fd854ebe-144f-455f-bf0c-48167d7aae9f'. Called from VDS 'virt2.tnu.com.uy'
*In the host running the VM, I see no error related to the migration,
only about the Engine VM, which seems to be working fine otherwise:*
2020-08-14 09:06:23,337-0400 ERROR (jsonrpc/3) [root] failed to retrieve Hosted Engine HA score '[Errno 2] No such file or directory'Is the Hosted Engine setup finished? (api:19
*VDSM version in node:*
vdsm.x86_64 4.30.46-1.el7 @ovirt-4.3
All nodes are up to date.
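In case it helps, this is the kind of thing I can collect from the source and
destination hosts (a sketch; the log path is the VDSM default and <vm-id> is the
VM id from the engine log above):
# grep -i migrat /var/log/vdsm/vdsm.log | grep '<vm-id>' | tail -n 50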
Thanks
oVirt nested on oVirt works, when you enable MAC spoofing on host VMs
by thomas@hoberg.net
Howto: Create a new network profile that doesn't have the MAC spoofing filter included. In my case I used one without any filters, you may want to be more careful.
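A quick way to confirm the change took effect on a physical host running one of the
nested nodes (a sketch; the domain name is whatever your nested VM is called, and
kvm_intel becomes kvm_amd on AMD):
# virsh -r dumpxml <nested-node-vm> | grep -A2 filterref   (should no longer show no-mac-spoofing)
# cat /sys/module/kvm_intel/parameters/nested              (nested virt must be enabled as well)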
Background:
I had tried the nested approach with the default settings and found that the hosted-engine setup worked just fine up to the point where it was running as a nested VM on the install node, but that it failed to tie in the other Gluster nodes or to communicate with the outside: there was something wrong with the network, but no obvious failures.
I tried using other, simpler "client hypervisors" (it's all KVM anyway), VirtualBox and 'native' KVM, and the symptoms were the same: the VMs ran just fine, but the network didn't connect.
And then I remembered that a hypervisor obviously needs to override the MAC for each client VM if you want full network access, and I had come across "no-mac-spoofing" often enough that it somehow stayed on my mind... and finally it clicked.
Anyhow, I'm validating full functionality now, and the main interest in this is the ability to do full simulation/testing of oVirt 4.3->4.4 upgrades on 3-node HCI setups, which I'd rather not do on the live physical system.
Of course, it's also quite cool!
Support for Shared SAS storage
by Vinícius Ferrão
Hello,
I have two compute nodes with SAS direct-attached storage sharing the same disks.
Looking at the supported types I can’t see this in the documentation: https://www.ovirt.org/documentation/admin-guide/chap-Storage.html
There is local storage in this documentation, but my case is two machines, both using SAS, connected to the same disks. It’s the VRTX hardware from Dell.
Is there any support for this? It should be just like Fibre Channel and iSCSI, but with SAS instead.
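As a sanity check, both nodes should already see the same shared LUNs as multipath
devices, the same way FC LUNs show up (a sketch):
# multipath -ll
# lsblk -o NAME,SIZE,WWN,TYPE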
Thanks,
[ANN] oVirt 4.4.2 Third Release Candidate is now available for testing
by Lev Veyde
oVirt 4.4.2 Third Release Candidate is now available for testing
The oVirt Project is pleased to announce the availability of oVirt 4.4.2
Third Release Candidate for testing, as of August 13th, 2020.
This update is the second in a series of stabilization updates to the 4.4
series.
Important notes before you try it
Please note this is a pre-release build.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
Installation instructions
For installation instructions and additional information please refer to:
https://ovirt.org/documentation/
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.2 or newer
* CentOS Linux (or similar) 8.2 or newer
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.2 or newer
* CentOS Linux (or similar) 8.2 or newer
* oVirt Node 4.4 based on CentOS Linux 8.2 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
Additional Resources:
* Read more about the oVirt 4.4.2 release highlights:
http://www.ovirt.org/release/4.4.2/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.2/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
<https://www.redhat.com>
lev(a)redhat.com | lveyde(a)redhat.com
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>