gerrit.ovirt.org migration
by Denis Volkov
Hello
We are migrating Gerrit to another server.
The server's IP address will change.
I'll post an update when it's done.
--
Denis Volkov
2 years, 2 months
Failure Of Latest oVirt Installation - Missing Packages
by Matthew J Black
Hi All,
This is all on a Rocky Linux 8.6 box, as a CLI oVirt self-hosted engine install.
I have followed all of the instructions in the oVirt documentation (including the special instructions for RHEL derivatives such as Rocky Linux).
So, I'm attempting to install the ovirt-hosted-engine-setup rpm (i.e. dnf -y install ovirt-hosted-engine-setup), and I'm getting failures because the following packages cannot be found:
- collectd >= 5.12.0-7
- collectd-disk >= 5.12.0-7
- collectd-write_http >= 5.12.0-7
- collectd-write_syslog >= 5.12.0-7
- collectd-netlink >= 5.12.0-7
- collectd-virt >= 5.12.0-7
- python3-os-brick
The full dnf error message is:
~~~
Error:
Problem: package ovirt-hosted-engine-setup-2.6.5-1.el8.noarch requires ovirt-host >= 4.5.0, but none of the providers can be installed
- package ovirt-host-4.5.0-3.el8.x86_64 requires ovirt-host-dependencies = 4.5.0-3.el8, but none of the providers can be installed
- cannot install the best candidate for the job
- nothing provides collectd >= 5.12.0-7 needed by ovirt-host-dependencies-4.5.0-3.el8.x86_64
- nothing provides collectd-disk >= 5.12.0-7 needed by ovirt-host-dependencies-4.5.0-3.el8.x86_64
- nothing provides collectd-write_http >= 5.12.0-7 needed by ovirt-host-dependencies-4.5.0-3.el8.x86_64
- nothing provides collectd-write_syslog >= 5.12.0-7 needed by ovirt-host-dependencies-4.5.0-3.el8.x86_64
- nothing provides collectd-netlink >= 5.12.0-7 needed by ovirt-host-dependencies-4.5.0-3.el8.x86_64
- nothing provides collectd-virt >= 5.12.0-7 needed by ovirt-host-dependencies-4.5.0-3.el8.x86_64
- nothing provides python3-os-brick needed by ovirt-host-dependencies-4.5.0-3.el8.x86_64
~~~
Running dnf with --nobest produces similar results, i.e.:
~~~
Error:
Problem: conflicting requests
- package ovirt-hosted-engine-setup-2.6.3-1.el8.noarch requires ovirt-host >= 4.5.0, but none of the providers can be installed
- package ovirt-hosted-engine-setup-2.6.4-1.el8.noarch requires ovirt-host >= 4.5.0, but none of the providers can be installed
- package ovirt-hosted-engine-setup-2.6.5-1.el8.noarch requires ovirt-host >= 4.5.0, but none of the providers can be installed
etc
~~~
In all of the repos (and mirrors) I've looked at, the latest version of collectd* for EL8 is 5.9.0-5, and I can't locate python3-os-brick at all.
pkgs.org says that even for CentOS 8 the collectd* packages in the EPEL EL8 repo are v5.9.0-5. According to rpmfind.net there is a 5.12.0-23 version of collectd* in the EPEL EL9 repo, but I have read that oVirt is not yet production-ready on EL9, so the EPEL EL9 repo isn't an option for me.
pkgs.org says that the only versions it knows about for python3-os-brick are for Debian and Ubuntu, and rpmfind.net doesn't list it at all (which makes sense if the only packages come from Debian-based distros).
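For reference, the following stock dnf commands (no oVirt-specific repo names are assumed) should show what the configured repos actually provide:
~~~
# Which repos are enabled at all?
dnf repolist --enabled
# Does anything enabled provide the missing packages?
dnf repoquery --whatprovides collectd
dnf repoquery --whatprovides python3-os-brick
# Every collectd build visible to dnf, to compare against the >= 5.12.0-7 requirement:
dnf --showduplicates list collectd
~~~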
Could someone please point me towards an EL8 repo where I can get the required packages, or, if I'm being totally stupid, tell me what I'm doing wrong?
Thanks in advance
Dulux-Oz
2 years, 2 months
Slow upload via oVirt Manager
by markeczzz@gmail.com
Hi!
I am trying to upload some rather large disk images via the oVirt Web Manager, but the transfer is rather slow. The network between my PC and the oVirt node I'm uploading through is 1 Gbps, but I'm only getting around 200 Mbps.
The last time I uploaded images was on oVirt 4.4, and I didn't have this problem then. Now I am on the latest version.
Any suggestions on how to troubleshoot this?
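In case it helps to narrow things down, a rough first pass (a sketch only: the iperf3 run and the ovirt-imageio service name are assumptions about the setup; on older versions the service may be called ovirt-imageio-daemon) would be to rule out the raw network path and the image transfer daemon:
~~~
# Raw TCP throughput between the PC and the node (run "iperf3 -s" on the node first):
iperf3 -c NODE_ADDRESS
# Health of the image transfer daemon on the node:
systemctl status ovirt-imageio
journalctl -u ovirt-imageio --since "1 hour ago"
~~~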
2 years, 2 months
vdsm.log full of "error" because the node doesn't run the "engine"
by Diego Ercolani
Hello,
I don't know if it's normal, but on all the nodes of the cluster (except the one running the engine) I see entries like:
~~~
2022-09-12 15:41:54,563+0000 INFO (jsonrpc/0) [api.virt] START getStats() from=::1,57578, vmId=8486ed73-df34-4c58-bfdc-7025dec63b7f (api:48)
2022-09-12 15:41:54,563+0000 INFO (jsonrpc/0) [api] FINISH getStats error=Virtual machine does not exist: {'vmId': '8486ed73-df34-4c58-bfdc-7025dec63b7f'} (api:129)
2022-09-12 15:41:54,563+0000 INFO (jsonrpc/0) [api.virt] FINISH getStats return={'status': {'code': 1, 'message': "Virtual machine does not exist: {'vmId': '8486ed73-df34-4c58-bfdc-7025dec63b7f'}"}} from=::1,57578, vmId=8486ed73-df34-4c58-bfdc-7025dec63b7f (api:54)
2022-09-12 15:41:54,563+0000 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call VM.getStats failed (error 1) in 0.00 seconds (__init__:312)
~~~
The UUID 8486ed73-df34-4c58-bfdc-7025dec63b7f is the UUID of the hosted-engine VM. Obviously it runs on only one node and, as I understand it, the other nodes query the VM state only so they can fence the VM if it is ever started on more than one node at a time... but having the log full of these "errors" is annoying.
I would prefer to see the word "error" only in meaningful contexts.
That is, assuming my interpretation is correct; if it is a true error, please tell me how to fix it. Thank you.
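In the meantime, a simple way to keep these recurring entries out of the way when reading the log would be something like the following (the UUID is the hosted-engine VM UUID quoted above):
~~~
# Hide the recurring "Virtual machine does not exist" getStats entries
# for the hosted-engine VM while reviewing vdsm.log:
grep -v "Virtual machine does not exist: {'vmId': '8486ed73-df34-4c58-bfdc-7025dec63b7f'}" \
    /var/log/vdsm/vdsm.log | less
~~~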
2 years, 2 months
vm migration failed with certificate issue
by parallax
ovirt 4.4.4.7
I'm not able to migrate VMs between hosts; vdsm reports the following error:
~~~
operation failed: Failed to connect to remote libvirt URI
qemu+tls://kvm4.imp.loc/system: authentication failed: Failed to verify
peer's certificate
~~~
The hosts' certificates were renewed recently, but the hosts have not been restarted since.
How can I fix this issue?
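If libvirtd is still holding the old certificates in memory, one possible fix (a sketch only; put the host into maintenance first, and note the certificate path is an assumption that may differ by version) is to restart the daemons that load the TLS certs, or simply reboot the host:
~~~
# On the affected host, after putting it into maintenance from the engine:
systemctl restart libvirtd vdsmd
# It is also worth confirming the expiry and issuer of the cert vdsm/libvirt use:
openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -noout -enddate -issuer
~~~
If the certificates themselves turn out to be the problem, the documented route is to put the host into maintenance and run "Enroll Certificate" from the Administration Portal.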
2 years, 2 months
remote-viewer "Change CD menu" contains ISOs only from one storage domain
by Alexander Murashkin
remote-viewer "Change CD menu" contains ISOs only from one storage domain.
There are two "data" storage domains in the datacenter. Both of them
contains ISOs and VM disk images.
The first VM has a disk image in the first storage domain and so is
shown in the domain's "Virtual Machine" tab. Similar, the second VM has
a disk image in the second storage domain and so is also shown in the
domain's "Virtual Machine" tab.
When the VMs are running, in their consoles, in "Change CD menu" only
ISOs from the first storage domain are listed.
I would consider it a bug. It is possible to attach ISO from any storage
domain to any VM. But it is not possible to see ISO from the second
domain in "Change CD menu".
Engine RPM: ovirt-engine-4.5.2.5-1.el8.noarch. Desktop RPMs:
libgovirt-0.3.8-3.fc36.x86_64 virt-viewer-11.0-2.fc36.x86_64
Shall I open a bug in Bugzilla? If so, which component is it related to -
libgovirt?
Alexander Murashkin
PS: I have also tried an ISO domain. No luck.
2 years, 2 months
VM remote console issue related to VNC and Root CA certs?
by Chris Smith
Hello,
I have built a small two-cluster "datacenter". One host is on oVirt 4.5.1 and
the other on 4.5.2. Each host is in its own cluster due to a CPU type difference.
I am able to build VMs on each host and they start without problems.
The issue I am fighting now is that I am unable to get remote-viewer.exe to
open a remote console window to a VM on the 2nd cluster I've built.
I'm trying to connect to a VM named "utm", which is running on host
"mini-node", in a separate cluster from the one where "ovirt-node" resides.
"ovirt-node" is the host where "ovirt-engine" resides.
I have attached the whole log from my Windows machine to this e-mail, and I
think this part is most relevant:
[image: image.png]
I've tried to install the SSL certs for each server (from tcp/9090) into my
Windows workstation's cert store, in both the user and computer personal and
trusted root CA stores, but I'm not sure the cert I have for my 2nd
host was generated correctly:
[image: image.png]
[image: image.png]
This is my oVirt engine cert (ignore the comcast.net domain name; that is
due to Linksys nonsense that I'm just running with):
[image: image.png]
[image: image.png]
I wonder: should the SSL cert for "mini-node" have been issued by
"ovirt-engine"? I think it's self-signed at the moment.
Here is what my console.vv looks like:
~~~
[virt-viewer]
type=vnc
host=mini-node.hsd1.fl.comcast.net
port=5900
password=<snip>
# Password is valid for 120 seconds.
delete-this-file=1
fullscreen=0
title=utm:%d
toggle-fullscreen=shift+f11
release-cursor=shift+f12
secure-attention=ctrl+alt+end
versions=rhev-win64:2.0-160;rhev-win32:2.0-160;rhel8:7.0-3;rhel7:2.0-6;rhel6:99.0-1
newer-version-url=http://www.ovirt.org/documentation/admin-guide/virt/console-client-resources
[ovirt]
host=ovirt-engine.hsd1.fl.comcast.net:443
vm-guid=cdfe33fc-4d6e-425d-b5d2-d999f56ea4ea
sso-token=<snip>
admin=1
ca=-----BEGIN CERTIFICATE-----
<snip>
~~~
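For what it's worth, one way to answer the issuance question directly on "mini-node" (a sketch; the /etc/pki/vdsm/libvirt-vnc paths are an assumption based on a typical oVirt host layout) is:
~~~
# Who issued the certificate the host presents for VNC consoles?
openssl x509 -in /etc/pki/vdsm/libvirt-vnc/server-cert.pem -noout -issuer -subject
# Does it verify against the CA that remote-viewer receives in console.vv?
openssl verify -CAfile /etc/pki/vdsm/libvirt-vnc/ca-cert.pem \
    /etc/pki/vdsm/libvirt-vnc/server-cert.pem
~~~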
Any help is appreciated,
Thank you,
Chris
2 years, 2 months
Cannot upgrade hosted-engine due to a LUN in a storage domain
by virden-gerald@aramark.com
I need to know if there are any CLI commands to manage storage domains. I have a completely empty LUN in a storage domain, but I can't use it for my hosted-engine upgrade. I think I need to put the storage domain into maintenance mode and remove it, but I don't have the GUI to do this. Does anyone know how to do this from the command line?
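If the engine API is still reachable, a rough sketch using the REST API (the FQDN, credentials, CA path, and ids are placeholders to substitute; deactivation goes through the data center's storagedomains sub-collection) would be:
~~~
# List storage domains and note the id of the one to retire:
curl -s --cacert /etc/pki/ovirt-engine/ca.pem -u admin@internal:PASSWORD \
     -H 'Accept: application/xml' \
     https://engine.example.com/ovirt-engine/api/storagedomains
# Put it into maintenance (deactivate) inside its data center:
curl -s --cacert /etc/pki/ovirt-engine/ca.pem -u admin@internal:PASSWORD \
     -X POST -H 'Content-Type: application/xml' -d '<action/>' \
     https://engine.example.com/ovirt-engine/api/datacenters/DC_ID/storagedomains/SD_ID/deactivate
~~~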
2 years, 2 months
hosted-engine --vm-status shows a ghost node that is no longer in the cluster: how to remove it?
by Diego Ercolani
engine 4.5.2.4
The issue is that in my cluster, when I run:
~~~
[root@ovirt-node3 ~]# hosted-engine --vm-status
--== Host ovirt-node3.ovirt (id: 1) status ==--
Host ID : 1
Host timestamp : 1633143
Score : 3400
Engine status : {"vm": "down", "health": "bad", "detail": "unknown", "reason": "vm not running on this host"}
Hostname : ovirt-node3.ovirt
Local maintenance : False
stopped : False
crc32 : 1cbfcd19
conf_on_shared_storage : True
local_conf_timestamp : 1633143
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1633143 (Wed Aug 31 14:37:53 2022)
host-id=1
score=3400
vm_conf_refresh_time=1633143 (Wed Aug 31 14:37:53 2022)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
--== Host ovirt-node1.ovirt (id: 2) status ==--
Host ID : 2
Host timestamp : 373629
Score : 0
Engine status : unknown stale-data
Hostname : ovirt-node1.ovirt
Local maintenance : True
stopped : False
crc32 : 12a6eb81
conf_on_shared_storage : True
local_conf_timestamp : 373630
Status up-to-date : False
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=373629 (Tue Jun 14 16:48:50 2022)
host-id=2
score=0
vm_conf_refresh_time=373630 (Tue Jun 14 16:48:50 2022)
conf_on_shared_storage=True
maintenance=True
state=LocalMaintenance
stopped=False
--== Host ovirt-node2.ovirt (id: 3) status ==--
Host ID : 3
Host timestamp : 434247
Score : 3400
Engine status : {"vm": "down", "health": "bad", "detail": "unknown", "reason": "vm not running on this host"}
Hostname : ovirt-node2.ovirt
Local maintenance : False
stopped : False
crc32 : badb3751
conf_on_shared_storage : True
local_conf_timestamp : 434247
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=434247 (Wed Aug 31 14:37:45 2022)
host-id=3
score=3400
vm_conf_refresh_time=434247 (Wed Aug 31 14:37:45 2022)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
--== Host ovirt-node4.ovirt (id: 4) status ==--
Host ID : 4
Host timestamp : 1646655
Score : 3400
Engine status : {"vm": "up", "health": "good", "detail": "Up"}
Hostname : ovirt-node4.ovirt
Local maintenance : False
stopped : False
crc32 : 1a16027e
conf_on_shared_storage : True
local_conf_timestamp : 1646655
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1646655 (Wed Aug 31 14:37:43 2022)
host-id=4
score=3400
vm_conf_refresh_time=1646655 (Wed Aug 31 14:37:43 2022)
conf_on_shared_storage=True
maintenance=False
state=EngineUp
stopped=False
~~~
The problem is that ovirt-node1.ovirt is no longer in the cluster: the host list shown in the UI correctly has no ovirt-node1; it appears only in the command-line output.
I did a full-text search in the engine DB, but node1 doesn't appear anywhere; even on the filesystem, a grep doesn't find anything.
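One note in case it is the same for others: the stale entry most likely lives in the hosted-engine metadata on the shared storage rather than in the engine DB, so a likely fix (a sketch; double-check the host id, which is 2 for ovirt-node1 in the output above) is to clean that host's metadata from one of the remaining hosts:
~~~
# Run on one of the remaining hosted-engine hosts; removes the stale
# metadata slot for host id 2 (ovirt-node1 in the --vm-status output):
hosted-engine --clean-metadata --host-id=2 --force-clean
~~~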
2 years, 2 months