Q: Node Install on Fresh CentOS Stream 8
by Andrei Verovski
Hi!
I'm going to install the oVirt node software on CentOS Stream 8 (I don't use
the node image from Red Hat because of custom monitoring scripts).
Do I need to disable any stock repos (e.g. AppStream) to avoid installing
anything unsuitable for the oVirt node software, e.g. a newer version than
the one in the oVirt repo?
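In case it helps, the stock repos can usually stay enabled, since the ovirt-release package configures the required repos and module filters; if a stock package ever shadows an oVirt one, lowering the stock repo's priority is normally enough. A rough sketch (repo ids are illustrative; check `dnf repolist` on your host for the real ones, and `dnf config-manager` needs dnf-plugins-core):

```
# Compare candidate versions before touching anything
dnf --showduplicates list vdsm ovirt-host

# If AppStream offers a newer build than the oVirt repo,
# deprioritize it (higher number = lower priority)
dnf config-manager --save --setopt="appstream.priority=99"
```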
Thanks in advance.
Andrei
2 years, 4 months
Re: Problem Upgrading DWH from 4.5.1 to 4.5.2
by Yedidyah Bar David
On Tue, Aug 23, 2022 at 8:53 AM Nur Imam Febrianto <nur_imam(a)outlook.com> wrote:
>
> Hi,
>
>
>
> I keep getting this kind of error whenever I try to run engine-setup to upgrade my separate DWH server:
>
> [ INFO ] Stage: Initializing
>
> [ INFO ] Stage: Environment setup
>
> Configuration files: /etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf, /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf
>
> Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20220823110720-28vs78.log
>
> Version: otopi-1.10.2 (otopi-1.10.2-1.el8)
>
> [ INFO ] Stage: Environment packages setup
>
> [ INFO ] Stage: Programs detection
>
> [ INFO ] Stage: Environment customization
>
>
>
> --== PRODUCT OPTIONS ==--
>
>
>
> [ ERROR ] Failed to execute stage 'Environment customization': ok_to_renew_cert() missing 2 required positional arguments: 'short_life' and 'environment'
>
> [ INFO ] Stage: Clean up
>
>
>
> Can anybody here give an idea how to solve this issue?
It's a bug; would you like to report it in Bugzilla?
This should fix it:
https://github.com/oVirt/ovirt-dwh/pull/48
Best regards,
--
Didi
2 years, 4 months
Problem with VNC + TLS
by nixmagic@gmail.com
Why does the host end up with VNC + TLS enabled after executing "Enroll Certificate" (Modify qemu config file - enable TLS), even though "Enable VNC Encryption" is not enabled in the cluster settings?
If you do "Reinstall" for the host, then VNC is configured without TLS support (Modify qemu config file - disable TLS), which is correct according to the cluster settings.
In addition to this, the problem is aggravated by the bug https://bugzilla.redhat.com/show_bug.cgi?id=1757793
2 years, 4 months
Hosted engine restarting
by markeczzz@gmail.com
Hi!
In the last few days I have been having a problem with the Hosted Engine: it keeps restarting, sometimes after a few minutes, sometimes after a few hours.
I haven't made any changes to oVirt or the network in that time.
The version is 4.4.10.7-1.el8 (this was also the installation version).
Here are the logs:
Agent.log------------------------------
MainThread::INFO::2022-08-21 09:48:36,200::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineUp (score: 2440)
MainThread::INFO::2022-08-21 09:48:36,200::hosted_engine::525::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Best remote host node3.ovirt.example.org (id: 3, score: 2440)
MainThread::ERROR::2022-08-21 09:48:46,212::states::398::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Host node3.ovirt.example.org (id 3) score is significantly better than local score, shutting down VM on this host
MainThread::INFO::2022-08-21 09:48:46,641::brokerlink::73::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineUp-EngineStop) sent? ignored
MainThread::INFO::2022-08-21 09:48:46,706::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStop (score: 3400)
MainThread::INFO::2022-08-21 09:48:46,706::hosted_engine::525::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Best remote host node3.ovirt.example.org (id: 3, score: 3400)
MainThread::INFO::2022-08-21 09:48:56,714::hosted_engine::934::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) Shutting down vm using `/usr/sbin/hosted-engine --vm-shutdown`
MainThread::INFO::2022-08-21 09:48:56,871::hosted_engine::941::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) stdout:
MainThread::INFO::2022-08-21 09:48:56,871::hosted_engine::942::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) stderr:
MainThread::ERROR::2022-08-21 09:48:56,871::hosted_engine::950::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) Engine VM stopped on localhost
MainThread::INFO::2022-08-21 09:48:56,880::state_decorators::102::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) Timeout set to Sun Aug 21 09:53:56 2022 while transitioning <class 'ovirt_hosted_engine_ha.agent.states.EngineStop'> -> <class 'ovirt_hosted_engine_ha.agent.states.EngineStop'>
MainThread::INFO::2022-08-21 09:48:56,959::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStop (score: 3400)
MainThread::INFO::2022-08-21 09:49:06,977::states::537::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine vm not running on local host
MainThread::INFO::2022-08-21 09:49:06,983::state_decorators::95::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) Timeout cleared while transitioning <class 'ovirt_hosted_engine_ha.agent.states.EngineStop'> -> <class 'ovirt_hosted_engine_ha.agent.states.EngineDown'>
MainThread::INFO::2022-08-21 09:49:07,173::brokerlink::73::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStop-EngineDown) sent? ignored
MainThread::INFO::2022-08-21 09:49:07,795::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineDown (score: 3400)
MainThread::INFO::2022-08-21 09:49:16,811::states::472::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine down and local host has best score (3400), attempting to start engine VM
MainThread::INFO::2022-08-21 09:49:16,998::brokerlink::73::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineDown-EngineStart) sent? ignored
MainThread::INFO::2022-08-21 09:49:17,179::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStart (score: 3400)
MainThread::INFO::2022-08-21 09:49:17,195::hosted_engine::895::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state) Ensuring VDSM state is clear for engine VM
MainThread::INFO::2022-08-21 09:49:17,200::hosted_engine::915::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state) Cleaning state for non-running VM
MainThread::INFO::2022-08-21 09:49:18,211::hosted_engine::907::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state) Vdsm state for VM clean
MainThread::INFO::2022-08-21 09:49:18,212::hosted_engine::853::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) Starting vm using `/usr/sbin/hosted-engine --vm-start`
MainThread::INFO::2022-08-21 09:49:18,814::hosted_engine::862::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) stdout: VM in WaitForLaunch
MainThread::INFO::2022-08-21 09:49:18,814::hosted_engine::863::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) stderr: Command VM.getStats with args {'vmID': 'cc7931ff-8124-4724-9242-abea2ab5bf42'} failed:
(code=1, message=Virtual machine does not exist: {'vmId': 'cc7931ff-8124-4724-9242-abea2ab5bf42'})
MainThread::INFO::2022-08-21 09:49:18,814::hosted_engine::875::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) Engine VM started on localhost
MainThread::INFO::2022-08-21 09:49:18,999::brokerlink::73::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStart-EngineStarting) sent? ignored
MainThread::INFO::2022-08-21 09:49:19,008::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400)
MainThread::INFO::2022-08-21 09:49:29,027::states::741::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) VM is powering up..
MainThread::INFO::2022-08-21 09:49:29,033::state_decorators::102::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) Timeout set to Sun Aug 21 09:59:29 2022 while transitioning <class 'ovirt_hosted_engine_ha.agent.states.EngineStarting'> -> <class 'ovirt_hosted_engine_ha.agent.states.EngineStarting'>
MainThread::INFO::2022-08-21 09:49:29,109::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400)
MainThread::INFO::2022-08-21 09:49:38,121::states::741::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) VM is powering up..
MainThread::INFO::2022-08-21 09:49:38,195::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400)
MainThread::INFO::2022-08-21 09:49:48,218::state_decorators::95::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) Timeout cleared while transitioning <class 'ovirt_hosted_engine_ha.agent.states.EngineStarting'> -> <class 'ovirt_hosted_engine_ha.agent.states.EngineUp'>
MainThread::INFO::2022-08-21 09:49:48,403::brokerlink::73::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStarting-EngineUp) sent? ignored
MainThread::INFO::2022-08-21 09:49:48,713::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineUp (score: 3400)
MainThread::INFO::2022-08-21 09:49:58,725::states::406::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine vm running on localhost
Broker.log------------------------------
Thread-4::INFO::2022-08-21 09:47:59,342::cpu_load_no_engine::142::cpu_load_no_engine.CpuLoadNoEngine::(calculate_load) System load total=0.0241, engine=0.0013, non-engine=0.0228
Thread-3::INFO::2022-08-21 09:48:01,311::mem_free::51::mem_free.MemFree::(action) memFree: 96106
Thread-5::INFO::2022-08-21 09:48:05,612::engine_health::246::engine_health.EngineHealth::(_result_from_stats) VM is up on this host with healthy engine
Thread-2::INFO::2022-08-21 09:48:08,591::mgmt_bridge::65::mgmt_bridge.MgmtBridge::(action) Found bridge ovirtmgmt in up state
Thread-1::WARNING::2022-08-21 09:48:10,352::network::121::network.Network::(_dns) DNS query failed:
; <<>> DiG 9.11.36-RedHat-9.11.36-3.el8 <<>> +tries=1 +time=5 +tcp
;; global options: +cmd
;; connection timed out; no servers could be reached
Thread-1::WARNING::2022-08-21 09:48:10,352::network::92::network.Network::(action) Failed to verify network status, (4 out of 5)
Thread-3::INFO::2022-08-21 09:48:11,389::mem_free::51::mem_free.MemFree::(action) memFree: 96089
Thread-5::INFO::2022-08-21 09:48:15,707::engine_health::246::engine_health.EngineHealth::(_result_from_stats) VM is up on this host with healthy engine
Thread-2::INFO::2022-08-21 09:48:18,662::mgmt_bridge::65::mgmt_bridge.MgmtBridge::(action) Found bridge ovirtmgmt in up state
Thread-1::WARNING::2022-08-21 09:48:18,879::network::121::network.Network::(_dns) DNS query failed:
; <<>> DiG 9.11.36-RedHat-9.11.36-3.el8 <<>> +tries=1 +time=5 +tcp
;; global options: +cmd
;; connection timed out; no servers could be reached
Thread-3::INFO::2022-08-21 09:48:21,467::mem_free::51::mem_free.MemFree::(action) memFree: 96072
Thread-1::WARNING::2022-08-21 09:48:24,904::network::121::network.Network::(_dns) DNS query failed:
; <<>> DiG 9.11.36-RedHat-9.11.36-3.el8 <<>> +tries=1 +time=5 +tcp
;; global options: +cmd
;; connection timed out; no servers could be reached
Thread-5::INFO::2022-08-21 09:48:25,808::engine_health::246::engine_health.EngineHealth::(_result_from_stats) VM is up on this host with healthy engine
Thread-2::INFO::2022-08-21 09:48:28,740::mgmt_bridge::65::mgmt_bridge.MgmtBridge::(action) Found bridge ovirtmgmt in up state
Thread-1::WARNING::2022-08-21 09:48:30,416::network::121::network.Network::(_dns) DNS query failed:
; <<>> DiG 9.11.36-RedHat-9.11.36-3.el8 <<>> +tries=1 +time=5 +tcp
;; global options: +cmd
;; connection timed out; no servers could be reached
Thread-1::WARNING::2022-08-21 09:48:30,416::network::92::network.Network::(action) Failed to verify network status, (2 out of 5)
Thread-3::INFO::2022-08-21 09:48:31,545::mem_free::51::mem_free.MemFree::(action) memFree: 96064
Thread-5::INFO::2022-08-21 09:48:35,909::engine_health::246::engine_health.EngineHealth::(_result_from_stats) VM is up on this host with healthy engine
Thread-1::WARNING::2022-08-21 09:48:35,940::network::121::network.Network::(_dns) DNS query failed:
; <<>> DiG 9.11.36-RedHat-9.11.36-3.el8 <<>> +tries=1 +time=5 +tcp
;; global options: +cmd
;; connection timed out; no servers could be reached
Thread-1::WARNING::2022-08-21 09:48:37,480::network::92::network.Network::(action) Failed to verify network status, (4 out of 5)
Thread-2::INFO::2022-08-21 09:48:38,809::mgmt_bridge::65::mgmt_bridge.MgmtBridge::(action) Found bridge ovirtmgmt in up state
Thread-3::INFO::2022-08-21 09:48:41,623::mem_free::51::mem_free.MemFree::(action) memFree: 96014
Thread-1::INFO::2022-08-21 09:48:42,549::network::88::network.Network::(action) Successfully verified network status
Thread-5::INFO::2022-08-21 09:48:46,011::engine_health::246::engine_health.EngineHealth::(_result_from_stats) VM is up on this host with healthy engine
Listener::ERROR::2022-08-21 09:48:46,639::notifications::42::ovirt_hosted_engine_ha.broker.notifications.Notifications::(send_email) (530, b'5.7.1 Authentication required', 'alerts(a)example.org.hr')
At first I thought that it was related to these bugs:
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/2HTD5WR43M5M...
https://bugzilla.redhat.com/show_bug.cgi?id=1984356
But in this oVirt version that bug should already be fixed.
I tried monitoring the network, but the error keeps happening even when network load is low.
I ran continuous dig and ping commands against VMs running on the same host as the Hosted Engine and saw no network problems, not a single dropped connection.
Any solutions or next steps I should try?
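As a quick triage step, the flap sequence can be pulled straight out of agent.log; a small sketch (the grep pattern is derived from the notify lines above, and the heredoc stands in for the real /var/log/ovirt-hosted-engine-ha/agent.log):

```shell
# Summarize the HA state transitions; in practice point grep at
# /var/log/ovirt-hosted-engine-ha/agent.log instead of this heredoc,
# which holds sample lines from the log above.
transitions=$(grep -o 'state_transition ([A-Za-z]*-[A-Za-z]*)' <<'EOF'
MainThread::INFO::...BrokerLink::(notify) Success, was notification of state_transition (EngineUp-EngineStop) sent? ignored
MainThread::INFO::...BrokerLink::(notify) Success, was notification of state_transition (EngineStop-EngineDown) sent? ignored
MainThread::INFO::...BrokerLink::(notify) Success, was notification of state_transition (EngineDown-EngineStart) sent? ignored
EOF
)
echo "$transitions"
```

A repeating EngineUp-EngineStop / EngineDown-EngineStart cycle, paired with the DNS warnings in broker.log, points at the liveness checks rather than the engine itself.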
2 years, 4 months
Problem Upgrading DWH from 4.5.1 to 4.5.2
by Nur Imam Febrianto
Hi,
I keep getting this kind of error whenever I try to run engine-setup to upgrade my separate DWH server:
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files: /etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf, /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf
Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20220823110720-28vs78.log
Version: otopi-1.10.2 (otopi-1.10.2-1.el8)
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment customization
--== PRODUCT OPTIONS ==--
[ ERROR ] Failed to execute stage 'Environment customization': ok_to_renew_cert() missing 2 required positional arguments: 'short_life' and 'environment'
[ INFO ] Stage: Clean up
Can anybody here give an idea how to solve this issue?
Thanks in advance.
Regards,
Nur Imam Febrianto
2 years, 4 months
Selecting login profile with LDAP integration
by Dave Lennox
I am trying to get LDAP integration working with FreeIPA. The instructions appear to be the same across the RHEV and oVirt administration guides (and other sites that replicate that information) and are based on oVirt 4.4 (I am running 4.5.2).
I have it configured as per the oVirt admin guide:
- the test that is part of the setup tool returned success
- I have created an ovirt-admins LDAP group, which oVirt found successfully, and I have created a matching new group within oVirt.
But how do I actually log in with LDAP user credentials?
The documentation refers to selecting the profile that was configured during LDAP setup, but that selector no longer seems to be offered on the login screen since 4.5?
Keycloak reports that it is trying to validate the login against the internal profile, so I assume it isn't able to try multiple authentication sources?
2022-08-19 14:46:55,112+10 WARN [org.keycloak.events] (default task-12) [] type=LOGIN_ERROR, realmId=2429db03-71ca-4500-a8ee-e25e01c7a5e3, clientId=ovirt-engine-internal, userId=null, ipAddress=192.168.0.70, error=user_not_found, auth_method=openid-connect, auth_type=code, redirect_uri=https://sr-utl04.ovirt.lennoxconsulting.com.au/ovirt-engine/callback, code_id=d9f6400a-4d2f-4d9f-8407-e40db360a56b, username=david(a)lennoxconsulting.com.au, authSessionParentId=d9f6400a-4d2f-4d9f-8407-e40db360a56b, authSessionTabId=He1IhSgIZP8
So how do I set up the engine to allow me to select the profile to use on the login screen? I have also tried using LDAP email addresses as the username, with the same result.
- David.
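For reference, the profile name that the (pre-Keycloak) login page offered comes from the authn extension file that ovirt-engine-extension-aaa-ldap-setup writes; a typical file looks roughly like this (file name and values here are illustrative, keys as documented for the aaa-ldap extension):

```
# /etc/ovirt-engine/extensions.d/example-authn.properties (illustrative)
ovirt.engine.extension.name = example-authn
ovirt.engine.extension.bindings.method = jbossmodule
ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine.extension.aaa.ldap
ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engine.extension.aaa.ldap.AuthnExtension
ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authn
ovirt.engine.aaa.authn.profile.name = example
ovirt.engine.aaa.authn.authz.plugin = example-authz
config.profile.file.1 = ../aaa/example.properties
```

The `ovirt.engine.aaa.authn.profile.name` value is the profile string in question; the `clientId=ovirt-engine-internal` in the Keycloak log above suggests the login is indeed being routed to the internal realm rather than this extension.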
2 years, 4 months
Having issue deploying self-hosted engine on additional nodes
by Henry Wong
Hi,
I have a two-node 4.5.2 cluster, and the self-hosted engine was deployed and running on the first node. When I tried to deploy it on the second node via the manager UI -> Hosts -> Edit host -> Hosted Engine -> Deploy, once I hit OK the window closed and it looked as if nothing had happened. There is no message or error that I can find in the GUI. Does anyone have any suggestions?
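A silent close like that usually still leaves a trace server-side; watching the engine's standard logs while repeating the action may show the real error (default log paths on the engine VM):

```
tail -f /var/log/ovirt-engine/engine.log /var/log/ovirt-engine/ui.log
# then click Deploy again and watch for ERROR lines
```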
Thanks
Henry
2 years, 4 months
DWH_DELETE_JOB not start or not worked
by Sergey D
Hello, I upgraded oVirt from 4.3.10.4 to 4.4.10.7-1.
The update was successful, but I noticed that a couple of days before the
update, the "samples" tables stopped being cleared.
DWH is configured with the Basic scale profile.
cat /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-scale.conf
DWH_TABLES_KEEP_SAMPLES=24
DWH_TABLES_KEEP_HOURLY=720
DWH_TABLES_KEEP_DAILY=0
cat /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log
...
2022-08-22 15:13:41|ETL Service Started
ovirtEngineDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
hoursToKeepDaily|0
hoursToKeepHourly|720
ovirtEngineDbPassword|**********************
runDeleteTime|3
ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
runInterleave|60
limitRows|limit 1000
ovirtEngineHistoryDbUser|ovirt_engine_history
ovirtEngineDbUser|engine
deleteIncrement|10
timeBetweenErrorEvents|300000
hoursToKeepSamples|24
deleteMultiplier|1000
lastErrorSent|2011-07-03 12:46:47.000000
etlVersion|4.4.10
dwhAggregationDebug|true
...
2022-08-22 15:00:00|lxnXpZ|fKyuLo|lvrfJZ|130277|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|_FvEy8LzqEeCaj-T1n0SCFw|4.4|Default||begin||
2022-08-22 15:00:00 Statistics sync ended. Duration: 394 milliseconds
2022-08-22 15:00:00 Aggregation to Hourly ended.
2022-08-22 15:01:00|lxnXpZ|fKyuLo|lvrfJZ|130277|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|_FvEy8LzqEeCaj-T1n0SCFw|4.4|Default||end|success|60000
2022-08-22 15:01:00|9mBhki|fKyuLo|lvrfJZ|130277|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|_FvEy8LzqEeCaj-T1n0SCFw|4.4|Default||begin||
...
# select * from history_configuration;
var_name | var_value | var_datetime
-------------------+-----------+------------------------
default_language | en_US |
firstSync | false | 2016-12-17 07:35:00+03
lastDayAggr | | 2022-08-18 01:00:00+03
MinimalETLVersion | 4.4.7 |
lastHourAggr | | 2022-08-22 15:00:00+03
HourlyAggFailed | false |
# select min(history_datetime),max(history_datetime) from public.host_samples_history;
min | max
----------------------------+--------------------------
2022-08-17 03:09:31.797+03 | 2022-08-22 18:43:00.1+03
# select min(history_datetime),max(history_datetime) from public.host_hourly_history;
min | max
------------------------+------------------------
2022-07-19 04:00:00+03 | 2022-08-22 16:00:00+03
# select min(history_datetime),max(history_datetime) from public.host_daily_history;
min | max
------------+------------
2022-08-17 | 2022-08-18
I tried configuring the start time with DWH_DELETE_JOB_HOUR (UTC), but there
was no result.
With DWH_AGGREGATION_DEBUG=true enabled, the logs show that aggregation runs
on schedule, but the cleanup job never starts...
I would delete the data manually, but I'm worried about the relationships
between the data.
How do I start a manual deletion task?
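For reference, under the retention settings above (DWH_TABLES_KEEP_SAMPLES=24), one pass of the delete job would amount to roughly the SQL below. This is only a sketch using the table from the queries above; the samples tables are not referenced by other tables' foreign keys in a way that blocks this, but back up ovirt_engine_history first, and repeat for the other *_samples_history tables:

```
BEGIN;
DELETE FROM host_samples_history
 WHERE history_datetime < now() - interval '24 hours';
-- inspect before committing: SELECT count(*) FROM host_samples_history;
COMMIT;
```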
2 years, 4 months
Cleaning a DWH job does not work
by gmasta2000@gmail.com
Hello, I upgraded oVirt from 4.3.10.4 to 4.4.10.7-1.
The update was successful, but I noticed that a couple of days before the update, the "samples" tables stopped being cleared.
DWH is configured with the Basic scale profile.
cat /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-scale.conf
DWH_TABLES_KEEP_SAMPLES=24
DWH_TABLES_KEEP_HOURLY=720
DWH_TABLES_KEEP_DAILY=0
cat /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log
...
2022-08-22 15:13:41|ETL Service Started
ovirtEngineDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
hoursToKeepDaily|0
hoursToKeepHourly|720
ovirtEngineDbPassword|**********************
runDeleteTime|3
ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
runInterleave|60
limitRows|limit 1000
ovirtEngineHistoryDbUser|ovirt_engine_history
ovirtEngineDbUser|engine
deleteIncrement|10
timeBetweenErrorEvents|300000
hoursToKeepSamples|24
deleteMultiplier|1000
lastErrorSent|2011-07-03 12:46:47.000000
etlVersion|4.4.10
dwhAggregationDebug|true
...
2022-08-22 15:00:00|lxnXpZ|fKyuLo|lvrfJZ|130277|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|_FvEy8LzqEeCaj-T1n0SCFw|4.4|Default||begin||
2022-08-22 15:00:00 Statistics sync ended. Duration: 394 milliseconds
2022-08-22 15:00:00 Aggregation to Hourly ended.
2022-08-22 15:01:00|lxnXpZ|fKyuLo|lvrfJZ|130277|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|_FvEy8LzqEeCaj-T1n0SCFw|4.4|Default||end|success|60000
2022-08-22 15:01:00|9mBhki|fKyuLo|lvrfJZ|130277|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|_FvEy8LzqEeCaj-T1n0SCFw|4.4|Default||begin||
...
# select * from history_configuration;
var_name | var_value | var_datetime
-------------------+-----------+------------------------
default_language | en_US |
firstSync | false | 2016-12-17 07:35:00+03
lastDayAggr | | 2022-08-18 01:00:00+03
MinimalETLVersion | 4.4.7 |
lastHourAggr | | 2022-08-22 15:00:00+03
HourlyAggFailed | false |
# select min(history_datetime),max(history_datetime) from public.host_samples_history;
min | max
----------------------------+--------------------------
2022-08-17 03:09:31.797+03 | 2022-08-22 18:43:00.1+03
# select min(history_datetime),max(history_datetime) from public.host_hourly_history;
min | max
------------------------+------------------------
2022-07-19 04:00:00+03 | 2022-08-22 16:00:00+03
# select min(history_datetime),max(history_datetime) from public.host_daily_history;
min | max
------------+------------
2022-08-17 | 2022-08-18
I tried configuring the start time with DWH_DELETE_JOB_HOUR (UTC), but there was no result.
With DWH_AGGREGATION_DEBUG=true enabled, the logs show that aggregation runs on schedule, but the cleanup job never starts...
I would delete the data manually, but I'm worried about the relationships between the data.
2 years, 4 months
VM with attached MBS disk can't start / can't hot-plug MBS disk
by Aliaksei Hrechushkin
oVirt software version: 4.5.1.2-1.el8
I have a storage domain of type Managed Block Storage (Ceph backend) using the driver cinder.volume.drivers.rbd.RBDDriver (connected following the official instructions).
I can successfully create and attach disks to virtual machines.
But:
Case 1:
I can't run a VM with an attached MBS disk.
Log files:
engine.log
```
2022-08-22 10:30:46,929Z INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33748) [4a367039] START, CreateVDSCommand( CreateVDSCommandParameters:{hostId='ca0b6c75-6f4d-4706-8420-a8a85834a26e', vmId='ceb677b3-d0ad-4e99-8ca8-226ce9cf7621', vm='VM [clone_ahrechushkin-2]'}), log id: 4a63fb47
2022-08-22 10:30:46,933Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33748) [4a367039] START, CreateBrokerVDSCommand(HostName = ovih01, CreateVDSCommandParameters:{hostId='ca0b6c75-6f4d-4706-8420-a8a85834a26e', vmId='ceb677b3-d0ad-4e99-8ca8-226ce9cf7621', vm='VM [clone_ahrechushkin-2]'}), log id: 279ca1cc
2022-08-22 10:30:47,217Z ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33748) [4a367039] Failed in 'CreateBrokerVDS' method, for vds: 'ovih01'; host: 'ovih01': null
2022-08-22 10:30:47,221Z ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33748) [4a367039] Command 'CreateBrokerVDSCommand(HostName = ovih01, CreateVDSCommandParameters:{hostId='ca0b6c75-6f4d-4706-8420-a8a85834a26e', vmId='ceb677b3-d0ad-4e99-8ca8-226ce9cf7621', vm='VM [clone_ahrechushkin-2]'})' execution failed: null
2022-08-22 10:30:47,221Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33748) [4a367039] FINISH, CreateBrokerVDSCommand, return: , log id: 279ca1cc
2022-08-22 10:30:47,223Z ERROR [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33748) [4a367039] Failed to create VM: java.lang.NullPointerException
2022-08-22 10:30:47,228Z ERROR [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33748) [4a367039] Command 'CreateVDSCommand( CreateVDSCommandParameters:{hostId='ca0b6c75-6f4d-4706-8420-a8a85834a26e', vmId='ceb677b3-d0ad-4e99-8ca8-226ce9cf7621', vm='VM [clone_ahrechushkin-2]'})' execution failed: java.lang.NullPointerException
2022-08-22 10:30:47,229Z INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33748) [4a367039] FINISH, CreateVDSCommand, return: Down, log id: 4a63fb47
```
cinderlib.log
```
2022-08-22 10:28:17,260 - cinderlib-client - INFO - Cloning volume '3f763b9c-1bd2-4174-8603-c30587cb4e03' to 'c5a301c7-fc59-400f-8733-61e34c8fadf1' [716f43c7-09a7-4a43-9a3a-461db2cc3653]
2022-08-22 10:30:39,781 - cinder.volume.driver - INFO - Driver hasn't implemented _init_vendor_properties()
2022-08-22 10:30:39,840 - cinderlib-client - INFO - Connecting volume 'c5a301c7-fc59-400f-8733-61e34c8fadf1', to host with info '{"ip":null,"nqn":"nqn.2014-08.org.nvmexpress:uuid:00000000-0000-0000-0000-000000000000","host":"ovih01","uuid":"b0d5ac8a-4a14-46cb-a114-1bc8ffbc7cec","os_type":"linux","platform":"x86_64","found_dsc":"","initiator":"iqn.2020-01.io.icdc.tby:ovih01","multipath":true,"system uuid":"48b0666c-082c-11ea-a3b1-3a68dd1a0257","do_local_attach":false}' [7307cc9]
2022-08-22 10:30:44,260 - cinder.volume.driver - INFO - Driver hasn't implemented _init_vendor_properties()
2022-08-22 10:30:44,395 - cinderlib-client - INFO - Saving connection <cinderlib.Connection object 2a531747-4ae2-4dae-bc13-c341ec30eece on backend mbs_domain> for volume 'c5a301c7-fc59-400f-8733-61e34c8fadf1' [4a367039]
2022-08-22 10:30:51,363 - cinder.volume.driver - INFO - Driver hasn't implemented _init_vendor_properties()
2022-08-22 10:30:51,485 - cinderlib-client - INFO - Disconnecting volume 'c5a301c7-fc59-400f-8733-61e34c8fadf1' [62acd0ed]
```
Case 2:
I can't activate an attached disk on a running VM.
Strange thing: while the disk is deactivated and the VM is powered on, I see this on the host:
[root@ovih01 ~]# rbd showmapped
id pool namespace image snap device
0 cinder volume-c5a301c7-fc59-400f-8733-61e34c8fadf1 - /dev/rbd0
Also, I can't activate the disk; it fails with "ERROR: disk already attached to vm". Of course, I don't see the disk from the guest OS.
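If the failed start leaves such a stale mapping behind, it may need to be removed before the disk can be attached again; a cautious sketch (device name taken from the `rbd showmapped` output above, and only safe once you have confirmed no process is using the device):

```
rbd showmapped
# unmap the leftover device only if nothing is using it
rbd unmap /dev/rbd0
```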
2 years, 4 months