oVirt 4.5.2 new ISO uploads are not usable
by Christoph Timm
Hi list,
we have uploaded new ISO files to the data domain that we use for
ISO images, and found that we cannot boot from these ISOs.
Older ISOs still work, but none of the new ones do.
I don't really have any idea what to check or where to look, so any kind
of help would be much appreciated.
Best regards
Christoph
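One first check worth doing here is whether the uploaded ISOs are byte-identical to the source files: a truncated or corrupted upload would leave older ISOs bootable while every new one fails. A minimal sketch (verify_iso is a hypothetical helper, not an oVirt tool):

```shell
# Compare an ISO against a known-good checksum (e.g. the sum published
# by the distro). verify_iso is a hypothetical helper, not an oVirt tool.
verify_iso() {
    local iso_file=$1 expected_sum=$2
    local actual_sum
    actual_sum=$(sha256sum "$iso_file" | awk '{print $1}')
    if [ "$actual_sum" = "$expected_sum" ]; then
        echo "OK: checksum matches"
    else
        echo "MISMATCH: got $actual_sum, expected $expected_sum" >&2
        return 1
    fi
}
```

If the checksum of the ISO as stored on the domain differs from the published one, the problem is in the upload path rather than in the boot configuration.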
Self-hosted engine deploy failed
by Henry Wong
Hi,
I have been trying to deploy the engine from the Cockpit UI of an oVirt Node 4.5.2 host. The system was freshly installed from the ISO. The deployment failed at step 3, "Prepare VM". The last error in /var/log/ovirt-hosted-engine-setup/*.log says something about "SSO authentication access_denied : Cannot authenticate user Invalid user credentials." Has anyone seen this before? Thanks
```
2022-08-18 18:57:20,334-0500 ERROR ansible failed {
"ansible_host": "localhost",
"ansible_playbook": "/usr/share/ovirt-hosted-engine-setup/he_ansible/trigger_role.yml",
"ansible_result": {
"_ansible_no_log": false,
"attempts": 50,
"changed": false,
"exception": "Traceback (most recent call last):\n File \"/tmp/ansible_ovirt_auth_payload_1t3ixb8c/ansible_ovirt_auth_payload.zip/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_auth.py\", line 287, in main\n File \"/usr/lib64/python3.6/site-packages/ovirtsdk4/__init__.py\", line 382, in authenticate\n self._sso_token = self._get_access_token()\n File \"/usr/lib64/python3.6/site-packages/ovirtsdk4/__init__.py\", line 627, in _get_access_token\n sso_error[1]\novirtsdk4.AuthError: Error during SSO authentication access_denied : Cannot authenticate user Invalid user credentials.\n",
"invocation": {
"module_args": {
"ca_file": null,
"compress": true,
"headers": null,
"hostname": null,
"insecure": true,
"kerberos": false,
"ovirt_auth": null,
"password": null,
"state": "present",
"timeout": 0,
"token": null,
"url": null,
"username": null
}
},
"msg": "Error during SSO authentication access_denied : Cannot authenticate user Invalid user credentials."
},
"ansible_task": "Obtain SSO token using username/password credentials",
"ansible_type": "task",
"status": "FAILED",
"task_duration": 537
}
```
Static hostname: xxxxxxxxx
Icon name: computer-server
Chassis: server
Machine ID: 6090168dcd724b04be97d57b01c2a11c
Boot ID: 5fba59e1914e48a1aad23fc02641bf3c
Operating System: oVirt Node 4.5.2
CPE OS Name: cpe:/o:centos:centos:8
Kernel: Linux 4.18.0-408.el8.x86_64
Architecture: x86-64
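For debugging, the credential check that failed in the playbook can be reproduced by hand against the engine VM's SSO endpoint (the documented password-grant request of the REST API). A sketch that only assembles the curl command, so the placeholders — hostname, user, password — can be filled in and reviewed before running it:

```shell
# Build (but do not run) the curl command for an oVirt SSO
# password-grant token request. ENGINE_FQDN and the credentials
# below are placeholders for your deployment.
ENGINE_FQDN="engine.example.com"
SSO_URL="https://${ENGINE_FQDN}/ovirt-engine/sso/oauth/token"

sso_token_cmd() {
    local user=$1 pass=$2
    printf "curl -k -s -X POST '%s' -d 'grant_type=password&scope=ovirt-app-api&username=%s&password=%s'\n" \
        "$SSO_URL" "$user" "$pass"
}

# Print the command so the password can be checked before executing:
sso_token_cmd 'admin@internal' 'changeme'
```

If this request also returns access_denied with the admin password you gave the deploy script, the problem is on the engine VM's SSO side rather than in the deployment answers.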
Q: Node Install on Fresh CentOS Stream 8
by Andrei Verovski
Hi !
I'm going to install the oVirt node software on CentOS 8 Stream (I don't use
the node image from Red Hat because of custom monitoring scripts).
Do I need to disable any stock repos (e.g. AppStream) in order to avoid
installing anything unsuitable for the oVirt node software, e.g. a newer
package version than the one in the oVirt repo?
Thanks in advance.
Andrei
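For what it's worth, rather than disabling AppStream outright, one common approach is to give the oVirt repos a higher dnf priority so their package versions win. A sketch of the idea, assuming the dnf `priority` repo option (lower number = higher priority); the repo ids here are illustrative:

```
# /etc/yum.repos.d/*.repo fragments (illustrative repo ids):
[appstream]
priority=99

[ovirt-4.5]
priority=1
```

With that in place, packages available from both repos resolve to the oVirt copy, without losing access to the rest of AppStream.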
Re: Problem Upgrading DWH from 4.5.1 to 4.5.2
by Yedidyah Bar David
On Tue, Aug 23, 2022 at 8:53 AM Nur Imam Febrianto <nur_imam(a)outlook.com> wrote:
>
> Hi,
>
>
>
> I keep getting this error whenever I try to run engine-setup to upgrade my separate DWH server:
>
> [ INFO ] Stage: Initializing
>
> [ INFO ] Stage: Environment setup
>
> Configuration files: /etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf, /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf
>
> Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20220823110720-28vs78.log
>
> Version: otopi-1.10.2 (otopi-1.10.2-1.el8)
>
> [ INFO ] Stage: Environment packages setup
>
> [ INFO ] Stage: Programs detection
>
> [ INFO ] Stage: Environment customization
>
>
>
> --== PRODUCT OPTIONS ==--
>
>
>
> [ ERROR ] Failed to execute stage 'Environment customization': ok_to_renew_cert() missing 2 required positional arguments: 'short_life' and 'environment'
>
> [ INFO ] Stage: Clean up
>
>
>
> Does anybody here have an idea how to solve this issue?
It's a bug. Would you like to report it in Bugzilla?
This should fix it:
https://github.com/oVirt/ovirt-dwh/pull/48
Best regards,
--
Didi
Problem with VNC + TLS
by nixmagic@gmail.com
Why is VNC + TLS enabled for the host after executing "Enroll Certificate" (Modify qemu config file - enable TLS), although "Enable VNC Encryption" is not enabled in the cluster settings?
If you do "Reinstall" for the host, then VNC is configured without TLS support (Modify qemu config file - disable TLS), which is correct according to the cluster settings.
In addition to this, the problem is aggravated by the bug https://bugzilla.redhat.com/show_bug.cgi?id=1757793
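A quick way to confirm which state a host actually ended up in is to inspect the setting that the "Modify qemu config file" step touches. A minimal sketch, assuming the file is /etc/libvirt/qemu.conf and with vnc_tls_state as our own helper name:

```shell
# Report whether VNC TLS is enabled in a libvirt qemu.conf-style file.
# vnc_tls_state is a hypothetical helper for comparing a host after
# "Enroll Certificate" with one after "Reinstall".
vnc_tls_state() {
    local conf=$1
    if grep -Eq '^[[:space:]]*vnc_tls[[:space:]]*=[[:space:]]*1' "$conf"; then
        echo "TLS enabled"
    else
        echo "TLS disabled"
    fi
}
```

Running `vnc_tls_state /etc/libvirt/qemu.conf` on a host after each operation should show exactly when the flag flips relative to the cluster setting.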
Hosted engine restarting
by markeczzz@gmail.com
Hi!
In the last few days I have been having a problem with the Hosted Engine: it keeps restarting. Sometimes after a few minutes, sometimes after a few hours.
I haven't made any changes to oVirt or the network in that time.
The version is 4.4.10.7-1.el8 (this was also the installation version).
Here are the logs:
Agent.log------------------------------
MainThread::INFO::2022-08-21 09:48:36,200::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineUp (score: 2440)
MainThread::INFO::2022-08-21 09:48:36,200::hosted_engine::525::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Best remote host node3.ovirt.example.org (id: 3, score: 2440)
MainThread::ERROR::2022-08-21 09:48:46,212::states::398::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Host node3.ovirt.example.org (id 3) score is significantly better than local score, shutting down VM on this host
MainThread::INFO::2022-08-21 09:48:46,641::brokerlink::73::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineUp-EngineStop) sent? ignored
MainThread::INFO::2022-08-21 09:48:46,706::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStop (score: 3400)
MainThread::INFO::2022-08-21 09:48:46,706::hosted_engine::525::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Best remote host node3.ovirt.example.org (id: 3, score: 3400)
MainThread::INFO::2022-08-21 09:48:56,714::hosted_engine::934::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) Shutting down vm using `/usr/sbin/hosted-engine --vm-shutdown`
MainThread::INFO::2022-08-21 09:48:56,871::hosted_engine::941::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) stdout:
MainThread::INFO::2022-08-21 09:48:56,871::hosted_engine::942::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) stderr:
MainThread::ERROR::2022-08-21 09:48:56,871::hosted_engine::950::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) Engine VM stopped on localhost
MainThread::INFO::2022-08-21 09:48:56,880::state_decorators::102::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) Timeout set to Sun Aug 21 09:53:56 2022 while transitioning <class 'ovirt_hosted_engine_ha.agent.states.EngineStop'> -> <class 'ovirt_hosted_engine_ha.agent.states.EngineStop'>
MainThread::INFO::2022-08-21 09:48:56,959::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStop (score: 3400)
MainThread::INFO::2022-08-21 09:49:06,977::states::537::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine vm not running on local host
MainThread::INFO::2022-08-21 09:49:06,983::state_decorators::95::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) Timeout cleared while transitioning <class 'ovirt_hosted_engine_ha.agent.states.EngineStop'> -> <class 'ovirt_hosted_engine_ha.agent.states.EngineDown'>
MainThread::INFO::2022-08-21 09:49:07,173::brokerlink::73::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStop-EngineDown) sent? ignored
MainThread::INFO::2022-08-21 09:49:07,795::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineDown (score: 3400)
MainThread::INFO::2022-08-21 09:49:16,811::states::472::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine down and local host has best score (3400), attempting to start engine VM
MainThread::INFO::2022-08-21 09:49:16,998::brokerlink::73::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineDown-EngineStart) sent? ignored
MainThread::INFO::2022-08-21 09:49:17,179::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStart (score: 3400)
MainThread::INFO::2022-08-21 09:49:17,195::hosted_engine::895::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state) Ensuring VDSM state is clear for engine VM
MainThread::INFO::2022-08-21 09:49:17,200::hosted_engine::915::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state) Cleaning state for non-running VM
MainThread::INFO::2022-08-21 09:49:18,211::hosted_engine::907::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state) Vdsm state for VM clean
MainThread::INFO::2022-08-21 09:49:18,212::hosted_engine::853::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) Starting vm using `/usr/sbin/hosted-engine --vm-start`
MainThread::INFO::2022-08-21 09:49:18,814::hosted_engine::862::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) stdout: VM in WaitForLaunch
MainThread::INFO::2022-08-21 09:49:18,814::hosted_engine::863::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) stderr: Command VM.getStats with args {'vmID': 'cc7931ff-8124-4724-9242-abea2ab5bf42'} failed:
(code=1, message=Virtual machine does not exist: {'vmId': 'cc7931ff-8124-4724-9242-abea2ab5bf42'})
MainThread::INFO::2022-08-21 09:49:18,814::hosted_engine::875::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) Engine VM started on localhost
MainThread::INFO::2022-08-21 09:49:18,999::brokerlink::73::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStart-EngineStarting) sent? ignored
MainThread::INFO::2022-08-21 09:49:19,008::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400)
MainThread::INFO::2022-08-21 09:49:29,027::states::741::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) VM is powering up..
MainThread::INFO::2022-08-21 09:49:29,033::state_decorators::102::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) Timeout set to Sun Aug 21 09:59:29 2022 while transitioning <class 'ovirt_hosted_engine_ha.agent.states.EngineStarting'> -> <class 'ovirt_hosted_engine_ha.agent.states.EngineStarting'>
MainThread::INFO::2022-08-21 09:49:29,109::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400)
MainThread::INFO::2022-08-21 09:49:38,121::states::741::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) VM is powering up..
MainThread::INFO::2022-08-21 09:49:38,195::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400)
MainThread::INFO::2022-08-21 09:49:48,218::state_decorators::95::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) Timeout cleared while transitioning <class 'ovirt_hosted_engine_ha.agent.states.EngineStarting'> -> <class 'ovirt_hosted_engine_ha.agent.states.EngineUp'>
MainThread::INFO::2022-08-21 09:49:48,403::brokerlink::73::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStarting-EngineUp) sent? ignored
MainThread::INFO::2022-08-21 09:49:48,713::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineUp (score: 3400)
MainThread::INFO::2022-08-21 09:49:58,725::states::406::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine vm running on localhost
Broker.log------------------------------
Thread-4::INFO::2022-08-21 09:47:59,342::cpu_load_no_engine::142::cpu_load_no_engine.CpuLoadNoEngine::(calculate_load) System load total=0.0241, engine=0.0013, non-engine=0.0228
Thread-3::INFO::2022-08-21 09:48:01,311::mem_free::51::mem_free.MemFree::(action) memFree: 96106
Thread-5::INFO::2022-08-21 09:48:05,612::engine_health::246::engine_health.EngineHealth::(_result_from_stats) VM is up on this host with healthy engine
Thread-2::INFO::2022-08-21 09:48:08,591::mgmt_bridge::65::mgmt_bridge.MgmtBridge::(action) Found bridge ovirtmgmt in up state
Thread-1::WARNING::2022-08-21 09:48:10,352::network::121::network.Network::(_dns) DNS query failed:
; <<>> DiG 9.11.36-RedHat-9.11.36-3.el8 <<>> +tries=1 +time=5 +tcp
;; global options: +cmd
;; connection timed out; no servers could be reached
Thread-1::WARNING::2022-08-21 09:48:10,352::network::92::network.Network::(action) Failed to verify network status, (4 out of 5)
Thread-3::INFO::2022-08-21 09:48:11,389::mem_free::51::mem_free.MemFree::(action) memFree: 96089
Thread-5::INFO::2022-08-21 09:48:15,707::engine_health::246::engine_health.EngineHealth::(_result_from_stats) VM is up on this host with healthy engine
Thread-2::INFO::2022-08-21 09:48:18,662::mgmt_bridge::65::mgmt_bridge.MgmtBridge::(action) Found bridge ovirtmgmt in up state
Thread-1::WARNING::2022-08-21 09:48:18,879::network::121::network.Network::(_dns) DNS query failed:
; <<>> DiG 9.11.36-RedHat-9.11.36-3.el8 <<>> +tries=1 +time=5 +tcp
;; global options: +cmd
;; connection timed out; no servers could be reached
Thread-3::INFO::2022-08-21 09:48:21,467::mem_free::51::mem_free.MemFree::(action) memFree: 96072
Thread-1::WARNING::2022-08-21 09:48:24,904::network::121::network.Network::(_dns) DNS query failed:
; <<>> DiG 9.11.36-RedHat-9.11.36-3.el8 <<>> +tries=1 +time=5 +tcp
;; global options: +cmd
;; connection timed out; no servers could be reached
Thread-5::INFO::2022-08-21 09:48:25,808::engine_health::246::engine_health.EngineHealth::(_result_from_stats) VM is up on this host with healthy engine
Thread-2::INFO::2022-08-21 09:48:28,740::mgmt_bridge::65::mgmt_bridge.MgmtBridge::(action) Found bridge ovirtmgmt in up state
Thread-1::WARNING::2022-08-21 09:48:30,416::network::121::network.Network::(_dns) DNS query failed:
; <<>> DiG 9.11.36-RedHat-9.11.36-3.el8 <<>> +tries=1 +time=5 +tcp
;; global options: +cmd
;; connection timed out; no servers could be reached
Thread-1::WARNING::2022-08-21 09:48:30,416::network::92::network.Network::(action) Failed to verify network status, (2 out of 5)
Thread-3::INFO::2022-08-21 09:48:31,545::mem_free::51::mem_free.MemFree::(action) memFree: 96064
Thread-5::INFO::2022-08-21 09:48:35,909::engine_health::246::engine_health.EngineHealth::(_result_from_stats) VM is up on this host with healthy engine
Thread-1::WARNING::2022-08-21 09:48:35,940::network::121::network.Network::(_dns) DNS query failed:
; <<>> DiG 9.11.36-RedHat-9.11.36-3.el8 <<>> +tries=1 +time=5 +tcp
;; global options: +cmd
;; connection timed out; no servers could be reached
Thread-1::WARNING::2022-08-21 09:48:37,480::network::92::network.Network::(action) Failed to verify network status, (4 out of 5)
Thread-2::INFO::2022-08-21 09:48:38,809::mgmt_bridge::65::mgmt_bridge.MgmtBridge::(action) Found bridge ovirtmgmt in up state
Thread-3::INFO::2022-08-21 09:48:41,623::mem_free::51::mem_free.MemFree::(action) memFree: 96014
Thread-1::INFO::2022-08-21 09:48:42,549::network::88::network.Network::(action) Successfully verified network status
Thread-5::INFO::2022-08-21 09:48:46,011::engine_health::246::engine_health.EngineHealth::(_result_from_stats) VM is up on this host with healthy engine
Listener::ERROR::2022-08-21 09:48:46,639::notifications::42::ovirt_hosted_engine_ha.broker.notifications.Notifications::(send_email) (530, b'5.7.1 Authentication required', 'alerts(a)example.org.hr')
At first I thought it was related to these bugs:
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/2HTD5WR43M5M...
https://bugzilla.redhat.com/show_bug.cgi?id=1984356
But that bug should already be fixed in this oVirt version.
I tried monitoring the network, but this error keeps happening even when network load is low.
I also ran continuous dig and ping commands on VMs running on the same host as the Hosted Engine, and did not see any network problems, not even a single dropped connection.
Any solutions or next steps I should try?
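Since the broker counts failed probes out of 5 before it docks the score, one next step is to reproduce that logic outside the agent: run the same probe repeatedly from the host and count failures. A sketch (retry_probe is our own name; the dig options the broker uses are visible in the log above):

```shell
# Run a probe command N times and report how many attempts succeeded,
# mimicking the broker's "X out of 5" network check. retry_probe is a
# hypothetical helper; replace the probe command with, e.g.:
#   dig +tries=1 +time=5 +tcp @<your-dns-server>
retry_probe() {
    local attempts=$1; shift
    local ok=0 i
    for i in $(seq 1 "$attempts"); do
        if "$@" >/dev/null 2>&1; then
            ok=$((ok + 1))
        fi
    done
    echo "$ok out of $attempts probes succeeded"
}
```

Running this in a loop on the host while watching broker.log should show whether the DNS failures line up with anything else (TCP/53 filtering, resolver restarts, conntrack pressure), since the broker probes over TCP while a plain dig/ping test from a VM usually uses UDP.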
Problem Upgrading DWH from 4.5.1 to 4.5.2
by Nur Imam Febrianto
Hi,
I keep getting this error whenever I try to run engine-setup to upgrade my separate DWH server:
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files: /etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf, /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf
Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20220823110720-28vs78.log
Version: otopi-1.10.2 (otopi-1.10.2-1.el8)
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment customization
--== PRODUCT OPTIONS ==--
[ ERROR ] Failed to execute stage 'Environment customization': ok_to_renew_cert() missing 2 required positional arguments: 'short_life' and 'environment'
[ INFO ] Stage: Clean up
Does anybody here have an idea how to solve this issue?
Thanks in advance.
Regards,
Nur Imam Febrianto
Selecting login profile with LDAP integration
by Dave Lennox
I am trying to get LDAP integration working with FreeIPA. The instructions for this seem to be the same across the RHEV and oVirt administration guides (and other sites that have replicated that information), and are based on oVirt 4.4 (I am running 4.5.2).
I have it configured as per the oVirt admin guide:
- the test run as part of the setup tool returned success
- I created an ovirt-admins LDAP group, which oVirt found successfully, and I created a matching group within oVirt.
But how do I actually log in with LDAP user credentials?
The documentation refers to selecting the profile that was configured during LDAP setup, but that selector no longer seems to be offered on the login screen since 4.5.
Keycloak reports that it is trying to validate the login against the internal profile, so I assume it isn't able to try multiple authentication sources?
2022-08-19 14:46:55,112+10 WARN [org.keycloak.events] (default task-12) [] type=LOGIN_ERROR, realmId=2429db03-71ca-4500-a8ee-e25e01c7a5e3, clientId=ovirt-engine-internal, userId=null, ipAddress=192.168.0.70, error=user_not_found, auth_method=openid-connect, auth_type=code, redirect_uri=https://sr-utl04.ovirt.lennoxconsulting.com.au/ovirt-engine/callback, code_id=d9f6400a-4d2f-4d9f-8407-e40db360a56b, username=david(a)lennoxconsulting.com.au, authSessionParentId=d9f6400a-4d2f-4d9f-8407-e40db360a56b, authSessionTabId=He1IhSgIZP8
So how do I set up the engine to allow me to select the Profile to use on the login screen?
- David.
I have tried using LDAP email addresses,
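Not an answer, but possibly a useful place to look: the profile name that the login page is supposed to offer comes from the authn extension's properties file under /etc/ovirt-engine/extensions.d/. An illustrative fragment (the names here are examples, not an actual config):

```
# /etc/ovirt-engine/extensions.d/<profile>-authn.properties (illustrative)
ovirt.engine.extension.name = my-ldap-authn
ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authn
ovirt.engine.aaa.authn.profile.name = my-ldap
```

Note that with Keycloak enabled (the 4.5 default), logins are routed through the ovirt-engine-internal client first, which may be why the profile selector no longer appears at all.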
Having issue deploying self-hosted engine on additional nodes
by Henry Wong
Hi,
I have a two-node 4.5.2 cluster, and the self-hosted engine was deployed and is running on the 1st node. When I tried to deploy it on the 2nd node via the manager UI -> Hosts -> Edit host -> Hosted Engine -> Deploy, the window closed as soon as I hit OK and it looked as if nothing had happened. There is no message or error that I can find in the GUI. Does anyone have any suggestions?
Thanks
Henry
DWH_DELETE_JOB not start or not worked
by Sergey D
Hello, I upgraded oVirt from 4.3.10.4 to 4.4.10.7-1.
The update was successful, but I noticed that a couple of days before the
update, the "samples" tables stopped being cleared.
DWH is configured with the Basic scale option.
cat /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-scale.conf
DWH_TABLES_KEEP_SAMPLES=24
DWH_TABLES_KEEP_HOURLY=720
DWH_TABLES_KEEP_DAILY=0
cat /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log
...
2022-08-22 15:13:41|ETL Service Started
ovirtEngineDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
hoursToKeepDaily|0
hoursToKeepHourly|720
ovirtEngineDbPassword|**********************
runDeleteTime|3
ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
runInterleave|60
limitRows|limit 1000
ovirtEngineHistoryDbUser|ovirt_engine_history
ovirtEngineDbUser|engine
deleteIncrement|10
timeBetweenErrorEvents|300000
hoursToKeepSamples|24
deleteMultiplier|1000
lastErrorSent|2011-07-03 12:46:47.000000
etlVersion|4.4.10
dwhAggregationDebug|true
...
2022-08-22 15:00:00|lxnXpZ|fKyuLo|lvrfJZ|130277|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|_FvEy8LzqEeCaj-T1n0SCFw|4.4|Default||begin||
2022-08-22 15:00:00 Statistics sync ended. Duration: 394 milliseconds
2022-08-22 15:00:00 Aggregation to Hourly ended.
2022-08-22 15:01:00|lxnXpZ|fKyuLo|lvrfJZ|130277|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|_FvEy8LzqEeCaj-T1n0SCFw|4.4|Default||end|success|60000
2022-08-22 15:01:00|9mBhki|fKyuLo|lvrfJZ|130277|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|_FvEy8LzqEeCaj-T1n0SCFw|4.4|Default||begin||
...
# select * from history_configuration;
var_name | var_value | var_datetime
-------------------+-----------+------------------------
default_language | en_US |
firstSync | false | 2016-12-17 07:35:00+03
lastDayAggr | | 2022-08-18 01:00:00+03
MinimalETLVersion | 4.4.7 |
lastHourAggr | | 2022-08-22 15:00:00+03
HourlyAggFailed | false |
# select min(history_datetime),max(history_datetime) from public.host_samples_history;
min | max
----------------------------+--------------------------
2022-08-17 03:09:31.797+03 | 2022-08-22 18:43:00.1+03
# select min(history_datetime),max(history_datetime) from public.host_hourly_history;
min | max
------------------------+------------------------
2022-07-19 04:00:00+03 | 2022-08-22 16:00:00+03
# select min(history_datetime),max(history_datetime) from public.host_daily_history;
min | max
------------+------------
2022-08-17 | 2022-08-18
I tried configuring the start time with DWH_DELETE_JOB_HOUR (UTC), but it
made no difference.
I also enabled DWH_AGGREGATION_DEBUG=true.
According to the logs, aggregation runs on schedule, but the cleanup never
starts...
I would delete the data manually, but I'm worried about breaking data
relationships.
How do I start a manual deletion task?
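For reference: as far as I can tell, the aggregation jobs copy data into the hourly/daily tables rather than referencing the samples rows, so a manual cleanup is a bounded DELETE per samples table. A hypothetical sketch — verify against your schema and back up first; the table and column names are taken from the queries above:

```
-- Hypothetical manual cleanup mirroring DWH_TABLES_KEEP_SAMPLES=24;
-- run in the ovirt_engine_history database, one statement per
-- *_samples_history table.
DELETE FROM host_samples_history
WHERE history_datetime < now() - interval '24 hours';
```

This only clears the backlog; it does not explain why the scheduled delete job stopped running.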
VM with connected MBS disk can't start / can't hot plug mbs disk
by Aliaksei Hrechushkin
Ovirt Software Version: 4.5.1.2-1.el8
So, I have a storage domain of type managed block storage (Ceph backend) with the driver cinder.volume.drivers.rbd.RBDDriver (connected following the official instructions).
I can successfully create and attach disks to virtual machines.
But:
Case 1:
I can't run a VM with an attached MBS disk.
Log files:
engine.log
```
2022-08-22 10:30:46,929Z INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33748) [4a367039] START, CreateVDSCommand( CreateVDSCommandParameters:{hostId='ca0b6c75-6f4d-4706-8420-a8a85834a26e', vmId='ceb677b3-d0ad-4e99-8ca8-226ce9cf7621', vm='VM [clone_ahrechushkin-2]'}), log id: 4a63fb47
2022-08-22 10:30:46,933Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33748) [4a367039] START, CreateBrokerVDSCommand(HostName = ovih01, CreateVDSCommandParameters:{hostId='ca0b6c75-6f4d-4706-8420-a8a85834a26e', vmId='ceb677b3-d0ad-4e99-8ca8-226ce9cf7621', vm='VM [clone_ahrechushkin-2]'}), log id: 279ca1cc
2022-08-22 10:30:47,217Z ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33748) [4a367039] Failed in 'CreateBrokerVDS' method, for vds: 'ovih01'; host: 'ovih01': null
2022-08-22 10:30:47,221Z ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33748) [4a367039] Command 'CreateBrokerVDSCommand(HostName = ovih01, CreateVDSCommandParameters:{hostId='ca0b6c75-6f4d-4706-8420-a8a85834a26e', vmId='ceb677b3-d0ad-4e99-8ca8-226ce9cf7621', vm='VM [clone_ahrechushkin-2]'})' execution failed: null
2022-08-22 10:30:47,221Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33748) [4a367039] FINISH, CreateBrokerVDSCommand, return: , log id: 279ca1cc
2022-08-22 10:30:47,223Z ERROR [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33748) [4a367039] Failed to create VM: java.lang.NullPointerException
2022-08-22 10:30:47,228Z ERROR [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33748) [4a367039] Command 'CreateVDSCommand( CreateVDSCommandParameters:{hostId='ca0b6c75-6f4d-4706-8420-a8a85834a26e', vmId='ceb677b3-d0ad-4e99-8ca8-226ce9cf7621', vm='VM [clone_ahrechushkin-2]'})' execution failed: java.lang.NullPointerException
2022-08-22 10:30:47,229Z INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33748) [4a367039] FINISH, CreateVDSCommand, return: Down, log id: 4a63fb47
```
cinderlib.log
```
2022-08-22 10:28:17,260 - cinderlib-client - INFO - Cloning volume '3f763b9c-1bd2-4174-8603-c30587cb4e03' to 'c5a301c7-fc59-400f-8733-61e34c8fadf1' [716f43c7-09a7-4a43-9a3a-461db2cc3653]
2022-08-22 10:30:39,781 - cinder.volume.driver - INFO - Driver hasn't implemented _init_vendor_properties()
2022-08-22 10:30:39,840 - cinderlib-client - INFO - Connecting volume 'c5a301c7-fc59-400f-8733-61e34c8fadf1', to host with info '{"ip":null,"nqn":"nqn.2014-08.org.nvmexpress:uuid:00000000-0000-0000-0000-000000000000","host":"ovih01","uuid":"b0d5ac8a-4a14-46cb-a114-1bc8ffbc7cec","os_type":"linux","platform":"x86_64","found_dsc":"","initiator":"iqn.2020-01.io.icdc.tby:ovih01","multipath":true,"system uuid":"48b0666c-082c-11ea-a3b1-3a68dd1a0257","do_local_attach":false}' [7307cc9]
2022-08-22 10:30:44,260 - cinder.volume.driver - INFO - Driver hasn't implemented _init_vendor_properties()
2022-08-22 10:30:44,395 - cinderlib-client - INFO - Saving connection <cinderlib.Connection object 2a531747-4ae2-4dae-bc13-c341ec30eece on backend mbs_domain> for volume 'c5a301c7-fc59-400f-8733-61e34c8fadf1' [4a367039]
2022-08-22 10:30:51,363 - cinder.volume.driver - INFO - Driver hasn't implemented _init_vendor_properties()
2022-08-22 10:30:51,485 - cinderlib-client - INFO - Disconnecting volume 'c5a301c7-fc59-400f-8733-61e34c8fadf1' [62acd0ed]
```
Case 2:
I can't activate an attached disk on a running VM.
Strange thing: when the disk is deactivated and the VM is powered on, I see this on the host:
[root@ovih01 ~]# rbd showmapped
id pool namespace image snap device
0 cinder volume-c5a301c7-fc59-400f-8733-61e34c8fadf1 - /dev/rbd0
Also, I can't activate the disk: it fails with "ERROR: disk already attached to vm". Of course, I can't see the disk from the guest OS.
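Given that the engine-side failure is a NullPointerException after cinderlib has already saved the connection, a stale mapping left on the host seems plausible. A small sketch for checking that (find_mapped_device is a hypothetical helper, not part of oVirt):

```shell
# Find the local device for a given cinder volume in `rbd showmapped`
# output (read from stdin). find_mapped_device is a hypothetical helper
# for spotting stale mappings left behind by a failed attach.
find_mapped_device() {
    local volume=$1
    awk -v vol="$volume" 'index($0, vol) { print $NF }'
}

# Typical use on the host:
#   rbd showmapped | find_mapped_device volume-<uuid>
```

If a stale /dev/rbdN mapping remains after the failed start, `rbd unmap <device>` (after making sure nothing is using it) is the usual manual cleanup; check the managed block storage docs before doing this on a production host.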
Cannot log in to admin account on internal domain
by David Johnson
Good afternoon all,
*Environment:*
New bare metal standalone engine host
New installation of Centos Stream 8
New installation of oVirt engine 4.5
No hosts or storage domains established yet.
After installation and engine-setup, I cannot log in to the administration
console.
When logging in with the admin account and password, the ovirt login screen
reports "Invalid Username and Password"
*Log excerpt:*
If I read this log correctly, the login is successful.
2022-08-21 19:46:47,930-05 INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-18) [66a7721] Lock Acquired to object 'EngineLock:{exclusiveLocks='[d4e7bc0e-54d1-452f-aad9-2277cf28bbfd=PROVIDER]', sharedLocks=''}'
2022-08-21 19:46:47,994-05 INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-18) [66a7721] Running command: SyncNetworkProviderCommand internal: true.
2022-08-21 19:46:48,013-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-18) [66a7721] EVENT_ID: PROVIDER_SYNCHRONIZATION_STARTED(223), Provider ovirt-provider-ovn synchronization started.
*2022-08-21 19:46:49,276-05 INFO [org.ovirt.engine.core.sso.service.ExternalOIDCService] (default task-1) [] User admin@ovirt@internalkeycloak-authz with profile [internalsso] successfully logged into external OP with scopes: ovirt-app-api ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate ovirt-ext=token:password-access*
2022-08-21 19:46:49,497-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-18) [66a7721] EVENT_ID: PROVIDER_SYNCHRONIZATION_ENDED(224), Provider ovirt-provider-ovn synchronization ended.
2022-08-21 19:46:49,501-05 INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-18) [66a7721] Lock freed to object 'EngineLock:{exclusiveLocks='[d4e7bc0e-54d1-452f-aad9-2277cf28bbfd=PROVIDER]', sharedLocks=''}'
Actions already taken:
1. Reset the password and expiry time to a far-future date via command-line tools
2. Unlocked the admin account via command-line tools
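For anyone comparing notes, the usual command-line tools for those two steps are the internal-AAA ones sketched below. Note that on 4.5 the internal admin is managed by Keycloak (as the ExternalOIDCService log line suggests), so these legacy commands may have no effect there, and that mismatch could itself be the problem:

```shell
# Legacy internal-AAA profile only; on Keycloak-based 4.5 setups the
# admin@ovirt account lives in Keycloak instead.
ovirt-aaa-jdbc-tool user password-reset admin \
    --password-valid-to="2030-01-01 00:00:00Z"
ovirt-aaa-jdbc-tool user unlock admin
```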
*David Johnson*
*Director of Development, Maxis Technology*
844.696.2947 ext 702 (o) | 479.531.3590 (c)
<https://www.linkedin.com/in/pojoguy/>
<https://maxistechnology.com/wp-content/uploads/vcards/vcard-David_Johnson...>
<https://maxistechnology.com/>
*Follow us:* <https://www.linkedin.com/company/maxis-tech-inc/>
2 years, 9 months
Re: Users Digest, Vol 131, Issue 39
by David Johnson
oVirt engine is not supported on CentOS Stream 9.
This is resolved.
*David Johnson*
2 years, 9 months
Ovirt 4.5 standalone engine does not install on Centos 9 stream
by David Johnson
Good afternoon all,
The instructions at
https://www.ovirt.org/documentation/installing_ovirt_as_a_standalone_mana...
say:
Ensure all packages are up to date:
# dnf upgrade --nobest
Reboot the machine if any kernel-related packages were updated.
Install the ovirt-engine package and dependencies.
# dnf install ovirt-engine
Run the engine-setup command to begin configuring the oVirt Engine:
# engine-setup
I did figure out that the module streams that existed in CentOS 8 are now straight
package installs, except for the pki-deps module; I haven't figured that one out yet.
At this step, engine-setup is nowhere to be found on the system, and there
appears to be no ovirt-engine RPM available.
Terminal dump of the offending session:
[root@ovirt2 administrator]# *dnf install -y centos-release-ovirt45*
Last metadata expiration check: 0:12:04 ago on Sun 21 Aug 2022 12:58:00 PM
CDT.
Package centos-release-ovirt45-9.1-2.el9s.noarch is already installed.
Dependencies resolved.
Nothing to do.
Complete!
[root@ovirt2 administrator]#
*dnf upgrade --nobest*Last metadata expiration check: 0:12:15 ago on Sun 21
Aug 2022 12:58:00 PM CDT.
Dependencies resolved.
Problem 1: package ovirt-openvswitch-2.15-4.el9.noarch requires
openvswitch2.15, but none of the providers can be installed
- package rdo-openvswitch-2:2.17-2.el9s.noarch obsoletes openvswitch2.15
< 2.17 provided by openvswitch2.15-2.15.0-99.el9s.x86_64
- package rdo-openvswitch-2:2.17-2.el9s.noarch obsoletes openvswitch2.15
< 2.17 provided by openvswitch2.15-2.15.0-51.el9s.x86_64
- package rdo-openvswitch-2:2.17-2.el9s.noarch obsoletes openvswitch2.15
< 2.17 provided by openvswitch2.15-2.15.0-56.el9s.x86_64
- package rdo-openvswitch-2:2.17-2.el9s.noarch obsoletes openvswitch2.15
< 2.17 provided by openvswitch2.15-2.15.0-81.el9s.x86_64
- cannot install the best update candidate for package
ovirt-openvswitch-2.15-4.el9.noarch
- cannot install the best update candidate for package
openvswitch2.15-2.15.0-99.el9s.x86_64
Problem 2: problem with installed package
ovirt-openvswitch-2.15-4.el9.noarch
- package ovirt-openvswitch-2.15-4.el9.noarch requires openvswitch2.15,
but none of the providers can be installed
- package rdo-openvswitch-2:2.17-2.el9s.noarch obsoletes openvswitch2.15
< 2.17 provided by openvswitch2.15-2.15.0-99.el9s.x86_64
- package rdo-openvswitch-2:2.17-2.el9s.noarch obsoletes openvswitch2.15
< 2.17 provided by openvswitch2.15-2.15.0-51.el9s.x86_64
- package rdo-openvswitch-2:2.17-2.el9s.noarch obsoletes openvswitch2.15
< 2.17 provided by openvswitch2.15-2.15.0-56.el9s.x86_64
- package rdo-openvswitch-2:2.17-2.el9s.noarch obsoletes openvswitch2.15
< 2.17 provided by openvswitch2.15-2.15.0-81.el9s.x86_64
- package python3-rdo-openvswitch-2:2.17-2.el9s.noarch requires
rdo-openvswitch = 2:2.17-2.el9s, but none of the providers can be installed
- cannot install the best update candidate for package
python3-openvswitch2.15-2.15.0-99.el9s.x86_64
====================================================================================================================================================
Package Architecture Version
Repository Size
====================================================================================================================================================
Skipping packages with broken dependencies:
python3-rdo-openvswitch noarch
2:2.17-2.el9s centos-openstack-yoga
7.5 k
Transaction Summary
====================================================================================================================================================
Skip 1 Package
Nothing to do.
Complete!
I confirmed that the required OpenVSwitch version is installed:
[root@ovirt2 administrator]# *dnf list installed |grep openvswitch*
centos-release-nfv-openvswitch.noarch 1-4.el9s
@c9s-extras-common
openvswitch-selinux-extra-policy.noarch 1.0-31.el9s
@centos-nfv-openvswitch
*openvswitch2.15.x86_64 2.15.0-99.el9s
@centos-nfv-openvswitch*
ovirt-openvswitch.noarch 2.15-4.el9
@centos-ovirt45
python3-openvswitch2.15.x86_64 2.15.0-99.el9s
@centos-nfv-openvswitch
[root@ovirt2 administrator]# *dnf install ovirt-engine*
Last metadata expiration check: 0:12:29 ago on Sun 21 Aug 2022 12:58:00 PM
CDT.
No match for argument: ovirt-engine
Error: Unable to find a match: ovirt-engine
[root@ovirt2 administrator]#
DNF list shows this:
[root@ovirt2 administrator]# *dnf list|grep engine*
fio-engine-dev-dax.x86_64 3.27-7.el9
appstream
fio-engine-http.x86_64 3.27-7.el9
appstream
fio-engine-libaio.x86_64 3.27-7.el9
appstream
fio-engine-libpmem.x86_64 3.27-7.el9
appstream
fio-engine-nbd.x86_64 3.27-7.el9
appstream
fio-engine-pmemblk.x86_64 3.27-7.el9
appstream
fio-engine-rados.x86_64 3.27-7.el9
appstream
fio-engine-rbd.x86_64 3.27-7.el9
appstream
fio-engine-rdma.x86_64 3.27-7.el9
appstream
gtk-murrine-engine.x86_64 0.98.2-23.el9
epel
gtk2-engines.x86_64 2.20.2-24.el9
epel
gtk2-engines-devel.x86_64 2.20.2-24.el9
epel
kwebenginepart.x86_64
22.04.1-1.el9.next epel-next
lumina-themeengine.x86_64 1.6.2-2.el9
epel
mariadb-oqgraph-engine.x86_64 3:10.5.16-2.el9
appstream
openscap-engine-sce.i686 1:1.3.6-4.el9
appstream
openscap-engine-sce.x86_64 1:1.3.6-4.el9
appstream
openscap-engine-sce-devel.i686 1:1.3.6-4.el9
crb
openscap-engine-sce-devel.x86_64 1:1.3.6-4.el9
crb
openstack-heat-engine.noarch 1:18.0.0-1.el9s
centos-openstack-yoga
openstack-mistral-engine.noarch 14.0.0-1.el9s
centos-openstack-yoga
openstack-mistral-event-engine.noarch 14.0.0-1.el9s
centos-openstack-yoga
openstack-murano-engine.noarch 13.0.0-1.el9s
centos-openstack-yoga
openstack-sahara-engine.noarch 1:16.0.0-1.el9s
centos-openstack-yoga
openstack-senlin-engine.noarch 13.0.0-1.el9s
centos-openstack-yoga
openstack-watcher-decision-engine.noarch 8.0.0-1.el9s
centos-openstack-yoga
ovirt-engine-appliance.x86_64
4.5-20220419160254.1.el9 ovirt-45-upstream
ovirt-engine-extension-aaa-ldap.noarch 1.4.6-1.el9
centos-ovirt45
ovirt-engine-extension-aaa-ldap-setup.noarch 1.4.6-1.el9
centos-ovirt45
ovirt-engine-extensions-api.noarch 1.0.1-1.el9
centos-ovirt45
ovirt-engine-extensions-api-javadoc.noarch 1.0.1-1.el9
centos-ovirt45
ovirt-engine-nodejs-modules.noarch 2.3.9-1.el9
centos-ovirt45
ovirt-engine-wildfly.x86_64 24.0.1-1.el9
centos-ovirt45
ovirt-hosted-engine-ha.noarch 2.5.0-1.el9
centos-ovirt45
ovirt-hosted-engine-setup.noarch 2.6.5-1.el9
centos-ovirt45
pentaho-reporting-flow-engine.noarch 1:0.9.4-24.el9
appstream
pki-servlet-engine.noarch 1:9.0.50-1.el9
appstream
python3-ovirt-engine-sdk4.x86_64 4.5.1-1.el9
centos-ovirt45
qatengine.x86_64 0.6.10-1.el9
appstream
qt5-qtwebengine.x86_64 5.15.8-5.el9.next
epel-next
qt5-qtwebengine-devel.x86_64 5.15.8-5.el9.next
epel-next
qt5-qtwebengine-devtools.x86_64 5.15.8-5.el9.next
epel-next
qt5-qtwebengine-examples.x86_64 5.15.8-5.el9.next
epel-next
texlive-stackengine.noarch 9:20200406-25.el9
appstream
Please advise.
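For anyone hitting the same wall, it helps to first confirm whether any enabled repository actually ships an ovirt-engine build for EL9 (a sketch; at the time of this thread the engine RPMs were only published for EL8):

```shell
# List every enabled repo that claims to provide ovirt-engine, any version:
dnf repoquery ovirt-engine --showduplicates
# Double-check which oVirt repos are actually enabled:
dnf repolist enabled | grep -i ovirt
```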
*David Johnson*
2 years, 9 months
Problems Installing oVirt 4.5 on Centos Stream 9
by David Johnson
Good evening all,
*High level goal:*
We are preparing to upgrade our hosting OS to Centos Stream 9, and update
to the latest oVirt patch level.
We are running with a standalone engine.
The immediate goal is to stand up a brand new engine on bare metal running
CentOS 9.
Once the new engine is configured, the engine database will be migrated
from the current system production engine to the new to-be production
engine and upgraded.
After successful engine upgrade, we will be commissioning a new host and
then upgrading the hosts in sequence.
*Problem:*
When installing the ovirt standalone engine, there is a requirement to
follow the instructions at
https://www.ovirt.org/download/install_on_rhel.html . I found I had to
enable both the epel and crb (formerly PowerTools) repositories (neither is
documented) before I could complete this step.
After following those instructions with the necessary pieces I had
identified as missing, I returned to section 3.2 Enabling the oVirt Engine
Repositories. The next step is:
[root@ovirt2 administrator]# *dnf install -y centos-release-ovirt45*
Last metadata expiration check: 0:12:58 ago on Sat 20 Aug 2022 09:50:00 PM
CDT.
Package centos-release-ovirt45-9.1-2.el9s.noarch is already installed.
Dependencies resolved.
Nothing to do.
Complete!
Following this, the commands to enable the required modules all return errors.
Sample:
[root@ovirt2 administrator]# *dnf module -y enable javapackages-tools*
Last metadata expiration check: 0:03:31 ago on Sat 20 Aug 2022 09:50:00 PM
CDT.
Error: Problems in request:
missing groups or modules: javapackages-tools
[root@ovirt2 administrator]# *dnf module -y enable postgresql:12*
Last metadata expiration check: 0:07:43 ago on Sat 20 Aug 2022 09:50:00 PM
CDT.
Error: Problems in request:
missing groups or modules: postgresql:12
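If useful to others: EL9 dropped most of the EL8 module streams, so on CentOS Stream 9 this content is installed as plain packages rather than enabled as modules. A hedged way to check (package availability varies with the enabled repo set):

```shell
# No module streams to enable on EL9; see whether the content exists as
# ordinary packages instead:
dnf module list | grep -iE 'postgresql|javapackages' || echo "no such modules on EL9"
dnf list --available postgresql-server javapackages-tools
```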
Please advise.
*David Johnson*
2 years, 9 months
node 4.5.2 doesn't activate logical volumes on boot
by p.staniforth@leedsbeckett.ac.uk
Hello
When booting an upgraded 4.5.2 node, it does not activate the local logical volumes used for Gluster bricks.
The output of lvmconfig is
devices {
    use_devicesfile=1
    multipath_wwids_file=""
    scan_lvs=0
    hints="none"
}
and the output of lvmdevices is
Device /dev/sdg2 IDTYPE=sys_wwid IDNAME=t10.ATA_____DELLBOSS_VD_____________________________610c9e14a9070010\0\0\0\0 DEVNAME=/dev/sdg2 PVID=fpim51dQp3b8fwqc8ymFye7vkBa6WeL4 PART=2
Device /dev/sda IDTYPE=sys_wwid IDNAME=t10.ATA_____ST2000NX0423________________________________________W462QSFT DEVNAME=/dev/sda PVID=dkLRQ69NRh0YvRvUK7cyIGLlJMOFkKcD
The Gluster LVs from lvscan are:
inactive '/dev/RHGS_vg_diska/diska_lv_pool' [<1.79 TiB] inherit
inactive '/dev/RHGS_vg_diska/diska_lv' [1.80 TiB] inherit
After running vgchange -a y RHGS_vg_diska:
ACTIVE '/dev/RHGS_vg_diska/diska_lv_pool' [<1.79 TiB] inherit
ACTIVE '/dev/RHGS_vg_diska/diska_lv' [1.80 TiB] inherit
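A possible explanation, given the lvmdevices output above: with use_devicesfile=1, LVM only scans and autoactivates PVs listed in the devices file, and the file here lists only the two system disks, not the brick disks. A hedged fix sketch (the brick device name below is an assumption; substitute the real one from lsblk/pvs):

```shell
# Add the Gluster brick PV to the LVM devices file so it is scanned at boot
# (replace /dev/sdb with the actual brick disk):
lvmdevices --adddev /dev/sdb
# Activate the VG now; with the PV in the devices file, autoactivation
# should handle subsequent boots:
vgchange -ay RHGS_vg_diska
```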
Thanks for any help
2 years, 9 months
Clone template on MBS not working
by Jöran Malek
I created a test setup with oVirt 4.5.2 from scratch, with hyperconverged
Ceph installed and the following services in use:
- iSCSI Gateway for Hosted Engine Disk
- NFS for disk upload/import
- RBD/MBS for template disks and VM disks
My issue now is that I can download a VM from ovirt-image-repository
and import that as template into the NFS domain.
After that I go into the template and copy the boot disk onto the block storage.
This is successfully copied, and I go ahead and remove the original
disk from the NFS storage domain.
So now there is only one disk left on the Managed Block Storage domain.
Cloning the template results in a failed attempt, with errors in
DeleteImageGroupVDS.
Attached are vdsm.log, supervdsm.log and engine.log.
On a related note: Copying the disk from NFS to MBS, then creating a
VM, assigning that disk, making a template off of that and cloning
that works just fine - though I then have following disks in RBD,
which I'd like to avoid:
- initial-disk imported to MBS for VM template [deleted]
- template disk as snapshot of initial-disk
- template clone disk as snapshot of template disk
With the copying from NFS approach I get a template which is also the
root disk, not a snapshot of a deleted disk.
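To confirm whether a given clone really is layered on a parent image, the RBD hierarchy can be inspected directly on the Ceph side (the pool and image names below are placeholders, not values from this setup):

```shell
POOL=ovirt-volumes      # assumption: your MBS RBD pool name
IMG=volume-xxxxxxxx     # assumption: the clone's volume ID
# A "parent:" line in the output means the image is still a snapshot-backed clone:
rbd info "$POOL/$IMG"
# List clones hanging off an image's snapshots:
rbd children "$POOL/$IMG"
```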
Is this something that hasn't been seen yet?
Best,
Jöran
2 years, 9 months
GlusterFS Network issue
by Facundo Badaracco
Hi everyone.
I have deployed a 3x replica GlusterFS volume successfully.
I have 4 NICs in each server and will be using 2 bonds, one for Gluster and
the other for the VMs. My question is:
Currently, my network is 192.168.2.0/23.
Should the IPs of bond0 and bond1 be in different networks? Can I give, for
example, 192.168.2.3 to bond0 and 192.168.2.4 to bond1?
If the above can be done, how?
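For what it's worth, giving both bonds addresses in the same /23 tends to cause asymmetric-routing surprises, so a dedicated storage subnet is the common layout. A hedged sketch with assumed connection names and addresses:

```shell
# Keep VM/management traffic on the existing /23; move Gluster to its own subnet:
nmcli con mod bond0 ipv4.method manual ipv4.addresses 192.168.2.3/23
nmcli con mod bond1 ipv4.method manual ipv4.addresses 192.168.10.3/24  # storage-only subnet
nmcli con up bond0 && nmcli con up bond1
```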
Thx in advance
2 years, 9 months
Should I migrate existing oVirt Engine, or deploy new?
by David White
Hello,
I have just purchased a Synology SA3400 which I plan to use for my oVirt storage domain(s) going forward. I'm currently using Gluster storage in a hyperconverged environment.
My goal now is to:
- Use the Synology Virtual Machine manager to host the oVirt Engine on the Synology
- Setup NFS storage on the Synology as the storage domain for all VMs in our environment
- Migrate all VM storage onto the new NFS domain
- Get rid of Gluster
My first step is to migrate the oVirt Engine off of Gluster storage / off the Hyperconverged hosts into the Synology Virtual Machine manager.
Is it possible to migrate the existing oVirt Engine (put the cluster into Global Maintenance Mode, shutdown oVirt, export to VDI or something, and then import into Synology's virtualization)? Or would it be better for me to install a completely new Engine, and then somehow migrate all of the VMs from the old engine into the new engine?
Thanks,
David
Sent with Proton Mail secure email.
2 years, 9 months
VDSM command GetStoragePoolInfoVDS failed:
by parallax
oVirt version: 4.4.4.7-1.el8
I have several servers in cluster and I got this error:
Data Center is being initialized, please wait for initialization to
complete.
VDSM command GetStoragePoolInfoVDS failed: PKIX path validation failed:
java.security.cert.CertPathValidatorException: validity check failed
The SPM role is constantly being transferred between the servers, and I can't
do anything.
In Storage Domains, the storage domains show as inactive, but the virtual
machines are running.
How do I fix it?
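"PKIX path validation failed ... validity check failed" usually means an expired certificate in the engine/host chain. A quick hedged check (the paths below are the common oVirt defaults and may differ per install):

```shell
# On the engine, check the CA and engine certificates:
openssl x509 -enddate -noout -in /etc/pki/ovirt-engine/ca.pem
openssl x509 -enddate -noout -in /etc/pki/ovirt-engine/certs/engine.cer
# On each host, check the VDSM certificate:
openssl x509 -enddate -noout -in /etc/pki/vdsm/certs/vdsmcert.pem
```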
2 years, 9 months
posix storage migration issue on 4.4 cluster
by Sketch
I currently have two clusters up and running under one engine. An old
cluster on 4.3, and a new cluster on 4.4. In addition to migrating from
4.3 to 4.4, we are also migrating from glusterfs to cephfs mounted as
POSIX storage (not cinderlib, though we may make that conversion after
moving to 4.4). I have run into a strange issue, though.
On the 4.3 cluster, migration works fine with any storage backend. On
4.4, migration works against gluster or NFS, but fails when the VM is
hosted on POSIX cephfs. Both hosts are running CentOS 8.4 and were fully
updated to oVirt 4.4.7 today, as well as fully updating the engine (all
rebooted before this test, as well).
It appears that the VM fails to start on the new host, but it's not
obvious why from the logs. Can anyone shed some light or suggest further
debugging?
Related engine log:
2021-08-03 07:11:51,609-07 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-34) [1b2d4416-30f0-452d-b689-291f3b7f7482] Lock Acquired to object 'EngineLock:{exclusiveLocks='[1fd47e75-d708-43e4-ac0f-67bd28dceefd=VM]', sharedLocks=''}'
2021-08-03 07:11:51,679-07 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-34) [1b2d4416-30f0-452d-b689-291f3b7f7482] Running command: MigrateVmToServerCommand internal: false. Entities affected : ID: 1fd47e75-d708-43e4-ac0f-67bd28dceefd Type: VMAction group MIGRATE_VM with role type USER
2021-08-03 07:11:51,738-07 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-34) [1b2d4416-30f0-452d-b689-291f3b7f7482] START, MigrateVDSCommand( MigrateVDSCommandParameters:{hostId='6ec548e6-9a2a-4885-81da-74d0935b7ba5', vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd', srcHost='ovirt_host1', dstVdsId='6c31c294-477d-4fa8-b6ff-12e189918f69', dstHost='ovirt_host2:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', migrateEncrypted='false', consoleAddress='null', maxBandwidth='2500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {li
mit=-1, action={name=abort, params=[]}}]]', dstQemu='10.1.88.85'}), log id: 67f63342
2021-08-03 07:11:51,739-07 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (default task-34) [1b2d4416-30f0-452d-b689-291f3b7f7482] START, MigrateBrokerVDSCommand(HostName = ovirt_host1, MigrateVDSCommandParameters:{hostId='6ec548e6-9a2a-4885-81da-74d0935b7ba5', vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd', srcHost='ovirt_host1', dstVdsId='6c31c294-477d-4fa8-b6ff-12e189918f69', dstHost='ovirt_host2:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', migrateEncrypted='false', consoleAddress='null', maxBandwidth='2500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6,
action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]', dstQemu='10.1.88.85'}), log id: 37ab0828
2021-08-03 07:11:51,741-07 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (default task-34) [1b2d4416-30f0-452d-b689-291f3b7f7482] FINISH, MigrateBrokerVDSCommand, return: , log id: 37ab0828
2021-08-03 07:11:51,743-07 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-34) [1b2d4416-30f0-452d-b689-291f3b7f7482] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 67f63342
2021-08-03 07:11:51,750-07 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-34) [1b2d4416-30f0-452d-b689-291f3b7f7482] EVENT_ID: VM_MIGRATION_START(62), Migration started (VM: my_vm_hostname, Source: ovirt_host1, Destination: ovirt_host2, User: ebyrne@FreeIPA).
2021-08-03 07:11:55,736-07 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-21) [28d98b26] VM '1fd47e75-d708-43e4-ac0f-67bd28dceefd' was reported as Down on VDS '6c31c294-477d-4fa8-b6ff-12e189918f69'(ovirt_host2)
2021-08-03 07:11:55,736-07 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-21) [28d98b26] VM '1fd47e75-d708-43e4-ac0f-67bd28dceefd'(my_vm_hostname) was unexpectedly detected as 'Down' on VDS '6c31c294-477d-4fa8-b6ff-12e189918f69'(ovirt_host2) (expected on '6ec548e6-9a2a-4885-81da-74d0935b7ba5')
2021-08-03 07:11:55,736-07 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-21) [28d98b26] START, DestroyVDSCommand(HostName = ovirt_host2, DestroyVmVDSCommandParameters:{hostId='6c31c294-477d-4fa8-b6ff-12e189918f69', vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 110ec6aa
2021-08-03 07:11:55,911-07 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-21) [28d98b26] FINISH, DestroyVDSCommand, return: , log id: 110ec6aa
2021-08-03 07:11:55,911-07 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-21) [28d98b26] VM '1fd47e75-d708-43e4-ac0f-67bd28dceefd'(my_vm_hostname) was unexpectedly detected as 'Down' on VDS '6c31c294-477d-4fa8-b6ff-12e189918f69'(ovirt_host2) (expected on '6ec548e6-9a2a-4885-81da-74d0935b7ba5')
2021-08-03 07:11:55,911-07 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-21) [28d98b26] Migration of VM 'my_vm_hostname' to host 'ovirt_host2' failed: VM destroyed during the startup.
2021-08-03 07:11:55,913-07 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-15) [28d98b26] VM '1fd47e75-d708-43e4-ac0f-67bd28dceefd'(my_vm_hostname) moved from 'MigratingFrom' --> 'Paused'
2021-08-03 07:11:55,933-07 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-15) [28d98b26] EVENT_ID: VM_PAUSED(1,025), VM my_vm_hostname has been paused.
2021-08-03 07:11:55,940-07 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-15) [28d98b26] EVENT_ID: VM_PAUSED_ERROR(139), VM my_vm_hostname has been paused due to unknown storage error.
2021-08-03 07:11:55,946-07 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (ForkJoinPool-1-worker-15) [28d98b26] Lock freed to object 'EngineLock:{exclusiveLocks='[1fd47e75-d708-43e4-ac0f-67bd28dceefd=VM]', sharedLocks=''}'
Log from the ovirt host being migrated to:
2021-08-03 07:11:51,744-0700 INFO (Reactor thread) [ProtocolDetector.AcceptorImpl] Accepted connection from ::ffff:10.1.2.199:57742 (protocoldetector:61)
2021-08-03 07:11:51,749-0700 WARN (Reactor thread) [vds.dispatcher] unhandled write event (betterAsyncore:184)
2021-08-03 07:11:51,749-0700 INFO (Reactor thread) [ProtocolDetector.Detector] Detected protocol stomp from ::ffff:10.1.2.199:57742 (protocoldetector:125)
2021-08-03 07:11:51,749-0700 INFO (Reactor thread) [Broker.StompAdapter] Processing CONNECT request (stompserver:95)
2021-08-03 07:11:51,750-0700 INFO (JsonRpc (StompReactor)) [Broker.StompAdapter] Subscribe command received (stompserver:124)
2021-08-03 07:11:51,791-0700 WARN (jsonrpc/7) [root] ping was deprecated in favor of ping2 and confirmConnectivity (API:1372)
2021-08-03 07:11:51,879-0700 INFO (jsonrpc/0) [api.virt] START migrationCreate(params={'_srcDomXML': '<domain type=\'kvm\' id=\'35\' xmlns:qemu=\'http://libvirt.org/schemas/domain/qemu/1.0\'>\n <name>my_vm_hostname</name>\n <uuid>1fd47e75-d708-43e4-ac0f-67bd28dceefd</uuid>\n <metadata xmlns:ns0="http://ovirt.org/vm/tune/1.0" xmlns:ns1="http://ovirt.org/vm/1.0">\n <ns0:qos/>\n <ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0">\n <ovirt-vm:balloonTarget type="int">8388608</ovirt-vm:balloonTarget>\n <ovirt-vm:ballooningEnabled>true</ovirt-vm:ballooningEnabled>\n <ovirt-vm:clusterVersion>4.6</ovirt-vm:clusterVersion>\n <ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot>\n <ovirt-vm:guestAgentAPIVersion type="int">3</ovirt-vm:guestAgentAPIVersion>\n <ovirt-vm:jobs>{}</ovirt-vm:jobs>\n <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>\n <ovirt-vm:memGuaranteedSize type="int">4096</ovirt-vm:memGuaranteedSize>\n <ovirt-vm:min
GuaranteedMemoryMb type="int">4096</ovirt-vm:minGuaranteedMemoryMb>\n <ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior>\n <ovirt-vm:startTime type="float">1627910770.633493</ovirt-vm:startTime>\n <ovirt-vm:device alias="ua-827b6719-b206-46fe-b214-f6e15649abad" mac_address="00:1a:4a:16:01:35">\n <ovirt-vm:network>UserDev</ovirt-vm:network>\n <ovirt-vm:custom>\n <ovirt-vm:queues>4</ovirt-vm:queues>\n </ovirt-vm:custom>\n </ovirt-vm:device>\n <ovirt-vm:device devtype="disk" name="vda">\n <ovirt-vm:domainID>e8ec5645-fc1b-4d64-a145-44aa8ac5ef48</ovirt-vm:domainID>\n <ovirt-vm:guestName>/dev/vda</ovirt-vm:guestName>\n <ovirt-vm:imageID>eb15970b-7b94-4cce-ab44-50f57850aa7f</ovirt-vm:imageID>\n <ovirt-vm:managed type="bool">False</ovirt-vm:managed>\n <ovirt-vm:poolID>2948c860-9bdf-11e8-a6b3-00163e0419f0</ovirt-vm:poolID>\n <ovirt-vm:volumeID>6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:vo
lumeID>\n <ovirt-vm:specParams>\n <ovirt-vm:pinToIoThread>1</ovirt-vm:pinToIoThread>\n </ovirt-vm:specParams>\n <ovirt-vm:volumeChain>\n <ovirt-vm:volumeChainNode>\n <ovirt-vm:domainID>e8ec5645-fc1b-4d64-a145-44aa8ac5ef48</ovirt-vm:domainID>\n <ovirt-vm:imageID>eb15970b-7b94-4cce-ab44-50f57850aa7f</ovirt-vm:imageID>\n <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>\n <ovirt-vm:leasePath>/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c.lease</ovirt-vm:leasePath>\n <ovirt-vm:path>/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:path>\n <ovirt-vm:volumeID>6f82b02d-8c22-4d50-
a30e-53511776354c</ovirt-vm:volumeID>\n </ovirt-vm:volumeChainNode>\n </ovirt-vm:volumeChain>\n </ovirt-vm:device>\n <ovirt-vm:device devtype="disk" name="sdc">\n <ovirt-vm:managed type="bool">False</ovirt-vm:managed>\n </ovirt-vm:device>\n</ovirt-vm:vm>\n </metadata>\n <maxMemory slots=\'16\' unit=\'KiB\'>16777216</maxMemory>\n <memory unit=\'KiB\'>8388608</memory>\n <currentMemory unit=\'KiB\'>8388608</currentMemory>\n <vcpu placement=\'static\' current=\'4\'>16</vcpu>\n <iothreads>1</iothreads>\n <resource>\n <partition>/machine</partition>\n </resource>\n <sysinfo type=\'smbios\'>\n <system>\n <entry name=\'manufacturer\'>oVirt</entry>\n <entry name=\'product\'>RHEL</entry>\n <entry name=\'version\'>8.4-1.2105.el8</entry>\n <entry name=\'serial\'>ce917200-c887-11ea-8000-3cecef30037a</entry>\n <entry name=\'uuid\'>1fd47e75-d708-43e4-ac0f-67bd28dceefd</entry>\n <entry name=\'family\'>oVirt</entry>\n
</system>\n </sysinfo>\n <os>\n <type arch=\'x86_64\' machine=\'pc-q35-rhel8.4.0\'>hvm</type>\n <smbios mode=\'sysinfo\'/>\n </os>\n <features>\n <acpi/>\n </features>\n <cpu mode=\'custom\' match=\'exact\' check=\'full\'>\n <model fallback=\'forbid\'>Skylake-Server-noTSX-IBRS</model>\n <topology sockets=\'16\' dies=\'1\' cores=\'1\' threads=\'1\'/>\n <feature policy=\'require\' name=\'ssbd\'/>\n <feature policy=\'require\' name=\'md-clear\'/>\n <feature policy=\'disable\' name=\'mpx\'/>\n <feature policy=\'require\' name=\'hypervisor\'/>\n <feature policy=\'require\' name=\'pku\'/>\n <numa>\n <cell id=\'0\' cpus=\'0-15\' memory=\'8388608\' unit=\'KiB\'/>\n </numa>\n </cpu>\n <clock offset=\'variable\' adjustment=\'0\' basis=\'utc\'>\n <timer name=\'rtc\' tickpolicy=\'catchup\'/>\n <timer name=\'pit\' tickpolicy=\'delay\'/>\n <timer name=\'hpet\' present=\'no\'/>\n </clock>\n <on_poweroff>destroy</on_poweroff>\n <on_reboot
>restart</on_reboot>\n <on_crash>destroy</on_crash>\n <pm>\n <suspend-to-mem enabled=\'no\'/>\n <suspend-to-disk enabled=\'no\'/>\n </pm>\n <devices>\n <emulator>/usr/libexec/qemu-kvm</emulator>\n <disk type=\'file\' device=\'cdrom\'>\n <driver name=\'qemu\' error_policy=\'report\'/>\n <source startupPolicy=\'optional\'/>\n <target dev=\'sdc\' bus=\'sata\'/>\n <readonly/>\n <alias name=\'ua-bc49b2e3-3b67-4ceb-817d-5cbc2e433ad4\'/>\n <address type=\'drive\' controller=\'0\' bus=\'0\' target=\'0\' unit=\'2\'/>\n </disk>\n <disk type=\'file\' device=\'disk\' snapshot=\'no\'>\n <driver name=\'qemu\' type=\'qcow2\' cache=\'none\' error_policy=\'stop\' io=\'threads\' iothread=\'1\'/>\n <source file=\'/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c\' index=\'5\'>\n <seclabel model=\'dac\' r
elabel=\'no\'/>\n </source>\n <backingStore/>\n <target dev=\'vda\' bus=\'virtio\'/>\n <serial>eb15970b-7b94-4cce-ab44-50f57850aa7f</serial>\n <boot order=\'1\'/>\n <alias name=\'ua-eb15970b-7b94-4cce-ab44-50f57850aa7f\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x05\' slot=\'0x00\' function=\'0x0\'/>\n </disk>\n <controller type=\'virtio-serial\' index=\'0\' ports=\'16\'>\n <alias name=\'ua-02742c22-2ab3-475a-8db9-26c05cb93195\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x02\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'pci\' index=\'0\' model=\'pcie-root\'>\n <alias name=\'pcie.0\'/>\n </controller>\n <controller type=\'pci\' index=\'1\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'1\' port=\'0x10\'/>\n <alias name=\'pci.1\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x0\' multif
unction=\'on\'/>\n </controller>\n <controller type=\'pci\' index=\'2\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'2\' port=\'0x11\'/>\n <alias name=\'pci.2\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x1\'/>\n </controller>\n <controller type=\'pci\' index=\'3\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'3\' port=\'0x12\'/>\n <alias name=\'pci.3\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x2\'/>\n </controller>\n <controller type=\'pci\' index=\'4\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'4\' port=\'0x13\'/>\n <alias name=\'pci.4\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x3\'/>\n </controller>\n <controller type=\'pci\' index=\'5\' model=\'pcie-root-port\'>\n
<model name=\'pcie-root-port\'/>\n <target chassis=\'5\' port=\'0x14\'/>\n <alias name=\'pci.5\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x4\'/>\n </controller>\n <controller type=\'pci\' index=\'6\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'6\' port=\'0x15\'/>\n <alias name=\'pci.6\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x5\'/>\n </controller>\n <controller type=\'pci\' index=\'7\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'7\' port=\'0x16\'/>\n <alias name=\'pci.7\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x6\'/>\n </controller>\n <controller type=\'pci\' index=\'8\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'8\' port=\'0x17\'/>\n <alias name=\'pci.8\'/
>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x7\'/>\n </controller>\n <controller type=\'pci\' index=\'9\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'9\' port=\'0x18\'/>\n <alias name=\'pci.9\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x0\' multifunction=\'on\'/>\n </controller>\n <controller type=\'pci\' index=\'10\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'10\' port=\'0x19\'/>\n <alias name=\'pci.10\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x1\'/>\n </controller>\n <controller type=\'pci\' index=\'11\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'11\' port=\'0x1a\'/>\n <alias name=\'pci.11\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' functi
on=\'0x2\'/>\n </controller>\n <controller type=\'pci\' index=\'12\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'12\' port=\'0x1b\'/>\n <alias name=\'pci.12\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x3\'/>\n </controller>\n <controller type=\'pci\' index=\'13\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'13\' port=\'0x1c\'/>\n <alias name=\'pci.13\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x4\'/>\n </controller>\n <controller type=\'pci\' index=\'14\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'14\' port=\'0x1d\'/>\n <alias name=\'pci.14\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x5\'/>\n </controller>\n <controller type=\'pci\' index=\'15\' model=\'pcie-root-port\'>
\n <model name=\'pcie-root-port\'/>\n <target chassis=\'15\' port=\'0x1e\'/>\n <alias name=\'pci.15\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x6\'/>\n </controller>\n <controller type=\'pci\' index=\'16\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'16\' port=\'0x1f\'/>\n <alias name=\'pci.16\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x7\'/>\n </controller>\n <controller type=\'scsi\' index=\'0\' model=\'virtio-scsi\'>\n <driver iothread=\'1\'/>\n <alias name=\'ua-8f6fa789-f148-4383-b377-4111ce6f4cfe\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x04\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'usb\' index=\'0\' model=\'qemu-xhci\' ports=\'8\'>\n <alias name=\'ua-b67dee00-f4ea-40ad-9a82-df9e7506c247\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\
'0x03\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'sata\' index=\'0\'>\n <alias name=\'ide\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x1f\' function=\'0x2\'/>\n </controller>\n <interface type=\'bridge\'>\n <mac address=\'00:1a:4a:16:01:35\'/>\n <source bridge=\'UserDev\'/>\n <target dev=\'vnet36\'/>\n <model type=\'virtio\'/>\n <driver name=\'vhost\' queues=\'4\'/>\n <filterref filter=\'vdsm-no-mac-spoofing\'/>\n <link state=\'up\'/>\n <mtu size=\'1500\'/>\n <alias name=\'ua-827b6719-b206-46fe-b214-f6e15649abad\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x01\' slot=\'0x00\' function=\'0x0\'/>\n </interface>\n <channel type=\'unix\'>\n <source mode=\'bind\' path=\'/var/lib/libvirt/qemu/channels/1fd47e75-d708-43e4-ac0f-67bd28dceefd.ovirt-guest-agent.0\'/>\n <target type=\'virtio\' name=\'ovirt-guest-agent.0\' state=\'connected\'/>\n
<alias name=\'channel0\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'1\'/>\n </channel>\n <channel type=\'unix\'>\n <source mode=\'bind\' path=\'/var/lib/libvirt/qemu/channels/1fd47e75-d708-43e4-ac0f-67bd28dceefd.org.qemu.guest_agent.0\'/>\n <target type=\'virtio\' name=\'org.qemu.guest_agent.0\' state=\'connected\'/>\n <alias name=\'channel1\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'2\'/>\n </channel>\n <channel type=\'spicevmc\'>\n <target type=\'virtio\' name=\'com.redhat.spice.0\' state=\'disconnected\'/>\n <alias name=\'channel2\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'3\'/>\n </channel>\n <input type=\'tablet\' bus=\'usb\'>\n <alias name=\'input0\'/>\n <address type=\'usb\' bus=\'0\' port=\'1\'/>\n </input>\n <input type=\'mouse\' bus=\'ps2\'>\n <alias name=\'input1\'/>\n </input>\n <input type=\'keyboard\'
bus=\'ps2\'>\n <alias name=\'input2\'/>\n </input>\n <graphics type=\'spice\' port=\'5964\' tlsPort=\'5968\' autoport=\'yes\' listen=\'10.1.2.199\' passwdValidTo=\'1970-01-01T00:00:01\'>\n <listen type=\'network\' address=\'10.1.2.199\' network=\'vdsm-ovirtmgmt\'/>\n <channel name=\'main\' mode=\'secure\'/>\n <channel name=\'display\' mode=\'secure\'/>\n <channel name=\'inputs\' mode=\'secure\'/>\n <channel name=\'cursor\' mode=\'secure\'/>\n <channel name=\'playback\' mode=\'secure\'/>\n <channel name=\'record\' mode=\'secure\'/>\n <channel name=\'smartcard\' mode=\'secure\'/>\n <channel name=\'usbredir\' mode=\'secure\'/>\n </graphics>\n <graphics type=\'vnc\' port=\'5969\' autoport=\'yes\' listen=\'10.1.2.199\' keymap=\'en-us\' passwdValidTo=\'1970-01-01T00:00:01\'>\n <listen type=\'network\' address=\'10.1.2.199\' network=\'vdsm-ovirtmgmt\'/>\n </graphics>\n <video>\n <model type=\'qxl\' ram=\'65536\'
vram=\'32768\' vgamem=\'16384\' heads=\'1\' primary=\'yes\'/>\n <alias name=\'ua-703e6733-2d30-4b7d-b51a-173ba8e8348b\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x01\' function=\'0x0\'/>\n </video>\n <memballoon model=\'virtio\'>\n <stats period=\'5\'/>\n <alias name=\'ua-24df8012-5e0f-4b31-bc6a-0783da1963ee\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x06\' slot=\'0x00\' function=\'0x0\'/>\n </memballoon>\n <rng model=\'virtio\'>\n <backend model=\'random\'>/dev/urandom</backend>\n <alias name=\'ua-2f495af7-c0c2-49e4-b34f-bd50306f6871\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x07\' slot=\'0x00\' function=\'0x0\'/>\n </rng>\n </devices>\n <seclabel type=\'dynamic\' model=\'selinux\' relabel=\'yes\'>\n <label>system_u:system_r:svirt_t:s0:c229,c431</label>\n <imagelabel>system_u:object_r:svirt_image_t:s0:c229,c431</imagelabel>\n </seclabel>\n <seclabel type=\'dynamic\' model=\'da
c\' relabel=\'yes\'>\n <label>+107:+107</label>\n <imagelabel>+107:+107</imagelabel>\n </seclabel>\n <qemu:capabilities>\n <qemu:add capability=\'blockdev\'/>\n <qemu:add capability=\'incremental-backup\'/>\n </qemu:capabilities>\n</domain>\n', 'vmId': '1fd47e75-d708-43e4-ac0f-67bd28dceefd', 'xml': '<domain type=\'kvm\' id=\'35\' xmlns:qemu=\'http://libvirt.org/schemas/domain/qemu/1.0\'>\n <name>my_vm_hostname</name>\n <uuid>1fd47e75-d708-43e4-ac0f-67bd28dceefd</uuid>\n <metadata xmlns:ns0="http://ovirt.org/vm/tune/1.0" xmlns:ns1="http://ovirt.org/vm/1.0">\n <ns0:qos/>\n <ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0">\n <ovirt-vm:balloonTarget type="int">8388608</ovirt-vm:balloonTarget>\n <ovirt-vm:ballooningEnabled>true</ovirt-vm:ballooningEnabled>\n <ovirt-vm:clusterVersion>4.6</ovirt-vm:clusterVersion>\n <ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot>\n <ovirt-vm:guestAgentAPIVersion type="int">3</ovirt-vm:g
uestAgentAPIVersion>\n <ovirt-vm:jobs>{}</ovirt-vm:jobs>\n <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>\n <ovirt-vm:memGuaranteedSize type="int">4096</ovirt-vm:memGuaranteedSize>\n <ovirt-vm:minGuaranteedMemoryMb type="int">4096</ovirt-vm:minGuaranteedMemoryMb>\n <ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior>\n <ovirt-vm:startTime type="float">1627910770.633493</ovirt-vm:startTime>\n <ovirt-vm:device alias="ua-827b6719-b206-46fe-b214-f6e15649abad" mac_address="00:1a:4a:16:01:35">\n <ovirt-vm:network>UserDev</ovirt-vm:network>\n <ovirt-vm:custom>\n <ovirt-vm:queues>4</ovirt-vm:queues>\n </ovirt-vm:custom>\n </ovirt-vm:device>\n <ovirt-vm:device devtype="disk" name="vda">\n <ovirt-vm:domainID>e8ec5645-fc1b-4d64-a145-44aa8ac5ef48</ovirt-vm:domainID>\n <ovirt-vm:guestName>/dev/vda</ovirt-vm:guestName>\n <ovirt-vm:imageID>eb15970b-7b94-4cce-ab44-50f57850aa7f</ovirt-vm:imageID>\n
<ovirt-vm:managed type="bool">False</ovirt-vm:managed>\n <ovirt-vm:poolID>2948c860-9bdf-11e8-a6b3-00163e0419f0</ovirt-vm:poolID>\n <ovirt-vm:volumeID>6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:volumeID>\n <ovirt-vm:specParams>\n <ovirt-vm:pinToIoThread>1</ovirt-vm:pinToIoThread>\n </ovirt-vm:specParams>\n <ovirt-vm:volumeChain>\n <ovirt-vm:volumeChainNode>\n <ovirt-vm:domainID>e8ec5645-fc1b-4d64-a145-44aa8ac5ef48</ovirt-vm:domainID>\n <ovirt-vm:imageID>eb15970b-7b94-4cce-ab44-50f57850aa7f</ovirt-vm:imageID>\n <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>\n <ovirt-vm:leasePath>/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c.lease</ovirt-vm:leasePath>\n <ovirt-vm:path>/rhev/data-center/mnt/10.1.88.75,10.1.8
8.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:path>\n <ovirt-vm:volumeID>6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:volumeID>\n </ovirt-vm:volumeChainNode>\n </ovirt-vm:volumeChain>\n </ovirt-vm:device>\n <ovirt-vm:device devtype="disk" name="sdc">\n <ovirt-vm:managed type="bool">False</ovirt-vm:managed>\n </ovirt-vm:device>\n</ovirt-vm:vm>\n </metadata>\n <maxMemory slots=\'16\' unit=\'KiB\'>16777216</maxMemory>\n <memory unit=\'KiB\'>8388608</memory>\n <currentMemory unit=\'KiB\'>8388608</currentMemory>\n <vcpu placement=\'static\' current=\'4\'>16</vcpu>\n <iothreads>1</iothreads>\n <resource>\n <partition>/machine</partition>\n </resource>\n <sysinfo type=\'smbios\'>\n <system>\n <entry name=\'manufacturer\'>oVirt</entry>\n <entry name=\'product\'>RHEL</entry>\n <entry name=\'version\'>8.4-1.
2105.el8</entry>\n <entry name=\'serial\'>ce917200-c887-11ea-8000-3cecef30037a</entry>\n <entry name=\'uuid\'>1fd47e75-d708-43e4-ac0f-67bd28dceefd</entry>\n <entry name=\'family\'>oVirt</entry>\n </system>\n </sysinfo>\n <os>\n <type arch=\'x86_64\' machine=\'pc-q35-rhel8.4.0\'>hvm</type>\n <smbios mode=\'sysinfo\'/>\n </os>\n <features>\n <acpi/>\n </features>\n <cpu mode=\'custom\' match=\'exact\' check=\'full\'>\n <model fallback=\'forbid\'>Skylake-Server-noTSX-IBRS</model>\n <topology sockets=\'16\' dies=\'1\' cores=\'1\' threads=\'1\'/>\n <feature policy=\'require\' name=\'ssbd\'/>\n <feature policy=\'require\' name=\'md-clear\'/>\n <feature policy=\'disable\' name=\'mpx\'/>\n <feature policy=\'require\' name=\'hypervisor\'/>\n <feature policy=\'require\' name=\'pku\'/>\n <numa>\n <cell id=\'0\' cpus=\'0-15\' memory=\'8388608\' unit=\'KiB\'/>\n </numa>\n </cpu>\n <clock offset=\'variable\' adjustment=\'0\' basis
=\'utc\'>\n <timer name=\'rtc\' tickpolicy=\'catchup\'/>\n <timer name=\'pit\' tickpolicy=\'delay\'/>\n <timer name=\'hpet\' present=\'no\'/>\n </clock>\n <on_poweroff>destroy</on_poweroff>\n <on_reboot>restart</on_reboot>\n <on_crash>destroy</on_crash>\n <pm>\n <suspend-to-mem enabled=\'no\'/>\n <suspend-to-disk enabled=\'no\'/>\n </pm>\n <devices>\n <emulator>/usr/libexec/qemu-kvm</emulator>\n <disk type=\'file\' device=\'cdrom\'>\n <driver name=\'qemu\' error_policy=\'report\'/>\n <source startupPolicy=\'optional\'/>\n <target dev=\'sdc\' bus=\'sata\'/>\n <readonly/>\n <alias name=\'ua-bc49b2e3-3b67-4ceb-817d-5cbc2e433ad4\'/>\n <address type=\'drive\' controller=\'0\' bus=\'0\' target=\'0\' unit=\'2\'/>\n </disk>\n <disk type=\'file\' device=\'disk\' snapshot=\'no\'>\n <driver name=\'qemu\' type=\'qcow2\' cache=\'none\' error_policy=\'stop\' io=\'threads\' iothread=\'1\'/>\n <source file=\'/rhev/data-center
/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c\' index=\'5\'>\n <seclabel model=\'dac\' relabel=\'no\'/>\n </source>\n <backingStore/>\n <target dev=\'vda\' bus=\'virtio\'/>\n <serial>eb15970b-7b94-4cce-ab44-50f57850aa7f</serial>\n <boot order=\'1\'/>\n <alias name=\'ua-eb15970b-7b94-4cce-ab44-50f57850aa7f\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x05\' slot=\'0x00\' function=\'0x0\'/>\n </disk>\n <controller type=\'virtio-serial\' index=\'0\' ports=\'16\'>\n <alias name=\'ua-02742c22-2ab3-475a-8db9-26c05cb93195\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x02\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'pci\' index=\'0\' model=\'pcie-root\'>\n <alias name=\'pcie.0\'/>\n </controller>\n <controller type=\'pci\' index=\'1\' model=\'pcie-root-port\'>
\n <model name=\'pcie-root-port\'/>\n <target chassis=\'1\' port=\'0x10\'/>\n <alias name=\'pci.1\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x0\' multifunction=\'on\'/>\n </controller>\n <controller type=\'pci\' index=\'2\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'2\' port=\'0x11\'/>\n <alias name=\'pci.2\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x1\'/>\n </controller>\n <controller type=\'pci\' index=\'3\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'3\' port=\'0x12\'/>\n <alias name=\'pci.3\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x2\'/>\n </controller>\n <controller type=\'pci\' index=\'4\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'4\' port=\'0x13\'/>\n
<alias name=\'pci.4\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x3\'/>\n </controller>\n <controller type=\'pci\' index=\'5\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'5\' port=\'0x14\'/>\n <alias name=\'pci.5\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x4\'/>\n </controller>\n <controller type=\'pci\' index=\'6\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'6\' port=\'0x15\'/>\n <alias name=\'pci.6\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x5\'/>\n </controller>\n <controller type=\'pci\' index=\'7\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'7\' port=\'0x16\'/>\n <alias name=\'pci.7\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' functi
on=\'0x6\'/>\n </controller>\n <controller type=\'pci\' index=\'8\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'8\' port=\'0x17\'/>\n <alias name=\'pci.8\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x7\'/>\n </controller>\n <controller type=\'pci\' index=\'9\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'9\' port=\'0x18\'/>\n <alias name=\'pci.9\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x0\' multifunction=\'on\'/>\n </controller>\n <controller type=\'pci\' index=\'10\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'10\' port=\'0x19\'/>\n <alias name=\'pci.10\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x1\'/>\n </controller>\n <controller type=\'pci\' index=\'11\' model=\'pc
ie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'11\' port=\'0x1a\'/>\n <alias name=\'pci.11\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x2\'/>\n </controller>\n <controller type=\'pci\' index=\'12\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'12\' port=\'0x1b\'/>\n <alias name=\'pci.12\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x3\'/>\n </controller>\n <controller type=\'pci\' index=\'13\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'13\' port=\'0x1c\'/>\n <alias name=\'pci.13\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x4\'/>\n </controller>\n <controller type=\'pci\' index=\'14\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'14\' port=\'0x1d\'/
>\n <alias name=\'pci.14\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x5\'/>\n </controller>\n <controller type=\'pci\' index=\'15\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'15\' port=\'0x1e\'/>\n <alias name=\'pci.15\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x6\'/>\n </controller>\n <controller type=\'pci\' index=\'16\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'16\' port=\'0x1f\'/>\n <alias name=\'pci.16\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x7\'/>\n </controller>\n <controller type=\'scsi\' index=\'0\' model=\'virtio-scsi\'>\n <driver iothread=\'1\'/>\n <alias name=\'ua-8f6fa789-f148-4383-b377-4111ce6f4cfe\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x04\' slot=\'0x00\' function=\'0x0\'
/>\n </controller>\n <controller type=\'usb\' index=\'0\' model=\'qemu-xhci\' ports=\'8\'>\n <alias name=\'ua-b67dee00-f4ea-40ad-9a82-df9e7506c247\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x03\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'sata\' index=\'0\'>\n <alias name=\'ide\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x1f\' function=\'0x2\'/>\n </controller>\n <interface type=\'bridge\'>\n <mac address=\'00:1a:4a:16:01:35\'/>\n <source bridge=\'UserDev\'/>\n <target dev=\'vnet36\'/>\n <model type=\'virtio\'/>\n <driver name=\'vhost\' queues=\'4\'/>\n <filterref filter=\'vdsm-no-mac-spoofing\'/>\n <link state=\'up\'/>\n <mtu size=\'1500\'/>\n <alias name=\'ua-827b6719-b206-46fe-b214-f6e15649abad\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x01\' slot=\'0x00\' function=\'0x0\'/>\n </interface>\n <channel type=\'unix\'>\n
<source mode=\'bind\' path=\'/var/lib/libvirt/qemu/channels/1fd47e75-d708-43e4-ac0f-67bd28dceefd.ovirt-guest-agent.0\'/>\n <target type=\'virtio\' name=\'ovirt-guest-agent.0\' state=\'connected\'/>\n <alias name=\'channel0\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'1\'/>\n </channel>\n <channel type=\'unix\'>\n <source mode=\'bind\' path=\'/var/lib/libvirt/qemu/channels/1fd47e75-d708-43e4-ac0f-67bd28dceefd.org.qemu.guest_agent.0\'/>\n <target type=\'virtio\' name=\'org.qemu.guest_agent.0\' state=\'connected\'/>\n <alias name=\'channel1\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'2\'/>\n </channel>\n <channel type=\'spicevmc\'>\n <target type=\'virtio\' name=\'com.redhat.spice.0\' state=\'disconnected\'/>\n <alias name=\'channel2\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'3\'/>\n </channel>\n <input type=\'tablet\' bus=\'usb\'>\
n <alias name=\'input0\'/>\n <address type=\'usb\' bus=\'0\' port=\'1\'/>\n </input>\n <input type=\'mouse\' bus=\'ps2\'>\n <alias name=\'input1\'/>\n </input>\n <input type=\'keyboard\' bus=\'ps2\'>\n <alias name=\'input2\'/>\n </input>\n <graphics type=\'spice\' port=\'5964\' tlsPort=\'5968\' autoport=\'yes\' listen=\'10.1.2.199\' passwdValidTo=\'1970-01-01T00:00:01\'>\n <listen type=\'network\' address=\'10.1.2.199\' network=\'vdsm-ovirtmgmt\'/>\n <channel name=\'main\' mode=\'secure\'/>\n <channel name=\'display\' mode=\'secure\'/>\n <channel name=\'inputs\' mode=\'secure\'/>\n <channel name=\'cursor\' mode=\'secure\'/>\n <channel name=\'playback\' mode=\'secure\'/>\n <channel name=\'record\' mode=\'secure\'/>\n <channel name=\'smartcard\' mode=\'secure\'/>\n <channel name=\'usbredir\' mode=\'secure\'/>\n </graphics>\n <graphics type=\'vnc\' port=\'5969\' autoport=\'yes\' listen=\'10.1.2.199
\' keymap=\'en-us\' passwdValidTo=\'1970-01-01T00:00:01\'>\n <listen type=\'network\' address=\'10.1.2.199\' network=\'vdsm-ovirtmgmt\'/>\n </graphics>\n <video>\n <model type=\'qxl\' ram=\'65536\' vram=\'32768\' vgamem=\'16384\' heads=\'1\' primary=\'yes\'/>\n <alias name=\'ua-703e6733-2d30-4b7d-b51a-173ba8e8348b\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x01\' function=\'0x0\'/>\n </video>\n <memballoon model=\'virtio\'>\n <stats period=\'5\'/>\n <alias name=\'ua-24df8012-5e0f-4b31-bc6a-0783da1963ee\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x06\' slot=\'0x00\' function=\'0x0\'/>\n </memballoon>\n <rng model=\'virtio\'>\n <backend model=\'random\'>/dev/urandom</backend>\n <alias name=\'ua-2f495af7-c0c2-49e4-b34f-bd50306f6871\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x07\' slot=\'0x00\' function=\'0x0\'/>\n </rng>\n </devices>\n <seclabel type=\'dynamic\' model=\'se
linux\' relabel=\'yes\'>\n <label>system_u:system_r:svirt_t:s0:c229,c431</label>\n <imagelabel>system_u:object_r:svirt_image_t:s0:c229,c431</imagelabel>\n </seclabel>\n <seclabel type=\'dynamic\' model=\'dac\' relabel=\'yes\'>\n <label>+107:+107</label>\n <imagelabel>+107:+107</imagelabel>\n </seclabel>\n <qemu:capabilities>\n <qemu:add capability=\'blockdev\'/>\n <qemu:add capability=\'incremental-backup\'/>\n </qemu:capabilities>\n</domain>\n', 'elapsedTimeOffset': 89141.10990166664, 'enableGuestEvents': True, 'migrationDest': 'libvirt'}, incomingLimit=2) from=::ffff:10.1.2.199,57742, vmId=1fd47e75-d708-43e4-ac0f-67bd28dceefd (api:48)
2021-08-03 07:11:51,879-0700 INFO (jsonrpc/0) [api.virt] START create(vmParams={'_srcDomXML': '<domain type=\'kvm\' id=\'35\' xmlns:qemu=\'http://libvirt.org/schemas/domain/qemu/1.0\'>\n <name>my_vm_hostname</name>\n <uuid>1fd47e75-d708-43e4-ac0f-67bd28dceefd</uuid>\n <metadata xmlns:ns0="http://ovirt.org/vm/tune/1.0" xmlns:ns1="http://ovirt.org/vm/1.0">\n <ns0:qos/>\n <ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0">\n <ovirt-vm:balloonTarget type="int">8388608</ovirt-vm:balloonTarget>\n <ovirt-vm:ballooningEnabled>true</ovirt-vm:ballooningEnabled>\n <ovirt-vm:clusterVersion>4.6</ovirt-vm:clusterVersion>\n <ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot>\n <ovirt-vm:guestAgentAPIVersion type="int">3</ovirt-vm:guestAgentAPIVersion>\n <ovirt-vm:jobs>{}</ovirt-vm:jobs>\n <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>\n <ovirt-vm:memGuaranteedSize type="int">4096</ovirt-vm:memGuaranteedSize>\n <ovirt-vm:minGuarant
eedMemoryMb type="int">4096</ovirt-vm:minGuaranteedMemoryMb>\n <ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior>\n <ovirt-vm:startTime type="float">1627910770.633493</ovirt-vm:startTime>\n <ovirt-vm:device alias="ua-827b6719-b206-46fe-b214-f6e15649abad" mac_address="00:1a:4a:16:01:35">\n <ovirt-vm:network>UserDev</ovirt-vm:network>\n <ovirt-vm:custom>\n <ovirt-vm:queues>4</ovirt-vm:queues>\n </ovirt-vm:custom>\n </ovirt-vm:device>\n <ovirt-vm:device devtype="disk" name="vda">\n <ovirt-vm:domainID>e8ec5645-fc1b-4d64-a145-44aa8ac5ef48</ovirt-vm:domainID>\n <ovirt-vm:guestName>/dev/vda</ovirt-vm:guestName>\n <ovirt-vm:imageID>eb15970b-7b94-4cce-ab44-50f57850aa7f</ovirt-vm:imageID>\n <ovirt-vm:managed type="bool">False</ovirt-vm:managed>\n <ovirt-vm:poolID>2948c860-9bdf-11e8-a6b3-00163e0419f0</ovirt-vm:poolID>\n <ovirt-vm:volumeID>6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:volumeID>
\n <ovirt-vm:specParams>\n <ovirt-vm:pinToIoThread>1</ovirt-vm:pinToIoThread>\n </ovirt-vm:specParams>\n <ovirt-vm:volumeChain>\n <ovirt-vm:volumeChainNode>\n <ovirt-vm:domainID>e8ec5645-fc1b-4d64-a145-44aa8ac5ef48</ovirt-vm:domainID>\n <ovirt-vm:imageID>eb15970b-7b94-4cce-ab44-50f57850aa7f</ovirt-vm:imageID>\n <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>\n <ovirt-vm:leasePath>/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c.lease</ovirt-vm:leasePath>\n <ovirt-vm:path>/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:path>\n <ovirt-vm:volumeID>6f82b02d-8c22-4d50-a30e-53
511776354c</ovirt-vm:volumeID>\n </ovirt-vm:volumeChainNode>\n </ovirt-vm:volumeChain>\n </ovirt-vm:device>\n <ovirt-vm:device devtype="disk" name="sdc">\n <ovirt-vm:managed type="bool">False</ovirt-vm:managed>\n </ovirt-vm:device>\n</ovirt-vm:vm>\n </metadata>\n <maxMemory slots=\'16\' unit=\'KiB\'>16777216</maxMemory>\n <memory unit=\'KiB\'>8388608</memory>\n <currentMemory unit=\'KiB\'>8388608</currentMemory>\n <vcpu placement=\'static\' current=\'4\'>16</vcpu>\n <iothreads>1</iothreads>\n <resource>\n <partition>/machine</partition>\n </resource>\n <sysinfo type=\'smbios\'>\n <system>\n <entry name=\'manufacturer\'>oVirt</entry>\n <entry name=\'product\'>RHEL</entry>\n <entry name=\'version\'>8.4-1.2105.el8</entry>\n <entry name=\'serial\'>ce917200-c887-11ea-8000-3cecef30037a</entry>\n <entry name=\'uuid\'>1fd47e75-d708-43e4-ac0f-67bd28dceefd</entry>\n <entry name=\'family\'>oVirt</entry>\n </syste
m>\n </sysinfo>\n <os>\n <type arch=\'x86_64\' machine=\'pc-q35-rhel8.4.0\'>hvm</type>\n <smbios mode=\'sysinfo\'/>\n </os>\n <features>\n <acpi/>\n </features>\n <cpu mode=\'custom\' match=\'exact\' check=\'full\'>\n <model fallback=\'forbid\'>Skylake-Server-noTSX-IBRS</model>\n <topology sockets=\'16\' dies=\'1\' cores=\'1\' threads=\'1\'/>\n <feature policy=\'require\' name=\'ssbd\'/>\n <feature policy=\'require\' name=\'md-clear\'/>\n <feature policy=\'disable\' name=\'mpx\'/>\n <feature policy=\'require\' name=\'hypervisor\'/>\n <feature policy=\'require\' name=\'pku\'/>\n <numa>\n <cell id=\'0\' cpus=\'0-15\' memory=\'8388608\' unit=\'KiB\'/>\n </numa>\n </cpu>\n <clock offset=\'variable\' adjustment=\'0\' basis=\'utc\'>\n <timer name=\'rtc\' tickpolicy=\'catchup\'/>\n <timer name=\'pit\' tickpolicy=\'delay\'/>\n <timer name=\'hpet\' present=\'no\'/>\n </clock>\n <on_poweroff>destroy</on_poweroff>\n <on_reboot>restar
t</on_reboot>\n <on_crash>destroy</on_crash>\n <pm>\n <suspend-to-mem enabled=\'no\'/>\n <suspend-to-disk enabled=\'no\'/>\n </pm>\n <devices>\n <emulator>/usr/libexec/qemu-kvm</emulator>\n <disk type=\'file\' device=\'cdrom\'>\n <driver name=\'qemu\' error_policy=\'report\'/>\n <source startupPolicy=\'optional\'/>\n <target dev=\'sdc\' bus=\'sata\'/>\n <readonly/>\n <alias name=\'ua-bc49b2e3-3b67-4ceb-817d-5cbc2e433ad4\'/>\n <address type=\'drive\' controller=\'0\' bus=\'0\' target=\'0\' unit=\'2\'/>\n </disk>\n <disk type=\'file\' device=\'disk\' snapshot=\'no\'>\n <driver name=\'qemu\' type=\'qcow2\' cache=\'none\' error_policy=\'stop\' io=\'threads\' iothread=\'1\'/>\n <source file=\'/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c\' index=\'5\'>\n <seclabel model=\'dac\' relabel=
\'no\'/>\n </source>\n <backingStore/>\n <target dev=\'vda\' bus=\'virtio\'/>\n <serial>eb15970b-7b94-4cce-ab44-50f57850aa7f</serial>\n <boot order=\'1\'/>\n <alias name=\'ua-eb15970b-7b94-4cce-ab44-50f57850aa7f\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x05\' slot=\'0x00\' function=\'0x0\'/>\n </disk>\n <controller type=\'virtio-serial\' index=\'0\' ports=\'16\'>\n <alias name=\'ua-02742c22-2ab3-475a-8db9-26c05cb93195\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x02\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'pci\' index=\'0\' model=\'pcie-root\'>\n <alias name=\'pcie.0\'/>\n </controller>\n <controller type=\'pci\' index=\'1\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'1\' port=\'0x10\'/>\n <alias name=\'pci.1\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x0\' multifunction
=\'on\'/>\n </controller>\n <controller type=\'pci\' index=\'2\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'2\' port=\'0x11\'/>\n <alias name=\'pci.2\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x1\'/>\n </controller>\n <controller type=\'pci\' index=\'3\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'3\' port=\'0x12\'/>\n <alias name=\'pci.3\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x2\'/>\n </controller>\n <controller type=\'pci\' index=\'4\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'4\' port=\'0x13\'/>\n <alias name=\'pci.4\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x3\'/>\n </controller>\n <controller type=\'pci\' index=\'5\' model=\'pcie-root-port\'>\n <mode
l name=\'pcie-root-port\'/>\n <target chassis=\'5\' port=\'0x14\'/>\n <alias name=\'pci.5\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x4\'/>\n </controller>\n <controller type=\'pci\' index=\'6\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'6\' port=\'0x15\'/>\n <alias name=\'pci.6\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x5\'/>\n </controller>\n <controller type=\'pci\' index=\'7\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'7\' port=\'0x16\'/>\n <alias name=\'pci.7\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x6\'/>\n </controller>\n <controller type=\'pci\' index=\'8\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'8\' port=\'0x17\'/>\n <alias name=\'pci.8\'/>\n
<address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x7\'/>\n </controller>\n <controller type=\'pci\' index=\'9\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'9\' port=\'0x18\'/>\n <alias name=\'pci.9\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x0\' multifunction=\'on\'/>\n </controller>\n <controller type=\'pci\' index=\'10\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'10\' port=\'0x19\'/>\n <alias name=\'pci.10\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x1\'/>\n </controller>\n <controller type=\'pci\' index=\'11\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'11\' port=\'0x1a\'/>\n <alias name=\'pci.11\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x
2\'/>\n </controller>\n <controller type=\'pci\' index=\'12\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'12\' port=\'0x1b\'/>\n <alias name=\'pci.12\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x3\'/>\n </controller>\n <controller type=\'pci\' index=\'13\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'13\' port=\'0x1c\'/>\n <alias name=\'pci.13\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x4\'/>\n </controller>\n <controller type=\'pci\' index=\'14\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'14\' port=\'0x1d\'/>\n <alias name=\'pci.14\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x5\'/>\n </controller>\n <controller type=\'pci\' index=\'15\' model=\'pcie-root-port\'>\n
<model name=\'pcie-root-port\'/>\n <target chassis=\'15\' port=\'0x1e\'/>\n <alias name=\'pci.15\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x6\'/>\n </controller>\n <controller type=\'pci\' index=\'16\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'16\' port=\'0x1f\'/>\n <alias name=\'pci.16\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x7\'/>\n </controller>\n <controller type=\'scsi\' index=\'0\' model=\'virtio-scsi\'>\n <driver iothread=\'1\'/>\n <alias name=\'ua-8f6fa789-f148-4383-b377-4111ce6f4cfe\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x04\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'usb\' index=\'0\' model=\'qemu-xhci\' ports=\'8\'>\n <alias name=\'ua-b67dee00-f4ea-40ad-9a82-df9e7506c247\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x03\'
slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'sata\' index=\'0\'>\n <alias name=\'ide\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x1f\' function=\'0x2\'/>\n </controller>\n <interface type=\'bridge\'>\n <mac address=\'00:1a:4a:16:01:35\'/>\n <source bridge=\'UserDev\'/>\n <target dev=\'vnet36\'/>\n <model type=\'virtio\'/>\n <driver name=\'vhost\' queues=\'4\'/>\n <filterref filter=\'vdsm-no-mac-spoofing\'/>\n <link state=\'up\'/>\n <mtu size=\'1500\'/>\n <alias name=\'ua-827b6719-b206-46fe-b214-f6e15649abad\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x01\' slot=\'0x00\' function=\'0x0\'/>\n </interface>\n <channel type=\'unix\'>\n <source mode=\'bind\' path=\'/var/lib/libvirt/qemu/channels/1fd47e75-d708-43e4-ac0f-67bd28dceefd.ovirt-guest-agent.0\'/>\n <target type=\'virtio\' name=\'ovirt-guest-agent.0\' state=\'connected\'/>\n <alias
name=\'channel0\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'1\'/>\n </channel>\n <channel type=\'unix\'>\n <source mode=\'bind\' path=\'/var/lib/libvirt/qemu/channels/1fd47e75-d708-43e4-ac0f-67bd28dceefd.org.qemu.guest_agent.0\'/>\n <target type=\'virtio\' name=\'org.qemu.guest_agent.0\' state=\'connected\'/>\n <alias name=\'channel1\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'2\'/>\n </channel>\n <channel type=\'spicevmc\'>\n <target type=\'virtio\' name=\'com.redhat.spice.0\' state=\'disconnected\'/>\n <alias name=\'channel2\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'3\'/>\n </channel>\n <input type=\'tablet\' bus=\'usb\'>\n <alias name=\'input0\'/>\n <address type=\'usb\' bus=\'0\' port=\'1\'/>\n </input>\n <input type=\'mouse\' bus=\'ps2\'>\n <alias name=\'input1\'/>\n </input>\n <input type=\'keyboard\' bus=\'p
s2\'>\n <alias name=\'input2\'/>\n </input>\n <graphics type=\'spice\' port=\'5964\' tlsPort=\'5968\' autoport=\'yes\' listen=\'10.1.2.199\' passwdValidTo=\'1970-01-01T00:00:01\'>\n <listen type=\'network\' address=\'10.1.2.199\' network=\'vdsm-ovirtmgmt\'/>\n <channel name=\'main\' mode=\'secure\'/>\n <channel name=\'display\' mode=\'secure\'/>\n <channel name=\'inputs\' mode=\'secure\'/>\n <channel name=\'cursor\' mode=\'secure\'/>\n <channel name=\'playback\' mode=\'secure\'/>\n <channel name=\'record\' mode=\'secure\'/>\n <channel name=\'smartcard\' mode=\'secure\'/>\n <channel name=\'usbredir\' mode=\'secure\'/>\n </graphics>\n <graphics type=\'vnc\' port=\'5969\' autoport=\'yes\' listen=\'10.1.2.199\' keymap=\'en-us\' passwdValidTo=\'1970-01-01T00:00:01\'>\n <listen type=\'network\' address=\'10.1.2.199\' network=\'vdsm-ovirtmgmt\'/>\n </graphics>\n <video>\n <model type=\'qxl\' ram=\'65536\' vram=\'
32768\' vgamem=\'16384\' heads=\'1\' primary=\'yes\'/>\n <alias name=\'ua-703e6733-2d30-4b7d-b51a-173ba8e8348b\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x01\' function=\'0x0\'/>\n </video>\n <memballoon model=\'virtio\'>\n <stats period=\'5\'/>\n <alias name=\'ua-24df8012-5e0f-4b31-bc6a-0783da1963ee\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x06\' slot=\'0x00\' function=\'0x0\'/>\n </memballoon>\n <rng model=\'virtio\'>\n <backend model=\'random\'>/dev/urandom</backend>\n <alias name=\'ua-2f495af7-c0c2-49e4-b34f-bd50306f6871\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x07\' slot=\'0x00\' function=\'0x0\'/>\n </rng>\n </devices>\n <seclabel type=\'dynamic\' model=\'selinux\' relabel=\'yes\'>\n <label>system_u:system_r:svirt_t:s0:c229,c431</label>\n <imagelabel>system_u:object_r:svirt_image_t:s0:c229,c431</imagelabel>\n </seclabel>\n <seclabel type=\'dynamic\' model=\'dac\' rel
abel=\'yes\'>\n <label>+107:+107</label>\n <imagelabel>+107:+107</imagelabel>\n </seclabel>\n <qemu:capabilities>\n <qemu:add capability=\'blockdev\'/>\n <qemu:add capability=\'incremental-backup\'/>\n </qemu:capabilities>\n</domain>\n', 'vmId': '1fd47e75-d708-43e4-ac0f-67bd28dceefd', 'xml': '<domain type=\'kvm\' id=\'35\' xmlns:qemu=\'http://libvirt.org/schemas/domain/qemu/1.0\'>\n <name>my_vm_hostname</name>\n <uuid>1fd47e75-d708-43e4-ac0f-67bd28dceefd</uuid>\n <metadata xmlns:ns0="http://ovirt.org/vm/tune/1.0" xmlns:ns1="http://ovirt.org/vm/1.0">\n <ns0:qos/>\n <ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0">\n <ovirt-vm:balloonTarget type="int">8388608</ovirt-vm:balloonTarget>\n <ovirt-vm:ballooningEnabled>true</ovirt-vm:ballooningEnabled>\n <ovirt-vm:clusterVersion>4.6</ovirt-vm:clusterVersion>\n <ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot>\n <ovirt-vm:guestAgentAPIVersion type="int">3</ovirt-vm:guestAge
ntAPIVersion>\n <ovirt-vm:jobs>{}</ovirt-vm:jobs>\n <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>\n <ovirt-vm:memGuaranteedSize type="int">4096</ovirt-vm:memGuaranteedSize>\n <ovirt-vm:minGuaranteedMemoryMb type="int">4096</ovirt-vm:minGuaranteedMemoryMb>\n <ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior>\n <ovirt-vm:startTime type="float">1627910770.633493</ovirt-vm:startTime>\n <ovirt-vm:device alias="ua-827b6719-b206-46fe-b214-f6e15649abad" mac_address="00:1a:4a:16:01:35">\n <ovirt-vm:network>UserDev</ovirt-vm:network>\n <ovirt-vm:custom>\n <ovirt-vm:queues>4</ovirt-vm:queues>\n </ovirt-vm:custom>\n </ovirt-vm:device>\n <ovirt-vm:device devtype="disk" name="vda">\n <ovirt-vm:domainID>e8ec5645-fc1b-4d64-a145-44aa8ac5ef48</ovirt-vm:domainID>\n <ovirt-vm:guestName>/dev/vda</ovirt-vm:guestName>\n <ovirt-vm:imageID>eb15970b-7b94-4cce-ab44-50f57850aa7f</ovirt-vm:imageID>\n <ovirt
-vm:managed type="bool">False</ovirt-vm:managed>\n <ovirt-vm:poolID>2948c860-9bdf-11e8-a6b3-00163e0419f0</ovirt-vm:poolID>\n <ovirt-vm:volumeID>6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:volumeID>\n <ovirt-vm:specParams>\n <ovirt-vm:pinToIoThread>1</ovirt-vm:pinToIoThread>\n </ovirt-vm:specParams>\n <ovirt-vm:volumeChain>\n <ovirt-vm:volumeChainNode>\n <ovirt-vm:domainID>e8ec5645-fc1b-4d64-a145-44aa8ac5ef48</ovirt-vm:domainID>\n <ovirt-vm:imageID>eb15970b-7b94-4cce-ab44-50f57850aa7f</ovirt-vm:imageID>\n <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>\n <ovirt-vm:leasePath>/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c.lease</ovirt-vm:leasePath>\n <ovirt-vm:path>/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10
.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:path>\n <ovirt-vm:volumeID>6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:volumeID>\n </ovirt-vm:volumeChainNode>\n </ovirt-vm:volumeChain>\n </ovirt-vm:device>\n <ovirt-vm:device devtype="disk" name="sdc">\n <ovirt-vm:managed type="bool">False</ovirt-vm:managed>\n </ovirt-vm:device>\n</ovirt-vm:vm>\n </metadata>\n <maxMemory slots=\'16\' unit=\'KiB\'>16777216</maxMemory>\n <memory unit=\'KiB\'>8388608</memory>\n <currentMemory unit=\'KiB\'>8388608</currentMemory>\n <vcpu placement=\'static\' current=\'4\'>16</vcpu>\n <iothreads>1</iothreads>\n <resource>\n <partition>/machine</partition>\n </resource>\n <sysinfo type=\'smbios\'>\n <system>\n <entry name=\'manufacturer\'>oVirt</entry>\n <entry name=\'product\'>RHEL</entry>\n <entry name=\'version\'>8.4-1.2105.el
8</entry>\n <entry name=\'serial\'>ce917200-c887-11ea-8000-3cecef30037a</entry>\n <entry name=\'uuid\'>1fd47e75-d708-43e4-ac0f-67bd28dceefd</entry>\n <entry name=\'family\'>oVirt</entry>\n </system>\n </sysinfo>\n <os>\n <type arch=\'x86_64\' machine=\'pc-q35-rhel8.4.0\'>hvm</type>\n <smbios mode=\'sysinfo\'/>\n </os>\n <features>\n <acpi/>\n </features>\n <cpu mode=\'custom\' match=\'exact\' check=\'full\'>\n <model fallback=\'forbid\'>Skylake-Server-noTSX-IBRS</model>\n <topology sockets=\'16\' dies=\'1\' cores=\'1\' threads=\'1\'/>\n <feature policy=\'require\' name=\'ssbd\'/>\n <feature policy=\'require\' name=\'md-clear\'/>\n <feature policy=\'disable\' name=\'mpx\'/>\n <feature policy=\'require\' name=\'hypervisor\'/>\n <feature policy=\'require\' name=\'pku\'/>\n <numa>\n <cell id=\'0\' cpus=\'0-15\' memory=\'8388608\' unit=\'KiB\'/>\n </numa>\n </cpu>\n <clock offset=\'variable\' adjustment=\'0\' basis=\'utc\
'>\n <timer name=\'rtc\' tickpolicy=\'catchup\'/>\n <timer name=\'pit\' tickpolicy=\'delay\'/>\n <timer name=\'hpet\' present=\'no\'/>\n </clock>\n <on_poweroff>destroy</on_poweroff>\n <on_reboot>restart</on_reboot>\n <on_crash>destroy</on_crash>\n <pm>\n <suspend-to-mem enabled=\'no\'/>\n <suspend-to-disk enabled=\'no\'/>\n </pm>\n <devices>\n <emulator>/usr/libexec/qemu-kvm</emulator>\n <disk type=\'file\' device=\'cdrom\'>\n <driver name=\'qemu\' error_policy=\'report\'/>\n <source startupPolicy=\'optional\'/>\n <target dev=\'sdc\' bus=\'sata\'/>\n <readonly/>\n <alias name=\'ua-bc49b2e3-3b67-4ceb-817d-5cbc2e433ad4\'/>\n <address type=\'drive\' controller=\'0\' bus=\'0\' target=\'0\' unit=\'2\'/>\n </disk>\n <disk type=\'file\' device=\'disk\' snapshot=\'no\'>\n <driver name=\'qemu\' type=\'qcow2\' cache=\'none\' error_policy=\'stop\' io=\'threads\' iothread=\'1\'/>\n <source file=\'/rhev/data-center/mnt/10
.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c\' index=\'5\'>\n <seclabel model=\'dac\' relabel=\'no\'/>\n </source>\n <backingStore/>\n <target dev=\'vda\' bus=\'virtio\'/>\n <serial>eb15970b-7b94-4cce-ab44-50f57850aa7f</serial>\n <boot order=\'1\'/>\n <alias name=\'ua-eb15970b-7b94-4cce-ab44-50f57850aa7f\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x05\' slot=\'0x00\' function=\'0x0\'/>\n </disk>\n <controller type=\'virtio-serial\' index=\'0\' ports=\'16\'>\n <alias name=\'ua-02742c22-2ab3-475a-8db9-26c05cb93195\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x02\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'pci\' index=\'0\' model=\'pcie-root\'>\n <alias name=\'pcie.0\'/>\n </controller>\n <controller type=\'pci\' index=\'1\' model=\'pcie-root-port\'>\n
<model name=\'pcie-root-port\'/>\n <target chassis=\'1\' port=\'0x10\'/>\n <alias name=\'pci.1\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x0\' multifunction=\'on\'/>\n </controller>\n <controller type=\'pci\' index=\'2\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'2\' port=\'0x11\'/>\n <alias name=\'pci.2\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x1\'/>\n </controller>\n <controller type=\'pci\' index=\'3\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'3\' port=\'0x12\'/>\n <alias name=\'pci.3\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x2\'/>\n </controller>\n <controller type=\'pci\' index=\'4\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'4\' port=\'0x13\'/>\n <a
lias name=\'pci.4\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x3\'/>\n </controller>\n <controller type=\'pci\' index=\'5\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'5\' port=\'0x14\'/>\n <alias name=\'pci.5\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x4\'/>\n </controller>\n <controller type=\'pci\' index=\'6\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'6\' port=\'0x15\'/>\n <alias name=\'pci.6\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x5\'/>\n </controller>\n <controller type=\'pci\' index=\'7\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'7\' port=\'0x16\'/>\n <alias name=\'pci.7\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x
6\'/>\n </controller>\n <controller type=\'pci\' index=\'8\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'8\' port=\'0x17\'/>\n <alias name=\'pci.8\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x7\'/>\n </controller>\n <controller type=\'pci\' index=\'9\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'9\' port=\'0x18\'/>\n <alias name=\'pci.9\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x0\' multifunction=\'on\'/>\n </controller>\n <controller type=\'pci\' index=\'10\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'10\' port=\'0x19\'/>\n <alias name=\'pci.10\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x1\'/>\n </controller>\n <controller type=\'pci\' index=\'11\' model=\'pcie-root
-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'11\' port=\'0x1a\'/>\n <alias name=\'pci.11\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x2\'/>\n </controller>\n <controller type=\'pci\' index=\'12\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'12\' port=\'0x1b\'/>\n <alias name=\'pci.12\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x3\'/>\n </controller>\n <controller type=\'pci\' index=\'13\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'13\' port=\'0x1c\'/>\n <alias name=\'pci.13\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x4\'/>\n </controller>\n <controller type=\'pci\' index=\'14\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'14\' port=\'0x1d\'/>\n
<alias name=\'pci.14\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x5\'/>\n </controller>\n <controller type=\'pci\' index=\'15\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'15\' port=\'0x1e\'/>\n <alias name=\'pci.15\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x6\'/>\n </controller>\n <controller type=\'pci\' index=\'16\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'16\' port=\'0x1f\'/>\n <alias name=\'pci.16\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x7\'/>\n </controller>\n <controller type=\'scsi\' index=\'0\' model=\'virtio-scsi\'>\n <driver iothread=\'1\'/>\n <alias name=\'ua-8f6fa789-f148-4383-b377-4111ce6f4cfe\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x04\' slot=\'0x00\' function=\'0x0\'/>\n
</controller>\n <controller type=\'usb\' index=\'0\' model=\'qemu-xhci\' ports=\'8\'>\n <alias name=\'ua-b67dee00-f4ea-40ad-9a82-df9e7506c247\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x03\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'sata\' index=\'0\'>\n <alias name=\'ide\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x1f\' function=\'0x2\'/>\n </controller>\n <interface type=\'bridge\'>\n <mac address=\'00:1a:4a:16:01:35\'/>\n <source bridge=\'UserDev\'/>\n <target dev=\'vnet36\'/>\n <model type=\'virtio\'/>\n <driver name=\'vhost\' queues=\'4\'/>\n <filterref filter=\'vdsm-no-mac-spoofing\'/>\n <link state=\'up\'/>\n <mtu size=\'1500\'/>\n <alias name=\'ua-827b6719-b206-46fe-b214-f6e15649abad\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x01\' slot=\'0x00\' function=\'0x0\'/>\n </interface>\n <channel type=\'unix\'>\n <so
urce mode=\'bind\' path=\'/var/lib/libvirt/qemu/channels/1fd47e75-d708-43e4-ac0f-67bd28dceefd.ovirt-guest-agent.0\'/>\n <target type=\'virtio\' name=\'ovirt-guest-agent.0\' state=\'connected\'/>\n <alias name=\'channel0\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'1\'/>\n </channel>\n <channel type=\'unix\'>\n <source mode=\'bind\' path=\'/var/lib/libvirt/qemu/channels/1fd47e75-d708-43e4-ac0f-67bd28dceefd.org.qemu.guest_agent.0\'/>\n <target type=\'virtio\' name=\'org.qemu.guest_agent.0\' state=\'connected\'/>\n <alias name=\'channel1\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'2\'/>\n </channel>\n <channel type=\'spicevmc\'>\n <target type=\'virtio\' name=\'com.redhat.spice.0\' state=\'disconnected\'/>\n <alias name=\'channel2\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'3\'/>\n </channel>\n <input type=\'tablet\' bus=\'usb\'>\n
<alias name=\'input0\'/>\n <address type=\'usb\' bus=\'0\' port=\'1\'/>\n </input>\n <input type=\'mouse\' bus=\'ps2\'>\n <alias name=\'input1\'/>\n </input>\n <input type=\'keyboard\' bus=\'ps2\'>\n <alias name=\'input2\'/>\n </input>\n <graphics type=\'spice\' port=\'5964\' tlsPort=\'5968\' autoport=\'yes\' listen=\'10.1.2.199\' passwdValidTo=\'1970-01-01T00:00:01\'>\n <listen type=\'network\' address=\'10.1.2.199\' network=\'vdsm-ovirtmgmt\'/>\n <channel name=\'main\' mode=\'secure\'/>\n <channel name=\'display\' mode=\'secure\'/>\n <channel name=\'inputs\' mode=\'secure\'/>\n <channel name=\'cursor\' mode=\'secure\'/>\n <channel name=\'playback\' mode=\'secure\'/>\n <channel name=\'record\' mode=\'secure\'/>\n <channel name=\'smartcard\' mode=\'secure\'/>\n <channel name=\'usbredir\' mode=\'secure\'/>\n </graphics>\n <graphics type=\'vnc\' port=\'5969\' autoport=\'yes\' listen=\'10.1.2.199\' keym
ap=\'en-us\' passwdValidTo=\'1970-01-01T00:00:01\'>\n <listen type=\'network\' address=\'10.1.2.199\' network=\'vdsm-ovirtmgmt\'/>\n </graphics>\n <video>\n <model type=\'qxl\' ram=\'65536\' vram=\'32768\' vgamem=\'16384\' heads=\'1\' primary=\'yes\'/>\n <alias name=\'ua-703e6733-2d30-4b7d-b51a-173ba8e8348b\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x01\' function=\'0x0\'/>\n </video>\n <memballoon model=\'virtio\'>\n <stats period=\'5\'/>\n <alias name=\'ua-24df8012-5e0f-4b31-bc6a-0783da1963ee\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x06\' slot=\'0x00\' function=\'0x0\'/>\n </memballoon>\n <rng model=\'virtio\'>\n <backend model=\'random\'>/dev/urandom</backend>\n <alias name=\'ua-2f495af7-c0c2-49e4-b34f-bd50306f6871\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x07\' slot=\'0x00\' function=\'0x0\'/>\n </rng>\n </devices>\n <seclabel type=\'dynamic\' model=\'selinux\'
relabel=\'yes\'>\n <label>system_u:system_r:svirt_t:s0:c229,c431</label>\n <imagelabel>system_u:object_r:svirt_image_t:s0:c229,c431</imagelabel>\n </seclabel>\n <seclabel type=\'dynamic\' model=\'dac\' relabel=\'yes\'>\n <label>+107:+107</label>\n <imagelabel>+107:+107</imagelabel>\n </seclabel>\n <qemu:capabilities>\n <qemu:add capability=\'blockdev\'/>\n <qemu:add capability=\'incremental-backup\'/>\n </qemu:capabilities>\n</domain>\n', 'elapsedTimeOffset': 89141.10990166664, 'enableGuestEvents': True, 'migrationDest': 'libvirt'}) from=::ffff:10.1.2.199,57742, vmId=1fd47e75-d708-43e4-ac0f-67bd28dceefd (api:48)
2021-08-03 07:11:51,883-0700 INFO (jsonrpc/0) [virt.vm] (vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') using a computed convergence schedule for a legacy migration: {'init': [{'params': ['101'], 'name': 'setDowntime'}], 'stalling': [{'action': {'params': ['104'], 'name': 'setDowntime'}, 'limit': 1}, {'action': {'params': ['120'], 'name': 'setDowntime'}, 'limit': 2}, {'action': {'params': ['189'], 'name': 'setDowntime'}, 'limit': 3}, {'action': {'params': ['500'], 'name': 'setDowntime'}, 'limit': 4}, {'action': {'params': ['500'], 'name': 'setDowntime'}, 'limit': 42}, {'action': {'params': [], 'name': 'abort'}, 'limit': -1}]} (migration:161)
2021-08-03 07:11:51,884-0700 INFO (vm/1fd47e75) [virt.vm] (vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') VM wrapper has started (vm:2832)
2021-08-03 07:11:51,884-0700 INFO (jsonrpc/0) [api.virt] FINISH create return={'vmList': {'vmId': '1fd47e75-d708-43e4-ac0f-67bd28dceefd', 'status': 'Migration Destination', 'statusTime': '2158818086', 'xml': '<domain type=\'kvm\' id=\'35\' xmlns:qemu=\'http://libvirt.org/schemas/domain/qemu/1.0\'>\n <name>my_vm_hostname</name>\n <uuid>1fd47e75-d708-43e4-ac0f-67bd28dceefd</uuid>\n <metadata xmlns:ns0="http://ovirt.org/vm/tune/1.0" xmlns:ns1="http://ovirt.org/vm/1.0">\n <ns0:qos/>\n <ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0">\n <ovirt-vm:balloonTarget type="int">8388608</ovirt-vm:balloonTarget>\n <ovirt-vm:ballooningEnabled>true</ovirt-vm:ballooningEnabled>\n <ovirt-vm:clusterVersion>4.6</ovirt-vm:clusterVersion>\n <ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot>\n <ovirt-vm:guestAgentAPIVersion type="int">3</ovirt-vm:guestAgentAPIVersion>\n <ovirt-vm:jobs>{}</ovirt-vm:jobs>\n <ovirt-vm:launchPaused>false</ovirt-vm:lau
nchPaused>\n <ovirt-vm:memGuaranteedSize type="int">4096</ovirt-vm:memGuaranteedSize>\n <ovirt-vm:minGuaranteedMemoryMb type="int">4096</ovirt-vm:minGuaranteedMemoryMb>\n <ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior>\n <ovirt-vm:startTime type="float">1627910770.633493</ovirt-vm:startTime>\n <ovirt-vm:device alias="ua-827b6719-b206-46fe-b214-f6e15649abad" mac_address="00:1a:4a:16:01:35">\n <ovirt-vm:network>UserDev</ovirt-vm:network>\n <ovirt-vm:custom>\n <ovirt-vm:queues>4</ovirt-vm:queues>\n </ovirt-vm:custom>\n </ovirt-vm:device>\n <ovirt-vm:device devtype="disk" name="vda">\n <ovirt-vm:domainID>e8ec5645-fc1b-4d64-a145-44aa8ac5ef48</ovirt-vm:domainID>\n <ovirt-vm:guestName>/dev/vda</ovirt-vm:guestName>\n <ovirt-vm:imageID>eb15970b-7b94-4cce-ab44-50f57850aa7f</ovirt-vm:imageID>\n <ovirt-vm:managed type="bool">False</ovirt-vm:managed>\n <ovirt-vm:poolID>2948c860-9bdf-11e8-a6b3-0
0163e0419f0</ovirt-vm:poolID>\n <ovirt-vm:volumeID>6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:volumeID>\n <ovirt-vm:specParams>\n <ovirt-vm:pinToIoThread>1</ovirt-vm:pinToIoThread>\n </ovirt-vm:specParams>\n <ovirt-vm:volumeChain>\n <ovirt-vm:volumeChainNode>\n <ovirt-vm:domainID>e8ec5645-fc1b-4d64-a145-44aa8ac5ef48</ovirt-vm:domainID>\n <ovirt-vm:imageID>eb15970b-7b94-4cce-ab44-50f57850aa7f</ovirt-vm:imageID>\n <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>\n <ovirt-vm:leasePath>/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c.lease</ovirt-vm:leasePath>\n <ovirt-vm:path>/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6
f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:path>\n <ovirt-vm:volumeID>6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:volumeID>\n </ovirt-vm:volumeChainNode>\n </ovirt-vm:volumeChain>\n </ovirt-vm:device>\n <ovirt-vm:device devtype="disk" name="sdc">\n <ovirt-vm:managed type="bool">False</ovirt-vm:managed>\n </ovirt-vm:device>\n</ovirt-vm:vm>\n </metadata>\n <maxMemory slots=\'16\' unit=\'KiB\'>16777216</maxMemory>\n <memory unit=\'KiB\'>8388608</memory>\n <currentMemory unit=\'KiB\'>8388608</currentMemory>\n <vcpu placement=\'static\' current=\'4\'>16</vcpu>\n <iothreads>1</iothreads>\n <resource>\n <partition>/machine</partition>\n </resource>\n <sysinfo type=\'smbios\'>\n <system>\n <entry name=\'manufacturer\'>oVirt</entry>\n <entry name=\'product\'>RHEL</entry>\n <entry name=\'version\'>8.4-1.2105.el8</entry>\n <entry name=\'serial\'>ce917200-c887-11ea-8000-3cecef30037a</entry>\n <entry n
ame=\'uuid\'>1fd47e75-d708-43e4-ac0f-67bd28dceefd</entry>\n <entry name=\'family\'>oVirt</entry>\n </system>\n </sysinfo>\n <os>\n <type arch=\'x86_64\' machine=\'pc-q35-rhel8.4.0\'>hvm</type>\n <smbios mode=\'sysinfo\'/>\n </os>\n <features>\n <acpi/>\n </features>\n <cpu mode=\'custom\' match=\'exact\' check=\'full\'>\n <model fallback=\'forbid\'>Skylake-Server-noTSX-IBRS</model>\n <topology sockets=\'16\' dies=\'1\' cores=\'1\' threads=\'1\'/>\n <feature policy=\'require\' name=\'ssbd\'/>\n <feature policy=\'require\' name=\'md-clear\'/>\n <feature policy=\'disable\' name=\'mpx\'/>\n <feature policy=\'require\' name=\'hypervisor\'/>\n <feature policy=\'require\' name=\'pku\'/>\n <numa>\n <cell id=\'0\' cpus=\'0-15\' memory=\'8388608\' unit=\'KiB\'/>\n </numa>\n </cpu>\n <clock offset=\'variable\' adjustment=\'0\' basis=\'utc\'>\n <timer name=\'rtc\' tickpolicy=\'catchup\'/>\n <timer name=\'pit\' tickpolicy=\'delay\'/>
\n <timer name=\'hpet\' present=\'no\'/>\n </clock>\n <on_poweroff>destroy</on_poweroff>\n <on_reboot>restart</on_reboot>\n <on_crash>destroy</on_crash>\n <pm>\n <suspend-to-mem enabled=\'no\'/>\n <suspend-to-disk enabled=\'no\'/>\n </pm>\n <devices>\n <emulator>/usr/libexec/qemu-kvm</emulator>\n <disk type=\'file\' device=\'cdrom\'>\n <driver name=\'qemu\' error_policy=\'report\'/>\n <source startupPolicy=\'optional\'/>\n <target dev=\'sdc\' bus=\'sata\'/>\n <readonly/>\n <alias name=\'ua-bc49b2e3-3b67-4ceb-817d-5cbc2e433ad4\'/>\n <address type=\'drive\' controller=\'0\' bus=\'0\' target=\'0\' unit=\'2\'/>\n </disk>\n <disk type=\'file\' device=\'disk\' snapshot=\'no\'>\n <driver name=\'qemu\' type=\'qcow2\' cache=\'none\' error_policy=\'stop\' io=\'threads\' iothread=\'1\'/>\n <source file=\'/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4c
ce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c\' index=\'5\'>\n <seclabel model=\'dac\' relabel=\'no\'/>\n </source>\n <backingStore/>\n <target dev=\'vda\' bus=\'virtio\'/>\n <serial>eb15970b-7b94-4cce-ab44-50f57850aa7f</serial>\n <boot order=\'1\'/>\n <alias name=\'ua-eb15970b-7b94-4cce-ab44-50f57850aa7f\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x05\' slot=\'0x00\' function=\'0x0\'/>\n </disk>\n <controller type=\'virtio-serial\' index=\'0\' ports=\'16\'>\n <alias name=\'ua-02742c22-2ab3-475a-8db9-26c05cb93195\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x02\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'pci\' index=\'0\' model=\'pcie-root\'>\n <alias name=\'pcie.0\'/>\n </controller>\n <controller type=\'pci\' index=\'1\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'1\' port=\'0x10\'/>\n <alias name=\'
pci.1\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x0\' multifunction=\'on\'/>\n </controller>\n <controller type=\'pci\' index=\'2\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'2\' port=\'0x11\'/>\n <alias name=\'pci.2\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x1\'/>\n </controller>\n <controller type=\'pci\' index=\'3\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'3\' port=\'0x12\'/>\n <alias name=\'pci.3\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x2\'/>\n </controller>\n <controller type=\'pci\' index=\'4\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'4\' port=\'0x13\'/>\n <alias name=\'pci.4\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' func
tion=\'0x3\'/>\n </controller>\n <controller type=\'pci\' index=\'5\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'5\' port=\'0x14\'/>\n <alias name=\'pci.5\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x4\'/>\n </controller>\n <controller type=\'pci\' index=\'6\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'6\' port=\'0x15\'/>\n <alias name=\'pci.6\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x5\'/>\n </controller>\n <controller type=\'pci\' index=\'7\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'7\' port=\'0x16\'/>\n <alias name=\'pci.7\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x6\'/>\n </controller>\n <controller type=\'pci\' index=\'8\' model=\'pcie-root-port\'>\n
<model name=\'pcie-root-port\'/>\n <target chassis=\'8\' port=\'0x17\'/>\n <alias name=\'pci.8\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x7\'/>\n </controller>\n <controller type=\'pci\' index=\'9\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'9\' port=\'0x18\'/>\n <alias name=\'pci.9\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x0\' multifunction=\'on\'/>\n </controller>\n <controller type=\'pci\' index=\'10\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'10\' port=\'0x19\'/>\n <alias name=\'pci.10\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x1\'/>\n </controller>\n <controller type=\'pci\' index=\'11\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'11\' port=\'0x1a\'/>\n
<alias name=\'pci.11\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x2\'/>\n </controller>\n <controller type=\'pci\' index=\'12\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'12\' port=\'0x1b\'/>\n <alias name=\'pci.12\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x3\'/>\n </controller>\n <controller type=\'pci\' index=\'13\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'13\' port=\'0x1c\'/>\n <alias name=\'pci.13\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x4\'/>\n </controller>\n <controller type=\'pci\' index=\'14\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'14\' port=\'0x1d\'/>\n <alias name=\'pci.14\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\'
function=\'0x5\'/>\n </controller>\n <controller type=\'pci\' index=\'15\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'15\' port=\'0x1e\'/>\n <alias name=\'pci.15\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x6\'/>\n </controller>\n <controller type=\'pci\' index=\'16\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'16\' port=\'0x1f\'/>\n <alias name=\'pci.16\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x7\'/>\n </controller>\n <controller type=\'scsi\' index=\'0\' model=\'virtio-scsi\'>\n <driver iothread=\'1\'/>\n <alias name=\'ua-8f6fa789-f148-4383-b377-4111ce6f4cfe\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x04\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'usb\' index=\'0\' model=\'qemu-xhci\' ports=\'8\'>\n <al
ias name=\'ua-b67dee00-f4ea-40ad-9a82-df9e7506c247\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x03\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'sata\' index=\'0\'>\n <alias name=\'ide\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x1f\' function=\'0x2\'/>\n </controller>\n <interface type=\'bridge\'>\n <mac address=\'00:1a:4a:16:01:35\'/>\n <source bridge=\'UserDev\'/>\n <target dev=\'vnet36\'/>\n <model type=\'virtio\'/>\n <driver name=\'vhost\' queues=\'4\'/>\n <filterref filter=\'vdsm-no-mac-spoofing\'/>\n <link state=\'up\'/>\n <mtu size=\'1500\'/>\n <alias name=\'ua-827b6719-b206-46fe-b214-f6e15649abad\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x01\' slot=\'0x00\' function=\'0x0\'/>\n </interface>\n <channel type=\'unix\'>\n <source mode=\'bind\' path=\'/var/lib/libvirt/qemu/channels/1fd47e75-d708-43e4-ac0f-67bd28dceefd.ovirt-
guest-agent.0\'/>\n <target type=\'virtio\' name=\'ovirt-guest-agent.0\' state=\'connected\'/>\n <alias name=\'channel0\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'1\'/>\n </channel>\n <channel type=\'unix\'>\n <source mode=\'bind\' path=\'/var/lib/libvirt/qemu/channels/1fd47e75-d708-43e4-ac0f-67bd28dceefd.org.qemu.guest_agent.0\'/>\n <target type=\'virtio\' name=\'org.qemu.guest_agent.0\' state=\'connected\'/>\n <alias name=\'channel1\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'2\'/>\n </channel>\n <channel type=\'spicevmc\'>\n <target type=\'virtio\' name=\'com.redhat.spice.0\' state=\'disconnected\'/>\n <alias name=\'channel2\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'3\'/>\n </channel>\n <input type=\'tablet\' bus=\'usb\'>\n <alias name=\'input0\'/>\n <address type=\'usb\' bus=\'0\' port=\'1\'/>\n </input>\n <inp
ut type=\'mouse\' bus=\'ps2\'>\n <alias name=\'input1\'/>\n </input>\n <input type=\'keyboard\' bus=\'ps2\'>\n <alias name=\'input2\'/>\n </input>\n <graphics type=\'spice\' port=\'5964\' tlsPort=\'5968\' autoport=\'yes\' listen=\'10.1.2.199\' passwdValidTo=\'1970-01-01T00:00:01\'>\n <listen type=\'network\' address=\'10.1.2.199\' network=\'vdsm-ovirtmgmt\'/>\n <channel name=\'main\' mode=\'secure\'/>\n <channel name=\'display\' mode=\'secure\'/>\n <channel name=\'inputs\' mode=\'secure\'/>\n <channel name=\'cursor\' mode=\'secure\'/>\n <channel name=\'playback\' mode=\'secure\'/>\n <channel name=\'record\' mode=\'secure\'/>\n <channel name=\'smartcard\' mode=\'secure\'/>\n <channel name=\'usbredir\' mode=\'secure\'/>\n </graphics>\n <graphics type=\'vnc\' port=\'5969\' autoport=\'yes\' listen=\'10.1.2.199\' keymap=\'en-us\' passwdValidTo=\'1970-01-01T00:00:01\'>\n <listen type=\'network\' address=\'10.1.2
.199\' network=\'vdsm-ovirtmgmt\'/>\n </graphics>\n <video>\n <model type=\'qxl\' ram=\'65536\' vram=\'32768\' vgamem=\'16384\' heads=\'1\' primary=\'yes\'/>\n <alias name=\'ua-703e6733-2d30-4b7d-b51a-173ba8e8348b\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x01\' function=\'0x0\'/>\n </video>\n <memballoon model=\'virtio\'>\n <stats period=\'5\'/>\n <alias name=\'ua-24df8012-5e0f-4b31-bc6a-0783da1963ee\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x06\' slot=\'0x00\' function=\'0x0\'/>\n </memballoon>\n <rng model=\'virtio\'>\n <backend model=\'random\'>/dev/urandom</backend>\n <alias name=\'ua-2f495af7-c0c2-49e4-b34f-bd50306f6871\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x07\' slot=\'0x00\' function=\'0x0\'/>\n </rng>\n </devices>\n <seclabel type=\'dynamic\' model=\'selinux\' relabel=\'yes\'>\n <label>system_u:system_r:svirt_t:s0:c229,c431</label>\n <imagelabel>system
_u:object_r:svirt_image_t:s0:c229,c431</imagelabel>\n </seclabel>\n <seclabel type=\'dynamic\' model=\'dac\' relabel=\'yes\'>\n <label>+107:+107</label>\n <imagelabel>+107:+107</imagelabel>\n </seclabel>\n <qemu:capabilities>\n <qemu:add capability=\'blockdev\'/>\n <qemu:add capability=\'incremental-backup\'/>\n </qemu:capabilities>\n</domain>\n'}, 'status': {'code': 0, 'message': 'Done'}} from=::ffff:10.1.2.199,57742, vmId=1fd47e75-d708-43e4-ac0f-67bd28dceefd (api:54)
2021-08-03 07:11:51,885-0700 INFO (vm/1fd47e75) [vdsm.api] START getVolumeSize(sdUUID='e8ec5645-fc1b-4d64-a145-44aa8ac5ef48', spUUID='2948c860-9bdf-11e8-a6b3-00163e0419f0', imgUUID='eb15970b-7b94-4cce-ab44-50f57850aa7f', volUUID='6f82b02d-8c22-4d50-a30e-53511776354c', options=None) from=internal, task_id=1c8d4900-c5c9-44d8-aeac-d11749b2fcae (api:48)
2021-08-03 07:11:51,890-0700 INFO (vm/1fd47e75) [vdsm.api] FINISH getVolumeSize return={'apparentsize': '52031193088', 'truesize': '52031193088'} from=internal, task_id=1c8d4900-c5c9-44d8-aeac-d11749b2fcae (api:54)
2021-08-03 07:11:51,890-0700 INFO (vm/1fd47e75) [vds] prepared volume path: (clientIF:518)
2021-08-03 07:11:51,890-0700 INFO (vm/1fd47e75) [vdsm.api] START prepareImage(sdUUID='e8ec5645-fc1b-4d64-a145-44aa8ac5ef48', spUUID='2948c860-9bdf-11e8-a6b3-00163e0419f0', imgUUID='eb15970b-7b94-4cce-ab44-50f57850aa7f', leafUUID='6f82b02d-8c22-4d50-a30e-53511776354c', allowIllegal=False) from=internal, task_id=52b45a2f-1664-4b27-931c-4e4b81d39389 (api:48)
2021-08-03 07:11:51,923-0700 INFO (vm/1fd47e75) [storage.StorageDomain] Fixing permissions on /rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c (fileSD:624)
2021-08-03 07:11:51,924-0700 INFO (vm/1fd47e75) [storage.StorageDomain] Creating domain run directory '/run/vdsm/storage/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48' (fileSD:578)
2021-08-03 07:11:51,924-0700 INFO (vm/1fd47e75) [storage.fileUtils] Creating directory: /run/vdsm/storage/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48 mode: None (fileUtils:201)
2021-08-03 07:11:51,924-0700 INFO (vm/1fd47e75) [storage.StorageDomain] Creating symlink from /rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f to /run/vdsm/storage/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/eb15970b-7b94-4cce-ab44-50f57850aa7f (fileSD:581)
2021-08-03 07:11:51,939-0700 INFO (vm/1fd47e75) [vdsm.api] FINISH prepareImage return={'path': '/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c', 'info': {'type': 'file', 'path': '/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c'}, 'imgVolumesInfo': [{'domainID': 'e8ec5645-fc1b-4d64-a145-44aa8ac5ef48', 'imageID': 'eb15970b-7b94-4cce-ab44-50f57850aa7f', 'volumeID': '6f82b02d-8c22-4d50-a30e-53511776354c', 'path': '/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c', 'leasePath': '/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b
-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c.lease', 'leaseOffset': 0}]} from=internal, task_id=52b45a2f-1664-4b27-931c-4e4b81d39389 (api:54)
2021-08-03 07:11:51,939-0700 INFO (vm/1fd47e75) [vds] prepared volume path: /rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c (clientIF:518)
2021-08-03 07:11:51,940-0700 INFO (vm/1fd47e75) [virt.vm] (vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') Enabling drive monitoring (drivemonitor:59)
2021-08-03 07:11:51,942-0700 WARN (vm/1fd47e75) [root] Attempting to add an existing net user: ovirtmgmt/1fd47e75-d708-43e4-ac0f-67bd28dceefd (libvirtnetwork:192)
2021-08-03 07:11:52,018-0700 INFO (jsonrpc/0) [root] /usr/libexec/vdsm/hooks/before_vm_migrate_destination/50_vhostmd: rc=0 err=b'' (hooks:122)
2021-08-03 07:11:52,018-0700 INFO (jsonrpc/0) [api.virt] FINISH migrationCreate return={'status': {'code': 0, 'message': 'Done'}, 'migrationPort': 0, 'params': {'vmId': '1fd47e75-d708-43e4-ac0f-67bd28dceefd', 'status': 'Migration Destination', 'statusTime': '2158818086', 'xml': '<domain type=\'kvm\' id=\'35\' xmlns:qemu=\'http://libvirt.org/schemas/domain/qemu/1.0\'>\n <name>my_vm_hostname</name>\n <uuid>1fd47e75-d708-43e4-ac0f-67bd28dceefd</uuid>\n <metadata xmlns:ns0="http://ovirt.org/vm/tune/1.0" xmlns:ns1="http://ovirt.org/vm/1.0">\n <ns0:qos/>\n <ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0">\n <ovirt-vm:balloonTarget type="int">8388608</ovirt-vm:balloonTarget>\n <ovirt-vm:ballooningEnabled>true</ovirt-vm:ballooningEnabled>\n <ovirt-vm:clusterVersion>4.6</ovirt-vm:clusterVersion>\n <ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot>\n <ovirt-vm:guestAgentAPIVersion type="int">3</ovirt-vm:guestAgentAPIVersion>\n <ovirt-vm:
jobs>{}</ovirt-vm:jobs>\n <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>\n <ovirt-vm:memGuaranteedSize type="int">4096</ovirt-vm:memGuaranteedSize>\n <ovirt-vm:minGuaranteedMemoryMb type="int">4096</ovirt-vm:minGuaranteedMemoryMb>\n <ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior>\n <ovirt-vm:startTime type="float">1627910770.633493</ovirt-vm:startTime>\n <ovirt-vm:device alias="ua-827b6719-b206-46fe-b214-f6e15649abad" mac_address="00:1a:4a:16:01:35">\n <ovirt-vm:network>UserDev</ovirt-vm:network>\n <ovirt-vm:custom>\n <ovirt-vm:queues>4</ovirt-vm:queues>\n </ovirt-vm:custom>\n </ovirt-vm:device>\n <ovirt-vm:device devtype="disk" name="vda">\n <ovirt-vm:domainID>e8ec5645-fc1b-4d64-a145-44aa8ac5ef48</ovirt-vm:domainID>\n <ovirt-vm:guestName>/dev/vda</ovirt-vm:guestName>\n <ovirt-vm:imageID>eb15970b-7b94-4cce-ab44-50f57850aa7f</ovirt-vm:imageID>\n <ovirt-vm:managed type="bool">False
</ovirt-vm:managed>\n <ovirt-vm:poolID>2948c860-9bdf-11e8-a6b3-00163e0419f0</ovirt-vm:poolID>\n <ovirt-vm:volumeID>6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:volumeID>\n <ovirt-vm:specParams>\n <ovirt-vm:pinToIoThread>1</ovirt-vm:pinToIoThread>\n </ovirt-vm:specParams>\n <ovirt-vm:volumeChain>\n <ovirt-vm:volumeChainNode>\n <ovirt-vm:domainID>e8ec5645-fc1b-4d64-a145-44aa8ac5ef48</ovirt-vm:domainID>\n <ovirt-vm:imageID>eb15970b-7b94-4cce-ab44-50f57850aa7f</ovirt-vm:imageID>\n <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>\n <ovirt-vm:leasePath>/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c.lease</ovirt-vm:leasePath>\n <ovirt-vm:path>/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc
1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:path>\n <ovirt-vm:volumeID>6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:volumeID>\n </ovirt-vm:volumeChainNode>\n </ovirt-vm:volumeChain>\n </ovirt-vm:device>\n <ovirt-vm:device devtype="disk" name="sdc">\n <ovirt-vm:managed type="bool">False</ovirt-vm:managed>\n </ovirt-vm:device>\n</ovirt-vm:vm>\n </metadata>\n <maxMemory slots=\'16\' unit=\'KiB\'>16777216</maxMemory>\n <memory unit=\'KiB\'>8388608</memory>\n <currentMemory unit=\'KiB\'>8388608</currentMemory>\n <vcpu placement=\'static\' current=\'4\'>16</vcpu>\n <iothreads>1</iothreads>\n <resource>\n <partition>/machine</partition>\n </resource>\n <sysinfo type=\'smbios\'>\n <system>\n <entry name=\'manufacturer\'>oVirt</entry>\n <entry name=\'product\'>RHEL</entry>\n <entry name=\'version\'>8.4-1.2105.el8</entry>\n <entry name=
\'serial\'>ce917200-c887-11ea-8000-3cecef30037a</entry>\n <entry name=\'uuid\'>1fd47e75-d708-43e4-ac0f-67bd28dceefd</entry>\n <entry name=\'family\'>oVirt</entry>\n </system>\n </sysinfo>\n <os>\n <type arch=\'x86_64\' machine=\'pc-q35-rhel8.4.0\'>hvm</type>\n <smbios mode=\'sysinfo\'/>\n </os>\n <features>\n <acpi/>\n </features>\n <cpu mode=\'custom\' match=\'exact\' check=\'full\'>\n <model fallback=\'forbid\'>Skylake-Server-noTSX-IBRS</model>\n <topology sockets=\'16\' dies=\'1\' cores=\'1\' threads=\'1\'/>\n <feature policy=\'require\' name=\'ssbd\'/>\n <feature policy=\'require\' name=\'md-clear\'/>\n <feature policy=\'disable\' name=\'mpx\'/>\n <feature policy=\'require\' name=\'hypervisor\'/>\n <feature policy=\'require\' name=\'pku\'/>\n <numa>\n <cell id=\'0\' cpus=\'0-15\' memory=\'8388608\' unit=\'KiB\'/>\n </numa>\n </cpu>\n <clock offset=\'variable\' adjustment=\'0\' basis=\'utc\'>\n <timer name=\'rtc\' t
ickpolicy=\'catchup\'/>\n <timer name=\'pit\' tickpolicy=\'delay\'/>\n <timer name=\'hpet\' present=\'no\'/>\n </clock>\n <on_poweroff>destroy</on_poweroff>\n <on_reboot>restart</on_reboot>\n <on_crash>destroy</on_crash>\n <pm>\n <suspend-to-mem enabled=\'no\'/>\n <suspend-to-disk enabled=\'no\'/>\n </pm>\n <devices>\n <emulator>/usr/libexec/qemu-kvm</emulator>\n <disk type=\'file\' device=\'cdrom\'>\n <driver name=\'qemu\' error_policy=\'report\'/>\n <source startupPolicy=\'optional\'/>\n <target dev=\'sdc\' bus=\'sata\'/>\n <readonly/>\n <alias name=\'ua-bc49b2e3-3b67-4ceb-817d-5cbc2e433ad4\'/>\n <address type=\'drive\' controller=\'0\' bus=\'0\' target=\'0\' unit=\'2\'/>\n </disk>\n <disk type=\'file\' device=\'disk\' snapshot=\'no\'>\n <driver name=\'qemu\' type=\'qcow2\' cache=\'none\' error_policy=\'stop\' io=\'threads\' iothread=\'1\'/>\n <source file=\'/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.7
7:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c\' index=\'5\'>\n <seclabel model=\'dac\' relabel=\'no\'/>\n </source>\n <backingStore/>\n <target dev=\'vda\' bus=\'virtio\'/>\n <serial>eb15970b-7b94-4cce-ab44-50f57850aa7f</serial>\n <boot order=\'1\'/>\n <alias name=\'ua-eb15970b-7b94-4cce-ab44-50f57850aa7f\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x05\' slot=\'0x00\' function=\'0x0\'/>\n </disk>\n <controller type=\'virtio-serial\' index=\'0\' ports=\'16\'>\n <alias name=\'ua-02742c22-2ab3-475a-8db9-26c05cb93195\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x02\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'pci\' index=\'0\' model=\'pcie-root\'>\n <alias name=\'pcie.0\'/>\n </controller>\n <controller type=\'pci\' index=\'1\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port
\'/>\n <target chassis=\'1\' port=\'0x10\'/>\n <alias name=\'pci.1\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x0\' multifunction=\'on\'/>\n </controller>\n <controller type=\'pci\' index=\'2\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'2\' port=\'0x11\'/>\n <alias name=\'pci.2\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x1\'/>\n </controller>\n <controller type=\'pci\' index=\'3\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'3\' port=\'0x12\'/>\n <alias name=\'pci.3\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x2\'/>\n </controller>\n <controller type=\'pci\' index=\'4\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'4\' port=\'0x13\'/>\n <alias name=\'pci.4\'/>\n
<address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x3\'/>\n </controller>\n <controller type=\'pci\' index=\'5\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'5\' port=\'0x14\'/>\n <alias name=\'pci.5\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x4\'/>\n </controller>\n <controller type=\'pci\' index=\'6\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'6\' port=\'0x15\'/>\n <alias name=\'pci.6\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x5\'/>\n </controller>\n <controller type=\'pci\' index=\'7\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'7\' port=\'0x16\'/>\n <alias name=\'pci.7\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x6\'/>\n </controller>\n
<controller type=\'pci\' index=\'8\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'8\' port=\'0x17\'/>\n <alias name=\'pci.8\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x7\'/>\n </controller>\n <controller type=\'pci\' index=\'9\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'9\' port=\'0x18\'/>\n <alias name=\'pci.9\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x0\' multifunction=\'on\'/>\n </controller>\n <controller type=\'pci\' index=\'10\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'10\' port=\'0x19\'/>\n <alias name=\'pci.10\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x1\'/>\n </controller>\n <controller type=\'pci\' index=\'11\' model=\'pcie-root-port\'>\n <model name=\
'pcie-root-port\'/>\n <target chassis=\'11\' port=\'0x1a\'/>\n <alias name=\'pci.11\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x2\'/>\n </controller>\n <controller type=\'pci\' index=\'12\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'12\' port=\'0x1b\'/>\n <alias name=\'pci.12\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x3\'/>\n </controller>\n <controller type=\'pci\' index=\'13\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'13\' port=\'0x1c\'/>\n <alias name=\'pci.13\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x4\'/>\n </controller>\n <controller type=\'pci\' index=\'14\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'14\' port=\'0x1d\'/>\n <alias name=\'pci.14\'/>\n
<address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x5\'/>\n </controller>\n <controller type=\'pci\' index=\'15\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'15\' port=\'0x1e\'/>\n <alias name=\'pci.15\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x6\'/>\n </controller>\n <controller type=\'pci\' index=\'16\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'16\' port=\'0x1f\'/>\n <alias name=\'pci.16\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x7\'/>\n </controller>\n <controller type=\'scsi\' index=\'0\' model=\'virtio-scsi\'>\n <driver iothread=\'1\'/>\n <alias name=\'ua-8f6fa789-f148-4383-b377-4111ce6f4cfe\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x04\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controll
er type=\'usb\' index=\'0\' model=\'qemu-xhci\' ports=\'8\'>\n <alias name=\'ua-b67dee00-f4ea-40ad-9a82-df9e7506c247\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x03\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'sata\' index=\'0\'>\n <alias name=\'ide\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x1f\' function=\'0x2\'/>\n </controller>\n <interface type=\'bridge\'>\n <mac address=\'00:1a:4a:16:01:35\'/>\n <source bridge=\'UserDev\'/>\n <target dev=\'vnet36\'/>\n <model type=\'virtio\'/>\n <driver name=\'vhost\' queues=\'4\'/>\n <filterref filter=\'vdsm-no-mac-spoofing\'/>\n <link state=\'up\'/>\n <mtu size=\'1500\'/>\n <alias name=\'ua-827b6719-b206-46fe-b214-f6e15649abad\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x01\' slot=\'0x00\' function=\'0x0\'/>\n </interface>\n <channel type=\'unix\'>\n <source mode=\'bind\' path=\'/va
r/lib/libvirt/qemu/channels/1fd47e75-d708-43e4-ac0f-67bd28dceefd.ovirt-guest-agent.0\'/>\n <target type=\'virtio\' name=\'ovirt-guest-agent.0\' state=\'connected\'/>\n <alias name=\'channel0\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'1\'/>\n </channel>\n <channel type=\'unix\'>\n <source mode=\'bind\' path=\'/var/lib/libvirt/qemu/channels/1fd47e75-d708-43e4-ac0f-67bd28dceefd.org.qemu.guest_agent.0\'/>\n <target type=\'virtio\' name=\'org.qemu.guest_agent.0\' state=\'connected\'/>\n <alias name=\'channel1\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'2\'/>\n </channel>\n <channel type=\'spicevmc\'>\n <target type=\'virtio\' name=\'com.redhat.spice.0\' state=\'disconnected\'/>\n <alias name=\'channel2\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'3\'/>\n </channel>\n <input type=\'tablet\' bus=\'usb\'>\n <alias name=\'input0\'/>\n
<address type=\'usb\' bus=\'0\' port=\'1\'/>\n </input>\n <input type=\'mouse\' bus=\'ps2\'>\n <alias name=\'input1\'/>\n </input>\n <input type=\'keyboard\' bus=\'ps2\'>\n <alias name=\'input2\'/>\n </input>\n <graphics type=\'spice\' port=\'5964\' tlsPort=\'5968\' autoport=\'yes\' listen=\'10.1.2.199\' passwdValidTo=\'1970-01-01T00:00:01\'>\n <listen type=\'network\' address=\'10.1.2.199\' network=\'vdsm-ovirtmgmt\'/>\n <channel name=\'main\' mode=\'secure\'/>\n <channel name=\'display\' mode=\'secure\'/>\n <channel name=\'inputs\' mode=\'secure\'/>\n <channel name=\'cursor\' mode=\'secure\'/>\n <channel name=\'playback\' mode=\'secure\'/>\n <channel name=\'record\' mode=\'secure\'/>\n <channel name=\'smartcard\' mode=\'secure\'/>\n <channel name=\'usbredir\' mode=\'secure\'/>\n </graphics>\n <graphics type=\'vnc\' port=\'5969\' autoport=\'yes\' listen=\'10.1.2.199\' keymap=\'en-us\' passwdValidTo=\'
1970-01-01T00:00:01\'>\n <listen type=\'network\' address=\'10.1.2.199\' network=\'vdsm-ovirtmgmt\'/>\n </graphics>\n <video>\n <model type=\'qxl\' ram=\'65536\' vram=\'32768\' vgamem=\'16384\' heads=\'1\' primary=\'yes\'/>\n <alias name=\'ua-703e6733-2d30-4b7d-b51a-173ba8e8348b\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x01\' function=\'0x0\'/>\n </video>\n <memballoon model=\'virtio\'>\n <stats period=\'5\'/>\n <alias name=\'ua-24df8012-5e0f-4b31-bc6a-0783da1963ee\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x06\' slot=\'0x00\' function=\'0x0\'/>\n </memballoon>\n <rng model=\'virtio\'>\n <backend model=\'random\'>/dev/urandom</backend>\n <alias name=\'ua-2f495af7-c0c2-49e4-b34f-bd50306f6871\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x07\' slot=\'0x00\' function=\'0x0\'/>\n </rng>\n </devices>\n <seclabel type=\'dynamic\' model=\'selinux\' relabel=\'yes\'>\n <label
>system_u:system_r:svirt_t:s0:c229,c431</label>\n <imagelabel>system_u:object_r:svirt_image_t:s0:c229,c431</imagelabel>\n </seclabel>\n <seclabel type=\'dynamic\' model=\'dac\' relabel=\'yes\'>\n <label>+107:+107</label>\n <imagelabel>+107:+107</imagelabel>\n </seclabel>\n <qemu:capabilities>\n <qemu:add capability=\'blockdev\'/>\n <qemu:add capability=\'incremental-backup\'/>\n </qemu:capabilities>\n</domain>\n'}} from=::ffff:10.1.2.199,57742, vmId=1fd47e75-d708-43e4-ac0f-67bd28dceefd (api:54)
2021-08-03 07:11:53,656-0700 INFO (libvirt/events) [vds] Channel state for vm_id=1fd47e75-d708-43e4-ac0f-67bd28dceefd changed from=UNKNOWN(-1) to=disconnected(2) (qemuguestagent:289)
2021-08-03 07:11:55,724-0700 INFO (libvirt/events) [virt.vm] (vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') underlying process disconnected (vm:1134)
2021-08-03 07:11:55,724-0700 INFO (libvirt/events) [virt.vm] (vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') Release VM resources (vm:5313)
2021-08-03 07:11:55,724-0700 INFO (libvirt/events) [virt.vm] (vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') Stopping connection (guestagent:438)
2021-08-03 07:11:55,725-0700 INFO (libvirt/events) [vdsm.api] START teardownImage(sdUUID='e8ec5645-fc1b-4d64-a145-44aa8ac5ef48', spUUID='2948c860-9bdf-11e8-a6b3-00163e0419f0', imgUUID='eb15970b-7b94-4cce-ab44-50f57850aa7f', volUUID=None) from=internal, task_id=546c50e4-8889-47f3-b9ea-4c4bd8a71148 (api:48)
2021-08-03 07:11:55,725-0700 INFO (libvirt/events) [storage.StorageDomain] Removing image rundir link '/run/vdsm/storage/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/eb15970b-7b94-4cce-ab44-50f57850aa7f' (fileSD:601)
2021-08-03 07:11:55,725-0700 INFO (libvirt/events) [vdsm.api] FINISH teardownImage return=None from=internal, task_id=546c50e4-8889-47f3-b9ea-4c4bd8a71148 (api:54)
2021-08-03 07:11:55,725-0700 INFO (libvirt/events) [virt.vm] (vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') Stopping connection (guestagent:438)
2021-08-03 07:11:55,726-0700 WARN (libvirt/events) [root] Attempting to remove a non existing net user: ovirtmgmt/1fd47e75-d708-43e4-ac0f-67bd28dceefd (libvirtnetwork:207)
2021-08-03 07:11:55,726-0700 WARN (libvirt/events) [virt.vm] (vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') timestamp already removed from stats cache (vm:2539)
2021-08-03 07:11:55,726-0700 INFO (libvirt/events) [vdsm.api] START inappropriateDevices(thiefId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') from=internal, task_id=5c856e64-6b29-41aa-bc29-704ea86f9c64 (api:48)
2021-08-03 07:11:55,727-0700 INFO (libvirt/events) [vdsm.api] FINISH inappropriateDevices return=None from=internal, task_id=5c856e64-6b29-41aa-bc29-704ea86f9c64 (api:54)
2021-08-03 07:11:55,731-0700 WARN (vm/1fd47e75) [virt.vm] (vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') Couldn't destroy incoming VM: Domain not found: no domain with matching uuid '1fd47e75-d708-43e4-ac0f-67bd28dceefd' (vm:4046)
2021-08-03 07:11:55,732-0700 INFO (vm/1fd47e75) [virt.vm] (vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') Changed state to Down: VM destroyed during the startup (code=10) (vm:1895)
2021-08-03 07:11:55,733-0700 INFO (vm/1fd47e75) [virt.vm] (vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') Stopping connection (guestagent:438)
2021-08-03 07:11:55,737-0700 INFO (jsonrpc/1) [api.virt] START destroy(gracefulAttempts=1) from=::ffff:10.1.2.37,39674, flow_id=28d98b26, vmId=1fd47e75-d708-43e4-ac0f-67bd28dceefd (api:48)
2021-08-03 07:11:55,811-0700 INFO (jsonrpc/2) [api.virt] START destroy(gracefulAttempts=1) from=::ffff:10.1.2.199,57742, vmId=1fd47e75-d708-43e4-ac0f-67bd28dceefd (api:48)
2021-08-03 07:11:55,833-0700 INFO (libvirt/events) [root] /usr/libexec/vdsm/hooks/after_vm_destroy/50_vhostmd: rc=0 err=b'' (hooks:122)
2021-08-03 07:11:55,909-0700 INFO (libvirt/events) [root] /usr/libexec/vdsm/hooks/after_vm_destroy/delete_vhostuserclient_hook: rc=0 err=b'' (hooks:122)
2021-08-03 07:11:55,909-0700 WARN (libvirt/events) [virt.vm] (vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') trying to set state to Down when already Down (vm:701)
2021-08-03 07:11:55,910-0700 INFO (libvirt/events) [virt.vm] (vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') Stopping connection (guestagent:438)
2021-08-03 07:11:55,910-0700 INFO (jsonrpc/1) [virt.vm] (vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') Can't undefine disconnected VM '1fd47e75-d708-43e4-ac0f-67bd28dceefd' (vm:2533)
2021-08-03 07:11:55,910-0700 INFO (jsonrpc/1) [api.virt] FINISH destroy return={'status': {'code': 0, 'message': 'Machine destroyed'}} from=::ffff:10.1.2.37,39674, flow_id=28d98b26, vmId=1fd47e75-d708-43e4-ac0f-67bd28dceefd (api:54)
2021-08-03 07:11:55,910-0700 INFO (jsonrpc/2) [api] FINISH destroy error=Virtual machine does not exist: {'vmId': '1fd47e75-d708-43e4-ac0f-67bd28dceefd'} (api:129)
2021-08-03 07:11:55,910-0700 INFO (jsonrpc/2) [api.virt] FINISH destroy return={'status': {'code': 1, 'message': "Virtual machine does not exist: {'vmId': '1fd47e75-d708-43e4-ac0f-67bd28dceefd'}"}} from=::ffff:10.1.2.199,57742, vmId=1fd47e75-d708-43e4-ac0f-67bd28dceefd (api:54)
2021-08-03 07:11:55,911-0700 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call VM.destroy failed (error 1) in 0.10 seconds (__init__:312)
2 years, 9 months
Getting to know oVirt
by Alexandre Aguirre
I'm new to oVirt, coming from Proxmox and VMware, and I have some questions I couldn't answer on my own:
- Via terminal, where are the VMs created in oVirt?
- What are the most used commands on a daily basis in the terminal?
- Can I use the host's local disk as storage for the VMs?
- Is glusterfs configuration native in oVirt, or do I need to install it separately?
Thanks in advance for your attention!!
2 years, 9 months
Adding a new cluster is in doubt
by ziyi Liu
ovirt version 4.5.1
I didn't find a tutorial on the official oVirt website, but I did find the tutorial for Red Hat Hyperconverged Infrastructure for Virtualization. Should I follow that tutorial, or should I instead use Cockpit to first form a storage cluster from the three machines of the second cluster via the glusterfs ansible deployment, and then add it as a new cluster in the engine web interface?
2 years, 9 months
oVirt node 4.4 repository missing package
by kenneth.hau@hactl.com
Hi there, I saw that a new package, ovirt-node-ng-image-update-placeholder-4.4.10.3-1.el8.noarch.rpm, was released on 2022-06-24. However, I would like to know if there is any plan to release a new ovirt-node-ng-image-update-4.4.10.3-1.el8.noarch.rpm package, as that package is still at 4.4.10.2-1. Many thanks.
2 years, 9 months
Cannot start VM after upgrade - certificate issue
by LAMARC
Hello,
after running a PoC oVirt setup for more than a year, I ran into the "expiring certificate issue". I did a full upgrade to 4.4, ran engine-setup, and used "Enroll certificates" on all hosts; all hosts are now green, and the certificates are renewed:
- /etc/pki/vdsm/certs/vdsmcert.pem
- /etc/pki/vdsm/libvirt-spice/server-cert.pem
- /etc/pki/libvirt/clientcert.pem
- /etc/pki/vdsm/libvirt-migrate/server-cert.pem
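For reference, this is how I checked the validity windows after enrolling (a sketch with openssl; I also look at the libvirt-vnc pair, on the assumption that a path outside the list above might have been missed):

```shell
# Print notAfter for each cert; unreadable paths are skipped, so this
# is safe to run on any host.
certs="/etc/pki/vdsm/certs/vdsmcert.pem
/etc/pki/vdsm/libvirt-spice/server-cert.pem
/etc/pki/libvirt/clientcert.pem
/etc/pki/vdsm/libvirt-migrate/server-cert.pem
/etc/pki/vdsm/libvirt-vnc/server-cert.pem"
checked=0
for f in $certs; do
  if [ -r "$f" ]; then
    printf '%s: ' "$f"
    openssl x509 -noout -enddate -in "$f"
    checked=$((checked + 1))
  fi
done
echo "checked $checked cert(s)"
```

All the paths I checked this way show a fresh notAfter date.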
But: I cannot start any VM any more. Getting:
engine.log:2022-08-16 16:34:36,209+02 ERROR [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-213) [60c5f661-6231-4e03-87a5-f6e6f2c63ad1] Command 'org.ovirt.engine.core.bll.RunVmCommand' failed: EngineException: (Failed with error NotAfter: Mon Aug 15 09:15:42 CEST 2022 and code 5050)
Any ideas?
Cheers Marcus
2 years, 9 months
VM (live) migrate after update ovirt 4.2 -> 4.4
by Fabian Mohren
Hi,
we have upgraded the cluster (and all hosts) to oVirt 4.4 and Oracle Linux 8.5.
After the upgrade, we cannot migrate VMs from one host to another (same cluster).
oVirt is stuck, and on the Hosts tab we see this message (on the VM):
"VM CPU doesn't match the Cluster CPU".
We checked the XML config from KVM.
We checked the DB for the cluster CPU and the VM CPU - they are the same (Cascadelake-Server-noTSX,-mpx).
select name, cpu_name, cpu_flags from cluster;
name | cpu_name | cpu_flags
--------------------------+----------------------------------------+------------------------------------
TestMigrationCluster | Intel Haswell Family | vmx,nx,model_Haswell-noTSX
select vm_guid, vm_name, cpu_name from vms where vm_name='oVirtTest';
vm_guid | vm_name | cpu_name
--------------------------------------+-----------+---------------
421ade0f-4d76-941d-3de2-13b1df54c89d | oVirtTest | Haswell-noTSX
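We also want to compare what libvirt itself reports for the VM's CPU on the source host (a sketch; read-only virsh mode since vdsm owns the domain, VM name as above):

```shell
vm_name="oVirtTest"
# vdsm manages the libvirt domain, so query in read-only mode (-r) and
# just print the command for review before running it on the host.
cpu_cmd="virsh -r dumpxml $vm_name | grep -A5 '<cpu'"
echo "on the source host: $cpu_cmd"
```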
We created a new "TestMigrationCluster" with two hosts for easier testing.
We reinstalled the hosts (OS + oVirt) = failed.
We reset the manager to an older state = failed.
We have no idea where the "Cluster-CPU check" runs.
We searched for an ansible role or play but found nothing.
Does anybody have an idea where the cluster CPU check happens?
Thanks & Regards
Fabian
2 years, 9 months
Server patching on the oVirt node
by kenneth.hau@hactl.com
Server patching on the oVirt node
Hi there, we have some oVirt nodes running 4.4 and found vulnerabilities after performing a TVM scan on the servers. Please advise how we can fix the vulnerabilities below.
Python Unsupported Version Detection
CentOS 8 : cockpit (CESA-2022:2008)
CentOS 8 : container-tools:3.0 (CESA-2022:1793)
CentOS 8 : container-tools:3.0 (CESA-2022:2143)
CentOS 8 : samba (CESA-2022:2074)
CentOS 8 : maven:3.6 (CESA-2022:1860)
2 years, 9 months
Changing Cluster Compatibility Version from 4.6 to 4.7 issue
by Alexandr Mikhailov
Hi!
Just upgraded from 4.4 to 4.5. I hit all the known problems with this update, such as the postgresql-jdbc version and the stripeCount issue in cli.py, but I managed them and everything works more or less.
Now I cannot raise the Cluster compatibility level. The problem is that increasing the level tries to change something in the HE configuration but cannot.
This is error massage:
Error while executing action: Cannot update cluster because the update triggered update of the VMs/Templates and it failed for the following: HostedEngine. "There was an attempt to change Hosted Engine VM values that are locked." is one of the error(s).
To fix the issue, please go to each VM/Template, edit, change the Custom Compatibility Version (or other fields changed previously in the cluster dialog) and press OK. If the save does not pass, fix the dialog validation. After successful cluster update, you can revert your Custom Compatibility Version change (or other changes). If the problem still persists, you may refer to the engine.log file for further details.
If I try to edit the HE machine without changing anything, I see the same error: "There was an attempt to change Hosted Engine VM values that are locked." I think these issues are linked.
Engine log from when I try to update the cluster version:
2022-05-27 14:20:54,410+06 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-212) [1b8b6b78] EVENT_ID: CLUSTER_CANNOT_UPDATE_VM_COMPATIBILITY_VERSION(12,005), Cannot update compatibility version of Vm/Template: [HostedEngine], Message: There was an attempt to change Hosted Engine VM values that are locked.
Engine log from when I try to save the HE configuration without any change:
2022-05-27 14:34:10,965+06 INFO [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-220) [9cdfe99b-b7a1-46a4-ab3f-fc110b939f08] Lock Acquired to object 'EngineLock:{exclusiveLocks='[HostedEngine=VM_NAME]', sharedLocks='[4d6a0ffb-a221-4ef8-9846-6ada7690e74a=VM]'}'
2022-05-27 14:34:10,968+06 WARN [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-220) [9cdfe99b-b7a1-46a4-ab3f-fc110b939f08] Validation of action 'UpdateVm' failed for user admin@internal-authz. Reasons: VAR__ACTION__UPDATE,VAR__TYPE__VM,VM_CANNOT_UPDATE_HOSTED_ENGINE_FIELD
2022-05-27 14:34:10,969+06 INFO [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-220) [9cdfe99b-b7a1-46a4-ab3f-fc110b939f08] Lock freed to object 'EngineLock:{exclusiveLocks='[HostedEngine=VM_NAME]', sharedLocks='[4d6a0ffb-a221-4ef8-9846-6ada7690e74a=VM]'}'
It is not clear what is happening, which configuration changes the engine is trying to save, or what to do about it. Please help.
2 years, 9 months
Re: gluster service on the cluster is unchecked on hci cluster
by Strahil Nikolov
Can you check for AVC denials and an error message like the one described in https://github.com/gluster/glusterfs-selinux/issues/27#issue-1097225183 ?
Best Regards, Strahil Nikolov
On Mon, Jul 11, 2022 at 16:44, Jiří Sléžka<jiri.slezka(a)slu.cz> wrote: Hello,
On 7/11/22 14:34, Strahil Nikolov wrote:
> Can you check something on the host:
> cat /etc/glusterfs/eventsconfig.json
cat /etc/glusterfs/eventsconfig.json
{
"log-level": "INFO",
"port": 24009,
"disable-events-log": false
}
> semanage port -l | grep $(awk -F ':' '/port/ {gsub(",","",$2); print
> $2}' /etc/glusterfs/eventsconfig.json)
semanage port -l | grep 24009
returns empty set, it looks like this port is not labeled
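If the missing label is the problem, I guess the fix would look roughly like this (a sketch; glusterd_port_t as the target type is my assumption based on the linked SELinux issue):

```shell
# Extract the events port from the config (fall back to 24009 if the
# file is absent) and print the semanage command to review before
# running it as root.
port=$(awk -F': ' '/"port"/ {gsub(",",""); print $2}' \
       /etc/glusterfs/eventsconfig.json 2>/dev/null)
port=${port:-24009}
label_cmd="semanage port -a -t glusterd_port_t -p tcp $port"
echo "as root: $label_cmd"
```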
Cheers,
Jiri
>
> Best Regards,
> Strahil Nikolov
> В понеделник, 11 юли 2022 г., 02:18:57 ч. Гринуич+3, Jiří Sléžka
> <jiri.slezka(a)slu.cz> написа:
>
>
> Hi,
>
> I would like to change CPU Type in my oVirt 4.4.10 HCI cluster (based on
> 3 glusterfs/virt hosts). When I try to I got this error
>
> Error while executing action: Cannot disable gluster service on the
> cluster as it contains volumes.
>
> As I remember I had Gluster Service enabled on this cluster but now both
> (Enable Virt Services and Enable Gluster Service) checkboxes are grayed
> out and Gluster Service is unchecked.
>
> Also Storage / Volumes displays my volumes... well, displays one brick
> on particular host in unknown state (? mark) which is new situation. As
> I can see from command line all bricks are online, no healing in
> progress, all looks good...
>
> I am not sure if the second issue is relevant to first one so main
> question is how can I (re)enable gluster service in my cluster?
>
> Thanks in advance,
>
> Jiri
> _______________________________________________
> Users mailing list -- users(a)ovirt.org <mailto:users@ovirt.org>
> To unsubscribe send an email to users-leave(a)ovirt.org
> <mailto:users-leave@ovirt.org>
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> <https://www.ovirt.org/privacy-policy.html>
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> <https://www.ovirt.org/community/about/community-guidelines/>
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/S4NVCQ33ZSJ...
> <https://lists.ovirt.org/archives/list/users@ovirt.org/message/S4NVCQ33ZSJ...>
2 years, 9 months
Upgrade 4.4 to 4.5 node package issue
by Jason Beard
I'm updating my environment from 4.4 to 4.5. The Hosted Engine upgrade completed with no errors. On both nodes it fails at step 1. My nodes are CentOS Stream 8. Is there a release package I can download somewhere? Or is something else going on?
# dnf install -y centos-release-ovirt45
Last metadata expiration check: 0:08:42 ago on Thu 11 Aug 2022 10:43:28 PM CDT.
No match for argument: centos-release-ovirt45
Error: Unable to find a match: centos-release-ovirt45
# cat /etc/os-release
NAME="CentOS Stream"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8.6.2109.0"
VARIANT="oVirt Node 4.4.10.2"
VARIANT_ID="ovirt-node"
PRETTY_NAME="oVirt Node 4.4.10"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://www.ovirt.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
PLATFORM_ID="platform:el8"
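For what it's worth, my working theory is that centos-release-ovirt45 comes from the extras-common repository, which may simply not be enabled on a 4.4 node (an assumption I haven't verified yet):

```shell
# Print the commands I plan to try, rather than running them blind.
plan=""
for cmd in \
  "dnf config-manager --set-enabled extras-common" \
  "dnf install -y centos-release-ovirt45"; do
  plan="$plan$cmd; "
  echo "to try: $cmd"
done
```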
2 years, 9 months
oVirt 4.5.2 is now generally available
by Lev Veyde
The oVirt project is excited to announce the general availability of oVirt
4.5.2, as of August 10th, 2022.
This release unleashes an altogether more powerful and flexible open source
virtualization solution that encompasses hundreds of individual changes and
a wide range of enhancements across the engine, storage, network, user
interface, and analytics on top of oVirt 4.4.
Important notes before you install / upgrade
Some of the features included in oVirt 4.5.2 require content that is
available in RHEL 8.6 (or newer) and derivatives.
NOTE: If you’re going to install oVirt 4.5.2 on RHEL or similar, please
read Installing on RHEL or derivatives
<https://ovirt.org/download/install_on_rhel.html> first.
Documentation
Be sure to follow instructions for oVirt 4.5!
-
If you want to try oVirt as quickly as possible, follow the instructions
on the Download <https://ovirt.org/download/> page.
-
For complete installation, administration, and usage instructions, see
the oVirt Documentation <https://ovirt.org/documentation/>.
-
For upgrading from a previous version, see the oVirt Upgrade Guide
<https://ovirt.org/documentation/upgrade_guide/>.
-
For a general overview of oVirt, see About oVirt
<https://ovirt.org/community/about.html>.
What’s new in oVirt 4.5.2 Release?
This release is available now on x86_64 architecture for:
-
CentOS Stream 8
-
RHEL 8.6 and derivatives
This release supports Hypervisor Hosts on x86_64:
-
oVirt Node NG (based on CentOS Stream 8)
-
CentOS Stream 8
-
RHEL 8.6 and derivatives
This release also supports Hypervisor Hosts on x86_64 as tech preview
without secure boot:
-
CentOS Stream 9
-
RHEL 9.0 and derivatives
-
oVirt Node NG based on CentOS Stream 9
Builds are also available for ppc64le and aarch64.
Known issues:
-
On EL9 with UEFI secure boot, vdsm fails to decode DMI data due to
Bug 2081648 <https://bugzilla.redhat.com/show_bug.cgi?id=2081648> -
python-dmidecode module fails to decode DMI data
Security fixes included in oVirt 4.5.2 compared to latest oVirt 4.5.1:
Bug list
<https://bugzilla.redhat.com/buglist.cgi?quicksearch=target_milestone%3Aov...>
Some of the RFEs with high user impact are listed below:
Bug list
<https://bugzilla.redhat.com/buglist.cgi?quicksearch=target_milestone%3Aov...>
Some of the Bugs with high user impact are listed below:
Bug list
<https://bugzilla.redhat.com/buglist.cgi?quicksearch=target_milestone%3Aov...>
oVirt Node will be released shortly after the release reaches the CentOS
mirrors.
See the release notes for installation instructions and a list of new
features and bugs fixed.
Additional resources:
-
Read more about the oVirt 4.5.2 release highlights:
https://www.ovirt.org/release/4.5.2/
-
Get more oVirt project updates on Twitter: https://twitter.com/ovirt
-
Check out the latest project news on the oVirt blog:
https://blogs.ovirt.org/
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
<https://www.redhat.com>
lev(a)redhat.com | lveyde(a)redhat.com
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
2 years, 10 months
Q: libvirtd-tls startup failure on node (v4.4.10)
by Andrei Verovski
Hi,
I have one problematic node, and I have identified the culprit behind the
vdsmd service failing to start:
-- The unit libvirtd-tls.socket has entered the 'failed' state with
result 'service-start-limit-hit'.
Aug 13 15:00:45 node14.starlett.lv systemd[1]: Closed Libvirt TLS IP socket.
-- Subject: Unit libvirtd-tls.socket has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
The node was installed on a clean CentOS Stream from the oVirt web UI (not
the oVirt Node disk image) and was working fine until the recent problem
with certificates.
Can't reinstall/reconfigure node from oVirt Web since it is marked as
non-responsive.
How to fix this?
Thanks in advance for any help.
---------------------
[root@node14 vdsm]# cat /etc/centos-release
CentOS Stream release 8
[root@node14 vdsm]# rpm -qa | grep vdsm
vdsm-api-4.40.100.2-1.el8.noarch
vdsm-hook-openstacknet-4.40.100.2-1.el8.noarch
vdsm-client-4.40.100.2-1.el8.noarch
vdsm-network-4.40.100.2-1.el8.x86_64
vdsm-hook-vhostmd-4.40.100.2-1.el8.noarch
vdsm-4.40.100.2-1.el8.x86_64
vdsm-http-4.40.100.2-1.el8.noarch
vdsm-common-4.40.100.2-1.el8.noarch
vdsm-hook-fcoe-4.40.100.2-1.el8.noarch
vdsm-python-4.40.100.2-1.el8.noarch
vdsm-jsonrpc-4.40.100.2-1.el8.noarch
vdsm-hook-ethtool-options-4.40.100.2-1.el8.noarch
vdsm-yajsonrpc-4.40.100.2-1.el8.noarch
[root@node14 vdsm]# vdsm-tool validate-config
SUCCESS: ssl configured to true. No conflicts
[root@node14 vdsm]# systemctl --failed
UNIT LOAD ACTIVE SUB DESCRIPTION
● libvirtd.service loaded failed failed Virtualization daemon
● libvirtd-admin.socket loaded failed failed Libvirt admin socket
● libvirtd-ro.socket loaded failed failed Libvirt local read-only socket
● libvirtd-tls.socket loaded failed failed Libvirt TLS IP socket
● libvirtd.socket loaded failed failed Libvirt local socket
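What I plan to try next, in order (a sketch; my assumption is that `vdsm-tool configure --force` rewriting the vdsm-managed libvirt configuration is the relevant step here):

```shell
# Collect and print the recovery steps for review before running as root.
recovery=""
for cmd in \
  "systemctl reset-failed 'libvirtd*'" \
  "vdsm-tool configure --force" \
  "systemctl restart libvirtd vdsmd"; do
  recovery="$recovery$cmd; "
  echo "step: $cmd"
done
```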
2 years, 10 months
RHV/oVirt and Ansible Tower/awx
by Colin Coe
Hey all
We've recently moved from RHV v4.3 to RHV v4.4SP1 (ovirt v4.5) and I'm
wanting to rebuild one of our application "clusters" (not HA, simply a
group of related nodes that run the application stack)
Ansible Tower is v3.8.5 and is running on RHEL7.
To further complicate matters, we've just gone from Arcserve UDP to Veeam
and Veeam for RHV.
I need to ensure the new VMs are built with COW disks with "incremental"
backup enabled. After a lot of mucking about I found I have to fully
qualify the Ansible module names (i.e. ovirt_disk becomes
ovirt.ovirt.ovirt_disk), but this has uncovered the next problem.
The workflow on Ansible Tower server is failing with "ovirtsdk4 version
4.4.0 or higher is required for this module".
So how can I wedge the v4.4 ovirt SDK onto this RHEL7 Ansible Tower node?
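My current thinking is to install the newer SDK into Tower's own virtualenv (a sketch; /var/lib/awx/venv/ansible is my assumption for the Tower 3.8 venv path, please correct me if that's wrong):

```shell
venv="/var/lib/awx/venv/ansible"   # assumed Tower 3.8 venv location
# Print the pip command for review; the venv's pip keeps the SDK out of
# the system python on the RHEL7 node.
pip_cmd="$venv/bin/pip install 'ovirt-engine-sdk-python>=4.4.0'"
echo "on the Tower node: $pip_cmd"
```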
Thanks
2 years, 10 months
Nested Virtualization, please, help-me
by Jorge Visentini
Hi folks.
So... I read the documentation on many sites and this forum too, but the
nested feature still does not work.
*My host oVirt so far:*
cat /sys/module/kvm_intel/parameters/nested
1
Kernel parameters edited by Engine on *Edit Host -> Kernel*.
Reinstall the host and reboot.
*cat /proc/cmdline*
BOOT_IMAGE=(hd0,msdos1)//ovirt-node-ng-4.4.10.2-0.20220303.0+1/vmlinuz-4.18.0-365.el8.x86_64
crashkernel=auto resume=/dev/mapper/onn-swap
rd.lvm.lv=onn/ovirt-node-ng-4.4.10.2-0.20220303.0+1
rd.lvm.lv=onn/swap rhgb quiet
root=/dev/onn/ovirt-node-ng-4.4.10.2-0.20220303.0+1
boot=UUID=07e688ed-8d46-4932-b070-378c75ba1921 rootflags=discard
img.bootid=ovirt-node-ng-4.4.10.2-0.20220303.0+1 kvm-intel.nested=1
*On VM guest:*
Host set to "Specific Host" and "Pass-Through Host CPU"
vmx is ok
*root@kceve01:~# kvm-ok*
INFO: /dev/kvm exists
KVM acceleration can be used
*Result:*
I am trying to virtualize EVE-NG. I can start the EVE-NG guest, but I
don't know why I can't access it; it's as if every packet is being
blocked... I don't know yet whether it is a virtualization problem or a
network problem...
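To tell a virtualization problem from a network one, I plan to watch the guest's traffic on the host bridge (a sketch; the bridge name and guest IP are placeholders for my setup):

```shell
bridge="ovirtmgmt"   # placeholder; substitute the network the guest NIC is on
# Print the capture command for review; run it on the host while pinging
# the guest to see whether packets reach the bridge at all.
tcpdump_cmd="tcpdump -i $bridge -nn host <guest-ip>"
echo "on the host: $tcpdump_cmd"
```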
If you have any other tips for me, I would be glad, very glad, to hear them.
All the best!
--
Att,
Jorge Visentini
+55 55 98432-9868
2 years, 10 months
VM have illegal disk
by calvineadiwinata@gmail.com
My VM has an illegal disk; how do I solve it?
I can't delete the old snapshot.
2 years, 10 months
Re: Problem with engine deployment
by Facundo Badaracco
Dear David,
You were right. Reinstalling the ansible collection solved all the problems. I
have deployed the engine VM without any issues.
Many thanks. Hope this helps someone in the future.
On Mon, 8 Aug 2022 at 02:44, Yedidyah Bar David <didi(a)redhat.com>
wrote:
> On Thu, Aug 4, 2022 at 6:05 PM Facundo Badaracco <varekoarfa(a)gmail.com>
> wrote:
> >
> > awesome david, im really thankful.
> >
> > [root@vs05 ~]# rpm -q rpm -q ovirt-ansible-collection
> > rpm-4.14.3-23.el8.x86_64
> > ovirt-ansible-collection-2.1.0-1.el8.noarch
> > [root@vs05 ~]# rpm -V ovirt-ansible-collection
> > S.5....T.
> /usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/defaults/main.yml
> > S.5....T.
> /usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/001_validate_network_interfaces.yml
>
> Whoops! These files are changed/corrupted. That's the '5' there.
> Please check them.
> If you changed them manually for some reason, it might be enough to
> 'dnf reinstall ovirt-ansible-collection'. Otherwise, I'd suspect some
> bigger issue/corruption/whatever and check more carefully. If you are
> certain that the hardware is fine, and can't spot a specific issue
> leading to this, you might want to reinstall the OS.
>
> >
> > i have attached journal and messages...
>
> I didn't check them.
>
> >
> >
> > its a clean installation. i can reinstall everything if needed, even the
> SO.
>
> See above. I'd personally not reinstall the OS without first checking
> what caused this corruption - if it's a hardware issue, better handle
> it before continuing, especially if this installation is eventually
> intended for production work.
>
> Good luck and best regards,
>
> >
> >
> > On Thu, 4 Aug 2022 at 11:23, Yedidyah Bar David (didi(a)redhat.com)
> wrote:
> >>
> >> On Tue, Aug 2, 2022 at 3:51 PM Facundo Badaracco <varekoarfa(a)gmail.com>
> wrote:
> >>>
> >>> hi everyone, thanks for ur help.
> >>>
> >>> i tried what itforums suggested, but nothing worked.
> >>> cleaned the log, make a new run, i have found what u say david,
> "otopi_net_host" but i cant find something that helps me to fix it. i have
> attached the logs if u can help with this, will be greatly appreciated.
> >>
> >>
> >> In your ovirt-hosted-engine-setup-20220802093048-j34sz6.log.txt, there
> is this error:
> >>
> >> 2022-08-02 09:31:17,677-0300 DEBUG otopi.context
> context._executeMethod:145 method exception
> >> Traceback (most recent call last):
> >> File
> "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-common/network/bridge.py",
> line 156, in _customization
> >> 'otopi_host_net'
> >> KeyError: 'otopi_host_net'
> >>
> >> And indeed, you can't find 'otopi_host_net' in
> ovirt-hosted-engine-setup-ansible-get_network_interfaces-20220802093113-agfg7e.log.txt.
> In fact, the last tasks there are 'Collect Team devices', then 'Filter team
> devices', then 'Fail if only team devices are available', and that's it -
> which is very weird, as these are in filter_team_devices.yml, which is
> imported in the middle of 001_validate_network_interfaces.yml - right after
> that, it imports filter_unsupported_vlan_devices.yml, but nothing from
> there is in the log. The next task you should have seen is 'Search VLAN
> devices', as I can see in my own log (for example).
> >>
> >> Please check:
> >>
> >> - rpm -q rpm -q ovirt-ansible-collection
> >> - rpm -V ovirt-ansible-collection
> >> - Perhaps some more logs, such as /var/log/messages, journalctl, etc.,
> that might include relevant errors from ansible. Weird.
> >>
> >>
> >>>
> >>>
> >>> if u run the deploy from cockpit, no logs are created but if i do it
> from cli, the logs are created.
> >>
> >>
> >> The cockpit deployment is deprecated. Not sure we ever announced this
> officially for oVirt.
> >>
> >> We did remove the cockpit-based installation guide from the
> documentation section on the website.
> >>
> >> It had too many problems and quite little use.
> >>
> >> So please use the CLI. Thanks.
> >>
> >> Best regards,
> >>
> >>>
> >>>
> >>>
> >>> On Tue, 2 Aug 2022 at 02:49, <itforums51(a)gmail.com> wrote:
> >>>>
> >>>> Hi, your issue is probably related to this
> https://www.mail-archive.com/users@ovirt.org/msg70657.html ....
> >>>>
> >>>> I also have 3x servers (using bond for storage network) and was able
> to successfully deploy the engine, but using the workaround suggested by
> 'Dax Kelson's thread above' and also later by editing a vars file on an
> ansible role: https://github.com/oVirt/ovirt-engine/issues/520
> >>>>
> >>>> I'd say give it a try and let us know the outcome.
> >>>> _______________________________________________
> >>>> Users mailing list -- users(a)ovirt.org
> >>>> To unsubscribe send an email to users-leave(a)ovirt.org
> >>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >>>> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> >>>> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/647W6ROQFCD...
> >>>
> >>> _______________________________________________
> >>> Users mailing list -- users(a)ovirt.org
> >>> To unsubscribe send an email to users-leave(a)ovirt.org
> >>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >>> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> >>> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QD65NZ5TFLZ...
> >>
> >>
> >>
> >> --
> >> Didi
>
>
>
> --
> Didi
>
>
2 years, 10 months
struggling with 4.5.1 install
by Chris Smith
Hello,
Here's the general gist of what I've done so far:
Graphical install from the el8 and el9 ISOs, using as few modifications as
possible, with DHCP networking.
3.6 TB of disk space for the oVirt install destination.
The system comes up and the dashboard is online; I try to deploy the hosted
engine, no dice.
It's similar to this:
https://bugzilla.redhat.com/show_bug.cgi?id=1946095
OK, so I go for a CLI install.
I manually install ovirt-engine-appliance, and that seems to go OK.
I kick off ovirt-hosted-engine-setup and follow the prompts.
I watch the logs and such, and all seems to be OK.
I have to make an entry in /etc/hosts so the hostname of the not-yet-built
hosted engine VM resolves to an IP address (I don't have "real" DNS at the
moment).
That succeeds, but then the new engine VM is spun up in a new IP subnet,
192.168.222.0/24, different from the primary 192.168.1.0/24; not sure why.
I am at a point where I can't reach a GUI at the hosted engine IP because
it's only available on the internal adapter of the node.
I feel like the installers in the el8 and el9 ISOs are just completely
broken.
What am I doing wrong?
Oh, and where is everyone in the IRC chat room?
Thanks,
Chris
2 years, 10 months
No "Virtualization" tab on Node cockpit
by giuliano.david@nvgroup.it
Hi. I'm new to oVirt.
Trying to install the first node with hosted engine on it.
The bare node install from the ISO image (ovirt-node-ng-installer-4.5.1-2022062306.el9.iso) works fine on a Fujitsu server.
Accessing the node cockpit via browser at port 9090, the "Virtualization" entry is missing from the left panel (installing Node version 4.4.6 it was present and working), so I can't launch the hosted engine setup via the UI.
Trying to deploy via command line ( $ hosted-engine --deploy ), the installation process stops with the error:
[ ERROR ] {'changed': True, 'stdout': '', 'stderr': '', 'rc': 1, 'cmd': "[ -r /etc/sysconfig/libvirtd ] && sed -i '/## beginning of configuration section by vdsm-4.[0-9]\\+.[0-9]\\+/,/## end of configuration section by vdsm-4.[0-9]\\+.[0-9]\\+/d' /etc/sysconfig/libvirtd", 'start': '2022-08-04 18:24:31.817314', 'end': '2022-08-04 18:24:31.819845', 'delta': '0:00:00.002531', 'failed': True, 'msg': 'non-zero return code', 'invocation': {'module_args': {'warn': False, '_raw_params': "[ -r /etc/sysconfig/libvirtd ] && sed -i '/## beginning of configuration section by vdsm-4.[0-9]\\+.[0-9]\\+/,/## end of configuration section by vdsm-4.[0-9]\\+.[0-9]\\+/d' /etc/sysconfig/libvirtd", '_uses_shell': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], '_ansible_no_log': False, 'item': '/etc/sysconfig/libvirtd', 'ansible_loop_var': 'item', '_ansible_item_label': '/etc/sysconfig/libvirtd'}
I can't really understand what's going on ...
Can anyone please point me in the right direction to fix that? Is it a bug?
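One observation that may help whoever answers: the failing command is guarded by `[ -r /etc/sysconfig/libvirtd ]`, so rc=1 would just mean the file is missing or unreadable on my node. A possible workaround I'm considering (an untested assumption on my part):

```shell
# Print the workaround for review; creating an empty file should let the
# guard test pass and the sed run as a no-op.
workaround="touch /etc/sysconfig/libvirtd && chmod 0644 /etc/sysconfig/libvirtd"
echo "to try before re-deploying: $workaround"
```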
Thanks!!!
g
2 years, 10 months
how to use ovirt-engine on a new Architecture
by zhangwenlong@loongson.cn
Hello, I am using ovirt-engine-4.4.10.7 on a new architecture (LoongArch64), but an error is reported during use; the error message is:
2022-08-04 12:03:30,762-04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-72) [4fad3531] EVENT_ID: CPU_TYPE_UNSUPPORTED_IN_THIS_CLUSTER_VERSION(156), Host node210 moved to Non-Operational state as host CPU type is not supported in this cluster compatibility version or is not supported at all,
Which CPUs are supported by ovirt-engine-4.4.10.7, and how can I get the list of supported CPUs? If I want to use ovirt-engine-4.4.10.7 on a new architecture, what should I do? Thank you.
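My understanding so far (an assumption, please correct me) is that the engine keeps its supported-CPU table in the ServerCPUList configuration value, which can be inspected with engine-config:

```shell
# Print the query for review; run it on the engine host to dump the
# supported-CPU table per cluster compatibility level.
query="engine-config -g ServerCPUList"
echo "on the engine host: $query"
```

If that's right, supporting a new architecture would presumably mean extending that value alongside the engine-side changes.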
the CPU information about LoongArch just like:
system type : generic-loongson-machine
processor : 0
package : 0
core : 0
cpu family : Loongson-64bit
model name : Loongson-3A5000LL
CPU Revision : 0x11
FPU Revision : 0x00
CPU MHz : 2000.00
BogoMIPS : 4000.00
TLB entries : 2112
Address sizes : 48 bits physical, 48 bits virtual
isa : loongarch32 loongarch64
features : cpucfg lam ual fpu lsx lasx complex crypto lvz lbt_x86 lbt_arm lbt_mips
hardware watchpoint : yes, iwatch count: 8, dwatch count: 8
processor : 1
package : 0
core : 1
cpu family : Loongson-64bit
model name : Loongson-3A5000LL
CPU Revision : 0x11
FPU Revision : 0x00
CPU MHz : 2000.00
BogoMIPS : 4000.00
TLB entries : 2112
Address sizes : 48 bits physical, 48 bits virtual
isa : loongarch32 loongarch64
features : cpucfg lam ual fpu lsx lasx complex crypto lvz lbt_x86 lbt_arm lbt_mips
hardware watchpoint : yes, iwatch count: 8, dwatch count: 8
2 years, 10 months
Manager crash
by teemu.saarinen@netum.fi
Hi,
I was wondering: if I have an HA setup of manager + 2 hosts and the manager crashes, do the 2 hosts crash as well, or do they continue working normally?
Thanks,
Teemu S.
2 years, 10 months
Failover IP on Hetzner
by Tommaso - Shellrent
Hi to all.
we want to configure a failover IP on a Hetzner host. Has anyone been able
to make it work?
The failover IP does not come with a MAC address, so it can't work on a
bridged network; Hetzner suggests configuring it on a routed network.
In oVirt, how can we make this work?
Regards,
Tommaso.
--
--
Shellrent - Il primo hosting italiano Security First
*Tommaso De Marchi*
/COO - Chief Operating Officer/
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155 <tel:+390444321155>
2 years, 10 months
Certificates Expiration Problem - Urgent Help Needed
by Andrei Verovski
Hi !
Today my hosts' certificates (engine + all nodes) expired, and I re-ran
engine-setup to renew them.
Then I did for each node host:
Edit host -> Advanced parameters -> Fetch SSH public key (PEM)
in order to update the certificates on the nodes; everything finished just fine.
Unfortunately, one of the most crucial nodes (node14) still shows this
error:
VDSM node14 command Get Host Capabilities failed: PKIX path validation
failed: java.security.cert.CertPathValidatorException: validity check failed
I restarted vdsmd and vdsm-network; still the same: the node is marked
non-responsive, and all VMs show a "?" sign (unknown status).
However, node14 pings without any problem, its storage domain shown in
green (OK), and all VMs are running fine.
The vdsm-network service status is OK; vdsmd is NOT:
Aug 08 22:07:27 node14.***.lv vdsm[1264164]: ERROR ssl handshake: socket
error, address: ::ffff:192.168.0.4
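To see which certificate vdsm actually presents on the wire (and its expiry), I can probe it from the engine host (a sketch; 54321 is the vdsm port, and the hostname is a placeholder):

```shell
node="node14.example.lv"   # placeholder for the real node FQDN
# Print the probe for review; it fetches the served cert and shows its
# notAfter date.
probe="openssl s_client -connect $node:54321 </dev/null 2>/dev/null | openssl x509 -noout -enddate"
echo "from the engine: $probe"
```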
This node runs our accounting and stock control system; its storage domain
holds that software's VM disk. If the node is non-operational after a
restart, it's BIG trouble: I will not be able to migrate the VM disk
anywhere, and restoring the accounting DB from the daily backup is a
lengthy 2-3 hour process.
Please advise what to do next.
Thanks in advance.
2 years, 10 months
Q: How to fix ghost "locked" status of VM
by Andrei Verovski
Hi,
Creating a snapshot of one of the VMs failed, and the zombie task was killed with:
su postgres
psql -d engine -U postgres
select * from job order by start_time desc;
select DeleteJob('UUID_FROZEN_TASK_ID');
However, the VM remains in a locked state (with a lock sign at the lower left of the red "DOWN" arrow in the status column of the web interface).
I run:
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t all
then rebooted the engine VM; still no luck. I can't do anything with that VM.
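Two further checks I'm considering (a sketch; the `-q` query mode of unlock_entity.sh and imagestatus 2 meaning "locked" are my assumptions about the dbutils script and the engine enum):

```shell
# Collect and print the diagnostic commands for review before running
# them on the engine host.
checks=""
for cmd in \
  "/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -q -t all" \
  "psql -U postgres -d engine -c \"select image_guid, imagestatus from images where imagestatus = 2;\""; do
  checks="$checks$cmd; "
  echo "check: $cmd"
done
```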
Please advise how to fix.
Thanks in advance.
2 years, 10 months