oVirt 4.4.7: Adding host to existing cluster attempts to install ovirt-hosted-engine-setup
by Paul-Erik Törrönen
I have an existing oVirt installation to which I want to add a new host
(from the UI).
However, the add fails because oVirt then tries to install the
ovirt-hosted-engine-setup package, despite the fact that in the New Host
dialog I left the 'Choose hosted engine deployment action' set to None.
As a result, the installation aborts:
2021-08-15 13:17:37 EEST - TASK [ovirt-host-deploy-vdsm : Install ovirt-hosted-engine-setup package] ******
2021-08-15 13:17:37 EEST -
2021-08-15 13:17:37 EEST - fatal: [new.host]: FAILED! => {"changed": false, "failures": ["No package ovirt-hosted-engine-setup available."], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []}
2021-08-15 13:17:37 EEST - {
"status" : "OK",
"msg" : "",
"data" : {
"uuid" : "98a244fb-15da-4192-a8fc-b23a5e5b7594",
"counter" : 55,
"stdout" : "fatal: [new.host]: FAILED! => {\"changed\": false,
\"failures\": [\"No package ovirt-hosted-engine-setup available.\"],
\"msg\": \"Failed to install some of the specified packages\", \"rc\":
1, \"results\": []}",
"start_line" : 47,
"end_line" : 48,
"runner_ident" : "fe710372-fdb1-11eb-81f0-ecf4bb63099f",
"event" : "runner_on_failed",
"pid" : 24931,
"created" : "2021-08-15T10:17:35.963320",
"parent_uuid" : "ecf4bb63-099f-2448-1b83-0000000001a7",
"event_data" : {
"playbook" : "ovirt-host-deploy.yml",
"playbook_uuid" : "8f70f945-4008-4159-b469-7ff04c9c242b",
"play" : "all",
"play_uuid" : "ecf4bb63-099f-2448-1b83-000000000006",
"play_pattern" : "all",
"task" : "Install ovirt-hosted-engine-setup package",
"task_uuid" : "ecf4bb63-099f-2448-1b83-0000000001a7",
"task_action" : "yum",
"task_args" : "",
"task_path" :
"/usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-host-deploy-vdsm/tasks/packages.yml:6",
"role" : "ovirt-host-deploy-vdsm",
"host" : "new.host",
"remote_addr" : "new.host",
"res" : {
"msg" : "Failed to install some of the specified packages",
"failures" : [ "No package ovirt-hosted-engine-setup available." ],
"results" : [ ],
"rc" : 1,
"invocation" : {
"module_args" : {
"name" : [ "ovirt-hosted-engine-setup" ],
"state" : "present",
"allow_downgrade" : false,
"autoremove" : false,
"bugfix" : false,
"disable_gpg_check" : false,
"disable_plugin" : [ ],
"disablerepo" : [ ],
"download_only" : false,
"enable_plugin" : [ ],
"enablerepo" : [ ],
...
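For reference, the same add operation can be driven through the Python SDK, where the dialog choice maps to a request parameter. A minimal, untested sketch (the engine URL, credentials and cluster name are placeholders, and deploy_hosted_engine=False is my reading of the "None" action in the dialog):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Connect to the engine API (placeholder URL/credentials).
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

# Add the host while explicitly requesting no hosted-engine deployment.
hosts_service = connection.system_service().hosts_service()
hosts_service.add(
    types.Host(
        name='new.host',
        address='new.host',
        root_password='hostpassword',
        cluster=types.Cluster(name='Default'),
    ),
    deploy_hosted_engine=False,
)
connection.close()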
Poltsi
About the clone type of the VM disk when a VM is cloned from a template
by Tommy Sway
There are options for the clone type of a disk on other platforms similar to oVirt, for example:
--clonetype [thin|full]
A thin clone uses a copy-on-write (COW) reflinked clone, meaning it is
dependent on the base template, but it enables rapid cloning.
A full clone makes a full physical copy and hence is not dependent on
the base template.
A thin clone uses less space but will not be as performant as a full clone,
especially under heavy write workloads.
By default a thin clone is created, which is great for testing but should
be avoided in production environments.
I want to know which type oVirt uses when we clone a VM from a template.
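For context, both modes appear to be exposed when adding a VM from a template through the Python SDK / REST API; a minimal, untested sketch (names and credentials are placeholders, and my reading of the API docs is that omitting the clone parameter gives the thin/dependent behavior):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)
vms_service = connection.system_service().vms_service()

# Thin (dependent) clone: the default when no clone parameter is passed.
vms_service.add(
    types.Vm(
        name='vm-thin',
        cluster=types.Cluster(name='Default'),
        template=types.Template(name='mytemplate'),
    ),
)

# Full (independent) clone: clone=True asks the engine to copy the disks.
vms_service.add(
    types.Vm(
        name='vm-full',
        cluster=types.Cluster(name='Default'),
        template=types.Template(name='mytemplate'),
    ),
    clone=True,
)
connection.close()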
Hosted engine on HCI cluster is not running
by David White
Hello,
It appears that my Manager / hosted-engine isn't working, and I'm unable to get it to start.
I have a 3-node HCI cluster, but right now, Gluster is only running on 1 host (so no replication).
I was hoping to upgrade / replace the storage on my 2nd host today, but aborted that maintenance when I found that I couldn't even get into the Manager.
The storage is mounted, but here's what I see:
> [root@cha2-storage dwhite]# hosted-engine --vm-status
> The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable.
>
> [root@cha2-storage dwhite]# systemctl status ovirt-ha-agent
> ● ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring Agent
> Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled; vendor preset: disabled)
> Active: active (running) since Fri 2021-08-13 11:10:51 EDT; 2h 44min ago
> Main PID: 3591872 (ovirt-ha-agent)
> Tasks: 1 (limit: 409676)
> Memory: 21.5M
> CGroup: /system.slice/ovirt-ha-agent.service
> └─3591872 /usr/libexec/platform-python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent
>
> Aug 13 11:10:51 cha2-storage.mgt.barredowlweb.com systemd[1]: Started oVirt Hosted Engine High Availability Monitoring Agent.
Any time I try to do anything like connect the engine storage, disconnect the engine storage, or connect to the console, it just sits there and doesn't do anything, and I eventually have to Ctrl-C out of it.
Maybe I have to be patient? When I Ctrl-C, I get a traceback error:
> [root@cha2-storage dwhite]# hosted-engine --console
> ^CTraceback (most recent call last):
> File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
>
> "__main__", mod_spec)
> File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
> exec(code, run_globals)
> File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_setup/vdsm_helper.py", line 214, in <module>
> args.command(args)
> File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_setup/vdsm_helper.py", line 42, in func
> f(*args, **kwargs)
> File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_setup/vdsm_helper.py", line 91, in checkVmStatus
> cli = ohautil.connect_vdsm_json_rpc()
> File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/util.py", line 472, in connect_vdsm_json_rpc
> __vdsm_json_rpc_connect(logger, timeout)
> File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/util.py", line 395, in __vdsm_json_rpc_connect
> timeout=timeout)
> File "/usr/lib/python3.6/site-packages/vdsm/client.py", line 154, in connect
> outgoing_heartbeat=outgoing_heartbeat, nr_retries=nr_retries)
> File "/usr/lib/python3.6/site-packages/yajsonrpc/stompclient.py", line 426, in SimpleClient
> nr_retries, reconnect_interval)
> File "/usr/lib/python3.6/site-packages/yajsonrpc/stompclient.py", line 448, in StandAloneRpcClient
> client = StompClient(utils.create_connected_socket(host, port, sslctx),
> File "/usr/lib/python3.6/site-packages/vdsm/utils.py", line 379, in create_connected_socket
> sock.connect((host, port))
> File "/usr/lib64/python3.6/ssl.py", line 1068, in connect
> self._real_connect(addr, False)
> File "/usr/lib64/python3.6/ssl.py", line 1059, in _real_connect
> self.do_handshake()
> File "/usr/lib64/python3.6/ssl.py", line 1036, in do_handshake
> self._sslobj.do_handshake()
> File "/usr/lib64/python3.6/ssl.py", line 648, in do_handshake
> self._sslobj.do_handshake()
This is what I see in /var/log/ovirt-hosted-engine-ha/broker.log:
> MainThread::WARNING::2021-08-11 10:24:41,596::storage_broker::100::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(__init__) Can't connect vdsm storage: Connection to storage server failed
> MainThread::ERROR::2021-08-11 10:24:41,596::broker::69::ovirt_hosted_engine_ha.broker.broker.Broker::(run) Failed initializing the broker: Connection to storage server failed
> MainThread::ERROR::2021-08-11 10:24:41,598::broker::71::ovirt_hosted_engine_ha.broker.broker.Broker::(run) Traceback (most recent call last):
> File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/broker.py", line 64, in run
> self._storage_broker_instance = self._get_storage_broker()
> File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/broker.py", line 143, in _get_storage_broker
> return storage_broker.StorageBroker()
> File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 97, in __init__
> self._backend.connect()
> File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 375, in connect
> sserver.connect_storage_server()
> File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/storage_server.py", line 451, in connect_storage_server
> 'Connection to storage server failed'
> RuntimeError: Connection to storage server failed
>
> MainThread::ERROR::2021-08-11 10:24:41,599::broker::72::ovirt_hosted_engine_ha.broker.broker.Broker::(run) Trying to restart the broker
> MainThread::INFO::2021-08-11 10:24:42,439::broker::47::ovirt_hosted_engine_ha.broker.broker.Broker::(run) ovirt-hosted-engine-ha broker 2.4.7 started
> MainThread::INFO::2021-08-11 10:24:44,442::monitor::45::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Searching for submonitors in /usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/submonitors
> MainThread::INFO::2021-08-11 10:24:44,443::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load
> MainThread::INFO::2021-08-11 10:24:44,449::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load-no-engine
> MainThread::INFO::2021-08-11 10:24:44,450::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor engine-health
> MainThread::INFO::2021-08-11 10:24:44,451::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor mem-free
> MainThread::INFO::2021-08-11 10:24:44,451::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor mgmt-bridge
> MainThread::INFO::2021-08-11 10:24:44,452::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor network
> MainThread::INFO::2021-08-11 10:24:44,452::monitor::62::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor storage-domain
> MainThread::INFO::2021-08-11 10:24:44,452::monitor::63::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Finished loading submonitors
And I see this in /var/log/vdsm/vdsm.log:
> 2021-08-13 14:08:10,844-0400 ERROR (Reactor thread) [ProtocolDetector.AcceptorImpl] Unhandled exception in acceptor (protocoldetector:76)
> Traceback (most recent call last):
> File "/usr/lib64/python3.6/asyncore.py", line 108, in readwrite
> File "/usr/lib64/python3.6/asyncore.py", line 417, in handle_read_event
> File "/usr/lib/python3.6/site-packages/yajsonrpc/betterAsyncore.py", line 57, in handle_accept
> File "/usr/lib/python3.6/site-packages/yajsonrpc/betterAsyncore.py", line 173, in _delegate_call
> File "/usr/lib/python3.6/site-packages/vdsm/protocoldetector.py", line 53, in handle_accept
> File "/usr/lib64/python3.6/asyncore.py", line 348, in accept
> File "/usr/lib64/python3.6/socket.py", line 205, in accept
> OSError: [Errno 24] Too many open files
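As a sanity check, here is a quick way to see whether vdsm really is exhausting its file-descriptor limit (a hypothetical, untested sketch; it assumes root access and that vdsmd is the service name):

import os
import subprocess

# Main PID of the vdsmd service.
pid = int(subprocess.check_output(
    ['systemctl', 'show', '-p', 'MainPID', '--value', 'vdsmd']))

# Count open descriptors under /proc/<pid>/fd (needs root).
open_fds = len(os.listdir(f'/proc/{pid}/fd'))

# Parse the "Max open files" row from /proc/<pid>/limits.
with open(f'/proc/{pid}/limits') as f:
    for line in f:
        if line.startswith('Max open files'):
            soft, hard = line.split()[3:5]
            print(f'open={open_fds} soft={soft} hard={hard}')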
Can anyone help?
Sent with ProtonMail Secure Email.
Unable to connect to the Graphic server
by Gangi Reddy
We are receiving the below error while attempting to connect to the SPICE console. There is no block in the network firewall.
Version: 4.4.6.7-1.el8
Error:
Unable to connect to the Graphic server .vv
Automigration of VMs from other hypervisors
by KK CHN
Hi list,
I am in the process of migrating 150+ VMs running on RHEV-M 4.1 to a
KVM-based OpenStack installation (Ussuri, with KVM and Glance as image storage).
What I am doing now: manually shutting down each VM through the RHEV-M GUI,
exporting it to the export domain, scp-ing the image files of each VM to our
OpenStack controller node, uploading them to Glance, and creating each VM manually.
Query 1:
Is there a better way to automate this migration with some utility or scripts?
Has anyone done this kind of automated migration before, and what was your
approach? Or what would a better approach be instead of doing the migration
manually? Or do I have to repeat the process manually for all 150+ virtual
machines? (The guest VMs are CentOS 7 and Red Hat Enterprise Linux 7 with LVM
data partitions attached.)
Kindly share your thoughts.
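For reference, the shutdown/export part of what I do manually could presumably be scripted with the RHV Python SDK; a rough, untested sketch with placeholder credentials and an export domain named 'export1':

import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://rhevm.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)
vms_service = connection.system_service().vms_service()

# Iterate over running VMs, stop each one, and export it.
for vm in vms_service.list(search='status=up'):
    vm_service = vms_service.vm_service(vm.id)
    vm_service.stop()
    # Wait until the VM is actually down before exporting.
    while vm_service.get().status != types.VmStatus.DOWN:
        time.sleep(5)
    # Export to the export storage domain (placeholder name).
    vm_service.export(
        exclusive=True,
        discard_snapshots=True,
        storage_domain=types.StorageDomain(name='export1'),
    )
connection.close()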
Query 2:
Other than these 150+ Red Hat Enterprise Linux 7 and CentOS VMs on RHEV-M 4.1,
I have to migrate 50+ VMs which are hosted on Hyper-V.
What is the method/approach for exporting from Hyper-V and importing into
OpenStack (Ussuri, with Glance and the KVM hypervisor)? (This is the first
time I am going to use Hyper-V; I don't have much idea about exporting from
Hyper-V and importing to KVM.)
Can the images exported from Hyper-V (VHDX disk images, from VMs with a single
disk or multiple disks, max 3) be directly imported into KVM? Does KVM support
this, or do the VHDX disk images need to be converted to some other format?
What should the best approach be for the Hyper-V hosted VMs (Windows 2012
guest machines and Linux guest machines) to be imported into KVM-based
OpenStack (Ussuri, with Glance as image storage)?
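On the VHDX question: qemu-img can read the vhdx format directly, so a conversion step before the Glance upload might look like this (a sketch with placeholder file names, driving qemu-img from Python since the whole batch would be scripted anyway):

import subprocess

# Convert a Hyper-V VHDX disk to qcow2; -p prints progress.
subprocess.run(
    ['qemu-img', 'convert', '-p',
     '-f', 'vhdx',    # source format
     '-O', 'qcow2',   # output format usable by KVM/Glance
     'disk1.vhdx', 'disk1.qcow2'],
    check=True,
)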
Thanks in advance
Kris
Ubuntu 20.04 cloud-init
by Pavel Šipoš
Hi.
We are using oVirt version 4.3.10.4-1.el7.
I am trying to use the cloud-init function in oVirt to set a password and add
SSH keys for a VM on first boot.
It works perfectly on CentOS 7 and CentOS 8, but not with Ubuntu 20.04.2.
On the VM, the cloud-init package and qemu-guest-agent are installed and the
services are running; I checked that and then used Run Once to test further,
but no luck.
It looks like it's not even trying to set anything.
I am open to suggestions on how to debug this. I see no errors if I look
into the /var/log/cloud-init logs on the VM.
Has anyone else had a similar problem?
Can anyone confirm that the cloud-init function in oVirt should work with
Ubuntu 20.04?
Packages used:
cloud-init 21.2-3-g899bfaa9-0ubuntu2~20.04.1
qemu-guest-agent 1:4.2-3ubuntu6.17
I made the oVirt template myself using Packer (qemu), which also uses
cloud-init for bootstrapping.
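For completeness, the Run Once cloud-init path can also be exercised through the Python SDK; a minimal sketch based on the public SDK examples (untested here, with placeholder names, credentials and key):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=ubuntu2004')[0]

# Start the VM once with a cloud-init payload (password + SSH key).
vms_service.vm_service(vm.id).start(
    use_cloud_init=True,
    vm=types.Vm(
        initialization=types.Initialization(
            user_name='root',
            root_password='secret',
            authorized_ssh_keys='ssh-rsa AAAA... user@host',
        ),
    ),
)
connection.close()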
Thank you in advance!
Pavel
--
Pavel Sipos, Arnes <pavel.sipos(a)arnes.si>
ARNES, p.p. 7, SI-1001 Ljubljana, Slovenia
T: +386 1 479 88 00
W: www.arnes.si, aai.arnes.si
Low iSCSI storage space
by Leonardo Costa
Hello.
I'm having a problem that I can't solve.
I have 20 TB of space in oVirt, but on my Dell SCv3000 storage only 5 TB is
free on the volume.
It seems to me that deleting machines does not remove their disks on the
iSCSI storage.
Could anyone help?
Major network issue with 4.4.7
by Andrea Chierici
Dear all,
I've been using oVirt for at least 6 years, and only lately have I run into
a weird problem that I hope someone will be able to help with.
My hardware is:
- Lenovo blades for the hosts, with dual switches
- Dell EqualLogic for iSCSI storage, directly connected to the blade
switches
- The two host network cards are configured with bonding, and all the
VLANs are accessed through it (MTU 9000)
- All the hosts and the oVirt engine have the firewalld service disabled
My engine is hosted on a separate VMware VM (I will evaluate the self-hosted
engine later...). I want to stress the fact that for years this setup worked
smoothly without any significant issue (and all the minor updates were
completed flawlessly).
A few weeks ago I started the update from the rock-solid 4.3 to the latest
4.4.7. I began with the manager, following the docs, installing a new
CentOS 8 VM and importing the backup: everything went smoothly and I was
able to get access to the manager without any problem, all the machines
still there :)
I then began updating the hosts, from CentOS 7 to CentOS 8 Stream, one by one.
Immediately I noticed network issues with the VMs hosted on the first
updated host. Migrating VMs from a CentOS 8 host to another CentOS 8 host
quite often fails, but the main issue is this: *if I start one of the VMs on
a CentOS 8 host, it has no network connectivity. If I migrate it to a
CentOS 7 host the network starts to work, and if I migrate the VM back to
the CentOS 8 host, the network keeps working.*
I am puzzled and can't understand what's going on. Generally speaking, all
the CentOS 8 hosts (I have 6 in my cluster, and now 3 are CentOS 8 while
the rest are still CentOS 7) seem to be very unstable, meaning that the VMs
they host quite often show network issues and temporary glitches.
Can someone give a hint on how to solve this weird issue?
Thanks,
Andrea
--
Andrea Chierici - INFN-CNAF
Viale Berti Pichat 6/2, 40127 BOLOGNA
Office Tel: +39 051 2095463
SkypeID ataruz
--
Live merge of snapshots failed
by g.vasilopoulos@uoc.gr
Hello
I have a situation with a VM in which I cannot delete the snapshot.
The whole thing is quite strange, because I can delete the snapshot when I create and delete it from the web interface, but when I do it with a Python script through the API it fails.
The script does create snapshot -> download snapshot -> delete snapshot, and I used the examples from the oVirt Python SDK on GitHub to create it; in general it works pretty well.
But on one specific machine (so far) it cannot delete the live snapshot.
oVirt is 4.3.10 and the guest is a Windows 10 PC. The Windows 10 guest has 2 disks attached, both on different FC domains: one on an SSD EMC array and the other on an HDD EMC array. Both disks are preallocated.
I cannot figure out what the problem is so far.
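The delete step in my script follows the SDK examples and looks roughly like this (simplified and untested as shown; credentials are placeholders):

import time
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=anova.admin.uoc.gr')[0]
snapshots_service = vms_service.vm_service(vm.id).snapshots_service()

for snap in snapshots_service.list():
    if snap.description == 'anova.admin.uoc.gr-2021-08-03':
        snapshots_service.snapshot_service(snap.id).remove()
        # The live merge is asynchronous: poll until the engine drops the
        # snapshot from the list before moving on.
        while any(s.id == snap.id for s in snapshots_service.list()):
            time.sleep(10)
connection.close()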
The related engine log:
2021-08-03 15:51:00,385+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-61) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' (id: '80dc4609-b91f-4e93-bc12-7b2083933e5a') waiting on child command id: '74c83880-581b-4774-ae51-8c4af0c92c53' type:'Merge' to complete
2021-08-03 15:51:00,385+03 INFO [org.ovirt.engine.core.bll.MergeCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-61) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Waiting on merge command to complete (jobId = 62bf8c83-cd78-42a5-b57d-d67ddfdee8ee)
2021-08-03 15:51:00,387+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-61) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' (id: '87bc90c7-2aa5-4a1b-b58c-54296518658a') waiting on child command id: 'ec806ac6-929f-42d9-a86e-98d6a39a4718' type:'Merge' to complete
2021-08-03 15:51:01,388+03 INFO [org.ovirt.engine.core.bll.MergeCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-30) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Waiting on merge command to complete (jobId = c57fb3e5-da20-4838-8db3-31655ba76c1f)
2021-08-03 15:51:07,491+03 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-38) [b929fd4a-8ce7-408f-927d-ab0169879c4e] Command 'MoveImageGroup' (id: '1de1b800-873f-405f-805b-f44397740909') waiting on child command id: 'd1136344-2888-4d63-8fe1-b506426bc8aa' type:'CopyImageGroupWithData' to complete
2021-08-03 15:51:11,513+03 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-41) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshot' (id: '04e9d61e-28a2-4ab0-9bb7-5c805ee871e9') waiting on child command id: '87bc90c7-2aa5-4a1b-b58c-54296518658a' type:'RemoveSnapshotSingleDiskLive' to complete
2021-08-03 15:51:12,522+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-76) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' (id: '80dc4609-b91f-4e93-bc12-7b2083933e5a') waiting on child command id: '74c83880-581b-4774-ae51-8c4af0c92c53' type:'Merge' to complete
2021-08-03 15:51:12,523+03 INFO [org.ovirt.engine.core.bll.MergeCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-76) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Waiting on merge command to complete (jobId = 62bf8c83-cd78-42a5-b57d-d67ddfdee8ee)
2021-08-03 15:51:12,527+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-76) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' (id: '87bc90c7-2aa5-4a1b-b58c-54296518658a') waiting on child command id: 'ec806ac6-929f-42d9-a86e-98d6a39a4718' type:'Merge' to complete
2021-08-03 15:51:13,528+03 INFO [org.ovirt.engine.core.bll.MergeCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-37) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Waiting on merge command to complete (jobId = c57fb3e5-da20-4838-8db3-31655ba76c1f)
2021-08-03 15:51:21,635+03 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-58) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshot' (id: '04e9d61e-28a2-4ab0-9bb7-5c805ee871e9') waiting on child command id: '87bc90c7-2aa5-4a1b-b58c-54296518658a' type:'RemoveSnapshotSingleDiskLive' to complete
2021-08-03 15:51:22,655+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-31) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' (id: '80dc4609-b91f-4e93-bc12-7b2083933e5a') waiting on child command id: '74c83880-581b-4774-ae51-8c4af0c92c53' type:'Merge' to complete
2021-08-03 15:51:22,661+03 INFO [org.ovirt.engine.core.bll.MergeCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-31) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Merge command (jobId = 62bf8c83-cd78-42a5-b57d-d67ddfdee8ee) has completed for images '7611ebcf-5323-45ca-b16c-9302d0bdedc6'..'17618ba1-4ab8-49eb-a991-fc3d602ced14'
2021-08-03 15:51:22,664+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-31) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' (id: '87bc90c7-2aa5-4a1b-b58c-54296518658a') waiting on child command id: 'ec806ac6-929f-42d9-a86e-98d6a39a4718' type:'Merge' to complete
2021-08-03 15:51:23,664+03 INFO [org.ovirt.engine.core.bll.MergeCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-41) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Waiting on merge command to complete (jobId = c57fb3e5-da20-4838-8db3-31655ba76c1f)
2021-08-03 15:51:24,672+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-6) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Executing Live Merge command step 'MERGE_STATUS'
2021-08-03 15:51:24,699+03 INFO [org.ovirt.engine.core.bll.MergeStatusCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-1) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Running command: MergeStatusCommand internal: true. Entities affected : ID: 96000ec9-e181-44eb-893f-e0a36e3a6775 Type: Storage
2021-08-03 15:51:24,749+03 INFO [org.ovirt.engine.core.bll.MergeStatusCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-1) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Successfully removed volume 17618ba1-4ab8-49eb-a991-fc3d602ced14 from the chain
2021-08-03 15:51:24,749+03 INFO [org.ovirt.engine.core.bll.MergeStatusCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-1) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Volume merge type 'COMMIT'
2021-08-03 15:51:25,691+03 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-5) [b929fd4a-8ce7-408f-927d-ab0169879c4e] Command 'CopyImageGroupWithData' (id: 'd1136344-2888-4d63-8fe1-b506426bc8aa') waiting on child command id: 'b40aeec9-b7cf-4ee5-9683-54d98f4307d5' type:'CopyImageGroupVolumesData' to complete
2021-08-03 15:51:26,692+03 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-35) [b929fd4a-8ce7-408f-927d-ab0169879c4e] Command 'CopyImageGroupVolumesData' (id: 'b40aeec9-b7cf-4ee5-9683-54d98f4307d5') waiting on child command id: 'f46d65cb-32d9-4269-982e-6e19331b8a27' type:'CopyData' to complete
2021-08-03 15:51:26,711+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-35) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Executing Live Merge command step 'DESTROY_IMAGE'
2021-08-03 15:51:26,726+03 INFO [org.ovirt.engine.core.bll.storage.disk.image.DestroyImageCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-10) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Running command: DestroyImageCommand internal: true. Entities affected : ID: 96000ec9-e181-44eb-893f-e0a36e3a6775 Type: Storage
2021-08-03 15:51:26,747+03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DestroyImageVDSCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-10) [3bf9345d-fab2-490f-ba44-6aa014bbb743] START, DestroyImageVDSCommand( DestroyImageVDSCommandParameters:{storagePoolId='5da76866-7b7d-11eb-9913-00163e1f2643', ignoreFailoverLimit='false', storageDomainId='96000ec9-e181-44eb-893f-e0a36e3a6775', imageGroupId='205a30a3-fc06-4ceb-8ef2-018f16d4ccbb', imageId='00000000-0000-0000-0000-000000000000', imageList='[17618ba1-4ab8-49eb-a991-fc3d602ced14]', postZero='false', force='false'}), log id: 307a3fb1
2021-08-03 15:51:26,834+03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DestroyImageVDSCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-10) [3bf9345d-fab2-490f-ba44-6aa014bbb743] FINISH, DestroyImageVDSCommand, return: , log id: 307a3fb1
2021-08-03 15:51:26,846+03 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-commandCoordinator-Thread-10) [3bf9345d-fab2-490f-ba44-6aa014bbb743] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command 'baaa5254-261c-452a-84b4-0a8b397cdb62'
2021-08-03 15:51:26,847+03 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (EE-ManagedThreadFactory-commandCoordinator-Thread-10) [3bf9345d-fab2-490f-ba44-6aa014bbb743] CommandMultiAsyncTasks::attachTask: Attaching task 'ae2e80a4-d224-4fc3-a84b-859042811525' to command 'baaa5254-261c-452a-84b4-0a8b397cdb62'.
2021-08-03 15:51:26,857+03 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedThreadFactory-commandCoordinator-Thread-10) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Adding task 'ae2e80a4-d224-4fc3-a84b-859042811525' (Parent Command 'DestroyImage', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling hasn't started yet..
2021-08-03 15:51:26,858+03 INFO [org.ovirt.engine.core.bll.storage.disk.image.DestroyImageCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-10) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Successfully started task to remove orphaned volumes
2021-08-03 15:51:26,863+03 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-commandCoordinator-Thread-10) [3bf9345d-fab2-490f-ba44-6aa014bbb743] BaseAsyncTask::startPollingTask: Starting to poll task 'ae2e80a4-d224-4fc3-a84b-859042811525'.
2021-08-03 15:51:26,863+03 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-commandCoordinator-Thread-10) [3bf9345d-fab2-490f-ba44-6aa014bbb743] BaseAsyncTask::startPollingTask: Starting to poll task 'ae2e80a4-d224-4fc3-a84b-859042811525'.
2021-08-03 15:51:28,799+03 INFO [org.ovirt.engine.core.bll.storage.disk.image.DestroyImageCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-88) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Waiting on destroy image command to complete the task (taskId = ae2e80a4-d224-4fc3-a84b-859042811525)
2021-08-03 15:51:30,805+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-48) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' (id: '80dc4609-b91f-4e93-bc12-7b2083933e5a') waiting on child command id: 'baaa5254-261c-452a-84b4-0a8b397cdb62' type:'DestroyImage' to complete
2021-08-03 15:51:31,824+03 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-97) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshot' (id: '04e9d61e-28a2-4ab0-9bb7-5c805ee871e9') waiting on child command id: '87bc90c7-2aa5-4a1b-b58c-54296518658a' type:'RemoveSnapshotSingleDiskLive' to complete
2021-08-03 15:51:32,825+03 INFO [org.ovirt.engine.core.bll.storage.disk.image.DestroyImageCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-33) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Waiting on destroy image command to complete the task (taskId = ae2e80a4-d224-4fc3-a84b-859042811525)
2021-08-03 15:51:32,831+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-33) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' (id: '87bc90c7-2aa5-4a1b-b58c-54296518658a') waiting on child command id: 'ec806ac6-929f-42d9-a86e-98d6a39a4718' type:'Merge' to complete
2021-08-03 15:51:33,594+03 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-1481337) [3bf9345d-fab2-490f-ba44-6aa014bbb743] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'DestroyImage' completed, handling the result.
2021-08-03 15:51:33,594+03 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-1481337) [3bf9345d-fab2-490f-ba44-6aa014bbb743] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'DestroyImage' succeeded, clearing tasks.
2021-08-03 15:51:33,594+03 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-1481337) [3bf9345d-fab2-490f-ba44-6aa014bbb743] SPMAsyncTask::ClearAsyncTask: Attempting to clear task 'ae2e80a4-d224-4fc3-a84b-859042811525'
2021-08-03 15:51:33,595+03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-1481337) [3bf9345d-fab2-490f-ba44-6aa014bbb743] START, SPMClearTaskVDSCommand( SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='5da76866-7b7d-11eb-9913-00163e1f2643', ignoreFailoverLimit='false', taskId='ae2e80a4-d224-4fc3-a84b-859042811525'}), log id: 6225ae22
2021-08-03 15:51:33,596+03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-1481337) [3bf9345d-fab2-490f-ba44-6aa014bbb743] START, HSMClearTaskVDSCommand(HostName = ovirt2-7.vmmgmt-int.uoc.gr, HSMTaskGuidBaseVDSCommandParameters:{hostId='10599b78-5f45-48d2-bfe0-028f3dae69eb', taskId='ae2e80a4-d224-4fc3-a84b-859042811525'}), log id: 45b18a8a
2021-08-03 15:51:33,611+03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-1481337) [3bf9345d-fab2-490f-ba44-6aa014bbb743] FINISH, HSMClearTaskVDSCommand, return: , log id: 45b18a8a
2021-08-03 15:51:33,611+03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-1481337) [3bf9345d-fab2-490f-ba44-6aa014bbb743] FINISH, SPMClearTaskVDSCommand, return: , log id: 6225ae22
2021-08-03 15:51:33,614+03 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-1481337) [3bf9345d-fab2-490f-ba44-6aa014bbb743] BaseAsyncTask::removeTaskFromDB: Removed task 'ae2e80a4-d224-4fc3-a84b-859042811525' from DataBase
2021-08-03 15:51:33,614+03 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-1481337) [3bf9345d-fab2-490f-ba44-6aa014bbb743] CommandAsyncTask::HandleEndActionResult [within thread]: Removing CommandMultiAsyncTasks object for entity 'baaa5254-261c-452a-84b4-0a8b397cdb62'
2021-08-03 15:51:33,839+03 INFO [org.ovirt.engine.core.bll.MergeCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-95) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Merge command (jobId = c57fb3e5-da20-4838-8db3-31655ba76c1f) has completed for images '84c005da-cbec-4ace-8619-5a8e2ae5ea75'..'b43b7c33-5b53-4332-a2e0-f950debb919b'
2021-08-03 15:51:34,847+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-31) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Executing Live Merge command step 'MERGE_STATUS'
2021-08-03 15:51:34,917+03 ERROR [org.ovirt.engine.core.bll.MergeStatusCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-2) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Failed to live merge. Top volume b43b7c33-5b53-4332-a2e0-f950debb919b is still in qemu chain [b43b7c33-5b53-4332-a2e0-f950debb919b, 84c005da-cbec-4ace-8619-5a8e2ae5ea75]
2021-08-03 15:51:35,866+03 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-41) [b929fd4a-8ce7-408f-927d-ab0169879c4e] Command 'CopyImageGroupWithData' (id: 'd1136344-2888-4d63-8fe1-b506426bc8aa') waiting on child command id: 'b40aeec9-b7cf-4ee5-9683-54d98f4307d5' type:'CopyImageGroupVolumesData' to complete
2021-08-03 15:51:36,867+03 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-50) [b929fd4a-8ce7-408f-927d-ab0169879c4e] Command 'CopyImageGroupVolumesData' (id: 'b40aeec9-b7cf-4ee5-9683-54d98f4307d5') waiting on child command id: 'f46d65cb-32d9-4269-982e-6e19331b8a27' type:'CopyData' to complete
2021-08-03 15:51:36,873+03 ERROR [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-50) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command id: '87bc90c7-2aa5-4a1b-b58c-54296518658a failed child command status for step 'MERGE_STATUS'
2021-08-03 15:51:36,873+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-50) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' id: '87bc90c7-2aa5-4a1b-b58c-54296518658a' child commands '[a7de75fe-e94c-4795-9310-2c8fc3d6d3fc, ec806ac6-929f-42d9-a86e-98d6a39a4718, 6443529c-d753-48f6-8a9e-af1f9f09dfb5]' executions were completed, status 'FAILED'
2021-08-03 15:51:37,982+03 ERROR [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-22) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Merging of snapshot '1cd4985b-b0c0-40d6-bbde-54451e43bef6' images '84c005da-cbec-4ace-8619-5a8e2ae5ea75'..'b43b7c33-5b53-4332-a2e0-f950debb919b' failed. Images have been marked illegal and can no longer be previewed or reverted to. Please retry Live Merge on the snapshot to complete the operation.
2021-08-03 15:51:37,985+03 ERROR [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-22) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Ending command 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand' with failure.
2021-08-03 15:51:39,014+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-23) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Executing Live Merge command step 'REDUCE_IMAGE'
2021-08-03 15:51:39,029+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-23) [3bf9345d-fab2-490f-ba44-6aa014bbb743] No need to execute reduce image command, skipping its execution. Storage Type: 'FCP', Disk: 'anova.admin.uoc.gr_Disk2' Snapshot: 'anova.admin.uoc.gr-2021-08-03'
2021-08-03 15:51:39,034+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-23) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshotSingleDiskLive' id: '80dc4609-b91f-4e93-bc12-7b2083933e5a' child commands '[18653ea6-166c-41a3-b335-84525871b9e6, 74c83880-581b-4774-ae51-8c4af0c92c53, 92e92d8e-b099-47dd-ba8c-e3db907f9a62, baaa5254-261c-452a-84b4-0a8b397cdb62]' executions were completed, status 'SUCCEEDED'
2021-08-03 15:51:39,040+03 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-23) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshot' (id: '04e9d61e-28a2-4ab0-9bb7-5c805ee871e9') waiting on child command id: '80dc4609-b91f-4e93-bc12-7b2083933e5a' type:'RemoveSnapshotSingleDiskLive' to complete
2021-08-03 15:51:40,046+03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-96) [3bf9345d-fab2-490f-ba44-6aa014bbb743] START, GetImageInfoVDSCommand( GetImageInfoVDSCommandParameters:{storagePoolId='5da76866-7b7d-11eb-9913-00163e1f2643', ignoreFailoverLimit='false', storageDomainId='96000ec9-e181-44eb-893f-e0a36e3a6775', imageGroupId='205a30a3-fc06-4ceb-8ef2-018f16d4ccbb', imageId='7611ebcf-5323-45ca-b16c-9302d0bdedc6'}), log id: 4c98e4db
2021-08-03 15:51:40,047+03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-96) [3bf9345d-fab2-490f-ba44-6aa014bbb743] START, GetVolumeInfoVDSCommand(HostName = ovirt2-7.vmmgmt-int.uoc.gr, GetVolumeInfoVDSCommandParameters:{hostId='10599b78-5f45-48d2-bfe0-028f3dae69eb', storagePoolId='5da76866-7b7d-11eb-9913-00163e1f2643', storageDomainId='96000ec9-e181-44eb-893f-e0a36e3a6775', imageGroupId='205a30a3-fc06-4ceb-8ef2-018f16d4ccbb', imageId='7611ebcf-5323-45ca-b16c-9302d0bdedc6'}), log id: 4aca5f83
2021-08-03 15:51:40,084+03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-96) [3bf9345d-fab2-490f-ba44-6aa014bbb743] FINISH, GetVolumeInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage@70097e6, log id: 4aca5f83
2021-08-03 15:51:40,084+03 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-96) [3bf9345d-fab2-490f-ba44-6aa014bbb743] FINISH, GetImageInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage@70097e6, log id: 4c98e4db
2021-08-03 15:51:40,283+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-96) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Successfully merged snapshot '1cd4985b-b0c0-40d6-bbde-54451e43bef6' images '17618ba1-4ab8-49eb-a991-fc3d602ced14'..'7611ebcf-5323-45ca-b16c-9302d0bdedc6'
2021-08-03 15:51:40,287+03 INFO [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-96) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Ending command 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand' successfully.
2021-08-03 15:51:40,296+03 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-96) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Command 'RemoveSnapshot' id: '04e9d61e-28a2-4ab0-9bb7-5c805ee871e9' child commands '[87bc90c7-2aa5-4a1b-b58c-54296518658a, 80dc4609-b91f-4e93-bc12-7b2083933e5a]' executions were completed, status 'FAILED'
2021-08-03 15:51:41,322+03 ERROR [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-65) [3bf9345d-fab2-490f-ba44-6aa014bbb743] Ending command 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand' with failure.
2021-08-03 15:51:41,353+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-65) [3bf9345d-fab2-490f-ba44-6aa014bbb743] EVENT_ID: USER_REMOVE_SNAPSHOT_FINISHED_FAILURE(357), Failed to delete snapshot 'anova.admin.uoc.gr-2021-08-03' for VM 'anova.admin.uoc.gr'.
Any help would be highly appreciated