02 Dec '19
Dear all,
after an admin rebooted two of our three GlusterFS servers (and thus caused a
loss of quorum), we've got strange problems:
1. The hosted engine and one export work perfectly fine. (Same three
servers!)
2. Another export (the main one) seems to be fine within GlusterFS itself.
But: when we activate this domain, most of the hosts go "non
operational" with the subject's error message. As soon as we
deactivate this domain, all those hosts come back online. The strange
thing: the GlusterFS export seems to be mounted on the SPM.
Does anyone know what could have happened and how to fix that?
Kind regards
Daniel
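A first sanity check in a situation like this is the Gluster-side health of the affected volume; a minimal sketch, where the volume name `main-export` is an assumption:

```shell
# Check that all bricks and the self-heal daemon are online
gluster volume status main-export
# Look for unhealed entries left over from the quorum loss
gluster volume heal main-export info summary
# Inspect the server-side quorum setting that the reboot tripped over
gluster volume get main-export cluster.server-quorum-type
```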
Dear Community,
I have installed oVirt Engine 4.3 and oVirt Node 4.3. The node was successfully added to the engine and the host network setup is also done. When trying to ping from a VM that uses "ovirtmgmt" as its vNIC profile, it is not even able to ping its own host or any other machine on that same network. I also added a VLAN network that is passed via the same uplink of the node interface where "ovirtmgmt" is passed; that is not working either.
Also, this vnet's vNIC type is VirtIO and its state shows "UNKNOWN"; could that be a problem?
Any help would be highly appreciated.
Thanks
Vijay Sachdeva
Senior Manager – Service Delivery
IndiQus Technologies
O +91 11 4055 1411 | M +91 8826699409
www.indiqus.com
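A few host-side checks that may narrow this down; a sketch assuming the bridge is named ovirtmgmt and the VM's tap device is vnet0 (both assumptions):

```shell
# Confirm the VM's tap device is attached to the ovirtmgmt bridge
ip link show master ovirtmgmt
# Note: operstate "UNKNOWN" is normal for tap (vnet) interfaces and
# does not by itself indicate a problem
ip link show vnet0
# Watch whether the VM's ARP requests actually reach the bridge
tcpdump -ni ovirtmgmt arp
```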
01 Dec '19
I am seeing more successes than failures at creating single- and triple-node hyperconverged setups after some weeks of experimentation, so I am branching out to additional features: in this case, the ability to use SSDs as cache media for hard disks.
I first tried a single node that combined caching and compression, and that fails during the creation of the LVs.
I tried again without VDO compression, but the results were identical, while VDO compression without the LV cache worked fine.
I tried various combinations, using less space etc., but the results are always the same and unfortunately rather cryptic (I substituted the physical disk label with {disklabel}):
TASK [gluster.infra/roles/backend_setup : Extend volume group] *****************
failed: [{hostname}] (item={u'vgname': u'gluster_vg_{disklabel}p1', u'cachethinpoolname': u'gluster_thinpool_gluster_vg_{disklabel}p1', u'cachelvname': u'cachelv_gluster_thinpool_gluster_vg_{disklabel}p1', u'cachedisk': u'/dev/sda4', u'cachemetalvname': u'cache_gluster_thinpool_gluster_vg_{disklabel}p1', u'cachemode': u'writeback', u'cachemetalvsize': u'70G', u'cachelvsize': u'630G'}) => {"ansible_loop_var": "item", "changed": false, "err": " Physical volume \"/dev/mapper/vdo_{disklabel}p1\" still in use\n", "item": {"cachedisk": "/dev/sda4", "cachelvname": "cachelv_gluster_thinpool_gluster_vg_{disklabel}p1", "cachelvsize": "630G", "cachemetalvname": "cache_gluster_thinpool_gluster_vg_{disklabel}p1", "cachemetalvsize": "70G", "cachemode": "writeback", "cachethinpoolname": "gluster_thinpool_gluster_vg_{disklabel}p1", "vgname": "gluster_vg_{disklabel}p1"}, "msg": "Unable to reduce gluster_vg_{disklabel}p1 by /dev/dm-15.", "rc": 5}
Somewhere within that I see something that points to a race condition ("still in use").
Unfortunately I have not been able to pinpoint the raw logs used at that stage, so I could not obtain more information.
At this point quite a bit of the storage setup is already done, so rolling back for a clean new attempt can be complicated, with reboots needed to reconcile the kernel with the data on disk.
I don't actually believe it's related to single node, and I'd be quite happy to move the creation of the SSD cache to a later stage, but in a VDO setup this looks slightly complex to someone without intimate knowledge of LVs-with-cache-and-perhaps-thin/VDO/Gluster all thrown into one.
Needless to say, the feature set (SSD caching & compression/dedup) sounds terribly attractive, but when things don't just work, it's more terrifying.
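The "still in use" message can at least be chased by hand before re-running the playbook; a sketch, with {disklabel} kept as the placeholder used above:

```shell
# List device-mapper targets and their open counts
dmsetup info -c
# See what stacks on top of the VDO device that LVM claims is busy
lsblk /dev/mapper/vdo_{disklabel}p1
# The error names /dev/dm-15; its holders show which device keeps it open
ls /sys/block/dm-15/holders/
```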
Hi List,
In one of my installs, I set up the first storage domain (and where
the HostedEngine is) on a bigger NFS NAS - since then I have created a
Gluster volume that spans the 3 hosts and I'm putting a few VM's in
there for higher reliability (as SAN is single point of failure) namely
I'd like to put HostedEngine in there so it stays up no matter what and
can help report if issues occur (network issue to NAS, NAS dies etc etc)
Looking through other posts and documentation, there's no real way to
move the HostedEngine storage, is this correct? The solution I've seen
is to backup the hosted engine DB, blow it away, and re-deploy it from
the .backup file configuring it to the new storage domain in the deploy
script - is this the only process? How likely is this to fail? Is it
likely that all VM's and settings will be picked straight back up and
continue to operate like normal? I don't have a test setup to play around
with at the moment, so I'm just trying to gauge confidence in such a solution.
Thanks,
Joe
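For what it's worth, the backup-and-redeploy path described above looks roughly like this in 4.3; a sketch, not a tested procedure, with file names as placeholders:

```shell
# On the engine VM: take a full engine backup
engine-backup --mode=backup --file=engine.backup --log=engine-backup.log
# Put the cluster in global maintenance before touching the engine
hosted-engine --set-maintenance --mode=global
# On the target host: redeploy, restoring from the backup file
hosted-engine --deploy --restore-from-file=engine.backup
```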
Hello,
What would be the procedure to migrate a VM from oVirt to oVirt?
Migrate from oVirt 4.2 running on Site A to oVirt 4.3 Site B.
thanks!
The HostedEngine does it automatically, but the option that defines the OVF refresh interval is not accurate.
Just power it up without that option (cluster in maintenance) and keep it running for a day.
On the next day power it off and try to power it up via hosted-engine --vm-start.
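The interval in question is an engine-config value; a sketch, assuming the option name OvfUpdateIntervalInMinutes applies to your version:

```shell
# Show the current OVF update interval (minutes)
engine-config -g OvfUpdateIntervalInMinutes
# Lower it, then restart the engine for the change to take effect
engine-config -s OvfUpdateIntervalInMinutes=10
systemctl restart ovirt-engine
```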
Best Regards,
Strahil Nikolov

On Nov 28, 2019 11:54, Ralf Schenk <rs(a)databay.de> wrote:
>
> Hello,
>
> I did something like that via "virsh edit HostedEngine".
>
> But how is the change written back to the shared storage ("hosted_storage") so it stays permanent for the HA engine?
>
> I was able to boot up HostedEngine manually via virsh start after removing the required flag from XML (I first added a user to sasldb in /etc/libvirt/passwd.db to be able to log into libvirt).
>
> Bye
>
>
> Am 28.11.2019 um 05:51 schrieb Strahil:
>
> Hi Ralf,
> When the deployment fail - you can dump the xml from virsh , edit it, undefine the current HostedEngine and define your modified HostedEngine 's xml.
>
> Once you do that, you can try to start the
> VM.
>
> Good luck.
>
> Best Regards,
> Strahil Nikolov
>
> On Nov 27, 2019 18:28, Ralf Schenk <rs(a)databay.de> wrote:
>>
>> Hello,
>>
>> This week I tried to deploy Hosted Engine on Ovirt-Node-NG 4.3.7 based Host.
>>
>> At the time the locally deployed Engine is copied to hosted-storage (in my case NFS) and deployment tries to start the Engine (via ovirt-ha-agent) this fails.
>>
>> QEMU log (/var/log/libvirt/qemu/HostedEngine.log) only shows "2019-11-27 16:17:16.833+0000: shutting down, reason=failed".
>>
>> Researching the cause shows: the built libvirt VM XML requires the feature "virt-ssbd", which is simply not there.
>>
>> From VM XML:
>>
>> <cpu mode='custom' match='exact' check='partial'>
>> <model fallback='allow'>EPYC</model>
>> <topology sockets='16' cores='4' threads='1'/>
>> <feature policy='require' name='ibpb'/>
>> <feature policy='require' name='virt-ssbd'/>
>>
>> from cat /proc/cpuinfo:
>>
>> processor : 47
>> vendor_id : AuthenticAMD
>> cpu family : 23
>> model : 49
>> model name : AMD EPYC 7402P 24-Core Processor
>> stepping : 0
>> microcode : 0x830101c
>> cpu MHz : 2800.000
>> cache size : 512 KB
>> physical id : 0
>> siblings : 48
>> core id : 30
>> cpu cores : 24
>> api
Check on host 'node3' in the network configuration whether you have a network out of sync or any clues about the error.
Best Regards,
Strahil Nikolov

On Nov 28, 2019 11:44, Kim Kargaard <Kim.Kargaard(a)noroff.no> wrote:
>
> Hi,
>
> I am getting the attached error when trying to move the Display from the one network to the other within the cluster. I can't see any place to set an IP for the vnic that I want to set as the display network.
>
> Any thoughts?
>
> Kim
>
> On 28/11/2019, 06:41, "Strahil" <hunter86_bg(a)yahoo.com> wrote:
>
> As far as I know, the engine plays the role of a proxy during the establishment of the connection.
> Check that you can reach both engine and the host from your system.
>
> For the same reason, I use noVNC - as you just need a single port to the engine in addition to the rest of the settings.
>
> Best Regards,
> Strahil Nikolov
>
> On Nov 27, 2019 11:27, kim.kargaard(a)noroff.no wrote:
> >
> > Hi,
> >
> > When trying to connect from a remote network on the spice console to a VM, I get the following error:
> >
> > (remote-viewer:80195): virt-viewer-WARNING **: 11:05:22.322: Channel error: Could not connect to proxy server xx.xx.xx.xx: Socket I/O timed out
> >
> > I found that the display is set to the management network and not the VM networkn in the cluster logical network. However, when I try to set the other vlan to be the display network, I get the following error:
> >
> > Error while executing action: Cannot edit Network. IP address has to be set for the NIC that bears a role network. Network: student-vlan100, Nic: p2p1.100 on host node3 violates that rule.
> >
> > I am not sure what this means. Any ideas?
> >
> > Kind regards
> >
> > Kim
> > _______________________________________________
> > Users mailing list -- users(a)ovirt.org
> > To unsubscribe send an email to users-leave(a)ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> > List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/6YEI3I4NCAHQO…
>
>
I have removed the entire hosted-engine deployment by running the following commands:
ovirt-hosted-engine-cleanup
vdsm-tool configure --force
systemctl restart libvirtd
systemctl restart vdsm
On my hosts I have the following; ovirt1 is the host where I ran the hosted-engine setup.
I have set the Gluster network to use the same subnet and set up forward and reverse DNS for the Gluster network NICs.
I had this working using a separate subnet, but thought to try it on the same subnet to avoid any issues that may have occurred while using a separate network.
The main host IP address is still showing in Unmanaged Connections on ovirt1. Is this anything to be concerned about after running the commands above?
I have restarted all machines.
All come back with these VDSM errors...
Node 1
[root@ovirt1 ~]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Thu 2019-11-28 13:29:40 UTC; 37min ago
Process: 31178 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh --post-stop (code=exited, status=0/SUCCESS)
Process: 30721 ExecStart=/usr/share/vdsm/daemonAdapter -0 /dev/null -1 /dev/null -2 /dev/null /usr/share/vdsm/vdsmd (code=exited, status=0/SUCCESS)
Main PID: 30721 (code=exited, status=0/SUCCESS)
Nov 26 22:42:49 ovirt1.kvm.private vdsm[30721]: WARN MOM not available, KSM stats will be missing.
Nov 26 22:42:49 ovirt1.kvm.private vdsm[30721]: WARN Not ready yet, ignoring event '|virt|VM_status|871ce9d5-417a-4278-8446-28b681760c1b' args={'871ce9d5-417a-4278-8446-28b681760c1b': {'status': 'Poweri...
Nov 28 13:28:43 ovirt1.kvm.private vdsm[30721]: WARN File: /var/run/vdsm/trackedInterfaces/eno2 already removed
Nov 28 13:29:26 ovirt1.kvm.private vdsm[30721]: WARN File: /var/lib/libvirt/qemu/channels/871ce9d5-417a-4278-8446-28b681760c1b.com.redhat.rhevm.vdsm already removed
Nov 28 13:29:26 ovirt1.kvm.private vdsm[30721]: WARN File: /var/lib/libvirt/qemu/channel/target/domain-1-HostedEngineLocal/org.qemu.guest_agent.0 already removed
Nov 28 13:29:39 ovirt1.kvm.private vdsm[30721]: WARN MOM not available.
Nov 28 13:29:39 ovirt1.kvm.private vdsm[30721]: WARN MOM not available, KSM stats will be missing.
Nov 28 13:29:39 ovirt1.kvm.private systemd[1]: Stopping Virtual Desktop Server Manager...
Nov 28 13:29:39 ovirt1.kvm.private vdsmd_init_common.sh[31178]: vdsm: Running run_final_hooks
Nov 28 13:29:40 ovirt1.kvm.private systemd[1]: Stopped Virtual Desktop Server Manager.
Hint: Some lines were ellipsized, use -l to show in full.
[root@ovirt1 ~]# nodectl check
Status: WARN
Bootloader ... OK
Layer boot entries ... OK
Valid boot entries ... OK
Mount points ... OK
Separate /var ... OK
Discard is used ... OK
Basic storage ... OK
Initialized VG ... OK
Initialized Thin Pool ... OK
Initialized LVs ... OK
Thin storage ... OK
Checking available space in thinpool ... OK
Checking thinpool auto-extend ... OK
vdsmd ... BAD
NODE 2
[root@ovirt2 ~]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2019-11-28 13:57:30 UTC; 1min 13s ago
Process: 3626 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
Main PID: 5418 (vdsmd)
Tasks: 38
CGroup: /system.slice/vdsmd.service
└─5418 /usr/bin/python2 /usr/share/vdsm/vdsmd
Nov 28 13:57:30 ovirt2.kvm.private vdsmd_init_common.sh[3626]: libvirt: Network Filter Driver error : Network filter not found: no nwfilter with matching name 'vdsm-no-mac-spoofing'
Nov 28 13:57:30 ovirt2.kvm.private vdsmd_init_common.sh[3626]: vdsm: Running dummybr
Nov 28 13:57:30 ovirt2.kvm.private vdsmd_init_common.sh[3626]: vdsm: Running tune_system
Nov 28 13:57:30 ovirt2.kvm.private vdsmd_init_common.sh[3626]: vdsm: Running test_space
Nov 28 13:57:30 ovirt2.kvm.private vdsmd_init_common.sh[3626]: vdsm: Running test_lo
Nov 28 13:57:30 ovirt2.kvm.private systemd[1]: Started Virtual Desktop Server Manager.
Nov 28 13:57:32 ovirt2.kvm.private vdsm[5418]: WARN File: /var/run/vdsm/trackedInterfaces/eno1 already removed
Nov 28 13:57:32 ovirt2.kvm.private vdsm[5418]: WARN File: /var/run/vdsm/trackedInterfaces/eno2 already removed
Nov 28 13:57:32 ovirt2.kvm.private vdsm[5418]: WARN MOM not available.
Nov 28 13:57:32 ovirt2.kvm.private vdsm[5418]: WARN MOM not available, KSM stats will be missing.
[root@ovirt2 ~]# nodectl check
Status: OK
Bootloader ... OK
Layer boot entries ... OK
Valid boot entries ... OK
Mount points ... OK
Separate /var ... OK
Discard is used ... OK
Basic storage ... OK
Initialized VG ... OK
Initialized Thin Pool ... OK
Initialized LVs ... OK
Thin storage ... OK
Checking available space in thinpool ... OK
Checking thinpool auto-extend ... OK
vdsmd ... OK
[root@ovirt2 ~]#
Node 3
[root@ovirt3 ~]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2019-11-28 13:57:08 UTC; 12min ago
Process: 2849 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
Main PID: 3220 (vdsmd)
Tasks: 38
CGroup: /system.slice/vdsmd.service
└─3220 /usr/bin/python2 /usr/share/vdsm/vdsmd
Nov 28 13:57:07 ovirt3.kvm.private vdsmd_init_common.sh[2849]: vdsm: Running nwfilter
Nov 28 13:57:07 ovirt3.kvm.private vdsmd_init_common.sh[2849]: vdsm: Running dummybr
Nov 28 13:57:08 ovirt3.kvm.private vdsmd_init_common.sh[2849]: vdsm: Running tune_system
Nov 28 13:57:08 ovirt3.kvm.private vdsmd_init_common.sh[2849]: vdsm: Running test_space
Nov 28 13:57:08 ovirt3.kvm.private vdsmd_init_common.sh[2849]: vdsm: Running test_lo
Nov 28 13:57:08 ovirt3.kvm.private systemd[1]: Started Virtual Desktop Server Manager.
Nov 28 13:57:09 ovirt3.kvm.private vdsm[3220]: WARN File: /var/run/vdsm/trackedInterfaces/eno1 already removed
Nov 28 13:57:09 ovirt3.kvm.private vdsm[3220]: WARN File: /var/run/vdsm/trackedInterfaces/eno2 already removed
Nov 28 13:57:09 ovirt3.kvm.private vdsm[3220]: WARN MOM not available.
Nov 28 13:57:09 ovirt3.kvm.private vdsm[3220]: WARN MOM not available, KSM stats will be missing.
[root@ovirt3 ~]# nodectl check
Status: OK
Bootloader ... OK
Layer boot entries ... OK
Valid boot entries ... OK
Mount points ... OK
Separate /var ... OK
Discard is used ... OK
Basic storage ... OK
Initialized VG ... OK
Initialized Thin Pool ... OK
Initialized LVs ... OK
Thin storage ... OK
Checking available space in thinpool ... OK
Checking thinpool auto-extend ... OK
vdsmd ... OK
[root@ovirt3 ~]#
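One detail worth checking from the cleanup commands at the top of this thread: the systemd unit is vdsmd, not vdsm, so `systemctl restart vdsm` may simply not have matched a unit. A sketch of the restart plus a quick look at why the service went inactive:

```shell
# Restart the actual unit name used on oVirt Node
systemctl restart vdsmd
# Review the most recent vdsmd log lines from this boot
journalctl -u vdsmd -b --no-pager | tail -n 50
systemctl status vdsmd -l
```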
Hi Ralf,
When the deployment fail - you can dump the xml from virsh , edit it, undefine the current HostedEngine and define your modified HostedEngine 's xml.
Once you do that, you can try to start the
VM.
Good luck.
Best Regards,
Strahil Nikolov

On Nov 27, 2019 18:28, Ralf Schenk <rs(a)databay.de> wrote:
>
> Hello,
>
> This week I tried to deploy Hosted Engine on Ovirt-Node-NG 4.3.7 based Host.
>
> At the time the locally deployed Engine is copied to hosted-storage (in my case NFS) and deployment tries to start the Engine (via ovirt-ha-agent) this fails.
>
> QEMU log (/var/log/libvirt/qemu/HostedEngine.log) only shows "2019-11-27 16:17:16.833+0000: shutting down, reason=failed".
>
> Researching the cause shows: the built libvirt VM XML requires the feature "virt-ssbd", which is simply not there.
>
> From VM XML:
>
> <cpu mode='custom' match='exact' check='partial'>
> <model fallback='allow'>EPYC</model>
> <topology sockets='16' cores='4' threads='1'/>
> <feature policy='require' name='ibpb'/>
> <feature policy='require' name='virt-ssbd'/>
>
> from cat /proc/cpuinfo:
>
> processor : 47
> vendor_id : AuthenticAMD
> cpu family : 23
> model : 49
> model name : AMD EPYC 7402P 24-Core Processor
> stepping : 0
> microcode : 0x830101c
> cpu MHz : 2800.000
> cache size : 512 KB
> physical id : 0
> siblings : 48
> core id : 30
> cpu cores : 24
> apicid : 61
> initial apicid : 61
> fpu : yes
> fpu_exception : yes
> cpuid level : 16
> wp : yes
> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl xtopology nonstop_tsc extd_apicid aperfmperf eagerfpu pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2 cpb cat_l3 cdp_l3 hw_pstate sme retpoline_amd ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip overflow_recov succor smca
> bogomips : 5600.12
> TLB size : 3072 4K pages
> clflush size : 64
> cache_alignment : 64
> address sizes : 43 bits physical, 48 bits virtual
> power management: ts ttp tm hwpstate cpb eff_freq_ro [13] [14]
>
> Any solution/workaround available ?
It is highly 'super duper' not recommended to do that.
Also, consider using thin LVM (a requirement for Gluster snapshots) for the brick of the separate Gluster volume.
I'm using the 3rd partition of my OS SSD to host my Gluster volume and I have no issues so far.
Best Regards,
Strahil Nikolov

On Nov 28, 2019 01:05, Joseph Goldman <joseph(a)goldman.id.au> wrote:
>
> So I can't host OTHER VMs on this gluster volume? If it's already a running Gluster volume for other VMs, can I not now re-deploy HE in that gluster volume?
>
> On 2019-11-28 3:27 AM, Alan G wrote:
>
> I've had to do this a couple of times and always ended up with a working system in the end.
>
> As a fall back option (although I've never had to use it) I have a backup engine VM running completely outside of oVIrt (ESXi host in my case). Then if the hosted_engine deploy fails for any reason you can restore onto the backup vm as a temp solution while you work through the hosted engine deploy issues.
>
> A few things that come to mind: -
>
> * You will need a dedicated gluster volume for hosted_storage and it needs to be replica+arbiter.
> * Make sure you put the cluster in global maint mode before performing the engine backup; I recall having issues with the restore when I didn't do that.
> * Migrate all other VMs off the host running Engine before doing the backup. This will be the host you will restore onto.
>>
>> ---- On Wed, 27 Nov 2019 09:46:23 +0000 Joseph Goldman <joseph(a)goldman.id.au> wrote ----
>>
>>> Hi List,
>>>
>>> In one of my installs, I set up the first storage domain (and where
>>> the HostedEngine is) on a bigger NFS NAS - since then I have created a
>>> Gluster volume that spans the 3 hosts and I'm putting a few VM's in
>>> there for higher reliability (as SAN is single point of failure) namely
>>> I'd like to put HostedEngine in there so it stays up no matter what and
>>> can help report if issues occur (network issue to NAS, NAS dies etc etc)
>>>
>>> Looking through other posts and documentation, there's no real way to
>>> move the HostedEngine storage, is this correct? The solution I've seen
>>> is to backup the hosted engine DB, blow it away, and re-deploy it from
>>> the .backup file configuring it to the new storage domain in the deploy
>>> script - is this the only process? How likely is this to fail? Is it
>>> likely that all VM's and settings will be picked straight back up and
>>> continue to operate like normal? I dont have a test setup to play around
>>> with atm so just trying to gauge confidence in such a solution.
>>>
>>> Thanks,
>>> Joe
Hi Joseph,
It is mandatory to create a dedicated gluster volume for the HostedEngine.
There is a thread which explains why, and the risks of not following these guidelines.
You can set the volume on thin LVM, so you can snapshot your Gluster volume before patching the engine (of course you have to shut down the engine in advance).
About the transfer, the safest way is backup & restore, since you are migrating between storage domains of different types.
Yet, I can't help you with that procedure, as I have never done it.
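The snapshot idea above, sketched with hypothetical VG/LV names (a thin snapshot needs no preallocated size, but the engine must be shut down first):

```shell
# Snapshot the thin LV backing the engine volume's brick
lvcreate -s -n engine_brick_snap gluster_vg/engine_brick_lv
# If patching goes wrong: merge the snapshot back (rolls back the brick)
lvconvert --merge gluster_vg/engine_brick_snap
```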
Best Regards,
Strahil Nikolov

On Nov 27, 2019 11:46, Joseph Goldman <joseph(a)goldman.id.au> wrote:
>
> Hi List,
>
> In one of my installs, I set up the first storage domain (and where
> the HostedEngine is) on a bigger NFS NAS - since then I have created a
> Gluster volume that spans the 3 hosts and I'm putting a few VM's in
> there for higher reliability (as SAN is single point of failure) namely
> I'd like to put HostedEngine in there so it stays up no matter what and
> can help report if issues occur (network issue to NAS, NAS dies etc etc)
>
> Looking through other posts and documentation, there's no real way to
> move the HostedEngine storage, is this correct? The solution I've seen
> is to backup the hosted engine DB, blow it away, and re-deploy it from
> the .backup file configuring it to the new storage domain in the deploy
> script - is this the only process? How likely is this to fail? Is it
> likely that all VM's and settings will be picked straight back up and
> continue to operate like normal? I dont have a test setup to play around
> with atm so just trying to gauge confidence in such a solution.
>
> Thanks,
> Joe
As far as I know, the engine plays the role of a proxy during the establishment of the connection.
Check that you can reach both engine and the host from your system.
For the same reason, I use noVNC - as you just need a single port to the engine in addition to the rest of the settings.
Best Regards,
Strahil Nikolov

On Nov 27, 2019 11:27, kim.kargaard(a)noroff.no wrote:
>
> Hi,
>
> When trying to connect from a remote network on the spice console to a VM, I get the following error:
>
> (remote-viewer:80195): virt-viewer-WARNING **: 11:05:22.322: Channel error: Could not connect to proxy server xx.xx.xx.xx: Socket I/O timed out
>
> I found that the display is set to the management network and not the VM networkn in the cluster logical network. However, when I try to set the other vlan to be the display network, I get the following error:
>
> Error while executing action: Cannot edit Network. IP address has to be set for the NIC that bears a role network. Network: student-vlan100, Nic: p2p1.100 on host node3 violates that rule.
>
> I am not sure what this means. Any ideas?
>
> Kind regards
>
> Kim
Hi Rob,
Can you check the gluster's volume ?
gluster volume status engine
gluster volume heal engine info summary
Still, it is strange that the firewall configuration differs this much between nodes.
The fastest way to sync that node to the settings of the other nodes is to:
1. Backup /etc/firewalld
2. Remove /etc/firewalld
3. Copy /etc/firewalld from another node (either scp or rsync will do)
4. Restart the firewalld daemon
5. Make a test firewalld reload
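The five steps above, as commands; node and path names are assumptions:

```shell
# 1. Backup the current config
cp -a /etc/firewalld /etc/firewalld.bak
# 2 + 3. Replace it with the config from a known-good node
rsync -a --delete good-node:/etc/firewalld/ /etc/firewalld/
# 4. Restart the daemon
systemctl restart firewalld
# 5. Test a reload and inspect the active rules
firewall-cmd --reload
firewall-cmd --list-all
```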
Best Regards,
Strahil Nikolov

On Nov 27, 2019 00:09, Rob <rob.downer(a)orbitalsystems.co.uk> wrote:
>
> Is this the issue ?
>
> on the deployment host the firewall is active, but only on eno2; eno1 appears to have decided to be unmanaged …
>
> also the deployment host has 17 rules active
>
> the other two have 7 each…
>
>
> Zone: Public (default), Interfaces: eno2, IP Range: *
>
> Unmanaged Interfaces (Name / IP Address):
>
> ;vdsmdummy; (Inactive)
> eno1
> ovirtmgmt (192.168.100.38/24)
> virbr0-nic
>
>>
>> On 25 Nov 2019, at 17:16, Parth Dhanjal <dparth(a)redhat.com> wrote:
>>
>> Rather than disabling firewalld,
>> You can add the ports and restart the firewalld service
>>
>> # firewall-cmd --add-service=cockpit
>>
>> # firewall-cmd --add-service=cockpit --permanent
>>
>>
>>
>> On Mon, Nov 25, 2019 at 10:43 PM Amit Bawer <abawer(a)redhat.com> wrote:
>>>
>>> firewalld is running?
>>>
>>> systemctl disable --now firewalld
>>>
>>> On Monday, November 25, 2019, Rob <rob.downer(a)orbitalsystems.co.uk> wrote:
>>>>
>>>> It can’t be DNS, since the engine runs on a separate network anyway (i.e. the front end), so I wonder why it can’t reach the volume
>>>>
>>>>
>>>>> On 25 Nov 2019, at 12:55, Gobinda Das <godas(a)redhat.com> wrote:
>>>>>
>>>>> There could be two reason
>>>>> 1- Your gluster service may not be running.
>>>>> 2- In Storage Connection there mentioned <volumename> may not exists
>>>>>
>>>>> can you please paste the output of "gluster volume status" ?
>>>>>
>>>>> On Mon, Nov 25, 2019 at 5:03 PM Rob <rob.downer(a)orbitalsystems.co.uk> wrote:
>>>>>>
>>>>>> [ INFO ] skipping: [localhost]
>>>>>> [ INFO ] TASK [ovirt.hosted_engine_setup : Add glusterfs storage domain]
>>>>>> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Failed to fetch Gluster Volume List]". HTTP response code is 400.
>>>>>> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Failed to fetch Gluster Volume List]\". HTTP response code is 400.”}
>>>>>>
>>>>>>
>>>>>>> On 25 Nov 2019, at 09:16, Rob <rob.downer(a)orbitalsystems.co.uk> wrote:
>>>>>>>
>>>>>>> Yes,
>>>>>>>
>>>>>>> I’ll restart all Nodes after wiping the failed setup of Hosted engine using.
>>>>>>>
>>>>>>> ovirt-hosted-engine-cleanup
>>>>>>> vdsm-tool configure --force
>>>>>>> systemctl restart libvirtd
>>>>>>> systemctl restart vdsm
>>>>>>>
>>>>>>> although last time I did
>>>>>>>
>>>>>>> systemctl restart vdsm
>>>>>>>
>>>>>>> VDSM did not restart maybe that is OK as Hosted Engine was then de deployed or is that the issue ?
>>>>>>>
>>>>>>>
>>>>>>>> On 25 Nov 2019, at 09:13, Parth Dhanjal <dparth(a)redhat.com> wrote:
>>>>>>>>
>>>>>>>> Can you please share the error in case it fails again?
>>>>>>>>
>>>>>>>> On Mon, Nov 25, 2019 at 2:42 PM Rob <rob.downer(a)orbitalsystems.co.uk> wrote:
>>>>>>>>>
>>>>>>>>> hmm, I’’l try again, that failed last time.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> On 25 Nov 2019, at 09:08, Parth Dhanjal <dparth(a)redhat.com> wrote:
>>>>>>>>>>
>>>>>>>>>> Hey!
>>>>>>>>>>
>>>>>>>>>> For
>>>>>>>>>> Storage Connection you can add - <hostname1>:/engine
>>>>>>>>>> And for
>>>>>>>>>> Mount Options - backup-volfile-servers=<hostname2>:<hostname3>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Mon, Nov 25, 2019 at 2:31 PM <rob.downer(a)orbitalsystems.co.uk> wrote:
>>>>>>>>>>>
>>>>>>>>>>> So...
>>>>>>>>>>>
>>>>>>>>>>> I have got to the last step
>>>>>>>>>>>
>>>>>>>>>>> 3 Machines with Gluster Storage configured however at the last screen
>>>>>>>>>>>
>>>>>>>>>>> ....Deploying the Engine to Gluster and the wizard does not auto fill the two fields
>>>>>>>>>>>
>>>>>>>>>>> Hosted Engine Deployment
>>>>>>>>>>>
>>>>>>>>>>> Storage Connection
>>>>>>>>>>> and
>>>>>>>>>>> Mount Options
>>>>>>>>>>>
>>>>>>>>>>> I also had to expand /tmp as it was not big enough to fit the engine before moving...
>>>>>>>>>>>
>>>>>>>>>>> What can I do to get the auto complete sorted out ?
>>>>>>>>>>>
>>>>>>>>>>> I have tried entering ovirt1.kvm.private:/gluster_lv_engine - The Volume name
>>>>>>>>>>> and
>>>>>>>>>>> ovirt1.kvm.private:/gluster_bricks/engine
>>>>>>>>>>>
>>>>>>>>>>> Ovirt1 being the actual machine I'm running this on.
>>>>>>>>>>>
>>>>>>>>>>> Thanks
>>>>>>>>>
>>>>>>>>>
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> Users mailing list -- users(a)ovirt.org
>>>>>>> To unsubscribe send an email to users-leave(a)ovirt.org
>>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>>>>>>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/LQ5YXX7IVV6ZP…
>>>>>>
>>>>>>
>>>>>> _______________________________________________
>>>>>> Users mailing list -- users(a)ovirt.org
>>>>>> To unsubscribe send an email to users-leave(a)ovirt.org
>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>>>>>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/VHQ3LTES6ABPL…
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>>
>>>>> Thanks,
>>>>> Gobinda
>>>>
>>>>
Hello,
This week I tried to deploy Hosted Engine on an Ovirt-Node-NG 4.3.7 based host.
At the point where the locally deployed Engine is copied to hosted-storage (in
my case NFS) and deployment tries to start the Engine (via
ovirt-ha-agent), this fails.
The QEMU log (/var/log/libvirt/qemu/HostedEngine.log) only shows
"2019-11-27 16:17:16.833+0000: shutting down, reason=failed".
Researching this, the cause is: the generated libvirt VM XML requires the
feature "virt-ssbd", which is simply not present on this host.
From VM XML:
<cpu mode='custom' match='exact' check='partial'>
<model fallback='allow'>EPYC</model>
<topology sockets='16' cores='4' threads='1'/>
<feature policy='require' name='ibpb'/>
<feature policy='require' name='virt-ssbd'/>
from cat /proc/cpuinfo:
processor : 47
vendor_id : AuthenticAMD
cpu family : 23
model : 49
model name : AMD EPYC 7402P 24-Core Processor
stepping : 0
microcode : 0x830101c
cpu MHz : 2800.000
cache size : 512 KB
physical id : 0
siblings : 48
core id : 30
cpu cores : 24
apicid : 61
initial apicid : 61
fpu : yes
fpu_exception : yes
cpuid level : 16
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext
fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl xtopology
nonstop_tsc extd_apicid aperfmperf eagerfpu pni pclmulqdq monitor ssse3
fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm
cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch
osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2
cpb cat_l3 cdp_l3 hw_pstate sme retpoline_amd ssbd ibrs ibpb stibp
vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap
clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc
cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr arat npt lbrv
svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists
pausefilter pfthreshold avic v_vmsave_vmload vgif umip overflow_recov
succor smca
bogomips : 5600.12
TLB size : 3072 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 43 bits physical, 48 bits virtual
power management: ts ttp tm hwpstate cpb eff_freq_ro [13] [14]
Any solution/workaround available ?
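A note for anyone hitting this: virt-ssbd is a synthetic, KVM-only CPU flag for AMD guests; the bare host only advertises plain "ssbd", so grepping /proc/cpuinfo for virt-ssbd will never match. A diagnostic sketch (assuming libvirt is running on the host) to compare what the hardware reports against what libvirt/QEMU can actually offer a guest:

```shell
# Host-side mitigation flag: plain "ssbd" (see the cpuinfo flags above)
grep -om1 'ssbd' /proc/cpuinfo

# What QEMU/libvirt can expose to guests on this host;
# look for virt-ssbd or amd-ssbd in the usable CPU feature list
virsh domcapabilities | grep -i ssbd
```

If neither virt-ssbd nor amd-ssbd shows up as usable there, the usual suspects are an older kernel/QEMU/libvirt stack or missing microcode; updating those, or relaxing the Engine VM's CPU feature requirement, are the workarounds commonly reported.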
--
*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *rs(a)databay.de* <mailto:rs@databay.de>
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>
Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen
------------------------------------------------------------------------
Hello everybody,
since last update to 4.3.7 I get this error message:
Certificate of host *host.name* is invalid. The
certificate doesn't contain valid subject alternative name, please enroll
new certificate for the host.
Do you have any idea how I can fix this?
Regards
Jonathan
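One way to confirm the complaint before re-enrolling (a sketch; the certificate path below is the one oVirt hosts normally use — adjust if yours differs):

```shell
# Show the SAN section of the host's VDSM certificate, if it has one
openssl x509 -noout -text -in /etc/pki/vdsm/certs/vdsmcert.pem \
  | grep -A1 'Subject Alternative Name'
```

If nothing prints, the certificate really has no SAN; putting the host into maintenance and using Installation > Enroll Certificate in the Administration Portal (or a full host reinstall) is the usual way to issue a fresh one.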
Hi,
When trying to connect from a remote network on the spice console to a VM, I get the following error:
(remote-viewer:80195): virt-viewer-WARNING **: 11:05:22.322: Channel error: Could not connect to proxy server xx.xx.xx.xx: Socket I/O timed out
I found that the display network is set to the management network and not the VM network in the cluster's logical networks. However, when I try to set the other VLAN to be the display network, I get the following error:
Error while executing action: Cannot edit Network. IP address has to be set for the NIC that bears a role network. Network: student-vlan100, Nic: p2p1.100 on host node3 violates that rule.
I am not sure what this means. Any ideas?
Kind regards
Kim
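The error means what it says: a network that carries a role (such as display) must have an IP address on the host NIC that carries it, because the engine hands that address to console clients as the endpoint. A quick check on the host (a sketch; p2p1.100 is the VLAN interface named in the error):

```shell
# Does the display-network VLAN interface have an IPv4 address?
ip -4 addr show p2p1.100
```

If no inet line is printed, assign a static IP (or DHCP) to that network on each host via Setup Host Networks in the Administration Portal, then retry making it the display network.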
26 Nov '19
So...
I have got to the last step
3 Machines with Gluster Storage configured however at the last screen
....Deploying the Engine to Gluster and the wizard does not auto fill the two fields
Hosted Engine Deployment
Storage Connection
and
Mount Options
I also had to expand /tmp as it was not big enough to fit the engine before moving...
What can I do to get the auto complete sorted out ?
I have tried entering ovirt1.kvm.private:/gluster_lv_engine - The Volume name
and
ovirt1.kvm.private:/gluster_bricks/engine
Ovirt1 being the actual machine I'm running this on.
Thanks
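For anyone following along: the two wizard fields want the Gluster volume, not the brick path. A sketch of what to verify and enter, assuming three hosts ovirt1/2/3.kvm.private and an engine volume named "engine":

```shell
# The volume name is what "gluster volume list" prints (e.g. "engine");
# brick directories like /gluster_bricks/engine must NOT be used here
gluster volume list
gluster volume status engine

# Then in the wizard:
#   Storage Connection: ovirt1.kvm.private:/engine
#   Mount Options:      backup-volfile-servers=ovirt2.kvm.private:ovirt3.kvm.private
```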
You need to set it also permanent via '--permanent', otherwise after a reload - it's gone.
Best Regards,
Strahil NikolovOn Nov 25, 2019 19:51, Rob <rob.downer(a)orbitalsystems.co.uk> wrote:
>
> [root@ovirt1 ~]# firewall-cmd --add-service=cockpit
> Warning: ALREADY_ENABLED: 'cockpit' already in 'public'
> success
> [root@ovirt1 ~]#
>
>
>
>> On 25 Nov 2019, at 17:16, Parth Dhanjal <dparth(a)redhat.com> wrote:
>>
>> Rather than disabling firewalld,
>> You can add the ports and restart the firewalld service
>>
>> # firewall-cmd --add-service=cockpit
>>
>> # firewall-cmd --add-service=cockpit --permanent
>>
>>
>>
>> On Mon, Nov 25, 2019 at 10:43 PM Amit Bawer <abawer(a)redhat.com> wrote:
>>>
>>> firewalld is running?
>>>
>>> systemctl disable --now firewalld
>>>
>>> On Monday, November 25, 2019, Rob <rob.downer(a)orbitalsystems.co.uk> wrote:
>>>>
>>>> It can’t be DNS, since the engine runs on a separate network anyway (i.e. the front end), so I wonder why it can’t reach the volume
>>>>
>>>>
>>>>> On 25 Nov 2019, at 12:55, Gobinda Das <godas(a)redhat.com> wrote:
>>>>>
>>>>> There could be two reasons:
>>>>> 1- Your gluster service may not be running.
>>>>> 2- The <volumename> mentioned in Storage Connection may not exist
>>>>>
>>>>> can you please paste the output of "gluster volume status" ?
>>>>>
>>>>> On Mon, Nov 25, 2019 at 5:03 PM Rob <rob.downer(a)orbitalsystems.co.uk> wrote:
>>>>>>
>>>>>> [ INFO ] skipping: [localhost]
>>>>>> [ INFO ] TASK [ovirt.hosted_engine_setup : Add glusterfs storage domain]
>>>>>> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Failed to fetch Gluster Volume List]". HTTP response code is 400.
>>>>>> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Failed to fetch Gluster Volume List]\". HTTP response code is 400.”}
>>>>>>
>>>>>> <
Hello everyone,
I'm trying to export a virtual machine, but I'm getting the following error:
2019-11-26 16:30:06,250-02 ERROR
[org.ovirt.engine.core.bll.exportimport.GetVmsFromExportDomainQuery]
(default task-22) [b9a0b9d5-2127-4002-9cee-2e3525bccc89] Exception:
org.ovirt.engine.core.common.errors.EngineException: EngineException:
org.ovirt.engine.core.vdsbroker.irsbroker.IRSErrorException:
IRSGenericException: IRSErrorException: Failed to GetVmsInfoVDS, error =
Storage domain does not exist:
(u'5ac6c35d-0406-4a06-a682-ed8fb2d1933f',), code = 358 (Failed with
error StorageDomainDoesNotExist and code 358)
2019-11-26 16:30:06,249-02 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-22) [b9a0b9d5-2127-4002-9cee-2e3525bccc89] EVENT_ID:
IMPORTEXPORT_GET_VMS_INFO_FAILED(200), Correlation ID: null, Call Stack:
null, Custom ID: null, Custom Event ID: -1, Message: Failed to retrieve
VM/Templates information from export domain BackupMV
The version of the oVirt that I am using is 4.1.
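A first step would be to check whether the engine still knows a storage domain with that UUID at all (a sketch against the REST API; substitute your engine FQDN and password — the UUID is the one from the error above):

```shell
curl -s -k -u 'admin@internal:PASSWORD' \
  'https://engine.example.com/ovirt-engine/api/storagedomains' \
  | grep -i '5ac6c35d-0406-4a06-a682-ed8fb2d1933f'
```

If the UUID is absent, the export domain was detached or removed while something still references it; re-attaching the export domain to the data center is the usual way forward.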
Att,
--
Arthur Rodrigues Stilben
26 Nov '19
Hi Martin,
Resending as delivery of my last email failed.
From: "Vrgotic, Marko" <M.Vrgotic(a)activevideo.com>
Date: Monday, 25 November 2019 at 09:53
To: Martin Perina <mperina(a)redhat.com>, Eyal Shenitzky <eshenitz(a)redhat.com>
Cc: Ryan Barry <rbarry(a)redhat.com>, Ondra Machacek <omachace(a)redhat.com>, "users(a)ovirt.org" <users(a)ovirt.org>, Darko Stojchev <D.Stojchev(a)activevideo.com>, Tal Nisan <tnisan(a)redhat.com>
Subject: Re: [ovirt-users] Re: Fail to attach/create disk from template when running via Ansible
Hi Martin,
Thank you for driving this issue.
I have updated the bug ticket on bugzilla, as I was able to reproduce it via WebUI as well.
Regarding my other question, “How to clean up Locked Instances”, not to lose sight of it:
I have attached the engine and server log as requested (below), but I did not get any answer yet.
If more information is needed to proceed, please let me know.
Kindly awaiting your reply.
-----
kind regards/met vrindelijke groet
Marko Vrgotic
ActiveVideo
From: Martin Perina <mperina(a)redhat.com>
Date: Thursday, 21 November 2019 at 10:00
To: Eyal Shenitzky <eshenitz(a)redhat.com>
Cc: Ryan Barry <rbarry(a)redhat.com>, Ondra Machacek <omachace(a)redhat.com>, "users(a)ovirt.org" <users(a)ovirt.org>, Darko Stojchev <D.Stojchev(a)activevideo.com>, "Vrgotic, Marko" <M.Vrgotic(a)activevideo.com>, Tal Nisan <tnisan(a)redhat.com>
Subject: Re: [ovirt-users] Re: Fail to attach/create disk from template when running via Ansible
On Thu, Nov 21, 2019 at 9:17 AM Martin Perina <mperina(a)redhat.com<mailto:mperina@redhat.com>> wrote:
Hi Eyal,
could you please take a look? There is NPE cause most probably by passing invalid vdsId from CreateSnapshotCommand ...
Just noticed that Marko created https://bugzilla.redhat.com/show_bug.cgi?id=1772853 to track the issue so let's continue the investigation there ...
Thanks,
Martin
On Thu, Nov 21, 2019 at 8:51 AM Vrgotic, Marko <M.Vrgotic(a)activevideo.com<mailto:M.Vrgotic@activevideo.com>> wrote:
Hi Martin,
The engine.log and server.log files attached.
Here is the list of servers from this morning, in last hour, if it helps with going through the logs.
[inline screenshot: server list]
The ones from previous screenshot are there since few days ago.
If any additional info or tests are needed, let me know.
Kindly awaiting your reply.
-----
kind regards/met vrindelijke groet
Marko Vrgotic
ActiveVideo
From: Martin Perina <mperina(a)redhat.com<mailto:mperina@redhat.com>>
Date: Thursday, 21 November 2019 at 08:34
To: "Vrgotic, Marko" <M.Vrgotic(a)activevideo.com<mailto:M.Vrgotic@activevideo.com>>, Ryan Barry <rbarry(a)redhat.com<mailto:rbarry@redhat.com>>
Cc: Ondra Machacek <omachace(a)redhat.com<mailto:omachace@redhat.com>>, "users(a)ovirt.org<mailto:users@ovirt.org>" <users(a)ovirt.org<mailto:users@ovirt.org>>, Darko Stojchev <D.Stojchev(a)activevideo.com<mailto:D.Stojchev@activevideo.com>>
Subject: Re: [ovirt-users] Re: Fail to attach/create disk from template when running via Ansible
Hi Marko,
could you please share your engine.log and server.log?
Thanks,
Martin
On Thu, Nov 21, 2019 at 8:31 AM Vrgotic, Marko <M.Vrgotic(a)activevideo.com<mailto:M.Vrgotic@activevideo.com>> wrote:
Hi Martin and Ondra,
I cannot find a clean way to remove the entities which are in an unknown (probably locked) state.
Due to the issue I reported below, the entities I try to create end up in, I assume, a locked state.
[A screenshot of a cell phone Description automatically generated]
Due to this state, I cannot remove them from UI, and if I try to remove them using Ansible, get following error:
FAILED! => {"changed": false, "msg": "Timeout exceed while waiting on result state of the entity."
Would you be so kind to provide me with steps to clean up these locked instances?
Kindly awaiting your reply.
-----
kind regards/met vrindelijke groet
Marko Vrgotic
ActiveVideo
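Not an official answer, but the engine ships a database helper commonly used for exactly this; run it on the engine machine (a sketch — back up the engine database first, and only unlock entities when no task is genuinely still running):

```shell
# List entities the engine currently considers locked
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -q -t all

# Unlock a specific stuck VM (hypothetical UUID placeholder)
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t vm <vm-uuid>
```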
From: "Vrgotic, Marko" <M.Vrgotic(a)activevideo.com<mailto:M.Vrgotic@activevideo.com>>
Date: Friday, 15 November 2019 at 11:34
To: Martin Perina <mperina(a)redhat.com<mailto:mperina@redhat.com>>
Cc: Ondra Machacek <omachace(a)redhat.com<mailto:omachace@redhat.com>>, "users(a)ovirt.org<mailto:users@ovirt.org>" <users(a)ovirt.org<mailto:users@ovirt.org>>, Darko Stojchev <D.Stojchev(a)activevideo.com<mailto:D.Stojchev@activevideo.com>>
Subject: Re: [ovirt-users] Re: Fail to attach/create disk from template when running via Ansible
Hi Martin,
Here is the BUG URI:
https://bugzilla.redhat.com/show_bug.cgi?id=1772853
I am not sure if I selected the correct component and team, but it's created.
If any additional information is required, do not hesitate to let me know.
Kindly awaiting your reply.
-----
kind regards/met vrindelijke groet
Marko Vrgotic
ActiveVideo
From: Martin Perina <mperina(a)redhat.com<mailto:mperina@redhat.com>>
Date: Friday, 15 November 2019 at 11:07
To: "Vrgotic, Marko" <M.Vrgotic(a)activevideo.com<mailto:M.Vrgotic@activevideo.com>>
Cc: Ondra Machacek <omachace(a)redhat.com<mailto:omachace@redhat.com>>, "users(a)ovirt.org<mailto:users@ovirt.org>" <users(a)ovirt.org<mailto:users@ovirt.org>>, Darko Stojchev <D.Stojchev(a)activevideo.com<mailto:D.Stojchev@activevideo.com>>
Subject: Re: [ovirt-users] Re: Fail to attach/create disk from template when running via Ansible
On Fri, Nov 15, 2019 at 11:05 AM Vrgotic, Marko <M.Vrgotic(a)activevideo.com<mailto:M.Vrgotic@activevideo.com>> wrote:
Hi Martin,
You mean to create a bug under Ansible? Sure.
No, in bugzilla for oVirt, it doesn't seem to me as ansible modules bug, but rather some issue in engine.
Thanks,
Martin
-----
kind regards/met vrindelijke groet
Marko Vrgotic
ActiveVideo
From: Martin Perina <mperina(a)redhat.com<mailto:mperina@redhat.com>>
Date: Friday, 15 November 2019 at 10:30
To: "Vrgotic, Marko" <M.Vrgotic(a)activevideo.com<mailto:M.Vrgotic@activevideo.com>>
Cc: Ondra Machacek <omachace(a)redhat.com<mailto:omachace@redhat.com>>, "users(a)ovirt.org<mailto:users@ovirt.org>" <users(a)ovirt.org<mailto:users@ovirt.org>>, Darko Stojchev <D.Stojchev(a)activevideo.com<mailto:D.Stojchev@activevideo.com>>
Subject: Re: [ovirt-users] Re: Fail to attach/create disk from template when running via Ansible
Hi Marko,
could you please create a bug for that and attach full SOS collection logs from engine and relevant host?
Thanks,
Martin
On Wed, Nov 13, 2019 at 2:54 AM Vrgotic, Marko <M.Vrgotic(a)activevideo.com<mailto:M.Vrgotic@activevideo.com>> wrote:
Hi Ondra,
I have rebuilt the staging environment with 4.3.6 and still hit the same issue. The images are there, and I can again create a VM via the UI, but not via Ansible.
Here is the output of the complete run from engine.log:
2019-11-12 12:09:00,457Z INFO [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-49) [] User admin@internal successfully logged in with scopes: ovirt-app-api ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate ovirt-ext=token:password-access
2019-11-12 12:09:00,978Z INFO [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-49) [] User admin@internal successfully logged in with scopes: ovirt-app-api ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate ovirt-ext=token:password-access
2019-11-12 12:09:01,538Z INFO [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-49) [233de27] Running command: CreateUserSessionCommand internal: false.
2019-11-12 12:09:01,549Z INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-49) [233de27] EVENT_ID: USER_VDC_LOGIN(30), User admin@internal-authz connecting from '172.16.105.206' using session 'bsgNpfGJTNtzuuQWSyXVJqHhpLK4TY9TQRERWxxzGtpHonoqdk7WIHGZ7mID2+QkBKH9bPp8HVhyvDu66tzPnQ==' logged in.
2019-11-12 12:09:01,691Z INFO [org.ovirt.engine.core.bll.AddVmCommand] (default task-49) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] Lock Acquired to object 'EngineLock:{exclusiveLocks='[centos-testvm-a910=VM_NAME]', sharedLocks='[400688e8-3c5b-4cca-9ef8-bbb6c79d54ed=DISK, 819aecf6-c83d-4470-9915-e309390d8098=TEMPLATE]'}'
2019-11-12 12:09:01,773Z INFO [org.ovirt.engine.core.bll.AddVmCommand] (default task-49) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] Running command: AddVmCommand internal: false. Entities affected : ID: d3c90169-ecc8-46a0-a4ec-7c3d99b81b61 Type: ClusterAction group CREATE_VM with role type USER, ID: 819aecf6-c83d-4470-9915-e309390d8098 Type: VmTemplateAction group CREATE_VM with role type USER, ID: 78bfcb57-3a0d-4a71-a3f8-480ebae37c74 Type: StorageAction group CREATE_DISK with role type USER
2019-11-12 12:09:01,866Z INFO [org.ovirt.engine.core.bll.AddRngDeviceCommand] (default task-49) [4d4703a1] Running command: AddRngDeviceCommand internal: true. Entities affected : ID: 6343c55a-c67e-4d75-a767-dfc9412006e6 Type: VMAction group EDIT_VM_PROPERTIES with role type USER
2019-11-12 12:09:01,876Z INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (default task-49) [4d4703a1] START, SetVmStatusVDSCommand( SetVmStatusVDSCommandParameters:{vmId='6343c55a-c67e-4d75-a767-dfc9412006e6', status='ImageLocked', exitStatus='Normal'}), log id: 4eedcefb
2019-11-12 12:09:01,887Z INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (default task-49) [4d4703a1] FINISH, SetVmStatusVDSCommand, return: , log id: 4eedcefb
2019-11-12 12:09:01,906Z INFO [org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand] (default task-49) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] Running command: CreateSnapshotFromTemplateCommand internal: true. Entities affected : ID: 78bfcb57-3a0d-4a71-a3f8-480ebae37c74 Type: Storage
2019-11-12 12:09:01,928Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand] (default task-49) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] START, CreateVolumeVDSCommand( CreateVolumeVDSCommandParameters:{storagePoolId='47523e52-311f-4829-a731-3d91f7885974', ignoreFailoverLimit='false', storageDomainId='78bfcb57-3a0d-4a71-a3f8-480ebae37c74', imageGroupId='56ddac45-1913-4d24-af97-2f969fc3fa2e', imageSizeInBytes='8589934592', volumeFormat='COW', newImageId='7270dd54-a075-4b14-aca3-7c2a118057f0', imageType='Sparse', newImageDescription='', imageInitialSizeInBytes='0', imageId='767c6837-5191-4650-a49d-abca7d69a708', sourceImageGroupId='400688e8-3c5b-4cca-9ef8-bbb6c79d54ed'}), log id: 2dfac68e
2019-11-12 12:09:02,009Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand] (default task-49) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] FINISH, CreateVolumeVDSCommand, return: 7270dd54-a075-4b14-aca3-7c2a118057f0, log id: 2dfac68e
2019-11-12 12:09:02,014Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (default task-49) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command 'fb915b7e-4814-4e93-a3ef-4f40c4e3c41b'
2019-11-12 12:09:02,014Z INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (default task-49) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] CommandMultiAsyncTasks::attachTask: Attaching task '0eb09758-113f-43b9-aa67-97d2e5a57e05' to command 'fb915b7e-4814-4e93-a3ef-4f40c4e3c41b'.
2019-11-12 12:09:02,027Z INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (default task-49) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] Adding task '0eb09758-113f-43b9-aa67-97d2e5a57e05' (Parent Command 'CreateSnapshotFromTemplate', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling hasn't started yet..
2019-11-12 12:09:02,098Z INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-49) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] EVENT_ID: USER_ADD_VM_STARTED(37), VM centos-testvm-a910 creation was initiated by admin@internal-authz.
2019-11-12 12:09:02,098Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (default task-49) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] BaseAsyncTask::startPollingTask: Starting to poll task '0eb09758-113f-43b9-aa67-97d2e5a57e05'.
2019-11-12 12:09:02,758Z INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-19) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] Command 'AddVm' (id: '97d4103b-acef-4b55-a379-71a1baa04a4f') waiting on child command id: 'fb915b7e-4814-4e93-a3ef-4f40c4e3c41b' type:'CreateSnapshotFromTemplate' to complete
2019-11-12 12:09:04,773Z INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-52) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] Command 'AddVm' (id: '97d4103b-acef-4b55-a379-71a1baa04a4f') waiting on child command id: 'fb915b7e-4814-4e93-a3ef-4f40c4e3c41b' type:'CreateSnapshotFromTemplate' to complete
2019-11-12 12:09:06,117Z INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedThreadFactory-engineScheduled-Thread-26) [] Polling and updating Async Tasks: 1 tasks, 1 tasks to poll now
2019-11-12 12:09:06,121Z ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-26) [] Failed in 'HSMGetAllTasksStatusesVDS' method
2019-11-12 12:09:06,128Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-26) [] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM ovirt-staging-hv-02.avinity.tv<http://ovirt-staging-hv-02.avinity.tv> command HSMGetAllTasksStatusesVDS failed: Invalid parameter: 'capacity=8589934592'
2019-11-12 12:09:06,128Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engineScheduled-Thread-26) [] SPMAsyncTask::PollTask: Polling task '0eb09758-113f-43b9-aa67-97d2e5a57e05' (Parent Command 'CreateSnapshotFromTemplate', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned status 'finished', result 'cleanSuccess'.
2019-11-12 12:09:06,131Z ERROR [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engineScheduled-Thread-26) [] BaseAsyncTask::logEndTaskFailure: Task '0eb09758-113f-43b9-aa67-97d2e5a57e05' (Parent Command 'CreateSnapshotFromTemplate', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended with failure:
-- Result: 'cleanSuccess'
-- Message: 'VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = Invalid parameter: 'capacity=8589934592', code = 100',
-- Exception: 'VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = Invalid parameter: 'capacity=8589934592', code = 100'
2019-11-12 12:09:06,133Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engineScheduled-Thread-26) [] CommandAsyncTask::endActionIfNecessary: All tasks of command 'fb915b7e-4814-4e93-a3ef-4f40c4e3c41b' has ended -> executing 'endAction'
2019-11-12 12:09:06,133Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engineScheduled-Thread-26) [] CommandAsyncTask::endAction: Ending action for '1' tasks (command ID: 'fb915b7e-4814-4e93-a3ef-4f40c4e3c41b'): calling endAction '.
2019-11-12 12:09:06,133Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-274926) [] CommandAsyncTask::endCommandAction [within thread] context: Attempting to endAction 'CreateSnapshotFromTemplate',
2019-11-12 12:09:06,137Z INFO [org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand] (EE-ManagedThreadFactory-engine-Thread-274926) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] Command [id=fb915b7e-4814-4e93-a3ef-4f40c4e3c41b]: Updating status to 'FAILED', The command end method logic will be executed by one of its parent commands.
2019-11-12 12:09:06,137Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-274926) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'CreateSnapshotFromTemplate' completed, handling the result.
2019-11-12 12:09:06,137Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-274926) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'CreateSnapshotFromTemplate' succeeded, clearing tasks.
2019-11-12 12:09:06,137Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-274926) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] SPMAsyncTask::ClearAsyncTask: Attempting to clear task '0eb09758-113f-43b9-aa67-97d2e5a57e05'
2019-11-12 12:09:06,137Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-274926) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] START, SPMClearTaskVDSCommand( SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='47523e52-311f-4829-a731-3d91f7885974', ignoreFailoverLimit='false', taskId='0eb09758-113f-43b9-aa67-97d2e5a57e05'}), log id: baaf58a
2019-11-12 12:09:06,138Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-274926) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] START, HSMClearTaskVDSCommand(HostName = ovirt-staging-hv-02.avinity.tv<http://ovirt-staging-hv-02.avinity.tv>, HSMTaskGuidBaseVDSCommandParameters:{hostId='3c70c563-e32e-4352-b1e1-f6bcc41f77e2', taskId='0eb09758-113f-43b9-aa67-97d2e5a57e05'}), log id: 7321c57d
2019-11-12 12:09:06,147Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-274926) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] FINISH, HSMClearTaskVDSCommand, return: , log id: 7321c57d
2019-11-12 12:09:06,147Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-274926) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] FINISH, SPMClearTaskVDSCommand, return: , log id: baaf58a
2019-11-12 12:09:06,152Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-274926) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] BaseAsyncTask::removeTaskFromDB: Removed task '0eb09758-113f-43b9-aa67-97d2e5a57e05' from DataBase
2019-11-12 12:09:06,152Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-274926) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] CommandAsyncTask::HandleEndActionResult [within thread]: Removing CommandMultiAsyncTasks object for entity 'fb915b7e-4814-4e93-a3ef-4f40c4e3c41b'
2019-11-12 12:09:08,790Z INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-34) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] Command 'AddVm' id: '97d4103b-acef-4b55-a379-71a1baa04a4f' child commands '[fb915b7e-4814-4e93-a3ef-4f40c4e3c41b]' executions were completed, status 'FAILED'
2019-11-12 12:09:09,821Z ERROR [org.ovirt.engine.core.bll.AddVmCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-13) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] Ending command 'org.ovirt.engine.core.bll.AddVmCommand' with failure.
2019-11-12 12:09:09,825Z ERROR [org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-13) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] Ending command 'org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand' with failure.
2019-11-12 12:09:09,835Z INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-13) [44f6ef0c] Running command: RemoveImageCommand internal: true. Entities affected : ID: 78bfcb57-3a0d-4a71-a3f8-480ebae37c74 Type: Storage
2019-11-12 12:09:09,873Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-13) [44f6ef0c] START, DeleteImageGroupVDSCommand( DeleteImageGroupVDSCommandParameters:{storagePoolId='47523e52-311f-4829-a731-3d91f7885974', ignoreFailoverLimit='false', storageDomainId='78bfcb57-3a0d-4a71-a3f8-480ebae37c74', imageGroupId='56ddac45-1913-4d24-af97-2f969fc3fa2e', postZeros='false', discard='false', forceDelete='false'}), log id: 28915db3
2019-11-12 12:09:09,907Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-13) [44f6ef0c] EVENT_ID: IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command DeleteImageGroupVDS failed: Image does not exist in domain: u'image=56ddac45-1913-4d24-af97-2f969fc3fa2e, domain=78bfcb57-3a0d-4a71-a3f8-480ebae37c74'
2019-11-12 12:09:09,907Z ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-13) [44f6ef0c] Command 'DeleteImageGroupVDSCommand( DeleteImageGroupVDSCommandParameters:{storagePoolId='47523e52-311f-4829-a731-3d91f7885974', ignoreFailoverLimit='false', storageDomainId='78bfcb57-3a0d-4a71-a3f8-480ebae37c74', imageGroupId='56ddac45-1913-4d24-af97-2f969fc3fa2e', postZeros='false', discard='false', forceDelete='false'})' execution failed: IRSGenericException: IRSErrorException: Image does not exist in domain: u'image=56ddac45-1913-4d24-af97-2f969fc3fa2e, domain=78bfcb57-3a0d-4a71-a3f8-480ebae37c74'
2019-11-12 12:09:09,907Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-13) [44f6ef0c] FINISH, DeleteImageGroupVDSCommand, return: , log id: 28915db3
2019-11-12 12:09:09,907Z INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-13) [44f6ef0c] Disk '56ddac45-1913-4d24-af97-2f969fc3fa2e' doesn't exist on storage domain '78bfcb57-3a0d-4a71-a3f8-480ebae37c74', rolling forward
2019-11-12 12:09:09,918Z INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedThreadFactory-engineScheduled-Thread-13) [44f6ef0c] Removed task '25bf448d-9271-4a85-ad9b-9c63012753ac' from DataBase
2019-11-12 12:09:09,983Z INFO [org.ovirt.engine.core.utils.transaction.TransactionSupport] (EE-ManagedThreadFactory-engineScheduled-Thread-13) [] transaction rolled back
2019-11-12 12:09:09,983Z INFO [org.ovirt.engine.core.bll.AddVmCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-13) [] Lock freed to object 'EngineLock:{exclusiveLocks='[centos-testvm-a910=VM_NAME]', sharedLocks='[400688e8-3c5b-4cca-9ef8-bbb6c79d54ed=DISK, 819aecf6-c83d-4470-9915-e309390d8098=TEMPLATE]'}'
2019-11-12 12:09:10,001Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-13) [] EVENT_ID: USER_ADD_VM_FINISHED_FAILURE(60), Failed to complete VM centos-testvm-a910 creation.
2019-11-12 12:09:10,002Z ERROR [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller] (EE-ManagedThreadFactory-engineScheduled-Thread-13) [] Failed invoking callback end method 'onFailed' for command '97d4103b-acef-4b55-a379-71a1baa04a4f' with exception 'null', the callback is marked for end method retries
2019-11-12 12:09:12,019Z ERROR [org.ovirt.engine.core.bll.AddVmCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-67) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] Ending command 'org.ovirt.engine.core.bll.AddVmCommand' with failure.
2019-11-12 12:09:12,023Z ERROR [org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-67) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] Ending command 'org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand' with failure.
2019-11-12 12:09:12,041Z INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-67) [1b861e85] Running command: RemoveImageCommand internal: true. Entities affected : ID: 78bfcb57-3a0d-4a71-a3f8-480ebae37c74 Type: Storage
2019-11-12 12:09:12,063Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-67) [1b861e85] START, DeleteImageGroupVDSCommand( DeleteImageGroupVDSCommandParameters:{storagePoolId='47523e52-311f-4829-a731-3d91f7885974', ignoreFailoverLimit='false', storageDomainId='78bfcb57-3a0d-4a71-a3f8-480ebae37c74', imageGroupId='56ddac45-1913-4d24-af97-2f969fc3fa2e', postZeros='false', discard='false', forceDelete='false'}), log id: 376f65dc
2019-11-12 12:09:12,090Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-67) [1b861e85] EVENT_ID: IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command DeleteImageGroupVDS failed: Image does not exist in domain: u'image=56ddac45-1913-4d24-af97-2f969fc3fa2e, domain=78bfcb57-3a0d-4a71-a3f8-480ebae37c74'
2019-11-12 12:09:12,090Z ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-67) [1b861e85] Command 'DeleteImageGroupVDSCommand( DeleteImageGroupVDSCommandParameters:{storagePoolId='47523e52-311f-4829-a731-3d91f7885974', ignoreFailoverLimit='false', storageDomainId='78bfcb57-3a0d-4a71-a3f8-480ebae37c74', imageGroupId='56ddac45-1913-4d24-af97-2f969fc3fa2e', postZeros='false', discard='false', forceDelete='false'})' execution failed: IRSGenericException: IRSErrorException: Image does not exist in domain: u'image=56ddac45-1913-4d24-af97-2f969fc3fa2e, domain=78bfcb57-3a0d-4a71-a3f8-480ebae37c74'
2019-11-12 12:09:12,090Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-67) [1b861e85] FINISH, DeleteImageGroupVDSCommand, return: , log id: 376f65dc
2019-11-12 12:09:12,090Z INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-67) [1b861e85] Disk '56ddac45-1913-4d24-af97-2f969fc3fa2e' doesn't exist on storage domain '78bfcb57-3a0d-4a71-a3f8-480ebae37c74', rolling forward
2019-11-12 12:09:12,101Z INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedThreadFactory-engineScheduled-Thread-67) [1b861e85] Removed task 'ebe0e879-8595-44db-9d64-e237afd6c8e8' from DataBase
2019-11-12 12:09:12,157Z INFO [org.ovirt.engine.core.utils.transaction.TransactionSupport] (EE-ManagedThreadFactory-engineScheduled-Thread-67) [] transaction rolled back
2019-11-12 12:09:12,157Z WARN [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Trying to release a shared lock for key: '400688e8-3c5b-4cca-9ef8-bbb6c79d54edDISK' , but lock does not exist
2019-11-12 12:09:12,157Z WARN [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Trying to release a shared lock for key: '819aecf6-c83d-4470-9915-e309390d8098TEMPLATE' , but lock does not exist
2019-11-12 12:09:12,157Z WARN [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Trying to release exclusive lock which does not exist, lock key: 'centos-testvm-a910VM_NAME'
2019-11-12 12:09:12,157Z INFO [org.ovirt.engine.core.bll.AddVmCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Lock freed to object 'EngineLock:{exclusiveLocks='[centos-testvm-a910=VM_NAME]', sharedLocks='[400688e8-3c5b-4cca-9ef8-bbb6c79d54ed=DISK, 819aecf6-c83d-4470-9915-e309390d8098=TEMPLATE]'}'
2019-11-12 12:09:12,172Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-67) [] EVENT_ID: USER_ADD_VM_FINISHED_FAILURE(60), Failed to complete VM centos-testvm-a910 creation.
2019-11-12 12:09:12,172Z ERROR [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller] (EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Failed invoking callback end method 'onFailed' for command '97d4103b-acef-4b55-a379-71a1baa04a4f' with exception 'null', the callback is marked for end method retries
2019-11-12 12:09:16,192Z ERROR [org.ovirt.engine.core.bll.AddVmCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-78) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] Ending command 'org.ovirt.engine.core.bll.AddVmCommand' with failure.
2019-11-12 12:09:16,195Z ERROR [org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-78) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] Ending command 'org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand' with failure.
2019-11-12 12:09:16,215Z INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-78) [8f3849f] Running command: RemoveImageCommand internal: true. Entities affected : ID: 78bfcb57-3a0d-4a71-a3f8-480ebae37c74 Type: Storage
2019-11-12 12:09:16,240Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-78) [8f3849f] START, DeleteImageGroupVDSCommand( DeleteImageGroupVDSCommandParameters:{storagePoolId='47523e52-311f-4829-a731-3d91f7885974', ignoreFailoverLimit='false', storageDomainId='78bfcb57-3a0d-4a71-a3f8-480ebae37c74', imageGroupId='56ddac45-1913-4d24-af97-2f969fc3fa2e', postZeros='false', discard='false', forceDelete='false'}), log id: 75027448
2019-11-12 12:09:16,263Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-78) [8f3849f] EVENT_ID: IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command DeleteImageGroupVDS failed: Image does not exist in domain: u'image=56ddac45-1913-4d24-af97-2f969fc3fa2e, domain=78bfcb57-3a0d-4a71-a3f8-480ebae37c74'
2019-11-12 12:09:16,263Z ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-78) [8f3849f] Command 'DeleteImageGroupVDSCommand( DeleteImageGroupVDSCommandParameters:{storagePoolId='47523e52-311f-4829-a731-3d91f7885974', ignoreFailoverLimit='false', storageDomainId='78bfcb57-3a0d-4a71-a3f8-480ebae37c74', imageGroupId='56ddac45-1913-4d24-af97-2f969fc3fa2e', postZeros='false', discard='false', forceDelete='false'})' execution failed: IRSGenericException: IRSErrorException: Image does not exist in domain: u'image=56ddac45-1913-4d24-af97-2f969fc3fa2e, domain=78bfcb57-3a0d-4a71-a3f8-480ebae37c74'
2019-11-12 12:09:16,263Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-78) [8f3849f] FINISH, DeleteImageGroupVDSCommand, return: , log id: 75027448
2019-11-12 12:09:16,263Z INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-78) [8f3849f] Disk '56ddac45-1913-4d24-af97-2f969fc3fa2e' doesn't exist on storage domain '78bfcb57-3a0d-4a71-a3f8-480ebae37c74', rolling forward
2019-11-12 12:09:16,274Z INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedThreadFactory-engineScheduled-Thread-78) [8f3849f] Removed task '688274fc-0596-44b5-b167-36c86b8b8e5d' from DataBase
2019-11-12 12:09:16,341Z INFO [org.ovirt.engine.core.utils.transaction.TransactionSupport] (EE-ManagedThreadFactory-engineScheduled-Thread-78) [] transaction rolled back
2019-11-12 12:09:16,341Z WARN [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (EE-ManagedThreadFactory-engineScheduled-Thread-78) [] Trying to release a shared lock for key: '400688e8-3c5b-4cca-9ef8-bbb6c79d54edDISK' , but lock does not exist
2019-11-12 12:09:16,341Z WARN [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (EE-ManagedThreadFactory-engineScheduled-Thread-78) [] Trying to release a shared lock for key: '819aecf6-c83d-4470-9915-e309390d8098TEMPLATE' , but lock does not exist
2019-11-12 12:09:16,342Z WARN [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (EE-ManagedThreadFactory-engineScheduled-Thread-78) [] Trying to release exclusive lock which does not exist, lock key: 'centos-testvm-a910VM_NAME'
2019-11-12 12:09:16,342Z INFO [org.ovirt.engine.core.bll.AddVmCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-78) [] Lock freed to object 'EngineLock:{exclusiveLocks='[centos-testvm-a910=VM_NAME]', sharedLocks='[400688e8-3c5b-4cca-9ef8-bbb6c79d54ed=DISK, 819aecf6-c83d-4470-9915-e309390d8098=TEMPLATE]'}'
2019-11-12 12:09:16,357Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-78) [] EVENT_ID: USER_ADD_VM_FINISHED_FAILURE(60), Failed to complete VM centos-testvm-a910 creation.
2019-11-12 12:09:16,357Z ERROR [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller] (EE-ManagedThreadFactory-engineScheduled-Thread-78) [] Failed invoking callback end method 'onFailed' for command '97d4103b-acef-4b55-a379-71a1baa04a4f' with exception 'null', the callback is marked for end method retries
2019-11-12 12:09:24,376Z ERROR [org.ovirt.engine.core.bll.AddVmCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] Ending command 'org.ovirt.engine.core.bll.AddVmCommand' with failure.
2019-11-12 12:09:24,380Z ERROR [org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] Ending command 'org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand' with failure.
2019-11-12 12:09:24,387Z INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [153e3c43] Running command: RemoveImageCommand internal: true. Entities affected : ID: 78bfcb57-3a0d-4a71-a3f8-480ebae37c74 Type: Storage
2019-11-12 12:09:24,419Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [153e3c43] START, DeleteImageGroupVDSCommand( DeleteImageGroupVDSCommandParameters:{storagePoolId='47523e52-311f-4829-a731-3d91f7885974', ignoreFailoverLimit='false', storageDomainId='78bfcb57-3a0d-4a71-a3f8-480ebae37c74', imageGroupId='56ddac45-1913-4d24-af97-2f969fc3fa2e', postZeros='false', discard='false', forceDelete='false'}), log id: 29a560b1
2019-11-12 12:09:24,444Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [153e3c43] EVENT_ID: IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command DeleteImageGroupVDS failed: Image does not exist in domain: u'image=56ddac45-1913-4d24-af97-2f969fc3fa2e, domain=78bfcb57-3a0d-4a71-a3f8-480ebae37c74'
2019-11-12 12:09:24,444Z ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [153e3c43] Command 'DeleteImageGroupVDSCommand( DeleteImageGroupVDSCommandParameters:{storagePoolId='47523e52-311f-4829-a731-3d91f7885974', ignoreFailoverLimit='false', storageDomainId='78bfcb57-3a0d-4a71-a3f8-480ebae37c74', imageGroupId='56ddac45-1913-4d24-af97-2f969fc3fa2e', postZeros='false', discard='false', forceDelete='false'})' execution failed: IRSGenericException: IRSErrorException: Image does not exist in domain: u'image=56ddac45-1913-4d24-af97-2f969fc3fa2e, domain=78bfcb57-3a0d-4a71-a3f8-480ebae37c74'
2019-11-12 12:09:24,444Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [153e3c43] FINISH, DeleteImageGroupVDSCommand, return: , log id: 29a560b1
2019-11-12 12:09:24,445Z INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [153e3c43] Disk '56ddac45-1913-4d24-af97-2f969fc3fa2e' doesn't exist on storage domain '78bfcb57-3a0d-4a71-a3f8-480ebae37c74', rolling forward
2019-11-12 12:09:24,455Z INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [153e3c43] Removed task '13b2490d-09a6-4230-a5d3-7b86b536e8c3' from DataBase
2019-11-12 12:09:24,509Z INFO [org.ovirt.engine.core.utils.transaction.TransactionSupport] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [] transaction rolled back
2019-11-12 12:09:24,509Z WARN [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [] Trying to release a shared lock for key: '400688e8-3c5b-4cca-9ef8-bbb6c79d54edDISK' , but lock does not exist
2019-11-12 12:09:24,509Z WARN [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [] Trying to release a shared lock for key: '819aecf6-c83d-4470-9915-e309390d8098TEMPLATE' , but lock does not exist
2019-11-12 12:09:24,509Z WARN [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [] Trying to release exclusive lock which does not exist, lock key: 'centos-testvm-a910VM_NAME'
2019-11-12 12:09:24,509Z INFO [org.ovirt.engine.core.bll.AddVmCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [] Lock freed to object 'EngineLock:{exclusiveLocks='[centos-testvm-a910=VM_NAME]', sharedLocks='[400688e8-3c5b-4cca-9ef8-bbb6c79d54ed=DISK, 819aecf6-c83d-4470-9915-e309390d8098=TEMPLATE]'}'
2019-11-12 12:09:24,527Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [] EVENT_ID: USER_ADD_VM_FINISHED_FAILURE(60), Failed to complete VM centos-testvm-a910 creation.
2019-11-12 12:09:24,527Z ERROR [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [] Failed invoking callback end method 'onFailed' for command '97d4103b-acef-4b55-a379-71a1baa04a4f' with exception 'null', the callback is marked for end method retries
2019-11-12 12:09:34,547Z ERROR [org.ovirt.engine.core.bll.AddVmCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-33) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] Ending command 'org.ovirt.engine.core.bll.AddVmCommand' with failure.
2019-11-12 12:09:34,551Z ERROR [org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-33) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] Ending command 'org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand' with failure.
2019-11-12 12:09:34,568Z INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-33) [28afe05e] Running command: RemoveImageCommand internal: true. Entities affected : ID: 78bfcb57-3a0d-4a71-a3f8-480ebae37c74 Type: Storage
2019-11-12 12:09:34,591Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-33) [28afe05e] START, DeleteImageGroupVDSCommand( DeleteImageGroupVDSCommandParameters:{storagePoolId='47523e52-311f-4829-a731-3d91f7885974', ignoreFailoverLimit='false', storageDomainId='78bfcb57-3a0d-4a71-a3f8-480ebae37c74', imageGroupId='56ddac45-1913-4d24-af97-2f969fc3fa2e', postZeros='false', discard='false', forceDelete='false'}), log id: 207e5265
2019-11-12 12:09:34,617Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-33) [28afe05e] EVENT_ID: IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command DeleteImageGroupVDS failed: Image does not exist in domain: u'image=56ddac45-1913-4d24-af97-2f969fc3fa2e, domain=78bfcb57-3a0d-4a71-a3f8-480ebae37c74'
2019-11-12 12:09:34,617Z ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-33) [28afe05e] Command 'DeleteImageGroupVDSCommand( DeleteImageGroupVDSCommandParameters:{storagePoolId='47523e52-311f-4829-a731-3d91f7885974', ignoreFailoverLimit='false', storageDomainId='78bfcb57-3a0d-4a71-a3f8-480ebae37c74', imageGroupId='56ddac45-1913-4d24-af97-2f969fc3fa2e', postZeros='false', discard='false', forceDelete='false'})' execution failed: IRSGenericException: IRSErrorException: Image does not exist in domain: u'image=56ddac45-1913-4d24-af97-2f969fc3fa2e, domain=78bfcb57-3a0d-4a71-a3f8-480ebae37c74'
2019-11-12 12:09:34,618Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-33) [28afe05e] FINISH, DeleteImageGroupVDSCommand, return: , log id: 207e5265
2019-11-12 12:09:34,618Z INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-33) [28afe05e] Disk '56ddac45-1913-4d24-af97-2f969fc3fa2e' doesn't exist on storage domain '78bfcb57-3a0d-4a71-a3f8-480ebae37c74', rolling forward
2019-11-12 12:09:34,629Z INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedThreadFactory-engineScheduled-Thread-33) [28afe05e] Removed task 'ad58e8d8-13cc-4ae4-8785-9df252b38c1f' from DataBase
2019-11-12 12:09:34,688Z INFO [org.ovirt.engine.core.utils.transaction.TransactionSupport] (EE-ManagedThreadFactory-engineScheduled-Thread-33) [] transaction rolled back
2019-11-12 12:09:34,688Z WARN [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (EE-ManagedThreadFactory-engineScheduled-Thread-33) [] Trying to release a shared lock for key: '400688e8-3c5b-4cca-9ef8-bbb6c79d54edDISK' , but lock does not exist
2019-11-12 12:09:34,688Z WARN [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (EE-ManagedThreadFactory-engineScheduled-Thread-33) [] Trying to release a shared lock for key: '819aecf6-c83d-4470-9915-e309390d8098TEMPLATE' , but lock does not exist
2019-11-12 12:09:34,688Z WARN [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (EE-ManagedThreadFactory-engineScheduled-Thread-33) [] Trying to release exclusive lock which does not exist, lock key: 'centos-testvm-a910VM_NAME'
2019-11-12 12:09:34,688Z INFO [org.ovirt.engine.core.bll.AddVmCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-33) [] Lock freed to object 'EngineLock:{exclusiveLocks='[centos-testvm-a910=VM_NAME]', sharedLocks='[400688e8-3c5b-4cca-9ef8-bbb6c79d54ed=DISK, 819aecf6-c83d-4470-9915-e309390d8098=TEMPLATE]'}'
2019-11-12 12:09:34,707Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-33) [] EVENT_ID: USER_ADD_VM_FINISHED_FAILURE(60), Failed to complete VM centos-testvm-a910 creation.
2019-11-12 12:09:34,707Z ERROR [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller] (EE-ManagedThreadFactory-engineScheduled-Thread-33) [] Failed invoking callback end method 'onFailed' for command '97d4103b-acef-4b55-a379-71a1baa04a4f' with exception 'null', the callback is marked for end method retries
2019-11-12 12:09:42,871Z INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-75) [58fc7cd2] Lock Acquired to object 'EngineLock:{exclusiveLocks='[02a18cd9-ead2-44c7-9923-8fe84d1865ee=PROVIDER]', sharedLocks=''}'
2019-11-12 12:09:42,880Z INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-75) [58fc7cd2] Running command: SyncNetworkProviderCommand internal: true.
2019-11-12 12:09:42,975Z INFO [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-49) [] User admin@internal successfully logged in with scopes: ovirt-app-api ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate ovirt-ext=token:password-access
2019-11-12 12:09:43,096Z INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-75) [58fc7cd2] Lock freed to object 'EngineLock:{exclusiveLocks='[02a18cd9-ead2-44c7-9923-8fe84d1865ee=PROVIDER]', sharedLocks=''}'
2019-11-12 12:09:44,729Z ERROR [org.ovirt.engine.core.bll.AddVmCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] Ending command 'org.ovirt.engine.core.bll.AddVmCommand' with failure.
2019-11-12 12:09:44,732Z ERROR [org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] Ending command 'org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand' with failure.
2019-11-12 12:09:44,757Z INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [14057067] Running command: RemoveImageCommand internal: true. Entities affected : ID: 78bfcb57-3a0d-4a71-a3f8-480ebae37c74 Type: Storage
2019-11-12 12:09:44,780Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [14057067] START, DeleteImageGroupVDSCommand( DeleteImageGroupVDSCommandParameters:{storagePoolId='47523e52-311f-4829-a731-3d91f7885974', ignoreFailoverLimit='false', storageDomainId='78bfcb57-3a0d-4a71-a3f8-480ebae37c74', imageGroupId='56ddac45-1913-4d24-af97-2f969fc3fa2e', postZeros='false', discard='false', forceDelete='false'}), log id: aaea0bc
2019-11-12 12:09:44,804Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [14057067] EVENT_ID: IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command DeleteImageGroupVDS failed: Image does not exist in domain: u'image=56ddac45-1913-4d24-af97-2f969fc3fa2e, domain=78bfcb57-3a0d-4a71-a3f8-480ebae37c74'
2019-11-12 12:09:44,804Z ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [14057067] Command 'DeleteImageGroupVDSCommand( DeleteImageGroupVDSCommandParameters:{storagePoolId='47523e52-311f-4829-a731-3d91f7885974', ignoreFailoverLimit='false', storageDomainId='78bfcb57-3a0d-4a71-a3f8-480ebae37c74', imageGroupId='56ddac45-1913-4d24-af97-2f969fc3fa2e', postZeros='false', discard='false', forceDelete='false'})' execution failed: IRSGenericException: IRSErrorException: Image does not exist in domain: u'image=56ddac45-1913-4d24-af97-2f969fc3fa2e, domain=78bfcb57-3a0d-4a71-a3f8-480ebae37c74'
2019-11-12 12:09:44,804Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [14057067] FINISH, DeleteImageGroupVDSCommand, return: , log id: aaea0bc
2019-11-12 12:09:44,804Z INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [14057067] Disk '56ddac45-1913-4d24-af97-2f969fc3fa2e' doesn't exist on storage domain '78bfcb57-3a0d-4a71-a3f8-480ebae37c74', rolling forward
2019-11-12 12:09:44,815Z INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [14057067] Removed task 'c964f397-f054-4d81-98d6-052ce5a30168' from DataBase
2019-11-12 12:09:44,887Z INFO [org.ovirt.engine.core.utils.transaction.TransactionSupport] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [] transaction rolled back
2019-11-12 12:09:44,887Z WARN [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [] Trying to release a shared lock for key: '400688e8-3c5b-4cca-9ef8-bbb6c79d54edDISK' , but lock does not exist
2019-11-12 12:09:44,887Z WARN [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [] Trying to release a shared lock for key: '819aecf6-c83d-4470-9915-e309390d8098TEMPLATE' , but lock does not exist
2019-11-12 12:09:44,888Z WARN [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [] Trying to release exclusive lock which does not exist, lock key: 'centos-testvm-a910VM_NAME'
2019-11-12 12:09:44,888Z INFO [org.ovirt.engine.core.bll.AddVmCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [] Lock freed to object 'EngineLock:{exclusiveLocks='[centos-testvm-a910=VM_NAME]', sharedLocks='[400688e8-3c5b-4cca-9ef8-bbb6c79d54ed=DISK, 819aecf6-c83d-4470-9915-e309390d8098=TEMPLATE]'}'
2019-11-12 12:09:44,903Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [] EVENT_ID: USER_ADD_VM_FINISHED_FAILURE(60), Failed to complete VM centos-testvm-a910 creation.
2019-11-12 12:09:44,924Z ERROR [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] Failed invoking callback end method 'onFailed' for command '97d4103b-acef-4b55-a379-71a1baa04a4f' with exception 'null', the callback is marked for end method retries but max number of retries have been attempted. The command will be marked as Failed.
2019-11-12 12:09:44,925Z INFO [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] Exception in invoking callback of command AddVm (97d4103b-acef-4b55-a379-71a1baa04a4f): NullPointerException:
2019-11-12 12:09:44,925Z ERROR [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] Error invoking callback method 'onFailed' for 'FAILED' command '97d4103b-acef-4b55-a379-71a1baa04a4f'
2019-11-12 12:09:44,925Z ERROR [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [7748e6ab-7fb3-4ec5-9b2a-c7012a050efa] Exception: javax.ejb.EJBTransactionRolledbackException
at org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInCallerTx(CMTTxInterceptor.java:203) [wildfly-ejb3-17.0.1.Final.jar:17.0.1.Final]
at org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:420) [wildfly-ejb3-17.0.1.Final.jar:17.0.1.Final]
at org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:148) [wildfly-ejb3-17.0.1.Final.jar:17.0.1.Final]
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509)
at org.jboss.weld.module.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:81) [weld-ejb-3.1.1.Final.jar:3.1.1.Final]
at org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:89)
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41) [wildfly-ejb3-17.0.1.Final.jar:17.0.1.Final]
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ejb3.component.invocationmetrics.WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:47) [wildfly-ejb3-17.0.1.Final.jar:17.0.1.Final]
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ejb3.security.SecurityContextInterceptor.processInvocation(SecurityContextInterceptor.java:100) [wildfly-ejb3-17.0.1.Final.jar:17.0.1.Final]
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ejb3.deployment.processors.StartupAwaitInterceptor.processInvocation(StartupAwaitInterceptor.java:22) [wildfly-ejb3-17.0.1.Final.jar:17.0.1.Final]
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ejb3.component.interceptors.ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64) [wildfly-ejb3-17.0.1.Final.jar:17.0.1.Final]
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:67) [wildfly-ejb3-17.0.1.Final.jar:17.0.1.Final]
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:60)
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:438)
at org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:618)
at org.jboss.invocation.AccessCheckingInterceptor.processInvocation(AccessCheckingInterceptor.java:57)
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:53)
at org.jboss.as.ee.component.ViewService$View.invoke(ViewService.java:198)
at org.jboss.as.ee.component.ViewDescription$1.processInvocation(ViewDescription.java:185)
at org.jboss.as.ee.component.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:81)
at org.ovirt.engine.core.bll.interfaces.BackendInternal$$$view2.endAction(Unknown Source) [bll.jar:]
at sun.reflect.GeneratedMethodAccessor1071.invoke(Unknown Source) [:1.8.0_222]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_222]
at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_222]
at org.jboss.weld.util.reflection.Reflections.invokeAndUnwrap(Reflections.java:410) [weld-core-impl-3.1.1.Final.jar:3.1.1.Final]
at org.jboss.weld.module.ejb.EnterpriseBeanProxyMethodHandler.invoke(EnterpriseBeanProxyMethodHandler.java:134) [weld-ejb-3.1.1.Final.jar:3.1.1.Final]
at org.jboss.weld.bean.proxy.EnterpriseTargetBeanInstance.invoke(EnterpriseTargetBeanInstance.java:56) [weld-core-impl-3.1.1.Final.jar:3.1.1.Final]
at org.jboss.weld.module.ejb.InjectionPointPropagatingEnterpriseTargetBeanInstance.invoke(InjectionPointPropagatingEnterpriseTargetBeanInstance.java:68) [weld-ejb-3.1.1.Final.jar:3.1.1.Final]
at org.jboss.weld.bean.proxy.ProxyMethodHandler.invoke(ProxyMethodHandler.java:106) [weld-core-impl-3.1.1.Final.jar:3.1.1.Final]
at org.ovirt.engine.core.bll.BackendCommandObjectsHandler$BackendInternal$BackendLocal$2049259618$Proxy$_$$_Weld$EnterpriseProxy$.endAction(Unknown Source) [bll.jar:]
at org.ovirt.engine.core.bll.VmCommand.endActionOnDisks(VmCommand.java:266) [bll.jar:]
at org.ovirt.engine.core.bll.AddVmCommand.endWithFailure(AddVmCommand.java:1452) [bll.jar:]
at org.ovirt.engine.core.bll.CommandBase.internalEndWithFailure(CommandBase.java:736) [bll.jar:]
at org.ovirt.engine.core.bll.CommandBase.endActionInTransactionScope(CommandBase.java:658) [bll.jar:]
at org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:2003) [bll.jar:]
at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInNewTransaction(TransactionSupport.java:202) [utils.jar:]
at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInRequired(TransactionSupport.java:137) [utils.jar:]
at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:105) [utils.jar:]
at org.ovirt.engine.core.bll.CommandBase.endAction(CommandBase.java:520) [bll.jar:]
at org.ovirt.engine.core.bll.ChildCommandsCallbackBase.endAction(ChildCommandsCallbackBase.java:134) [bll.jar:]
at org.ovirt.engine.core.bll.ChildCommandsCallbackBase.onFailed(ChildCommandsCallbackBase.java:174) [bll.jar:]
at org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.endCallback(CommandCallbacksPoller.java:67) [bll.jar:]
at org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:166) [bll.jar:]
at org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:109) [bll.jar:]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_222]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [rt.jar:1.8.0_222]
at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:383) [javax.enterprise.concurrent-1.0.jar:]
at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:534) [javax.enterprise.concurrent-1.0.jar:]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_222]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_222]
at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_222]
at org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250) [javax.enterprise.concurrent-1.0.jar:]
Caused by: java.lang.NullPointerException
Please take a look.
-----
kind regards/met vriendelijke groet
Marko Vrgotic
ActiveVideo
On 10/10/2019, 14:23, "Vrgotic, Marko" <M.Vrgotic@activevideo.com> wrote:
Hi,
I could not check, as in the meantime I destroyed my staging setup.
I am re-provisioning it now and will retest with Ansible.
On 10/10/2019, 10:51, "Vrgotic, Marko" <M.Vrgotic@activevideo.com> wrote:
Hi Ondra,
That is true,
but why does it work from the Portal? The image ID should not differ when running Ansible against the same template, should it?
I will check anyway.
On 10/10/2019, 09:47, "Ondra Machacek" <omachace@redhat.com> wrote:
The error message is:
Image does not exist in domain:
u'image=d680a46a-7188-45f6-b6a7-3333f830c0b4,
domain=935de3ac-a735-4fc8-9161-26bfd751ffc7'
Can you please check if the image with id
'd680a46a-7188-45f6-b6a7-3333f830c0b4' exists on storage domain with id
'935de3ac-a735-4fc8-9161-26bfd751ffc7'?
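One way to run that check is through the REST API with ovirt-engine-sdk-python (the same SDK listed in the pip freeze further down the thread). This is only a sketch: the engine URL and credentials below are placeholders, and the helper names are mine, not part of the SDK.

```python
def image_on_domain(disk_ids, image_id):
    # Pure helper: is the given image (disk) id among the domain's disk ids?
    return image_id in set(disk_ids)

def list_domain_disk_ids(engine_url, username, password, sd_id):
    # Requires ovirt-engine-sdk-python 4.x; import kept local so the
    # helper above stays usable without the SDK installed.
    import ovirtsdk4 as sdk
    conn = sdk.Connection(url=engine_url, username=username,
                          password=password, insecure=True)
    try:
        sd_svc = (conn.system_service()
                      .storage_domains_service()
                      .storage_domain_service(sd_id))
        # List the disks registered on this storage domain.
        return [disk.id for disk in sd_svc.disks_service().list()]
    finally:
        conn.close()

# Hypothetical usage (placeholder URL/credentials):
# ids = list_domain_disk_ids('https://engine.example/ovirt-engine/api',
#                            'admin@internal', 'secret',
#                            '935de3ac-a735-4fc8-9161-26bfd751ffc7')
# print(image_on_domain(ids, 'd680a46a-7188-45f6-b6a7-3333f830c0b4'))
```

If the image id is missing from that list, the template disk the Ansible run references really is absent from the domain, which would explain the engine error.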
On 08/10/2019 13:05, Vrgotic, Marko wrote:
> Dear oVirt,
>
> Just recently, I have upgraded oVirt staging environment from 4.3.4 to
> 4.3.6 release.
>
> Since then, I have noticed an issue when creating VMs from a template
> using Ansible. Creating a VM from the same template via the UI works fine.
>
> Tests were executed with the same user, with SuperUser privileges.
>
> pip freeze
>
> *ansible==2.7.13*
>
> asn1crypto==1.0.0
>
> bcrypt==3.1.7
>
> cffi==1.12.3
>
> cryptography==2.7
>
> dnspython==1.16.0
>
> ipaddress==1.0.22
>
> Jinja2==2.10.1
>
> lxml==4.4.1
>
> MarkupSafe==1.1.1
>
> netaddr==0.7.19
>
> *ovirt-engine-sdk-python==4.3.3*
>
> paramiko==2.6.0
>
> pycparser==2.19
>
> pycurl==7.43.0.3
>
> PyNaCl==1.3.0
>
> PyYAML==5.1.2
>
> six==1.12.0
>
> *Logs from engine: *
>
> 2019-10-08 09:57:42,294Z INFO [org.ovirt.engine.core.bll.AddVmCommand]
> (default task-23) [067ec489-ae6c-4871-8ab6-8296016ca1ce] Lock Acquired
> to object 'EngineLock:{exclusiveLocks='[centos-testvm-024=VM_NAME]',
> sharedLocks='[9ac6f4ad-58d0-4a7e-b424-91f2d76abcac=TEMPLATE,
> a7e42574-be60-4c94-94a5-cc4b30fdb16f=DISK]'}'
>
> 2019-10-08 09:57:42,299Z INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-43687) [390b4c75] FINISH,
> HSMClearTaskVDSCommand, return: , log id: be932a0
>
> 2019-10-08 09:57:42,299Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-43687) [390b4c75] FINISH,
> SPMClearTaskVDSCommand, return: , log id: 3e25883
>
> 2019-10-08 09:57:42,303Z INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (EE-ManagedThreadFactory-engine-Thread-43687) [390b4c75]
> BaseAsyncTask::removeTaskFromDB: Removed task
> '7e1d68a2-bb49-4456-b228-db851fe6603c' from DataBase
>
> 2019-10-08 09:57:42,303Z INFO
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> (EE-ManagedThreadFactory-engine-Thread-43687) [390b4c75]
> CommandAsyncTask::HandleEndActionResult [within thread]: Removing
> CommandMultiAsyncTasks object for entity
> '4f8a6897-66e8-453b-b706-246cb8505b3e'
>
> 2019-10-08 09:57:42,375Z INFO [org.ovirt.engine.core.bll.AddVmCommand]
> (default task-23) [067ec489-ae6c-4871-8ab6-8296016ca1ce] Running
> command: AddVmCommand internal: false. Entities affected : ID:
> 2e428504-c339-11e9-87ef-00163e3ec101 Type: ClusterAction group CREATE_VM
> with role type USER, ID: 9ac6f4ad-58d0-4a7e-b424-91f2d76abcac Type:
> VmTemplateAction group CREATE_VM with role type USER, ID:
> 935de3ac-a735-4fc8-9161-26bfd751ffc7 Type: StorageAction group
> CREATE_DISK with role type USER
>
> 2019-10-08 09:57:42,488Z INFO
> [org.ovirt.engine.core.bll.AddRngDeviceCommand] (default task-23)
> [332c57d4] Running command: AddRngDeviceCommand internal: true. Entities
> affected : ID: d5719cf0-2a64-4a28-bbf2-edc1bb05b777 Type: VMAction
> group EDIT_VM_PROPERTIES with role type USER
>
> 2019-10-08 09:57:42,498Z INFO
> [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (default
> task-23) [332c57d4] START, SetVmStatusVDSCommand(
> SetVmStatusVDSCommandParameters:{vmId='d5719cf0-2a64-4a28-bbf2-edc1bb05b777',
> status='ImageLocked', exitStatus='Normal'}), log id: 2fe004e2
>
> 2019-10-08 09:57:42,507Z INFO
> [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (default
> task-23) [332c57d4] FINISH, SetVmStatusVDSCommand, return: , log id:
> 2fe004e2
>
> 2019-10-08 09:57:42,528Z INFO
> [org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand]
> (default task-23) [067ec489-ae6c-4871-8ab6-8296016ca1ce] Running
> command: CreateSnapshotFromTemplateCommand internal: true. Entities
> affected : ID: 935de3ac-a735-4fc8-9161-26bfd751ffc7 Type: Storage
>
> 2019-10-08 09:57:42,554Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand]
> (default task-23) [067ec489-ae6c-4871-8ab6-8296016ca1ce] START,
> CreateVolumeVDSCommand(
> CreateVolumeVDSCommandParameters:{storagePoolId='2e3b3484-c339-11e9-8d02-00163e3ec101',
> ignoreFailoverLimit='false',
> storageDomainId='935de3ac-a735-4fc8-9161-26bfd751ffc7',
> imageGroupId='d680a46a-7188-45f6-b6a7-3333f830c0b4',
> imageSizeInBytes='8589934592', volumeFormat='COW',
> newImageId='9a96018c-ce7d-4c77-a91a-30fc16c7b2ba', imageType='Sparse',
> newImageDescription='', imageInitialSizeInBytes='0',
> imageId='f9e0c4d9-81dd-4e44-8a71-33e55d4399c6',
> sourceImageGroupId='a7e42574-be60-4c94-94a5-cc4b30fdb16f'}), log id:
> 37b29c7d
>
> 2019-10-08 09:57:42,639Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand]
> (default task-23) [067ec489-ae6c-4871-8ab6-8296016ca1ce] FINISH,
> CreateVolumeVDSCommand, return: 9a96018c-ce7d-4c77-a91a-30fc16c7b2ba,
> log id: 37b29c7d
>
> 2019-10-08 09:57:42,645Z INFO
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (default task-23)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce] CommandAsyncTask::Adding
> CommandMultiAsyncTasks object for command
> 'f1665b66-2032-4a60-944e-c04beace088e'
>
> 2019-10-08 09:57:42,645Z INFO
> [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (default task-23)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce]
> CommandMultiAsyncTasks::attachTask: Attaching task
> '99abc310-b44b-435c-8d43-922b782a69dc' to command
> 'f1665b66-2032-4a60-944e-c04beace088e'.
>
> 2019-10-08 09:57:42,660Z INFO
> [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (default task-23)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce] Adding task
> '99abc310-b44b-435c-8d43-922b782a69dc' (Parent Command
> 'CreateSnapshotFromTemplate', Parameters Type
> 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling
> hasn't started yet..
>
> 2019-10-08 09:57:42,934Z INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-23) [067ec489-ae6c-4871-8ab6-8296016ca1ce] EVENT_ID:
> USER_ADD_VM_STARTED(37), VM centos-testvm-024 creation was initiated by
> mteam@internal-authz.
>
> 2019-10-08 09:57:42,935Z INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (default task-23)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce] BaseAsyncTask::startPollingTask:
> Starting to poll task '99abc310-b44b-435c-8d43-922b782a69dc'.
>
> 2019-10-08 09:57:44,384Z INFO
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
> (EE-ManagedThreadFactory-engineScheduled-Thread-74)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce] Command 'AddVm' (id:
> '1b65f576-365d-4e8b-a920-621516cbdaed') waiting on child command id:
> 'f1665b66-2032-4a60-944e-c04beace088e' type:'CreateSnapshotFromTemplate'
> to complete
>
> 2019-10-08 09:57:48,393Z INFO
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
> (EE-ManagedThreadFactory-engineScheduled-Thread-37)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce] Command 'AddVm' (id:
> '1b65f576-365d-4e8b-a920-621516cbdaed') waiting on child command id:
> 'f1665b66-2032-4a60-944e-c04beace088e' type:'CreateSnapshotFromTemplate'
> to complete
>
> 2019-10-08 09:57:48,394Z INFO
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
> (EE-ManagedThreadFactory-engineScheduled-Thread-37) [5830ba58] Command
> 'AddDisk' id: '937776b8-b94d-4178-b9c2-e39fe42cb06c' child commands
> '[63409b0b-749a-45d5-85a5-baf55d5fa2a4]' executions were completed,
> status 'SUCCEEDED'
>
> 2019-10-08 09:57:48,394Z INFO
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
> (EE-ManagedThreadFactory-engineScheduled-Thread-37) [5830ba58] Command
> 'AddDisk' id: '937776b8-b94d-4178-b9c2-e39fe42cb06c' Updating status to
> 'SUCCEEDED', The command end method logic will be executed by one of its
> parent commands.
>
> 2019-10-08 09:57:48,398Z INFO
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
> (EE-ManagedThreadFactory-engineScheduled-Thread-37) [5830ba58] Command
> 'AddDisk' id: 'fdc5571e-a925-4400-a54a-a1ef1dcd23ae' child commands
> '[54d68cae-6585-4351-aaf7-fe687905acf4]' executions were completed,
> status 'SUCCEEDED'
>
> 2019-10-08 09:57:48,398Z INFO
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
> (EE-ManagedThreadFactory-engineScheduled-Thread-37) [5830ba58] Command
> 'AddDisk' id: 'fdc5571e-a925-4400-a54a-a1ef1dcd23ae' Updating status to
> 'SUCCEEDED', The command end method logic will be executed by one of its
> parent commands.
>
> 2019-10-08 09:57:48,401Z INFO
> [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
> (EE-ManagedThreadFactory-engineScheduled-Thread-37) [5830ba58] Command
> 'ProcessOvfUpdateForStorageDomain' (id:
> 'bdcdae3a-00f6-4176-9d27-0bb0f04912a3') waiting on child command id:
> 'b4050edd-62a7-4b83-a4ab-a21c83bf3ade'
> type:'CreateOvfVolumeForStorageDomain' to complete
>
> 2019-10-08 09:57:48,402Z INFO
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
> (EE-ManagedThreadFactory-engineScheduled-Thread-37) [5830ba58] Command
> 'CreateOvfVolumeForStorageDomain' id:
> 'b4050edd-62a7-4b83-a4ab-a21c83bf3ade' child commands
> '[937776b8-b94d-4178-b9c2-e39fe42cb06c]' executions were completed,
> status 'SUCCEEDED'
>
> 2019-10-08 09:57:48,404Z INFO
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
> (EE-ManagedThreadFactory-engineScheduled-Thread-37) [5830ba58] Command
> 'CreateOvfVolumeForStorageDomain' id:
> '5fd47910-95b2-4387-8e26-ea2c80b078a0' child commands
> '[fdc5571e-a925-4400-a54a-a1ef1dcd23ae]' executions were completed,
> status 'SUCCEEDED'
>
> 2019-10-08 09:57:49,416Z INFO
> [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
> (EE-ManagedThreadFactory-engineScheduled-Thread-3) [390b4c75] Command
> 'ProcessOvfUpdateForStorageDomain' id:
> '849bd16c-00d2-48a2-8fde-631af0dfe1b5' child commands
> '[a80a3657-c813-42c1-a43d-27442b5dc238,
> 4f8a6897-66e8-453b-b706-246cb8505b3e]' executions were completed, status
> 'SUCCEEDED'
>
> 2019-10-08 09:57:49,418Z INFO
> [org.ovirt.engine.core.bll.storage.ovfstore.CreateOvfVolumeForStorageDomainCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-3) [5830ba58] Ending
> command
> 'org.ovirt.engine.core.bll.storage.ovfstore.CreateOvfVolumeForStorageDomainCommand'
> successfully.
>
> 2019-10-08 09:57:49,420Z INFO
> [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-3) [5830ba58] Ending
> command 'org.ovirt.engine.core.bll.storage.disk.AddDiskCommand'
> successfully.
>
> 2019-10-08 09:57:49,426Z INFO
> [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-3)
> [5830ba58] Ending command
> 'org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand' successfully.
>
> 2019-10-08 09:57:49,427Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-3) [5830ba58] START,
> GetImageInfoVDSCommand(
> GetImageInfoVDSCommandParameters:{storagePoolId='2e3b3484-c339-11e9-8d02-00163e3ec101',
> ignoreFailoverLimit='false',
> storageDomainId='935de3ac-a735-4fc8-9161-26bfd751ffc7',
> imageGroupId='66984468-bbfa-4941-8c6b-df72a6d920f2',
> imageId='7db55211-041a-40c8-85a6-a2ff1bd182ce'}), log id: 6c02a73c
>
> 2019-10-08 09:57:49,429Z INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-3) [5830ba58] START,
> GetVolumeInfoVDSCommand(HostName = ovirt-staging-hv-01.avinity.tv,
> GetVolumeInfoVDSCommandParameters:{hostId='bfc4c379-bd5f-45be-98a4-f0469dc3817c',
> storagePoolId='2e3b3484-c339-11e9-8d02-00163e3ec101',
> storageDomainId='935de3ac-a735-4fc8-9161-26bfd751ffc7',
> imageGroupId='66984468-bbfa-4941-8c6b-df72a6d920f2',
> imageId='7db55211-041a-40c8-85a6-a2ff1bd182ce'}), log id: 2e131d9d
>
> 2019-10-08 09:57:49,456Z INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-3) [5830ba58] FINISH,
> GetVolumeInfoVDSCommand, return:
> org.ovirt.engine.core.common.businessentities.storage.DiskImage@26992dd0, log
> id: 2e131d9d
>
> 2019-10-08 09:57:49,456Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-3) [5830ba58] FINISH,
> GetImageInfoVDSCommand, return:
> org.ovirt.engine.core.common.businessentities.storage.DiskImage@26992dd0, log
> id: 6c02a73c
>
> 2019-10-08 09:57:49,464Z WARN
> [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-3) [] VM is null - no
> unlocking
>
> 2019-10-08 09:57:49,481Z INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engineScheduled-Thread-3) [] EVENT_ID:
> USER_ADD_DISK_FINISHED_SUCCESS(2,021), The disk 'OVF_STORE' was
> successfully added.
>
> 2019-10-08 09:57:49,490Z INFO
> [org.ovirt.engine.core.bll.storage.ovfstore.CreateOvfVolumeForStorageDomainCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-3) [5830ba58] Ending
> command
> 'org.ovirt.engine.core.bll.storage.ovfstore.CreateOvfVolumeForStorageDomainCommand'
> successfully.
>
> 2019-10-08 09:57:49,491Z INFO
> [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-3) [5830ba58] Ending
> command 'org.ovirt.engine.core.bll.storage.disk.AddDiskCommand'
> successfully.
>
> 2019-10-08 09:57:49,495Z INFO
> [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-3)
> [5830ba58] Ending command
> 'org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand' successfully.
>
> 2019-10-08 09:57:49,497Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-3) [5830ba58] START,
> GetImageInfoVDSCommand(
> GetImageInfoVDSCommandParameters:{storagePoolId='2e3b3484-c339-11e9-8d02-00163e3ec101',
> ignoreFailoverLimit='false',
> storageDomainId='935de3ac-a735-4fc8-9161-26bfd751ffc7',
> imageGroupId='ee208fd3-e05c-4ba8-aba9-fa521d69a870',
> imageId='484c1eed-b03a-4fa0-a12d-0087294dec86'}), log id: 7a3dad31
>
> 2019-10-08 09:57:49,498Z INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-3) [5830ba58] START,
> GetVolumeInfoVDSCommand(HostName = ovirt-staging-hv-01.avinity.tv,
> GetVolumeInfoVDSCommandParameters:{hostId='bfc4c379-bd5f-45be-98a4-f0469dc3817c',
> storagePoolId='2e3b3484-c339-11e9-8d02-00163e3ec101',
> storageDomainId='935de3ac-a735-4fc8-9161-26bfd751ffc7',
> imageGroupId='ee208fd3-e05c-4ba8-aba9-fa521d69a870',
> imageId='484c1eed-b03a-4fa0-a12d-0087294dec86'}), log id: 5b85c4a5
>
> 2019-10-08 09:57:49,522Z INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-3) [5830ba58] FINISH,
> GetVolumeInfoVDSCommand, return:
> org.ovirt.engine.core.common.businessentities.storage.DiskImage@13555e4,
> log id: 5b85c4a5
>
> 2019-10-08 09:57:49,522Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-3) [5830ba58] FINISH,
> GetImageInfoVDSCommand, return:
> org.ovirt.engine.core.common.businessentities.storage.DiskImage@13555e4,
> log id: 7a3dad31
>
> 2019-10-08 09:57:49,529Z WARN
> [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-3) [] VM is null - no
> unlocking
>
> 2019-10-08 09:57:49,540Z INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engineScheduled-Thread-3) [] EVENT_ID:
> USER_ADD_DISK_FINISHED_SUCCESS(2,021), The disk 'OVF_STORE' was
> successfully added.
>
> 2019-10-08 09:57:50,552Z INFO
> [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStorageDomainCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [390b4c75] Ending
> command
> 'org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStorageDomainCommand'
> successfully.
>
> 2019-10-08 09:57:50,575Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] START,
> SetVolumeDescriptionVDSCommand(
> SetVolumeDescriptionVDSCommandParameters:{storagePoolId='2e3b3484-c339-11e9-8d02-00163e3ec101',
> ignoreFailoverLimit='false',
> storageDomainId='935de3ac-a735-4fc8-9161-26bfd751ffc7',
> imageGroupId='ee208fd3-e05c-4ba8-aba9-fa521d69a870',
> imageId='484c1eed-b03a-4fa0-a12d-0087294dec86'}), log id: 7e77a9a8
>
> 2019-10-08 09:57:50,576Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] --
> executeIrsBrokerCommand: calling 'setVolumeDescription', parameters:
>
> 2019-10-08 09:57:50,576Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++
> spUUID=2e3b3484-c339-11e9-8d02-00163e3ec101
>
> 2019-10-08 09:57:50,576Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++
> sdUUID=935de3ac-a735-4fc8-9161-26bfd751ffc7
>
> 2019-10-08 09:57:50,576Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++
> imageGroupGUID=ee208fd3-e05c-4ba8-aba9-fa521d69a870
>
> 2019-10-08 09:57:50,576Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++
> volUUID=484c1eed-b03a-4fa0-a12d-0087294dec86
>
> 2019-10-08 09:57:50,576Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++
> description={"Updated":false,"Last Updated":null,"Storage
> Domains":[{"uuid":"935de3ac-a735-4fc8-9161-26bfd751ffc7"}],"Disk
> Description":"OVF_STORE"}
>
> 2019-10-08 09:57:50,606Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] FINISH,
> SetVolumeDescriptionVDSCommand, return: , log id: 7e77a9a8
>
> 2019-10-08 09:57:50,614Z INFO
> [org.ovirt.engine.core.bll.storage.ovfstore.UploadStreamCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] Lock
> Acquired to object 'EngineLock:{exclusiveLocks='',
> sharedLocks='[bfc4c379-bd5f-45be-98a4-f0469dc3817c=VDS_EXECUTION]'}'
>
> 2019-10-08 09:57:50,623Z INFO
> [org.ovirt.engine.core.bll.storage.ovfstore.UploadStreamCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] Running
> command: UploadStreamCommand internal: true. Entities affected : ID:
> 935de3ac-a735-4fc8-9161-26bfd751ffc7 Type: Storage
>
> 2019-10-08 09:57:50,624Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] START,
> UploadStreamVDSCommand(HostName = ovirt-staging-hv-01.avinity.tv,
> UploadStreamVDSCommandParameters:{hostId='bfc4c379-bd5f-45be-98a4-f0469dc3817c'}),
> log id: 531a4a4a
>
> 2019-10-08 09:57:50,625Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] --
> executeVdsBrokerCommand, parameters:
>
> 2019-10-08 09:57:50,625Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++
> spUUID=2e3b3484-c339-11e9-8d02-00163e3ec101
>
> 2019-10-08 09:57:50,625Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++
> sdUUID=935de3ac-a735-4fc8-9161-26bfd751ffc7
>
> 2019-10-08 09:57:50,625Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++
> imageGUID=ee208fd3-e05c-4ba8-aba9-fa521d69a870
>
> 2019-10-08 09:57:50,625Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++
> volUUID=484c1eed-b03a-4fa0-a12d-0087294dec86
>
> 2019-10-08 09:57:50,625Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++ size=30720
>
> 2019-10-08 09:57:51,331Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] FINISH,
> UploadStreamVDSCommand, return: , log id: 531a4a4a
>
> 2019-10-08 09:57:51,342Z INFO
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58]
> CommandAsyncTask::Adding CommandMultiAsyncTasks object for command
> 'c11f4d3c-f85b-43c5-986f-a517aed66d7e'
>
> 2019-10-08 09:57:51,342Z INFO
> [org.ovirt.engine.core.bll.CommandMultiAsyncTasks]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58]
> CommandMultiAsyncTasks::attachTask: Attaching task
> 'e5d00ff5-eaa2-48e9-ab10-d6d06a01e89d' to command
> 'c11f4d3c-f85b-43c5-986f-a517aed66d7e'.
>
> 2019-10-08 09:57:51,351Z INFO
> [org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] Adding
> task 'e5d00ff5-eaa2-48e9-ab10-d6d06a01e89d' (Parent Command
> 'UploadStream', Parameters Type
> 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling
> hasn't started yet..
>
> 2019-10-08 09:57:51,355Z INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58]
> BaseAsyncTask::startPollingTask: Starting to poll task
> 'e5d00ff5-eaa2-48e9-ab10-d6d06a01e89d'.
>
> 2019-10-08 09:57:51,358Z INFO
> [org.ovirt.engine.core.bll.storage.ovfstore.UploadStreamCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] Lock freed
> to object 'EngineLock:{exclusiveLocks='',
> sharedLocks='[bfc4c379-bd5f-45be-98a4-f0469dc3817c=VDS_EXECUTION]'}'
>
> 2019-10-08 09:57:51,360Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] START,
> SetVolumeDescriptionVDSCommand(
> SetVolumeDescriptionVDSCommandParameters:{storagePoolId='2e3b3484-c339-11e9-8d02-00163e3ec101',
> ignoreFailoverLimit='false',
> storageDomainId='935de3ac-a735-4fc8-9161-26bfd751ffc7',
> imageGroupId='ee208fd3-e05c-4ba8-aba9-fa521d69a870',
> imageId='484c1eed-b03a-4fa0-a12d-0087294dec86'}), log id: 238169f6
>
> 2019-10-08 09:57:51,360Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] --
> executeIrsBrokerCommand: calling 'setVolumeDescription', parameters:
>
> 2019-10-08 09:57:51,360Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++
> spUUID=2e3b3484-c339-11e9-8d02-00163e3ec101
>
> 2019-10-08 09:57:51,360Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++
> sdUUID=935de3ac-a735-4fc8-9161-26bfd751ffc7
>
> 2019-10-08 09:57:51,360Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++
> imageGroupGUID=ee208fd3-e05c-4ba8-aba9-fa521d69a870
>
> 2019-10-08 09:57:51,360Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++
> volUUID=484c1eed-b03a-4fa0-a12d-0087294dec86
>
> 2019-10-08 09:57:51,360Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++
> description={"Updated":true,"Size":30720,"Last Updated":"Tue Oct 08
> 09:57:50 UTC 2019","Storage
> Domains":[{"uuid":"935de3ac-a735-4fc8-9161-26bfd751ffc7"}],"Disk
> Description":"OVF_STORE"}
>
> 2019-10-08 09:57:51,389Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] FINISH,
> SetVolumeDescriptionVDSCommand, return: , log id: 238169f6
>
> 2019-10-08 09:57:51,596Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] START,
> SetVolumeDescriptionVDSCommand(
> SetVolumeDescriptionVDSCommandParameters:{storagePoolId='2e3b3484-c339-11e9-8d02-00163e3ec101',
> ignoreFailoverLimit='false',
> storageDomainId='935de3ac-a735-4fc8-9161-26bfd751ffc7',
> imageGroupId='66984468-bbfa-4941-8c6b-df72a6d920f2',
> imageId='7db55211-041a-40c8-85a6-a2ff1bd182ce'}), log id: 59bde70f
>
> 2019-10-08 09:57:51,627Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] --
> executeIrsBrokerCommand: calling 'setVolumeDescription', parameters:
>
> 2019-10-08 09:57:51,627Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++
> spUUID=2e3b3484-c339-11e9-8d02-00163e3ec101
>
> 2019-10-08 09:57:51,627Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++
> sdUUID=935de3ac-a735-4fc8-9161-26bfd751ffc7
>
> 2019-10-08 09:57:51,627Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++
> imageGroupGUID=66984468-bbfa-4941-8c6b-df72a6d920f2
>
> 2019-10-08 09:57:51,627Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++
> volUUID=7db55211-041a-40c8-85a6-a2ff1bd182ce
>
> 2019-10-08 09:57:51,627Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++
> description={"Updated":false,"Last Updated":null,"Storage
> Domains":[{"uuid":"935de3ac-a735-4fc8-9161-26bfd751ffc7"}],"Disk
> Description":"OVF_STORE"}
>
> 2019-10-08 09:57:51,664Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] FINISH,
> SetVolumeDescriptionVDSCommand, return: , log id: 59bde70f
>
> 2019-10-08 09:57:51,672Z INFO
> [org.ovirt.engine.core.bll.storage.ovfstore.UploadStreamCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] Lock
> Acquired to object 'EngineLock:{exclusiveLocks='',
> sharedLocks='[bfc4c379-bd5f-45be-98a4-f0469dc3817c=VDS_EXECUTION]'}'
>
> 2019-10-08 09:57:51,683Z INFO
> [org.ovirt.engine.core.bll.storage.ovfstore.UploadStreamCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] Running
> command: UploadStreamCommand internal: true. Entities affected : ID:
> 935de3ac-a735-4fc8-9161-26bfd751ffc7 Type: Storage
>
> 2019-10-08 09:57:51,684Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] START,
> UploadStreamVDSCommand(HostName = ovirt-staging-hv-01.avinity.tv,
> UploadStreamVDSCommandParameters:{hostId='bfc4c379-bd5f-45be-98a4-f0469dc3817c'}),
> log id: 1cb812b6
>
> 2019-10-08 09:57:51,684Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] --
> executeVdsBrokerCommand, parameters:
>
> 2019-10-08 09:57:51,684Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++
> spUUID=2e3b3484-c339-11e9-8d02-00163e3ec101
>
> 2019-10-08 09:57:51,684Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++
> sdUUID=935de3ac-a735-4fc8-9161-26bfd751ffc7
>
> 2019-10-08 09:57:51,684Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++
> imageGUID=66984468-bbfa-4941-8c6b-df72a6d920f2
>
> 2019-10-08 09:57:51,684Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++
> volUUID=7db55211-041a-40c8-85a6-a2ff1bd182ce
>
> 2019-10-08 09:57:51,684Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++ size=30720
>
> 2019-10-08 09:57:51,933Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] FINISH,
> UploadStreamVDSCommand, return: , log id: 1cb812b6
>
> 2019-10-08 09:57:51,943Z INFO
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58]
> CommandAsyncTask::Adding CommandMultiAsyncTasks object for command
> 'd0c07988-0e43-4e6d-b86c-e736b28d4537'
>
> 2019-10-08 09:57:51,943Z INFO
> [org.ovirt.engine.core.bll.CommandMultiAsyncTasks]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58]
> CommandMultiAsyncTasks::attachTask: Attaching task
> 'ba88edd0-8188-472e-8f59-aa3f90e63181' to command
> 'd0c07988-0e43-4e6d-b86c-e736b28d4537'.
>
> 2019-10-08 09:57:51,952Z INFO
> [org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] Adding
> task 'ba88edd0-8188-472e-8f59-aa3f90e63181' (Parent Command
> 'UploadStream', Parameters Type
> 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling
> hasn't started yet..
>
> 2019-10-08 09:57:51,958Z INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58]
> BaseAsyncTask::startPollingTask: Starting to poll task
> 'ba88edd0-8188-472e-8f59-aa3f90e63181'.
>
> 2019-10-08 09:57:51,961Z INFO
> [org.ovirt.engine.core.bll.storage.ovfstore.UploadStreamCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] Lock freed
> to object 'EngineLock:{exclusiveLocks='',
> sharedLocks='[bfc4c379-bd5f-45be-98a4-f0469dc3817c=VDS_EXECUTION]'}'
>
> 2019-10-08 09:57:51,962Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] START,
> SetVolumeDescriptionVDSCommand(
> SetVolumeDescriptionVDSCommandParameters:{storagePoolId='2e3b3484-c339-11e9-8d02-00163e3ec101',
> ignoreFailoverLimit='false',
> storageDomainId='935de3ac-a735-4fc8-9161-26bfd751ffc7',
> imageGroupId='66984468-bbfa-4941-8c6b-df72a6d920f2',
> imageId='7db55211-041a-40c8-85a6-a2ff1bd182ce'}), log id: 213b3075
>
> 2019-10-08 09:57:51,962Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] --
> executeIrsBrokerCommand: calling 'setVolumeDescription', parameters:
>
> 2019-10-08 09:57:51,962Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++
> spUUID=2e3b3484-c339-11e9-8d02-00163e3ec101
>
> 2019-10-08 09:57:51,962Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++
> sdUUID=935de3ac-a735-4fc8-9161-26bfd751ffc7
>
> 2019-10-08 09:57:51,962Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++
> imageGroupGUID=66984468-bbfa-4941-8c6b-df72a6d920f2
>
> 2019-10-08 09:57:51,962Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++
> volUUID=7db55211-041a-40c8-85a6-a2ff1bd182ce
>
> 2019-10-08 09:57:51,962Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] ++
> description={"Updated":true,"Size":30720,"Last Updated":"Tue Oct 08
> 09:57:50 UTC 2019","Storage
> Domains":[{"uuid":"935de3ac-a735-4fc8-9161-26bfd751ffc7"}],"Disk
> Description":"OVF_STORE"}
>
> 2019-10-08 09:57:52,028Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-1) [5830ba58] FINISH,
> SetVolumeDescriptionVDSCommand, return: , log id: 213b3075
>
> 2019-10-08 09:57:52,224Z INFO
> [org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
> (EE-ManagedThreadFactory-engineScheduled-Thread-91) [] Polling and
> updating Async Tasks: 7 tasks, 3 tasks to poll now
>
> 2019-10-08 09:57:52,228Z ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-91) [] Failed in
> 'HSMGetAllTasksStatusesVDS' method
>
> 2019-10-08 09:57:52,235Z ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engineScheduled-Thread-91) [] EVENT_ID:
> VDS_BROKER_COMMAND_FAILURE(10,802), VDSM ovirt-staging-hv-01.avinity.tv
> command HSMGetAllTasksStatusesVDS failed: Invalid parameter: 'size=16777216'
>
> 2019-10-08 09:57:52,235Z INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (EE-ManagedThreadFactory-engineScheduled-Thread-91) []
> SPMAsyncTask::PollTask: Polling task
> '99abc310-b44b-435c-8d43-922b782a69dc' (Parent Command
> 'CreateSnapshotFromTemplate', Parameters Type
> 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned
> status 'finished', result 'cleanSuccess'.
>
> 2019-10-08 09:57:52,240Z ERROR
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (EE-ManagedThreadFactory-engineScheduled-Thread-91) []
> BaseAsyncTask::logEndTaskFailure: Task
> '99abc310-b44b-435c-8d43-922b782a69dc' (Parent Command
> 'CreateSnapshotFromTemplate', Parameters Type
> 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended
> with failure:
>
> -- Result: 'cleanSuccess'
>
> -- Message: 'VDSGenericException: VDSErrorException: Failed to
> HSMGetAllTasksStatusesVDS, error = Invalid parameter: 'size=16777216',
> code = 100',
>
> -- Exception: 'VDSGenericException: VDSErrorException: Failed to
> HSMGetAllTasksStatusesVDS, error = Invalid parameter: 'size=16777216',
> code = 100'
>
> 2019-10-08 09:57:52,241Z INFO
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> (EE-ManagedThreadFactory-engineScheduled-Thread-91) []
> CommandAsyncTask::endActionIfNecessary: All tasks of command
> 'f1665b66-2032-4a60-944e-c04beace088e' has ended -> executing 'endAction'
>
> 2019-10-08 09:57:52,241Z INFO
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> (EE-ManagedThreadFactory-engineScheduled-Thread-91) []
> CommandAsyncTask::endAction: Ending action for '1' tasks (command ID:
> 'f1665b66-2032-4a60-944e-c04beace088e'): calling endAction '.
>
> 2019-10-08 09:57:52,241Z INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (EE-ManagedThreadFactory-engineScheduled-Thread-91) []
> SPMAsyncTask::PollTask: Polling task
> 'e5d00ff5-eaa2-48e9-ab10-d6d06a01e89d' (Parent Command 'UploadStream',
> Parameters Type
> 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned
> status 'finished', result 'success'.
>
> 2019-10-08 09:57:52,241Z INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (EE-ManagedThreadFactory-engineScheduled-Thread-91) []
> BaseAsyncTask::onTaskEndSuccess: Task
> 'e5d00ff5-eaa2-48e9-ab10-d6d06a01e89d' (Parent Command 'UploadStream',
> Parameters Type
> 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended
> successfully.
>
> 2019-10-08 09:57:52,241Z INFO
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> (EE-ManagedThreadFactory-engine-Thread-43693) []
> CommandAsyncTask::endCommandAction [within thread] context: Attempting
> to endAction 'CreateSnapshotFromTemplate',
>
> 2019-10-08 09:57:52,242Z INFO
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> (EE-ManagedThreadFactory-engineScheduled-Thread-91) []
> CommandAsyncTask::endActionIfNecessary: All tasks of command
> 'c11f4d3c-f85b-43c5-986f-a517aed66d7e' has ended -> executing 'endAction'
>
> 2019-10-08 09:57:52,243Z INFO
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> (EE-ManagedThreadFactory-engineScheduled-Thread-91) []
> CommandAsyncTask::endAction: Ending action for '1' tasks (command ID:
> 'c11f4d3c-f85b-43c5-986f-a517aed66d7e'): calling endAction '.
>
> 2019-10-08 09:57:52,243Z INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (EE-ManagedThreadFactory-engineScheduled-Thread-91) []
> SPMAsyncTask::PollTask: Polling task
> 'ba88edd0-8188-472e-8f59-aa3f90e63181' (Parent Command 'UploadStream',
> Parameters Type
> 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned
> status 'finished', result 'success'.
>
> 2019-10-08 09:57:52,243Z INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (EE-ManagedThreadFactory-engineScheduled-Thread-91) []
> BaseAsyncTask::onTaskEndSuccess: Task
> 'ba88edd0-8188-472e-8f59-aa3f90e63181' (Parent Command 'UploadStream',
> Parameters Type
> 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended
> successfully.
>
> 2019-10-08 09:57:52,243Z INFO
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> (EE-ManagedThreadFactory-engine-Thread-43694) []
> CommandAsyncTask::endCommandAction [within thread] context: Attempting
> to endAction 'UploadStream',
>
> 2019-10-08 09:57:52,244Z INFO
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> (EE-ManagedThreadFactory-engineScheduled-Thread-91) []
> CommandAsyncTask::endActionIfNecessary: All tasks of command
> 'd0c07988-0e43-4e6d-b86c-e736b28d4537' has ended -> executing 'endAction'
>
> 2019-10-08 09:57:52,244Z INFO
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> (EE-ManagedThreadFactory-engineScheduled-Thread-91) []
> CommandAsyncTask::endAction: Ending action for '1' tasks (command ID:
> 'd0c07988-0e43-4e6d-b86c-e736b28d4537'): calling endAction '.
>
> 2019-10-08 09:57:52,245Z INFO
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> (EE-ManagedThreadFactory-engine-Thread-43695) []
> CommandAsyncTask::endCommandAction [within thread] context: Attempting
> to endAction 'UploadStream',
>
> 2019-10-08 09:57:52,245Z INFO
> [org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand]
> (EE-ManagedThreadFactory-engine-Thread-43693)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce] Command
> [id=f1665b66-2032-4a60-944e-c04beace088e]: Updating status to 'FAILED',
> The command end method logic will be executed by one of its parent commands.
>
> 2019-10-08 09:57:52,247Z INFO
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> (EE-ManagedThreadFactory-engine-Thread-43693)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce]
> CommandAsyncTask::HandleEndActionResult [within thread]: endAction for
> action type 'CreateSnapshotFromTemplate' completed, handling the result.
>
> 2019-10-08 09:57:52,247Z INFO
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> (EE-ManagedThreadFactory-engine-Thread-43693)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce]
> CommandAsyncTask::HandleEndActionResult [within thread]: endAction for
> action type 'CreateSnapshotFromTemplate' succeeded, clearing tasks.
>
> 2019-10-08 09:57:52,247Z INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (EE-ManagedThreadFactory-engine-Thread-43693)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce] SPMAsyncTask::ClearAsyncTask:
> Attempting to clear task '99abc310-b44b-435c-8d43-922b782a69dc'
>
> 2019-10-08 09:57:52,247Z INFO
> [org.ovirt.engine.core.bll.storage.ovfstore.UploadStreamCommand]
> (EE-ManagedThreadFactory-engine-Thread-43695) [5830ba58] Ending command
> 'org.ovirt.engine.core.bll.storage.ovfstore.UploadStreamCommand'
> successfully.
>
> 2019-10-08 09:57:52,247Z INFO
> [org.ovirt.engine.core.bll.storage.ovfstore.UploadStreamCommand]
> (EE-ManagedThreadFactory-engine-Thread-43694) [5830ba58] Ending command
> 'org.ovirt.engine.core.bll.storage.ovfstore.UploadStreamCommand'
> successfully.
>
> 2019-10-08 09:57:52,247Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-43693)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce] START, SPMClearTaskVDSCommand(
> SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='2e3b3484-c339-11e9-8d02-00163e3ec101',
> ignoreFailoverLimit='false',
> taskId='99abc310-b44b-435c-8d43-922b782a69dc'}), log id: 66a4e67
>
> 2019-10-08 09:57:52,247Z INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-43693)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce] START,
> HSMClearTaskVDSCommand(HostName = ovirt-staging-hv-01.avinity.tv,
> HSMTaskGuidBaseVDSCommandParameters:{hostId='bfc4c379-bd5f-45be-98a4-f0469dc3817c',
> taskId='99abc310-b44b-435c-8d43-922b782a69dc'}), log id: 7ec6d8b8
>
> 2019-10-08 09:57:52,249Z INFO
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> (EE-ManagedThreadFactory-engine-Thread-43694) [5830ba58]
> CommandAsyncTask::HandleEndActionResult [within thread]: endAction for
> action type 'UploadStream' completed, handling the result.
>
> 2019-10-08 09:57:52,249Z INFO
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> (EE-ManagedThreadFactory-engine-Thread-43694) [5830ba58]
> CommandAsyncTask::HandleEndActionResult [within thread]: endAction for
> action type 'UploadStream' succeeded, clearing tasks.
>
> 2019-10-08 09:57:52,249Z INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (EE-ManagedThreadFactory-engine-Thread-43694) [5830ba58]
> SPMAsyncTask::ClearAsyncTask: Attempting to clear task
> 'e5d00ff5-eaa2-48e9-ab10-d6d06a01e89d'
>
> 2019-10-08 09:57:52,249Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-43694) [5830ba58] START,
> SPMClearTaskVDSCommand(
> SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='2e3b3484-c339-11e9-8d02-00163e3ec101',
> ignoreFailoverLimit='false',
> taskId='e5d00ff5-eaa2-48e9-ab10-d6d06a01e89d'}), log id: 356927d
>
> 2019-10-08 09:57:52,250Z INFO
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> (EE-ManagedThreadFactory-engine-Thread-43695) [5830ba58]
> CommandAsyncTask::HandleEndActionResult [within thread]: endAction for
> action type 'UploadStream' completed, handling the result.
>
> 2019-10-08 09:57:52,250Z INFO
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> (EE-ManagedThreadFactory-engine-Thread-43695) [5830ba58]
> CommandAsyncTask::HandleEndActionResult [within thread]: endAction for
> action type 'UploadStream' succeeded, clearing tasks.
>
> 2019-10-08 09:57:52,250Z INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (EE-ManagedThreadFactory-engine-Thread-43695) [5830ba58]
> SPMAsyncTask::ClearAsyncTask: Attempting to clear task
> 'ba88edd0-8188-472e-8f59-aa3f90e63181'
>
> 2019-10-08 09:57:52,251Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-43695) [5830ba58] START,
> SPMClearTaskVDSCommand(
> SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='2e3b3484-c339-11e9-8d02-00163e3ec101',
> ignoreFailoverLimit='false',
> taskId='ba88edd0-8188-472e-8f59-aa3f90e63181'}), log id: 75844e34
>
> 2019-10-08 09:57:52,259Z INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-43693)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce] FINISH, HSMClearTaskVDSCommand,
> return: , log id: 7ec6d8b8
>
> 2019-10-08 09:57:52,259Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-43693)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce] FINISH, SPMClearTaskVDSCommand,
> return: , log id: 66a4e67
>
> 2019-10-08 09:57:52,260Z INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-43694) [5830ba58] START,
> HSMClearTaskVDSCommand(HostName = ovirt-staging-hv-01.avinity.tv,
> HSMTaskGuidBaseVDSCommandParameters:{hostId='bfc4c379-bd5f-45be-98a4-f0469dc3817c',
> taskId='e5d00ff5-eaa2-48e9-ab10-d6d06a01e89d'}), log id: 3b239769
>
> 2019-10-08 09:57:52,263Z INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (EE-ManagedThreadFactory-engine-Thread-43693)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce] BaseAsyncTask::removeTaskFromDB:
> Removed task '99abc310-b44b-435c-8d43-922b782a69dc' from DataBase
>
> 2019-10-08 09:57:52,263Z INFO
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> (EE-ManagedThreadFactory-engine-Thread-43693)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce]
> CommandAsyncTask::HandleEndActionResult [within thread]: Removing
> CommandMultiAsyncTasks object for entity
> 'f1665b66-2032-4a60-944e-c04beace088e'
>
> 2019-10-08 09:57:52,297Z INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-43694) [5830ba58] FINISH,
> HSMClearTaskVDSCommand, return: , log id: 3b239769
>
> 2019-10-08 09:57:52,297Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-43694) [5830ba58] FINISH,
> SPMClearTaskVDSCommand, return: , log id: 356927d
>
> 2019-10-08 09:57:52,298Z INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-43695) [5830ba58] START,
> HSMClearTaskVDSCommand(HostName = ovirt-staging-hv-01.avinity.tv,
> HSMTaskGuidBaseVDSCommandParameters:{hostId='bfc4c379-bd5f-45be-98a4-f0469dc3817c',
> taskId='ba88edd0-8188-472e-8f59-aa3f90e63181'}), log id: 32d44e1
>
> 2019-10-08 09:57:52,301Z INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (EE-ManagedThreadFactory-engine-Thread-43694) [5830ba58]
> BaseAsyncTask::removeTaskFromDB: Removed task
> 'e5d00ff5-eaa2-48e9-ab10-d6d06a01e89d' from DataBase
>
> 2019-10-08 09:57:52,301Z INFO
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> (EE-ManagedThreadFactory-engine-Thread-43694) [5830ba58]
> CommandAsyncTask::HandleEndActionResult [within thread]: Removing
> CommandMultiAsyncTasks object for entity
> 'c11f4d3c-f85b-43c5-986f-a517aed66d7e'
>
> 2019-10-08 09:57:52,314Z INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-43695) [5830ba58] FINISH,
> HSMClearTaskVDSCommand, return: , log id: 32d44e1
>
> 2019-10-08 09:57:52,314Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-43695) [5830ba58] FINISH,
> SPMClearTaskVDSCommand, return: , log id: 75844e34
>
> 2019-10-08 09:57:52,317Z INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (EE-ManagedThreadFactory-engine-Thread-43695) [5830ba58]
> BaseAsyncTask::removeTaskFromDB: Removed task
> 'ba88edd0-8188-472e-8f59-aa3f90e63181' from DataBase
>
> 2019-10-08 09:57:52,317Z INFO
> [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> (EE-ManagedThreadFactory-engine-Thread-43695) [5830ba58]
> CommandAsyncTask::HandleEndActionResult [within thread]: Removing
> CommandMultiAsyncTasks object for entity
> 'd0c07988-0e43-4e6d-b86c-e736b28d4537'
>
> 2019-10-08 09:57:54,039Z INFO
> [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
> (EE-ManagedThreadFactory-engineScheduled-Thread-44) [5830ba58] Command
> 'ProcessOvfUpdateForStorageDomain' id:
> 'bdcdae3a-00f6-4176-9d27-0bb0f04912a3' child commands
> '[b4050edd-62a7-4b83-a4ab-a21c83bf3ade,
> 5fd47910-95b2-4387-8e26-ea2c80b078a0,
> c11f4d3c-f85b-43c5-986f-a517aed66d7e,
> d0c07988-0e43-4e6d-b86c-e736b28d4537]' executions were completed, status
> 'SUCCEEDED'
>
> 2019-10-08 09:57:55,046Z INFO
> [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStorageDomainCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-40) [5830ba58] Ending
> command
> 'org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStorageDomainCommand'
> successfully.
>
> 2019-10-08 09:57:58,077Z INFO
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
> (EE-ManagedThreadFactory-engineScheduled-Thread-76)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce] Command 'AddVm' id:
> '1b65f576-365d-4e8b-a920-621516cbdaed' child commands
> '[f1665b66-2032-4a60-944e-c04beace088e]' executions were completed,
> status 'FAILED'
>
> 2019-10-08 09:57:59,094Z ERROR [org.ovirt.engine.core.bll.AddVmCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-27)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce] Ending command
> 'org.ovirt.engine.core.bll.AddVmCommand' with failure.
>
> 2019-10-08 09:57:59,099Z ERROR
> [org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-27)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce] Ending command
> 'org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand'
> with failure.
>
> 2019-10-08 09:57:59,104Z INFO
> [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-27) [5794ed92] Running
> command: RemoveImageCommand internal: true. Entities affected : ID:
> 935de3ac-a735-4fc8-9161-26bfd751ffc7 Type: Storage
>
> 2019-10-08 09:57:59,126Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-27) [5794ed92] START,
> DeleteImageGroupVDSCommand(
> DeleteImageGroupVDSCommandParameters:{storagePoolId='2e3b3484-c339-11e9-8d02-00163e3ec101',
> ignoreFailoverLimit='false',
> storageDomainId='935de3ac-a735-4fc8-9161-26bfd751ffc7',
> imageGroupId='d680a46a-7188-45f6-b6a7-3333f830c0b4', postZeros='false',
> discard='false', forceDelete='false'}), log id: 69861a78
>
> 2019-10-08 09:57:59,139Z ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engineScheduled-Thread-27) [5794ed92] EVENT_ID:
> IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command DeleteImageGroupVDS
> failed: Image does not exist in domain:
> u'image=d680a46a-7188-45f6-b6a7-3333f830c0b4,
> domain=935de3ac-a735-4fc8-9161-26bfd751ffc7'
>
> 2019-10-08 09:57:59,139Z ERROR
> [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-27) [5794ed92] Command
> 'DeleteImageGroupVDSCommand(
> DeleteImageGroupVDSCommandParameters:{storagePoolId='2e3b3484-c339-11e9-8d02-00163e3ec101',
> ignoreFailoverLimit='false',
> storageDomainId='935de3ac-a735-4fc8-9161-26bfd751ffc7',
> imageGroupId='d680a46a-7188-45f6-b6a7-3333f830c0b4', postZeros='false',
> discard='false', forceDelete='false'})' execution failed:
> IRSGenericException: IRSErrorException: Image does not exist in domain:
> u'image=d680a46a-7188-45f6-b6a7-3333f830c0b4,
> domain=935de3ac-a735-4fc8-9161-26bfd751ffc7'
>
> 2019-10-08 09:57:59,139Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-27) [5794ed92] FINISH,
> DeleteImageGroupVDSCommand, return: , log id: 69861a78
>
> 2019-10-08 09:57:59,139Z INFO
> [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-27) [5794ed92] Disk
> 'd680a46a-7188-45f6-b6a7-3333f830c0b4' doesn't exist on storage domain
> '935de3ac-a735-4fc8-9161-26bfd751ffc7', rolling forward
>
> 2019-10-08 09:57:59,149Z INFO
> [org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
> (EE-ManagedThreadFactory-engineScheduled-Thread-27) [5794ed92] Removed
> task 'b4c4c723-4ba1-4d5a-86e6-24658a293a04' from DataBase
>
> 2019-10-08 09:57:59,220Z INFO
> [org.ovirt.engine.core.utils.transaction.TransactionSupport]
> (EE-ManagedThreadFactory-engineScheduled-Thread-27) [] transaction
> rolled back
>
> 2019-10-08 09:57:59,220Z INFO [org.ovirt.engine.core.bll.AddVmCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-27) [] Lock freed to
> object 'EngineLock:{exclusiveLocks='[centos-testvm-024=VM_NAME]',
> sharedLocks='[9ac6f4ad-58d0-4a7e-b424-91f2d76abcac=TEMPLATE,
> a7e42574-be60-4c94-94a5-cc4b30fdb16f=DISK]'}'
>
> 2019-10-08 09:57:59,233Z ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engineScheduled-Thread-27) [] EVENT_ID:
> USER_ADD_VM_FINISHED_FAILURE(60), Failed to complete VM
> centos-testvm-024 creation.
>
> 2019-10-08 09:57:59,233Z ERROR
> [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller]
> (EE-ManagedThreadFactory-engineScheduled-Thread-27) [] Failed invoking
> callback end method 'onFailed' for command
> '1b65f576-365d-4e8b-a920-621516cbdaed' with exception 'null', the
> callback is marked for end method retries
>
> 2019-10-08 09:58:01,252Z ERROR [org.ovirt.engine.core.bll.AddVmCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-85)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce] Ending command
> 'org.ovirt.engine.core.bll.AddVmCommand' with failure.
>
> 2019-10-08 09:58:01,257Z ERROR
> [org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-85)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce] Ending command
> 'org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand'
> with failure.
>
> 2019-10-08 09:58:01,270Z INFO
> [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-85) [19251787] Running
> command: RemoveImageCommand internal: true. Entities affected : ID:
> 935de3ac-a735-4fc8-9161-26bfd751ffc7 Type: Storage
>
> 2019-10-08 09:58:01,308Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-85) [19251787] START,
> DeleteImageGroupVDSCommand(
> DeleteImageGroupVDSCommandParameters:{storagePoolId='2e3b3484-c339-11e9-8d02-00163e3ec101',
> ignoreFailoverLimit='false',
> storageDomainId='935de3ac-a735-4fc8-9161-26bfd751ffc7',
> imageGroupId='d680a46a-7188-45f6-b6a7-3333f830c0b4', postZeros='false',
> discard='false', forceDelete='false'}), log id: 35fec55
>
> 2019-10-08 09:58:01,324Z ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engineScheduled-Thread-85) [19251787] EVENT_ID:
> IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command DeleteImageGroupVDS
> failed: Image does not exist in domain:
> u'image=d680a46a-7188-45f6-b6a7-3333f830c0b4,
> domain=935de3ac-a735-4fc8-9161-26bfd751ffc7'
>
> 2019-10-08 09:58:01,324Z ERROR
> [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-85) [19251787] Command
> 'DeleteImageGroupVDSCommand(
> DeleteImageGroupVDSCommandParameters:{storagePoolId='2e3b3484-c339-11e9-8d02-00163e3ec101',
> ignoreFailoverLimit='false',
> storageDomainId='935de3ac-a735-4fc8-9161-26bfd751ffc7',
> imageGroupId='d680a46a-7188-45f6-b6a7-3333f830c0b4', postZeros='false',
> discard='false', forceDelete='false'})' execution failed:
> IRSGenericException: IRSErrorException: Image does not exist in domain:
> u'image=d680a46a-7188-45f6-b6a7-3333f830c0b4,
> domain=935de3ac-a735-4fc8-9161-26bfd751ffc7'
>
> 2019-10-08 09:58:01,324Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-85) [19251787] FINISH,
> DeleteImageGroupVDSCommand, return: , log id: 35fec55
>
> 2019-10-08 09:58:01,324Z INFO
> [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-85) [19251787] Disk
> 'd680a46a-7188-45f6-b6a7-3333f830c0b4' doesn't exist on storage domain
> '935de3ac-a735-4fc8-9161-26bfd751ffc7', rolling forward
>
> 2019-10-08 09:58:01,335Z INFO
> [org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
> (EE-ManagedThreadFactory-engineScheduled-Thread-85) [19251787] Removed
> task '6ed0a844-5363-4afd-bffc-a80b6ace69ea' from DataBase
>
> 2019-10-08 09:58:01,395Z INFO
> [org.ovirt.engine.core.utils.transaction.TransactionSupport]
> (EE-ManagedThreadFactory-engineScheduled-Thread-85) [] transaction
> rolled back
>
> 2019-10-08 09:58:01,395Z WARN
> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
> (EE-ManagedThreadFactory-engineScheduled-Thread-85) [] Trying to release
> a shared lock for key: '9ac6f4ad-58d0-4a7e-b424-91f2d76abcacTEMPLATE' ,
> but lock does not exist
>
> 2019-10-08 09:58:01,395Z WARN
> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
> (EE-ManagedThreadFactory-engineScheduled-Thread-85) [] Trying to release
> a shared lock for key: 'a7e42574-be60-4c94-94a5-cc4b30fdb16fDISK' , but
> lock does not exist
>
> 2019-10-08 09:58:01,395Z WARN
> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
> (EE-ManagedThreadFactory-engineScheduled-Thread-85) [] Trying to release
> exclusive lock which does not exist, lock key: 'centos-testvm-024VM_NAME'
>
> 2019-10-08 09:58:01,395Z INFO [org.ovirt.engine.core.bll.AddVmCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-85) [] Lock freed to
> object 'EngineLock:{exclusiveLocks='[centos-testvm-024=VM_NAME]',
> sharedLocks='[9ac6f4ad-58d0-4a7e-b424-91f2d76abcac=TEMPLATE,
> a7e42574-be60-4c94-94a5-cc4b30fdb16f=DISK]'}'
>
> 2019-10-08 09:58:01,407Z ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engineScheduled-Thread-85) [] EVENT_ID:
> USER_ADD_VM_FINISHED_FAILURE(60), Failed to complete VM
> centos-testvm-024 creation.
>
> 2019-10-08 09:58:01,407Z ERROR
> [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller]
> (EE-ManagedThreadFactory-engineScheduled-Thread-85) [] Failed invoking
> callback end method 'onFailed' for command
> '1b65f576-365d-4e8b-a920-621516cbdaed' with exception 'null', the
> callback is marked for end method retries
>
> 2019-10-08 09:58:05,427Z ERROR [org.ovirt.engine.core.bll.AddVmCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-17)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce] Ending command
> 'org.ovirt.engine.core.bll.AddVmCommand' with failure.
>
> 2019-10-08 09:58:05,430Z ERROR
> [org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-17)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce] Ending command
> 'org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand'
> with failure.
>
> 2019-10-08 09:58:05,437Z INFO
> [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-17) [cf91364] Running
> command: RemoveImageCommand internal: true. Entities affected : ID:
> 935de3ac-a735-4fc8-9161-26bfd751ffc7 Type: Storage
>
> 2019-10-08 09:58:05,452Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-17) [cf91364] START,
> DeleteImageGroupVDSCommand(
> DeleteImageGroupVDSCommandParameters:{storagePoolId='2e3b3484-c339-11e9-8d02-00163e3ec101',
> ignoreFailoverLimit='false',
> storageDomainId='935de3ac-a735-4fc8-9161-26bfd751ffc7',
> imageGroupId='d680a46a-7188-45f6-b6a7-3333f830c0b4', postZeros='false',
> discard='false', forceDelete='false'}), log id: 6ac726b6
>
> 2019-10-08 09:58:05,466Z ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engineScheduled-Thread-17) [cf91364] EVENT_ID:
> IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command DeleteImageGroupVDS
> failed: Image does not exist in domain:
> u'image=d680a46a-7188-45f6-b6a7-3333f830c0b4,
> domain=935de3ac-a735-4fc8-9161-26bfd751ffc7'
>
> 2019-10-08 09:58:05,466Z ERROR
> [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-17) [cf91364] Command
> 'DeleteImageGroupVDSCommand(
> DeleteImageGroupVDSCommandParameters:{storagePoolId='2e3b3484-c339-11e9-8d02-00163e3ec101',
> ignoreFailoverLimit='false',
> storageDomainId='935de3ac-a735-4fc8-9161-26bfd751ffc7',
> imageGroupId='d680a46a-7188-45f6-b6a7-3333f830c0b4', postZeros='false',
> discard='false', forceDelete='false'})' execution failed:
> IRSGenericException: IRSErrorException: Image does not exist in domain:
> u'image=d680a46a-7188-45f6-b6a7-3333f830c0b4,
> domain=935de3ac-a735-4fc8-9161-26bfd751ffc7'
>
> 2019-10-08 09:58:05,466Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-17) [cf91364] FINISH,
> DeleteImageGroupVDSCommand, return: , log id: 6ac726b6
>
> 2019-10-08 09:58:05,467Z INFO
> [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-17) [cf91364] Disk
> 'd680a46a-7188-45f6-b6a7-3333f830c0b4' doesn't exist on storage domain
> '935de3ac-a735-4fc8-9161-26bfd751ffc7', rolling forward
>
> 2019-10-08 09:58:05,477Z INFO
> [org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
> (EE-ManagedThreadFactory-engineScheduled-Thread-17) [cf91364] Removed
> task '31bd7a11-203d-4459-ac54-3c398c85771d' from DataBase
>
> 2019-10-08 09:58:05,490Z INFO
> [org.ovirt.engine.core.utils.transaction.TransactionSupport]
> (EE-ManagedThreadFactory-engineScheduled-Thread-17) [] transaction
> rolled back
>
> 2019-10-08 09:58:05,490Z WARN
> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
> (EE-ManagedThreadFactory-engineScheduled-Thread-17) [] Trying to release
> a shared lock for key: '9ac6f4ad-58d0-4a7e-b424-91f2d76abcacTEMPLATE' ,
> but lock does not exist
>
> 2019-10-08 09:58:05,490Z WARN
> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
> (EE-ManagedThreadFactory-engineScheduled-Thread-17) [] Trying to release
> a shared lock for key: 'a7e42574-be60-4c94-94a5-cc4b30fdb16fDISK' , but
> lock does not exist
>
> 2019-10-08 09:58:05,490Z WARN
> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
> (EE-ManagedThreadFactory-engineScheduled-Thread-17) [] Trying to release
> exclusive lock which does not exist, lock key: 'centos-testvm-024VM_NAME'
>
> 2019-10-08 09:58:05,490Z INFO [org.ovirt.engine.core.bll.AddVmCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-17) [] Lock freed to
> object 'EngineLock:{exclusiveLocks='[centos-testvm-024=VM_NAME]',
> sharedLocks='[9ac6f4ad-58d0-4a7e-b424-91f2d76abcac=TEMPLATE,
> a7e42574-be60-4c94-94a5-cc4b30fdb16f=DISK]'}'
>
> 2019-10-08 09:58:05,503Z ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engineScheduled-Thread-17) [] EVENT_ID:
> USER_ADD_VM_FINISHED_FAILURE(60), Failed to complete VM
> centos-testvm-024 creation.
>
> 2019-10-08 09:58:05,503Z ERROR
> [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller]
> (EE-ManagedThreadFactory-engineScheduled-Thread-17) [] Failed invoking
> callback end method 'onFailed' for command
> '1b65f576-365d-4e8b-a920-621516cbdaed' with exception 'null', the
> callback is marked for end method retries
>
> 2019-10-08 09:58:13,523Z ERROR [org.ovirt.engine.core.bll.AddVmCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-35)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce] Ending command
> 'org.ovirt.engine.core.bll.AddVmCommand' with failure.
>
> 2019-10-08 09:58:13,527Z ERROR
> [org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-35)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce] Ending command
> 'org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand'
> with failure.
>
> 2019-10-08 09:58:13,533Z INFO
> [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-35) [56103b5a] Running
> command: RemoveImageCommand internal: true. Entities affected : ID:
> 935de3ac-a735-4fc8-9161-26bfd751ffc7 Type: Storage
>
> 2019-10-08 09:58:13,548Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-35) [56103b5a] START,
> DeleteImageGroupVDSCommand(
> DeleteImageGroupVDSCommandParameters:{storagePoolId='2e3b3484-c339-11e9-8d02-00163e3ec101',
> ignoreFailoverLimit='false',
> storageDomainId='935de3ac-a735-4fc8-9161-26bfd751ffc7',
> imageGroupId='d680a46a-7188-45f6-b6a7-3333f830c0b4', postZeros='false',
> discard='false', forceDelete='false'}), log id: 33164673
>
> 2019-10-08 09:58:13,560Z ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engineScheduled-Thread-35) [56103b5a] EVENT_ID:
> IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command DeleteImageGroupVDS
> failed: Image does not exist in domain:
> u'image=d680a46a-7188-45f6-b6a7-3333f830c0b4,
> domain=935de3ac-a735-4fc8-9161-26bfd751ffc7'
>
> 2019-10-08 09:58:13,560Z ERROR
> [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-35) [56103b5a] Command
> 'DeleteImageGroupVDSCommand(
> DeleteImageGroupVDSCommandParameters:{storagePoolId='2e3b3484-c339-11e9-8d02-00163e3ec101',
> ignoreFailoverLimit='false',
> storageDomainId='935de3ac-a735-4fc8-9161-26bfd751ffc7',
> imageGroupId='d680a46a-7188-45f6-b6a7-3333f830c0b4', postZeros='false',
> discard='false', forceDelete='false'})' execution failed:
> IRSGenericException: IRSErrorException: Image does not exist in domain:
> u'image=d680a46a-7188-45f6-b6a7-3333f830c0b4,
> domain=935de3ac-a735-4fc8-9161-26bfd751ffc7'
>
> 2019-10-08 09:58:13,560Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-35) [56103b5a] FINISH,
> DeleteImageGroupVDSCommand, return: , log id: 33164673
>
> 2019-10-08 09:58:13,560Z INFO
> [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-35) [56103b5a] Disk
> 'd680a46a-7188-45f6-b6a7-3333f830c0b4' doesn't exist on storage domain
> '935de3ac-a735-4fc8-9161-26bfd751ffc7', rolling forward
>
> 2019-10-08 09:58:13,570Z INFO
> [org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
> (EE-ManagedThreadFactory-engineScheduled-Thread-35) [56103b5a] Removed
> task 'ce6882de-2d89-4298-bdcf-86d621a54084' from DataBase
>
> 2019-10-08 09:58:13,583Z INFO
> [org.ovirt.engine.core.utils.transaction.TransactionSupport]
> (EE-ManagedThreadFactory-engineScheduled-Thread-35) [] transaction
> rolled back
>
> 2019-10-08 09:58:13,583Z WARN
> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
> (EE-ManagedThreadFactory-engineScheduled-Thread-35) [] Trying to release
> a shared lock for key: '9ac6f4ad-58d0-4a7e-b424-91f2d76abcacTEMPLATE' ,
> but lock does not exist
>
> 2019-10-08 09:58:13,583Z WARN
> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
> (EE-ManagedThreadFactory-engineScheduled-Thread-35) [] Trying to release
> a shared lock for key: 'a7e42574-be60-4c94-94a5-cc4b30fdb16fDISK' , but
> lock does not exist
>
> 2019-10-08 09:58:13,583Z WARN
> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
> (EE-ManagedThreadFactory-engineScheduled-Thread-35) [] Trying to release
> exclusive lock which does not exist, lock key: 'centos-testvm-024VM_NAME'
>
> 2019-10-08 09:58:13,583Z INFO [org.ovirt.engine.core.bll.AddVmCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-35) [] Lock freed to
> object 'EngineLock:{exclusiveLocks='[centos-testvm-024=VM_NAME]',
> sharedLocks='[9ac6f4ad-58d0-4a7e-b424-91f2d76abcac=TEMPLATE,
> a7e42574-be60-4c94-94a5-cc4b30fdb16f=DISK]'}'
>
> 2019-10-08 09:58:13,595Z ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engineScheduled-Thread-35) [] EVENT_ID:
> USER_ADD_VM_FINISHED_FAILURE(60), Failed to complete VM
> centos-testvm-024 creation.
>
> 2019-10-08 09:58:13,595Z ERROR
> [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller]
> (EE-ManagedThreadFactory-engineScheduled-Thread-35) [] Failed invoking
> callback end method 'onFailed' for command
> '1b65f576-365d-4e8b-a920-621516cbdaed' with exception 'null', the
> callback is marked for end method retries
>
> 2019-10-08 09:58:23,616Z ERROR [org.ovirt.engine.core.bll.AddVmCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-71)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce] Ending command
> 'org.ovirt.engine.core.bll.AddVmCommand' with failure.
>
> 2019-10-08 09:58:23,620Z ERROR
> [org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-71)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce] Ending command
> 'org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand'
> with failure.
>
> 2019-10-08 09:58:23,627Z INFO
> [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-71) [48f2f9bd] Running
> command: RemoveImageCommand internal: true. Entities affected : ID:
> 935de3ac-a735-4fc8-9161-26bfd751ffc7 Type: Storage
>
> 2019-10-08 09:58:23,642Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-71) [48f2f9bd] START,
> DeleteImageGroupVDSCommand(
> DeleteImageGroupVDSCommandParameters:{storagePoolId='2e3b3484-c339-11e9-8d02-00163e3ec101',
> ignoreFailoverLimit='false',
> storageDomainId='935de3ac-a735-4fc8-9161-26bfd751ffc7',
> imageGroupId='d680a46a-7188-45f6-b6a7-3333f830c0b4', postZeros='false',
> discard='false', forceDelete='false'}), log id: 7346a5bd
>
> 2019-10-08 09:58:23,663Z ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engineScheduled-Thread-71) [48f2f9bd] EVENT_ID:
> IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command DeleteImageGroupVDS
> failed: Image does not exist in domain:
> u'image=d680a46a-7188-45f6-b6a7-3333f830c0b4,
> domain=935de3ac-a735-4fc8-9161-26bfd751ffc7'
>
> 2019-10-08 09:58:23,663Z ERROR
> [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-71) [48f2f9bd] Command
> 'DeleteImageGroupVDSCommand(
> DeleteImageGroupVDSCommandParameters:{storagePoolId='2e3b3484-c339-11e9-8d02-00163e3ec101',
> ignoreFailoverLimit='false',
> storageDomainId='935de3ac-a735-4fc8-9161-26bfd751ffc7',
> imageGroupId='d680a46a-7188-45f6-b6a7-3333f830c0b4', postZeros='false',
> discard='false', forceDelete='false'})' execution failed:
> IRSGenericException: IRSErrorException: Image does not exist in domain:
> u'image=d680a46a-7188-45f6-b6a7-3333f830c0b4,
> domain=935de3ac-a735-4fc8-9161-26bfd751ffc7'
>
> 2019-10-08 09:58:23,663Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-71) [48f2f9bd] FINISH,
> DeleteImageGroupVDSCommand, return: , log id: 7346a5bd
>
> 2019-10-08 09:58:23,663Z INFO
> [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-71) [48f2f9bd] Disk
> 'd680a46a-7188-45f6-b6a7-3333f830c0b4' doesn't exist on storage domain
> '935de3ac-a735-4fc8-9161-26bfd751ffc7', rolling forward
>
> 2019-10-08 09:58:23,682Z INFO
> [org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
> (EE-ManagedThreadFactory-engineScheduled-Thread-71) [48f2f9bd] Removed
> task '8483357d-6d4f-418e-9751-70fcb69685cb' from DataBase
>
> 2019-10-08 09:58:23,695Z INFO
> [org.ovirt.engine.core.utils.transaction.TransactionSupport]
> (EE-ManagedThreadFactory-engineScheduled-Thread-71) [] transaction
> rolled back
>
> 2019-10-08 09:58:23,695Z WARN
> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
> (EE-ManagedThreadFactory-engineScheduled-Thread-71) [] Trying to release
> a shared lock for key: '9ac6f4ad-58d0-4a7e-b424-91f2d76abcacTEMPLATE' ,
> but lock does not exist
>
> 2019-10-08 09:58:23,695Z WARN
> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
> (EE-ManagedThreadFactory-engineScheduled-Thread-71) [] Trying to release
> a shared lock for key: 'a7e42574-be60-4c94-94a5-cc4b30fdb16fDISK' , but
> lock does not exist
>
> 2019-10-08 09:58:23,695Z WARN
> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
> (EE-ManagedThreadFactory-engineScheduled-Thread-71) [] Trying to release
> exclusive lock which does not exist, lock key: 'centos-testvm-024VM_NAME'
>
> 2019-10-08 09:58:23,695Z INFO [org.ovirt.engine.core.bll.AddVmCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-71) [] Lock freed to
> object 'EngineLock:{exclusiveLocks='[centos-testvm-024=VM_NAME]',
> sharedLocks='[9ac6f4ad-58d0-4a7e-b424-91f2d76abcac=TEMPLATE,
> a7e42574-be60-4c94-94a5-cc4b30fdb16f=DISK]'}'
>
> 2019-10-08 09:58:23,706Z ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engineScheduled-Thread-71) [] EVENT_ID:
> USER_ADD_VM_FINISHED_FAILURE(60), Failed to complete VM
> centos-testvm-024 creation.
>
> 2019-10-08 09:58:23,706Z ERROR
> [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller]
> (EE-ManagedThreadFactory-engineScheduled-Thread-71) [] Failed invoking
> callback end method 'onFailed' for command
> '1b65f576-365d-4e8b-a920-621516cbdaed' with exception 'null', the
> callback is marked for end method retries
>
> 2019-10-08 09:58:33,725Z ERROR [org.ovirt.engine.core.bll.AddVmCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-63)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce] Ending command
> 'org.ovirt.engine.core.bll.AddVmCommand' with failure.
>
> 2019-10-08 09:58:33,729Z ERROR
> [org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-63)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce] Ending command
> 'org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand'
> with failure.
>
> 2019-10-08 09:58:33,736Z INFO
> [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-63) [3eea0406] Running
> command: RemoveImageCommand internal: true. Entities affected : ID:
> 935de3ac-a735-4fc8-9161-26bfd751ffc7 Type: Storage
>
> 2019-10-08 09:58:33,752Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-63) [3eea0406] START,
> DeleteImageGroupVDSCommand(
> DeleteImageGroupVDSCommandParameters:{storagePoolId='2e3b3484-c339-11e9-8d02-00163e3ec101',
> ignoreFailoverLimit='false',
> storageDomainId='935de3ac-a735-4fc8-9161-26bfd751ffc7',
> imageGroupId='d680a46a-7188-45f6-b6a7-3333f830c0b4', postZeros='false',
> discard='false', forceDelete='false'}), log id: 6d120001
>
> 2019-10-08 09:58:33,767Z ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engineScheduled-Thread-63) [3eea0406] EVENT_ID:
> IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command DeleteImageGroupVDS
> failed: Image does not exist in domain:
> u'image=d680a46a-7188-45f6-b6a7-3333f830c0b4,
> domain=935de3ac-a735-4fc8-9161-26bfd751ffc7'
>
> 2019-10-08 09:58:33,767Z ERROR
> [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-63) [3eea0406] Command
> 'DeleteImageGroupVDSCommand(
> DeleteImageGroupVDSCommandParameters:{storagePoolId='2e3b3484-c339-11e9-8d02-00163e3ec101',
> ignoreFailoverLimit='false',
> storageDomainId='935de3ac-a735-4fc8-9161-26bfd751ffc7',
> imageGroupId='d680a46a-7188-45f6-b6a7-3333f830c0b4', postZeros='false',
> discard='false', forceDelete='false'})' execution failed:
> IRSGenericException: IRSErrorException: Image does not exist in domain:
> u'image=d680a46a-7188-45f6-b6a7-3333f830c0b4,
> domain=935de3ac-a735-4fc8-9161-26bfd751ffc7'
>
> 2019-10-08 09:58:33,767Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-63) [3eea0406] FINISH,
> DeleteImageGroupVDSCommand, return: , log id: 6d120001
>
> 2019-10-08 09:58:33,767Z INFO
> [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-63) [3eea0406] Disk
> 'd680a46a-7188-45f6-b6a7-3333f830c0b4' doesn't exist on storage domain
> '935de3ac-a735-4fc8-9161-26bfd751ffc7', rolling forward
>
> 2019-10-08 09:58:33,778Z INFO
> [org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
> (EE-ManagedThreadFactory-engineScheduled-Thread-63) [3eea0406] Removed
> task 'bfb5e88a-0e62-4e03-b02b-a5a0b36f84db' from DataBase
>
> 2019-10-08 09:58:33,791Z INFO
> [org.ovirt.engine.core.utils.transaction.TransactionSupport]
> (EE-ManagedThreadFactory-engineScheduled-Thread-63) [] transaction
> rolled back
>
> 2019-10-08 09:58:33,791Z WARN
> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
> (EE-ManagedThreadFactory-engineScheduled-Thread-63) [] Trying to release
> a shared lock for key: '9ac6f4ad-58d0-4a7e-b424-91f2d76abcacTEMPLATE' ,
> but lock does not exist
>
> 2019-10-08 09:58:33,791Z WARN
> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
> (EE-ManagedThreadFactory-engineScheduled-Thread-63) [] Trying to release
> a shared lock for key: 'a7e42574-be60-4c94-94a5-cc4b30fdb16fDISK' , but
> lock does not exist
>
> 2019-10-08 09:58:33,791Z WARN
> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
> (EE-ManagedThreadFactory-engineScheduled-Thread-63) [] Trying to release
> exclusive lock which does not exist, lock key: 'centos-testvm-024VM_NAME'
>
> 2019-10-08 09:58:33,791Z INFO [org.ovirt.engine.core.bll.AddVmCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-63) [] Lock freed to
> object 'EngineLock:{exclusiveLocks='[centos-testvm-024=VM_NAME]',
> sharedLocks='[9ac6f4ad-58d0-4a7e-b424-91f2d76abcac=TEMPLATE,
> a7e42574-be60-4c94-94a5-cc4b30fdb16f=DISK]'}'
>
> 2019-10-08 09:58:33,804Z ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engineScheduled-Thread-63) [] EVENT_ID:
> USER_ADD_VM_FINISHED_FAILURE(60), Failed to complete VM
> centos-testvm-024 creation.
>
> 2019-10-08 09:58:33,824Z ERROR
> [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller]
> (EE-ManagedThreadFactory-engineScheduled-Thread-63)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce] Failed invoking callback end
> method 'onFailed' for command '1b65f576-365d-4e8b-a920-621516cbdaed'
> with exception 'null', the callback is marked for end method retries but
> max number of retries have been attempted. The command will be marked as
> Failed.
>
> 2019-10-08 09:58:33,824Z INFO
> [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller]
> (EE-ManagedThreadFactory-engineScheduled-Thread-63)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce] Exception in invoking callback of
> command AddVm (1b65f576-365d-4e8b-a920-621516cbdaed): NullPointerException:
>
> 2019-10-08 09:58:33,824Z ERROR
> [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller]
> (EE-ManagedThreadFactory-engineScheduled-Thread-63)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce] Error invoking callback method
> 'onFailed' for 'FAILED' command '1b65f576-365d-4e8b-a920-621516cbdaed'
>
> 2019-10-08 09:58:33,824Z ERROR
> [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller]
> (EE-ManagedThreadFactory-engineScheduled-Thread-63)
> [067ec489-ae6c-4871-8ab6-8296016ca1ce] Exception:
> javax.ejb.EJBTransactionRolledbackException
>
> at
> org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInCallerTx(CMTTxInterceptor.java:203)
> [wildfly-ejb3-17.0.1.Final.jar:17.0.1.Final]
>
> at
> org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:420) [wildfly-ejb3-17.0.1.Final.jar:17.0.1.Final]
>
> at
> org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:148)
> [wildfly-ejb3-17.0.1.Final.jar:17.0.1.Final]
>
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
>
> at
> org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509)
>
> at
> org.jboss.weld.module.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:81)
> [weld-ejb-3.1.1.Final.jar:3.1.1.Final]
>
> at
> org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:89)
>
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
>
> at
> org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41)
> [wildfly-ejb3-17.0.1.Final.jar:17.0.1.Final]
>
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
>
> at
> org.jboss.as.ejb3.component.invocationmetrics.WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:47)
> [wildfly-ejb3-17.0.1.Final.jar:17.0.1.Final]
>
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
>
> at
> org.jboss.as.ejb3.security.SecurityContextInterceptor.processInvocation(SecurityContextInterceptor.java:100)
> [wildfly-ejb3-17.0.1.Final.jar:17.0.1.Final]
>
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
>
> at
> org.jboss.as.ejb3.deployment.processors.StartupAwaitInterceptor.processInvocation(StartupAwaitInterceptor.java:22)
> [wildfly-ejb3-17.0.1.Final.jar:17.0.1.Final]
>
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
>
> at
> org.jboss.as.ejb3.component.interceptors.ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64)
> [wildfly-ejb3-17.0.1.Final.jar:17.0.1.Final]
>
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
>
> at
> org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:67)
> [wildfly-ejb3-17.0.1.Final.jar:17.0.1.Final]
>
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
>
> at
> org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
>
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
>
> at
> org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:60)
>
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
>
> at
> org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:438)
>
> at
> org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:618)
>
> at
> org.jboss.invocation.AccessCheckingInterceptor.processInvocation(AccessCheckingInterceptor.java:57)
>
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
>
> at
> org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:53)
>
> at
> org.jboss.as.ee.component.ViewService$View.invoke(ViewService.java:198)
>
> at
> org.jboss.as.ee.component.ViewDescription$1.processInvocation(ViewDescription.java:185)
>
> at
> org.jboss.as.ee.component.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:81)
>
> at
> org.ovirt.engine.core.bll.interfaces.BackendInternal$$$view3.endAction(Unknown
> Source) [bll.jar:]
>
> at
> sun.reflect.GeneratedMethodAccessor1032.invoke(Unknown Source) [:1.8.0_222]
>
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> [rt.jar:1.8.0_222]
>
> at java.lang.reflect.Method.invoke(Method.java:498)
> [rt.jar:1.8.0_222]
>
> at
> org.jboss.weld.util.reflection.Reflections.invokeAndUnwrap(Reflections.java:410)
> [weld-core-impl-3.1.1.Final.jar:3.1.1.Final]
>
> at
> org.jboss.weld.module.ejb.EnterpriseBeanProxyMethodHandler.invoke(EnterpriseBeanProxyMethodHandler.java:134)
> [weld-ejb-3.1.1.Final.jar:3.1.1.Final]
>
> at
> org.jboss.weld.bean.proxy.EnterpriseTargetBeanInstance.invoke(EnterpriseTargetBeanInstance.java:56)
> [weld-core-impl-3.1.1.Final.jar:3.1.1.Final]
>
> at
> org.jboss.weld.module.ejb.InjectionPointPropagatingEnterpriseTargetBeanInstance.invoke(InjectionPointPropagatingEnterpriseTargetBeanInstance.java:68)
> [weld-ejb-3.1.1.Final.jar:3.1.1.Final]
>
> at
> org.jboss.weld.bean.proxy.ProxyMethodHandler.invoke(ProxyMethodHandler.java:106)
> [weld-core-impl-3.1.1.Final.jar:3.1.1.Final]
>
> at
> org.ovirt.engine.core.bll.BackendCommandObjectsHandler$BackendInternal$BackendLocal$2049259618$Proxy$_$$_Weld$EnterpriseProxy$.endAction(Unknown
> Source) [bll.jar:]
>
> at
> org.ovirt.engine.core.bll.VmCommand.endActionOnDisks(VmCommand.java:266)
> [bll.jar:]
>
> at
> org.ovirt.engine.core.bll.AddVmCommand.endWithFailure(AddVmCommand.java:1452)
> [bll.jar:]
>
> at
> org.ovirt.engine.core.bll.CommandBase.internalEndWithFailure(CommandBase.java:736)
> [bll.jar:]
>
> at
> org.ovirt.engine.core.bll.CommandBase.endActionInTransactionScope(CommandBase.java:658)
> [bll.jar:]
>
> at
> org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:2003)
> [bll.jar:]
>
> at
> org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInNewTransaction(TransactionSupport.java:202)
> [utils.jar:]
>
> at
> org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInRequired(TransactionSupport.java:137)
> [utils.jar:]
>
> at
> org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:105)
> [utils.jar:]
>
> at
> org.ovirt.engine.core.bll.CommandBase.endAction(CommandBase.java:520)
> [bll.jar:]
>
> at
> org.ovirt.engine.core.bll.ChildCommandsCallbackBase.endAction(ChildCommandsCallbackBase.java:134)
> [bll.jar:]
>
> at
> org.ovirt.engine.core.bll.ChildCommandsCallbackBase.onFailed(ChildCommandsCallbackBase.java:174)
> [bll.jar:]
>
> at
> org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.endCallback(CommandCallbacksPoller.java:67)
> [bll.jar:]
>
> at
> org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:166)
> [bll.jar:]
>
> at
> org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:109)
> [bll.jar:]
>
> at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> [rt.jar:1.8.0_222]
>
> at
> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> [rt.jar:1.8.0_222]
>
> at
> org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:383)
> [javax.enterprise.concurrent-1.0.jar:]
>
> at
> org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:534)
> [javax.enterprise.concurrent-1.0.jar:]
>
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> [rt.jar:1.8.0_222]
>
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> [rt.jar:1.8.0_222]
>
> at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_222]
>
> at
> org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250)
> [javax.enterprise.concurrent-1.0.jar:]
>
> Caused by: java.lang.NullPointerException
>
> *Ansible output:*
>
> The full traceback is:
>
> Traceback (most recent call last):
>
> File
> "/var/folders/40/w2c8fp151854mddz_4n3czwm0000gn/T/ansible_ovirt_vm_payload_uy4lqway/__main__.py",
> line 2024, in main
>
> clone_permissions=module.params['clone_permissions'],
>
> File
> "/var/folders/40/w2c8fp151854mddz_4n3czwm0000gn/T/ansible_ovirt_vm_payload_uy4lqway/ansible_ovirt_vm_payload.zip/ansible/module_utils/ovirt.py",
> line 610, in create
>
> poll_interval=self._module.params['poll_interval'],
>
> File
> "/var/folders/40/w2c8fp151854mddz_4n3czwm0000gn/T/ansible_ovirt_vm_payload_uy4lqway/ansible_ovirt_vm_payload.zip/ansible/module_utils/ovirt.py",
> line 341, in wait
>
> raise Exception("Timeout exceed while waiting on result state of
> the entity.")
>
> Exception: Timeout exceed while waiting on result state of the entity.
>
> fatal: [centos-testvm-024.avinity.tv -> localhost]: FAILED! => {
>
> "changed": false,
>
> "invocation": {
>
> "module_args": {
>
> "affinity_group_mappings": [],
>
> "affinity_label_mappings": [],
>
> "allow_partial_import": null,
>
> "ballooning_enabled": null,
>
> "boot_devices": null,
>
> "boot_menu": null,
>
> "cd_iso": null,
>
> "clone": false,
>
> "clone_permissions": false,
>
> "cloud_init": {
>
> "authorized_ssh_keys": "ssh-rsa removed pubkey",
>
> "host_name": "centos-testvm-024.avinity.tv",
>
> "user_name": "centos"
>
> },
>
> "cloud_init_nics": [],
>
> "cloud_init_persist": false,
>
> "cluster": "avshared1",
>
> "cluster_mappings": [],
>
> "comment": "oVirt Template Bakery Validation",
>
> "cpu_cores": null,
>
> "cpu_mode": null,
>
> "cpu_pinning": null,
>
> "cpu_shares": null,
>
> "cpu_sockets": null,
>
> "cpu_threads": null,
>
> "custom_compatibility_version": null,
>
> "custom_properties": null,
>
> "delete_protected": null,
>
> "description": null,
>
> "disk_format": "cow",
>
> "disks": [],
>
> "domain_mappings": [],
>
> "fetch_nested": false,
>
> "force": false,
>
> "graphical_console": null,
>
> "high_availability": null,
>
> "high_availability_priority": null,
>
> "host": null,
>
> "host_devices": null,
>
> "id": null,
>
> "initrd_path": null,
>
> "instance_type": "Medium",
>
> "io_threads": null,
>
> "kernel_params": null,
>
> "kernel_path": null,
>
> "kvm": null,
>
> "lease": null,
>
> "lun_mappings": [],
>
> "memory": null,
>
> "memory_guaranteed": null,
>
> "memory_max": null,
>
> "name": "centos-testvm-024",
>
> "nested_attributes": [],
>
> "nics": [
>
> {
>
> "interface": "virtio",
>
> "name": "nic1",
>
> "profile_name": "tenant1"
>
> }
>
> ],
>
> "numa_nodes": [],
>
> "numa_tune_mode": null,
>
> "operating_system": null,
>
> "placement_policy": null,
>
> "poll_interval": 3,
>
> "quota_id": null,
>
> "reassign_bad_macs": null,
>
> "rng_device": null,
>
> "role_mappings": [],
>
> "serial_console": null,
>
> "serial_policy": null,
>
> "serial_policy_value": null,
>
> "smartcard_enabled": null,
>
> "soundcard_enabled": null,
>
> "sso": null,
>
> "state": "running",
>
> "stateless": null,
>
> "storage_domain": null,
>
> "sysprep": null,
>
> "template": "av-centos-75-baseimage_nfs9000",
>
> "template_version": null,
>
> "ticket": null,
>
> "timeout": 180,
>
> "timezone": null,
>
> "type": null,
>
> "usb_support": null,
>
> "use_latest_template_version": null,
>
> "vmware": null,
>
> "vnic_profile_mappings": [],
>
> "wait": true,
>
> "watchdog": null,
>
> "xen": null
>
> }
>
> },
>
> "msg": "Timeout exceed while waiting on result state of the entity."
>
> }
>
>
> _______________________________________________
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-leave@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/EB5AXLUNU6LRV…
>
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/ATKD7TYOKB2UH…
--
Martin Perina
Manager, Software Engineering
Red Hat Czech s.r.o.
After a few tricky power issues at a site in a less developed country, I had to restart my 3-node oVirt cluster with hosted engine. Everything seemed to come up fine, but that night when my scheduled snapshot/backup jobs ran (vProtect), some weird things happened. It created a couple of snapshots but couldn't copy them, saying the disk was locked (even though in the GUI no disks or snapshots showed as locked).
After a couple of tries, all of a sudden the whole VM was 'locked'. This means I can't migrate it, can't start it (if it shuts down), and couldn't remove old snapshots. I had this happen on another VM before, and my only solution there was to clone it; but now it has happened to all of them and I'm not sure why.
I have run the db utility on the command line to unlock everything; nothing is listed as locked, but I still can't run any operations on the VMs. I even cloned them to get them up and running again, but I can't delete the old VMs still sitting there (or their disks, snapshots or anything) — no, scratch that: I just keep getting "VM is locked".
How do I force an unlock of these entities when everything tells me they're not locked? I'm almost at the point where I want to back up all the VMs and start again, but I'd much rather learn some more so I can manage these issues in future.
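For reference, the unlock sequence described above boils down to something like the following sketch. The dbutils path and flags are assumptions based on a default oVirt 4.x engine install; run this on the engine host, and note `<vm-uuid>` is a placeholder:

```shell
# Hedged sketch, assuming a default oVirt 4.x engine install.
# 1) Ask the DB utility what it considers locked:
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t all -q

# 2) Unlock a specific entity by the UUID shown in the
#    "Failed to Acquire Lock" / EngineLock error message:
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t vm <vm-uuid>

# 3) EngineLock entries are held in the engine's memory
#    (InMemoryLockManager), not in the database, so when the DB
#    reports nothing locked, a clean engine restart is what
#    actually releases them:
systemctl restart ovirt-engine
```

The key point is step 3: the db utility only clears database-level image/snapshot lock statuses, while the `EngineLock` messages come from an in-memory structure that does not survive (and is therefore cleared by) an engine restart.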
Any help is appreciated!
thanks
Gluster setup 3 Node - Now only showing single node setup in setup Wizard
by rob.downer@orbitalsystems.co.uk 25 Nov '19
I have set up 3 nodes with a separate volume for Gluster. I have set up the two networks, DNS works fine, and passwordless SSH has been set up for Gluster; you can log in via ssh to the other two hosts from the host used for setup.
When going to Virtualisation > Setup Gluster and Hosted Engine, only a single node shows up.
I have restarted all 3 machines.
All node machines show up in the Dashboard etc.
I have set this up before and it all worked; I erased everything and set up again with the separate volume to be used for Gluster storage.
Hey folks!
Quick question, really: I have 4 servers of identical hardware. The documentation says "you need 3", not "you need 3 or more"; is it possible to run hyperconverged with 4 servers (in the same rack; let's neglect the possibility of 2 failing servers (split brain))?
Also: I have a nice separate server (low power footprint, big on CPU (8 cores) and 32 GB RAM) -- I would love to use this as an engine server only. Can I initiate a hyperconverged system with 4 working hosts starting from a stand-alone engine? Or is "hyperconverged" a must for an HA engine on the cluster?
One last one: I'm installing oVirt nodes, obviously. This will by default create an LVM layout across the entire RAID volume; can I simply create folders (with correct permissions) and use those as Gluster bricks? Or do I need to partition the RAID in a special way?
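On the brick question: Gluster will technically accept any directory as a brick, but the usual convention in hyperconverged deployments is a dedicated XFS filesystem per brick rather than a folder on the node's root LVM, so brick capacity stays isolated from the OS volume. A rough setup sketch, where the VG/LV/mount names are made up for illustration:

```shell
# Illustrative names only; adjust VG, LV size and mount point to your layout.
lvcreate -L 500G -n lv_brick1 vg_gluster
mkfs.xfs -f -i size=512 /dev/vg_gluster/lv_brick1   # 512-byte inodes are the common Gluster recommendation
mkdir -p /gluster_bricks/brick1
mount /dev/vg_gluster/lv_brick1 /gluster_bricks/brick1
mkdir /gluster_bricks/brick1/data                   # point the brick at a subdirectory, not the mount point itself
```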
Thanks for your pointers,
Chris.
Hi List,
My Windows Server 2019 installs don't seem to report memory usage to
the oVirt engine. They have the ovirt-guest-agent installed; is there
something else I need to do to get them reporting back to the
engine/hypervisor?
Thanks,
Joe
Hi *,
Trying to figure out where a VM's lock state is stored.
I have a few VMs that seem to be stuck in a locked state.
I have run ./unlock_entity.sh -t all, and -q gives me nothing in
a locked state.
I have jumped onto the psql DB directly and looked through as many tables as I
can, looking for a locked status; everything seems fine.
In the engine, on these VM's if I try to do any actions I get a
message similar to:
'Failed to Acquire Lock to object
'EngineLock:{exclusiveLocks='[6c2fc524-2f13-4cad-9108-347be2e88d1a=VM]',
sharedLocks=''}''
When this issue first started, I was trying to do backups with
snapshots, and I got a lot of messages similar to this:
/var/log/ovirt-engine/engine.log-20191121.gz:2019-11-20 11:44:30,441+10 WARN [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (EE-ManagedThreadFactory-engineScheduled-Thread-41) [6cafc1ef] Trying to release a shared lock for key: '6c2fc524-2f13-4cad-9108-347be2e88d1aVM' , but lock does not exist
/var/log/ovirt-engine/engine.log-20191121.gz:2019-11-20 11:44:30,441+10 INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-41) [6cafc1ef] Lock freed to object 'EngineLock:{exclusiveLocks='', sharedLocks='[6c2fc524-2f13-4cad-9108-347be2e88d1a=VM, ab5d4e18-f0c6-4472-8d2e-8c3bd1ff1a6a=DISK]'}'
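The class name in that WARN line, InMemoryLockManager, is the key detail: those EngineLock entries live in the engine process's memory, not in the database, which would explain why unlock_entity.sh and direct psql queries find nothing. A toy sketch of the idea (illustrative only, not oVirt's actual code; all names here are mine):

```python
# Toy model of an in-memory lock manager, loosely based on the EngineLock
# messages above. Nothing here is persisted, so no DB-side tool can see it.

class InMemoryLockManager:
    def __init__(self):
        self.exclusive = {}   # key -> owner, held only in process memory
        self.shared = {}      # key -> set of owners

    def acquire_exclusive(self, key, owner):
        if key in self.exclusive or key in self.shared:
            # The situation behind "Failed to Acquire Lock to object ..."
            return False
        self.exclusive[key] = owner
        return True

    def release_shared(self, key, owner):
        owners = self.shared.get(key)
        if not owners or owner not in owners:
            # Matches the WARN: "Trying to release a shared lock ... but lock does not exist"
            return False
        owners.discard(owner)
        if not owners:
            del self.shared[key]
        return True

mgr = InMemoryLockManager()
# Simulate a snapshot command that took the lock and never released it:
mgr.exclusive["6c2fc524-2f13-4cad-9108-347be2e88d1aVM"] = "snapshot-command"

# Any later operation on that VM fails to acquire the lock:
print(mgr.acquire_exclusive("6c2fc524-2f13-4cad-9108-347be2e88d1aVM", "delete-vm"))  # False
```

Since the state is per-process, restarting the ovirt-engine service clears it, which is why stale locks like these typically disappear after an engine restart even when the database looks clean.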
I have made sure there is nothing in the async_tasks, commands and
_entities tables for both. There were some command_entities rows, but I took a
backup of the engine and decided to run a delete on that table (there
were about 5 stuck records), but that hasn't helped.
On the affected VMs I can't delete snapshots, can't delete their disks, can't
delete the VM, can't boot the VMs; they are just stuck in the engine.
My ultimate question is: where is the engine looking up that lock state?
I believe it is stuck locked in error, so I can go there and manually
force a removal of the lock state and at least delete these VMs, as
ultimately I have cloned them to get the VMs back up and running.
Can anyone point me in the right direction? Willing to pay a consulting
fee if you believe you have the answer for this one.
Thanks,
Joe