Yep tried all that.

I actually looked over the unlock_entity.sql file, which gave a hint as to what the unlock_entity.sh script was doing:

    DOWN:=0;
    OK:=1;
    LOCKED:=2;
    TEMPLATE_OK:=0;
    TEMPLATE_LOCKED:=1;
    IMAGE_LOCKED:=15;
    SNAPSHOT_OK:='OK';
    SNAPSHOT_LOCKED:='LOCKED';
    ILLEGAL:=4;
    update vm_static set template_status = TEMPLATE_OK where template_status = TEMPLATE_LOCKED;
    update vm_dynamic set status = DOWN where status = IMAGE_LOCKED;
    update images set imagestatus = OK where imagestatus = LOCKED;
    update snapshots set status = SNAPSHOT_OK where status ilike SNAPSHOT_LOCKED;
    UPDATE images SET imagestatus = OK WHERE imagestatus = ILLEGAL;

I went through and checked all of those statuses, and they were all set to OK or DOWN - nothing was actually in a LOCKED state, otherwise the unlock_entity.sh script would have fixed the issue.
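
For anyone wanting to repeat that check, queries along these lines (a sketch, using only the tables, columns and constants from the script above, run against the engine DB as the postgres user) will show whether anything is actually still locked:

    -- counts should all be 0 if nothing is genuinely locked
    SELECT count(*) FROM vm_static  WHERE template_status = 1;    -- TEMPLATE_LOCKED
    SELECT count(*) FROM vm_dynamic WHERE status = 15;            -- IMAGE_LOCKED
    SELECT count(*) FROM images     WHERE imagestatus IN (2, 4);  -- LOCKED / ILLEGAL
    SELECT count(*) FROM snapshots  WHERE status ILIKE 'LOCKED';  -- SNAPSHOT_LOCKED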

It was a calculated risk: there were only 5 entries, and I took both a full engine backup and a dump of just that table so I could 'reset' it if required. This was definitely a fringe case where a lock status was held when it really shouldn't have been.
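
For reference, the backups were roughly along these lines (a sketch - the file names here are just examples):

    # full engine backup
    engine-backup --mode=backup --file=engine-backup.tar.gz --log=engine-backup.log
    # dump of just the one table, so it can be restored on its own if needed
    sudo su postgres
    pg_dump -d engine -t command_entities -f command_entities.sql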

On 2019-11-25 10:52 AM, Wesley Stewart wrote:
Definitely don't recommend messing with the DB.

However, I have some old notes I took on trying to change the lock status... This was probably on 4.1 or early 4.2, so YMMV.

Use at your own risk.  No official documentation will ever tell you to go DB diving, and for good reason.

==============

Image locked issue

==============

sudo su postgres

psql -d engine -U postgres 

psql engine -c "SELECT vm_guid, vm_name from vm_static WHERE vm_name = 'Server_Name_Here';"

Returns: 0eb29824-f1ca-4d27-bf57-a3ed36c40c18


psql engine -c "update vm_dynamic SET status=0 where vm_guid='0eb29824-f1ca-4d27-bf57-a3ed36c40c18';"


On Sun, Nov 24, 2019, 6:35 PM Joseph Goldman <joseph@goldman.id.au> wrote:
I'm going to answer my own question here, as I figured it out about 2
minutes after sending the email - and hopefully other users can find
this in case they have a similar issue.

The last thing I tried was clearing the command_entities table with a
DELETE statement - a bit risky, but I was getting desperate. It didn't
work at first, but after restarting the ovirt-engine service on the
hosted engine it suddenly started working - so it appears this
information is not a fresh DB lookup each time, and is perhaps held in
memory until a new LockState is taken or removed (and not even re-read
when acquiring a lock fails).
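
For anyone hitting the same thing, the sequence that ended up clearing the stuck locks was roughly this (a sketch of the steps described above - take a backup first, as mentioned):

    # as the postgres user on the hosted engine
    psql engine -c "SELECT count(*) FROM command_entities;"  # inspect the stuck entries first
    psql engine -c "DELETE FROM command_entities;"
    # then, as root - the in-memory lock manager only noticed the change after a restart
    systemctl restart ovirt-engine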

I wish I knew more about how the issue came about to post a proper bug
for the devs but ultimately I just think there needs to be an extra
check for 'orphaned' command_entities entries.

> have you tried to first restart your HostedEngine?

I did at first, both the ovirt-engine service and the HostedEngine VM,
but the problem persisted through both until I actually cleared the
command_entities table and restarted again. Thanks for replying though;
I'm just glad I got it figured out and know about it for the future.

Thanks,
Joe

On 2019-11-25 9:56 AM, Joseph Goldman wrote:
> Hi *,
>
>  Trying to figure out where a VM's lockstate is stored.
>
>  I have a few VMs that seem to be stuck in a locked state.
>  I have run ./unlock_entity.sh -t all, and a -q gives me nothing
> in a locked state.
>
>  I have jumped onto the psql DB directly and looked through as many
> tables as I can looking for a locked status; everything seems fine.
>
>  In the engine, if I try to do any actions on these VMs I get a
> message similar to:
>
>  'Failed to Acquire Lock to object
> 'EngineLock:{exclusiveLocks='[6c2fc524-2f13-4cad-9108-347be2e88d1a=VM]',
> sharedLocks=''}''
>
>  When this issue first started I was trying to do backups with
> snapshots, and I got a lot of messages similar to this:
>
>  /var/log/ovirt-engine/engine.log-20191121.gz:2019-11-20 11:44:30,441+10
> WARN [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
> (EE-ManagedThreadFactory-engineScheduled-Thread-41) [6cafc1ef] Trying
> to release a shared lock for key:
> '6c2fc524-2f13-4cad-9108-347be2e88d1aVM', but lock does not exist
> /var/log/ovirt-engine/engine.log-20191121.gz:2019-11-20 11:44:30,441+10
> INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-41) [6cafc1ef] Lock
> freed to object 'EngineLock:{exclusiveLocks='',
> sharedLocks='[6c2fc524-2f13-4cad-9108-347be2e88d1a=VM,
> ab5d4e18-f0c6-4472-8d2e-8c3bd1ff1a6a=DISK]'}'
>
>  I have made sure there is nothing in async_tasks, commands and the
> _entities tables for both. There were some command_entities records, so
> I took a backup of the engine and decided to run a delete on that table
> (there were about 5 stuck records there), but that hasn't helped.
>
>  On the affected VMs I can't delete snapshots, can't delete their
> disks, can't delete the VMs, and can't boot them - they are just stuck
> in the engine.
>
>  My ultimate question is: where is the engine looking up that lock
> state? I believe it is stuck locked in error, so I want to go there and
> manually force removal of the lock and at least delete these VMs - as
> ultimately I have cloned them to get them back up and running.
>
>  Can anyone point me in the right direction? Willing to pay a
> consulting fee if you believe you have the answer for this one.
>
> Thanks,
> Joe
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/MCSKIYTO5LLOSANEJEQYG745W6PBRQRE/