Ah, I see, thank you.

The image transfer row has already disappeared, but the backup task is still there, with its last update on Saturday night.

And the job table shows the same:

select * from job;
-[ RECORD 1 ]---------+------------------------------------------
job_id                | 818a893d-05bc-436e-b27d-a7123d2b29f7
action_type           | HybridBackup
description           | Backing up VM log1.util.prod.hq.sldev.cz
status                | STARTED
owner_id              | 775e759c-255e-11ed-9c24-00163e2172be
visible               | t
start_time            | 2022-09-17 00:44:56.562+02
end_time              |
last_update_time      | 2022-09-17 00:45:17.99+02
correlation_id        | 26fa4612-8e4a-4142-89a2-c38adacead86
is_external           | f
is_auto_cleared       | t
engine_session_seq_id | 3788

select * from vm_backups;      
-[ RECORD 1 ]------+--------------------------------------
backup_id          | b9c458e6-64e2-41c2-93b8-96761e71f82b
from_checkpoint_id |
to_checkpoint_id   | 7a558f2a-57b6-432f-b5dd-85f5fb9dac8e
vm_id              | c3b2199f-35cc-41dc-8787-835e945217d2
phase              | Ready
_create_date       | 2022-09-17 00:44:56.877+02
host_id            |
description        |
_update_date       | 2022-09-17 00:45:19.057+02
backup_type        | hybrid
snapshot_id        | 0c6ebd56-dcfe-46a8-91cc-327cc94e9773
is_stopped         | f


Jirka


On 9/19/22 14:03, Benny Zlotnik wrote:
Completed transfers (phase 9/10) shouldn't interfere with anything and
are cleared automatically after 15 minutes.
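
If you want to double-check that, something like the query below should list any completed transfers that are still hanging around (a rough sketch only, assuming direct psql access to the engine database; phase 9 is Finished Success as in my previous message, and 10 should be its failed counterpart):

-- sketch only: completed transfers that still have a row in the table;
-- these are normally removed on their own within ~15 minutes
select command_id, disk_id, phase, last_updated
  from image_transfers
 where phase in (9, 10);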

On Mon, Sep 19, 2022 at 2:48 PM Jirka Simon <jirka@vesim.cz> wrote:
Hello Benny,

Thank you, I updated the phase to 9 and restarted ovirt-engine, but it is still there. Should I update the phase column in the vm_backups table as well?


Jirka


On 9/19/22 13:27, Benny Zlotnik wrote:

OK, I see an issue with the transfer
2022-09-17 01:43:05,856+02 INFO [org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-74) [7f8657ae-1f57-438f-b849-80aa9e805021] Updating image transfer '8d8e2674-ac0a-47cd-887f-d0cf6ee5e4ba' phase from 'Finalizing Success' to 'Finished Success'
2022-09-17 01:43:05,873+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-74) [7f8657ae-1f57-438f-b849-80aa9e805021] EVENT_ID: TRANSFER_IMAGE_SUCCEEDED(1,032), Image Download with disk log1.util.prod.hq.sldev.cz_log1.util.prod.hq.sldev.cz succeeded.
2022-09-17 01:43:09,194+02 INFO [org.ovirt.engine.core.sso.service.AuthenticationService] (default task-122) [] User admin@internal-authz with profile [internal] successfully logged in with scopes: ovirt-app-api ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate ovirt-ext=token:password-access
2022-09-17 01:43:09,216+02 INFO [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-122) [1d4c2113] Running command: CreateUserSessionCommand internal: false.
2022-09-17 01:43:09,223+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-122) [1d4c2113] EVENT_ID: USER_VDC_LOGIN(30), User admin@internal-authz connecting from '10.36.191.253' using session 'PS71uIYNYn6AJH4NSLfuEtGxMCrWEYS4SqiawmvWMQHAwVXkXXS8o1IdoMQkPINtUIkZY+qojVNeC4oeSSnBwA==' logged in.
2022-09-17 01:43:09,228+02 INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] (default task-122) [dab3e098-03a8-46d3-b2c6-5952a4236090] Running command: TransferImageStatusCommand internal: false. Entities affected :  ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_DISK with role type USER
2022-09-17 01:43:09,229+02 INFO [org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater] (default task-122) [dab3e098-03a8-46d3-b2c6-5952a4236090] Updating image transfer '8d8e2674-ac0a-47cd-887f-d0cf6ee5e4ba' phase from 'Finished Success' to 'Finalizing Success'

The phase of the transfer was moved from Finished Success (phase 9) back to
Finalizing Success (phase 7). This is a bug [1] in oVirt that has been
fixed and will be in the next release; it also means that the backup
client finalized the transfer twice.
Since the transfer is complete, you can move it to phase 9 and then
finalize the backup.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=2092816
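
In case it helps, a minimal sketch of that phase change, assuming direct psql access to the engine database (the command_id is the one from the image_transfers row you posted, and phase 9 is Finished Success):

-- sketch only: push the stuck transfer back to 'Finished Success' (phase 9)
-- so the engine's automatic cleanup can remove it; double-check the command_id first
update image_transfers
   set phase = 9,
       last_updated = now()
 where command_id = '8d8e2674-ac0a-47cd-887f-d0cf6ee5e4ba';

The backup itself would then still be finalized the usual way (through the backup client / API), not by editing the database.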

On Mon, Sep 19, 2022 at 2:20 PM Jirka Simon <jirka@vesim.cz> wrote:

Hi Benny,

Thank you for the very fast answer.

No, I haven't done anything with it yet.

Here is the record from the image_transfers table:

select *  from image_transfers;
-[ RECORD 1 ]-------------+-------------------------------------------
command_id                | 8d8e2674-ac0a-47cd-887f-d0cf6ee5e4ba
command_type              | 1024
phase                     | 7
last_updated              | 2022-09-17 01:43:09.229+02
message                   |
vds_id                    | a7d6e143-4230-42af-863b-d83667810d78
disk_id                   | 950279ef-485c-400e-ba66-a3f545618de5
imaged_ticket_id          |
proxy_uri                 | https://ovirtm.corp.sldev.cz:54323/images
bytes_sent                | 214748364800
bytes_total               | 214748364800
type                      | 1
active                    | f
daemon_uri                | https://ovirt4.corp.sldev.cz:54322/images
client_inactivity_timeout | 3600
image_format              | 5
backend                   | 1
backup_id                 | b9c458e6-64e2-41c2-93b8-96761e71f82b
client_type               | 2
shallow                   | f
timeout_policy            | legacy


The backup is performed from vProtect. We used image transfer backups earlier, but we had a problem with hung snapshots as well (that was on oVirt 4.4, we discussed it a couple of weeks ago; now we have 4.5 and the situation is the same, so we wanted to try CBT backups).


thank you Jirka



On 9/19/22 13:08, Benny Zlotnik wrote:

Please attach the ovirt-engine logs.

The backup has the Ready status, did you finalize it?
Can you show all the fields for the disk transfer? Was it finalized?
How is the backup performed?

On Mon, Sep 19, 2022 at 2:06 PM Jirka Simon <jirka@vesim.cz> wrote:

Hello there.

We have an issue with backups on our cluster: one backup started 2 days ago and is still in the Finalizing state.

select * from vm_backups;
-[ RECORD 1 ]------+--------------------------------------
backup_id          | b9c458e6-64e2-41c2-93b8-96761e71f82b
from_checkpoint_id |
to_checkpoint_id   | 7a558f2a-57b6-432f-b5dd-85f5fb9dac8e
vm_id              | c3b2199f-35cc-41dc-8787-835e945217d2
phase              | Ready
_create_date       | 2022-09-17 00:44:56.877+02
host_id            |
description        |
_update_date       | 2022-09-17 00:45:19.057+02
backup_type        | hybrid
snapshot_id        | 0c6ebd56-dcfe-46a8-91cc-327cc94e9773
is_stopped         | f


And if I check the image_transfers table, I see bytes_sent = bytes_total:

engine=# select it.disk_id,bd.disk_alias,it.last_updated, it.bytes_sent, it.bytes_total  from image_transfers as it , base_disks as bd where  it.disk_id =  bd.disk_id;
              disk_id                |                      disk_alias                       |        last_updated        |  bytes_sent  | bytes_total
--------------------------------------+-------------------------------------------------------+----------------------------+--------------+--------------
950279ef-485c-400e-ba66-a3f545618de5 | log1.util.prod.hq.sldev.cz_log1.util.prod.hq.sldev.cz | 2022-09-17 01:43:09.229+02 | 214748364800 | 214748364800


There is no error in the logs.


If I run /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t all -qc, it shows no records in any section.


I can clean these records from the DB to fix it, but it will happen again in a few days.


vdsm.x86_64   4.50.2.2-1.el8

ovirt-engine.noarch    4.5.2.4-1.el8


Is there anything I can check to find the reason for this?


Thank you Jirka

