Upgrade GWT to 2.9.0
by Radoslaw Szwajkowski
Hi all,
We plan to bump the GWT version to 2.9.0 (patch [1]).
Key changes:
1. support for building with Java 11
2. removal of support for classic Dev Mode (use Super Dev Mode instead)
3. accumulated improvements and bug fixes (from versions 2.8.1, 2.8.2, 2.9.0)
Please review and check if your areas are impacted.
For details, see the BZ [2].
best regards,
radek
[1] https://gerrit.ovirt.org/110359/
[2] https://bugzilla.redhat.com/1860309
vdsm.storage.exception.UnknownTask: Task id unknown (was: [oVirt Jenkins] ovirt-system-tests_he-basic-suite-master - Build # 1641 - Still Failing!)
by Yedidyah Bar David
On Wed, Jun 17, 2020 at 6:28 AM <jenkins(a)jenkins.phx.ovirt.org> wrote:
>
> Project: https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/
> Build: https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/1641/
This one failed while trying to create the disk image for the hosted-engine VM:
https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/16...
:
2020-06-16 23:03:20,527-0400 INFO ansible task start {'status': 'OK',
'ansible_type': 'task', 'ansible_playbook':
'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml',
'ansible_task': 'ovirt.hosted_engine_setup : Add HE disks'}
...
2020-06-16 23:14:12,702-0400 DEBUG var changed: host "localhost" var
"add_disks" type "<class 'dict'>" value: "{
...
"msg": "Timeout exceed while waiting on result state of the entity."
https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/16...
:
2020-06-16 23:03:22,612-04 INFO
[org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (default task-1)
[16c24599-0048-44eb-a410-d39b7ce98712]
CommandMultiAsyncTasks::attachTask: Attaching task
'6b2a7648-748c-430b-94b6-5e3f719df2ac' to command
'fa81759d-c57a-4237-81e0-beb210faa64d'.
2020-06-16 23:03:22,659-04 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (default task-1)
[16c24599-0048-44eb-a410-d39b7ce98712] Adding task
'6b2a7648-748c-430b-94b6-5e3f719df2ac' (Parent Command
'AddImageFromScratch', Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'),
polling hasn't started yet..
2020-06-16 23:03:22,699-04 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (default task-1)
[16c24599-0048-44eb-a410-d39b7ce98712]
BaseAsyncTask::startPollingTask: Starting to poll task
'6b2a7648-748c-430b-94b6-5e3f719df2ac'.
...
2020-06-16 23:03:25,835-04 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-25)
[] SPMAsyncTask::PollTask: Polling task
'6b2a7648-748c-430b-94b6-5e3f719df2ac' (Parent Command
'AddImageFromScratch', Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters')
returned status 'finished', result 'success'.
2020-06-16 23:03:25,863-04 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-25)
[] BaseAsyncTask::onTaskEndSuccess: Task
'6b2a7648-748c-430b-94b6-5e3f719df2ac' (Parent Command
'AddImageFromScratch', Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended
successfully.
But then:
2020-06-16 23:03:25,897-04 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(EE-ManagedThreadFactory-engine-Thread-29)
[16c24599-0048-44eb-a410-d39b7ce98712]
CommandAsyncTask::HandleEndActionResult [within thread]: endAction for
action type 'AddImageFromScratch' succeeded, clearing tasks.
2020-06-16 23:03:25,897-04 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engine-Thread-29)
[16c24599-0048-44eb-a410-d39b7ce98712] SPMAsyncTask::ClearAsyncTask:
Attempting to clear task '6b2a7648-748c-430b-94b6-5e3f719df2ac'
2020-06-16 23:03:25,899-04 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-29)
[16c24599-0048-44eb-a410-d39b7ce98712] START, SPMClearTaskVDSCommand(
SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='3bcde3b4-b044-11ea-bbb6-5452c0a8c863',
ignoreFailoverLimit='false',
taskId='6b2a7648-748c-430b-94b6-5e3f719df2ac'}), log id: 481c2d3d
2020-06-16 23:03:25,900-04 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-29)
[16c24599-0048-44eb-a410-d39b7ce98712] START,
HSMClearTaskVDSCommand(HostName = lago-he-basic-suite-master-host-0,
HSMTaskGuidBaseVDSCommandParameters:{hostId='85ecc51c-f2cb-46a1-9452-fd487399d8dd',
taskId='6b2a7648-748c-430b-94b6-5e3f719df2ac'}), log id: 17360b3d
...
2020-06-16 23:03:26,054-04 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engine-Thread-29)
[16c24599-0048-44eb-a410-d39b7ce98712]
BaseAsyncTask::removeTaskFromDB: Removed task
'6b2a7648-748c-430b-94b6-5e3f719df2ac' from DataBase
But then:
2020-06-16 23:03:26,315-04 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-55)
[7fe7b467] Command 'UploadStreamVDSCommand(HostName =
lago-he-basic-suite-master-host-0,
UploadStreamVDSCommandParameters:{hostId='85ecc51c-f2cb-46a1-9452-fd487399d8dd'})'
execution failed: javax.net.ssl.SSLPeerUnverifiedException:
Certificate for <lago-he-basic-suite-master-host-0.lago.local> doesn't
match any of the subject alternative names:
[lago-he-basic-suite-master-host-0.lago.local]
2020-06-16 23:03:26,315-04 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-55)
[7fe7b467] FINISH, UploadStreamVDSCommand, return: , log id: 7e3a3e80
2020-06-16 23:03:26,316-04 ERROR
[org.ovirt.engine.core.bll.storage.ovfstore.UploadStreamCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-55)
[7fe7b467] Command
'org.ovirt.engine.core.bll.storage.ovfstore.UploadStreamCommand'
failed: EngineException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException:
javax.net.ssl.SSLPeerUnverifiedException: Certificate for
<lago-he-basic-suite-master-host-0.lago.local> doesn't match any of
the subject alternative names:
[lago-he-basic-suite-master-host-0.lago.local] (Failed with error
VDS_NETWORK_ERROR and code 5022)
Any idea why?
Anything changed in how we check the certificate?
Perhaps related to the upgrade to CentOS 8.2?
And, how come it failed only this late? Don't we check the certificate earlier?
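(For reference, one rough way to dump the SANs actually present in the host's
vdsm certificate, run on the host itself. This is only a sketch: it assumes
the Python 'cryptography' package is installed and that the certificate is at
the usual path /etc/pki/vdsm/certs/vdsmcert.pem.

    from cryptography import x509
    from cryptography.hazmat.backends import default_backend

    # Default vdsm certificate path; adjust if your setup differs.
    with open("/etc/pki/vdsm/certs/vdsmcert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read(), default_backend())

    san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
    print("subject:", cert.subject.rfc4514_string())
    print("SANs:", san.value.get_values_for_type(x509.DNSName))

If the SANs really do include the host name, as the error message above
suggests, the problem is more likely in how the client matches the name than
in the certificate itself.)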
Anyway, this left the host in "not responding" state, so:
2020-06-16 23:03:29,994-04 ERROR
[org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-79)
[16c24599-0048-44eb-a410-d39b7ce98712] Failed to get volume info:
org.ovirt.engine.core.common.errors.EngineException: EngineException:
No host was found to perform the operation (Failed with error
RESOURCE_MANAGER_VDS_NOT_FOUND and code 5004)
And perhaps due to an unrelated issue, also:
2020-06-16 23:03:31,177-04 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMRevertTaskVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-43)
[16c24599-0048-44eb-a410-d39b7ce98712] Trying to revert unknown task
'6b2a7648-748c-430b-94b6-5e3f719df2ac'
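(For reference, one way to check whether the task still exists on the SPM
host is to query vdsm directly. A rough sketch, assuming vdsm-client is
available on the host and reusing the task id from the engine log above:

    import json
    import subprocess

    # Task id copied from the engine log above.
    task_id = "6b2a7648-748c-430b-94b6-5e3f719df2ac"
    out = subprocess.check_output(
        ["vdsm-client", "Task", "getStatus", "taskID=" + task_id])
    print(json.dumps(json.loads(out), indent=2))

If the task was already cleared on the host, vdsm is expected to fail this
call with the UnknownTask error from the subject line.)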
I looked a bit also at:
https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/16...
and I see some relevant stuff there, but nothing I can spot that points to
the root cause (e.g. the word "cert" does not appear there).
Can anyone please have a look? Thanks.
> Build Number: 1641
> Build Status: Still Failing
> Triggered By: Started by timer
>
> -------------------------------------
> Changes Since Last Success:
> -------------------------------------
> Changes for Build #1633
> [Marcin Sobczyk] ost-images: Drop rebasing of qcows
>
> [Ehud Yonasi] mock: fix yum repos injection.
>
> [Ehud Yonasi] onboard ost-images to stdci.
>
>
> Changes for Build #1634
> [Marcin Sobczyk] ost-images: Drop rebasing of qcows
>
>
> Changes for Build #1635
> [Marcin Sobczyk] ost-images: Drop rebasing of qcows
>
>
> Changes for Build #1636
> [Marcin Sobczyk] ost-images: Drop rebasing of qcows
>
>
> Changes for Build #1637
> [Marcin Sobczyk] ost-images: Drop rebasing of qcows
>
>
> Changes for Build #1638
> [Marcin Sobczyk] ost-images: Drop rebasing of qcows
>
> [Ehud Yonasi] stdci_runner: update templates node to ost-images.
>
>
> Changes for Build #1639
> [Marcin Sobczyk] ost-images: Drop rebasing of qcows
>
>
> Changes for Build #1640
> [Yedidyah Bar David] Allow engine 20 minutes to come up after VM restart
>
>
> Changes for Build #1641
> [Michal Skrivanek] test live storage migration again
>
> [Ehud Yonasi] poll: add ost-images to nightly.
>
>
>
>
> -----------------
> Failed Tests:
> -----------------
> No tests ran.
--
Didi
Check OVF_STORE volume status task failures
by Artem Hrechanychenko
Hi all,
Maybe I'm missing some information, but I still have trouble with the HE
installation using the OST CI.
Is that already fixed?
https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/10415/
https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/104...
2020-07-20 06:36:04,455-0400 INFO
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:111 TASK [ovirt.hosted_engine_setup :
Check OVF_STORE volume status]
2020-07-20 06:40:22,815-0400 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:105 {'results': [{'cmd': ['vdsm-client',
'Volume', 'getInfo',
'storagepoolID=03a466fe-ca73-11ea-b77e-5452c0a8c863',
'storagedomainID=6685cca5-f0e1-4831-acdf-6f7b50596142',
'imageID=4d2f7009-5b79-4b44-b0ef-e152bc51649f',
'volumeID=7b953d0e-662d-4e72-9fdc-823ea867262b'], 'stdout': '{\n
"apparentsize": "134217728",\n "capacity": "134217728",\n
"children": [],\n "ctime": "1595241259",\n "description":
"{\\"Updated\\":false,\\"Last Updated\\":null,\\"Storage
Domains\\":[{\\"uuid\\":\\"6685cca5-f0e1-4831-acdf-6f7b50596142\\"}],\\"Disk
Description\\":\\"OVF_STORE\\"}",\n "disktype": "OVFS",\n
"domain": "6685cca5-f0e1-4831-acdf-6f7b50596142",\n "format":
"RAW",\n "generation": 0,\n "image":
"4d2f7009-5b79-4b44-b0ef-e152bc51649f",\n "lease": {\n
"offset": 0,\n "owners": [],\n "path":
"/rhev/data-center/mnt/lago-he-basic-suite-master-storage:_exports_nfs__he/6685cca5-f0e1-4831-acdf-6f7b50596142/images/4d2f7009-5b79-4b44-b0ef-e152bc51649f/7b953d0e-662d-4e72-9fdc-823ea867262b.lease",\n
"version": null\n },\n "legality": "LEGAL",\n "mtime":
"0",\n "parent": "00000000-0000-0000-0000-000000000000",\n
"pool": "",\n "status": "OK",\n "truesize": "134217728",\n
"type": "PREALLOCATED",\n "uuid":
"7b953d0e-662d-4e72-9fdc-823ea867262b",\n "voltype": "LEAF"\n}',
'stderr': '', 'rc': 0, 'start': '2020-07-20 06:38:13.456845', 'end':
'2020-07-20 06:38:13.897280', 'delta': '0:00:00.440435', 'changed':
True, 'invocation': {'module_args': {'_raw_params': 'vdsm-client
Volume getInfo storagepoolID=03a466fe-ca73-11ea-b77e-5452c0a8c863
storagedomainID=6685cca5-f0e1-4831-acdf-6f7b50596142
imageID=4d2f7009-5b79-4b44-b0ef-e152bc51649f
volumeID=7b953d0e-662d-4e72-9fdc-823ea867262b', 'warn': True,
'_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends':
True, 'argv': None, 'chdir': None, 'executable': None, 'creates':
None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['{', '
"apparentsize": "134217728",', ' "capacity": "134217728",', '
"children": [],', ' "ctime": "1595241259",', ' "description":
"{\\"Updated\\":false,\\"Last Updated\\":null,\\"Storage
Domains\\":[{\\"uuid\\":\\"6685cca5-f0e1-4831-acdf-6f7b50596142\\"}],\\"Disk
Description\\":\\"OVF_STORE\\"}",', ' "disktype": "OVFS",', '
"domain": "6685cca5-f0e1-4831-acdf-6f7b50596142",', ' "format":
"RAW",', ' "generation": 0,', ' "image":
"4d2f7009-5b79-4b44-b0ef-e152bc51649f",', ' "lease": {', '
"offset": 0,', ' "owners": [],', ' "path":
"/rhev/data-center/mnt/lago-he-basic-suite-master-storage:_exports_nfs__he/6685cca5-f0e1-4831-acdf-6f7b50596142/images/4d2f7009-5b79-4b44-b0ef-e152bc51649f/7b953d0e-662d-4e72-9fdc-823ea867262b.lease",',
' "version": null', ' },', ' "legality": "LEGAL",', '
"mtime": "0",', ' "parent":
"00000000-0000-0000-0000-000000000000",', ' "pool": "",', '
"status": "OK",', ' "truesize": "134217728",', ' "type":
"PREALLOCATED",', ' "uuid":
"7b953d0e-662d-4e72-9fdc-823ea867262b",', ' "voltype": "LEAF"',
'}'], 'stderr_lines': [], '_ansible_no_log': False, 'failed': True,
'attempts': 12, 'item': {'name': 'OVF_STORE', 'image_id':
'7b953d0e-662d-4e72-9fdc-823ea867262b', 'id':
'4d2f7009-5b79-4b44-b0ef-e152bc51649f'}, 'ansible_loop_var': 'item',
'_ansible_item_label': {'name': 'OVF_STORE', 'image_id':
'7b953d0e-662d-4e72-9fdc-823ea867262b', 'id':
'4d2f7009-5b79-4b44-b0ef-e152bc51649f'}}, {'cmd': ['vdsm-client',
'Volume', 'getInfo',
'storagepoolID=03a466fe-ca73-11ea-b77e-5452c0a8c863',
'storagedomainID=6685cca5-f0e1-4831-acdf-6f7b50596142',
'imageID=044e384a-dedf-4589-8dfb-beca170138ee',
'volumeID=033d64fd-6f93-42be-84bc-082b03095ef3'], 'stdout': '{\n
"apparentsize": "134217728",\n "capacity": "134217728",\n
"children": [],\n "ctime": "1595241260",\n "description":
"{\\"Updated\\":false,\\"Last Updated\\":null,\\"Storage
Domains\\":[{\\"uuid\\":\\"6685cca5-f0e1-4831-acdf-6f7b50596142\\"}],\\"Disk
Description\\":\\"OVF_STORE\\"}",\n "disktype": "OVFS",\n
"domain": "6685cca5-f0e1-4831-acdf-6f7b50596142",\n "format":
"RAW",\n "generation": 0,\n "image":
"044e384a-dedf-4589-8dfb-beca170138ee",\n "lease": {\n
"offset": 0,\n "owners": [],\n "path":
"/rhev/data-center/mnt/lago-he-basic-suite-master-storage:_exports_nfs__he/6685cca5-f0e1-4831-acdf-6f7b50596142/images/044e384a-dedf-4589-8dfb-beca170138ee/033d64fd-6f93-42be-84bc-082b03095ef3.lease",\n
"version": null\n },\n "legality": "LEGAL",\n "mtime":
"0",\n "parent": "00000000-0000-0000-0000-000000000000",\n
"pool": "",\n "status": "OK",\n "truesize": "134217728",\n
"type": "PREALLOCATED",\n "uuid":
"033d64fd-6f93-42be-84bc-082b03095ef3",\n "voltype": "LEAF"\n}',
'stderr': '', 'rc': 0, 'start': '2020-07-20 06:40:22.272462', 'end':
'2020-07-20 06:40:22.621943', 'delta': '0:00:00.349481', 'changed':
True, 'invocation': {'module_args': {'_raw_params': 'vdsm-client
Volume getInfo storagepoolID=03a466fe-ca73-11ea-b77e-5452c0a8c863
storagedomainID=6685cca5-f0e1-4831-acdf-6f7b50596142
imageID=044e384a-dedf-4589-8dfb-beca170138ee
volumeID=033d64fd-6f93-42be-84bc-082b03095ef3', 'warn': True,
'_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends':
True, 'argv': None, 'chdir': None, 'executable': None, 'creates':
None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['{', '
"apparentsize": "134217728",', ' "capacity": "134217728",', '
"children": [],', ' "ctime": "1595241260",', ' "description":
"{\\"Updated\\":false,\\"Last Updated\\":null,\\"Storage
Domains\\":[{\\"uuid\\":\\"6685cca5-f0e1-4831-acdf-6f7b50596142\\"}],\\"Disk
Description\\":\\"OVF_STORE\\"}",', ' "disktype": "OVFS",', '
"domain": "6685cca5-f0e1-4831-acdf-6f7b50596142",', ' "format":
"RAW",', ' "generation": 0,', ' "image":
"044e384a-dedf-4589-8dfb-beca170138ee",', ' "lease": {', '
"offset": 0,', ' "owners": [],', ' "path":
"/rhev/data-center/mnt/lago-he-basic-suite-master-storage:_exports_nfs__he/6685cca5-f0e1-4831-acdf-6f7b50596142/images/044e384a-dedf-4589-8dfb-beca170138ee/033d64fd-6f93-42be-84bc-082b03095ef3.lease",',
' "version": null', ' },', ' "legality": "LEGAL",', '
"mtime": "0",', ' "parent":
"00000000-0000-0000-0000-000000000000",', ' "pool": "",', '
"status": "OK",', ' "truesize": "134217728",', ' "type":
"PREALLOCATED",', ' "uuid":
"033d64fd-6f93-42be-84bc-082b03095ef3",', ' "voltype": "LEAF"',
'}'], 'stderr_lines': [], '_ansible_no_log': False, 'failed': True,
'attempts': 12, 'item': {'name': 'OVF_STORE', 'image_id':
'033d64fd-6f93-42be-84bc-082b03095ef3', 'id':
'044e384a-dedf-4589-8dfb-beca170138ee'}, 'ansible_loop_var': 'item',
'_ansible_item_label': {'name': 'OVF_STORE', 'image_id':
'033d64fd-6f93-42be-84bc-082b03095ef3', 'id':
'044e384a-dedf-4589-8dfb-beca170138ee'}}], 'changed': True, 'msg':
'All items completed'}
2020-07-20 06:40:22,919-0400 ERROR
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:109 {'cmd': ['vdsm-client', 'Volume',
'getInfo', 'storagepoolID=03a466fe-ca73-11ea-b77e-5452c0a8c863',
'storagedomainID=6685cca5-f0e1-4831-acdf-6f7b50596142',
'imageID=4d2f7009-5b79-4b44-b0ef-e152bc51649f',
'volumeID=7b953d0e-662d-4e72-9fdc-823ea867262b'], 'stdout': '{\n
"apparentsize": "134217728",\n "capacity": "134217728",\n
"children": [],\n "ctime": "1595241259",\n "description":
"{\\"Updated\\":false,\\"Last Updated\\":null,\\"Storage
Domains\\":[{\\"uuid\\":\\"6685cca5-f0e1-4831-acdf-6f7b50596142\\"}],\\"Disk
Description\\":\\"OVF_STORE\\"}",\n "disktype": "OVFS",\n
"domain": "6685cca5-f0e1-4831-acdf-6f7b50596142",\n "format":
"RAW",\n "generation": 0,\n "image":
"4d2f7009-5b79-4b44-b0ef-e152bc51649f",\n "lease": {\n
"offset": 0,\n "owners": [],\n "path":
"/rhev/data-center/mnt/lago-he-basic-suite-master-storage:_exports_nfs__he/6685cca5-f0e1-4831-acdf-6f7b50596142/images/4d2f7009-5b79-4b44-b0ef-e152bc51649f/7b953d0e-662d-4e72-9fdc-823ea867262b.lease",\n
"version": null\n },\n "legality": "LEGAL",\n "mtime":
"0",\n "parent": "00000000-0000-0000-0000-000000000000",\n
"pool": "",\n "status": "OK",\n "truesize": "134217728",\n
"type": "PREALLOCATED",\n "uuid":
"7b953d0e-662d-4e72-9fdc-823ea867262b",\n "voltype": "LEAF"\n}',
'stderr': '', 'rc': 0, 'start': '2020-07-20 06:38:13.456845', 'end':
'2020-07-20 06:38:13.897280', 'delta': '0:00:00.440435', 'changed':
True, 'invocation': {'module_args': {'_raw_params': 'vdsm-client
Volume getInfo storagepoolID=03a466fe-ca73-11ea-b77e-5452c0a8c863
storagedomainID=6685cca5-f0e1-4831-acdf-6f7b50596142
imageID=4d2f7009-5b79-4b44-b0ef-e152bc51649f
volumeID=7b953d0e-662d-4e72-9fdc-823ea867262b', 'warn': True,
'_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends':
True, 'argv': None, 'chdir': None, 'executable': None, 'creates':
None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['{', '
"apparentsize": "134217728",', ' "capacity": "134217728",', '
"children": [],', ' "ctime": "1595241259",', ' "description":
"{\\"Updated\\":false,\\"Last Updated\\":null,\\"Storage
Domains\\":[{\\"uuid\\":\\"6685cca5-f0e1-4831-acdf-6f7b50596142\\"}],\\"Disk
Description\\":\\"OVF_STORE\\"}",', ' "disktype": "OVFS",', '
"domain": "6685cca5-f0e1-4831-acdf-6f7b50596142",', ' "format":
"RAW",', ' "generation": 0,', ' "image":
"4d2f7009-5b79-4b44-b0ef-e152bc51649f",', ' "lease": {', '
"offset": 0,', ' "owners": [],', ' "path":
"/rhev/data-center/mnt/lago-he-basic-suite-master-storage:_exports_nfs__he/6685cca5-f0e1-4831-acdf-6f7b50596142/images/4d2f7009-5b79-4b44-b0ef-e152bc51649f/7b953d0e-662d-4e72-9fdc-823ea867262b.lease",',
' "version": null', ' },', ' "legality": "LEGAL",', '
"mtime": "0",', ' "parent":
"00000000-0000-0000-0000-000000000000",', ' "pool": "",', '
"status": "OK",', ' "truesize": "134217728",', ' "type":
"PREALLOCATED",', ' "uuid":
"7b953d0e-662d-4e72-9fdc-823ea867262b",', ' "voltype": "LEAF"',
'}'], 'stderr_lines': [], '_ansible_no_log': False, 'failed': True,
'attempts': 12, 'item': {'name': 'OVF_STORE', 'image_id':
'7b953d0e-662d-4e72-9fdc-823ea867262b', 'id':
'4d2f7009-5b79-4b44-b0ef-e152bc51649f'}, 'ansible_loop_var': 'item',
'_ansible_item_label': {'name': 'OVF_STORE', 'image_id':
'7b953d0e-662d-4e72-9fdc-823ea867262b', 'id':
'4d2f7009-5b79-4b44-b0ef-e152bc51649f'}}
2020-07-20 06:40:23,026-0400 ERROR
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:109 {'cmd': ['vdsm-client', 'Volume',
'getInfo', 'storagepoolID=03a466fe-ca73-11ea-b77e-5452c0a8c863',
'storagedomainID=6685cca5-f0e1-4831-acdf-6f7b50596142',
'imageID=044e384a-dedf-4589-8dfb-beca170138ee',
'volumeID=033d64fd-6f93-42be-84bc-082b03095ef3'], 'stdout': '{\n
"apparentsize": "134217728",\n "capacity": "134217728",\n
"children": [],\n "ctime": "1595241260",\n "description":
"{\\"Updated\\":false,\\"Last Updated\\":null,\\"Storage
Domains\\":[{\\"uuid\\":\\"6685cca5-f0e1-4831-acdf-6f7b50596142\\"}],\\"Disk
Description\\":\\"OVF_STORE\\"}",\n "disktype": "OVFS",\n
"domain": "6685cca5-f0e1-4831-acdf-6f7b50596142",\n "format":
"RAW",\n "generation": 0,\n "image":
"044e384a-dedf-4589-8dfb-beca170138ee",\n "lease": {\n
"offset": 0,\n "owners": [],\n "path":
"/rhev/data-center/mnt/lago-he-basic-suite-master-storage:_exports_nfs__he/6685cca5-f0e1-4831-acdf-6f7b50596142/images/044e384a-dedf-4589-8dfb-beca170138ee/033d64fd-6f93-42be-84bc-082b03095ef3.lease",\n
"version": null\n },\n "legality": "LEGAL",\n "mtime":
"0",\n "parent": "00000000-0000-0000-0000-000000000000",\n
"pool": "",\n "status": "OK",\n "truesize": "134217728",\n
"type": "PREALLOCATED",\n "uuid":
"033d64fd-6f93-42be-84bc-082b03095ef3",\n "voltype": "LEAF"\n}',
'stderr': '', 'rc': 0, 'start': '2020-07-20 06:40:22.272462', 'end':
'2020-07-20 06:40:22.621943', 'delta': '0:00:00.349481', 'changed':
True, 'invocation': {'module_args': {'_raw_params': 'vdsm-client
Volume getInfo storagepoolID=03a466fe-ca73-11ea-b77e-5452c0a8c863
storagedomainID=6685cca5-f0e1-4831-acdf-6f7b50596142
imageID=044e384a-dedf-4589-8dfb-beca170138ee
volumeID=033d64fd-6f93-42be-84bc-082b03095ef3', 'warn': True,
'_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends':
True, 'argv': None, 'chdir': None, 'executable': None, 'creates':
None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['{', '
"apparentsize": "134217728",', ' "capacity": "134217728",', '
"children": [],', ' "ctime": "1595241260",', ' "description":
"{\\"Updated\\":false,\\"Last Updated\\":null,\\"Storage
Domains\\":[{\\"uuid\\":\\"6685cca5-f0e1-4831-acdf-6f7b50596142\\"}],\\"Disk
Description\\":\\"OVF_STORE\\"}",', ' "disktype": "OVFS",', '
"domain": "6685cca5-f0e1-4831-acdf-6f7b50596142",', ' "format":
"RAW",', ' "generation": 0,', ' "image":
"044e384a-dedf-4589-8dfb-beca170138ee",', ' "lease": {', '
"offset": 0,', ' "owners": [],', ' "path":
"/rhev/data-center/mnt/lago-he-basic-suite-master-storage:_exports_nfs__he/6685cca5-f0e1-4831-acdf-6f7b50596142/images/044e384a-dedf-4589-8dfb-beca170138ee/033d64fd-6f93-42be-84bc-082b03095ef3.lease",',
' "version": null', ' },', ' "legality": "LEGAL",', '
"mtime": "0",', ' "parent":
"00000000-0000-0000-0000-000000000000",', ' "pool": "",', '
"status": "OK",', ' "truesize": "134217728",', ' "type":
"PREALLOCATED",', ' "uuid":
"033d64fd-6f93-42be-84bc-082b03095ef3",', ' "voltype": "LEAF"',
'}'], 'stderr_lines': [], '_ansible_no_log': False, 'failed': True,
'attempts': 12, 'item': {'name': 'OVF_STORE', 'image_id':
'033d64fd-6f93-42be-84bc-082b03095ef3', 'id':
'044e384a-dedf-4589-8dfb-beca170138ee'}, 'ansible_loop_var': 'item',
'_ansible_item_label': {'name': 'OVF_STORE', 'image_id':
'033d64fd-6f93-42be-84bc-082b03095ef3', 'id':
'044e384a-dedf-4589-8dfb-beca170138ee'}}
2020-07-20 06:40:23,128-0400 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:105 PLAY RECAP [localhost] : ok: 69
changed: 19 unreachable: 0 skipped: 10 failed: 1
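Note that both OVF_STORE volumes above report "Updated":false in their
description, and the task seems to keep retrying until the description says
the store was updated. A rough way to check the same thing by hand on the
host, assuming that is indeed what the role waits for and reusing the IDs
from the log above:

    import json
    import subprocess

    # IDs copied from the failing task output above.
    cmd = [
        "vdsm-client", "Volume", "getInfo",
        "storagepoolID=03a466fe-ca73-11ea-b77e-5452c0a8c863",
        "storagedomainID=6685cca5-f0e1-4831-acdf-6f7b50596142",
        "imageID=4d2f7009-5b79-4b44-b0ef-e152bc51649f",
        "volumeID=7b953d0e-662d-4e72-9fdc-823ea867262b",
    ]
    info = json.loads(subprocess.check_output(cmd))
    desc = json.loads(info["description"])
    print("OVF_STORE updated:", desc.get("Updated"))

If this keeps printing False, the engine never managed to upload the OVF data
to the store before the deployment timed out.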
Re: [ovirt-users] Parent checkpoint ID does not match the actual leaf checkpoint
by Nir Soffer
On Sun, Jul 19, 2020 at 5:38 PM Łukasz Kołaciński <l.kolacinski(a)storware.eu>
wrote:
> Hello,
> Thanks to previous answers, I was able to make backups. Unfortunately, we
> had some infrastructure issues and after the host reboots new problems
> appeared. I am not able to do any backup using the commands that worked
> yesterday. I looked through the logs and there is something like this:
>
> 2020-07-17 15:06:30,644+02 ERROR
> [org.ovirt.engine.core.bll.StartVmBackupCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54)
> [944a1447-4ea5-4a1c-b971-0bc612b6e45e] Failed to execute VM backup
> operation 'StartVmBackup': {}:
> org.ovirt.engine.core.common.errors.EngineException: EngineException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to StartVmBackupVDS, error =
> Checkpoint Error: {'parent_checkpoint_id': None, 'leaf_checkpoint_id':
> 'cd078706-84c0-4370-a6ec-654ccd6a21aa', 'vm_id':
> '116aa6eb-31a1-43db-9b1e-ad6e32fb9260', 'reason': '*Parent checkpoint ID
> does not match the actual leaf checkpoint*'}, code = 1610 (Failed with
> error unexpected and code 16)
>
>
It looks like the engine sent:
parent_checkpoint_id: None
This issue was fixed in the engine a few weeks ago.
Which engine and vdsm versions are you testing?
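(To see which checkpoints the engine currently knows about for this VM, here
is a minimal sketch with the Python SDK; it assumes an existing 'connection'
object as in the SDK examples, and uses the vm_id from the error above:

    vm_service = connection.system_service().vms_service().vm_service(
        "116aa6eb-31a1-43db-9b1e-ad6e32fb9260")
    for checkpoint in vm_service.checkpoints_service().list():
        print(checkpoint.id)

Comparing this list with the leaf checkpoint id in the error can show whether
the engine and the host disagree about the checkpoint chain.)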
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:114)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.runVdsCommand(VDSBrokerFrontendImpl.java:33)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.runVdsCommand(CommandBase.java:2114)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.StartVmBackupCommand.performVmBackupOperation(StartVmBackupCommand.java:368)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.StartVmBackupCommand.runVmBackup(StartVmBackupCommand.java:225)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.StartVmBackupCommand.performNextOperation(StartVmBackupCommand.java:199)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback.childCommandsExecutionEnded(SerialChildCommandsExecutionCallback.java:32)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.ChildCommandsCallbackBase.doPolling(ChildCommandsCallbackBase.java:80)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:175)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:109)
> at
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
> at
> java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
> at
> org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:383)
> at
> org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:534)
> at
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:834)
> at
> org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250)
>
>
> And the last error is:
>
> 2020-07-17 15:13:45,835+02 ERROR
> [org.ovirt.engine.core.bll.StartVmBackupCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-14)
> [f553c1f2-1c99-4118-9365-ba6b862da936] Failed to execute VM backup
> operation 'GetVmBackupInfo': {}:
> org.ovirt.engine.core.common.errors.EngineException: EngineException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to GetVmBackupInfoVDS, error
> = No such backup Error: {'vm_id': '116aa6eb-31a1-43db-9b1e-ad6e32fb9260',
> 'backup_id': 'bf1c26f7-c3e5-437c-bb5a-255b8c1b3b73', 'reason': '*VM
> backup not exists: Domain backup job id not found: no domain backup job
> present'*}, code = 1601 (Failed with error unexpected and code 16)
>
>
This is likely a result of the first error. If starting the backup failed,
the backup entity is deleted.
> (these errors are from full backup)
>
> Like I said this is very strange because everything was working correctly.
>
>
> Regards
>
> Łukasz Kołaciński
>
> Junior Java Developer
>
> e-mail: l.kolacinski(a)storware.eu
>
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/S3PLYPOZGT6...
>
Re: Status code: 403 for http://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm
by Lev Veyde
Hi Nir,
Looks like the issue is fixed now, and everything is back to normal.
Thanks in advance,
On Sun, Jul 19, 2020 at 2:34 PM Lev Veyde <lveyde(a)redhat.com> wrote:
> Hi Nir,
>
> Got this error only on the re-build attempt.
> And looks like it's the Jenkins slave that ran out of disk space, not our
> server.
>
> And the job was failing before on something else; if I understand
> correctly, it was timing out while waiting on some cron job to complete its
> work.
> But I think that the CI team could have better answers for that.
>
> Thanks in advance,
>
> On Sun, Jul 19, 2020 at 2:27 PM Nir Soffer <nsoffer(a)redhat.com> wrote:
>
>> On Sun, Jul 19, 2020 at 2:18 PM Lev Veyde <lveyde(a)redhat.com> wrote:
>>
>>> Hi,
>>>
>>> Looks like my job re-build attempt fails on not enough disk space:
>>>
>>> *14:15:32* Traceback (most recent call last):
>>>
>>>
>> Disk is full?
>>
>>
>>> <https://jenkins.ovirt.org/job/ovirt_master_publish-rpms_nightly/1806/cons...>*14:15:32* IOError: [Errno 28] No space left on device
>>>
>>>
>>> On Sun, Jul 19, 2020 at 2:15 PM Lev Veyde <lveyde(a)redhat.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> Looks like the nightly snapshot job has failed, and it caused the
>>>> removal of all -snapshot repos.
>>>>
>>>> I'm now trying to re-run the failed job:
>>>> https://jenkins.ovirt.org/job/ovirt_master_publish-rpms_nightly/1806/console
>>>>
>>>> But we need to understand why it failed - is it just the timeout issue
>>>> we see in the log, or is there something wrong with it?
>>>>
>>>> Currently all snapshot repos are broken (i.e. 4.2, 4.3 and master).
>>>>
>>>> Thanks in advance,
>>>>
>>>> On Sun, Jul 19, 2020 at 1:38 PM Nir Soffer <nsoffer(a)redhat.com> wrote:
>>>>
>>>>> Downloading ovirt-release-master.rpm is Forbidden now. This breaks all
>>>>> developers' workflows.
>>>>>
>>>>> Please fix as soon as possible.
>>>>>
>>>>> Example failure when trying to build vdsm centos-8 docker image:
>>>>>
>>>>> [MIRROR] ovirt-release-master.rpm: Status code: 403 for
>>>>> http://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm
>>>>> [FAILED] ovirt-release-master.rpm: Status code: 403 for
>>>>> http://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm
>>>>> Status code: 403 for
>>>>> http://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm
>>>>>
>>>>>
>>>>
>>>
>>>
>>
>
>
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
<https://www.redhat.com>
lev(a)redhat.com | lveyde(a)redhat.com
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
Re: [ovirt-users] Problem with backuping ovirt 4.4 with SDK
by Nir Soffer
On Tue, Jul 14, 2020 at 9:33 AM Łukasz Kołaciński <l.kolacinski(a)storware.eu>
wrote:
> Hello,
>
Hi Łukasz,
Let's move the discussion to devel(a)ovirt.org; I think it will be more
productive.
Also, always CC me and Eyal on incremental backup questions for a quicker
response.
> I am trying to do full backup on ovirt 4.4 using sdk.
>
Which version of oVirt? libvirt?
> I used steps from this youtube video:
> https://www.youtube.com/watch?v=E2VWUVcycj4 and I got error after running
> backup_vm.py. I see that sdk has imported disks and created backup entity
> and then I got sdk.NotFoundError exception.
>
This means that starting the backup failed. Unfortunately, the API does not
have a good way to get the error that caused the backup to fail.
You should be able to see the error in the event log in the UI, and in the
engine log.
> I also tried to do full backup with API and after finalizing backup
> disappeared (I think)
>
So the backup from the API was successful?
Backups are expected to disappear; they are temporary objects used to manage
the backup process. Once the backup process has finished, you can do nothing
with the backup object, and you cannot fetch the same backup data again.
> and I couldn't try incremental.
>
The fact that the backup disappeared should not prevent the next backup.
After you create a backup, you need to poll the backup status until the
backup is ready:

    while backup.phase != BackupPhase.READY:
        time.sleep(1)
        backup = backup_service.get()
If the backup does not end in ready state, it failed, and you cannot do
anything with
this backup.
When the backup is ready, you can fetch the to_checkpoint_id created for
this backup.
checkpoint_id = backup.to_checkpoint_id
At this point you need to persist the checkpoint id. This will be used to
create the incremental
backup.
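Putting these steps together, a minimal sketch of the full backup flow with
the Python SDK might look like the following. The engine URL, credentials,
CA file, VM id and disk id are placeholders; the service and type names
should match the backup_vm.py example from the SDK, so double-check against
it:

    import time

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url="https://engine.example.com/ovirt-engine/api",  # placeholder
        username="admin@internal",
        password="password",
        ca_file="ca.pem",
    )

    vm_service = connection.system_service().vms_service().vm_service(
        "<vm-uuid>")
    backups_service = vm_service.backups_service()

    # Start a full backup; for an incremental backup also pass
    # from_checkpoint_id=<checkpoint id persisted from the previous backup>.
    backup = backups_service.add(
        types.Backup(disks=[types.Disk(id="<disk-uuid>")]))
    backup_service = backups_service.backup_service(backup.id)

    # Wait until the backup is ready (add a timeout in real code).
    while backup.phase != types.BackupPhase.READY:
        time.sleep(1)
        backup = backup_service.get()

    # Persist this id; it is the from_checkpoint_id of the next
    # incremental backup.
    checkpoint_id = backup.to_checkpoint_id

    # Download the disks here (e.g. with imageio), then finalize.
    backup_service.finalize()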
> [ 0.0 ] Starting full backup for VM '51708c8e-6671-480b-b2d8-199a1af9cbdc'
> Password:
> [ 4.2 ] Waiting until backup 0458bf7f-868c-4859-9fa7-767b3ec62b52 is
> ready
> Traceback (most recent call last):
> File "./backup_vm.py", line 343, in start_backup
> backup = backup_service.get()
> File "/usr/lib64/python3.7/site-packages/ovirtsdk4/services.py", line
> 32333, in get
> return self._internal_get(headers, query, wait)
> File "/usr/lib64/python3.7/site-packages/ovirtsdk4/service.py", line
> 211, in _internal_get
> return future.wait() if wait else future
> File "/usr/lib64/python3.7/site-packages/ovirtsdk4/service.py", line 55,
> in wait
> return self._code(response)
> File "/usr/lib64/python3.7/site-packages/ovirtsdk4/service.py", line
> 208, in callback
> self._check_fault(response)
> File "/usr/lib64/python3.7/site-packages/ovirtsdk4/service.py", line
> 130, in _check_fault
> body = self._internal_read_body(response)
> File "/usr/lib64/python3.7/site-packages/ovirtsdk4/service.py", line
> 312, in _internal_read_body
> self._raise_error(response)
> File "/usr/lib64/python3.7/site-packages/ovirtsdk4/service.py", line
> 118, in _raise_error
> raise error
> ovirtsdk4.NotFoundError: HTTP response code is 404.
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
> File "./backup_vm.py", line 476, in <module>
> main()
> File "./backup_vm.py", line 173, in main
> args.command(args)
> File "./backup_vm.py", line 230, in cmd_start
> backup = start_backup(connection, args)
> File "./backup_vm.py", line 345, in start_backup
> raise RuntimeError("Backup {} failed".format(backup.id))
> RuntimeError: Backup 0458bf7f-868c-4859-9fa7-767b3ec62b52 failed
>
This is correct: the backup has failed.
Please check the event log to understand the failure.
Eyal, can you show how to get the error from the backup using the SDK, in a
way
that can be used by a program?
e.g. a public error code that can be used to decide on the next step, and
an error message that can be used to display the error to users of the
backup application.
This should be added to the backup_vm.py example.
Nir