Attached XML dump.
Looks like it's let me run a 'reboot', but I'm afraid to do a shutdown at
this point.
I have taken a raw copy of the whole image group folder, in the hope
that if worst came to worst I'd be able to recreate the disk from the
actual files.
All existing files seem to be referenced in the XML dump.
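Roughly what I mean by a raw copy (sketch only - the storage mount path is
a placeholder for how it looks on my setup and /backup is just an example
destination; --sparse keeps the thin images from ballooning):

$ rsync -av --sparse \
    /rhev/data-center/mnt/<storage_mount>/74c06ce1-94e6-4064-9d7d-69e1d956645b/images/23710238-07c2-46f3-96c0-9061fe1c3e0d/ \
    /backup/23710238-07c2-46f3-96c0-9061fe1c3e0d/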
On 9/12/2020 11:54 pm, Benny Zlotnik wrote:
The VM is running, right?
Can you run:
$ virsh -r dumpxml <vm_name>
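If the full dump is too noisy, filtering it down to the disk chain should be
enough here (<vm_name> is a placeholder):

$ virsh -r dumpxml <vm_name> | grep -E "source file|backingStore"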
On Wed, Dec 9, 2020 at 2:01 PM Joseph Goldman <joseph(a)goldman.id.au> wrote:
> Looks like the physical files don't exist:
>
> 2020-12-09 22:01:00,122+1000 INFO (jsonrpc/4) [api.virt] START
> merge(drive={u'imageID': u'23710238-07c2-46f3-96c0-9061fe1c3e0d',
> u'volumeID': u'4b6f7ca1-b70d-4893-b473-d8d30138bb6b', u'domainID':
> u'74c06ce1-94e6-4064-9d7d-69e1d956645b', u'poolID':
> u'e2540c6a-33c7-4ac7-b2a2-175cf51994c2'},
> baseVolUUID=u'c3dadf14-bb4e-45a7-8bee-b9a01fe29ae1',
> topVolUUID=u'a6d4533b-b0b0-475d-a436-26ce99a38d94', bandwidth=u'0',
> jobUUID=u'ff193892-356b-4db8-b525-e543e8e69d6a')
> from=::ffff:192.168.5.10,56030,
> flow_id=c149117a-1080-424c-85d8-3de2103ac4ae,
> vmId=2a0df965-8434-4074-85cf-df12a69648e7 (api:48)
>
> 2020-12-09 22:01:00,122+1000 INFO (jsonrpc/4) [api.virt] FINISH merge
> return={'status': {'message': 'Drive image file could not be found',
> 'code': 13}} from=::ffff:192.168.5.10,56030,
> flow_id=c149117a-1080-424c-85d8-3de2103ac4ae,
> vmId=2a0df965-8434-4074-85cf-df12a69648e7 (api:54)
>
> Although looking on the physical file system they seem to exist:
>
> [root@ov-node1 23710238-07c2-46f3-96c0-9061fe1c3e0d]# ll
> total 56637572
> -rw-rw----. 1 vdsm kvm  15936061440 Dec  9 21:51 4b6f7ca1-b70d-4893-b473-d8d30138bb6b
> -rw-rw----. 1 vdsm kvm      1048576 Dec  8 01:11 4b6f7ca1-b70d-4893-b473-d8d30138bb6b.lease
> -rw-r--r--. 1 vdsm kvm          252 Dec  9 21:37 4b6f7ca1-b70d-4893-b473-d8d30138bb6b.meta
> -rw-rw----. 1 vdsm kvm  21521825792 Dec  8 01:47 a6d4533b-b0b0-475d-a436-26ce99a38d94
> -rw-rw----. 1 vdsm kvm      1048576 May 17  2020 a6d4533b-b0b0-475d-a436-26ce99a38d94.lease
> -rw-r--r--. 1 vdsm kvm          256 Dec  8 01:49 a6d4533b-b0b0-475d-a436-26ce99a38d94.meta
> -rw-rw----. 1 vdsm kvm 107374182400 Dec  9 01:13 c3dadf14-bb4e-45a7-8bee-b9a01fe29ae1
> -rw-rw----. 1 vdsm kvm      1048576 Feb 24  2020 c3dadf14-bb4e-45a7-8bee-b9a01fe29ae1.lease
> -rw-r--r--. 1 vdsm kvm          320 May 17  2020 c3dadf14-bb4e-45a7-8bee-b9a01fe29ae1.meta
>
> The UUIDs match the UUIDs in the snapshot list.
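>
> As a sanity check I can also compare the on-disk backing chain against that
> list (sketch - going by the timestamps and the merge parameters I'm assuming
> 4b6f7ca1 is the active layer):
>
> [root@ov-node1 23710238-07c2-46f3-96c0-9061fe1c3e0d]# qemu-img info --backing-chain 4b6f7ca1-b70d-4893-b473-d8d30138bb6b
>
> which should show 4b6f7ca1 backed by a6d4533b, backed by c3dadf14.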
>
> So much happens in vdsm.log that it's hard to pinpoint what's going on,
> but grepping for 'c149117a-1080-424c-85d8-3de2103ac4ae' (the flow_id) shows
> pretty much just those two calls and then the XML dump.
>
> Still a bit lost on the most comfortable way forward unfortunately.
>
> On 8/12/2020 11:15 pm, Benny Zlotnik wrote:
>>> [root@ov-engine ~]# tail -f /var/log/ovirt-engine/engine.log | grep ERROR
>> Grepping for ERROR is OK, but it does not show the reason for the failure,
>> which will probably be on the vdsm host (you can use flow_id
>> 9b2283fe-37cc-436c-89df-37c81abcb2e1 to find the correct file).
>> We need to see the underlying error causing: VDSGenericException:
>> VDSErrorException: Failed to SnapshotVDS, error =
>> Snapshot failed, code = 48
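>>
>> Something along these lines on the host should pull the relevant entries
>> (assuming the default vdsm log location):
>>
>> $ grep -B 5 -A 30 '9b2283fe-37cc-436c-89df-37c81abcb2e1' /var/log/vdsm/vdsm.log*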
>>
>>> Using unlock_entity.sh -t all sets the status back to 1 (confirmed in
>>> DB) and then trying to create does not change it back to illegal, but
>>> trying to delete that snapshot fails and sets it back to 4.
>> I see. Can you share the removal failure log (similar information as
>> requested above)?
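>>
>> If you want to double-check the status directly, something like this against
>> the engine DB should show it (table/column names from memory, so treat it as
>> a sketch; imagestatus 1 is OK, 4 is ILLEGAL):
>>
>> $ sudo -u postgres psql engine -c "select image_guid, imagestatus from images where image_group_id = '<disk_image_group_uuid>';"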
>>
>> Regarding backup, I don't have a good answer; hopefully someone else
>> has suggestions.