On Sat, Apr 20, 2019 at 10:06 PM Jonathan Baecker <jonbae77(a)gmail.com>
wrote:
On 20.04.2019 at 20:38, Jonathan Baecker wrote:
On 14.04.2019 at 14:01, Jonathan Baecker wrote:
On 14.04.2019 at 13:57, Eyal Shenitzky wrote:
On Sun, Apr 14, 2019 at 2:28 PM Jonathan Baecker <jonbae77(a)gmail.com>
wrote:
> On 14.04.2019 at 12:13, Eyal Shenitzky wrote:
>
> It seems like your SPM went down while you had a live merge operation running.
>
> Can you please submit a bug and attach the logs?
>
> Yes, I can do that - but do you really think this is a bug? At that time I
> had only one host running, so that host was the SPM. And the time in the
> log is exactly when the host was restarting. But the merge jobs and
> snapshot deletion had started ~20 hours before.
>
We should investigate and see whether there is a bug or not.
I looked over the logs and saw some NPEs suggesting that there may be
a bug here.
Please attach all the logs, including the beginning of the snapshot
deletion.
Ok, I did:
https://bugzilla.redhat.com/show_bug.cgi?id=1699627
The logs there are attached in full length.
Now I have the same issue again: my host is trying to delete the snapshots.
It is still running, with no reboot so far. Is there anything I can do?
I'm glad that the earlier backup was made correctly, otherwise I would
be in big trouble. But it looks like I cannot run any more normal
backup jobs.
Ok, here is an interesting situation. I started to shut down my VMs: first
the ones with no snapshot deletion running, then also the VMs whose deletion
was still in progress - and now all deletion jobs have finished successfully.
Could it be that the host and the VMs are not communicating correctly, and
that this somehow puts the host in a state where it cannot merge and delete
a created snapshot? From some VMs I also get the warning that I need a newer
ovirt-guest-agent, but there are no updates for it.
When you shut down the VM, the engine performs a "cold merge" for the
deleted snapshot; this is a good workaround when you encounter problems
during a "live merge".
The two flows are different, so a "cold merge" can succeed where a "live
merge" failed.
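
If you want to script that workaround, a minimal sketch with the Python SDK
(ovirtsdk4) could look like the following - the engine URL, credentials, VM
name and snapshot description are all placeholders:

    # minimal sketch: force a cold merge by removing the snapshot
    # while the VM is down (URL, credentials and names are placeholders)
    import time
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(url='https://engine.example/ovirt-engine/api',
                                username='admin@internal', password='...',
                                ca_file='ca.pem')
    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=myvm')[0]
    vm_service = vms_service.vm_service(vm.id)

    vm_service.stop()                  # power the VM off
    while vm_service.get().status != types.VmStatus.DOWN:
        time.sleep(5)

    snapshots_service = vm_service.snapshots_service()
    for snap in snapshots_service.list():
        if snap.description == 'nightly-backup':   # placeholder description
            # with the VM down, the engine performs this as a cold merge
            snapshots_service.snapshot_service(snap.id).remove()

    connection.close()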
> On Sun, Apr 14, 2019 at 9:40 AM Jonathan Baecker <jonbae77(a)gmail.com>
> wrote:
>
>> On 14.04.2019 at 07:05, Eyal Shenitzky wrote:
>>
>> Hi Jonathan,
>>
>> Can you please add the engine and VDSM logs?
>>
>> Thanks,
>>
>> Hi Eyal,
>>
>> my last message included the engine.log in a zip.
>>
>> Here are both again, but I deleted some lines to make them smaller.
>>
>>
>>
>> On Sun, Apr 14, 2019 at 12:24 AM Jonathan Baecker <jonbae77(a)gmail.com>
>> wrote:
>>
>>> Hello,
>>>
>>> I make automatic backups of my VMs, and last night some new ones were
>>> made. But somehow oVirt could not delete the snapshots anymore; the log
>>> shows that it tried the whole day to delete them, but they had to wait
>>> until the merge command was done.
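>>>
>>> (From the engine's side that deletion is asynchronous: the snapshot stays
>>> in the LOCKED state until the merge finishes, so a backup script can only
>>> poll for it. As a rough sketch with the Python SDK - snapshots_service is
>>> assumed to come from an open ovirtsdk4 connection:
>>>
>>>     import time
>>>     import ovirtsdk4.types as types
>>>
>>>     def wait_for_merge(snapshots_service, timeout=3600, interval=30):
>>>         """Return True once no snapshot of the VM is LOCKED anymore."""
>>>         deadline = time.time() + timeout
>>>         while time.time() < deadline:
>>>             if all(s.snapshot_status == types.SnapshotStatus.OK
>>>                    for s in snapshots_service.list()):
>>>                 return True
>>>             time.sleep(interval)
>>>         return False
>>>
>>> Here the merge never completed, so the deletion kept waiting.)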
>>>
>>> In the evening the host crashed completely and started again. Now I
>>> cannot delete the snapshots manually, and I also cannot start the VMs
>>> anymore. In the web interface I get the message:
>>>
>>> VM timetrack is down with error. Exit message: Bad volume specification
>>> {'address': {'bus': '0', 'controller': '0', 'type': 'drive',
>>> 'target': '0', 'unit': '0'},
>>> 'serial': 'fd3b80fd-49ad-44ac-9efd-1328300582cd', 'index': 0,
>>> 'iface': 'scsi', 'apparentsize': '1572864', 'specParams': {},
>>> 'cache': 'none', 'imageID': 'fd3b80fd-49ad-44ac-9efd-1328300582cd',
>>> 'truesize': '229888', 'type': 'disk',
>>> 'domainID': '9c3f06cf-7475-448e-819b-f4f52fa7d782', 'reqsize': '0',
>>> 'format': 'cow', 'poolID': '59ef3a18-002f-02d1-0220-000000000124',
>>> 'device': 'disk', 'path':
>>> '/rhev/data-center/59ef3a18-002f-02d1-0220-000000000124/9c3f06cf-7475-448e-819b-f4f52fa7d782/images/fd3b80fd-49ad-44ac-9efd-1328300582cd/47c0f42e-8bda-4e3f-8337-870899238788',
>>> 'propagateErrors': 'off', 'name': 'sda', 'bootOrder': '1',
>>> 'volumeID': '47c0f42e-8bda-4e3f-8337-870899238788', 'diskType': 'file',
>>> 'alias': 'ua-fd3b80fd-49ad-44ac-9efd-1328300582cd', 'discard': False}.
>>>
>>> When I check the path, the permissions are correct and there are also
>>> files in it.
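>>>
>>> One thing I can also check is the qcow2 backing chain of that volume,
>>> with something like this little sketch (it just wraps qemu-img; the
>>> path is the one from the error above):
>>>
>>>     import json
>>>     import subprocess
>>>
>>>     IMG = ('/rhev/data-center/59ef3a18-002f-02d1-0220-000000000124/'
>>>            '9c3f06cf-7475-448e-819b-f4f52fa7d782/images/'
>>>            'fd3b80fd-49ad-44ac-9efd-1328300582cd/'
>>>            '47c0f42e-8bda-4e3f-8337-870899238788')
>>>
>>>     # --backing-chain prints one info record per layer of the chain
>>>     out = subprocess.run(
>>>         ['qemu-img', 'info', '--output=json', '--backing-chain', IMG],
>>>         check=True, capture_output=True, text=True,
>>>     ).stdout
>>>     for layer in json.loads(out):
>>>         print(layer['filename'], layer['format'],
>>>               layer.get('backing-filename'))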
>>>
>>> Is there any way to fix this? Or to prevent this issue in the future?
>>>
>>> In the attachment I also send the engine.log.
>>>
>>>
>>> Regards
>>>
>>> Jonathan
>>>
>>>
>>>
>>>
>>
>>
>> --
>> Regards,
>> Eyal Shenitzky
>>
>>
>>
>
> --
> Regards,
> Eyal Shenitzky
>
>
>
--
Regards,
Eyal Shenitzky