Thank you Strahil, I'll proceed with these steps and come back to you.
Cheers,
Leo

On Tue, Oct 15, 2019, 06:45 Strahil <hunter86_bg@yahoo.com> wrote:

Have you checked this thread:
https://lists.ovirt.org/pipermail/users/2016-April/039277.html

You can switch to the postgres user, then 'source /opt/rhn/postgresql10/enable' and then run 'psql engine'.
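For example (just a sketch - the enable script path is the one mentioned above and may differ on your setup):

    su - postgres
    source /opt/rhn/postgresql10/enable
    psql engine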
As per the thread you can find the illegal snapshots via 'select image_group_id, imagestatus from images where imagestatus = 4;'
And then update them via 'update images set imagestatus = 1 where imagestatus = 4 and <other criteria>; commit;'
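Roughly like this (untested sketch - the image_group_id filter below is only a placeholder for whatever criteria match your broken disks, and take a backup of the engine DB before touching anything):

    engine=# select image_group_id, imagestatus from images where imagestatus = 4;
    engine=# begin;
    engine=# update images set imagestatus = 1 where imagestatus = 4 and image_group_id = '<your_image_group_id>';
    engine=# commit;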
Best Regards,
Strahil Nikolov

On Oct 13, 2019 15:45, Leo David <leoalex@gmail.com> wrote:
>
> Hi Everyone,
> I'm still not able to start the VMs... Could anyone give me some advice on sorting this out?
> I'm still getting the "Bad volume specification" error, although the disk is present on the storage.
> This issue would force me to reinstall a 10-node OpenShift cluster from scratch, which would not be much fun..
> Thanks,
>
> Leo.
>
> On Fri, Oct 11, 2019 at 7:12 AM Strahil <hunter86_bg@yahoo.com> wrote:
>>
>> Nah...
>> It's done directly on the DB, and I wouldn't recommend such an action on a production cluster.
>> I've done it only once and it was based on some old mailing lists.
>>
>> Maybe someone from the dev can assist?
>>
>> On Oct 10, 2019 13:31, Leo David <leoalex@gmail.com> wrote:
>>>
>>> Thank you Strahil,
>>> Could you tell me what you mean by changing the status? Is this something to be done in the UI?
>>>
>>> Thanks,
>>>
>>> Leo
>>>
>>> On Thu, Oct 10, 2019, 09:55 Strahil <hunter86_bg@yahoo.com> wrote:
>>>>
>>>> Maybe you can change the status of the VM so that the engine knows it has to blockcommit the snapshots.
>>>>
>>>> Best Regards,
>>>> Strahil Nikolov
>>>>
>>>> On Oct 9, 2019 09:02, Leo David <leoalex@gmail.com> wrote:
>>>>>
>>>>> Hi Everyone,
>>>>> Please let me know if you have any thoughts or recommendations that could help me solve this issue..
>>>>> The real bad luck in this outage is that these 5 VMs are part of an OpenShift deployment, and now we are not able to start it up...
>>>>> Before trying to sort this out at the OCP platform level by replacing the failed nodes with new VMs, I would rather fix it at the oVirt level and get the VMs starting, since the disks are still present on Gluster.
>>>>> Thank you so much !
>>>>>
>>>>>
>>>>> Leo
>
>
>
> --
> Best regards, Leo David