Thank you for the help, Strahil,

Although there were 4 images with status 4 in the database, and I ran the update query on them, I got the same bloody message, and the VMs won't start.
Eventually, I've decided to delete the VMs and do a from-scratch installation. The persistent OpenShift VMs are still OK, so I should be able to reuse the volumes somehow.
This is why a subscription is sometimes good, when there is a lack of knowledge on my side. Production systems should not rely on upstream projects unless there is a strong understanding of the product.
Again, thank you so much for trying to help me out!
Cheers,

Leo

On Tue, Oct 15, 2019, 07:00 Leo David <leoalex@gmail.com> wrote:
Thank you, Strahil,
I'll proceed with these steps and come back to you.
Cheers,

Leo

On Tue, Oct 15, 2019, 06:45 Strahil <hunter86_bg@yahoo.com> wrote:

Have you checked this thread:
https://lists.ovirt.org/pipermail/users/2016-April/039277.html

You can switch to the postgres user, then 'source /opt/rh/rh-postgresql10/enable', and then run 'psql engine'.
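Roughly like this, on the engine host (a sketch; the 'rh-postgresql10' software collection path is an assumption based on a default oVirt 4.3 engine install, adjust if yours differs):

    # become the postgres user
    su - postgres
    # load the PostgreSQL 10 software collection environment
    source /opt/rh/rh-postgresql10/enable
    # connect to the engine database
    psql engine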

As per the thread, you can find illegal snapshots via 'select image_group_id, imagestatus from images where imagestatus = 4;'

And then update them via 'update images set imagestatus = 1 where imagestatus = 4 and <other criteria>; commit;'
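Put together, the psql session looks something like this (a sketch; '<disk-image-uuid>' is a placeholder for an image_group_id you identified, and taking a backup first with 'engine-backup' is strongly advised):

    -- list images stuck in the illegal state (imagestatus = 4)
    select image_group_id, imagestatus from images where imagestatus = 4;

    -- set a specific image back to OK (imagestatus = 1)
    update images set imagestatus = 1
     where imagestatus = 4
       and image_group_id = '<disk-image-uuid>';
    commit;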

Best Regards,
Strahil Nikolov

On Oct 13, 2019 15:45, Leo David <leoalex@gmail.com> wrote:

>
> Hi Everyone,
> I'm still not able to start the VMs... Could anyone give me some advice on sorting this out?
> Still having the "Bad volume specification" error, although the disk is present on the storage.
> This issue would force me to reinstall a 10-node OpenShift cluster from scratch, which would not be so funny...
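> In case it helps, the full error context should be visible in the VDSM log on the host that tried to start the VM (a sketch, assuming the default log location):
>
>     grep -B2 -A10 'Bad volume specification' /var/log/vdsm/vdsm.log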
> Thanks,
>
> Leo.
>
> On Fri, Oct 11, 2019 at 7:12 AM Strahil <hunter86_bg@yahoo.com> wrote:

>>
>> Nah...
>> It's done directly on the DB, and I wouldn't recommend such an action for a production cluster.
>> I've done it only once, and it was based on some old mailing list threads.
>>
>> Maybe someone from the dev team can assist?
>>
>> On Oct 10, 2019 13:31, Leo David <leoalex@gmail.com> wrote:

>>>
>>> Thank you, Strahil,
>>> Could you tell me what you mean by changing the status? Is this something to be done in the UI?
>>>
>>> Thanks,
>>>
>>> Leo
>>>
>>> On Thu, Oct 10, 2019, 09:55 Strahil <hunter86_bg@yahoo.com> wrote:

>>>>
>>>> Maybe you can change the status of the VM so that the engine knows it has to blockcommit the snapshots.
>>>>
>>>> Best Regards,
>>>> Strahil Nikolov
>>>>
>>>> On Oct 9, 2019 09:02, Leo David <leoalex@gmail.com> wrote:

>>>>>
>>>>> Hi Everyone,
>>>>> Please let me know if you have any thoughts or recommendations that could help me solve this issue...
>>>>> The real bad luck in this outage is that these 5 VMs are part of an OpenShift deployment, and now we are not able to start it up...
>>>>> Before trying to sort this out at the OCP platform level by replacing the failed nodes with new VMs, I would prefer to do it at the oVirt level and have the VMs start, since the disks are still present on Gluster.
>>>>> Thank you so much !
>>>>>
>>>>>
>>>>> Leo

>
> --
> Best regards, Leo David