Hi Everyone,
I'm still not able to start the VMs... Could anyone give me advice
on sorting this out?
I'm still getting the "Bad volume specification" error, although the disk is
present on the storage.
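(A side note, not from the original thread: "Bad volume specification" often points at a broken qcow2 snapshot chain. If `qemu-img` is available on a host, `qemu-img info --backing-chain <volume>` shows the chain directly; as a minimal sketch, the Python below reads just the backing-file fields from the qcow2 header, per the QEMU qcow2 layout, so each link on the Gluster mount can be verified. All paths are placeholders.)

```python
import struct

QCOW2_MAGIC = b"QFI\xfb"  # first four bytes of every qcow2 image

def qcow2_backing_file(path):
    """Return the backing file name recorded in a qcow2 header, or None."""
    with open(path, "rb") as f:
        header = f.read(72)
        if header[:4] != QCOW2_MAGIC:
            raise ValueError("not a qcow2 image: %s" % path)
        # Big-endian fields: backing_file_offset at byte 8 (u64),
        # backing_file_size at byte 16 (u32), per the qcow2 spec.
        backing_offset, = struct.unpack(">Q", header[8:16])
        backing_size, = struct.unpack(">I", header[16:20])
        if backing_offset == 0:
            return None  # base image, no backing file
        f.seek(backing_offset)
        return f.read(backing_size).decode("utf-8")

def backing_chain(path):
    """Walk the chain top -> base; note backing paths may be relative."""
    chain = [path]
    backing = qcow2_backing_file(path)
    while backing:
        chain.append(backing)
        backing = qcow2_backing_file(backing)
    return chain
```

A volume whose recorded backing file is missing or unreadable on the mount is a likely culprit for the startup failure.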
This issue would force me to reinstall a 10-node OpenShift cluster from
scratch, which would not be much fun..
Thanks,
Leo.
On Fri, Oct 11, 2019 at 7:12 AM Strahil <hunter86_bg(a)yahoo.com> wrote:
Nah...
It's done directly on the DB, and I wouldn't recommend such an action for a
production cluster.
I've done it only once, and it was based on some old mailing list threads.
Maybe someone from the dev team can assist?
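(A sketch of what "done directly on the DB" usually means in old ovirt-users threads: resetting `images.imagestatus` from ILLEGAL (4) back to OK (1) in the engine database. The table, column, and status codes below are assumptions taken from those threads and may differ between oVirt versions; the supported tool for this is `/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh`, and either way a DB backup should come first.)

```python
# Assumed status codes from old ovirt-users posts -- verify against
# your engine DB version before running anything.
OK, LOCKED, ILLEGAL = 1, 2, 4

def illegal_images(rows):
    """Given (image_guid, imagestatus) rows, return the GUIDs marked ILLEGAL."""
    return [guid for guid, status in rows if status == ILLEGAL]

def unlock_statement(guids):
    """Build the UPDATE commonly quoted on the list; run only after a backup."""
    ids = ", ".join("'%s'" % g for g in guids)
    return "UPDATE images SET imagestatus = %d WHERE image_guid IN (%s);" % (OK, ids)
```

For example, rows fetched with `SELECT image_guid, imagestatus FROM images;` can be filtered through `illegal_images()` to see which disks the engine considers unusable.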
On Oct 10, 2019 13:31, Leo David <leoalex(a)gmail.com> wrote:
Thank you Strahil,
Could you tell me what you mean by changing the status? Is this something
to be done in the UI?
Thanks,
Leo
On Thu, Oct 10, 2019, 09:55 Strahil <hunter86_bg(a)yahoo.com> wrote:
Maybe you can change the status of the VM so that the engine knows it has to
blockcommit the snapshots.
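(For reference, not from the thread: the libvirt-level operation behind a snapshot blockcommit is `virsh blockcommit`, which merges the active overlay back into its base image. A minimal sketch of the command involved; `my-vm` and `vda` are placeholder names, and on an oVirt host this should normally be left to the engine rather than run by hand.)

```python
def blockcommit_cmd(domain, disk):
    """Build the virsh invocation that merges the active layer down.

    --active commits the top (running) layer, --pivot switches the
    domain to the base image once the commit completes.
    """
    return ["virsh", "blockcommit", domain, disk,
            "--active", "--pivot", "--verbose"]
```

Usage would be something like `subprocess.run(blockcommit_cmd("my-vm", "vda"))` on the host running the VM.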
Best Regards,
Strahil Nikolov
On Oct 9, 2019 09:02, Leo David <leoalex(a)gmail.com> wrote:
Hi Everyone,
Please let me know if you have any thoughts or recommendations that could
help me solve this issue..
The real bad luck in this outage is that these 5 VMs are part of an
OpenShift deployment, and now we are not able to start it up...
Before trying to sort this out at the OCP platform level by replacing the
failed nodes with new VMs, I would prefer to fix it at the oVirt level and
get the VMs starting, since the disks are still present on Gluster.
Thank you so much !
Leo
--
Best regards, Leo David