Dear Nir,
According to Red Hat solution 1179163, 'add_lockspace fail result -233' indicates a corrupted 'ids' lockspace.
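As far as I understand, the on-disk state can be checked before reinitializing, roughly like this (the mount point and UUID in the path are placeholders for the actual hosted-engine lockspace file):

  # show which lockspaces sanlock currently knows about
  sanlock client status
  # dump the delta-lease area of the hosted-engine lockspace
  sanlock direct dump /rhev/data-center/mnt/glusterSD/<server>:_engine/<sd_uuid>/ha_agent/hosted-engine.lockspace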
During the install, the VM fails to come up.
In order to fix it, I stop the following services:
ovirt-ha-agent, ovirt-ha-broker, vdsmd, supervdsmd, sanlock
Then I reinitialize the lockspace via 'sanlock direct init -s' (I used bug report 1116469 as guidance).
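In case it helps, the sequence is roughly the following; the lockspace path is only a placeholder for the actual file under the hosted-engine storage domain on the Gluster mount:

  # stop everything that may be holding the lockspace
  systemctl stop ovirt-ha-agent ovirt-ha-broker vdsmd supervdsmd sanlock
  # reinitialize the hosted-engine lockspace (the host_id field is 0 when initializing)
  sanlock direct init -s hosted-engine:0:/rhev/data-center/mnt/glusterSD/<server>:_engine/<sd_uuid>/ha_agent/hosted-engine.lockspace:0
  # bring the stack back up
  systemctl start sanlock supervdsmd vdsmd ovirt-ha-broker ovirt-ha-agent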
Once the init succeeds and all the services are back up, the VM starts, but by then the deployment is long over and the setup needs additional cleanup.
I will rebuild the Gluster cluster and then repeat the deployment.
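The plan is roughly the following; the volume name 'engine', the brick path and the use of ovirt-hosted-engine-cleanup are my assumptions, not something I have verified yet:

  # remove the old hosted-engine volume
  gluster volume stop engine
  gluster volume delete engine
  # wipe and recreate the brick directory on every node
  rm -rf /gluster_bricks/engine/engine
  mkdir -p /gluster_bricks/engine/engine
  # clean leftovers of the failed deployment on each host
  ovirt-hosted-engine-cleanup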
Can you guide me on what information will be needed, as I'm quite new to oVirt/RHV?
Best Regards,
Strahil Nikolov
On Jan 28, 2019 20:34, Nir Soffer <nsoffer@redhat.com> wrote:
>
> On Sat, Jan 26, 2019 at 6:13 PM Strahil <hunter86_bg@yahoo.com> wrote:
>>
>> Hey guys,
>>
>> I have noticed that with 4.2.8 the sanlock issue (during deployment) is still not fixed.
>> Am I the only one with bad luck, or is something broken there?
>>
>> The sanlock service reports code 's7 add_lockspace fail result -233'
>> 'leader1 delta_acquire_begin error -233 lockspace hosted-engine host_id 1'.
>
>
> Sanlock does not have such an error code - are you sure this is -233?
>
> Here are the sanlock return values:
> https://pagure.io/sanlock/blob/master/f/src/sanlock_rv.h
>
> Can you share your sanlock log?
>
>
>>
>>
>> Best Regards,
>> Strahil Nikolov