Did you identify any errors in the Engine log that could provide a clue?
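
For example, something like this on the engine VM usually surfaces the
relevant entries (the grep pattern is just a suggestion):

    grep -iE 'error|gluster' /var/log/ovirt-engine/engine.log | tail -n 50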

Best Regards,
Strahil Nikolov 

On Wed, Jul 20, 2022 at 16:15, Jiří Sléžka
<jiri.slezka@slu.cz> wrote:
On 7/19/22 22:40, Strahil Nikolov wrote:
> Then, just ensure that the glusterd.service is enabled on all hosts and
> leave it as it is.
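>
> For example, a quick check/fix on each host could look like this (just a
> sketch, assuming systemd-managed hosts):
>
>    systemctl is-enabled glusterd
>    systemctl enable --now glusterd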
>
> If it worries you, you will have to move one of the hosts into another
> cluster (probably a new one) and slowly migrate the VMs from the old
> cluster to the new one.
> Yet, if you use only 3 hosts, that can put your VMs at risk (a new
> cluster with a single host could lead to downtime).

well, it blocks me from making any changes to the cluster, so it is a
serious problem... but personally I don't like this "new cluster and
migration" approach :-(

> To be honest, I wouldn't change the DB if it's a production cluster. If
> you decide to go that route, make an engine backup before doing so.
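>
> A minimal example of taking the backup on the engine VM (the file names
> are just placeholders):
>
>    engine-backup --mode=backup --scope=all \
>        --file=engine-backup.tar.gz --log=engine-backup.log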

Could any of the oVirt/Gluster developers have a look?

Thanks in advance,

Jiri

>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>    On Tue, Jul 19, 2022 at 12:25, Jiří Sléžka
>    <jiri.slezka@slu.cz> wrote:
>    On 7/16/22 07:53, Strahil Nikolov wrote:
>      > Try first with a single host. Set it into maintenance and check
>      > if the checkmark is available.
>
>    setting a single host to maintenance didn't change the state of the
>    Gluster services checkbox in the cluster settings.
>
>      > If not, try to 'reinstall' (UI, Hosts, Installation, Reinstall) the
>      > host. During the setup, it should give you the option to choose
>      > whether the host can run the HE and it should allow you to select
>      > the checkmark for Gluster.
>
>    well, in my oVirt install there is no way to set up GlusterFS services
>    during host reinstall. The only choices are to configure the firewall,
>    activate the host after install, reboot the host after install and
>    deploy/undeploy the hosted engine...
>
>    I think the Gluster-related stuff is installed automatically as it is
>    configured at the cluster level (where, in my case, Gluster services
>    are disabled).
>
>      > Let's work with a single node before being so drastic and causing
>      > an outage of the whole cluster.
>
>
>    Cheers,
>
>    Jiri
>
>      >
>      > Best Regards,
>      > Strahil Nikolov
>      >
>      >    On Thu, Jul 14, 2022 at 23:03, Jiří Sléžka
>      >    <jiri.slezka@slu.cz> wrote:
>      >    Dne 7/14/22 v 21:21 Strahil Nikolov napsal(a):
>      >      > Go to the UI, select the volume, press 'Start' and mark the
>      >      > 'Force' checkbox to force-start it.
>      >
>      >    well, it worked :-) Now all bricks are in the UP state. In fact,
>      >    from the command-line point of view, all volumes were active and
>      >    all bricks were up the whole time.
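>      >
>      >    For reference, I had been checking it from the command line with
>      >    something like this (the volume name is just an example; the
>      >    second command should be the CLI equivalent of the forced start):
>      >
>      >      gluster volume status
>      >      gluster volume start myvolume force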
>      >
>      >      > At least it should update the engine that everything is
>      >      > running.
>      >      > Have you checked if the checkmark for the Gluster service is
>      >      > available if you set the Host into maintenance?
>      >
>      >    which host do you mean? If all hosts in the cluster, I'll have to
>      >    plan an outage... will try...
>      >
>      >    Thanks,
>      >
>      >    Jiri
>      >
>      >      >
>      >      > Best Regards,
>      >      > Strahil Nikolov
>      >      >
>      >      >    On Thu, Jul 14, 2022 at 16:08, Jiří Sléžka
>      >      >    <jiri.slezka@slu.cz> wrote:
>      >      >    _______________________________________________
>      >      >    Users mailing list -- users@ovirt.org
>      >      >    To unsubscribe send an email to users-leave@ovirt.org
>      >      >    Privacy Statement: https://www.ovirt.org/privacy-policy.html
>      >      >    oVirt Code of Conduct:
>      >      >    https://www.ovirt.org/community/about/community-guidelines/
>      >      >    List Archives:
>      >      >    https://lists.ovirt.org/archives/list/users@ovirt.org/message/624NH3C5REFDV55K4NPKF6IU4IHG6FPK/