Sure Greg, I will look into this and get back to you guys.
On Tue, Feb 5, 2019 at 7:22 AM Greg Sheremeta <gshereme(a)redhat.com> wrote:
Sahina, Gobinda,
Can you check this thread?
On Mon, Feb 4, 2019 at 6:02 PM feral <blistovmhz(a)gmail.com> wrote:
> Glusterd was enabled; it just crashes on boot. It's a known issue that was
> resolved in 3.13, but ovirt-node only has 3.12.
> The VM is, at that point, paused. So I manually start glusterd again,
> ensure all nodes are online, and then resume the hosted engine. Sometimes
> it works, sometimes it doesn't.
>
> I think the problem here is that there are multiple issues with the current
> ovirt-node release ISO. I was able to get everything working with a CentOS
> base and installing oVirt manually. I still had the same problem with the
> gluster wizard not using any of my settings, but after that, and after
> ensuring I restarted all services following a reboot, things came to life.
> I've been trying to discuss this with the devs, but so far no luck. I keep
> hearing that the previous ovirt-node (ISO) release was just much smoother,
> but I haven't seen anyone addressing the issues in the current release.
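For what it's worth, a quick way to see what actually needs restarting after a
reboot is plain systemd:

systemctl --failed

On a hosted-engine host the usual suspects would be glusterd, ovirt-ha-agent,
and ovirt-ha-broker (standard service names; adjust to your setup).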
>
>
> On Mon, Feb 4, 2019 at 2:16 PM Edward Berger <edwberger(a)gmail.com> wrote:
>
>> On each host you should check whether "systemctl status glusterd" shows
>> "enabled", and the same for the gluster events daemon. (I'm not logged in
>> to look right now.)
>>
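For reference, checking and enabling both on each node would be something like
this (glustereventsd is assumed here as the events daemon's unit name; adjust
if yours differs):

systemctl status glusterd glustereventsd
systemctl enable --now glusterd glustereventsd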
>> I'm not sure which part of the gluster wizard or the hosted-engine
>> installation is supposed to do the enabling, but I've seen cases where
>> incomplete installs left it disabled.
>>
>> If the gluster servers haven't come up properly, then there's no working
>> image for the engine.
>> I had a situation where the engine was in a "paused" state and I had to run
>> "hosted-engine --vm-status" on the possible nodes to find which one had the
>> VM in a paused state, then log into that node and run this command:
>>
>> virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf resume HostedEngine
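As a side note, a read-only check that doesn't need the auth file (plain
libvirt behaviour, nothing oVirt-specific) is:

virsh -r -c qemu:///system list --all

which should show whether HostedEngine is running or paused on that node.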
>>
>>
>> On Mon, Feb 4, 2019 at 3:23 PM feral <blistovmhz(a)gmail.com> wrote:
>>
>>> On that note, have you also had issues with gluster not restarting on
>>> reboot, as well as all of the HA stuff failing on reboot after a power loss?
>>> Thus far, the only way I've gotten the cluster to come back to life is to
>>> manually restart glusterd on all nodes, then take the cluster back out of
>>> maintenance mode, and then manually start the hosted-engine VM. This
>>> also fails after 2 or 3 power losses, even though the entire cluster is
>>> happy through the first 2.
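For anyone else hitting this, the manual recovery described above amounts to
roughly the following; this is a sketch assuming a standard hosted-engine
setup, so adjust to your environment:

systemctl restart glusterd                    # on every node
gluster peer status                           # confirm all peers are connected
hosted-engine --set-maintenance --mode=none   # leave maintenance mode
hosted-engine --vm-start                      # if the engine VM doesn't come up on its own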
>>>
>>> On Mon, Feb 4, 2019 at 12:21 PM feral <blistovmhz(a)gmail.com> wrote:
>>>
>>>> Yeah, I've been able to build a config manually myself, but it sure would
>>>> be nice if gdeploy worked (at all), as it takes an hour to deploy every
>>>> test. When creating the conf manually I have to be super conservative
>>>> about my sizes, as I'm still not entirely sure what the deploy script
>>>> actually does. I.e., I've got 3 nodes with 1.2 TB each for gluster, but if
>>>> I try to build a deployment that uses more than 900 GB, it fails because
>>>> it creates the thinpool with whatever size it wants.
>>>>
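To make the thinpool sizing concrete: the kind of section being hand-edited in
the generated gdeployConfig.conf looks roughly like the following. Hostnames,
device names, and sizes are made up for illustration, and the layout mirrors
what wizard-generated configs typically contain, so verify against your own
file:

[lv2:{host1,host2,host3}]
action=create
vgname=gluster_vg_sdb
poolname=gluster_thinpool_sdb
lvtype=thinpool
poolmetadatasize=16GB
size=1000GB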
>>>> Just wanted to make sure I wasn't the only one having this issue. Given
>>>> that at least two people have noticed, who's the best person to contact?
>>>> I haven't been able to get any response from the devs on any of the
>>>> (myriad) issues with the 4.2.8 image.
>>>>
>>>
Have you reported bugs?
https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine
is a good generic place to start
>>>> Also having a ton of strange issues with the hosted-engine VM deployment.
>>>>
>>>
Can you elaborate and/or report bugs?
https://bugzilla.redhat.com/enter_bug.cgi?product=cockpit-ovirt
>
>>>> On Mon, Feb 4, 2019 at 11:59 AM Edward Berger <edwberger(a)gmail.com> wrote:
>>>>
>>>>
>>>>> Yes, I had that issue with a 4.2.8 installation.
>>>>> I had to manually edit the "web-UI-generated" config to be anywhere
>>>>> close to what I wanted.
>>>>>
>>>>
Please report a bug on this, with steps to reproduce.
https://bugzilla.redhat.com/enter_bug.cgi?product=cockpit-ovirt
>
>>>>> I'll attach an edited config as an example.
>>>>>
>>>>> On Mon, Feb 4, 2019 at 2:51 PM feral <blistovmhz(a)gmail.com> wrote:
>>>>>
>>>>>> New install of ovirt-node 4.2 (from ISO). I set up each node with
>>>>>> networking and SSH keys, and used the hyperconverged gluster deployment
>>>>>> wizard. None of the user-specified settings are ever reflected in the
>>>>>> gdeployConfig.conf.
>>>>>> Anyone running into this?
>>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
>
--
GREG SHEREMETA
SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
Red Hat NA <https://www.redhat.com/>
gshereme(a)redhat.com IRC: gshereme
<https://red.ht/sig>