On 9/26/2019 12:34 AM, Strahil wrote:
Unassigned, in 99% of cases, means the engine cannot communicate with
vdsm.service on the host.
Check that vdsm.service & glusterd.service are running.
Check gluster volume status for each volume.
If they are OK, just select Activate from the UI.
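Those checks are easy to script on each host. A minimal sketch (assumption: systemd hosts where vdsm runs as the "vdsmd" unit; adjust the unit names to your layout):

```shell
#!/bin/sh
# Sketch of the health checks above, meant to run on each oVirt host.
# Live checks (uncomment on a real host):
#   systemctl is-active vdsmd glusterd
#   gluster volume status

# Helper: given `gluster volume status` output on stdin, print only
# the brick lines whose "Online" column is N (i.e. bricks that are down).
bricks_down() {
    awk '/^Brick/ && $(NF-1) == "N"'
}

# Usage on a live host:
#   gluster volume status | bricks_down
```

If `bricks_down` prints nothing, all bricks report Online=Y and the volumes are healthy from gluster's point of view.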
Ya. See my second post: the VDSM certs have expired, so communication
between the hosts and oVirt is broken.
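For anyone hitting the same symptom, openssl can confirm an expired cert quickly. A sketch (assumption: /etc/pki/vdsm/certs/vdsmcert.pem, the usual vdsm cert path on hosts; pass another path as the first argument if your layout differs):

```shell
#!/bin/sh
# Check whether a certificate is expired (or expires within 24 hours).
# Assumption: the default vdsm cert path below; override via $1.
CERT="${1:-/etc/pki/vdsm/certs/vdsmcert.pem}"

cert_end_date() {
    # Print the notAfter date of the certificate.
    openssl x509 -noout -enddate -in "$1" | cut -d= -f2
}

cert_ok() {
    # True if the cert is still valid for at least another day.
    openssl x509 -noout -checkend 86400 -in "$1" >/dev/null 2>&1
}

if [ -r "$CERT" ]; then
    if cert_ok "$CERT"; then
        echo "OK until: $(cert_end_date "$CERT")"
    else
        echo "EXPIRED (or about to expire): $CERT, notAfter=$(cert_end_date "$CERT")"
    fi
fi
```

If the cert has indeed expired, the usual fix is to re-enroll the host certificates from the engine (Host -> Installation -> Enroll Certificate in the UI), which requires the engine to reach the host over SSH.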
Best Regards,
Strahil Nikolov

On Sep 25, 2019 22:24, TomK <tomkcpr(a)mdevsys.com> wrote:
>
> On 9/25/2019 8:49 AM, TomK wrote:
>> On 9/24/2019 10:55 AM, Sahina Bose wrote:
>>>
>>>
>>> On Tue, Sep 24, 2019 at 6:38 PM TomK <tomkcpr(a)mdevsys.com
>>> <mailto:tomkcpr@mdevsys.com>> wrote:
>>>
>>> Hey Sahina,
>>>
>>> Thanks very much.
>>>
>>> I've taken a quick glance and it does mention I need a third host
>>> which
>>> I don't have.
>>>
>>> However, since I wrote, I removed everything. Had to force remove
>>> hosts
>>> however the gluster setup and data in it stayed (or so I'm
>>> thinking).
>>>
>>> I'll read the article more thoroughly however is there a specific
>>> procedure for adding existing oVirt gluster volumes back into a
>>> cluster?
>>>
>>> One additional question. Is the sequence below correct?
>>>
>>> 1) Create Data Center
>>> 2) Create Cluster within the DC
>>>
>>>
>>> When you create the cluster, there's an "Import cluster" option, if I
>>> remember correctly. This should discover the peers and volumes and add
>>> them to the engine.
>>
>> Trying that now. On a side note, the autodiscovery box is hard to edit.
>>
>> So in the meantime, I'm trying to bump the version to take advantage of
>> some new features and ensure Gluster 6 is fully supported. Not having
>> luck with the oVirt UI upgrade at the moment (not a self-hosted engine;
>> it's on a separate VM):
>>
>> --> Processing Conflict:
>> ovirt-engine-setup-plugin-ovirt-engine-4.3.5.5-1.el7.noarch conflicts
>> ovirt-engine < 4.2.6
>> --> Finished Dependency Resolution
>> Error: Package: ovirt-engine-ui-extensions-1.0.6-1.el7.noarch (ovirt-4.3)
>> Requires: ovirt-engine-webadmin-portal >= 4.3
>> Installed:
>> ovirt-engine-webadmin-portal-4.2.1.7-1.el7.centos.noarch (@ovirt-4.2)
>> ovirt-engine-webadmin-portal = 4.2.1.7-1.el7.centos
>> Error: ovirt-engine-setup-plugin-ovirt-engine conflicts with
>> ovirt-engine-4.2.1.7-1.el7.centos.noarch
>> You could try using --skip-broken to work around the problem
>> You could try running: rpm -Va --nofiles --nodigest
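That conflict usually means the 4.3 repo is enabled but the engine itself was never upgraded via engine-setup. The documented flow for a standalone engine is roughly the following (a sketch, not verified against this setup; the `ver_lt` helper only illustrates the version comparison behind the "ovirt-engine < 4.2.6" conflict):

```shell
#!/bin/sh
# Typical standalone-engine upgrade flow (run on the engine VM):
#   yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
#   yum update "ovirt-*-setup*"
#   engine-setup          # performs the actual 4.2 -> 4.3 engine upgrade
#   yum update            # remaining packages; reboot if the kernel changed

# Helper: true if version $1 sorts before version $2, using GNU sort -V.
ver_lt() {
    [ "$1" != "$2" ] &&
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
```

For example, `ver_lt 4.2.1.7 4.2.6` is true, which is why the 4.3 setup plugin refuses to coexist with the installed 4.2.1.7 engine until engine-setup has run.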
>>
>>
>>
>> Cheers,
>> TK
>>
>
> Just upgraded to 4.3.5.
>
> What do I do if I can't remove a host from a cluster? The Activate and
> Maintenance options are grayed out. The host is currently showing as
> Unassigned with an exclamation mark next to it. Items listed
> include:
>
> "Power Management is not configured for this Host. Enable Power
Management"
> "Gluster status is disconnected for this host. Restart Glusterd service"
> "Host has no default route."
>
>
> I try to modify some of this, thinking maybe it will nudge the server out
> of its slumber, but get this when trying to fix Power Management:
>
>
> "Error while executing action:
> mdskvm-p01.nix.mds.xyz:
> Cannot edit Host. Host parameters cannot be modified while Host is
> operational.
> Please switch Host to Maintenance mode first."
>
>
> Of course I can't put it into maintenance since it's grayed out.
>
> Any way to remove this host and start fresh?
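When the UI refuses, the REST API can sometimes still deactivate and remove a stuck host. A hedged sketch with curl against the v4 API (the engine URL, the $PASS variable, and the HOST_ID lookup are assumptions; -k skips TLS verification for brevity only, and you should try this on a non-critical host first):

```shell
#!/bin/sh
# Minimal wrapper around the oVirt v4 REST API using curl.
# Assumptions: admin@internal credentials in $PASS, engine FQDN below.
ENGINE="${ENGINE:-https://engine.example.com/ovirt-engine/api}"

api() {
    # api METHOD PATH [XML_BODY]
    curl -ks -u "admin@internal:$PASS" -X "$1" \
         -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
         ${3:+-d "$3"} "$ENGINE/$2"
}

# Against a live engine, the sequence would look something like:
#   api GET  'hosts?search=name%3Dmdskvm-p01*'       # find the host id
#   api POST 'hosts/HOST_ID/deactivate' '<action/>'  # push into Maintenance
#   api DELETE 'hosts/HOST_ID'                       # then remove it
```

If even the API refuses the deactivate action, reinstalling the host from the UI (or fixing the underlying cert/glusterd issue) is usually the cleaner path than surgery on the engine database.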
>
> Cheers,
> TK
>
>
>>>
>>> 3) Create a Domain within the Cluster
>>> 4) Add hosts to DC
>>> 5) Import Gluster Volume
>>>
>>>
>>> 4 & 5 not required if you change step 2 to import.
>>> You are running into the errors below because the engine sees the addition
>>> of new hosts as gluster peer probing to a new cluster.
>>>
>>>
>>>
>>> Some issues. I create a DC, add a Cluster, then add the Host to it.
>>> Then I try to add the second node, but it says it's already in a
>>> cluster. There is no other cluster, and the second host is not listed
>>> anywhere.
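The "already in a cluster" message usually comes from gluster itself: the second node is still a peer of the old storage pool. You can confirm this from any node, and detach the stale peer once its bricks are accounted for (a sketch; `gluster peer detach` refuses while the peer still hosts bricks in a volume):

```shell
#!/bin/sh
# On any node of the existing gluster pool:
#   gluster peer status            # lists the other pool members
#   gluster peer detach HOSTNAME   # drop a stale member (fails if it
#                                  # still holds bricks - check first)

# Helper: extract the peer count from `gluster peer status` output.
peer_count() {
    awk -F': *' '/^Number of Peers/ {print $2}'
}

# Usage on a live node:
#   gluster peer status | peer_count
```

A non-zero count with the second node listed explains why the engine considers it part of an existing cluster rather than a fresh host.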
>>>
>>> So I try to remove the first host to res
--
Thx,
TK.