Hello,

 

I've set up a test lab with 3 nodes installed with CentOS 7.

I configured GlusterFS manually. GlusterFS is up and running:
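For reference, the peering was done roughly like this (a sketch from memory, run from kvm380; hostnames as in the peer status below):

[root@kvm380 ~]# gluster peer probe kvm360.durchhalten.intern
[root@kvm380 ~]# gluster peer probe kvm320.durchhalten.intern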

 

[root@kvm380 ~]# gluster peer status

Number of Peers: 2

 

Hostname: kvm320.durchhalten.intern

Uuid: dac066db-55f7-4770-900d-4830c740ffbf

State: Peer in Cluster (Connected)

 

Hostname: kvm360.durchhalten.intern

Uuid: 4291be40-f77f-4f41-98f6-dc48fd993842

State: Peer in Cluster (Connected)

[root@kvm380 ~]# gluster volume info

 

Volume Name: data

Type: Replicate

Volume ID: 3586de82-e504-4c62-972b-448abead13d3

Status: Started

Snapshot Count: 0

Number of Bricks: 1 x 3 = 3

Transport-type: tcp

Bricks:

Brick1: kvm380.durchhalten.intern:/gluster/data

Brick2: kvm360.durchhalten.intern:/gluster/data

Brick3: kvm320.durchhalten.intern:/gluster/data

Options Reconfigured:

storage.owner-uid: 36

storage.owner-gid: 36

features.shard: on

performance.low-prio-threads: 32

performance.strict-o-direct: on

network.ping-timeout: 30

user.cifs: off

network.remote-dio: off

performance.quick-read: off

performance.read-ahead: off

performance.io-cache: off

cluster.eager-lock: enable

transport.address-family: inet

nfs.disable: on

performance.client-io-threads: off

 

Volume Name: engine

Type: Replicate

Volume ID: dcfbd322-5dd0-4bfe-a775-99ecc79e1416

Status: Started

Snapshot Count: 0

Number of Bricks: 1 x 3 = 3

Transport-type: tcp

Bricks:

Brick1: kvm380.durchhalten.intern:/gluster/engine

Brick2: kvm360.durchhalten.intern:/gluster/engine

Brick3: kvm320.durchhalten.intern:/gluster/engine

Options Reconfigured:

storage.owner-uid: 36

storage.owner-gid: 36

features.shard: on

performance.low-prio-threads: 32

performance.strict-o-direct: on

network.remote-dio: off

network.ping-timeout: 30

user.cifs: off

performance.quick-read: off

performance.read-ahead: off

performance.io-cache: off

cluster.eager-lock: enable

transport.address-family: inet

nfs.disable: on

performance.client-io-threads: off
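For completeness, the volumes were created roughly like this (a sketch from memory; the options were applied afterwards with gluster volume set, and the exact order may differ):

[root@kvm380 ~]# gluster volume create data replica 3 \
    kvm380.durchhalten.intern:/gluster/data \
    kvm360.durchhalten.intern:/gluster/data \
    kvm320.durchhalten.intern:/gluster/data
[root@kvm380 ~]# gluster volume set data storage.owner-uid 36   # vdsm uid
[root@kvm380 ~]# gluster volume set data storage.owner-gid 36   # kvm gid
[root@kvm380 ~]# gluster volume start data

The engine volume was created the same way with the /gluster/engine bricks.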

 

 

After that I deployed a self-hosted engine and added the two other hosts.

At the beginning everything looked good, but without changing anything I got the following errors on two of the hosts:

 


20.12.2018 11:35:05

Failed to connect Host kvm320.durchhalten.intern to Storage Pool Default


20.12.2018 11:35:05

Host kvm320.durchhalten.intern cannot access the Storage Domain(s) hosted_storage attached to the Data Center Default. Setting Host state to Non-Operational.


20.12.2018 11:35:05

Host kvm320.durchhalten.intern reports about one of the Active Storage Domains as Problematic.


20.12.2018 11:35:05

Kdump integration is enabled for host kvm320.durchhalten.intern, but kdump is not configured properly on host.


20.12.2018 11:35:04

Failed to connect Host kvm360.durchhalten.intern to Storage Pool Default


20.12.2018 11:35:04

Host kvm360.durchhalten.intern cannot access the Storage Domain(s) hosted_storage attached to the Data Center Default. Setting Host state to Non-Operational.


20.12.2018 11:35:04

Host kvm360.durchhalten.intern reports about one of the Active Storage Domains as Problematic.
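To narrow this down I would check the volume and try a manual mount from one of the failing hosts, roughly like this (just a sketch; /mnt/gtest is a temporary test mount point, and I assume hosted_storage sits on the engine volume):

[root@kvm320 ~]# gluster volume status engine
[root@kvm320 ~]# mkdir -p /mnt/gtest
[root@kvm320 ~]# mount -t glusterfs kvm380.durchhalten.intern:/engine /mnt/gtest
[root@kvm320 ~]# ls /mnt/gtest; umount /mnt/gtest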

 

Before GlusterFS I had a setup with NFS on a 4th server.

 

Where is the problem?

 

thx