gluster pool list
UUID                                    Hostname        State
1d17652f-f567-4a6d-9953-e0908ef5e361    localhost       Connected

gluster pool list
UUID                                    Hostname        State
612be7ce-6673-433e-ac86-bcca93636d64    localhost       Connected

gluster pool list
UUID                                    Hostname        State
772faa4f-44d4-45a7-8524-a7963798757b    localhost       Connected

gluster peer status
Number of Peers: 0
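(Each node lists only itself, which lines up with the peer detaches at the end of cmd_history.log below. For reference, re-forming the trusted pool by hand from storage1 would be roughly this sketch, assuming glusterd is up on all three nodes:)

gluster peer probe storage2.private.net
gluster peer probe storage3.private.net
gluster pool list    # all three UUIDs should now show State: Connected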

 

cat cmd_history.log
[2021-10-29 17:33:22.934750]  : peer probe storage1.private.net : SUCCESS : Probe on localhost not needed
[2021-10-29 17:33:23.162993]  : peer probe storage2.private.net : SUCCESS
[2021-10-29 17:33:23.498094]  : peer probe storage3.private.net : SUCCESS
[2021-10-29 17:33:24.918421]  : volume create engine replica 3 transport tcp storage1.private.net:/gluster_bricks/engine/engine storage2.private.net:/gluster_bricks/engine/engine storage3.private.net:/gluster_bricks/engine/engine force : FAILED : Staging failed on storage3.private.net. Error: Host storage1.private.net not connected
Staging failed on storage2.private.net. Error: Host storage1.private.net not connected
[2021-10-29 17:33:28.226387]  : peer probe storage1.private.net : SUCCESS : Probe on localhost not needed
[2021-10-29 17:33:30.618435]  : volume create data replica 3 transport tcp storage1.private.net:/gluster_bricks/data/data storage2.private.net:/gluster_bricks/data/data storage3.private.net:/gluster_bricks/data/data force : FAILED : Staging failed on storage2.private.net. Error: Host storage1.private.net not connected
Staging failed on storage3.private.net. Error: Host storage1.private.net not connected
[2021-10-29 17:33:33.923032]  : peer probe storage1.private.net : SUCCESS : Probe on localhost not needed
[2021-10-29 17:33:38.656356]  : volume create vmstore replica 3 transport tcp storage1.private.net:/gluster_bricks/vmstore/vmstore storage2.private.net:/gluster_bricks/vmstore/vmstore storage3.private.net:/gluster_bricks/vmstore/vmstore force : FAILED : Staging failed on storage3.private.net. Error: Host storage1.private.net not connected
Staging failed on storage2.private.net. Error: Host storage1.private.net is not in 'Peer in Cluster' state
[2021-10-29 17:49:40.696944]  : peer detach storage2.private.net : SUCCESS
[2021-10-29 17:49:43.787922]  : peer detach storage3.private.net : SUCCESS
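The staging errors all say that storage2/storage3 cannot reach storage1, so it is probably worth confirming that glusterd is running everywhere and that its management port is reachable between the nodes. A minimal check (a sketch; glusterd listens on 24007/tcp, and nc is assumed to be available):

systemctl is-active glusterd              # run on each node
nc -zv storage1.private.net 24007         # run from storage2 and storage3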

 

OK, this is what I have so far; I'm still looking for the complete Ansible log.

Brad

 

From: Strahil Nikolov <hunter86_bg@yahoo.com>
Sent: October 30, 2021 10:27 AM
To: admin@foundryserver.com; users@ovirt.org
Subject: Re: [ovirt-users] Gluster Install Fail again :(

 

What is the output of:

gluster peer list (from all nodes)
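(A quick way to gather that from all three nodes in one go; just a sketch, assuming passwordless root ssh, and using gluster pool list, which is the actual CLI command:)

for h in storage1 storage2 storage3; do
    echo "== $h =="
    ssh root@$h.private.net gluster pool list
done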

 

Output from the Ansible run will be useful.

 

 

Best Regards,

Strahil Nikolov

I have been working on getting this up and running for about a week now, and I am totally frustrated. I am not even sure where to begin. Here is the error I get when it fails:

 

TASK [gluster.features/roles/gluster_hci : Create the GlusterFS volumes] *******

 

An exception occurred during task execution. To see the full traceback, use -vvv. The error was: NoneType: None

failed: [storage1.private.net] (item={'volname': 'engine', 'brick': '/gluster_bricks/engine/engine', 'arbiter': 0}) => {"ansible_loop_var": "item", "changed": false, "item": {"arbiter": 0, "brick": "/gluster_bricks/engine/engine", "volname": "engine"}, "msg": "error running gluster (/usr/sbin/gluster --mode=script volume create engine replica 3 transport tcp storage1.private.net:/gluster_bricks/engine/engine storage2.private.net:/gluster_bricks/engine/engine storage3.private.net:/gluster_bricks/engine/engine force) command (rc=1): volume create: engine: failed: Staging failed on storage3.private.net. Error: Host storage1.private.net not connected\nStaging failed on storage2.private.net. Error: Host storage1.private.net not connected\n"}

 

An exception occurred during task execution. To see the full traceback, use -vvv. The error was: NoneType: None

failed: [storage1.private.net] (item={'volname': 'data', 'brick': '/gluster_bricks/data/data', 'arbiter': 0}) => {"ansible_loop_var": "item", "changed": false, "item": {"arbiter": 0, "brick": "/gluster_bricks/data/data", "volname": "data"}, "msg": "error running gluster (/usr/sbin/gluster --mode=script volume create data replica 3 transport tcp storage1.private.net:/gluster_bricks/data/data storage2.private.net:/gluster_bricks/data/data storage3.private.net:/gluster_bricks/data/data force) command (rc=1): volume create: data: failed: Staging failed on storage2.private.net. Error: Host storage1.private.net not connected\nStaging failed on storage3.private.net. Error: Host storage1.private.net not connected\n"}

 

An exception occurred during task execution. To see the full traceback, use -vvv. The error was: NoneType: None

failed: [storage1.private.net] (item={'volname': 'vmstore', 'brick': '/gluster_bricks/vmstore/vmstore', 'arbiter': 0}) => {"ansible_loop_var": "item", "changed": false, "item": {"arbiter": 0, "brick": "/gluster_bricks/vmstore/vmstore", "volname": "vmstore"}, "msg": "error running gluster (/usr/sbin/gluster --mode=script volume create vmstore replica 3 transport tcp storage1.private.net:/gluster_bricks/vmstore/vmstore storage2.private.net:/gluster_bricks/vmstore/vmstore storage3.private.net:/gluster_bricks/vmstore/vmstore force) command (rc=1): volume create: vmstore: failed: Staging failed on storage3.private.net. Error: Host storage1.private.net not connected\nStaging failed on storage2.private.net. Error: Host storage1.private.net is not in 'Peer in Cluster' state\n"}
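Since the exact gluster command is embedded in the failure message, the engine case can be reproduced outside Ansible by running it directly on storage1:

/usr/sbin/gluster --mode=script volume create engine replica 3 transport tcp storage1.private.net:/gluster_bricks/engine/engine storage2.private.net:/gluster_bricks/engine/engine storage3.private.net:/gluster_bricks/engine/engine force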

 

Here are the facts:

 

Using oVirt 4.4.9.
Using oVirt Node OS.
Partition for Gluster: /dev/vda4, > 4 TB of unformatted space.

 

I am able to SSH into each host on private.net; known_hosts entries and FQDN resolution pass fine.
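For completeness, a quick resolution/reachability check that can be run on each node (a sketch):

for h in storage1.private.net storage2.private.net storage3.private.net; do
    getent hosts $h && ping -c1 -W2 $h >/dev/null && echo "$h ok"
done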

 

On the volumes page: all default settings.

 

On the bricks page: JBOD / blacklist true / storage host storage1.private.net / default LVM settings, except the device is /dev/sda4.

 

I really need to get this set up. The first failure was the LVM filter error, so I edited /etc/lvm/lvm.conf to comment out the filter line. Then, without doing a cleanup, I reran the deployment and got the error above.
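For reference, instead of commenting the filter out entirely, an accept rule for the brick device would look roughly like this (a sketch; this goes in the devices section, and the exact device path may differ per node):

# /etc/lvm/lvm.conf (devices section)
filter = [ "a|^/dev/sda4$|", "r|.*|" ]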

 

Thanks in advance

Brad

 
