Hi Satheesaran,
gluster volume info engine
Volume Name: engine
Type: Replicate
Volume ID: 3caae601-74dd-40d1-8629-9a61072bec0f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster0:/gluster/engine/brick
Brick2: gluster1:/gluster/engine/brick
Brick3: gluster2:/gluster/engine/brick (arbiter)
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable
nfs.export-volumes: on
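For reference, each of the options listed above is applied per volume with
'gluster volume set'. A minimal sketch, using one option from the list above:

gluster volume set engine network.ping-timeout 30
gluster volume get engine network.ping-timeout    # confirm the value took effect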
As per my previous email, I have resolved this by following the steps described.
On Tue, Jun 27, 2017 at 1:42 PM, Satheesaran Sundaramoorthi <sasundar(a)redhat.com> wrote:
On Sat, Jun 24, 2017 at 3:17 PM, Abi Askushi <rightkicktech(a)gmail.com> wrote:
> Hi all,
>
> For the record, I had to manually remove the conflicting directory and
> its respective gfid from the arbiter brick:
>
> getfattr -m . -d -e hex e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
>
> That gave me the gfid: 0x277c9caa9dce4a17a2a93775357befd5
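>
> A minimal sketch of how that hex gfid maps to the backing entry under the
> brick (brick path taken from the volume info above; GlusterFS keys the
> .glusterfs entry by the first two byte pairs of the gfid):
>
> BRICK=/gluster/engine/brick
> GFID=277c9caa-9dce-4a17-a2a9-3775357befd5
> ls -l "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"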
>
> Then cd .glusterfs/27/7c
>
> rm -rf 277c9caa-9dce-4a17-a2a9-3775357befd5 (or move it out of there)
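>
> For the "move it out of there" alternative, a hedged sketch of moving the
> entry aside instead of deleting it (backup directory assumed):
>
> mkdir -p /root/gfid-backup
> mv 277c9caa-9dce-4a17-a2a9-3775357befd5 /root/gfid-backup/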
>
> Triggered heal: gluster volume heal engine
>
> Then all ok:
>
> gluster volume heal engine info
> Brick gluster0:/gluster/engine/brick
> Status: Connected
> Number of entries: 0
>
> Brick gluster1:/gluster/engine/brick
> Status: Connected
> Number of entries: 0
>
> Brick gluster2:/gluster/engine/brick
> Status: Connected
> Number of entries: 0
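>
> As a sanity check after conflicts like this one, the split-brain listing
> can be queried as well; a sketch:
>
> gluster volume heal engine info split-brain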
>
> Thanx.
>
Hi Abi,
What is the volume type of the 'engine' volume?
Could you also provide the output of 'gluster volume info engine' so we can
take a closer look at the problem?
-- sas