1. What does the glustershd.log say on all 3 nodes when you run the command? Does it complain about these files?
2. Are these 12 files also present on the 3rd data brick? (Example commands for checking 1. and 2. are below, after the volume info.)
3. Can you provide the output of `gluster volume info` for this volume?
Volume Name: engine
Type: Replicate
Volume ID: d19c19e3-910d-437b-8ba7-4f2a23d17515
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node01:/gluster/engine/brick
Brick2: node02:/gluster/engine/brick
Brick3: node04:/gluster/engine/brick
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
storage.owner-uid: 36
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-gid: 36
features.shard-block-size: 512MB
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: on
auth.allow: *
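For 1. and 2., something along these lines should show what the self-heal daemon is logging and whether the files exist on the third brick. This is only a rough sketch: it assumes the default log location /var/log/glusterfs/glustershd.log and the brick path from the volume info above, and <path-from-heal-info> is a placeholder for one of the entries reported by `gluster volume heal engine info`.

    # On each node, look at recent errors/warnings from the self-heal daemon:
    grep -E '\] (E|W) \[' /var/log/glusterfs/glustershd.log | tail -n 50

    # On node04 (the 3rd data brick), check whether one of the reported
    # files actually exists on the brick and inspect its xattrs:
    ls -l /gluster/engine/brick/<path-from-heal-info>
    getfattr -d -m . -e hex /gluster/engine/brick/<path-from-heal-info>

If the files do exist on node04, the trusted.afr.* xattrs shown there should indicate on which bricks heals are still marked as pending.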
Some extra info:
We have recently changed the gluster volume from 2 (fully replicated) + 1 arbiter to a 3-way fully replicated cluster.
Just curious, how did you do this? A `remove-brick` of the arbiter brick followed by an `add-brick` to increase it to replica-3?
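If it helps to compare, the usual sequence for that conversion would look roughly like the following. The volume name and brick path are taken from the volume info above purely for illustration; your exact commands may have differed.

    # Drop the arbiter brick, reducing the volume to plain replica 2:
    gluster volume remove-brick engine replica 2 node04:/gluster/engine/brick force

    # Re-add a full data brick to go back up to replica 3
    # (the old arbiter brick directory would need to be wiped, or a fresh
    # path used, before it can be re-added):
    gluster volume add-brick engine replica 3 node04:/gluster/engine/brick

    # Then trigger a full heal so the new brick gets populated:
    gluster volume heal engine full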
Thanks,
Ravi