On Thu, Jul 6, 2017 at 6:55 AM, Atin Mukherjee <amukherj@redhat.com> wrote:
>
> You can switch back to info mode the moment this is hit one more time with
> the debug log enabled. What I'd need here is the glusterd log (with debug
> mode) to figure out the exact cause of the failure.
>
> Let me know,
> thanks
>
>
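OK. To put glusterd in debug mode I suppose I can do something like this on
the node (assuming the stock /etc/sysconfig/glusterd that the CentOS systemd
unit reads; correct me if there is a better way):

[root@ovirt01 ~]# sed -i 's/^#*LOG_LEVEL=.*/LOG_LEVEL=DEBUG/' /etc/sysconfig/glusterd
[root@ovirt01 ~]# systemctl restart glusterd

and then set LOG_LEVEL back to INFO and restart glusterd again once the
failure has been reproduced.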
Yes, but with the volume in its current state I cannot run the reset-brick
command (see below for the exact form of the command I mean).
I have another volume, named "iso", that I could use for the test, but I
would prefer to keep it clean until the problem on the "export" volume is
understood.
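For clarity, the reset-brick sequence I mean has this form (the hostnames
are taken from the renaming visible in the output below, so treat them as
an example):

gluster volume reset-brick export \
  ovirt01.localdomain.local:/gluster/brick3/export start
gluster volume reset-brick export \
  ovirt01.localdomain.local:/gluster/brick3/export \
  gl01.localdomain.local:/gluster/brick3/export commit force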
In fact, on the "export" volume I currently have this:
[root@ovirt01 ~]# gluster volume info export
Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
Status: Started
Snapshot Count: 0
Number of Bricks: 0 x (2 + 1) = 1
Transport-type: tcp
Bricks:
Brick1: gl01.localdomain.local:/gluster/brick3/export
Options Reconfigured:
...
While on the other two nodes I have:
[root@ovirt02 ~]# gluster volume info export
Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
Status: Started
Snapshot Count: 0
Number of Bricks: 0 x (2 + 1) = 2
Transport-type: tcp
Bricks:
Brick1: ovirt02.localdomain.local:/gluster/brick3/export
Brick2: ovirt03.localdomain.local:/gluster/brick3/export
Options Reconfigured:
...
[root@ovirt03 ~]# gluster volume info export
Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
Status: Started
Snapshot Count: 0
Number of Bricks: 0 x (2 + 1) = 2
Transport-type: tcp
Bricks:
Brick1: ovirt02.localdomain.local:/gluster/brick3/export
Brick2: ovirt03.localdomain.local:/gluster/brick3/export
Options Reconfigured:
...
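Since the three nodes disagree on the brick list, I can also compare
glusterd's on-disk view of the volume across the nodes; if I understand the
layout right, it lives under /var/lib/glusterd, e.g.:

[root@ovirt01 ~]# cat /var/lib/glusterd/vols/export/info

run on each of the three nodes and diffed, this should show where the brick
definitions diverge.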
If needed, I can destroy and recreate this "export" volume with the old
names (ovirt0N.localdomain.local), if you give me the sequence of commands;
then I would enable debug and retry the reset-brick command.
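My guess at the sequence would be something like the following; please
correct it if wrong (the setfattr/rm steps are what I understand is needed
to reuse an existing brick directory):

[root@ovirt01 ~]# gluster volume stop export
[root@ovirt01 ~]# gluster volume delete export

then on every node, clean the old brick so it can be reused:

[root@ovirt0N ~]# setfattr -x trusted.glusterfs.volume-id /gluster/brick3/export
[root@ovirt0N ~]# setfattr -x trusted.gfid /gluster/brick3/export
[root@ovirt0N ~]# rm -rf /gluster/brick3/export/.glusterfs

and finally recreate it with the old names and an arbiter, as it was:

[root@ovirt01 ~]# gluster volume create export replica 3 arbiter 1 \
  ovirt01.localdomain.local:/gluster/brick3/export \
  ovirt02.localdomain.local:/gluster/brick3/export \
  ovirt03.localdomain.local:/gluster/brick3/export
[root@ovirt01 ~]# gluster volume start export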
Gianluca