[ovirt-users] [Gluster-users] op-version for reset-brick (Was: Re: Upgrading HC from 4.0 to 4.1)

Atin Mukherjee amukherj at redhat.com
Thu Jul 6 12:16:54 UTC 2017


On Thu, Jul 6, 2017 at 5:26 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com>
wrote:

> On Thu, Jul 6, 2017 at 8:38 AM, Gianluca Cecchi <gianluca.cecchi at gmail.com>
> wrote:
>
>>
>> If necessary, I can destroy and recreate this "export" volume again with
>> the old names (ovirt0N.localdomain.local) if you give me the sequence of
>> commands, then enable debug and retry the reset-brick command
>>
>> Gianluca
>>
>
>
> So it seems I was able to destroy and re-create the volume.
> Now I see that volume creation uses the new IP by default, so I reversed
> the hostname roles in the commands after putting glusterd in debug mode on
> the host where I execute the reset-brick command (do I have to set debug
> on the other nodes too?)
>

You have to set the log level to debug for the glusterd instance where the
commit fails and share the glusterd log of that particular node.
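
For example, something along these lines should work on the node where the
commit fails (a rough sketch, not the only way to pass the option; restarting
glusterd by hand briefly interrupts management operations on that node, the
brick processes keep running):

  # on ovirt02 (and/or ovirt03), where the commit failed
  systemctl stop glusterd
  glusterd --log-level DEBUG
  # ... re-run the reset-brick commit from ovirt01 ...
  # then collect /var/log/glusterfs/glusterd.log from this node,
  # stop the hand-started daemon and bring the service back:
  pkill glusterd
  systemctl start glusterd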


>
>
> [root@ovirt01 ~]# gluster volume reset-brick export gl01.localdomain.local:/gluster/brick3/export start
> volume reset-brick: success: reset-brick start operation successful
>
> [root@ovirt01 ~]# gluster volume reset-brick export gl01.localdomain.local:/gluster/brick3/export ovirt01.localdomain.local:/gluster/brick3/export commit force
> volume reset-brick: failed: Commit failed on ovirt02.localdomain.local. Please check log file for details.
> Commit failed on ovirt03.localdomain.local. Please check log file for details.
> [root@ovirt01 ~]#
>
> Here is the glusterd.log in zip format:
> https://drive.google.com/file/d/0BwoPbcrMv8mvYmlRLUgyV0pFN0k/view?usp=sharing
>
> The time of the reset-brick operation in the logfile is 2017-07-06 11:42.
> (BTW: can I get the log timestamps in local time instead of UTC, since my
> system is set to CEST?)
>
> I see a difference, because the brick doesn't seem isolated as before...
>
> [root@ovirt01 glusterfs]# gluster volume info export
>
> Volume Name: export
> Type: Replicate
> Volume ID: e278a830-beed-4255-b9ca-587a630cbdbf
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt01.localdomain.local:/gluster/brick3/export
> Brick2: 10.10.2.103:/gluster/brick3/export
> Brick3: 10.10.2.104:/gluster/brick3/export (arbiter)
>
> [root@ovirt02 ~]# gluster volume info export
>
> Volume Name: export
> Type: Replicate
> Volume ID: e278a830-beed-4255-b9ca-587a630cbdbf
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt01.localdomain.local:/gluster/brick3/export
> Brick2: 10.10.2.103:/gluster/brick3/export
> Brick3: 10.10.2.104:/gluster/brick3/export (arbiter)
>
> And in oVirt I also see all 3 bricks online...
>
> Gianluca
>
>
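
Also, to cross-check how the peers and bricks are identified right now
(independent of what oVirt reports), you could run for example:

  gluster pool list
  gluster peer status
  gluster volume status export

pool list / peer status show the name each peer is currently known by, and
volume status confirms which brick processes are online. The bricks keep
whatever host string they were created (or reset) with, which is why Brick2
and Brick3 still show the 10.10.2.x addresses until a reset-brick is done
for them as well.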