[ovirt-users] [Gluster-users] op-version for reset-brick (Was: Re: Upgrading HC from 4.0 to 4.1)
Gianluca Cecchi
gianluca.cecchi at gmail.com
Wed Jul 5 22:17:03 UTC 2017
On Wed, Jul 5, 2017 at 6:39 PM, Atin Mukherjee <amukherj at redhat.com> wrote:
> OK, so the log just hints to the following:
>
> [2017-07-05 15:04:07.178204] E [MSGID: 106123] [glusterd-mgmt.c:1532:glusterd_mgmt_v3_commit]
> 0-management: Commit failed for operation Reset Brick on local node
> [2017-07-05 15:04:07.178214] E [MSGID: 106123]
> [glusterd-replace-brick.c:649:glusterd_mgmt_v3_initiate_replace_brick_cmd_phases]
> 0-management: Commit Op Failed
>
> While going through the code, glusterd_op_reset_brick () failed, resulting
> in these logs. Now I don't see any error logs generated from
> glusterd_op_reset_brick (), which makes me think we may have failed at a
> place where we log the failure only in debug mode. Would you be able to
> restart the glusterd service in debug log mode, rerun this test, and share
> the log?
>
>
What's the best way to put glusterd in debug mode?
Can I set it on this volume, and keep working on it, even though it is now
compromised?
I ask because I have tried this:
[root@ovirt01 ~]# gluster volume get export diagnostics.brick-log-level
Option                                  Value
------                                  -----
diagnostics.brick-log-level             INFO
[root@ovirt01 ~]# gluster volume set export diagnostics.brick-log-level DEBUG
volume set: failed: Error, Validation Failed
[root@ovirt01 ~]#
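Before going further, I suppose I could check the state of the compromised
volume to see why validation fails; a sketch, assuming status and info still
respond on it:

# check brick and daemon state of the compromised volume
gluster volume status export
# show the configured options and brick layout
gluster volume info export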
While on another volume that is in a good state, I can run:
[root@ovirt01 ~]# gluster volume set iso diagnostics.brick-log-level DEBUG
volume set: success
[root@ovirt01 ~]#
[root@ovirt01 ~]# gluster volume get iso diagnostics.brick-log-level
Option                                  Value
------                                  -----
diagnostics.brick-log-level             DEBUG
[root@ovirt01 ~]# gluster volume set iso diagnostics.brick-log-level INFO
volume set: success
[root@ovirt01 ~]#
[root@ovirt01 ~]# gluster volume get iso diagnostics.brick-log-level
Option                                  Value
------                                  -----
diagnostics.brick-log-level             INFO
[root@ovirt01 ~]#
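As a sanity check, I suppose I could confirm that DEBUG entries actually reach
the brick log; the log file name below is an assumption derived from the
default /var/log/glusterfs/bricks/ naming, so it would need adjusting to the
real brick path:

# gluster log lines carry a severity letter, e.g. "] D [" for DEBUG
# (log file name assumed from the brick path; adjust as needed)
grep '] D \[' /var/log/glusterfs/bricks/gluster-brick-iso.log | tail -5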
Do you mean to run the reset-brick command for another volume or for the
same? Can I run it against this "now broken" volume?
Or perhaps I can modify /usr/lib/systemd/system/glusterd.service and change,
in the [Service] section,
from
Environment="LOG_LEVEL=INFO"
to
Environment="LOG_LEVEL=DEBUG"
and then
systemctl daemon-reload
systemctl restart glusterd
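As a sketch of a slightly cleaner variant (assuming systemd drop-in support),
an override file would avoid editing the packaged unit, which the next RPM
update would overwrite:

# create a drop-in override instead of editing the packaged unit file
mkdir -p /etc/systemd/system/glusterd.service.d
cat > /etc/systemd/system/glusterd.service.d/log-level.conf <<'EOF'
[Service]
Environment="LOG_LEVEL=DEBUG"
EOF
systemctl daemon-reload
systemctl restart glusterd
# revert by deleting the drop-in, then daemon-reload and restart again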
I think it would be better to keep gluster in debug mode for as little time as
possible, as there are other volumes active right now and I want to prevent
the log files from filling up their file system.
If possible, it would be best to put only some components in debug mode, as in
the example commands above.
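Once the logs are captured, the per-volume option can presumably be cleared
back to its default with gluster volume reset:

# clear the per-volume debug option, restoring the default (INFO)
gluster volume reset iso diagnostics.brick-log-level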
Let me know,
thanks