[ovirt-users] I’m having trouble deleting a test gluster volume
knarra
knarra at redhat.com
Wed Apr 12 06:02:12 UTC 2017
On 04/12/2017 01:45 AM, Precht, Andrew wrote:
> Here is an update…
>
> I checked the /var/log/glusterfs/etc-glusterfs-glusterd.vol.log on the
> node that had the trouble volume (test1). I didn’t see any errors. So,
> I ran a tail -f on the log as I tried to remove the volume using the
> web UI. Here is what was appended:
>
> [2017-04-11 19:48:40.756360] I [MSGID: 106487]
> [glusterd-handler.c:1474:__glusterd_handle_cli_list_friends]
> 0-glusterd: Received cli list req
> [2017-04-11 19:48:42.238840] I [MSGID: 106488]
> [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume]
> 0-management: Received get vol req
> The message "I [MSGID: 106487]
> [glusterd-handler.c:1474:__glusterd_handle_cli_list_friends]
> 0-glusterd: Received cli list req" repeated 6 times between
> [2017-04-11 19:48:40.756360] and [2017-04-11 19:49:32.596536]
> The message "I [MSGID: 106488]
> [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume]
> 0-management: Received get vol req" repeated 20 times between
> [2017-04-11 19:48:42.238840] and [2017-04-11 19:49:34.082179]
> [2017-04-11 19:51:41.556077] I [MSGID: 106487]
> [glusterd-handler.c:1474:__glusterd_handle_cli_list_friends]
> 0-glusterd: Received cli list req
>
> I’m seeing that the timestamps on these log entries do not match the
> time on the node.
Gluster logs are written in UTC. That is why the timestamps in the logs
differ from the local time on your node.
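If you want to map a log entry back to local time, comparing the node's
local clock with UTC gives you the offset. A quick check from a shell on
the node:

    date        # local time on the node
    date -u     # UTC, the timezone used in the gluster logs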
>
> The next steps:
> I stopped the glusterd service on the node with volume test1
> I deleted it with: rm -rf /var/lib/glusterd/vols/test1
> I started the glusterd service.
>
> After starting the gluster service back up, the directory
> /var/lib/glusterd/vols/test1 reappears.
> I’m guessing syncing with the other nodes?
Yes, since you deleted it on only one node.
> Is this because I have the Volume Option: auth allow *
> Do I need to remove the directory /var/lib/glusterd/vols/test1 on all
> nodes in the cluster individually?
You need to remove the directory /var/lib/glusterd/vols/test1 on all nodes
and then restart the glusterd service on all the nodes in the cluster.
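A rough sketch of what that looks like, run on every node in the cluster
(assuming the stale volume is test1 and that glusterd is managed by
systemd, as on oVirt Node 4.1):

    systemctl stop glusterd
    rm -rf /var/lib/glusterd/vols/test1    # drop the stale volume definition
    systemctl start glusterd               # bring glusterd back up
    gluster volume info                    # once all nodes are done, test1 should no longer be listed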
>
> thanks
>
> ------------------------------------------------------------------------
> *From:* knarra <knarra at redhat.com>
> *Sent:* Tuesday, April 11, 2017 11:51:18 AM
> *To:* Precht, Andrew; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon
> Mureinik; Nir Soffer
> *Cc:* users
> *Subject:* Re: [ovirt-users] I’m having trouble deleting a test
> gluster volume
> On 04/11/2017 11:28 PM, Precht, Andrew wrote:
>> Hi all,
>> The node is oVirt Node 4.1.1 with glusterfs-3.8.10-1.el7.
>> On the node I cannot find /var/log/glusterfs/glusterd.log. However,
>> there is a /var/log/glusterfs/glustershd.log
> Can you check whether /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
> exists? If it does, can you check whether there is any error present in that file?
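> Something like the following should surface any errors in that log (a
> minimal check; gluster log lines carry a severity letter, E for errors):
>
>     grep ' E \[' /var/log/glusterfs/etc-glusterfs-glusterd.vol.log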
>>
>> What happens if I follow the four steps outlined here to remove the
>> volume from the node _BUT_ I do have another volume present in the
>> cluster? It too is a test volume. Neither one has any data on it.
>> So, data loss is not an issue.
> Running those four steps will remove the volume from your cluster. If
> the volumes you have are test volumes, you can just follow the steps
> outlined to delete them (since you are not able to delete them from the
> UI) and bring the cluster back into a normal state.
>>
>> ------------------------------------------------------------------------
>> *From:* knarra <knarra at redhat.com>
>> *Sent:* Tuesday, April 11, 2017 10:32:27 AM
>> *To:* Sandro Bonazzola; Precht, Andrew; Sahina Bose; Tal Nisan; Allon
>> Mureinik; Nir Soffer
>> *Cc:* users
>> *Subject:* Re: [ovirt-users] I’m having trouble deleting a test
>> gluster volume
>> On 04/11/2017 10:44 PM, Sandro Bonazzola wrote:
>>> Adding some people
>>>
>>> On 11/Apr/2017 19:06, "Precht, Andrew" <Andrew.Precht at sjlibrary.org> wrote:
>>>
>>> Hi oVirt users,
>>> I’m a newbie to oVirt and I’m having trouble deleting a test
>>> gluster volume. The nodes are 4.1.1 and the engine is 4.1.0.
>>>
>>> When I try to remove the test volume, I click Remove and the dialog
>>> box prompting to confirm the deletion pops up. After I click OK, the
>>> dialog box changes to show a little spinning wheel and then
>>> disappears. In the end the volume is still there.
>>>
>> With the latest versions of glusterfs and oVirt we do not see any issue
>> with deleting a volume. Can you please check the
>> /var/log/glusterfs/glusterd.log file to see if there is any error present?
>>
>>
>>> The test volume was distributed with two host members. I was able to
>>> remove one of the hosts from the volume by removing that host from the
>>> cluster. When I try to remove the remaining host in the volume, even
>>> with the “Force Remove” box ticked, I get this response: Cannot remove
>>> Host. Server having Gluster volume.
>>>
>>> What to try next?
>>>
>> Since you have already removed the volume from one host in the
>> cluster and you still see it on another host, you can do the following
>> to remove the volume from that host.
>>
>> 1) Log in to the host where the volume is present.
>> 2) cd to /var/lib/glusterd/vols
>> 3) rm -rf <vol_name>
>> 4) Restart glusterd on that host.
>>
>> And before doing the above, make sure that you do not have any other
>> volume present in the cluster.
>>
>> The above steps should not be run on a production system, as you might
>> lose the volume and its data.
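>> On a throwaway test setup, those four steps would look roughly like this
>> (a sketch only; substitute your actual volume name for <vol_name>):
>>
>>     cd /var/lib/glusterd/vols
>>     rm -rf <vol_name>              # remove the volume definition from this host
>>     systemctl restart glusterd     # step 4: restart glusterd on that host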
>>
>> Now removing the host from the UI should succeed.
>>
>>>
>>> P.S. I’ve tried to join this user group several times in the
>>> past, with no response.
>>> Is it possible for me to join this group?
>>>
>>> Regards,
>>> Andrew
>>>
>>>
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users at ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>