[ovirt-users] remove volume from DB

Sahina Bose sabose at redhat.com
Fri Nov 20 00:15:07 EST 2015



On 11/18/2015 10:22 PM, paf1 at email.cz wrote:
> Hello,
> yes, I'm talking about gluster volumes.
> "storages" not defined yet.
> The main problem is about how to remove all definitions from gluster 
> configs on nodes and in ovirt too ( maybe oVirt will update 
> automaticaly , as U wrote before ).
>
> 1) The nodes are in maintenance mode; glusterd is running, but with 
> errors:
>
> # systemctl status glusterd
> glusterd.service - GlusterFS, a clustered file-system server
>    Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
>    Active: active (running) since Wed 2015-11-18 14:12:26 CET; 3h 
> 16min ago
>   Process: 4465 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid 
> --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
>  Main PID: 4466 (glusterd)
>    CGroup: /system.slice/glusterd.service
>            ├─4466 /usr/sbin/glusterd -p /var/run/glusterd.pid 
> --log-level INFO
>            └─4612 /usr/sbin/glusterfs -s localhost --volfile-id 
> gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid 
> -l /var/log/glusterfs/glustershd.log -S /var/run/glus...
>
> Nov 18 17:25:44 1hp2.algocloud.net etc-glusterfs-glusterd.vol[4466]: 
> [2015-11-18 16:25:44.288734] C 
> [rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-management: server 
> 16.0.0...onnecting.
> Nov 18 17:26:23 1hp2.algocloud.net etc-glusterfs-glusterd.vol[4466]: 
> [2015-11-18 16:26:23.297273] C 
> [rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-management: server 
> 16.0.0...onnecting.
> Nov 18 17:26:41 1hp2.algocloud.net etc-glusterfs-glusterd.vol[4466]: 
> [2015-11-18 16:26:41.302793] C 
> [rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-management: server 
> 16.0.0...onnecting.
> Nov 18 17:26:54 1hp2.algocloud.net etc-glusterfs-glusterd.vol[4466]: 
> [2015-11-18 16:26:54.307579] C 
> [rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-management: server 
> 16.0.0...onnecting.
> Nov 18 17:27:33 1hp2.algocloud.net etc-glusterfs-glusterd.vol[4466]: 
> [2015-11-18 16:27:33.316049] C 
> [rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-management: server 
> 16.0.0...onnecting.
> Nov 18 17:27:51 1hp2.algocloud.net etc-glusterfs-glusterd.vol[4466]: 
> [2015-11-18 16:27:51.321659] C 
> [rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-management: server 
> 16.0.0...onnecting.
> Nov 18 17:28:04 1hp2.algocloud.net etc-glusterfs-glusterd.vol[4466]: 
> [2015-11-18 16:28:04.326615] C 
> [rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-management: server 
> 16.0.0...onnecting.
> Nov 18 17:28:43 1hp2.algocloud.net etc-glusterfs-glusterd.vol[4466]: 
> [2015-11-18 16:28:43.335278] C 
> [rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-management: server 
> 16.0.0...onnecting.
> Nov 18 17:29:01 1hp2.algocloud.net etc-glusterfs-glusterd.vol[4466]: 
> [2015-11-18 16:29:01.340909] C 
> [rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-management: server 
> 16.0.0...onnecting.
> Nov 18 17:29:14 1hp2.algocloud.net etc-glusterfs-glusterd.vol[4466]: 
> [2015-11-18 16:29:14.345827] C 
> [rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-management: server 
> 16.0.0...onnecting.
> Hint: Some lines were ellipsized, use -l to show in full.

The log at /var/log/glusterfs/etc-glusterfs-glusterd.vol.log will give 
you more information on the errors.
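
A quick sketch for getting the full picture (journalctl and tail are 
the stock tools here; the log path is the one from your status output 
above):

    # the journal entries above were ellipsized; show them in full
    journalctl -u glusterd -l --no-pager | tail -n 50
    # and the glusterd log itself
    tail -n 100 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log

The repeating rpc_clnt_ping_timer_expired messages indicate glusterd is 
losing its connection to a server on the 16.0.0.x network, which fits 
the peer confusion below.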

>
> 2) All gluster data was cleared from the filesystem - meaning 
> ".glusterfs", the VMs' data, etc. (rm -rf ./.* ; rm -rf ./* = really 
> cleaned).
> 3) From the command line:
>
> # gluster volume info 1HP-R2P1
> Volume Name: 1HP-R2P1
> Type: Replicate
> Volume ID: 8b667651-7104-4db9-a006-4effa40524e6
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: 1hp1-san:/STORAGE/p1/G
> Brick2: 1hp2-san:/STORAGE/p1/G
> Options Reconfigured:
> performance.readdir-ahead: on
>
> # cat /etc/hosts
>  172.16.5.151 1hp1
>  172.16.5.152 1hp2
>  172.16.5.153 2hp1
>  172.16.5.154 2hp2
>
>  16.0.0.151 1hp1-SAN
>  16.0.0.152 1hp2-SAN
>  16.0.0.153 2hp1-SAN
>  16.0.0.154 2hp2-SAN
>
>
> # gluster peer status  (in oVirt the nodes are defined on the 
> 172.16.5.0 network (mgmt, 1Gb), but the bricks are on the 16.0.0.0 
> network (VM traffic, 10Gb, replication/moves))
> Number of Peers: 4
>
> Hostname: 172.16.5.152
> Uuid: 47b030ab-75d8-49ec-b67d-650e22dc2271
> State: Peer in Cluster (Connected)
> Other names:
> 1hp2
>
> Which of them is correct - both?? Or did I mix them up? (The peers are 
> in the same net, of course (16.0.0.0).)

If you want to add bricks using the 16.0.0.0 network, from oVirt you 
will need to set it up as follows:
1. Define a network in the cluster with the "gluster" network role.
2. After you add the hosts to oVirt using the 172.16.. network, assign 
the "gluster" network to the 16.0.. interface using the "Setup Networks" 
dialog.
Now when you create the volume from oVirt, the 16.0 network will be used 
to add the bricks.
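
To check which addresses the bricks actually ended up on, the brick 
lines in the volume info output show the host name that was used (plain 
gluster CLI, nothing oVirt-specific):

    # the bricks should list the 16.0.0.x (-san) names if the gluster
    # network role was in place before the volume was created
    gluster volume info 1HP-R2P1 | grep '^Brick'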

But in your case it looks like the same host is known as 2 peers - 1hp2 
and 1hp2-SAN? Did you set this up from the gluster CLI?
You could try peer detaching 1hp2-SAN and peer probing it again from 
another host. (1hp2-SAN should then be shown as an other name for 1hp2.)
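
A minimal sketch of that sequence, using the host names from your 
/etc/hosts (run it from a peer other than 1hp2, e.g. 2hp1):

    # drop the duplicate peer entry and probe it again
    gluster peer detach 1hp2-SAN
    gluster peer probe 1hp2-SAN
    # 1hp2-SAN should now show up under "Other names:" of the 1hp2 entry
    gluster peer status

If the detach is refused because a volume still references bricks on 
that peer, sort out the volume delete below first and then retry.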


>
> Hostname: 1hp2-SAN
> Uuid: 47b030ab-75d8-49ec-b67d-650e22dc2271
> State: Peer in Cluster (Connected)
>
> Hostname: 2hp2-SAN
> Uuid: f98ff1e1-c866-4af8-a6fa-3e8141a207cd
> State: Peer in Cluster (Connected)
>
> Hostname: 2hp1-SAN
> Uuid: 7dcd603f-052f-4188-94fa-9dbca6cd19b3
> State: Peer in Cluster (Connected)
>
> # gluster volume delete 1HP-R2P1
> Deleting volume will erase all information about the volume. Do you 
> want to continue? (y/n) y
> Error: Request timed out

Please attach the gluster logs so the issue can be identified.
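
The most useful files for a volume-delete timeout are the glusterd log 
and the CLI command history, collected from each node (file names as 
shipped with glusterfs 3.7; cmd_history.log records each CLI command 
and its outcome):

    # run on each node and attach the resulting archives
    tar czf gluster-logs-$(hostname).tar.gz \
        /var/log/glusterfs/etc-glusterfs-glusterd.vol.log \
        /var/log/glusterfs/cmd_history.log

If the delete keeps timing out and the brick data is already wiped 
anyway, a last-resort cleanup - standard gluster practice, not specific 
to oVirt, so back the directory up first - is to remove the volume's 
definition on every node and restart glusterd:

    # on every node, with glusterd stopped
    systemctl stop glusterd
    mv /var/lib/glusterd/vols/1HP-R2P1 /root/1HP-R2P1.vol.bak
    systemctl start glusterd

Once 'gluster volume info' no longer lists the volume, the engine 
should drop it from the Volumes tab on the next sync.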

> Node info: all at the current version
> OS Version: RHEL - 7 - 1.1503.el7.centos.2.8
> Kernel Version: 3.10.0 - 229.20.1.el7.x86_64
> KVM Version: 2.3.0 - 29.1.el7
> LIBVIRT Version: libvirt-1.2.8-16.el7_1.5
> VDSM Version: vdsm-4.17.999-152.git84c0adc.el7
> SPICE Version: 0.12.4 - 9.el7_1.3
> GlusterFS Version: glusterfs-3.7.6-1.el7
> oVirt: 3.6
>
>
>
> regs.
> pavel
>
> On 18.11.2015 17:17, Sahina Bose wrote:
>> Are you talking about the gluster volumes shown in the Volumes tab?
>>
>> If you have removed only the gluster volumes and not the gluster 
>> nodes, the oVirt engine will update its configuration from the 
>> gluster backend.
>> However, if the gluster nodes are also removed from the backend, the 
>> nodes should be in the Non-responsive state in the UI?
>> You could put all the nodes in the gluster cluster into maintenance 
>> mode and force-remove the nodes (a checkbox is provided).
>>
>> On 11/18/2015 07:26 PM, paf1 at email.cz wrote:
>>> Hello,
>>> how to remove a volume definition from the oVirt DB (and from the 
>>> nodes' gluster config) if the volume was totally cleaned in the 
>>> background while in running mode??
>>>
>>> regs.
>>> Paf1
