<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<br>
<br>
<div class="moz-cite-prefix">On 11/18/2015 10:22 PM, <a class="moz-txt-link-abbreviated" href="mailto:paf1@email.cz">paf1@email.cz</a>
wrote:<br>
</div>
<blockquote cite="mid:564CACB1.3000103@email.cz" type="cite">
Hello, <br>
yes, I'm talking about gluster volumes.<br>
"Storages" are not defined yet.<br>
The main problem is how to remove all the definitions from the
gluster configs on the nodes and from oVirt too ( maybe oVirt will
update automatically, as you wrote before ).<br>
<br>
<br>
<br>
1) nodes are in maintenance mode, glusterd is running with errors<br>
<br>
<b># systemctl status glusterd</b><br>
glusterd.service - GlusterFS, a clustered file-system server<br>
Loaded: loaded (/usr/lib/systemd/system/glusterd.service;
enabled)<br>
Active: <b><font color="#33cc00">active (running)</font></b>
since St 2015-11-18 14:12:26 CET; 3h 16min ago<br>
Process: 4465 ExecStart=/usr/sbin/glusterd -p
/var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS
(code=exited, status=0/SUCCESS)<br>
Main PID: 4466 (glusterd)<br>
CGroup: /system.slice/glusterd.service<br>
├─4466 /usr/sbin/glusterd -p /var/run/glusterd.pid
--log-level INFO<br>
└─4612 /usr/sbin/glusterfs -s localhost --volfile-id
gluster/glustershd -p
/var/lib/glusterd/glustershd/run/glustershd.pid -l
/var/log/glusterfs/glustershd.log -S /var/run/glus...<br>
<br>
<font color="#ff0000">lis 18 17:25:44 1hp2.algocloud.net
etc-glusterfs-glusterd.vol[4466]: [2015-11-18 16:25:44.288734] C
[rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-management:
server 16.0.0...onnecting.<br>
lis 18 17:26:23 1hp2.algocloud.net
etc-glusterfs-glusterd.vol[4466]: [2015-11-18 16:26:23.297273] C
[rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-management:
server 16.0.0...onnecting.<br>
lis 18 17:26:41 1hp2.algocloud.net
etc-glusterfs-glusterd.vol[4466]: [2015-11-18 16:26:41.302793] C
[rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-management:
server 16.0.0...onnecting.<br>
lis 18 17:26:54 1hp2.algocloud.net
etc-glusterfs-glusterd.vol[4466]: [2015-11-18 16:26:54.307579] C
[rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-management:
server 16.0.0...onnecting.<br>
lis 18 17:27:33 1hp2.algocloud.net
etc-glusterfs-glusterd.vol[4466]: [2015-11-18 16:27:33.316049] C
[rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-management:
server 16.0.0...onnecting.<br>
lis 18 17:27:51 1hp2.algocloud.net
etc-glusterfs-glusterd.vol[4466]: [2015-11-18 16:27:51.321659] C
[rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-management:
server 16.0.0...onnecting.<br>
lis 18 17:28:04 1hp2.algocloud.net
etc-glusterfs-glusterd.vol[4466]: [2015-11-18 16:28:04.326615] C
[rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-management:
server 16.0.0...onnecting.<br>
lis 18 17:28:43 1hp2.algocloud.net
etc-glusterfs-glusterd.vol[4466]: [2015-11-18 16:28:43.335278] C
[rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-management:
server 16.0.0...onnecting.<br>
lis 18 17:29:01 1hp2.algocloud.net
etc-glusterfs-glusterd.vol[4466]: [2015-11-18 16:29:01.340909] C
[rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-management:
server 16.0.0...onnecting.<br>
lis 18 17:29:14 1hp2.algocloud.net
etc-glusterfs-glusterd.vol[4466]: [2015-11-18 16:29:14.345827] C
[rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-management:
server 16.0.0...onnecting.</font><br>
Hint: Some lines were ellipsized, use -l to show in full.<br>
</blockquote>
<br>
The log at /var/log/glusterfs/etc-glusterfs-glusterd.vol.log will
give you more information on these errors.<br>
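Since systemctl ellipsized the journal lines, something like the
following should show them in full (a minimal sketch; the log path is
the one mentioned above):<br>
<pre># show the full, non-ellipsized unit status lines
systemctl status glusterd -l

# or read the glusterd log directly for the complete error messages
less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log</pre>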
<br>
<blockquote cite="mid:564CACB1.3000103@email.cz" type="cite"> <br>
2) all gluster data was cleared from the filesystem ( meaning
".glusterfs", the VMs' data, etc.; rm -rf ./.* ; rm -rf ./* =
really cleaned )<br>
3) from the command line:<br>
<br>
# <b>gluster volume info 1HP-R2P1</b><br>
<br>
Volume Name: 1HP-R2P1<br>
Type: Replicate<br>
Volume ID: 8b667651-7104-4db9-a006-4effa40524e6<br>
Status: Started<br>
Number of Bricks: 1 x 2 = 2<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: 1hp1-san:/STORAGE/p1/G<br>
Brick2: 1hp2-san:/STORAGE/p1/G<br>
Options Reconfigured:<br>
performance.readdir-ahead: on<br>
<br>
<b># cat /etc/hosts</b><br>
<tt>172.16.5.151 1hp1</tt><br>
<tt>172.16.5.152 1hp2</tt><br>
<tt>172.16.5.153 2hp1</tt><br>
<tt>172.16.5.154 2hp2</tt><br>
<br>
<tt>16.0.0.151 1hp1-SAN</tt><br>
<tt>16.0.0.152 1hp2-SAN</tt><br>
<tt>16.0.0.153 2hp1-SAN</tt><br>
<tt>16.0.0.154 2hp2-SAN</tt><br>
<br>
<br>
# <b>gluster peer status</b> ( in oVirt the nodes are defined in
the 172.16.5.0 network ( mgmt, 1Gb ), but the bricks are in the
16.0.0.0 network ( VMs, 10Gb, repl/move ) )<br>
Number of Peers: 4<br>
<br>
Hostname: <font color="#cc0000"><b>172.16.5.152</b></font><br>
Uuid: 47b030ab-75d8-49ec-b67d-650e22dc2271<br>
State: Peer in Cluster (Connected)<br>
Other names:<br>
<b><font color="#cc0000">1hp2</font></b><br>
<br>
<font color="#cc0000"><b>which of them is correct - both ??</b></font>
Can I mix them ?? ( peers should be in the same net, of course ( 16.0.0.0 ) )<br>
</blockquote>
<br>
If you want to add bricks using the 16.0.0.0 network from oVirt, you
will need to set it up as below:<br>
1. Define a network in the cluster with the "gluster" network role.<br>
2. After you add the hosts to oVirt using the 172.16.. network,
assign the "gluster" network to the 16.0.. interface using the
"Setup Networks" dialog.<br>
Now when you create the volume from oVirt, the 16.0 network will be
used to add the bricks.<br>
<br>
But in your case it looks like the same host is known as 2 peers -
1hp2 and 1hp2-SAN. Did you set this up from the gluster CLI?<br>
You could try peer detaching 1hp2-SAN and peer probing it again from
another host, as sketched below. (1hp2-SAN should then be shown as an
other name for 1hp2.)<br>
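A minimal sketch of that sequence, run from one of the other peers
(e.g. 1hp1; hostnames taken from this thread):<br>
<pre># from a healthy peer, drop the duplicate peer entry
# (gluster will refuse the detach if bricks still reference this peer;
#  the volume would have to be cleaned up first)
gluster peer detach 1hp2-SAN

# probe it again so the address is recorded as an alternate name
gluster peer probe 1hp2-SAN

# verify: 1hp2-SAN should now appear under "Other names" for 1hp2
gluster peer status</pre>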
<br>
<br>
<blockquote cite="mid:564CACB1.3000103@email.cz" type="cite"> <br>
Hostname: 1hp2-SAN<br>
Uuid: 47b030ab-75d8-49ec-b67d-650e22dc2271<br>
State: Peer in Cluster (Connected)<br>
<br>
Hostname: 2hp2-SAN<br>
Uuid: f98ff1e1-c866-4af8-a6fa-3e8141a207cd<br>
State: Peer in Cluster (Connected)<br>
<br>
Hostname: 2hp1-SAN<br>
Uuid: 7dcd603f-052f-4188-94fa-9dbca6cd19b3<br>
State: Peer in Cluster (Connected)<br>
<br>
#<b> gluster volume delete 1HP-R2P1</b><br>
Deleting volume will erase all information about the volume. Do
you want to continue? (y/n) y<br>
<font color="#cc0000"><b>Error : Request timed out</b></font><br>
</blockquote>
<br>
Please attach the gluster log to help identify the issue.<br>
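For reference, a sketch of bundling the relevant logs on the node
where the delete timed out (file names as seen earlier in this
thread):<br>
<pre># collect the glusterd and self-heal daemon logs for attaching here
tar czf gluster-logs-$(hostname).tar.gz \
    /var/log/glusterfs/etc-glusterfs-glusterd.vol.log \
    /var/log/glusterfs/glustershd.log</pre>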
<br>
<blockquote cite="mid:564CACB1.3000103@email.cz" type="cite"> <b>node
info:</b> all at current versions<br>
<div class="row">
<div class="col-md-3">
<div class="row">
<div class="col-md-6">
<div class="col-md-6"><tt>OS Version:</tt><tt><span
class="GJ1IWOQCMMD"
id="SubTabHostGeneralSoftwareView_formPanel_col0_row0_value">
RHEL - 7 - 1.1503.el7.centos.2.8</span></tt></div>
</div>
</div>
</div>
</div>
<div class="row">
<div class="col-md-3">
<div class="row">
<div class="col-md-6">
<div class="col-md-6"><tt>Kernel Version </tt><tt><span
class="GJ1IWOQCMMD"
id="SubTabHostGeneralSoftwareView_formPanel_col0_row1_value">3.10.0
- 229.20.1.el7.x86_64</span></tt></div>
</div>
</div>
</div>
</div>
<div class="row">
<div class="col-md-3">
<div class="row">
<div class="col-md-6">
<div class="col-md-6"><tt>KVM Version:</tt><tt><span
class="GJ1IWOQCMMD"
id="SubTabHostGeneralSoftwareView_formPanel_col0_row2_value">
2.3.0 - 29.1.el7</span></tt></div>
</div>
</div>
</div>
</div>
<div class="row">
<div class="col-md-3">
<div class="row">
<div class="col-md-6">
<div class="col-md-6"><tt>LIBVIRT Version:</tt><tt><span
class="GJ1IWOQCMMD"
id="SubTabHostGeneralSoftwareView_formPanel_col0_row3_value">
libvirt-1.2.8-16.el7_1.5</span></tt></div>
</div>
</div>
</div>
</div>
<div class="row">
<div class="col-md-3">
<div class="row">
<div class="col-md-6">
<div class="col-md-6"><tt>VDSM Version:</tt><tt><span
class="GJ1IWOQCMMD"
id="SubTabHostGeneralSoftwareView_formPanel_col0_row4_value">
vdsm-4.17.999-152.git84c0adc.el7</span></tt></div>
</div>
</div>
</div>
</div>
<div class="row">
<div class="col-md-3">
<div class="row">
<div class="col-md-6">
<div class="col-md-6"><tt>SPICE Version:</tt><tt><span
class="GJ1IWOQCMMD"
id="SubTabHostGeneralSoftwareView_formPanel_col0_row5_value">
0.12.4 - 9.el7_1.3</span></tt></div>
</div>
</div>
</div>
</div>
<div class="row">
<div class="col-md-3">
<div class="row">
<div class="col-md-6">
<div class="col-md-6"><tt>GlusterFS Version:</tt><tt><span
class="GJ1IWOQCMMD"
id="SubTabHostGeneralSoftwareView_formPanel_col0_row6_value">
glusterfs-3.7.6-1.el7</span></tt></div>
</div>
</div>
</div>
</div>
oVirt : 3.6<br>
<br>
<br>
<br>
regs.<br>
pavel<br>
<br>
<div class="moz-cite-prefix">On 18.11.2015 17:17, Sahina Bose
wrote:<br>
</div>
<blockquote cite="mid:564CA4B0.9090209@redhat.com" type="cite">
Are you talking about the gluster volumes shown in the Volumes tab?<br>
<br>
If you have removed only the gluster volumes and not the gluster
nodes, the oVirt engine will sync its configuration with the
gluster backend.<br>
However, if the gluster nodes have also been removed from the backend,
the nodes should be in Non-responsive state in the UI.<br>
You could put all the nodes in the gluster cluster in maintenance mode
and force-remove them (a checkbox is provided).<br>
<br>
<div class="moz-cite-prefix">On 11/18/2015 07:26 PM, <a
moz-do-not-send="true" class="moz-txt-link-abbreviated"
href="mailto:paf1@email.cz"><a class="moz-txt-link-abbreviated" href="mailto:paf1@email.cz">paf1@email.cz</a></a> wrote:<br>
</div>
<blockquote cite="mid:564C8375.9020506@email.cz" type="cite">
Hello, <br>
how do I remove a volume definition from the oVirt DB ( and from
the nodes' gluster config ) if the volume was totally cleaned in
the background while in running mode ??<br>
<br>
regs.<br>
Paf1<br>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
<pre wrap="">_______________________________________________
Users mailing list
<a moz-do-not-send="true" class="moz-txt-link-abbreviated" href="mailto:Users@ovirt.org">Users@ovirt.org</a>
<a moz-do-not-send="true" class="moz-txt-link-freetext" href="http://lists.ovirt.org/mailman/listinfo/users">http://lists.ovirt.org/mailman/listinfo/users</a>
</pre>
</blockquote>
<br>
</blockquote>
<br>
</blockquote>
<br>
</body>
</html>