[ovirt-users] ovirt - import detached gluster volumes

paf1 at email.cz
Wed Apr 22 15:44:14 UTC 2015


Hello all,
I've run into some trouble reattaching gluster volumes that still hold data.

1) Based on a lot of tests I decided to wipe the oVirt database
   (# engine-cleanup; # yum remove ovirt-engine; # yum -y install ovirt-engine; # engine-setup)
2) The cleanup completed successfully and I started with an empty oVirt environment.
3) I then added networks and nodes and made the basic network adjustments; all works fine.
4) Now it's time to attach the volumes/domains with the original data (a lot of
VMs, ISO files, ...).

So the main question is: how do I attach these volumes when no storage
domain is defined in the fresh engine and I can't simply import them?
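If I understand it correctly, the "Import Domain" dialog in the Storage
tab should be the way to re-attach a pre-existing GlusterFS data domain,
but that presupposes the volume can be mounted at all. A minimal manual
check (the mount point name is arbitrary; assumes 1kvm1 resolves from the node):

# mkdir -p /mnt/vgfs1-test
# mount -t glusterfs 1kvm1:/VGFS1 /mnt/vgfs1-test
# ls /mnt/vgfs1-test        # an intact storage domain shows up as a UUID directory
# umount /mnt/vgfs1-test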

Current status: the volumes are not mounted on the nodes (gluster NFS
included), but the bricks are OK:

# gluster volume info

Volume Name: VGFS1
Type: Replicate
Volume ID: b9a1c347-6ffd-4122-8756-d513fe3f40b9
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 1kvm2:/FastClass/p1/GFS1
Brick2: 1kvm1:/FastClass/p1/GFS1
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36

Volume Name: VGFS2
Type: Replicate
Volume ID: b65bb689-ecc8-4c33-a4e7-11dea6028f83
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 1kvm2:/FastClass/p2/GFS1
Brick2: 1kvm1:/FastClass/p2/GFS1
Options Reconfigured:
storage.owner-uid: 36
storage.owner-gid: 36
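Both volumes still declare storage.owner-uid/gid 36 (vdsm:kvm), which
oVirt expects. Whether the brick roots on disk still match can be checked
with stat (paths from the volume info above; each should print 36:36):

# stat -c '%u:%g %n' /FastClass/p1/GFS1 /FastClass/p2/GFS1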


[root@1kvm1 glusterfs]# gluster volume status
Status of volume: VGFS1
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick 1kvm1:/FastClass/p1/GFS1                  N/A     N       N/A
NFS Server on localhost                         N/A     N       N/A
Self-heal Daemon on localhost                   N/A     N       N/A

Task Status of Volume VGFS1
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: VGFS2
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick 1kvm1:/FastClass/p2/GFS1                  N/A     N       N/A
NFS Server on localhost                         N/A     N       N/A
Self-heal Daemon on localhost                   N/A     N       N/A

Task Status of Volume VGFS2
------------------------------------------------------------------------------
There are no active volume tasks

[root@1kvm1 glusterfs]# gluster volume start VGFS1
volume start: VGFS1: failed: Volume VGFS1 already started
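So the volume is registered as started, yet every brick above shows
Online: N. The usual way out (a sketch, not yet tried here) is to
force-start the volume so glusterd respawns the dead brick processes:

# gluster volume start VGFS1 force
# gluster volume start VGFS2 force
# gluster volume status

If the bricks still stay down, restarting the management daemon respawns
them as well:

# systemctl restart glusterd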



# mount | grep mapper # base XFS mounting
/dev/mapper/3600605b0099f9e601cb1b5bf0e9765e8p1 on /FastClass/p1 type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/3600605b0099f9e601cb1b5bf0e9765e8p2 on /FastClass/p2 type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
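The underlying XFS filesystems mount fine, so the brick data should still
be in place underneath; a quick look (paths from above; the .glusterfs
directory marks a populated brick):

# ls -a /FastClass/p1/GFS1
# ls -a /FastClass/p2/GFS1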


5) The import screen (a screenshot was attached). The /VGFS1 directory
exists and iptables has been flushed, yet the import fails; the glusterfs
mount log follows:


# cat rhev-data-center-mnt-glusterSD-1kvm1:_VGFS1.log
[2015-04-22 15:21:50.204521] I [MSGID: 100030] [glusterfsd.c:2018:main] 
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.6.2 
(args: /usr/sbin/glusterfs --volfile-server=1kvm1 --volfile-id=/VGFS1 
/rhev/data-center/mnt/glusterSD/1kvm1:_VGFS1)
[2015-04-22 15:21:50.220383] I [dht-shared.c:337:dht_init_regex] 
0-VGFS1-dht: using regex rsync-hash-regex = ^\.(.+)\.[^.]+$
[2015-04-22 15:21:50.222255] I [client.c:2280:notify] 0-VGFS1-client-1: 
parent translators are ready, attempting connect on transport
[2015-04-22 15:21:50.224528] I [client.c:2280:notify] 0-VGFS1-client-2: 
parent translators are ready, attempting connect on transport
Final graph:
+------------------------------------------------------------------------------+
   1: volume VGFS1-client-1
   2:     type protocol/client
   3:     option ping-timeout 42
   4:     option remote-host 1kvm2
   5:     option remote-subvolume /FastClass/p1/GFS1
   6:     option transport-type socket
   7:     option username 52f1efd1-60dc-4fb1-b94f-572945d6eb66
   8:     option password 34bac9cd-0b4f-41c6-973b-7af568784d7b
   9:     option send-gids true
  10: end-volume
  11:
  12: volume VGFS1-client-2
  13:     type protocol/client
  14:     option ping-timeout 42
  15:     option remote-host 1kvm1
  16:     option remote-subvolume /FastClass/p1/GFS1
  17:     option transport-type socket
  18:     option username 52f1efd1-60dc-4fb1-b94f-572945d6eb66
  19:     option password 34bac9cd-0b4f-41c6-973b-7af568784d7b
  20:     option send-gids true
  21: end-volume
  22:
  23: volume VGFS1-replicate-0
  24:     type cluster/replicate
  25:     subvolumes VGFS1-client-1 VGFS1-client-2
  26: end-volume
  27:
  28: volume VGFS1-dht
  29:     type cluster/distribute
  30:     subvolumes VGFS1-replicate-0
  31: end-volume
  32:
  33: volume VGFS1-write-behind
  34:     type performance/write-behind
  35:     subvolumes VGFS1-dht
  36: end-volume
  37:
  38: volume VGFS1-read-ahead
  39:     type performance/read-ahead
  40:     subvolumes VGFS1-write-behind
  41: end-volume
  42:
  43: volume VGFS1-io-cache
  44:     type performance/io-cache
  45:     subvolumes VGFS1-read-ahead
  46: end-volume
  47:
  48: volume VGFS1-quick-read
  49:     type performance/quick-read
  50:     subvolumes VGFS1-io-cache
  51: end-volume
  52:
  53: volume VGFS1-open-behind
  54:     type performance/open-behind
  55:     subvolumes VGFS1-quick-read
  56: end-volume
  57:
  58: volume VGFS1-md-cache
  59:     type performance/md-cache
  60:     subvolumes VGFS1-open-behind
  61: end-volume
  62:
  63: volume VGFS1
  64:     type debug/io-stats
  65:     option latency-measurement off
  66:     option count-fop-hits off
  67:     subvolumes VGFS1-md-cache
  68: end-volume
  69:
  70: volume meta-autoload
  71:     type meta
  72:     subvolumes VGFS1
  73: end-volume
  74:
+------------------------------------------------------------------------------+
[2015-04-22 15:21:50.227017] E [socket.c:2267:socket_connect_finish] 
0-VGFS1-client-1: connection to 172.16.8.161:24007 failed (No route to host)
[2015-04-22 15:21:50.227191] E 
[client-handshake.c:1496:client_query_portmap_cbk] 0-VGFS1-client-2: 
failed to get the port number for remote subvolume. Please run 'gluster 
volume status' on server to see if brick process is running.
[2015-04-22 15:21:50.227218] I [client.c:2215:client_rpc_notify] 
0-VGFS1-client-2: disconnected from VGFS1-client-2. Client process will 
keep trying to connect to glusterd until brick's port is available
[2015-04-22 15:21:50.227227] E [MSGID: 108006] 
[afr-common.c:3591:afr_notify] 0-VGFS1-replicate-0: All subvolumes are 
down. Going offline until atleast one of them comes back up.
[2015-04-22 15:21:50.229930] I [fuse-bridge.c:5080:fuse_graph_setup] 
0-fuse: switched to graph 0
[2015-04-22 15:21:50.233176] I [fuse-bridge.c:4009:fuse_init] 
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 
kernel 7.22
[2015-04-22 15:21:50.233244] I [afr-common.c:3722:afr_local_init] 
0-VGFS1-replicate-0: no subvolumes up
[2015-04-22 15:21:50.234996] I [afr-common.c:3722:afr_local_init] 
0-VGFS1-replicate-0: no subvolumes up
[2015-04-22 15:21:50.235020] W [fuse-bridge.c:779:fuse_attr_cbk] 
0-glusterfs-fuse: 2: LOOKUP() / => -1 (Transport endpoint is not connected)
[2015-04-22 15:21:50.237342] I [afr-common.c:3722:afr_local_init] 
0-VGFS1-replicate-0: no subvolumes up
[2015-04-22 15:21:50.237762] I [fuse-bridge.c:4921:fuse_thread_proc] 
0-fuse: unmounting /rhev/data-center/mnt/glusterSD/1kvm1:_VGFS1
[2015-04-22 15:21:50.237980] W [glusterfsd.c:1194:cleanup_and_exit] (--> 
0-: received signum (15), shutting down
[2015-04-22 15:21:50.237993] I [fuse-bridge.c:5599:fini] 0-fuse: 
Unmounting '/rhev/data-center/mnt/glusterSD/1kvm1:_VGFS1'.
[root@1kvm1 glusterfs]#
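As far as I can read it, two errors in this log tell the story: "No route
to host" for 172.16.8.161:24007 means glusterd on 1kvm2 is unreachable,
and the failed portmap query against 1kvm1 means the local brick process
is down. Some quick connectivity checks (IP and hostnames taken from the
log; nc assumed installed):

# ping -c 2 172.16.8.161
# nc -zv 1kvm2 24007        # glusterd management port
# iptables -L -n            # confirm the flush really emptied the rules
# gluster peer status       # both nodes should report Connected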


Thanks a lot,
Pa.
