Could you try "gluster volume start VGFS1 force" to make sure the
brick processes are restarted? From the status output, it looks like
the brick processes are not online.
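
For example, something like this (a minimal sketch using the volume
names from your status output; run it on one of the gluster nodes):

# gluster volume start VGFS1 force
# gluster volume start VGFS2 force
# gluster volume status    # each brick should now show a Port, Online=Y and a Pid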

On 04/22/2015 09:14 PM, paf1@email.cz wrote:

Hello dears,
I've got some troubles with reattaching gluster volumes with data.

1) Based on a lot of tests, I decided to clear the oVirt database
(# engine-cleanup; # yum remove ovirt-engine; # yum -y install
ovirt-engine; # engine-setup).
2) The cleanup finished successfully and I started with an empty oVirt
environment.
3) Then I added networks and nodes and made the basic network
adjustments; everything works fine.
4) Time to attach the volumes/domains with the original data (a lot of
VMs, ISO files, ...).

So the main question is: HOW do I attach these volumes if I haven't
defined any domain and can't import them cleanly?

The nodes currently have no glusterfs NFS mounts, but the bricks are OK:

# gluster volume info

Volume Name: VGFS1
Type: Replicate
Volume ID: b9a1c347-6ffd-4122-8756-d513fe3f40b9
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 1kvm2:/FastClass/p1/GFS1
Brick2: 1kvm1:/FastClass/p1/GFS1
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36

Volume Name: VGFS2
Type: Replicate
Volume ID: b65bb689-ecc8-4c33-a4e7-11dea6028f83
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 1kvm2:/FastClass/p2/GFS1
Brick2: 1kvm1:/FastClass/p2/GFS1
Options Reconfigured:
storage.owner-uid: 36
storage.owner-gid: 36

[root@1kvm1 glusterfs]# gluster volume status
Status of volume: VGFS1
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick 1kvm1:/FastClass/p1/GFS1                  N/A     N       N/A
NFS Server on localhost                         N/A     N       N/A
Self-heal Daemon on localhost                   N/A     N       N/A

Task Status of Volume VGFS1
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: VGFS2
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick 1kvm1:/FastClass/p2/GFS1                  N/A     N       N/A
NFS Server on localhost                         N/A     N       N/A
Self-heal Daemon on localhost                   N/A     N       N/A

Task Status of Volume VGFS2
------------------------------------------------------------------------------
There are no active volume tasks

[root@1kvm1 glusterfs]# gluster volume start VGFS1
volume start: VGFS1: failed: Volume VGFS1 already started
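
(Note: the plain start fails here because the volume is already marked
Started while its brick processes are down, Online = N above; per the
reply at the top of the thread, a force start should respawn them:
# gluster volume start VGFS1 force )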

# mount | grep mapper    # base XFS mounting
/dev/mapper/3600605b0099f9e601cb1b5bf0e9765e8p1 on /FastClass/p1 type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/3600605b0099f9e601cb1b5bf0e9765e8p2 on /FastClass/p2 type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

5) The import screen: the /VGFS1 directory exists and iptables has been
flushed.
[inline screenshot of the import domain dialog]

# cat rhev-data-center-mnt-glusterSD-1kvm1:_VGFS1.log
      <font color="#330000">[2015-04-22 15:21:50.204521] I [MSGID:
        100030] [glusterfsd.c:2018:main] 0-/usr/sbin/glusterfs: Started
        running /usr/sbin/glusterfs version 3.6.2 (args:
        /usr/sbin/glusterfs --volfile-server=1kvm1 --volfile-id=/VGFS1
        /rhev/data-center/mnt/glusterSD/1kvm1:_VGFS1)<br>
        [2015-04-22 15:21:50.220383] I [dht-shared.c:337:dht_init_regex]
        0-VGFS1-dht: using regex rsync-hash-regex = ^\.(.+)\.[^.]+$<br>
        [2015-04-22 15:21:50.222255] I [client.c:2280:notify]
        0-VGFS1-client-1: parent translators are ready, attempting
        connect on transport<br>
        [2015-04-22 15:21:50.224528] I [client.c:2280:notify]
        0-VGFS1-client-2: parent translators are ready, attempting
        connect on transport<br>
        Final graph:<br>
+------------------------------------------------------------------------------+<br>
          1: volume VGFS1-client-1<br>
          2:     type protocol/client<br>
          3:     option ping-timeout 42<br>
          4:     option remote-host 1kvm2<br>
          5:     option remote-subvolume /FastClass/p1/GFS1<br>
          6:     option transport-type socket<br>
          7:     option username 52f1efd1-60dc-4fb1-b94f-572945d6eb66<br>
          8:     option password 34bac9cd-0b4f-41c6-973b-7af568784d7b<br>
          9:     option send-gids true<br>
         10: end-volume<br>
         11:  <br>
         12: volume VGFS1-client-2<br>
         13:     type protocol/client<br>
         14:     option ping-timeout 42<br>
         15:     option remote-host 1kvm1<br>
         16:     option remote-subvolume /FastClass/p1/GFS1<br>
         17:     option transport-type socket<br>
         18:     option username 52f1efd1-60dc-4fb1-b94f-572945d6eb66<br>
         19:     option password 34bac9cd-0b4f-41c6-973b-7af568784d7b<br>
         20:     option send-gids true<br>
         21: end-volume<br>
         22:  <br>
         23: volume VGFS1-replicate-0<br>
         24:     type cluster/replicate<br>
         25:     subvolumes VGFS1-client-1 VGFS1-client-2<br>
         26: end-volume<br>
         27:  <br>
         28: volume VGFS1-dht<br>
         29:     type cluster/distribute<br>
         30:     subvolumes VGFS1-replicate-0<br>
         31: end-volume<br>
         32:  <br>
         33: volume VGFS1-write-behind<br>
         34:     type performance/write-behind<br>
         35:     subvolumes VGFS1-dht<br>
         36: end-volume<br>
         37:  <br>
         38: volume VGFS1-read-ahead<br>
         39:     type performance/read-ahead<br>
         40:     subvolumes VGFS1-write-behind<br>
         41: end-volume<br>
         42:<br>
         43: volume VGFS1-io-cache<br>
         44:     type performance/io-cache<br>
         45:     subvolumes VGFS1-read-ahead<br>
         46: end-volume<br>
         47:<br>
         48: volume VGFS1-quick-read<br>
         49:     type performance/quick-read<br>
         50:     subvolumes VGFS1-io-cache<br>
         51: end-volume<br>
         52:<br>
         53: volume VGFS1-open-behind<br>
         54:     type performance/open-behind<br>
         55:     subvolumes VGFS1-quick-read<br>
         56: end-volume<br>
         57:<br>
         58: volume VGFS1-md-cache<br>
         59:     type performance/md-cache<br>
         60:     subvolumes VGFS1-open-behind<br>
         61: end-volume<br>
         62:<br>
         63: volume VGFS1<br>
         64:     type debug/io-stats<br>
         65:     option latency-measurement off<br>
         66:     option count-fop-hits off<br>
         67:     subvolumes VGFS1-md-cache<br>
         68: end-volume<br>
         69:<br>
         70: volume meta-autoload<br>
         71:     type meta<br>
         72:     subvolumes VGFS1<br>
         73: end-volume<br>
         74:<br>
+------------------------------------------------------------------------------+<br>
        [2015-04-22 15:21:50.227017] E
        [socket.c:2267:socket_connect_finish] 0-VGFS1-client-1:
        connection to 172.16.8.161:24007 failed (No route to host)<br>
        [2015-04-22 15:21:50.227191] E
        [client-handshake.c:1496:client_query_portmap_cbk]
        0-VGFS1-client-2: failed to get the port number for remote
        subvolume. Please run 'gluster volume status' on server to see
        if brick process is running.<br>
        [2015-04-22 15:21:50.227218] I [client.c:2215:client_rpc_notify]
        0-VGFS1-client-2: disconnected from VGFS1-client-2. Client
        process will keep trying to connect to glusterd until brick's
        port is available<br>
        [2015-04-22 15:21:50.227227] E [MSGID: 108006]
        [afr-common.c:3591:afr_notify] 0-VGFS1-replicate-0: All
        subvolumes are down. Going offline until atleast one of them
        comes back up.<br>
        [2015-04-22 15:21:50.229930] I
        [fuse-bridge.c:5080:fuse_graph_setup] 0-fuse: switched to graph
        0<br>
        [2015-04-22 15:21:50.233176] I [fuse-bridge.c:4009:fuse_init]
        0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs
        7.22 kernel 7.22<br>
        [2015-04-22 15:21:50.233244] I
        [afr-common.c:3722:afr_local_init] 0-VGFS1-replicate-0: no
        subvolumes up<br>
        [2015-04-22 15:21:50.234996] I
        [afr-common.c:3722:afr_local_init] 0-VGFS1-replicate-0: no
        subvolumes up<br>
        [2015-04-22 15:21:50.235020] W [fuse-bridge.c:779:fuse_attr_cbk]
        0-glusterfs-fuse: 2: LOOKUP() / =&gt; -1 (Transport endpoint is
        not connected)<br>
        [2015-04-22 15:21:50.237342] I
        [afr-common.c:3722:afr_local_init] 0-VGFS1-replicate-0: no
        subvolumes up<br>
        [2015-04-22 15:21:50.237762] I
        [fuse-bridge.c:4921:fuse_thread_proc] 0-fuse: unmounting
        /rhev/data-center/mnt/glusterSD/1kvm1:_VGFS1<br>
        [2015-04-22 15:21:50.237980] W
        [glusterfsd.c:1194:cleanup_and_exit] (--&gt; 0-: received signum
        (15), shutting down<br>
        [2015-04-22 15:21:50.237993] I [fuse-bridge.c:5599:fini] 0-fuse:
        Unmounting '/rhev/data-center/mnt/glusterSD/1kvm1:_VGFS1'.</font><br>
      [root@1kvm1 glusterfs]#<br>
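
(Note: the key errors above are the "No route to host" for
172.16.8.161:24007, i.e. glusterd on the other replica peer appears
unreachable, and the portmap failure for the local brick. Connectivity
and the firewall can be checked with something like the following;
172.16.8.161 is assumed to be 1kvm2, adjust to your setup:

# ping -c 3 172.16.8.161
# nc -zv 172.16.8.161 24007    # glusterd management port
# gluster peer status          # both peers should show "Connected"
)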

THX a lot
Pa.